AI and machine learning algorithms have fast-tracked automation in South Africa’s insurance industry, but these solutions also introduce unique cybersecurity vulnerabilities.
Specifically, adversarial attacks can compromise the accuracy of risk assessments, bypass fraud detection, and disrupt claims processing.
Without proper digital security in place, such attacks can lead to compliance violations and severely damage the reputation of local insurers.
Adversarial attacks are deliberate manipulations of input data, designed to fool machine learning models into making incorrect predictions or classifications.
These sophisticated attacks exploit model vulnerabilities by introducing small, often imperceptible changes to input data, causing significant errors that can cost insurers both money and consumer trust.
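To make that concrete, here is a minimal sketch of the idea, using an assumed toy claim scorer built with scikit-learn on invented data (the feature names, figures and 5% perturbation budget are all illustrative, not any real insurer's system): nudging each submitted figure slightly in the direction that lowers the model's score reduces its fraud probability without the claim looking obviously different.

```python
# Illustrative sketch only: a toy "fraud score" model on synthetic claims,
# and a small perturbation of one claim's figures that lowers that score.
# Feature names and values are assumptions, not a real insurer's system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic claims: [claim_amount, days_since_policy_start, prior_claims]
X = np.vstack([
    rng.normal([5_000, 400, 1], [2_000, 150, 1], size=(200, 3)),   # legitimate
    rng.normal([15_000, 30, 4], [4_000, 20, 2], size=(200, 3)),    # fraudulent
])
y = np.array([0] * 200 + [1] * 200)          # 1 = labelled fraudulent

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

claim = np.array([12_000.0, 90.0, 3.0])      # a suspicious-looking claim, chosen for illustration
print("fraud probability, original :", model.predict_proba([claim])[0, 1])

# Shift each figure about 5% against the sign of the model's coefficients;
# for this linear model, every such shift moves the fraud score downward.
coefs = model.named_steps["logisticregression"].coef_[0]
perturbed = claim - 0.05 * np.abs(claim) * np.sign(coefs)
print("fraud probability, perturbed:", model.predict_proba([perturbed])[0, 1])
```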
One of the key applications of AI in the insurance industry is to identify fraud. Large volumes of data are fed into machine learning algorithms, which identify patterns and red flags for potential fraudulent claims.
This is a far more efficient way of processing claims than traditional manual methods, since AI can surface patterns much faster than a human paging through applications and claim forms.
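As a rough illustration of that pattern-hunting, the sketch below uses an unsupervised anomaly detector from scikit-learn on invented claims data; the column names and the 2% outlier rate are assumptions, and real fraud systems combine many more signals than this.

```python
# Minimal sketch (assumed setup): flagging unusual claims with an
# unsupervised anomaly detector, one common pattern-based approach.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
claims = pd.DataFrame({
    "claim_amount": rng.gamma(shape=2.0, scale=4_000, size=1_000),
    "days_since_policy_start": rng.integers(1, 1_000, size=1_000),
    "claims_in_last_year": rng.poisson(0.5, size=1_000),
})

# Fit on the batch of claims; the model learns what "typical" looks like
# and marks roughly 2% of claims as outliers.
detector = IsolationForest(contamination=0.02, random_state=0).fit(claims)

# A score of -1 marks an outlier to route to a human investigator.
claims["flag"] = detector.predict(claims)
print(claims[claims["flag"] == -1].head())
```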
AI is also used to automate underwriting decisions, to improve response times and streamline the customer experience.
If a cybercriminal gains access to the fraud detection system, they can alter fraudulent claim data to make it appear legitimate, in a way that bypasses AI-powered detection mechanisms.
Similarly, adversarial attacks could influence pricing models, allowing high-risk customers to pay artificially low premiums, which could ultimately erode an insurer’s financial stability.
To protect themselves from adversarial attacks and other cybersecurity risks, insurers using AI need to adopt a multi-layered cybersecurity strategy that focuses on securing both the AI systems themselves and the company’s overall digital infrastructure.
This can range from broader protections like access control, network security, and data encryption to specific AI defences like adversarial training.
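Adversarial training, in simple terms, means showing the model manipulated examples during training so it learns to resist them. The sketch below is one minimal way of doing that, mirroring the toy claim classifier shown earlier; the data, the 5% perturbation budget and the retraining approach are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of adversarial training: perturbed copies of the training
# claims, with their true labels kept, are added back into the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic claims: [claim_amount, days_since_policy_start, prior_claims]
X = np.vstack([
    rng.normal([5_000, 400, 1], [2_000, 150, 1], size=(200, 3)),   # legitimate
    rng.normal([15_000, 30, 4], [4_000, 20, 2], size=(200, 3)),    # fraudulent
])
y = np.array([0] * 200 + [1] * 200)

def perturbed_copies(coefs, X, y, epsilon=0.05):
    """Shift each claim a few percent per feature in the direction that
    raises the model's loss on its true label (a simplified, linear-model
    analogue of the fast gradient sign method)."""
    sign_grad = np.where(y[:, None] == 1, -1.0, 1.0) * np.sign(coefs)
    return X + epsilon * np.abs(X) * sign_grad

# 1. Train a baseline scorer on clean data.
baseline = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
coefs = baseline.named_steps["logisticregression"].coef_[0]

# 2. Craft adversarial copies of the training claims (labels unchanged).
X_adv = perturbed_copies(coefs, X, y)

# 3. Retrain on clean plus adversarial examples, so small manipulations of
#    the submitted figures are less likely to change the model's decision.
robust = make_pipeline(StandardScaler(), LogisticRegression()).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y])
)
```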
AI technologies are still evolving, and many of the vulnerabilities associated with them are not yet fully understood. Adversarial attacks, for example, are often an afterthought when developing AI systems.
The focus tends to be more on performance and functionality, rather than on security. This puts the onus on the user – in this case, insurance companies – especially if they are using ‘black-box’ AI models that are not purpose-built for their needs.
As AI continues to revolutionise the insurance industry in South Africa, the cybersecurity risks – like adversarial attacks – will only increase. Businesses in the financial sector are a prime target for cybercrime.
Not only do they offer access to money and other assets, but they also collect and store vast amounts of sensitive personal information.
By adopting robust cybersecurity measures and staying informed about emerging threats, insurers can protect their systems, safeguard customer data, and maintain trust in their AI-driven services.
Allan Juma is a cyber security engineer at ESET South Africa
SOURCE: IT News Africa.