Navigating generative AI’s dual impact on insurance fraud

Companies that act now to develop risk-aware genAI cultures and capabilities will gain a competitive advantage in navigating the AI future. 
Published on Oct 26, 2023

Generative artificial intelligence (genAI) is artificial intelligence capable of creating new content. Its meteoric rise promises to transform business and society but also poses new risks of misuse. For the insurance industry, genAI brings both emerging fraud threats from content fabrication and fraud prevention opportunities through anomaly detection using synthetic data (artificially generated information, as opposed to real-world, event-based data).

Realistic fake claims evidence is easier to manufacture at scale with genAI, but it can also be used to generate training data to detect fraud. Proactive governance, integrated human-in-the-loop systems, and continuous learning will enable insurers to ethically apply genAI to fraud prevention while averting its misuse. Companies that act now to develop risk-aware genAI cultures and capabilities will gain a competitive advantage in navigating the AI future. 

Emerging fraud risks from generative AI

In 1995, the Coalition Against Insurance Fraud estimated that fraud cost the insurance industry approximately $80B. By comparison, in 2022, before the surge of genAI, fraud was estimated to have cost the industry over $300B. How genAI will affect those numbers in 2023 is hard to tell. While most examples of fraud center on genAI’s capability to create realistic content, it’s hard to quantify the cost of, or even identify, all the possible criminal use cases of genAI. A recent study found that 53% of participants couldn’t identify content created by AI, and that the percentage rose to 63.5% when using the improved GPT-4 model. Some emerging fraud risks relevant to insurers include: 

  • Counterfeit claims evidence
    Highly realistic images, police reports, witness statements, medical documents, and other evidence can be generated to substantiate fictitious claims. In just the last year, genAI’s capability to create believable, realistic new content has grown significantly, and such content can often bypass the traditional fraud data checks in place today.
  • Sophisticated phishing
    Spear phishing campaigns can leverage genAI to generate personalized messages tailored to victims’ contexts, interests, and communication patterns. Earlier this year, the Washington Post reported on how telephone-based scams have become even more believable in the last year due to genAI. Darktrace researched the impact of genAI on social engineering attacks and found a 135% increase from January to February 2023 alone, which aligns with the increased adoption of OpenAI’s ChatGPT.
  • Synthetic identities
    Mixing generated personal details, profile images, and credentials enables the creation of large volumes of fake identities for account takeover scams, laundering, and resale.
  • Automated vulnerability probing
    GenAI tools can be misused to analyze systems, identify weaknesses, and generate tailored social engineering pretexts and exploits. One attack pattern that lends itself well to genAI is the “man-in-the-middle,” where malware sits between the victim’s computer or phone and the system they’re trying to interact with, capturing the data being transmitted. With genAI, this type of attack could conceivably be automated. As sophistication, accessibility, and automation increase, the cyber risks to insurers and corporations will compound.
  • Misinformation
    Hyper-realistic generated media like deepfakes can disseminate false narratives that appear credible and sow social discord, which could be used to damage brands.
  • Rogue insiders
    Insiders using genAI to design sophisticated fraud are an emerging risk, and genAI-created synthetic data could be used to cover up malfeasance.

GenAI lowers the barriers to scalable fraud innovation. Earlier this summer, the AI community was introduced to WormGPT, a tool that purportedly outperformed ChatGPT and was advertised as using AI to write malicious software. Days after WormGPT’s debut, FraudGPT was unveiled, with hints at the imminent launch of a series of malicious AI tools: DarkBERT, DarkBARD, and DarkGPT. While most of these tools have been taken down, new ones continue to emerge. 

While “defense-in-depth” is the accepted architectural approach to corporate cybersecurity, it generally does not protect against genAI-based attacks. And while security awareness training, which teaches employees to spot social engineering attempts delivered in text form, has been shown to improve the ability to identify malicious content, that effect is diminishing rapidly as genAI content fidelity improves.

Combating fraud

New companies responding to the darker use cases of genAI are only beginning to be able to distinguish human-generated content from AI-generated content: 

  • Text content detection:
    Models that leverage existing large language models (LLMs) and other content creation markers to determine whether content was generated by an AI. GPTZero is an example that has become popular, especially in academia, for distinguishing AI content from human content. A minimal sketch of one such detection signal follows this list.
  • Identity verification:
    Companies that detect deepfakes and fake documentation to verify identities (e.g., for know-your-customer, or KYC, purposes). Sensity AI analyzes documents and performs eKYC to identify fraudsters.
  • Preventing misinformation and scams:
    Companies like Reality Defender, Sentinel, and Optic are combating AI-created deepfakes and content to battle the spread of misinformation and scams.
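
One common signal behind text content detectors is statistical predictability: text sampled from a language model tends to look more predictable (lower perplexity) to a similar model than human writing does. Below is a minimal sketch of that idea in Python, assuming the Hugging Face transformers library and a small GPT-2 model as the scorer; the looks_ai_generated helper and its threshold are hypothetical illustrations, not GPTZero’s actual method, and real detectors combine several signals and calibrate against labeled data.

```python
# Minimal sketch: flag text as possibly AI-generated when its perplexity
# under a small language model is unusually low. Illustrative only; real
# detectors combine multiple signals and calibrate on labeled data.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text under the model: the returned loss is the mean negative
    # log-likelihood per token, so exp(loss) is the perplexity.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(input_ids=enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

AI_PERPLEXITY_THRESHOLD = 50.0  # hypothetical cutoff, for illustration only

def looks_ai_generated(text: str) -> bool:
    # Very low perplexity means the model finds the text highly predictable,
    # which is one (weak) indicator of machine authorship.
    return perplexity(text) < AI_PERPLEXITY_THRESHOLD
```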

While the above tools are helpful in identifying static, synthetically created data, fighting live fraud attempts continues to be challenging, as prevention also requires a human component. Companies like Oscilar, a real-time fraud prevention platform for fintech companies, have been developing their existing platforms to incorporate and address the risks of genAI. However, the genAI fraud detection landscape is still nascent, with new tools continuing to emerge to help combat new types of fraud. Over 75% of genAI startups are still early-stage (Series A or earlier) or haven’t raised any external equity funding.

With a massive data generation capacity, genAI also offers insurers new opportunities to improve existing fraud prevention, detection, and response measures. GenAI’s capability to rapidly create new data and augment existing datasets can help with synthetic data creation for fraud analytics in claims and for other behavioral modeling at different points in the customer journey with a carrier. For example, genAI can rapidly generate simulated claims data and customer profiles to train AI fraud detection systems, supplementing existing data to improve model accuracy and generalization. Synthetically created data provides more variability than original datasets currently allow, given the limited number of real-world examples. Nor is synthetic data limited to combating AI-created fraud: it can also augment and supplement a number of dataset-dependent use cases, such as optimizing pricing while remaining compliant with data regulations like the CCPA (by leveraging synthetic geolocation data), eliminating biases (such as racial or gender biases) from models, and training other predictive behavioral models (such as churn). 
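
As a concrete illustration of that augmentation pattern, the minimal sketch below (Python with scikit-learn) pads a scarce fraud class with synthetic records before training a classifier. The crude Gaussian resampling stands in for a real generative model, and every feature name and number here is a hypothetical placeholder.

```python
# Sketch: augment scarce fraud examples with synthetic data before training.
# A real system would use a proper generative model (e.g., a GAN, VAE, or LLM)
# rather than the naive Gaussian resampling used here for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical claims features: [claim_amount, days_to_report, prior_claims].
legit = rng.normal([3_000, 5, 1], [1_000, 3, 1], size=(1_000, 3))
fraud = rng.normal([9_000, 1, 4], [2_500, 1, 2], size=(40, 3))  # rare class

def synthesize(real: np.ndarray, n: int) -> np.ndarray:
    """Stand-in generator: resample around the real fraud distribution."""
    mean, std = real.mean(axis=0), real.std(axis=0)
    return rng.normal(mean, std, size=(n, real.shape[1]))

# Augment the minority class with synthetic fraud records.
synthetic_fraud = synthesize(fraud, n=400)
X = np.vstack([legit, fraud, synthetic_fraud])
y = np.array([0] * len(legit) + [1] * (len(fraud) + len(synthetic_fraud)))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice, evaluation should be run against held-out real claims only; scoring a model on data that includes synthetic records, as this toy example does, can overstate its accuracy.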

Applied judiciously, genAI adds powerful new arrows to insurers’ quivers to combat crime. However, the insurance sector operates within strict regulatory frameworks where upholding compliance is of utmost importance. Employing third-party or external LLMs to transmit customer data necessitates meticulous architecting, legal due diligence, and the establishment of guidelines delineating what data can be shared with these models. Given the benefits of genAI, it’s unlikely that the Departments of Insurance will completely prohibit the technology, but strict protocols and regulations will likely be put in place regarding how the technology can and can’t be used, along with frameworks for preventing bias. 
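
As one illustration of what such data-sharing guidelines can look like in code, the hedged sketch below strips obvious personally identifiable information from a prompt before it leaves the carrier’s environment. The regex patterns, the policy-number format, and the call_external_llm client are all hypothetical placeholders; a production deployment would pair redaction with architectural, contractual, and legal controls rather than rely on regexes alone.

```python
# Sketch: scrub obvious PII from text before it leaves the carrier's boundary.
# Patterns are illustrative, not exhaustive; production systems typically use
# dedicated PII-detection services plus policy review, not regexes alone.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "POLICY_NO": re.compile(r"\bPOL-\d{6,}\b"),  # hypothetical policy-number format
}

def redact(text: str) -> str:
    """Replace each matched PII span with a typed placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Claimant John Doe (SSN 123-45-6789, jdoe@example.com, POL-123456) reports hail damage."
safe_prompt = redact(prompt)
print(safe_prompt)
# -> "Claimant John Doe (SSN [SSN], [EMAIL], [POLICY_NO]) reports hail damage."
# safe_prompt could then be passed to a vetted external model, e.g.:
# response = call_external_llm(safe_prompt)  # hypothetical client function
```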

Looking ahead

As bad actors weaponize genAI, businesses need to continually improve their defensive capabilities, since attackers will be constantly attempting to one-up existing cybersecurity and fraud prevention measures. In the past, increased fraud attempts have usually been met with focused point solutions.

The genAI threat has the potential to impact the entire threat landscape, and changes need to be made at a more fundamental, methodological level. AI tools to detect fraud are far from fail-safe. For example, OpenAI’s classifier tool was estimated to miss 74% of AI-generated text and was recently shut down by OpenAI for lack of performance.

The challenge for any tool created as a countermeasure is that it is usually new, unproven, and of mixed success until it has been trained on a huge dataset. Sharing data to train anti-cyber-threat software, including defenses against genAI threats, could be one approach to staying ahead of fraudsters. 
