Generative A.I. Generative Scams

With new forms of Artificial Intelligence (AI) emerging at such high frequency, and with such jaw-dropping capabilities, it was only a matter of time before fraud use cases leveraging AI turned into real events and outcomes. That time is now.

With multiple reports emerging of social engineering scams resulting in business losses sometimes reaching double-digit millions, and sextortion cases already victimizing laypeople, AI-powered scams are rapidly becoming a serious threat due to their scalability, accessibility, and effectiveness.

In this summary, we articulate three risks that AI will exacerbate, and we identify the key controls that will be necessary to combat fraud threats in the AI era.

Traditional Fraud Attacks 

The risks of AI are made more visible through the lens of the traditional fraud attack. First, the scammer must survey the attack surface and find a vulnerability to exploit. AI performs this step with massive efficiency. It can conduct the due diligence on a target, acquire the relevant details, and prepare the attack with the payloads most likely to manipulate the mark. This includes detailed intelligence on the mark, along with authentic, actionable personal information: account numbers, vulnerable family members, a business initiative, and so on. AI can be used to surface these details, automated to build dossiers on targets, and even directed to evaluate targets' susceptibility, prioritize them, and draft the “script” of the approach, including in a programming language for an automated attack, where desired.

Bait and Hook Refinement 

Certainly, there are elements of AI that create not only great efficiency in the setup of attacks, but also lethal accuracy in making the trap persuasive. The attacker must bait a hook for the victim and make the lure look convincing. Consider for a moment the usual recommendations you would receive for detecting a phishing email: there might be typos or grammar that a native speaker would detect; a URL or an email address might have a tattletale element; or a form presented might look suspect in contrast to a legitimate artifact. These details and nuances become far more convincing when run through an AI tool that minimizes the potential for discovery by emulating a legitimate source with the precise detail of a large language model.
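
To see why those familiar tells are such weak protection, consider a toy version of a filter built on them. This is a purely hypothetical sketch (the classic_phishing_score function and its list of "tells" are our illustration, not any real product's logic): a crude, human-written lure trips every check, while an LLM-polished version of the same lure scores zero.

```python
import re

# Hypothetical illustration: the surface-level heuristics a traditional
# phishing filter (or a trained user) relies on. An LLM-polished lure
# clears every one of these checks, which is exactly the problem.

COMMON_TELLS = ["kindly", "acount", "verifcation", "suspened", "dear customer"]

def classic_phishing_score(subject: str, body: str, links: list[str]) -> int:
    """Return a naive suspicion score based on traditional 'tells'."""
    score = 0
    text = f"{subject} {body}".lower()
    # 1. Typos and awkward phrasing a native speaker would notice.
    score += sum(1 for tell in COMMON_TELLS if tell in text)
    # 2. Tattletale URLs: raw IP addresses or look-alike domains.
    for url in links:
        host = re.sub(r"^https?://", "", url).split("/")[0]
        if re.fullmatch(r"[\d.]+", host):  # bare IP address
            score += 2
        if host.count("-") >= 2:           # e.g. secure-bank-login.example.com
            score += 1
    return score

# A crude, human-written lure trips the heuristics...
print(classic_phishing_score(
    "Acount suspened", "Dear customer, kindly verify...",
    ["http://192.0.2.7/login"]))  # high score

# ...while an LLM-polished version of the same lure sails through.
print(classic_phishing_score(
    "Action required: confirm your payment details",
    "Hi Jordan, we noticed a billing issue with your recent order.",
    ["https://billing.example.com/confirm"]))  # score 0
```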


Scams at Scale 

The third risk AI increases is the scale at which it can be abused and leveraged nefariously. While the typical scammer is limited to performing one task at a time, AI provides scammers with automated tools that perform the work for them, so the volume of abuse cases is likely to become quite high. Further, there is a high likelihood that even the least sophisticated threat actors will be made more effective at a level that scales for them. The most realistic potential is that a single individual could command a large resource base, performing the work of many people acting in coordination. The outcome is that lower-quality actors are empowered to act as a far larger threat.

AI-Powered Scams: Our Best Defenses 

The scam and fraud environment will be enabled in this business cycle like at no time before it, with many new entrants who have powerful tools at their disposal. These tools will supercharge a scammer's ability to be convincing in laying out a threat: informed by detailed observations, efficient in execution, and scaled to be as effective as possible. While consumer awareness campaigns and vulnerability testing are important and will require additional investment, these approaches will only ever be partially effective. But there is technology enabling the detection of scams now.

Behavioral biometric intelligence that leverages similar machine learning technologies can elevate the detection of anomalies, such as robotic automation in digital environments, or signs that an individual is under duress, confused, hesitant, or distracted in an online session. These signals can then be used to slow down or interdict high-risk payments, preventing scammers from realizing their goal of financial enrichment via malicious AI.
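
As a rough illustration of how such signals could feed a payment decision, here is a minimal sketch. The SessionSignals fields, weights, and thresholds are illustrative assumptions on our part; real deployments derive these scores from trained behavioral models rather than hand-set weights.

```python
from dataclasses import dataclass

# Hypothetical sketch of the decision layer described above. Feature names
# and thresholds are illustrative assumptions, not any vendor's model; in
# practice these signals come from trained behavioral-biometric models.

@dataclass
class SessionSignals:
    typing_cadence_anomaly: float  # 0..1, deviation from the user's own baseline
    robotic_automation: float      # 0..1, likelihood of scripted input
    hesitation: float              # 0..1, unusual pauses on the payment screen
    payment_amount_risk: float     # 0..1, amount/payee relative to history

def payment_risk_score(s: SessionSignals) -> float:
    """Combine behavioral anomalies into a single payment risk score."""
    weights = {
        "typing_cadence_anomaly": 0.25,
        "robotic_automation": 0.35,
        "hesitation": 0.20,
        "payment_amount_risk": 0.20,
    }
    return sum(getattr(s, name) * w for name, w in weights.items())

def decide(s: SessionSignals) -> str:
    """Slow down or interdict high-risk payments instead of letting them clear."""
    score = payment_risk_score(s)
    if score >= 0.7:
        return "interdict: hold payment for fraud review"
    if score >= 0.4:
        return "slow down: step-up verification and a scam warning"
    return "allow"

# A session showing duress-like hesitation plus an out-of-pattern amount:
print(decide(SessionSignals(0.6, 0.1, 0.9, 0.8)))  # -> slow down: step-up...
```

The design point is that the decision acts on how the session behaves rather than on what the message says, which is the dimension a generative-AI lure cannot easily fake.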

The anti-fraud industry is now realizing the power and potential of behavioral biometric intelligence as a tremendous asset to organizations that must insulate their customers and their assets from the risks of scams. So while the use of AI for the purposes of deception is already here, and the fear is palpable, the time to build resilience is now.
