Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud
Summary:
Generative Artificial Intelligence (AI) has paved the way for criminals to commit fraud at a scale larger than ever before. “Generative AI reduces the time and effort criminals must expend to deceive their targets. Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud,” notes the FBI in a new advisory.
According to the FBI, criminals are increasingly leveraging AI-generated content to amplify their fraudulent schemes. Notably, AI-generated text is being used to create convincing fake social media profiles, messages, and fraudulent websites for scams such as romance, investment, and confidence fraud. The absence of grammatical errors in AI-generated text makes these fake profiles and sites more convincing, increasing the likelihood of victims falling for these scams. Additionally, AI tools are being used to generate realistic images, such as fake IDs and social media photos, to facilitate identity theft and impersonation. In some cases, criminals have even used AI to produce images of celebrities or social media influencers to promote counterfeit products or non-delivery schemes. AI-generated audio and video are also being employed to impersonate loved ones or public figures, with criminals creating highly convincing clips to manipulate victims into sending money.
Analyst Comments:
AI is still in the early stages of development, with significant potential for growth. Although the underlying technology has existed for years, the release of AI services such as ChatGPT has brought it into the spotlight, driving widespread awareness and adoption. A growing number of specialized AI tools are now available, each designed for tasks such as writing emails, generating realistic images, or creating videos that impersonate individuals or personas. While these models are far from perfect, they are expected to improve over time as they are refined with user input and feedback. Although this progress holds promise, it also presents a concerning possibility: criminals may exploit AI to rapidly generate more convincing, less error-prone scams at an unprecedented scale.
Suggested Corrections:
FBI tips to protect yourself from AI-related scams: