In an interview with Fox News, Haywood Talcove, CEO of LexisNexis Risk Solutions’ government division, said AI-related fraud could cost taxpayers hundreds of billions of dollars, potentially as much as $1 trillion, over the next year.
Talcove said LexisNexis Risk Solutions has encountered situations where AI deep fakes were used to defraud federal and state agencies. For instance, fraudsters have allegedly used deep fake techniques to extract money from Social Security, Medicare, and Medicaid recipients.
“Being one of the wealthiest countries in the world makes us a huge target,” Talcove told Fox News. “The amount of money that we’re going to lose over the next 12 months, if we do nothing, is going to make the COVID pandemic look like child’s play.”
Virtually every individual with an online presence is susceptible to AI-assisted scams. There are cases of fraudsters using fake voices, images, and even videos to pose as others, and analysts consider this one of the technology’s most immediate risks.
The FBI recently warned about AI sextortion and revenge porn, where fraudsters generate fake explicit images to hold people to ransom.
These deceptive techniques could be extended to almost any purpose, such as falsely associating people with extremist material or criminal activity.
“AI technology is already so advanced that if any of your information is available on the web or social media, it can be used to create a generative AI model and circumvent most of the government’s outdated authentication systems,” Talcove warned.
Talcove highlighted the need for action, “I suspect over the next 12 months, if the government doesn’t react to this quickly, we’re going to lose over $1 trillion to these criminal groups, some of which are located outside the country.”
Asked what needs to be done to prevent AI fraud, Talcove stated ominously, “It’s already too late,” pointing out that AI-assisted fraud is already rife and going unnoticed.
Talcove concluded with an alarming warning: “It is no longer a question of if, but rather when and how severely, these AI-powered threats will impact their agencies. We must acknowledge the stark reality that AI fraud is not a distant threat, but one that’s knocking at our door.”
Are these claims substantiated?
AI-supported fraud primarily revolves around deep fakes. Deep fakes range from the benign – like images of Elon Musk as a baby – to the harmful – like the fake image of an explosion at the Pentagon, which caused US markets to briefly dip once it circulated on social media.
There are numerous stories of fraudsters targeting vulnerable individuals with deep fake scams, including a spate of so-called “grandparent scams.” A US mother was embroiled in a fake kidnap plot, where fraudsters demanded a $1m ransom.
China is also suffering from AI-supported scams, including one incident where a man conversed with an AI-generated video avatar of his friend.
AI is becoming increasingly adept at imitating humans – one of the technology’s foundational goals. The Turing Test, proposed by cryptanalyst and mathematician Alan Turing in 1950, assesses a machine’s ability to imitate humans; realism is precisely what AI technology strives for.
AI’s remarkable ability to copy, fake, and imitate human behavior is readily weaponized by fraudsters and will inevitably spawn new cybercrime techniques.
AI will also be used to counter those techniques, in what is likely to become an arms race between cybercriminals, law enforcement, and security companies.