New McAfee report indicates the rise of AI voice fraud

July 3, 2023


A recent McAfee report has revealed an alarming rise in fraud carried out with AI-supported voice cloning.

So-called ‘imposter scams’ are nothing new. The Federal Trade Commission reports that they caused $2.6 billion in losses in the US in 2022 alone, a rise of 30% from the year before.

These scams, also known as ‘family emergency’ scams, involve the fraudster manipulating the victim into believing a loved one urgently needs money.

In a recent McAfee survey, 1 in 4 respondents reported encountering an AI voice scam, while 1 in 10 said they had been personally targeted. The study highlights that fraudsters can clone voices from short audio snippets extracted from social media clips.

For example, social media influencer Eddie Cumberbatch, who has over 100,000 followers, was targeted by an AI scam when his grandparents received a fraudulent call impersonating his voice. The scammers claimed he had been in a devastating car crash and urgently needed money.

In another incident, a Canadian couple lost CAD $21,000 (US $15,449) to a similar AI voice scam.

An AI clone of Benjamin Perkin’s voice convinced his parents that he had been imprisoned for accidentally killing a diplomat in a car crash and needed money for legal fees. His parents obliged, and Perkin later told The Washington Post, “There’s no insurance. There’s no getting it back. It’s gone.”

In response to rising AI fraud, Steve Grobman, McAfee’s Chief Technology Officer, warned, “One of the things that’s most important to recognize with the advances in AI this year is it’s largely about bringing these technologies into reach of many more people, including really enabling the scale within the cyberactor community. Cybercriminals are able to use generative AI for fake voices and deepfakes in ways that used to require a lot more sophistication.”

What the McAfee report says about AI fraud

According to recent data from McAfee, voice scams typically exploit victims’ emotional connections to their loved ones. 

One in 10 people surveyed said they had been personally targeted, and 15% reported that someone they know had been victimized.

The problem appears to be most severe in India, where 47% of respondents reported experience with this type of scam, followed by the US at 14% and the UK at 8%. Meanwhile, 36% of all adults surveyed said they had never heard of the scam.

Nearly half (45%) of people are likely to respond to a request for money if it comes from a friend or loved one, with certain fabricated scenarios appearing more convincing. 

Topping the list is a car crash or breakdown, with 48% of respondents indicating they would likely react to such a situation. This is closely followed by a reported robbery, at 47%. 

If the caller claims to have lost their phone or wallet, 43% of people are likely to respond, while 41% would assist someone who said they were traveling abroad and needed help.

Around 40% of respondents said they would likely respond to a call purportedly from a partner or spouse, while a call from their mother came in at 24%. Among parents aged 50 or over, 41% said they would be most likely to respond if the call purportedly came from their child.

Interestingly, most known cases involve parents or grandparents reporting that a scammer cloned the voice of their child or grandchild to carry out the deception. 

Be AI-scam aware

McAfee makes several recommendations to avoid AI voice scams:

  • Establish a unique ‘codeword’ with your children, family members, or close friends. If you receive a suspicious emergency call, ask for the codeword. 
  • Always question the source. Whether it’s a call, text, or email from an unfamiliar sender or even a number you recognize, take a moment to think. Ask questions a scammer wouldn’t be able to answer, such as “Can you confirm my son’s name?” or “When is your father’s birthday?” This tactic can unsettle a scammer and create unnatural pauses in the conversation, raising suspicion.
  • Maintain emotional control. Cybercriminals prey on your emotional ties to the person they’re impersonating to provoke a hasty reaction. Take a step back and assess the situation before responding. 
  • Be wary of unexpected calls from unknown numbers.

AI-related fraud is becoming ubiquitous, and there’s no hard and fast way to deal with it. In a recent Fox News interview, a cybersecurity analyst warned that it could cost Americans $1 trillion a year if left unchecked.

Education surrounding the issue, including using codewords and other tactics to spook fraudsters, is perhaps the most effective way to deal with voice fraud for the time being.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
