Deepfake audio getting easier to make, harder to detect

January 31, 2024

Fake AI-cloned voices made the news recently because of a “Biden” robocall, but ordinary people are being affected as the technology becomes more accessible and harder to detect.

Two weeks ago, an audio recording of Pikesville High principal Eric Eiswert was released in which it sounded like Eiswert made racist and antisemitic comments about staff and students.

Eiswert denied the authenticity of the audio, a stance supported by Billy Burke, the executive director of the Council of Administrative and Supervisory Employees, representing Baltimore County administrators.

“We believe that it is AI generated,” Burke said. “He did not say that.”

In the age of AI fakes, the “liar’s dividend” gives anyone an easy out to cry “Fake!” when in a tight spot. At the same time, AI voice cloning can cause a lot of reputational damage to ordinary people like Eiswert.

What do you think? Fake or real?

The recording was shared on Instagram by @murder_ink_bmore.

Either the audio is genuine and he should be fired, or it’s an AI fake and someone should be sued.

Two weeks later, no one can say, so Eiswert’s job and reputation remain in limbo. That uncertainty is a testament to how good these voice cloning tools have become, and to the complex issues the technology raises.

A year ago, we might have dismissed Eiswert’s claim of AI fakery, arguing that such advanced AI technology didn’t exist. Now, companies like ElevenLabs and cheap tools like Parrot AI make it easy for anyone to create an impressive voice clone.

OpenVoice, released earlier this month, uses just seconds of audio to clone a voice and allows granular control over emotion, accent, tone, rhythm, and more.

Hany Farid, a professor at the University of California, Berkeley, specializes in digital forensics and authenticating digital media. When asked by a WJZ reporter to analyze the clip, Farid said that it had obviously been edited, but beyond that, he could not confirm whether or not it was authentic.

In an interview with Scientific American, Farid said, “I have analyzed the audio with some of our tools, which aren’t yet publicly available. I think it is likely—but not certain—that this audio is AI-generated…Overall, I think the evidence points to this audio being inauthentic. But before making a final determination, we need to learn more.”

Farid said that perhaps five or fewer labs worldwide could reliably determine whether the audio is an AI fake or genuine.

The AI clone of George Carlin that Dudesy made is a striking example of how well AI voice cloning now matches inflection and emotion. That video has since been made unavailable.

AI developers have also set up a parody Trump vs. Biden debate. The things that ‘Trump’ and ‘Biden’ say are so outlandish that the debate is obviously made for comedic effect, but the voices sound remarkably convincing.

As these tools become better and more freely available, situations like the one facing the principal in Baltimore are going to increasingly affect politicians and everyday people alike.

If you’ve sent a WhatsApp voice note or left a message on an answering service, then you could be next. Conversely, if someone recorded you saying something awkward, you could simply claim it’s an AI fake. Nobody seems able to prove it either way.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.

