Fake reviews are ubiquitous, but they’ve never been this hard to identify.
By some estimates, 40% of Amazon reviews are fake, and the problem extends to sites like Trustpilot, Tripadvisor, and Google Reviews.
In April, CNBC reported that some Amazon reviews give away their origin by beginning their write-up with “As an AI language model…” However, for the most part, AI-generated fake reviews are challenging to identify with the naked eye.
In an effort to clamp down on fraudulent reviews, the Federal Trade Commission (FTC) recently discussed establishing rules banning fake and paid reviews, with harsh penalties for anyone who violates them.
Michael Atleson, a lawyer in the FTC’s Division of Advertising Practices, said, “We don’t know — really have no way to know — the extent to which bad actors are actually using any of these tools, and how much may be bot-generated versus human-generated.”
Enforcing such rules, however, relies on distinguishing fake reviews from real ones, which is exceedingly difficult now that AI is linguistically precise and knowledgeable across a wide range of subjects.
AI is leveling up fake reviews
Fakespot, a startup that uses AI to detect fraudulent reviews, has registered a surge in AI-generated fakes, according to CEO Saoud Khalifah. The company is now investing in techniques designed to identify reviews created by AI systems such as ChatGPT.
“The thing that is very different today is that the models are knowledgeable to a point where they can write about anything,” Khalifah stated.
Another complication: who’s to say someone isn’t using AI to help write an authentic review?
AI-generated reviews aren’t strictly against Amazon’s policies. According to a company spokesperson, the company allows customers to post AI-crafted reviews as long as they’re authentic and adhere to policy guidelines.
The key question is whether the AI tasked with detecting fake reviews can outsmart the AI generating them. Khalifah noted that the first AI-generated fake reviews his company detected originated from India and were created by “fake review farms.”
“It’s definitely a hard test to pass for these detection tools,” said Bhuwan Dhingra, an assistant professor of computer science at Duke University. “Because if the models are exactly matching the way humans write something, then you really can’t distinguish between the two.”
Ben Zhao, a professor of computer science at the University of Chicago, similarly argued that it’s “almost impossible” for AI to effectively catch AI-generated reviews.
“It’s an ongoing cat-and-mouse chase, but there is nothing fundamental at the end of the day that distinguishes an AI-created piece of content,” he said.
For now, fake reviews will remain part of the internet’s furniture, and they’re likely to become ever harder to distinguish from the real thing, both for humans and for the AI systems tasked with identifying them.
“It’s terrifying for consumers,” said Teresa Murray, director of the consumer watchdog office for the U.S. Public Interest Research Group.
“Already, AI is helping dishonest businesses spit out real-sounding reviews with a conversational tone by the thousands in a matter of seconds.”
Gone are the days of poorly written fake reviews. Get ready for polished AI-generated ones that are indistinguishable from the real thing.