OpenAI closes down its AI-detection tool due to ongoing inaccuracies

July 25, 2023


OpenAI has discontinued its AI text-detection tool, citing a disappointingly “low rate of accuracy” in distinguishing text created by ChatGPT from human-written text.

The tool was made public in January but now returns a ‘page not found’ error when accessed via the link in OpenAI’s original blog post.

OpenAI intends to rebuild a better tool in time, stating, “We are working to incorporate feedback and are currently researching more effective provenance techniques for text.” 

In addition, OpenAI is researching how to build AI detectors for audio and visual content, such as images generated with DALL-E. AI image detectors already exist, but they have proved easy to trick into labeling content as ‘real’ by manipulating images and adding artifacts such as noise.
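To illustrate the kind of manipulation involved, here is a minimal Python sketch, not OpenAI’s or any particular detector’s code, that adds mild Gaussian noise to an AI-generated image; the file paths and noise level are placeholders.

```python
# A minimal sketch of the kind of perturbation reported to fool AI-image
# detectors: adding low-level Gaussian noise to a generated image.
# "generated.png" is a placeholder path for any AI-generated image.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("generated.png").convert("RGB"), dtype=np.float32)

# Mild noise: barely visible to humans, but it can shift a detector's verdict.
sigma = 8.0
noisy = img + np.random.normal(0.0, sigma, size=img.shape)
noisy = np.clip(noisy, 0, 255).astype(np.uint8)

Image.fromarray(noisy).save("generated_noisy.png")
```

Perturbations like this are often imperceptible to people yet enough to change a detector’s output, which is why robustness is a core challenge for provenance tools.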

This coincides with the voluntary AI framework announced by the White House last week, which OpenAI signed alongside six other leading AI companies. The framework outlines commitments to clamp down on potentially harmful deepfake content, and AI detectors will be a key component in achieving that.

OpenAI introduced the text-detection tool in January 2023 to establish an official method for analyzing AI-generated content.

The same month, OpenAI, Stanford University, and Georgetown University published a paper on AI-generated disinformation, highlighting the need for tools that accurately discern human-written text from AI-generated text. 

“Generative language models have improved drastically, and can now produce realistic text outputs that are difficult to distinguish from human-written content…For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text,” the paper argues.

Potential ‘threats’ range from students misusing AI to assist in cheating on assignments to more extreme scenarios like election interference and beyond. 

OpenAI’s tool correctly classified only 26% of AI-generated text as ‘likely AI-written’ and falsely identified human-written text as AI-generated 9% of the time.
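In standard classification terms, those two figures correspond to the tool’s true-positive rate and false-positive rate. The Python sketch below is purely illustrative of how such rates are computed; the function and data are hypothetical, not OpenAI’s evaluation code.

```python
# Illustrative only: how true-positive and false-positive rates are derived.
# 1 = AI-written, 0 = human-written (hypothetical labels and predictions).
def detection_rates(labels, predictions):
    ai_total = sum(1 for y in labels if y == 1)
    human_total = sum(1 for y in labels if y == 0)
    true_pos = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    false_pos = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    return true_pos / ai_total, false_pos / human_total

# Toy data, not OpenAI's results:
tpr, fpr = detection_rates([1, 1, 1, 1, 0, 0], [1, 0, 0, 0, 1, 0])
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")  # TPR=0.25, FPR=0.50 for this toy data

# OpenAI's reported figures equate to a ~0.26 true-positive rate
# (AI text correctly flagged) and a ~0.09 false-positive rate
# (human text wrongly flagged).
```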

It was particularly ineffective on passages under 1,000 characters and on text written in languages other than English.

OpenAI deactivated access to the tool on July 20, with no clear indication of when a newer version will be released. 

AI detector tools face ongoing issues

In a recent Stanford study, AI detection tools proved unreliable when applied to texts written by non-native English speakers.

Specifically, the tools incorrectly labeled these texts as ‘AI-written,’ likely because their phrasing and syntax diverge from native-speaker norms. This risks bias if such tools are deployed in educational settings: authentic texts could be falsely flagged as AI-written.

Ironically, the research team could often trick the detectors into re-labeling the same texts as ‘human-written’ by feeding them into ChatGPT and asking it to rewrite them with more complex phrasing. In other words, students could turn to AI to avoid AI detection.
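The rewriting step the researchers described can be approximated programmatically. The sketch below assumes the OpenAI Python SDK and uses illustrative prompt wording and a placeholder model name, not the study’s exact setup.

```python
# A hedged sketch of the 'rewrite to evade detection' step, assuming the
# OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def rewrite_with_complex_phrasing(text: str) -> str:
    # Illustrative prompt; not necessarily the Stanford team's exact instruction.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": "Rewrite the following text using more complex and varied phrasing:\n\n" + text,
        }],
    )
    return response.choices[0].message.content
```

Re-running a detector on the rewritten output was often enough to flip its label from ‘AI-written’ to ‘human-written,’ which is the irony the study highlights.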

As AI-generated content becomes more sophisticated, the accuracy of AI detectors will become increasingly fraught.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
