New ‘ChatGPT detector’ discerns AI-written academic papers

November 6, 2023


A new machine-learning model outperforms popular AI text detectors on papers in the field of chemistry.

The study, published in Cell Reports Physical Science, describes an AI text classifier that outperforms two other popular AI detection systems, including ZeroGPT.

The model scrutinizes 20 stylistic features of writing, such as sentence length variation and specific word and punctuation use, to assess if a piece was composed by an academic or by ChatGPT. 
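For readers curious how such features look in practice, here is a minimal Python sketch of the kind of stylistic measurements described above (sentence-length variation, punctuation and word usage). The specific features and helper names are illustrative assumptions, not the authors' actual feature set.

```python
import re
import statistics

def stylometric_features(text: str) -> dict:
    """Compute a few illustrative stylistic features of a passage.

    These are examples of the kinds of features the detector reportedly
    uses (sentence-length variation, punctuation and word usage); the
    real study's 20 features are not reproduced here.
    """
    # Naive sentence split on ., ! or ? followed by whitespace
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    words = text.split()

    return {
        "n_sentences": len(sentences),
        "mean_sentence_len": statistics.mean(lengths) if lengths else 0.0,
        "sd_sentence_len": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        # Punctuation usage, normalized per sentence
        "semicolons_per_sentence": text.count(";") / max(len(sentences), 1),
        "question_marks": text.count("?"),
        "parentheses": text.count("("),
        # Word-choice signal: rate of a single connective as a toy example
        "however_rate": sum(w.lower().strip(",.") == "however" for w in words)
                        / max(len(words), 1),
    }

print(stylometric_features(
    "Electrochemical CO2 reduction has attracted attention. However, "
    "selectivity remains a challenge; catalyst design is therefore key."
))
```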

Researchers trained the model on the introductions from 100 published papers across ten chemistry journals from the American Chemical Society (ACS). The researchers then prompted ChatGPT-3.5 to craft 200 introductions in a style consistent with ACS journals, supplying the titles for half and the abstracts for the other half.

Upon evaluation, the detector identified 100% of the introductions that ChatGPT wrote from titles alone. When analyzing introductions generated from abstracts, accuracy dropped slightly to 98%.

The detector’s performance held up even on text generated by GPT-4. By comparison, ZeroGPT and a text classification tool from OpenAI both showed significantly lower accuracy on the same material.

The study’s co-author, Heather Desaire, a chemist at the University of Kansas in Lawrence, emphasized the unique focus of their tool: “Most of the field of text analysis wants a really general detector that will work on anything. We were really going after accuracy by making a tool that focuses on a particular type of paper.”

Although the tool showcased its strength across various journal styles and prompts, it’s highly specialized for scientific articles and was less effective with material from university newspapers.

Because the model was only applied to introductions and abstracts, it is unlikely to work effectively on an entire paper.

More about the study

Given the poor performance of existing AI writing detectors and the controversy they’re causing, any model with a near-100% accuracy rate is very interesting indeed.

  • This AI text detector was designed for scientific journal articles, specifically chemistry journals, demonstrating remarkable accuracy in distinguishing between human and AI-generated text, including GPT-4 text. 
  • The detector, built on an XGBoost machine-learning algorithm and 20 distinct text features, outperforms current AI detection tools with a 98–100% accuracy rate (a minimal training sketch follows this list).
  • The tool successfully identified AI-generated text in various testing scenarios, even with prompts designed to conceal the use of AI, indicating robustness against different writing styles and complexities. 
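According to the study, the classifier itself is an XGBoost model trained on those 20 features. The sketch below shows roughly what such a pipeline looks like; the feature matrix and labels are random placeholders standing in for features extracted from the 100 human-written and 200 ChatGPT-written introductions, so this illustrates the approach rather than reproducing the study.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: rows are documents, columns are 20 stylometric features.
# In the real study these would come from 100 human and 200 ChatGPT intros.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = np.array([0] * 100 + [1] * 200)  # 0 = human-written, 1 = ChatGPT-written

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Gradient-boosted trees are a common choice for small, tabular feature sets like this, which is presumably part of why XGBoost was preferred over a larger neural model.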

However, with such a small training dataset, the approach looks vulnerable to overfitting, meaning the model might work exceptionally well on the data it was trained on but perform poorly outside of it.

Moreover, there may be an implicit bias toward labeling ambiguous text as human-written: because the detector is built to catch AI-generated text, it is likely tuned to tolerate false negatives (missed AI text) rather than risk false positives (human writing wrongly flagged as AI).


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
