Several universities, including Vanderbilt, Michigan State, Northwestern, and the University of Texas at Austin, have stopped using Turnitin's AI-detection feature, which flags AI-generated content in student submissions.
Turnitin, renowned for its plagiarism detection tools, introduced a feature in April to identify machine-written content.
However, current AI models such as GPT-4 often evade detection, while genuine human-written work is frequently misflagged as AI-generated.
The latter is the greater concern for colleges and universities, since false positives could lead to unfounded accusations and even legal repercussions.
Moreover, studies suggest that students who speak English as a second language are considerably more likely to have their work flagged as AI-generated.
When the feature is enabled, the detection software breaks a submitted document into segments of text and assigns each a score reflecting how likely the content is to be human- or AI-written.
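Turnitin has not disclosed how this scoring works, so the following is only a minimal, hypothetical sketch of a segment-scoring workflow. It uses a crude sentence-length-uniformity heuristic purely for illustration; this is not Turnitin's method and is not a reliable detector.

```python
# Illustrative sketch only: Turnitin has not published its detection method.
# This toy scorer splits a document into sentences and assigns an overall
# score from a crude heuristic (sentence-length uniformity), mimicking the
# segment-by-segment scoring workflow described above.
import re
from statistics import mean, pstdev

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter on ., !, ? boundaries."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def score_document(text: str) -> dict:
    """Return the sentence count and a 'uniformity' score in [0, 1].

    Highly uniform sentence lengths are sometimes cited as a loose signal
    of machine-generated prose; the heuristic is used here only to show
    the shape of a scoring pipeline, not as a real detector.
    """
    sentences = split_sentences(text)
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return {"sentences": len(lengths), "uniformity": 0.0}
    spread = pstdev(lengths) / (mean(lengths) or 1)   # coefficient of variation
    uniformity = max(0.0, 1.0 - spread)               # 1.0 = perfectly uniform
    return {"sentences": len(lengths), "uniformity": round(uniformity, 2)}

if __name__ == "__main__":
    sample = ("The results were clear. The method was simple. "
              "The outcome was expected. The report was short.")
    print(score_document(sample))   # {'sentences': 4, 'uniformity': 1.0}
```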
The accuracy of Turnitin’s AI detection has been questioned. Michael Coley, an instructional technology consultant at Vanderbilt University, remarked, “There is a larger question of how Turnitin detects AI writing and if that is even possible. To date, Turnitin gives no detailed information about how it determines if a piece of writing is AI-generated. The most they have said is that the tool looks for patterns common in AI writing, but they do not explain or define what those patterns are.”
Vanderbilt University noted that even at the sub-one-percent false positive rate Turnitin claims, as many as 750 papers could be incorrectly flagged each year, given that the university ran 75,000 papers through Turnitin in 2022.
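As a back-of-the-envelope check of that figure (the submission volume comes from Vanderbilt's statement, and the one percent rate is treated here as an upper bound):

```python
# Expected number of human-written papers wrongly flagged per year,
# given an annual submission volume and a false positive rate.
def expected_false_flags(papers_per_year: int, false_positive_rate: float) -> float:
    return papers_per_year * false_positive_rate

print(expected_false_flags(75_000, 0.01))  # 750.0 papers flagged in error
```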
Coley further expressed concerns about data privacy, stating, “There are real privacy concerns about taking student data and entering it into a detector managed by an external company with unclear privacy and data usage policies. AI detection is challenging, and its complexity will only increase as AI tools evolve. We don’t see AI detection software as an effective tool.”
Annie Chechitelli, Turnitin’s chief product officer, emphasized that their AI-flagging tool shouldn’t be used to automatically penalize students.
She stated, “At Turnitin, our guidance is that there’s no substitute for understanding a student’s writing style and background. Our technology isn’t meant to replace educators’ judgment. Reports on AI writing presence are meant to facilitate discussions with students, not to conclude misconduct.”
Future generations of AI models will only become more sophisticated and better at slipping past detectors whose limitations have already been exposed.
AI’s role in education is sure to be further tested.