As educators wrestle with the rise of AI in the classroom, many institutions are scrambling to “ChatGPT-proof” their curricula.
AI chatbots, most notably ChatGPT, are quickly altering the educational landscape, blurring the lines between AI-generated and authentic work.
Darren Hick, a philosophy professor at Furman University, described his first encounter with AI-generated student work in late 2022, stating, “Academia did not see this coming. So we’re sort of blindsided by it.”
He described AI-generated work as “a clean style. But it’s recognizable. I would say it writes like a very smart 12th-grader.”
Timothy Main, a writing professor at Conestoga College in Canada, said he was in “full-on crisis mode” after handling dozens of cases of AI plagiarism.
This has sparked an intense debate over whether and how AI-assisted work can be stopped. Or should academia simply accept the rising role of AI in society and teach students how to use it effectively?
Many are concerned that AI’s role in education will erode critical thinking, leading to societies that depend on AI for even rudimentary knowledge and decision-making.
With AI becoming a ‘single source of truth,’ can we truly trust it? What about bias? What about when it lies and hallucinates?
Dealing with such complications requires us to think critically, a faculty the technology threatens if we aren’t pragmatic and aware.
Educators suggest practical solutions
Educators are exploring ways to safeguard academic integrity, but there is little consensus.
- Paper exams are making a comeback, but this seems like an archaic solution in the age of IT.
- Editing histories and drafts could be audited to verify that submitted work is a student’s own.
- Exam questions are being revised to become less generic, making it harder for chatbots to provide accurate answers.
Ronan Takizawa, a computer science major at Colorado College, said that while reintroducing “blue books” (traditional exam booklets) feels regressive, it ensures students grasp the necessary concepts.
While educators and their institutions work on solutions, students aren’t getting an easy ride either.
Wrongful accusations
Detecting AI-generated content is deceptively complex.
AI detectors were adequate for GPT-3-generated text, but now that we’ve moved on to GPT-3.5 and GPT-4, their usefulness has dropped significantly.
These detectors measure perplexity (how predictable a text is to a language model) and burstiness (how much sentence length and structure vary), and combine the two to estimate the likelihood that a string of words is AI-generated.
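To make these two signals concrete, here is a minimal sketch in Python, assuming GPT-2 via Hugging Face’s `transformers` library as the scoring model. Commercial detectors use proprietary models, calibrated thresholds, and far more careful sentence segmentation; this illustrates the idea, not any vendor’s actual method.

```python
# Minimal, illustrative sketch of the two signals AI detectors rely on.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How predictable the text is to GPT-2; lower means more
    predictable, which detectors read as a (weak) hint of machine
    authorship."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over its next-token predictions.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Variation in sentence length (coefficient of variation).
    Human writing tends to mix short and long sentences; model
    output is often more uniform."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    var = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(var) / mean

sample = (
    "The assignment was due on Friday. I had barely started. "
    "Every draft I wrote felt worse than the one before it."
)
print(f"perplexity: {perplexity(sample):.1f}")
print(f"burstiness: {burstiness(sample):.2f}")
```

Both numbers are statistical proxies for style, not proof of authorship.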
Evidence suggests they suffer from a high false-positive rate, especially on text written by non-native English speakers, whose writing tends to have lower perplexity.
The surge in AI-generated text has also produced wrongful accusations and a strange role reversal, with students rewriting parts of their authentic work to avoid false positives from AI detectors.
A Texas A&M professor mistakenly accused an entire class of using ChatGPT on their final assignments, only later admitting he was wrong. The episode proved embarrassing for the university once the story was picked up by outlets such as the Washington Post and Insider.
Takizawa also emphasized that many students are unsure when the use of AI counts as cheating.
For instance, in an ironic turn of events, Nathan LeVang of Arizona State University began running his own assignments through AI detectors to ensure his authentic work wouldn’t be mistakenly flagged as AI-generated.
After one of his human-written essays was deemed ‘AI-generated,’ he had no choice but to rewrite sections. One study even found that some students’ work was more likely to be labeled ‘human-written’ when they wrote it with AI.
AI detectors come under fire
These issues are compounded by the fact that AI detectors vary widely in performance, with some producing false-positive rates as high as 98% when applied to text written by non-native English speakers.
One such detector, originality.ai, has been widely criticized for an unacceptable false positive rate.
One review of the tool reads, “Absolutely fake and there to scheme you off your finances. Pasted a job I did in 2014 and still got 99 % ai plagiarized. I tend to think we use the same alphabet and words with ai as well as grammatical structures, so there is a thin line between human and ai content. But this originality stuff is pure scam and trash!”
Another says, “This is a terrible service. It flags original and completely rewritten content as AI. It is not accurate at all because of the false positives that it gives out. Just a waste of time and money. It basically flags anything with proper grammar and professional language.”
AI detectors charge for their services, and they wouldn’t be worth paying for if they didn’t at least attempt to do what they’re designed to do: flag AI.
The battle over AI’s role in education could turn bitter, especially if it results in student injustice.