This year, the use of generative AI tools in academic settings has sparked a fierce ongoing debate. There are few, if any, answers.
Is AI a legitimate academic research assistant or a tool for plagiarism? What about misinformation and hallucinations? When is AI-assisted work permitted, and when is it cheating?
The problem is that nobody knows for certain, and opinions differ markedly between groups, especially between students and educators.
A recent study involving 154 students and 89 academics across Australian universities revealed a significant increase in AI usage among students, with many seeking clarity on what constitutes cheating.
Jemma Skeat, Deakin University’s Director of Speech Pathology, voices the students’ concerns: “They want the detail around… How can I use it for this assignment? What is cheating? What is not? And where is the line?”
She highlights the ambiguity of current academic policies: “It’s not very clear… It’s not black and white what is cheating.”
The study illustrates a divided stance among universities on detecting AI usage in assignments. It also echoes previous surveys and research showing a clear divide between how educators and students perceive AI.
As you might guess, students are broadly enthusiastic; they are, after all, at the forefront of technological trends. Teachers, however, are not so confident.
Skeat explains, “A lot of universities have taken onboard specific software that aims to detect generative AI in written assignments and some universities haven’t because they feel that the level of false positives, so students who are detected as cheating when they haven’t actually cheated, is too high.”
AI overuse risks human enfeeblement, but isn’t it inevitable?
Andreas Gwyther-Gouriotis, a recent computer science graduate, discusses AI’s potential to aid students, especially in lower-level courses: “People who don’t really know how to program are able to pass those (lower level) classes by just asking ChatGPT how to solve these typical programming problems that they teach at lower levels.”
He also remarks on the unclear regulations: “The rules are a bit vague.” Beyond cheating, there is a broader concern that AI will further erode people’s already dwindling attention spans.
Giampaolo Viglia of the Department of Economics and Political Science at the University of Aosta Valley, Italy, argues, “The advent of ChatGPT – if used in a compulsive way – poses a threat both for students and for teachers. For students, who are already suffering from a lower attention span and a significant reduction in book reading intake the risk is going into a lethargic mode.”
The wider impact is known as ‘enfeeblement’: a decline in humans’ physical and mental abilities as we hand ever more of our work to AI, a possible future brilliantly encapsulated by the film WALL-E.
This debate also raises concerns about equitable access to AI tools, with fears that such technologies may become paid services that disadvantage students from lower socio-economic backgrounds. ChatGPT Plus, which includes GPT-4, for example, gives subscribers a significant advantage over students using the free version.
Another complication is false positives. Even when teachers suspect that content is AI-written, it is hard to prove: many AI detectors display unacceptably high false positive rates, and some universities, schools, and colleges have stopped using them for this very reason.
As AI tools like ChatGPT become more integrated into the educational landscape, there is an urgent need for clear policies and guidelines to ensure ethical usage.