AI could be a boon for overburdened teachers, assisting with tasks from creating quizzes to offering feedback.
However, according to AQA, one of the UK’s leading exam boards, the technology necessitates stringent human oversight.
The organization recently shared its findings with the UK government to contribute to a broader consultation on the use of AI in the educational sector.
AQA researchers conducted trials with various AI tools, such as ChatGPT, GPT-4, LLaMA, and Alpaca, across an assortment of science papers.
“If AI can reduce workload by helping with lesson planning and marking, then the brightest people will be more likely to become teachers and stay in the job,” said Alex Scharaschkin, AQA’s executive director of assessment research and innovation.
The trials revealed that AI could facilitate the creation of custom quizzes on specific topics, automatically grade student responses, provide immediate feedback, and generate curriculum summaries.
AI’s advantages in education are undeniable, but students recognize them too, and educational establishments have become the center of a bitter debate about whether, how, and when AI should be integrated into the sector.
AI’s impacts on education are hotly debated
AI’s role in education is still embryonic, and concerns about its limitations and risks are impossible to put aside.
If students come to rely on AI, they may not only secure an easy ride to top grades; such reliance could also impact the very fabric of human knowledge.
Rather than knowledge being dispersed through numerous belief systems and sociocultural lenses, it might become highly centralized, essentially shaped by a finite pool of training data.
Teachers are struggling to handle the risk of AI-generated content, too, as chatbots are already excellent at evading AI detectors.
To rub salt in the wound, the use of AI detectors is becoming so controversial that deploying them may present a greater risk than not using them at all, given how high the rate of false positives is.
Additionally, research shows that AI models are already ingesting their own outputs, creating a ‘feedback loop’ in which little fresh information is introduced into the system.
AQA cautioned in a recent blog post, “There is a chance that the AI systems simply perpetuate popular myths, as they have no real-world context to draw upon beyond ‘what’s talked a lot about on the internet.’” This is an apt observation, as large language models (LLMs) like ChatGPT rely on contemporary internet content to function.
On the topic of AI-related bias, AQA writes, “In terms of bias, an AI system could treat some groups of people more favourably or discriminate against them, based on characteristics such as sex, ethnicity or religious beliefs.” Recent evidence indicates chatbots feature political bias in addition to these other forms of bias.
And what about children who are growing up amid the storm of generative AI?
The World Economic Forum (WEF) recently highlighted a distinct lack of policy surrounding AI’s interaction with children, particularly as apps such as Snapchat add chatbots to their services.
Right now, AI’s role in education is a real head-scratcher with many questions and few answers.