Leading UK universities issue joint statement on the use of AI

July 4, 2023


The Russell Group, comprising some of the UK’s top universities, intends to adapt university teaching standards in response to AI.

Backed by the vice-chancellors of 24 top UK universities, the Russell Group published a statement to foster ethical and responsible use of AI tools like ChatGPT while preserving academic integrity.

The guidance says, “These policies make it clear to students and staff where the use of generative AI is inappropriate, and are intended to support them in making informed decisions and to empower them to use these tools appropriately and acknowledge their use where necessary.”

“We believe these AI tools could enhance students’ learning experiences, improve critical thinking abilities, and prepare them for real-world applications of these AI technologies.”

Dr. Tim Bradshaw, the Russell Group chief executive, wrote in the statement, “The transformative opportunity provided by AI is huge and our universities are determined to grasp it. This statement of principles underlines our commitment to doing so in a way that benefits students and staff and protects the integrity of the high-quality education Russell Group universities provide.”

Educational establishments are adjusting to generative AI tools like ChatGPT, which enable students to conduct research, write essays, solve coding problems, and complete various academic tasks. Harvard recently announced an AI teaching tool for its computer science course.

The Russell Group’s move answers UK Education Secretary Gillian Keegan’s call for evidence on how generative AI could be used safely in educational environments. The Group’s statement responded, “All Russell Group universities have updated their academic conduct policies to address the emergence of generative AI, providing clear guidelines on its appropriate use.”

Professor Andrew Brass, head of the School of Health Sciences at the University of Manchester, said, “We know that students are already utilising this technology, so the question for us as educators is how do you best prepare them for this, and what are the skills they need to have to know how to engage with generative AI sensibly?”

Professor Brass also noted that the assessment process must evolve, stating, “We need to shift our focus to problem-solving and critical thinking skills rather than knowledge recall.”

Professor Michael Grove, Deputy Pro-Vice-Chancellor at the University of Birmingham, highlighted the potential of generative AI, stating, “[AI] could be used to support the development of stylistic writing skills or to make learning materials more accessible and inclusive for students from different cultural or linguistic backgrounds.”

What are the Russell Group’s 5 principles?

This appears to be the first coordinated set of cross-university AI policies in the UK. In a detailed PDF, the Russell Group summarized the 5 principles as follows:

1: Universities will support AI literacy 

Universities will ensure students and staff understand the strengths and limitations of generative AI tools. This includes ethical concerns like bias. 

They’ll need to provide necessary guidance and training to increase AI literacy, equip users with skills to use these tools properly, and ensure staff competence. This includes understanding privacy risks, biases, inaccuracies, ethics codes, and plagiarism.

2: Staff should be equipped to support students in using generative AI tools appropriately

Universities will create resources and training for staff to guide students on using AI. 

As appropriate uses may differ between academic disciplines, universities will encourage departments to apply policies within their own context. Regular engagement between staff and students is vital for a shared understanding of how generative AI tools should be used.

3: Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access

Universities will evolve their teaching and assessment methods to incorporate the ethical use of generative AI tools. 

While adaptations may vary across universities and disciplines, all staff supporting student learning should design their sessions, materials, and assessments accordingly. 

As new AI tools become available, some may be restricted to paying subscribers. Universities need to ensure equal access to such tools for their staff and students.

4: Universities will ensure academic rigor and integrity are upheld

All Russell Group universities have updated their academic conduct policies to reflect the emergence of generative AI. 

These policies highlight inappropriate uses of AI and help students and staff make informed decisions. Transparent policies are critical for maintaining academic integrity.

5: Universities will work collaboratively to share best practices as the technology evolves

Navigating the evolving landscape of AI will require collaboration between universities, students, schools, employers, industry leaders, and professional bodies. 

Policies and guidance on generative AI use will be evaluated regularly, including monitoring its integration into academic life and adapting to technology evolution. 

Encouraging relationships between different stakeholders will be key to addressing emerging challenges and promoting ethical AI use.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
