Technological developments have always caused a stir in education. Some of us are old enough to remember the concerns of many teachers in the late 1970s and early 1980s that ownership and use of the pocket calculator would result in an innumerate population. But as slide rules and log tables went into hibernation, mathematics examinations evolved to include 'calculator' and 'non-calculator' papers. Problem solved.
In the early noughties, there were similar concerns when Wikipedia arrived on the scene. It became a source of inaccurate plagiarism for some. But today, Wikipedia's content is far better moderated and much less plagiarised. Users visit the site for information rather than in search of text to copy. And most recently, there is the mobile phone, a powerful personal computer in the palm of our hands. Schools (and society as a whole) are still trying to figure out what to do about them.
Generative AI is the latest perceived disruptor of teaching and learning. AI can undoubtedly help humans learn better and achieve educational goals more effectively, but it also raises serious concerns for both teachers and students regarding fairness, transparency, accountability and ethics. Like a knife, it has its good and bad uses.
A recent paper by Dunnigan and colleagues asks us all to take time to reflect on the impact of AI as it simultaneously creates opportunities, introduces complications and undermines conventions. Universities have started to do this, and most now have an AI policy that attempts to establish permitted uses and behaviours with a view to avoiding academic misconduct. But what's currently lacking is guidance on how to use AI as a tool to support learning.
To develop this kind of guidance, we feel it's essential to build a community of teachers, educators, researchers and developers that can share insights about AI. Institutions should offer AI literacy workshops or courses that explicitly address the capabilities and limitations of AI in academic study and research – for both staff and students. For example, students and staff need to understand that using AI to generate content for a presentation could be perceived as dishonest, while using AI to help plan the same presentation may be acceptable – and even a good idea.
But we also need to emphasise that generative AI is a cognitive friend, not a teller of truth. Our students need help to understand that when using AI, they are at the heart of any decisions made. This requires training in critical skills so they can assess what AI tells them. Examples include the guidance offered by the University of Saskatchewan and training developed at Chalmers University in Sweden.
Ultimately, we need to embrace AI – teachers and students alike – but we need to make sure that we are in the driving seat, and that means we all need training.
Andy Bullough is a senior research fellow in the Sheffield Institute of Education and project director of the EEF-funded Frames for Learning (F4L) project, which applies psychology and cognitive science in the classroom. Andy teaches and supervises on a range of masters programmes and is the health and safety coordinator for SIOE.
Oksana Topchii is an associate professor at the Department of English Philology and Foreign Language Teaching Methods, School of Foreign Languages, V. N. Karazin Kharkiv National University (Ukraine). She is a visiting fellow at the College of Social Science and Arts at Sheffield Hallam University.