As a language model, ChatGPT has the potential to impact universities in various ways. One of the most significant impacts could be on the way students learn and interact with educational content. ChatGPT can assist students in various learning activities such as generating personalized study materials, answering questions, grading assignments, and providing feedback. By doing so, the technology could enhance the learning experience of students by providing more personalized and efficient support.
Another way that ChatGPT could impact universities is by improving administrative processes. ChatGPT can automate administrative tasks such as answering routine inquiries from students, scheduling appointments, and managing records. This can significantly reduce the workload of university staff, allowing them to focus on more complex tasks that require human input and expertise.
Moreover, ChatGPT can enhance the accessibility and inclusivity of universities. For instance, it can translate educational materials into various languages, making them accessible to a more diverse group of students. Additionally, ChatGPT can provide support to students with special needs, such as those with visual or hearing impairments, by generating text-to-speech or sign language translations.
However, it is also important to note that ChatGPT is not a substitute for human teachers or staff. While it can perform some tasks more efficiently than humans, it lacks the creativity, empathy, and critical thinking skills that humans possess. Therefore, it is crucial that universities use ChatGPT as a tool to enhance the learning experience and improve administrative processes, rather than as a replacement for human interaction and expertise.
In summary, ChatGPT has the potential to make significant contributions to universities by enhancing the learning experience, improving administrative processes, and promoting inclusivity. However, its impact will depend on how universities integrate the technology into their practices and how they balance the benefits of automation with the importance of human input and expertise.
Generative Artificial Intelligence (AI) has generated widespread discussion, thinking, development, and a fair amount of angst in higher education since the launch of ChatGPT at the beginning of the year. The planned release of Microsoft Copilot, accompanied by a jaw-dropping video, looked set to bring even more debate, though Microsoft announced last week that it would not be releasing Copilot to education customers at the moment.
If the debate has passed you by, ChatGPT is a large language model which uses deep learning techniques to generate human-like responses to natural language queries. The opening italic paragraphs of this blog were written by ChatGPT in response to my request for text on its potential impact on universities.
The debate has already been furious. On the one side are those who see ChatGPT as a threat. Newspapers (in the circumstances, a quaint term) have been loudest in this space. One has reported that forty universities have banned ChatGPT. Responding to this sort of coverage, the head of the schools’ examination regulator, Ofqual, expressed her view that all assessments should be undertaken using pen and paper in controlled settings. The fear is obvious: if ChatGPT can produce convincing text in response to natural language queries, how can anyone be sure that any student’s written submission is their own work?
Others take a very different view. Cheating is pernicious, but it was not invented by natural language programmes. Indeed, ChatGPT is now just one of several hundred such programmes available – banning it feels like a game of ‘whack-a-mole’, or the twenty-first-century equivalent of early nineteenth-century machine breaking. Students are learning in a generative AI world. If the problem is cheating, then the real question is about assessment design, and one very strong positive consequence of the development of generative AI technologies may well be to accelerate further the movement towards authentic assessment modes. This is certainly the view of Professor David Smith from Hallam’s BioSciences Department, who has thought faster and deeper about this than most.
It’s interesting that much of the discussion of generative AI has moved straight to questions of assessment integrity and cheating – which tells us something both about moral panics and about the febrile public perception of higher education. But the discussion I joined a couple of weeks ago at the Jisc Edtech Advisory Group ranged expansively over the other possibilities of generative AI: the potential it offers to assist non-English speakers; to support students with additional needs; to provide rapid responses to frequently asked questions, allowing student support teams to focus on adding value where their support is most needed; to work across the often multiple platforms of virtual learning environments; to expand the disciplinary canon beyond conventionally over-used resources; to personalise support; and to further enhance academic staff development in making effective use of technology.

The concerns, on the other hand, went beyond questions of assessment integrity. Because of the way natural language programmes work – essentially, off very large text corpora – there is a potential to embed and reinforce stereotypes. There are concerns that regulation, both within and across universities, is moving more slowly than the capacity of the technology. There are, of course, concerns about further extending the power of large technology companies – particularly if regulation is slow.
But what is clear is that higher education cannot avoid the transformative potential of generative AI. Our students and our researchers are operating in a world of powerful machine learning. Our graduates will be working in environments where these tools will rapidly become commonplace – shifting the nature of many jobs and threatening others. As David Smith comments, “You cannot stop the use of AI, the genie is out of the bottle and it’s only going to get smarter”.