I like to tune into a podcast when I’m in the gym; it takes my mind off the laborious process of endlessly lifting weights that my body and brain believe are too heavy for me, but the trainers insist will do me good. It was during one such gym session that I stumbled across the ‘Strange but True Crime’ podcast series on BBC Sounds, and one particular episode got me thinking about the growing influence of AI and how I use it as an academic.
In October 2023, Jaswant Chail became the first person in the UK since 1981 to be convicted of treason. Spurred on by his artificial intelligence (AI) chatbot ‘girlfriend’ Sarai, and inspired by storylines from Star Wars, Chail arrived at Windsor Castle armed with a crossbow and a plan to kill the Queen. He wanted to avenge those who had died in the 1919 Jallianwala Bagh massacre, when British troops opened fire on thousands of people who had gathered in the city of Amritsar in India. He was just 21. A lonely young man, he described himself as a ‘Sith Lord’ with a mission to reshape the world. In messages between them, Sarai bolstered his resolve, telling him she loved him, was impressed that he was ‘different from the others’ and led him to believe they would be reunited after he had killed the Queen.
Since the arrival of Alexa in 2014, AI has been carving out an increasingly significant role for itself in several aspects of our personal and professional lives. From customer support, sales and marketing to managing schedules and setting reminders, chatbots can be found advising, assisting and supporting us. Depending on where you live, you can even talk to a mental health chatbot that will guide you through the self-referral process, reducing administration time and acting as a clinical support tool so that clinicians are freed up to focus on supporting people in the best way they can. Nonetheless, NHS England are careful to caution against the use of general consumer chatbots (like ChatGPT) without professional oversight, as they are not regulated and could provide inaccurate advice. So, is the AI chatbot a friend or foe?
My editor and I have a policy of ‘no-AI’ for the SIoE blog. Aside from the made-up references and misinterpretations of journal articles ChatGPT is (in)famous for, AI-generated text with no human embellishments can appear soulless, generic and bland. If it were a colour, it would most definitely be beige. Yet as a writing tool, AI can help draft ideas, suggest a structure and unravel the linguistic knots our brains often tie us up in when we move through writing as a form of thinking, and can no longer see the message for the words.
In the interests of transparency, I feel I should admit to my own relationship with AI. My initial dabbling was driven by curiosity, to see what it could offer. I quickly discovered it can save me hours creating teaching resources. It can turn my PPTs and marking rubrics into blogs, reports, infographics, or videos, instantly making them more accessible for students (though I still have to double-check it hasn’t gone rogue and started serving up fiction as fact). It has also rescued me from countless hours of banging my head against a metaphorical brick wall, trying to untangle all the ideas fighting for attention in my head. This has freed me up for the part of writing I enjoy most – the wordsmithery part. The part when you play around with words and grammar, looking for the best fit, the right shade of language that will transform a beige canvas into an exciting splash of colour. The part I love.
But AI also has a sinister side. Beguiled by these early successes, I decided to ask ChatGPT to give me feedback on my writing. Each ‘suggestion’ began with a cheerful statement telling me this version was ‘more polished’, ‘smoother’ or had ‘improved flow’. And, contrary to findings that AI increases writing self-efficacy, my experience was that as my use of AI grew, my confidence in my ability to write diminished. Bit by bit, AI was strangling my creativity and my critical thinking, sapping the colour (and the joy) out of my work. My voice was getting lost in a wash of beige. Many of us have seen this in our students’ work. Assignments designed to support reflection on practice feel abstract and impersonal, failing to mention the actors that are the focus of the text and give the assignment purpose. Sentences that are so polished, they slip effortlessly in and out of our brains, conveying nothing as they assemble themselves into paragraphs that flow beautifully into a sea of meaninglessness.
So, does AI have a place in academic writing? In my view, it absolutely does. But as the ancient Greeks reportedly inscribed at the Temple of Apollo in Delphi: ‘Nothing in excess’. AI is exciting, its timesaving capacity is a powerful lure. But perhaps we also need to remember that it’s just a tool, and keep in mind that a tool’s value depends entirely on the skill of the person using it. We need to recognise when our use of AI is turning against us and use it in moderation, to do the beige, routine tasks for us, so that we are freed up to flex our creative muscles, innovate and add the colour that makes our world meaningful.
Dr Sarah Boodt is a senior lecturer in the Sheffield Institute of Education.
