Are We Getting Stupider? The Consequences of ChatGPT.
As an AI model, I cannot comment on my limitations.
Oops, I’ve been caught out again! I’ll have to have a crack at it on my own.
Students are regularly warned of the dangers of ChatGPT. Sitting in an introductory lecture at university, my cohort was told that anyone caught using the language model faced expulsion. Two years later, those threats have, unfortunately, materialized: many students have indeed been expelled. The situation, however, is more complex than it seems; it is not mere laziness that motivates students to take the risk. Hannah, who contracted Covid before one of her deadlines, felt significant pressure to live up to expectations: ‘I needed to maintain that level of grades, and it just kind of really pushed me into a place of using artificial intelligence’. The misuse of ChatGPT, like spending hours mindlessly ‘doom scrolling’ on TikTok, is yet another temptation of modern, algorithmic technology. Even if some students ‘are still somehow reluctant towards its use due to distrust’, the lure of convenience, unfortunately, preys mainly on the vulnerable, like Hannah, who are caught between extenuating circumstances and good practice.
According to a study by Andrea Martínez-Salgueiro et al., 12% of students admit to using ChatGPT to forge work. I contend that this is a troublingly significant number, if not a cataclysmic one. For Martínez-Salgueiro, the tool can be ‘a threat to human intellectual development’. When presented with the statement ‘ChatGPT reduces the effort in my assignments’ and asked to rate it on a scale of 1 to 5, where 1 indicates the least agreement and 5 the most, the 256 bachelor’s-degree respondents gave a median rating of 4 and, crucially, no one rated the statement lower than 3. Reducing the effort in assignments is not necessarily a negative feature of the tool: it may speed up the academic process rather than dumb it down. Nevertheless, over-reliance on ChatGPT is particularly dangerous given its in-built biases, programmed into the model by all-too-fallible developers, and its habit of generating imagined sources.
On a more promising note, in Martínez-Salgueiro’s study, 34% of respondents reported using ChatGPT for research purposes. Outside academia, ChatGPT has also proven effective at combating conspiratorial thinking. According to H. Holden Thorp, ‘Conspiracy believers reduced their misinformed beliefs by 20% on average’ when using large language models like ChatGPT. Because it avoids ‘showing the kind of perceived bias that might be attached to a human interlocutor’, ChatGPT’s arguments appear impartial and convincing.
Conspiracy aside, ChatGPT may stifle intellect in another, less obvious way than simply enabling cheating. When you ask it a question, instead of offering multiple results like a typical search engine, it generates a tailor-made, highly specific response; the broader details of a subject will typically only appear when prompted for further. By contrast, when you type a question into a search engine such as Google, it offers a range of results that can be tangential to the original subject, and this, inevitably, makes the user curious. In that sense it can be likened to browsing a library in real life: you might come across something novel that you hadn’t considered and discover other resources; they aren’t handed to you on a silver platter. ChatGPT is an echo chamber of a resource: it will only yield responses as inspired as the prompts its users can generate.
In ChatGPT’s world, stupid people who ask stupid questions remain stupid; intelligent responses are reserved for intelligent prompts. Here’s what my best source (ChatGPT) had to say on the matter: ‘Relying too much on ChatGPT is like using a GPS for every trip—you’ll get where you’re going, but forget how to read a map’.