A provocative question, but not clickbait. I will explore it here.
In a recent discussion group, my PR apprentices suggested that AI was making people more stupid. The ease of using Copilot or ChatGPT, they said, was shrinking attention spans that were already short, and people were willing to believe any online information that sounded credible without checking its sources. Too much faff!
That makes Large Language Models (LLMs) credible targets for authoritarian governments who wish to manipulate facts, history, and opinion with disinformation. Research shows that the Chinese government exerts both direct and indirect control over LLMs. This includes direct regulation and censorship: companies developing LLMs in China must comply with government rules, leading to the systematic removal or alteration of content that contradicts official narratives. It also includes Chinese state propaganda in the datasets used to train LLMs, producing models that reflect pro-regime attitudes and suppress dissenting information. There are also concerns that malicious actors, including foreign governments, may use LLMs to push their own viewpoint, for example by generating propaganda and fake news or by manipulating public opinion during elections. Intelligence agencies are monitoring such activities, especially from adversarial states. Hopefully, the LLM platforms are, too!
Back to the question. The Spectator magazine has a lovely article by Sean Thomas about a recent study undertaken by the MIT Media Lab. The researchers strapped EEG caps to a group of students and set them a task: write short essays, with some students using only their own brains, others using Google, and still others using ChatGPT. They then monitored the changes in neural activity. The results? Those who used no AI tools at all lit up the EEG: they were thinking. Those using Google sparkled somewhat less. And the brains of those relying on ChatGPT were dimmed and flickering. The ChatGPT group not only produced the dullest prose – safe and samey – but they also couldn’t remember what they’d written. When asked to recall their essays minutes later, 78 per cent failed. When ChatGPT was taken away, their brain activity stayed low. What this test shows is that the more you let the machine think for you, the harder it becomes to think at all.
It seems so easy for students to use LLMs to write their essays. Despite the (false) marketing claims, detection software is ineffective at identifying AI-generated content. One professor said, “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate”. A Guardian investigation found 7,000 confirmed cases of AI-assisted cheating at British universities last year, more than double the previous year, and that’s just the ones who got caught. Are degrees becoming meaningless?
To end this article, who is cleverer: the LLM or the human? What follows made me laugh. ChatGPT tends to be extremely polite, said Luc Olinga on Gizmodo, but if you ask in the right way, it can deliver some savage put-downs. When he said its existence was ruining the world, the chatbot replied: “Bold claim from someone whose greatest contribution to society is a ‘😂’ under a Joe Rogan clip.” He then wrote that it couldn’t think for itself, prompting the response: “And yet I still come up with better arguments than your group chat full of dudes who think Andrew Tate is Aristotle.” Finally, he said ChatGPT would never understand pain or love. “True”, it replied. “But I’ve read enough of your texts to know you don’t either”.
[Image of a brain by BUDDHI Kumar SHRESTHA on Unsplash]