Artificial intelligence meets natural stupidity: Trump, ChatGPT and the corruption of meaning
Between them, AI language models and Donald Trump offer a challenge to the ways we make meaning in language, but to recognise the problem is to begin the fight back.
I said to Mary, “Look, it’s five past eleven already.”
Mary just smiled.
So what’s going on here?
It was actually 9.45. I was making a mild joke about the fact that the clock in my sister’s kitchen is stuck at 11.05. Without my saying anything else, Mary understood this, and so smiled.
You had to be there, and that’s the point: there’s no way a Large Language Model (LLM, the technology behind AI-generated text, most notably ChatGPT) could have made any sense of this context, or produced an “intelligent” response to it. That’s because the communication at work is not simply a matter of words.
But words are all that LLMs have to work with. As the hype from the AI boosters is forced to yield to a wider understanding of what this technology actually depends on, we can hope we’re also coming to a clearer sense of its limits, as well as some of its potential dangers.
Never mind
There’s a problem with the term “artificial intelligence”, because what’s happening might be artificial, but it is certainly not intelligent. As the computational linguist Emily Bender has observed, we really, really need to stop imagining some kind of mind functioning “behind” these tools.
LLMs work by a highly sophisticated form of word association, but where word association in psychology is, technically, a tool for revealing mental processes, or at least ways of thinking, the processes in AI are no more than algorithmic. There is no inner mind to reveal, because LLMs work, mindlessly, through pattern recognition. In no sense do they understand what they are doing. Their quality may improve as they work through more and more word patterns, refining how they quantify and calculate probabilities, but this refinement can’t make them any more intelligent; it’s not a process that can yield insight and understanding.
This is a limitation because the ways we communicate through language cannot be reduced to a process of simple verbal input and effective output. Context is critical, and LLMs cannot understand context. They can respond to some extent to a given context by drawing from other texts that might seem similar on some level, and this pattern matching can produce relevant results, but there is no element of understanding.
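To make the point concrete, here is a deliberately toy sketch of word-association text generation, written in Python. It is emphatically not how ChatGPT works internally (real LLMs use neural networks trained over vast numbers of tokens, not a little table of word counts, and the corpus here is invented for illustration), but the principle it shows is the one at issue: the next word is chosen from statistical patterns in previous text, and there is no understanding anywhere in the loop.

from collections import Counter, defaultdict
import random

# A toy "language model": count which word follows which in some text,
# then generate by sampling from those counts. Pattern frequency is the
# only thing at work; nothing here knows what a clock or a kitchen is.

corpus = "the clock in the kitchen is stuck the clock is slow the kitchen is warm".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # pick the next word in proportion to how often it followed this one
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))   # one possible output: "the clock is stuck the kitchen is warm"

At the scale real systems operate at, the output of this kind of process can look plausible, even fluent, which is exactly why it is so easy to imagine a mind behind it.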
The non-digital world
I’ve always found it amazing that the binary functioning of digital technology can produce such complex and diverse outputs, that on/off or yes/no can produce graphic visual art, for instance. But the real world is mostly analogue rather than digital, and things get messy where the two meet. Vinyl records have enjoyed a resurgence partly because their analogue sound is qualitatively different from digital recordings. I still have all my old vinyl and value it; personally I think the superiority of one over the other is often arguable, but you can certainly hear a difference. That said, I’m also a working musician, and I know that sometimes it’s the spaces in between the twelve semitones of Western scales that allow you to bring true expressiveness to a piece, something particularly relevant to my instrument (a violin). This kind of expressiveness is most easily and directly an attribute of the analogue domain.
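By way of a small illustration of that gap (a sketch with invented numbers, not a claim about any particular recording technology): in equal temperament each semitone is a fixed frequency ratio, so a pitch a violinist leans a fraction sharp has no place on the twelve-step grid and simply gets snapped to the nearest step.

def semitone_to_hz(n: float) -> float:
    # Equal-temperament pitch: note number 69 is A4 at 440 Hz,
    # and each semitone multiplies the frequency by 2 ** (1/12).
    return 440.0 * 2 ** ((n - 69) / 12)

def quantise(n: float) -> int:
    # Snap a continuous pitch to the nearest of the twelve fixed semitones.
    return round(n)

expressive_note = 69.35   # roughly a third of a semitone sharp of A4, as a player might lean
print(semitone_to_hz(expressive_note))             # about 449 Hz: the note actually played
print(semitone_to_hz(quantise(expressive_note)))   # 440.0 Hz: what the twelve-step grid keeps

The loss shown here is crude compared with what real digital audio does, but the underlying point stands: a grid of discrete steps is not the same thing as the continuous space it samples.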
There’s an important analogy here to the ways meaning works in language.
The article I referenced above in relation to Emily Bender discusses a continuing academic dispute.
“Bender’s current nemesis is Christopher Manning, a computational linguist who believes language doesn’t need to refer to anything outside itself. Manning is a professor of machine learning, linguistics, and computer science at Stanford.”
Manning is an evangelist for the possibilities of computational language.
“Bender and Manning’s biggest disagreement is over how meaning is created …. Until recently, philosophers and linguists alike agreed with Bender’s take: Referents, actual things and ideas in the world, like coconuts and heartbreak, are needed to produce meaning. This refers to that. Manning now sees this idea as antiquated, the ‘sort of standard 20th-century philosophy-of-language position.’ ”
The fact that an AI tool can offer something that appears meaningful by drawing only on its analysis of other words is why he feels he can dismiss the “standard 20th-century philosophy-of-language position.”
I’m not a linguist or a philosopher, but I know enough about 20th-century philosophy of language to see that Manning here appears to misunderstand it. The elephant in the room is Wittgenstein, who, within the 20th century, himself debunked the “standard philosophy-of-language” positions (about how language relates to its objects) by developing the notion of language as a game: a game with rules that can change, but which depends on a shared understanding of those rules. Critically, what emerges is the importance of shared experience in the ability of language to mean anything (hence “if a lion could talk we could not understand it”).
Manning, it seems, would happily reduce the notion of that game to some glorified version of noughts and crosses, or (to be generous) chess, when it’s much more like any form of football. The parameters of its power to mean anything are multi-dimensional.
For instance, intention matters to meaning: a thief offering to help an old man with his shopping will use the same form of words as a benefactor, but the meaning of those words is seriously different. You need a non-linguistic context to understand what’s actually going on.
Why does any of this matter? There are AI apologists out there who think that if it looks like art it is art, and that if the neural networks used by AI are like the way the brain works then there’s genuine intelligence at work (which is more than can be said for their understanding of the limits of the brain analogy); they would see objections to the new wave of AI tools as the usual moral panic in the face of a radical new technology.
I don’t see a moral panic. I see some understandable concern, and some necessary, useful questions, useful because they urge us to think harder about how meaning works in our lives, and what we find meaningful.
Referring to himself
All this is happening in a political context where language has been disconnected from the idea that what we mean could be accountable to reality.
In this there’s a perverse continuum between Manning’s idea that language need only refer to itself, and Donald Trump’s apparent belief that his language can mean whatever he wants it to mean (that’s not the way he would see it, but it is the consequence of disconnecting what you say from what is evidently true).
The news that he’s been indicted has brought predictable howls of unhinged outrage from Trump himself, as well as more disturbing support from his Republican colleagues. Calling someone to account for an apparent crime should not be a partisan issue: it’s a reassertion of the role of the justice system in weighing evidence and coming to a conclusion, with consequences. On the face of it, the evidence strongly suggests that Trump did indeed falsify his business records, the charge at the heart of the indictment. That’s for a court to judge, just as we can only hope it will come to judge the much more serious accusation that Trump incited the Jan 6 riots.
A reassertion: it matters not just because Trump’s politics have always been reality-free, but because in such a climate it seemed possible that behaviours which would have brought down anyone in previous administrations were going to be given a free pass. The system itself seemed broken.
It didn’t start with Trump, of course. Politicians have long been willing to indulge a loose relationship to facts in order to promote their agendas. But there was still a relationship, and for the most part, if they were caught in a lie, they could be expected to resign. The second Gulf War was arguably a turning point, both in the US and the UK, and the lesson was learned here in the UK by Blair’s Conservative successor, David Cameron, who lied shamelessly about who bore responsibility for the 2008 financial crash, a tactic which proved all too successful.
Then (ironically for Cameron) there was Brexit, where the Leave campaign was distinguished by an almost total absence of truth.
Against personal truth
Economic realities are rarely simple, and in a media climate where nuance has become increasingly impossible, so too has any chance of a grounded political discussion of economics. This is most immediately a problem for politicians themselves (they need to change the terms of the argument in more inspiring ways than currently seem available), but this inability to deal with things as they actually are has bled into the culture, giving us Trump and Boris Johnson, and, beyond them, has shaped social media as an amplifier for cognitive dysfunction. This is the era of fake news, of “personal” truth, an epistemic crisis.
What’s to be done? The danger of the idea of language as a game is that it can tempt us into feeling that we can make our own rules. It’s important to see that this is the opposite of Wittgenstein’s idea: the “game” is a social phenomenon and depends on a shared understanding of what each referent means. It’s true that this also means such referents are not exactly absolute, and that may be disturbing to those who crave the simplicity of absolutes. But these rules are not arbitrary, and cannot be unilaterally redefined. They are the product of shared experience: if someone points at a tree and says, “look, there’s a water tower”, you know you have a problem.
That example might sound insane, but it really is little different from the likes of the British home secretary suggesting that the extended queues in Dover have nothing to do with Brexit.
The idea that we can have some kind of personal truth is as much a cultural issue as it is a political problem, with complex causes. I’d guess that the collapse of old absolutes, from the false certainties of neoliberalism to the power of Russia (or, for that matter, the apparently persistent hankering among the British Right for the “glories” of empire), has been influential: people don’t usually let go willingly of their old certainties, however much they might be contradicted by the pressures of reality, and too often retreat into “alternative realities” in which they can feel more comfortable. Put that alongside the internet’s potency for spreading misinformation and the result is not only Trump, but the all-too-widespread adoption of frankly crazed conspiracy theories. It’s also a reason why politicians seem happy to sideline the environmental crisis that threatens every aspect of the ways we live.
Given the mindlessness of LLM-based tools, there’s a real danger that software like ChatGPT will only add to the problem.
What’s to be done? Addressing this cultural malaise will take time, education and, most immediately, legislative interventions. We urgently need legal oversight to limit the societal dangers of the technology. Getting that legislation right and making it effective is not going to be easy, but that’s no reason not to do it: it means bringing together people who know what they’re talking about, excluding commercial vested interests, and, not least, ensuring that corporations are held fully responsible for any harm that results.
Among other things, though it might seem a small thing, it’s probably important to legislate against AIs simulating the gestures of consciousness, so that they no longer respond with phrases like “I’m sorry to hear that”. Brand owners might protest, but what’s at stake here is a necessary transparency in the ways we’re using technology, and the avoidance of a further level of conceptual confusion about what it is to have a mind.
You’re not alone
Solipsism, the idea that we can know nothing for certain beyond our own consciousness, is an old absolutist error: a leap from the realisation that one kind of certainty is not possible to the idea that no certainty is possible (if that were true, then the certainty of solipsism would itself be doubtful). We do come up against our conceptual limits when we try to speak of mind, of what consciousness is, but here, interestingly, the development of LLMs is enlightening: we can see that what an LLM does is nothing like what a mind does.
Language and its power to carry meaning is intrinsically a cultural phenomenon, as well as something distinctly human. It is possible to view language as a tool, just as it is possible to see a painting as decoration, or as evidence of pigment chemistry, but to do so is to miss the point, to make a category mistake from the outset. I suspect that this is why more thoughtful commentators see a threat in the LLM concept, or at least want to demand that we understand it properly, and are cautious about its apparent usefulness.
It’s because AI writing offers something shallow where we could reasonably hope for depth, a simulation of meaning which makes a mockery of how meaning matters in our lives.
It’s true that Trump and other politicians already make a mockery of meaning, but when they do so we can call them to account, can insist on a return to normative ideas of justice and shared reality. It is heartening and important that this appears to be happening at the moment, and it’s important that it’s carried through, that Trump and his like are denied their reality-denial. We need the justice system to be seen to be effective in doing this, and if it fails to do so we really will sink deeper into our epistemic crisis.
Meanwhile, and in the short term at least, it seems likely that corporations will jump on the power of LLMs to generate content-free content, which will have the immediate effect of ensuring that the internet becomes even more plagued by garbage. It’s not a desirable result. That doesn’t mean ChatGPT and other tools can’t be useful in helping individuals understand and express what they want to say, showing them possible options and tonal nuances, and if they make us more thoughtful about how language works, and how we can preserve our humanity in the ways we deploy these language tools, then they will already have done us a service.
Because this, more than ever, is the moment we need to resist the descent into babble, into the emptying of meaning in the words we use. Writing is hard, but that’s exactly why it’s valuable: it’s one of the best ways we have of coming to know what it is that we actually mean. It’s an ability that, on current showing, LLMs will never have, just as Donald Trump will never understand what went wrong.