AI and writing: the value of the hard way
Published by Paul Brasington in Good work · Friday 10 Feb 2023
Tags: AI, ChatGPT, writing, copywriting
The easy availability of AI-powered writing is going to change how we do many things. If we want to ensure this change is for the better we’re going to need to be clear about what we value and what we’re trying to achieve.
In a lecture I heard as an English literature undergraduate the late poet Geoffrey Hill commended the American poet and critic John Crowe Ransom for his affirmation that we should “go the bloody hard way”. I may be misremembering this because it seems unlikely that an American would have said “bloody”, or maybe Hill just threw in the emphasis. Ironically, given what I’m about to write, if I had the new AI-powered Bing search engine I could probably find out, but in any event I do know that Hill was talking about the correlation between effort and quality, and that could not be more apposite for the moment we’ve reached.
The Luddites have a bad press. We’ve come to use the word broadly to mean anyone resistant to technology. But the Luddites weren’t worried about technology itself (such a concept barely existed). They saw correctly that the introduction of steam-powered weaving machines threatened their livelihoods, and hit back in the only way they could. I imagine they knew full well that the gesture would not save their work, but their anger and desperation needed expression.
The new steam
Artificial intelligence (AI) is the new steam, and it’s set to change how writing gets used in business and many other contexts forever. A couple of months ago I wrote about dall-e and AI-generated images, suggesting that the same thing was about to happen with writing, and since then dall-e’s sister tool ChatGPT has hit the headlines. Suddenly everyone’s talking about it (at least in the websites I dwell among), while digital marketers and other media owners eye its possibilities greedily. Microsoft has announced that ChatGPT will be part of its transformed Bing search engine. Google, which has been working on its own version for some time, has had to bring forward its launch so as not to be seen to be left behind. The launch didn’t go too well but certainly chatbot conversations look like the future of search.
Like the Luddites I know what I’ve mostly done for a living (writing for business) is changing dramatically because of AI technology, even if it’s not exactly going away. That’s painful because I’ve long been interested in technology and even been excited by it. I’m writing this on hardware and software that was no more than a dream when I started out in work, and it’s enabled me to work freelance from a home office very happily. I was an early adopter of the internet and have been writing for (and thinking about) the different demands of digital and analogue media more or less since it needed thought.
Aside from search, AI-powered chatbots appear to offer fast and easy article or essay generation: just type in some keywords and away you go. The quality is already good enough to push educational establishments into worrying about their assessment mechanisms: what’s the point of asking a student to go away and write an essay if what comes back is the result of a few key presses?
Learning and thinking
Critically, writing an essay is not just about the end result. It’s also how a student learns, both in terms of relevant facts/sources and research techniques. In some ways of course the internet has already transformed that process, in theory at least allowing anyone with a browser to trawl the world’s data and see the search results. In reality students still need to learn the critical skill of how to judge and use that information.
There’s some analogy with the use of calculators. It’s still useful to teach students how to multiply decimal fractions (say), so they have some insight into what’s going on. Once that knowledge has been acquired few of us would turn to paper when a calculator is available. But the analogy is limited: something like multiplication is only a technique in the wider work of problem solving. When it comes to writing an essay a chatbot appears to make the problem go away.
Writing matters because it’s the pre-eminent way we learn to think. Reading is a critical stimulus to our thinking, but it’s only in writing that we are challenged by the need to scrutinise and organise our thoughts. I realise there are other modes of thinking (visual, musical, mathematical for instance) but verbal thinking offers the consolations and challenges of immediately apparent coherence. Words are in a self-evident way the language of thought, and for most of us at least getting the words out of our heads and into some written medium is how we can best understand what we’re thinking, see its limitations and develop it.
One of the most striking effects of information technology has been to push everyone to become a writer. We communicate mostly through email and messaging; no one has a secretary in the older sense of the role; subeditors have all but disappeared from journalism. Some businesses think they can write for customers or other audiences. Perhaps some can, but there has been a visible decline in quality control across business and other communication. Technology provides the means to get your writing out there, but it really doesn’t provide the skills.
Useful robots
It’s trying. Microsoft Word offers increasingly sophisticated tools to spot grammar and stylistic mistakes. Microsoft is promising to use its investment in OpenAI (the developer of dall-e and ChatGPT) to take this to another level. We’ll see. As it stands I’ve found Word’s grammatical and stylistic advice mostly useless (it’s too dumb to understand context, and you can’t begin to communicate effectively without understanding context), and occasionally plain wrong. I’ve been using Word since it was a DOS underdog to WordPerfect, but I’ve long settled on less pretentious or intrusive alternatives.
It’s conceivable that chatbots, or AI more generally, could be usefully confined to contexts where people expect to interact with a robot, search being a prime example. Such confinement would require discipline and restraint from the tech community. That restraint has not been a hallmark of previous behaviour so I suspect it could also require legislation with punitive teeth. This wouldn’t stop rogue operators of course, and we’re already awash with bot-driven garbage (because my email address is necessarily visible on my website I even get spam from myself, trying to extort money from me for my non-existent porn watching habits, the worst of it being that others are no doubt getting the same message with my name attached).
But it might help a bit.
The looming proliferation of garbage is a threat in itself. Here’s a good piece about it: https://maggieappleton.com/ai-dark-forest
But even leaving aside the likely damage to the Internet as the repository of the world’s knowledge, the act of writing is where we learn what we’re really thinking, what needs to be said, whether that’s a research report or a poem.
Creating new knowledge
It’s illuminating to think about reports here, because it highlights how there are two very different types of research. One involves a synthesis of existing reported knowledge, a trawling of the obviously relevant, which is all an AI tool can do. It has a value, because it’s going to save some time if that kind of synthesis is all you need.
But more interesting and important is the research that seeks new knowledge, which by definition will not be available to an AI trawl, because this kind of work means generating, codifying and analysing data that did not exist before. AI might be able to accelerate elements of this work by collating and filtering some previously analysed material, but if we relied on it alone we would literally learn nothing new.
As Appleton notes in her strategies for resisting chatbot-generated text:
“Language models regurgitate text from across the web, which some humans read and recycle into ‘original creations’, which then become fodder to train other language models, and around and around we go recycling generic ideas and arguments and tropes and ways of thinking.”
More than this, the skills required to create new knowledge depend on understanding and practising the research and analytical work together. If we bypass the work, like an unused muscle our abilities will be diminished.
As it stands an AI tool cannot generate new ideas, or at least, not unless/until AI evolves to the point where we could sensibly say it has consciousness. That would bring its own problems, and as far as I can tell we’re nowhere near it yet.
AI and art
On the other hand I’m not particularly concerned about an AI’s ability to write novels. Publishers might jump on the opportunity to create “product” without the burdens and costs of (sometimes) wayward authors, and I imagine this product will find an audience. There are no important differences between an AI text and an airport novel: both would be created to entertain at a very basic level, and not much else, and that’s fine. Narratives written to a formula (whether for books or films) might as well be done by a machine, and indeed the machines might be better at the task. (There are some thorny issues around copyright here but that’s beyond the scope of this piece.)
For creative writing though, the concept of intention in the creator is important in how our responses develop. This is itself an act of imagination in the reader, and of course we could be fooled into imagining that an AI is a real person. But we would legitimately feel differently about the writing if we later found out it wasn’t true. The notion that we’re somehow autonomous in our response to a piece of art in any medium is solipsistic, and under-describes what’s going on. It doesn’t matter that the author or creator will for most of us be an imagined consciousness (unless we happen to know them personally); if we know some facts about the creator’s life this might further colour our responses, and I’d argue that this too is legitimate, because when we respond to art we place ourselves in a relationship to its totality, including the circumstances of its creation. We want to imagine intentions, and we want to be right about them. AI has no intentions.
It’s possible that with capitalism being what it is we will see a flood of AI-generated material in areas previously reserved largely for creative work, and this will be frustrating to watch as well as making life less pleasant (we have enough noise). There may be significant consequences for publishing models, and for the options available to writers wanting to be paid for their work. This is happening already and I suspect it will take some time to get us to a better place (to take an optimistic view of the possible outcome).
The industrial scale
Few of us wear hand-woven fabrics these days. Those fabrics are still around but the Luddites were right about the future. The analogy is not exact, because writing features in many different elements of our lives, with some of those elements being far more important than others. I can see many ways in which AI-generated text could be useful, as a help towards getting something else done. This could typically be the case with a lot of business-related writing, and that might not be a problem.
But because we’re at a turning point we have the power to make some choices, and it’s important we use that power wisely.
We shouldn’t overstate the threat. AI-generated text cannot work in the many contexts where human imagination and insight matter, whether that be in a novel or the development of fresh insight. As I noted in my previous blog about dall-e, even in business writing there will be a perceptible difference between AI-generated writing and premium human copy. It will be interesting to see how much business is prepared to pay for that premium.
But we also need to be very careful that the availability of AI-powered shortcuts doesn’t damage our ability to develop new insight in the future. I’m fearful too that being able to generate mediocre but adequate copy might, rather than sharpening our appreciation of the alternative, flatten expectations. Faced with a deluge of this stuff we’ll become lazy readers as well as lazy non-writers.
That’s because AI is to the craft of writing what industrial agriculture is to good farming. Industrial farming offers something cheap and adequate at some level, but its indirect costs are real and potentially catastrophic. (Unless we find ways of sustainably powering the server farms on which large scale AI tools depend those tools will be contributing directly to that catastrophe.)
Energy aside, AI may be less obviously catastrophic. But the dangers are real, and already apparent enough to inform the choices we need to make right now. There are good reasons why writing doesn’t need to go the way of weaving, but it will take deliberate interventions, a forceful assertion of what we value, to ensure those reasons prevail.