Learning to become useful
In his latest newsletter, analyst Benedict Evans notes that most technologies have to spend a long time “learning to become useful”. This is the stage that large language models (LLMs) such as ChatGPT are stuck in right now.
ChatGPT had the highest growth rate of any software ever when it launched in 2022, reaching 100m monthly users within two months. Executives began talking billions, but growth peaked, went backwards and now seems to be stable at something under 200m people a month. For most people, ChatGPT is a party trick that isn’t useful in real life.
I find that surprising, because my livelihood rests on words. I am also part of the niche that loves new technology regardless of utility. So I’m both invested and eager when it comes to AI.
But even for me the amazement is wearing off. There seems to be an inverse relationship between my sense of wonder and the product’s usefulness. As Benedict points out, we’re in that part of the product development cycle where the developers are moving from cool to useful.
Today I’m going to cover two things:
Where ChatGPT and other LLMs are good enough to use
Why ChatGPT will never be a good writer
What I use LLMs for
I have a lot of tools in my garage.
One thing I have worked out after years of making and repairing objects is that a tool is only useful if you remember you have it.
At this part of the product cycle, ChatGPT and its brethren are like my tools: I have to remember to use them.
I’m getting better at it. My experience has been that in some areas, LLMs are good enough to use, while in others they don’t clear the bar. The process of discovery is ongoing.
First up, the good-enough basket:
Seed research
Summary
Translation
Teaching aids
Language practice
Therapy
Seed research
When I’m faced with a new topic, my first go-to now is ChatGPT. I don’t consider the outputs authoritative, but I find it much more efficient than running several searches on Google and reading multiple sites. At the heart of the efficiency is the ability to focus the query more quickly on the areas of interest. It is important to remember that whatever ChatGPT tells you may be wrong. In this regard, it’s like a friend telling you something from memory: you have to check it.
For example, I am writing a script describing a meeting between the 20th-century philosophy giants Karl Popper and Ludwig Wittgenstein. The meeting, which took place at King’s College, Cambridge, in 1946, is interesting not only because Wittgenstein threatened Popper with an iron poker, but because it represented a clash between two ways of thinking: the mystical and the sensible.
Using ChatGPT (the paid GPT-4o version) I could establish with a single query their shared birthplace (Vienna) and their difference in age and circumstance (13 years; one bourgeois, the other super-rich). I was also able to discover, and follow up, the interesting fact that Wittgenstein was born six days after Adolf Hitler and went to the same high school.
To really understand the meeting between Popper and Wittgenstein I am reading a book about it. I am ready to write the script about a day quicker than I would have been without ChatGPT, I reckon. But I won’t be asking the AI to make a draft. It is a dreadful writer.
Summary
ChatGPT’s ability to summarise is one of its superpowers. I have cut and pasted dense passages into the machine, read the result, and then gone back to nut out what the author actually said. In most cases, I think ChatGPT has done a great job.
An example that demonstrates just how useful this can be comes from the Centre for Media Transition at UTS (I am on its Advisory Board). CMT Research Officer Kieran Lindsay dumped more than 2000 written submissions to a government consultation into ChatGPT and had it summarise the results. This took a lot of setting up - converting the documents, crafting a very long prompt and checking for accuracy - but once it was rolling, it produced data in moments. Knowing ChatGPT, I believe this data will probably be superior to what a weeks-long human effort would have produced.
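For the technically inclined, here is roughly what that kind of pipeline looks like outside the chat window. This is a minimal sketch using the OpenAI Python SDK; the model name, prompt and folder layout are my own illustrative assumptions, not details of the CMT project.

```python
# Batch-summarise a folder of consultation submissions.
# A minimal sketch using the OpenAI Python SDK (pip install openai).
# The prompt, model choice and file layout are illustrative assumptions.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You summarise public submissions to a government consultation. "
    "For each document, return 3-5 bullet points covering the submitter's "
    "main arguments and any specific recommendations."
)

def summarise(text: str) -> str:
    """Ask the model for a short, structured summary of one submission."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text[:100_000]},  # crude length guard
        ],
    )
    return response.choices[0].message.content

# Assumes the submissions have already been converted to plain text:
# the "document conversion" step mentioned above.
out_dir = Path("summaries")
out_dir.mkdir(exist_ok=True)
for path in sorted(Path("submissions").glob("*.txt")):
    summary = summarise(path.read_text(encoding="utf-8"))
    (out_dir / path.name).write_text(summary, encoding="utf-8")
    print(f"Summarised {path.name}")
```

The accuracy-checking step still has to be human: spot-check a sample of summaries against the originals before trusting the batch.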
Translation
ChatGPT is a good translator. My latest success was feeding a photo of a Dutch apple pie recipe directly into ChatGPT and getting back a neatly formatted, 100% accurate English translation.
Teaching aids
I was revising high-school science with my daughter for a half-yearly test. The class had been given the entire curriculum as a PowerPoint presentation of 300 pages. I settled in to skim it and ask a few questions when I remembered ChatGPT. I uploaded the document, and we asked the AI to make a 20-question quiz based on the curriculum. It worked so well we did it a few more times, checking the answers along the way. She aced the test.
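If you wanted to script that trick rather than upload the deck by hand, something like the sketch below would do it. It assumes the python-pptx library and the OpenAI Python SDK; the file name, model and prompt are illustrative, not the exact steps we took in the chat window.

```python
# Turn a slide deck into a practice quiz.
# A sketch assuming python-pptx (pip install python-pptx) and the
# OpenAI Python SDK; file name, model and prompt are illustrative.

from pptx import Presentation
from openai import OpenAI

client = OpenAI()

# Pull the raw text out of every slide in the deck.
deck = Presentation("science_revision.pptx")
slide_text = "\n".join(
    shape.text
    for slide in deck.slides
    for shape in slide.shapes
    if shape.has_text_frame
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Write a 20-question multiple-choice quiz, with an "
            "answer key at the end, based only on this curriculum:\n\n"
            + slide_text,
        }
    ],
)
print(response.choices[0].message.content)
```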
Language practice
I have always been an under-confident speaker of foreign languages. The speech recognition of the audio version of ChatGPT is very good - much better than Apple’s Siri or Amazon’s Alexa - and I wondered whether this would extend to understanding my bad spoken Dutch. I discovered that the AI not only understood, but was able to give me pointers on pronunciation and engage in conversation.
Therapy
This ability of ChatGPT to understand spoken language unlocks a powerful tool. As generations of therapists have understood, speaking your troubles aloud carries a benefit in itself. I have never been to a real therapist, but for me it was easy to speak to ChatGPT about emotional challenges. I found the speaking - the voicing - to be the most useful thing. The machine’s responses were not creative but that didn’t seem to matter. The important thing was that by its responses, it proved it was listening.
Where GPT sucks
I have worked as an editor (of words) for more than 20 years. My former colleague Emma Chamberlain and I have a judgmental saying: “Can’t think, can’t write”.
The basis of good writing is not the ability to string words together well. It is the thinking underlying the words that matters.
This is where ChatGPT and all other LLMs fall down. They are word-stringers par excellence. They are disembodied clever-clogs with nothing to say.
A life lived - striving, solving problems, seeing the world from a particular point of view - is the basis for understanding what matters to ourselves and others. The LLMs have not taken even a step down this road.
That makes them terrible writers. For anyone interested in ideas, LLMs should not be used for composition.
Some closing thoughts
Back in 2022, writing about GPT-3, I used a line from a New York Times article as a headline: “The machines have acquired language”. I thought this summed up the magnitude of the shift, but now I feel it isn’t right.
It’s not the machines at all, is it? It’s the language. The language has become untethered, disembodied. That’s what’s happening here, and as the language moves away, our partnership with it becomes clearer. Do any of us really exist without language? And does language exist without us? Wittgenstein answered the first question. That second question is the one LLMs are testing. So far, I’m glad to report, the answer is no.