
In predictable fashion, the latest iteration of ‘artificial intelligence’ has taken the world by storm, generated a tidal wave of hype, and led to the usual trotting out of scary stories of mass unemployment and social chaos. It’s not hard to see why: ask ChatGPT to write a press release, program a web page, deliver an essay on why Karl Marx is more popular than Adam Smith…and it will do it.

Not only will it do it, it will do it well enough that the output will stun and awe you.

Unless, of course, you are a philosophy and economics buff. If so, the result is a lot less stun and awe, and a lot more like reviewing a teenager's homework.

Similarly, ask for an essay on behavioural psychology and you might totally fall for it – unless, that is, your name is B F Skinner.

The reason the output is so convincing to the man in the street is that ChatGPT is language processing software. It knows how different words relate to each other, and accurately assembles ‘the right ones’ in grammatically correct fashion. It draws on masses of sample text and data, producing highly plausible output.
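That 'assembling the right words' idea can be sketched in a few lines. This is a deliberately crude toy – real models use neural networks over subword tokens, not raw word counts, and the tiny corpus below is invented for illustration – but the principle of 'pick a statistically likely next word' is the same:

```python
from collections import Counter, defaultdict

# Invented toy corpus -- stands in for the masses of sample text
# a real model is trained on.
corpus = (
    "the router connects the network . "
    "the router routes packets across the network . "
    "the network carries packets ."
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most common follower of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'network' -- the commonest follower of 'the'
```

No understanding of routers or networks anywhere in sight; just counting. Scale the counting up by a few billion parameters and you get prose that is highly plausible – for exactly the same reason, and with exactly the same absence of 'knowing'.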

Plausible to the man in the street; less so to the specialist.

Despite the implications, and the uncanny valley shiver down your back that comes with a machine 'thinking', ChatGPT doesn't 'know' anything, nor does it 'think' worth a damn.

Ask it to write a press release on the new Cisco 880 router, and it does a pretty remarkable job; not perfect, but good enough for a basic document easily brought up to speed.

It can do that because it has seen enough press releases to recognise the structure. The Cisco 880's specifications are readily available, as is the name of the (likely) Cisco exec in charge.

But ChatGPT does not 'know' Cisco, or the Veep of engineering, or what a router is. It can produce the press release because, even though the Cisco 880 was discontinued years ago, there's a ton of information about it floating around on the internet. It writes the press release retroactively, which is of zero use for the launch of a new product.

Nor does it ‘know’ that B F Skinner’s great shame was the first name Burrhus. It doesn’t know evolution, philosophy, or material science. It can’t make decent jokes, probably.

Instead, what it knows is how words in these topics fit together. With a bit of processing power, that means ChatGPT can construct smart-sounding structures incorporating likely words and phrases, all arranged grammatically correctly (even if details like tense and voice are off).

Ask about philosophy, and it will produce an impressive-sounding result – but in the absence of deductive reasoning, inductive leaps, and coherent new points, ChatGPT isn't likely to advance human learning. This is why those who fear ChatGPT ruining education are missing the point: it might help those who go to Uni in search of a degree, but it won't help those who go to Uni in search of an education.

This is also why ChatGPT sometimes simply makes shit up: it is assembling language rather than reasoning and thinking. There’s a big difference.

Which brings us to mathematics. Maths uses a fair bit of natural language, along with a lot of weird symbols, equations and other notation. ChatGPT tackles maths like it does other questions: by writing sentences. Punch in the most famous equation of all, E=mc², and it takes some time before ChatGPT gives you the history of the equation.

Whack in a quadratic equation – solve for x: 6x² + 11x − 35 = 0 – and ChatGPT takes quite some time, running through all its working, while Google delivers a result in milliseconds.

This is because ChatGPT is writing maths sentences rather than doing mathematical reasoning. It is drawing on internet explainers, quadratic equation walkthroughs and so on, rather than performing calculations. As it is writing 'maths-sounding' sentences, it may well get it wrong. Some say it does get it wrong.
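To see why this is a solved problem for conventional software: the quadratic above yields to a few lines of deterministic code – the standard quadratic formula, no language modelling required – and the answer pops out instantly and correctly every time.

```python
import math

def solve_quadratic(a, b, c):
    """Solve ax^2 + bx + c = 0 via the quadratic formula (real roots only)."""
    disc = b * b - 4 * a * c  # discriminant decides how many real roots exist
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# 6x^2 + 11x - 35 = 0  ->  roots are 5/3 and -7/2
print(solve_quadratic(6, 11, -35))
```

This is, in essence, what Google and Wolfram Alpha are doing under the hood: executing arithmetic, not composing sentences about arithmetic.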

Maths is a litmus test for two reasons. One is that existing 'AI', like the Google example above or Wolfram Alpha, is very good at maths problems. The other is that it is far easier (for mathematicians, not writers) to spot problems in maths equations than it is to pick out problems in philosophical dissertations. Bullshit baffles brains, as they say, and likely-sounding babble like that emitted by Jaden Smith easily convinces the uninitiated of its veracity.

The same credulity that makes many of us see ChatGPT as a mate, a source of wisdom, and a benign helpful force, makes it a security threat. I've written before on how it can be a useful tool; now we're seeing examples emerge of people doing silly things like asking it to generate passwords, or uploading code containing authentication tokens. Meanwhile, the whole thing might not be secure at all.

So, while ChatGPT is undeniably impressive, it should be taken with a pinch of salt. Our own ignorance could be its greatest advantage – because output is only valuable if we trust it. And many of us will do just that because we don’t know any better.