Cybersecurity in the ChatGPT age

The artificial intelligence taking the world by storm is once again raising the eternal spectre of technology throwing people out of work. This fallacy has been debunked countless times since Adam Smith first pointed out that technical change increases productivity, real wages and employment while simultaneously lowering prices, widening markets and raising wealth and prosperity. Yet we are assured that this time it is different: certain doom awaits white-collar workers, including those in infosec, while emboldened and empowered hackers run amok.

(Adam Smith – no relation – was a Scottish economist and made this observation…oh, back in 1776. The famous culmination of fear of technological progress would come some 35 years later with the Luddites protesting factory automation).

If we prefer Adam Smith’s version, in which technological advances underpin prosperity, then ChatGPT promises to make a lot of things more efficient. It could well have produced a draft of this very article for me, after which I would merely have edited for accuracy and perhaps tone, getting the job done in a fraction of the time and therefore at a far lower cost. But it didn’t.

The point is that the real advantage of any tool is making the things we have to do faster, easier and cheaper. It’s a blunt analogy, but consider the humble shovel: compared with your bare hands, it digs a hole faster and with less effort. A computer – or more specifically, software on a computer – does much the same thing, even if it is rubbish at carving into the earth.

Any white-collar worker worth his salt, therefore, should be looking at ChatGPT not as a threat but as an opportunity. It writes code! It writes prose! It writes romantic poetry (this one is a minefield, even for an AI; my sense is we’ll soon see the first AI blush with shame).

A far better interpretation of what ChatGPT and any other AI coming down the line – Google has just announced Bard – can and can’t do is offered by SentinelOne’s Aleksandar Milenkoski. He treats the application as a tool rather than a threat, and one that can help the infosec industry. He provides 11 examples:

  • Learning how to use reverse engineering tools more effectively
  • Teaching yourself assembly language
  • Understanding how source code translates to disassembly (see the short illustration after this list)
  • Writing POC source code quickly
  • Translating between instruction sets
  • Comparing language or platform specific conventions
  • Analyzing code segments in malware samples
  • Identifying malicious activities in code
  • Speculating on function purposes and objectives
  • Understanding vulnerabilities and exploit code
  • Automating reverse engineering tasks
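
To make the third of those concrete – how source code translates to disassembly – here is a minimal, hypothetical illustration. The file name, function name and compiler invocation below are illustrative only, and the exact assembly you get back will vary with compiler, optimisation flags and calling convention; the point is simply that pairing source with its disassembly in a prompt is the kind of thing an AI assistant can help unpick.

    /* add.c – a deliberately tiny example function */
    int add_numbers(int a, int b)
    {
        return a + b;
    }

    /*
     * Compiled with something like `gcc -O2 -S -masm=intel add.c` on
     * x86-64 Linux, the function typically reduces to:
     *
     *   add_numbers:
     *       lea  eax, [rdi+rsi]   # sum of the two argument registers
     *       ret
     *
     * Asking an AI why the compiler chose `lea` over `add`, or what
     * `rdi` and `rsi` hold under the System V ABI, is exactly the sort
     * of question the list above has in mind.
     */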

Now, you can bet your bottom dollar that if the infosec industry is finding ways ChatGPT and applications like it can accelerate its work, there’s another crew doing precisely the same thing. Hackers are known for their ingenuity and, unconstrained by such inconveniences as morality, justice or honesty, are highly likely to be jumping right in. If ChatGPT can write half your malware for you, hey, that’s efficiency.

Did we mention certain doom at the beginning of this article? We sure did. But here’s the thing: there is no certain doom, not for the white-collar worker nor for the black-hat hacker. Neither will be thrown out of work and rendered obsolete by the emergence of an apparently intelligent computer.

That’s really the biggest revelation of Aleksandar’s blog, and it is hidden in the subtext. Among his observations are the various limitations of an AI, which some might say rather dilute the ‘intelligence’ part of the phrase. Intelligence is generally defined as ‘the ability to acquire and apply knowledge and skills’.

ChatGPT is pretty good at acquiring information, and it does so from public sources. This has limitations, as Aleksandar notes: ‘Even on topics that appear to be relatively well-established, ChatGPT’s output is very much ‘the thoughts of the crowd’, not ‘irrefutable facts’. Naturally, those two may often overlap, but it’s not necessarily always the case.’

As for applying knowledge, that’s something humans are uniquely good at. There’s a lot that will always be hard for a computer to learn: context, subtext, purpose and so on. As a security professional, I find it exciting to see new tools emerge: they empower us to do more and to help customers protect their data assets effectively. But, as always in an arms race, ‘the other side’ is just as eager to take up any new technique, tool or practice.

The real trick, therefore, isn’t about ChatGPT or AI; it is about staying current and taking advantage of what’s out there before someone else does. Not just in infosec, but in life generally.
