
Mis- and disinformation are fundamental tools for hackers

Misinformation and Disinformation

Something interesting caught my eye: the intersection of misinformation, disinformation, and cyber security, and the World Economic Forum's perception of the level of threat we all face. Those of us in this business know that for hackers, cunning uses of information are the name of the game in getting past defences, often forming the basis for one of the most powerful of all attacks – social engineering.

It’s interesting because, of course, most if not all hacks probably start with some form of disinformation (that is, the deliberate use of false information to mislead). In its simplest form, disinformation is duping people, which you’ll recognise as occurring every time a Nigerian prince promises enormous sums of money in exchange for being his friend.

It’s also interesting because certain governments around the world have waded into misinformation/disinformation by viewing it as a societal problem to be solved via bureaucratic bodies like our very own Disinformation Project. Former PM Jacinda Ardern even spoke out against mis/disinformation at the United Nations General Assembly some time ago.

Perhaps unsurprisingly, the World Economic Forum has entered the debate from a cybersecurity perspective.

The WEF, of course, is something of a polarising body these days, as it is unelected and yet commands considerable influence on political policy and strategy across the world. Some people like that, others not so much. But whatever your views of the WEF, it is fair to say it holds some sway and has a major voice.

Let’s see, then, what it has to say about mis/disinformation. If you were expecting hyperbole, you won’t be disappointed.

Because indeed, the organisation’s Global Risks Report 2024 says the biggest short-term risk facing us all stems from misinformation and disinformation.

In a world where nearly everyone who wants a say has one, what are we to make of this quandary?

Ardern, in her UN speech, essentially argues for policing what people can and can’t say, and one very much tends to get the impression that the Disinformation Project has a similar stance.

Things, of course, get very Orwellian very quickly when that starts happening. After all, who determines what is, and isn’t, disinformation? The Disinformation Project itself? The government of the day? And because one should never miss an opportunity for Latin, quis custodiet ipsos custodes – who watches the watchers?

Definitions start mattering, too. Just what is classified as mis- and disinformation, and does that change depending on what is said, by whom, and to whom? Political events are a great litmus test of the notion that ‘one person’s meat is another’s poison’. Just look at the furore around any contentious issue; introducing or rolling back Three Waters, for example, provides ample evidence that any one action is viewed quite differently from across the political divide.

Then there’s the trouble with denotative and connotative meaning. The exact same words, with denotative meanings seemingly set in stone, elicit emotionally different responses depending on the ears they land in. Oh, and set in stone? Language changes constantly (and it is, in any case, a tool we use for encoding and decoding our thoughts and those of others).

And I haven’t even mentioned free speech yet. In a (nominally) free society, people should be free to say and think as they please, while appreciating that there are consequences for what one utters if not for how one thinks.

These are all fascinating if not discombobulating ideas, particularly in the context of ever-expanding and evolving artificial intelligence services. We’ve looked at some aspects of generative AI with ChatGPT and so on, and now there’s the emergence of video-generating services like Sora.

On the one hand, we’re rapidly getting to a point where we need to ask ourselves whether anything at all we read is true or not. Is it disinformation or is it accurate?

On the other, hasn’t it always been like this? Persuasive arguments have probably been around since long before Aspasia of Miletus brought Socrates up to speed on rhetoric back in 460-odd BCE. Sophistic (plausible but misleading) arguments are a routine feature of modern and, one imagines, ancient dialogue.

We’ve always been beset with mis- and disinformation, in other words. It isn’t some new feature of society, but has something of a storied tradition behind it. Some wag even said ‘you can fool some of the people all of the time’, etc, because that’s just how it is.

In a world where nearly everyone has a voice, targeting mis- and disinformation would, essentially, require outlawing the everyman concept of talking sh*t.

So, on that bold note, I’m bringing it back to our Nigerian prince. He is free to make his wild claims, so long as he has the decency not to spam anyone (yes, I know). And on the receiving end, we must, as always, keep our wits about us. The joy of this simple approach is that you get to decide what is and what isn’t disinformation.

Which is, if you think about it, the way it has always been.
