
Chatham House Cyber 2024 - will AI spur a dis-information revolution?

Chris Middleton, June 10, 2024
Summary:
The international affairs think tank delved into one of the most contentious, but important, issues of the day: can any of us trust what we read anymore?


We are often told that Artificial Intelligence (AI) will enable and increase the spread of misinformation – including via unchecked hallucinations – and active disinformation campaigns. In those scenarios, the worry is that we may lack the first-hand knowledge, experience, or context to know whether the information that an AI has given us is trustworthy and accurate, or an incomplete picture based on flawed or biased data. Or simply wrong.

Meanwhile, bad actors may use freely available tools to spread deliberate untruths, knowing that in the world of constant noise that results, a community could be made to turn against itself, or listen to an entirely new signal. The wisdom of crowds versus the madness of crowds – and in an election year for many countries.

But with human beings, from political leaders downwards, more than capable of spreading misinformation and disinformation themselves, is the technology dimension really as significant as commentators believe?

Panellists at last week’s Chatham House Cyber 2024 conference discussed the complex interplay between digitization, AI, democracy, education, ethics, and both misinformation and disinformation. Trust is perhaps the critical factor, they said: trusted data on the one hand, or misplaced trust in technology on the other.

Truth

Ben Strick is Investigations Director of an organization called the Centre for Information Resilience (CIR), a social enterprise and non-profit that aims to expose war crimes, and prevent human rights abuses, disinformation, and the persecution of women and minorities online. Put another way, its aim is to fight abuse with persistent, trustworthy data – with signal amidst the noise. He said:

A lot of what I say will revolve around community. [But that community] is not just in this room. It doesn't just speak English, and it doesn't just look like us. For example, we are able to message or receive messages from people in the jungles of Myanmar, which we do. Or in Sudan, or in ethnic communities in the Darfur region, which we do. All the way to a guy taking a selfie in the Canary Islands, who may be from a ‘black PR’ firm.

Holding up his phone, Strick explained:

This is a form of ‘ecocosm’. This isn't just a phone, it's a system designed to bring us all together. But it can also break us apart, thanks to some of the rapid advancements in technology.

On that point, Strick cited AI vendors that release “fast cars” instead of “safe cars”, referring to previous Cyber 2024 discussions about the return to the technology sector of the mantra ‘move fast and break things’. Plus the ongoing battle between the effective accelerationism (e/acc) faction among AI vendors and the global counter-movement to pause AI development. He continued:

At the CIR, we are focused on documenting war crimes, countering disinformation, and countering online harms against minority groups. So, that means a lot of our people don't look like me. We're not all white, in a suit, speaking English. Often, they come from countries they can't go back to – Sudan, Ethiopia, Myanmar, Palestine, and many from Ukraine, where we have an office as well. They're seeing the direct impact of this kind of problem. That might, for example, be a simple, innocent-looking tweet, but that tweet might be the subtle seed of a narrative that will eventually lead to a hate-speech campaign, or a mature campaign towards potential genocide, which we're seeing – for example – against Masalit communities in Darfur, and in other areas.

Strick shared some further examples of why – and where – the CIR seeks to help, noting that:

Female journalists and activists from Myanmar who are supporting pro-democratic values against the military regime are being trolled. AI is being used to generate deep fakes about them, which are sent to their families. This isn’t just online, they're being doxxed as well. Hackers are identifying where they live, where family members live, what their phone numbers are, and getting that content out, to the point where they're waking up with a group of men at their door saying, ‘We’re going to come in and rape you’. Or their children or parents are getting deep-fake pornography sent to them.

So, there's a level with this technology where we think, ‘ChatGPT is cool: mums and dads can use it to find their five-week marathon plan’. But AI can also be used by another group to generate a deep-fake image and undermine someone’s credibility. Or plant a seed to tarnish or undermine democratic values. And the canary in the coal mine is journalists. A lot of our work is about empowering journalists, and empowering survivors to tell stories about their community. And making sure those skills stay within that community, that the power is always within that community, so that we [the CIR] are not there in the future.

So, not just information resilience, but the persistence of survivors’ stories.

Optimization 

Another challenge facing journalists is less shocking, but I would argue just as persistent. Even in mature democracies that are not, currently, at war with themselves (though the looming risk of conflict can hardly be ignored), established publishers are pouring more money into AI and search optimization tools than they are into training reporters.

The speed with which some in the information business have shifted to machine-generated content for machine-readable purposes – bits for clicks, not human-authored content for people – is alarming. But that is partly because, 30 years on from the popular rise of the Web, most media organizations are still struggling to make money from it.

Expertise is expensive, so it’s much cheaper to recycle a PR company’s press releases for clicks than to investigate the facts. This is why a new generation of ‘policy institutes’ has arisen – organizations that, in some cases, are really PR fronts for tech vendors and multinational companies (see diginomica, passim). In the absence of anyone to hold them to account, therefore, vendors are free to nudge discussions towards their own interests.

Plus, we live in an economy of attention-seekers, where extreme opinions generate a snowball effect – even to criticize a tweet is to draw attention to it, which increases its power. As a result, many publishers are no longer in a lucrative information game, but are really adrift in an overcrowded market in which the sole aim is to hold people’s attention for a few seconds – by any means necessary.

AI can only speed up these trends, though fact checkers and community notes are spreading too.

Madeline Carr is Professor of Global Politics and Cybersecurity at UCL. While acknowledging Strick’s point about the deliberate misuse of technology to persecute minorities and spread disinformation, she said:

We don't want to lose sight of the incredible potential of this technology too. Think about the [UN] Sustainable Development Goals, and how we could use AI to further those goals and promote reaching them. The potential is incredible – on pretty much every one of the 17 goals. But if we are unable to address these cybersecurity concerns, there would be a massive opportunity cost – the things that we could have done, but were unable to do because we couldn’t address those issues.

She added:

In the context of democracy, the real risk is this: the possible loss of public confidence in legitimate public authority. It comes very much down to the role that, until the last 20 years, the media played in a democracy, which was really to hold power to account and do the kind of investigative work that Ben is describing. To act as a mediator between civil society and power.

As noted above, that is partly because traditional media have broken apart and are floating in a stream of attention-seekers, some of whom see economic advantage in promoting distrust in mainstream information sources. But equally, we should acknowledge that social platforms and AI have given everyone a voice that could match the reach of a multinational.

The key is always to think critically, Professor Carr continued:

Remember those heady days of Twitter and the Arab Spring? I clearly remember saying to my parents, ‘My God, people are tweeting from Tahrir Square [focal point of the 2011 uprising against Egyptian President Hosni Mubarak]. Isn’t it amazing that we can get news immediately from people on the ground!’ But then a couple of days later, I remember thinking, ‘Why are they all tweeting in English?’ This was the first time it had occurred to me [that things might not always be as they appeared]. And we spent the next 14 years undermining the platform of the media in our society. And, of course, AI can accelerate these challenges. But we've laid the groundwork, because the whole economic model of news production is so challenging now.

People now have a lot of difficulty – or find it impossible – to distinguish between what is legitimate – what is the truth and what is a legitimate news source – and what is fabricated. And this is why we have so much anxiety now about elections. Post the January 6 attacks, there is a kind of fragile foundation for authority, and for belief in authority. And when that crumbles, if an election doesn't go the way they wanted it to, how many people will believe that the election was rigged? This is a problem that used to only occur in developing countries. It is not something that we have had to face before.

Then she added:

With AI, the big difference – and I have studied technological policy and trends from the last 50 years – is that this is the only time I've ever seen human beings at the centre of the discussion. And the impact on human beings. So, this focus – let’s not say the obsession with AI ethics, but how much time we spend talking about AI ethics – is good. Perhaps it can become tedious, but it's good. This moment… we're talking about it. And I think that's very healthy.

Vulnerability 

Carme Colomina is Research Fellow at the Barcelona Centre for International Affairs. She said:

There is a degree of vulnerability with Artificial Intelligence that didn't exist before. Because the capacity for manipulation – not just classical disinformation – with deep fakes and the manipulation of video, goes far beyond the idea of mistrust. That’s because it's not just that we don't believe in what we see, it’s that we don't believe in our capacity to realise what's true or not. And the second challenge is, who is responsible for using it? We are always talking about individuals, but political parties are also using these techniques to boost their messages. And those are the parties that, when in government, also try to regulate.

We do need to talk to the innovators, to the industry. Because, at the end of the day, what is artificial intelligence? Is it a tool, or is it a system? If it’s a tool, then you can make a regulation for it. But if it's a system, then you have to talk to those who are developing it. And the problem is, artificial intelligence legislation now is treating AI mainly as a tool, because it's regulating the users, but not the core of the technology. And we still don't know how to deal with that. We still don't have the capacity, the influence, or even the consensus, to deal with that.

My take

A fascinating discussion from a worthy event. But are we any nearer to finding solutions to these problems? That is another matter entirely. It has been suggested in recent years that one reason we might be alone in the universe is that other civilisations may have reached exactly the same point with AI and imploded in a mess of insoluble contradictions.

On that note, good luck. Vote wisely, critically, and with your conscience, and hope that men with guns don’t knock on your door the next day.
