AI and inequality - avoiding social upheaval in an age of automation

Cath Everett, March 27, 2023
Although Bill Gates believes accelerating automation will help “reduce some of the world’s worst inequities”, others are not so sure, fearing social unrest if the transition is not handled effectively.


"The Age of AI has begun", pronounced billionaire philanthropist and Microsoft co-founder Bill Gates in his most recent blog post. Current developments in the field will transform everyday life in a similar fashion to the advent of the microprocessor, personal computer, internet and mobile phone, he argued: 

It will change the way people work, learn, travel, get health care and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

So far, so predictable. But more interestingly, Gates also claimed that AI could help “reduce some of the world’s worst inequities”. In the US, he believes the most effective way of doing so would be to use the technology to help tailor education to learners’ needs and abilities. This would be particularly helpful in areas such as mathematics, where performance is dropping across the country, especially among Black, Latino and low-income students, Gates pointed out.

Climate change is another area in which he is “convinced” that AI could make a real difference in tackling injustice, although he admitted he was not quite clear how yet. 

The downsides of automation

But not everyone is so persuaded of the unremitting benefits of automation, whether in the form of AI or other technologies. New research from the Frankfurt School of Finance & Management, for instance, reveals that the adoption of industrial robots could end up increasing wealth inequality. 

It found that low-skilled, less-educated individuals working in sectors with significant rates of robot usage generally experienced lower income growth and a higher risk of unemployment than their better educated, more highly-skilled colleagues, widening the wealth gap between them in the process. 

A key challenge in this context is that public anger over inequality and the cost-of-living crisis is already generating social problems, even before automation technologies, such as AI, disrupt our lives in the ways predicted by the likes of Gates. In fact, over the year ahead, such inequality is expected to be a key contributor to rising civil unrest around the world, according to a new report from insurer Allianz Global Corporate & Specialty. Other factors in the mix include plunging faith in government and other institutions, increasingly polarized politics, and a rise in activism and environmental concerns. 

Management consultancy KPMG is another organization that believes social inequality is a major business risk. This, it says, is because:

It limits productivity and has the potential to constrain consumer spending and growth, destabilize supply chains, trigger political instability, and jeopardize [organisations’] social licence to operate.

The impact of transitioning to the automation era

So how is this scenario likely to play out? Justin Bean is Sustainability Strategy and Innovation Lead at Hitachi’s Environmental Business Division and author of ‘What Could Go Right: Designing our ideal future to emerge from continual crises into a thriving world’. In his view, while social instability is a “real risk”, it is not a foregone conclusion:

My view is that we’re in transition and we’re already seeing conflict arising from that transition. There are three bell curves here in my view – the first is the status quo and the old world, which is declining. The third is the future that we’re envisioning and building, and that’s growing. The second is in the middle and that’s about innovation and the new solutions we’re testing and trying - and as it peaks, conflict peaks too. Livelihoods, vested interests and fortunes are all locked up in the status quo and so there’s always the potential for conflict as the old world falls away.

Bean draws parallels with the first Industrial Revolution, a time when people shifted from agrarian society to living and working in towns and factories. While technology and innovation created more “abundance” over the longer-term, he says, there was undoubted conflict during the difficult transition phase:

As automation takes over, we’ll see similar things happening. Machines replaced a lot of the manual work, which required physical strength and was often dangerous, and automation will take over some of the work currently done by our minds. It’ll be the stuff we’re not good at, such as sorting through large amounts of data. It’ll destroy some jobs but will create others, particularly in service-based areas, such as coaching and teaching. Will it create more inequality? Yes. Inequality rises with any boom. But ‘a rising tide lifts all boats’, and even the poorest of societies today are richer than they were 150 years ago.

Bean believes that the current transition is already generating social conflict as evidenced by the rise of populism and the growing attraction of authoritarian leaders. He explains:

People feel like they’re losing the stability and predictability of the past, which isn’t something you can overlook. So from a government standpoint, it’s important to work out how best to support people through the transition, which is where things like the debate on universal basic income comes in. If you’re talking about creating a sustainable world, all the wealth can’t belong to the top 10% of the population or it’ll lead to social upheaval.

The future is in our hands

Applied Futurist Tom Cheesewright agrees that whether a short-term widening of the income gap morphs into longer-term, entrenched inequality or not is likely to be decided by political responses based on “how the economy is structured to take account of the technologies available” going forward.

When he is in optimistic mood, Cheesewright believes that, over the next 10 years, governments will “get their heads round this” and start taking action by making fundamental changes to everything from welfare and housing policy to transport infrastructure and education. He explains:

The changes that could shift the needle significantly come down to government policy. But employers also have an important role to play. For example, every employer I know complains about the lack of available digital skills but underinvests in training. But there’s the question of flexibility too. We need a more sophisticated conversation than has been going on over the last few years about what we want work to look like.

A lot of time has been spent talking about where we work, but just as important is when we work and how much. There’s been experimentation around the four-day week, but what if we cut people’s hours after developing an honest understanding of what a role entails? So if something was actually a three-day-a-week job, would someone accept a 20% pay cut to get two days back? It could lead to some interesting conversations. 

Another important consideration is behaving ethically and thinking longer term about what organizations need to do to be sustainable over the next decade. Those that fail to do so are likely to run into problems over the next five to 10 years, as “they won’t have invested in the right people or technologies”, Cheesewright adds.

When in a pessimistic mood, though, he sees growing levels of automation as simply ending up as a “race to the bottom”, leading to decade-by-decade increases in the Gini Coefficient, a measure of income inequality:

In this instance, the rapid adoption of automation technology could play into a rapid increase in income inequality, and then I do see social unrest being likely.

My take

It appears that the world is at a turning point right now as we start moving into a new, accelerated era of automation, with both policy-makers and employers having a key role to play in how things turn out. Dr Cansu Canca, AI Ethics Lead & Research Associate Professor at the Institute for Experiential AI at Northeastern University, offers some thought-provoking insights into where we stand today:

If we want to ensure that [AI and robotics] are developed and deployed in ways that are ethical and benefit humans (I would argue that focusing on humans is an unjustifiably narrow focus, rather we should focus on their impact as a whole on the world), we need to build a value alignment into them as well as into their use. These safeguards are certainly not in place at the moment…It is worth noting that the integration of ethics requires multiple layers of safeguards, incentives and tools. Companies must have internal systems that go beyond compliance and actually make ethical concerns an integral and innovative part of the innovation cycle. But that is not sufficient without an incentive structure around it, and that structure is often supplemented by the regulatory framework, which is also currently inadequate.
