MIT Director Dr Yossi Sheffi on the future of AI - ‘I hope we don’t repeat the mistakes we made with globalization’
Dr Yossi Sheffi shares his views on the impact of AI on supply chains, but also warns that we shouldn’t wait for AI to penetrate every aspect of our lives before we think about the consequences.
Ahead of the release of his new book ‘The Magic Conveyor Belt: Supply Chains, AI, and the Future of Work’, Dr Yossi Sheffi, Director of the MIT Center for Transportation & Logistics, discusses the application of Artificial Intelligence (AI) in global business, how leaders might implement this technology, how governments might regulate it, and how workers might protect their livelihoods in the rapidly changing workplace.
Sheffi has authored numerous books on supply chain management, technological advances in global business, and, more recently, the impact artificial intelligence will have on the way businesses operate. Beyond academia, he has founded and sold logistics, SCM, and business consultancy companies, consistently arguing that resilience, from supply chain to talent, can give corporations a competitive edge amidst global crises.
Supply chains and AI
I began by asking Sheffi what businesses stand to gain as they move to implement AI in their supply chains. He is optimistic about the transformative effects that greater prediction accuracy can have, stating:
There's a lot of work on risk management in general, trying to find out what's going on with all my suppliers. I mean, because when you look at the financial data, it's backwards-looking. It's a quarter behind.
You want to know what's happening now and in the future, and for this a very good source is the media and social networks and be able to mine data… it's not just numbers, but it's text and pictures and videos and whatever, we can find all this and make sense of it.
Sheffi points out that perhaps the most promising application of artificial intelligence in global supply chains is its capacity to forecast the results of decisions made at every stratum of the network. The predictive function of AI, which once would have employed teams of analysts, holds the potential to instil dynamism into stagnating industries, drawing on previously underused data such as social media and internet usage to drive business decisions.
Sheffi is ultimately optimistic about the implementation of AI generally. However, on the potential risks, he had this to say:
First of all, if you look back at many technologies, this was exactly the feeling in the population. When computers just came about, especially the PC, the joke was that every manufacturing plant will have a man and a dog. The man will be here to look over the equipment and the dog will make sure the man doesn't touch anything.
The difference is today, the companies are totally aware of the dangers and they're working on it. They're already putting in guidelines.
This response typifies Sheffi’s overarching stance on this latest tech revolution. He frequently places AI within the historical context of the Industrial Revolution. Comparisons such as these are tempting to make as they suggest a predictability that is reassuring to workers and businesses alike (although the Luddites might have something to say on this subject!). However, there is a risk in trusting patterns that do not take into account the changing face of global business and the conflicting priorities of the states that look to regulate this technology. On this unpredictability, Sheffi argues:
And then we can have what's called emerging properties that the algorithm itself will create - the designers never foresaw, never imagined that this could happen.
AI development has accelerated suddenly in the last year, and it is difficult to determine exactly what the technology might look like in the near future. This unpredictability has earned it some prominent opponents, including the so-called ‘Godfather of AI’, Geoffrey Hinton, who has publicly raised concerns about the pace of change in the technology and won plenty of mainstream media headlines in the process.
How might workers prepare?
The foremost fear felt amongst workers globally is that this technology will cause mass unemployment. Sheffi approaches this question with the same historically minded optimism. He states:
In the late 70s and the beginning of 80s, the ATM, the automatic teller machine, became widespread in the United States. At that point, there were 300,000 tellers, total, in the United States. You know how many there are now? Over 600,000. Why? Because of the ATMs, it became less expensive to open bank branches…more jobs were created than destroyed.
In a globalized world, any shift in technology represents a potential impact on millions, if not billions, of individuals. We ought to remain sceptical that AI will follow the same pattern, since the nature of global supply chains involves countless tributaries that are difficult, if not impossible, to follow and predict. The issue here is therefore not the inherent nature of AI, but rather the haste with which it is being implemented in what is an entirely different context to the ATM. Sheffi agrees, stating:
I hope we're not going to repeat the mistake that we made with globalization. Globalization was great, on average it was great for people. People made money, the standard of living of the average person went up, especially in the rich world. Europe, the United States, Japan, China. But quite a few people are left behind.
Jobs left, whether it's the UK or Europe or us, many manufacturing jobs left and went to China and people lost their jobs and communities suffered. What didn’t we do then? We didn't realize that it was happening until it was pretty deep into the process.
How to regulate?
The final piece of this puzzle is ultimately, what now? Now that this technology has developed to a point where its widespread application is possible and is ongoing, how should states react to regulate it and protect their workers and businesses? On the subject of regulation, Sheffi makes an interesting point:
The harshest regulation is in China…they control the training data. Train only on certain data. So it doesn't go off the rails because it's tied to the training data. But this is a very extreme regulation that you do it on the input rather than the output.
China’s solution of limiting AI’s access to training data, and thus delineating its capabilities, is a heavy-handed approach, even if by some contrarian standards it is proving ‘successful’. China’s approach to regulation is one with authoritarian governance front of mind, and it will have highly dubious moral consequences (e.g. social scoring).
But the unintended result is perhaps the kind of future-minded regulation that might be implemented in other contexts as well (limiting the circulation of deepfakes, for instance, is arguably a positive ambition). Whilst the likes of the US and the UK might be accused of focusing on an ‘AI arms race’, where the accumulation of wealth is the end goal, China has a vastly different political stance that enables it to regulate swiftly, if not humanely.
Once implementation has occurred, however, how should governments respond to the need for countless workers to re-skill? Sheffi says:
We have to start now to make sure that we have options for the people whose jobs will have to change… governments simply have to buy their time [to allow workers to re-skill].
Fundamentally, therefore, the onus is on ‘the state’ to approach this new technology with caution and to invest the potentially massive sums of money it generates into the workers who are displaced. Re-skilling programs would see workers funnelled from their current industries into the new fields of work that AI would generate.
Supply chains have been a hot topic since the pandemic, with their future called into question amidst global shortages. Sheffi presents a future wherein AI is used to bolster the strength of global supply chains with massively increased efficiency and more accurate forecasting. Caution should be exercised when bright prospects are dependent on countless agents making the right decision, however.
I doubt that AI will instigate a benign ‘creative destruction’ in terms of worldwide employment, and I worry that livelihoods are riding on the effective regulation of this looming technological revolution, as well as on the good practice of business leaders around the globe.