Companies, governments and regulators around the world are currently both anxious and optimistic about the sudden rise in use of Artificial Intelligence (AI) technologies. It’s early days still, but amongst all the polite smiles and public declarations of collaboration, it’s hard not to imagine that behind closed doors politicians are desperately trying to figure out how to become leaders in the AI arms race.
This is largely because of the likelihood of AI being a general purpose technology and reshaping economies entirely. Previous general purpose technologies, such as the steam engine, electricity and the Internet, have gone on to change the way we work, created new goods to consume, and have reshaped labor markets. Not to mention, they have created global superpowers.
These technologies also largely resulted in productivity improvements and positive changes to economic metrics (healthcare, wages, standards of living) - for the countries that understood their imminent impact and set to work making their widespread use possible, whilst leaving space for further innovation.
There are huge risks associated with how AI will bring about change to our lives - most notably, in my opinion, when it comes to the legal implications of ‘ownership’ (when the cost of creating anything digitally approaches zero, it’s difficult to assign copyright and IP in the ways we’ve become accustomed to). There are other risks too, of course, particularly around the spread of disinformation and the impact on white-collar workers. But we need to be realistic: AI is coming and things are going to change. And as such, those responsible for bringing about those changes need to be flexible in their approach and take effective action to navigate the path successfully.
However, in my view, governments and regulators are too preoccupied with fostering the right environments to create the ‘next generation of AI companies’. Leaders in the UK and Europe made this mistake decades ago when they saw Internet companies springing up around Silicon Valley - and swung into action to market new technology hubs in places like East London, with less glamorous titles like ‘Silicon Roundabout’.
And that’s not to say those endeavors didn’t yield some results; there were certainly some success stories. But by and large the US (and now China) continue to dominate when it comes to building and scaling large Internet organizations. And there’s a reason why those regions are now spawning the next generation of AI companies - access to data.
Artificial Intelligence is useless without extensive data and the likes of Google, Meta and Microsoft have been quick to market…because they’re sitting on huge troves of the stuff. Chip companies will of course see success too as the demand for their powerful hardware accelerates, but again, that’s a race that’s hard to enter at this stage of the game. They’ve all got a first mover advantage and, in my view, it’s going to be hard to compete on that front.
So, if you’re a regulator, politician or organization sitting outside of these markets, it might be worth asking yourself: is it even worth trying to compete?
A new path
I don’t pose the ‘Is it even worth trying?’ question to be pessimistic. That’s not the aim of this (admittedly rather speculative) piece. Instead, I hope it offers a prompt for those people with influence to take a broader view of what the competition actually looks like in the context of AI.
A good comparison is perhaps the age-old argument in enterprise technology of ‘build versus buy’. With the onset of cloud computing and XaaS (everything as a service), enterprise buyers had to sit down and ask themselves: Where am I able to add value? And the answer is most often reallocating resources away from building technology, towards buying it. The most successful companies today understand that they can’t compete on the technology front, but they can use others’ technology to compete in areas such as service, personalization, original content and operational efficiency.
If we extend this thinking to AI, organizations and countries should be thinking about the outcomes the technology can deliver, and what complementary enablers are needed to achieve them, rather than trying to win big by building the next great foundational model.
Much of this won’t be down to the technology itself and the answers will vary from region to region and organization to organization. For instance, Britain was successful during the Industrial Revolution in part not just due to new technologies emerging, but because of the flexibility of Parliament at the time to enable new property laws, as well as government intervening to finance complementary systems that enabled the new technologies to scale (e.g. the development of the Railway Clearing House and subsequent laws to set standards). There were also unique wage dynamics in the country (amongst a plethora of other things) - but the key thing to understand is that it wasn’t just the ‘invention’ that spurred the economic growth, but rather coordinated efforts to enable its success.
This of course leads us to the discussions on regulation. Governments around the world are obsessing over regulation, particularly as it relates to safety and risk, which is of course high on the agenda. The UK, as an example, has spoken at length about AI safety and made it its focus to encourage investment - the thinking being that if it is a safe region in which to develop and deploy AI, this will encourage confidence. The EU has pushed forward with its AI Act - again encouraging the safe use of data and seeking to replicate the influence it saw with GDPR - and the US is forging ahead with its own agenda.
None of this is wrong, far from it. As we saw with data protection, consumer and copyright laws (many of which apply to AI, by the way), the advancements in Artificial Intelligence will succumb to regulation that is harmonized down the line after years of being bounced between the courts and government institutions.
But I would argue that whilst attention is paid to regulation that sets the guardrails for AI use (important), we should be paying equal attention to regulation and efforts to enable it effectively (complementary factors). You only have to look to China for how it rapidly adopted mobile-enabled technologies, including payment systems, to fuel growth and new experiences for consumers.
Equally, the UK has struggled to spawn many huge Internet companies, but it has become a leader in FinTech and is one of the world’s leading e-commerce regions. In both examples, the enabling factors included geography, access to broadband and centers of knowledge.
No clear answers, but pause for thought
Annoyingly for the reader of this piece, I’m unable to provide a clear answer to ‘how’ to do this for every region in the world. I live in the UK, so I’d like to see the British Government be smarter on this front, but the same thinking applies to all politicians and regulators. And to be perfectly honest, it’s a problem for people much smarter than I am, and the ‘enabling and complementary factors’ will vary from region to region.
But I’d like this piece to offer some pause for thought: to consider enablement rather than simply ‘invention’ as we accelerate towards our AI future. There are huge risks involved, of course, and I’m not encouraging a Wild West approach where people, companies and governments can do as they please.
Rather, I’d like regions such as the UK to ask themselves: what do we already do well? What competitive advantages do we already have as a market, and how can AI accelerate those advantages? And how could we use AI to transform our economy into one that is beneficial for most and equitable? In terms of policy recommendations, much of this will likely come down to the government setting standards, easing paths to adoption, encouraging knowledge networks, investing heavily in skills, providing financial incentives, investing in R&D and remaining flexible and consistent. There’s likely more, but it certainly requires a different type of thinking and a new focus.
It’s easy to get distracted by the ‘shiny new thing’, and unfortunately politics has become a game dominated by short-term thinking and vested interests, but the outcome of the AI arms race is yet to be decided. And I’d argue that much of it will be decided by those who adopt AI smartly, rather than those who see returns by creating it.