The advancements in AI technologies in recent months, driven largely by OpenAI’s ChatGPT tool, have woken some people up to the potential of the technology. All of a sudden, generative AI tools are capable of output that we couldn’t have imagined just a short time ago - from AI-generated images that look so genuine they fool swathes of internet users into believing they are real, to Large Language Models (LLMs) having intensely unnerving ‘conversations’ with journalists.
There are ongoing debates about how quickly we will accelerate to Artificial General Intelligence (AGI) - AI capable of performing most cognitive tasks a person can do - but it’s becoming increasingly clear that development is happening at pace and the impact will be far reaching.
In fact, I’d argue that the use cases and applications we are seeing today are just the tip of a very large iceberg - and, more often than not, trivial ones. Much like the development of social media, and the Internet before it, the consequences of new technologies aren’t realized until much further down the line. But it’s arguable that AI will have a greater impact than any technology development in our lifetime.
OpenAI itself has said most jobs will soon be influenced by AI, arguing that jobs requiring a college education will see the greatest impact, and that at least half of people’s tasks may be affected. Nothing new there - we’ve been hearing such predictions for a while - but those far-off predictions are creeping closer by the day.
To put it simply, we shouldn’t be viewing AI as a complementary technology to the Internet and digital tools - but as a technology that will shape democracy, economies, labor markets and political outcomes for the next generation.
A concern should be, however, that these powerful technologies - where we understand very little about how they actually work (black box territory) - are seemingly going to end up in the hands of very few US-based corporations. Why should this be a concern? Well, how should we feel about tools being created that have the power to shape our lives being in the hands of a select few, with little democratic oversight?
Regulation is currently unable to keep up with the pace of developments. And whilst some countries, including the UK, are producing ‘AI strategies’ - frameworks that seek to guide the development of the technology and encourage investment - companies in the US are leaping years ahead with technology we don’t fully understand in their hands.
This thinking should be at the top of the political agenda here in the UK, but unfortunately the current government seems more preoccupied with stirring up culture wars, targeting low-level criminal activities and demonizing asylum seekers. Meanwhile across the pond a technology wave is coming that we in the UK are wholly unprepared for.
A unique threat
Given this framing, it was with great interest that I stumbled across a blog by James W. Philips, a researcher at University College London, and a former special advisor to the Prime Minister for Science and Technology (April 2020 to September 2022). Philips proposes that the UK is uniquely positioned to ‘secure liberal democratic control of AGI through UK leadership’, but has a very short window of opportunity to do so.
The challenge, Philips outlines, is as such:
Advances in AI capabilities are now exponential. A decade ago, AI could barely recognise a photo of a cat. Today, Bing’s chatbot is capable of a wide range of tasks including searching the internet to write and edit essays and code computer programs, whilst AI is superhuman at many narrowly defined tasks including those involved in strategy such as the game Diplomacy.
In private, leading figures believe that within another 5-10 years we will be able to build AI capable of performing almost all cognitive labor a human can currently do, and doing most of it far beyond human ability. Such an Artificial General Intelligence (AGI) would likely alter society unlike any advance before in human history, and will represent a major strategic surprise to the UK.
Today's narrow AI systems have already shown immense economic and social potential to improve our lives. They have shown the potential to replace a large amount of search (a market of ~£350bn/year), pass a medical license exam, pass a bar exam, write half of all code, and write, illustrate, and voice movies. They have increased writing productivity by 50%, increased coding productivity by 55%, and research from MIT suggests that the broad adoption of such systems could triple national productivity growth rates. We believe these benefits will continue and affect nearly every part of the economy on the path towards AGI.
Philips goes on to say that AGI could present a novel and existential threat if it takes on an ‘agentic’ form that is capable of long-term planning and can autonomously execute on goals. In other words, if the AI becomes smarter than you or I, and is able to carry out tasks on its own, the consequences could be huge. We’ve written previously about the threat of disinformation as a result of generative AI, which under an autonomous AI system could be hugely challenging to control. As Philips writes:
There are currently no good theories for how to keep a superintelligent AI system aligned with human interests.
Even if AGI can be aligned to our values and controlled, Philips argues that the UK also has substantial vulnerabilities to the technology compared to other nations. The UK is highly dependent on services and creative exports, for example, which are already being highly automated.
Equally, our capitalist systems encourage the rapid development of AI systems. Companies know that the first to achieve widespread use of AI - and to create AGI tools - are likely to win big financially, at least in the medium term. As such, there is an incentive to remove human oversight and outpace regulation.
However, according to Philips, all is not doomed if the authorities in the UK can wise up to what’s coming. Philips highlights that, largely thanks to Google-owned DeepMind being based in the UK, it is one of the few nations besides the US with the capability to lead the world towards AGI. In addition, he argues that unlike in the US, London benefits from the co-location of technical and government expertise.
However, this opportunity isn’t being recognized. Philips writes:
Whilst the UK has a leadership role through DeepMind, this advantage is a) dependent on a US tech giant's funding and b) diminishing as competition grows globally.
In the coming years, experts believe AI companies are going to move from spending tens of millions of pounds on training single AI models to hundreds of millions or billions of pounds - if the UK doesn't act now, it will have no hope of leading the future. OpenAI has just launched GPT-4, a tens-of-million dollar model which will further fuel the resource-intensive commercial AI race set off by ChatGPT and Bing. If we don't act now, we will likely lose our advantage as we lack the resources to sustain a competition with these companies, or the US and China, and possibly even the EU.
Continuing our existing approach would be a deliberate choice for the UK state to head towards strategic irrelevance in AI. Whilst we cannot stop AGI development unilaterally, we must ensure we and allied liberal democracies are in a position to control it.
What Philips is arguing for is a ‘nation-state effort’ to pursue the sovereign, accountable and safe development of AGI - a new approach, similar to a ‘CERN for AI’.
It’s worth reading Philips’ blog in full for an extensive outline of what’s being urged here, but his key action takeaways for the government include:
Serious AGI expertise advising the highest levels of government - officials need to be receiving advice on AGI from experts actually building it (individuals in frontier tech labs)
Building the supercomputing necessary for AGI - Philips notes that one private lab in California is now using at least 25x the total compute capacity available to the entire UK state, just to train a single model. This lack of compute undermines the UK’s ability to make headway in this area, and AI supercomputing is a foundational infrastructure for all of society. Philips argues that the UK should rent 30,000 GPUs as soon as possible and build out dedicated engineering resources for using them.
Build an elite public-sector research lab at the frontier of safe AI research - to ensure the UK has a seat at the table in the development of AGI, it needs to create a national research lab that is led by a deep technical expert, is sufficiently resourced, and will form the “nucleus” of a whole-of-nation effort towards AGI.
Work in partnership with allies aligned with our values - focus on allies providing funding for supercomputing and salaries, as well as on data collection. Philips argues this should be pursued so as to avoid a zero-sum technology race and to make a multilateral partnership more palatable to the US.
Lead in the governance of AGI - the UK should spend “significant resources” to prepare a variety of regulatory responses in this space, in partnership with technical experts in the area.
Philips’ proposal may seem aggressive, but a couple of points certainly ring true. Firstly, regulatory oversight of AI development is lacking, and legislators are unable to keep pace. Secondly, the window of opportunity is diminishing, and the UK could miss out on becoming a leader in this next technology revolution. Thirdly, does anyone really want these powerful tools in the hands of a very select few companies that are answerable to shareholders? The consequences of AGI will be significant, and having the UK government hold a stake in how it shapes our lives - rather than merely reacting to what’s put out from countries further afield - could be a worthwhile pursuit.