
Book Review: The Handover explores how AI is shifting the delicate balance between states, companies, and institutions

George Lawton, January 4, 2024
In his new book, The Handover, David Runciman argues the real danger of AI lies in shifting the delicate balance between states, companies, and civil institutions – the original artificial agents.


Many practical concerns about AI relate to hallucinations, bias, and copyright. These stand apart from the existential risks of artificial general intelligence (AGI), which attracts a lot of ink but offers little practical substance, at least in the short term.

In the new book The Handover: How We Gave Control of Our Lives to Corporations, States and AIs, University of Cambridge academic David Runciman argues that the real danger is that the rapid adoption of machine learning and AI risks upsetting the delicate balance among the existing artificial agents that have fueled our prosperity, safety, and well-being over the years. These include states, corporations, and civil institutions like unions, political parties, churches, clubs, and newspapers.

Understanding the dynamic between states, corporations, and people can guide our thinking about the rapid rise of automation, large-scale data harvesting, and AI systems that risk escaping human control. Runciman says:

States and corporations reflect two sides of our contemporary fear of machines that have escaped human control. One is that we will build machines that we don’t know how to switch off, either because we have become too dependent on them or because we can’t find the off switch. That’s states. The other is that we build machines that self-replicate in ways that we can no longer regulate. They start spewing out versions of themselves to the point where we are swamped by them. That’s corporations.

In his telling, modern states, dating back to the 1600s in England, are the original artificial general agents that allowed cities to grow large and civilizations to flourish at a level far beyond what was previously possible under ancient Greek, Roman, Chinese, and other civilizations. But this growth required a delicate balancing act between different agents: corporations that knew how to scale exploitation and innovation, and other more human agents better at uniting various communities, contextualizing diverse experiences, and complaining effectively.

This recasting of our existing institutions as artificial agents is a refreshing counter to the current hype that treats AI agents as an entirely new phenomenon. Indeed, he argues that states, as artificial general agents, don't have to be smart, efficient, or optimized to work well. They just have to outlive individuals, scale the coordination of multitudes, and respond to human feedback. The new danger lies in not considering how automated decision-making and information gathering threaten to disrupt this delicate balance.

The Leviathan

Our success as humans and nations arose from this delicate balance rather than from access to coal, rivers, oil, or technology, although those things may have helped.

Runciman traces the evolution of existing artificial agents to a framing first espoused by Thomas Hobbes in Leviathan in the 1600s. Hobbes lived an exceptionally long life during which he witnessed the defeat of the Spanish Armada, a civil war in England, and several decades of war with France. In the shadow of the English Civil War, a new organizing principle emerged, eventually spurring the beginning of the Industrial Revolution.

This marked the first time imagined communities (governments and corporations) became mechanical, in the sense that similar versions of the same idea could be built in different places. Runciman says:

This was the crucial coming together of scientific understanding and human imagination. It was not that our imaginations became mechanized. Rather, we began to imagine what it would be like to organize collective enterprises as though they had the durability of machines. 

Second, these imagined enterprises proved remarkably adept at exploiting the advantages of the scientific revolution. He notes:

They could sustain projects of knowledge acquisition and application in ways that were beyond any rival organisational model. They could do this because they were able to borrow, invest and reap rewards over the long term.

And now, states and corporations are responsible for what comes next. He writes:

If some human beings have a different relationship with thinking machines, it is because of their hold over the states and corporations that ultimately create and regulate those machines. Musk and his like have their power because of their relationship to states and corporations, not in spite of them.

My take

Runciman does not leave us with any pat answers about how to shape the future of AI. But we do need to keep in mind our lessons from the past. We are already living in a world dominated by artificial persons in the form of states and corporations. Each of these has its own imperatives and responds to threats to its existence. But they have no vision of the future of the human race.

He notes:

What comes next depends on what states and corporations might do with the powers they have been given and what we might still be able to do to shape how they use them.

The one advantage we still have in shaping AI, surveillance, and automation is that we created states to be like us, and this gives them the power to steer the corporations that build machines toward our visions of the future. In Runciman's earlier book, Confronting Leviathan, he concludes:

Indeed, it is even possible to say that the state is the only instrument we have because it is the instrument that we built to be like us. It is the only instrument we have to take on the machines.

The book left me with a kind of hope and curiosity that most accounts of the future of AI have not. There is a kind of humility in pondering how some of the most contentious aspects of states, corporations, and the machines they build could be a promising feature, not a bug, once we know what to look for.
