Robotics is about much more than just industrial hardware – the subject of the first part of this two-part special report. The “wave of creative destruction” (to quote Sean Culey) also includes AI, chatbots, and the challenge of creating robots to care for humans, rather than simply replacing them with cheap, smart labour. And it should be about entirely new business processes.
Avida Hancock is COO of UCL spinout Satalia, which – despite Hancock’s job title – operates without managers or administrators. Founded in 2008, the company is positioned at the new sweet spot between AI and organisational design.
Hancock believes that data science and AI will replace traditional leaders and information hierarchies, producing a new kind of decentralised organisation: a coalition of talent and ideas. Her aim is to export this model to business and government worldwide. She explains:
Satalia was born as an optimisation company – optimisation being the process of using algorithms to find the best solution to mathematical problems. We had a vision of bringing these algorithms out of academia – where they were languishing and dying – and into industry, where they could be used to make the world more efficient.
Right from the beginning, we decided to architect what we think of as a new operating system for organisations, by using our technology to power the way we work, to enable people to collaborate in a completely decentralised, self-organised, purposeful way.
So how does it work? Hancock explains:
We collect all of our data from every event that happens within the organisation, across every network tool, and we pour it into a data lake, called Cosmos. We use data science and machine learning to extract insights from it and to power a system that provides information to everyone in the organisation: what opportunities are available, what activities people are working on, who is connecting with whom, how much projects are costing us, and our priorities within a collective strategy. This enables us to work in a completely non-hierarchical way.
For enterprises in thrall to superstar CEOs, and others who seek to apply AI and automation as a brute force within traditional hierarchies (to maximize profits for the few), this poses a necessary challenge; one that feels closer to how millennials like to do business. Hancock continues:
A hierarchical construct is completely detrimental to an organisation: everyone has to make themselves look better than someone else in order to progress, but all this does is create power dynamics and information silos. Yet we’re programmed from birth, in our education systems, to work in this way. But today, technology allows organisations to use really advanced, data-driven peer-to-peer systems.
Hancock explains that Satalia is not completely “leaderless”, but there’s no static hierarchy within its fluid organisation. This, she argues, is a replicable model.
We’re able to break decision-making down into proximity, skill, and strategic understanding. I can tell who has the most skills and experience for a particular decision, by associating their background with the history of a particular topic.
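The matching Hancock describes – associating a person’s skills and history with a particular topic – can be sketched in a few lines. This is a hypothetical illustration of the idea, not Satalia’s actual system; all names and the simple tag-overlap scoring rule are assumptions.

```python
# Hypothetical sketch of skill-based decision routing: score each person by
# the overlap between their skill/experience tags and a decision's topic
# tags, then route the decision to the strongest match.

def route_decision(topic_tags, people):
    """Return the name of the person whose tags best match the topic.

    people: dict mapping name -> set of skill/experience tags.
    Scoring here is simple tag overlap; a real system would weight
    recency, project history, and proximity to the decision.
    """
    return max(people, key=lambda name: len(topic_tags & people[name]))

people = {
    "alice": {"routing", "optimisation", "python"},
    "bob": {"sales", "negotiation"},
    "carol": {"optimisation", "machine-learning", "statistics"},
}

# A decision about an optimisation model goes to the best-matched person.
decision = {"optimisation", "machine-learning"}
print(route_decision(decision, people))  # carol: overlap 2, vs 1 and 0
```

The point is that the routing is derived from data about who has actually done what, rather than from a job title in a hierarchy.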
Imagine what could be achieved in government using such a model: a sector in which it sometimes seems as if people who are uniquely unqualified to make decisions are running departments. We’re seduced by personalities, schooling, and gamesmanship, when we should be looking for skill and insight. Technology can knock down those walls, rather than reinforce them.
Chatbots are another incoming technology – part automation, and part AI. Kriti Sharma is VP of Bots and AI at Sage Group, where she’s the inventor of “invisible accounting” with Pegg, a personal assistant chatbot that manages business finances. She says:
Sage hired me to build new products for millennials, so as a token millennial, I arrived! Who loves expense reports? Or chasing purchase orders and asking people to pay you on time? I realised that what Sage does is not something that most people enjoy, so how about we give them someone else to do it for them? Business owners are not geared towards mundane work, they would rather be building new products.
Gartner predicts that people will soon spend more time chatting to AIs like Pegg than to their own partners, but for Sharma, the industry’s efforts to create human-like software or hardware are a strategic error. Instead, AI should “embrace its botness”, she says.
Sharma makes the point that many domestic AIs – Siri and Alexa among them – are feminine personalities with female voices, and are designed to respond to routine commands. Meanwhile, some industry-specific systems – in legal services and banking, for example – are designed to be ‘male’. In this way, she suggests, we risk automating societal problems and “projecting workplace stereotypes onto AI”.
This lack of diversity is a much bigger problem in AI development than most people realise. In many cases, AI relies on training data supplied by human beings, where the risk of introducing prejudice, confirmation bias, or tainted data extends as far back as the design stage – a subject I've discussed elsewhere.
Dr Kerstin Dautenhahn, Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire, is aware of the need to design machines that are not ‘too human’. Her aim is to design robots as assistive technologies for ageing populations and other groups, such as children on the autism spectrum. She says:
Robots can play an important part in our daily lives – for example, helping older people to live longer in their own homes. In these contexts, it’s important to view robots as complementary to human care, and not as a replacement for human contact.
This is particularly important for some vulnerable groups, she says:
Robots as tools in the hands of teachers, therapists, or parents can help some children with autism to learn about human communication and interaction. We exploit the fact that our Kaspar robot is a machine, not a person. This demonstrates that creating robots that are indistinguishable from people is not an inevitable or necessary development.
Indeed, Dautenhahn admits that humanoid robots often disappoint us, because their design sets up unrealistic expectations of seamless conversation – although the availability of IBM’s Watson in the cloud is bringing natural language processing closer to everyday speech for robots.
She’s also refreshingly honest about the future. Despite being one of the prime movers of robotics in the UK, Dautenhahn worries about how these technologies will be used in the future:
Researchers often have a benign vision of the useful role robots could play in our society – assisting people, rather than replacing them. While I’m involved in building these systems, I will not be the person making the decisions about how they will be used. Those decisions are made by politicians and healthcare providers.
There’s a danger that cutting human labour costs will be the main driver of using robots and AI in care, which is not how many researchers, including myself, envisage it.
That danger does not just apply in healthcare. When robotics and AI meet the worlds of traditional hierarchies, politics, and profit, the risk of misapplying them for tactical gain is overwhelming, as is the risk of using them to replicate or automate existing problems.
So let’s apply Einstein’s thought experiment model to the world of robots and automated systems, as envisaged by some of the more cost-focused analysts and think tanks. A possible future emerges of a small number of super-powerful organisations owning or managing not only property, networks, and wealth, but also the IP that can be turned into products via the PAL model set out in our previous report.
In this future, those organisations are increasingly automated – staffed by robots and AI – while many humans lack full-time employment and compete locally for ad hoc work, via the network.
Right-wing think tank Reform suggests that this would be via reverse auction – competing to work for less money – and believes that sweeping aside a quarter of a million public sector workers (doctors, nurses, and teachers among them) is a good thing, simply because it reduces central costs. This is an admission that those ‘outside the wall’ will have less and less money to live on, by design.
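The reverse auction Reform envisages can be sketched very simply: workers bid a price for a task, and the lowest bid wins. This is a hypothetical illustration of the mechanism, not any real platform’s code; the names and figures are invented.

```python
# Minimal sketch of a reverse auction for ad hoc work: each worker bids
# the price they will accept for a task, and the lowest bid wins.

def reverse_auction(bids):
    """Return (worker, price) for the lowest bid.

    bids: dict mapping worker name -> asking price.
    On a tie, the earlier-entered bid wins (Python 3.7+ dicts preserve
    insertion order, and min() returns the first minimal item).
    """
    return min(bids.items(), key=lambda item: item[1])

bids = {"worker_a": 120.0, "worker_b": 95.0, "worker_c": 110.0}
winner, price = reverse_auction(bids)
print(winner, price)  # worker_b wins at 95.0
```

Run repeatedly across a large pool of bidders, the mechanism only ever pushes the clearing price downwards – which is precisely the squeeze on incomes described below.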
Let’s take that position at face value. While the opportunities for people to find ad hoc work in this way are certainly increasing via the network – let’s call it the Uber effect – their ability to make enough money to live on is already shrinking, because over time the network commoditises even skilled labour and experience. One day, even a brain surgeon will be replaced by a robot.
Let’s stick with Uber as an example. Countless people find work today as drivers, many via the ride-sharing app. Yet Uber’s endgame is automation: a world of driverless, autonomous cars. Uber has built the app and the infrastructure for passengers to summon rides with a click, but for a future in which few people will own or drive cars, while smart vehicles will be able to drive, park, and update themselves overnight.
Uber has used freelance human workers to create and prove its market, but in the long run Uber drivers have no future with the company; they’re peripherals. Uber is an app, not an ad hoc employment agency.
This possible future of a human scrabble for portfolio work and ever-shrinking micropayments, while automated business runs in the background racking up the profits, is why concepts such as a universal income and a robot tax are increasingly vital to consider.
But can our more conservative, traditional, or hierarchical thinkers overcome their opposition to paying the alleged ‘scroungers’ in our midst a living wage, even if there’s no work? Or will they tut, don their stovepipe hats, and continue to see one side of the robotics debate – cost savings and productivity – and neglect its repercussions?
Because in the future we may all be scroungers. And that’s a real sustainability challenge.