2022 - the year in AI and Robotics
- The robots are still coming and AI is on the rise - highlights from the past 12 months.
2022 was the year in which Optimus the Teslabot wobbled like Kanye’s grasp of history, the UK set its sights on space for robotics and more, and Meta’s Galactica AI was a colossal failure. (As well as describing a type of tool and data model, ‘large language’ is a good euphemism for the expletives doubtless uttered by academics when, thanks to Galactica, their bylines appeared next to imaginary or inaccurate research.)
Meanwhile, the Mayflower uncrewed autonomous ship finally crossed the Atlantic in June, just weeks after our profile on that venture. Then ChatGPT rounded off the year by becoming a viral phenomenon, with many users mistaking the OpenAI chatbot’s clever responses for real intelligence.
However, ChatGPT’s impressive ability to write poetry to order was often superior to its grasp of physics and other subjects, suggesting that we may face years of false information being accepted as fact simply because an AI gave it in answer to a question. After all, the risk of people having misplaced faith in a technology simply because it seems authoritative is the same reason Galactica was taken offline after less than three days. Remember: none of these systems is sentient, but all are adept at appearing to be. That can be a risk.
But what were diginomica’s other key stories from this year?
UK AI strategy: A half-term report
With the above issues in mind, the UK has been among the governments forging an AI strategy to help policymakers navigate the waters of risk, innovation, and economic potential as effortlessly as IBM’s captain-less boat crossed the Atlantic. Back in the Spring, Tabitha Goldstaub, Chair of the UK Artificial Intelligence Council, gave Britain’s strategy her six-month progress assessment. Welcoming its ambitions, she warned:
We don't believe that the AI Strategy will work until we have a nation of conscious, confident consumers of the technology, as well as a diverse workforce capable of building AI that benefits not just the company, but also the economy and society as a whole. This means we need to have a population that can do three things: build AI, work with and alongside AI, and live and play with AI.
A lack of diversity in the sector is certainly hampering the technology’s development, she added, while welcoming Whitehall efforts to tackle this from the top down.
IBM: Poor AI skills undermining UK leadership ambitions
Low diversity is not the only obstacle to the UK’s plans to become an AI superpower, warned a major vendor survey in May. Relevant business and IT skills are also thin on the ground.
According to an IBM study, only one-third of UK companies have accelerated their use of AI in the past two years, compared with a European average of 49%. More than one-third (36%) of UK companies have stalled their AI investments during the survey period (which coincided with the pandemic), versus 27% across Europe. Globally, IBM found that 35% of companies are using AI in their business, while an additional 42% are exploring it. Adoption is growing steadily, up four points from IBM’s 2021 survey.
Can a new UK Hub shape global AI standards?
Back in the dim and distant past of January – that’s two Prime Ministers ago! – the plucky UK not only set up a new Digital Markets Unit (DMU) to keep Big Tech in check, but also piloted an AI Standards Hub to “lead the global debate”.
But the challenge facing the UK is that, while it may be leading Europe in key technology areas, from AI to FinTech and quantum computing, the tech behemoths that have the spending power of mid-sized nations are either American or Chinese. How those corporations, including Google, Microsoft, Amazon (now a strategic supplier to Whitehall), IBM, Apple, and Facebook shape AI standards is largely up to them. At least, until somebody stops them.
That ‘somebody’ is more likely to be the economic and legislative might of the EU than the UK, respected though Britain remains in the standards world.
Also see: https://diginomica.com/ai-demands-contract-trust-says-kpmg
AI ethics: How do we put theory into practice when international approaches vary?
On the world stage, ethics have been at the centre of the AI debate. While in the West we like to pretend that good behaviour is underpinned by common standards – a global ethical baseline, if you will – the reality is rather different. So, when an AI system is designed in one nation and trained on local data under local laws and regulations, it may not be easy to export or import elsewhere, either practically or culturally.
The data and the core algorithms themselves may create as many problems as they solve, warned Louise Sheridan, Deputy Director of the new Centre for Data Ethics and Innovation (CDEI), the first organization of its kind in the world:
New forms of decision-making have surfaced numerous examples where algorithms have either entrenched or amplified historic biases, or indeed created new forms of bias or unfairness. Action-steps to anticipate risks and measure outcomes are needed to avoid this. But organizations often lack the information and ability to innovate with AI in ways that will win and retain public trust. This means that organisations often forego innovations that would otherwise be economically or socially beneficial.
Neil Raden offered a different perspective: why should academics and ethicists outside the industry have such a strong influence at all?
Davos 2022: Involve workers for successful automation, says the WEF
Robots’ societal and economic impacts, together with those of other Industry 4.0 technologies, have also been debated this year. For example, at the World Economic Forum (WEF) in Davos, a panel of experts looked at robots’ effect on human workplaces.
Sharan Burrow, General Secretary of the International Trade Union Confederation (ITUC), said:
Fully automated workplaces are still the stuff of science fiction and should probably remain so. But if you’ve invested in people and they know what the business is, then there’s no question that augmentation can assist workers and productivity. But if you’re not in sync with the workers, if you’re not blending the knowledge and use of technology, then frankly you will be poorer for it as a business.
Also see Stuart’s excellent report on Ocado’s automation and robotics ambitions.
Good vibrations: How Augury’s AI sounds like factory uptime
In one of our many fascinating C-Suite interviews this year, we spoke to Saar Yoskovitz, co-founder and CEO of New York-headquartered machine-health unicorn, Augury (“Machines talk, we listen”). The company’s AI detects minute changes in the way machines sound to diagnose problems that may be hidden from human operators. Yoskovitz told us:
If you have a pump in your factory, we don't need to build a new model for your specific one, because we've seen over 20,000 pumps before. We know exactly what cavitation or cracked bars sound like. We have over 200 million machine hours that we've monitored, and all that data is sitting in our cloud. […] From the very first moment when the sensor is installed on your machine, we can tell you what's wrong with it.
The biggest year for industrial robotics, but China dominates
The world industrial robot market experienced stellar growth in 2021. That’s according to full-year figures presented by the International Federation of Robotics (IFR) at its 2022 annual conference. Over half a million new units (517,000) were shifted worldwide, representing a year-on-year increase of 31% – the highest figures ever reported by the IFR – and a compound annual growth rate (CAGR) of 11% since 2016. The bumper year brings the world industrial robot population to roughly 3.5 million.
At 268,200 new industrial robot installations, China alone accounts for nearly 52% of all global sales and 51% year-on-year growth, followed by Japan (47,200 installations, 9% of sales, and YOY growth of 22%). Over the past five years, the CAGR of industrial robot demand in China has hit 23%. By contrast, the US bought just 35,000 industrial robots, up 14% year on year. South Korea – previously acknowledged as the world’s most highly automated country – installed a further 31,000 (up 2% year on year) and Germany 23,800 (up 6%).
Drop the pilot: Should future flight really be automated?
While they fall outside the IFR’s strict definition of robots, remotely piloted or autonomous aircraft and drones – aka unmanned aerial vehicles (UAVs) – will become familiar sights in our skies. But just how familiar may surprise or even alarm you, if the industry’s own predictions prove correct. Anthony Spouncer, Senior Director of UAV & Unmanned Traffic Management at satellite giant Inmarsat, told us:
We expect there to be over 10 million commercial UAVs in flight by 2030, and an estimated 600,000 of these will be flying beyond visual line of sight (BVLOS), outside the [ground-based] pilot’s visibility. This is in addition to an already crowded airspace in many parts of the world – especially in European skies. Drones promise huge opportunities, but also present safety and security challenges that need to be addressed to make the most of these investments.
Indeed. Accordingly, our August report looked at the pros and cons of autonomy spreading to cargo and other forms of commercial flight – an issue we have been tracking for the past two years.
Does the humanoid robot idea really have legs?
Tesla’s Optimus robot was unveiled in three forms this year: a man in a suit; a working prototype; and a wobbly work-in-progress model. The latter gave some idea of the final shape of the machine that will one day stand unused in your kitchen. But why should robots take humanoid form at all? we asked back in July, in the wake of an insightful white paper from the UK-RAS network, which explained how machines need to navigate a built environment designed for human beings.
However, our report contained some prescient words on the public’s media-stoked fears about killer robots:
As ethicists have pointed out, individual innovations on the battlefield can be justified on logical grounds, and even on moral and ethical ones – keeping soldiers out of harm’s way, for example. […] But taken together, such innovations may, over time, add up to a significant moral hazard. At what point might we, in the real world, have given machines the power of life and death, as well as the power to protect us?
Those words proved prophetic more quickly than we had anticipated. At the end of the year, it was reported that San Francisco was giving robots – controlled remotely – legal permission to kill in critical situations. More accurately, therefore, it was giving police and security forces the ability to use machines as lethal weapons. Either way, this was an ethical backflip for a city that, not long ago, banned the use of real-time facial recognition systems by police over fears of their potential misuse.
Thankfully, the policy was scrapped after a public and media outcry, doubtless inspired by concerns that such a law might open the door to autonomous killer robots in the future. So, will the Terminator be back? Somewhere in the world, no doubt. Merry Christmas!