Is the future of robots in good hands?

Chris Middleton, January 21, 2020
Summary:
The rise of anthropomorphic robot hands – one of the keys to building successful robots – is not just a physical engineering challenge.

This year marks 100 years since the word ‘robot’ was coined, entering popular usage via the English translation of Karel Čapek’s 1920 play R.U.R (Rossumovi Univerzální Roboti) – aka Rossum’s Universal Robots – the first production of which took place in January 1921. The word, proposed by Čapek’s brother Josef, came from the Czech robota, meaning forced labour, and described artificial slaves grown from organic matter – more akin to the replicants of the movie Blade Runner than to the machine men of science fiction lore.

So how are real-world robots faring in 2020? Hardware robots will overcome dexterity and locomotion challenges this decade, with significant improvements in their gripping and mechanical abilities and in their ability to move around complex environments. These are among the predictions in a new report, 2020 Tech Trends, from market analyst firm CBInsights.

While improvements in robotic engineering, software, and sensors have led to the emergence of so-called ‘cobots’ (collaborative robots) – highly programmable machines that can work safely among human beings – the sector remains “plagued by dexterity issues” and locomotion challenges, claims the firm.

In CBInsights’ view, this is down to a lack of physical dexterity in robotic hands and grippers – plus many robots’ difficulty in moving around unstructured environments or on uneven ground. But the apparent criticism of engineering quality in robot hands is misleading.

In fact, that's an oversimplification of the problem – at least when it comes to picking up objects or handling tools, which robots need to be able to do safely in order to work independently or alongside humans in industries such as infrastructure maintenance, offshore or subsea engineering, oil and gas, nuclear decommissioning, mining, manufacturing, space exploration, and satellite maintenance.

The vision thing

Innovators such as the UK’s Shadow Robot Company have been able to produce highly sophisticated anthropomorphic robot hands and specialist grippers; the problem has more to do with robots’ ability to see and recognise different objects before picking them up.

To a computer vision system mounted on a robot, all objects are merely collections of pixels; computers have no human-style knowledge of different objects and their likely weight and composition, unless they have been exposed to vast amounts of training data about them.

Robots have to be taught that a certain arrangement of pixels is likely to be a hammer or a screwdriver, for example, and to recognise that object type regardless of angle, size, composition, lighting, colour, visibility, and weather conditions – and whether the object is partly obscured by another similar or dissimilar object. That adds up to a lot of data for just one simple object, let alone task.
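
To make that concrete, here is a minimal, hypothetical sketch of how such variation is typically handled in training – using TensorFlow (which NASA works with, as noted below), with random rotations, zooms, contrast shifts, and translations standing in for changes of angle, distance, lighting, and partial framing. The image size, classes, and model are placeholders, not any real system’s pipeline:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative only: augmentation layers expose a classifier to the kinds
# of variation described above. Nothing here is NASA's actual pipeline.
augment = tf.keras.Sequential([
    layers.RandomRotation(0.25),          # objects seen from different angles
    layers.RandomZoom(0.3),               # apparent size varies with distance
    layers.RandomContrast(0.4),           # lighting changes
    layers.RandomTranslation(0.1, 0.1),   # partial framing at image edges
])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),  # placeholder image size
    augment,                              # active during training only
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),  # e.g. 'hammer' vs 'screwdriver'
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```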

How robots do this is the subject of in-depth research worldwide, including at NASA’s Johnson Space Center, using different combinations of sensors, point clouds, object libraries, and artificial intelligence (AI), among other technologies. 
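One common building block in such point-cloud approaches is rigidly aligning a sensed cloud with a stored model from an object library, to estimate the object’s pose before grasping. Below is a minimal sketch of the standard SVD-based (Kabsch) alignment step – a textbook technique, not NASA’s specific method – which assumes point correspondences are already known; real pipelines must estimate them, for example iteratively, as in ICP:

```python
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Best-fit rotation R and translation t with target ~= source @ R.T + t.

    Standard SVD-based (Kabsch) solution. Assumes the two (N, 3) arrays
    are already in point-to-point correspondence.
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, tgt_c - R @ src_c

# Toy check: recover a known pose of a 'library' model.
rng = np.random.default_rng(0)
model = rng.normal(size=(100, 3))               # stored object template
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
sensed = model @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(model, sensed)
assert np.allclose(R, Rz) and np.allclose(t, [0.5, -0.2, 1.0])
```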

For example, NASA’s Robonaut 2 humanoid is being trained to recognise, pick up, and use tools in order to assist astronauts – whose time is better spent on science projects than on routine maintenance – in the cramped, dangerous environments of space stations. 

The robot is also being trained to respond to human voice commands. NASA is working with popular tools such as Google’s TensorFlow and Voice API to break these tasks down into smaller, meaningful chunks. The US military is among other organisations training robots to respond to voices (at the US Army Research Lab), in order to minimise soldiers’ need to type commands on a computer.
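
As a purely illustrative sketch of what ‘breaking tasks down into smaller, meaningful chunks’ might mean in practice, the toy parser below turns a speech-to-text transcript into an (action, tool) pair. The vocabulary is invented for illustration and is not Robonaut 2’s actual command set:

```python
# Hypothetical sketch: turning a speech transcript into discrete robot
# actions. The transcript would come from a speech-to-text service; the
# verbs and tool names below are invented, not NASA's command set.
KNOWN_ACTIONS = {"fetch", "release", "stow"}
KNOWN_TOOLS = {"hammer", "screwdriver", "torque wrench"}

def parse_command(transcript: str):
    """Break a transcript into an (action, tool) chunk, e.g.
    'fetch the torque wrench' -> ('fetch', 'torque wrench')."""
    text = transcript.lower()
    action = next((w for w in text.split() if w in KNOWN_ACTIONS), None)
    tool = next((t for t in sorted(KNOWN_TOOLS, key=len, reverse=True)
                 if t in text), None)
    if action is None or tool is None:
        return None  # unrecognised: ask the operator to rephrase
    return action, tool

print(parse_command("Robonaut, fetch the torque wrench"))
# -> ('fetch', 'torque wrench')
```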

Virtualisation

Teaching robots to manipulate different objects successfully and efficiently is, again, as much a matter of intelligence and granular data as of physical engineering excellence.

In recent years, some researchers have begun using combinations of virtualisation, 3D digital modelling, AI, and reinforcement learning so that robots are able to analyse thousands of different scenarios virtually before deciding the best course of action and actually picking up or using an object. This can drastically reduce the amount of time it takes to train robots.
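
As a toy illustration of that virtual trial-and-error, the sketch below scores many candidate grasps against a stand-in ‘simulator’ and only executes the best one. Everything here – the environment, the parameters, the scoring – is invented for illustration, and simple random search stands in for the reinforcement learning such systems actually use:

```python
import random

def simulate_grasp(angle_deg: float, force_n: float) -> float:
    """Stand-in for a physics simulator: scores one candidate grasp.
    A real system would roll the grasp out in a 3D physics engine; this
    toy function just prefers a ~45-degree approach and moderate force."""
    return -abs(angle_deg - 45.0) - 2.0 * abs(force_n - 10.0)

def plan_grasp(n_trials: int = 1000):
    """Try many candidate grasps virtually; return the highest-scoring one."""
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = (random.uniform(0, 90), random.uniform(1, 20))
        score = simulate_grasp(*candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

angle, force = plan_grasp()
print(f"execute grasp: approach {angle:.1f} deg, force {force:.1f} N")
```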

For example, in July 2018 OpenAI created a reinforcement learning system that allowed a robot hand engineered by the Shadow Robot Company to gather the equivalent of a century of real-world testing experience in just 50 hours, and teach itself how to manipulate a cube. The project was called Dactyl, from the Greek daktylos, meaning finger.
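
The scale of that acceleration is easy to sanity-check from the figures quoted: a century of experience compressed into 50 hours implies the simulation ran roughly 17,500 times faster than real time, achieved by running large numbers of simulations in parallel:

```python
# Back-of-envelope check on the Dactyl figures quoted above.
years_of_experience = 100
wall_clock_hours = 50
sim_hours = years_of_experience * 365.25 * 24   # ~876,600 hours of practice
print(f"effective speedup: ~{sim_hours / wall_clock_hours:,.0f}x real time")
# -> effective speedup: ~17,532x real time
```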

The space between us

In many sectors, particularly in extreme environments such as nuclear energy, space exploration, and offshore engineering, there are other difficulties for robots to overcome in order to safely manipulate objects. Again, these have little to do with the physical engineering quality of robot hands.

While some robotic manipulation will be controlled remotely by human operators, many robots will need to be able to act autonomously, semi-autonomously, or via assisted control systems in order to overcome any latency in communications between controller and machine.

Even a fraction of a second’s communications delay between a human operator and a robot could make attempted real-time manipulation of hazardous objects or materials via a haptic system or joystick dangerous, so it is critical to develop a measure of robotic autonomy for some tasks.

This is particularly true for undersea environments, where salt water’s natural conductivity severely attenuates radio signals, and in interplanetary or deep space exploration, where the extreme distances between operator and robot make even light-speed communications too slow for real-time human control.
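
A quick calculation shows why distance alone rules out real-time teleoperation, whatever the quality of the hand at the far end. One-way signal delays at light speed (distances approximate):

```python
# One-way signal delay at the speed of light (distances approximate).
C = 299_792_458  # speed of light, m/s
DISTANCES_M = {
    "low Earth orbit (~400 km)": 4.0e5,
    "the Moon (~384,000 km)": 3.844e8,
    "Mars at closest approach (~54.6 million km)": 5.46e10,
}
for destination, metres in DISTANCES_M.items():
    print(f"{destination}: {metres / C:.3f} s one way")
# -> ~0.001 s, ~1.3 s, and ~182 s (about three minutes) respectively.
#    The last two are far too long for the closed-loop haptic control
#    described above.
```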

Soft bots

Another area of advancement is in new materials, points out CBInsights – rightly on this occasion. Because they are made from elastic substances and their design is often inspired by nature, so-called soft robots are uniquely suited for tasks that conventional robots can’t accomplish, says the report. For example, in healthcare this can include surgery and rehabilitation – especially minimally invasive surgery (MIS). The report explains:

Because of their pliable materials, soft robots are well-suited to navigate small areas inside the human body without risking tissue damage. Their added flexibility may also reduce post-op pain for patients. However, while soft robotics technology has promising implications for MIS, it still faces a set of challenges, including ‘low force exertion, poor controllability, and a lack of sensing capabilities’, according to a study published by Frontiers. We expect to see companies in the space addressing these issues. 

Soft robotic devices are also assisting with physical therapy and rehabilitation. For example, soft robotics could help restore hand motor function impaired by any of a number of debilitating neurological disorders.

Which neatly brings us back to the bio-machines of R.U.R, a century on from their introduction to speculative fiction.

My take

Robots’ century-long journey into the real world continues, albeit still very slowly. Realising humans’ dream – or dystopian nightmare – of anthropomorphic machines that can take on repetitive, dangerous, or remote tasks with minimal risk to people means bringing together a huge number of different technologies, plus a vast amount of data.
