Social and health care is one of the hotspots for the transformative application of robotics and AI.
The market has particular appeal in both the UK and US, where 65+ populations are set to soar over the next 20 years.
So the opportunity for the sensitive application of new technologies – including humanoid care robots, tele-robotics, tele-medicine, autonomous vehicles, assistive devices, light exoskeletons, smart domestic machines, and more – is huge, given the current shortage of qualified staff.
Human skeletons, musculature, and internal organs can be scanned and mapped in minute detail, as can the genome itself. So robotic-assisted microsurgery and the use of AI in disease detection, diagnosis, prevention, and cures are good applications of these technologies, which can both extend and complement human skills – as the developers intend.
But the well-being of elderly, sick, and disabled people is a different matter. It’s a far more controversial area for robotics and AI, because of the implication that these technologies will dehumanise care, in every sense.
That’s the ethical dimension, but there are countless technology challenges before advanced social-care technologies can become a reality in the first place.
In-depth research is being carried out worldwide, including at several universities, such as Oxford, Bristol, and Hertfordshire (where a typical townhouse has been filled with autonomous robots to explore how human beings interact with them). Yet the obstacles remain numerous, according to UK-RAS, the organisation for AI/robotics research in the UK.
In its special report, Robotics in Social Care: A Connected Care Ecosystem for Independent Living, the organisation says that major advances are needed in many areas. These are:
(1) Scene awareness
GPS doesn’t operate inside buildings, so AI needs to advance to the point where it can map and understand home environments that are designed for people, not machines. This demands the interoperation of several technologies, including: object recognition; machine learning; smart home integration; and environmental tagging, along with the flexibility to understand that human environments are subject to constant change.
(2) Social intelligence
To a computer, a human being is just a collection of pixels, despite advances in facial recognition systems. And that’s just the tip of a very big iceberg. UK-RAS says:
To be able to help people, robots will first need to understand them better. This will involve being able to recognise who people are, and what their intentions are in a given situation, then to understand their physical and emotional state at that time, in order to make good judgements about how and when to intervene.
Many human beings find this difficult, so the task facing researchers is daunting – especially for those who (as Joichi Ito of MIT’s Media Lab observed at Davos earlier this year) prefer the binary world of computers to the messy, emotional world of people.
(3) Physical intelligence
Some humanoid robots can walk, talk, climb stairs, and even run (in the case of Honda’s ASIMO), but robots still fall well short of the sophistication needed for close physical contact with vulnerable people.
Emulating the dexterity and sensitivity of a skilled doctor, nurse, or care worker – not to mention their tact and strength when lifting or dressing patients – is a huge challenge for a machine. These aren’t assembly line processes. Safety must be paramount, and many robot designers are moving away from replicating humanoid forms. Wheels are safer and more stable than legs in many situations, for example.
(4) Speech and communication
Anyone who has first-hand experience of advanced robots such as PAL Robotics’ REEM, or Aldebaran/SoftBank’s NAO and “emotion-sensing” Pepper, is aware that expectation management is a major factor. The gap between what people expect from interacting with a robot (intelligent one-to-one conversation) and the reality (a clever mix of programmed phrases and cues) remains a chasm.
The apparent sophistication of some robots’ speech masks an almost complete lack of understanding of what people are actually saying to each other: most robots that have onboard intelligence respond to trigger words with a range of pre-programmed actions. Smartphone/home hub systems such as Alexa and Siri are much better (and constantly improving), but these are still a long way from natural language communication.
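The trigger-word pattern described above can be sketched in a few lines of Python (an illustrative toy, not any vendor's actual dialogue engine): the robot never parses the sentence, it simply scans for keywords and plays back a canned response:

```python
# Hypothetical trigger-word dispatch: no understanding, just keyword matching.
RESPONSES = {
    "hello": "Hello! Nice to meet you.",
    "dance": "<plays pre-programmed dance routine>",
    "weather": "I'm afraid I can't see outside.",
}
DEFAULT = "I'm sorry, I didn't understand that."

def respond(utterance: str) -> str:
    words = utterance.lower().split()
    for trigger, action in RESPONSES.items():
        if trigger in words:
            # First matching trigger wins; the rest of the sentence is ignored.
            return action
    return DEFAULT
```

Note how brittle this is: any sentence containing "dance" triggers the routine, and anything else falls through to an apology – exactly the gap between apparent and actual understanding described above.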
Natural language processing in the cloud, via AI services such as IBM’s Watson, is a partial solution, but most robotics researchers accept that onboard or local intelligence will be vital: robots can’t rely on the stability, security, and speed of future networks, for example.
Automatic speech recognition (ASR) systems are improving, too, but these often fail in noisy environments, and with impaired speech, different accents, and local dialects. Just as important, human beings need to be able to hear, see and/or understand the robot.
(5) Data vs. security
UK-RAS says that robotics and autonomous systems will benefit from “learning offline and on the job”, based on the latest advances in machine learning. It adds:
Acquired knowledge should be transferable between robot platforms and tasks, given appropriate safeguards around data privacy.
However, this raises interesting questions about data protection, privacy, security, transfer, and sovereignty – especially post-GDPR. Will people have to opt in to being recognised, treated, and discussed by robots, and can they insist on being forgotten by them? And how can robotics and AI developers ensure that social/healthcare robots are safe from hackers – especially when they might have cameras and microphones, and work in a private residence or hospital?
These are considerations that affect all IoT devices to a degree, and are also relevant in the next two categories, below.
(6) Memory
Human beings make sense of the world through memory, as much as via their senses. Autonomous machines will need to do the same: robots working in social and healthcare environments will need to retain a memory of previous events and be able to associate it with people’s routines and preferences, says UK-RAS.
To be useful, this data will need to be classified and tagged, and retrieved via contextual and user-provided cues – but it will also need to conform to data regulations such as GDPR, or whatever equivalent is in force at the time.
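One way to picture such a memory is a tagged event store with cue-based retrieval and a GDPR-style "right to erasure". This is a hypothetical Python sketch, not an implementation from the UK-RAS report:

```python
from dataclasses import dataclass

@dataclass
class MemoryEvent:
    """One remembered event, tagged for later cue-based retrieval."""
    person: str
    description: str
    tags: frozenset[str]

class CareMemory:
    """Tagged event store with contextual recall and per-person erasure."""
    def __init__(self):
        self.events: list[MemoryEvent] = []

    def remember(self, person: str, description: str, tags: set[str]) -> None:
        self.events.append(MemoryEvent(person, description, frozenset(tags)))

    def recall(self, *cues: str) -> list[str]:
        # Return every event whose tags cover all of the supplied cues.
        cueset = set(cues)
        return [e.description for e in self.events if cueset <= e.tags]

    def forget_person(self, person: str) -> None:
        # Right to erasure: drop every event involving this person.
        self.events = [e for e in self.events if e.person != person]
```

A production system would be far richer, but even this sketch shows the tension the report hints at: the more a robot remembers about routines and preferences, the more personal data it holds subject to regulation.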
(7) Autonomy and safe failure
Care robots will need to be effective 24/7, says UK-RAS. As a result, self-monitoring, self-diagnosis, and autonomous failure recovery will need to be robust. When systems fail, it’s important that they can do so without compromising user safety.
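In software terms, safe failure often comes down to a watchdog pattern: run health checks continuously, and drop into a safe stop the moment any check fails, rather than carrying on in a degraded state. A minimal, hypothetical Python sketch (the check names are invented for illustration):

```python
import enum

class Mode(enum.Enum):
    ACTIVE = "active"        # normal operation
    SAFE_STOP = "safe_stop"  # motors stopped, alert raised, awaiting human help

class Watchdog:
    """Self-monitoring sketch: any failed health check triggers a safe stop."""
    def __init__(self, checks):
        self.checks = checks  # mapping of name -> zero-argument callable returning bool
        self.mode = Mode.ACTIVE
        self.faults: list[str] = []

    def tick(self) -> Mode:
        # Re-run every health check; record which ones failed this cycle.
        self.faults = [name for name, ok in self.checks.items() if not ok()]
        if self.faults:
            # Fail safe: stop moving and call for help rather than continue impaired.
            self.mode = Mode.SAFE_STOP
        return self.mode
```

The hard research problems sit behind those callables – reliable self-diagnosis and autonomous recovery – but the design principle is that failure must never leave the robot moving blindly around a vulnerable person.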
(8) Power and energy
Imagine a smartphone that weighs as much as a person and is full of electric motors. It stands to reason that battery life is a major challenge in autonomous robots that move around and – in the case of humanoid machines – carry their own weight.
Assistive technologies will need to be designed to optimise energy efficiency and with a clear strategy for reuse and recycling, cautions UK-RAS. At the same time, mobile systems should be able to recharge autonomously, without the need to carry heavy, expensive, and/or environmentally hazardous batteries.
(9) Dynamic autonomy and responsibility
In many cases, human care providers will need to be able to intervene by overriding or tele-operating robotic care systems, says UK-RAS. However, while variable autonomy is an emerging discipline, it is still in its infancy.
(10) Validation and certification
Future care systems will need to be tested, benchmarked, regulated, and certified to perform their tasks reliably, safely, and securely, says UK-RAS.
However, one thing omitted from the report is the question of clinical cleanliness and disinfection. In hospitals and care homes, robots will need to prevent the spread of infections, too, which may mean being able to self-disinfect. Either way, easy cleaning will need to be a design factor.
As with autonomous vehicles, the issue of responsibility when a human being is harmed by a robot remains highly contentious. If someone trips over a small delivery robot in the street today, the legal questions remain unresolved; with medical and social care, these issues may be an order of magnitude greater and more complex.
The idea of humans replicating themselves in machine form may be the ultimate in narcissism, but it may serve little real-world function. There’s no logical reason why robots should take humanoid form, unless it is necessary to carry out a task more successfully.
The only other reason might be to entertain people or put them at ease, but robots horrify just as many people as they fascinate – another factor that will influence their design.
To avoid this problem, many robots – such as NAO and Pepper – are designed to be ‘cute’, so that people want to take care of them. Reversing that process to create a robot that human beings are happy to be looked after by will be a fascinating challenge.
But whatever form care robots take, interface design will be critical, says UK-RAS: voice command and gesture will both be important, for example. Touch screens are increasingly common in robot design, too, suggesting that many machines may evolve into smartphones on wheels, in effect.
However, some elderly, sick, or disabled people find movement and communication difficult. As a result, UK-RAS notes:
Research will develop new ways of operating or interacting with robotic devices, including virtual reality, smart clothing, and brain-machine interfaces that allow control of a wearable prosthetic or remote assistive device, simply by thinking.
Arguably, it may prove to be the case that creating humanoid robots, specifically, is a waste of engineering and design talent, except insofar as their development creates useful spinoff technologies, as the space programme has consistently done.
But one thing is clear: Doctor Robot will soon be ready to see you, and to help you live independently in your home when you retire. What he, she, or it will look like will depend on solving all of the above problems.
Image credit - Pinterest
Disclosure - Chris Middleton owns several robots, including an Aldebaran Robotics NAO machine.