Being human - Watson boots up a new future for IBM in cloud robotics

Chris Middleton, July 24, 2016
Supercomputer Watson joins quantum services and the TrueNorth chip in helping IBM understand what it means to be human.

Here they come

The 2011 triumph of IBM’s Watson supercomputer in the US game show Jeopardy! was the moment it became a real-world commercial venture within the enterprise services giant. The question-answering system, named in honour of IBM’s first CEO, Thomas J Watson, defeated two former winners of the show, Brad Rutter and Ken Jennings, to clinch a $1 million prize, using onboard (rather than cloud-based) data.

IBM began offering Watson as a cloud service in 2015, and since then the company has found itself at the centre of a range of new, speculative ventures. As we will explore, some of these blur the lines between classical computing, AI, and machine learning, and may point towards a networked future for humanoid robots.

Duncan Anderson, IBM’s European CTO of the Watson Program, picks up the story:

We started to think about how we could make the Watson technology more consumable and less resource intensive. And the emergence of the cloud story and the APIs we provide for cognitive services was one avenue for that.

IBM’s supercomputing cloud services have grown rapidly, with APIs that allow users to deploy as much or as little of Watson’s capability as they wish. Much of the development has focused on enabling more natural communication between people and machines.

Says Anderson:

Watson developer cloud provides APIs around natural language, dialog services, analysing the nature of text, and picking up emotions. For example, does a piece of text have a high degree of anger in it? There is a set of APIs around speech recognition and object recognition, and because these are all offered as discrete cloud services via a pay-per-use licensing model, the costs start very low.
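The emotion-analysis idea Anderson describes is easiest to see with a toy example. The sketch below is emphatically not Watson’s method (the real service uses trained models behind a cloud API); it is a minimal keyword-based scorer, with an invented word list, just to illustrate what “does a piece of text have a high degree of anger in it?” means as a programmable question:

```python
# Toy illustration of emotion scoring on text, in the spirit of the
# cognitive APIs described above. This is NOT how Watson works: a real
# service uses trained models, not a hand-made keyword list.

ANGER_WORDS = {"furious", "outraged", "terrible", "unacceptable", "angry"}

def anger_score(text: str) -> float:
    """Return the fraction of words that signal anger (0.0 to 1.0)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in ANGER_WORDS)
    return hits / len(words)

print(anger_score("This service is terrible and I am furious!"))  # > 0
print(anger_score("Thanks, everything worked fine."))             # 0.0
```

The pay-per-use point in the quote is that a developer consumes such a capability as a discrete, metered cloud call rather than building or hosting the model themselves.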

This ability for developers to start small with Watson and ramp up to much larger cloud deployments has propelled IBM out of its enterprise comfort zone and into the small startup community. Indeed, the move has signalled a 21st Century shift in culture away from the ‘sell, sell, sell’ philosophy of IBM’s first CEO. Says Anderson:

In the old world, our technology required a big employed salesforce, but because these things are designed for self-service, we’ve started actively encouraging startups to use our services – we go to conferences and talk to them.

The nature of what we’re trying to do with Watson is different: natural language conversation, recognising objects... these are new types of service. We’re as likely to see the applications for these come from the small start-up world – where, typically, people are more willing to take risks and try new things – as we are from large enterprises.

While Big Blue and big data would seem to be made for each other, in fact Watson’s natural language processing sits alongside IBM’s quantum computing services in offering a cloud-based solution to a subtly different problem: finding meaning in smaller amounts of complex, unstructured data.

In this sense, Watson is closer to its other namesake, Sherlock Holmes’ medical sidekick, in searching for clues and uncovering their hidden meaning, explains Anderson:

Natural language text processing is a big process. With quantities of unstructured text and documentation, you can apply Watson to that – classifying the contents of a document and trying to increase the efficiency of processing. For example, an insurance company looking at medical reports and trying to pull out medical terms.

Certainly this was Anderson’s own focus as former technical lead within IBM’s insurance business, so perhaps the Watson subtext isn’t so different from IBM’s core story after all: its presence in the banking, insurance, and financial services sectors.

And it’s no coincidence that these industries are in the vanguard of another revolution: robotics, AI, and automation. Says Anderson:

There is a lot of talk around chatbots. Customers could have a conversation with a computer over, say Facebook Messenger or Twitter, and use that chatbot to answer questions about their accounts.

But all industries and sectors are potential customers. You could have a chatbot where you are talking to a government agency, for example, asking questions about your tax situation.

All of which brings us to robotic software’s physical manifestation: humanoid robots.

Here come the robots

The engineering problem of building functioning humanoids that can walk or roll and pick up objects has largely been solved. However, science fiction has taught us to expect something more from our machine counterparts: natural language conversation. In most cases, the science fact is a range of stilted, programmed responses based on trigger words and preset behaviours.

Put simply: humanoid robots may be impressive engineering feats, but they’re disappointing to talk to and their blank faces tell us much about the lack of real intelligence behind the camera eyes.
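The scripted, trigger-word behaviour described above can be sketched in a few lines. The triggers and replies here are invented for illustration; the point is the fixed fallback that makes such robots feel stilted:

```python
# Minimal sketch of a scripted, trigger-word chatbot, the style of
# interaction the article contrasts with natural language understanding.
# Trigger phrases and replies are invented for illustration.

SCRIPT = {
    "hello": "Hello! How can I help you?",
    "weather": "I am sorry, I cannot check the weather.",
    "bye": "Goodbye!",
}

def scripted_reply(utterance: str) -> str:
    """Return the preset reply for the first known trigger word, else a canned fallback."""
    for word in utterance.lower().split():
        reply = SCRIPT.get(word.strip(".,!?"))
        if reply:
            return reply
    # Anything off-script falls through to one fixed response, which is
    # exactly why purely scripted robots are disappointing to talk to.
    return "I do not understand."

print(scripted_reply("Hello there"))
print(scripted_reply("What did you think of Jeopardy?"))
```

However elaborate the script, any question its author did not anticipate lands in the fallback branch, which is the gap cloud-based natural language processing aims to close.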

The problems that IBM is trying to solve with Watson are exactly the same as those facing developers in the humanoid robotics community: natural language processing, context, meaning, and object recognition – those subtleties of human behaviour that are distinctly different from number-crunching. Machine learning is an extraordinarily complex challenge.

IBM partner Aldebaran Robotics (founded in France and now majority owned by Japan’s SoftBank) is the company behind the NAO, Pepper, and Romeo machines. Humanoid research platform NAO is well known from TV appearances, while the larger ‘emotion sensing’ Pepper is on general sale in Japan, where all production runs to date have sold out in minutes.

The challenge facing companies such as Aldebaran has long been putting sufficient intelligence and processing power into a heavy machine that has to walk, talk, and navigate terrains, while controlling dozens of power-hungry servos. That’s a big ask: imagine a massive smartphone full of motors and weighing as much as a child, and then consider how long the battery might last.

Watson may be the answer. IBM has partnered with Aldebaran to make Watson available in the cloud as a business service to humanoid robots and their owners, says Anderson:

This ability to link the Aldebaran robots to a cloud service [is a major differentiator]. A robot is effectively a piece of physical equipment, so the amount of processing power you can put in is very small. But once you break out into the cloud, that blend of on- and off-board processing has very interesting potential.
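The on-/off-board blend Anderson describes can be sketched as a simple fallback pattern. Everything here is a hypothetical illustration, not Aldebaran’s or IBM’s actual architecture: the robot tries a (simulated) cloud cognitive service first and drops back to a small on-board responder when the network is unavailable:

```python
# Hypothetical sketch of blending on-board and cloud processing.
# Function names and behaviour are assumptions for illustration only;
# they do not reflect the real Aldebaran/Watson integration.
from typing import Optional

def cloud_answer(question: str, online: bool) -> Optional[str]:
    """Stand-in for a heavyweight cloud NLP call; None when offline."""
    if not online:
        return None
    return f"(cloud) Detailed answer to: {question}"

def onboard_answer(question: str) -> str:
    """Tiny on-board fallback: limited memory, compute, and battery."""
    return "(on-board) Sorry, I need a network connection for that."

def ask(question: str, online: bool = True) -> str:
    """Prefer the cloud service; fall back to on-board processing."""
    return cloud_answer(question, online) or onboard_answer(question)

print(ask("What restaurants are nearby?", online=True))
print(ask("What restaurants are nearby?", online=False))
```

The design point is that the robot’s chassis only needs enough compute for the fallback path; the heavy natural-language work lives in the cloud, where power and weight are not constraints.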

So why has it taken so long for the obvious solution, networked intelligence, to present itself? Anderson suggests:

The way that the market has matured [is partly to blame]. Cloud-based cognitive APIs are a relatively new concept, stuff that has only really been available for the past year or 18 months.

Now that cloud-based APIs to cognitive services are becoming established, it stands to reason that the next generation of innovative startups will build on industry-specific datasets. WayBlazer is one such company. Describing itself as a ‘cognitive travel agent’, the startup licenses Watson in the cloud to provide a natural-language interface to local travel- and tourism-specific data.

It already has real-world applications. Hilton Hotels is piloting a Watson-powered humanoid concierge, Connie – a NAO robot named after the hotel group’s founder, Conrad Hilton – which guests can chat to and ask to recommend local attractions, restaurants, and more.

In this way, Connie poses a real challenge to the instinctive idea that customer-facing service jobs are best carried out by other human beings – people who typically don’t have instant recall of terabytes of local (perhaps crowdsourced) data. Says Anderson:

Connie is at pilot stage. We don’t know what the reaction will be: it’s a new context for humans to deal with, and Connie is collecting information about what they actually ask rather than what we think they might ask, so that a robot can understand the different types of questions. It needs to be able to handle that. You can’t have something that’s completely scripted.

And this is the ultimate irony of customer service today: as more and more human-to-human tasks are becoming entirely scripted – forcing people to behave like automatons – robots are tearing up their scripts and beginning to improvise.

My take

It’s clear that we’re seeing the emergence of a new type of IBM for the 21st Century, an IBM that is slowly revealing itself in the cloud via projects like Watson and the new quantum computing service. And it is revealing itself in other innovations, too, such as its next-gen ‘neuromorphic’ chip, TrueNorth, a synapse-like architecture of nodes that seeks to mimic the way the human brain operates.

So in a sense, this new IBM is much more human than the number-crunching sales machine of old – or rather, more humanoid. A Big Blue humanoid that is learning to mimic its carbon-based counterparts, a quantum-powered robot friend to large enterprise and small startup alike.

One that may wield the power of data to hasten the demise of many human-based, human-facing industries. Soon we may all be talking to machines that are learning all about us from how we talk to them.

But as Microsoft discovered with its Tay chatbot, which learned racism and homophobia from internet trolls, the challenges facing robots in our complex human world cannot be summed up in a simple answer to a simple question.

Life isn’t a quiz show. At least, not yet.


Disclosure - The author owns a NAO robot named Stanley, but has no commercial relationship with either Aldebaran Robotics or with IBM.
