To be; to think; to know

Peter Coffee, October 31, 2018
We are on a frontier of old concepts in new contexts – and we must arrive at new understandings, says Salesforce's Peter Coffee.

When Apple introduced the Series 4 update to its Apple Watch, one piece of the back story jumped out of the noise of the shiny-new-object reviews. Among the widely-reported features of the new device is an ability to detect that the wearer may have fallen, and to make an autonomous phone call for assistance if the wearer does not move (or respond to an alert) within one minute. What keeps this from being just a dubious technology trick is this datum:

Apple says that to build its fall detection algorithms, it used data from a study involving 2,500 participants over several years, and it also worked with assisted living facilities and movement disorder clinics.

It seems important that Apple didn’t build something based on some abstract concept of “falling”: they went out and studied the real thing. In a world where computation is abundant, and the not-quite-inevitability of oft-misdescribed “Moore’s Law” progress makes us continually expect more for less, the value of measuring reality – and from that, of knowing stuff – will rise above other measures of competitive advantage.

“Excuse me, did you just fall?” No, that’s not something that one person often needs to ask another. Knowing the difference between someone slipping on the ice and falling flat on their back, versus someone executing a perfect judo ukemi maneuver, is the kind of unconsidered analysis that people do all the time – but that we struggle to capture in code. This is one tiny node in a huge network of “common sense” challenges, which have been recognized and addressed for at least the past sixty years.
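Apple has not published its fall-detection algorithm, so the following is only an illustrative sketch of the general shape such logic might take: a hypothetical `FallDetector` class with invented thresholds, combining an impact spike with the widely reported one-minute stillness window before calling for help.

```python
# Illustrative sketch only: Apple's actual algorithm is unpublished.
# The class, thresholds, and sample format here are all hypothetical.

IMPACT_G = 2.5          # acceleration spike (in g) treated as a possible fall
STILLNESS_G = 0.1       # deviation from 1 g below this suggests no movement
RESPONSE_WINDOW_S = 60  # one minute without movement triggers a call

class FallDetector:
    def __init__(self):
        self.fall_time = None  # timestamp of the suspected fall, if any

    def on_accel_sample(self, magnitude_g, now):
        """Feed accelerometer magnitude samples (in g) with timestamps (in s)."""
        if self.fall_time is None:
            if magnitude_g >= IMPACT_G:
                self.fall_time = now           # possible fall: start the clock
        else:
            if abs(magnitude_g - 1.0) > STILLNESS_G:
                self.fall_time = None          # wearer moved: cancel the alert
            elif now - self.fall_time >= RESPONSE_WINDOW_S:
                self.fall_time = None
                return "call_for_help"         # still for a full minute
        return None
```

The point of the article stands out even in this toy: the constants are exactly the part that cannot be invented at a desk. Apple's years of study with 2,500 participants were, in effect, the work of choosing them from reality rather than from an abstract concept of "falling."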

(Coincidentally, “common sense” acquisition and understanding is a notable focus of the Allen Institute for Artificial Intelligence – whose founder, Microsoft co-founder and Renaissance man Paul Allen, recently left us far too soon, but also left us a legacy of extraordinary contributions and continuing programs of great promise. Sometimes, saying “he changed the world” is not an exaggeration.)


Looking back at the earliest origins of artificial intelligence research, generally dated in August 1955, it now seems clear that artificial thinking was thought to be the defining goal. “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves,” wrote the famous quartet of McCarthy, Minsky, Rochester and Shannon in proposing a “2 month, 10 man study” (yes, they said “man” – it was the 1950s) that they projected would deliver “a significant advance.”

Had they succeeded, despite the handicap of having no women in the room, the unsocialized artificial thinker that they might have produced would have been even less useful than a similarly behaved human being. At least a human has some capacity for recognizing its own limitations: “I’m not a psychopath, I’m a high-functioning sociopath. Do your research,” says one of the most contemporary dramatic interpretations of the character of detective Sherlock Holmes.

The problems of relying on a merely “smart” machine, lacking any self-awareness of its ignorance, are barely suggested by the comment of researcher Douglas Lenat, who said in 1997: “Before we let robotic chauffeurs drive around our streets, I'd want the automated driver to have general common sense about the value of a cat versus a child versus a car bumper, about children chasing balls into streets, about young dogs being more likely to dart in front of cars than old dogs (which, in turn, are more likely to bolt than elm trees are), about death being a very undesirable thing, and so on. That ‘and so on’ obscures a massive amount of general knowledge of the everyday world without which no human or machine driver should be on the road, at least not near me or in any populated area.”

Before we say that the problem is therefore one of acquiring more knowledge of the world, in a machine-usable form, let’s be careful what we wish for. Consider the thought experiment of the fictional researchers who tried to teach a robot how to retrieve its own spare battery, from a locked room, with a time bomb in the room that would soon go off. “It had just finished deducing that pulling the wagon out of the room would not change the colour of the room's wall, when the bomb exploded. ‘We must teach it to ignore irrelevant implications,' said the designers.” On the next hypothetical trial, the robot successfully discards thousands of irrelevant implications – but while it is still working on that process of innumerable eliminations, the bomb goes off. And so on.

Summarizing this challenge, often called the “frame problem,” Daniel Dennett went on to say (in 1993) that:

What is needed is a system that genuinely ignores most of what it knows, and operates with a well-chosen portion of its knowledge at any moment. Well-chosen, but not chosen by exhaustive consideration. How, though, can you design a system that reliably ignores what it ought to ignore under a wide variety of different circumstances in a complex action environment?

Twenty-five years later, we would probably feel that we should add:

And once you have done that, how much insurance will you need against the day when the system ignores something that should not have mattered, but turned out, against all odds, to be crucial?

Once in a while, that odd noise will turn out to be the sonic boom from a falling chunk of asteroid – which possibility we had pre-classified as too unlikely to consider. (Litigation and liability left as an exercise for the student.)
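Dennett's “well-chosen portion” can be caricatured in a few lines. This is a hypothetical toy, with invented facts and an invented scoring rule: instead of deriving every implication (like the robot pondering wall colour while the bomb ticked), the reasoner keeps only the facts whose tags overlap its goal.

```python
# Hypothetical toy illustration of the frame problem trade-off.
# The facts, tags, and relevance scoring are invented for this sketch.

FACTS = {
    "battery is on the wagon": {"battery", "wagon"},
    "bomb is on the wagon": {"bomb", "wagon"},
    "pulling the wagon moves everything on it": {"wagon", "move"},
    "wall colour is unchanged by pulling the wagon": {"wall", "colour"},
}

def relevant_facts(goal_terms, facts, budget=3):
    """Return at most `budget` facts, ranked by tag overlap with the goal.

    A cheap relevance cut stands in for exhaustive consideration; facts
    with no overlap at all are genuinely ignored, not merely deferred.
    """
    scored = sorted(
        facts.items(),
        key=lambda kv: len(kv[1] & goal_terms),
        reverse=True,
    )
    return [fact for fact, terms in scored if terms & goal_terms][:budget]

goal = {"battery", "wagon", "bomb"}
print(relevant_facts(goal, FACTS))  # the wall-colour fact never makes the list
```

The catch is exactly the one the article names: the cut is only as good as the tags, and the day the bomb's relevance was filed under a tag the goal did not mention is the day the insurance policy comes due.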

We are on a frontier of old concepts in new contexts – and we must arrive at new understandings before we can make good choices about who is obligated to think about what. In particular, we are accustomed to static and clearly unintelligent tools, for whose actions some person is always absolutely responsible: what happens when that responsibility starts to be more distributed?

A hammer, for example, does nothing on its own. It merely is – but we are well along the path to active tools, such as the robot that we see putting up a wall in an on-line video. That robot responds to its environment, to a limited degree, and therefore could (generously) be said to have started climbing the ladder, ascending from “to be” to “to think.”

On the other hand, that impressively capable robot would put up that very same wall, regardless of whether the thing being walled out was a light breeze or an approaching hurricane. It would have no awareness of either, and even less understanding of why the difference might matter.

For machines to be called “intelligent,” therefore, and not merely “assistive” or (on a good day) “autonomous,” they will have to (at least) simulate something that resembles understanding of the real world’s subtleties and their consequences. To return to our original example: the Apple Watch, putting barely a toe on that rung of the ladder, does not merely detect that its wearer has gone from vertical to horizontal. It has, if only barely, some internal representation of the idea of “falling,” based on a determined effort to capture and work with real-world knowledge.

When we realize that this is barely a start, a millimeter of progress on a journey of light years, perhaps it explains why we call ourselves homo sapiens rather than homo rationo. Knowing is so much harder than thinking.