The flawed quest for Artificial General Intelligence - do you need to know how the mind works for AGI?

Neil Raden, June 16, 2022
Summary:
The debate over so-called "sentient AI" is raging again. But the context is the problematic pursuit of Artificial General Intelligence - and the misguided idea that AGI should resemble the human brain.


François Chollet (@fchollet, Senior Staff Software Engineer at Google) posted on Twitter on May 21, 2022:

The dominant intellectual current in AI research today is the belief that we can (and soon will) create human-level AI without having to understand how the mind works (and without even having a proper definition of intelligence) through pure behaviorism and gradient descent.

This is not a position that thoughtful AI researchers share (Editor's note: it's unclear whether Chollet himself holds this position; his subsequent tweets seemed to critique it). In her paper Why AI Is Harder Than We Think, Melanie Mitchell of the Santa Fe Institute describes the fallacies of AI development:

Fallacy 1: Intelligence is all in the brain: She questions the common assumption that "intelligence can in principle be 'disembodied,' separated conceptually from the rest of the organism it occupies, because it is simply a form of information processing." Instead, evidence from neuroscience, psychology, and other disciplines suggests that human cognition is deeply integrated with the rest of the nervous system.

Behaviorism and gradient descent? Behaviorism centers on operant conditioning, a method of learning in which the consequences of a response determine the probability of its being repeated. I am trying to imagine how this would work in a “human-level AI” being trained to drive a car. If it drives 100 yards and doesn’t hit anything or kill anyone, there would be a reward. What kind of reward would that be? What kind of punishment? How many trillions of these little experiments would it take? And gradient descent is simply an application of calculus and linear algebra. I don’t see how either of these could yield reliable decision-making in a “human-like way.”
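To make concrete what "behaviorism and gradient descent" amount to mechanically, here is a minimal, purely illustrative Python sketch: a reward loop in the operant-conditioning style for that hypothetical 100-yard driving trial, followed by a plain gradient-descent step on a toy loss. The scenario, reward values, and learning rates are invented for illustration; no real system works this crudely, which is rather the point.

import random

# Operant-conditioning flavor: nudge a scalar "drive safely" tendency up when a
# simulated 100-yard segment ends without incident, and down otherwise.
policy = 0.5                            # probability of a safe segment (hypothetical)
learning_rate = 0.05
for trial in range(10_000):
    crashed = random.random() > policy  # crude stand-in for the driving outcome
    reward = -1.0 if crashed else 1.0   # the "consequence of a response"
    policy = min(1.0, max(0.0, policy + learning_rate * reward))

# Gradient-descent flavor: minimize the toy loss (w - 3)^2 by following its derivative.
w = 0.0
for step in range(100):
    grad = 2 * (w - 3)                  # d/dw of (w - 3)^2
    w -= 0.1 * grad                     # step against the gradient

print(f"learned policy ~ {policy:.2f}, gradient-descent w ~ {w:.3f}")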

The problem is that a human-level AI developed without understanding how the mind works will be an entity that does not share the same reality as humans.

Therefore it is unlikely that one could ever communicate with it flawlessly. Presumably, this “human-level AI” would be deployed in digital as well as physical settings, and would need to be generic enough to be universally applicable to, and indifferent to, specific morphologies, and, in the physical case, to wherever it happens to exist.

The problem with such embodiment is semantic comprehension expressed in natural language. At last count, there are over 7,000 human languages. Even familiar and straightforward concepts, such as "harm," cannot be naively mapped onto the AI's perspective, because it is challenging to build an AI that understands what constitutes harm and avoids inflicting it.

That means that this AI will have a radically different perspective from humans and an utterly different reality. It's doubtful that AI and humans could ever arrive at even an approximation of a common language. Consider today’s so-called “conversational AI,” a contrivance of trained NLP domains. How many millions of those would be needed to produce an artificial “general” intelligence?

François Chollet posted on Twitter on May 21, 2022:

What an odd coincidence that the path to AGI, which has eluded us for so long, is clearly to use the tools we have today and do more of what we've been doing so far. It reminds me of all those times I found my lost keys under a lone streetlight.

Fallacy 2: Narrow intelligence is on a continuum with general intelligence: This is another fallacy Mitchell points out. Advances in narrow AI aren't "first steps" toward AGI (Artificial General Intelligence) because they still lack common-sense knowledge. Mitchell:

Advances on a specific AI task are often described as “a first step” towards more general AI. Indeed, if people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI. The philosopher Hubert Dreyfus (using a term coined by Yehoshua Bar-Hillel) called this a “first-step fallacy.” As Dreyfus characterized it, “The first-step fallacy is the claim that, ever since our first work on computer intelligence we have been inching along a continuum at the end of which is AI so that any improvement in our programs no matter how trivial counts as progress.” Like many AI experts before and after him, Dreyfus noted that the “unexpected obstacle” in the assumed continuum of AI progress has always been the problem of common sense.

Ray Kurzweil, a Director of Engineering at Google and, I suppose, Chollet’s boss, had this to say:

There is a common viewpoint that reacts against the advance of technology and its implications for humanity.

I address this objection by saying that the software required to emulate human intelligence is not beyond our current capability. We have to use different techniques — different self-organizing methods — that are biologically inspired. The brain is complicated but it's not that complicated. You have to keep in mind that it is characterized by a genome of only 23 million bytes.

The human genome is 6.4 billion base pairs; I wouldn’t call that “only.” Kurzweil seems to be committing Fallacy 2, assuming there is a continuum of development in AI that leads straight to AGI. His claim that “if you understand how the mind works at a high level, then you no longer need to understand the fine-grained details” describes a “biologically inspired” AI.
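As a rough check on those numbers, here is a back-of-envelope Python sketch, assuming the conventional 2 bits per base pair and ignoring whatever compression Kurzweil may have had in mind:

base_pairs = 6_400_000_000      # the diploid human genome cited above
raw_bits = base_pairs * 2       # four possible bases -> 2 bits each
raw_bytes = raw_bits // 8       # 1,600,000,000 bytes, i.e. about 1.6 GB
kurzweil_bytes = 23_000_000     # the "only 23 million bytes" in the quote
print(f"raw genome ~ {raw_bytes / 1e9:.1f} GB, "
      f"roughly {raw_bytes // kurzweil_bytes}x Kurzweil's figure")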

My take

Maybe we don’t need to know exactly how the brain works (or the other sentient areas of the human body, such as the gut and the heart lining, not to mention the microbiome, which someone I know describes this way: “they are in charge; we’re just their meat suits so they can get around”). Perhaps the minimalist approach Chollet describes is adequate to develop a facsimile of a human. But which one? Nelson Mandela, or Charles Manson?

Editor's note: yes, Neil plans on addressing the latest "sentient AI" debate that fired up this week, also out of Google.

 
