Dismissing ethical issues means AI has a long way yet to go in the enterprise

By Neil Raden · October 25, 2019
Summary:
The time has come to think critically about the value of AI as it stands, and whether to be concerned that a concerted effort to press it forward to true intelligence bypasses ethical questions.


Is the rush to intelligent AI raising ethical issues that the industry is ignoring? Initially, industry looked to AI as a technology to automate processes, reduce costs (including labor), and streamline operations, but those gains turned out to be harder to come by and less dramatic than the hype. In fact, the much-discussed job loss question is quickly being refined and minimized. Jobs are more complicated than AI at the moment, and as Oliver Ratzesberger, CEO of Teradata, said on stage yesterday, while individual functions will be replaced, automation will reveal a great deal of work that needs to be done and isn't being done now.

Impediments to broader and more impactful AI include difficulty sourcing data for AI models, a lack of highly skilled practitioners, and a dawning realization that predicting the future with data about the past has serious limits. An article on Medium by Brian Bergstein, "A.I. Isn't as Advanced as You Think," makes the case that AI has a long way to go before it makes a substantial impact on the enterprise, and elsewhere.

And what of the super-intelligence that is promised (and feared)? Reviewing Melanie Mitchell's new book, "Artificial Intelligence: A Guide for Thinking Humans" (personal note: Dr. Mitchell is the co-chair of the Science Board of the Santa Fe Institute, our hometown collective of brainiacs), Bergstein writes:

As Mitchell notes, a lot of triumphal AI narratives are floating around. In these accounts, recent breakthroughs in computer vision, speech recognition, game playing, and other aspects of machine learning are indications that artificial intelligence might surpass human competence in a wide range of tasks in the coming decades.

What is a thinking machine? Facial recognition software may seem like magic, but it's just matrix algebra. A deep learning model has no consciousness. It isn't affected by what it recognizes or fails to recognize. It isn't intelligence. And I'm not convinced it ever will be, for purely scientific reasons I'll discuss below.
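To make the "just matrix algebra" point concrete, here is a minimal sketch in Python (random weights, hypothetical shapes, not any real face-recognition system) of everything a model of this kind does at inference time:

```python
# A toy "recognition" forward pass: nothing but matrix multiplies and
# elementwise arithmetic. The weights are random; a trained model only
# differs in the numbers, not in the kind of operation performed.
import numpy as np

rng = np.random.default_rng(0)

x  = rng.random(64 * 64)                      # a flattened 64x64 "face"
W1 = rng.standard_normal((256, 64 * 64)) * 0.01   # layer 1 weights
W2 = rng.standard_normal((10, 256)) * 0.01        # layer 2 weights

h      = np.maximum(0, W1 @ x)                # matrix multiply + ReLU
logits = W2 @ h                               # another matrix multiply
probs  = np.exp(logits) / np.exp(logits).sum()    # softmax over 10 "identities"

print(probs.argmax())   # the "recognized" class: arithmetic, not awareness
```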

Turing Test time

However, nowhere is this clarion call for the rapid evolution of humanity into something else more recurrent than from Ray Kurzweil, Google's Director of Engineering. At the SXSW Conference in Austin, Texas, in 2017, Kurzweil predicted the technological Singularity would happen sometime in the next 30 years. He has claimed that AI will pass a Turing test with human-level intelligence by 2029, and that the Singularity will arrive by 2045, when humans will "multiply our effective intelligence a billion fold by merging with the intelligence we have created."

Kurzweil envisions this happening as human bodies are filled with billions of nano-bots that transform us into super-humans.

Why should you care? Well, 2029 is only ten years away, so it is obviously something you need to think about, more each day. But if you are planning on human intelligence in robots to supercharge your business, my advice is not to wait underwater.

There are two issues here. The first is the ethical question of whether a few technologists have the right to pursue a device with intelligence at a human or greater level; the second is whether such a device is even possible. The second is more difficult to explain because it involves consciousness and quantum theory, which I'll get to without equations.

Kurzweil, in my opinion, is not thinking clearly about this. Not every human is an acclaimed scientist at Google, and he spends no time considering whether these superhumans would annihilate each other, or form a small elite and wipe the rest out. That apparently isn't his concern:

Change is happening so rapidly that there appears to be a rupture in the fabric of human history. Some people have referred to this as the ‘Singularity.’ There are many different definitions of the Singularity, a term borrowed from physics, which means an actual point of infinite density and energy that's kind of a rupture in the fabric of space-time.

There is also the more fundamental issue of whether or not ethical debates are going to stop the developments (emphasis mine) that I'm talking about. It's all very good to have these mathematical models and these trends, but the question is whether they are going to hit a wall because people, for one reason or another — through war or ethical debates such as the stem cell controversy — thwart this ongoing exponential development.

I strongly believe that's not the case. These ethical debates are like stones in a stream. The water runs around them.

Ethical debates as stones in a stream? When you consider that there are only a handful of companies in the world with the resources and the desire to push AI ahead, and the person in charge of that program at one of them dismisses ethical issues as irrelevant, I see a clear and present danger. I'd stop there, but it leads into the second issue (spoiler alert: it may be impossible). Kurzweil continues:

There is a common viewpoint that reacts against the advance of technology and its implications for humanity.

I address this objection by saying that the software required to emulate human intelligence is not beyond our current capability. We have to use different techniques — different self-organizing methods — that are biologically inspired. The brain is complicated but it's not that complicated. You have to keep in mind that it is characterized by a genome of only 23 million bytes.
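For context, here is a rough reconstruction of where a figure of that order comes from; the compression ratio below is my assumption for illustration, not Kurzweil's published arithmetic:

```python
# Back-of-the-envelope check on the "genome of only ~23 million bytes"
# claim. Figures are approximate assumptions.
base_pairs    = 3.2e9        # approximate human genome length
bits_per_base = 2            # 4 nucleotides -> 2 bits each
raw_bytes     = base_pairs * bits_per_base / 8
compressed    = raw_bytes * 0.03   # assume ~97% redundancy when compressed

print(f"raw: {raw_bytes/1e6:.0f} MB, compressed: ~{compressed/1e6:.0f} MB")
# raw: 800 MB, compressed: ~24 MB -- the order of magnitude he cites
```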

It depends on how you define complicated. What Kurzweil is getting at is the melding of AI and the brain. Here is what Dr. Stuart Hameroff has to say about that:

The AI approach would be, roughly speaking, that a neuron fires or it doesn't. It's roughly comparable to a bit, 1 or 0. It's more complicated than that, but roughly speaking, each neuron has roughly 10⁸ tubulins switching at roughly 10⁷ per second, getting 10¹⁵ operations per second per neuron. If you multiply that by the number of neurons, you get 10²⁶ operations per second per brain. AI is looking at neurons firing or not firing, 1,000 per second, 1,000 synapses. Something like 10¹⁵ operations per second per brain… and that's without even bringing in the quantum business. So that alone was pushing the goalpost way, way downstream into the future.

Keep in mind that the world's fastest supercomputer can handle about 10¹⁵ FLOPS, and that's just one $500 million computer, compared to seven billion human brains.
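Hameroff's numbers are easy to check. A quick script reproducing his back-of-the-envelope estimates, assuming roughly 10¹¹ neurons per brain (a figure he implies but does not state in the quote):

```python
# Reproducing Hameroff's order-of-magnitude arithmetic.
neurons = 1e11                     # assumed neuron count per brain

# Orch-OR view: tubulins are the switching elements
tubulins_per_neuron = 1e8
switches_per_sec    = 1e7
orch_or_ops = neurons * tubulins_per_neuron * switches_per_sec  # ~1e26

ai_ops              = 1e15         # per-brain figure Hameroff attributes to AI
supercomputer_flops = 1e15         # figure cited in the text above

print(f"Orch-OR estimate: {orch_or_ops:.0e} ops/s per brain")
print(f"Gap vs. one supercomputer: {orch_or_ops / supercomputer_flops:.0e}x")
```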

When Kurzweil describes "human levels" of intelligence, he may not include self-awareness, empathy, or consciousness. Hameroff has a theory, which he proposed decades ago. Had Hameroff proposed these ideas alone, he might have been ignored, but his co-theorist was Sir Roger Penrose, an esteemed figure in mathematical physics. Their theory, dubbed "orchestrated objective reduction," or Orch-OR, suggests that structures called microtubules, which transport material inside cells, underlie our conscious thinking.

To say this is controversial is a gigantic understatement, but Penrose's reputation is so towering that:

Their theory is almost certainly wrong, but since Penrose is so brilliant ("one of the very few people I've met in my life who, without reservation, I call a genius," physicist Lee Smolin has said), we'd be foolish to dismiss their theory out of hand.

Penrose explains quantum theory with a simple example:

'An element of proto-consciousness takes place whenever a decision is made in the universe,' he said. 'I'm not talking about the brain. I'm talking about an object which is put into a superposition of two places. Say it's a speck of dust that you put into two locations at once. Now, in a small fraction of a second, it will become one or the other. Which does it become? Well, that's a choice. Is it a choice made by the universe? Does the speck of dust make this choice? Maybe it's a free choice. I have no idea.'

My take

In the near term, machine learning and deep learning do have useful contributions to make. The problem is that humans are experts at anthropomorphizing, referring to their inventions in human terms (Watson), which inflates expectations of their intelligence.

So what is Orch-OR? Basically (as if anything involving quantum mechanics can be basic), the idea is that consciousness is an emergent chaotic process arising from the collapse of the quantum wave function in the microtubules of neurons, with each collapse yielding a conscious moment. And that, on this theory, is the central reason why a computer will never have consciousness. Computers are algorithmic. A finite-precision algorithm cannot track a chaotic process exactly, and chaos is at the root of consciousness.
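What "cannot track chaos exactly" means in practice is sensitive dependence on initial conditions: any finite-precision simulation drifts exponentially away from the true trajectory. A tiny illustration using the logistic map, a standard toy chaotic system (my choice of example, not anything from the Orch-OR literature):

```python
# Two logistic-map trajectories starting 1e-12 apart become completely
# uncorrelated within a few dozen steps. Finite-precision arithmetic
# injects exactly this kind of tiny error at every step, so long-run
# exact prediction of a chaotic process is hopeless.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 20 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.3e}")
# The gap grows from ~1e-12 to order 1: the trajectories diverge entirely.
```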

That is, if you accept the Orch-OR theory. Many decades ago, I was first overwhelmed by Gödel's theorem, which proved that in any sufficiently powerful formal system there are statements that are true but cannot be proven within the system. Penrose, speaking about Gödel's theorem, said:

It told me that whatever is going on in our understanding is not computational.