Thinking about thinking

By Neil Raden · March 27, 2019
Summary:
Thoughts on thinking in an AI context.

[Image: Brain in an open storage jar © ktsdesign - Fotolia.com]
I've been thinking a lot about AI lately. What AI does and what people fear about AI are two different things. I don't see winning at chess or Go as intelligence when a machine sifts through huge amounts of data to determine the best move. That's not thinking.

Situations in the world don't come framed like a chess or Go game. They have no boundaries at all; you don't know what's in the situation and what's out of it.

If I'm stuck in a traffic jam on an interstate, I'm thinking about a lot of things. How long will I be stuck? Will I be late? Should I call ahead? Should I try a U-turn and find another way? How long can I wait to find a bathroom? I'm hungry. An autonomous car isn't thinking this way.

The entire effort of AI is a fight against a computer's rigidity. But there is a trend that AI may be getting closer to actual intelligence. Or is it?

If you think about it, each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.

Playing chess, for example. Facial recognition is more than just associating one image with another. When you see a face, a whole range of emotions floods your consciousness. That's thinking.

But, if over time, everything we assume is part of human intelligence can be replicated in a machine, then what is thinking after all?

The most hyped aspect of AI today is deep learning, an extension of neural net algorithms that go back to the 1980s.

In an article “Does Deep Learning Actually Learn,” Michael K. Spencer writes,

Mike Davies, director of Intel's neuromorphic computing initiative, believes deep learning's approach fails to add up to actual "learning". Davies made his remarks during a talk in February 2019 at the International Solid-State Circuits Conference in San Francisco, a prestigious annual gathering of semiconductor designers.

Davies' rationale is that "back-propagation doesn't correlate to the brain", so he sees deep learning more as a kind of optimization than the truly intelligent computation we find in the brain, or in an AI capable of AGI (artificial general intelligence). Davies, of course, leads another approach; the contention of neuromorphic computing advocates is that it more closely emulates the actual characteristics of the brain's functioning.
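Davies' point that back-propagation is optimization rather than learning can be made concrete: training a network is just gradient descent on a loss function. Here is a minimal illustrative sketch in plain Python (a single weight, not any particular framework or Davies' own work), showing that "learning" here is nothing more than iteratively nudging a number downhill:

```python
# Minimal sketch: "learning" a single weight by gradient descent.
# This is the optimization Davies describes: no understanding,
# just repeated reduction of a loss function.

def train(xs, ys, lr=0.01, steps=1000):
    w = 0.0  # single weight; the model is simply y = w * x
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # the "back-propagation" step: follow the gradient downhill
    return w

# data generated by y = 2x; the trained weight converges toward 2
w = train([1, 2, 3, 4], [2, 4, 6, 8])
```

The machine ends up with a number that fits the data. Whether that deserves the word "learning" is exactly the question Davies raises.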

Don’t count on neuromorphic computing being commercially available anytime soon.

My take

I'm not worried about autonomous cars or facial recognition. That's no different from next-best-offer. It's just curve fitting, as Judea Pearl opines. But will we ever get to the point where a machine thinks like a human? I don't know how to answer that, but I do know that there are 7 billion people in the world, most of whom need something to do, so we need human-thinking machines about as much as we need airplanes that flap their wings. However, there is a clear and present danger in the execution of the AI (and data science) available today.
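Pearl's "curve fitting" quip is easy to make literal: much of applied machine learning reduces to fitting a function to data points. A sketch of the simplest possible case, ordinary least squares for a straight line, using only plain Python (illustrative, not a claim about any specific ML system):

```python
# "Just curve fitting": ordinary least squares for a straight line,
# the simplest instance of fitting a function to observed data.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# points lying on y = 2x + 1; the fit recovers slope 2, intercept 1
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Next-best-offer models, recommendation engines, and much of today's "AI" are elaborations of this same move, with more parameters and fancier curves. None of it is thinking.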