Is AI too dangerous to be left to technologists?

Jerry Bowles, April 30, 2019
MIT and Stanford have launched major multidisciplinary programs to make certain that AI development remains ethical and human-centered.

The breakneck development and application of AI and related cognitive technologies, such as machine learning, voice and face recognition software, the Internet of Things (IoT), robotics, and natural language processing, to perform tasks previously assumed to require human intelligence is creating a lot of angst and soul searching among academics, researchers, analysts, deep thinkers and even some industry leaders.

At the heart of the concern is a simple but profound question: is AI meant to enhance and extend the ability of human beings to work and solve complex problems, or is it destined to replace human labor on a massive scale?

In just the past month, two of America's most prestigious institutions, Stanford and MIT, have addressed those questions by launching major interdisciplinary programs committed to studying, guiding and developing AI technologies and applications that remain "human-centered." Wrote Joi Ito, director of MIT's Media Lab:

“Instead of thinking about machine intelligence in terms of humans vs. machines, we should consider the system that integrates humans and machines: not artificial intelligence but extended intelligence. Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.”

Studying machine behavior

To that end, a group led by researchers at the Media Lab has just proposed a new field of research, called machine behavior, which would study artificial intelligence beyond the narrow parameters of computer science and engineering, extending into behavioral and social sciences such as biology, economics, and psychology. Iyad Rahwan, who leads the Scalable Cooperation group at the Media Lab, wrote:

“We’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously. This calls for a new field of scientific study that looks at them not solely as products of engineering and computer science, but additionally as a new class of actors with their own behavioral patterns and ecology.”

Rahwan, along with his MIT colleagues Manuel Cebrian and Nick Obradovich, collaborated with 20 other researchers from numerous institutions, including Facebook AI, Microsoft, Stanford, the sociology department of Yale University, and Berlin's Max Planck Institute for Human Development, to produce a paper called Machine Behaviour for the British science journal Nature (which explains the British spelling). In it, they argue that understanding the behavior of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms.

Human-Centered AI

Meanwhile, two weeks earlier and 3,000 miles away, Stanford University launched a new multidisciplinary research group, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), whose stated mission is to advance AI research, education, policy and practice to improve the human condition.

Co-directed by John Etchemendy, professor of philosophy and former Stanford University provost, and Fei-Fei Li, professor of computer science and former director of the Stanford AI Lab, the new body aims to become an interdisciplinary hub for researchers, developers, users and policymakers who want to fully understand AI's impact, potential and dangers. Said Li:

“This is a unique time in history. We are part of the first generation to see this technology migrate from the lab to the real world at such a scale and speed. This is a technology with the potential to change history for all of us. Intelligent machines have the potential to do both good and harm. The question is, ‘Can we have the good without the bad?’”

Prof. Li coined the term "human-centered AI" in a March 2018 New York Times op-ed titled How to Make A.I. That's Good for People, in which she pointed out:

“There is nothing ‘artificial’ about this technology: it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.”

A total of 15 courses have been announced for the multidisciplinary institute, which will have an 80-person faculty team and three key research focuses: studying and forecasting the human and societal impact of AI, designing AI applications that augment human capabilities, and developing AI technologies inspired by the versatility and depth of human intelligence.

MIT’s new AI college

MIT’s new initiative comes at a time when the school is already gearing up to accept the first students this fall into its new billion-dollar MIT Stephen A. Schwarzman College of Computing (so named for the billionaire businessman who gave the school a $350 million gift to get it started).

The new college will focus on interdisciplinary AI education, training students of biology, chemistry, history, and linguistics to apply artificial intelligence in their own fields. As MIT President L. Rafael Reif explained:

“The Stephen A. Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools. Just as important, the College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work.”

My Take

The question for those of us who follow the tech industry closely is whether AI is really so important, dangerous and different that it deserves all this extraordinary attention. Will it really “change everything” the way the internet did?

How is it different, say, from the usual tech industry hype cycle, from optimism and overpromising to disillusionment and disappointment, that most highly promoted new technologies go through?

Some skeptics say we’re years away from a point where intelligent machines will be smart enough to “think” and make decisions on their own. Most AI is still very narrow, capable of doing one thing very well but not ten things. Maybe we won’t ever get to the scary part, when autonomous machines make life-and-death decisions.

Fair enough. But count me among those who believe that thinking about and researching the road ahead for what will certainly be the most useful, and potentially most lethal, technology of our time is a good thing. If MIT and Stanford can keep big tech on the ethical, human-centered track when it comes to AI, bless them.

After all, the great reality of life is that Chicken Little only has to be right once.