Should we think of AI as a basic human right?
That was the question raised by Salesforce CEO Marc Benioff in a session at the United Nations General Assembly yesterday.
The backdrop for Benioff’s comments was the debate around the impact of technology and innovation on the UN’s 17 Sustainable Development Goals (SDGs). These were agreed back in 2015 as a set of specific targets based on a sustainable development agenda, with the overall goals of ending poverty, protecting the planet and ensuring prosperity.
When the SDGs were laid down, there was a 15-year window to hit the targets. But Benioff suggested that such is the transformative potential of AI that those SDGs need to be revisited:
When we look across all of our SDGs, and specifically when we look at the cost of food and healthcare and education and jobs, all of these things can be radically accelerated through new technologies. We know, for example, that when it comes to growing more food, that the use of robotics and sensors and AI can dramatically boost crop yields.
It is really time for us to re-think every one of the SDGs and re-evaluate how some of these new technologies can help accelerate [their work]. We can see this incredible potential. I become quite ‘Pollyanna-ish’ when I start to talk about some of these changes in regard to technology and how it can make the world better.
We’ve never seen a technology quite like this before that is going to have widespread implications into each of our individual lives, every one of our companies, every one of our NGOs and every one of our countries. This technology probably requires some kind of Special Envoy from the UN who is dedicated to Artificial Intelligence, just like there are other Special Envoys dedicated to other critical changes going on in the world.
Certainly [AI] needs to be looked at in relation to each one of the SDGs. I don’t know if it needs to be its own SDG, but I do think soon we should look at whether AI is a basic human right that everybody has access to.
That human right is inextricably tied to the concept of equality, defined in this case as universal access to new technologies.
Benioff cited recent comments made by Russian President Vladimir Putin to the effect that the nation that masters AI will control the world. In the past, that idea might have been the start of a James Bond movie plot; now it’s one that’s all too real.
Benioff argued that the world faces what he calls “a crisis of equality”, and AI is a perfect case in point:
We are living in a world where we have amazing technologies, but will we all have these technologies? Will they be offered to us democratically so that we can all participate in these incredible changes? Or is there going to be an incredible new divide? I don’t think anyone wants that.
Somebody who has this kind of [AI] capability is at dramatically more of an advantage than someone who doesn’t. Even from a personal standpoint, I know that if I have AI technology, I’m going to be healthier, I’m going to be wealthier, I’m going to be more educated. But if I don’t have that technology, I’m going to be less of all of those things. That’s true for companies and I think that’s going to be true for countries as well.
He cited an example from the healthcare sector, pointing to the development of pocket ultrasound devices by firms like Philips and Siemens, which allow pregnant mothers to carry out scans of their unborn babies. Coupled with mobile phones, these devices mean that healthcare advice can be transmitted to and from medical experts in areas that lack robust medical centers:
That’s a big game changer. So are we going to give access to that type of technology to everybody?
While remaining in self-styled Pollyanna-mode, Benioff was also ready to acknowledge some more troublesome aspects of the AI revolution:
I am also deeply, deeply worried about some of the dark side of these technologies. Technology itself is really never good or bad; it’s what you do with it that matters. We know that in the local harbor here [in New York] – and this is true of several other nation states – autonomous warships have been launched. These are warships that have no sailors. That’s quite a big change in how we think about our global militaries. We know that we also have planes without pilots. In San Francisco, we have taxis without taxi drivers, buses without bus drivers, trucks without truck drivers, cargo ships without crews.
This AI revolution is going to be quite disruptive. It’s going to change very much the job landscape. So while we can look to many advantages and excitements, the reality is that AI is also going to destroy many millions of jobs. People do have a right to be worried. I worry about it myself. I worry about those who don’t understand the technology and the innovation and how it’s going to impact them. That’s what we’re doing here today. We’re thinking about them. We’re thinking about how are we going to work together to solve these complex issues.
It’s about getting the right mix in place, he concluded:
This is a very, very serious time, yet it’s a very, very exciting time. We have to keep these two ideas in balance.
In recent times here at diginomica we’ve despaired at the near-hysterical threat-level warnings from Elon Musk about out-of-control AI and killer robots stomping through the local neighborhood. It was good, then, to hear a more reasoned set of questions being raised at the UN yesterday.
There was a wider sustainable development and innovation debate that went on, which we’ll pick up on tomorrow, but the idea of AI as a basic human right is a very interesting topic in its own right. Sadly, I fear that in a world where Russian dictators are eyeing up AI as the latest weapon in their arsenal, this is an ambition that will remain elusive. That doesn’t mean we shouldn’t keep aspiring to reach it. Some important questions were aired yesterday; this is only the start of a longer debate.
Image credit - United Nations
Disclosure - At time of writing, Salesforce is a premier partner of diginomica.