AI and ethics - ‘Unbiased data is an oxymoron’

Derek du Preez, October 31, 2019
Summary:
A panel of experts at this week’s IoT Solutions World Congress in Barcelona debated whether and how the industry can promote fairer and more transparent AI development.

Photo of panel on AI and ethics

The technology industry, regulators and privacy advocates continue to push the idea that AI development needs to be ‘responsible and ethical’. However, what that actually looks like - considering so much AI and ML activity is veiled in secrecy - remains an open question. Sure, controls can be put in place and organisations can have strong governance structures, but we are far from an internationally recognised ‘standard’ for how AI should be created and used.

This was the topic of a panel at this week’s IoT Solutions World Congress in Barcelona, where experts and delegates from industry debated the challenges and pitfalls of developing AI applications and tools ethically.

The conversation was one of the more honest ones I’ve listened to on the topic in recent years. Among the conclusions drawn were suggestions that the tech industry should hand over its ‘black boxes’ and trade secrets, and that perhaps, as a society, we should decide simply not to use some technologies.

One of the key points that hit home for the audience came from Ariel Guersenzvaig, Senior Lecturer & Researcher at ELISAVA Barcelona School of Design and Engineering, who declared that there is no such thing as unbiased data. The comment came after a lengthy discussion on removing bias from AI applications, and it clearly resonated. He said:

Unbiased data is an oxymoron. Data is biased from the start. You have to choose categories in order to collect the data. Sometimes even if you don’t choose the categories, they are there ad hoc. Linguists, sociologists and historians of technology can teach us that categories reveal a lot about the mind, about how people think about stuff, about society. And what a good society is.

Data is one thing, but so are the models that you use. What is a good thing that is not biased? What is desirable? That is definitely not unbiased. So even if a dataset looks fair, the way we put it together is not unbiased.

One example Guersenzvaig gave: if we used photos of people on social media for some sort of AI tool, the dataset would be biased, because the photos people choose to upload are presumably (or more often than not) pictures that they like. It would be unfair to compare that dataset to, say, mugshots.
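Guersenzvaig’s photo example is a classic case of selection bias, and it is easy to demonstrate. Here is a minimal sketch - assuming, hypothetically, that each person uploads only the best of several candidate photos, with ‘quality’ as an invented stand-in score:

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical setup: each person has 5 candidate photos with a latent
# 'quality' score, but uploads only their best one (self-selection).
candidates = [[random.gauss(0, 1) for _ in range(5)] for _ in range(10_000)]
uploaded = [max(photos) for photos in candidates]          # what we collect
population = [q for photos in candidates for q in photos]  # what exists

print(f"population mean quality: {mean(population):+.2f}")  # roughly 0.00
print(f"uploaded mean quality:   {mean(uploaded):+.2f}")    # roughly +1.16
```

The collected dataset is systematically skewed before any model ever sees it - exactly the point that ‘the way we put it together is not unbiased’.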

However, a good counterpoint came from David King, CEO, FogHorn Systems, who said that this principle likely only applies when AI is used on people. Industrial processes, he argued, often create and use unbiased datasets. King said:

We’ve done a lot in machine learning and deep learning for industrial assets and processes. And to be honest the fundamental challenge, mostly, is that you don’t get a good dataset. Getting data out of industrial processes is not a simple thing. All you’re trying to do is take all of that non-stop sensor data coming out of the machines and try to get deeper insights into that data.

So I would say there’s not a lot of bias in trying to make a prediction about asset monitoring and optimisation. There’s not a lot of bias in trying to figure out what’s happening in the machine.
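To make the contrast concrete, the kind of asset-monitoring task King describes looks something like the sketch below - a rolling z-score detector over a sensor stream. This is an illustrative toy, not FogHorn’s approach; the window size and threshold are invented parameters:

```python
from collections import deque
import math

def rolling_zscore_anomalies(readings, window=50, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling mean.

    A toy stand-in for asset monitoring: the reference point is the
    machine's own recent behaviour, not human-chosen categories.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == window:
            m = sum(history) / window
            std = math.sqrt(sum((x - m) ** 2 for x in history) / window)
            if std > 0 and abs(value - m) / std > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Example: a steady vibration signal with one spike injected at index 120.
signal = [1.0 + 0.01 * (i % 5) for i in range(200)]
signal[120] = 5.0
print(rolling_zscore_anomalies(signal))  # -> [(120, 5.0)]
```

There are no contested categories here - which is King’s point - though the choice of window, threshold and which sensors to instrument are still human decisions.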

Where does the responsibility lie? 

The panel also raised the question of who is responsible for the outcomes of AI applications when things go wrong. How is that responsibility managed? Can we ever place blame on the machine?

Both Guersenzvaig and King agreed that industry - particularly the technology industry - can do more to improve transparency around the development of these systems. Whilst King said the tech industry “could do itself a great favour” by being more transparent, Guersenzvaig argued for a more aggressive approach. He said:

I think they should open up. I think they should give up their trade secrets. Or make their algorithms highly explainable. Otherwise, when we get to issues like health, education, social welfare - we cannot trust companies to do that in black boxes.

However, Guersenzvaig added that the question of responsibility doesn’t necessarily have to be too complicated, given that trust in systems - and who to point the finger at when things go wrong - is well-trodden ground in engineering. He said:

This is not a new problem. In engineering we have been talking about this for many years and it’s called the problem of many hands. It shouldn’t necessarily be a very different issue to when a plane goes down. You have several contractors and companies working together for a plane and we can allocate responsibility when things go south. Judges and regulating agencies can often tell when things go wrong and who should be held responsible. 

The problem with AI is that some vendors are using data that was bought from someone else, who bought it from someone else. They cannot trace the data. That brings a new ethical issue. 

But what we should do is retain responsibility on the human side. We should never allow a machine to be held responsible for anything, because then we would be surrendering responsibility. That would be a dangerous thing.

Companies behaving poorly

However, Michael Godoy, Program Director, Telemedicine and Scalable Therapeutics, University of California, said that companies still don’t understand that behaving responsibly with AI builds trust with customers. Companies are not being transparent enough, he argued, citing US retailer Target as an example.

He referenced how Target created a system that could predict, with fairly high accuracy, whether one of its customers was pregnant, based on previous purchases. In one incident, Target accidentally revealed to a parent that their teenage daughter was pregnant, by sending marketing materials for parents-to-be to the household address.
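For illustration, a system like the one Godoy describes boils down to a classifier over purchase histories. The sketch below uses logistic regression with invented features and data - nothing here reflects Target’s actual model:

```python
from sklearn.linear_model import LogisticRegression

# Invented purchase-count features per shopper:
# [unscented_lotion, supplements, cotton_balls]
X = [
    [0, 0, 1],
    [3, 2, 4],
    [1, 0, 0],
    [2, 3, 5],
    [0, 1, 1],
    [4, 2, 3],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = shopper later made pregnancy-related purchases

model = LogisticRegression().fit(X, y)

# Score a new shopper's basket - the kind of probability that drove
# the targeted mailings Godoy describes.
print(model.predict_proba([[2, 1, 3]])[0][1])
```

The ethical issue is not the model itself but the purpose: the same few lines of code make an inference the shopper never knowingly agreed to share.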

Target’s response, according to Godoy, was to mix the marketing materials for pregnant customers with random, untargeted materials, so as not to make the same accidental revelation again. However, this is not a transparent approach, Godoy argued. He said:

Instead of saying, we are going to be more transparent with the data you give Target, what they’re doing is being more obscure. Making it less transparent. They’re making it so that consumers aren’t aware that they’re being targeted in this way.

I truly believe that it’s the responsibility of these companies and the people developing these systems to take into account that consumers need to be aware of how their data is being used, and they need to be transparent about how they’re implementing these systems.

Got a battle on our hands

Interestingly, the panelists all agreed that the future of AI is going to be defined on the national stage, by those setting laws and creating regulations. And, at least for the time being, it’s likely to be a while before we see any sort of global consistency on standards, expectations and governance. King said:

I think, unfortunately, a lot of these questions are going to be questions of national policy. Surveillance in China is definitely more acceptable than in the US or Europe. It just is. But I think in a lot of the democratic societies, it’s going to come down to politics. 

I hate to say it, but it’s going to be a brute force battle and will depend on who is in charge, making these decisions. I think it’s going to be very difficult to define objectively, but I think it will be a matter of politics.

Guersenzvaig finished by noting, however, that we as a society have choices to make about AI and that none of it is inevitable - something worth remembering. He said:

I don’t know where we are going to be in five years, but it’s up to us. The collective us. It’s not up to the technology; the robots are not going to rise and get our jobs. I think we need to realise that we have to discuss this with our politicians - it will be a political bullfight on this issue.

Regulations will be a challenge because we won’t become a united world in five years, so we will have different scenarios. But I do want to encourage everyone to think about technology not as an autonomous agent - it’s not; it’s up to us. We might refuse to use some technology that’s already available out there.
