Professor Dame Wendy Hall is a leading figure in the UK's development of AI technologies, including the discussions around AI and ethics. As Professor of Computer Science at the University of Southampton, Hall - alongside former IBM Watson VP Jerome Pesenti - wrote the government's inaugural report on AI, 'Growing the Artificial Intelligence Industry in the UK'.
However, speaking at a Westminster eForum event on AI in central London today, Professor Hall gave a damning verdict on the industry’s work on diversity and inclusion within AI development, stating that “nobody knows how to fix it”.
Hall urged attendees and those working in the industry to come together to tackle the diversity and inclusion challenges that have long dogged AI development, so that the technology is more reflective of the society we live in.
Much has already been written about how bias in data can lead to disastrous results in AI development. For example, Amazon recently had to scrap a new AI recruitment tool after finding that the engine was biased against women and was not rating candidates in a gender-neutral way.
The problem is that if we base future decisions on AI, we have to be sure the underlying data does not contain bias that favours certain sections of society - which is likely, given that humans have been shown to exhibit bias. Otherwise, certain sections of society could be automated out of the workforce, as the Amazon tool illustrates.
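To see how bias in historical data can leak into an automated decision, consider a deliberately simplified sketch. This is a hypothetical toy example with invented CV snippets, not Amazon's actual system: a naive scorer trained on past (biased) hiring outcomes ends up penalising any CV containing the word "women's", simply because that word co-occurred with rejections in the training data.

```python
from collections import Counter

# Toy historical CVs labelled by past hiring decisions.
# Hypothetical data for illustration only - the past decisions
# themselves encode a bias against women.
cvs = [
    ("led software team", "hired"),
    ("built backend systems", "hired"),
    ("captain of women's chess club", "rejected"),
    ("women's coding society member", "rejected"),
]

# Count how often each word appears with each outcome.
hired, rejected = Counter(), Counter()
for text, label in cvs:
    (hired if label == "hired" else rejected).update(text.split())

def score(text):
    # A word seen mostly with "rejected" drags the score down;
    # missing words count as zero (Counter returns 0 for absent keys).
    return sum(hired[w] - rejected[w] for w in text.split())

print(score("women's robotics club lead"))  # -3: penalised for "women's"
print(score("software team lead"))          # 2: rewarded for past-hire words
```

The model never "decides" to discriminate; it merely reproduces the pattern in its training labels, which is exactly the failure mode reported in the Amazon case.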
Professor Hall said:
“There is not enough action on diversity. Everyone always says “yes we need diversity” and “yes we need inclusion”. But we are tired of talking about diversity, because nobody knows how to fix it. Right? Nobody knows how to fix it.
“I can give you a whole lecture on what went wrong and what has been tried to fix it. I think what we have to do is make it very clear that you can’t develop ethical principles without thinking about diversity. I think we have to put diversity and inclusion firmly into the ethical debate.”
Hall said that she is planning to run a workshop through the newly created Office for AI next year, to “think the unthinkable about diversity” and to hopefully ensure that we get a “better balance overall into our AI workforce”.
She added that better work on diversity within the ethics debate on AI needs to be actioned now.
“[We need to] get great minds thinking about this...through means that have not been tried. We’re all using this, we all need to be part of building it, because of the bias problems.
“I think AI is too important to be left to the AI experts, we’ve all got to be involved. I think interdisciplinary teams are the answer. I think we’ve got to encourage lots of people to come into AI from different backgrounds. And we’ve got to do that now.
“Yes, we’ve got to get it right in schools. But those kids in primary schools won’t be in the workforce for another 15 to 20 years. It will all be over.
“I for one am going to try and make a difference to get diversity into AI in the broadest sense, at every stage of the AI process. And I need you to help.”
Another interesting aspect of Professor Hall's talk today concerned the creation of new AI experts, who could help lead the much-needed AI audits - the aim of which would be to infuse ethics into AI development. Again, Hall's focus was on getting people from a variety of backgrounds into the field of AI to translate society's and business's needs to those coding the solutions.
Whilst the government's AI report prioritised getting industry-funded AI masters degrees into the system for those that will be doing the coding, Hall is also very focused on new conversion courses for students from non-STEM backgrounds. She said:
“[We want] students from philosophy, humanities, geography, history, English literature. That sort of background, any sort of background. And we will develop courses with the universities. They will be courses that will enable people from a non-technical background to work in AI.”
And the ambition, according to Professor Hall, is to get these people to think about ‘AI audits’. She added:
“[These courses] could mean learning to code, but it might actually be much more about training the people that are going to do the AI audits. Who are going to help companies adopt AI. Who are going to look at the ethical principles and work alongside the coders, to get those ethical principles into the algorithms.
“What does algorithm accountability and transparency mean? To take those ideas, which are very complicated and difficult to measure, into industry and into government. They’re the translators, if you like. They’re the link between what technology is doing and what society and business needs. That’s very important to me, that we develop those people with the skills that the country needs.
“I think AI audits are the way to go. We’ve taken decades to work out how to do financial audits. We don’t know how to do AI audits. We have to learn. A lot of research has to be done. We don’t know how to do this stuff.”