Artificial Intelligence (AI) exerts increasing power over many aspects of our lives, from whether we get a mortgage to what treatment we receive in hospital. But women and minority groups aren’t proportionally represented in the development of the technology, raising concerns that AI systems will become more biased and unethical.
According to the World Economic Forum, only 22% of all AI professionals globally are women; meanwhile, research from The Alan Turing Institute has revealed that only 8% of UK researchers who contribute to the leading machine learning conferences are women. So, decision-making machines are shaping the world, but the people whose work underpins that vision aren’t representative of the society these systems are supposed to serve.
Speaking at the CogX Festival 2021, Erin Young, Postdoctoral Research Fellow at The Alan Turing Institute, cited some real-world examples of the impact of this lack of representation: an Amazon hiring algorithm that discriminated against women, and a UK passport photo checker, which was found to discriminate against darker-skinned women in particular:
Technology, as I'm sure we all know, is not neutral. It's not age neutral, race neutral, gender neutral, ability neutral, it’s shaped in all sorts of ways by the people who build this technology. It reflects their history, their values, their choices, so there’s potential for biases to seep in at every stage.
There's a wide and growing array of examples of bias in/bias out, which show how this feedback loop further discriminates against those who are not involved in the creation and development of this technology.
Adrian Joseph OBE, Managing Director, Group Data & AI Solutions at BT Group, noted that there are a number of biases we need to think about - bias in the data, bias in the algorithms, and bias in the individuals building AI systems. He added:
Some of this has been accelerated in the midst of the pandemic. We've seen people with darker skin tones being affected more adversely by health outcomes, by job outcomes and in policing.
I'm still quite shocked that in this day and age, with the most disruptive technology trend that we're going to see for the next three to five years, the AI big tech companies were still not able to discern that Serena Williams, Oprah Winfrey, Michelle Obama were women. That had a lot to do with the color of their skin. If we don't recognize faces of diverse populations, then AI actually isn't working.
When it comes to the barriers to diversity in AI, Joseph pointed to parents who might encourage their child to be a doctor, lawyer or accountant but not promote careers in technology, AI or data. The way a career in AI is portrayed may also be off-putting to women:
[It’s portrayed as] a 24/7 thing, you've got to work outside of your normal time, in your spare time and all the rest of it. And it's such a fast-moving area that some of them may feel that it's a bit harder or there's a risk if you take time out.
My AI director is a woman, but the first five candidates that I got sent were all men. I reached out and said, 'Can somebody please send me some women candidates' and Zoe was the best by far, so we hired her.
Speaking on the same panel, Wendy Hall, Regius Professor of Computer Science & Executive Director, Web Institute at the University of Southampton, admitted she finds it depressing how long she has been working on diversity in technology. Hall wrote her first paper on the subject, ‘Where have all the girls gone’, in 1987, highlighting the lack of girls reading Computer Science at university back then - and the numbers are exactly the same now. She also wrote a paper about women in AI in 1988, warning that if women aren't doing computing, then they’re also not doing AI - and that could be a real problem:
I just get fed up with talking about this all the time. But I'm so passionate about this, we have to fix it.
Hall urged the CogX audience to share ideas for the UK Government’s National AI Strategy she’s working on, as they want to put diversity and AI for public good at the core of the strategy.
Attracting people from subjects other than maths and computer science into AI is another goal of the strategy. This is being aided by a conversion Master's degree, which retrains graduates from other disciplines so they can work in AI, though not necessarily as machine-learning programmers:
That was a program that came out of the AI review at a number of universities and it's been incredibly successful, because 50% of the scholarships we award as part of that funding have to go to underrepresented groups, particularly women, people of color and disabled people.
There's all sorts of exciting jobs in AI across all sorts of different skill sets. But at the core at the moment - and this might change over time - we need programmers, and we need diversity as much as we can get in our programming workforce. Otherwise the products they're going to produce will be biased, unfair, not accessible and not trusted because people are biased.
One of the major challenges around tackling bias in AI systems is that the problem needs to be fixed now, but we haven’t got the diverse teams in place to facilitate this. Programs in schools are important, but Hall said there isn’t time to wait for these to work their way through the system; we need to find measures that will make a difference to university and workplace intakes in the next year or two:
It's a huge conundrum. We can't say to people who are running masters in machine learning programming, you've got to have 50% of your students coming from underrepresented groups, as they won't fill the places. We could say our target is 50/50 men/women on any course that we fund or any scholarship, but then we would hugely lose out in terms of skills. We need as many people as possible, and, of course, of the 50% of men, some of those will be from under-represented groups. We've got to think of some radical things, really radical to change this culture.
To help enact this culture change, The Alan Turing Institute has made a number of policy recommendations, including greater transparency from tech companies around their diversity reporting, and ethical regulatory frameworks and standards around the development and auditing of AI.
Joseph agreed that we need to see more data and metrics around who is working in AI, as well as a wider mix of role models, and for managers and leaders to take responsibility for diverse hiring and promotion. This could include using machine learning to detect issues or certain words in job descriptions that may be off-putting to women.
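The idea of flagging off-putting wording in job ads can be illustrated with a minimal sketch. This is a simple wordlist lookup rather than the machine learning approach Joseph alludes to, and the word lists below are illustrative assumptions informed by research on gendered language in job advertisements (e.g. Gaucher, Friesen & Kay, 2011), not any company's actual tooling:

```python
import re

# Illustrative, hand-picked wordlists (assumptions for this sketch);
# real systems use far larger, validated lexicons or trained models.
MASCULINE_CODED = {"competitive", "dominant", "rockstar", "ninja", "aggressive", "fearless"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal", "loyal"}

def flag_gendered_terms(job_description: str) -> dict:
    """Return the masculine- and feminine-coded words found in a job ad."""
    words = set(re.findall(r"[a-z]+", job_description.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We want a competitive, fearless rockstar who is also collaborative."
print(flag_gendered_terms(ad))
# → {'feminine': ['collaborative'], 'masculine': ['competitive', 'fearless', 'rockstar']}
```

A production version might weight terms by how strongly they skew applicant pools, or learn those weights from historical application data, which is where machine learning would come in.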
Taking these steps does have an impact: as well as the appointment of a female AI director, 44% of Joseph’s AI team are women, while 19% come from ethnic minority groups.
We also need to get comfortable having conversations about this topic, particularly about race, Joseph added.
It's a lot easier to talk about gender and sexuality, but you put race on the table and people get a little bit apprehensive. I also believe there are many other skills we need to think about in anthropology, sociology and the arts that are relevant to AI, and the ethical and diverse use of it.
Another measure that could help tackle AI inequalities is taking the emphasis off women and underrepresented groups to change, and instead examining why men feel so much more comfortable in the technology workplace. Young called on institutions to ask what they could do differently, rather than what women and excluded groups could do differently. She finished with a warning note:
Where there began to be more prestige, money and the idea of power in the computing industry as a whole, this is when men began to take dominance. We see this repeating itself in AI.
Thinking about diversity and ethics of AI, we want to try and rewrite this narrative of what's already happened in computing and see how we can change this before it escalates.