
Diversity in AI - women need strong role models to inspire them, hears techUK

Chris Middleton, March 27, 2024
Summary:
The need for greater diversity in tech is now even more acute, due to the rise of AI. A techUK panel offered some personal insights.

(Two women in a meeting © Christina@wocintechchat - Unsplash)

Poor diversity in tech is a subject we hear a great deal about, because survey after survey reveals that IT is a sector overwhelmingly dominated by white men. 

The figures are certainly stark, given that women make up half of the overall workforce. UK research typically estimates that 81-85% of tech employees are men, while recent US surveys have found that men constitute three-quarters of the sector, and two-thirds of the workforce in science, technology, engineering, and maths (STEM) careers. 

And as I noted in my earlier report on this subject, roughly two-thirds of US tech workers are white, with the figure more like 90-plus percent in the UK. 

To recap – and it bears repeating – there is nothing wrong with being a white male, and articles that urge greater diversity in this and other sectors, including media and entertainment, should never be read as attacks on white men, as some political commentators claim. The problem is that teams which lack diversity tend – often unconsciously – to reflect the needs of their own members when designing products that should be accessible to, and usable by, everyone. 

A recurring example over the past ten years has been those real-time facial recognition systems that struggle to identify women or ethnic minorities, because of a lack of diversity in their training data and their development teams.

But the problem has come to a head over fears that Artificial Intelligence (AI) may automate long-standing societal biases. With software engineering dominated by white men – often young and relatively privileged males at that – the risk is that this key technology may be designed by teams that fail to reflect society as a whole. 

And not just in Europe and the US, of course: AI will be used in every nation on Earth, and so needs to embody a huge variety of different cultural, linguistic, ethnic, behavioural, gender, and religious perspectives. 

These were among the topics for a techUK roundtable this week, which looked at digital ethics and the safety of AI systems from the perspective of women who work in the industry.

The core problem was set out in a light-hearted – though instructive – anecdote by Felicity Burch, Executive Director of the UK government's Responsible Technology Adoption Unit (RTA), formerly known as the Centre for Data Ethics and Innovation. She said:

My Dad was a helicopter pilot. When I was a little girl, he took me to look at one of his helicopters – he was a rescue pilot, where helicopters go out for long, long periods of time. And he pulls out this tube and says, ‘This is our toilet’. And I remember thinking, ‘Oh, so that's not designed for me.’

The point is it doesn't matter which group you're talking about, but unless you have diverse teams designing something, it's just not going to work for everyone. 

For example, one of the key things the disability community talks about is ‘nothing about us without us’. It’s this idea that you really need to involve the people who are affected by a design, right from the start. And that’s partly through building a diverse team.

She added:

One of the things we do in government is partnering with other departments to help them think through these issues, to engage with the public and with affected individuals. This typically leads to much better outcomes for the people involved, to make sure that technology is absolutely being designed with them in mind.

Elizabeth Seger is Director of CAMS, a digital policy research hub within the UK’s cross-party think tank, Demos. She also shared a personal perspective, this time on the need for positive representation:

One of the best things that started me getting involved in these areas myself was a robotics team initiative in the US, which has spread internationally, called First Robotics. There was a really strong push to have a lot of inclusion of women and minorities in these teams, and to really foster a culture of excitement around technology. 

Not just around the technology itself, but also around a culture of cooperation and integration. Really feeling part of that community, and all the effort that went into this initiative to focus on inclusion and participation. For me, having those foundational experiences early on is important for making women feel like they belong.

Abigail Oppong is an AI Ethics Researcher at Ghana NLP, an open-source initiative looking at natural language processing in Ghanaian languages – an important area in itself, given the dominance of written and spoken English among Western technology companies. 

She said:

When I started in AI ethics, particularly in Africa, I didn't really have people to look up to. But then I found myself in more international groups, which have been very helpful in reaching out, because people had more time to talk to me, when I had nobody to look up to in the community that I'm actually from. So, that gave me more of a sense that there is a community I belong to.

Automating privilege?

Which brings us to AI. Among the many areas in which AI is already being applied are recruitment and HR. This has given rise to fears that, if jobs have historically gone to people from just one section of society, or an employer has actively excluded others in the past, then an AI trained on that history may learn to replicate that privilege and exclusion, and lend it a false veneer of neutrality.

So, how can biases be mitigated to ensure much greater accountability? The RTA has recently published guidance on the use of AI in these areas. Burch explained:

There are lots of opportunities for AI-enabled tools in HR and recruitment to improve diversity and inclusion, from improving the candidate experience to helping people with more diverse lists of candidates. But AIs also pose novel risks. In particular, perpetuating bias, exclusion, and even discriminatory job advertising.

She shared a personal example from an area that is often poorly understood by employers:

I have ADHD, and one of the things I really struggle with is eye contact. And I’ve seen other people talking about their experiences of using an AI tool when they have ADHD or autism. They just couldn't get through it, because they weren't focusing on the screen in the same way that a neurotypical person might. 

So, there are opportunities to improve things with AI, but also risks as well. That's why our team has updated its guidance, which is very much looking to help organizations who are procuring and deploying AI systems. 

The aim is to make sure that guidance is not only relevant for businesses adopting technology, but also reflects the needs of those people whom the technology is going to affect.

Suzanne Brink is Data & AI Ethics Manager at the digital transformation specialist Kainos Group. She said:

You need wide, varied data to build this kind of technology in a sensible way. You need lots of checks and lots of monitoring. It potentially takes a little army to get it right, and you more quickly have the resources for that through a specialist provider than trying to build it in house.

Then she added:

We know that, in traditional recruitment, it's useful to have multiple panellists in an interview. So, you could see AI as an additional panellist [rather than as a replacement], with different strengths and weaknesses from humans. 

The biggest strength is that we know humans are notoriously inconsistent, and AI won't be like that. But a weakness is that AI will typically be less good at conceptualizing. So, as a human, you would be able to judge if somebody has made an amazing career progression, despite being on maternity leave for a year, for example, compared to another candidate who didn't have that break in between. 

The lesson is that humans have flaws, but AI has flaws too. So, by partnering and recognizing their relative advantages, you can bring some real strengths to your recruitment process.

Wise words. But who did the panellists – themselves successful women in technology – say had inspired them to pursue careers in the sector? Demos’ Seger said:

For me, one of the earliest and most impactful women in tech was the mother of a good friend who was on the robotics team with me. She had gone to Caltech and got her PhD in materials science at a time when women neither went to Caltech nor got PhDs in materials science! She was just a really strong champion for me through high school and through being an undergraduate. 

Anytime I would hit a wall, she was there to say, ‘No, you can do this!’ and give me pointers. She even gave me a lesson on how to ‘speak guy’!

I have also seen women in leadership roles that I'm inspired by, and who I want to emulate. So, for me, having those human connections and role models has been really important.

But these ideas sometimes come with biases of their own, explained Ghana NLP’s Oppong:

In one of my employers I was the only female on the team. So, one day, I brought up the idea that we should bring more women into the team. And there was this comment from a peer that we shouldn't be looking for females to join, but instead for people who have confidence. 

And that told me that this was an idea that society had placed in their mind. So, I see two things. First of all, it’s about trying to help them understand that there are women out there who are doing this. And it’s also about helping them understand that having women around creates diversity and diverse perspectives.

Indeed, an organization called Women in AI Ethics publishes an annual list of 100 brilliant women in the field – strongly recommended, if you are looking to read up on some inspiring individuals.

However, diversity – while no tick-box exercise – is not always a simple cure for AI’s ills, explained Burch:

What’s common in both responsible AI and progressing women in the workplace is the need to address your blind spots, and to recognize that we all have them. Diverse teams are critical, but I don't think they're always enough, because there are so many different ways in which diversity can express itself.

What did she mean by that? She explained:

There's intersectionality to consider as well. And it's unrealistic to think that your AI development team is going to represent all the different flavours of every type of diversity.

Perfection vs Progress

Intersectionality describes what happens when different forms of inequality or discrimination – such as exclusion based on gender, ethnicity, sexuality, or disability – overlap and compound, creating new problems for people who belong to more than one affected group.

In a previous role, Kainos Group’s Brink was asked to be an LGBTQ+ champion and ally. She explained:

I remember feeling really insecure a lot of the time about that, thinking ‘Am I doing this right?’ But it’s also about learning to be a bit more forgiving [of oneself], to learn over time and make mistakes, as long as you invite feedback. 

I think that's a really important mindset to have, because we can sometimes get too perfectionist about inclusion, and that doesn't always help progress.

The RTA’s Burch added a note of (qualified) optimism for the future, as a new generation enters the world of work – an environment increasingly underpinned by AI:

I would note that my last three job titles did not even exist when I left university!

My take

Be resilient, optimistic, and adaptable: great advice for the future. While an hour-long roundtable could only touch on subjects that might each fill a day-long conference, it was time well spent. 

As my previous reports this year on diversity in tech explained, more girls drop out of STEM subjects at school than boys, even though they have an equal (or greater) aptitude. The reason is thought to be the lack of high-profile role models – women who are there to say, ‘You can be this’ in a world dominated by wealthy tech-bros. So, all power to those women who are trying to change things, as we all stand to benefit from their example.
