Tackling bias in an AI sector that's already too 'pale, male and stale'
- Summary:
- AI may still be a relatively young sector of the tech industry, but the same old diversity mistakes are already all too evident.
Walsh, formerly Chief Data and Analytics Officer at Vodafone, will be focusing on growing Levi’s “data, analytics and artificial intelligence enablers” to strengthen the company’s present and future business ventures, according to the company’s announcement of the appointment back in February.
It’s a bold new step for Levi’s, but sadly also one that stands out in the wider AI industry for a depressingly familiar reason - Walsh is a woman taking on a senior role in a sector that may be young and fast-moving, but is already living down to the tech cliché of being ‘pale, male and stale’.
While it might be hoped that the relatively nascent state of the AI market would have allowed some of the mistakes of the wider tech sector’s past to be avoided, the reality is that AI has a diversity crisis well underway, with the under-representation of women and people of color already entrenched.
That’s according to a study from New York University’s AI Now Institute - Discriminating Systems: Gender, Race, and Power in AI - which warns that this isn’t just a problem of recruitment and talent management, but one with potentially problematic knock-on effects when it comes to eliminating bias in AI systems themselves. The report notes:
To date, the diversity problems of the AI industry and the issues of bias in the systems it builds have tended to be considered separately. But we suggest that these are two versions of the same problem: issues of discrimination in the workforce and in system building are deeply intertwined.
There are some depressing stats cited to justify concern. Only 15% of AI research staff at Facebook are women, although that’s better than the 10% at Google. Only 2.5% of Google's workforce is black, compared to 4% at Facebook and Microsoft. Outside the business world, more than 80% of AI professors are male.
Data isn’t available on other communities, such as trans workers or other gender minorities, but there’s little reason to expect anything particularly positive there:
Both within the spaces where AI is being created, and in the logic of how AI systems are designed, the costs of bias, harassment, and discrimination are borne by the same people: gender minorities, people of colour, and other under-represented groups.
And that lack of diversity will inform how bias manifests itself in AI systems.
Bias concerns
There have been a number of high-profile ‘embarrassments’ around bias that have made it into the public domain, including:
- A Google image recognition algorithm that auto-tagged pictures of black people as gorillas.
- Sentencing algorithms piloted in US courts that were statistically more likely to discriminate against people of color.
- Amazon’s experimental hiring tool for ranking job candidates, which began to downgrade resumes containing the word ‘women’s’ and applicants who had attended all-women colleges.
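The Amazon case illustrates a general failure mode: a model trained on a biased hiring history learns gendered tokens as proxies for the outcome. Below is a minimal sketch of that mechanism in Python, using entirely invented toy resumes and scikit-learn; it illustrates the dynamic, not a reconstruction of Amazon’s actual system.

```python
# Minimal sketch of how a screening model can encode historical bias.
# All "resumes" and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain, software engineering intern",
    "women's coding society lead, software engineering intern",
    "hackathon winner, open source contributor",
    "women's robotics team captain, open source contributor",
    "varsity rowing, machine learning research assistant",
    "women's chess club president, machine learning research assistant",
]
# Labels mirror a biased historical record: equivalent qualifications,
# but resumes mentioning "women's" were rejected (0 = not hired).
hired = [1, 0, 1, 0, 1, 0]

vectorizer = CountVectorizer()  # the default tokenizer reduces "women's" to "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for "women" comes out negative: the model has
# turned the historical bias into an explicit decision rule.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

Note that nothing in this pipeline is malicious; the bias arrives entirely through the training labels, which is exactly why the report treats workforce history and system behaviour as two versions of the same problem.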
Such examples point to the potential of wide-scale damage if the question of bias is not tackled head on at this still early point in the AI revolution:
As AI systems are embedded in more social domains, they are playing a powerful role in the most intimate aspects of our lives: our health, our safety, our education, and our opportunities…It’s essential that we are able to see and assess the ways that these systems treat some people differently than others, because they already influence the lives of millions.
There are some legislative efforts being made. Earlier this month, the Algorithmic Accountability Act was introduced in the US Congress. This would require algorithms used by organisations which hold information on more than one million users, or which generate more than $50 million a year, to be regularly reviewed for signs of bias. Jerry Bowles observed on diginomica/government:
Regulation is coming, as more and more policymakers discover just how powerful AI really is. The UK, France, Australia, and others have all recently drafted or passed similar legislation to hold tech companies accountable for their algorithms but, alongside China, the US is the world leader in AI so it has an opportunity to shape future development.
Meanwhile the European Union’s proposed new guidelines on AI ethics dedicate attention to the question of equality and diversity:
Equal respect for the moral worth and dignity of all human beings must be ensured. This goes beyond non-discrimination, which tolerates the drawing of distinctions between dissimilar situations based on objective justifications. In an AI context, equality entails that the system’s operations cannot generate unfairly biased outputs (e.g. the data used to train AI systems should be as inclusive as possible, representing different population groups…). This also requires adequate respect for potentially vulnerable persons and groups, such as workers, women, persons with disabilities, ethnic minorities, children, consumers or others at risk of exclusion.
Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. The continuation of such biases could lead to unintended (in)direct prejudice and discrimination against certain groups or people, potentially exacerbating prejudice and marginalisation…Identifiable and discriminatory bias should be removed in the collection phase where possible. The way in which AI systems are developed (e.g. algorithms’ programming) may also suffer from unfair bias. This could be counteracted by putting in place oversight processes to analyse and address the system’s purpose, constraints, requirements and decisions in a clear and transparent manner. Moreover, hiring from diverse backgrounds, cultures and disciplines can ensure diversity of opinions and should be encouraged.
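To make the ‘collection phase’ point concrete, here is a minimal sketch, with invented group labels and reference population figures, of the kind of representation check a team could run over a training set before using it:

```python
# Sketch of a collection-phase representation check. Group labels and the
# reference population shares are invented for illustration.
from collections import Counter

def representation_report(sample_groups, reference_shares, tolerance=0.8):
    """Flag groups whose share of the sample falls well below a reference share."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        flag = "UNDER-REPRESENTED" if observed < expected * tolerance else "ok"
        print(f"{group}: sample {observed:.0%} vs reference {expected:.0%} [{flag}]")

# Toy training-set demographics vs. an assumed reference population.
training_groups = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5
representation_report(training_groups, {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15})
```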
Such government efforts are to be encouraged, but the legislative wheels grind slowly. In reality, it’s down to business and the tech industry to tackle this problem and not wait for regulators and legislators to wave a big stick.
Solutions?
To fix the issue, it’s necessary first to understand what’s gone wrong to date. The report’s authors present two theses on this.
The first is too much focus on increasing the number of “women in tech” as a headline priority and not enough on the wider question of tackling diversity shortfalls as they relate to, for example, race or gender identity:
The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others. We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people’s experiences with AI. The vast majority of AI studies assume gender is binary, and commonly assign people as ‘male’ or ‘female’ based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.
The second error has been too great an emphasis on so-called ‘fixing the pipeline’, or ensuring that recruitment and talent management strategies pull in more and more candidates from under-represented groups. While this is important, it can result in a ‘tick box’ quota mindset that doesn’t deal with the underlying problems:
Fixing the ‘pipeline’ won’t fix AI’s diversity problems. Despite many decades of ‘pipeline studies’ that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry. The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether.
It is critical that we not only seek to understand how AI disadvantages some, but that we also consider how it works to the advantage of others, reinforcing a narrow idea of the ‘normal’ person. By tracing the way in which race, gender, and other identities are understood, represented, and reflected, both within AI systems, and in the contexts where they are applied, we can begin to see the bigger picture: one that acknowledges power relationships, and centers equity and justice.
With those two ideas as the baseline, the report presents a series of recommendations:
- Publish compensation levels, including bonuses and equity, across all roles and job categories, broken down by race and gender.
- End pay and opportunity inequality, and set pay and benefit equity goals that include contract workers, temps, and vendors.
- Publish harassment and discrimination transparency reports, including the number of claims over time, the types of claims submitted, and actions taken.
- Change hiring practices to maximize diversity: include targeted recruitment beyond elite universities, ensure more equitable focus on under-represented groups, and create more pathways for contractors, temps, and vendors to become full-time employees.
- Commit to transparency around hiring practices, especially regarding how candidates are leveled, compensated, and promoted.
- Increase the number of people of color, women and other under-represented groups at senior leadership levels of AI companies across all departments.
- Ensure executive incentive structures are tied to increases in hiring and retention of underrepresented groups.
- For academic workplaces, ensure greater diversity in all spaces where AI research is conducted, including AI-related departments and conference committees.
- Recognise that remedying bias in AI systems is almost impossible when these systems are opaque. Transparency has to begin with tracking and publicizing where AI systems are used, and for what purpose.
- Rigorous testing is needed across the lifecycle of AI systems in sensitive domains. Pre-release trials, independent auditing, and ongoing monitoring are necessary to test for bias, discrimination, and other harms (a minimal sketch of one such check follows this list).
- Research into bias and fairness needs to go beyond technical de-biasing to include a wider social analysis of how AI is used in context.
- Methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all, based on a thorough risk assessment.
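On the testing recommendation, one common pre-release check compares a model’s positive-outcome rate across demographic groups, sometimes called a demographic parity check. The sketch below uses invented predictions, group labels and an arbitrary audit threshold; a real audit would combine several fairness metrics with proper statistical testing:

```python
# Sketch of a demographic parity check over model outputs.
# Predictions, groups and the 0.25 threshold are illustrative assumptions.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Share of positive predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy audit: 1 = model recommends the candidate, grouped by self-reported gender.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

print("positive rates:", positive_rates(preds, groups))
gap = parity_gap(preds, groups)
print(f"parity gap: {gap:.2f}")
assert gap <= 0.25, "parity gap exceeds audit threshold - investigate before release"
```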
My take
A depressing read, but an important one. The AI Now team has done an excellent job of highlighting a crisis that needs to be headed off as soon as possible.
The point is well-made that while getting more women into tech is a genuine challenge that needs to be overcome, it is a topic that has dominated the headlines - albeit unfortunately due to a lack of progress - to such an extent that it has arguably overshadowed other diversity shortcomings.
The AI industry is fast-growing, but still relatively young. There is surely time to learn from historic mistakes across the wider tech sector and get it right this time. With decision-making by AI systems set to impact all our lives, this is more than just a recruitment challenge.