A blueprint for unbiased AI that benefits all - leadership tips from Google DeepMind's Obum Ekeke
Summary: Google DeepMind’s Obum Ekeke shares the firm’s vision for the next generation of AI leaders.
Here’s a question - if diversity is so critical to preventing harms from AI technology, why is the field so overwhelmingly white and male?
Even so, efforts are being made to address the issue. Google, for example, has revealed steps it’s taking to ensure AI benefits everyone, not just those who build it, and that the next generation of AI leaders is fully representative of the wider world.
Google’s AI technology is indisputably having a positive impact in certain areas. With its DeepMind AlphaFold AI system, Google has released predicted protein structures for almost every organism on Earth, making it easier to fight disease.
Proteins are the building blocks of life, and a protein’s structure is closely linked with its function. Once a protein structure is known, it’s easier to understand what it does and how it works. Prior to last year, figuring out that structure was a hugely expensive and time-consuming process. But AlphaFold can predict the 3D structures of proteins just from their 1D amino acid sequence.
Google has made these structures freely and openly available to the global research community. To date, more than 1.4 million researchers from over 190 countries have accessed the database, using the information to design vaccines for diseases like malaria, develop treatments for liver cancer and Parkinson's, and tackle pollution by creating plastic-eating enzymes.
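For readers curious what “freely and openly available” looks like in practice, here is a minimal Python sketch of fetching one predicted structure from the public AlphaFold Protein Structure Database. The endpoint pattern and the JSON field name used below are assumptions based on the database’s documented REST interface and may change between releases; the accession shown is only an illustrative example.

```python
# Minimal sketch: fetch an AlphaFold-predicted structure for one protein.
# Assumption: the public REST endpoint
# https://alphafold.ebi.ac.uk/api/prediction/{accession} returns JSON metadata
# that includes a downloadable PDB URL; endpoint and field names may differ
# between database releases.
import requests

ACCESSION = "P68871"  # example UniProt accession (human haemoglobin subunit beta)

def fetch_alphafold_structure(accession: str) -> str:
    """Return the predicted structure in PDB format for a UniProt accession."""
    meta_url = f"https://alphafold.ebi.ac.uk/api/prediction/{accession}"
    meta = requests.get(meta_url, timeout=30)
    meta.raise_for_status()
    entry = meta.json()[0]       # first (usually only) prediction entry
    pdb_url = entry["pdbUrl"]    # assumed field name for the PDB download link
    pdb = requests.get(pdb_url, timeout=30)
    pdb.raise_for_status()
    return pdb.text

if __name__ == "__main__":
    structure = fetch_alphafold_structure(ACCESSION)
    print(structure.splitlines()[0])  # print the header line as a sanity check
```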
DeepMind’s AlphaMissense is another way AI is advancing science. Missense variants are genetic mutations that can affect the function of human proteins; in some cases, they can lead to serious diseases such as cystic fibrosis or cancer.
In September, DeepMind released a catalog of missense mutations so researchers can learn more about their potential effects. The AlphaMissense AI model has been able to categorize 89% of the 71 million possible missense variants as either likely disease-causing or likely benign. By contrast, only 0.1% had been confirmed by human experts.
Using AI predictions, researchers can preview results for thousands of proteins at a time, helping them prioritize resources, advance scientists’ understanding of diseases and accelerate drug discovery. DeepMind has made the AlphaMissense catalog freely available to the global research community and open-sourced the model code.
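As a rough illustration of how a researcher might triage that catalog, the sketch below filters a locally downloaded copy for variants the model classes as likely pathogenic. The file name and column names (am_class, am_pathogenicity, uniprot_id, protein_variant) are assumptions based on the published release format and may differ from the version you download.

```python
# Minimal sketch: triage AlphaMissense predictions from a locally downloaded
# copy of the catalog. File name and column names are assumptions and may
# differ between catalog releases.
import pandas as pd

CATALOG = "AlphaMissense_hg38.tsv.gz"  # assumed local download of the release TSV

# The release files are assumed to carry licence/header comment lines starting with '#'.
df = pd.read_csv(CATALOG, sep="\t", comment="#", compression="gzip")

# Assumed columns: 'am_class' (likely_benign / likely_pathogenic / ambiguous)
# and 'am_pathogenicity' (a 0-1 score, higher = more likely disease-causing).
print(df["am_class"].value_counts())

likely_pathogenic = df[df["am_class"] == "likely_pathogenic"]
top = likely_pathogenic.sort_values("am_pathogenicity", ascending=False).head(10)
print(top[["uniprot_id", "protein_variant", "am_pathogenicity"]])
```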
Risk
While AI can provide benefits like these, Google DeepMind's Head of Education Partnerships, Obum Ekeke, notes that the technology can carry serious risks and negative impacts unless it is built and used responsibly.
The aim of Google’s DeepMind unit is to ensure that AI is built responsibly and delivers scientific breakthroughs and products that improve lives across the world. This means developing AI systems that benefit society without reinforcing bias and unfairness, and that can invent new ideas while still reliably behaving in expected ways.
One example of this is DeepMind’s research into how post-colonial and de-colonial theories can shape ongoing advances in AI to better support vulnerable people who continue to bear the negative impacts of innovation and scientific progress.
Another DeepMind study explored the positive and negative impacts of AI on LGBTQ+ communities, highlighting concerns across privacy, censorship and online safety, and the importance of developing new approaches for algorithmic fairness. Ekeke adds:
The challenges of bringing the benefits of AI to all of society are significant. Individual algorithms need to be fair. They need to not be biased against particular groups, and AI needs to create local benefits for everyone, not just those who live in countries with a strong research presence.
Addressing this problem is a significant challenge, and one that requires a more diverse workforce, a more democratic spread of AI across the world, and the inclusion of a broader set of perspectives in building products and algorithms. Ekeke notes:
If diversity is so critical to ensuring innovation and preventing harms from this technology, why then is the field of AI so overwhelmingly white and male? I don't need to set out the statistics to grasp the scale of the imbalances in tech and AI. Mostly it just takes a look around the office or lecture theater or group video call. These imbalances are not only gender, they are racial, they are socio-economic, they are geographical.
Three statistics bring the scale of these issues into perspective: according to UK government and Higher Education Statistics Agency data, only 2.5% of school teachers, fewer than 1% of university professors, and just 10 of the 340 people who graduated with postgraduate research degrees in computer science in the UK in 2021/22 were Black. Ekeke says:
These figures are compounded by narrow curricula, insufficient role models in leadership positions, a lack of options for promising students and minimal funding for postgraduate study.
Ekeke cites various ways to counter these problems and to encourage and support a wider range of people to get into the field of AI. At secondary school level, investment in long-term programs to increase enrolment of students from under-represented groups into undergraduate AI degrees should be a top priority. There also needs to be more support for the attainment of underrepresented students, particularly in maths and science at GCSE and A-level, as well as more encouragement of female students’ interest in STEM.
One way to do this is for companies to increase their support for informal curriculum and enrichment activities in STEM subjects. Google DeepMind has partnered with the Raspberry Pi Foundation to co-create Experience AI. This learning program offers free lesson plans, slide decks, worksheets and videos to address gaps in AI education, and support teachers in engaging students in the subject.
New forms of work and research experience schemes are also needed to give undergraduate students early practical exposure to research career pathways, and to help them transition to postgraduate study or gain industrial employment at the postgraduate level.
Scholarships that combine financial support with pastoral support and mentoring are vital. Since 2017, Google DeepMind has supported 300 postgraduate students across 13 countries to get into the field of AI, including students from underrepresented regions like Africa and Latin America.
At the post-doctoral and broader research community level, more support is needed for early-career researchers from underrepresented groups, especially Black researchers. Google DeepMind has partnered with UK universities to encourage these individuals to pursue postdoctoral study in AI and, hopefully, transition into permanent faculty positions. Ekeke explains:
That way, they can become visible academic role models and help train the next generation of diverse researchers in the field.
Foot in the door
But even when you have a foot in the door of industry or academia, it’s not always easy to progress. While it’s crucial to dedicate time and resources to educating future generations of AI leaders, that doesn’t mean ignoring other immediate issues, argues Ekeke:
If you are a Black person like me in a majority white environment, or a woman in a male-dominated environment, how can you feel safe enough to share an opinion that is different from the rest of the room? How can we ensure that the right support is there so underrepresented team members can thrive, be promoted and be valued?
A combination of initiatives is required: inclusive meeting practices, ongoing education about stereotypes and bias, allyship training, and a review of processes that have a high impact on people and their progress, such as hiring and promotion, through the prism of equity and fairness. Ekeke adds:
Most importantly, we need to make sure that leaders, managers and every individual in the organization is accountable for change; and that diversity, equity and inclusion programs are recognized not as a side job, but as part of the work itself.
One final cautionary note from Ekeke - a fairer world won't emerge by accident and it won't happen overnight. AI for all requires a shift in thinking away from immediate returns on investment to the long-term benefits of supporting the next generation of AI leaders.