How AI ethics falls short - preserving jobs is not enough
Summary: AI ethics has been a hot topic for about five years, but are we getting to the fundamental issues? Here's why well-intentioned ideas for preventing AI job loss come up short.
I read a presentation proposal today for the Stanford Human-Centered Artificial Intelligence conference, on Shared Prosperity Targets for the AI Industry. I then followed up on some other material from the submitter's site, the AI and Shared Prosperity Initiative.
The premise, briefly: "When AI companies develop new technologies, they should be required to perform a distributive impact assessment to ensure that inventions enhance human job opportunities rather than solely displacing human workers" - sort of like an Environmental Impact Statement.
Right off the top, why limit it to workers? Are we not concerned with the welfare of the young, the old, the infirm, and oppressed minorities? Wouldn't a scheme like this, even if it didn't displace human workers (presumably those making $7.25/hour), merely reinforce the status quo?
I'd put this in a class of proposals that are aspirational, but without any foundation for success. But even more importantly, it seems to overlook the baseline we have at present and why we haven't fixed that.
Consider these key findings from the Robert Wood Johnson Foundation Survey:
- Nearly half (45%) of African Americans experienced racial discrimination when renting an apartment or buying a home.
- 18% of Asian Americans say they have experienced discrimination when interacting with police. Indian-Americans are much more likely than Chinese-Americans to report unfair police stops or treatment.
- Nearly 1 in 5 Latinos have avoided medical care due to concern of being discriminated against or treated poorly.
- 34% of LGBTQ Americans say that they or a friend have been verbally harassed while using the restroom.
- 41% of women report being discriminated against in equal pay and promotion opportunities.
And that's just a small sample of the misery of people in the US without resources or opportunities. I've been in the technology business for decades, and I don't recall such a wellspring of initiatives to protect people from digital technology as we see now with AI. To be fair, the organization behind the aforementioned initiative, the Partnership on AI, is a well-funded non-profit with numerous initiatives (see its 2020 annual report).
What is it about AI that attracts so many people to weigh in on AI ethics? I'll get to that in a minute.
There was a session at the last conference at Stanford, "Universal Basic Income to Offset Job Losses Due to Automation." Same question - why just job losses? There were four speakers, and three of them were professors (their input is needed, but their talks lacked the perspective of how technology gets developed and implemented). But, surprisingly, the fourth speaker was Andrew Yang, a candidate for President of the United States, a successful entrepreneur, and the source of some of the most creative proposals of the 2020 campaign. Yang is a proponent of a Universal Basic Income.
But let's get back to the central question here. I wrote recently that quantitative methods had buttressed bias and discrimination for decades. There is nothing new in AI Ethics, so why do we pretend there is? Bias in AI is just a recurring symptom of practices going back almost 150 years, when statisticians used cooked-up data to justify everything from slavery and Jim Crow to immigration restrictions, intelligence testing, and the oppression of women. And it's still happening.
So why does AI ethics get unprecedented attention? Here are some hypotheses:
AI is just so much sexier than statistics or rules engines. That banks used digital systems for decades to decline mortgage applications from African-Americans didn't attract much notice. But AI is a big, scary, emotional thing. In the early days of writing about AI Ethics (4-5 years ago), the mention of AI did not conjure commercial applications making credit decisions. It was about the singularity, the loss of our humanity. That perception endures. That's when "ethicists" started to flood into the mix. And that's why we see so many impractical suggestions. And that's why the population of AI ethicists is predominantly academic, non-profit, legal, and humanist - not technologists.
We need experienced, practical technologists who can develop sound systems and maintain them ("Everyone wants to make something, but not maintain it" - Kurt Vonnegut). In those early days, when I thought teaching developers ethics was the solution, I learned that they understood the concepts but did not know how to apply them to their work, especially when that work may have been skirting the edge of what is ethical.
Another group, AINOW, boldly focuses on Rights and Liberties, Accountability of the Public Sector, Bias and Inclusion, Healthcare, and Safety and Critical Infrastructure. They have a section on Labor and Automation, but their published material emphasizes the role of AI in both monitoring and promoting these other categories.
My take
I think this first organization missed the boat. The AI and Shared Prosperity Initiative displays a certain naivete about how business and regulation work in the US. Proposing a scheme in which all developments in AI are tested to preserve jobs is 1) probably unenforceable, if not undefinable, and 2) beside the point: it's more critical that AI contribute to a vibrant and fair world.