Unethical AI unfairly impacts protected classes - and everybody else as well

Neil Raden - September 16, 2020
Summary:
We've established that unethical AI hurts protected classes - but it doesn't stop there. Across industries and regions, unethical AI can impact the entire population. Here are some questions to consider.


There are well-documented examples of AI systems making decisions that affect protected classes in areas such as housing assistance or unemployment benefits. AI can be used to screen resumes; banks apply AI models to grant individual consumers credit and set their interest rates.

Many small decisions, taken together, can have large effects; for example, AI-driven price discrimination could lead to certain groups in a society consistently paying more. But are there AI applications today that affect everyone, no matter their "class"?

Let's start with deepfakes:

  • Deepfakes are manipulated videos or other digital representations produced by AI that yield fabricated images and sounds that appear to be real. 
  • Personal reputations, relationships and livelihoods can be destroyed by deepfakes.
  • The gender discrimination driven by deepfake abuse cuts across age, income and education brackets. 

As I mentioned earlier, we are shifting our AI Ethics courses toward more practical, useful techniques. Along the way, we came across a technique that can help spot deepfakes: Benford's Law.

Benford's Law of anomalous numbers, or the first-digit law, is an observation about the frequency distribution of leading digits in many real-life numerical data sets. A picture or video is, underneath, just a set of numbers. Find the distribution of the leading digits 1-9, and if it doesn't follow the Benford curve (the leading digit 1 should appear roughly 30% of the time), that is a strong signal the content may have been manipulated. I mention this because you have two choices with deepfakes: declare that "deepfakes are unethical," or apply something like Benford's Law to detect them. Useful, practical guidance is best. 
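
Here is a minimal sketch of that check in Python. It is illustrative only: the function names and synthetic data are mine, and in practice deepfake forensics applies the test to features extracted from the media (for example, DCT coefficients of the compressed frames) rather than to raw values.

```python
import math
import random
from collections import Counter

# Benford's Law: P(leading digit = d) = log10(1 + 1/d), for d = 1..9
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    """Return the first significant digit of x, or None if x is zero."""
    if x == 0:
        return None
    return int(f"{abs(x):.6e}"[0])

def benford_divergence(values):
    """Chi-square-style distance between the observed leading-digit
    frequencies and the Benford expectation (smaller = closer fit)."""
    digits = [d for d in (leading_digit(v) for v in values) if d is not None]
    if not digits:
        return float("inf")
    counts = Counter(digits)
    n = len(digits)
    return sum((counts.get(d, 0) / n - p) ** 2 / p for d, p in BENFORD.items())

if __name__ == "__main__":
    # Growth-like data spanning several orders of magnitude tends to follow Benford's Law...
    natural = [random.lognormvariate(0, 2) for _ in range(10_000)]
    # ...while uniformly generated numbers generally do not.
    uniform = [random.uniform(1, 9.99) for _ in range(10_000)]
    print("lognormal divergence:", round(benford_divergence(natural), 4))
    print("uniform divergence:  ", round(benford_divergence(uniform), 4))
```

The lognormal sample hugs the Benford curve while the uniform sample does not - the same kind of divergence that can flag fabricated numbers or manipulated content.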

Gerrymandering - can algorithms change the equation?

The question was: can AI affect all groups of people? One area where AI can make a substantial positive difference is gerrymandering. What is political gerrymandering? It is a technique used by the majority party in a state legislature to draw district lines to maximize its seats in Congress.

Districting can influence election outcomes, because determining which voters are in which districts can alter a district's majorities (see: Political Gerrymandering Explained - Subscript Law). When the party in control of the map-drawing process draws the lines to its own advantage, to the detriment of the disfavored party, it engages in political gerrymandering. Sometimes, mapmakers get so specific with their carving that the district shapes end up looking bizarre, like Pennsylvania's old Seventh District.

A team of computer scientists has come up with a new algorithmic approach to redistricting that's potentially less political and more mathematical.

Check out Wendy Tam Cho and Bruce Cain's work in the latest issue of Science, which has a special section dedicated to democracy. Cho and Cain claim mathematics can solve the problem. Cho, who teaches at the University of Illinois at Urbana-Champaign, has been pursuing computational redistricting for years. Just last year, Cho was an expert witness in an ACLU lawsuit in which Ohio's gerrymandered districts were ruled unconstitutional. Cho emphasized that although automation has potential benefits for nearly every state process, "transparency within that process is essential for developing and maintaining public trust and minimizing the possibilities and perceptions of bias."
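
To make the idea concrete, here is a toy sketch of the ensemble approach behind computational redistricting. It is emphatically not Cho and Cain's method - their work runs on supercomputers against real legal criteria - and the grid of precincts, the vote shares and the region-growing heuristic below are all invented for illustration. The idea: generate many population-balanced, contiguous maps with no partisan input, then see how many seats a party typically wins across that neutral ensemble; a proposed map whose seat count falls far outside the ensemble is a candidate gerrymander.

```python
import random

W, H, K = 6, 6, 4          # a 6x6 grid of equal-population precincts, 4 districts
random.seed(3)
# hypothetical party-A vote share in each precinct
share_A = {(x, y): random.uniform(0.35, 0.65) for x in range(W) for y in range(H)}

def neighbors(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H:
            yield (nx, ny)

def random_plan():
    """Grow K contiguous districts from random seed precincts, always
    extending the smallest district that still has room to grow."""
    seeds = random.sample(list(share_A), K)
    assignment = {cell: i for i, cell in enumerate(seeds)}
    while len(assignment) < W * H:
        sizes = [list(assignment.values()).count(i) for i in range(K)]
        for i in sorted(range(K), key=lambda d: sizes[d]):
            frontier = [n for c, d in assignment.items() if d == i
                        for n in neighbors(c) if n not in assignment]
            if frontier:
                assignment[random.choice(frontier)] = i
                break
    return assignment

def balanced(assignment, slack=1):
    """Population balance: every district within `slack` precincts of ideal."""
    sizes = [list(assignment.values()).count(i) for i in range(K)]
    return all(abs(s - W * H / K) <= slack for s in sizes)

def seats_A(assignment):
    """Number of districts where party A's average vote share exceeds 50%."""
    totals = {i: [] for i in range(K)}
    for cell, d in assignment.items():
        totals[d].append(share_A[cell])
    return sum(1 for shares in totals.values() if sum(shares) / len(shares) > 0.5)

# Build a neutral ensemble of balanced plans and tally party A's seats.
ensemble = []
while len(ensemble) < 300:
    plan = random_plan()
    if balanced(plan):
        ensemble.append(seats_A(plan))

print("Party A seat counts across 300 neutral plans:",
      {s: ensemble.count(s) for s in sorted(set(ensemble))})
```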

Where are the ethical traps here? The algorithms can draw thousands of maps, based on the criteria the governing party favors. And there is precedent for an AI-driven government application becoming an epic disaster - COMPAS, the Correctional Offender Management Profiling for Alternative Sanctions tool. It was massively flawed, biased and impenetrable. Because the algorithms it uses are trade secrets, they cannot be examined by the public or by affected parties, which may violate due process.

Your digital footprint is an AI ethics problem

Why is ethics so important now with AI? Wherever there is a social context - anything involving people - ethical questions are necessary because the decisions become personal. Before big data and data science, researchers categorized people into cohorts, such as tofu lovers with a college degree, or evangelical Christians. There wasn't enough data available at the individual level to draw inferences about a single person. Even when evaluating a single person for credit or life insurance, the few available characteristics were used to compare that person with a larger group.

What is different today is an avalanche of intimate, personal detail, exacerbated by a shift in sources from internal "operational exhaust" to a myriad of external, non-traditional data, such as pictures and videos that are not even vetted. In the wrong hands, with the wrong model, this data can wreak havoc on people's lives. The capability to produce errant models and inferences, and to put them into production at a scale orders of magnitude greater than anything before, compounds the potential adverse outcomes.

Today, your "digital footprint," information about you on the internet, is so enormous that it is estimated the growth of your personal data on the internet is two megabytes per second.  Because of AI, data is processed at a rate of billions of inferences per second on an individual level, creating an exquisitely intimate picture of you, continuously updated.

Nuclear waste disposal - the perils of risk assessment

Hundreds of models were developed to determine the safety of the Waste Isolation Pilot Plant near Carlsbad, New Mexico. I've written previously about my involvement in this effort: the U.S. Department of Energy's Waste Isolation Pilot Plant.

The Waste Isolation Pilot Plant (WIPP) is the nation's only deep geologic long-lived radioactive waste repository. Located 26 miles southeast of Carlsbad, New Mexico, WIPP permanently isolates defense-generated transuranic (TRU) waste 2,150 feet underground in an ancient salt formation. TRU waste began accumulating in the 1940s with the beginning of the nation's nuclear defense program. As early as the 1950s, the National Academy of Sciences recommended deep disposal of long-lived TRU radioactive wastes in geologically stable formations, such as deep salt beds. Sound environmental practices and strict regulations require such wastes to be isolated to protect human health and the environment.

Bedded salt is free of fresh flowing water, easily mined, impermeable and geologically stable - an ideal medium for permanently isolating long-lived radioactive wastes from the environment. However, its most important quality in this application is how salt rock seals all fractures and naturally closes all openings.

We determined that it was safe: the probability of any TRU escaping over the next 10,000 years was so small that it was considered negligible. Were we ethical? Did we operate with virtue? Waste emplacement operations were suspended in February 2014, following two unrelated events. On February 5, a salt haul truck caught fire. On February 14, a waste drum breached, releasing plutonium into the environment. The ethical mistake was being so confident in our models that we failed to consider the risk of transporting TRU 400 miles from Los Alamos to WIPP, the populations at risk from an accident along the way, and the possibility of a gross error at Los Alamos in packing the TRU for transport.
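
A back-of-the-envelope Monte Carlo sketch shows how that kind of blind spot works. The probabilities below are invented for illustration - they are not the actual WIPP performance-assessment figures - but they capture the shape of the mistake: if the model scopes out a whole pathway, such as transport and packing, the estimated risk looks negligible even when the real risk is dominated by what was left out.

```python
import random

random.seed(1)
TRIALS = 1_000_000

# Hypothetical per-history probabilities of a release over the assessment period
P_GEOLOGIC_RELEASE = 1e-6     # repository breach via the geologic pathway
P_TRANSPORT_ACCIDENT = 5e-4   # release during one of many truck shipments
P_HANDLING_ERROR = 2e-4       # packing/handling error at the generator site

def simulate(include_out_of_scope_pathways):
    """Estimate the probability of at least one release per simulated history."""
    releases = 0
    for _ in range(TRIALS):
        release = random.random() < P_GEOLOGIC_RELEASE
        if include_out_of_scope_pathways:
            release = (release
                       or random.random() < P_TRANSPORT_ACCIDENT
                       or random.random() < P_HANDLING_ERROR)
        releases += release
    return releases / TRIALS

print("Modeled risk (geologic pathway only):  ", simulate(False))
print("Risk with transport/handling included: ", simulate(True))
```

With these made-up numbers, the scoped-down model reports a risk on the order of one in a million, while the fuller model's risk is hundreds of times larger - and almost all of it comes from the pathways the original analysis never considered.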

My take

AI Ethics as a practice stresses unfairness to protected classes - the poor, minorities, women, people with disabilities - but in fact there are far more instances of ethical problems that affect the entire population. The principles, directives and guidance documents issued by governments, organizations and even companies have been largely unaware of these broader issues, and instruction in AI Ethics is similarly lacking.
