After three years of speaking, writing, and training in AI Ethics, the organizations I work with report that many students come back with a good understanding of the elements of, and remedies for, ethical issues. But they continue to work as before. My observation is that this is the result of several factors:
- An overemphasis on ethics per se, rather than on the pragmatics of how to do this work. The problem is compounded by the many emerging declarations of "principles," which lack practical guidance.
- Teaching people to identify, before they start, the insidious problems that engender ethical traps is more effective than dealing with outcomes at the inference stage. Teaching "ethics" is only marginally helpful. Perhaps we should drop the word "ethics" altogether.
- The inevitable conflict between applying ethical concepts and the organization's pressure to go faster and/or cut corners. Hire people with the forbearance to resist company pressure.
Part of the problem is the term "ethics" itself. It's too ineffable for most people to grasp; it harks back to Socrates or the church. Some groups have tried "trustworthy" instead. In his article "Artificial Intelligence and Legal Liability," John Kingston asks, "When an AI Finally Kills Someone, Who Will Be Responsible?" That is a question of civil or criminal liability, which will surely be addressed by government judicial systems.
When an engineer is tasked with building AI models for radioactive waste storage, how broad should the ethical review be? I try to explain to corporate AI engineers that it is insufficient to address only the immediate ethical concerns of their model, such as bias or privacy. They need to look over the horizon at how it will propagate. Too often, engineers truly understand their ethical responsibility in principle but fail to see the bigger picture.
Not all AI work is done in commercial companies. First, you have to consider who is developing AI applications. If you ask the technology vendors, they'll tell you it's the enterprise, and that's where most of our analysis focuses. But there are also applications in defense, intelligence, NGOs, and all sorts of other big AI you don't see, each with a different set of ethical issues. It's distributed and diffuse.
AI work at Google, for example, can be highly theoretical and distanced from its effects on people: TensorFlow is a machine learning platform, not an AI application to sell more shoes. Microsoft struggles to find a way to use its facial recognition technology without abusing people's privacy. If I run a defense operation, one ethical requirement is not to let anyone hack into my systems and cause mayhem. On the other hand, perhaps I'm using AI technology to make better Hellfire missiles to kill people. If I'm a lender, an ethical requirement is not to discriminate against protected classes, but in 2008-2009, an emergent "protected class" was the millions of people being foreclosed on.
It's a no-brainer that the exploding use of AI in policing is frightening. But is there some relative degree of ethics at play? It is widely claimed that predictive policing algorithms are racist. It is also widely claimed that they reduce crime. The current thrust of predictive policing initiatives is based on convincing arguments and anecdotal evidence rather than systematic empirical research. A racist algorithm can be fixed, but how many innocent people are affected before it is? Compare this to how the working poor are unfairly rated for car insurance (sometimes paying double or triple what you pay) because of a clear correlation between low FICO scores and claims experience, even though there is no causal relationship: poor people have low FICO scores because they're poor, not because they represent an increased hazard.
Predictive policing is dramatic, but fixing mundane things may have a much more positive effect on many more people. If you can't afford car insurance (which is mandatory in every state, a regressive tax on the working poor that primarily funds wealthy personal injury lawyers), you can't drive. And if you can't drive, you can't take the kids to a magnet school, get to work, or reach a reasonably priced grocery store. People don't have poor credit because they're poor; they're poor because they have poor credit: no mortgage, no credit cards, no conventional auto financing, but plenty of rent and high interest rates.
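The correlation-without-causation claim above can be illustrated with a tiny simulation. All numbers below are hypothetical, chosen only to show the mechanism: a confounder (here called "hardship") that drives both low FICO scores and higher claims produces a strong correlation between the two, even though neither causes the other.

```python
import random

random.seed(0)

# Hypothetical illustration: "hardship" lowers FICO scores and, separately,
# raises insurance claims. FICO itself has no causal effect on claims,
# yet the two end up strongly correlated in the data.
n = 10_000
hardship = [random.random() for _ in range(n)]
fico = [700 - 200 * h + random.gauss(0, 20) for h in hardship]
claims = [0.10 + 0.30 * h + random.gauss(0, 0.05) for h in hardship]

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(corr(fico, claims))  # strongly negative, despite no causal link
```

An insurer's model trained on this data would happily rate low-FICO drivers as high risk; the correlation is real, but the causal story behind it is not.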
When AI is aimed at people, we call that the social context. Ethical practices are vital for AI because it's personal. Before big data and data science, researchers would categorize people into segments like "tofu lovers with some college" or "evangelical Christians." Today, though, the number of inferences per second is massive, at an individual level, potentially causing harm before you know it.
What is different today is the capability to produce errant models and inferences and put them into production at a scale that is orders of magnitude greater than anything before, compounding the potential negative outcomes.
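One concrete way to catch an errant model before production scale amplifies it is a simple disparate-impact check. The sketch below uses hypothetical decisions and group labels (not any particular lender's data) to compute the "four-fifths" ratio that US employment-selection guidance treats as a red flag:

```python
def disparate_impact_ratio(approvals, group):
    """Ratio of approval rates between the least- and most-approved groups.

    approvals: list of 0/1 model decisions
    group:     parallel list of group labels (e.g. "A", "B")
    """
    rate = {}
    for g in set(group):
        decisions = [a for a, gg in zip(approvals, group) if gg == g]
        rate[g] = sum(decisions) / len(decisions)
    return min(rate.values()) / max(rate.values())

# Hypothetical decisions from a lending model for two groups of applicants.
approvals = [1, 1, 1, 0, 1, 0, 1, 1,  1, 0, 0, 0, 1, 0, 0, 1]
group = ["A"] * 8 + ["B"] * 8

ratio = disparate_impact_ratio(approvals, group)
print(round(ratio, 2))  # -> 0.5 (group B approved at half group A's rate)
print(ratio >= 0.8)     # -> False; below the four-fifths threshold
```

A check like this takes minutes to run before deployment; fixing the same disparity after millions of automated decisions does not.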
This is exacerbated by a shift in sources, from internal "operational exhaust" to a myriad of external, non-traditional data that is often poorly vetted. Whatever term we use, "ethics" or some variant, the clear need is to understand the source of, and remedy for, these problems, not to dwell on the minutiae of ethics. If you want ethical AI, don't pray that people can take a class and figure it out. Instead, seed your group with people who have the fortitude to withstand the pressure to relax the rules: people with virtues.
You heard me. Virtues. The Ancient Greeks called them the Cardinal Virtues: Prudence, Fortitude, Temperance, and Justice. That's a good start.
Ethics principles and checklists are no longer sufficient. A focus on ethics as the driving subject is not delivering the expected results. Situation-sensitive behavior must be grounded in virtues and personality dispositions. It doesn't require a cognitive psychologist; people who have these qualities are easy to spot. They have virtues and operate with sensitivity to individual situations. By their nature, organizations place people in positions where they must decide either to "toe the line" or to stand firm on their values.
Ethics review boards
I have almost completely dropped this topic from my workshops because ethics review boards are not helpful:
- They don't scale. As various groups in an organization grow their AI portfolio, how can an Ethics Review Board keep up?
- They tend to be bureaucratic, slowing progress without a clear mandate.
- They have no teeth, just text.
Professional societies - with teeth
Some professional societies can be very effective: Professional Engineers, for example. Another I'm familiar with is the Society of Actuaries (one of many actuarial societies around the world). These societies have teeth. It takes 6-10 years to pass all of the actuarial exams (some geniuses do it in two).
When I took them, the FIRST exam covered three semesters of calculus and one semester of linear algebra. That was the easy one. But get one DUI and your case will be printed in the society's newsletter, and you will lose, for the rest of your life, the credentials you struggled so hard to earn (and they are hard). Professionalism, ethics, and code of conduct are all part of mandatory continuing education.
Here's what Gisele Waters, PhD (@EthicalBAU on Twitter), had to say:
AI readiness isn't just a tech issue; it's socio-technical, too: social context matters. When an engineer designs something for others to use, the use of that tech in its social context matters. Engineers make things; how and why they make them matters. Plus, it needs to be taught to a wide range of people. My PhD work hopes to help a wide variety of PhDs understand how their work may intersect with AI. Even if they don't plan to use AI themselves, they need to future-proof their work in case someone else applies AI to it.
"The invention of the ship was also the invention of the shipwreck." Ethics, or trustworthy AI, requires more than a position paper. It's a complicated subject that requires training and monitoring to keep you from foundering on the rocks. Some superficial ethics experts offer only ethics material but provide insufficient guidance in applying it to AI work; they generally have no experience developing AI applications. The field of AI Ethics has been diluted with too much theory and not enough practicum.