Revisiting ethical AI - where do organizations need to go next?

By Neil Raden, November 17, 2020
Summary:
AI ethics are having a hard time keeping up with AI. Academic debates may be interesting, but organizations need a practical AI ethics framework. Where do we go from here?


After two or three years of researching and consulting in this burgeoning field, it seems appropriate to compile what I've found so far. This article (it's actually in two parts) is more of an op-ed than a typical article.

Over time, I de-emphasized ethics and moral philosophy as a subject. They aren't necessary to create practical frameworks for producing ethical AI, and they crowded out the prescriptive things that are necessary in the industry.

AI ethics - we need a practical framework, not an academic debate

Reviewing my early contributions, and those of others too, three things stand out: first, there have been substantial developments in AI in the past two or three years; second, those developments have raised new ethical issues; and third, I alluded to practices and remedies but did not provide a prescriptive framework for resolving these pressing issues.

There is an overemphasis on the topic of ethics, per se, rather than the pragmatics of how to do this work ethically. It's complicated by the many and emerging declarations of "principles" by governments, regulators, academics, and now, "ethicists." This bumper crop of well-meaning academics does provide needed counsel and guidance on the foundational concepts of ethics and moral philosophy. But their tendency to wade into implementation guidance is, well, unethical.

Lacking technical expertise in the discipline (AI) or experience implementing it, they are not qualified for that aspect. Yet there is a noticeable trend of ethicists interceding in these programs. Developing successful AI applications requires knowing how to navigate the fiefdoms and politics in an organization and, often, how to finesse resource holders into pushing the technology out the door. Lacking that experience, their contribution is limited.

Given the state of the world today, I suppose the more ethicists, the better - but not in the field of ethical AI, which is a multidisciplinary pursuit. Teaching people to identify insidious problems before they engender ethical traps is more effective than exercises in general ethical dilemmas. I attended a purported two-day AI Ethics class taught by a former professor of ethics. The first day was mostly spent working through "ethical dilemma" exercises. I have to admit it was excellent and challenging, but it was about 90% general ethics - far more than is needed in ethics training for AI. Teaching "ethics" is only marginally helpful because of the inevitable conflict between applying the ethical concepts and the organization's pressure to go faster or cut corners. A painfully obvious analogy: I may know that natural gas-fired burners require 9.4-11 ft^3 of stoichiometric air per 1.0 ft^3 of natural gas. But if the pilot won't stay lit, that information isn't helpful.

Part of the problem is that ethics isn't one thing, with one answer. There is a 4,000-year trail of ethics and moral philosophy. My favorites include: the Buddha, Laozi, Confucius, Socrates, Aristotle, Epicurus, Jesus, Augustine, Al-Ghazali, Maimonides, Hobbes, Spinoza, Locke, Hume, Rousseau, Kant, Bentham, John Stuart Mill, Nietzsche, Russell, Wittgenstein, Gandhi, Buber, Gödel, Niebuhr, and Peter Singer.

My position on ethicists teaching ethics is that we only need a useful framework, not a comprehensive survey of these titans of ethics and moral philosophy. If you want to study deontological ethics or utilitarianism, that's fine - I've dabbled in some of these myself - but it isn't necessary for making progress with AI ethics.

What are the obvious ethical issues with AI?

At a high level, the ethical concepts are conscience, choice, honor, value, integrity, morality, principles, honesty, right, fairness, and responsibility. Most people understand these, perhaps to varying degrees. But ethics is a broad field, and when it comes to AI ethics, the areas of concern are:

  • Risks and Ethical Issues with Predictive Analytics and AI
  • Discrimination: Age, race, gender, religion, and any other categories that group people
  • Privacy: Confidential information/data protected/retained
  • Bias: Assumptions, data, code, algorithms, results
  • Unrepresentative Data: Does not represent the population you are modeling

Discrimination, privacy, and bias are the categories that need your attention. All three can derive from many causes, but a leading one is the last item: unrepresentative data. If you train a model on data that under-represents parts of the population, you will deploy an unethical model.
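
To make that point concrete, here is a minimal sketch (in Python with pandas; the column name, groups, and reference shares are hypothetical) of a pre-training check that flags groups whose share of the training data deviates noticeably from the population the model is meant to serve:

```python
# Minimal sketch: compare group shares in the training data with the
# shares in the population you intend to model. The column name and
# reference proportions below are hypothetical placeholders.
import pandas as pd

def representation_gaps(df: pd.DataFrame, group_col: str,
                        population_shares: dict, tolerance: float = 0.05):
    """Return groups whose share of the training data differs from the
    reference population share by more than `tolerance`."""
    training_shares = df[group_col].value_counts(normalize=True)
    gaps = {}
    for group, expected in population_shares.items():
        observed = float(training_shares.get(group, 0.0))
        if abs(observed - expected) > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Made-up example: an age distribution that skews young relative to the population
train = pd.DataFrame({"age_band": ["18-34"] * 70 + ["35-54"] * 25 + ["55+"] * 5})
print(representation_gaps(train, "age_band",
                          {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}))
```

A check like this proves nothing on its own, but it makes under-representation visible before the model is trained, rather than after it is deployed.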

I have not resolved how one can act ethically with AI when the organization lacks an ethical compass. Part of it is, as Brent Daniel Mittelstadt of the University of Oxford proposed in The Ethics of Computing: A Survey of the Computing-Oriented Literature, values: "ingrained dispositions to act by standards of excellence." He writes:

Not all computing professionals have a deep intrinsic interest in understanding the details of ethics. It is, therefore, important to point out that many of their practical discussions and decisions are nevertheless driven by ethical ideas and principles; thus, it is important to have a general grasp of them. This may be most obvious in cases of moral dilemmas, in which a technical choice is made in the face of competing values.

You can try to teach ethics in a class, a workshop, or from a book, but you can't truly learn it there. You learn it throughout your life. If someone doesn't see the need to apply some moral thinking to their work, they shouldn't be developing decisioning systems. AI has manifest opportunities to be weaponized in ways that threaten privacy, regulatory compliance, the stability of your business, and your reputation. I like to stress the practical: what to do, what not to do, and how to decide when faced with uncertainty.

Where we start with AI ethics - the fundamental concept of social context

Whenever you engage with people, you are in the social context. Your primary responsibility is to ask the ethical questions whenever you are in that context. For example, if you develop an AI application to arrange a closet with 400 pairs of shoes, there is no ethical issue. If you are designing an autonomous underground drilling machine, there is no social context. But if it's an autonomous car… well, there is the social context in spades. But some things get a little tricky.

Suppose you are an electrical engineer designing controller chips for monitoring devices at the edge. A common practice is to leave a software backdoor in the chip that can only be sealed by one person with an encrypted key. Suppose that person is compromised by some espionage, and a hostile country gains access to the backdoor and brings down the power grid of the entire country, causing misery and death. Who is responsible? Should ethical questions have been asked in the first place? If you build an AI model that compromises certain people, the responsibility is pretty straightforward. But in the case of the chip designer - whose company sells the chips to a distributor, which supplies them to a manufacturer that builds a device that is hacked and brings down the energy grid - who is responsible?

Keep that question in mind for when we get to responsibility. 

What parts of AI do you need to think about?

What you need to know concerning AI Ethics:

  • Algorithms designed to repeat in high volume create uniformity. There is an aching desire to have them fire a million times, because it costs almost nothing, as opposed to, for example, having a team of HR analysts screening resumes.
  • Products you use with embedded AI must be considered.
  • Data sourced from outside your organization, and the complexity of blending multiple data sources.
  • The "social context" - what are you doing to people?
  • Fairness: for example, the use of FICO scores, zip codes, or incomplete harvested prescription (Rx) data (see the sketch after this list).
  • Subsequential bias
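
To make the fairness item concrete, here is a hedged sketch (Python with pandas; the feature, group labels, and numbers are all hypothetical) of a simple demographic-parity check - comparing positive-outcome rates across the groups that a feature like zip code can stand in for:

```python
# Minimal sketch of a demographic-parity check: compare positive-outcome
# rates across groups. The column names and data are hypothetical.
import pandas as pd

def parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest positive-outcome rate
    across groups; 0.0 means every group is treated at the same rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Made-up example: loan approvals by zip-code cluster
decisions = pd.DataFrame({
    "zip_cluster": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved":    [1,   1,   0,   1,   0,   0,   0,   1],
})
print(decisions.groupby("zip_cluster")["approved"].mean())
print("parity gap:", parity_gap(decisions, "zip_cluster", "approved"))
```

A gap like this does not prove discrimination by itself, but a feature such as a zip code that tracks protected groups deserves exactly this kind of scrutiny before and after deployment.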

Embedded AI is an essential aspect of your ethical AI posture. If you buy, rent, or borrow software with embedded AI, it is your responsibility to understand what it is doing and what data it uses. This isn't easy, but it is all about responsibility. In my consulting practice in this area, that point gets a lot of resistance: "How can we be responsible for something we didn't make?" Well, you didn't make your car, but you are responsible for it.

Subsequential bias - suppose you put an AI app into production. What if your perfect model inadvertently created an environment that fosters bias? For example, a symphony orchestra was criticized for not having any women in the orchestra. They devised a solution: auditions would be blind and recorded, and played into a machine learning algorithm that identified 300 attributes of musicianship and created clusters of desirable and undesirable qualities in the audition. It worked. New hires averaged 55% women and 45% men. Problem solved. Not exactly. It turned out the women were consistently paid less than the men, and the orchestra's older men did everything they could to block women from progressing to the first seat.

Dressing rooms were assigned by seniority, and women complained about pests in theirs. In other words, the developers of the unbiased solution failed to consider what would happen if they succeeded, failing to look over the horizon - "snatching defeat from the jaws of victory." Just in case you're not convinced how serious the AI ethics problem is, here is a snippet from Fast Company, "What HBR Gets Wrong About Algorithms and Bias":

The Verge investigated software used in over half of U.S. states to determine how much in benefits people receive. After its implementation in Arkansas, many people had their benefits drastically cut.

A woman with cerebral palsy who needs an aide had her hours of help suddenly reduced by 20 hours a week. She couldn't get any explanation for why her healthcare was cut. A court case revealed that there were mistakes in the algorithms that biased them against people with diabetes or cerebral palsy.

The creator of the algorithm, a professor earning royalties, was asked whether there should be a way to communicate decisions. Here is what he said: "It's probably something we should do. I should also probably dust under my bed."

In part two of this article, I'll cover issues such as bias and data responsibility.