It's time for AI ethics to get real

Neil Raden - November 9, 2022
Summary:
After five years of AI ethics, it's time to get real. We need better projects - and a better approach to the pitfalls of data and talent. Here's why I find the academic approach to AI ethics limiting.


I am probably going to get some heat for this. Still, I have reservations about academics, especially those with research degrees in philosophy, taking a leading role in counseling organizations on the nature of ethical and trustworthy AI.

A central aspect of developing and practicing ethical AI is understanding how things get done (or not) in organizations. 

In a previous diginomica article, I reviewed the report “Why Are We Failing at the Ethics of AI?” by Anja Kaspersen and Wendell Wallach, senior fellows at the Carnegie Council for Ethics in International Affairs. The report reflected their academic orientation but provided little understanding of the dynamics of AI development in the enterprise. Their premise on the state of AI ethics is that the proliferation of ethics initiatives over the last few years has produced only a narrow set of principles and guidelines. Few of these “ethicists” have made any real impact in modulating the effects of AI, because they fail to understand the subtleties and life cycles of AI systems and their consequences.

The fraternity of ethicists speaks, publishes, and forms its own institutes, where members grant each other awards and certificates. All the talk about ethics is simply that: talk. Discussions on AI and ethics are still primarily confined to the ivory tower. One prominent former philosophy professor, quite skilled at self-promotion, has written a book they consistently flog on social media; it conveniently contends that every organization needs an Ethics Committee, and that the essential ingredient is its chair: a Ph.D. in philosophy.

Not only is the contention that every organization needs an Ethics Committee flawed, but the claim that a philosopher is needed to chair it is also highly suspect. Yes, some oversight by an ethicist is necessary, provided their role is limited to identifying ethical dilemmas and offering some guidance. I agree that the input of ethicists can be useful. Still, academically trained ethicists are limited in two ways: 1) each typically works within a specific area of ethics, and 2) ethics is not a subject like mechanical engineering. It is discursive, and the work of ethics is typically discourse without agreement.

Or ethicists focus on some aspects of ethics while ignoring other, more fundamental and challenging elements. This is the problem known as "ethics washing" - creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening existing patterns.

Advice from ethicists that lacks the perspective gained from experience inside a commercial organization is incomplete. Employees take a few years to understand how a company actually works: funding, executive support, forming cooperative working relationships, seeking informal advice from others, preferred contractors (and whether they are a good fit for an AI project), and a host of other issues. Other employees spend a few years getting an MBA to learn about organizations at a more abstract level. Most ethicists have neither kind of experience.

There is no school for training ethicists in an organization's practice of AI ethics.

The best way for practitioners to productively channel ethical practices in their work is to blend technical skills with solid business acumen. This is essential for helping the organization you’re working for explore new business opportunities. Without it, purely ethical guidance may not discern the problems and potential challenges that need to be solved for an organization to grow.

AI is susceptible to algorithmic bias, data quality issues, inherent data bias, imperfect models, overfitting and underfitting, errors of scale, and poorly tuned hyperparameters, among many other problems. Depending on the situation, these problems can create severe ethical gaffes. 
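
To make that concrete, here is a minimal sketch (my own illustration, not anything prescribed by the report or a particular toolkit) of the kind of check that catches two of these failure modes together: an unconstrained model that overfits, and an aggregate test metric that hides how an under-represented group fares. The synthetic data, the group attribute, and the split parameters are assumptions for the sake of the example.

```python
# Illustrative sketch: overfitting plus a per-group breakdown of accuracy.
# All data here is synthetic; "group" stands in for a hypothetical protected attribute.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = rng.integers(0, 2, size=len(y))

# Make group 1 much rarer, mimicking under-representation in the training corpus.
keep = (group == 0) | (rng.random(len(y)) < 0.2)
X, y, group = X[keep], y[keep], group[keep]

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = DecisionTreeClassifier(random_state=0)   # unconstrained depth: prone to overfit
model.fit(X_tr, y_tr)

print("train accuracy:", accuracy_score(y_tr, model.predict(X_tr)))
print("test accuracy: ", accuracy_score(y_te, model.predict(X_te)))
for g in (0, 1):
    mask = g_te == g
    print(f"test accuracy, group {g}:",
          accuracy_score(y_te[mask], model.predict(X_te[mask])))
```

The point is not this particular check; it is that a purely aggregate evaluation can hide exactly the kind of problem an ethics review is supposed to surface, and the per-group breakdown is cheap to add.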

The level of ethical grounding people need to practice AI ethically does not require re-reading Aristotle, Kant, or Spinoza. If someone doesn't see the need to apply some moral thinking to their work, they shouldn't be developing decisioning systems. AI has enormous potential to be weaponized in ways that threaten privacy, regulatory compliance, the stability of your business, and your reputation, or to be deliberately maleficent. What practitioners need are offerings that stress the practical: what to do, what not to do, and how to decide when faced with uncertainty.

Existing ethics "audits" tend to begin and end with the models. A comprehensive ethical evaluation starts with data and extends beyond the nearby results, the "sequelae," the consequent effects when your vetted, bias-clean solution inadvertently creates a set of circumstances leading to unanticipated ethical issues. 

Some other issues that are beyond the usual "AI Ethics" discussion are:

  • A shift in data sources exacerbates the risk (a simple drift check is sketched after this list)
  • Most organizations still operate without a comprehensive data management infrastructure
  • There is a conflict between applying ethical concepts and the organization's strategy, the "this is how we do it" syndrome 
  • Finding practical focal points is complicated by the many emerging declarations of "principles" and guidelines from governmental and non-governmental sources, most of which lack practical guidance 
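
Here is that drift sketch. It is an assumption-laden illustration rather than a recommended tool: it uses a two-sample Kolmogorov-Smirnov test to flag numeric features whose distribution in an incoming data source no longer matches the data the model was trained on. The feature names, the simulated shift, and the significance threshold are made up for the example.

```python
# Illustrative drift check: compare each numeric feature's distribution in the
# original training data against an incoming data source.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train, incoming, names, alpha=0.01):
    """Return (name, statistic) for columns whose distributions differ significantly."""
    flagged = []
    for i, name in enumerate(names):
        statistic, p_value = ks_2samp(train[:, i], incoming[:, i])
        if p_value < alpha:
            flagged.append((name, statistic))
    return flagged

# Synthetic usage: the second feature has shifted in the new source.
rng = np.random.default_rng(1)
train = rng.normal(size=(5000, 3))
incoming = rng.normal(size=(1000, 3))
incoming[:, 1] += 0.5   # simulated shift in one data source
print(drifted_features(train, incoming, ["age", "income", "tenure"]))
```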

My take

In our practice, we observe two causes of problems more frequently than any others:

  • Data: data sources are not compatible with AI. They are composed of “operational exhaust,” data stored from your operational systems, whose context and semantics are often not well understood and are misconstrued by their structural metadata, and “digital exhaust” from your websites, blogs, and syndicated and semi-structured data. The corpus of data may be collapsed into a “data lake” or be partially or fully federated. The context of data - why the data were collected, how they were collected and transformed - is always relevant. Data cannot manifest the perfect objectivity that is imagined. (We devised a solution to this problem.) One way of carrying that context alongside the data is sketched after this list.
  • Competence: an AI team has many essential components, but in particular it requires sufficient knowledge of the domain, the tools, the function, and the environment where the application will be deployed. That argues for diversity in the team (which doesn’t necessarily mean physical variety, just the sensitivity it brings), particularly where the applications will affect the population.
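
Here is that context sketch. It is my own illustration, not the solution mentioned above; the fields and example values are assumptions. The idea is simply that "operational exhaust" or "digital exhaust" should never travel without a record of why and how it was collected:

```python
# Illustrative structure for keeping a dataset's context attached to the dataset.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetContext:
    source: str                    # e.g. a billing system ("operational exhaust")
    purpose: str                   # why the data were originally collected
    collection_method: str         # how they were collected
    transformations: List[str] = field(default_factory=list)
    known_gaps: List[str] = field(default_factory=list)

ctx = DatasetContext(
    source="web clickstream (digital exhaust)",
    purpose="ad performance reporting, not risk modelling",
    collection_method="client-side tracker, consented sessions only",
    transformations=["sessionized", "bot traffic removed"],
    known_gaps=["users with tracking blockers are absent"],
)
print(ctx)
```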

Teaching practitioners to identify insidious problems before they engender ethical traps is more effective than dealing with outcomes at the inference stage. As the French say, "Mieux vaut prévenir que guérir" - it's better to prevent than to heal. 
