The AI ethics review - eight sticking points we haven't resolved

By Neil Raden, April 3, 2020
Summary:
AI tech is moving quickly - but the ethical problems aren't going away. Here are eight AI ethics issues that persist.


Well over three years ago, I started to research and write about AI ethics. Den Howlett of diginomica interviewed me about the topic in a September 2018 article - Can AI be bounded by an ethical framework?

I have since written about various aspects of AI and AI ethics for diginomica. Though I stand by the principles in those pieces, they are neither comprehensive nor completely current.

AI is moving so fast that new ethical issues keep surfacing. It is time to revisit the subject, first by commenting on what has materially changed in the last few years and what ethical issues have arisen.

Specifically, the White House Office of Science and Technology Policy (OSTP) announced 10 Principles for Stewardship of AI Applications (PDF). These are important indications of the federal government's direction. Some U.S. states have already put a stake in the ground, not to mention the EU and the Organization for Economic Co-operation and Development (OECD), an intergovernmental organization of 37 countries focused on economic policy and world trade. OSTP probably shouldn't have bothered: there was nothing new or even provocative in the ten principles, and the document has no teeth. If anything, it most resembled similar doctrines from China.

Eight AI ethics issues to confront

Ethical issue #1 - How can organizations follow an ethical path with AI when the central government gives no guidance? The State of Washington just signed landmark facial recognition legislation into law. According to the WSJ, "Washington state adopted a Microsoft Corp-backed law enshrining the most detailed regulations of facial recognition in the US, potentially serving as a model for other states as use of the technology grows." But should these issues be left to be addressed state by state?

Ethical issue #2 - Should a mega-tech company be writing legislation about a controversial AI application? Not everyone is comfortable with this, according to the WSJ article. Some feel the bill gives the state too much leeway. One provision allows police to use the technology without a warrant if "exigent circumstances exist."

I covered data bias in a report, Ethical Uses for Artificial Intelligence for Actuaries (PDF), sponsored by the Society of Actuaries, but not elsewhere, so it needs some explaining here. Data is an ethical problem, and always has been. Businesses should make every effort to minimize risks from data, especially when the data comes from a third party - even Data.gov or CDC.gov - because data on its own has no context. How it was recorded, and under what logic, is not apparent from looking at a table.

There must be transparency around lineage, acquisition methods, and model assumptions, both initially and on an ongoing basis as the data changes. There must be mandated security procedures to prevent loss from tampering and introduction of malware - all reinforced by comprehensive rights to audit, seek injunctive relief and terminate. The problem is that data brokers are mostly unwilling or unable to provide this.
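To make that more concrete, here is a minimal sketch of what a lineage record attached to a third-party dataset might capture. The structure and field names are my own assumptions for illustration, not an industry standard:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical lineage record for a third-party dataset.
# Field names are illustrative assumptions, not a standard.
@dataclass
class DatasetLineage:
    source: str                      # e.g. Data.gov, CDC.gov, or a commercial broker
    acquired_on: date                # when this snapshot was obtained
    acquisition_method: str          # API pull, bulk download, manual export, etc.
    collection_context: str          # how, and under what logic, the data was recorded
    known_assumptions: List[str] = field(default_factory=list)
    audit_rights: bool = False       # contractual right to audit the provider?
    last_reviewed: Optional[date] = None  # must be refreshed as the data changes

record = DatasetLineage(
    source="hypothetical-broker-x",
    acquired_on=date(2020, 3, 1),
    acquisition_method="bulk CSV download",
    collection_context="self-reported, opt-in survey panel",
    known_assumptions=["panel skews urban", "income is banded, not exact"],
    audit_rights=True,
    last_reviewed=date(2020, 3, 15),
)
print(record.source, record.known_assumptions)
```

Even a record this simple forces the questions - where did this come from, how was it recorded, and when was that last checked - that a raw table never answers on its own.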

Ethical issue #3 - AI engineers, data scientists and predictive modelers crave new data. There is an aching desire to try all kinds of data to see if they can improve their models. The issue is that many data brokers are unscrupulous, and developers are wittingly or unwittingly poisoning their models with bad data. The problem is even worse when the data is reliable but the motivations of the modelers are less than pristine. Example: a health insurance underwriter used information about the type of car purchased to screen out potentially pregnant applicants, because pregnancy is still classified as a "disease" to health insurers.

Some decisions cannot be made by matching against known patterns. According to Vegard Flovik in "How do you teach physics to machine learning models?":

If there is no direct knowledge available about the behavior of a system, it is not possible to formulate any mathematical model to describe it in order to make accurate predictions.

In Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge, Emmanuel de Bezenac adds that the most prevalent use of AI (outside defense and intelligence, where it is not possible to gauge its breadth) is in targeted selling. The reason, as James Taylor and I pointed out in 2007, is that "little decisions add up." In sales, the cost of being wrong is almost zero and the value of being right is high, so AI can be wrong a lot of the time and still do well.

Ethical issue #4 - The people building AI are not sophisticated enough to engineer in domain expertise. Here is the big potato: job loss from automation. One school of thought is that most jobs have unseen complexities that currently require a human in the loop, such as types of data a machine can't cope with, or the person who remembers birthdays with thoughtful presents. That subtlety and finesse are never written down. Automation can only go as far as the AI engineer's understanding of the job.

In many periods of realignment, organizations found that staff made redundant had been responsible for many tasks that were never recognized. AI obviously cannot replace things it isn't aware of. However, learning AI watches and learns, so over time more of the work is done by the machine than by the person - you get a mix of human agents and cognitive robots working together, with the robot's share of the work growing as the human's shrinks.

Ethical issue #5 - The good intention is that AI will augment workers rather than replace them. This overlooks the learning aspect of AI, and organizations may fail to plan for the situation where the employee actually becomes redundant. And why is all this on the employer? Because AI, as a machine, doesn't have an ethical framework; we have to give it one.

If you put in enough data at the right level of quality, the AI will eventually become very good at spotting a pattern, and it can tell you about it. That may be good for recommending products to buyers, but for more complex problems, the question of what to do next is hard. AI cannot, at this point, tell you what to do next. The only way to get there is through modeling and simulation. Data never speaks for itself. With ML, the action is not learned; it is predetermined: "If you see this pattern, perform this action."
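As a toy illustration of that last point (a minimal sketch using scikit-learn, with a hypothetical action table of my own invention), the model only learns to recognize the pattern; the response to the pattern is hard-coded in advance by a person:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training data: features mapped to pattern labels (this part is learned)
X_train = [[0, 1], [1, 1], [1, 0], [0, 0]]
y_train = ["churn_risk", "churn_risk", "loyal", "loyal"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# The actions are NOT learned - they are predetermined by a human.
actions = {
    "churn_risk": "send retention offer",
    "loyal": "no action",
}

pattern = model.predict([[1, 1]])[0]   # the model spots the pattern...
print(actions[pattern])                # ...but the response was decided in advance
```

The machine's contribution ends at the pattern; deciding what to do about the pattern remains a human design choice, which is exactly where modeling and simulation come in.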

Ethical issue #6 - Understand the limits of what the AI can tell you. Conway's Law: organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations. In implementing AI solutions, developers must be aware that people are diverse and complex and live within groups and cultures.

Ethical issue #7 - Using AI ethically ought to reflect that diversity. "Fairness" isn't uniform; there are different versions of it. The emergence of federated learning, on the one hand, has positive implications for privacy, but on the other, is likely to exacerbate the explainability issue. I wrote about this in Federated machine learning is coming - here are the questions we should be asking.
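For a sense of why the privacy benefit and the explainability problem arrive together, here is a minimal sketch of federated averaging with made-up numbers (not any particular vendor's implementation): each participant's raw data stays where it is and only model weights are shared, but the resulting global model is a blend that is hard to trace back to any one contribution.

```python
import numpy as np

# Hypothetical local model weights from three participants; in federated
# learning each set is computed on data that never leaves the device.
local_weights = [
    np.array([0.21, -0.40, 0.05]),
    np.array([0.18, -0.35, 0.11]),
    np.array([0.25, -0.42, 0.02]),
]
samples_per_client = np.array([120, 300, 80])  # illustrative sample counts

# Federated averaging: the server sees only weights, never raw records -
# good for privacy, but the blended model is harder to explain or audit.
total = samples_per_client.sum()
global_weights = sum(w * n for w, n in zip(local_weights, samples_per_client)) / total
print(global_weights)
```

The server never holds anyone's records, which is the privacy win; but when the averaged model makes a questionable call, there is no single dataset to point to when explaining why.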

Ethical issue #8 - It is too easy to be lulled into giving up your personal information. In fact, it is too easy for bad actors to snatch your data when you're not looking. Federated learning is a powerful idea for distributed applications and data, but first movers have chosen your medical data as a testbed. Tread carefully.

This is hardly a complete list - so it will be a recurring series.