Do we understand the importance of ethical technology?

By Den Howlett, October 17, 2018
Summary:
Understanding the frameworks around which ethical models are built helps contextualize the often confusing world of ethical technology. Here is my stab at a view of the landscape and what should happen next.

[Image: robot hand, trust in AI and machine learning © zapp2photo]
Any discussion about ethical technology is fraught with risk. One person's good is another person's bad yet this is a topic that is bubbling just beneath the surface on multiple fronts.

One of the main problems is interpreting definitions. For example, the Oxford English Dictionary defines ethics as follows:

Moral principles that govern a person's behaviour or the conducting of an activity: 'medical ethics also enter into the question'

Schools of ethics in Western philosophy can be divided, very roughly, into three sorts. The first, drawing on the work of Aristotle, holds that the virtues (such as justice, charity, and generosity) are dispositions to act in ways that benefit both the person possessing them and that person's society. The second, defended particularly by Kant, makes the concept of duty central to morality: humans are bound, from a knowledge of their duty as rational beings, to obey the categorical imperative to respect other rational beings. Thirdly, utilitarianism asserts that the guiding principle of conduct should be the greatest happiness or benefit of the greatest number.

‘a code of ethics’

Got your head around any of that? I'm not sure I have, because terms like 'moral principles' are open to a variety of interpretations. And what about 'duty as rational beings' in Kant's school of thought?

Two positions, equally valid

In my view, it is these foundational differences and uncertainties that allow apparently opposing views on ethical technology to coexist, each making sense in its own context. So for example, Amazon CEO Jeff Bezos' position regarding the DoD JEDI bid has an air of authority when he says:

“One of the jobs of a senior leadership team is to make the right decision even when [it] is unpopular,” Amazon CEO Jeff Bezos said Monday at the WIRED25 summit. “If big tech companies are going to turn their back on the DOD, then this country is going to be in trouble.”

Bezos is playing on the Kantian expression of duty and, in that sense, is perfectly justified in his position.

For its part, Google has famously withdrawn from the JEDI bid citing 'ethical concerns' but also saying it was not certain it could meet the JEDI security requirements. Which carried the greater weight? We cannot know, but you can be sure that Google was playing to its developer employee crowd, who have been vocal in their condemnation of Google's part in U.S. government-related AI work.

In discussing Microsoft's position on JEDI, CEO Satya Nadella said:

We're an American company and a multinational company. We fundamentally rely on the trust in American values. We fundamentally rely on our form of government to engender trust in everything that we do, not just in the United States but across the world.

Again, this applies the Kantian school to ethical technology, despite the fact that anonymous Microsoft employees allegedly posted to Medium in opposition to Microsoft's bid. Note their use of language:

The contract is massive in scope and shrouded in secrecy, which makes it nearly impossible to know what we as workers would be building. At an industry day for JEDI, DoD Chief Management Officer John H. Gibson II explained the program’s impact, saying, “We need to be very clear. This program is truly about increasing the lethality of our department.”

Many Microsoft employees don’t believe that what we build should be used for waging war. When we decided to work at Microsoft, we were doing so in the hopes of “empowering every person on the planet to achieve more,” not with the intent of ending lives and enhancing lethality. For those who say that another company will simply pick up JEDI where Microsoft leaves it, we would ask workers at that company to do the same. A race to the bottom is not an ethical position. Like those who took action at Google, Salesforce, and Amazon, we ask all employees of tech companies to ask how your work will be used, where it will be applied, and act according to your principles.

This is representative of a purist, more Aristotelian position. Is it any more (or less) valid than the positions of Microsoft's and Amazon's CEOs? I don't think so, but it sets up a conundrum that is difficult to resolve.

At the state level, and especially for a power as dominant as the U.S., I can readily see the arguments made by Nadella and Bezos as representative of what they expect to be in the best interests of the country from an ethical technology perspective. In that context, it really doesn't matter who is sitting in the White House. But it does matter what Gibson means by 'increasing the lethality of our department.' War against whom, and under what circumstances, might be useful questions given recent history.

Ethical technology - guiding principles

These kinds of argument are spilling over into more practical fields related to artificial intelligence, where the expectation is that AI-related disciplines will enhance human capabilities. But as we've already seen on numerous occasions, it is far from clear how some algorithms arrive at their conclusions. As Neil Raden said in a recent and widely read discussion:

I start from certain guiding principles, one of which is that if an algorithm is seeking to take over a task that would have been done by a human where there is a social context, then the algorithm takes on those social attributes. To the best of my knowledge, there’s no AI programmer or engineer who knows how to codify ethical behavior requirements into a machine but we’re racing forward with applications.

Second, you need to explicitly define ethical behavior and that’s something where even in academic circles they’ve been struggling for many years.

Then you have to look at the models. I think Judea Pearl is right when he talks about Bayesian networks as a more transparent method of understanding what the algorithm is doing rather than the black-box deep learning and machine learning systems of today.
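Raden's contrast is worth making concrete. Below is a minimal, purely illustrative sketch in Python, assuming a toy two-variable hiring model with invented names and probabilities, of why a Bayesian network is easier to interrogate than a black box: every conditional probability is written down, and every inference can be traced back to those numbers.

```python
# A toy, hand-built Bayesian network - illustrative only, with invented
# variable names and probabilities. The point is that every number the
# model uses is visible and auditable.

# Prior: P(qualified)
p_qualified = {True: 0.3, False: 0.7}

# Conditional probability table: P(hired | qualified). Unlike the weights
# of a deep network, this table can be read, questioned, and corrected.
p_hired_given_qualified = {
    True:  {True: 0.8, False: 0.2},   # qualified candidates
    False: {True: 0.1, False: 0.9},   # unqualified candidates
}

def p_hired():
    """P(hired), summing over the parent variable - every term is explicit."""
    return sum(p_qualified[q] * p_hired_given_qualified[q][True]
               for q in (True, False))

def p_qualified_given_hired():
    """Bayes' rule: P(qualified | hired), traceable factor by factor."""
    return (p_qualified[True] * p_hired_given_qualified[True][True]) / p_hired()

print(f"P(hired) = {p_hired():.3f}")                              # 0.310
print(f"P(qualified | hired) = {p_qualified_given_hired():.3f}")  # 0.774
```

Contrast that with a deep network, where the equivalent 'reasoning' is distributed across thousands of opaque weights that no one can read off in this way.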

Fragmented solutions

What are tech companies doing to solve this problem? Casting around for clues, I've discovered a variety of initiatives, none of which are wholly satisfying. In a recent Future of Work newsletter, Stowe Boyd noted:

Short Takes
LinkedIn has a new AI feature to increase diversity in hiring | Rosalie Chan reports on LinkedIn efforts to help companies hire a more diverse workforce using AI, which is a major trend (see Textio helping Cisco, Atlassian improve workforce diversity).
This AI platform aims to close the gender gap in the workforce | Yet another piece on AI being applied to deal with diversity at work, this time about Katica Roy and Pipeline.

Boyd's observations and the content to which they relate hit on two of our favorite topics - diversity and bias. Unfortunately, I see little evidence of a coordinated effort to solve these issues.

In a recent conversation with Steve Miranda, executive vice president of Oracle Applications product development, I asked how Oracle is managing bias in certain of its talent-related ML applications. He agreed there is no surefire way to avoid bias, but said that having a large pool of practitioners working on the topic across multiple industries allows Oracle to pool outcomes and determine whether ML is providing the results customers expect to see. This is a work in progress, as are many AI-related activities.
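Miranda didn't share the mechanics, and Oracle's actual tooling is unknown to me, but a minimal sketch of one widely used outcome check, the 'four-fifths rule' comparison of selection rates across groups, shows what 'determining whether ML is providing the results customers expect' can look like in practice. The data, group labels, and function names here are hypothetical:

```python
# A hypothetical bias check - the "four-fifths rule" on selection rates -
# sketched in plain Python. This is a common industry heuristic, not a
# description of Oracle's actual tooling.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs -> rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below 80% of the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Invented sample data: group A selected 40/100, group B selected 25/100.
sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 25 + [("B", False)] * 75)
print(selection_rates(sample))    # {'A': 0.4, 'B': 0.25}
print(four_fifths_flags(sample))  # {'A': False, 'B': True} - B flagged (0.625 < 0.8)
```

Pooling this kind of metric across many customers and industries, as Miranda describes, is one way to spot systematic skew that any single deployment might miss.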

PAI expands rapidly

Elsewhere, I note that a consortium of vendors founded in 2016 was recently joined by Baidu, the Chinese firm specializing in internet-related services, products, and artificial intelligence:

The Partnership on Artificial Intelligence to Benefit People and Society – known as the Partnership on AI (PAI) – was formed in 2016 by Google, Facebook, Amazon, IBM and Microsoft to act as an umbrella organisation for the five companies to conduct research, recommend best practices and publish briefings on areas including ethics, privacy and trustworthiness of AI.

In the two years since it was founded, it has grown rapidly, with more than 70 members across the private sector and academia, including Apple, which joined in 2017. But until now, it has had no representation from mainland China – although Hong Kong University’s engineering school is a member of the partnership.

The stated aims of PAI are that it will:

...conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology.

That's one heck of a menu, and I do wonder how ethical positions will play out when we already see conflicting positions among some of the charter firms.

My take

We have a long way to go, and I worry that without a framework of the kind Raden proposes, many of these consortia or one-offs like IBM's 'safe AI' initiative will end up on the scrap heap of technology experimentation. There is just too much fragmented thinking for rational positions to emerge.

None of this will be easy. It may, for example, mean pitting the moral imperatives of individuals against the practical reality of large tech vendor leadership in a battle for intellectual one-upmanship. Here I think academia needs to be brought into the fold to act as an impartial (not easy) arbiter and moderator.

In that scenario, I can envisage the emergence of a fourth school of thinking that takes the best of the Aristotelian and Kantian schools of thought. That might be naive on my part, given the dominance of the profit motive among tech companies. However, if those same tech companies are to live up to their promise of 'making the world a better place,' then I see no reason why that cannot be achieved while delivering results with which market makers can be satisfied. After all, when you're treading into uncharted territory, there is always an opportunity to set the pace. That, in my view, represents the best outcome for ethical technology considerations.

 
