We are faced with these issues, even at our scale. The best example is reporting the earnings call, although the same could apply to major product announcements. We follow a number of companies and see the earnings call as providing valuable insights into topics that matter to customers: expansion into new territories, product direction, market success and momentum.
Some think the bulk of that work could be automated through technologies like those offered by Narrative Science. Others think this is a bad idea because we could end up spending more time fixing poor language use to ensure contextual validity. My view is that we should work with these technologies as a way of making our production more efficient and expansive. But that still doesn't answer the ethical question of how much we trust to technology and how much we leave to human creativity. It's a question we will have to answer for ourselves, likely through trial and error.
Much the same can be said about driverless cars, although in that debate, we already see the emergence of immoral thinking that seeks to displace the rule of law with the rule of everyone else. That logic has no place in a world where digital ethics matter.
Ethics and the world of work
Last week, Jon Reed hinted at the ethical question when thinking about the future of work as it relates to the emerging on-demand economy:
The strength of “Designing a New Operating System for Work” is its focus on humanizing work platforms. Gorbis and her colleagues are correct to insist that systems are not neutral. Issues like portable benefits and converting worker contributions into equity can be baked into platform design – if those who design such systems have good intentions, regulatory motivation, external/competitive pressures, or some combination of the three...
...The “operating system for work” paper is diminished by polishing the future of work with the sheen of flexibility and autonomy. Too many freelancers are freelancers by desperation. The “we” who invented these new ways of working are pretty darn good at creating wealth for themselves and their investors; some are good at saving companies money through reverse auctions and other cost efficiencies. But they aren’t very good at creating lucrative and rewarding work for freelancers.
I'm not sure that 'good intentions' alone can ever carry the day, but it is a starting point for discussion.
The spotlight on AI
But it is in the area of artificial intelligence (AI) that we see the most prominent debates. Last month, Cath Everett opened up that can of worms, reviewing the academic discussions currently underway at Cambridge University and elsewhere. There are deep concerns about the impact AI-augmented systems may have on the future of work and their potential to bring mass unemployment or job displacement.
People like Gerd Leonhard are very clear: it is not a question of if work will be displaced but when. The argument is based upon the wholly desirable notion of automating and robotizing anything that can be automated as the pathway to creating abundance. Leonhard recognizes, as do many of us, that when you can drive the cost of a good or service to near zero, you create the conditions for abundant availability on a global scale. That's how the Internet works. But equally, Leonhard warns that we have not yet thought about the unintended consequences that arise in these conditions.
I argue that it would not be so bad if we were only talking about the Ubers or Airbnbs of the world, but we are talking about anything and everything. I worry that automation and robotization, aided by AI, will come at us much faster than we can imagine today. I am far from convinced that we are equipped to handle the dislocations those changes will bring.
Moving sideways, Everett cites ex-Autonomy chief Mike Lynch on the topic of mortgage fraud:
For example, in the past, when particular foreign organised crime groups were undertaking mass mortgage fraud, if you asked an AI system to spot which applications were fraudulent from a pile of thousands, the machine may – in a perfectly unbiased way – tell us that individuals from certain ethnic groups are more likely to commit mortgage fraud. Of course, that is completely unacceptable as a way of screening mortgage applications, so we have to make sure that we are fitting AI into our ethical guidelines.
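Lynch's point can be made concrete with a minimal sketch. The field names and weights below are entirely hypothetical and illustrative, not drawn from any real screening system; the point is the guardrail itself: protected attributes are stripped before scoring, so two otherwise-identical applications always receive the same risk score, regardless of what correlations a model might find in historical data.

```python
# Hypothetical ethical guardrail for an automated fraud screen.
# Protected attributes are removed before any scoring logic runs,
# even if they correlate with fraud in the training data.

PROTECTED = {"ethnicity", "nationality"}

def screen_application(app: dict) -> float:
    """Return a fraud-risk score in [0.0, 1.0] using only permitted fields."""
    # Drop protected attributes so the scorer cannot see them.
    permitted = {k: v for k, v in app.items() if k not in PROTECTED}
    score = 0.0
    # Illustrative signals and weights only -- not a real scoring model.
    if permitted.get("income_unverified"):
        score += 0.4
    if permitted.get("address_months", 0) < 6:
        score += 0.3
    if permitted.get("multiple_recent_applications"):
        score += 0.3
    return min(score, 1.0)

# Two applications identical except for a protected attribute
# must produce identical scores.
a = {"ethnicity": "A", "income_unverified": True, "address_months": 3}
b = {"ethnicity": "B", "income_unverified": True, "address_months": 3}
assert screen_application(a) == screen_application(b)
```

This is the simplest possible version of "fitting AI into our ethical guidelines": the constraint is enforced structurally, before any statistical machinery gets a say.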
The OpenAI movement
It is in that spirit that I see the creation of OpenAI as a venture that any responsible software business should support. Why? Check this out:
Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.
The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field...
...AI systems today have impressive but narrow capabilities. It seems that we'll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly.
I'm relieved that even as we are at the very early stages of discovering what AI can contribute to economic efficiency, technology leaders see a much broader palette of challenges. This is from Vishal Sikka's personal weblog, discussing OpenAI:
I would only support this venture if such an openness was a fundamental requirement! In all my experience with corporate research teams, I found a continual struggle for the teams to find relevance with the work in the "here and now", usually knowing that this unnecessary and premature seeking of relevance not only blinds us to those opportunities that can shift our paradigms, it defeats the point of research.
When Sikka, CEO of Infosys, talks in these terms, others should listen. AI was central to his doctoral thesis, and in my discussions on this topic he has always been very clear that the inherent risks of blindly handing over trust to an algorithm are too high. Instead, he sees AI as a pathway to creativity and the extension of our humanity.
That may sound like mumbo jumbo in a world where quarterly number reporting rules the CEO's mindset. But as both Sikka and Leonhard would agree, automation and robotization are the start of a journey that emphasizes the betterment of society as a whole, rather than simply pointing towards efficiencies that currently tie up trillions of dollars in global supply chains.
When thinking about this topic and its importance, I was surprised that we have only written three pieces with the term 'ethics' in the title. All of them have been in the last 90 days.
As 2016 unfolds, we will have much more to say as we test the new-new things that are coming at us. Like it or not we may well have to declare that some things really are evil and not worthy of support.
The burning question will be whether enough people understand the implications to ensure we don't wander down a road that leads to a hideous dystopian future. Most are optimistic in that regard. I'm not so easily persuaded.
Balancing the bright future of ubiquity and abundance against the inevitable risks is step one.
Featured image credit: via io9.com