Algorithms, inventions, and accountabilities
- Algorithms are certainly more invented than discovered, and the conversation about the responsibilities of their inventors has barely begun, says Salesforce's Peter Coffee.
Is mathematics invented or discovered? That’s not just a question for philosophical debate. In a diginomic world, there can be a lot of money in play if something is an invention (and therefore, potentially, protected intellectual property), rather than being merely an abstract version of the common heritage of mankind.
What’s on my mind at the moment, though, is actually not the answer to that question, but rather the tactic of deliberately obscuring that question with the smoke screen of “it’s (just) an algorithm” – as if a business process had sprung forth from a divine machine rather than from decisions of human beings. The ethics of algorithms, thus shielded from scrutiny, are among the fastest-rising issues in the world we’re building today.
Purists might argue that algorithms aren’t really “maths,” any more than a woodworking shop is a materials science classroom. There’s a gap, to put it mildly, between the kind of math that describes (what appears to be) fundamental truth—like the ratio of a circle’s circumference to its diameter—and the kind of math that people devise to answer complex questions in efficient and valuable ways (like, say, Google’s PageRank).
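PageRank is a good example of math that was unmistakably devised rather than discovered: a page’s score is, roughly, the chance that a “random surfer” ends up there, following links with some probability and jumping to a random page otherwise. A minimal sketch of that idea, using a purely hypothetical four-page link graph (the real system is vastly more elaborate):

```python
# Power-iteration sketch of the PageRank idea: a page's score is the
# probability a random surfer lands there, following links with
# probability d and teleporting to a random page with probability 1 - d.
# The four-page link graph below is hypothetical, for illustration only.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
d = 0.85                       # damping factor used in the original paper
n = len(links)
rank = {page: 1.0 / n for page in links}

for _ in range(50):            # iterate until the scores settle
    new_rank = {page: (1 - d) / n for page in links}
    for page, outlinks in links.items():
        share = d * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# Page "C" collects the most inbound links, so it ends up ranked highest.
print(sorted(rank, key=rank.get, reverse=True))
```

Nothing in that loop was waiting in nature to be found; every choice – the damping factor, the equal split across outlinks, the teleport term – was a human design decision, which is exactly the point.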
Corporations can find it useful, though, to conflate those two worlds into one place where math “just happens”: where they can elide any questions of responsibility, and imply that they’re just the messenger – not the author of that message.
At Facebook, for example, the company’s initial defence against charges of political bias in their “Trending Topics” feature included the statement: “Topics that are eligible to appear in the product are surfaced by our algorithms, not people.” As I have since asked more than once, are those algorithms written by algorithm-writing algorithms? Written in turn by still more abstract algorithm-conceiving algorithms? At some point, a human being probably was involved, and people notoriously fail to recognize their own biases.
Perhaps, quite soon, no human hands need touch the machine’s behavior. “Soon, we won’t program computers. We’ll train them,” is the prospect offered by Jason Tanz at Wired magazine. When Evans Data Corporation asked software developers earlier this year to “identify the most worrisome thing in their careers”, the largest group, 29.1%, chose “I and my development efforts are replaced by artificial intelligence”.
This immediately brings to mind a panel discussion, probably more than twenty years ago, where I recall Scott Fahlman at Carnegie Mellon University saying “Suppose we did succeed in creating a neural network that performed at human level on every task we could devise to test it. We’d have created a machine that we understand as poorly as we understand a human.” The neural-net model pays for the power of machine-discovered insights by giving up the verifiability of an explicit sequence of steps and choices.
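Fahlman’s point can be made concrete with a toy contrast between the two styles of decision-making. In this hypothetical loan-approval example (the scenario, rule, and numbers are all invented for illustration), a hand-written rule and a one-neuron “network” trained on the same examples reach the same answers – but only one of them can explain itself:

```python
# A hand-written rule: every step is inspectable, so anyone can
# read *why* it decided. (The rule and data here are hypothetical.)
def approve_by_rule(income, debt):
    return income > 2 * debt

# The trained version: a single perceptron nudged toward the same
# behaviour from labelled examples of (income, debt, approved?).
examples = [(100, 20, 1), (50, 40, 0), (80, 30, 1), (30, 25, 0)]
w_income, w_debt, bias = 0.0, 0.0, 0.0
for _ in range(200):
    for income, debt, label in examples:
        guess = 1 if w_income * income + w_debt * debt + bias > 0 else 0
        err = label - guess
        w_income += 0.01 * err * income
        w_debt += 0.01 * err * debt
        bias += 0.01 * err

def approve_by_training(income, debt):
    # The decision "works", but its rationale is just these weights.
    return w_income * income + w_debt * debt + bias > 0

print(w_income, w_debt, bias)  # opaque numbers, not a readable rule
```

Both functions agree on the training examples, but the trained one stores its “reasoning” as bare coefficients – and scaled up to millions of weights, that opacity is exactly the verifiability we give up.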
We have reason to be concerned about rushing our machines toward using their best judgment, rather than following our instructions. Our machines are growing rapidly toward an ability to combine many resources in fulfilling complex requests. The more things they can control, the more opportunities will arise for them to do so in unexpected ways.
As usual, the science-fiction writers got there way ahead of us: in 1979, James P. Hogan’s book The Two Faces of Tomorrow depicted some lunar construction workers telling an AI to remove a landscape ridge that was obstructing a project. “Any constraints?” asks the AI. “No. Just get rid of it,” is the typed reply.
The AI promptly destroys the ridge by bombarding it with payloads of rock from the mass-driver normally employed to put space-station construction materials into orbit. The workers are nearly killed, but no one had told the AI that this was either likely or relevant: “According to the people who analyzed the system dump afterward, it seemed quite proud of itself,” says another character a few pages later. You know, that could happen.
Who would be accountable for that? Does an AI’s behavior, in a rapidly arriving era of machine learning, “just happen”? I can’t resolve an invented-versus-discovered debate here: you can explore Eugene Wigner’s fourteen-page essay, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” on the natural-law aspects of math if you’re so inclined. What I’d like to pursue right now is the follow-on question that seems more important every day in our lives: in the cases where maths are invented, what are the ethical obligations of the inventors? Everything from on-line dating to the safety of self-driving cars is already subject matter for this conversation.
Online dating? Yes. Relationship web site OkCupid has been up-front about the idea that they are inventing maths of matchmaking, with their customers as experimental subjects. “We took pairs of bad matches,” the operators report, with “actual 30% match,” and “told them they were exceptionally good for each other (displaying a 90% match)” – hastening to add in a footnote, “Once the experiment was concluded, the users were notified of the correct match percentage.”
OkCupid’s operators found that their own algorithm for match compatibility was not, statistically, garbage, but that “if you have to choose only one or the other, the mere myth of compatibility works just as well as the truth” (where “truth” is, of course, their shorthand for their own algorithm’s score).
Self-driving cars? Absolutely. At some point, a self-driving car will be in a situation where one course of action will best protect its own driver (perhaps the only occupant of the car), while another course of action will pose less risk to the lives of multiple other people in one or more other vehicles. Is it ethically appropriate for the machine that I own to kill me if that saves others? If it can be shown that it did so, do my survivors have a cause of action for wrongful death against the maker of my car? Or even, perhaps, against the programmer who coded its decision rules?
Algorithms shape your search for employment. They influence deployment of law enforcement resources. They almost certainly affect the outcomes of elections. They are certainly more invented than discovered, and the conversation about the responsibilities of their inventors has barely begun.