We owe a lot to PARC, the Xerox-owned Palo Alto Research Center in the heart of Silicon Valley. It gave us the mouse, the mouse-based GUI, laser printing and Ethernet networking, helped shape the PC, and inspired Apple founder Steve Jobs to change the course of computing history.
Since its salad days, PARC has been quieter about its work, not least because in 2002 it broadened its focus beyond the purely Xerox world to become PARC Inc, a wholly Xerox-owned but independently run business. It provides R&D resources, and what its recently appointed CEO, Tolga Kutoghu, calls open innovation, for a wide range of client companies.
Its work therefore goes on well behind the scenes, often based on project ideas set in motion by those clients. These include recent developments such as self-healing trains, and a current cloud-service project developing modelling and simulation technologies for complex engineering systems, with the aim of increasing their reliability and safety.
It’s hardly surprising that AI and machine learning now loom large in the company’s research programs, but a theme underpinning much of that work does tend to set the company apart from most others. While some of its work is driven by the demands of client projects, Kutoghu is also keen to foster the pure research side of the business, where the need is then to find a market for the ideas that spring from the company’s internal resources.
One such area at the forefront of his mind right now is the question of the interaction between machine and human, especially at the bleeding edge where the machine starts to push humans beyond their level of understanding of events and consequences.
The machine knows, but the human may not
We are not there just yet, but Kutoghu is aware that complex data analytics, coupled with AI and machine learning algorithms, will start coming up with business decisions or management and control actions that human managers and operators will not comprehend, or be able to reason through from a complex set of starting data to a decision or action – at least, not in the timescale in which that decision or action remains viable. Therein lies a real problem, for there is now good money to be made by businesses that can act in real time – or even act on predictions of real time. But at the same time there could be painful and costly unintended consequences from such decisions and actions, and they might be consequences that a human mind could infer and identify… given enough time or information.
There is coming a time when the human half of the equation will need to be asking the machine half:
What the hell are you doing, and why are you doing it?
PARC, it transpires, is already researching these issues and developing algorithmic tools that will help systems explain themselves. Kutoghu explains:
What we see is AI and machine learning progressing very rapidly in a way that is very black box in nature. So you’re trying to develop algorithms that are faster, and they pop up everywhere. So one challenge that we foresee is what does that mean for the end user? What does that mean for people? Are we just simply going to outsource all of our decision making to this set of smart algorithms? What is the limit of that and where do we stop? This high level problem of how to blend AI and machine learning together with humans and human decision-making, and how to think about AI in the context of building these collaborative and symbiotic teams between humans and AI agents, is an area we are highly interested in exploring at PARC.
He sees the future belonging to those algorithms that can build such symbiotic teams efficiently and really understand the complexity of the problems they face. They will need to target the decision space between humans and algorithms, with the goal of better outcomes and better decision-making overall. He also sees the likes of Uber as just early examples of the disruption that real-time integration of the physical and the digital can bring:
All of the future emerging technologies are a form of the cyber and the physical worlds coming together in real time and that’s very disruptive in nature for almost all major industries.
This leads straight to an intriguing issue: we are moving into an era where it gets difficult to pinpoint quite where disruption will occur, or why. The law of unintended consequences now comes into play, especially with AI and machine learning.
Kutoghu states that, while this is part of PARC’s thinking, it is not a subject for direct research. But he agrees there is a need to think about what kind of unintended consequences or interactions any set of algorithms might lead to. It is very inter-disciplinary in nature, ranging across issues such as regulations, compliance, and legal/non-legal liabilities.
Legality itself is likely to be a source of serious unintended consequences. The more direct and controlling an input AI has over a decision or action, the murkier the legality of that decision or action becomes. The danger is that it reaches a point where the humans involved actually can’t comprehend the AI processes that decided the system ‘should do this, not that’ – yet somebody is being held legally liable for something they have no understanding of at all:
On the legal side, understanding the policy, the development of the laws and regulations, are subjects for other research groups to really take on. But what we do is work really closely with the end users. Our way of innovating is by really coupling technology development with extensive user studies. We have a group of social scientists, we have cryptographers, anthropologists, user study experts, user experience experts, and they’re very deeply embedded within our technology development teams. Their daily job is to interact with end users of the technology options that we develop, and really probe and understand what kind of needs the things we’re developing are intended to address, as well as what might not work.
Giving AI a sense of ‘responsibility’
And when it comes to unintended consequences that might arise from adopting these technologies, that kind of highly embedded, market-led technology development at PARC provides a feedback mechanism, designed to identify consequential issues early on.
As a corollary of this, PARC is also developing a set of algorithms, built into the AI algorithms themselves, that will enable AI agents to articulate and explain themselves. They would then be able to describe what decisions they reached and why. They should also be able to describe the decision-making path, including what other options were considered and what was ruled out, argues Kutoghu:
I think that has the potential to be eventually linked with other research groups. That may be a part of really understanding some of the other legal and liability types of issues.
This then raises an apparently simple question, but one of potentially significant importance. Could such an AI system not just explain its thinking but also identify important caveats, such as ‘this is subject to no legal input’ or ‘there are factors here you haven’t given me, so I can’t take it any further’? This could produce a tool with which an AI agent can start to explain itself and help humans understand where the holes in their own thinking lie.
Kutoghu concurs, but adds a clarification:
This particular body of research at PARC is not intended for legal use cases – we don’t do that. But it might be useful for that particular use case as well. Our aim is making AI understandable and explainable by itself. So with that goal in mind, that’s the first step. If these algorithms can explain themselves and why they came to a particular conclusion, what were the set of assumptions that were made, and what were some of the critical decision points in that decision making process, I think that’s the first step.
The future of this is not just the algorithms explaining themselves, it is about bringing that together with the humans. That’s the foundation for really building a truly collaborative human/computer team to solve these complex tasks and challenges of the future. So we suggest a different AI algorithm that can say ‘here are my assumptions and here are the decision paths that I went through’. But it also needs to be able to say ‘oh gee, if I had this, and that, and the other, then I could have gone a different way’. Then the humans can look at that and reflect on their cognitive processes and understand the gaps in their thinking. I think there is a tremendous potential multiplying effect there where you can actually completely change the level of reasoning capabilities that we have today.
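PARC has not published what such a self-explanation interface looks like, but as a purely illustrative sketch in Python, an agent’s output might be modelled as a record carrying the decision itself, the assumptions made, the path of decision points traversed, the options ruled out, and any caveats flagging missing inputs – the ‘you haven’t given me this, so I can’t go further’ case discussed above. All names and thresholds here are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    """A self-explanation emitted alongside an AI agent's decision."""
    decision: str                                  # the action the agent recommends
    assumptions: list                              # assumptions baked into the reasoning
    path: list                                     # key decision points traversed
    ruled_out: dict                                # alternative -> reason it was rejected
    caveats: list = field(default_factory=list)    # gaps in the inputs, flagged for humans

def explain(inputs: dict) -> Explanation:
    """Toy maintenance agent: picks an action and explains itself."""
    exp = Explanation(
        decision="",
        assumptions=["sensor readings are current", "no legal review applied"],
        path=[],
        ruled_out={},
    )
    vibration = inputs.get("vibration")
    if vibration is None:
        # Instead of silently guessing, the agent flags the missing input.
        exp.decision = "defer to human operator"
        exp.caveats.append("no vibration data supplied; cannot assess wear")
        return exp
    exp.path.append(f"observed vibration={vibration}")
    if vibration > 0.8:
        exp.decision = "schedule immediate maintenance"
        exp.ruled_out["continue operating"] = "vibration above safety threshold"
    else:
        exp.decision = "continue operating"
        exp.ruled_out["schedule immediate maintenance"] = "vibration within tolerance"
    return exp
```

The point of the sketch is the shape of the record, not the toy logic: a human reviewing it can see not only what was decided but what was assumed, what was rejected and why, and where the agent itself knows its inputs fall short.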
And the (taking a wild guess) $64 trillion question… when? Kutoghu bravely stuck his neck out with a spot of speculation. While acknowledging that it is extremely challenging to make predictions about the future of technology, he gives it a go:
I think we’re looking at a five-year time period where we will have some significant evidence that this kind of capability could be deployed for a number of initial applications.
AI and machine learning undoubtedly offer us all huge potential, but there is a danger that we will seek more from them than we understand their ability to provide. It is far from rabble-rousing to suggest that a reasonably significant disaster may be in the offing from an AI system taking actions based on rules set by humans who did not fully understand all the potential consequences.
So to have AI systems that can question the solution they are currently looking at, ask further questions, identify unintended consequences and ask ‘is that what you really want?’ may not only be a ‘good thing’. It may be the basis of a huge market in which AI is the de facto option for most business and industrial operations.
Image credit - Xerox