Engineering giant Rolls-Royce is putting AI ethics and the integrity of its ‘black box systems' at the forefront of its data analytics projects. The company has established an ethical check framework that sits separately from the AI black boxes, in an attempt to avoid any bias or other ethical concerns that may arise. Not only this, but Rolls-Royce is also carrying out its own research into lessons that can be learned from genetic mutation and applied to AI work.
The details of the company's work in AI ethics were revealed as part of a talk at IOT Solutions World Congress, where Lee Glazier, head of service integrity at Rolls-Royce, outlined the company's long history in data analytics and why it is critical to put ethics front and centre.
Earlier this year Rolls-Royce CEO Warren East gave an overview of the company's plans in this area, stating that there is no practical reason why trust in AI cannot be created now.
In terms of the history of data use at Rolls-Royce, Glazier explained that the company started using advanced analytics for health monitoring and predictive maintenance of its airplane engines way back in 1980, through the use of sensors. In 1998 this work was extended and Rolls-Royce started transferring engine sensor information from actual in-service flights.
This ultimately led to the creation of a new business model for Rolls-Royce. Glazier explained:
In 1999 from all that work, we realised we could offer a disruptive business model known as TotalCare, evolving into corporate care as well. That was where we established a standalone company to underpin that offer, offering data science, engine health monitoring, to provide insights into the health of the engines. Rolls-Royce took the disruptive risk to take over maintenance and health of the engines from the airlines themselves.
Fast-forward to 2005 and Rolls-Royce started taking data from the engines and aircraft while they were in-flight, via satellite communications, which included the use of machine learning and AI. From 2010 to 2015 the company was also using high frequency data taken at the end of flight, using WiFi connectivity.
But in 2015 Rolls-Royce sought to advance its AI capabilities and embarked on an agenda it calls AI 2.0. Glazier said:
So in 2015, AI 2.0 was brought in. Basically we identified that engineers can't look beyond three, four or five dimensions on the traces - but AI can go way beyond that. AI 2.0 is up to 26 dimensional, multi-variate analysis looking across those 3,000 engines in the sky at any one time, 24x7x365. It's so complex, it cannot be programmed with pre-configured fault signatures. And really that's not what it's intended to do. It's looking for first-of-type failures, failures we've never seen before.
Algorithms are there to identify the anomalies. It's looking for those anomalies across 26 dimensions, multi-variate analysis - way beyond what a human can do. We are doing more and we are doing better with AI, using the same people. Rather than those people looking for anomalies, the AI does it. The engineers can now look at ‘what does the anomaly mean?', rather than looking for it.
The capabilities of AI at Rolls-Royce are huge. It has 11,672 engines hooked up to the health monitoring system, where 7,000 flights a day are monitored 24 hours a day, all year long. There are over 5 million data parameters monitored each day, with the average processing time standing at approximately 3 minutes - from sensing the data, transmitting it, taking it through the analytics system and then providing an insight to the engineer.
What results is Rolls-Royce seeing? Well, last year an Airbus A380 completed 50,000 flying hours without an overhaul, thanks to this predictive monitoring. This is equivalent to travelling around the world more than 1,000 times, or the engines being in operation non-stop for more than five years.
Whilst Rolls-Royce is largely using these AI systems for engine maintenance - and there may be some ethical considerations there (if something went wrong, for example) - this is becoming even more pertinent as it looks to expand its use of AI into its factories, business processes, contract administration and document sifting (including CV sifting). Glazier said:
How do we govern those ethical concerns? AI does present a philosophical challenge. If I look at how a human would convert data into a decision about value stream - we start off with data. If you look at an Excel spreadsheet, it would be all of that data in all of those cells. Very difficult to take any meaning from it, so we convert that into information. What is it telling me? What can I see? Often we will convert it into a graph and from that graphical information, we may well know something. If we've got enough of that, we will be able to understand what is going on and gain some insights from all of that data presented to us in an Excel spreadsheet. From that, based on various policies, principles, morals, ethics, we make a decision from our understanding, knowledge and data.
So where do we see AI in this value stream? AI doesn't need to convert data into information for knowledge, it basically goes straight to understanding, straight to providing insights, on which we will make a decision. So that gives us a problem, because with a data scientist, a human, we can go and ask - how did you come to that insight? They will talk about how they converted the data into graphical information and multiple graphs providing knowledge and understanding. You can't do that with an AI. The AI is effectively a black box. You put data in, you get insights out, and you've got to govern that.
Glazier said that Rolls-Royce works from the principle that AI is a black box. It acknowledges that and works with that, which is quite a different stance to many other companies which seek to simply say ‘trust us'. Not only this, but having trust in AI is a business imperative for Rolls-Royce, as if it carries out maintenance too early, or too late, there is a cost associated with that. It wants a trustworthy output from AI so that it can establish the optimum point for maintenance.
Creating a new philosophy
Glazier explained that in order to achieve this, Rolls-Royce is bringing a framework of checks and balances that it uses elsewhere in the company to establish authority for its AI systems. For example, if Rolls-Royce was offered a new energy supplier, it would go through a number of steps to establish whether it was a good offer or not. These would include:
A sense check - is it a good deal or too good to be true?
Continuous test system - you'd do some calculations from previous home usage data, to look at whether the offer being made is trustworthy or not.
An independent check - Rolls-Royce may carry out a Google search, for example, to seek out independent reviews.
A comprehensive check - are all the company's needs being met? Does the offer include gas and electricity, for example?
Data integrity check - does all the data stack up from the quote? Has the data got integrity?
Rolls-Royce is essentially applying these same checks to AI in what it calls the ‘Rolls-Royce 5 Checks Philosophy'. It applies these checks to identify mutations or bias in the system itself. Glazier said:
So we do a sense check of the processed data, is the result within the expected range? There are all sorts of sense checks you can do. You can do it on engine health monitoring - does the life of the engine decrease after every flight? Because if it doesn't, something has gone wrong in the calculations. These are quite simple, but they identify whether something has gone wrong quite obviously in the system.
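A sense check of this kind amounts to a simple predicate over the processed output. The sketch below is purely illustrative - the field names, thresholds and the EGT-margin parameter are invented for the example, not Rolls-Royce's actual checks:

```python
def sense_check(prev_life_hours: float, new_life_hours: float,
                egt_margin: float) -> list[str]:
    """Flag obvious signs that something upstream has gone wrong.

    prev_life_hours / new_life_hours: remaining engine life before and
    after the latest flight; egt_margin: exhaust gas temperature margin.
    All names and limits here are hypothetical.
    """
    problems = []
    # Remaining engine life should decrease after every flight.
    if new_life_hours >= prev_life_hours:
        problems.append("engine life did not decrease after flight")
    # Processed values should sit in a physically plausible range.
    if not 0.0 <= egt_margin <= 200.0:
        problems.append(f"EGT margin {egt_margin} outside expected range")
    return problems
```

Individually trivial, but run continuously over every flight such predicates catch gross calculation errors early.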
For engine health monitoring we also basically have a dummy airline with a million synthesised flights that we continue to pump through the system via known inputs, known outputs, looking for a malfunction in the system that may result in a mutation, or a bias change within the AI system itself.
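The continuous-test idea - feeding synthesised flights with known expected outputs through the live pipeline and alarming on any drift - can be sketched as follows. The pipeline function, flight records and tolerance are all assumptions for illustration:

```python
def continuous_test(pipeline, synthetic_flights, tolerance=1e-6):
    """Run known-input/known-output flights through the live analytics
    pipeline; any mismatch may indicate a mutation or bias change."""
    failures = []
    for flight_id, inputs, expected in synthetic_flights:
        actual = pipeline(inputs)
        if abs(actual - expected) > tolerance:
            failures.append((flight_id, expected, actual))
    return failures

# Stand-in pipeline and two synthetic flights (id, input, expected output):
flights = [("SYN-001", 2.0, 4.0), ("SYN-002", 3.0, 6.0)]
healthy = continuous_test(lambda x: 2 * x, flights)        # no failures
mutated = continuous_test(lambda x: 2 * x + 1, flights)    # drift detected
```

Because the synthetic flights never change, any new failure points at a change in the system itself rather than in the data.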
For the independent check we verify the calculations against completely different algorithms, completely different data from the same flight for engine health. Or we might do a human check, completely independent. What would the independent system identify and is that the same as the real system?
On the comprehensive check, we ask: have the correct number of checks taken place? And then a data integrity check, are there any corruptions in the data? A cyclic redundancy check being one that you can use to identify any corruptions in the data.
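A cyclic redundancy check of the kind Glazier mentions is available in Python's standard library via `zlib.crc32`. The framing below (a four-byte CRC trailer appended to each record) is an illustrative convention, not the company's actual format:

```python
import zlib

def append_crc(payload: bytes) -> bytes:
    """Append a CRC-32 checksum to a payload before transmission."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify_crc(message: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, trailer = message[:-4], message[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == trailer

msg = append_crc(b"engine sensor record")
corrupted = bytes([msg[0] ^ 0xFF]) + msg[1:]  # flip one payload byte
# verify_crc(msg) passes; verify_crc(corrupted) fails
```

CRC-32 is cheap to compute and reliably detects the small, localised corruptions that transmission or storage faults typically introduce.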
These five checks are running continuously and they are looking outside the black box. If anything goes wrong in the black box, these five checks will pick it up. So we are assuring we have got trustworthy AI by continuously running these.
Finally, Glazier said that he and his team, with the help of external scientists and researchers, are now looking at whether lessons can be learned from the techniques biomedical science uses to identify genetic mutations, and whether these can be applied to identifying AI mutations.
Glazier explained that a genetic mutation is a change in coding portions of the DNA, which may alter the sequence of proteins. This has the potential for changing the expression of the gene. AI mirrors this, in that an AI mutation is a change in coding in portions of the AI application, which may alter the logical instructions. This in turn has the potential for changing the expressions of the AI. Glazier said:
There are techniques within genetic mutation identification that can be used to look inside the black box, looking at fit, form and function of the AI. We are going to pursue that research.