When does machine learning acquire a social context?
- Summary:
- Whether it’s MCS, simple linear regression, or adversarial neural networks, if it affects people, then there is an ethical issue.
Some of these simulations had over a million replicates and took more than a month to run on homemade MPP (massively parallel processing) hardware in 1998. The models were developed from an extensive list of FEPs (features, events, and processes) designed to consider everything that could cause the repository to fail. Can you imagine modeling glaciation in southern New Mexico? Try to imagine anticipating something that could happen thousands of years from now.
The outcome was to certify to the Department of Energy (DOE) that the design was safe, based on a probability of release below a very small threshold over 10,000 years. The chart below is from a public document, Sandia National Laboratories Waste Isolation Pilot Plant Revision 0 Analysis Package for Salado Transport Calculations: CRA-2014 Performance Assessment, and is just an example of the kind of horsehair plots we used to visualize the outcome of millions of replicates.
Is a Monte Carlo simulation AI or ML? Technically, it is neither: it falls under the category of stochastic models. Rather than starting with a dataset from which an algorithm learns, Monte Carlo simulations are seeded with random variables and allowed to iterate through many possibilities. MCS is a technique used to understand the impact of risk and uncertainty in financial, project management, cost, scientific, and other forecasting models. We used it to determine whether the WIPP was safe over 10,000 years. It may not fit what we refer to today as AI, but because it dealt with outcomes that potentially had, well, disastrous consequences for human life, and all life for that matter, the models assumed a social context and should have been subject to ethical evaluation.
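To make the idea concrete, here is a minimal sketch of the technique in Python. This is a hypothetical toy model, not the WIPP performance assessment: `simulate_release`, its input distributions, and the threshold are all invented for illustration. The shape is the point: draw uncertain inputs from random distributions, run many replicates, and estimate the probability that an outcome exceeds a limit.

```python
import random

def simulate_release(rng):
    """One replicate of a toy 'repository' model (hypothetical, not WIPP's).

    Draws two uncertain inputs and returns a cumulative release score:
    a lognormally distributed corrosion rate, and a rare intrusion event
    that multiplies the release when it occurs.
    """
    corrosion = rng.lognormvariate(0.0, 0.5)   # uncertain physical rate
    intrusion = rng.random() < 0.01            # rare disruptive scenario
    return corrosion * (10.0 if intrusion else 0.1)

def monte_carlo(n_replicates, threshold, seed=42):
    """Run many replicates and estimate P(release > threshold)."""
    rng = random.Random(seed)
    exceedances = sum(
        simulate_release(rng) > threshold for _ in range(n_replicates)
    )
    return exceedances / n_replicates

if __name__ == "__main__":
    p = monte_carlo(100_000, threshold=1.0)
    print(f"Estimated exceedance probability: {p:.4f}")
```

The certification question then becomes a statement about that estimated probability. But note what the sketch also shows: the estimate is only as good as the scenarios encoded in `simulate_release`. A scenario nobody thought to model, organic kitty litter, say, contributes exactly zero to the computed risk.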
Was this sufficient? This is what I mean when I use the term “social context.” Any quantitative or probabilistic technique that affects people has a social context, and therefore its use is subject to ethical concerns. For example, an application that automatically lubricates an autonomous mining machine may not have a social context, but one that adjusts a medical claim surely does. So whether it’s MCS, simple linear regression, or adversarial neural networks, if it affects people, then there is an ethical issue.
We proved to the DOE, the EPA, the State of New Mexico, and other regulatory bodies that the WIPP was safe and that the probability of a radioactive release was well below the accepted guidelines FOR 10,000 YEARS! It opened in 1999 and performed flawlessly for fifteen years, until an explosion occurred on Feb. 14, 2014, when a single drum of nuclear waste burst open.
“On February 14, a heat-generating chemical reaction − the Department of Energy (DOE) calls it a deflagration rather than an explosion − compromised the integrity of a barrel and spread contaminants through more than 3,000 feet of tunnels, up the exhaust shaft, into the environment, and to an air monitoring station approximately 3,000 feet north-west of the exhaust shaft. The accident resulted in 22 workers receiving low-level internal radiation exposure.” - WIPP waste accident a 'horrific comedy of errors'
The cause was that someone at Los Alamos National Laboratory (and they’re supposed to be the geniuses) changed the brand of kitty litter in the drum from an inorganic to an organic mix. In modeling the WIPP, we considered thousands of scenarios, but that one we missed.
The ethical question is: if it was not realistic for us to model every circumstance, and to model each with very high fidelity, should the WIPP have been certified?
My take
In thinking about the ethical implications of any sort of modeling, from the simplest to the most esoteric, my view is this: if there is a social context to the problem, then those responsible for the model absorb that social responsibility and those ethical concerns. The lesson learned is that when dealing with something like radioactive waste, modeling the risk without accepting the ethical responsibility that you may be wrong is a lapse.
In a conversation with Dennis Howlett of Diginomica, “Can AI be bounded by an ethical framework,” I laid out five “pillars,” if you will, of ethical issues for AI:
- Responsibility
- Transparency
- Auditability
- Incorruptibility
- Predictability
Sadly, these are not sufficient. Our hubris with the WIPP allowed us to assume that our methods were adequate (and in truth, those methods were dictated by the various certification authorities) to the point of not considering, in any systematic way, the social context of what we were doing.