Two of the most important sectors, financial services and the public sector, are defined by a risk management culture. Both are often criticized for their caution, but their longevity hints at some success from this approach. An audit process acts as the foundation of that risk management culture. Artificial intelligence (AI) and automation technologies require the same auditable culture in order to protect citizens and, ultimately, organizations, says ForHumanity, a not-for-profit organization advocating the auditing of AI.
Ryan Carrier founded ForHumanity in 2016 following eight years in hedge fund management and a 25-year career in financial services. Today ForHumanity has 46 fellows across all four corners of the globe, six streams of work, a leadership board and over 800 contributors to its knowledge banks, its codes of ethics and conduct, and its audit programme.
Carrier's time in hedge funds brought him into contact with AI, which was used to manage the diversification of investment portfolios. Parenthood, though, would shift Carrier's perception of AI, he says:
I started looking at the industry and the way that AI was impacting people and extrapolated this to the lack of a risk management culture.
Instead of managing risk, the technology industry has become hooked on the vernacular of 'move fast and break things', whilst the word disruption has taken on a positive connotation, rather than the pain and negativity it has always been associated with in the past. Carrier says he founded ForHumanity not to put an end to innovation at pace, but to ensure there is some thought about the consequences. He says:
I don't mind some disruption. I am a capitalist, after all. But what I started to see was moving fast and breaking things was breaking people and breaking relationships. I played that out with my boy's future in mind, and I got scared.
In financial services, you are coming from a heavy risk management culture, but look at technology, and in particular Silicon Valley, and you see none at all.
Audit and liability
ForHumanity believes that the audit process, to which publicly listed companies are already subject, has become vital to the safe development of AI and automation technology. Carrier says the principles of transparency are tried and tested. He explains:
There are entire organizations and industries that rely on independent auditing; that is an enormous amount of trust.
ForHumanity admits that the audit is far from foolproof. Carrier adds:
Any transparent system can be subject to the vagaries of fraud and malfeasance. When you tell people what the rules are, you are also telling people how to beat them.
The alternative is to continue with no system of redress or analysis, which Carrier says is not sustainable. He defends audits and says:
People say, look at Enron and WorldCom, they demonstrate that the audit doesn't work. That is not true; the system eventually caught them out.
ForHumanity claims that if the technology is to benefit both society and the organizations deploying it, then an audit is where the process begins.
AI and automation are challenging existing liability frameworks. In the automotive industry, for example, liability for a product has traditionally lain with the vehicle manufacturer, but if the autonomous software used in a vehicle comes from another provider, then the liability becomes complicated. Carrier says the European Union is leading the way with its proposed Artificial Intelligence Act, which defines the liability of software.
ForHumanity believes that auditing and legislation will change the risk/reward dynamic, which currently favours the software industry too heavily. Carrier says:
Users are bearing all the risk. If I use my face to pay for lunch using facial recognition, then what happens if there is a breach? GDPR does a very poor job of protecting biometric data.
ForHumanity proposes shifting the risk/reward dynamic in three ways. The first is government legislation, of which GDPR and the UK's Children's Code are examples. Although not perfect, Carrier says these acts are important steps towards balancing the risk/reward. He adds:
The Children's Code is the first time that a law says you have to balance the children's wellbeing against shareholder value.
Secondly, ForHumanity calls for giving citizens the right to legal action. This would give individuals the right to sue software companies for harm caused by automation and AI tools. Carrier says:
This will result in damages, so it raises the risk for companies, and that is a good thing. More importantly, the settlement comes with remediation, which becomes policy.
Thirdly, ForHumanity says consumers must demand responsible AI products in the same way they expect food hygiene standards. Carrier adds:
When safe and responsible is profitable, and danger is costly, that is when humanity has won.
Delivering on the ideas
ForHumanity has a mission statement, which says:
To examine and analyze the downside risks associated with the ubiquitous advance of AI & Automation, to engage in risk mitigation and ensure the optimal outcome… ForHumanity.
Carrier says of it:
If we do this, we get the best benefit for humanity, and that is where the overly ambitious title comes from.
The organization is releasing a data taxonomy to help CIOs and data leaders define data types, metrics, outcomes, pipelines and the process flow across the organization.
ForHumanity also recently worked with the UK Information Commissioner's Office to define UK GDPR, a derivative of the EU GDPR made necessary by the Conservative Party's Brexit policy. Carrier explains that UK GDPR will place greater accountability on the data protection officer than currently exists under the EU GDPR. However, ForHumanity is concerned by other government proposals, which reduce the levels of protection that citizens in the UK will receive compared to their European neighbours. He says:
I was disappointed to see the corporate-centric changes suggested.
ForHumanity will license its models for data auditing and trust, and will offer training; these are its core revenue streams, intended to ensure the organisation remains sustainable.
Recently an old friend shared a meme on social media featuring an image of various single-use plastic water bottles. The caption claimed that capitalism drives innovation; the statement was not untrue, but it ignored the fact that it is well-regulated capitalism that drives innovation. The proliferation of single-use plastic bottles is an example of a product that could and should be regulated out of production. If AI and automation technologies are to benefit business and society alike, then good regulation will become necessary - and as consumers, we should demand it. Although far from perfect, auditing leads to disclosure and transparency, which more often than not leads to good business practice.