AI demands a contract of trust, says KPMG
Summary:
KPMG provides advice on how to keep customers happy in the risk-filled world of artificial intelligence
The UK’s AI Summer is upon us. But only if we avoid the shadows of data bias and the chilly winds of poorly designed algorithms, short-sighted aims, and the mistaken belief in quick cost savings that I described in my previous report.
But what practical steps can decision-makers actually take to accentuate the positives and sidestep the negatives? And just as important, why should they bother?
One answer is that it is not just about ethical behaviour at a macro level – vital though that is in global, socially connected markets.
Leanne Allen is Partner, Financial Services Data, at consulting and services giant KPMG UK. Speaking this week at a Westminster eForum on next steps for AI, she explained that all organizations should use AI responsibly to make their businesses smarter and more responsive. In turn, other benefits will accrue – both economic and social. She said:
Consumers and investors in society are demanding much more from organizations, and that is across all industries. Whether that's the upside of better, frictionless services for consumers, or an end to ‘industrial experiences’ with greater personalization, or a desire that industries such as Financial Services should do more to help address inequality and promote sustainable finance.
The expectations on firms to innovate and drive real value from both data and new technologies continue to grow rapidly. And adoption of advanced techniques that include machine learning [ML] and AI gives organizations the edge in responding to those demands.
In this sense, improving what Allen called the “customer experience journey” is just as important as a general desire for ethical behaviour, because in the long run business will become more considerate and sustainable, she suggested:
That's making better and faster decisions. That’s increased accuracy, which means understanding customers better, and leads to enhanced products and services. That's things like pricing risk or pricing products more accurately, and enabling a step change in operational efficiencies. And that, of course, has been really beneficial to organizations in driving down internal costs.
So, in Allen’s view, there is “no contention” about the benefits to organizations and customers from using big data analytics and AI. But users should avoid getting carried away by all these new possibilities. That’s where the real danger lies, she warned:
All that potential does come with new and enhanced risks. The fact is that without appropriate controls and governance over both the design and use of advanced techniques, we’ve already started to see unintended harms.
Unfair bias in decision-model outcomes is causing financial harm to consumers, and can cause reputational damage to organizations. Unfair pricing practices are causing systematic groups of society [sic] to be locked out of insurance, and that's removing access to the pooling of risk.
The selling of unsuitable or poor-value products and services to customers is another example, as are targeted ads, dynamic pricing, and ‘purpose creep’ in the use of data, which have resulted in non-compliance with existing data protection laws.
These are just a few examples of the harms and challenges that industry is facing.
That’s quite a list of downsides. And the knock-on effect is a loss of trust between consumers/citizens and whoever is trawling their data. Such repercussions may have far-reaching effects on people’s credit histories and financial inclusion, for example.
This is why decision-makers should never – deliberately or otherwise – put consumer trust at risk in the pursuit of easy wins, said Allen:
Trust is the defining factor in an organization's success or failure. So, as firms progress at pace, transforming their businesses to become more data and insight led, they have to focus on building and maintaining that trust.
We're seeing many organizations now embarking on their own initiatives to build out governance and controls around the use of big data and AI. But the pace of progress does vary.
Typically, we’re seeing Financial Services lead the way, and those organizations are defining their own ethics principles. They're operationalizing those and taking a risk-based approach, aligning with core principles such as fairness, transparency, ‘explainability’, and accountability. Collectively, those actively promote trust.
The ‘true north’ of corporate ethics
Yet in a deepening recession, struggling consumers might take the idea of Financial Services leading the charge towards a fairer society with a pinch of salt. But let’s hope that firms are sincere.
For Ian West, a Partner in a different part of KPMG’s UK operation and its Head of Telecoms, Media and Technology, trust is the “golden thread” of business. He added:
We need to make sure that businesses are ready to deploy AI responsibly. KPMG distils the actions necessary to point an organization towards the ‘true north’ of corporate and civil ethics by laying out five guiding pillars for ethical AI.
Talk about mixing your metaphors! But West (or is that North?) continued:
First, it's key to start preparing employees now. The most immediate challenge to business when implementing AI is disruption to the workplace. But organizations can prepare for that by helping employees adjust effectively to the role of machines in their jobs quite early in the process.
There are many ways of doing this. But it's worth considering partnering with academic institutions to create programmes to address the need for skills. This will help educate, train, and manage the new AI-enabled workforce, and also help with mental well-being.
Second, we recommend developing strong oversight and governance. So, there need to be enterprise-wide policies about the deployment of AI, specifically around the use of data and standards of privacy. And this comes back to the challenge of trust. AI stakeholders need to trust the business, so it's crucial that organizations fully understand the data and frameworks that underlie their AI in the first place.
Third, autonomous algorithms prompt concerns about cybersecurity, which is one of the reasons why governing machine learning systems is an urgent priority. Strong security needs to be built into the creation of algorithms, and into the governance of data. And of course, we could have a bigger conversation about quantum technologies in the medium term.
Fourth, there is the unfair bias that can occur in AI without the correct governance or controls to mitigate it. Leaders should make efforts to understand the workings of the sophisticated algorithms that can help eliminate that bias over time.
The attributes used to train algorithms must be relevant, appropriate for the goal, and allowed. It's arguably worth having a team dedicated to this, as well as setting up an independent review of critical models. Bias can produce adverse social impact.
And fifth, businesses need to increase transparency. Transparency underlies all the previous steps. Don't just be transparent with your workforce – of course, that's very important – but also give customers the clarity and information they want and need.
Think of this as a contract of trust.
My take
Well put. The important lesson, therefore, is don’t sacrifice user trust in your quest to gain competitive advantage. Take your customers with you on a shared journey. Help them see how you are making their lives better, as well as your business smarter.