AI in financial services - what sort of future do we want to see? Some considerations from The Alan Turing Institute
- AI offers benefits to financial services users and providers, but also brings potential threats. It's important to start considering the future we want to see...
The adoption of AI in financial services can deliver significant benefits, but also creates equally significant opportunities for harm, according to a new report commissioned by the UK regulator, the Financial Conduct Authority (FCA).
The AI in Financial Services report, produced by The Alan Turing Institute, identifies three main underpinnings - machine learning (ML), non-traditional data, and automation - that need to be factored in and addressed around AI use in the sector. These need to be measured against four background considerations, identified in the report as:
- The performance of AI systems crucially depends on the quality of the data used, but data quality issues can be difficult to identify and address.
- Models developed with ML can have model characteristics that set them apart from more conventional models, including opaqueness, non-intuitiveness, and adaptivity.
- The adoption of AI can be accompanied by significant changes in the structure of technology supply chains, including increases in supply chain complexity and the reliance on third-party providers.
- The use of AI can be accompanied by an increased scale of impacts when compared to conventional ways of performing business tasks.
Against this backdrop, specific concerns around AI include:
- AI systems performance.
- Legal and regulatory compliance.
- Competent use and adequate human oversight.
- Firms’ ability to explain decisions made with AI systems to the individuals affected by them.
- Firms’ ability to be responsive to customer requests for information, assistance, or rectification.
- Social and economic impacts.
It’s a lengthy and very thorough report and well worth a read, not only if you’re in the financial services sector - the ideas under discussion are in many cases transferable to other industries. The study was formally launched at the CogX virtual event earlier in the week, with Ravi Bhalla, Head of Department, FCA Innovate at the Financial Conduct Authority, on hand alongside the report’s two authors - Cosmina Dorobantu, Deputy Director of the Public Policy Programme, and Florian Ostmann, Policy Theme Lead, both at The Alan Turing Institute - to provide some context.
Bhalla kicked off by explaining the importance of AI’s application to the FCA in its role as a regulator:
Finance plays a fundamental role at the heart of daily life for almost every single person. In a joint survey with the Bank of England on the use of machine learning in financial markets in the UK, published in October 2019, we noted that AI applications in financial markets are likely to more than double in the next two years…COVID-19 has accelerated this trend. The Bank of England recently surveyed banks, half of which reported an increase in the importance of AI as a result of the pandemic.
While there are clear benefits to be accrued for consumers from financial organizations tapping into AI - such as operational efficiencies, lower costs, and making credit more freely available - there is also the potential for harm, he noted:
The FCA doesn't have one universal approach to harm across financial services because harm takes different forms in different markets, and therefore has to be dealt with on a case by case basis. This is the same with AI. If firms are deploying AI and machine learning, they need to ensure they have a solid understanding of the technology and the governance around it.
AI and its application in financial services is causing us to ask big questions. We can't arrive at these answers on our own given the broad implications. But as the regulator of one of the world's biggest financial centres, we believe we have a key role to play. This supports the FCA's business priority of fair value in the digital age. Understanding technological change and the way it changes markets and outcomes for consumers is a core aspect of the work in this area.
Part of the FCA’s efforts to reach this understanding has resulted in the report commissioned from The Alan Turing Institute on AI transparency. For the Institute, Cosmina Dorobantu, Deputy Director of its Public Policy Programme, was keen to step back from the all-too-common image of AI as “futuristic looking images of typing robots”, as she put it. What the Institute finds exciting about AI in financial services is the creation of a set of digital technologies that can help with important tasks:
First and foremost, prediction and forecasting. These are tasks focused on estimating the future value of a variable of interest. Examples here include predicting loan defaults, insurance costs, purchasing decisions, stock values and portfolio returns. Secondly, optimization tasks, namely tasks which focus on estimating the optimal value of a given variable of interest. Relevant examples here include identifying optimal pricing or prudential risk management strategies. And last, but certainly not least, detection tasks, which are focused on identifying the occurrence of phenomena of interest, often through the detection of outliers or anomalies in the data. Examples here include detecting cyber-security threats, or identifying different forms of fraud, market abuse or suspicious activities in the context of anti-money laundering.
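To make the third category concrete: the simplest form of the outlier detection Dorobantu describes can be sketched in a few lines. This is a minimal illustrative sketch only, not anything from the report - the data, the z-score method, and the threshold are all hypothetical choices standing in for the far more sophisticated ML detection systems firms actually deploy:

```python
# Toy anomaly detection of the kind used in "detection tasks", e.g.
# flagging a suspicious transaction amount. Purely illustrative.
from statistics import mean, stdev

def flag_outliers(amounts, z_threshold=3.0):
    """Return indices of amounts that deviate from the sample mean
    by more than z_threshold standard deviations."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all values identical; nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > z_threshold]

# Hypothetical transaction amounts; the fifth one is anomalous.
transactions = [12.5, 9.9, 11.2, 10.4, 950.0, 10.8, 12.1]
print(flag_outliers(transactions, z_threshold=2.0))  # → [4]
```

Real systems replace this static threshold with models that adapt to evolving behaviour - which is precisely where the report's concerns about opaqueness and adaptivity come in.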
Ostmann picked up on the three aspects of AI innovation that are at the fore today - ML, non-traditional data and automation:
The premise of our report is that it is important and useful to distinguish between these elements and, rather than thinking and discussing AI systems in the abstract, to think quite concretely about the role that each of these elements plays in the context of a given AI system. That's useful for two reasons. First of all, it helps understand what exactly the technological changes are as a firm moves from conventional ways of performing a business task to adopting a given AI system. And secondly, it's crucial for an adequate understanding of relevant risks. The reason for that is that each of these three elements can be associated with specific challenges and risks. And therefore, in order to understand the risks that a specific AI system poses, it is important to consider the role that each element plays in the context of that system.
And that’s where I’m going to leave this. As noted above, this is a lengthy, but important report and it’s one that I’d urge anyone with a stake in the financial services sector to sit down with a large cup of coffee and read through. The Alan Turing Institute is keen to get feedback from interested parties on its findings. This is, inevitably, not a ‘one and done’ piece of research work. As the report itself concludes:
AI is already having transformative impacts on the delivery of financial services. Its role is set to increase further in the years to come. Like in other sectors, firms in financial services, regulators, consumers, and society at large are confronted with an evolving landscape of promising technological innovations and newly emerging challenges and risks. This report’s contribution is to equip stakeholders with the understanding needed to navigate this landscape in pursuit of responsible and socially beneficial innovation.
To shape the fintech future we want, engagement in its foundational principles needs to start now.