
Does Brexit Britain have a data strategy fit for purpose? - the private sector perspective

Chris Middleton, March 9, 2020
Data sharing is a crucial ingredient of the digital economy and one that post-Brexit Britain is going to have to get on top of to compete with its US and EU peers.


We often think about the risks of sharing data, but rarely acknowledge those of not sharing it, or the lost socio-economic opportunities. That was the observation from SAP’s Dr Chris Francis, Director of UK Government Relations, co-chairing last week’s Westminster eForum conference on the UK’s National Data Strategy.

Francis may have his work cut out for him with a UK administration that is adept at describing what it doesn’t want, but is less clear about what it does. Yet either way, the National Data Strategy – currently in development in Whitehall for publication later this year – ought to have innovation and measurable value at its core.

In his own presentation to delegates, Francis quoted IBM’s Chris Godwin by saying:

‘Research is turning money into ideas. Innovation is turning ideas into money.’

I'm going to try and broaden that a little here and talk about successful innovation. Successful innovation should show up somewhere. It should show up in productivity – public sector or private sector – or it should show up as public value.

Quite. But the problem, Francis acknowledged, is that UK productivity has not increased for a decade, despite the Industrial Strategy and a wealth of new ventures in artificial intelligence (AI), fintech, robotics, the Internet of Things, smart transport, mobility, analytics, and more. Over 80% of the UK economy is in services, so flatlining national productivity suggests that the country is not making the best use of its data resources, though a decade of austerity is another contributing cause.

According to Francis, failing to capitalise on data is indeed the problem. Using Eurostat figures – possibly the last time the UK will be able to do that, he said – Francis explained that while Britain is doing well in some areas of the data economy, such as e-government, the private sector is not the stellar performer that many would have us believe (especially policymakers in Whitehall).

Just 20% of UK organizations are “digitally intensive”, he said, meaning that 80% are not. Information sharing between organizations’ departments is particularly poor compared with their corporate peers in Europe, as is their use of data within the supply chain. With nearly all EU members doing better, these are areas where the UK clearly needs to improve before it can really compete in the data economy and see that vital uptick in its productivity.


One market where the UK has been in a stronger position than most is financial services, with its financial technology (fintech) sector second only to that of the US in terms of investment, M&As, and number of startups. At the heart of that burgeoning market is customer data, so how well is the UK doing?

While international banks have pulled nearly $1 trillion in balance sheet assets out of the UK since 2016 (according to Bloomberg’s Brexit Impact Tracker), the local fintech sector remains buoyant – if almost entirely reliant on the City of London, where at least three-quarters of British-headquartered fintech companies are based.

By contrast, a report by analyst firm CB Insights last year found that the US fintech sector is booming across 70% of the country's states, with venture capital investments of $46 billion spread across 35 states in 2019 alone – not just in technology hotspots such as California or Texas, or in financial centers such as New York.

Viewed in that light, the UK’s dominant position in European fintech seems rather more fragile, particularly as Brexit is cutting the nation off from vital sources of data in a single, integrated market of half a billion people.

Open Banking has long been seen as the key to unlocking data-fuelled innovation in financial services, but two years after it came into force under the revised Payment Services Directive (PSD2), the UK has racked up only one million customers for services that use open APIs, data portability, open data, and/or disclosed private data.

Speaking at a September 2019 Westminster eForum on Competition in the Digital Economy, Richard Rous, Competition and Regulatory Strategy specialist at Lloyds Banking Group, said that one reason for the “slow burn” of Open Banking to date had been the industry’s focus on customers sharing data, which he described as “a red herring”.

However, consumers would be attracted by low-friction services that helped them better manage their finances, he said – which could be of particular benefit to the many young adults who are priced out of the property market, for example.

At the eForum on data strategy last week, I put this to Alan Ainsworth, Head of Policy at the Open Banking organization. Has the financial services industry misunderstood what customers want from the data economy? Might it be that most people don't want to share their data at all, but simply want easier access to their money? He said:

You don't share data for the sake of it, you share it because you're being offered a good value proposition by one of the providers [of Open Banking services]. And in order to get that proposition, you share your data.

But that's not the core of what you're doing, you're looking at a proposition, which might be applying a personal financial management tool, one of the vanilla products offered within the Open Banking ecosystem, where you can see all of your bank balances and your credit cards in one place with different providers, and potentially move money between those accounts without having to get out of those or go into your individual bank apps.

That seems to be a good proposition for lots of people and they can buy into that. But they also need to understand that there are safeguards in place around the industry.

Fair points, but one million Open Banking customers more than two years into the programme is not such an impressive figure when set against the estimated 70 million current accounts held by British citizens. So what is Ainsworth’s view of progress against the strategic aims of Open Banking? He said:

What we found out when we started this process was that just providing the API and the technical spec was not sufficient for a good customer experience. So if you want people to use services provided by a third party, you need to make sure that the third party can access that data easily and simply for the customer – without the customer spending three days trying to give their consent, or looking at 27 screens of messages from the provider of the data. Some of the initial implementations of Open Banking were clunky.

You also need a trust framework. [...] If you're a bank you need to know that the third party is regulated and that it is the third party. And you need to be able to do that within the API flow, so you need some sort of digital identity certificate that says, ‘This is Third Party X’, and you need to make sure that they're still regulated by the FCA.

They're the things you need to know. We created that – and it works – so the good thing about this is that this is the kind of infrastructure that can be used for other regulated permissions, and indeed for non-regulated permissions.
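Ainsworth's bank-side checks – is this really the named third party, and is it still FCA-regulated? – can be pictured as a small gatekeeper function inside the API flow. The sketch below is purely illustrative: the real Open Banking Directory relies on eIDAS-style certificates and the FCA's Financial Services Register, and every name here (the trusted issuer, the register lookup) is a hypothetical stand-in, not the actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IdentityCertificate:
    subject: str   # e.g. "This is Third Party X"
    issuer: str    # the trust-framework directory that signed it

# Hypothetical stand-ins for the directory and the FCA register
TRUSTED_ISSUERS = {"TrustFrameworkDirectory"}
FCA_REGISTER = {"Third Party X": True, "Revoked Fintech Ltd": False}

def authorise_data_access(cert: IdentityCertificate) -> bool:
    """Bank-side check within the API flow: verify the certificate
    comes from the trust framework, then confirm the named third
    party is still regulated."""
    if cert.issuer not in TRUSTED_ISSUERS:
        return False  # certificate not issued by the trust framework
    return FCA_REGISTER.get(cert.subject, False)  # still regulated?

cert = IdentityCertificate(subject="Third Party X",
                           issuer="TrustFrameworkDirectory")
print(authorise_data_access(cert))  # True: known issuer, still regulated
```

The point Ainsworth makes is that once such a gatekeeper exists, the same infrastructure can authorise any consented data-sharing permission, regulated or not.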

In Ainsworth’s view, Open Banking could be a model for similar initiatives by other sectors:

Smart Data is a BEIS initiative that's looking at other regulated sectors. Energy, utilities, telecoms are probably the most advanced of those, and open comms is in there too. They are all working together to see if we can get a harmonised approach to the consented, secure sharing of data.

In the UK, why not extend this to ‘Open Everything?’ Why not use the methodology, the frameworks, the ecosystem, the engagement we talk about, to enable consented and secure sharing of data between all good actors and providers of data – as long as the customers want that data to be shared? That seems to me to be a good thing, provided that certain safeguards are in place.

What do people want?

But do most citizens want to share their data like this? And what stands in the way of a thriving data-based economy of mutual consent and trust? The answer lies in the question, of course: trust in private organizations and governments is low, not just regarding why they want to access our data in the first place, but also their ability to secure it.

This problem was both raised and, unintentionally, underlined by another speaker at the Westminster eForum: Patrick Stephenson, Client Managing Director of Central and Regional Government for Fujitsu. First he made an insightful and almost poetic observation:

Who has seen their digital twin in the room? Has anyone met their digital twin, do they know it well? I can feel my digital twin’s presence; when I go shopping or look for insurance I can feel it lurking in the background.

I've reported my digital twin as a missing person to the Information Commissioner’s Office. The ICO confirmed that my twin exists, but it’s been on the run across Europe, Asia, and America – sometimes in multiple places at the same time, which is concerning. The ICO told me it's been aided and abetted by my fridge, my car, my vacuum cleaner, and my phone. So the key thing for me is: how can I trust my digital twin if I just don't know it?

A good point well made. But Stephenson then inadvertently telegraphed the underlying problem to delegates: it is companies like Fujitsu itself, despite the firm's recent attempts to understand the digital world through research into consumer behaviour:

We've got lots of feedback from citizens directly about their fears and hopes about the digital future, and about one third of people are concerned about security. [...] But what if trust wasn't such an issue and we could actually give citizens their data, so they can own their data, and start to ‘gamify’ and manage that data going forward? What if we gave all that data to individuals?

This is a problem for one simple and obvious reason: that data already belongs to the consumer; it certainly doesn’t belong to Fujitsu or to companies like it, so it is simply not in their gift to present it to consumers. That many private companies believe they own their customers’ data may be at the root of consumer distrust, where it exists, of those same companies.

I put it to Stephenson that his presentation consistently implied ownership of customers’ data. Interestingly, he responded with an observation from his experience of the public sector, namely his recent treatment within the NHS:

Over a period of months I didn't feel like my medical information, my record, was my record; it felt like the health system owned my data. I think citizens will always be surprised, in a time of need, that data isn't shared; we expect it to be shared.

My take

This point is crucial to understanding the tensions inherent in the data economy. For Fujitsu’s Stephenson, there is implicitly no difference between a private company and a public sector organisation ‘owning’ consumer or citizen data. This was revealed by him defaulting to discussing the NHS when he was challenged with his own loaded statements about companies ‘giving’ customers their data – data that already belongs to them.

But there is a difference and it has to do with value: private shareholder value (frequently offshore, tax-avoiding shareholders at that) versus taxpayer value. If companies really want consumers to trust them, they must abandon the arrogant idea that they somehow own the customer, that customers are a product, and that private data is an enterprise asset. Especially if all they give consumers in return is advertising noise while charging them for the privilege.
