US and UK governments are ‘AI ready’ - but are not being responsible
A new global index out this week ranks the US and the UK as the highest scoring regions in terms of preparedness for using AI to deliver public services. But they fall short when it comes to a responsible approach.
The influence of ‘AI hype' on the private sector has unsurprisingly also filtered down to governments around the world in recent years, with politicians and digital leaders citing the ample opportunity for artificial intelligence and automation to improve the delivery of public services for citizens. Healthcare, education, transportation and welfare systems have all been cited as ripe for disruption from AI, as governments get better at collecting data and machine learning and algorithmic technologies begin to mature.
And so it is with interest that this week the third annual Government AI Readiness Index has been released, ranking the USA and the UK as the countries most prepared to take advantage of AI in government. However, the report, which has been released by Oxford Insights and the International Development Research Centre, also now includes a ‘responsible use of AI index' for the first time - where the US and the UK performed much more poorly.
Which therefore raises the question: are you really prepared if you're not being responsible? But more on the ethics later…
Let's first break down how countries performed overall in terms of preparedness. The report assesses each country across three pillars and 33 indicators. The pillars are: government, technology sector, and data and infrastructure. Indicators within those pillars include government vision, digital capacity, adaptability, size of the technology sector, human capital, and infrastructure and data availability.
The report states:
Last year's Index built on four hypotheses about government AI readiness. This year, we have changed our approach to try and get to the heart of what makes a government AI ready. We therefore developed three new hypotheses, each of which corresponds to a fundamental pillar of government AI readiness:
- The Government needs to be willing to adopt AI, and able to adapt and innovate to do so;
- The Government needs a good supply of AI tools from the technology sector; and
- These tools need to be built and trained on high quality and representative data, and need the appropriate infrastructure to be delivered to and used by citizens.
As noted above, the US came out on top in the AI readiness index, with it performing particularly well for its private sector innovation. The report states that although the US also scored well in the government, data and infrastructure pillars, it is the sheer size and innovative power of its technology sector that really gave it the edge.
Following the US, the rest of the top five places went to Western European nations (the UK, Finland, Germany and Sweden). Although Western European countries haven't been able to compete with the scale of Silicon Valley, they perform particularly well on data and infrastructure. In addition, the researchers highlight the willingness of countries in the region to collaborate to support the development of AI, citing the European Commission's EU-wide strategy to make the region a global centre of excellence in AI.
Surprisingly, China only ranked 19th in the world. However, the report notes that, given the size of the country and its urban-rural inequalities, looking at China as a whole may underestimate the strength of its regional hubs. The researchers also point out that China may be making more progress on implementing AI, even if it is less prepared on paper. The report states:
We define government AI readiness as the raw materials and enabling factors needed to make AI implementation possible. China lags behind many Western nations on some of these indicators, especially for its technological infrastructure, with lower Internet and mobile phone penetration and uneven broadband coverage. However, in terms of implementation, we would argue that China is making better use of the capabilities it has than many other countries in the top 20 of the Index.
While the case of China does not completely break the correlation between government AI readiness and implementation - it is still among the top 20 countries in the world, and a lower-ranking country would struggle to achieve a similar level of implementation - it does show how some governments are pushing AI implementation more than others.
Other countries in the top ten include Singapore, the Republic of Korea, Denmark, the Netherlands and Norway.
A point on global inequality
The researchers found that the lowest-scoring regions on average are Sub-Saharan Africa, Latin America and the Caribbean, and South and Central Asia - with the report stating that "it is clear that the Global South is lagging behind the Global North".
This has been attributed to few countries in these regions setting out a vision for the implementation of AI, as well as small technology sectors, weaker technology infrastructure and a lower availability of data. There are few easy solutions for changing this at scale, the researchers add, and the gap could have long-lasting consequences across all dimensions.
If inequality in government AI readiness translates into inequality in AI implementation, this could entrench economic inequality and leave billions of citizens across the Global South with worse quality public services. We hope that the findings of our Index alert governments across the Global South to the importance of building their AI readiness, even in areas such as infrastructure where investment now may not come to fruition for a few years. We also hope that development organisations and the global community as a whole support governments in the Global South in their efforts, to ensure that the benefits of AI are shared by all.
Responsible use of AI
For the first time, the researchers decided to not only produce an index for AI readiness, but also attempt to assess how likely it is that governments will use AI responsibly. The field of AI ethics has grown in significance in recent years, as more and more cases have emerged highlighting how easy it is to introduce bias into systems or to infringe on the privacy of individuals.
You only have to look at the UK's A-level grading algorithm fiasco this summer, which significantly disadvantaged students from lower-income families, to see how an irresponsible use of AI can have a lasting impact on people.
The Responsible Use Sub-Index measures nine indicators across four criteria selected to reflect the OECD's principles on AI: Inclusivity, Accountability, Transparency and Privacy.
Interestingly, the US only comes in 24th place when looking at the Responsible Use index, whilst the UK sits at number 22 on the list. The top five spots are taken up by Estonia, Norway, Luxembourg, Finland and Sweden.
These high-ranking countries score well in terms of accountability and transparency, and have low levels of social inequality, which should allow AI to be adopted in an inclusive manner.
The report states:
There are a number of possible factors behind this gap. The USA and the UK both have significant technology sectors in which a number of companies score poorly on the Transparency International Corporate Political Engagement Index. There is therefore a risk of regulatory capture, where government policy reflects the interests of tech companies more than those of citizens. For example, the USA and the UK both have major surveillance industries, and in the UK the Metropolitan Police faced criticism for their trialling of facial recognition.
In addition, the USA and the UK have higher levels of inequality than responsible use leaders such as Sweden and Finland - this increases the risk that AI will be implemented in a manner that is not inclusive.
Coming in at the bottom of the rankings are Russia and China, which both have developed a reputation for mass surveillance and restrictions on internet freedoms.
I was very pleased to see the addition of the responsibility index to this year's report. My personal view is that a country isn't really prepared unless it's thinking about these things responsibly and ethically - the stakes are too high. Countries that rank at the top of the ‘preparedness' list shouldn't be praised too highly unless they're also taking measures to be transparent and open. The use of AI may seem like a technological opportunity, but the consequences could be very real for people already struggling within an unbalanced system.