The US Government's big question - where do we find 100 Chief AI Officers by the end of May?

Cath Everett, April 23, 2024
The US Government's recent plans to tighten up AI governance, which include hiring a raft of new Chief AI Officers, are all geared to helping it maintain its current market dominance. They are also a first step towards new legislation in the area.


The US Government’s announcement last month of plans to tighten up AI governance at its Federal agencies is intended to set a precedent both for the private sector and for other nations around the world, experts believe. But there are questions about how to meet the stated requirements. 

The move, announced by Vice-President Kamala Harris, mandates the appointment of around 100 Chief AI Officer (CAIO) roles by the end of May. Also on the agenda are the creation of AI governance boards and the production of annual reports laying out the risks involved in using such technology and how agencies are mitigating them.

Dr Haniyeh Mahmoudian is Global AI Ethicist at generative and predictive AI management platform DataRobot, and a member of the US National AI Advisory Committee, which advises the President and the White House National AI Initiative Office. She says:

The Administration has made it clear that it believes ethical and Responsible AI are very important. It’s advocated for them in the past with things like its Blueprint for an AI Bill of Rights and Risk Management Framework. Last year’s Executive Order was also about trying to solidify all its efforts here and create a single process. These are first steps towards raising consciousness and regulation and taking a more comprehensive approach. It’s not just about talking the talk. The government is showing it’s walking the walk too. It’s trying to be a leader and role model here.

Casey Coleman, Senior Vice President of Global Government Solutions at Salesforce, agrees:

The US Government is setting an important precedent with its new guidelines. When the government acts, private industry takes notice, and we often see the standards the US Government sets get adopted both by the private sector and by state actors. That being said, an open dialog between regulators, businesses and civil society will be critical to ensuring the proper implementation and usage of AI in the long-run.

Creating balanced AI legislation

Another important consideration behind the government's move, though, is that it should give Congress enough breathing space to create effective AI legislation. Mahmoudian explains:

The Executive Order and Memos are all first steps. They enable Federal agencies to understand what’s needed around AI governance, what has to be changed, and how to create the necessary processes and infrastructure. But it also gives Congress time to bring in balanced legislation that doesn’t block innovation but rather helps research and development to flourish while ensuring federal agencies abide by Responsible AI values.

She points, for instance, to concerns that the European Union’s (EU) AI Act could smother innovation:

The benefit of the AI Act is that it comes up with a risk-based approach to evaluating use cases. It defines high-risk ones, clarifying how organizations should view and evaluate them. While this offers clarity, there have also been concerns around small businesses that the laws will prove too restrictive for them to truly leverage AI. So, we’ll have to see how it plays out. 

As for the UK Government's approach, Mahmoudian says that the voluntary aspect of its Generative AI Framework means:

Agencies can think about it in the form of a pilot. So, they can see what works and what doesn’t. But from the US side, the belief is that if you want to be an R&D [research and development] leader in Responsible AI, you need to introduce mandates to ensure everyone’s following the principles being talked about. So, it’s hard to say which approach is better as they all have their own benefits.

Becoming top dog in Responsible AI

But it is clear this moral and ethical leadership is important for the US if it is to retain its existing dominance of the often controversial AI market. The country is already the frontrunner in terms of AI equity investment, with nearly half of all funding deals going to US start-up companies last year. 

Moreover, according to Statista, the US is forecast to take a $50.16 billion share of a total AI market that will be valued at $184 billion by the end of this year. Between now and 2030, the sector is also expected to show a compound annual growth rate of 28.46% worldwide. 

This values it at a huge $826.7 billion in only six years’ time, with the US predicted to have the largest chunk. In other words, the country cannot afford for anything to go wrong that could damage such a significant cash cow.
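As a rough sanity check on those figures, a short calculation shows that the quoted numbers hang together: a $184 billion market compounding at 28.46% a year for six years lands close to the $826.7 billion projection, with the small gap down to rounding in the published figures.

```python
# Sanity-check of the market projection quoted above: $184bn in 2024,
# growing at a 28.46% compound annual rate through 2030.
# All three figures ($184bn, 28.46%, $826.7bn) come from the article.

def compound_growth(start_value: float, annual_rate: float, years: int) -> float:
    """Project a value forward at a fixed annual compound growth rate."""
    return start_value * (1 + annual_rate) ** years

projected = compound_growth(184.0, 0.2846, 6)
print(f"Projected 2030 market size: ${projected:.1f}bn")  # ~ $826.9bn
```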

Furthermore, in a situation that undoubtedly focused minds, President Biden has experienced first-hand what it can mean when AI is misused. In February, thousands of deepfake AI calls were made in his name to Democrats urging them not to vote in state primaries. As Coleman points out:

While AI’s potential in government is significant, innovation without guardrails can lead to varied and unexpected risks. Public agencies often deal with sensitive information and face legal restrictions on sharing data…If agencies don’t know what’s approved and authorized when it comes to AI use, they’ll try to figure things out themselves. This can open the government to risk and slow AI adoption down to a grinding halt. A clear standard set across the federal level in this sense can actually accelerate innovation.

But while the US Government may be keen to be seen as top dog in the Responsible AI space, it also understands the need for international collaboration, Mahmoudian says. This is evidenced by agreements with countries such as the UK, with which it recently signed a Memorandum of Understanding to jointly develop safety tests for advanced AI models.

Where will all the CAIOs come from?

As to where the government will find around 100 CAIOs by the end of May to provide the necessary focus for its strategy, that is a rather trickier question. Waseem Ali is Chief Executive of Rockborne, which trains and deploys data and AI consultants. 

He indicates there are few CAIOs, board-level or otherwise, on the market today beyond those working at AI-dedicated companies, particularly in fintech and healthtech. They can also be spotted at ecommerce vendors and some large insurance firms. 

But the same does not apply to lower-level 'head of AI' positions, which may help the situation. According to LinkedIn's Future of Work Report, published in November last year, the number of employers creating this post has tripled over the last five years, and grown 13% since December 2022 alone.

It is worth noting, though, that matters are not helped by a marked lack of AI expertise in public sector IT teams, where skills currently sit well below the industry average, according to research from Salesforce. For instance, just under a third (32%) of tech professionals understand generative AI use cases, such as content creation and data analytics, and a mere 30% claim to be experts in implementing AI within their organization.

How to find 100 CAIOs

So, the question becomes: where will all these rare and expensive AI leaders come from? Muddu Sudhakar is Chief Executive of generative AI-based workflow automation platform Aisera. He believes agencies' most likely approach will simply be to add 'AI' to existing tech leader job titles:

Some will be Chief Digital and Data Officers, CIOs and CTOs, depending on who’s most appropriate. You’ll see AI being collapsed into their job title and they’ll just change the name in most cases.

Mahmoudian agrees – up to a point:

Government already has its own talent in the form of CIOs, CTOs and Chief Data Scientists. So, depending on how you define the role and how you want to expand CAIO responsibilities, most roles will overlap. With a bit of additional training, they’ll be able to take on the position as they already have a strategic view of operations. One of the reasons the position was created in the first place was to implement governance processes, and governance isn’t unique to AI. Processes will just need to be modified to include AI elements, such as bias, privacy or data leakage. 

But Mahmoudian also believes that some expertise is likely to be brought in from the private sector too. As she points out:

Agencies may also want to bring in people from outside - it’ll depend on their structure. I agree on the salary side that the private sector can afford to pay more for experts and that there’s a shortage of talent, but it’s not necessarily always an obstacle to finding someone suitable.

My take

The US Government is in a tricky position, needing to ensure the ethical and responsible use of AI without damaging future innovation. If it tips too far to one side or the other, the repercussions for the country - and the rest of the world - could be vast. 
