"The EU AI Act is actually happening, and we are working with customers on this in a constructive way" - Chandler Morse, Workday VP Public Policy, on the challenges of AI regulation

Sarah Aryanpur, February 28, 2024
At last week's Innovation Hub gathering in Dublin, Workday executives emphasized the importance of trust in AI adoption and the need for effective regulation.


The recent debacle following the launch of Gemini, Google’s AI image-generation tool, shows that effective AI regulation, one that offers confidence in the technology and engenders trust, can’t come soon enough. (Google had to stop the AI model generating images of people because it was depicting some historical figures in a variety of ethnicities and genders, results that were then gleefully posted on social media.)

Chandler Morse, Workday VP Public Policy, believes building trust will be imperative to the development of AI-based solutions:

We want people to trust AI technology, because if they don’t, they won’t use it. Regulation can’t come quickly enough. Unlike previous situations, the industry has recognized that good regulation will actually support AI innovation. There is a trust issue here.

It's a message that the firm has emphasized heavily, with co-founder and now Executive Chairman Aneel Bhusri previously stating:

We firmly believe that to deliver on the possibilities it offers, AI and ML (Machine Learning) must be leveraged in a trustworthy and ethical way. Workday has always been a trusted partner to help companies keep their most critical assets, their people and money, safe, secure and private. This approach becomes even more essential when leveraging AI and ML.

In January this year, Workday presented the results of its global survey into responsible AI. The report, called Ensuring Trust and Leadership in Innovation, found that there was a significant AI trust gap within the workplace. Business leaders and employees polled agreed that AI offers great opportunities, but there was a lack of belief that it will be deployed responsibly, with employees more concerned about this than their bosses.

Jim Stratton, Workday CTO, argues:

There's no denying that AI holds immense opportunities for business transformation. However, our research shows that leaders and employees lack confidence in, and understanding of, their organizations' intentions around AI deployment within the workplace. To help close this trust gap, organizations must adopt a comprehensive approach to AI responsibility and governance, with a lens on policy advocacy to help strike the right balance between innovation and trust.

Workday's survey found that 64% of business leaders saw the potential for AI, but only 52% of employees expressed the same sentiments. Nearly a quarter of employees thought their companies might put company interests above those of their staff, and 42% thought their company didn’t have a good understanding of which systems should be fully automated, and which needed a human in the loop.

There was also poor communication about AI implementations, with three in four employees saying their organization is not collaborating on AI regulation, and four in five saying their company hasn’t shared any guidelines on the responsible use of AI. Morse observes:

Workers believe there will be disruption, and are not confident about AI augmenting their roles, rather than replacing them. To make the most of human potential, trust in AI is imperative and that can only be achieved through good governance, regulation and communications.

EMEA progress

The report was launched to coincide with the recent World Economic Forum in Davos, which hosted sessions on the challenges of regulating AI on a global basis, with representatives from the US, the EU and Singapore taking part, albeit with polite smiles all round.

For her part, Clare Hickie, Workday CTO EMEA, believes the EU AI Act is helping close the trust gap in a number of critical areas:

Showing how AI is augmenting human potential is important in terms of trust, and AI has to impact positively on society, and there must obviously be fairness and transparency. This approach will address the trust gap issues, and we need to apply these principles in practice through risk identification and then mitigation.

Hickie says from a people perspective, good communication is essential. For example, at Workday every executive officer sits on an AI advisory board, which has monthly meetings to put new guides in place and to see how the company is progressing. She adds:

From a product perspective we really are closing the trust gap. Our developers are very conscious of responsibility for AI, and we are finding it is a downhill battle because we are already designing for safety, and for responsible AI development outcomes. We have a framework in place and are ensuring there is trust in our relationships with customers and developers.

Government engagement

Morse, who worked on Capitol Hill for more than a decade and is no stranger to engaging with legislators and government bodies, says the company has been working with a range of governments across the EU, US and APAC regions as an advocate of AI regulation:

There have to be regulated safeguards. There is skepticism, and organizations won’t use AI to its full potential if they don’t trust it. Obviously we want people to use AI technology, so they have to trust it. For that we need regulation.

But as the Davos conversations illustrated last month, setting AI regulation on a global level will not be an easy trick to pull off, which is why the EU legislation is so encouraging. Morse comments:

The EU AI Act is actually happening, and we are working with customers on this in a constructive way, looking at amendments and key points, and I have to say there is a lot to like in the policy framework. There is a focus on risk and the impact on people’s lives, and AI developers and deployers have different responsibilities and risk based frameworks within it.

He thinks that the process the EU went through to come up with General Data Protection Regulation (GDPR) has helped speed up the creation of a regulatory framework for AI. He said:

The EU AI Act is the first mover in this technology policy, and we are hoping it will be the first massive domino that falls and starts interoperability, so that all global frameworks work together.

As for the US, Morse points out that Congress has not got its act together, but suggests that a voluntary framework that harmonizes with the EU AI Act will probably be the way forward, with individual states getting to grips with regulation. California has already begun to move, and Workday is working with lawmakers there. There's also some interesting work going on in Asia Pacific, he adds.

Regulating new tech is never easy, but Morse concludes by suggesting that AI might be different, at least in terms of a recognition of the need for action:  

We are pushing on an open door this time...I have never seen this level of connection before - the global engagement is huge. We have to walk forward to balance the advantages of AI, but the benefits are real and governments have understood and recognised that. It’s very unusual.

My take

The ball has to keep moving forward to deliver both economic development and trust in these technologies. The difference with AI is that people are saying ‘Please regulate’ rather than dragging their feet.

The Workday public policy team seems pretty optimistic about the impact of the EU AI Act, and the work the rest of the world is doing on regulation, so maybe there is some hope of global consensus taking shape. Time will tell. 

