The rapid evolution of AI has left policymakers around the world scrambling to keep up, concerned about its potential impact on employment, data privacy, security, intellectual property and a swathe of other issues — even whether AI could pose an existential threat to the human race. It has also set off a scramble among vendors keen to harness the technology, who want to steer regulation in a direction that won't create unnecessary barriers to adoption or stifle useful innovation.
It's therefore becoming commonplace for the leaders of technology giants such as Facebook, Google and Microsoft, along with others such as X and OpenAI, to be seen testifying before Congress, lobbying the EU, or schmoozing with political leaders at events like the recent AI Safety Summit hosted by the UK at Bletchley Park. But where does that leave enterprise technology vendors who may not be household names to the same extent, but for whom AI is becoming increasingly core to their own offerings?
At last week's Workday Rising EMEA conference I sat down with Chandler Morse, Vice President of Corporate Affairs at the HCM and finance software giant, to find out more about the company's public policy work around this very topical issue. It's clear from our conversation that, while Workday's participation may not be making the headlines, it is playing a proactive role to advance agreement on what AI regulation should look like. Morse tells me:
In many ways, we think — and this is our message to regulators — there's a path here, we know what the path is, we just need to put it on paper.
That's actually where Workday’s really been shining. We're the team that shows up with the legislative language, the amendment language, the crafted bills, to say, 'Here's a way forward.' We're pretty proud of the impact we're having at the moment.
And what is Workday's aim? He responds:
Workday is all-in on the possibility of AI to unlock human potential. One of the things we know is that there is this trust gap on the technology. People don't tend to want to use technology they don't trust. So from our perspective, it's issues around transparency, explainability, bias and discrimination, that would most influence how people felt about the use cases we would be trying to elevate ...
Our motive is very clear. We want our customers to use this technology, and the uptake and the level of comfort with using AI in the HR context goes up when people feel more comfortable.
Engaging with policymakers
Workday has been developing AI technology for the past decade and first started engaging heavily in discussions about HR AI policy in 2019, so it brings a lot of experience to conversations that have become much more intense since the popular success of ChatGPT. Legislators aren't necessarily up-to-speed on how the technology all fits together, or even what exactly Workday does. But at least, unlike some more longstanding public policy issues, this is not a topic where the politicians have already made up their minds. He comments:
It has been refreshing how much policymakers are aware of how much they need to learn.
The main task is moving the discussions towards taking some concrete steps forward, and policymakers welcome the guidance Workday can offer, he believes:
There's this sense that, well, nobody knows what to do. ‘What are we going to do? How are we going to do this?’ Our response to that is that it's not particularly novel, regulating technology. This is just another example of technology that we have to apply a regulatory regime on. It is possible to do this in a way that isn't heavy-handed, but leads to good outcomes.
He points to the experience of working through data privacy regulation, where taking a risk-based, impact assessment approach helped clarify the way forward. He adds:
We just don't think this stuff is really that hard to do. And, frankly, it's what everyone should be doing anyway.
Accordingly, Workday's engagement with the EU encouraged the risk-based approach that he says has now become "table stakes" following the adoption of the EU's draft AI Act. The company then turned its attention to NIST, the US standards body, becoming an early supporter of work to create its AI Risk Management Framework, as a first step towards establishing some common ground on AI regulation in the US.
Codifying existing practice
Often, the simplest way to make progress is to codify existing practices that have already been adopted on the ground. For example, Workday recently partnered with ADP, Indeed and LinkedIn to produce a document on HR AI practices as part of the Future of Privacy Forum. He says:
Our assumption was ... that there was quite a bit of consensus around best practices for HR AI ... The goal was to wrap our arms around existing practices, and we found it to be incredibly easy to lay down, and we produced the document.
Globally, there are many different initiatives under way. Workday is engaging in the regulatory process at the governmental level in Canada, the United States, Singapore, Australia, the UK and the EU, and with other bodies such as the G7 and OECD. In the US, if Congress is slow to act, then as we've seen with privacy, individual states will start to do their own thing, and regulatory proliferation becomes an issue. Morse comments:
We're very focused on international harmonization as a key to a successful AI regulatory regime. I think we have a lot of work to do.
Obviously, I think the EU AI Act is going to be the first domino that's going to fall. Not unlike privacy, I think that will have an outsized impact globally. In the environment we're in now, there isn't a framework to point to, to say, 'Well, are we going to do that or something else?' Now it's sort of a free-for-all, where everyone's trying to figure it out. Once the EU AI Act passes, it'll be the starting point of conversations around, 'Do we want to differ from that? Do we want to be the same as that?' ...
We're not trying to say everyone should adopt the same thing. We're saying, one, there should be a premium on regulations that build trust, but also support innovation. So workability and trust are key components. But then also interoperability. Is there a way to have frameworks work together, at least?
Social impact of AI
Meanwhile, the impact of AI on the workforce is also attracting the attention of policymakers. Workday recently appeared before the Senate Health Committee on this topic. Here, it's important to emphasize the positive impact of the technology as well as the downside risks. He says:
We're still in the early phases. But AI is an interesting technology in that it's not only going to cause shifts in the future of work, but it also provides the tools for ameliorating those issues — in that you can take a skills-based approach to talent to scale with AI-driven tools like [Workday's] Skills Cloud that can provide additional ability to drive reskilling, to drive talent marketplaces, to drive career planning.
This kind of discussion is much more relevant to real-world outcomes than talk of an existential threat posed by AI, which he describes as "a distraction." He goes on:
It's distracting from real-world issues and concerns that are going to affect more people in the short term ...
Our goal is to ensure that the benefits of these technologies, when implemented in a responsible way, can be realized. In many ways, our regulatory view is on a shorter horizon of where we think AI is going to have an impact. We think it's going to impact the workforce, and we think AI can support tools that can help workers and employers deal with those impacts. But in order to get there, we're going to need some safeguards to be put in place so that people trust and can feel good about how they're using it.
That's the context in which we're operating, with concrete proposals, with clear language. There's not a bill we won't redline, there's not a conversation we won't join, again, around this idea of how can we get something done? In many ways, the existential conversation is not helpful to that.
It's interesting to see how much work is taking place away from the headlines to advance the regulatory regime around AI and, as Morse puts it, "get something done." For Workday, this is all about putting a responsible regulatory foundation in place that will build public trust in the technology and allow enterprises to get on with harnessing its potential.