Generative AI and HR - Oracle's Yvette Cameron cites keen interest among customers

Stuart Lauchlan, March 19, 2024
Gartner and Valoir studies highlight concerns, but Oracle's Cameron makes the case for gen AI's role in the HCM space.


As the debate around the impact generative AI will have on various job functions rumbles on, Oracle last week announced a series of enhancements to its Fusion applications suite, including its HR offering. 

As per the official announcement, Oracle Fusion Cloud Human Capital Management (HCM) gains the following:

  • Job category landing pages: Help career site administrators quickly build high-quality landing pages for different job categories. With richer generative AI-built career sites in Oracle Recruiting, organizations can deliver tailored experiences for candidate audiences and increase candidate engagement.
  • Job match explanations: Help candidates spend less time determining if a job opening is best suited for their background and career goals by presenting them with a summary of how well they fit a job. With generative AI providing immediate feedback in Oracle Recruiting, candidates can better understand where they may best fit within an organization.
  • Candidate assistant: Helps candidates find answers to common questions about the company, benefits, and job-specific requirements in a simple conversational experience. With generative AI providing immediate answers in Oracle Recruiting, organizations can keep candidates engaged while reviewing career sites and job opportunities.
  • Manager survey generation: Helps managers generate quick surveys for their teams with manager-defined structure and formatting. With generative AI in Oracle Cloud HCM’s Manager Activity Center, organizations can get timely employee feedback to quickly inform actions and decisions.

Analyst view 

All of that sounds entirely sensible. But is HR really ready for gen AI? Gartner’s latest survey research suggests that while more organizations are moving from exploring generative AI to implementing solutions, just over two-thirds of the 179 global respondents (67%) state that they don’t plan to add any gen AI-related roles to the HR function in the next 12 months.

The same study confirms that 38% of HR leaders are piloting, planning implementation, or have already implemented generative AI. Gartner’s assessment is that HR leaders see the value in tapping into gen AI tech to reduce resource-intensive processes, eliminate mundane tasks, and co-author HR-related content or documentation.

A separate report from Valoir concludes that alongside productivity gains there are a number of areas of risk around gen AI as it relates to the HR function, noting that, as of mid-2023, only 16% of organizations had policies in place around the use of the tech, while only 14% had a training policy in place for effective use of AI. That said, over a third (35%) of HR employees’ daily routine is ripe for automation, with nearly 25% of organizations using AI-supported recruiting.

The four main areas of risk identified by Valoir are: 

  • Data compromises where apps or employees unwittingly expose confidential data to Large Language Models (LLMs) where it can then be leaked to other public sources or become part of a public model data set.
  • Hallucinations - which is why you need humans in the loop.
  • Bias and toxicity - AI can amplify the biases of its model builders/trainers. This poses a serious concern around recruitment. 
  • Recommendation bias - risk that employees accept AI recommendations as truth rather than critically evaluating outputs. That's of particular concern when staff are incentivised to complete tasks as quickly as possible. 

Oracle's view 

All that said, I sat down with Yvette Cameron, SVP Global Cloud HCM Product Strategy at Oracle, to discuss the role of gen AI in HR, starting by asking for her reaction to the survey from her former employer, Gartner. Cameron began by making the point that generative AI is a recent phenomenon, with awareness triggered by ChatGPT’s rise: 

Suddenly we all became prompt engineers and we all understood at a personal level what the real advantages are. Now, understanding personally what's possible, we can see opportunities for application in the workforce, to streamline, to create content, to improve and accelerate the way we're doing our jobs.

Managers are able to use gen AI to remove hours from writing job descriptions or poring through and badly summarizing a year's worth of reviews for an employee, and can instead focus on improving the starting point that gen AI has given them. The performance and the quality and consistency improvements are significant and people are really waking up to that now.

There is a real interest out there among customers, she stated, citing a meeting the day before with around 45 customers: 

We asked who is using some gen AI today - not necessarily ours. Ours is six months old. It's ahead of others, but still not very mature as far as time and market - and I would say somewhere between a third and a half raised their hands. And then we asked who's intending to implement either ours or somebody else's [gen AI] and then we had probably closer to 85%-90%. There were just a few hands that were not up. And that room was a microcosm of what we hear every day in our conversations.  

I won't try to explain the difference between an analyst survey and our own, but I can tell you that, again, when you look at the capabilities of gen AI to improve the candidate experience, the employee journey, to empower managers with greater decision support and productivity in truly human-like ways, that explains to me why customers and organizations are saying, 'I'm keenly interested in gen AI'. 

But there is caution as well, as other vendors have noted. Cameron argued: 

Having gone out to ChatGPT, most of us have found that the answers that it provides are pretty wonky sometimes. And it's one thing to say, ‘What's the square root of 16?’ and three comes back and you know that's wrong. It's another to say, ‘Write this thing for me, summarize this’. Can you really tell if it's wrong or not? That's where the concern is. 

Our approach to gen AI across all Fusion apps is to embed the gen AI contextually in the flow of work. We put guardrails around the prompt itself that is invoking the call to the Large Language Model to eliminate hallucinations, to ensure that the results are accurate. We test the results before we launch the solution out to our customers and are constantly looking for that. I will tell you, for example, in recruiting, we partner with an organization called Textio and their goal is to help remove bias and things from the way you post a job. If customers are using Textio and are recruiting, their job descriptions are generally less biased. We applied gen AI to it and consistently found no bias in those job descriptions when you're using the Textio and Oracle solutions together.


As the two studies mentioned above said, recruiting is a sweet spot for Oracle. I posed the question about how candidates felt about the role of AI in looking for a job - is there a danger of worrying that you’re being ‘interviewed’ by computer? Cameron said: 

Our application of AI and gen AI doesn't serve to kick out candidates. What we are doing is we are providing transparency into their fit for opportunities. One of the features is a job match assistant. It demonstrates for the candidate, as they're applying, how they match to the job - an overall score that will also provide the components of that matching score. So how do you fit against skills? How do you fit against education? How do you fit against experience? By providing that level of transparency, the candidate can say, 'This is really not a good fit for me, I'm not going to waste my time', or they go in knowing that they only scored a four out of five or something on the skill, but that's an area for them to manage in the communication. So again, it's about enabling the candidate to have the data they need to make the right choices. 

We embed skills recommendations using AI in the candidate process. So when I apply and I upload my resume and it extracts seven skills or I keyed in seven skills, the gen AI or the AI will look at those skills and say based on our understanding, because it's a dynamic skills ontology and AI-driven ontology that understands relationships, 'I see you've uploaded seven skills. We believe you actually have these other five for a total of 12'. And if they agree, they might say, 'Yes, I have all five' or 'I have three of five', but by augmenting their profile, they become a better match either to the existing position, or we're able to surface new opportunities that they wouldn't have been a fit for before. They can receive new recommendations based on that augmentation. So our focus on the experience for the candidate is to help them match to the best opportunity and then focus their time there. So, give them the information they need to take control of the decision process.

In part two of this interview, the wider state of the post-pandemic HCM landscape comes under scrutiny. 
