Your Friday challenge: name one vendor that doesn't have some kind of generative AI news. Do you hear crickets chirping? So we can't be surprised by press releases like Oracle's latest: Oracle Introduces Generative AI Capabilities to Help HR Boost Productivity.
But for once, I'm not giving vendors too much grief about this - why? Because at every spring event I attended, customers wanted answers about generative AI.
As so-called "Shadow AI" makes fast and unsettling inroads, with risky potential consequences for data privacy, customers want something better. They expect their trusted vendors to deliver this functionality.
And, as Oracle believes, customers want this functionality embedded in relevant processes. As per the release:
Built on OCI and leveraging its best-in-class AI services, the embedded generative AI capabilities within Oracle Cloud HCM are designed to provide high levels of security, performance, and business value. Built-in prompts help customers get the best results while helping reduce undesirable side effects such as factual errors and bias.
Attention, enterprise software vendors: you have a chance to address some of generative AI's glaring weaknesses. Example? Data privacy. From Oracle:
With Oracle Cloud HCM, customers use their own data to refine models for their specific business needs—each customer's dedicated generative AI models are only tuned on the customer's own proprietary data. By giving customers control of the data used by generative AI, Oracle is helping keep sensitive and proprietary information safe.
Generative AI for HCM - use cases with surprisingly high stakes
HR is a fascinating pursuit for AI. Overworked HR admins can truly benefit from automated or "smart" processes. But I also worry greatly about what I call "AI overreach" in HR, which can have negative impacts on areas like applicant screening - if systems aren't designed with inclusion in mind. Therefore, I wanted a deeper understanding of how Oracle is applying generative AI to HR. I got that this week, via a demo and (virtual) sit-down with Guy Waterman, Global Strategy Lead & VP, People Analytics, HCM Technology, and Innovation.
This HCM AI functionality will ship to Oracle Fusion HCM customers via their normal SaaS updates; Waterman says anything we talk about here will be generally available by the Q4 2023 release.
Oracle explains this HCM functionality by citing three different generative AI capabilities. As per these bullets I trimmed from the press release:
- Assisted Authoring: "Examples of assisted authoring use cases include writing job descriptions and requisitions; automated goal creation, including detailed descriptions and measures for success; and the generation of HR Helpdesk knowledgebase articles to help employees efficiently complete HR tasks."
- Suggestions: "Examples of suggestion use cases include automated recommendations for survey questions based on the type of survey being designed, or development tips for managers to provide to their employees."
- Summarization: "An example includes providing a summary of the employee's performance for submission in the employee's regular review cycle based on feedback gathered across the year from the employee, peers, or managers, and goal progress and achievements."
This is a good encapsulation of what large language models excel at currently, applied to HR pain points. The "suggestions" aspect, as I see it, builds on AI's predictive capabilities - something Oracle has been utilizing for quite some time. Contextualizing those as "suggestions" is a generative AI strength. But hold up. In theory, there are thousands of HR process points where generative AI could be embedded. So how did Oracle prioritize? As Waterman told me:
This is something that we've worked on with existing customers to say, 'Is this something you're interested in? And if so, what do you call the highest value areas? And if we were to deploy it, if we were to build this, would you use it?' So we've been able to work with customers on the prioritization of literally over 100 different use cases that we've delivered, or that we plan on delivering.
We narrowed down to the ones that they ranked as the highest priority. That prioritization was based upon, number one, are they comfortable reviewing it? Are they comfortable deploying it? Would they allow it to be used by their workforce? Does it provide them value in general, just from time savings and productivity and other capabilities?
Going deeper into Fusion HCM AI - the performance management summarization example
During our demo, I gravitated towards a "Summarization" example in performance management. Why? Because the stakes are higher. Authoring job descriptions is a comparatively straightforward use case - it's a good one I expect all HR vendors to pursue. But summarizing someone's job performance is a really big deal. These AI tools are not always perfect in how they do that, to say the least. How will Oracle make sure that we don't summarize people's performance in ways that are inaccurate, or harmful to their future prospects?
In response, Waterman walked me through a performance management summarization demo. The first encouraging thing I saw: the performance "summary" text was drawn from a range of topic areas that both the employee and their manager have already reviewed. In other words, the content being summarized was already essentially vetted by humans. The time saved by such an "assist" seems pretty obvious to me. The following screenshot shows the performance summary field, as initially populated by generative AI:
The manager can revise this machine-generated summary as needed, prior to posting it. In other words, this is not an AI-driven process. This is the infusion of AI into a performance management process that has already moved well beyond the static annual reviews of old. Waterman explained the employee's view of this information:
This is my manager's evaluation. I've already responded to a lot of information through check ins and follow ups and goal attainment and other things.
The manager's screen is what you see directly above:
So this is me, the manager, giving "Jen Jacobs" some feedback. It is as simple as going in and saying, 'Here's the AI assist component; based upon all of the context that you've provided, this is what we're going to give you.'
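Oracle hasn't published implementation details, but the human-in-the-loop pattern described above can be sketched in a few lines. This is a minimal illustration, not Oracle's code - every class, function, and field name here is hypothetical, and the LLM call is simulated with a string join:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PerformanceSummary:
    """An AI-drafted summary that a manager must review before it posts."""
    draft: str                       # machine-generated from vetted feedback
    final: Optional[str] = None      # manager-approved text; None until reviewed
    approved: bool = False

def draft_summary(vetted_feedback: List[str]) -> PerformanceSummary:
    # Stand-in for an LLM call. Note the key design constraint from the demo:
    # only feedback already reviewed by employee and manager is summarized,
    # so the model never introduces unvetted claims about the employee.
    joined = " ".join(vetted_feedback)
    return PerformanceSummary(
        draft=f"Summary based on {len(vetted_feedback)} reviewed items: {joined}"
    )

def manager_review(summary: PerformanceSummary, edited_text: str) -> PerformanceSummary:
    # The human in the loop: nothing is posted until a manager edits and approves.
    summary.final = edited_text
    summary.approved = True
    return summary
```

The point of the sketch is the gating, not the generation: the AI output lives only in `draft`, and the record is unusable until a human sets `final`.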
Matching skills and advancing talent - can AI do a better job than humans?
I still fret about things like "star ratings" to rate employees. Example: what if an employee is only average at their core duties, but is spending extra time supporting other employees in backchannels or on forums? Or answering customer questions on social media? But employee scoring and performance ratings are a fact of corporate life, and not something I can fairly blame on AI. Waterman, for his part, envisions how harnessing this data can lead to a more holistic view of career development:
With competency information and other steps from these performance reviews, the next logical step is to then take a look at and evaluate, 'Where are you in your current grade position? And then are we looking at different career paths? Are there? Are you considering that? Do you want to proceed in your existing path? Or would you like to consider an adjacent type career path that you can build upon from where you're at'?
I also asked about applicant screening. In my view, the industry's automation of applicant screening, starting with rules-based systems, has done as much harm as good, excluding applicants due to tunnel-visioned screening on narrow requirements. It's such a poor way of truly identifying exceptional talent. But what I saw from Waterman's demo was different.
He showed me examples of Oracle AI helping an applicant to summarize their qualifications, something that is not easy for many applicants to do. He also pointed towards what I believe is the future of a better HR: recruitment and career advancement based on AI-powered skills ontologies, not on overly-rigid job screening. If we do this right, AI systems should make hiring more inclusive, and actually compensate for and challenge human hiring biases - rather than perpetuate them. Granted, that is a very big "if."
But if AI can surface talent we have overlooked, via skills that are tagged and don't require degrees or certifications, that seems like a realistic pursuit to me. In other words, could we avoid the reduction of applicant pools by brute-force screening, because AI doesn't need a smaller pool to review like humans do? As long as your skills ontology is comprehensive, can't AI surface applicant possibilities from a much wider applicant net?
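As a thought experiment, the difference between hard-filter screening and skills-based surfacing can be reduced to scoring set overlap against a shared skills vocabulary. This toy sketch is mine, not Oracle's - the scoring and threshold are hypothetical illustrations of the idea that every candidate gets ranked rather than discarded up front:

```python
from typing import Dict, List, Set, Tuple

def skill_match_score(role_skills: Set[str], candidate_skills: Set[str]) -> float:
    """Fraction of a role's required skills that a candidate covers."""
    if not role_skills:
        return 0.0
    return len(role_skills & candidate_skills) / len(role_skills)

def surface_candidates(role_skills: Set[str],
                       pool: Dict[str, Set[str]],
                       threshold: float = 0.5) -> List[Tuple[str, float]]:
    # Rank the entire pool by skill overlap instead of hard-filtering on
    # degrees or exact job titles - no applicant is excluded before scoring.
    scored = [(name, skill_match_score(role_skills, skills))
              for name, skills in pool.items()]
    return sorted([(n, s) for n, s in scored if s >= threshold],
                  key=lambda item: item[1], reverse=True)
```

A comprehensive ontology matters here for exactly the reason Waterman raises below: if the tagged skills are stale or inconsistent, the overlap scores are meaningless.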
Generative AI didn't invent this approach to AI in HR. But as Waterman explains, we can now take it further:
We introduced AI when it wasn't very popular and wasn't very comfortable. It was done in our governance risk and compliance area, with our advanced controls capabilities. We graduated from there into our recruiting cloud, providing next best content, and next best candidate, then next best job. Now we've even added in here some of the things that we've done, we have a candidate qualification summarization.
Yes, this is the applicant AI assist I referred to earlier. Waterman adds:
At the end of them submitting an application, we can put in an AI-generated, 'Why am I a good fit for this job' summary. And what about me resonates with this job, so that I can attach it to my job application.
Over time, that skills ontology has powered HR areas like Oracle Grow. But as Waterman points out, the big drawback with skills ontologies was always keeping the data clean, up to date, and tied into recruitment and talent development. This "data hygiene" challenge is another area where generative AI can have impact:
The more consistent your data is, the cleaner it is, the more you practice data hygiene, the better off your responses are going to be from the AI component. So generative AI is actually enabling AI to make better decisions faster. And then also share those in a common language with all the consumers of the information. And then consumers of the information, for us, that's every employee or anyone that works within your organization.
As I've said before, I believe enterprise software vendors can make generative AI more effective and responsible, by: reducing hallucinations, increasing explainability, reducing bias in training models, making those models smarter through tactics such as "reinforcement learning," and getting customer data privacy right.
This Oracle HCM AI discussion basically checked all those boxes. Now, some generative AI evangelists get upset when I call this an "evolutionary," not a "revolutionary" technology. This is evolutionary because: 1. the kinds of 'trust guardrails' Oracle is building take time, and 2. sensitive generative AI scenarios with legal implications require "human in the loop" design, to ensure compliance - and a better experience for the humans on the receiving side of these systems.
Hopefully Oracle isn't offended by my evolutionary assertion either. Here's the good thing: if this were truly a revolution, that would mean AI could essentially pick the hiring needs, post the jobs, interview and hire the candidates, and we'd be looking at massive/sudden job displacement in HR departments. It's precisely the limitations in today's technology that buy us time to make these profound transitions. That's not a bad thing - even with limitations, this is potent stuff when you integrate it properly. And: proper human in the loop designs can minimize the downsides.
But it's more than just minimizing downsides. As Waterman asserts, automating the tedious HR aspects such as generating viable job descriptions will save Oracle customers a good amount of time - and that may be a productivity understatement.
There is another topic I want to get into deeper with Oracle: how Oracle's large language models were trained and applied to these use cases. To be clear: I have no concerns about Oracle's handling of customer data, and ensuring data privacy. But customers do want to understand how these processes work: explainability is a big deal. That includes how the models were trained. Also, on the consumer side, ChatGPT only became a potent (and problematic) tool when it was trained on massive Internet data sets.
I am someone who believes that expanding the parameters on those massive data sets further is not going to make generative AI much better. I think what enterprises do with smaller, more focused large language models is where the real action is with LLM-driven AI going forward (fewer hallucinations, more effective content). But: we need to better understand how well LLMs will do with smaller data sets.
It's a technical discussion with broad scientific relevance - one that I am pursuing with experts across the industry. But for now, Oracle deferred those questions. I got the strong impression that at Oracle CloudWorld in September, they'll have a lot more to say about this aspect, so I look forward to that. You can count on me to ask those pesky questions again. For now, Vegas can wait - but the generative AI summer news train rolls on.