CloudWorld 2023 - Oracle addresses the hot questions on generative AI pricing, customer data, and where SIs fit in
- Summary:
- As Oracle CloudWorld 2023 kicked off, important questions about Oracle's generative AI strategy had not been publicly addressed. But via keynotes and sessions for media/analysts, that changed. Here's what I learned on crucial questions for AI customers, from data privacy to pricing.
Heading to Oracle CloudWorld, I had a laundry list of open questions about Oracle's generative AI strategy - would they get answered?
I did know a few things: in June, I published a review of Oracle's generative AI plans for HR - where the stakes for biased data are high. I aired my concerns; Oracle's responses indicated that it has thought through the need for human-in-the-loop processes, and designed accordingly.
We also know that Oracle considers itself to have an edge on generative AI infrastructure, dating back to its March 2023 announcement of its OCI (Oracle Cloud Infrastructure) partnership with NVIDIA. (Oracle was the first major IaaS provider to partner with NVIDIA on its DGX Cloud "AI supercomputer service.")
Finally, even if some of Oracle's LLM plans lacked details, we knew, based on Oracle's May 2023 partnership with LLM provider Cohere, that Oracle doesn't plan to build its own LLM for now, but to use external LLM providers instead. But what does that mean for customer data? And how would Oracle fare against the list of AI questions I've been raising?
External LLMs - picking the best option for Oracle customers
During CloudWorld, a candid AI Q&A for media brought new information to light. While Oracle made clear it is committed to its partnership with Cohere, during the panel Oracle's Steve Miranda told us that, in the long term, Oracle will use whichever third-party LLMs suit the needs of its customers:
Six months ago, we probably wouldn't talk about LLMs at all, and now we're talking about which one. Six months from now, we'll talk about maybe four different ones. It's sort of irrelevant to us; we're going to rely on whichever LLM [meets our infrastructure, compliance and security needs]. If that world changes tomorrow, we just swap out the new LLM, and we're ready to go.
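To make Miranda's "swap out the LLM" point concrete, here is a minimal Python sketch of what a provider-agnostic abstraction layer could look like. The class and provider names are purely illustrative assumptions on my part, not Oracle's actual architecture: the application codes against a neutral interface, and the concrete model behind it is a configuration choice.

```python
# Hypothetical sketch of a swappable-LLM layer: the application depends on a
# neutral interface, and the concrete provider is chosen by configuration.
# None of these class names reflect Oracle's actual implementation.
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Minimal contract the application layer depends on."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        ...


class CohereBackend(LLMBackend):
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        # Call the vendor's hosted model here (omitted in this sketch).
        return f"[cohere completion for: {prompt[:40]}...]"


class OtherVendorBackend(LLMBackend):
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return f"[other-vendor completion for: {prompt[:40]}...]"


def get_backend(name: str) -> LLMBackend:
    """Swap the model by changing configuration, not application code."""
    backends = {"cohere": CohereBackend, "other": OtherVendorBackend}
    return backends[name]()


if __name__ == "__main__":
    llm = get_backend("cohere")
    print(llm.complete("Summarize this supplier invoice dispute."))
```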
The role of customer data and reinforcement learning - a crucial part of Oracle's strategy?
In my last generative AI missive, I juxtaposed different vendor attitudes about how enterprise AI value will be delivered. Some vendors seem inclined towards the out-of-the-box value they think their AI will bring for customers (perhaps with results improved by industry-specific LLMs and industry data sets, like regulatory requirements). Other vendors are currently more focused on customer co-innovation and involving customers (and their data) to achieve better LLM results. As I told the panel, I got the impression Oracle falls more into the out-of-the-box camp. Clay Magouyrk, EVP, Oracle Cloud Infrastructure, disagreed:
No, I don't think that it's a tenet of ours that says, 'Reinforcement learning or fine tuning isn't helpful.' The point is, we don't believe that customers should have their data automatically taken, and then give it into a model and share it with other customers without that being something that they consent to. So as an example, our generative AI service has a shared model where your inferencing requests are not used for further retraining. We also offer a model where basically, it's a single tenant deployment, where you can do retraining with that data. But that model now only lives in your tenancy, and doesn't get shared across the customers.
So I don't think we believe that we have the magical panacea that says, 'Hey, this useful technique isn't useful.' But when I talk to customers, the thing I hear that's actually the biggest barrier to actually adopting generative AI - when we move past the fact that the capacity is hard to get these days right now - it becomes, 'How do I use it in a way that I can control? Because I'm very uncomfortable with the idea that I take my data, I use this great new technology, but then suddenly, all of my data has been given to other people. Am I violating laws, or am I giving data to my competitors?' So it's not that we're not pro [model tuning] techniques; it's about giving the customers control.
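To illustrate the control model Magouyrk describes - a shared endpoint where inference inputs are never retained for retraining, versus a dedicated, single-tenant deployment where fine-tuning on customer data stays inside that tenancy - here is a small hypothetical Python sketch. The field and class names are my own illustrative assumptions, not Oracle's service API.

```python
# Hypothetical illustration of shared vs. dedicated model deployments:
# the shared deployment never stores prompts for training, while the
# dedicated deployment may, and that model stays inside one tenancy.
from dataclasses import dataclass, field


@dataclass
class ModelDeployment:
    tenancy: str               # "shared" or a single customer's tenancy ID
    retain_for_training: bool  # whether inference inputs may feed retraining
    training_corpus: list = field(default_factory=list)

    def infer(self, prompt: str) -> str:
        if self.retain_for_training:
            # Only the dedicated deployment ever stores customer inputs,
            # and the resulting model never leaves that tenancy.
            self.training_corpus.append(prompt)
        return f"[completion for: {prompt[:40]}...]"


shared = ModelDeployment(tenancy="shared", retain_for_training=False)
dedicated = ModelDeployment(tenancy="acme-corp", retain_for_training=True)

shared.infer("Draft a collections email for overdue invoice 4417.")
dedicated.infer("Draft a collections email for overdue invoice 4417.")

assert shared.training_corpus == []       # shared model learns nothing from inference
assert len(dedicated.training_corpus) == 1
```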
The generative AI pricing debate - where Oracle stands
This brings us to generative AI pricing. This issue needs to be unraveled, but it won't happen overnight. Each vendor sees it differently. First, value - if great value is delivered out of the box, then you can surely charge a premium for it. But, if it turns out that achieving generative AI value - at least in the early stages - requires customer data, co-innovation, and the participation of the customer's domain experts in prompt testing and so on, should you charge an AI premium for that? As I put it in my last piece:
Does it make sense to charge aggressive AI premiums when the model's accuracy depends on the infusion of the customer's data, and the significant effort of their own domain experts?
There are other pricing issues:
- Generative AI as a service won't come cheap, but other similar IaaS services are priced on a consumption model.
- Add-on applications - including third party apps from partners - have always been priced separately; the same will obviously apply to AI add-ons. But:
- If vendors embed AI into their core cloud applications, charging for that separately seems unworkable, at least in the long term. Ultimately, customers will just expect that their enterprise applications will be infused with AI. The price point will surely reflect the AI value/costs, but in the long run, AI apps will just be how apps are built.
These topics all came up during the Q&A. While Oracle hasn't sorted every aspect of what I just laid out - no vendor has - the answers brought some initial clarity. Magouyrk explained the generative AI infrastructure service concept, where consumption-based pricing should be a good working model:
When it's a generative AI service, it's much more obvious and clear. It's a functionality; you'll get one price. It's probably more of a question from an application standpoint, right? Where would you integrate that functionality? Do you change that pricing? But from an infrastructure perspective, we charge for GPUs; we charge for networking; we will charge for our generative service.
Miranda spoke to the apps pricing side, where he believes charging a premium doesn't make sense:
I think on the apps, generally speaking, [charging a premium for AI] is sort of silly. We used to have this for self-service HR, [charging] separately for self-service - no, that's just HR. So charge separately for AI and financials? No, I think that's just going to be financials. As always, when we introduce new applications, we're going to charge the new application users. There's not going to be a world where there is ERP and AI ERP. AI is here, and it's AI ERP.
Oracle noted how this evolved on the database side:
We've been doing predictive analytics using machine learning algorithms for about 25 years. Initially, we charged extra for that, because it was really aimed at hardcore data scientists... Right before COVID, we actually made all that stuff free with the database. We decided since you didn't really have to be as big of a specialist as before, that this is the new normal. This is how you run databases. So it's just part of the database.
Oracle said the same holds true for its applications now:
It's the same for the vertical applications, the horizontal applications. We're building the AI feature sets into the applications, and there's a price for the applications, but not separately for the AI.
The role of partners and SIs - can smaller partners factor into AI?
Oracle had a lot to say about the role of partners in their generative AI strategy - enough for a full piece. This is an area I am tracking closely. It goes without saying that the large SIs will be big players in enterprise AI, but what about smaller partners, who may have less access to major hyperscalers and massive data sets? Smaller ISVs can bring industry know-how and fresh/edgy ideas, but will they have trouble making inroads here? After all, we call them Large Language Models for a reason. From what I've heard from Oracle so far, smaller ISVs should factor in.
To capitalize, partners must change their own offerings. But as Oracle notes, for larger SIs with AI skills in-house, it's a chance to help customers with AI adoption:
So at least from the new gen AI service layer, it's actually a time of great change for partners. If you look at, say, system integrators - a lot of customers, they want access to this technology, but they don't have the skills. And so it does create new opportunities for those partners, where we work together with many of the largest [SIs], people like Accenture or Deloitte and others, where we work together to make sure we both understand the environment. And then we jointly solve problems for customers.
But smaller ISVs can also factor in, by accessing Oracle's services globally.
It also opens up deeper partnership opportunities with ISVs, where suddenly, you can take the great new technology that crosses all of this. The ecosystem that we have at Oracle, because we have so much data and so much technology, they can build integrations that use both generative AI services, as well as access to other pieces. Imagine extensions to your SaaS applications - custom features that we haven't built yet for industry applications.
How can partners differentiate? Vertical expertise is a great starting point - even if the partner doesn't have access to massive data sets.
There's an opportunity for partners to offer differentiated change management services, particularly [in] verticals like healthcare and life sciences, where you've got to think through patient consent, [de-]identification, tokenization. So no matter if the training data is in a shared model or a dedicated model, you want to make sure that as an institution, the supplier of the data, that you're not inadvertently violating HIPAA policies and things like this. And that requires some things - process inspections and thought leadership from our partners, to help customers who probably haven't had policies in place at that level.
My take
It's too early to say exactly how generative AI value will be achieved in the enterprise. Based on my research grind to date, I have a strong conviction that industry-specific LLMs, infused with customer data, will be a key aspect of achieving more accuracy/relevance than we have seen from generalized, ChatGPT-type LLMs trained on the vast and problematic Internet. What we don't know yet is how much customer-specific co-innovation effort will be involved.
Some believe that technical advances in deep learning, or techniques such as RAG (retrieval-augmented generation), which Oracle explains on its website and seems enthusiastic about, will keep the need for human reinforcement learning to a minimum (this is important for cost-effectiveness and out-of-the-box impact). There is one area of near-consensus: enterprise AI vendors agree that human-in-the-loop output monitoring will be non-negotiable.
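For readers less familiar with the term, here is a minimal sketch of the RAG pattern, using a toy in-memory store and a placeholder model call; the function names and naive retrieval are my own assumptions, not Oracle's implementation. The point is that grounding the prompt in retrieved enterprise documents is what reduces the need for heavy model retraining.

```python
# Minimal RAG sketch: retrieve relevant documents, then ground the prompt in
# them before calling the model. Toy keyword retrieval stands in for a real
# vector search; call_llm is a placeholder for whichever hosted LLM is in use.
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval standing in for a vector search."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:top_k]


def call_llm(prompt: str) -> str:
    return f"[model answer grounded in provided context: {prompt[:60]}...]"


def answer_with_rag(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)


docs = [
    "Invoice 4417 was disputed by the supplier on 12 September.",
    "Quarterly close checklist: reconcile intercompany balances first.",
]
print(answer_with_rag("What is the status of invoice 4417?", docs))
```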
Out-of-the-box versus generative AI co-innovation is a story to watch, as pricing is deeply implicated. I suspect some initial use cases will work well out-of-the-box, but for more advanced use cases, we'll need to watch closely.
During Larry Ellison's CloudWorld keynote, his story of an Israel-based healthcare startup gave a good indication of how expert ISVs and niche startups fit into Oracle's industry plays:
A company in Israel called Imagene [Oracle is an investor] took a bunch of cancer biopsies, biopsy slides, and fed it into the AI model. The computer is now able to diagnose cancer in a matter of minutes... So this will be a more normal thing to do - a lot of people will be building these specialized models. Not many companies are going to build foundational models, and that makes complete sense.
We're doing the same thing, by the way. We're not taking biopsy slides and trying to train a model for cancer detection. But we are taking a lot of electronic health data and training it... Again, the idea is to give the doctor a draft of a discharge note, or a draft of an order. The doctor reviews it, edits it and then submits it. So the model is making the doctor's job easier. It is not taking over the doctor's job. The idea is to be this very, very powerful tool that makes the physician's job easier.
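The draft-then-review workflow Ellison describes is worth spelling out, since it is the human-in-the-loop pattern in its simplest form. Here is a hypothetical Python sketch of that flow - the function and field names are illustrative only, not Oracle Health's actual system.

```python
# Hypothetical draft-then-review flow: the model only produces a draft, and
# nothing is submitted until a clinician reviews and optionally edits it.
from dataclasses import dataclass


@dataclass
class DischargeNote:
    patient_id: str
    text: str
    status: str = "draft"   # draft -> reviewed -> submitted


def generate_draft(patient_id: str, encounter_summary: str) -> DischargeNote:
    draft_text = f"[model-drafted discharge note based on: {encounter_summary[:50]}...]"
    return DischargeNote(patient_id=patient_id, text=draft_text)


def clinician_review(note: DischargeNote, edits: str | None = None) -> DischargeNote:
    if edits:
        note.text = edits        # the doctor's edits always take precedence
    note.status = "reviewed"
    return note


def submit(note: DischargeNote) -> None:
    assert note.status == "reviewed", "A human must review before submission."
    note.status = "submitted"


note = generate_draft("p-001", "Admitted for pneumonia, responded to antibiotics.")
note = clinician_review(note, edits="Discharge home with 5-day oral antibiotic course.")
submit(note)
```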
This kind of human-in-the-loop medical AI talk is much more thoughtful than the grandiose statements we used to hear about radiologists being summarily replaced. We still see absurd headlines (though not from Oracle) about how "AI" outperforms doctors in some types of illness identification. That type of hyperbole misses the point entirely; Ellison has it right on this one.
Every enterprise vendor believes that their generative AI strategy is uniquely differentiated. Having dug into plenty of position statements, I can't say I agree. Once we get further - and customer use cases in production can be validated - we may have a better sense of who truly excels. But differentiation isn't always the right mindset. Every vendor must provide its customers with a viable way forward that addresses everything from explainability to model drift to customer data privacy.
In that sense, differentiation is less important than giving your own customers what they need. Oracle has good answers, but then again, I've heard good answers from most vendors (with the exception of Zoom, which comes off as more of a consumer tech player, and not in a good way).
Oracle does potentially have some differentiators in combining its generative AI apps play with OCI-powered generative AI services, with the NVIDIA tech partnership factoring in. But exactly how those pieces could fuse into a better value proposition for Fusion Cloud customers will take time to research and document. Cost savings and performance at scale are not necessarily easy for customers to benchmark, especially with AI pricing still in flux. Though over time, the better-performing solutions do have a way of grabbing market share.
I found this post by Avasant's Tom Dunlap interesting: Oracle Racing to Catch Up with Focus on Generative AI. I don't believe Oracle is substantially behind (or ahead of) its main enterprise software competitors when it comes to releasing generative AI functionality (all have pretty similar release timeframes). From a generative AI services/infrastructure standpoint, you could argue Oracle was temporarily ahead here, at least in terms of its NVIDIA partnership. Whether that helps them claim a bigger stake in the broader IaaS market remains to be seen.
Obviously, Azure, AWS and Google have claimed a big piece of that pie - hence the title of Dunlap's piece; it's unclear whether AI will shake up who claims those workloads. I suspect not substantially, but it's too early to assess the impact of Oracle's ability to offer both AI apps and AI infrastructure services to customers.
For Oracle's own customers, when it comes to preserving customer data privacy, having everything in-house is surely going to help with that data trust factor - except for the LLM layer, where Oracle has now stated its plans for data protection. Oracle is clearly confident; now we track the progress.
Updated 5am UK time September 27th, with a few minor tweaks for reading clarity.