The "deep water of the Information Age" offers Oracle great opportunity, argues CTO Larry Ellison, but Wall Street takes a cool view of Q1 numbers

Stuart Lauchlan September 11, 2023
There's a lot of need for more and more data in a generative AI world and that plays to Oracle's strengths, says Ellison.

Larry Ellison

Oracle signed AI companies up to $4 billion of capacity in its Gen2 Cloud in its fiscal 2024 Q1, more than twice as much as in the previous quarter. But a mixed bag of quarterly numbers saw Wall Street send the company’s share price down.

Net income for Q1 rose to $2.42 billion, up from $1.55 billion in the year-ago quarter, while overall revenue was up nine percent year-on-year to $12.45 billion. Meanwhile cloud services and license support revenue of $9.55 billion was up 13% year-on-year.

But cloud license and on-premises license revenue fell ten percent year-on-year to $809 million, while hardware revenue was down six percent to $714 million. And while cloud infrastructure revenue of $1.5 billion was up a hefty 66% year-on-year, that was a slower sequential growth rate, down from 76% in Q4.

It was to the AI wins that CTO Larry Ellison drew attention on the post-results analyst call:

Is generative AI the most important new computer technology ever? Maybe, and we are about to find out. Self-driving cars, computer-designed antiviral drugs, voice user interfaces. Generative AI is changing the automobile industry, the pharmaceutical industry, how people communicate with their computers. Generative AI is changing everything…The largest AI technology companies and the leading AI start-ups continue to expand their business with Oracle for one simple reason - Oracle's RDMA interconnected NVIDIA superclusters train AI models at twice the speed and much less than half the cost of other clouds.

Data needs

It’s a big opportunity, he attested:

You can't build any of these AI models without enormous amounts of training data. So if anything, generative AI has shown that the big issue about training one of these models is just getting that vast amount of data ingested into your GPU supercluster. It is a huge data problem in the sense [that] you need so much data to train OpenAI, to train ChatGPT 3.5. They read the entire public Internet, they read all of Wikipedia, they read everything, they ingest everything. You take something like ChatGPT 4.0 and you want to specialize it, you need specialized training data from electronic health records to help doctors diagnose and treat cancer, let's say.

He cited a healthcare example of imaging:

That is ingesting huge amounts of image data to train their AI models. We have another partner of ours in AI, ingesting huge amounts of electronic health records to train their models. AI doesn't work without getting access to and ingesting enormous amounts of data. So in terms of a shift away from data or a change in gravity to AI, AI is utterly dependent upon vast amounts of training data. Trillions of elements went into building ChatGPT 3.5, multiple times that for ChatGPT 4.0, because you have to deal with all the image data and ingest all of that to train image recognition.

All of this is good news for Oracle’s database business, Ellison argued:

Oracle's new vector database will contain highly-specialized training data, like electronic health records, while keeping that data anonymized and private, yet still training the specialized models that can help doctors improve their diagnostic capability and their treatment prescriptions for cancer and heart disease and all sorts of other diseases. So we think it's a boon to our business. We are now getting into the deep water of the Information Age. Nothing has changed about that. The demands on data are getting stronger and more important.

That opens up new prospects for growth elsewhere, he added:

If you're constantly training these models, keep in mind, you have to bring in new data. If you're in the healthcare field, in the legal field, new cases are being judged, new research is being published all the time, and for your AI models to be relevant they have to be up-to-date. So it's not that you train and then do nothing but inferencing thereafter. Your training and your inferencing sit right next to each other. As long as we can do this stuff twice as fast as everybody else, not just on the training side, [but] also on the inferencing side, then we are going to be half the cost or better. So we think we are going to be very, very competitive across the board, whether it's training or inferencing. We are pretty confident that we've got a cost-performance advantage.

My take

Next up - Mr Ellison goes to Redmond to meet with Microsoft CEO Satya Nadella at the end of this week. Ellison pitched:

We will be substantially expanding our existing multi-cloud partnership with Microsoft by making it easier for Microsoft Azure customers to buy and use the latest Oracle cloud database technology in combination with Microsoft Azure cloud services. 

Then after that, it’s Oracle CloudWorld.

Watch this space on both.
