Silicon Valley semiconductor company NVIDIA is forecasting fiscal Q2 2024 revenues of $11 billion, a more than $4 billion increase on what it secured in the same period last year. The rapid surge in growth is being fuelled by NVIDIA’s AI investments, as buyers rush to work out how to make the most of the advancing technology in the wake of the release of generative AI tool ChatGPT.
NVIDIA announced its Q1 2024 earnings this week, reporting revenues of $7.19 billion (up 19% from the previous quarter), which sent the company’s share price up nearly 30% in after-hours trading on Wednesday.
The company has struggled to meet demand for its AI chips, but CEO Jensen Huang told Reuters that NVIDIA had started full production of its latest graphics processing units (GPUs) in August last year, which helped it expand supply to meet buyers’ needs when ChatGPT had its ‘iPhone moment’ earlier this year.
Speaking with analysts on the earnings call, NVIDIA EVP and CFO Colette Kress said:
Outlook for the second quarter fiscal '24…total revenue is expected to be $11 billion, plus or minus 2%. We expect this sequential growth to largely be driven by [our data center division], reflecting a steep increase in demand related to generative AI and large language models.
This demand has extended our data center visibility out a few quarters and we have procured substantially higher supply for the second half of the year.
The key numbers for this quarter, Q1 2024, include:
Quarterly revenue of $7.19 billion, up 19% from previous quarter
Record Data Center revenue of $4.28 billion
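For context, the guidance range and the implied sequential growth follow directly from the quoted figures. A quick arithmetic sketch (all inputs are the numbers reported in the article):

```python
# Sanity-check the figures quoted in the article (all in $ billions).
q1_revenue = 7.19      # reported Q1 FY2024 revenue
q2_guidance = 11.0     # Q2 FY2024 guidance midpoint
tolerance = 0.02       # "plus or minus 2%"

# Guidance range implied by "plus or minus 2%"
low = q2_guidance * (1 - tolerance)   # 10.78
high = q2_guidance * (1 + tolerance)  # 11.22

# Sequential growth implied by the guidance midpoint
growth_pct = (q2_guidance / q1_revenue - 1) * 100  # ~53%

print(f"Guidance range: ${low:.2f}bn - ${high:.2f}bn")
print(f"Implied sequential growth: {growth_pct:.0f}%")
```

In other words, even the low end of the range would represent roughly 50% quarter-on-quarter growth.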
CFO Kress highlighted that NVIDIA’s growth is coming from the company’s data center division, and provided some insight into its customer categories. She said:
“Generative AI is driving exponential growth in compute requirements and a fast transition to NVIDIA accelerated computing, which is the most versatile, most energy-efficient, and the lowest TCO approach to train and deploy AI. Generative AI drove significant upside in demand for our products, creating opportunities and broad-based global growth across our markets.
Let me give you some color across our three major customer categories…first, CSPs around the world are racing to deploy our flagship Hopper and Ampere architecture GPUs to meet the surge in interest from both enterprise and consumer AI applications for training and inference.
Multiple CSPs announced the availability of H100 on their platforms, including private previews at Microsoft Azure, Google Cloud, and Oracle Cloud Infrastructure, upcoming offerings at AWS, and general availability at emerging GPU specialized cloud providers like CoreWeave and Lambda. In addition to enterprise AI adoption, these CSPs are serving strong demand for H100 from Generative AI pioneers.
Second, consumer Internet companies are also at the forefront of adopting Generative AI and deep learning-based recommendation systems, driving strong growth. For example, Meta has now deployed its H100 powered Grand Teton AI supercomputer for its AI production and research teams.
Third, enterprise demand for AI and accelerated computing is strong. We are seeing momentum in verticals such as automotive, financial services, healthcare, and telecom, where AI and accelerated computing are quickly becoming integral to customers' innovation roadmaps and competitive positioning. For example, Bloomberg announced it has a 50 billion parameter model, BloombergGPT, to help with financial natural language processing tasks such as sentiment analysis, named entity recognition, news classification, and question-answering.
Kress also highlighted the recently announced NVIDIA AI Foundations, a set of model foundry services that allow companies to build, refine and operate custom LLMs and generative AI models, trained with their own proprietary data and created for domain-specific tasks.
We saw a recent example of this at ServiceNow’s Knowledge user event in Las Vegas last week, where the cloud workflow vendor announced a partnership with NVIDIA to develop enterprise-grade generative AI capabilities. ServiceNow will use NVIDIA software, services and infrastructure to create its own custom LLMs, trained specifically for the Now platform. On the new partnership, Kress said:
ServiceNow, a leading enterprise services platform, is an early adopter…they are developing custom large language models trained on data specifically for the ServiceNow platform. Our collaboration will let ServiceNow create new enterprise-grade generative AI offerings, with the thousands of enterprises worldwide running on the ServiceNow platform, including for IT departments, customer service teams, employees, and developers.
NVIDIA CEO Jensen Huang also took time to talk to analysts about the competition facing the company, given the heightened interest in where the AI market is going and how it will develop. Huang acknowledged that competition is rife, but argued that to gain an advantage, companies need to be able to tackle the problem at scale and with the full stack - something few will be able to do. He said:
Regarding competition, we have competition from every direction. Start-ups, really, really well-funded and innovative start-ups, countless of them all over the world. We have competition from existing semiconductor companies. We have competition from CSPs with internal projects. And so we're mindful of competition all the time, and we get competition all the time. But NVIDIA's value proposition at the core is: we are the lowest cost solution. We're the lowest TCO solution.
And the reason for that is because accelerated computing is two things that I talk about often. It's a full stack problem, it's a full stack challenge - you have to engineer all of the software and all the libraries and all the algorithms, integrate them into the frameworks, and optimize it for the architecture of not just one chip, but the architecture of an entire data center. The amount of engineering and distributed computing, fundamental computer science work, is really quite extraordinary. It is the hardest computing as we know it.
The second part is that generative AI is a large scale problem, and it's a data center scale problem. Another way of thinking about it is that the computer is the data center, or the data center is the computer, it's not the chip. It's the data center and it's never happened like this before.
And in this particular environment, your networking operating system, your distributed computing engines, your understanding of the architecture of the networking gear, the switches and the computing systems, the computing fabric, that entire system is your computer and that's what you're trying to operate. And so in order to get the best performance, you have to understand full stack and you have to understand data center scale, and that's what accelerated computing is.
This company has a whole lot of opportunity ahead of it - one to watch over the coming months and years, particularly as it partners with vendors higher up the stack (e.g. ServiceNow). The worlds of infrastructure and software are becoming more closely tied as AI advances.