Hot on the heels of Google and Microsoft, Amazon dropped its latest numbers yesterday with AWS growth looking healthy, up 12% year-on-year to $22.1 billion.
With the cloud business making up 70% of Amazon’s total operating profit of $7.7 billion, Amazon CEO Andy Jassy said growth had "stabilized":
AWS remains the clear cloud infrastructure leader with a significant leadership position with respect to number of customers, size of partner ecosystem, breadth of functionality and the strongest operational performance. These are important factors for why AWS has grown the way it has over the last several years and for why AWS has almost doubled the revenue of any other provider.
Jassy claimed that customer focus was also a critical factor:
As the economy has been uncertain over the last year, AWS customers have needed assistance cost-optimizing to withstand this challenging time and reallocate spend to newer initiatives that better drive growth. We've proactively helped customers do this. And while customers have continued to optimize during the second quarter, we've started seeing more customers shift their focus towards driving innovation and bringing new workloads to the cloud. As a result, we've seen AWS' revenue growth rate stabilize during Q2 where we reported 12% year-over-year growth.
Product innovation continues apace, he added:
The AWS team continues to innovate and change what's possible for customers at a rapid clip. You can see across the array of AWS product categories where AWS leads in compute, networking, storage, database, data solutions and machine learning, among other areas, and the continued invention and delivery in these areas is pretty unusual. For instance, a few years ago, we heard consistently from customers that they wanted to find more price performance ways to do generalized compute. And to enable that, we realized that we needed to rethink things all the way down to the silicon and set out to design our own general purpose CPU chips.
Today, more than 50,000 customers use AWS' Graviton chips and AWS Compute instances, including 98 of our top 100 Amazon EC2 customers, and these chips have about 40% better price performance than other leading x86 processors.
Generative AI in three parts
As is now compulsory, Jassy made a generative AI pitch:
Generative AI has captured people's imagination, but most people are talking about the application layer, specifically what OpenAI has done with ChatGPT. It's important to remember that we're in the very early days of the adoption and success of generative AI, and that consumer applications are only one layer of the opportunity.
Amazon’s view is that the generative AI stack has three key layers, he explained:
At the lowest layer is the compute required to train foundational models and do inference or make predictions. Customers are excited by Amazon EC2 P5 instances powered by NVIDIA H100 GPUs to train large models and develop generative AI applications. However, to date, there's only been one viable option in the market for everybody and supply has been scarce.
That, along with the chip expertise we've built over the last several years, prompted us to start working several years ago on our own custom AI chips for training called Trainium and inference called Inferentia that are on their second versions already and are a very appealing price performance option for customers building and running Large Language Models. We're optimistic that a lot of large language model training and inference will be run on AWS' Trainium and Inferentia chips in the future.
The middle layer, he went on, is Large Language Models-as-a-Service:
It takes billions of dollars and multiple years to develop these Large Language Models. Most companies tell us that they don't want to consume that resource building them themselves. Rather, they want access to those large language models, want to customize them with their own data without leaking their proprietary data into the general model, have all the security, privacy and platform features in AWS work with this new enhanced model and then have it all wrapped in a managed service. This is what our service Bedrock does and offers customers all of these aforementioned capabilities with not just one large language model but with access to models from multiple leading Large Language Model companies like Anthropic, Stability AI, AI21 Labs, Cohere and Amazon's own developed large language models called Titan.
Customers, including Bridgewater Associates, Coda, Lonely Planet, Omnicom, 3M, Ryanair, Showpad and Travelers are using Amazon Bedrock to create generative AI applications. And we just recently announced new capabilities for Bedrock, including new models from Cohere, Anthropic's Claude 2 and Stability AI's Stable Diffusion XL 1.0 as well as agents for Amazon Bedrock that allow customers to create conversational agents to deliver personalized up-to-date answers based on their proprietary data and to execute actions.
Both these layers have a common purpose, said Jassy:
What we're doing is democratizing access to generative AI, lowering the cost of training and running models, enabling access to Large Language Models of choice, instead of there only being one option, making it simpler for companies of all sizes and technical acumen to customize their own large language model and build generative AI applications in a secure and enterprise-grade fashion. These are all part of making generative AI accessible to everybody and very much what AWS has been doing for technology infrastructure over the last 17 years.
Finally, the third or top layer is made up of the applications that run on LLMs:
ChatGPT is an example. We believe one of the early compelling generative AI applications is a coding companion. It's why we built Amazon CodeWhisperer, an AI-powered coding companion, which recommends code snippets directly in the code editor, accelerating developer productivity as they code. It's off to a very strong start and changes the game with respect to developer productivity. Inside Amazon, every one of our teams is working on building generative AI applications that reinvent and enhance their customers' experience.
This then leads back to opportunities for AWS growth, he argued:
While we will build a number of these applications ourselves, most will be built by other companies, and we're optimistic that the largest number of these will be built on AWS. Remember, the core of AI is data. People want to bring generative AI models to the data, not the other way around.
AWS not only has the broadest array of storage, database, analytics and data management services for customers, it also has more customers and data stored than anybody else. Coupled with providing customers with unmatched choices at these three layers of the generative AI stack as well as Bedrock's enterprise-grade security that's required for enterprises to feel comfortable putting generative AI applications into production, we think AWS is poised to be customers' long-term partner of choice in generative AI.
That said, Jassy concluded with some expectation management, arguing that AWS has seen “a very significant amount of business” driven by Machine Learning and AI for several years:
When you're talking about the big potential explosion in generative AI, which everybody is excited about, including us, I think we're in the very early stages there. We're a few steps into a marathon in my opinion. I think it's going to be transformative, and I think it's going to transform virtually every customer experience that we know, but I think it's really early. I think most companies are still figuring out how they want to approach it. They're figuring how to train models. They don't want to build their own Large Language Models. They want to take other models and customize them, and services like Bedrock enable them to do so. But it's very early. And so I expect that will be very large, but it will be in the future.
Solid numbers, although it should be noted that the 12% year-on-year growth rate for AWS is down from the 16% recorded in the prior quarter.