Developing artificial intelligence (AI) software wouldn't be possible without data centers and the powerful graphics processing units (GPUs) inside them. For the past 18 months, Nvidia (NASDAQ: NVDA) has dominated the GPU industry with a staggering market share of up to 98%.
But competition was bound to emerge, and Advanced Micro Devices (NASDAQ: AMD) has stepped up to the plate with an exciting GPU roadmap. The company hosted its "Advancing AI" event on Oct. 10, where CEO Lisa Su provided an update on its next-generation chips.
Although Advanced Micro Devices is still trailing Nvidia in the market for AI GPUs, Su's comments suggest the company is catching up at a rapid clip. Here's why investors should be excited.
Nvidia's H100 GPU set the benchmark for AI training and AI inference. The chip went into full production in September 2022, although sales didn't ramp up until 2023, when AI fever gripped the tech sector. The H100 is still a hot product today, and Nvidia continues to struggle with supply constraints because demand from leading AI companies like OpenAI, Amazon, and Microsoft remains so high.
Those supply challenges have opened the door for competitors like Advanced Micro Devices to steal some market share. The company announced its own data center GPU called the MI300X at the end of 2023, which was specifically designed to compete with the H100. So far, it has attracted some of Nvidia's top customers, including Microsoft, Oracle, and Meta Platforms.
In fact, Advanced Micro Devices says some of those customers are seeing performance and cost advantages from using the MI300X compared to the H100. Despite launching more than a year later, the challenger delivered a very worthy product. The company forecasts the MI300 series will propel its GPU revenue to a record $4.5 billion in 2024 -- an estimate that has already been raised twice.
But Nvidia still has the edge. It started shipping its new H200 GPU earlier this year, which is capable of performing AI inference at nearly twice the speed of the H100, leaving Advanced Micro Devices a step behind once again. However, at the Advancing AI event, Lisa Su offered fresh details on her company's new MI325X, which will deliver 80% more high-bandwidth memory than the H200 and 30% better inference performance.
That's great news for the company, but the MI325X isn't expected to ship until the first quarter of 2025.
The race to catch up doesn't stop there. Nvidia is now focused on its latest Blackwell chip architecture, which paves the way for its biggest leap in performance so far. The new GB200 NVL72 system is capable of performing AI inference at a whopping 30 times the pace of the equivalent H100 system. Each individual GPU will be priced comparably to the H100 (when it first launched), so Blackwell should deliver an incredible improvement in cost efficiency.