Nvidia’s rivals struggle to gain ground in generative AI chip war

In the three weeks since Nvidia stunned the tech world by predicting an unprecedented leap in sales, Wall Street has been hunting for other chip companies that could benefit from the latest AI boom.

But the gulf between Nvidia and the rest of the chip industry will only widen as the search goes on.

In one of the most anticipated attempts to catch up with Nvidia, rival AMD this week showed off a new AI chip called the MI300X. The chip includes a GPU — a product originally designed for video games and at the heart of Nvidia’s success — along with a more general-purpose CPU and built-in memory that feeds both processors.

The design reflects the chipmaker’s attempt to bundle different technologies to find the most efficient way to handle the massive amounts of data needed to train and run the large models used in generative AI.

AMD is claiming impressive performance from its new chip, which it says will surpass Nvidia’s flagship H100 on a number of metrics. But it could not name any potential customers considering the chip, and it highlighted the product’s ability to handle AI inferencing — applying pre-trained AI models — rather than the more demanding training work that has been behind Nvidia’s surging sales. It also said it would not start ramping up production of the new chips until the final quarter of the year.

By the time AMD’s new chips are generally available in the first half of next year, Nvidia’s H100 will have been on the market for 18 months, giving it a huge lead, said Bernstein analyst Stacy Rasgon. AMD “is far behind. They may be sucking up the dregs” of the artificial intelligence market, though that may be enough to justify Wall Street’s recent enthusiasm for the company’s stock, he said.

Patrick Moorhead, an analyst at Moor Insights & Strategy, added that “Nvidia is free and clear” in this round of chip wars that have erupted around AI.

Wall Street has singled out some chip companies that could get a boost from generative artificial intelligence. The combined market capitalization of AMD, Broadcom and Marvell jumped $99 billion, or 20%, in the two days after Nvidia released its stunning sales forecast last month. But most of their AI-related sales are not expected to come from the market that Nvidia dominates.

Broadcom, for example, will benefit from growing demand for its datacom products, as well as an in-house data center chip it designed in partnership with Google, called a TPU. Earlier this month, Broadcom forecast that AI-related businesses would account for about a quarter of its revenue by 2024, up from 10% last year.

However, processors used to train and apply large AI models are seeing the biggest surge in demand and generating the most enthusiasm in the stock market. With AMD’s new chips underwhelming Wall Street, Nvidia’s market capitalization climbed back above $1 trillion, a level it first reached two weeks ago.

AMD CEO Lisa Su said: “There is no doubt that artificial intelligence will be the main driver of silicon consumption for the foreseeable future,” and that data centers will be the main focus of investment. She predicts that the market for AI accelerators (GPUs and other specialized chips designed to accelerate the data-intensive processing involved in training and running AI models) will soar from $30 billion this year to more than $150 billion by 2027.

Companies such as AMD and Intel have struggled to compete with Nvidia in the most advanced AI chips, so they are counting on the development of the generative AI market to boost demand for other types of processors. They claim that large models such as OpenAI’s GPT-4 dominated the technology’s early days, but a recent surge in the use of smaller and more specialized models could lead to increased sales of less powerful chips.

Intel vice president Kavitha Prasad said many customers who want to use enterprise data to train models also want to keep their information close to home, rather than risk putting it in the hands of large AI model companies. Along with all the computational work of preparing data to feed into training, that will create plenty of demand for the CPUs and AI accelerators that Intel makes, she said.

However, the proliferation of services such as ChatGPT has led to rapidly changing demands on data centers, making it difficult for chipmakers to predict how their markets will develop. CPU sales could even decline in the coming years, Rasgon said, as data center customers channel spending into AI accelerators.

Competitors hoping to get a piece of Nvidia’s booming artificial intelligence business face an equally daunting challenge on the software front. The widespread use of Nvidia chips in AI and other applications is due in large part to the fact that its GPUs, originally designed for video games, can be easily programmed for other tasks using its Cuda software.

In an effort to attract more developers to its AI chips, AMD this week highlighted its efforts to partner with widely used AI framework PyTorch. However, it still has a long way to go to match the many software libraries and applications already developed for Cuda, Rasgon said. He said it would be “another decade” before competitors could match Nvidia’s software — and Nvidia would still rapidly expand its lead in the interim.

“Nobody wants an industry with one dominant player,” Moorhead said. But for now, the booming market for chips capable of handling generative AI belongs to Nvidia.
