
Nvidia CEO: AI Chip Progress Surpasses Moore's Law

January 7, 2025

Jensen Huang, CEO of Nvidia, says the performance of his company’s AI chips is improving faster than the historical rate set by Moore’s Law.

Accelerated Progress

“Our systems are progressing way faster than Moore’s Law,” Huang stated in a recent interview with TechCrunch. This declaration followed his keynote presentation at CES in Las Vegas, which was attended by a crowd of 10,000 people.

Understanding Moore's Law

Moore’s Law, based on an observation by Intel co-founder Gordon Moore in 1965 and revised by him in 1975, predicted that the number of transistors on computer chips would roughly double every two years. The projection largely held for decades, steadily increasing computing power while driving down its cost.
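As a rough illustration of what that doubling implies (the arithmetic here is illustrative, not from the article), transistor counts under Moore’s Law grow as

    N(t) = N(0) × 2^(t/2)

with t measured in years, so a decade of Moore’s Law amounts to 2^5, roughly a 32-fold increase.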

A Shift in the Trend

While Moore’s Law has slowed in recent years, Nvidia contends that its AI chips are advancing at an accelerating pace. The company says its newest data center superchip delivers more than 30 times the AI inference performance of the previous generation.

Integrated Innovation

“We can build the architecture, the chip, the system, the libraries, and the algorithms all at the same time,” Huang explained. “This holistic approach allows us to innovate more rapidly than Moore’s Law dictates, as we are optimizing the entire technological stack.”

Addressing Concerns About AI Progress

Huang’s comments come amid debate over whether AI progress has slowed. Many leading AI labs, including Google, OpenAI, and Anthropic, rely on Nvidia’s chips to train and run their AI models; improvements to those chips should translate directly into further gains in AI capabilities.

"Hyper Moore's Law"

This is not the first instance of Huang suggesting Nvidia is exceeding the boundaries of Moore’s Law. In a November podcast appearance, he proposed that the AI sector is currently on track for what he termed “hyper Moore’s Law.”

Three Active AI Scaling Laws

Huang disputes the notion that AI progress is slowing, instead identifying three AI scaling laws now in effect: pre-training, in which models learn patterns from large amounts of data; post-training, in which their outputs are refined with techniques such as human feedback; and test-time compute, in which a model is given more time to reason after receiving a question.

Cost Reduction Through Performance

“Moore’s Law was so important in the history of computing because it drove down computing costs,” Huang noted to TechCrunch. “A similar effect will occur with inference, where increased performance will lead to lower costs.”

Nvidia’s position as the world’s most valuable company is intrinsically linked to the current AI boom, making such pronouncements strategically beneficial.

The Shift to Inference

While Nvidia’s H100 chips were initially favored for AI model training, the industry’s growing focus on inference has prompted questions about the continued dominance of Nvidia’s higher-priced offerings.

Cost of Test-Time Compute

AI models that rely on test-time compute are expensive to run, raising concerns that models like OpenAI’s o3, which uses a scaled-up version of the technique, could be too costly for most users. OpenAI reportedly spent roughly $20 per task running o3 to achieve human-level scores on a test of general intelligence; by comparison, a ChatGPT Plus subscription costs $20 for an entire month.

The GB200 NVL72 Superchip

During his CES keynote, Huang showcased Nvidia’s latest data center superchip, the GB200 NVL72, presenting it as a solution. This chip is reported to be 30 to 40 times faster at running AI inference workloads than the previous generation H100.

Lowering Inference Costs

Huang believes this performance increase will make computationally intensive AI reasoning models, such as OpenAI’s o3, more affordable over time.

Focus on Performance and Affordability

Huang emphasized his commitment to developing more performant chips, asserting that increased performance ultimately translates to lower prices.

“The direct and immediate solution for test-time compute, both in performance and cost affordability, is to increase our computing capability,” Huang explained. He also suggested that advanced AI reasoning models could contribute to the creation of improved data for pre-training and post-training processes.

Continued Price Declines

The price of AI models has decreased significantly in the past year, partly due to hardware advancements from companies like Nvidia. Huang anticipates this trend will continue with AI reasoning models, despite the initial expense of early versions like those from OpenAI.

A Thousandfold Improvement

More broadly, Huang claims that Nvidia’s current AI chips are 1,000 times more powerful than those it built a decade ago, a pace far beyond what Moore’s Law would predict and one he expects to continue.
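For scale (again, back-of-the-envelope arithmetic using the classic two-year doubling, not figures from the article): Moore’s Law alone would deliver about 2^5 = 32x over a decade, whereas a 1,000-fold gain over the same ten years works out to

    1,000 ≈ 2^10

that is, roughly one doubling per year, twice Moore’s cadence.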

