Nvidia’s AI Chips: Surpassing Moore’s Law?
In a recent conversation with TechCrunch, Nvidia CEO Jensen Huang confidently asserted that the performance of his company’s AI chips is accelerating beyond the historic benchmarks established by Moore’s Law. Speaking shortly after addressing a crowd of 10,000 at CES in Las Vegas, Huang emphasized that “Our systems are progressing way faster than Moore’s Law.”
Moore’s Law, introduced by Intel co-founder Gordon Moore in 1965, predicted that the number of transistors on a chip would roughly double every year (a pace Moore later revised to every two years), delivering a comparable jump in performance. That prediction fueled rapid technological progress and falling costs for decades, but the pace has slowed in recent years.
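The compounding behind Moore’s Law is worth making concrete. A minimal sketch, assuming idealized, steady doubling (the one-year cadence is Moore’s original 1965 prediction, the two-year cadence his later revision; no Nvidia-specific data is used):

```python
def improvement_factor(years: float, doubling_period: float) -> float:
    """Cumulative improvement after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Annual doubling compounds to ~1024x over a decade,
# while a two-year cadence yields only 32x in the same span.
print(improvement_factor(10, 1))  # 1024.0
print(improvement_factor(10, 2))  # 32.0
```

The gap between those two numbers is why the choice of doubling period matters so much when anyone claims to be "beating" Moore's Law.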
“We can build the architecture, the chip, the system, the libraries, and the algorithms all at the same time,” Huang explained. “If you do that, then you can move faster than Moore’s Law, because you can innovate across the entire stack.”
Nvidia claims its latest data center superchip is more than 30 times faster at AI inference workloads than its previous generation. The claim arrives amid speculation over whether AI progress has hit a plateau. Leading AI labs such as Google and OpenAI rely on Nvidia’s chips to train and run their models, so advances in the chips could translate directly into further AI development.
Huang also discussed three active scaling laws in AI, which he believes will drive down costs just as Moore’s Law once did for computing:

- Pre-training: the initial learning phase on large data sets
- Post-training: fine-tuning, for example using human feedback
- Test-time compute: allowing a model more processing time for its responses
Nvidia’s growth has paralleled the AI boom, and Huang continues to emphasize innovation. During his CES keynote, he showcased Nvidia’s GB200 NVL72 superchip, billing it as a game changer for reducing inference costs over time.
While some question if Nvidia’s costly chips will maintain dominance as companies shift focus to inference, Huang remains optimistic. He asserts that better-performing chips will naturally lead to lower prices and enhance AI reasoning models.
“The direct and immediate solution for test-time compute, both in performance and cost affordability, is to increase our computing capability,” he stated to TechCrunch.
Nvidia projects its AI chips are now 1,000 times more efficient than those of a decade ago, a pace well ahead of Moore’s Law. As AI evolves, Nvidia appears committed to staying at the forefront of this technological revolution.
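As a rough sanity check on that figure, 1,000x over ten years implies a compound annual improvement of about 2x per year, matching Moore's original one-year doubling cadence rather than the slower two-year revision. A minimal back-of-the-envelope sketch (the 1,000x number is Nvidia's claim; the arithmetic is ordinary compounding):

```python
# The "1,000x in a decade" claim implies a compound annual improvement
# factor of 1000 ** (1/10) -- roughly a doubling every year.
implied_annual = 1000 ** (1 / 10)
print(f"Implied annual improvement: {implied_annual:.3f}x")  # ~1.995x
```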