Is Moore's Law dead? We spoke to Intel, AMD, Nvidia, and Qualcomm, and both sides of the debate agree: The only constant is progress
Moore’s Law is both alive and dead.
Can we keep expecting 20-30% increases in hardware performance and efficiency, generation to generation?
As part of our 2025 Silicon Survey, Laptop Mag spoke with executives from AMD, Apple, Arm, Intel, MediaTek, Nvidia, and Qualcomm to determine if we can expect to see performance and efficiency plateaus in the coming years.
You’ll be able to catch these exclusive interviews in full throughout the week in Laptop Mag’s Silicon Survey special issue. And while the microchip world may be divided on whether or not Moore’s Law is actually dead, all of our interviewees agreed that performance gains will continue long into the future.
But what that means, and how we get there, differs by chipmaker. When it comes to hardware performance and efficiency, Intel’s Robert Hallock tells Laptop Mag, “You can't go backwards.”
The only constant is progress.
This article is part of a Laptop Mag special issue featuring exclusive interviews with Apple, AMD, Intel, Qualcomm, Nvidia, and more as we learn how their silicon will shape the future of CPUs and GPUs. Check out Laptop Mag's Silicon Survey 2025 special issue for more.
What is Moore’s Law and why does it matter for chips?
Moore’s Law is a trend in microchip design, first recognized by Intel co-founder Gordon Moore. Moore observed that the number of transistors on an integrated circuit doubles roughly every two years. In 1965, Moore predicted that this doubling would continue for at least the next decade, but his observation has stood as a guiding principle of processor design for the last 60 years.
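To put that doubling in perspective, here is a minimal back-of-the-envelope sketch. It assumes the commonly cited Intel 4004 starting point of roughly 2,300 transistors in 1971, and the numbers it prints are illustrative projections of the trend, not real industry figures:

```python
# Back-of-the-envelope sketch of Moore's Law: transistor counts doubling
# roughly every two years. The starting point (Intel 4004, ~2,300 transistors
# in 1971) is a commonly cited figure; the projections are illustrative only.

def moores_law_estimate(start_count: int, start_year: int, target_year: int) -> float:
    """Project a transistor count assuming one doubling every two years."""
    doublings = (target_year - start_year) / 2
    return start_count * 2 ** doublings

for year in (1971, 1985, 2005, 2025):
    print(f"{year}: ~{moores_law_estimate(2_300, 1971, year):,.0f} transistors")
```

Run forward, that simple rule lands in the hundreds of billions of transistors by the mid-2020s, within shouting distance of today’s biggest chips, which is part of why the trend has been so hard to declare dead.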
While the death of Moore’s Law has been heralded many times before, Computex 2024 saw the most recent resurrection of the “Moore’s Law is dead” debate among chipmakers.
Nvidia CEO Jensen Huang is the most vocal proponent of the idea that Moore’s Law is dead, while former Intel CEO Pat Gelsinger argues otherwise. After all, Intel’s Lunar Lake chips prove that you can still double the number of transistors on an integrated circuit, even in modern computing.
Taking a long, hard look at raw computing power and efficiency from all the major chip makers over the last several years, we wonder: can we expect to keep seeing the same major performance increases between generations of silicon?
Eventually, physical hardware scaling will have to change. Moore’s Law is only a trend in outcomes, not a bankable guarantee. Transistors can only get so small, and microchips can only house so many of them before the actual and inarguable laws of physics step in. TSMC is already hard at work on a 2-nanometer process node, but how small can a transistor get before the chip cooks under the pressure?
As our interviewees highlighted, microchip performance is about more than transistors. Chips aren’t just hardware. To run a new chip, you need new drivers and software.
With the rise of AI, Nvidia’s argument has shifted from “Moore’s Law is dead” to “Our systems are progressing way faster than Moore’s Law.” Artificial intelligence has been the biggest force behind Nvidia’s new RTX 50-series “Blackwell” GPUs.
Regardless of the approach, we can expect major performance gains to continue for several generations
Part of the argument that “Moore’s Law is Dead” can be traced back to the industry itself.
“I think the industry is always cyclical sometimes. It's always a balance of performance per Watt, right?” Intel’s Robert Hallock tells Laptop Mag. “Sometimes the industry can unlock a new process node or a new technology and performance jumps way up, and then you can't do that every year… I firmly believe there are still leaps and bounds out there, that we've by no means hit the collective end of our ropes on performance.”
However, it isn’t just a hardware versus software debate, as chipmakers have now adopted a dual approach.
“Chip manufacturing technology is improving at a fairly steady rate, enabling processors to physically shrink in size every two or three years. Every size reduction enables us to improve power efficiency and add more transistors,” Qualcomm’s Kedar Kondap tells Laptop Mag.
But Kondap also admits that AI and the introduction of the NPU in Qualcomm’s Snapdragon X series processors have “created new performance metrics and introduced AI capabilities into both the mobile and PC markets.”
Nvidia has long held that optimization and AI are the way forward for continued performance. After all, Nvidia’s new RTX 50-series GPUs use the same 4N (4-nanometer) node as the RTX 40-series GPUs because, as Nvidia’s Jesse Clayton explains, “The 4N process provides us with the best combination of performance, power, and price for our GeForce RTX 50 series GPUs.”
That said, Nvidia isn’t putting all of its eggs in one basket. Clayton makes the company’s wider goals apparent, stating, “hardware is not the only important aspect … NVIDIA’s AI software platform, which has been in development for more than a decade, enables developers to get the most out of their RTX GPUs.”
AMD’s Director of Product Marketing, Adam Kozak, may have put it most succinctly: even on older architectures and older nodes, you can still see major performance improvements because “The software can now do more tricks.”
While chipmakers like Intel and Qualcomm strive to prove that Moore’s Law still has life left in it, others focus on fine-tuning output, using software and AI to push performance past potential plateaus. And we’ll see these improvements arrive throughout 2025 and over the next few years.
Laptop Mag invites you to read about these improvements and more as we publish a series of interviews with AMD, Apple, Arm, Intel, MediaTek, Nvidia, and Qualcomm throughout the week as part of our Silicon Survey special issue.