
NVIDIA's N1x CPU Hits New Roadblock: Launch Pushed to Late 2026

With this NVIDIA could cannibalize the console market, for example.
Then the handheld.
I'd argue NV already has the handheld market locked: NS and NS2. What it doesn't have is the handheld PC market.
 
Any replacement for x86 has to be compatible, more performant, cheaper, and just as expandable and upgradable. ARM offers none of those things. RISC-V does have potential, but it has ZERO compatibility, is going to be slow under emulation, and will be expensive.
How dare you?! Apple's M4 beats any x86 CPU in single-threaded tests! Never mind that it's only in Geekbench.

/s

IMHO, ARM is good for gadgets, handhelds, smartphones, tablets and other battery-dependent stuff.
Those are the scenarios where you don't need real performance, just efficiency for mail, internet, porn and shit.
 
How does Hyper-Threading matter if Intel itself already removed it from Arrow Lake going forward?
They did not remove it from the P-core workstation and server lines, and don't plan to either. High core count E-core Xeons don't have it, just like the desktop/mobile parts.

I wouldn't put much faith in Intel's "going forward" either. AVX10 was originally not supposed to support 512-bit vectors on E-cores, but in the latest revision of the spec 512-bit support was suddenly made mandatory. That means future E-cores are going to be as capable as P-cores in that regard, including running existing AVX-512 software (since AVX10/512 is backwards compatible).
So while HT is removed for now, that doesn't mean they won't go back to it in the future.
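For reference, "running existing AVX-512 software" mostly comes down to the CPUID feature bits that software already checks at startup. A minimal C sketch of that check, assuming GCC or Clang (whose __builtin_cpu_supports reads those bits); on an AVX10/512 part the legacy AVX-512 bits should keep reporting as present if the spec stays backwards compatible as described above:

/* Sketch: the runtime dispatch check existing AVX-512 binaries rely on.
   AVX512F is the foundation feature bit all AVX-512 code paths need. */
#include <stdio.h>

int main(void) {
    if (__builtin_cpu_supports("avx512f"))
        puts("AVX-512F reported: legacy AVX-512 code paths can run");
    else
        puts("No AVX-512F: fall back to AVX2/SSE code paths");
    return 0;
}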
 
They did not remove it from the P-core workstation and server lines, and don't plan to either. High core count E-core Xeons don't have it, just like the desktop/mobile parts.
Really? But it makes sense, since removing HT was based on their idea that cramming in more E-cores (and now LPE cores as well) would suffice as a substitute.
 
Really? But it makes sense, since removing HT was based on their idea that cramming in more E-cores (and now LPE cores as well) would suffice as a substitute.
Intel's workstation and server CPUs aren't hybrid; they are all P-core or all E-core. Similar to how AMD's are exclusively full or exclusively compact cores.

SMT (which HT is a marketing name for) isn't useless, Phoronix benchmarked the "small" server CPUs from both Intel and AMD:
In total I ran 69 multi-threaded benchmarks on both the Intel Xeon 6369P and AMD EPYC 4345P Supermicro servers for this comparison. When taking the geometric mean of all those raw performance benchmarks, the 8-core Xeon 6369P saw 1.21x the performance out of having SMT enabled compared to the baseline run of it disabled. Meanwhile the 8-core EPYC 4345P processor saw 1.32x the performance out of Simultaneous Multi-Threading on this Ubuntu 25.04 Linux server setup. At the same core counts, the Zen 5 based AMD EPYC 4005 series was showing greater benefit out of SMT than the flagship Xeon 6300 series processor.

Also fascinating to see was that the AMD EPYC 4345P even with SMT disabled was still faster than the Xeon 6369P with its full load-out thanks to the EPYC Grado CPU supporting AVX-512 and other advantages over the Xeon 6300 series that in turn is largely rehashed from the Xeon E-2400 series.
The same story for "big" EPYC:
On a geo mean basis for all of the benchmarks in total, having SMT enabled was a 13% improvement than running the EPYC 9575F processor with SMT disabled. SMT typically was of measurable benefit to the 5th Gen AMD EPYC processor with the exception of some HPC workloads that perform better with SMT disabled or otherwise limited by memory bandwidth. SMT also hurt the OpenVINO inference latency but by and large Simultaneous Multi-Threading remains an important and valuable feature for AMD processors. For HPC clusters and others with workloads where SMT isn't of benefit, it can be simply disabled.
Some benchmarks scale to over 50% as well.

I can't find the source, so take it with a grain of salt, but I remember reading AMD stating that including SMT is a relatively small die area increase for a tangible benefit. What's more, Zen 5 was designed with dual 4-way decoders - one for each thread.
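On the "it can be simply disabled" point from the Phoronix quote: on Linux that really is simple, via the kernel's standard SMT control file in sysfs. A minimal C sketch that just reads the current state (writing "off" to the same file as root disables SMT at runtime):

/* Sketch: read Linux's SMT control state from sysfs.
   Typical values: on, off, forceoff, notsupported. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/sys/devices/system/cpu/smt/control", "r");
    char state[32];
    if (f && fgets(state, sizeof state, f)) {
        printf("SMT state: %s", state);
        fclose(f);
        return 0;
    }
    puts("SMT control interface not available");
    return 1;
}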
 
Intel's workstation and server CPUs aren't hybrid; they are all P-core or all E-core. Similar to how AMD's are exclusively full or exclusively compact cores.

SMT (which HT is a marketing name for) isn't useless, Phoronix benchmarked the "small" server CPUs from both Intel and AMD:

The same story for "big" EPYC:

Some benchmarks scale to over 50% as well.

I can't find the source, so take it with a grain of salt, but I remember reading AMD stating that including SMT is a relatively small die area increase for a tangible benefit. What's more, Zen 5 was designed with dual 4-way decoders - one for each thread.

AMD's SMT has been a better implementation of simultaneous multi-threading than Intel's Hyper-Threading since its inception, but it's also much newer. You'd think Intel would have taken that as a cue to improve HT instead of removing it.
 
If x86 must go, which it doesn't, RISC-V should be the successor, not ARM. Unfortunately, there isn't Windows on RISC-V, hindering that.
Who said that we have to stay on Windows? Year of the Linux desktop, surely.
 
Who said that we have to stay on Windows? Year of the Linux desktop, surely.
With Microsoft's incompetence, it is pushing many, little by little, to Linux. I'm fed up too. Still, Windows controls the desktop market at present, and as far as I'm aware, MS hasn't invested in RISC-V.
 
How dare you?! Apple's M4 beats any x86 CPU in single-threaded tests! Never mind that it's only in Geekbench.

/s

IMHO, ARM is good for gadgets, handhelds, smartphones, tablets and other battery-dependent stuff.
Those are the scenarios where you don't need real performance, just efficiency for mail, internet, porn and shit.
Performance isn't really an issue with ARM. I feel like people don't realise how fast those chips actually are, since most tech outlets don't bother reviewing them consistently. The M4 Max would be way faster than the average person needs it to be, and the average person is the big majority of the market. Heck, ARM is even found in HPC datacenters. What do you call "real performance"? And I'm asking that question in the context of a consumer product, not someone who's running AVX-512 stuff all day, every day.
 
Roadblocks haven't stopped NVIDIA before... you know... the meltings that are still going on.
 
Not a 100% on point analogy, but yeah, kind of. There just isn’t much desire or reason for companies to look outside the established x86 and ARM ecosystems. That isn’t to say RISC-V is pointless or anything, it has its uses, but the idea of it overtaking and replacing the aforementioned two is… dubious at best.
For over 20 years, when talking about Linux, I was saying that all those options people had, all those distros, were confusing in the eyes of the average (and not only average) consumer, and that the only way Linux could really see major success was if a huge corporation promoted one distro that everyone else would recommend as the perfect option for newcomers to the Linux world. Canonical did try with Ubuntu, but Canonical wasn't the huge corporation that was needed, and, as is usual in Linux, Ubuntu didn't have the absolute support of Linux users. There were users who insisted on promoting other distros. So Linux never really got off the ground as the ultimate operating system that was going to replace Windows for the average user, with some exceptions that did have that huge corporation behind them: Android with Google behind it, and SteamOS with Valve behind it.

I am thinking that RISC-V will need something similar to see adoption, but I don't see it happening in the next 10-20 years. In the coming years we will see RISC-V CPUs becoming faster, better, more capable, but I doubt the architecture will get 1% of the market, because those CPUs will be coming from small design teams. That will partially change if one of the four biggest players - Intel, AMD, Nvidia or Qualcomm - starts designing and releasing RISC-V CPUs with strong support behind the architecture. Also, Microsoft or Google will need to offer a complete and stable OS running on those CPUs.

Now, from those four: AMD, while having the size, can't promote anything. I mean they are incapable of promoting anything. They just can't. Qualcomm will probably not show strong interest, because it is the top company in the ARM market, so why abandon a leading position in an already established market? Their win against ARM in their last legal dispute doesn't give them a reason to start considering RISC-V as their near-future primary architecture. Intel would also want to stick with x86. I mean, based on the new CEO's own recent words, they are not even in the top 10 design teams. But they are still second in x86 and they make billions from that architecture. If they went RISC-V they would become irrelevant in chip design and would end up just begging others to make their RISC-V CPUs in their fabs. So, who remains? Nvidia. The problem with Nvidia is that it loves control. That's why they tried to buy ARM outright before strongly pushing ARM designs. With RISC-V they would have zero control over the architecture and the competition. So probably even Nvidia wouldn't want to push RISC-V out there any time soon. Google and Microsoft creating versions of their OSes to run on RISC-V? Well, probably. Assigning huge teams and resources to those builds? Probably not.

So, what I see is that RISC-V will have some success in the market as the next big thing waiting to happen, but corporations like AMD, Intel, Qualcomm, Nvidia, Microsoft and Google will probably only work on RISC-V as much as needed to have something partially ready in case RISC-V sees skyrocketing success and adoption for whatever reason. I don't see them taking the next step and heavily promoting the architecture. Maybe if Apple decides to make another transition, from ARM to RISC-V, the architecture could become relevant. Or if China decides to push RISC-V as the main architecture used in the country, maybe they could throw enough billions behind the architecture and a couple of official distros to promote it. Until something like that happens, RISC-V will be out there with 99.9% of consumers ignoring its existence.
But I could be wrong. It wouldn't be the first time I've been wrong, not even the 100th.....
 
That's all the effort the 4 trillion dollar company, with all the engineering prowess in the world, is putting into entering a new market? Maybe all the focus is still on AI, to shareholders' delight, so they couldn't care less about ARM laptops.
 
That's all the effort the 4 trillion dollar company, with all the engineering prowess in the world, is putting into entering a new market? Maybe all the focus is still on AI, to shareholders' delight, so they couldn't care less about ARM laptops.
Nvidia became a 4 trillion dollar company by being smart about which markets it seriously wants to go after, rather than just blindly throwing billions at one hoping it will shift its way. That is the mistake Intel made a decade ago, and it ruined Intel.
 
To the people saying that the x86_64 vs ARM ISA argument was always obsolete, I need to ask you this particular thing.

We have two CPUs running at a frequency of 2 GHz;
one is ARM and the other one is x86_64;
to make 1+1=2 at a base frequency of 2 GHz on x86_64 we need 15 instructions!
to make 1+1=2 at a base frequency of 2 GHz on ARM we need 5 instructions!

At this point!!!
Because the base frequency is the same on both - what do you think?
- Which CPU will finish this operation first?

P.S.: If you understood the answer to this question, then you can already understand the reasoning for which x86_64 must completely disappear from the face of the earth.
 
I simply do not understand this push to ARM for traditional PC users. It offers nothing over anything... We will have Zen 6 and whatever Intel pushes out by late 2026, so performance is meh.

And you simply cannot tell me that nGreedia will not sell this thing for an ARM and a leg!!! It also will not be upgradeable. Windows ARM is a joke, and compatibility issues will be a pox on the platform.

I see no reason why ARM should be in high-end consumer PCs and laptops.
It is quite simple, laptops!

Ever since Apple showed ARM could deliver high performance for work laptops, the race has been on. Before Apple went to ARM, their laptops had 2-6 hours of battery life; since going to ARM, it is 12-20 hours.

Being able to work an entire work day and not having to worry about a charge is a big deal.

Cloud platform providers like Amazon, Google, and Microsoft are also looking to bring down their electrical and cooling costs for their Data Centers. ARM CPUs consume significantly less electricity under load and thus produce significantly less waste heat.

Windows ARM is a joke because Microsoft didn't take it seriously the way Apple did. Now Microsoft is finally starting to take Windows ARM seriously.
 
Looking at Apple and Nvidia (specifically CUDA), both companies:

- Control both their software and hardware layers
- Make enough money to fund big design changes in both those layers
- Target multiple markets with the same resources, benefitting one side of their business more than another, making the relatively small potatoes side (Mac and Gaming GPUs) benefit from the improvements in the big bucks side (iPhone and Professional GPUs)

If MicroIntelDevices were one focused company then maybe they could do the same: port Windows natively over to ARM for laptop efficiency, and maybe even desktop/gaming if ARM proves flexible enough in the long term. It'd likely need both a Rosetta-type translation layer for lighter office apps and Universal Binaries so that calculation-heavy apps like scientific software and games run well. But of course it can be done.

Hell, Apple's done it 4 freakin' times now (68k - PPC - OS X - Intel - ARM/M#), contract Apple to do it!
 
To the people saying that the x86_64 vs ARM ISA argument was always obsolete, I need to ask you this particular thing.

We have two CPUs running at a frequency of 2 GHz;
one is ARM and the other one is x86_64;
to make 1+1=2 at a base frequency of 2 GHz on x86_64 we need 15 instructions!
to make 1+1=2 at a base frequency of 2 GHz on ARM we need 5 instructions!

At this point!!!
Because the base frequency is the same on both - what do you think?
- Which CPU will finish this operation first?

P.S.: If you understood the answer to this question, then you can already understand the reasoning for which x86_64 must completely disappear from the face of the earth.
You are making simplistic assumptions about how CPUs operate internally. x86 CPUs do not execute x86 instructions directly; they decode the ISA into RISC-like micro-operations. The last x86 CPU that executed x86 natively was probably a VIA design some decades ago. Shockingly, ARM CPUs also do this despite being "RISC".

As to your argument, 1+1=2 in x86 assembly is 3 instructions:
mov eax, 1
mov ebx, 1
add eax, ebx
The same number of instructions in ARM assembly:
mov r0, #1
mov r1, #1
add r2, r1, r0
If you want to make a point, at least select an example that makes sense ;)
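Anyone who wants to verify this can compile a one-liner and read what the compiler emits; a minimal C sketch (the function name is made up for illustration):

/* add.c - compile with: gcc -O2 -S add.c
   Then inspect add.s: on both x86-64 and AArch64 the compiler
   constant-folds this to a single move of 2 plus a return. */
int one_plus_one(void) {
    return 1 + 1;
}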

As for the irrelevance of the ISA wars: AMD Zen's chief architect stated in an AnandTech interview that the ISA is not the critical part, the internal microarchitecture is:
IC: Alongside Zen we learned about Project Skybridge, the ability to put an x86 SoC and an Arm SoC on the same socket. Do you know how far along the Arm version of Skybridge, we know as K12, was in development before AMD went full bore for Ryzen?

MC: Originally Zen and K12 were, I think, we call them sister projects. They had kind of the same goals, just a different ISA actually hooked up. The core proper was that way, and the L2/L3 hierarchy could be either one. Then of course, in Skybridge, the Data Fabric could be either one. There was a whole team doing the K12, and we did share a lot of things you know, to be efficient, and had a lot of good debates about architecture. Although I've worked on x86 obviously for 28 years, it's just an ISA, and you can build a low-power design or a high-performance out any ISA. I mean, ISA does matter, but it's not the main component - you can change the ISA if you need some special instructions to do stuff, but really the microarchitecture is in a lot of ways independent of the ISA. There are some interesting quirks in the different ISAs, but at the end of the day, it's really about microarchitecture. But really I focused on the Zen side of it all.
So there was an ARM project at AMD co-developed with Zen, but it wasn't commercialized.

There's a good article on Chips and Cheese about this issue as well.
 
My one complaint about this N1X is about the performance cores.
Why are they picking the Cortex-X925 instead of the X1, or at least the X4, which is the latest & greatest?
If you wish to cannibalize Intel, then pick the best - right?
 
My one complaint about this N1X is about the performance cores.
Why are they picking the Cortex-X925 instead of the X1, or at least the X4, which is the latest & greatest?
If you wish to cannibalize Intel, then pick the best - right?
Cortex-X925 is the latest and greatest of the ARM X line. X1 is from 2020, X4 from 2023 and X925 from 2024.
 
It is quite simple, laptops!

Ever since Apple showed ARM could deliver high performance for work laptops, the race has been on. Before Apple went to ARM, their laptops had 2-6 hours of battery life; since going to ARM, it is 12-20 hours.

Being able to work an entire work day and not having to worry about a charge is a big deal.

Cloud platform providers like Amazon, Google, and Microsoft are also looking to bring down their electrical and cooling costs for their Data Centers. ARM CPUs consume significantly less electricity under load and thus produce significantly less waste heat.

Windows ARM is a joke because Microsoft didn't take it seriously the way Apple did. Now Microsoft is finally starting to take Windows ARM seriously.
So there we have it... The most tangible benefit of ARM for consumers (the only one that most people notice) is battery life. That is literally all it offers, and even that's not exactly clear cut right now.

We already have ARM laptops that cost many times more than an x86_64 based laptop. The masses rejected and laughed at them because they were expensive, slow, buggy and incompatible. But they had amazing battery life - at the time!

AMD could easily launch a range of efficient and performant laptop chips consisting of only Zen 5c cores and LPDDR5 memory. Zen 6c looks to offer at least another 25% more performance through IPC and frequency gains, and another similar jump in power savings. People have to realise that Windows is the biggest issue with x86 performance and battery life. Windows is awfully designed and coded. Linux offers higher efficiency but is not as compatible, nor is it easy to use, though that situation is starting to improve through Valve's efforts. Maybe Linux will finally be a thing in 2026/7 thanks to the work AMD and Valve are doing.

People using Apple chips as an example are crazy. If those chips were sold separately to 3rd parties, an M4 Pro/Max chip (comparable to high-end x86) would cost $800 for the chip alone and still offer crap GPU performance, which cannot be upgraded.

Cherry picking Apple's move from ancient, god-awful Intel chips to Apple's own CPU, optimized for their software, is absolutely disingenuous and is not the state of play in 2025 at all! You can buy an x86 laptop right now that offers 20+ hours of battery life even with god-awful Windows and its unoptimized bloat! (Lenovo ThinkPad X9 15 Aura Edition - Intel Lunar Lake CPU)

I think there is plenty of life left in the performance x86 arena.
 