
New Arm CPUs from NVIDIA Coming in 2025

It is obvious that Nvidia wants a piece of the handheld market. If this is to succeed, they seriously need to work on their software. Intel had the money to almost catch them in RT, and AMD has a commanding lead in this space, cemented by the Claw. Under the usual driver narrative you cannot recommend the Claw for gaming even if it is only 10-12% behind, because the same narrative plays out in the 4090 vs 7900 XTX argument even though one card costs three times as much as the other, while the Claw is the same price as the Ally. They will probably use TSMC's 3 nm too, but having TSMC hardware does not mean you can skip making sure the code is up to snuff to pull the performance out. Just yesterday I had to turn Hyper-RX on for Cities: Skylines II once my population went above 700,000, because the CPU was pegged at 85%+ usage the whole time and its temperature went past 70 °C. PC games are really starting to take advantage of all of those cores, and that is not even a console-based game.
 
Is this what influenced Intel and AMD to further collaborate on x86?
 
I had a gut feeling this was going to happen.
When Apple succeeded in proving that ARM chips are capable of competing with x86 chips, it pretty much opened the door for more chip designers to do the same for Windows devices. Enter Qualcomm this year, and it again proves to be competitive, especially in the laptop/mobile segment, which in itself is far more lucrative than desktops due to higher margins and volumes. So with Nvidia entering this space, this is Intel and AMD's worst nightmare happening. The Qualcomm Snapdragon X Elite is powerful, but hampered by mediocre graphics and compatibility issues. By the time Nvidia joins the party, I would expect some of these early-adoption compatibility issues to go away. And they are no newbie in the GPU space. So if AMD is still slowly spinning RDNA 3.5, 3.6, 3.7 and so on, they are going to be in trouble in the next two years or so.
 
I don't understand why Nvidia doesn't start its journey in the CPU market, from the beginning, with RISC-V CPUs.

Starting right away with RISC-V, software developers would create a vast amount of apps that run natively on their RISC-V CPUs.

If Nvidia launches ARM CPUs now and then decides in about five years to launch RISC-V CPUs, the same old mess we already know will happen, in hardware or software (or both), to get old ARM apps running on their future RISC-V CPUs.

The guy in this video, who knows a lot about CPU development, said that the RISC-V architecture is the best:
(watch from 28:50)
 
It is obvious that Nvidia wants a piece of the Handheld market.

Nvidia owns the handheld market - it’s called the Nintendo Switch.
 
For God's sake, the last thing we need is Nvidia entering the CPU business and actively trying to increase market prices.

I used to have an NV motherboard with an AMD processor in it: an Athlon 3200+.
Good old times...
They were good indeed.
 
For God's sake, the last thing we need is Nvidia entering the CPU business and actively trying to increase market prices.
Competition INCREASES prices? LMAO OK.
I don't understand why Nvidia doesn't start its journey in the CPU market, from the beginning, with RISC-V CPUs.

Starting right away with RISC-V, software developers would create a vast amount of apps that run natively on their RISC-V CPUs.

If Nvidia launches ARM CPUs now and then decides in about five years to launch RISC-V CPUs, the same old mess we already know will happen, in hardware or software (or both), to get old ARM apps running on their future RISC-V CPUs.

The guy in this video, who knows a lot about CPU development, said that the RISC-V architecture is the best:
(watch from 28:50)
Because RISC-V isn't a competitive ISA. It still needs a lot of work, and as long as ARM already exists, it doesn't make financial sense to invest tens of billions to make RISC-V work yet.
When Apple succeeded in proving that ARM chips are capable of competing with x86 chips, it pretty much opened the door for more chip designers to do the same for Windows devices. Enter Qualcomm this year, and it again proves to be competitive, especially in the laptop/mobile segment, which in itself is far more lucrative than desktops due to higher margins and volumes. So with Nvidia entering this space, this is Intel and AMD's worst nightmare happening. The Qualcomm Snapdragon X Elite is powerful, but hampered by mediocre graphics and compatibility issues. By the time Nvidia joins the party, I would expect some of these early-adoption compatibility issues to go away. And they are no newbie in the GPU space. So if AMD is still slowly spinning RDNA 3.5, 3.6, 3.7 and so on, they are going to be in trouble in the next two years or so.
Those "compatibility issues" have existed for a decade by this point. If they have not been ironed out now, spoiler alert, they wont be ironed out next year either.

Apple proved that, in a vertically controlled stack, ARM can perform really well. Which isn't a surprise; they got PowerPC to perform well too, and strangely that never caught on in the Windows world either. We also have to consider size: the M-series chips are MASSIVE. Even the base M4 is 28 billion transistors, over double a 7950X (13 billion). So, yeah, it had better be faster.

What Qualcomm has proved is that with similarly sized cores it's rather difficult to get x86 performance out of ARM. They're closer, but still not there, a tale that is 11 years old now.
 
I don't understand why Nvidia doesn't start its journey in the CPU market, from the beginning, with RISC-V CPUs.
Because RISC-V is politically compromised since so many Chinese vendors have contributed to its specification; no Western company that's serious about selling lots of semiconductors is going to touch it with a barge pole because no Western government is going to allow it in anything that government buys.

Apple proved that, in a vertically controlled stack, ARM can perform really well. Which isn't a surprise; they got PowerPC to perform well too, and strangely that never caught on in the Windows world either. We also have to consider size: the M-series chips are MASSIVE. Even the base M4 is 28 billion transistors, over double a 7950X (13 billion). So, yeah, it had better be faster.
Apple CPUs also have at least double the number of memory channels that consumer x86 parts do. That's one of the reasons their showing is so good in synthetic benchmarks, and also why synthetic benchmarks are garbage.
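To put rough numbers on the memory-channel point: peak theoretical bandwidth is just bus width times transfer rate. A minimal sketch in Python, assuming spec-sheet figures for a dual-channel DDR5-6000 desktop (128-bit) versus an M4 Max-class 512-bit LPDDR5X-8533 interface; these widths and speeds are public spec-sheet assumptions, not measurements:

```python
def peak_bandwidth_gbs(bus_width_bits: int, megatransfers: int) -> float:
    """Peak theoretical memory bandwidth in GB/s.

    bandwidth = bus width in bytes * transfers per second
    (megatransfers/1000 converts MT/s to GT/s, i.e. GB/s per byte of width)
    """
    return (bus_width_bits / 8) * megatransfers / 1000

# Typical desktop: two 64-bit DDR5-6000 channels -> 128-bit bus
desktop = peak_bandwidth_gbs(128, 6000)   # ~96 GB/s

# Assumed M4 Max-class config: 512-bit LPDDR5X-8533
apple = peak_bandwidth_gbs(512, 8533)     # ~546 GB/s

print(f"desktop DDR5: {desktop:.0f} GB/s, wide LPDDR5X: {apple:.0f} GB/s")
```

At roughly 96 GB/s versus roughly 546 GB/s, any benchmark that is bandwidth-bound will naturally flatter the wider interface, regardless of core quality.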
 
Apple CPUs also have at least double the number of memory channels that consumer x86 parts do. That's one of the reasons their showing is so good in synthetic benchmarks, and also why synthetic benchmarks are garbage.
They also have subprocessors, just like PowerPC G-series chips did, that allow for incredible performance when optimized for, but they are very situational and don't pan out across a variety of software.
 
It's not like they ever left CPUs.

I wish the Shield Tablet X1 had been released. I still occasionally plug the Shield Tablet K1 into the TV and play some games or use it as a media player.
 
RIP x86, it's been a good run.

 
@Initialised
Lolno, I have been hearing about the supposedly inevitable demise of x86 for as long as I have been a techie (so quite a while), about how it's bloated and inefficient, yada yada. To absolutely no surprise, it's still here and isn't going anywhere. By the time I am absolutely cooked and shuffle off this mortal coil, I am willing to bet quite a substantial sum of money that x86 will still be here and will still be one of the dominant ISAs.
 
They also have subprocessors, just like PowerPC G-series chips did, that allow for incredible performance when optimized for, but they are very situational and don't pan out across a variety of software.

FYI, PowerPC never had "subprocessors." The only thing it had going was AltiVec, an instruction-set extension that at the time bested SSE. It was more like SSE3-level in capability, but it wasn't a literal "subprocessor."
 
That's not the CPU doing all that.


You guys know they already tried this before, like a decade ago, right?
Not until now, for me, but it's a completely different context. Nowadays Windows on Arm is being pushed for more widespread adoption, mainly by improving software compatibility, and Nvidia is now targeting high-end consumer desktops/laptops.
 
I want Nvidia to come in and put the sole of its shoe on Intel's and AMD's chests, forcing those two companies to launch new CPUs with higher IPC and lower power consumption.
 
That's not the CPU doing all that.
It absolutely is. This is like saying SSE or similar is "not the CPU."

I want Nvidia to come in and put the sole of its shoe on Intel's and AMD's chests, forcing those two companies to launch new CPUs with higher IPC and lower power consumption.
It'd need to be a lot higher to overcome the emulation penalty on legacy x86 games, which is like... everything.
 
It absolutely is. This is like saying SSE or similar is "not the CPU."
What do you mean? All those benchmarks you see with crazy-fast rendering times are using hardware encoding, not the CPU. High-end x86 CPUs still absolutely destroy Apple chips in software video encoding.
 