
NVIDIA & MediaTek Reportedly Readying "N1" Arm-based SoC for Introduction at Computex

T0@st

News Editor
In late April, MediaTek confirmed that its CEO, Dr. Rick Tsai, will deliver a major keynote speech on May 20 at this month's Computex 2025 trade show. The company's preamble focuses on its "driving of AI innovation—from edge to cloud," but industry moles propose a surprise new product introduction during proceedings. MediaTek and NVIDIA have collaborated on a number of projects, the most visible being automotive solutions. Late last year, intriguing Arm-based rumors emerged online, with Team Green allegedly working on a first-time attempt at breaking into the high-end consumer CPU market segment, perhaps by leveraging its "Blackwell" GPU architecture. MediaTek was reportedly placed in the equation thanks to expertise accumulated from devising modern Dimensity "big core" mobile processor designs. At the start of 2025, data miners presented evidence of Lenovo seeking new engineering talent; the job description mentioned a mysterious NVIDIA "N1x" SoC.

Further conjecture painted a fanciful picture of forthcoming "high-end N1x and mid-tier N1 (non-X)" models—with potential flagship devices launching later on this year. According to ComputerBase.de, an unannounced "GB10" PC chip could be the result of NVIDIA and MediaTek's rumored "AI PC" joint venture. Yesterday's news article divulged: "currently (this) product (can be) found in NVIDIA DGX Spark (platforms), and similarly equipped partner solutions. The systems, available starting at $3000, are aimed at AI developers who can test LLMs locally before moving them to the data center. The chip combines a 'Blackwell' GPU with a 'Grace' Arm CPU (in order) to create an SoC with 128 GB LPDDR5X, and a 1 TB or 4 TB SSD. The 'GB10' offers a GPU with one petaflop of FP4 performance (with sparsity)." ComputerBase reckons that the integrated graphics solution makes use of familiar properties—namely "5th-generation Tensor Cores and 4th-generation RT Cores"—from GeForce RTX 50-series graphics cards. When discussing the design's "Grace CPU" setup, the publication's report outlined a total provision of: "20 Arm cores, including 10 Cortex-X925 and 10 Cortex-A725. The whole thing sits on a board measuring around 150 × 150 mm—for comparison: the classic NUC board format is 104 × 101 mm."




ComputerBase predicts a cut-down translation of "GB10," tailored for eventual deployment in premium laptops/notebooks instead of small-footprint AI supercomputing applications. Their insider-sourced news piece explained as follows: "a modification of this solution is also conceivable for PCs aimed at end users. Instead of 20 CPU cores, perhaps only eight to twelve, and the RAM likely to be a quarter of that or even less, i.e. 32 or 16 GB—depending on which market segment is ultimately targeted. The same applies to the GPU unit and its possible expansion levels. Instead of the $3000 entry-level price (DGX Spark) in the professional world, this should also be significantly cheaper." Citing Asian media reports, ComputerBase delved into whispers of production activities: "MediaTek has already booked additional capacity with ASE. ASE provides OSAT (outsourced semiconductor assembly and test) capacity. Moreover, a mainstream PC chip doesn't require extravagant packaging; it's a classic chip on a substrate in an FCBGA package—there's more than enough capacity for that, even from many suppliers. MediaTek is reported to have awarded ASE contracts covering about a year's worth of capacity within just a few weeks. Things seem to be getting serious."

View at TechPowerUp Main Site | Source
 
Now, Nvidia and MediaTek, make a gaming APU/SoC variant of that with the GPU performance of an RTX 4060 at 30 watts at full power, and you'll have my money quickly.
 
The latest SemiAccurate article says this chip is seriously delayed thanks to Nvidia f-ing up every partnership possible, so probable release is Q4 2025 or Q1 2026, while they were aiming for Q3 2025.
 
Now, Nvidia and MediaTek, make a gaming APU/SoC variant of that with the GPU performance of an RTX 4060 at 30 watts at full power, and you'll have my money quickly.
The Ryzen AI Max+ 395 already does this in a 15-20 W envelope; when plugged in, yes, it can use far more power. It can also be had for far less than $3000, though there are models that cost more. Besides, ARM is ARM; compatibility is not going to be as universal as x86. Remember how the Qualcomm laptop launch went.
 
This is just more evidence that x86 is being replaced.

How's Windows on Snapdragon these days? I've not looked at it in almost the year since it's been available, but presumably performance, emulation, and compatibility are still improving...?
 
This is just more evidence that x86 is being replaced.
This is a bad thing. CISC is the best for a reason. RISC has its uses, and they are good, but not as a general computing platform.

How's Windows on Snapdragon these days? I've not looked at it in almost the year since it's been available, but presumably performance, emulation, and compatibility are still improving...?
Exactly. Linux as well. Though Linux is way further along than Windows in this arena.
 
This is just more evidence that x86 is being replaced.
Nah. Companies without an x86 license are obviously not going to produce x86 CPUs. nVidia using ARM is par for the course if anything.
RISC has its uses, and they are good, but not as a general computing platform
I think I'm going to push back on that one a little bit, at least when it comes to server applications. Phoronix last year did some testing against an AMD EPYC-powered r7a.16xlarge instance, an Intel Xeon 8488C-powered r8i.16xlarge instance, and a Graviton4-powered r8g.16xlarge instance in AWS. All of them have 64 vCPUs. The Graviton4 instance performed better than the Intel instance when it came to the geometric mean of all the test results, but worse than the AMD instance. I'd call that competitive.
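For reference, the "geometric mean" there is just the n-th root of the product of the individual results (equivalently, the exponential of the average of their logs), which keeps one outlier test from dominating the summary. A minimal sketch in C, using made-up scores rather than the actual Phoronix numbers:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical relative scores (higher = better), purely illustrative;
     * these are NOT the actual Phoronix results. */
    const double scores[] = {1.08, 0.97, 1.22, 1.01, 0.91};
    const size_t n = sizeof scores / sizeof scores[0];

    /* Geometric mean = exp(mean of logs); avoids overflowing the raw product. */
    double log_sum = 0.0;
    for (size_t i = 0; i < n; i++)
        log_sum += log(scores[i]);

    printf("geometric mean: %.3f\n", exp(log_sum / n));
    return 0;
}
```

Compile with `gcc geomean.c -lm`.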

 
x86 is being replaced slowly, and only at the fringes, I think: specific server/datacenter use cases, and lightweight consumer computing that already runs fine on a phone or tablet.

At the moment, most PC software developers are making x86-64 native code, and possibly then porting it to ARM or at least compiling for ARM afterwards. I don't know how long that will be the default software-dev behaviour though, especially with Snapdragon laptops gaining marketshare alongside iPads in a segment that used to be x86-exclusive.

The last bastion of x86-64 might be high-end PC gaming, and by extension also console gaming. Am I right in thinking that the Nintendo Switch is the only significant gaming system that doesn't use x86?
 
can it run Linux ARM?
 
This is a bad thing. CISC is the best for a reason. RISC has its uses, and they are good, but not as a general computing platform.
That's a somewhat outdated opinion that doesn't take any relevant implementation details into consideration.
There's no "reason" why CISC is best at anything nowadays, other than if you're dealing with really space-constrained MCUs.

I think I'm going to push back on that one a little bit, at least when it comes to server applications. Phoronix last year did some testing against an AMD EPYC-powered r7a.16xlarge instance, an Intel Xeon 8488C-powered r8i.16xlarge instance, and a Graviton4-powered r8g.16xlarge instance in AWS. All of them have 64 vCPUs. The Graviton4 instance performed better than the Intel instance when it came to the geometric mean of all the test results, but worse than the AMD instance. I'd call that competitive.

https://www.phoronix.com/review/aws-graviton4-benchmarks/7
The RISC vs. CISC idea is pretty outdated and irrelevant, and so is the idea of ISAs being that relevant for performance. Your underlying µarch implementation is way more relevant than the ISA itself, and the example you posted showcases that, with the x86 offerings being both better and worse than the ARM one.

can it run Linux ARM?
Given how Nvidia's Grace offerings already run on Linux (and solely on Linux), which includes their Spark and GB300 workstations, I believe this one should have good Linux support as well, way better than Windows at least.
 
That's a somewhat outdated opinion that doesn't take any relevant implementation details into consideration.
There's no "reason" why CISC is best at anything nowadays, other than if you're dealing with really space-constrained MCUs.
It's an opinion based on decades of supporting information. RISC (ARM) is solid for a lot of things; it's even a good server CPU for specific workloads. CISC doesn't need software workarounds to do more complex calc tasks. CISC will persist regardless of how well RISC does.
 
CISC doesn't need software workarounds to do more complex calc tasks.
That's an irrelevant point; I guess this mostly comes from a lack of understanding of how ISAs/front-ends/CPUs work nowadays, given that the whole CISC/RISC distinction has pretty much fallen out of relevance anyway.
CISC will persist regardless of how well RISC does.
Yeah, sure, because those nomenclatures are really irrelevant. If by CISC you just mean x86, then sure, x86 will persist for quite some time, because stuff uses it and we have good x86 CPUs.
It seems like you just mean x86 when you say CISC, but are trying to make it sound fancier for some reason, given that there isn't any other CISC "contender" for that space.
 
That's an irrelevant point
Not until you see the slowdown caused by RISC running tasks in software that CISC runs in hardware. Not an irrelevant thing.
Yeah, sure, because those nomenclatures are really irrelevant.
Mate, your needle is stuck in the groove..
It seems like you just mean x86 when you say CISC
X86/X64 IS CISC. Just throwing it out there..
 
That's an irrelevant point; I guess this mostly comes from a lack of understanding of how ISAs/front-ends/CPUs work nowadays, given that the whole CISC/RISC distinction has pretty much fallen out of relevance anyway.
Not until you see the slowdown caused by RISC running tasks in software that CISC runs in hardware. Not an irrelevant thing.
I think you guys are talking past each other. I believe it's a true statement to say that the line between CISC and RISC has been blurred with modern-day CPUs. It's not like ARM doesn't have extensions intended to accelerate certain workloads, just like x86. So from that standpoint, RISC CPUs aren't as "reduced" as they used to be. To me, the only real difference between a RISC and a CISC CPU is how interacting with memory is handled after code has been compiled for the platform: RISC CPUs explicitly call out LOAD and STORE operations, with other operations acting solely on registers, while CISC allows an operation to access memory directly, with relatively complicated addressing modes. That's it, and I don't see that as a performance advantage for CISC.
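To make that load/store point concrete, here's a tiny C function with, in the comments, a typical hand-simplified rendering of what each ISA does with it (the function name and exact register choices are illustrative, not the output of any particular compiler):

```c
/* Add a value loaded from memory to an argument. */
int add_from_mem(const int *p, int x)
{
    /* x86-64 can fold the load into the arithmetic instruction:
     *     mov eax, esi
     *     add eax, dword ptr [rdi]
     *     ret
     *
     * AArch64, being a load/store architecture, splits it in two:
     *     ldr w2, [x0]
     *     add w0, w1, w2
     *     ret
     *
     * Either way, a modern out-of-order core schedules a load micro-op
     * plus an add micro-op internally.
     */
    return x + *p;
}
```

Both sequences do the same work; the x86 form just packs the load and the add into one architectural instruction.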

So, the real question. Which is better?
CISC will persist regardless of how well RISC does.
Any CPU architecture will persist so long as it's used at scale. The only real advantage x86 has is that it's been around longer than just about everything else so a very large ecosystem has been built around it. However if somebody came to me and said switching from x86 to ARM is going to save me money on my AWS bill at work with those fancy new Graviton4 EC2 instances, I'd switch tomorrow because the JVM runs everywhere and everybody likes to save money.
 
Not until you see the slowdown caused by RISC running tasks in software that CISC runs in hardware. Not an irrelevant thing.
I guess you just never had any actual experience running anything on those in the past 10 years or so.
X86/X64 IS CISC. Just throwing it out there..
No shit, Sherlock :laugh:


To me, the only real difference between a RISC and a CISC CPU is how interacting with memory is handled after code has been compiled for the platform: RISC CPUs explicitly call out LOAD and STORE operations, with other operations acting solely on registers, while CISC allows an operation to access memory directly, with relatively complicated addressing modes. That's it, and I don't see that as a performance advantage for CISC.

So, the real question. Which is better?
I guess that is mostly a compiler/asm difference. It does have some impact on the instruction cache and makes the decoder work differently, but on the other hand, past the decoder you are likely to just µop-fuse those loads/stores and shove them into the µop cache, not much differently from what would happen on x86.
So yeah, I don't think this brings any perf benefit, nor any downside per se, just different ways to do things where you hand off part of the work to some other place.

I think this only matters for compiler folks or CPU front-end designers; those are likely the ones who'd have a say in what they prefer to work with, haha.
Any CPU architecture will persist so long as it's used at scale. The only real advantage x86 has is that it's been around longer than just about everything else so a very large ecosystem has been built around it. However if somebody came to me and said switching from x86 to ARM is going to save me money on my AWS bill at work with those fancy new Graviton4 EC2 instances, I'd switch tomorrow because the JVM runs everywhere and everybody likes to save money.
As I said before, there's nothing meaningful to performance that's inherent to an ISA, so it's mostly about comparing actual different CPUs and µarches.
The place I currently work at is mostly a Python shop, and most of our stuff was easy to do multi-platform builds for and deploy on both Graviton and x86 instances within our node pools.
Even adding support for G5g coming from g4dn instances was okay-ish once we figured out a proper way to get PyTorch ARM wheels with CUDA included, although that was done more for availability reasons than cost savings.
 
I guess that is mostly a compiler/asm difference. It does have some impact on the instruction cache and makes the decoder work differently, but on the other hand, past the decoder you are likely to just µop-fuse those loads/stores and shove them into the µop cache, not much differently from what would happen on x86.
Nah, my man. In x86 assembly, many instructions allow direct memory accesses as operands, and that is handled in hardware. This difference is a defining feature of CISC vs. RISC, in my opinion.
 
Nah, my man. In x86 assembly, many instructions allow direct memory accesses as operands, and that is handled in hardware. This difference is a defining feature of CISC vs. RISC, in my opinion.
Yeah, I'm aware. My point was that those ops with direct memory addressing end up being broken down into multiple µops past the front-end, which leaves you with a similar number of µops to the equivalent ARM sequence. An x86 `ADD r1, [mem]` ends up taking a similar number of µops to an `LDR r2, [mem]; ADD r1, r1, r2` pair on ARM, and in both cases the generated µops get soaked into the µop cache in exactly the same manner.

So, as I said, the ARM version takes a little extra i$ space, but past the decoder there's not much difference.
 
I believe it's a true statement to say that the line between CISC and RISC has been blurred with modern-day CPUs.
True statement. This is why RISC isn't as compelling for a desktop/workstation/server as CISC is. True RISC is a mobile-centric platform; that's what it excels at. But for full-function compute, CISC is the best answer.
So from that standpoint, RISC CPUs aren't as "reduced" as they used to be.
Very true!
So, the real question. Which is better?
The real answer is: It depends. One could also correctly say: Both and neither.

RISC is here to stay and that's a good thing because it's good for a lot of things.

CISC is here to stay and that's ALSO a good thing because it's good for a lot of things.

Neither should be completely dominant, such an idea is both senseless and would be harmful to computing as a whole.

No shit, Sherlock :laugh:
Well then, keep digging Watson.
 