Tuesday, March 19th 2024

NVIDIA "Blackwell" GeForce RTX to Feature Same 5nm-based TSMC 4N Foundry Node as GB100 AI GPU

Following Monday's blockbuster announcements of the "Blackwell" architecture and NVIDIA's B100, B200, and GB200 AI GPUs, all eyes are now on its client graphics derivatives: the GeForce RTX GPUs that implement "Blackwell" as a graphics architecture. Leading the effort will be the new GB202 ASIC, successor to the AD102 that powers the current RTX 4090, and NVIDIA's biggest GPU with raster graphics and ray tracing capabilities. The GB202 is rumored to be followed by the GB203 in the premium segment, the GB205 a notch lower, and the GB206 further down the stack. Kopite7kimi, a reliable source for NVIDIA leaks, says that the GB202 silicon will be built on the same TSMC 4N foundry node as the GB100.

TSMC 4N is a derivative of the company's mainline N4P node; the "N" in 4N stands for NVIDIA. It is a nodelet that TSMC designed and optimized for NVIDIA SoCs, and one that TSMC still classifies as a derivative of its 5 nm EUV node. There is very little public information on the power and transistor-density improvements of TSMC 4N over TSMC N5. For reference, N4P, which TSMC also regards as a 5 nm derivative, offers a 6% transistor-density improvement and a 22% power-efficiency improvement over N5. In related news, Kopite7kimi says that with "Blackwell," NVIDIA is focusing on enlarging the L1 caches of the streaming multiprocessors (SM), which suggests a design focus on increasing performance at the SM level.
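As a rough back-of-the-envelope illustration of what those figures would mean in practice, here is a minimal sketch that assumes the N4P-class gains applied to a die the size of AD102 (an assumption only; NVIDIA has not published 4N's actual numbers, and the AD102 transistor count and die area are public figures):

```python
# Back-of-the-envelope sketch: what N4P-style gains (+6% density,
# +22% perf/W) would mean for a hypothetical GB202 the size of AD102.
# Assumption: these N4P figures apply to 4N; NVIDIA has not published
# 4N's actual numbers.

AD102_TRANSISTORS_BN = 76.3   # AD102 transistor count, billions (public)
AD102_DIE_MM2 = 608.5         # AD102 die area in mm^2 (public)

DENSITY_GAIN = 1.06           # +6% transistor density (N4P vs. N5)
PERF_PER_WATT_GAIN = 1.22     # +22% power efficiency (N4P vs. N5)

density = AD102_TRANSISTORS_BN / AD102_DIE_MM2          # ~0.125 Bn/mm^2
new_budget = AD102_DIE_MM2 * density * DENSITY_GAIN     # same die, +6%

print(f"AD102 density:          {density * 1000:.1f} M transistors/mm^2")
print(f"Same-size die at +6%:   {new_budget:.1f} Bn transistors (vs 76.3)")
print(f"Power for same perf:    {1 / PERF_PER_WATT_GAIN:.0%} of previous")
```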
Sources: Kopite7kimi (Twitter), #2, VideoCardz

60 Comments on NVIDIA "Blackwell" GeForce RTX to Feature Same 5nm-based TSMC 4N Foundry Node as GB100 AI GPU

#1
Onasi
Not necessarily a bad thing; NV has shown before that they CAN extract more performance at the architectural level without a node shrink, like with Maxwell. So we'll see.
#2
Metroid
Disappointing. I was excited to buy an RTX 5090; I'm not anymore.
Onasi: Not necessarily a bad thing; NV has shown before that they CAN extract more performance at the architectural level without a node shrink, like with Maxwell. So we'll see.
Seriously? Come on, if there's no advancement and they're still charging an arm and a leg for it, it's a failed product. If AMD plays its cards well, then NVIDIA is done for this gen. But as we know, AMD has been following NVIDIA in price and embarrassment. So AMD will also use that node, or even worse. Where is Intel to save the day? Yeah, driver issues to the teeth. Unbelievable; we don't have a single GPU manufacturer that will use this moment of NVIDIA weakness to shine.
#3
BorisDG
If it's true, my guess is that the TDP will go up. Quite disappointing, but we will see at the end of 2024/start of 2025.
#4
Onasi
@Metroid
Was Maxwell a failed product? Was Fermi 2.0? Was second-gen Kepler? Was Turing, for that matter, since its 12nm process was just an optimization of the 16nm that Pascal used? If you expect every gen to be a massive uplift like Pascal was… well, that's on you. Hell, look at Ampere. Those cards were good despite the new process, not because of it. And yeah, the blame lies with Samsung there. But NV cannot control TSMC either. From what I understand, 3nm is just straight up not ready for prime time yet; there are yield issues. And in the current market, where demand is high, the last thing NV needs is yield issues.
Seriously, people get hung up on fabrication processes too much.
#5
Denver
Onasi: Not necessarily a bad thing; NV has shown before that they CAN extract more performance at the architectural level without a node shrink, like with Maxwell. So we'll see.
Now there isn't as much scope for improving efficiency as before. The GPU announced yesterday brings less than 14% improvement at the architecture level.

Maybe they'll take advantage of new features on the software side like "graphs" to show bigger numbers, or maybe they'll just cross all the efficiency lines to deliver more performance.
#6
Metroid
Onasi: @Metroid
Was Maxwell a failed product? Was Fermi 2.0? Was second-gen Kepler? Was Turing, for that matter, since its 12nm process was just an optimization of the 16nm that Pascal used? If you expect every gen to be a massive uplift like Pascal was… well, that's on you. Hell, look at Ampere. Those cards were good despite the new process, not because of it. And yeah, the blame lies with Samsung there. But NV cannot control TSMC either. From what I understand, 3nm is just straight up not ready for prime time yet; there are yield issues. And in the current market, where demand is high, the last thing NV needs is yield issues.
Seriously, people get hung up on fabrication processes too much.
Maxwell to me was not a failed product. 28nm ran from 2012 to 2016, with three product releases: the 680, 780, and 980. That was 4 years; then we got a new 16nm with the 1080 in 2016, then 12nm in 2018, then an 8nm 3080 in 2020, and a 4nm 4090 in 2022. And in 2024, still 4nm? If this is true, then this is the first time NVIDIA has failed miserably. But if they release 2nm in 2 years, it means we are still on track. Will they do that? I don't think so; it will take them at least 4 years to release 2nm, so 2028? The gap was 4 years for a new node; now it's 6 years?
#7
Onasi
@Metroid
How the actual hell have they failed? So 4 years on the same process was all kosher. Then another 4 years on the same process, still fine (as I said, TSMC 12nm IS just an optimization of 16nm). But now suddenly a second gen in only two years using the same process is a failure? I fail to see your logic. And how has NV failed here, even? Again, they work with what TSMC has. If 3nm isn't ready, then it isn't ready. What, do you expect Raptor Jesus to come down from the heavens and make it work for them? Or should they delay the chips indefinitely until it DOES work and then pour more resources into shrinking the arch onto the new node? Do you understand how these things even work?
#8
dgianstefani
TPU Proofreader
Why bother with more if AMD isn't planning on competing in the high end? I mean, I can't say I wouldn't have preferred N4P or some 3 nm process, since I'm planning to upgrade my 3080 Ti to something from the RTX 50xx line and Samsung 8nm (a 10nm derivative) sucks, but it is what it is. I think AMD/Intel will be fighting for the mid-range/low end next generation.

NVIDIA will probably still be secure even with the RTX 4090 vs RDNA4, let alone anything RTX 50xx.

Without a node upgrade I'd expect ~25-40% improvements just from architecture/more cores, maybe with some more unique hardware accelerators like the optical flow unit in Ada, for example. Perhaps in some scenarios a ~50% performance improvement (ray tracing?) like the leaks suggest.

Apple will, as usual, be buying all the leading-edge 3nm capacity, so why fix something that isn't broken?
#9
Metroid
Onasi: @Metroid
How the actual hell have they failed? So 4 years on the same process was all kosher. Then another 4 years on the same process, still fine (as I said, TSMC 12nm IS just an optimization of 16nm). But now suddenly a second gen in only two years using the same process is a failure? I fail to see your logic. And how has NV failed here, even? Again, they work with what TSMC has. If 3nm isn't ready, then it isn't ready. What, do you expect Raptor Jesus to come down from the heavens and make it work for them? Or should they delay the chips indefinitely until it DOES work and then pour more resources into shrinking the arch onto the new node? Do you understand how these things even work?
Generational progress was a new node every 4 years; NVIDIA wants to make it 6 years now. They are doing the same thing Intel did. Intel was doing well up to the first 14nm product, then things got messed up and we got 14nm+++++++++++++++++ from Intel, and AMD surpassed them. Will AMD use this to surpass NVIDIA this time, this gen? That is the question, and we will see by the end of this year.
#10
3DVCash
Metroid: Disappointing. I was excited to buy an RTX 5090; I'm not anymore.

Seriously? Come on, if there's no advancement and they're still charging an arm and a leg for it, it's a failed product. If AMD plays its cards well, then NVIDIA is done for this gen. But as we know, AMD has been following NVIDIA in price and embarrassment. So AMD will also use that node, or even worse. Where is Intel to save the day? Yeah, driver issues to the teeth. Unbelievable; we don't have a single GPU manufacturer that will use this moment of NVIDIA weakness to shine.
I mean, maaaybe with node maturity and better yields, prices might actually come down?

Copium, I know... but there's a chance!
#11
dgianstefani
TPU Proofreader
3DVCash: I mean, maaaybe with node maturity and better yields, prices might actually come down?

Copium, I know... but there's a chance!
Prices went down with the Supers, so there's hope that price/performance will actually improve rather than prices just scaling with performance. I guess we'll see.

A $1000 5080 would be nice, and possible, unlike the $600 5080 some people seem to expect/want.
#12
Metroid
3DVCash: I mean, maaaybe with node maturity and better yields, prices might actually come down?

Copium, I know... but there's a chance!
If a miracle happens and NVIDIA comes out and says a 5090 will be 1000 dollars, then that will be a success in my book. If they cannot deliver generational progress, then at least cut the price in half. AMD usually does that; will NVIDIA do the same? I don't think so.
#13
Onasi
Metroid: Generational progress was a new node every 4 years; NVIDIA wants to make it 6 years now. They are doing the same thing Intel did. Intel was doing well up to the first 14nm product, then things got messed up and we got 14nm+++++++++++++++++ from Intel, and AMD surpassed them. Will AMD use this to surpass NVIDIA this time, this gen? That is the question, and we will see by the end of this year.
I swear I feel like I am taking crazy pills. Once more, please understand: what NVIDIA "wants" is completely irrelevant in this discussion. The process used to fabricate their chips is OUT OF THEIR HANDS. They can use what TSMC has, and even then they don't get first dibs; Apple does, as the VVIP client. Comparing them to Intel, who run their own fabs, is nonsensical.
#14
Metroid
Onasi: I swear I feel like I am taking crazy pills. Once more, please understand: what NVIDIA "wants" is completely irrelevant in this discussion. The process used to fabricate their chips is OUT OF THEIR HANDS. They can use what TSMC has, and even then they don't get first dibs; Apple does, as the VVIP client. Comparing them to Intel, who run their own fabs, is nonsensical.
If AMD secured a 3nm node, then I don't see NVIDIA as viable for this gen. I really hope AMD will use the moment to shine. I'm sure node price is a concern for NVIDIA; they want to pay less and less and make more profit. Maybe they planned this all along. They could have done it on 3nm or even 2nm, but they decided not to because of cost. But like I said before, if NVIDIA comes in with a good price for their 5090, then it will be all right.
#15
Onasi
Metroid: If AMD secured a 3nm node
For their GPUs? Highly unlikely.
Metroid: I'm sure node price is a concern for NVIDIA; they want to pay less and less and make more profit
Price? No. Physical limitations of production? Yes.
Metroid: Maybe they planned this all along. They could have done it on 3nm or even 2nm, but they decided not to because of cost.
What the actual f**k am I reading.
#16
Lycanwolfen
I bet ya a dollar the new card will be a cool $4,999.00: 5 grand US, or 10 grand Canadian. Might as well, if you're a gamer with no life and no wife, and spend money on a video card instead of a car.
#17
Slizzo
Guys, node wars don't really matter anymore. It's about the performance they can extract out of the nodes they're using.

On that point, I have no doubt NVIDIA will be able to extract enough performance from the node.
#18
Legacy-ZA
What I read was:

"Here gamers, take the old A.I chips we couldn't sell, lol, now give me $5000 / GPU." :roll:
#19
Bwaze
We will see if the new "Jensen's Law" now looks like this:

2020, RTX 3080 - $700
2022, RTX 4080 - $1200
2024, RTX 5080 - $2060
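For what it's worth, that $2,060 is just the 3080-to-4080 price ratio applied one more time; a minimal sketch of the extrapolation, pure speculation using only the prices listed above:

```python
# Quick sketch: extrapolating the xx80 launch price if the
# 3080 -> 4080 ratio were to repeat (speculation, not a leak).

prices = {2020: 700, 2022: 1200}      # RTX 3080, RTX 4080 launch MSRPs (USD)
ratio = prices[2022] / prices[2020]   # ~1.714x per generation

rtx_5080_guess = prices[2022] * ratio # 1200 * 1.714 ~= $2,057
print(f"Hypothetical 2024 xx80 price: ${rtx_5080_guess:,.0f}")
```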
#20
AnotherReader
Slizzo: Guys, node wars don't really matter anymore. It's about the performance they can extract out of the nodes they're using.

On that point, I have no doubt NVIDIA will be able to extract enough performance from the node.
That isn't really the case either. If nodes didn't matter, then Ada wouldn't have been as massive an upgrade over Ampere as it has been. Architecturally, they are almost identical, so all of the performance gain in rasterization is due to the new node. If the 5000 series is on N4, I would expect it to be like Turing: massive and expensive dies.
#21
Fatalfury
So RTX 5060 = RTX 4060 Ti with higher power consumption and a higher price.
#22
redeye
Lycanwolfen: I bet ya a dollar the new card will be a cool $4,999.00: 5 grand US, or 10 grand Canadian. Might as well, if you're a gamer with no life and no wife, and spend money on a video card instead of a car.
Well, that's severe. The 4090 was not that expensive… it's just that in Canada a 4090 was (pretty much almost…) twice the price of a 7900XTX, and compared to the 7900XTX it only gave you 10 fps more. One could say, 60 or 70 fps, what does it matter… but "fastest card ever" is something, and NVIDIA has destroyed that feeling, because the 5090 will be only slightly faster and will force 4090 owners to think "ya, so?" Not fast enough to be worth replacing…
#23
DemonicRyzen666
dgianstefani: Why bother with more if AMD isn't planning on competing in the high end? I mean, I can't say I wouldn't have preferred N4P or some 3 nm process, since I'm planning to upgrade my 3080 Ti to something from the RTX 50xx line and Samsung 8nm (a 10nm derivative) sucks, but it is what it is. I think AMD/Intel will be fighting for the mid-range/low end next generation.

NVIDIA will probably still be secure even with the RTX 4090 vs RDNA4, let alone anything RTX 50xx.

Without a node upgrade I'd expect ~25-40% improvements just from architecture/more cores, maybe with some more unique hardware accelerators like the optical flow unit in Ada, for example. Perhaps in some scenarios a ~50% performance improvement (ray tracing?) like the leaks suggest.

Apple will, as usual, be buying all the leading-edge 3nm capacity, so why fix something that isn't broken?
NVIDIA can't come close to a 40% increase on the same node, and has never achieved this.
Second, it's not happening. Not if they're planning to increase the front end's L1 cache.
The increase in L1 cache shows that their shaders or other parts are mostly just sitting idle and still not being used.
The best NVIDIA can do right now is a 20% increase in performance compared to the 4090 at the same power.
The ray tracing side hasn't increased more than 6% per clock from generation to generation. The only time ray tracing has had a major increase on NVIDIA is when rasterization was massively increased above it.
#24
Chomiq
Sounds like this next generation will be the equivalent of a gap year for GPU makers.
#25
dgianstefani
TPU Proofreader
DemonicRyzen666: NVIDIA can't come close to a 40% increase on the same node, and has never achieved this.
Second, it's not happening. Not if they're planning to increase the front end's L1 cache.
The increase in L1 cache shows that their shaders or other parts are mostly just sitting idle and still not being used.
The best NVIDIA can do right now is a 20% increase in performance compared to the 4090 at the same power.
The ray tracing side hasn't increased more than 6% per clock from generation to generation. The only time ray tracing has had a major increase on NVIDIA is when rasterization was massively increased above it.
I guess we'll see, won't we.