Wednesday, March 20th 2024

NVIDIA to Implement GDDR7 Memory on Top-3 "Blackwell" GPUs

NVIDIA is confirmed to implement the GDDR7 memory standard on the top three GPU ASICs powering the next-generation "Blackwell" GeForce RTX 50-series, TweakTown reports, citing XpeaGPU. By this, we mean the top three physical silicon types from which NVIDIA will carve out the majority of its SKUs. These would include the GB202, the GB203, and the GB205, which will power successors to everything from the current RTX 4070 to the RTX 4090. NVIDIA is expected to build these chips on the TSMC 4N foundry node.

Certain GPU ASIC types in the "Blackwell" generation will stick to older memory standards, such as GDDR6 or even GDDR6X. These would be the successors to the current AD106 and AD107 ASICs, which power SKUs such as the RTX 4060 Ti and below. NVIDIA co-developed the GDDR6X standard with Micron Technology, which is the chip's exclusive supplier to NVIDIA. GDDR6X scales up to 23 Gbps speeds and 16 Gbit densities, which means NVIDIA can extract plenty of performance for the lower end of its product stack using GDDR6X, especially considering that its GDDR7 implementation will reportedly run at only 28 Gbps, despite chips rated for 32 Gbps, or even 36 Gbps, being available in the market. Even if NVIDIA chooses the regular GDDR6 standard for its entry-mainstream chips, that technology scales up to 20 Gbps.
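To put those per-pin data rates in perspective, peak memory bandwidth is simply the data rate multiplied by the bus width. A minimal sketch in Python; the bus widths below are illustrative assumptions for comparison, not confirmed "Blackwell" specifications:

```python
# Peak theoretical bandwidth in GB/s:
#   (data rate per pin in Gbps) * (bus width in bits) / 8 bits-per-byte
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth of a GDDR memory subsystem in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# Hypothetical configurations to illustrate how data rate and bus width trade off.
configs = [
    ("GDDR6  20 Gbps, 128-bit", 20, 128),
    ("GDDR6X 23 Gbps, 256-bit", 23, 256),
    ("GDDR7  28 Gbps, 256-bit", 28, 256),
    ("GDDR7  32 Gbps, 384-bit", 32, 384),
]

for name, rate, width in configs:
    print(f"{name}: {peak_bandwidth_gbs(rate, width):.0f} GB/s")
```

The same math shows why a 28 Gbps implementation still leaves headroom on the table: on an identical bus, 32 Gbps chips would deliver roughly 14% more bandwidth.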
Source: TweakTown

21 Comments on NVIDIA to Implement GDDR7 Memory on Top-3 "Blackwell" GPUs

#1
wolf
Performance Enthusiast
Hopefully it will curb power consumption; the GDDR6X on my 3080 is power hungry. Not sure if that was improved with Ada.
Posted on Reply
#2
nguyen
wolf: Hopefully it will curb power consumption, the GDDR6X on my 3080 is power hungry, not sure if that was improved with Ada.
GDDR6X on Ada is on a better node, with much better power consumption and thermals.
Posted on Reply
#3
konga
1) GPUs are not ASICs. There's nothing "application-specific" about them, especially since GPGPU exists.
2) The top Blackwell GPUs are going to use "4NP" according to Kopite7kimi, which is a different node than 4N.
Posted on Reply
#4
LazyGamer
I think I'll skip RTX 5000 series. My RTX 4070 should serve me well until RTX 6070 comes out.
Posted on Reply
#5
ARF
LazyGamer: I think I'll skip RTX 5000 series. My RTX 4070 should serve me well until RTX 6070 comes out.
12 GB for 3 or 4 years more? RTX 5070 ~2025, RTX 6070 ~2027-2028?

Better buy RTX 5070 if it has 16-20 GB VRAM, and skip RTX 6000 series altogether.
Posted on Reply
#6
N/A
The 5070 is touted to use GB205, and therefore 8192 shaders minus 2-4 disabled SMs, with 12 GB and a PCIe x8 interface, just because it's PCIe 5.0.

The price points should remain the same, with a 20-25% gen-on-gen performance uplift; the 5090 will shoot up to the stratosphere.
For the most part, RTX 50 is more like a refresh. Using the N4P nodelet provides only a minor clock-speed bump of 5-10%.
More L1 cache is good, but more shaders crammed into each GPC is a step back and means fewer ROPs.

Skip skip skip until the N1 node.
Posted on Reply
#7
LazyGamer
ARF: 12 GB for 3 or 4 years more? RTX 5070 ~2025, RTX 6070 ~2027-2028?

Better buy RTX 5070 if it has 16-20 GB VRAM, and skip RTX 6000 series altogether.
Naaah. I'm not one of the people who are desperate to have everything maxed out all the time. This includes texture resolution. And I'm still on a 1080p monitor. Nvidia is deliberately serving everything but its catastrophically overpriced top-tier cards with less VRAM than they should have, to force people into buying new cards, if for no other reason than lack of VRAM. I'd rather skip to AMD if I do buy the next gen of cards. They are more generous when it comes to video memory.
Posted on Reply
#8
bonehead123
NVIDIA to Implement GDDR7 Memory on Top-3 "Blackwell" GPUs
Well, of course they are, cause this will give them yet anutha excuse to jack up their prices even more for what will probably be yet anutha round of minuscule 5-8% performance increases :(
Posted on Reply
#9
Bwaze
bonehead123: Well, of course they are, cause this will give them yet anutha excuse to jack up their prices even more for what will probably be yet anutha round of minuscule 5-8% performance increases :(
These aren't Intel CPUs. The previous two generations had about a 50% performance uplift, even if you looked at pure rasterisation numbers.

The problem is, the price increase was even bigger, so some RTX 40x0 cards had worse price/performance than the previous generation - and I don't think we will get any better deal in the next generation.
Posted on Reply
#10
Chrispy_
Nvidia often seems to use faster VRAM as a way to skimp on bus width, so I'm much less excited about this than I would have been a decade ago.

We've lost bus width with several previous consumer GPU jumps to newer GDDR generations.
- 512-bit cards were abandoned by Nvidia in the switch from GDDR3 to GDDR5.
- Consumer cards were relegated to 256 bits when GDDR5X came along, with 384-bit prosumer Titans being the only, eye-wateringly expensive way to get more than 256 bits for over a year.
- The pattern repeated with GDDR6/6X, with the entry cost of 384-bit memory buses rising to $1,500 MSRP, and $2,000 on the street, while mainstream cards lost bus width at almost every tier, as well as PCIe lanes.
Hopefully, we don't see GDDR7 used as a way to make 192-bit buses compete with the same tier of 256-bit GDDR6 card from this generation, but my cynicism is justified by plenty of historical data. It's what Nvidia does, all the damn time.
Posted on Reply
#11
Metroid
What is the difference between TSMC N4P and N5?

"In October 2021, TSMC introduced a new member of its "5 nm" process family: N4P. Compared to N5, the node offered 11% higher performance (6% higher vs N4), 22% higher power efficiency, 6% higher transistor density and lower mask count. TSMC expected first tapeouts by the second half of 2022."
Posted on Reply
#12
ARF
Metroid: What is the difference between TSMC N4P and N5?

"In October 2021, TSMC introduced a new member of its "5 nm" process family: N4P. Compared to N5, the node offered 11% higher performance (6% higher vs N4), 22% higher power efficiency, 6% higher transistor density and lower mask count. TSMC expected first tapeouts by the second half of 2022."
Marginal.
LazyGamer: Naaah. I'm not one of the people who are desperate to have everything maxed out all the time.
It is risky to stay put for a further 3-4 years, hoping that no game releases that needs more than 12 GB of VRAM even at low-medium settings.
I don't share the optimistic view that 12 GB is good. No, it is not.
Sooner or later you will see this message:

LazyGamer: Nvidia is deliberately serving everything but catastrophically overpriced top tier cards with less VRAM than they should have. To force people into buying new cards
You should act proactively and counter Nvidia. Buy a card with more VRAM rather than wait for a new generation whose exact release date no one knows.
LazyGamer: I'll rather skip to AMD if I do buy next gen of cards. They are more generous when it comes to video memory.
This is even better. I doubt that AMD is generous; they simply know better than Nvidia how much VRAM is needed today.
AMD works better with the game developers to determine the minimum system requirements, and releases its products accordingly.
Posted on Reply
#13
BorisDG
Metroid: What is the difference between TSMC N4P and N5?

"In October 2021, TSMC introduced a new member of its "5 nm" process family: N4P. Compared to N5, the node offered 11% higher performance (6% higher vs N4), 22% higher power efficiency, 6% higher transistor density and lower mask count. TSMC expected first tapeouts by the second half of 2022."
Yeah, but N4 is not 4N, right? :p Yeah, I know all those "nm" are just a marketing thing now. On paper, Ada is 4 nm, which is N4P. From what I'm aware, the regular N4 is nowhere to be found on the consumer market.
Posted on Reply
#14
dgianstefani
TPU Proofreader
konga: 1) GPUs are not ASICs. There's nothing "application-specific" about them, especially since GPGPU exists.
2) The top Blackwell GPUs are going to use "4NP" according to Kopite7kimi, which is a different node than 4N.
Makes sense to follow Apple's lead: "Pro" models on the leading-edge node, "non-Pro" on an updated last-gen APU. Even AMD is doing this already with the differences in node from the 7600 to higher RDNA3 models.

There's such limited capacity left once Apple has placed its order anyway, so from a technical perspective I understand this strategy.

Hopefully Intel, with its own fabs, comes out swinging with 15th gen Core and Battlemage.
Posted on Reply
#15
Minus Infinity
N/A: 5070 is touted GB205 and therefore 8192 minus 2-4 disabled SMs and 12GB and PCIe x8, just because PCIe 5.0.

the price points should remain the same with 20-25% gen on gen performance uplift, 5090 will shoot up to the stratosphere
For the most part RTX 50 is more like a refresh. using the N4P nodelet provides only a minor clock speed bump 5-10%.
more L1 cache is good but more shaders crammed in each GPC is a step back and means less ROPs.

Skip skip skip until the N1 node.
Blackwell uses a newer core architecture than Ada, so it's definitely not just a refresh with GDDR7 and some extra L2 cache.
Posted on Reply
#16
Prima.Vera
ARF: It is risky to stay for further 3-4 years asking the games not to release one that needs more than 12 GB of VRAM even at low-medium settings.
I don't share the optimistic view that 12 GB is good. Not, it is not.
Sooner or later you will see this message:

You should act proactively and counter-act Nvidia. Buy a card with more VRAM than wait for a new generation which no one knows when exactly will get a public release.
Bro, stop spreading misinformation.
I had a GTX 1080 with 8 GB of VRAM and never, ever received that message, or even had any issues in any game.
I now run an RTX 3080 with 10 GB of VRAM and it is the same thing. I play all games maxed out, with 4K or Ultra Res textures, and my VRAM usage is always around 8 GB at most.
I'm very curious which games you think require more than 12 GB of VRAM? The ones whose engines cache all of the VRAM do not count ;)
Posted on Reply
#17
Double-Click
Only the top 3? So the 5070 isn't going to get it...#%^&ing pricks.

EDIT: Never mind, I need more coffee.
Posted on Reply
#18
dgianstefani
TPU Proofreader
Double-Click: Only the top 3? So the 5070 isn't going to get it...#%^&ing pricks.
Top three dies, not top three SKUs.

GB202, the GB203, and GB205

5070 will be GB203/GB205.
Posted on Reply
#19
Vayra86
ARF: Marginal.

It is risky to stay for further 3-4 years asking the games not to release one that needs more than 12 GB of VRAM even at low-medium settings.
I don't share the optimistic view that 12 GB is good. Not, it is not.
Sooner or later you will see this message:

You should act proactively and counter-act Nvidia. Buy a card with more VRAM than wait for a new generation which no one knows when exactly will get a public release.

This is even better. I doubt that AMD is generous, they simply know better than Nvidia how much VRAM is needed today.
AMD works better with the game developers to determine the minimum system requirements, and releases its products accordingly.
Dude. I'm one of the people saying 12 GB isn't enough at its given price point, but you're taking this way out of context. Stop spreading BS.
Posted on Reply
#20
THU31
The initial line-up of the 50 series is not looking great at the moment. Same node, and slow, low-capacity GDDR7 modules which are supposedly coupled with narrow buses.

I honestly don't even know what to expect anymore. A slightly more efficient 5070 with 12 GB? It seems surreal.
Posted on Reply
#21
N/A
Supposedly the 50 series is adding 33% more shaders per GPC and 33% faster GDDR7. I don't believe the shaders are any different, just more of them, with extra L1 to mitigate that.
Nvidia has no reason to make the 5070 any faster than a 4070 Super, and with GB205 that's what we get: a minor refresh, and early adopters get 12 GB as opposed to the anticipated 18 GB much later.
The 5080 is barely as fast as the 4090 in raster - 25% faster than the 4080, that is. Even the bandwidth and shaders point to roughly 25%: 22.5 -> 28 Gbps, 9728 -> 12160.
Nvidia very likely renamed the 5070 to the 5080, at $1,199, and the even bigger gap between the 80/90 leaves room for a 5080 Ti. Yeah.
Posted on Reply