
NVIDIA GeForce RTX 50-series "Blackwell" to use 28 Gbps GDDR7 Memory Speed

btarunr

Editor & Senior Moderator
The first round of NVIDIA GeForce RTX 50-series "Blackwell" graphics cards to implement GDDR7 memory is rumored to come with a memory speed of 28 Gbps, according to kopite7kimi, a reliable source for NVIDIA leaks. This is despite the fact that the first GDDR7 memory chips will be capable of 32 Gbps speeds. NVIDIA will also stick with 16 Gbit densities for the GDDR7 memory chips, which means memory sizes could remain largely unchanged for the next generation. At 28 Gbps, GDDR7 provides 55% higher bandwidth than 18 Gbps GDDR6 and 33% higher bandwidth than 21 Gbps GDDR6X. It remains to be seen what memory bus widths NVIDIA chooses for its individual SKUs.

NVIDIA's decision to use 28 Gbps as its memory speed has some precedent in recent history. The company's first GPUs to implement GDDR6, the RTX 20-series "Turing," opted for 14 Gbps speeds despite 16 Gbps GDDR6 chips being available. 28 Gbps is exactly double that speed. Future generations of GeForce RTX GPUs, or even refreshes within the RTX 50-series, could see NVIDIA opt for higher memory speeds such as 32 Gbps. Companies like Samsung even plan to offer chips as fast as 36 Gbps once the standard debuts. Besides a generational doubling in speeds, GDDR7 is more energy-efficient, as it operates at lower voltages than GDDR6. It also uses more advanced PAM3 physical-layer signaling, compared to NRZ for JEDEC-standard GDDR6.
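The bandwidth figures above are straightforward to sanity-check. A minimal sketch of the arithmetic, where the 256-bit bus is a hypothetical example rather than a confirmed SKU:

```python
# Peak memory bandwidth (GB/s) = per-pin speed (Gbps) x bus width (bits) / 8.
def peak_bandwidth_gbs(speed_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s for a given per-pin speed and bus width."""
    return speed_gbps * bus_width_bits / 8

# Generational uplift of 28 Gbps GDDR7 (bus width cancels out of the ratio):
print(f"vs 18 Gbps GDDR6:  +{(28 / 18 - 1) * 100:.1f}%")  # prints +55.6%
print(f"vs 21 Gbps GDDR6X: +{(28 / 21 - 1) * 100:.1f}%")  # prints +33.3%

# Hypothetical 256-bit card at 28 Gbps:
print(f"256-bit @ 28 Gbps: {peak_bandwidth_gbs(28, 256):.0f} GB/s")  # prints 896 GB/s
```

The ratios are bus-width independent, which is why the article can quote uplift percentages without committing to any particular SKU.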



View at TechPowerUp Main Site | Source
 
It will be interesting to see benchmarks. My guess is GDDR7 will help 4K and 3440x1440 gamers, but any resolution below that may not benefit from the extra speed. Nonetheless, it's nice to see progress still. We're lucky this industry even exists and they didn't just force cloud gaming down our throats; that day will come, but thankfully it's not today.
 
If AMD doesn't return with a large monolithic GPU, you will see how NVIDIA will charge $2,900 for a GB103-based RTX 5080 that's 10-20% faster than the RTX 4090, and won't even bother releasing the much larger GB102 in an RTX 5090.

Let's hope AMD and NVIDIA don't intentionally align their lineups once again, so that the Radeon RX 8700 XT is 10% faster than the RX 7800 XT, and the RTX 5060 is 5% faster than the RTX 4060.
That would be a disaster, but at the same time gamers would save some money, because those lineups would make the negative buying decisions much easier.
 
They should not return to a monolithic design. They have the advantage of having already released chiplet-based GPUs for both the consumer and server markets. Sooner or later NVIDIA will also switch to a chiplet design, but design and actual release are two different things. It remains to be seen whether AMD will use that advantage.
 
So one can expect the same amount of GDDR per GPU tier as the current gen.
Lovely.
We can all keep enjoying the endless 'not enough memory vs. it's just allocating' and 'you want to be future-proof vs. by then your fps will tank anyway' debates for another episode.

Also, NV must be keeping the 32 Gbps variant for the upcoming 'Super' 5xxx series.
 
If AMD doesn't return with a large monolithic GPU, you will see how NVIDIA will charge $2,900 for a GB103-based RTX 5080 that's 10-20% faster than the RTX 4090, and won't even bother releasing the much larger GB102 in an RTX 5090.

Let's hope AMD and NVIDIA don't intentionally align their lineups once again, so that the Radeon RX 8700 XT is 10% faster than the RX 7800 XT, and the RTX 5060 is 5% faster than the RTX 4060.
That would be a disaster, but at the same time gamers would save some money, because those lineups would make the negative buying decisions much easier.

I was just wondering how Nvidia will pull off a +50% performance / +50% price increase this generation without public outrage, and one option is of course:

- an approx. $2,000 RTX 5080, which will be faster than the RTX 4090 ("so you're getting your money's worth," reviewers will be paid to say)

- a tier above that will be a new "Titan" class card, but aimed at "Home AI acceleration". Price? Sky is the limit.
 
My hopes are not high for the 5000 series, given the focus on making AI hardware at the moment and AMD not competing at the top end this time round :( On the plus side, my 4090 will remain relevant for longer, I guess.

I was just wondering how Nvidia will pull off a +50% performance / +50% price increase this generation without public outrage, and one option is of course:

- an approx. $2,000 RTX 5080, which will be faster than the RTX 4090 ("so you're getting your money's worth," reviewers will be paid to say)

- a tier above that will be a new "Titan" class card, but aimed at "Home AI acceleration". Price? Sky is the limit.
I don't think even Nvidia would be as bold as jumping to $2K for the 5080. I agree with your "just faster than the 4090" call, though; my guess is just under the 4090's MSRP with 10% more performance, so Jensen can pretend he's our friend.
 
I don't think even Nvidia would be as bold as jumping to $2K for the 5080. I agree with your "just faster than the 4090" call, though; my guess is just under the 4090's MSRP with 10% more performance, so Jensen can pretend he's our friend.

But it's the logical conclusion of Jensen's "Moore's Law is dead" paradigm.

2020, RTX 3080 - $700
2022, RTX 4080 - $1200
2024, RTX 5080 - $2040
2026, RTX 6080 - $3468
2028, RTX 7080 - $5896
2030, RTX 8080 - $10022
2032, RTX 9080 - $17038
2034, RTX 1080 - $28965
 
But it's the logical conclusion of Jensen's "Moore's Law is dead" paradigm.

2020, RTX 3080 - $700
2022, RTX 4080 - $1200
2024, RTX 5080 - $2040
2026, RTX 6080 - $3468
2028, RTX 7080 - $5896
2030, RTX 8080 - $10022
2032, RTX 9080 - $17038
2034, RTX 1080 - $28965
That's fine, as long as my income scales in the same way :D
 
They should not return to monolithic design.

Monolithic is better.
 
Theoretically, if we are talking in a vacuum, sure. It's also unsustainable in terms of yields as designs get denser and more complex; we've already seen it with AD102. There is a good reason AMD is on chiplets, Intel is going to chiplets, and for NV it's a question of when, not if.
 
In the last 30 days I've seen 4 or 5 posts saying the same thing over and over and over again.

Now every time Nvidia farts, we get a new post...
 

No, chiplets are 100% better. For starters, there are papers demonstrating that a chiplet-based architecture with an active interposer can achieve lower latency than a monolithic design (like the one done by the University of Toronto). An active interposer gives the chip a dedicated point-to-point communication layer, which will be superior to a monolithic design that has to route wires around other on-die features, particularly as complexity scales up.

A monolithic chip is smaller individually, but that's essentially irrelevant given that chiplet packages only grow horizontally: the package doesn't extend beyond the height of capacitors, for example, and doesn't increase size requirements for devices. The larger chiplet-based design would be easier to cool as well. Of course, both of the above depend on the exact chiplet design; Intel's chiplets, for example, sit much closer together, so the size difference versus a monolithic design is going to be smaller.

Of course chiplets also allow modularity, superior binning (each individual chiplet on a die can be binned), chips to exceed the reticle limit, they are cheaper to produce, and they have higher yield as compared to the same chip in a monolithic design.

AMD has three chiplet designs for its entire CPU lineup: the IO die, the Zen 4 core die, and the Zen 4c core die. Meanwhile, Intel needs dozens of designs to address the same markets.

This is why Intel is switching to chiplets: they're just better.
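The yield point above can be illustrated with a toy Poisson defect model; the defect density and die areas below are made-up illustrative figures, not foundry data:

```python
import math

def die_yield(defect_density_per_mm2: float, area_mm2: float) -> float:
    """Fraction of good dies under a simple Poisson defect model: e^(-D0 * A)."""
    return math.exp(-defect_density_per_mm2 * area_mm2)

D0 = 0.002  # defects per mm^2 -- an assumed, illustrative value

mono = die_yield(D0, 600)      # one big 600 mm^2 monolithic die
chiplet = die_yield(D0, 150)   # one 150 mm^2 chiplet (four make a "big" chip)

print(f"600 mm^2 monolithic yield: {mono:.1%}")     # prints ~30.1%
print(f"150 mm^2 chiplet yield:    {chiplet:.1%}")  # prints ~74.1%

# Because defective chiplets are binned out *before* packaging, the fraction
# of usable silicon tracks the per-chiplet yield, not the probability of an
# equally large monolithic die coming out defect-free.
```

Under this sketch, quartering the die area more than doubles the fraction of usable silicon, which is the core of the yield argument as designs approach the reticle limit.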
 
That was the case with 7 Gbps GDDR5 as well, for cost reasons. Expect the 60-series with 28-32 Gbps and the 70-series with 42 Gbps,
unless they come up with a GDDR7X again, like G5X and G6X before it: just a buzzword and otherwise a nothingburger.
 
We already have seen it with the AD102.

Meanwhile, Navi 31 has a hard time keeping up with AD103, which is much smaller and less expensive to make.
Please don't tell me fairy tales. Why did AMD abandon the failed Navi 41? Because the chiplet approach doesn't work, and it leaves large portions of performance on the table.

Meanwhile Intel needs dozens of designs to address the same markets.

That's because Intel is a large corporation and can afford it. I, for one, also support the idea that it should cut at least 50% of its projects, because they aren't necessary and just waste time and money.
 
- a tier above that will be a new "Titan" class card, but aimed at "Home AI acceleration". Price? Sky is the limit.
As much as I personally want a card like that to happen, there are two points that make it highly unlikely:
1) They already have a sky-high priced RTX6000 ADA, which is an almost fully unlocked AD102 with 48GB - exactly what one would want for "home AI acceleration." They're charging $7000 for it and it's consistently out of stock.
2) They can charge even more by allocating their production towards dedicated AI chips! Those go for 15k+ while the actual die size is around the same. Sure, packaging costs, even more memory and all that but, you know, that's where the margins are.

At the end of the day I wouldn't be too surprised if they stopped bothering with gaming altogether. IMHO this might end up even worse than mining for home GPU market - that stuff went in cycles, miners weren't paying 20k for a card even in the worst moments and they wanted the same product as we do. AI people want a different product - and they can pay the price for it, hogging up all the supply they can get. If they keep doing that - where's the incentive to sell us the cards or make them better? Even high-end will become like mid- and low-end already has, with measly +5-10% boosts every couple of years.
 
No, chiplets are 100% better. For starters, there are papers demonstrating that a chiplet-based architecture with an active interposer can achieve lower latency than a monolithic design (like the one done by the University of Toronto). An active interposer gives the chip a dedicated point-to-point communication layer, which will be superior to a monolithic design that has to route wires around other on-die features, particularly as complexity scales up.
And HBM is better than GDDR6X. And graphene is better than silicon. For regular consumers, chiplets are purely a cost-saving measure with performance/efficiency downsides. Although I'm not even sure they're cheaper to make GPU-wise, as I'm sure the 7900 XTX costs more in chips than the 4080. We don't know if chiplets are to blame for RDNA3's shortcomings, though.

Of course at one point chiplets will be a necessity because of rising costs but I don't think of that as a positive for consumers. It's going to be all about increasing margins.

AMD has three chiplet designs for its entire CPU lineup: the IO die, the Zen 4 core die, and the Zen 4c core die.
That's not really true, is it? AMD also uses monolithic CPUs.
 
Meanwhile, Navi 31 has a hard time keeping up with AD103, which is much smaller and less expensive to make.

Is it? Where are you getting this info?

The GCD of Navi 31 is smaller and the MCDs are tiny; for all we know, it could very well be cheaper even if the total die area used is higher.

Please don't tell me fairy tales. Why did AMD abandon the failed Navi 41? Because the chiplet approach doesn't work, and it leaves large portions of performance on the table.

I'd like to point out that AMD is beating Intel in the server and enterprise space and has the fastest consumer desktop processors as well. To me, it seems you are purposefully ignoring things that don't favor your point.

AMD pointed out why they could not yet scale up their GPUs: bandwidth.

That's because Intel is a large corporation and can afford it. I, for one, also support the idea that it should cut at least 50% of its projects, because they aren't necessary and just waste time and money.

You are implying that a company is wasting money for the fun of it. I'm sure their CEO and shareholders would highly disagree.

And HBM is better than GDDR6X. And graphene is better than silicon. For regular consumers, chiplets are purely a cost-saving measure with performance/efficiency downsides.


The X3D CPUs absolutely prove this false. The binning of the 5950X does as well.

Although I'm not even sure they're cheaper to make GPU-wise, as I'm sure the 7900 XTX costs more in chips than the 4080. We don't know if chiplets are to blame for RDNA3's shortcomings, though.

The 7900 XTX has a GCD die size of 304 mm², and MCDs of 34 mm² each. The cost is similar to that of a mid-range GPU. RDNA3 doesn't reach 4090-level performance because AMD was unable to get a second GCD working.

I bet they failed to Google this at Nvidia. You should mail it to them.

Nvidia wrote a paper in 2017 about how chiplets are better FYI.
 
Meanwhile, Navi 31 has a hard time keeping up with AD103, which is much smaller and less expensive to make.
So are we legitimately comparing the first consumer GPU to use chiplets against a technology that's at its peak (the monolithic GPU) and immediately concluding that chiplets are worthless? That's cool. Should I remind you that the OG Zen also had a "hard time" keeping up with the 7700K in many tasks? Guess that chiplet approach was worthless too.
 
I bet they failed to Google this at Nvidia. You should mail it to them.
I guess that's why Nvidia GPUs are becoming cheaper with every generation, right? Right?

 
I bet they failed to Google this at Nvidia. You should mail it to them.
Monolithic is better for efficient designs; multi-chip is better for scaling. There is a scaling threshold beyond which you need to go multi-chip. The thing is, GPUs right now are right on the edge: monolithic is becoming expensive to build, but multi-chip isn't justified just yet.

Nvidia can afford to build expensive monoliths, because AI will gobble up anything (and likes efficiency). AMD... just tries to play catch-up.
 