
NVIDIA GeForce RTX 50-series and AMD RDNA4 Radeon RX 8000 to Debut GDDR7 Memory

btarunr

Editor & Senior Moderator
With Samsung Electronics announcing that the next-generation GDDR7 memory standard is in development, and Cadence, a key provider of DRAM PHY IP, EDA software, and validation tools, announcing its latest verification solution, the decks are cleared for the new memory standard to debut with the next generation of GPUs. GDDR7 would succeed GDDR6, which debuted in 2018 and has been around for nearly five years. GDDR6 launched at speeds of 14 Gbps, and its derivatives are now in production at speeds as high as 24 Gbps; it provided a generational doubling in speed over the preceding GDDR5.

The new GDDR7 promises the same, with starting speeds said to be as high as 36 Gbps, going beyond the 50 Gbps mark over its lifecycle. A MyDrivers report says that NVIDIA's next-generation GeForce RTX 50-series, probably slated for a late-2024 debut, as well as AMD's competing RDNA4 graphics architecture, could introduce GDDR7 at its starting speed of 36 Gbps. A GPU with a 256-bit GDDR7 interface would enjoy 1.15 TB/s of bandwidth, and one with a 384-bit interface would have a cool 1.7 TB/s to play with. We still don't know the codename of NVIDIA's next graphics architecture; it could be any of the ones NVIDIA hasn't used from the image below.
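For a quick sanity check on those figures, peak memory bandwidth is just the per-pin data rate multiplied by the bus width. A minimal sketch (the function name is purely illustrative):

```python
def peak_bandwidth_tbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in TB/s.

    Each of the bus_width_bits pins transfers data_rate_gbps gigabits
    per second; divide by 8 for bytes and by 1000 for terabytes.
    """
    return bus_width_bits * data_rate_gbps / 8 / 1000

# GDDR7 at its 36 Gbps starting speed:
print(peak_bandwidth_tbs(256, 36))  # 1.152 TB/s (the ~1.15 TB/s above)
print(peak_bandwidth_tbs(384, 36))  # 1.728 TB/s (the ~1.7 TB/s above)
```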



View at TechPowerUp Main Site | Source
 
Nice, it can mature for two generations before I upgrade my 4070 Ti.
 
It begs the question: what would the price be for those graphics cards? Gives me chills to even think about it, though.
 

An extra 200 dollars or so, which people keep being willing to pay. It's hard to even be mad at the manufacturers or developers anymore... I mean, if the consumer is fine with these prices, then why not? I'm just not one of those consumers.

Also, while I know we get this kind of information all the time, just shifted over one generation (inb4 news of GDDR8 being used in the gen after this one, or the gen after that, shocking I know), it's still a bit weird to read about this when the current gen isn't even fully out yet: 7900x, 7900, 7800, 7600, RTX 4070, 4060, 4050... still MIA.
 
nGreedia should love this. 256-bit+ buses will be a thing relegated to Titan cards, and the rest of us will be lucky to get more than 128-bit!
 
Could be the case that the manufacturers hold back their GPUs so customers cough up more cash for the better, more expensive ones rather than waiting until the lower-end models come out.
 
I think it is GDDR7, not 8, at this point. An extra $200? Whether that is fine or not isn't the point here, at least for me. You pay extra for a product that is advertised as the best, with all the features. Then you realize you can't run a game with the very features it was advertised with, the ones that convinced you to buy it, without using some artificial FPS generator, because the card can't cope otherwise. Not to mention that today's artificial feature is mainstream, and tomorrow, with new graphics, it gets replaced; you'd need to buy another graphics card, and so on.

I'm just worried about where this is all going. Today AMD keeps its feature open for every card; tomorrow it may not. From a business perspective I get it; from a consumer perspective it's not that good. It stops being $200 when you need to buy a brand-new card every year to make use of the new features, since those features are tied to hardware, which is expensive. You can go with the best again, but that requires a bag full of money, so the "$200 extra" argument becomes moot; you're no longer getting something extra for it.
 
So GDDR7 is faster than the HBM memory?
Who would have thought....
 
It is hard to tell if it is really faster across the board. Achieving these speeds generally means increasing latencies drastically. And if GDDR6 and 6X already run so hot, I wonder how hot GDDR7 will get.
 
No, GDDR7 isn't faster than HBM. People still don't understand why HBM is so fast: the sheer width of its bus interface.

HBM3 does up to 1 TB/s per 1024-bit stack, so four stacks of HBM3, for example, would equate to 4 TB/s of bandwidth. There is absolutely no way to match that with GDDR7 modules in a real product.
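The same per-pin arithmetic shows why bus width dominates here: even at a much lower per-pin rate, one 1024-bit HBM stack outruns a whole 384-bit GDDR bus. A rough sketch (the ~8 Gbps/pin figure is an assumption chosen to match the ~1 TB/s-per-stack number above):

```python
def bus_bandwidth_gbs(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one memory bus/stack in GB/s (bits / 8 = bytes)."""
    return width_bits * pin_rate_gbps / 8

# One 1024-bit HBM3 stack at ~8 Gbps/pin vs an entire 384-bit GDDR7 bus at 36 Gbps:
print(bus_bandwidth_gbs(1024, 8.0))  # 1024.0 GB/s per stack; four stacks ~= 4 TB/s
print(bus_bandwidth_gbs(384, 36.0))  # 1728.0 GB/s for the whole GDDR7 bus
```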
 

21 Gbps GDDR6X is just fine; not sure where you've read that it runs hot. Most cards in the entire 40-series stack hang out in the low 60s (°C) for memory temps.

The doubling of capacity per package, general cooler improvements, and not having packages on the back of the PCB all help.

AMD's new 20 Gbps GDDR6 for RDNA3 does run quite warm, but it's hard to say for sure, since:
  • Memory temp is not officially reported by AMD
  • Reviews mostly do not report on memory temp
  • MBA coolers are unimpressive and also pretty bad at memory cooling
  • 10-12 packages is still a lot for smaller not-4090 coolers
All in all, there's no real reason to assume a faster product runs hotter just because it's faster. Remember the speculation about 21 Gbps G6X just because Micron's 19 Gbps product was hard to cool?
 

AMD reports hotspot temperatures for memory, meaning they are not comparable to what NVIDIA cards report, but there is no reason to believe the two temperatures actually differ in any significant manner. On the reference cards all the components share the same vapor chamber, so under heavy load the cooling is likely better than a classic heatpipe design.
 
64bit is coming to mainstream.
 

Both the 40-series and Navi 31 show up as Memory Junction Temp in HWInfo. GPU hotspot is reported differently (and more accurately on RDNA), but AMD doesn't inject secret sauce into its GDDR6 packages. I haven't seen any sources proving otherwise; you're welcome to share some.

Granted, it's still third-party software only for Radeon (HWInfo), so the jury is still out, but I find it highly unlikely that the move from 16 Gb 16 Gbps G6 to 16 Gb 20 Gbps G6 completely broke memory temp reporting.
 

That's not what I am saying. Since it's a junction temperature, we have no idea what it really means or how it is calculated; that's why I am saying you can't compare them.
 

Well, it's not like HBM is some established fact that, once discovered, stays what it is. It also has iterations/generations that are faster and more energy efficient.

And purely talking physics, distance between objects means delay, so GDDR by definition loses out, because it sits farther away from the GPU.
 
It could be GDDR231.89 as far as I am concerned, but without reducing prices back down to normal, pre-pandemic/pre-scalping levels, it won't matta :D
 
@bonehead123
Will prices ever come back to 'normal' for mid-range and top-tier GPUs? I think the days of $700 to $750 MSRP top-tier video cards are gone forever...
 
So NVIDIA isn't greedy? That's news to me.
12-year-olds hang out in the WCCFTech comment section, where they use terms like "AMDone", etc.
Didn't say they weren't. But only young kids trying to look cool on the internet and salty adults talk like that. The rest of us acknowledge the greed but call companies by their actual names, like Microsoft and not M$. It just shows you're still living in the past and can't get over the fact that a company is actually being a company; if you were in the exact same shoes as one of the big tech companies, with next to no competition, you'd be price gouging too. It's the way of the corporate world, and you're just salty about it.
 
Nope, GDDR7 isn't faster than HBM. It is only getting to what HBM2/e was already capable of and doing 4 years ago, and HBM3 is already in production.

2019: the Radeon VII had HBM2 at 1,024 GB/s.
2020: the MI100 had HBM2 at 1,228.8 GB/s.
2022: the MI210 had HBM2e at 1,638.4 GB/s.
NVIDIA has an H100 with HBM3 that will do 3,000 GB/s.

When you consider that the Vega 56/64 back in 2017 had 8 GB of HBM2 starting at $400, and the VII had 16 GB for $700, I don't think HBM is crazy expensive. For RDNA3 there is the packaging issue, though. It would be cool if AMD could 3D-stack the MCDs onto the GCD so they'd have room for HBM3 :D
 
OK let's stay on topic...
Stop the bickering.
 
NVIDIA is going to love including only 8 GB of this on their next-gen 128-bit-bus GPUs.
 