
The Reason Why NVIDIA's GeForce RTX 3080 GPU Uses 19 Gbps GDDR6X Memory and not Faster Variants

Oh, NVIDIA's yet another Fermi moment, but hey, everyone will lap it up anyway because JHH said so :slap:
GTC 2018 Live Keynote | Page 2 | [H]ard|Forum
 
It was measured externally, which means the chips run even hotter in reality, now that I think about it.
Externally and internally. Igor somehow has access to Nvidia engineering software which can display G6X temperatures.
 
Watch 3080 Ti/Super is getting announced after AMD's keynote and it's equipped with those faster chips. :p
 
At 320W+ TDP I'm a bit surprised that nvidia hasn't released a water cooled version of the FE. I believe AMD did that for Vega (or was it Fury?).
The Asus TUF is a good solution to that; its thermal readings are proof of it.
 
Once again people are making a mountain out of what may not even be a molehill.

Firstly, nobody knows what safe temperatures are for GDDR6X, since that information isn't publicly available. 110 °C is the maximum temp for GDDR6 non-X; for all we know, G6X could be rated to 125 °C.

Secondly, even if G6X is only rated to 110 °C, the modules have thermal throttling built in, so they shouldn't be damaged.

Thirdly, Igor himself states:

But even such a high value is no reason for hasty panic when you understand the interrelationships of all temperatures.

Finally, if you really have a problem with this, do what everyone sane does: buy an AIB version with a proper cooler.
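As an aside, the "built-in thermal throttling" mentioned above is conceptually just a control loop that steps the memory clock down when the junction temperature crosses a limit and steps it back up once the chip cools off. A toy sketch (the 110 °C limit, hysteresis, and clock bins are assumptions for illustration, not Micron's actual firmware behaviour):

```python
# Toy model of memory thermal throttling (illustrative only; the limit,
# hysteresis, and clock states are assumptions, not Micron specs).
THROTTLE_LIMIT_C = 110
HYSTERESIS_C = 5
CLOCK_STEPS_MHZ = [1188, 1100, 1000, 900]  # hypothetical clock bins, fastest first

def next_clock_step(junction_temp_c: float, current_step: int) -> int:
    """Return the clock-bin index to use given the current junction temp."""
    if junction_temp_c >= THROTTLE_LIMIT_C and current_step < len(CLOCK_STEPS_MHZ) - 1:
        return current_step + 1  # too hot -> drop one clock bin
    if junction_temp_c < THROTTLE_LIMIT_C - HYSTERESIS_C and current_step > 0:
        return current_step - 1  # cooled off (with hysteresis) -> step back up
    return current_step          # in the dead band -> hold the current bin
```

The hysteresis band is the important design detail: without it the clock would oscillate between bins whenever the temperature hovered right at the limit.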
 
They could have gone with 21 Gbps if they had gone with TSMC 7 nm. The chip would have had a lower TDP, consequently allowing a less robust VRM and a cooler PCB around it, allowing the use of faster memory. I guess that is going to happen with the 3080 Super.
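For reference, the bandwidth at stake between the 19 Gbps chips and the faster variants is straightforward to compute from the per-pin data rate and the bus width (assuming the 3080's 320-bit bus):

```python
def memory_bandwidth_gbs(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, over 8 bits/byte."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(memory_bandwidth_gbs(19, 320))  # 760.0 GB/s as the 3080 ships
print(memory_bandwidth_gbs(21, 320))  # 840.0 GB/s with the faster chips
```

So the faster chips would be worth roughly a 10% bump in peak bandwidth on the same bus.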
 
They could have gone with 21 Gbps if they had gone with TSMC 7 nm. The chip would have had a lower TDP, consequently allowing a less robust VRM and a cooler PCB around it, allowing the use of faster memory. I guess that is going to happen with the 3080 Super.
It doesn't work like that: if you change the node, you must do a complete redesign of the die. The Supers will be on Samsung, too.
 
do you know there's a BOM budget..
Big OLD Mammaries?

On topic:
When cards get designed right side up, heat will actually travel away from the chips naturally.
 
Once again people are making a mountain out of what may not even be a molehill.

Firstly, nobody knows what safe temperatures are for GDDR6X, since that information isn't publicly available. 110 °C is the maximum temp for GDDR6 non-X; for all we know, G6X could be rated to 125 °C.

Secondly, even if G6X is only rated to 110 °C, the modules have thermal throttling built in, so they shouldn't be damaged.

Thirdly, Igor himself states:

But even such a high value is no reason for hasty panic when you understand the interrelationships of all temperatures.

Finally, if you really have a problem with this, do what everyone sane does: buy an AIB version with a proper cooler.

No mountains in sight, but I did nearly break my ankle a few times now with all those molehills on my path. Definitely not a problem-free gen, this, and hot memory on an FE is a new thing now. So the core doesn't throttle anymore, yay; now the memory does.
 
When cards get designed right side up, heat will actually travel away from the chips naturally.
Perhaps... but that requires a complete retooling of PCIe spacing. The space is available below the slot, not above it. At most you have room for a 1.5 slot card above the top PCIe slot as it stands.
 
Perhaps... but that requires a complete retooling of PCIe spacing. The space is available below the slot, not above it. At most you have room for a 1.5 slot card above the top PCIe slot as it stands.
yea, yea, likely excuses.... :p
 
It doesn't work like this, if you change the node you must do a complete redesign of the die. The Supers will be on Samsung, too.

Yup, NVIDIA has really split Ampere this gen: the lower-volume compute chip (GA100) is on TSMC 7 nm, while the consumer chips are on Samsung.

No mountains in sight, but I did nearly break my ankle a few times now with all those molehills on my path. Definitely not a problem-free gen, this, and hot memory on an FE is a new thing now. So the core doesn't throttle anymore, yay; now the memory does.

Again, there is no way to know if these temperatures are problematic because we don't yet know what safe G6X operating temperatures are. So making a fuss about said temperatures is premature at best and FUD at worst.

Should evidence emerge showing that these temps are a problem, I will join in rightly criticising NVIDIA for putting form over function. But not before. There's far too much fanboyism and there are too many idiot brigades on these forums; I reject such nonsense wholeheartedly.
 
Again, there is no way to know if these temperatures are problematic because we don't yet know what safe G6X operating temperatures are. So making a fuss about said temperatures is premature at best and FUD at worst.

Should evidence emerge showing that these temps are a problem, I will join in rightly criticising NVIDIA for putting form over function. But not before.

Mhm, and in the same way, Intel's current CPU operating temps are also "not problematic", but they still urge them to limit all sorts of stuff, come up with 2810 ways to boost, and throttle like nobody's business. Come on, smoke > fire, it's not hard. Even if they spec them for 120 °C, it's a horrible temp figure to look at. There are lots of parts that will suffer around this temperature, and those boards are cramped as hell. And let's not forget that even if they spec them for a very high 125 °C, you're still looking at major degradation risk for anything over 100 °C.

Why do you think these specs aren't public? Coincidence? Materials don't magically take more heat all of a sudden. They're just stretching the limits of what's safe and what's not. As long as it makes it through the warranty period, right?

Time to put two and two together.
 
Mhm, and in the same way, Intel's current CPU operating temps are also "not problematic", but they still urge them to limit all sorts of stuff, come up with 2810 ways to boost, and throttle like nobody's business. Come on, smoke > fire, it's not hard. Even if they spec them for 120 °C, it's a horrible temp figure to look at. There are lots of parts that will suffer around this temperature, and those boards are cramped as hell. And let's not forget that even if they spec them for a very high 125 °C, you're still looking at major degradation risk for anything over 100 °C.

Why do you think these specs aren't public? Coincidence? Materials don't magically take more heat all of a sudden. They're just stretching the limits of what's safe and what's not. As long as it makes it through the warranty period, right?

Time to put two and two together.

Now you are getting into conspiracy theory land, which is even worse than FUD. Please, use your brain to explain to me how it benefits NVIDIA to tarnish their reputation by purposefully shipping defective products that they know will get them into trouble down the road.
 
I really don't understand why they crammed all of that stuff onto such a small PCB. It's not like PCB prices skyrocketed or something and they needed to cut expenses. It just seems stupid.
 
Engineering compromises... still fast enough.
 
explain to me how it benefits NVIDIA to tarnish their reputation by purposefully shipping defective products

No one said anything is defective, but it might be bordering on becoming defective.
 
I really don't understand why they crammed all of that stuff onto such a small PCB. It's not like PCB prices skyrocketed or something and they needed to cut expenses. It just seems stupid.

Form over function. NVIDIA's FE designs are sadly copying the iPhone trend.

No one said anything is defective, but it might be bordering on becoming defective.

Do you waste time worrying that your phone or monitor or car or toaster might become defective? If not, why is the RTX 3080 FE an exception?
 
Do you waste time worrying that your phone or monitor or car or toaster might become defective?

I do if there is a known issue; obviously, I can only worry about things that I know of.
 
Again, there is no way to know if these temperatures are problematic because we don't yet know what safe G6X operating temperatures are. So making a fuss about said temperatures is premature at best and FUD at worst.

Should evidence emerge showing that these temps are a problem, I will join in rightly criticising NVIDIA for putting form over function. But not before. There's far too much fanboyism and there are too many idiot brigades on these forums; I reject such nonsense wholeheartedly.
Operating temperature range: 0 to 95 °C.

Absolute maximum ratings, storage temperature: −55 °C min, +125 °C max.

These can be found under the data sheet in this link.
 
Uhh, we could see the problem with overheating Micron chips again, like what already happened in some 20xx-series cards...
 
Now you are getting into conspiracy theory land, which is even worse than FUD. Please, use your brain to explain to me how it benefits NVIDIA to tarnish their reputation by purposefully shipping defective products that they know will get them into trouble down the road.

Planned obsolescence is a conspiracy theory now? I think you need to get real.

NVIDIA's cards generally aged just fine.
The hot ones, however, really didn't. Same on the AMD side. I don't see why this would be an exception to that rule. But you are welcome to provide examples of VRAM running close to 100 °C doing just fine after 4-5 years. I have hands full of examples showing the opposite.

And, ehh, tarnish their reputation? The card made it past warranty, right?
 
Planned obsolescence is a conspiracy theory now? I think you need to get real.

NVIDIA's cards generally aged just fine.
The hot ones, however, really didn't. Same on the AMD side. I don't see why this would be an exception to that rule. But you are welcome to provide examples of VRAM running close to 100 °C doing just fine after 4-5 years. I have hands full of examples showing the opposite.

And, ehh, tarnish their reputation? The card made it past warranty, right?

If you care about your card aging a million years, just get a decent AIB card and everything will be fine. It's just one card out of the tens available.
I'm definitely disappointed that NVIDIA has designed such a good-looking card that cools the GPU itself just fine but somehow fails to keep the memory chips cool enough. I'd avoid the FE and look somewhere else.
 
If you care about your card aging a million years, just get a decent AIB card and everything will be fine. It's just one card out of the tens available.

Obviously, but that is not what this topic is about, is it... Nobody ever said "buy an FE". The article here is specifically talking about temps on the FE.

And we both know that expecting 5-6 years of life out of a GPU is not a strange idea at all. Obviously it won't run everything beautifully, but it certainly should not be defective before then. A broken or crappy fan over time? Sure. Chip and memory issues? Bad design.

Now, when it comes to those AIB cards... the limitations of the FE do translate to those as well, since they're also 19 Gbps cards because "the FE has it".
 