
Nvidia seems to always have their own memory standard, why not everyone else?

Why does Nvidia get GDDR5X and GDDR6X while no one else does? Why don't AMD and/or Intel (I guess AMD and Intel teaming up might be a non-starter) come out with their own memory standard? And if Nvidia can get its own GDDR standards, why can't someone else get a DDR5X standard going?
 
Wikipedia:

Micron developed GDDR6X in close collaboration with Nvidia. GDDR6X SGRAM had not been standardized by JEDEC yet. Nvidia is Micron's only GDDR6X launch partner.[22]
 
Because AMD's engineers have opted not to adopt these standards. There's no favoritism or lockout: G5X wasn't adopted to save costs, G6X wasn't adopted because AMD decided to implement Infinity Cache to mitigate the lower bandwidth in Navi 21, and now they've chosen to stick with G6 but make the bus 50% wider.

It's all about tradeoffs and design elements, and AMD's engineering believes that this is an acceptable tradeoff for their designs.
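Just to put rough numbers on that tradeoff, here's a toy model of what a big on-die cache does for effective bandwidth. The hit rate and bandwidth figures below are made-up assumptions for illustration, not AMD's actual specs:

```python
# Toy model: effective bandwidth when a large last-level cache absorbs part of
# the memory traffic. All numbers here are illustrative assumptions, not AMD specs.

def effective_bandwidth(vram_gbs: float, cache_gbs: float, hit_rate: float) -> float:
    """Requests that hit the cache are served at cache speed; the rest go to VRAM."""
    return hit_rate * cache_gbs + (1.0 - hit_rate) * vram_gbs

# Hypothetical Navi-21-like setup: 512 GB/s of plain GDDR6 plus a fast on-die cache.
print(effective_bandwidth(512, 2000, 0.60))  # 1404.8 GB/s effective at a 60% hit rate
print(effective_bandwidth(512, 2000, 0.0))   # 512.0 GB/s with no cache hits at all
```

The point being: a big enough cache lets a narrower, cheaper GDDR6 bus punch above its raw numbers, which is the route AMD took instead of G6X.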

Wikipedia:

Micron developed GDDR6X in close collaboration with Nvidia. GDDR6X SGRAM had not been standardized by JEDEC yet. Nvidia is Micron's only GDDR6X launch partner.[22]

This still doesn't stop AMD from using it; they just chose not to.
 
Yup, another reason to just buy green and tell your mates you got the latest GDDR X.... :D
 
Not to mention that it's easier to change suppliers if you use standard parts.
 
So Nvidia can market something different to justify their prices.
 
It's called "Ngreediya 101"....

Similar to "Capitalism 101" but much, much worse :D
 
As if AMD or Intel wouldn't do exactly the same things if they had Nvidia's market position. :kookoo: It takes a true fanboi to believe that one large corporation is friendlier and more morally upstanding than the next one.
 
They don't. (Except maybe GDDR6X, which they developed in partnership with Micron)

Sometimes companies just choose to not use something. Nvidia chose not to use GDDR4 when AMD played around with it. They stuck with GDDR3 and jumped to GDDR5 when that was ready.
 
It's called "Ngreediya 101"....

Similar to "Capitalism 101" but much, much worse :D

Is it AMD's greed that Nvidia hasn't released any consumer-grade GPU with HBM while AMD has had three of them, then? I don't consider GV100 (even in its $3,000 Titan V cut) a consumer-grade processor, but Fiji, Vega 10 and Vega 20 were all released under gaming brands at flagship yet affordable prices.
 
Is it AMD's greed that Nvidia hasn't released any consumer-grade GPU with HBM while AMD has had three of them, then? I don't consider GV100 (even in its $3,000 Titan V cut) a consumer-grade processor, but Fiji, Vega 10 and Vega 20 were all released under gaming brands at flagship yet affordable prices.

Although all it really showed was that using HBM on a consumer graphics card was a bad idea, which is likely why AMD abandoned it so fast for the consumer market.
 
Although all it really showed was that using HBM on a consumer graphics card was a bad idea, which is likely why AMD abandoned it so fast for the consumer market.

Thanks, Raja, for that stunt and his professional leadership of the development process. He had all the resources to foresee a bad product; he was hired for the ability to foresee it just by looking at the design drawings. It ain't the memory at fault here.

GDDR6 or X or not... are GPUs that memory-speed starved? Definitely not.
 
Thanks, Raja, for that stunt and his professional leadership of the development process. He had all the resources to foresee a bad product; he was hired for the ability to foresee it just by looking at the design drawings. It ain't the memory at fault here.

GDDR6 or X or not... are GPUs that memory-speed starved? Definitely not.

Part of it is that the internet always blows AMD GPUs out of proportion for some reason, at least for the last 5-8 years anyway, but I remember being extremely disappointed with Vega, and Fury before it. It really was the start of me not buying AMD GPUs; I really liked the 290X and the 7970s before it.
 
Part of it is that the internet always blows AMD GPUs out of proportion for some reason, at least for the last 5-8 years anyway, but I remember being extremely disappointed with Vega, and Fury before it. It really was the start of me not buying AMD GPUs; I really liked the 290X and the 7970s before it.

Well, Fury was okay for SFF builds... and that's pretty much it. The internet is a weird place; the most vocal opinions usually don't match reality. I went green after the 7970, which I also had. It was the last card I used on air.
 
Well, Fury was okay for SFF builds... and that's pretty much it. The internet is a weird place; the most vocal opinions usually don't match reality. I went green after the 7970, which I also had. It was the last card I used on air.

Previously I'd buy the 80-class Nvidia card and whatever AMD card performed similarly, at least up through the 290X. I've always run two systems. I would love to do that now with, say, a 7900 XTX and my 4090, but it's too expensive now lol, at least with all my other money-pit hobbies. I'm building a whole system for about the cost of a 7900 XTX this week lol.
 
I don't consider GV100 (even in its $3,000 Titan V cut) a consumer-grade processor, but Fiji, Vega 10 and Vega 20 were all released under gaming brands at flagship yet affordable prices.
HBM was made in collaboration with AMD and was used first on Fiji, so NV simply couldn't get it before Fiji launched (IF they REALLY wanted it). Aside from that, in 2015 HBM 1.0 was probably too much hassle for NV to make work, on top of being very capacity-limited AND not really needed at that point (since HBM didn't help AMD beat Maxwell 2.0).
Vega 10 was made with HBM2 in mind, and you can't simply switch memory tech midway because "it's too expensive to implement". Also, Vega 10 needed HBM2 to not blow past its power budget too fast (as every MHz was needed to counter Pascal's price/performance).

Titan V was released in December 2017, and it was at least 50% cheaper than the previous HBM2 GPU (Quadro GP100) :D (/s)
And it is faster than Vega 20, which launched a bit over a year later (even with one HBM2 stack disabled vs. Vega 20).
 
So Nvidia can market something different to justify their prices.
No, Nvidia used GDDR5X to nuke AMD's failing HBM-oriented attack on the performance crown and to keep providing high-end GPUs amidst a crypto and mining 'crisis'.

And succeeded, we might add. These X's exist because GDDR5 and 6 didn't fit Nvidia's strategy and gen-to-gen performance-increase plans. Pascal was in part a success because Nvidia could make a very cheap GPU line, and GDDR5X was part of that cost reduction. Similarly, GDDR6X enabled Ampere, even despite a major hit to TDPs.
 
Probably to alleviate bus width issues in higher-end GPUs without designing them with much wider buses and much more complex PCBs as a result. AMD doesn't use these because their GPUs don't need them.
 
As if AMD or Intel wouldn't do exactly the same things if they had Nvidia's market position.
GDDR3 is probably a relevant example. Also HBM, although AMD's part in that was not quite that big.
Although all it really showed was that using HBM on a consumer graphics card was a bad idea, which is likely why AMD abandoned it so fast for the consumer market.
Cost and packaging issues. Today, with chiplets and advanced packaging solutions having come a long way, a faster HBM is more and more likely to make a reappearance somewhere.
GDDR6 or X or not... are GPUs that memory-speed starved? Definitely not.
Yes, they are. GPUs are memory-bandwidth starved as well as latency starved. GDDR5X and GDDR6X were both born out of the desire/need to wrangle the last bit of speed out of existing memory technology.

Probably to alleviate bus width issues in higher-end GPUs without designing them with much wider buses and much more complex PCBs as a result. AMD doesn't use these because their GPUs don't need them.
GDDR5X/GDDR6X? AMD was/is not using them due to cost concerns (relative to the increased speed/bandwidth over GDDR5/GDDR6). AMD GPUs also need as much VRAM speed as they can get. The wider-buses note is quite funny - AMD started the bus-width reduction with a large cache starting with RDNA3, and Nvidia followed a generation later.

These X's exist because GDDR5 and 6 didn't fit Nvidia's strategy and gen-to-gen performance-increase plans.
GDDR5X exists because GDDR6 was late. GDDR6X exists mainly because faster GDDR6 wasn't there yet. 16 Gbps and 20 Gbps GDDR6 seem nice and all today (and even 24 Gbps was officially launched last year after long delays), but when Ampere launched they were nowhere to be found, despite having been announced a while earlier.
 
GDDR5X/GDDR6X? AMD was/is not using them due to cost concerns (relative to the increased speed/bandwidth over GDDR5/GDDR6). AMD GPUs also need as much VRAM speed as they can get. The wider-buses note is quite funny - AMD started the bus-width reduction with a large cache starting with RDNA3, and Nvidia followed a generation later.
Let's compare the 7800 XT and the 4070 Ti. The former uses a 256-bit bus with 19.5 Gbps GDDR6, resulting in 624.1 GB/s of total bandwidth. The latter uses a 192-bit bus with 21 Gbps GDDR6X, resulting in 504.2 GB/s. To achieve similar results with GDDR6 (non-X), Nvidia would have had to use a 256-bit bus, which means a larger GPU die and more VRAM chips on a more complex PCB. GDDR6X saves costs on these fronts.
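For anyone who wants to check the arithmetic: bandwidth is just bus width times per-pin data rate (the small differences vs. the quoted 624.1/504.2 GB/s come from the exact memory clocks rather than the rounded 19.5/21 Gbps). A quick sketch:

```python
# Memory bandwidth in GB/s = bus width (bits) / 8 * per-pin data rate (Gbps).
# Specs below are the rounded figures quoted above.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Total bandwidth in GB/s for a GDDR memory subsystem."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 19.5))  # 624.0 GB/s -- RX 7800 XT (256-bit, 19.5 Gbps GDDR6)
print(bandwidth_gb_s(192, 21.0))  # 504.0 GB/s -- RTX 4070 Ti (192-bit, 21 Gbps GDDR6X)

# Data rate plain GDDR6 would need on the same 192-bit bus to match the 7800 XT:
print(624.0 * 8 / 192)            # 26.0 Gbps -- well beyond the GDDR6 actually shipping
```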
 
Let's compare the 7800 XT and the 4070 Ti. The former uses a 256-bit bus with 19.5 Gbps GDDR6, resulting in 624.1 GB/s of total bandwidth. The latter uses a 192-bit bus with 21 Gbps GDDR6X, resulting in 504.2 GB/s. To achieve similar results with GDDR6 (non-X), Nvidia would have had to use a 256-bit bus, which means a larger GPU die and more VRAM chips on a more complex PCB. GDDR6X saves costs on these fronts.
and runs hotter LMFAO, pretty good for consumers (except for the idiots who go wet)
 
Looking at benchmarks, at Full HD there is no big difference, maybe even zero, but at 4K you can see the difference. This is my opinion - AMD made the so-called Infinity Cache to get performance at Full HD and 2K resolutions.

Good thing I bought popcorn in different flavors at the shop yesterday :D.
 
Good thing I bought popcorn in different flavors at the shop yesterday :D
I've got my empty bag from last night beside me; it smells gross :rolleyes:
 
No, Nvidia used GDDR5X to nuke AMD's failing HBM-oriented attack on the performance crown and to keep providing high-end GPUs amidst a crypto and mining 'crisis'.

And succeeded, we might add. These X's exist because GDDR5 and 6 didn't fit Nvidia's strategy and gen-to-gen performance-increase plans. Pascal was in part a success because Nvidia could make a very cheap GPU line, and GDDR5X was part of that cost reduction. Similarly, GDDR6X enabled Ampere, even despite a major hit to TDPs.
That's basically what I said.

The X variant is something different that entices people to want the product. If they just used basic GDDR, there would be less of a draw and, as such, lower prices.

The failed HBM experiment was a bad move on AMD's part trying to do the same, but luckily they reverted to basic GDDR afterwards.
 