
GDDR5 Memory - Under the Hood

HTC

In the graphics business, there's no such thing as too much memory bandwidth. Real-time graphics always wants more: more memory, more bandwidth, more processing power. Most graphics cards on the market today use GDDR3 memory, a graphics-card optimized successor to the DDR2 memory common in PC systems (it's mostly unrelated to the DDR3 used in PC system memory).

A couple years ago, ATI (not yet purchased by AMD) began promoting and using GDDR4, which lowered voltage requirements and increased bandwidth with a number of signaling tweaks (8-bit prefetch scheme, 8-bit burst length). It was used in a number of ATI graphics cards, but not picked up by Nvidia and, though it became a JEDEC standard, it never really caught on.

AMD's graphics division is at it again now with GDDR5. Working together with the JEDEC standards body, AMD expects this new memory type to become quite popular and eventually all but replace GDDR3. Though AMD plans to be the first with graphics cards using GDDR5, the planned production by Hynix, Qimonda, and Samsung speaks to the sort of volumes that only come with industry-wide adoption. Let's take a look at the new memory standard and what sets it apart from GDDR3 and GDDR4.

Source: ExtremeTech
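
As a rough sketch of how the deeper prefetch mentioned in the excerpt translates into per-pin data rate, here is a toy calculation; the clock figures and prefetch depths below are illustrative assumptions about typical parts of that era, not values taken from the article:

# Toy model of a prefetch architecture: the memory array hands the interface
# `prefetch` bits per pin every array-clock cycle, and the interface serializes
# them onto the pin. All clock figures are illustrative assumptions, not specs.

def per_pin_rate_gbps(array_clock_mhz: float, prefetch_bits: int) -> float:
    """Approximate per-pin data rate in Gbit/s."""
    return array_clock_mhz * prefetch_bits / 1000.0

print(per_pin_rate_gbps(500, 4))  # GDDR3-style 4-bit prefetch -> ~2.0 Gbit/s per pin
print(per_pin_rate_gbps(450, 8))  # GDDR5-style 8-bit prefetch -> ~3.6 Gbit/s per pin

Multiply the per-pin rate by the bus width (in bits, divided by 8) and you get the peak bandwidth figures vendors quote.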
 
But what's the point when only one company uses it, and that too on its 'top-of-the-line' product (HD4870), while another company used GDDR3 across 5 generations of products?
 
But what's the point when only one company uses it, and that too on its 'top-of-the-line' product (HD4870), while another company used GDDR3 across 5 generations of products?

Yea I know. But really, there's no real difference in gaming between GDDR3, 4, and 5 yet.
 
Just have to weigh the costs between things. I guess one of the big questions would be if it's cheaper to go GDDR5 with a 256-bit bus or to go GDDR3 with a 512-bit bus.
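
To put rough numbers on that trade-off, here's a quick back-of-the-envelope comparison; the per-pin data rates are assumptions typical of the period, not figures from this thread:

# Peak bandwidth (GB/s) = per-pin data rate (Gbit/s) x bus width (bits) / 8 bits per byte.
# The per-pin rates used below are illustrative assumptions, not official specs.

def peak_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return per_pin_gbps * bus_width_bits / 8.0

print(peak_bandwidth_gbs(2.0, 512))  # GDDR3 at ~2.0 Gbit/s per pin, 512-bit bus -> 128.0 GB/s
print(peak_bandwidth_gbs(3.6, 256))  # GDDR5 at ~3.6 Gbit/s per pin, 256-bit bus -> 115.2 GB/s

The two options land in the same bandwidth ballpark, so the choice really does come down to PCB complexity and routing cost versus memory chip cost.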
 
Have you dudes checked the page? There's more, you know!
 
But what's the point when only one company uses it, and that too on its 'top-of-the-line' product (HD4870), while another company used GDDR3 across 5 generations of products?

Have you read the link?
 
In the graphics business, there's no such thing as too much memory bandwidth.

Too true, the increased speed doesn't hurt either :p
 
That was a pretty good read. In the end they figure GDDR5 to be as cost effective as GDDR3, so why not just replace the whole lot of the stuff? It'd be overkill for weaker cards, but they'd probably get better pricing if they went exclusive GDDR5 with Samsung, Qimonda, and Hynix.

Every card I've owned has always benefited more from a higher memory clock than core. I'm down with a 4870 if they got the GDDR5, but I'd rather stick with a 4850 if they get the same stuff.
 
Yea I know. But really, there's no real difference in gaming between GDDR3, 4, and 5 yet.

Says who? GDDR5 has never been seen in a real card.

And GDDR4 is better than GDDR3, it's just offset by the crappy bus widths nvidia and ATi use.

Also, it won't cost more. Think about it: GDDR5 costs slightly more than GDDR4, OK, but GPUs are getting smaller and more of them fit on a wafer, so each one costs less. In the end, counting everything overall, the graphics card will be cheaper to make.

And to btarunr: because ATi looks to the future (and generally fails lol), whereas nvidia is still stuck with its brute-force method (bigger, badder GPUs).
 
There was some news where someone stated that it's more profitable to produce flash memory for wannabe SSDs instead of GDDR5, so this "shortage" might last for quite some time :mad:
 
What shortage?

Switching to GDDR5 would mean smaller chips, fewer PCB layers, more efficient bus widths, and fewer "shortages".
 
Says who? GDDR5 has never been seen in a real card.

And GDDR4 is better than GDDR3, it's just offset by the crappy bus widths nvidia and ATi use.

Also, it won't cost more. Think about it: GDDR5 costs slightly more than GDDR4, OK, but GPUs are getting smaller and more of them fit on a wafer, so each one costs less. In the end, counting everything overall, the graphics card will be cheaper to make.

And to btarunr: because ATi looks to the future (and generally fails lol), whereas nvidia is still stuck with its brute-force method (bigger, badder GPUs).

Yeah: compare the die sizes of both nVidia's and ATI's next gen cards :twitch:
 
And to btarunr: because ATi looks to the future (and generally fails lol), whereas nvidia is still stuck with its brute-force method (bigger, badder GPUs).

I'm just looking at the near future of GDDR4/5 memory: a tiny minority of cards actually use them, and those come from ATI, which commands less than half of the discrete graphics market share. Since NV makes powerful GPUs that end up faring better than the competition, they needn't use better memory and end up using GDDR3, which is dirt cheap these days....profit. Whereas ATI uses GDDR4/5 more to build up aspirational value for their products. They need performance increments to come from wherever they can manage, so stronger, more expensive memory gets used. It's expensive because companies like Qimonda are pushed into making memory that is produced on a small scale, at lower profit.
 
I'm just looking at the near future of GDDR4/5 memory: a tiny minority of cards actually use them, and those come from ATI, which commands less than half of the discrete graphics market share. Since NV makes powerful GPUs that end up faring better than the competition, they needn't use better memory and end up using GDDR3, which is dirt cheap these days....profit. Whereas ATI uses GDDR4/5 more to build up aspirational value for their products. They need performance increments to come from wherever they can manage, so stronger, more expensive memory gets used. It's expensive because companies like Qimonda are pushed into making memory that is produced on a small scale, at lower profit.

Ok, but did you read the rest of the article? Yes, the memory itself is more expensive, but it allows for less complex PCB designs (lower cost), die shrinks (lower cost), and the experience of producing dies at 55 nm (lower cost).

So all in all, there won't be that much of a price hike, if any.

Not only that, but GDDR5 is going to be a bigger performance jump than going from GDDR3 to GDDR4, just because GDDR3 is dirt cheap doesn't make it better to use on your next generation GPUs.

And you're wrong, you NEED to pair a strong GPU with stronger memory. With GPUs getting more and more powerful, they need more bandwidth, and GDDR5 is the next logical step.
It's just like in a PC: if you start bottlenecking the GPU, it doesn't matter how powerful you make it, because the GDDR3 will be holding it back.
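
A crude way to picture that bottleneck argument is a toy roofline-style estimate: achievable throughput is capped by whichever of compute or memory bandwidth runs out first. Every number below is made up purely for illustration, not a measurement of any real card:

# Toy roofline-style model: achievable throughput is the smaller of raw compute
# and what the memory system can feed (arithmetic intensity = flops per byte).
# Every figure here is an illustrative assumption.

def achievable_gflops(peak_gflops: float, bandwidth_gbs: float, flops_per_byte: float) -> float:
    """Throughput once the memory ceiling is taken into account."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

print(achievable_gflops(500, 64, 4))    # 256 GFLOP/s -- capped by 64 GB/s of memory
print(achievable_gflops(1000, 64, 4))   # still 256 GFLOP/s -- doubling compute didn't help
print(achievable_gflops(1000, 115, 4))  # 460 GFLOP/s -- faster memory raises the ceiling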
 
Exactly DN, the memory price will be higher but the PCB price offsets that.

Also, the more complex the PCB, the more likely you are to have failures in the PCB itself due to production flaws.

The more complex something is, the more likely it is to fail. This has always been true.

Now as for a strong GPU not needing strong RAM.........I can't say what I am thinking without getting infracted, so I will put it a different way.

Only a fanboi would say that a strong GPU doesn't need good/strong RAM, and in this case I see a lot of nvidiots saying that kind of crap because nvidia is sticking with GDDR3. Honestly, the reason they are sticking with GDDR3 is because IT'S CHEAP and they have large contracts that make it even cheaper, not because it's the best tech for the job, not because it gives the best performance, but because they want to make as much per card as they can. Look at their card prices, they are always TOO DAMN HIGH. I have an 8800gt 512 (it's being replaced now...); it was an "ok" buy at the time, but the price most people were paying for them was 320 bucks, which is insane........

Ok, the 9600 and 8800gt/9600gso are decently priced, BUT they are still high for what you're getting in my view....the 3870 would be a better buy at that price range and far more future-proof.

Blah, I don't want to argue, I'm tired, it's late, and I need some rest........

Read the article, and understand that lower power and higher clocks/bandwidth mean that you don't need to make an insanely complex card that costs a ton to build; you can build a cheaper card (PCB) and get the same or better performance.

Also note that 3 makers are already on board, and more would follow suit too.......can't wait to see this stuff in action.
 
And you're wrong, you NEED to pair a strong GPU with stronger memory. With GPUs getting more and more powerful, they need more bandwidth, and GDDR5 is the next logical step.
It's just like in a PC: if you start bottlenecking the GPU, it doesn't matter how powerful you make it, because the GDDR3 will be holding it back.

Well that's what NVidia chooses not to do. They're making the GT200 use GDDR3, but part of the reason is also that the GPU itself is very expensive ($125/die, $150/package), so that's $150 for the GPU alone. More in this contentious article. So NVidia is using GDDR3 more for economic reasons. And if this is the scheme of things, they'll keep themselves away from GDDR4/5 for quite some time, even though they're already JEDEC-standard technologies.
 
Well that's what NVidia chooses not to do. They're making the GT200 use GDDR3, but part of the reason is also that the GPU itself is very expensive ($125/die, $150/package), so that's $150 for the GPU alone. More in this contentious article. So NVidia is using GDDR3 more for economic reasons. And if this is the scheme of things, they'll keep themselves away from GDDR4/5 for quite some time, even though they're already JEDEC-standard technologies.

Have you ever wondered WHY nvidia is making such an expensive GPU? As I've said before, it's just a brute-force method to make a more powerful GPU. Because they didn't stop, scrap what they have, and create a really efficient GPU architecture (like ATi did), they don't stand a cat in hell's chance against ATi this coming year.

Considering how powerful the 4870 is meant to be, I honestly don't see anyone with any knowledge of GPUs going for nvidia with that hefty a price tag...
 
On paper the 2900XT should have crushed all comers; instead it barely put up a fight against the 8800GTS 640. AMD can look great on paper, but give me some proof they can compete. As for the 3870 being future-proof, I beg to differ. It has 5 groups of 64 shaders. Only 1 in each group can do complex shader work, 2 can do simple, one does integer and the other does floating point. Now in the real world this means that 128 of those shader units won't be used, if at all: the floating point and integer units, and the simple shaders, go unused thanks to AMD's failure to supply a compiler for their cards. Let it look as good as you want, but if AMD can't supply a code compiler so code works right on their design, they are still screwed.
 
But what's the point when only one company uses it, and that too on its 'top-of-the-line' product (HD4870), while another company used GDDR3 across 5 generations of products?
What's the point of Intel @ DDR3? :P
Adopting new technologies is good.
Even the memory companies agree and will make GDDR5 a standard, so what's the problem?
 
The problem is GDDR5 will suffer like GDDR4 did when it was new: insane latency. Also, Nvidia started work on the GT200 right after the G80 shipped; at the time GDDR4 wasn't viable and GDDR5 was unheard of. Do you expect Nvidia to stop working on their next gen just to include a new memory controller?
 
The problem is GDDR5 will suffer like GDDR4 did when it was new: insane latency. Also, Nvidia started work on the GT200 right after the G80 shipped; at the time GDDR4 wasn't viable and GDDR5 was unheard of. Do you expect Nvidia to stop working on their next gen just to include a new memory controller?
Insane what?
It's not ATI's problem that nvidia lacks a proper memory controller ^^
 
The problem is GDDR5 will suffer like GDDR4 did when it was new: insane latency. Also, Nvidia started work on the GT200 right after the G80 shipped; at the time GDDR4 wasn't viable and GDDR5 was unheard of. Do you expect Nvidia to stop working on their next gen just to include a new memory controller?

ATI started work on the R700 architecture at about the same time they released the HD 2900XT. Granted, GDDR5 was unheard of then, but later the RV770 did end up with a GDDR5 controller, didn't it? That goes to show that irrespective of when a company starts work on an architecture, something as modular as a memory controller can be added to it even weeks before the designs are handed over to the fabs for an ES and eventually mass production.

So, when NV started work on the GT200 is a lame excuse.
 
ATI made GDDR5, bta, didn't you get the memo? They have most likely been working on it just as long. I approve of what Nvidia is doing: using known tech with a wider bus is just as effective, and there is less chance of massive latency issues like there will be with GDDR5. I prefer tried and true; this will be the 2nd time AMD has tried something new with their graphics cards, and this will be the 2nd time they fail. I was dead right about the 2900XT failing, I said it would before it even went public, and I'll be right about this.
 
GDDR5 is a JEDEC standard. Irrespective of who makes it, any licensed company can use it. HD4870 is more of something that will beat the 9800 GTX and come close to the GTX 260. It's inexpensive, cool, efficient. Don't try to equate the HD4870 to the GTX 280, you'll end up comparing a sub-400 dollar card to something that's 600+ dollars. The better comparison would be the HD4870 X2, which is supposed to be cheaper than the GTX 280 and has win written all over it.
 
GDDR5 is a JEDEC standard. Irrespective of who makes it, any licensed company can use it. HD4870 is more of something that will beat the 9800 GTX and come close to the GTX 260. It's inexpensive, cool, efficient. Don't try to equate the HD4870 to the GTX 280, you'll end up comparing a sub-400 dollar card to something that's 600+ dollars. The better comparison would be the HD4870 X2, which is supposed to be cheaper than the GTX 280 and has win written all over it.

Yes bta, exactly, but you gotta remember candle has an irrational hate for amd/ati; logic: he fails it....

Though I'm shocked to see you say the 4870/4870x2 has win written all over it.....did you forget your nvidia pills today?
 