
GDDR5 Memory - Under the Hood

Though I'm shocked to see you say the 4870/4870X2 has win written all over it... did you forget your NVIDIA pills today?

Wutha :eek: how did you know?

It so happened that instead of the usual shipment of NVidiocy pills, they sent me a can of whoopass (that was supposed to go to Intel). Whoopass is a very tasty BBQ sauce. :p

 

Ah, not into excessively spicy, but it's better than those pills you've been taking ;) guess it's helping burn their effects out of your system :D
 
ATI made GDDR5, bta, didn't you get the memo? They have most likely been working on it just as long. I approve of what Nvidia is doing; using known tech with a wider bus is just as effective, and there is less chance of massive latency issues like there will be with GDDR5. I prefer tried and true. This will be the 2nd time AMD has tried something new with their graphics cards, and this will be the 2nd time they fail. I was dead right about the 2900XT failing, I said it would before it even went public, and I'll be right about this.

I prefer better.

GDDR3 has been tried, tested, and beaten to death with a rather large stick. I was right about the 2900XT failing too, but I'm still with ATi. Why? Because I'm with a company that likes to look ahead rather than beat people to death with ever-increasing GPU dies.

Nvidia refuses to even consider DX10.1, which is not a massive jump (and don't even bother with the whole "DX10 isn't used yet" argument; it's called the future, google it if you don't understand the concept), but is all about efficiency and performance improvements.

Nvidia wants none of it; that, to me, is poor company policy.

ATi beat Nvidia with the X1950 series, and they will do it again with the 48xx series.
 

Well, we've all got a good idea of why NV won't touch DX10.1: it seems they simply can't implement it, by the looks of things at least. I believe that the DX10.1 we have now is what DX10 was originally intended to be, before parts of the specification were made optional instead of compulsory. Elite Bastards has a few good articles on the workings of DX10 and 10.1. From those articles, it seems painfully obvious that NV threw a hissy fit over DX10 because of the memory virtualisation. If you remember those early rumours of the 2900, it was said to be coming in an X2 form, but that obviously never happened. This also makes me think that RV670 was really meant to be R600, but they had to stop production due to the spec change in DX10. I reckon the 2900s are technically capable of DX10.1, but it's disabled or something. In the end, if DX10.1 were as pointless as NV makes out, then Assassin's Creed wouldn't have got that large performance boost, and NV wouldn't be complaining to devs who support it, i.e. Assassin's Creed and 3DMark Vantage. This, among a few other reasons, is why I won't be touching an NV card for a long time to come.
 
GDDR5 = fewer pins, higher performance, lower power, cheaper = cheaper graphics cards for the CUSTOMER

Like, say, GDDR5 on a 256-bit bus is 70% faster than GDDR3 on a 512-bit bus, and the 256-bit bus costs 50% less; heck, it's less complex and can result in a cooler chip.
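Whether that 70% figure holds depends entirely on the clocks you assume for each memory type, since peak bandwidth is just the effective per-pin data rate multiplied by the bus width. A minimal sketch of the arithmetic, with illustrative clock figures rather than real card specs:

```python
# Peak memory bandwidth = effective data rate per pin x bus width / 8.
# Clock figures below are illustrative assumptions, not actual 48xx/GT200 specs.

def bandwidth_gb_s(effective_mhz: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s for a given effective data rate (MHz) and bus width (bits)."""
    return effective_mhz * 1e6 * bus_bits / 8 / 1e9

gddr5_256 = bandwidth_gb_s(3600, 256)   # hypothetical 3.6 Gbps GDDR5 on a 256-bit bus
gddr3_512 = bandwidth_gb_s(2000, 512)   # hypothetical 2.0 Gbps GDDR3 on a 512-bit bus

print(f"GDDR5, 256-bit: {gddr5_256:.1f} GB/s")            # 115.2 GB/s
print(f"GDDR3, 512-bit: {gddr3_512:.1f} GB/s")            # 128.0 GB/s
print(f"Difference: {gddr5_256 / gddr3_512 - 1:+.0%}")    # about -10% at these clocks
```

At those assumed clocks the narrow GDDR5 bus actually comes up slightly short, and only pulls ahead once the effective rate climbs past roughly 4 Gbps; the pin-count and cost savings are the stronger part of the argument.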
 
Interesting article, enjoyed that. ^^

Some rather interesting points, but some ring truer than others; instead of immature fanboi-isms, we'll just have to wait and see, which I prefer to arguing with small-minded delinquents in forums. Not all of you are delinquents, obviously. ;)

I'm itching for Computex, as we might get a little insight into the 4k series from ATi and maybe something from the green camp as well.

Even if GDDR5 brings a lot to the table and can only mean good things, like I said, we'll have to see.
 
On paper the 2900XT should have crushed all comers; instead it barely put up a fight against the 8800GTS 640. AMD can look great on paper, but give me some proof they can compete. As for the 3870 being future-proof, I beg to differ. It has 64 groups of 5 shaders. Only 1 in each group can do complex shader work, 2 can do simple, one does integer and the other does floating point. Now in the real world this means that 128 of those shader units won't be used, if at all; the floating point and integer units, and the simple shaders, go unused thanks to AMD's failure to supply a compiler for their cards. Let it look as good as you want, but if AMD can't supply a code compiler so code works right on their design, they are still screwed.

Funny you say that, since the 2900XT ended up being faster than an 8800GTS G80 :rolleyes:. This has been common knowledge for some time now. You really do hate on ATi; I don't really like NV, but at least I have good reasons, nothing to do with their performance or making things up to suit my argument. Remember that link I sent you regarding the relative performance difference between 3870s and 8800GTs? About how the 8800GT appears faster, yet between resolution and the adding of AA and AF, the GT's frame drop is higher than the 3870's frame drop. Anyone who doesn't get what I'm saying, look at some of w1z's recent graphics card reviews. For example, at 1024x768 an 8800GT will be getting 180 FPS and a 3870 will be getting 150; move up to 1280x1024 with 2xAA and the 8800GT's frame rate drops to about 110, while the 3870 drops to about 100. We can tell the 8800GT is "faster", but for some reason its performance is hurt much more when increasing the resolution and adding AA. Realistically, the 8800GT is the poorer one of the two because it doesn't take much to drop its framerate so much. Yeah, the 8800GT is marginally faster overall, but those who argue that ATi's tech is crap and shader-based AA doesn't work need to actually look at things for themselves rather than just hear something and repeat it parrot-fashion as if it's their belief and/or opinion. :rolleyes: *cough*candle*cough* :D
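For anyone following the arithmetic there, the relative hit is easy to work out from the rough example figures quoted above (illustrative numbers, not measured results):

```python
# Relative frame-rate drop when stepping up resolution and enabling AA,
# using the rough example figures from the post above (not measured data).

def percent_drop(fps_before: float, fps_after: float) -> float:
    return (fps_before - fps_after) / fps_before * 100

print(f"8800GT: 180 -> 110 FPS, a {percent_drop(180, 110):.0f}% drop")  # ~39%
print(f"HD3870: 150 -> 100 FPS, a {percent_drop(150, 100):.0f}% drop")  # ~33%
```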
 
Interesting article, enjoyed that. ^^

Some rather interesting points, but some ring truer than others; instead of immature fanboi-isms, we'll just have to wait and see, which I prefer to arguing with small-minded delinquents in forums. Not all of you are delinquents, obviously. ;)

I'm itching for Computex, as we might get a little insight into the 4k series from ATi and maybe something from the green camp as well.

Even if GDDR5 brings a lot to the table and can only mean good things, like I said, we'll have to see.

When is Computex? I hope these do end up coming out on June 16th!

PS, I'm not a delinquent, am I? :( :p
 

Only if you want to be... ;)

Computex is next week.
 
Interesting article, enjoyed that. ^^

Some rather interesting points, but some ring truer than others; instead of immature fanboi-isms, we'll just have to wait and see, which I prefer to arguing with small-minded delinquents in forums. Not all of you are delinquents, obviously. ;)

I'm itching for Computex, as we might get a little insight into the 4k series from ATi and maybe something from the green camp as well.

Even if GDDR5 brings a lot to the table and can only mean good things, like I said, we'll have to see.

Comments such as this cause just as many problems on this forum as "fanboys". If you have a problem with another user, please use the report post button rather than inflaming situations with negative comments and attitude.
 
I have to agree with candle_86 (post #18). All of this new tech ATI is using is great and all, but the 2900XT made huge promises on paper as well. I think with all the new ideas ATI is implementing in the HD 4xxx series they will be extremely competitive, but I also think that nVidia's "brute force" approach will continue to serve them well this generation as it has before, though maybe not as well as they hope.
 

I think this is as far as they can go with the brute force method on the G80/G92 architecture unless they manage to sort out 55nm parts. Even when/if they do get to 55nm on GT200, there's only so much it can really do. I still think that even on 55nm they haven't got many options left other than to redesign their core for the next next-gen.
 
GDDR5 = fewer pins, higher performance, lower power, cheaper = cheaper graphics cards for the CUSTOMER

Like, say, GDDR5 on a 256-bit bus is 70% faster than GDDR3 on a 512-bit bus, and the 256-bit bus costs 50% less; heck, it's less complex and can result in a cooler chip.

Where do you get that idea? GDDR5 @ 3000 MHz on a 256-bit bus = 96 GB/s,

but GDDR3 @ 2000 MHz on a 512-bit bus = 128 GB/s.

There you have it: the ATI video memory would have to run at 4000 MHz on a 256-bit bus to tie a modern 512-bit GDDR3 bus.
 

You actually didn't read the article, did you?

Read the article, then try talking to us again.
 
Funny you say that, since the 2900XT ended up being faster than an 8800GTS G80 :rolleyes:. This has been common knowledge for some time now. You really do hate on ATi; I don't really like NV, but at least I have good reasons, nothing to do with their performance or making things up to suit my argument. Remember that link I sent you regarding the relative performance difference between 3870s and 8800GTs? About how the 8800GT appears faster, yet between resolution and the adding of AA and AF, the GT's frame drop is higher than the 3870's frame drop. Anyone who doesn't get what I'm saying, look at some of w1z's recent graphics card reviews. For example, at 1024x768 an 8800GT will be getting 180 FPS and a 3870 will be getting 150; move up to 1280x1024 with 2xAA and the 8800GT's frame rate drops to about 110, while the 3870 drops to about 100. We can tell the 8800GT is "faster", but for some reason its performance is hurt much more when increasing the resolution and adding AA. Realistically, the 8800GT is the poorer one of the two because it doesn't take much to drop its framerate so much. Yeah, the 8800GT is marginally faster overall, but those who argue that ATi's tech is crap and shader-based AA doesn't work need to actually look at things for themselves rather than just hear something and repeat it parrot-fashion as if it's their belief and/or opinion. :rolleyes: *cough*candle*cough* :D

So Nvidia takes a bigger hit; they stay faster in almost every benchmark and compete with the AMD price point, making AMD a bad buy right now. The 8800GS has the 3850 cornered, the 9600GT/9600GSO have the 3870 cornered, and quite frankly the 9800GX2 performs enough faster than the X2 that its price is justified. Just face it: right now AMD has nothing going for them.
 
I've read the article, and honestly, even if they can transmit that much per pin it doesn't mean all that much. It's still DDR at heart, and increased speed and reduced power always increase latency, so you can do all you want with the memory, but when latency is high you have to have things like this just to make it work. All this actually says is that the memory can dump data quickly, but it still falls under the constraints of the bus width and speed. No matter how many pins you have available, it's Double Data Rate, which means it transfers on both the rising and falling edges, and the speed of the memory and the bus width determine actual performance. These improvements will help negate the latency, but they had to do this because of latency; GDDR3 did the same thing, yet DDR 1000 and GDDR3 1000 were both just as fast, GDDR3 was just cheaper to use than DDR running at 1000 MHz. Quite honestly, there is more bandwidth to be had right now with a 512-bit bus than a 256-bit bus.
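One thing worth separating out in the latency argument: latency counted in clock cycles does go up as memory clocks higher, but latency measured in nanoseconds is cycles divided by clock, so it doesn't automatically get worse. A small sketch with made-up CAS figures (not published GDDR3/GDDR5 timings) shows the distinction:

```python
# Latency in cycles vs latency in nanoseconds.
# CAS figures are hypothetical round numbers, not real GDDR3/GDDR5 timings.

def cas_latency_ns(cas_cycles: int, clock_mhz: float) -> float:
    """Absolute CAS latency in nanoseconds = cycles / clock frequency."""
    return cas_cycles / (clock_mhz * 1e6) * 1e9

print(f"Slower memory, CL10 @ 1000 MHz: {cas_latency_ns(10, 1000):.1f} ns")  # 10.0 ns
print(f"Faster memory, CL18 @ 1800 MHz: {cas_latency_ns(18, 1800):.1f} ns")  # 10.0 ns
```

So a bigger CAS number on a faster clock can work out to the same absolute wait.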
 
You actually didn't read the article, did you?

Read the article, then try talking to us again.

Might as well give up, he's just a hater. Hell, if btarunr has to slap him for it and he STILL doesn't listen, then you KNOW he is far from rational.
 
ATI started work on the R700 architecture around the same time they released the HD 2900XT. Granted, GDDR5 was unheard of then, but later the RV770 did end up with a GDDR5 controller, didn't it? It goes to show that irrespective of when a company starts work on an architecture, something as modular as a memory controller can be added to it even weeks before they hand the designs over to the fabs for an ES and eventually mass production.

So the excuse about when NV started work on the GT200 is a lame one.

For the record, ATI started working on the R700 before the R480 (X850XT) launched, and I imagine the GT200 has been in development just as long. These designs don't roll out overnight, you know. lol
 
ATI made GDDR5, bta, didn't you get the memo? They have most likely been working on it just as long. I approve of what Nvidia is doing; using known tech with a wider bus is just as effective, and there is less chance of massive latency issues like there will be with GDDR5. I prefer tried and true. This will be the 2nd time AMD has tried something new with their graphics cards, and this will be the 2nd time they fail. I was dead right about the 2900XT failing, I said it would before it even went public, and I'll be right about this.

Increasing the bus size also increases the memory latency. I mean, seriously, it's like a bunch of schoolchildren arguing about who fired the first shot in the American Revolutionary War, like they'd be the experts on that. :roll:

edit: and all hail candle, the world's most supreme expert on graphics cards. He knows all, sees all, and predicts the future!

Seriously dude, don't get all high and mighty in front of your computer screen; you're not the expert on this subject and you frequently show it. I'd think twice if I were you about making vast predictions on which cards will do well and which ones won't.
 
Where do you get that idea? GDDR5 @ 3000 MHz on a 256-bit bus = 96 GB/s,

but GDDR3 @ 2000 MHz on a 512-bit bus = 128 GB/s.
(...)
You actually didn't read the article, did you?

Read the article, then try talking to us again.
Darknova,
I'm not sure what you are trying to say; candle_86's calculations are correct, and if you're referring to the following paragraph:
extremetech said:
Bandwidth first: A system using GDDR3 memory on a 256-bit memory bus running at 1800MHz (effective DDR speed) would deliver 57.6 GB per second. Think of a GeForce 9600GT, for example. The same speed GDDR5 on the same bus would deliver 115.2 GB per second, or twice that amount.
This is just BS. Or, more likely, an unintentional lapse from the author. [edit: no, it isn't] It will soon be changed to something like [edit: no, it won't]:
extremetech said:
Bandwidth first: A system using GDDR3 memory on a 256-bit memory bus running at 1800MHz (effective DDR speed) would deliver 57.6 GB per second. Think of a GeForce 9600GT, for example. A double speed GDDR5 on a bus half as wide would deliver an equal amount.
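Either phrasing checks out with the same arithmetic; a quick sketch using the article's 1800 MHz effective GDDR3 figure and, as an assumption, GDDR5 at exactly double that rate:

```python
# Bandwidth scales linearly with both per-pin data rate and bus width, so doubling
# the rate on the same bus doubles bandwidth, while doubling the rate on half the
# bus leaves it unchanged.

def gb_per_s(effective_mhz: float, bus_bits: int) -> float:
    return effective_mhz * 1e6 * bus_bits / 8 / 1e9

print(gb_per_s(1800, 256))  # GDDR3, 256-bit bus          ->  57.6 GB/s
print(gb_per_s(3600, 256))  # double-rate GDDR5, same bus -> 115.2 GB/s
print(gb_per_s(3600, 128))  # double-rate GDDR5, half bus ->  57.6 GB/s
```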
 
Last edited:
Largon, that's the SAME thing, in different words.
 
Gah. Fixed.
Too much sun for me today. And it's almost 1AM here...
:)
 
Memory bandwidth isn't important enough to garner such attention. Improvements in core architecture lead to much greater advances than memory technology. At the high end, even with 256-bit memory buses, GDDR3 at 2 GHz still gives us 64 GB/s of bandwidth, which 90%+ of the time is not fully utilized because the bottleneck lies in another part of the chip.
I guess my point is that on the current generation of products (and I'm guessing the next, but we will have to wait and see) the memory bandwidth advances supplied by GDDR4/5 over GDDR3 are negligible in comparison to the obvious bottlenecks in core GPU design.
 