Wednesday, May 21st 2008

AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards

AMD today announced the first commercial implementation of Graphics Double Data Rate, version 5 (GDDR5) memory in its forthcoming next-generation ATI Radeon graphics cards. The high-speed, high-bandwidth GDDR5 technology is expected to become the new memory standard in the industry, and its performance and bandwidth are key enablers of The Ultimate Visual Experience, unlocking new GPU capabilities. AMD is working with a number of leading memory providers, including Samsung, Hynix and Qimonda, to bring GDDR5 to market.

Today's GPU performance is limited by the rate at which data can be moved on and off the graphics chip, which in turn is limited by the memory interface width and die size. The higher data rates supported by GDDR5 - up to 5x that of GDDR3 and 4x that of GDDR4 - enable more bandwidth over a narrower memory interface, which can translate into superior performance from smaller, more cost-effective chips. AMD's senior engineers worked closely with the industry standards body JEDEC to develop the new memory technology and define the GDDR5 specification.
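As a rough back-of-the-envelope illustration of that trade-off, peak memory bandwidth is simply interface width times per-pin data rate. The sketch below uses hypothetical example figures, not AMD specifications:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Theoretical peak bandwidth in GB/s: width (bits) x per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Hypothetical example values, not official specifications:
narrow_gddr5 = peak_bandwidth_gb_s(256, 3.6)   # 256-bit GDDR5 at 3.6 Gbit/s per pin -> 115.2 GB/s
wide_gddr3 = peak_bandwidth_gb_s(512, 1.66)    # 512-bit GDDR3 at 1.66 Gbit/s per pin -> ~106 GB/s

print(f"256-bit GDDR5: {narrow_gddr5:.1f} GB/s vs 512-bit GDDR3: {wide_gddr3:.1f} GB/s")
```

On these example numbers, a 256-bit GDDR5 configuration already edges out a 512-bit GDDR3 one while routing half as many data traces on the PCB.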

"The days of monolithic mega-chips are gone. Being first to market with GDDR in our next-generation architecture, AMD is able to deliver incredible performance using more cost-effective GPUs," said Rick Bergman, Senior Vice President and General Manager, Graphics Product Group, AMD. "AMD believes that GDDR5 is the optimal way to drive performance gains while being mindful of power consumption. We're excited about the potential GDDR5 brings to the table for innovative game development and even more exciting game play."

The introduction of GDDR5-based GPU offerings continues AMD's tradition of technology leadership in graphics. AMD was the first to bring a unified shader architecture to market, the first to support Microsoft DirectX 10.1 gaming, the first to move to smaller process nodes such as 55 nm, the first with integrated HDMI with audio, and the first with double-precision floating point calculation support.

AMD expects PC graphics to benefit from the increase in memory bandwidth across a variety of intensive applications. PC gamers will have the potential to play at high resolutions and image quality settings with superb overall gaming performance, while PC applications stand to benefit from faster load times, superior responsiveness and smoother multi-tasking.

"Qimonda has worked closely with AMD to ensure that GDDR5 is available in volume to best support AMD's next-generation graphics products," said Thomas Seifert, Chief Operating Officer of Qimonda AG. "Qimonda's ability to quickly ramp production is a further milestone in our successful GDDR5 roadmap and underlines our predominant position as innovator and leader in the graphics DRAM market."

GDDR5 for Stream Processing
In addition to the potential for improved gaming and PC application performance, GDDR5 also holds a number of benefits for stream processing, where GPUs are applied to complex, massively parallel calculations. Such calculations are prevalent in the high-performance computing, financial and academic segments, among others. AMD expects that the increased bandwidth of GDDR5 will greatly benefit certain classes of stream computations.
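Many stream kernels are bandwidth-bound rather than compute-bound, which is why the extra bandwidth matters. As a minimal CPU-side sketch of the idea (NumPy here, not AMD stream-computing code, and the array size is an arbitrary choice), a SAXPY-style operation does only two floating-point operations per element but moves roughly twelve bytes per element, so its throughput tracks memory bandwidth:

```python
import time
import numpy as np

n = 50_000_000                                   # ~200 MB per float32 array (arbitrary size)
a = np.float32(2.5)
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

start = time.perf_counter()
y += a * x                                       # SAXPY: 2 FLOPs per element, ~12 bytes of traffic per element
elapsed = time.perf_counter() - start

bytes_moved = 3 * 4 * n                          # read x, read y, write y (ignoring NumPy temporaries)
print(f"~{2 * n / elapsed / 1e9:.2f} GFLOP/s, ~{bytes_moved / elapsed / 1e9:.1f} GB/s effective")
```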

New error detection mechanisms in GDDR5 can also help increase the accuracy of calculations by identifying transmission errors and re-issuing commands to get valid data. This level of reliability is not available with other GDDR-based memory solutions today.
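The underlying idea is detect-and-retry: each transfer is checked against a checksum and re-issued if the check fails. The Python sketch below only illustrates that pattern; the callback name, the use of zlib.crc32 and the retry policy are assumptions, not the actual CRC or resend protocol defined in the JEDEC GDDR5 specification.

```python
import os
import zlib

MAX_RETRIES = 3

def read_burst_with_retry(read_burst, max_retries=MAX_RETRIES):
    """Re-issue a read until the data checks out, or give up.

    `read_burst` is a hypothetical callable returning (data_bytes, checksum_from_device);
    zlib.crc32 merely stands in for the link-level CRC a real memory interface would use.
    """
    for _ in range(max_retries):
        data, reported_crc = read_burst()
        if zlib.crc32(data) == reported_crc:
            return data                          # transfer arrived intact
        # mismatch: a transmission error was detected, so the command is re-issued
    raise IOError(f"data still corrupt after {max_retries} retries")

# Toy demonstration: the first transfer arrives with a single-bit error, the retry succeeds.
payload = os.urandom(32)
attempts = {"count": 0}

def flaky_read():
    attempts["count"] += 1
    sent = bytearray(payload)
    if attempts["count"] == 1:
        sent[0] ^= 0x01                          # corrupt one bit on the first attempt
    return bytes(sent), zlib.crc32(payload)      # checksum computed on the original data

assert read_burst_with_retry(flaky_read) == payload
```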
Source: AMD

135 Comments on AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards

#76
FR@NK
XooMWater volume through tubing is what really matters, and 3/4" has lower laminar resistance than 7/16". However, the difference in volumetric throughput between the two is minimal enough that 7/16" is preferred due to simplicity in tubing runs.
Maybe because the 3/4" tubing has more surface area on the inter area of the tube which causes more friction on the coolant.....lol GDDR5 will be able to have smaller 256bit interface yet still have more bandwidth then a 512-bit 2900xt. I have no idea what this has to do with a garden hose tho.
Posted on Reply
#77
Rebo&Zooty
candle_86hows that the case, the 2900XT and 3870 looked great on paper, but they came out and didnt stand a chance.

The x1800XT didnt have a fight agasint the 7800GTX 512 and the 1900XTX tied it and was again beaten a few weeks later by the 7900GTX and then the 7950GX2. When the 1950XTX showed up the 8800GTX arrived 1 month later. ATI hasnt been putting up a good showing for awhile. Heck look at the x1600 or hd2600 cards compared to there direct compititon
oh, dont make me link every farking review out there showing the 7950gx2 for the POS it was, your such an nvidiot..........

first the gx2 vs the 1950x2 the gx2 looses not just in perf, but in support, the gx2 is trash, nvidia made it to keep top numbers in a few games till the 8800 came out thats it, then they fully dumped its support, sure the drivers work, but quad sli? and even sli perf of the gx2 vs true sli was worse, thats sad since its basickly 2 cards talking dirrectly.

as to the x1900, it STOMPED the 7900/7950, cards that ON PAPER should have been stronger, 24 pipes vs 16 for example was what ppl where using to "proove" that the nvidia cards WOULD kill the x1900 range of cards.

i would make another massivly long post, but you would just ignore it like all fanboi's do, or resorte to insults.
Posted on Reply
#78
jaydeejohn
EastCoasthandleWell, think of it as this:
You have a water cooling setup and want to decide on the size of the tubing. You can go with 3/4" inner diameter tubing, but you run the risk of a slower flow rate due to the size of the pump's barb only being 1/2" and its power output (more/less). Or you can get a tube with a 7/16" inner diameter (which is slightly smaller than 1/2"), which should maximize your flow rate. I believe this is what the following means:

If I am wrong could someone clarify this? :o
Imagine 512 connections/wires coming from the bus to everywhere it needs to go for the output. That's a lot of wires, and voltage control. With GDDR5 you have the ability to push the same or a little more info faster than a 512-bit bus without all those wires, in this case just 256. Also, GDDR5 "reads" the length of each connection, allowing for the correct voltage through the wire/line. This is important, so it's more stable, keeping frequencies within proper thresholds, and it also eliminates the cost of having to go the more expensive route. Hope that helps
Posted on Reply
#79
EastCoasthandle
jaydeejohnImagine 512 connections/wires coming from the bus to everywhere it needs to go for the output. That's a lot of wires, and voltage control. ...
Thanks for the info :toast:
Posted on Reply
#80
jaydeejohn
YW. This should dramatically cut down the costs of the pcbs, and still provide great performance
Posted on Reply
#81
jbunch07
jaydeejohnYW. This should dramatically cut down the costs of the pcbs, and still provide great performance
that is, if the cost of the GDDR5 doesn't cripple them... :(
Posted on Reply
#82
EastCoasthandle
jaydeejohnYW. This should dramatically cut down the costs of the pcbs, and still provide great performance
Agreed... I still wonder what kind of performance is had with a 512-bit bus. I hope we find out with the X2 :D
Posted on Reply
#83
HTC
EastCoasthandleAgreed... I still wonder what kind of performance is had with a 512-bit bus. I hope we find out with the X2 :D
And, in theory, reduce the heat it creates too!
Posted on Reply
#84
Rebo&Zooty
how long till the nvidia fanbois say that ati should have gone 512-bit and should have more pipes/ROPs?

funny, since the X1900/1950 XT/XTX cards had 16 pipes/ROPs vs the 7900 having 24, and the 7900 got pwned........

meh, i'm sick of the "ati sucks because *add bullshit FUD here*" or the "nvidia sucks because *add bullshit FUD here*"

they both have their flaws and their good points.

the one thing i almost always see out of ati since the 8500 has been INNOVATION. it hasn't always worked out the way they intended, the 2900/3800 are the prime example. the main issue was that ati designed the R600/670 cores for DX10, not DX9, so they followed what microsoft wanted to do with DX10+, that is, to remove dedicated AA hardware and use shaders to do the AA and other work. of course this led to a problem: DX9 support was an afterthought, and as such gave worse performance when you turned AA on.

ati thought, like many other companies, that vista would take off and be a huge hit, just like XP did when it came out, and with vista being a big hit, DX10+ games would have been out en masse. but vista fell on its face, and ati still had this pure DX10 chip already in the pipe, so they ran with it KNOWING it would have its issues/quirks in DX9 games.

Nvidia on the other hand effectively took the opposite approach with the G80/92 cores: they built a DX9 part with DX10 support as an afterthought. in this case it was a good move, because without vista being a giant hit, game developers had no reason to make true pure DX10 games.

nvidia didn't go DX10.1 because it would have taken some redesign work on the G92, and they wanted to keep their investment in it as low as possible to keep the profit margin as high as possible. it's why they lowered the bus width and complexity of the PCB, it's why they didn't add DX10.1 support, it's why the 8800 GT's reference cooler is the utter piece of shit it is (i have one, i can say for 100% certain the reference cooler's a hunk of shit!!!!)

now i could go on and on and on about each company, point is they have both screwed up.

biggest screwups for each

ATI: the 2900 (R600) not having a dedicated AA unit for DX9 and older games.

nVidia: the GeForce 5/FX line, horrible DX9 support that game developers ended up having to avoid because it ran so badly, forcing any FX owner to run all his/her games in DX8 mode. also the 5800 design was bad, high-end RAM with a small bus and an ungodly loud fan does not a good card make.

that's how i see it. at least ATI never put out a card touted as being capable of something that in practice it couldn't do even passably well......
Posted on Reply
#85
Rebo&Zooty
jbunch07that is, if the cost of the GDDR5 doesn't cripple them... :(
doubt it will have any real impact from the card makers' end, they buy HUGE quantities of chips, getting a price that's FAR lower than the premium we consumers pay for that same ram.

I had an article before my last hdd meltdown that showed the actual cost per memory chip for videocards, DDR vs DDR2 vs DDR3 vs DDR4.

DDR4 was more expensive, but that was mostly due not to it being new but to it being in short supply at the time. still, the price you paid to get it on a card was extremely exaggerated, of course it's "new" so they charge extra for it.

the cost of 2 vs 3, again, wasn't that large, same with DDR vs DDR2. again, we are talking about companies that buy hundreds of thousands if not millions of memory chips at a time from their suppliers; those suppliers want to keep on the good side of their customers so they keep making a profit, so they give them far better prices than they would ever admit to an outside party.

also the more you buy, the lower the per-unit cost is, same as with most things. go check supermediastore, if you buy 600 blanks the price is quite a bit lower than buying 50 or 100 at a time ;)
Posted on Reply
#86
jbunch07
yea this is true!
the vendors get ram at a nice price because they buy such large orders!
Posted on Reply
#87
Rebo&Zooty
i would love to see somebody try that new Qimonda...whatever ram that's higher density per chip. would be interesting to see a videocard that had 2GB of high-bandwidth ram......or hell, use it for onboard video (ohhh that could rock, 4 chips for 512-bit (or something like that) would make onboard a hell of a lot better....)
Posted on Reply
#88
candle_86
Rebo&Zootyhow long till the nvidia fanbois say that ati should have gone 512-bit and should have more pipes/ROPs? ...
The AA on the shaders is a stupid, bad idea; MS doesn't even understand hardware, that's the problem. Nvidia is not going to do DX10.1 because it requires shader-based AA, which is total junk and worthless. Sure the AA might look better, but a 50% drop in FPS isn't worth it, I'll take dedicated hardware AA any day. What MS needs to do is discuss these ideas, not just sit around and think them up. Remember, if MS cuts Nvidia out of DX totally, OpenGL will make a massive comeback. MS has no choice but to do what Nvidia tells it to do for this reason alone. Several problems exist with shader AA; if you can't see that I'm sorry. As for innovation, I beg to differ, what has ATI actually done? Shader AA was the worst idea I've heard of. 5 groups of 64 shaders but only one unit per group can do complex shader math, another bad idea. That's why ATI cards perform like 64-shader cards most of the time, and if they are lucky, 128-shader cards. GDDR5 is marketing hype, the latency alone kills it, new ram types are never as good as the older ones on release. Look at the GDDR3 5700 Ultra vs the regular 5700 Ultra. Same performance because of latency. Go ahead, give us 3000 MHz ram with a 200 ms response time, it won't be any better than 2000 MHz ram with an 80 ms response time; these are just random numbers, but it's the same reason people don't upgrade to DDR3. All hype from AMD and absolutely nothing to even care about. This time they might have a single-core solution that can tie the 8800 Ultra.
Posted on Reply
#89
jbunch07
Rebo&Zootyi would love to see somebody try that new Qimonda...whatever ram that's higher density per chip. ...
hmmm 2gb of video ram would only be good for extremely high resolutions...
Posted on Reply
#90
Dangle
Why are there so many furious Nvidia fans in here?
Posted on Reply
#91
candle_86
we're trying to save you from a stupid purchase
Posted on Reply
#92
jbunch07
candle_86we're trying to save you from a stupid purchase
come on now candle... what has ati ever done to you?

seriously, there is no reason to hate ati that much!
Posted on Reply
#93
candle_86
oh yes there is, plus yall are like family and this is an intervention, i have to save yall from yourselves. If you buy AMD products you will hate yourself for doing so, historically Nvidia has always been faster at the same price point
Posted on Reply
#94
jbunch07
candle_86oh yes there is, plus yall are like family and this is an intervention, i have to save yall from yourselves. If you buy AMD products you will hate yourself for doing so, historically Nvidia has always been faster at the same price point
look at my system specs <----
look at my face ----> :D

i'm very happy with AMD/ATI

my previous rig was nvidia, and i was happy with that as well,
but hey, i'm not complaining... you have every right to say what you want.
Posted on Reply
#95
FR@NK
candle_86oh yes there is, plus yall are like family and this is an intervention, i have to save yall from yourselves. If you buy AMD products you will hate yourself for doing so, historically Nvidia has always been faster at the same price point
Most of us here are smart enough to know that the ATI cards we use are slower than nvidia's cards.
Posted on Reply
#96
jbunch07
FR@NKMost of us here are smart enough to know that the ATI cards we use are slower than nvidia's cards.
true... besides, you don't see me going into an nvidia news thread and bashing them...

no one likes a buzzkill!
Posted on Reply
#97
erocker
*
candle_86oh yes there is, plus yall are like family and this is an intervention, i have to save yall from yourselves. If you buy AMD products you will hate yourself for doing so, historically Nvidia has always been faster at the same price point
No one here needs saving, keep these types of comments to yourself. I believe you have already been warned on this subject before.
Posted on Reply
#98
Rebo&Zooty
candle_86The AA on the shaders is a stupid, bad idea; MS doesn't even understand hardware, that's the problem. ... GDDR5 is marketing hype, the latency alone kills it, new ram types are never as good as the older ones on release. ...
humm, maybe u need to check the Assassin's Creed reviews, seems shader-based AA isn't a bad idea if done natively by the game: the 9800 GTX and 3870 X2 were toe to toe, less than 1 fps difference between them. course you're a fanboi, wouldn't expect you to know that.

as to MS doing what another company tells it, wrong. MS could block OpenGL support if they wanted, and guess what, nobody could stop them. everybody has to do what MS says, because the only other choice is to fall back into a niche market like Matrox has done.

as to your 5700 example, that doesn't mean shit, the 5700 was a piece of crap. it was the best of the FX line, but that's not saying much......especially when a 9550 SE can outperform it LOL
this is DX10.1, 3870 X2 vs 9800 GTX under SP1 (DX10.1 is enabled with SP1):
But after we installed Vista SP1, an interesting thing happened. The performance of AMD's video card increased, while NVIDIA's performance did not. In fact, with SP1 installed there was less than a single frame per second difference on average between these two video cards.
funny, shader-based AA vs dedicated AA and the perf difference is around 1 fps

so your "shader based AA is a stupid idea" line is a load of fanboi bullshit (as expected from you)

the idea's fine if you're talking about native DX10/10.1 games, but today's games are mostly DX9 games with some DX10 shaders added (Crysis for example)

as this shows, there is ZERO reason that shader-based AA needs to be any slower in native code; it's just slower on older games. hence, as i said, they should have had a hardware AA unit for DX9 and older games and used shader-based AA for DX10.x games, problem would have been solved.
Posted on Reply
#99
jaydeejohn
You won't be seeing those huge latencies with this memory. I don't like to argue, just give it time and see what happens. The 4870 is going to be a killer card, and rumors have it at 25% over the 9800 GTX. nVidia's solution, the G280, should be faster, but it'll draw too much energy to do an X2, thus allowing the 4870 X2 to compete with it at the very top as a single-slot solution. Hopefully we will see that MCM on the 4870 X2.
Posted on Reply
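To put the latency back-and-forth above into numbers: what matters is absolute latency in nanoseconds, which is latency cycles divided by clock frequency, so a higher-clocked memory can need more latency cycles yet still answer in about the same absolute time while delivering far more bandwidth per pin. The figures below are made-up round numbers for illustration, not measured GDDR3 or GDDR5 timings.

```python
def absolute_latency_ns(latency_cycles: int, clock_mhz: float) -> float:
    """Convert a latency given in clock cycles to nanoseconds."""
    return latency_cycles / clock_mhz * 1000

# Hypothetical parts: the faster memory needs more cycles, but each cycle is shorter.
older_ram = absolute_latency_ns(latency_cycles=10, clock_mhz=1000)   # 10.0 ns
newer_ram = absolute_latency_ns(latency_cycles=18, clock_mhz=1800)   # 10.0 ns

print(older_ram, newer_ram)   # similar absolute latency; the newer part still has 1.8x the per-pin bandwidth
```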
#100
Thermopylae_480
Rebo&Zootyhumm, maybe u need to check the Assassin's Creed reviews, seems shader-based AA isn't a bad idea if done natively by the game: the 9800 GTX and 3870 X2 were toe to toe, less than 1 fps difference between them. ...
Don't respond to trolls, especially after a moderator has already attempted to end the situation. Such behavior only makes things worse, and can get you in trouble.

(DO NOT RESPOND)
Posted on Reply