Wednesday, May 21st 2008
AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards
AMD today announced the first commercial implementation of Graphics Double Data Rate, version 5 (GDDR5) memory in its forthcoming generation of ATI Radeon graphics card products. The high-speed, high-bandwidth GDDR5 technology is expected to become the new memory standard in the industry, and that performance and bandwidth is a key enabler of The Ultimate Visual Experience, unlocking new GPU capabilities. AMD is working with a number of leading memory providers, including Samsung, Hynix and Qimonda, to bring GDDR5 to market.
Today's GPU performance is limited by the rate at which data can be moved on and off the graphics chip, which in turn is limited by the memory interface width and die size. The higher data rates supported by GDDR5 - up to 5x that of GDDR3 and 4x that of GDDR4 - enable more bandwidth over a narrower memory interface, which can translate into superior performance delivered from smaller, more cost-effective chips. AMD's senior engineers worked closely with industry standards body JEDEC in developing the new memory technology and defining the GDDR5 spec.
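The arithmetic behind that claim is straightforward: peak bandwidth is bus width times per-pin data rate. A minimal sketch, using illustrative per-pin rates (roughly 2 Gbit/s for GDDR3 and 4 Gbit/s for early GDDR5; these are assumptions for the example, not spec figures):

```python
def bandwidth_gbps(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s:
    (bus width in bits / 8 bits per byte) * per-pin data rate in Gbit/s."""
    return bus_width_bits / 8 * data_rate_gbps

# Same 256-bit bus, higher per-pin rate: bandwidth doubles.
gddr3 = bandwidth_gbps(256, 2.0)         # 64.0 GB/s
gddr5 = bandwidth_gbps(256, 4.0)         # 128.0 GB/s

# GDDR5 on a narrower 128-bit bus still matches the GDDR3 figure,
# which is the press release's point about smaller, cheaper chips.
gddr5_narrow = bandwidth_gbps(128, 4.0)  # 64.0 GB/s
```

This is why a faster memory type lets a GPU designer shrink the memory interface (and with it die size and board complexity) without losing bandwidth.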
"The days of monolithic mega-chips are gone. Being first to market with GDDR in our next-generation architecture, AMD is able to deliver incredible performance using more cost-effective GPUs," said Rick Bergman, Senior Vice President and General Manager, Graphics Product Group, AMD. "AMD believes that GDDR5 is the optimal way to drive performance gains while being mindful of power consumption. We're excited about the potential GDDR5 brings to the table for innovative game development and even more exciting game play."
The introduction of GDDR5-based GPU offerings continues AMD's tradition of technology leadership in graphics. Most recently, AMD has been the first to bring a unified shader architecture to market, the first to support Microsoft DirectX 10.1 gaming, the first to reach lower process nodes like 55nm, the first with integrated HDMI with audio, and the first with double-precision floating point calculation support.
AMD expects that PC graphics will benefit from the increase in memory bandwidth for a variety of intensive applications. PC gamers will have the potential to play at high resolutions and image quality settings, with superb overall gaming performance. PC applications will have the potential to benefit from fast load times, with superior responsiveness and multi-tasking.
"Qimonda has worked closely with AMD to ensure that GDDR5 is available in volume to best support AMD's next-generation graphics products," said Thomas Seifert, Chief Operating Officer of Qimonda AG. "Qimonda's ability to quickly ramp production is a further milestone in our successful GDDR5 roadmap and underlines our predominant position as innovator and leader in the graphics DRAM market."
GDDR5 for Stream Processing
In addition to the potential for improved gaming and PC application performance, GDDR5 also holds a number of benefits for stream processing, where GPUs are applied to address complex, massively parallel calculations. Such calculations are prevalent in high-performance computing, financial and academic segments among others. AMD expects that the increased bandwidth of GDDR5 will greatly benefit certain classes of stream computations.
New error detection mechanisms in GDDR5 can also help increase the accuracy of calculations by identifying errors and re-issuing commands to get valid data. This is a level of reliability not available with other GDDR-based memory solutions today.
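The detect-and-retry idea can be sketched as follows. This is a toy host-side model (using zlib's CRC-32 for illustration), not the actual GDDR5 EDC protocol, which computes a CRC per data burst in hardware; `read_burst` and `max_retries` are hypothetical names for the example:

```python
import zlib

def transfer_with_retry(read_burst, max_retries=3):
    """Toy model of detect-and-retry: check a CRC delivered alongside
    each data burst and re-issue the read command on a mismatch."""
    for attempt in range(max_retries):
        data, crc = read_burst()          # one (data, checksum) transfer
        if zlib.crc32(data) == crc:       # checksum matches: data is valid
            return data
        # mismatch: a transmission error was detected; retry the command
    raise IOError("unrecoverable transfer error")
```

The key property is that errors on the link are detected and the command is replayed, so a transient glitch does not silently corrupt a computation, which matters far more for GPGPU workloads than for rendering a frame.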
Source:
AMD
135 Comments on AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards
EDIT
Apologies, moderator: it was at post #99 when I started to write this reply!
No more arguing about ATI vs. Nvidia!
At least not here.
I started when he posted that originally; I spent a lot of time on my slow net (damn Comcast is bugging out again!) finding those links/images.
Sorry for the cross-posts. I would delete it, but all that effort would go to waste :(
Ignorant people even invent their own terminology: "256-bit GPU", "mine's a 512-bit GPU". I've not seen anything more absurd. I mean, come on, xxx-bit is just the width of the memory bus.
It's about time someone finally said it!
Comparing memory bus widths always made me laugh.
256-bit GDDR5 should do very nicely!
I have sent ATI enough complaint emails about bugs in the past that I know how their support is: if you report it directly, they tend to try and fix it.
With Nvidia support, you get a form letter at best unless you know somebody on the inside; then they get the runaround and you get the runaround from them because, honestly, they can't get any clear answers on a lot of long-standing bugs.
A few examples:
Windows Server 2003 and XP x64 (same OS core) have a lovely bug with Nvidia drivers: if you have ever installed another company's video drivers, there's a 99% chance that once you install the Nvidia drivers the system will BSOD every time you try to use the card above a 2D desktop level. It's been a KNOWN issue since x64 came out (and by some reports it also affects 32-bit Server 2003). Nvidia has had YEARS to fix this and hasn't bothered; their official fix is noted as "reinstall Windows". If I had reported that bug to ATI I would have had a beta fix in a couple of days (I know, because I reported a bug with some third-party apps that caused them to lock up and got fast action).
Nvidia for over a year had a bug in their YV12 video rendering; the ffdshow wiki explains it and documents how long it's been a problem. They fixed it in some beta drivers, but then broke it again in the full releases.
ATI widescreen scaling: in some older games the image is stretched because the game doesn't support widescreen resolutions. There's a fix for this if you know where to look in the drivers, but it's not automatic, so it causes a lot of people trouble.
I've got a long list of complaints about both companies.
ATI: AGP card support has been spotty with the X1K and up cards. No excuse here, other than that they just need more people to email them and complain about it (the squeaky wheel gets the oil, as granny used to say).
Correct me if I'm wrong.
Wait for the card, then smack it if you feel it's necessary; otherwise just stfu, let the facts come from the horse's mouth, so to speak, and wait for genuine reviews.
:toast:
a- BOTH had 16 ROPs and 8 vertex shaders.
b- It's true that NV had 24 TMUs while ATI had 16, though they were different; ATI's were more complex.
c- AND the X1900 had 48 pixel shaders vs. 24 on the 7900.
Back then nothing suggested that TMUs could be the bottleneck; even today I have my reservations, but I generally accept TMUs as the R600/670's weakness. ATI cards (X1900) were a LOT BETTER on paper than Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (apart from a couple of exceptions) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you would describe G80/G92 vs. R600/RV670...
Don't bring in the price argument, please, since the 7900 GTX was a lot cheaper than the X1900 XTX. It actually traded blows with the XT, both in price and performance. The only card that stood out in its price segment was the X1950 Pro, which arrived when G80 was already out but still very expensive.
I don't have anything against your opinions, but try not to use false data to support your arguments. I really think it's just that your memory failed you, but be careful next time. :toast:
EDIT: Hmm, I just noticed two things in the Assassin's Creed graph you posted.
1- No anisotropic filtering used on the ATI card.
2- It's the X2 that is being compared to the 9800 GTX; I first thought it was the HD 3870.
All in all the X2 should be faster, because it's more expensive and no AF is applied, but it's not.
AF was disabled because it's bugged in that game; either a driver patch or a game patch would fix that. But the makers are patching out DX10.1 support for now, probably because Nvidia doesn't want anybody competing with them.
This wasn't to show price, card vs. card; it was to show that DX10.1 shader-based AA has less impact than DX9 shader-based AA, and the R600/670 were made for DX10/10.1 shaders, not DX9 or DX9+DX10 shaders.
Different designs: the 8800 is really a DX9 card with Shader Model 4.0 tacked on, whereas the 2900/3800 are native Shader Model 4 cards with DX9 support tacked on via drivers. Very different concepts behind each. Since Vista tanked, the R600/670 haven't had any true DX10/10.1 games to show off their design, and as soon as one came out, somehow the support ended up getting patched out once ATI did well in it.
Today the X1950 XTX is 36% faster than the 7900 GTX at 1280x1024 without AA, and 79% faster with 4x AA. (It also beats the 7950 GX2 by 4% without AA, and by 35% with 4x AA.)
www.computerbase.de/artikel/hardware/grafikkarten/2008/test_ati_radeon_hd_3870_x2/24/#abschnitt_performancerating
Anyway, my point was that the graph didn't show shader AA to be superior; the X2 should be a lot faster in those circumstances, but it's not. It only shows that the performance hit under DX10.1 is not as pronounced as under DX10 when AA is done on shaders, never that it's faster than with dedicated hardware. Also, according to THAT GAME, DX10.1 AA is faster than DX10 AA on ATI cards, but I would take that with a grain of salt. The lighting in DX10.1 was way inferior to the DX10 lighting in some places, because something was missing. I saw it somewhere and had my doubts, until one of my friends confirmed it.
According to the TPU review it's:
7800/7900
Number of pixel shader processors: 24
Number of pixel pipes: 24
Number of texturing units: 24
So you're wrong: the 7800/7900-based cards are 24 ROPs/pipes with 1 shader unit per pipe, whereas the X19x0 XT/XTX cards have 16 pipes with 3 shaders per pipe (48 total).
You try to discredit me, then use false facts...
Second, the X1900 XT and XTX were the same card. I have yet to meet an X1900 XT that wouldn't clock to XTX speeds and beyond, and it was cake to flash them. In fact, that's what my backup card is: a CHEAP X1900 XT flashed with the Toxic XTX BIOS.
www.trustedreviews.com/graphics/review/2006/07/03/Sapphire-Liquid-Cooled-Radeon-X1900-XTX-TOXIC/p4
Check that out: it seems the GX2 is faster than the XTX, but only in a few cases. Overall they trade blows, yet the GX2 was a lot more expensive and had a VERY short life; it went EOL pretty fast and never did get quad-SLI updates...
In the end it was a bad buy, whereas my X1900 XT/XTX card was a great buy. I got it for less than 1/3 the price of an 8800 GTS, and it's still able to play current games, not maxed out by any means, but still better than the 7900/7950 do :)
Speaking of PIPES: counting pipes when the cards have different numbers of units at each stage is silly. What's the pipe number? The number of ROPs, the number of TMUs, or the number of pixel shaders? Silly.
I used Google to translate it to Spanish and it didn't do a good job. I understood it was 3DMark results, not to mention that the Next/Previous page links were nowhere to be found. OMG, I love you, Google Translate...
The translation to English went better. :toast:
In the end you're right. The X1900 is A LOT faster in newer games, and I knew it would happen; heavier use of shaders helping the card with more pixel shaders is no surprise. If you knew me, you would know that I have always said the X1900 was a lot faster than the 7900, but it in no way STOMPED it in games. NOW it does. Anyway, it's faster, but almost always at unplayable frame rates. Don't get me wrong, it's a lot faster, period. It just took too long for this to happen, IMO.
Also, IMO ATI should make cards for today and not always be aiming to be the best in the far future (that's one year in this industry), when better cards will be around and ultimately no one will care about the old one. That's my opinion anyway. I want ATI back and I think that's what they have to do. Until then they are making me buy Nvidia, since it's the better value at the moment. The HD 4000 and GTX 200 series are not going to change this from what I've heard, which is a shame.
EDIT: I forgot to answer this before, even though I wanted to: it seems they are right. BUT they are doubling shader power too, so it doesn't look like texture power was as big of a problem if they have maintained the balance between the two. Same with Nvidia's next cards; they have maintained the balance between SPs and TMUs, AFAIK.
It's something that saddens me, since I really wanted to know where the bottleneck more commonly lies: in the SPs or in the TMUs? It definitely isn't in the ROPs until you reach high resolutions and AA levels, and it sure as hell isn't in memory bandwidth. That doesn't mean memory bandwidth couldn't be more important in the future. Indeed, if GPU physics finally becomes widespread, and I think that's inevitable, we will need that bandwidth; but for graphics alone, bandwidth is the one thing with the most spare headroom nowadays. GDDR5 clocks or a 512-bit interface are NOT needed for the kind of power the next cards will have, if used only for rendering. They are more e-peen than anything, IMO.
I'm not certain whether the ROPs are a bottleneck...