Wednesday, May 21st 2008

AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards

AMD today announced the first commercial implementation of Graphics Double Data Rate, version 5 (GDDR5) memory in its forthcoming next generation of ATI Radeon graphics card products. The high-speed, high-bandwidth GDDR5 technology is expected to become the new memory standard in the industry, and that performance and bandwidth are a key enabler of The Ultimate Visual Experience, unlocking new GPU capabilities. AMD is working with a number of leading memory providers, including Samsung, Hynix and Qimonda, to bring GDDR5 to market.

Today's GPU performance is limited by the rate at which data can be moved on and off the graphics chip, which in turn is limited by the memory interface width and die size. The higher data rates supported by GDDR5 - up to 5x that of GDDR3 and 4x that of GDDR4 - enable more bandwidth over a narrower memory interface, which can translate into superior performance delivered from smaller, more cost-effective chips. AMD's senior engineers worked closely with industry standards body JEDEC in developing the new memory technology and defining the GDDR5 spec.
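As a rough illustration of that trade-off (the per-pin data rates below are assumed example figures, not the specifications of any announced product), peak memory bandwidth is simply the per-pin data rate multiplied by the interface width:

```python
def memory_bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    # Peak bandwidth (GB/s) = per-pin data rate (Gbit/s) * bus width (bits) / 8 bits per byte
    return data_rate_gbps_per_pin * bus_width_bits / 8

# Assumed example figures, for comparison only:
print(memory_bandwidth_gb_s(2.0, 256))  # GDDR3-class memory on a 256-bit bus -> 64.0 GB/s
print(memory_bandwidth_gb_s(3.6, 256))  # GDDR5-class memory on a 256-bit bus -> 115.2 GB/s
```

On these assumed numbers, the same 256-bit interface delivers nearly twice the bandwidth simply by moving to the faster memory.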

"The days of monolithic mega-chips are gone. Being first to market with GDDR in our next-generation architecture, AMD is able to deliver incredible performance using more cost-effective GPUs," said Rick Bergman, Senior Vice President and General Manager, Graphics Product Group, AMD. "AMD believes that GDDR5 is the optimal way to drive performance gains while being mindful of power consumption. We're excited about the potential GDDR5 brings to the table for innovative game development and even more exciting game play."

The introduction of GDDR5-based GPU offerings continues AMD's tradition of technology leadership in graphics. Most recently, AMD has been the first to bring a unified shader architecture to market, the first to support Microsoft DirectX 10.1 gaming, the first to move to smaller process nodes such as 55 nm, the first with integrated HDMI with audio, and the first with double-precision floating-point calculation support.

AMD expects that PC graphics will benefit from the increase in memory bandwidth for a variety of intensive applications. PC gamers will have the potential to play at high resolutions and image quality settings, with superb overall gaming performance. PC applications will have the potential to benefit from fast load times, with superior responsiveness and multi-tasking.

"Qimonda has worked closely with AMD to ensure that GDDR5 is available in volume to best support AMD's next-generation graphics products," said Thomas Seifert, Chief Operating Officer of Qimonda AG. "Qimonda's ability to quickly ramp production is a further milestone in our successful GDDR5 roadmap and underlines our predominant position as innovator and leader in the graphics DRAM market."

GDDR5 for Stream Processing
In addition to the potential for improved gaming and PC application performance, GDDR5 also holds a number of benefits for stream processing, where GPUs are applied to address complex, massively parallel calculations. Such calculations are prevalent in high-performance computing, financial and academic segments among others. AMD expects that the increased bandwidth of GDDR5 will greatly benefit certain classes of stream computations.
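As a sketch of why bandwidth matters here (the hardware figures below are assumptions, not any particular GPU's specifications), a stream kernel is bandwidth-bound when it performs fewer arithmetic operations per byte of memory traffic than the hardware's compute-to-bandwidth ratio:

```python
def is_bandwidth_bound(flops_per_byte: float, peak_gflops: float, peak_bandwidth_gb_s: float) -> bool:
    # A kernel is bandwidth-bound when its arithmetic intensity (FLOPs per byte moved)
    # is below the machine balance (peak GFLOP/s divided by peak GB/s).
    return flops_per_byte < peak_gflops / peak_bandwidth_gb_s

# SAXPY (y = a*x + y): 2 FLOPs per element, 12 bytes moved (read x, read y, write y; 4-byte floats).
print(is_bandwidth_bound(2 / 12, peak_gflops=1000.0, peak_bandwidth_gb_s=115.2))  # True
```

Kernels of this kind scale almost directly with memory bandwidth, which is why a faster memory standard benefits them even when raw compute throughput is unchanged.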

New error detection mechanisms in GDDR5 can also help increase the accuracy of calculations by identifying errors and re-issuing commands to get valid data. This level of reliability is not available with other GDDR-based memory solutions today.
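A highly simplified sketch of that retry idea follows; it is not the actual GDDR5 link protocol, just an illustration of checking each transfer and re-issuing the command when the check fails (the function names and burst size are made up for the example):

```python
import random
import zlib

def read_burst_with_retry(read_fn, max_retries: int = 3) -> bytes:
    # read_fn returns (data, crc); re-issue the read until the CRC check passes.
    for _ in range(max_retries):
        data, crc = read_fn()
        if zlib.crc32(data) == crc:   # transfer arrived intact
            return data
        # CRC mismatch: treat the burst as corrupted and re-issue the command
    raise IOError("unrecoverable transfer error")

def flaky_read():
    data = b"\x00" * 32               # one 32-byte burst
    crc = zlib.crc32(data)
    if random.random() < 0.1:         # simulate occasional corruption
        crc ^= 1
    return data, crc

print(len(read_burst_with_retry(flaky_read)))  # 32
```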
Source: AMD

135 Comments on AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards

#101
Rebo&Zooty
candle_86: oh yes there is, plus y'all are like family and this is an intervention, I have to save y'all from yourselves. IF you buy AMD products you will hate yourself for doing so, historically Nvidia has always been faster at the same price point
I own AMD, and I'm using an X1900 till my 8800GT's back from the shop (stock cooler gave out), and you don't see me QQ (crying) or upset about being an AMD user. I have set up Core 2 systems for people, they are nice, but price for price I still prefer to get as much out of an AMD rig as I can. My new/current board's got a few years left before I need to replace it, plenty of CPUs to come in that time, I would guess 3-4 will pass through the board before I upgrade it, unless I get a really kickass deal on a DFI 790FX board (the high-end one, not the lower one).
#102
HTC
Rebo&Zooty: the idea's fine, if you're talking about native DX10/10.1 games, but today's games are mostly DX9 games with some DX10 shaders added (Crysis for example)

as this shows there is ZERO reason that shader-based AA needs to be any slower, it's only slower in native code, it's just slower on older games, hence as I said, they should have had a hardware AA unit for DX9 and older games and used shader-based AA for DX10.x games, problem would have been solved.
This should be easy enough to prove / disprove when more DX10.x games are released: might take a while for that to happen, though :(

EDIT

Apologies, moderator: it was at post #99 when I started to write this reply!
#103
jbunch07
Let's get this thread back on track!

no more arguing about ati vs nvidia!
at least not here
#104
Rebo&Zooty
Thermopylae_480: Don't respond to trolls, especially after a moderator has already attempted to end the situation. Such behavior only worsens the situation, and can get you in trouble.
Sorry, it was a cross post.
I started when he posted that originally, and I spent a lot of time on my slow net (damn Comcast is bugging again!!!!!) finding those damn links/images.

Sorry for the cross posts, would delete it but all that effort gone to waste :(
#105
btarunr
Editor & Senior Moderator
jaydeejohn: Imagine 512 connections/wires coming from the bus to everywhere it needs to go for the output. That's a lot of wires, and voltage control. With GDDR5, you have the ability to push the same or a little more info faster than a 512-bit bus without all those wires, in this case just 256. Also, GDDR5 "reads" the length of each connection, allowing for correct voltage through the wire/line. This is important, so it's more stable, keeping frequencies within proper thresholds, also eliminating the cost of having to go the more expensive way of doing it. Hope that helps
Well said. We must stop laying emphasis on bus width as long as faster memory makes up for it. Let's stop the "my 512-bit pwns your 256-bit" contests, look up the charts and the final bandwidth of the memory bus.

Ignorant people even begin with their own terminology, "256-bit GPU", "Mine's a 512-bit GPU". I've not seen anything more retarded, I mean come on, xxx-bit is just the width of the memory bus.
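To put rough numbers on the point (the data rates below are made-up examples, not any specific card's specifications), a narrow, fast interface can deliver the same bandwidth as a wide, slow one:

```python
# Illustration only: assumed data rates, not actual card specifications.
def bandwidth_gb_s(rate_gbps_per_pin, width_bits):
    return rate_gbps_per_pin * width_bits / 8

print(bandwidth_gb_s(2.2, 512))  # hypothetical 512-bit GDDR3 card -> 140.8 GB/s
print(bandwidth_gb_s(4.4, 256))  # hypothetical 256-bit GDDR5 card -> 140.8 GB/s
```

Same bandwidth, half the wires.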
#106
jbunch07
btarunr: Well said. We must stop laying emphasis on bus width as long as faster memory makes up for it. Let's stop the "my 512-bit pwns your 256-bit" contests, look up the charts and the final bandwidth of the memory bus.

Ignorant people even begin with their own terminology, "256-bit GPU", "Mine's a 512-bit GPU". I've not seen anything more retarded, I mean come on, xxx-bit is just the width of the memory bus.
Thank you!

It's about time someone finally said it!

Comparing memory bus widths always made me laugh.

256-bit GDDR5 should do very nicely!
#107
jaydeejohn
Actually, having throughput is only good if it delivers. It's like putting 1 GB of memory on an X1600. Sure it's there, but can the card really use it?
#108
Rebo&Zooty
HTC: This should be easy enough to prove / disprove when more DX10.x games are released: might take a while for that to happen, though :(
Yeah, see, from what I've been told by a couple of people I know who work for AMD/ATI and Intel, ATI honestly expected Vista to take off and replace XP overnight. If that had happened, DX10 would have become the norm and the R600/670 design would have been GREAT, it would have looked far better than it does, BUT because Vista fell on its face (d'oh!! *Homer Simpson sound*) ATI's design was.....well, less than optimal.

I have sent ATI enough bitching emails in the past about bugs that I know how their support is; if you report it directly they tend to try and fix it.

Nvidia support, you get a form letter at best unless you know somebody on the inside, and then they get the runaround and you get the runaround from them because, honestly, they can't get any clear answers to a lot of long-standing bugs.

A few examples:

Windows Server 2003 and XP x64 (same OS core) have a lovely bug with Nvidia drivers: if you have ever installed another company's video drivers, you have a 99% chance that once you install the Nvidia drivers the system will BSOD every time you try to use the card above a 2D desktop level. It's been a KNOWN issue since x64 came out (and from some reports it also affects 32-bit Server 2003), Nvidia has had YEARS to fix it, they haven't bothered, and their fix is noted as "reinstall windows".......... If I had reported that bug to ATI I would have had a beta fix in a couple of days (I know, because I reported a bug with some 3rd-party apps that caused them to lock up and got fast action).

Nvidia for over a year had a bug in their YV12 video rendering; the ffdshow wiki explains it and documents how long it's been a problem. They fixed it in some beta drivers, but then broke it again in the full releases........

ATI widescreen scaling: in some older games the image is stretched because the game doesn't support widescreen resolutions. There's a fix for this if you know where to look in the drivers, but it's not automatic, so it causes a lot of people trouble.

I've got a large list of bitches about both companies.

ATI: AGP card support's been spotty with the X1K and up cards, no excuse here other than the fact that they just need more people to email them and complain about it (the squeaky wheel gets the oil, as granny used to say).
#109
btarunr
Editor & Senior Moderator
jaydeejohn: Actually, having throughput is only good if it delivers. It's like putting 1 GB of memory on an X1600. Sure it's there, but can the card really use it?
This is sort of like the arms race between the USA and the USSR. Even if a GPU doesn't need all the bandwidth, it's in place; an HD 3650 will never need PCI-E 2.0 x16 bandwidth, but when it comes to RV770 and its memory subsystem, the difference comes to the surface when the RV770 Pro is compared to its own GDDR3 variant. The fact that there is a difference shows the RV770 is able to make use of all that bandwidth and is efficient with it.
#111
jbunch07
btarunr: This is sort of like the arms race between the USA and the USSR. Even if a GPU doesn't need all the bandwidth, it's in place; an HD 3650 will never need PCI-E 2.0 x16 bandwidth, but when it comes to RV770 and its memory subsystem, the difference comes to the surface when the RV770 Pro is compared to its own GDDR3 variant. The fact that there is a difference shows the RV770 is able to make use of all that bandwidth and is efficient with it.
I thought the bandwidth needed had more to do with the game or what you're doing with the cards... i.e. some games need more bandwidth than other games... but I know what you mean.

Correct me if I'm wrong.
#112
mandelore
People like candle really should be kept out of these types of threads, I'm sure he just comes a-stompin' in to troll as usual....

Wait for the card, then smack it if you feel necessary, else just stfu, let the facts roll from the horse's mouth so to speak, and wait for genuine reviews.

:toast:
#113
btarunr
Editor & Senior Moderator
jbunch07: I thought the bandwidth needed had more to do with the game or what you're doing with the cards... i.e. some games need more bandwidth than other games... but I know what you mean.

Correct me if I'm wrong.
Yes, the higher the resolution (of the video/game), the larger the frames, the more data is transferred, and extra bandwidth helps there.
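A rough back-of-the-envelope sketch of that (the figures are illustrative, and real GPUs touch the framebuffer several times per frame for colour, depth and blending):

```python
def framebuffer_mb(width, height, bytes_per_pixel=4):
    # Size of a single 32-bit colour buffer at a given resolution, in MiB.
    return width * height * bytes_per_pixel / 1024**2

for res in [(1280, 1024), (1920, 1200), (2560, 1600)]:
    print(res, round(framebuffer_mb(*res), 1), "MB per colour buffer")
# (1280, 1024) -> 5.0 MB, (1920, 1200) -> 8.8 MB, (2560, 1600) -> 15.6 MB
```

Multiply that by the frame rate and the number of passes, and it becomes clear how quickly higher resolutions eat into memory bandwidth.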
#114
DarkMatter
Rebo&Zooty: as to the X1900, it STOMPED the 7900/7950, cards that ON PAPER should have been stronger, 24 pipes vs 16 for example was what people were using to "prove" that the Nvidia cards WOULD kill the X1900 range of cards.
funny since the X1900/1950 XT/XTX cards had 16 pipes/ROPs vs the 7900 having 24, and the 7900 got pwned........
I could agree with many of your points in this thread, but I can't take you seriously, just because of these:

a- BOTH had 16 ROPS and 8 vertex shaders.
b- It's true that NV had 24 TMU while Ati had 16, though they were different. Ati ones were more complex.
c- AND X1900 had 48 pixel shaders vs 24 on the 7900.

Back then nothing suggested that TMUs could be the bottleneck, even today I have my reservations, but I generally accept TMUs as R600/670's weakness. Ati cards (X1900) were a LOT BETTER on paper than Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (except in a couple of cases) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you would describe G80/92 vs. R600/RV670...

Don't bring in the price argument, please, since the 7900 GTX was a lot cheaper than the X1900 XTX. It actually traded blows with the XT, both in price and performance. The only card that stood out at its price segment was the X1950 Pro when the G80 was already out, but it was still very expensive.

I don't have anything against your opinions, but try not to use false data to support your arguments. I really think it's just that your memory failed, but be careful next time. :toast:

EDIT: Hmm, I just noticed two things in the Assassin's Creed graphic you posted.

1- No Anisotropic Filtering used on the Ati card.
2- It's the X2 that is being compared to the 9800 GTX, I first thought it was the HD3870.

All in all the X2 should be faster, because it's more expensive and no AF is applied, but it's not.
#115
Rebo&Zooty
MSRP is similar on both cards, Nvidia just recently price-dropped them AFAIK.

AF was disabled because it's bugged in that game, either a driver patch or a game patch would fix that, but the makers are patching out DX10.1 support for now, probably because Nvidia doesn't want anybody competing with them.

This wasn't to show price card vs card, it was to show that DX10.1 shader-based AA has less impact than DX9 shader-based AA, and that the R600/670 were made for DX10, not DX9 or DX9 + DX10 shaders.

Different designs: the 8800 is really a DX9 card with Shader Model 4.0 tagged on, the 2900/3800 are native Shader Model 4 cards with DX9 support tagged on via drivers, a very different concept behind each. Since Vista tanked, the R600/670 haven't had any true DX10/10.1 games to show off their design, and as soon as one came out, DX10.1 somehow ended up getting patched out when ATI did well in it.
#116
Valdez
DarkMatter: Back then nothing suggested that TMUs could be the bottleneck, even today I have my reservations, but I generally accept TMUs as R600/670's weakness. Ati cards (X1900) were a LOT BETTER on paper than Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (except in a couple of cases) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you would describe G80/92 vs. R600/RV670...
The RV770 has 32 TMUs instead of the 16 in the RV670, if the rumours are right.

Today the X1950 XTX is 36% faster than the 7900 GTX at 1280x1024 without AA, and 79% faster with 4x AA (it also beats the 7950 GX2 by 4% without AA, and by 35% with 4x AA).

www.computerbase.de/artikel/hardware/grafikkarten/2008/test_ati_radeon_hd_3870_x2/24/#abschnitt_performancerating
#117
DarkMatter
Rebo&Zooty: MSRP is similar on both cards, Nvidia just recently price-dropped them AFAIK.

AF was disabled because it's bugged in that game, either a driver patch or a game patch would fix that, but the makers are patching out DX10.1 support for now, probably because Nvidia doesn't want anybody competing with them.

This wasn't to show price card vs card, it was to show that DX10.1 shader-based AA has less impact than DX9 shader-based AA, and that the R600/670 were made for DX10, not DX9 or DX9 + DX10 shaders.

Different designs: the 8800 is really a DX9 card with Shader Model 4.0 tagged on, the 2900/3800 are native Shader Model 4 cards with DX9 support tagged on via drivers, a very different concept behind each. Since Vista tanked, the R600/670 haven't had any true DX10/10.1 games to show off their design, and as soon as one came out, DX10.1 somehow ended up getting patched out when ATI did well in it.
MSRP is not similar, the GTX has been $50 cheaper since day one. And that's the case on Newegg, the GTX is around $50 cheaper. Average GTX $300, average X2 $350-375. Cheapest GTX $289, cheapest X2 $339. The average is not calculated but approximated, and I didn't take the 2 highest prices for each card into the average. If I had, the X2 would suffer a lot; indeed my averages are very favorable to the X2. Here in Spain, the GTX is well below 250 euro, while the X2 is well above 300.

Anyway my point was that the graphic didn't show shader AA to be superior, the X2 should be a lot faster in those circumstances, but it's not. It only shows that the performance hit under DX10.1 is not as pronounced as under DX10 when AA is done on shaders, but never that it's faster than with dedicated hardware. Also, according to THAT GAME, DX10.1 AA is faster than DX10 AA on Ati cards, but I would take that with a grain of salt. The lighting in DX10.1 was way inferior to the DX10 one in some places, because something was missing. I saw it somewhere and had my doubts, until one of my friends confirmed it.
#118
Rebo&Zooty
DarkMatter: I could agree with many of your points in this thread, but I can't take you seriously, just because of these:

a- BOTH had 16 ROPS and 8 vertex shaders.
b- It's true that NV had 24 TMU while Ati had 16, though they were different. Ati ones were more complex.
c- AND X1900 had 48 pixel shaders vs 24 on the 7900.

Back then nothing suggested that TMUs could be the bottleneck, even today I have my reservations, but I generally accept TMUs as R600/670's weakness. Ati cards (X1900) were a LOT BETTER on paper than Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (except in a couple of cases) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you would describe G80/92 vs. R600/RV670...

Don't bring in the price argument, please, since the 7900 GTX was a lot cheaper than the X1900 XTX. It actually traded blows with the XT, both in price and performance. The only card that stood out at its price segment was the X1950 Pro when the G80 was already out, but it was still very expensive.

I don't have anything against your opinions, but try not to use false data to support your arguments. I really think it's just that your memory failed, but be careful next time. :toast:

EDIT: Hmm, I just noticed two things in the Assassin's Creed graphic you posted.

1- No Anisotropic Filtering used on the Ati card.
2- It's the X2 that is being compared to the 9800 GTX, I first thought it was the HD3870.

All in all the X2 should be faster, because it's more expensive and no AF is applied, but it's not.
www.techpowerup.com/reviews/PointOfView/Geforce7900GTX/

According to the TPU review it's:
7800/7900
Number of pixel shader processors: 24
Number of pixel pipes: 24
Number of texturing units: 24

So you're wrong, the 7800/7900 based cards are 24 ROPs/pipes with 1 shader unit per pipe, whereas the X19*0 XT/XTX have 16 pipes with 3 shaders per pipe (48 total).

You try to discredit me and then use false facts......

2nd, the X1900 XT and XTX were the same card, I have yet to meet an X1900 XT that wouldn't clock to XTX and beyond, and it was cake to flash them; in fact that's what my backup card is, a CHEAP X1900 XT flashed with the Toxic XTX BIOS.
www.trustedreviews.com/graphics/review/2006/07/03/Sapphire-Liquid-Cooled-Radeon-X1900-XTX-TOXIC/p4

Check that out, seems the GX2 is faster than the XTX but only in a few cases, overall they trade blows, yet the GX2 was a lot more expensive and had a VERY short life, it went EOL pretty fast, and never did get quad SLI updates.......
In the end it was a bad buy, whereas my X1900 XT/XTX card was a great buy, I got it for less than 1/3 the price of an 8800 GTS, it's still able to play current games, not maxed out by any means but still better than the 7900/50 do :)
#119
DarkMatter
Valdez: The RV770 has 32 TMUs instead of the 16 in the RV670, if the rumours are right.

Today the X1950 XTX is 36% faster than the 7900 GTX at 1280x1024 without AA, and 79% faster with 4x AA (it also beats the 7950 GX2 by 4% without AA, and by 35% with 4x AA) in 3DMark 06

www.computerbase.de/artikel/hardware/grafikkarten/2008/test_ati_radeon_hd_3870_x2/24/#abschnitt_performancerating
Corrected that for you. C'mon, we all know what happens between 3DMark 06 and Nvidia, and what happens in games. I don't want to hear the conspiracy theory again unless some actual proof is shown, please. It's an old, tired argument. Over time I have come to the conclusion that Ati makes their cards for benchmarking, while Nvidia makes theirs for games. [H] had a really nice article about Benchmarks vs. Games. The difference was brutal, and they weren't talking about 3DMark vs games; it was benchmarks of a game vs. the actual gameplay in the same game. They even demoed their own benchmarks and the result was the same.
#120
DarkMatter
Rebo&Zooty: www.techpowerup.com/reviews/PointOfView/Geforce7900GTX/

According to the TPU review it's:
7800/7900
Number of pixel shader processors: 24
Number of pixel pipes: 24
Number of texturing units: 24

So you're wrong, the 7800/7900 based cards are 24 ROPs/pipes with 1 shader unit per pipe, whereas the X19*0 XT/XTX have 16 pipes with 3 shaders per pipe (48 total).

You try to discredit me and then use false facts......

2nd, the X1900 XT and XTX were the same card, I have yet to meet an X1900 XT that wouldn't clock to XTX and beyond, and it was cake to flash them; in fact that's what my backup card is, a CHEAP X1900 XT flashed with the Toxic XTX BIOS.
www.trustedreviews.com/graphics/review/2006/07/03/Sapphire-Liquid-Cooled-Radeon-X1900-XTX-TOXIC/p4

Check that out, seems the GX2 is faster than the XTX but only in a few cases, overall they trade blows, yet the GX2 was a lot more expensive and had a VERY short life, it went EOL pretty fast, and never did get quad SLI updates.......
In the end it was a bad buy, whereas my X1900 XT/XTX card was a great buy, I got it for less than 1/3 the price of an 8800 GTS, it's still able to play current games, not maxed out by any means but still better than the 7900/50 do :)
7900 GTX has 16 ROPS. Period.
Speaking of PIPES when the cards have a different number of units at each stage is silly. What's the pipe count? The number of ROPs, the number of TMUs, or the number of pixel shaders? Silly.
#121
Valdez
DarkMatter: Corrected that for you. C'mon, we all know what happens between 3DMark 06 and Nvidia, and what happens in games. I don't want to hear the conspiracy theory again unless some actual proof is shown, please. It's an old, tired argument. Over time I have come to the conclusion that Ati makes their cards for benchmarking, while Nvidia makes theirs for games. [H] had a really nice article about Benchmarks vs. Games. The difference was brutal, and they weren't talking about 3DMark vs games; it was benchmarks of a game vs. the actual gameplay in the same game. They even demoed their own benchmarks and the result was the same.
I don't know what you're talking about. The link shows a benchmark with a lot of games. The page I linked shows how the cards perform relative to each other in the average of all the tests.
#122
DarkMatter
Valdez: I don't know what you're talking about. The link shows a benchmark with a lot of games. The page I linked shows how the cards perform relative to each other in the average of all the tests.
Yup, OK, sorry. :o

I used Google to translate it to Spanish and it didn't do a good job. I understood it was 3DMark results, not to mention that the Next/Previous page was nowhere to be found.. OMG, I love you Google Translator...
The translation to English went better. :toast:

In the end you're right. The X1900 is A LOT faster in newer games, and I knew it would happen. Heavier use of shaders helping the card with more pixel shaders is not a surprise. If you knew me, you would know that I have always said the X1900 was a lot faster than the 7900, but in no way did it STOMP it in games. NOW it does. Anyway, it's faster but almost always at unplayable framerates. Don't get me wrong, it's a lot faster, period. It just took too long for this to happen IMO.
Also IMO Ati should make cards for today and not always be thinking about being the best in the far future (that's 1 year in this industry), when better cards are going to be around and ultimately no one will care about the old one. That's my opinion anyway. I want Ati back and I think that's what they have to do. Until then they are making me buy Nvidia, since it's the better value at the moment. HD4000 and GTX 200 series are not going to change this from what I heard, it's a shame.

EDIT: I forgot to answer this before even though I wanted to do so:
Valdez: The RV770 has 32 TMUs instead of the 16 in the RV670, if the rumours are right.
It seems they are right. BUT they are doubling shader power too, so it doesn't look like texture power was as big of a problem if they have maintained the balance between the two. Same with Nvidia's next cards, they have maintained the balance between SPs and TMUs AFAIK.
It's something that saddens me, since I really wanted to know where the bottleneck is more common, is it in SPs or TMUs? It definitely isn't in the ROPs until you reach high resolutions and AA levels, and it sure as hell isn't in memory bandwidth. That doesn't mean memory bandwidth couldn't be more important in the future. Indeed, if GPU physics finally become widespread, and I think that's inevitable, we will need that bandwidth, but for graphics alone, bandwidth is the one thing with the most spare headroom nowadays. GDDR5 clocks or a 512-bit interface are NOT needed for the kind of power that the next cards will have, if only used for rendering. They are more e-peenis than anything IMO.
#123
vexen
Kirby123: Prices don't matter, it's all on your setup. The 9800 GX2 and the X2 3-series are basically equal. I recommend the X2 over the 9800 GX2. Right now the best computer out is the ATI CrossFireX quad-GPU setup. My friend from CSS's 3DMark 06 score was 45k.... one main reason is the quad Extreme he has at 4.8 GHz with 8GB of RAM. But his friend got 9800 GX2 SLI and his was only 40k. CrossFireX is the best thing out right now from what I have seen at least.
Considering the current world record is 32k, and 8GB of RAM doesn't increase a 3DMark score over 2GB, what are you talking about?
#124
Valdez
DarkMatter: Yup, OK, sorry. :o

I used Google to translate it to Spanish and it didn't do a good job. I understood it was 3DMark results, not to mention that the Next/Previous page was nowhere to be found.. OMG, I love you Google Translator...
The translation to English went better. :toast:

In the end you're right. The X1900 is A LOT faster in newer games, and I knew it would happen. Heavier use of shaders helping the card with more pixel shaders is not a surprise. If you knew me, you would know that I have always said the X1900 was a lot faster than the 7900, but in no way did it STOMP it in games. NOW it does. Anyway, it's faster but almost always at unplayable framerates. Don't get me wrong, it's a lot faster, period. It just took too long for this to happen IMO.
Also IMO Ati should make cards for today and not always be thinking about being the best in the far future (that's 1 year in this industry), when better cards are going to be around and ultimately no one will care about the old one. That's my opinion anyway. I want Ati back and I think that's what they have to do. Until then they are making me buy Nvidia, since it's the better value at the moment. HD4000 and GTX 200 series are not going to change this from what I heard, it's a shame.

EDIT: I forgot to answer this before even though I wanted to do so:

It seems they are right. BUT they are doubling shader power too, so it doesn't look like texture power was as big of a problem if they have maintained the balance between the two. Same with Nvidia's next cards, they have maintained the balance between SPs and TMUs AFAIK.
It's something that saddens me, since I really wanted to know where the bottleneck is more common, is it in SPs or TMUs? It definitely isn't in the ROPs until you reach high resolutions and AA levels, and it sure as hell isn't in memory bandwidth. That doesn't mean memory bandwidth couldn't be more important in the future. Indeed, if GPU physics finally become widespread, and I think that's inevitable, we will need that bandwidth, but for graphics alone, bandwidth is the one thing with the most spare headroom nowadays. GDDR5 clocks or a 512-bit interface are NOT needed for the kind of power that the next cards will have, if only used for rendering. They are more e-peenis than anything IMO.
I think the TMUs were a bottleneck in the RV670; the memory bandwidth is good for high res with high AA. More shader power is necessary as always, especially if GPU physics come into the picture.
I'm not certain if the ROPs are a bottleneck...
#125
DarkMatter
Valdez: I think the TMUs were a bottleneck in the RV670; the memory bandwidth is good for high res with high AA. More shader power is necessary as always, especially if GPU physics come into the picture.
I'm not certain if the ROPs are a bottleneck...
My point is that if the TMUs were the bottleneck, they would have done 2x shader power AND, I dunno, 3x texturing power, and not 2x-2x. If textures were the bottleneck, this flaw has been carried over to the new series. Since I don't think that's possible, as I don't think they are that stupid, my only conclusion is that it wasn't a bottleneck. I base that conclusion on other considerations too, like the efficiency at which they are able to use the VLIW + SIMD SP arrays, for example. I have argued since day one that R600 was limited by its shader power; it's just lately, and after hearing most people complain about its 16 TMUs, that I had to "admit" or adopt the idea that the bottleneck is in the TMUs.