
AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards

Discussion in 'News' started by malware, May 21, 2008.

  1. Thermopylae_480

    Thermopylae_480 New Member

    Joined:
    May 27, 2005
    Messages:
    3,686 (1.07/day)
    Thanks Received:
    393
    Location:
    Little Rock Arkansas, United States
    Don't respond to trolls, especially after a moderator has already attempted to end the situation. Doing so only makes things worse, and can get you in trouble.

    (DO NOT RESPOND)
     
  2. Rebo&Zooty

    Rebo&Zooty New Member

    Joined:
    May 17, 2008
    Messages:
    490 (0.21/day)
    Thanks Received:
    30
    I own AMD, and I'm using an X1900 till my 8800 GT is back from the shop (stock cooler gave out), and you don't see me QQ (crying) or upset about being an AMD user. I have set up Core 2 systems for people and they are nice, but price for price I still prefer to get as much out of an AMD rig as I can. My new/current board has a few years left before I need to replace it, with plenty of CPUs to come in that time; I'd guess 3-4 will pass through the board before I upgrade it, unless I get a really kickass deal on a DFI 790FX board (the high-end one, not the lower one).
     
  3. HTC

    HTC

    Joined:
    Apr 1, 2008
    Messages:
    2,240 (0.93/day)
    Thanks Received:
    303
    This should be easy enough to prove/disprove when more DX10.x games are released: might take a while for that to happen, though :(

    EDIT

    Apologies, moderator: it was at post #99 when I started to write this reply!
     
    Last edited: May 22, 2008
  4. jbunch07

    jbunch07 New Member

    Joined:
    Feb 22, 2008
    Messages:
    5,261 (2.16/day)
    Thanks Received:
    614
    Location:
    Chattanooga,TN
    Let's get this thread back on track!

    No more arguing about ATI vs. Nvidia!
    At least not here.
     
  5. Rebo&Zooty

    Rebo&Zooty New Member

    Joined:
    May 17, 2008
    Messages:
    490 (0.21/day)
    Thanks Received:
    30
    Sorry, that was a cross-post.
    I started when he posted that originally; I spent a lot of time on my slow net (damn Comcast is bugging again!!!!!) finding those damn links/images.

    Sorry for the cross-posts. I would delete it, but all that effort gone to waste :(
     
  6. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,706 (11.16/day)
    Thanks Received:
    13,668
    Location:
    Hyderabad, India
    Well said. We must stop laying emphasis on bus width as long as faster memory makes up for it. Let's stop the "my 512-bit pwns your 256-bit" talk, look up the charts, and compare the final bandwidth of the memory bus.

    Ignorant people even come up with their own terminology: "256-bit GPU", "mine's a 512-bit GPU". I've not seen anything more ridiculous. I mean, come on, xxx-bit is just the width of the memory bus.
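
    To put numbers on that, here's a rough sketch of how the final bandwidth falls out of bus width and effective memory clock. The figures below are illustrative examples only, not official specs of any card.

    ```python
    # Peak theoretical memory bandwidth = (bus width in bytes) * (effective data rate).
    # Figures are illustrative, not official specifications of any card.

    def bandwidth_gb_per_s(bus_width_bits: int, effective_clock_mhz: float) -> float:
        return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

    # 256-bit bus with fast GDDR5 (e.g. 3600 MHz effective data rate)
    print(bandwidth_gb_per_s(256, 3600))   # ~115.2 GB/s

    # 512-bit bus with slower GDDR3 (e.g. 1800 MHz effective data rate)
    print(bandwidth_gb_per_s(512, 1800))   # ~115.2 GB/s -- same final bandwidth
    ```

    A narrower bus with faster memory can land on exactly the same number as a wide bus with slower memory; the bit-width alone tells you nothing.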
     
  7. jbunch07

    jbunch07 New Member

    Joined:
    Feb 22, 2008
    Messages:
    5,261 (2.16/day)
    Thanks Received:
    614
    Location:
    Chattanooga,TN
    Thank you!

    It's about time someone finally said it!

    Comparing memory bus width on its own always made me laugh.

    256-bit GDDR5 should do very nicely!
     
  8. jaydeejohn New Member

    Joined:
    Sep 26, 2006
    Messages:
    127 (0.04/day)
    Thanks Received:
    8
    Actually, having throughput is only good if it delivers. It's like putting 1 GB of memory on an X1600: sure, it's there, but can the card really use it?
     
  9. Rebo&Zooty

    Rebo&Zooty New Member

    Joined:
    May 17, 2008
    Messages:
    490 (0.21/day)
    Thanks Received:
    30
    Yeah, see, from what I've been told by a couple of people I know who work for AMD/ATI and Intel, ATI honestly expected Vista to take off and replace XP overnight. If that had happened, DX10 would have become the norm and the R600/670 design would have been GREAT; it would have looked far better than it does. BUT because Vista fell on its face (d'oh!! *Homer Simpson sound*), ATI's design was... well, less than optimal.

    I have sent ATI enough complaint emails in the past about bugs that I know how their support is: if you report an issue directly, they tend to try and fix it.

    With Nvidia support, you get a form letter at best unless you know somebody on the inside; then they get the runaround and you get the runaround from them, because honestly they can't get clear answers on a lot of long-standing bugs.

    A few examples:

    Windows Server 2003 and XP x64 (same OS core) have a lovely bug with Nvidia drivers: if you have ever installed another company's video drivers, there is a 99% chance that once you install the Nvidia drivers the system will BSOD every time you try to use the card above a 2D desktop level. It has been a KNOWN issue since x64 came out (and some reports say it also affects 32-bit Server 2003), Nvidia has had YEARS to fix it, they haven't bothered, and their fix is noted as "reinstall Windows". If I had reported that bug to ATI, I would have had a beta fix in a couple of days (I know because I reported a bug with some third-party apps that caused them to lock up and got fast action).

    Nvidia for over a year had a bug in their YV12 video rendering; the ffdshow wiki explains it and documents how long it has been a problem. They fixed it in some beta drivers, but then broke it again in the full releases...

    ATI widescreen scaling: in some older games the image is stretched because the game doesn't support widescreen resolutions. There's a fix for this if you know where to look in the drivers, but it's not automatic, so it causes a lot of people trouble.


    I've got a large list of gripes about both companies.

    ATI: AGP card support has been spotty with the X1K and up cards. No excuse here other than the fact that they just need more people to email them and complain about it (the squeaky wheel gets the oil, as granny used to say).
     
    EastCoasthandle says thanks.
  10. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,706 (11.16/day)
    Thanks Received:
    13,668
    Location:
    Hyderabad, India
    This is sort of an arms race between the USA and the USSR. Even if a GPU doesn't need all the bandwidth, it's in place; an HD 3650 will never need PCI-E 2.0 x16 bandwidth. But when it comes to the RV770 and its memory subsystem, the difference comes to the surface when the RV770 Pro is compared to its own GDDR3 variant. The fact that there is a difference shows the RV770 is able to make use of all that bandwidth and is efficient with it.
     
  11. jaydeejohn New Member

    Joined:
    Sep 26, 2006
    Messages:
    127 (0.04/day)
    Thanks Received:
    8
  12. jbunch07

    jbunch07 New Member

    Joined:
    Feb 22, 2008
    Messages:
    5,261 (2.16/day)
    Thanks Received:
    614
    Location:
    Chattanooga,TN
    I thought the bandwidth needed had more to do with the game or what you're doing with the card... i.e. some games need more bandwidth than other games... but I know what you mean.

    Correct me if I'm wrong.
     
  13. mandelore

    mandelore New Member

    Joined:
    Aug 12, 2006
    Messages:
    3,251 (1.09/day)
    Thanks Received:
    152
    Location:
    UK-small Village in a Valley Near Newcastle
    People like candle really should be kept out of these types of threads; I'm sure he just comes a-stompin' in to troll as usual...

    Wait for the card, then smack it if you feel necessary; otherwise just stfu, let the facts roll from the horse's mouth (so to speak), and wait for genuine reviews.

    :toast:
     
    Rebo&Zooty says thanks.
  14. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,706 (11.16/day)
    Thanks Received:
    13,668
    Location:
    Hyderabad, India
    Yes: the higher the resolution (of the video/game), the larger the frames, the more data gets transferred, and that's where the extra bandwidth helps.
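
    As a rough back-of-the-envelope illustration (assuming 32-bit colour, 60 fps, and ignoring overdraw, Z/stencil and texture traffic, so real numbers are considerably higher):

    ```python
    # Raw colour-buffer traffic grows with resolution.
    # Assumes 4 bytes per pixel, one write per pixel per frame, 60 fps;
    # real GPUs also move Z/stencil, texture and overdraw traffic on top of this.

    def framebuffer_mb_per_s(width: int, height: int, bytes_per_pixel: int = 4, fps: int = 60) -> float:
        return width * height * bytes_per_pixel * fps / 1e6

    for w, h in [(1280, 1024), (1600, 1200), (1920, 1200), (2560, 1600)]:
        print(f"{w}x{h}: {framebuffer_mb_per_s(w, h):.0f} MB/s")
    # 1280x1024: ~315 MB/s
    # 2560x1600: ~983 MB/s -- roughly 3x the data per second
    ```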
     
  15. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.67/day)
    Thanks Received:
    184
    I could agree with many of your points in this thread, but I can't take you seriously, just because of these:

    a- BOTH had 16 ROPs and 8 vertex shaders.
    b- It's true that NV had 24 TMUs while ATI had 16, though they were different; ATI's were more complex.
    c- AND the X1900 had 48 pixel shaders vs. 24 on the 7900.

    Back then nothing suggested that TMUs could be the bottleneck; even today I have my reservations, but I generally accept TMUs as R600/670's weakness. ATI cards (X1900) were a LOT BETTER on paper than Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (apart from a couple of exceptions) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you describe G80/92 vs. R600/RV670...

    Don't bring in the price argument, please, since the 7900 GTX was a lot cheaper than the X1900 XTX. It actually traded blows with the XT, both in price and performance. The only card that stood out in its price segment was the X1950 Pro, by which time G80 was already out but still very expensive.

    I don't have anything against your opinions, but try not to use false data to support your arguments. I really think it's just that your memory failed, but be careful next time. :toast:

    EDIT: Hmm, I just noticed two things in the Assassin's Creed graphic you posted.

    1- No anisotropic filtering is used on the ATI card.
    2- It's the X2 that is being compared to the 9800 GTX; I first thought it was the HD 3870.

    All in all the X2 should be faster, because it's more expensive and no AF is applied, but it's not.
     
    Last edited: May 22, 2008
  16. Rebo&Zooty

    Rebo&Zooty New Member

    Joined:
    May 17, 2008
    Messages:
    490 (0.21/day)
    Thanks Received:
    30
    MSRP is similar on both cards; Nvidia just recently price-dropped them, AFAIK.

    AF was disabled because it's bugged in that game; either a driver patch or a game patch would fix that, but the makers are patching out DX10.1 support for now, probably because Nvidia doesn't want anybody competing with them.

    This wasn't meant to compare price card vs. card; it was to show that DX10.1 shader-based AA has less of an impact than DX9 shader-based AA, and the R600/670 were made for DX10 shaders, not DX9 or DX9-plus-DX10 shaders.

    Different designs: the 8800 is really a DX9 card with Shader Model 4.0 tagged on, whereas the 2900/3800 are native Shader Model 4 cards with DX9 support tagged on via drivers. Very different concepts behind each. Since Vista tanked, the R600/670 haven't had any true DX10/10.1 games to show off their design, and as soon as one came out, that support somehow ended up being patched out once ATI did well in it.
     
  17. Valdez

    Joined:
    Sep 2, 2005
    Messages:
    294 (0.09/day)
    Thanks Received:
    25
    Location:
    Szekszárd, Hungary
    The RV770 has 32 TMUs instead of the 16 in the RV670, if the rumours are right.

    Today the X1950 XTX is 36% faster than the 7900 GTX at 1280x1024 without AA, and 79% faster with 4x AA. (It also beats the 7950 GX2 by 4% without AA, and by 35% with 4x AA.)

    http://www.computerbase.de/artikel/...on_hd_3870_x2/24/#abschnitt_performancerating
     
  18. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.67/day)
    Thanks Received:
    184
    MSRP is not similar; the GTX has been $50 cheaper since day one. And that's the case on Newegg: the GTX is around $50 cheaper. Average GTX $300, average X2 $350-375. Cheapest GTX $289, cheapest X2 $339. The averages are not calculated but approximated, and I didn't take the two highest prices for each card into the average; if I had, the X2 would suffer a lot, so my averages are actually very favourable to the X2. Here in Spain, the GTX is well below 250 euro, while the X2 is well above 300.

    Anyway, my point was that the graphic didn't show shader AA to be superior: the X2 should be a lot faster in those circumstances, but it's not. It only shows that the performance hit under DX10.1 is not as pronounced as under DX10 when AA is done on the shaders, but never that it's faster than with dedicated hardware. Also, according to THAT GAME, DX10.1 AA is faster than DX10 AA on ATI cards, but I would take that with a grain of salt. The lighting in DX10.1 was way inferior to the DX10 lighting in some places, because something was missing. I saw it somewhere and had my doubts, until one of my friends confirmed it.
     
  19. Rebo&Zooty

    Rebo&Zooty New Member

    Joined:
    May 17, 2008
    Messages:
    490 (0.21/day)
    Thanks Received:
    30
    http://www.techpowerup.com/reviews/PointOfView/Geforce7900GTX/

    According to the TPU review it's:
    7800/7900
    Number of pixel shader processors 24
    Number of pixel pipes 24
    Number of texturing units 24

    So you're wrong: the 7800/7900-based cards are 24 ROPs/pipes with 1 shader unit per pipe, whereas the X19*0 XT/XTX have 16 pipes with 3 shaders per pipe (48 total).

    You try to discredit me and then use false facts...

    Second, the X1900 XT and XTX were the same card. I have yet to meet an X1900 XT that wouldn't clock to XTX speeds and beyond, and it was cake to flash them; in fact that's what my backup card is, a CHEAP X1900 XT flashed with the Toxic XTX BIOS.
    http://www.trustedreviews.com/graph...phire-Liquid-Cooled-Radeon-X1900-XTX-TOXIC/p4

    Check that out: it seems the GX2 is faster than the XTX, but only in a few cases; overall they trade blows. Yet the GX2 was a lot more expensive and had a VERY short life: it went EOL pretty fast and never did get Quad SLI updates...
    In the end it was a bad buy, whereas my X1900 XT/XTX was a great buy. I got it for less than 1/3 the price of an 8800 GTS, and it's still able to play current games, not maxed out by any means, but still better than the 7900/7950 do :)
     
  20. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.67/day)
    Thanks Received:
    184
    Corrected that for you. C'mon, we all know what happens between 3DMark 06 and Nvidia, and what happens in games. I don't want to hear the conspiracy theory again unless some actual proof is shown, please. It's an old, tired argument. Over time I have come to the conclusion that ATI builds their cards for benchmarking, while Nvidia builds theirs for games. [H] had a really nice article about benchmarks vs. games. The difference was brutal, and they weren't talking about 3DMark vs. games: it was a game's built-in benchmark vs. the actual gameplay of the same game. They even demoed their own benchmarks and the result was the same.
     
  21. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.67/day)
    Thanks Received:
    184
    The 7900 GTX has 16 ROPs. Period.
    Speaking of PIPES when the cards have different numbers of units at each stage is silly. What's the pipe count: the number of ROPs, the number of TMUs, or the number of pixel shaders? Silly.
     
  22. Valdez

    Joined:
    Sep 2, 2005
    Messages:
    294 (0.09/day)
    Thanks Received:
    25
    Location:
    Szekszárd, Hungary
    I don't know what you're talking about. The link shows benchmarks with a lot of games. The page I linked shows how the cards perform relative to each other on average across all the tests.
     
  23. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.67/day)
    Thanks Received:
    184
    Yup, OK, sorry. :eek:

    I used Google to translate it to Spanish and it didn't do a good job. I understood it as 3DMark results, not to mention that the Next/Previous page links were nowhere to be found... OMG, I love you, Google Translator...
    The translation to English went better. :toast:

    In the end you're right. The X1900 is A LOT faster in newer games, and I knew it would happen; heavier use of shaders helping the card with more pixel shaders is no surprise. If you knew me, you would know that I have always said the X1900 was a lot faster than the 7900, but in no way did it STOMP it in games. NOW it does. Anyway, it's faster, but almost always at unplayable framerates. Don't get me wrong, it's a lot faster, period. It just took too long for this to happen, IMO.
    Also, IMO ATI should make cards for today instead of always aiming to be the best in the distant future (that's 1 year in this industry), when better cards will be around and ultimately no one will care about the old one. That's my opinion anyway. I want ATI back, and I think that's what they have to do. Until then they are making me buy Nvidia, since it's the better value at the moment. The HD 4000 and GTX 200 series are not going to change this, from what I've heard; it's a shame.

    EDIT: I forgot to answer this before, even though I wanted to:

    It seems they are right. BUT they are doubling shader power too, so it doesn't look like texture power was as big a problem if they have maintained the balance between the two. Same with Nvidia's next cards; they have maintained the balance between SPs and TMUs, AFAIK.
    It's something that saddens me, since I really wanted to know where the bottleneck more commonly is: in the SPs or in the TMUs? It definitely isn't in the ROPs until you reach high resolutions and AA levels, and sure as hell it's not in memory bandwidth. That doesn't mean memory bandwidth couldn't be more important in the future. Indeed, if GPU physics finally becomes widespread, and I think that's inevitable, we will need that bandwidth; but for graphics alone, bandwidth is the one thing with the most spare headroom nowadays. GDDR5 clocks or a 512-bit interface are NOT needed for the kind of power the next cards will have, if only used for rendering. They are more e-peen than anything, IMO.
     
    Last edited: May 22, 2008
    candle_86 says thanks.
  24. vexen

    vexen

    Joined:
    Oct 13, 2005
    Messages:
    192 (0.06/day)
    Thanks Received:
    12
    Considering the current world record is 32k, and 8 GB of RAM doesn't increase a 3DMark score over 2 GB, what are you talking about?
     
  25. Valdez

    Joined:
    Sep 2, 2005
    Messages:
    294 (0.09/day)
    Thanks Received:
    25
    Location:
    Szekszárd, Hungary

    I think the TMUs were a bottleneck in the RV670; the memory bandwidth is good for high res with high AA. More shader powa is necessary, as always, especially if GPU physics comes into the picture.
    I'm not certain whether the ROPs are a bottleneck...
     
