HD 5870 Discussion thread.

Discussion in 'AMD / ATI' started by a_ump, Oct 25, 2009.

Thread Status:
Not open for further replies.
  1. Mussels

    Mussels Moderprator Staff Member

    Joined:
    Oct 6, 2004
    Messages:
    45,228 (10.34/day)
    Thanks Received:
    12,457
    Location:
    Australalalalalaia.
    so, you're saying the cards are disappointing based on your own interpretation of how fast they should be, based on specs?

    that seems a bit... odd
     
    10 Year Member at TPU
  2. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.04/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    It's not odd at all. One assumes that doubling the specs is the way to try to achieve twice the performance. If not, if Ati doubled the specs knowingly expecting a mere 50% performance increase, then Ati's architecture and their long term strategy are a BIG BIG fail.
     
    qubit says thanks.
  3. theorw

    theorw New Member

    Joined:
    Jul 5, 2007
    Messages:
    771 (0.23/day)
    Thanks Received:
    50
    Location:
    Athens GREECE
    Hey Mussels, this guy reminds me of the guy obsessed with the 9800GTX, comparing the 4850's 800 SPs with the 128 of the 9800GT... you remember?
    Benetanegia, it's not a given for every company that double shaders > double performance...
    That's very difficult, especially when you redesign the core completely... Actually my guess is that even if Nvidia manages to put out something like DOUBLE GTX295 performance, it will cost like 600+ euros. I even doubt that the GT300 will be that strong, but the price WILL be close to that number...
    And how can you call an architecture a FAIL when it scores really nicely and has such low power consumption compared with the competition??????
     
  4. bobzilla2009

    bobzilla2009 New Member

    Joined:
    Oct 7, 2009
    Messages:
    455 (0.18/day)
    Thanks Received:
    39
    Maybe the design is geared more toward DX11 performance, with current APIs being less of a concern (rightly so). Regardless, driver updates will improve matters a lot, and the card is still fast at the moment anyway, so things will improve :)
     
  5. Mussels

    Mussels Moderprator Staff Member

    Joined:
    Oct 6, 2004
    Messages:
    45,228 (10.34/day)
    Thanks Received:
    12,457
    Location:
    Australalalalalaia.
    it doesn't have double specs.


    it doesn't have double clocks, double ram size, double ram clocks, or a double sized ram bus... the shaders, ROPs and texture fill rate are doubled, but the rest isn't.


    *SOME* aspects of the card are doubled - not all.
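    For concreteness, here are those ratios computed from the published HD 4890 and HD 5870 reference specs (a minimal sketch; it uses the 4890 rather than the 4870 as the baseline since both run an 850 MHz core, and the numbers are vendor nominal figures, not measurements):

        # Which "specs" actually doubled from HD 4890 to HD 5870?
        hd4890 = dict(shaders=800, rops=16, tmus=40, core_mhz=850, mem_mts=3900, bus_bits=256)
        hd5870 = dict(shaders=1600, rops=32, tmus=80, core_mhz=850, mem_mts=4800, bus_bits=256)

        for key in hd4890:
            ratio = hd5870[key] / hd4890[key]
            print(f"{key}: {ratio:.2f}x")  # shaders/rops/tmus: 2.00x; core clock and bus: 1.00x; memory clock: 1.23x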
     
    10 Year Member at TPU
  6. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.04/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    RV870 is not a redesign, far from it. And I say it's a fail, if they knew it from the drawing board, because if they have to double up the silicon to obtain a 50% increase in performance, they won't get far in the future.

    Besides, GT300 will not necessarily be expensive; it won't be expensive to make, not like GT200 or even GT200b. The price will depend entirely on how fast Nvidia wants to recover the R&D money and on whether they decide to come back with a vengeance against RV770... I mean, Ati's cards have doubled specs and have not scaled accordingly; Nvidia has more than doubled specs, and if they scale like they did in every previous generation...
     
  7. theorw

    theorw New Member

    Joined:
    Jul 5, 2007
    Messages:
    771 (0.23/day)
    Thanks Received:
    50
    Location:
    Athens GREECE
    IF you were an ENTHUSIAST, you would have already bought an i7 Extreme and some workstation ASUS mobo for quad-fire 5870s, EACH @ 1.5 volts, EACH on water, EACH @ 1000+ core, and THEN you would try to compare that with quad SLI 295s and post some screens and comment on them... But you look more like an Nvidia FANboy who does research about the performance increase % and claims the 5870 is a disappointment when everyone finds these cards EXCELLENT!
    It's a well known ATI strategy to go NOT for beastly performance but for bang for buck ;)
    Sure, ATI can make a BEAST, but it prefers clever and smart over brute force. If you don't understand this, then there's nothing anyone can say to you... :cool:
    That's IF the GT300 comes into production in time, after those 2% yields of successful chips, right?
     
  8. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.04/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Specs, the true specs, are fill rate, GFLOPS, memory bandwidth... how you obtain them is your decision. You don't need twice the shaders at twice the MHz to obtain twice the performance; you would obtain 4x the performance that way.
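    As a worked example of those "true specs" (a sketch; clocks and bus widths are the published reference figures for the HD 4890 and HD 5870, and GFLOPS uses the usual 2 ops/clock multiply-add counting):

        # peak shader throughput, texel fill rate and memory bandwidth
        def gflops(shaders, core_mhz):
            return shaders * 2 * core_mhz / 1000   # 2 flops/clock per SP (multiply-add)

        def fill_gtexel(tmus, core_mhz):
            return tmus * core_mhz / 1000

        def bandwidth_gbs(bus_bits, mem_mts):
            return bus_bits / 8 * mem_mts / 1000

        print(gflops(800, 850), gflops(1600, 850))                 # 1360 vs 2720 GFLOPS: doubled
        print(fill_gtexel(40, 850), fill_gtexel(80, 850))          # 34 vs 68 GTexel/s: doubled
        print(bandwidth_gbs(256, 3900), bandwidth_gbs(256, 4800))  # 124.8 vs 153.6 GB/s: only ~1.23x

    Note that bandwidth is the one "true spec" that didn't double, which is why several posts below point at memory as the likely bottleneck.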
     
  9. Mussels

    Mussels Moderprator Staff Member

    Joined:
    Oct 6, 2004
    Messages:
    45,228 (10.34/day)
    Thanks Received:
    12,457
    Location:
    Australalalalalaia.
    as someone said before... this card matches a GTX295, but with lower power consumption, and at roughly half the price.


    Benetanegia: how nice of you to do math for me, it's something I suck at. It still doesn't negate the point that the 5870 does not have twice the 'specs' of a 4870/4890 in every way
     
    10 Year Member at TPU
  10. theorw

    theorw New Member

    Joined:
    Jul 5, 2007
    Messages:
    771 (0.23/day)
    Thanks Received:
    50
    Location:
    Athens GREECE
    I DID :D :rockout: :D
     
  11. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.04/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Whatever, you know very well what I meant, and if you don't, I'm really sorry for you.

    I'm a TRUE enthusiast, the kind that likes innovation and technology over everything, even if I couldn't afford it. Even though I can afford it anyway. It's just that I'm 27 and spending $$$ on hardware that I don't need is quite stupid for ME. I've spent in the past, I've bought almost everything in the past, but right now I don't, period. I'd rather spend it on parties than on hardware that I don't need.

    You know what, I like cars too, and planes, go figure, but I don't buy every single one that gets to the market. Do you?

    On topic, I've not said they are not the best cards out there at the moment, but we were expecting more, considering the specs. You can understand that already, or take your Ati flag to the streets, I don't care. I've spent too much time discussing this stupid thing.
     
  12. theorw

    theorw New Member

    Joined:
    Jul 5, 2007
    Messages:
    771 (0.23/day)
    Thanks Received:
    50
    Location:
    Athens GREECE
    You have also spent too much time doing research on the performance increase ratios of previous generations' cards... If you can't understand that in the world we live in almost nothing scales LINEARLY, then that's YOUR PROBLEM, DUDE!!!
    double specs ----> double performance.... NAH.... :nutkick:
     
  13. qubit

    qubit Overclocked quantum bit

    Joined:
    Dec 6, 2007
    Messages:
    12,675 (3.94/day)
    Thanks Received:
    6,057
    Location:
    Quantum Well (UK)
    I'm disappointed with the 5870 too, as it's not greater than an X2 the way the others were in your examples. Note that the GTX 280 & 9800 GX2 were actually roughly matched, though, which made the 280 a bit of a disappointment too. The situation got better with the slightly faster GTX 285, however.
     
  14. kid41212003

    kid41212003

    Joined:
    Jul 2, 2008
    Messages:
    3,587 (1.19/day)
    Thanks Received:
    536
    Location:
    California
    Maybe AMD did this on purpose, because with its current performance it's already the fastest single-GPU card, and maybe they will release the HD5890 after NVIDIA's new gen cards.
     
    Last edited: Oct 27, 2009
  15. Mussels

    Mussels Moderprator Staff Member

    Joined:
    Oct 6, 2004
    Messages:
    45,228 (10.34/day)
    Thanks Received:
    12,457
    Location:
    Australalalalalaia.
    as well as 5890/58x0 x2
     
    10 Year Member at TPU
  16. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.04/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Ugh! I hope that's not the case, that would be ugly, deceiving consumers and all. I don't see them doing that.

    And does the HD5890 really exist? What's it going to be, exactly? Higher clocks? More shaders (unlikely)? I don't see them pulling off another HD4890 with the same percentage clock increase, because that was based on some modifications that must already be present in the HD5870. It would be stupid not to make the best one from the start, IMO.
     
  17. kid41212003

    kid41212003

    Joined:
    Jul 2, 2008
    Messages:
    3,587 (1.19/day)
    Thanks Received:
    536
    Location:
    California
    It's just like CPUs, low clock and high clock speed. It's not necessary for Intel to put out the highest clock possible; it depends on the current market.

    When NVIDIA finalize their products and put them on the market, AMD will still have the HD5890 to counter, maybe with higher memory bandwidth. I'm not so sure myself, just my thoughts.
     
  18. Easo

    Easo

    Joined:
    May 19, 2009
    Messages:
    1,012 (0.38/day)
    Thanks Received:
    180
    Location:
    Latvia
    Sorry, but I will say it simply: you look like an Nvidia fanboy... You are whining over something that counts atm as the best card on the market price/performance wise, goes 1 on 1 with Nvidia's current flagship (GTX295), costs much less, runs cooler, is quieter, eats less electricity and offers a 50% performance increase. Do you know that 50% counts as a FUCKING HUGE INCREASE? Drivers will do even more. Uhhhh, out of breath.
     
  19. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,941 (0.72/day)
    Thanks Received:
    417
    Location:
    Singapore
    Well, looking at it purely statistically, it has required more than 2x the shaders to get 2x the performance before: 3870 to 4870 was 320 to 800 shaders, i.e. 2.5x the shaders for roughly 2x the performance (see the sketch below). Still, I think the main bottleneck is software related, not hardware.
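    A quick version of that ratio (a sketch; the 2x performance figure is the post's premise, not a benchmark):

        shader_ratio = 800 / 320           # HD 3870 -> HD 4870: 2.5x the shaders...
        perf_ratio = 2.0                   # ...for roughly 2x the performance
        print(perf_ratio / shader_ratio)   # ~0.8x scaling per added shader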
     
  20. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.04/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Yet all those calculations that I made just prove that Nvidia didn't even double up the specs and they did double up performance. Something fails in your theory. :nutkick:

    Don't worry, I'll give you a hand. It's called a SCALAR architecture. When you base your designs on that, it takes more silicon to achieve the same peak performance, but the resulting chip has the bad habit of being efficient and scaling almost linearly. The idiot, how dare it? :laugh:

    Besides, many things are linear in life, but above all of them computing is the most linear of all, assuming you get rid of bottlenecks and don't base your design on a 3-dimensional arrangement*.

    * Ati's architecture has to deal with 3 dimensions of parallelism and their related efficiencies (each of which is linear): thread level parallelism, instruction level parallelism and word level parallelism. RV870, for example, has 20 SIMD clusters of 16 shader processors that consist of 5 "ALU"s each, which must be fed with a single instruction word. That's 20x16x5, 3 dimensions. That way it's very common on a typical rendering task that, e.g., one (2, 3...) of the ALUs isn't used, that 3 of the SPs inside the SIMD unit aren't used, and that 4 of the SIMD clusters are wasted because of a context change. That example results in 16x13x4 = 832. Only 832 of the 1600 "SP"s are in use. And don't think for a second that that is an unusual situation... it could be worse, like 20x16x1 = 320.
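    That footnote's occupancy math as a toy model (a sketch; the per-level utilization figures are just the example numbers from the footnote above, not measured data):

        # RV870 as 3 nested levels of parallelism: clusters x SPs x VLIW slots
        CLUSTERS, SPS, ALUS = 20, 16, 5         # 20 x 16 x 5 = 1600 "shaders"

        def active(clusters_used, sps_used, alus_used):
            # ALUs doing useful work when each level is only partly filled
            return clusters_used * sps_used * alus_used

        total = active(CLUSTERS, SPS, ALUS)     # 1600
        typical = active(16, 13, 4)             # footnote's example: 832 (~52% occupancy)
        worst = active(20, 16, 1)               # only 1 of 5 VLIW slots filled: 320 (20%)
        print(total, typical, worst)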

    Yeah, yeah, I know that all the Ati fanboys see me as an Nvidia fanboy. LOL. Funny thing is that this only happens on these forums. I wonder if it has anything to do with the fact that most people find them because of ATItool. :laugh:

    Seriously, I don't have a problem with people calling me an Nvidia fanboy. I'm not an Nvidia fanboy, but since I am the person that says what no one wants to hear, the voice of sanity, the enemy of fanaticism, that automatically makes me the Nvidia fanboy. Facts are facts and the numbers are above; only a blind man could miss them and their meaning, which is as simple as this: for whatever reason the HD5870 doesn't perform on par with what its specs suggest it should, and that disappoints me a lot. There's no interpretation there, there's no bashing, it's not an offense against your beloved brand. It's pure facts, mathematically proven facts.
     
    Last edited: Oct 27, 2009
  21. theorw

    theorw New Member

    Joined:
    Jul 5, 2007
    Messages:
    771 (0.23/day)
    Thanks Received:
    50
    Location:
    Athens GREECE
    I just keep wondering, how come AMD hasn't snatched you up for their labs????? :eek:
    Well, the bottom line is that sales will prove who's right and who's wrong... Won't they???
     
  22. AddSub

    AddSub

    Joined:
    Aug 9, 2006
    Messages:
    1,001 (0.27/day)
    Thanks Received:
    152
    Definitely! The pre-release propaganda was pretty good, though. My hat is off to AMD. Bravo! I sold my GTX 280s in anticipation of the 5xxx series, but after reading around 25+ reviews (pretty much every 5850/5870 review out there, crossfire or not) I came out pretty unimpressed. Crossfire scaling is pretty spotty or downright non-existent in many titles, or even atrociously buggy. For example, many reviews show one 5870 being faster than two 5870 cards in crossfire in some games; same goes for the 5850 and other 5xxx cards. Driver issues are responsible for this, is my guess, since AMD/ATI GPUs have never had decent scaling, compared to SLI at least (something I was hoping the 5xxx series would change). Anyway, knowing AMD's track record when it comes to fixing this sort of major thing (I've owned nearly a dozen Radeon cards to date), they generally fix these kinds of things on generational jumps, with minor bugs left to be fixed in between. In other words, I don't expect any scaling issues to be fixed until the 6xxx series. AMD simply does not have the resources to do otherwise.

    Single-GPU wise, again, performance is pretty spotty. Some reviews show the 5850/5870 outperforming nearly every GPU out there, while other reviews show the same GPUs getting trashed by mid-range GTX2xx nVidia offerings. I was going to attribute this to driver issues as well; however, in many of those conflicting reviews the reviewers used nearly identical platforms, specs-wise, so I'm going to go with simple dishonesty in those cases. I mean, if both reviewers are using the same hardware+software down to identical driver revision numbers, yet one shows a consistent 25% to 40% skew, then somebody is simply lying their ass off. It is getting to the point where you can't trust 50% (easily!) or more of the "review" sites out there. It seems anybody with an agenda and $10 to spare for domain registration fees can become a "reviewer".
     
    10 Year Member at TPU
  23. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.04/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    How are sales going to prove how a card scales? That's absurd.

    FYI, I'm disappointed because this is the first time that a new gen card (for a new DX version) isn't twice as fast (or close to it) as the previous generation's fastest card. It's not even close; it's just 25% faster than the GTX285, and that puts me at a crisis point. On one hand I don't want GT300 to be much faster than the Ati cards, so that there are price wars, but on the other hand I want GT300 to be what its specs suggest it is, because I enjoy knowing there's amazing hardware and technology out there. I enjoy news about NASA and space even though I know I will never be there. I also think that's the best way of moving new technology forward and down to mainstream cards, and the only way to prevent stagnation, which is soooo much needed nowadays.
     
  24. phanbuey

    phanbuey

    Joined:
    Nov 13, 2007
    Messages:
    5,290 (1.63/day)
    Thanks Received:
    1,023
    Location:
    Miami
    LOL... are you kidding? Look where ATI came from... Nvidia's architecture is BARELY twice as fast as the 8800GTX, lol... they have made a 100% improvement since 2006.

    ATI came from the R600, which SUCKED, and are now officially ahead of NV in the single GPU department. There are doubts that NV can even make the GT300 properly. ATI has lots of clock headroom left too, and the silicon package is fairly tiny for the performance... if anything, in performance per die size ATI is in the lead.

    I love NV, but they just got caught and passed... and IF the GT300 comes out by Christmas, ATI will counter with an X2. Basically, even if the GT300 is double the speed of the GTX285, they're still fucked - the X2 will be out...

    ATI's architecture has clearly shown the progression in which they will be moving. Their R&D cycle seems much faster than NV's, which means even if their cards don't give a 100% boost every generation, they will still be faster than the competing NV solution at the time of release - because of the simple cycle: shrink, tweak, add more shaders, throw two/three chips on a board, shrink, tweak...
     
  25. Bo_Fox

    Bo_Fox New Member

    Joined:
    May 29, 2009
    Messages:
    480 (0.18/day)
    Thanks Received:
    57
    Location:
    Barack Hussein Obama-Biden's Nation
    Ahh... not to sound like a broken record, but I'd expect a 5870 to be at least as fast as two 4890's in Crossfire in every single game. Yep, in every single one, because crossfire scaling is not perfect and carries some additional CPU overhead.

    A 5870 is essentially two 4890's in one chip, as if they were "hardware"-crossfired. As long as it preserved the memory bandwidth for each 4890 "half", none of the performance should ever be lost.

    ATI is just making a big mistake once again, and it should be much more "fix-able" this time around than the last time, when they were trying out a very new DX10 architecture with the HD 2900XT. Now the core is really beefed up, but they didn't even bother widening the memory bus to 512-bit to double the bandwidth. Heck, they could have kept the same speed GDDR5 (3900MHz effective) to save a bit of money rather than using 4800MHz. But then they'd have to use 16 memory chips for 512-bit, and I'm not sure there are less dense GDDR5 chips available that would let them build both 1GB and 2GB cards with 16 chips.
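    The arithmetic behind that trade-off (a sketch; the shipping bus widths and data rates are the published figures, while the 512-bit card is this post's hypothetical):

        # If a 5870 is "two 4890s in one chip", how much bandwidth does each half get?
        hd5870_bw = 256 / 8 * 4800 / 1000   # 153.6 GB/s, shared by both "halves"
        hd4890_bw = 256 / 8 * 3900 / 1000   # 124.8 GB/s for a real 4890
        per_half = hd5870_bw / 2            # 76.8 GB/s per "half"
        print(per_half / hd4890_bw)         # ~0.62: each half sees ~62% of a 4890's bandwidth
        print(512 / 8 * 3900 / 1000)        # hypothetical 512-bit bus with the cheaper 3900 MT/s chips: 249.6 GB/s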

    Well, ATI is probably thinking that we don't need 512-bit bandwidth until Nvidia rolls out their GT300 cards. Perhaps it's just the business way of playing cards: it's best not to lay them all down just yet. But Nvidia did it with their 8800GTX, so ATI should just go ahead full steam and reap greater rewards for the time being!

    Now, I really think it has a lot to do with the 5870X2. When Nvidia did the 8800GTX, Nvidia did not have a dual-chip card in mind.

    Perhaps ATI thought that doing 512-bit and 16 chips per GPU on a single card with 2 GPUs (therefore 32 memory chips total) would have been a nightmare like the Voodoo5 6000, so ATI decided to design the 5870 in a way that a 5870X2 would be impressive enough, doubling the performance, to warrant healthy sales before a more powerful single-GPU card is released.

    Yep, I think that's it.
     
    a_ump and phanbuey say thanks.
