
HD 5870 Discussion thread.

Discussion in 'AMD / ATI' started by a_ump, Oct 25, 2009.

Thread Status:
Not open for further replies.
  1. Zubasa

    Zubasa

    Joined:
    Oct 1, 2006
    Messages:
    3,993 (1.10/day)
    Thanks Received:
    466
    Location:
    Hong Kong
    At the same time, a 512-bit bus with 16 high-speed GDDR5 chips would greatly add to the cost of the card.
    The question in the end is: is the extra performance worth the increased cost?
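
    A rough back-of-envelope comparison (a Python sketch; it assumes 32-bit-wide GDDR5 devices and the 5870's 4.8 Gbps effective data rate) of what a 512-bit bus would buy in raw bandwidth, and how many chips it takes:

    Code:
    # Rough bandwidth comparison: a 256-bit bus vs. a hypothetical 512-bit bus.
    # 4.8 Gbps effective matches the HD 5870's reference memory; the chip count
    # assumes 32-bit-wide GDDR5 devices (an assumption for illustration only).

    def gddr5_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
        """Peak theoretical bandwidth in GB/s."""
        return bus_width_bits * data_rate_gbps / 8

    for bus in (256, 512):
        chips = bus // 32  # one 32-bit GDDR5 chip per 32 bits of bus width
        bw = gddr5_bandwidth_gb_s(bus, 4.8)
        print(f"{bus}-bit bus: {chips} chips, {bw:.1f} GB/s peak")

    # 256-bit bus:  8 chips, 153.6 GB/s peak
    # 512-bit bus: 16 chips, 307.2 GB/s peak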
     
  2. wolf

    wolf Performance Enthusiast

    Joined:
    May 7, 2007
    Messages:
    5,558 (1.64/day)
    Thanks Received:
    855
    here is a link to another forum where I put up the results http://forums.whirlpool.net.au/forum-replies-archive.cfm/1262546.html

    The post that begins "GT200 VS RV770, how well do they scale?" shows both GT200 and RV770 scaling in excess of 100% at times, if memory serves.

    And Bo Fox, I tend to agree with that last statement: the HD 58XX is not an exact doubling of everything, so it shouldn't be seen as trying to be double. Heck, if they reach 80% more performance than a 4890 through drivers, that's already fantasmagorical.
     
  3. Binge

    Binge Overclocking Surrealism

    Joined:
    Sep 15, 2008
    Messages:
    6,981 (2.41/day)
    Thanks Received:
    1,754
    Location:
    PA, USA
    Yes. It's meant to target the enthusiast gamer, so YES YES YES.
     
    Bo_Fox says thanks.
  4. Zubasa

    Zubasa

    Joined:
    Oct 1, 2006
    Messages:
    3,993 (1.10/day)
    Thanks Received:
    466
    Location:
    Hong Kong
    Not really, the card for that is going to be the 5970 as ATi has planned.
     
  5. Binge

    Binge Overclocking Surrealism

    Joined:
    Sep 15, 2008
    Messages:
    6,981 (2.41/day)
    Thanks Received:
    1,754
    Location:
    PA, USA
    If you don't want the enthusiast gamer card you can buy a 5850 and enjoy the price/perf. Don't tell me the 5870 is not their enthusiast gamer card when it supports 3 monitors with Eyefinity technology for an "unparalleled gaming experience".
     
    Last edited: Nov 2, 2009
  6. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    9,275 (2.35/day)
    Thanks Received:
    1,788
    5XXX series seems to be bandwidth limited.


    That is a myth. For all intents and purposes it isn't limited by the GDDR5, the PCIe slot, or anything except the software, and yes, the design. If only one shader of the five in a cluster is used, you get a whopping 20% utilization of that cluster.

    Over 1TB/s of bandwidth on the L1 cache is huge; really, ATI's design is the current limitation. They will probably release a XX90-series single-core card or the like based on really good cores, with a small, easy-to-implement change in the design (designs change in these just as in CPUs, hence the almighty stepping numbers) that will either increase yields of excellent cores and clocks, or allow a new generation of speed based on the same core, as we saw with the 4890.

    I am really hoping to see manufacturers push the use of DX11 features more than they did with NV's hand in the pie for 10.1. Fanbois, deny it all you want, but there is no doubt in my mind that NV pushed for the removal of 10.1 features in certain games. DX11, with no one cheating (NV), has the possibility of bridging the gap for a faster PC in all kinds of intensive applications and allowing a cheaper upgrade path for users in the future.

    F@H is already using it, as are quite a few other intensive applications. Why not allow everything to use it? Audio recording with double-precision floating-point, 32-bit, 96K sample rate audio that can be worked with in real time. Mmmmm. Video rendering and upscaling is done to a point; why not use more and allow for future expansion, and better results with lower-bandwidth content now?
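
    A minimal sketch (Python) of the shader-utilization point above, using the commonly quoted Cypress figures of 320 five-wide clusters; the filled-slot counts are illustrative:

    Code:
    # Sketch of the shader-utilization point above: Cypress (HD 5870) packs its
    # 1600 stream processors as 320 five-wide VLIW clusters, so a cluster that
    # only issues one of its five slots per instruction runs at 20% of peak.

    CLUSTERS = 320          # commonly quoted Cypress figure
    SLOTS_PER_CLUSTER = 5   # VLIW5: four simple ALUs plus one special-function unit

    def cluster_utilization(filled_slots):
        """Fraction of peak ALU throughput when only `filled_slots` of 5 issue."""
        return filled_slots / SLOTS_PER_CLUSTER

    for filled in (1, 3, 5):
        busy = CLUSTERS * filled
        print(f"{filled}/5 slots filled -> {cluster_utilization(filled):.0%} of peak "
              f"({busy} of {CLUSTERS * SLOTS_PER_CLUSTER} stream processors busy)")
    # 1/5 slots filled -> 20% of peak (320 of 1600 stream processors busy)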
     
  7. Hayder_Master

    Hayder_Master

    Joined:
    Apr 21, 2008
    Messages:
    5,210 (1.71/day)
    Thanks Received:
    665
    Location:
    IRAQ-Baghdad
    I just want to know where the driver support is for the bridge between the two GPUs on the 4870X2, or was that just talk?
     
  8. Bo_Fox

    Bo_Fox New Member

    Joined:
    May 29, 2009
    Messages:
    480 (0.18/day)
    Thanks Received:
    57
    Location:
    Barack Hussein Obama-Biden's Nation
    My last post (or last 2 posts) commented on CF/SLI being more than 100% efficient if a certain bottleneck scenario has been overcome.

    If two 4890's in CF perform at 100fps in a certain game, a 5870 does only 60fps, and a single 4890 does 40fps, then let's see if the 5870 reaches 100fps once it has the total memory bandwidth of two 4890's. The CPU should not be the bottleneck at 100fps at least, since that figure has already been achieved with two 4890's, which use up a bit more CPU cycles anyway. If the 5870 still does not do 100fps with 512-bit worth of bandwidth, then we might be left with blaming either latency (an increase of bandwidth coming at the cost of cache latency) or poorly optimized drivers, even though the architecture has remained largely unchanged since the R600.
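
    A quick sanity check of the hypothetical numbers in that paragraph (a Python sketch; the fps values are the ones quoted above, not measurements):

    Code:
    # Back-of-envelope check of the hypothetical scenario above. All fps values
    # are the ones quoted in the post, not measurements.

    single_4890 = 40.0   # fps, one HD 4890
    cf_4890     = 100.0  # fps, two HD 4890s in CrossFire
    hd5870      = 60.0   # fps, one HD 5870

    scaling    = cf_4890 / single_4890        # 2.50x from adding a second card
    efficiency = cf_4890 / (2 * single_4890)  # 1.25 -> "more than 100% efficient"
    cf_lead    = cf_4890 / hd5870 - 1         # ~0.67 -> CF leads the 5870 by ~67%

    print(f"CF scaling {scaling:.2f}x, efficiency {efficiency:.0%}, "
          f"CF lead over the 5870 {cf_lead:.0%}")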

    ??? :confused: What do you mean? Are you asking if there is a specific driver for the crossfire setup of a 4870X2? Crossfire is the same thing whether it's a 4870X2 or two 4890 cards connected by a bridge or just PCI-E slots.
     
  9. wolf

    wolf Performance Enthusiast

    Joined:
    May 7, 2007
    Messages:
    5,558 (1.64/day)
    Thanks Received:
    855
    Maybe he means the sideport they never turned on
     
  10. Bo_Fox

    Bo_Fox New Member

    Joined:
    May 29, 2009
    Messages:
    480 (0.18/day)
    Thanks Received:
    57
    Location:
    Barack Hussein Obama-Biden's Nation
    Well anyways..

    Do you guys remember a couple of review sites doing a test on an underclocked X1950XTX vs. an X1900XTX at stock? The new GDDR4 memory on an X1950XTX was underclocked to exactly match the bandwidth of an X1900XTX, to see if GDDR4 had higher latencies associated with it. Identical performance showed that the latency of the memory also had to be the same as with GDDR3.

    It'd be nice if somebody could test a 5870 against two 4890's with reduced memory bandwidth so that the total is exactly the same as that of a 5870. If 2x 4890's no longer beat a 5870 in *any* of the games, then we'd know for sure and could start a riot for a faster card with 512-bit memory, like the HD 2900 XT, which actually sported a 512-bit bus.
     
  11. Zubasa

    Zubasa

    Joined:
    Oct 1, 2006
    Messages:
    3,993 (1.10/day)
    Thanks Received:
    466
    Location:
    Hong Kong
    Well the 5750 also supports Eyefinity for that matter. The 5670 will most likely support Eyefinity and I don't see those as "enthusiast gamer cards". :roll:
     
    Last edited: Nov 3, 2009
  12. FreedomEclipse

    FreedomEclipse ~Technological Technocrat~

    Joined:
    Apr 20, 2007
    Messages:
    15,761 (4.61/day)
    Thanks Received:
    3,809
    Location:
    London,UK
    Is a 5870 worth getting rid of 2 4870's for??? It'll be a little while just yet, but I've managed to secure a buyer who's willing to wait on my 2 4870's. Probably gonna pawn them off at £80 apiece, which I think is pretty reasonable despite it being possible to get them brand new in the shops for £100 - they both have custom coolers on them, and in theory because of that I should be selling them for more.
     
  13. wolf

    wolf Performance Enthusiast

    Joined:
    May 7, 2007
    Messages:
    5,558 (1.64/day)
    Thanks Received:
    855
    I'd do it in a heartbeat. Having had 2x 4870 in CF, I can tell you a 5870 tears them a new one in terms of your gaming experience, at least from my experience.

    come on 9.12.....;)
     
  14. FreedomEclipse

    FreedomEclipse ~Technological Technocrat~

    Joined:
    Apr 20, 2007
    Messages:
    15,761 (4.61/day)
    Thanks Received:
    3,809
    Location:
    London,UK
    Great, hopefully by then the Sapphire Toxic versions will be out; if not, then I'll just grab the cheapest 1GB 5870 off the shelf & slap a 3rd-party cooler on it.
     
  15. grimeleven New Member

    Joined:
    Oct 10, 2009
    Messages:
    19 (0.01/day)
    Thanks Received:
    8
    I believe they did; still, they can't be blamed for it since no one knows for sure, so they can get away with it :p.
    I reported their issue with the 4800 series overheating (several other people did also) and they never replied to me, or else they would have had to find an excuse/answer for not having found out about the issue themselves. Instead they made the fix on the 5800 series and everyone is happy, without a scandal.

    It's also possible that all those shaders are not utilized to their full potential.. I mean, come on, 2.72 TFLOPS of compute performance? As explained by Amdahl's law, there are several limitations on how this can achieve its peak performance. The 5800 might just have reached that limitation: either the actual games aren't coded to take advantage of such a powerhouse for calculations, or it's a driver problem, or else something in the design but not the memory bandwidth. We might see some nice improvement with DX11-enabled games; maybe they can now use those shaders to handle the new types of shaders introduced with DX11, surpassing previous DX10 hardware by a large margin.

    As explained on wiki, if the games aren't coded to be parallelized then all that power is sitting idle and waiting. What if Furmark discovered that flaw in the design? All those normally unused shaders would then be fully loaded, but those 4800s did not supply enough voltage.. and AMD didn't want this.
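
    For reference, a small worked example of Amdahl's law as mentioned above (Python; the parallel fractions are made-up illustrations, not measured values for any game):

    Code:
    # Amdahl's law, as referenced above: the speedup from spreading a frame's
    # work over more shader units is capped by whatever fraction of that work
    # cannot be parallelized. The parallel fractions below are made-up examples.

    def amdahl_speedup(parallel_fraction, n_units):
        """Speedup with n_units workers when only part of the work scales."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

    for p in (0.80, 0.95, 0.99):
        doubled = amdahl_speedup(p, 2)   # e.g. 800 -> 1600 stream processors
        ceiling = 1.0 / (1.0 - p)        # limit as n_units -> infinity
        print(f"parallel fraction {p:.0%}: 2x units -> {doubled:.2f}x, "
              f"ceiling -> {ceiling:.0f}x")
    # 80% parallel: doubling the units gives only 1.67x; 99% parallel gives 1.98x.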

    *Edit.. saw your post*
    Fully agree with you on this. They did it in Assassin's Creed and who knows what else, and their shader path is totally different from AMD's, which they then "requested" to be implemented into the game's code so that the competitor would be disadvantaged (can't find that dev image that illustrates it).
     
    Last edited: Nov 3, 2009
  16. Bo_Fox

    Bo_Fox New Member

    Joined:
    May 29, 2009
    Messages:
    480 (0.18/day)
    Thanks Received:
    57
    Location:
    Barack Hussein Obama-Biden's Nation
    Good point, but it's only speculation until we do some real testing. I'll try to find a direct comparison between a 5870 and 2x 4890's in CF.

    Could somebody lend me a 5870 and 2x 4890's for quick testing? :eek: I could show you how the memory bandwidth is so much of a bottleneck for a 5870 in several of the games (perhaps up to 60% of the newer games).

    Only after we know for sure can we finally start to remove some uncertainties.

    In some of the games, an X1950XTX performed around 8-10% better than an X1900XTX, due to the increase in memory bandwidth alone, and most (or nearly all) of us thought that an X1900XTX already had more than enough bandwidth.
     
  17. zithe

    zithe

    Joined:
    Jun 16, 2008
    Messages:
    3,154 (1.05/day)
    Thanks Received:
    424
    Location:
    North Chili, NY
    There's really nothing wrong with the framerates, but my friend is constantly dragging me over to manage issues and I have the same answer for him: "It's a driver problem, and you have the best drivers. We've tried all of them and these are what you get for now."
    A lot of the stuttering has been fixed, though. We just have this issue with Source games where you have to keep alt+tabbing to get AA to enable. NFS: Shift has similar issues.

    I noticed when testing the difference between my 8800GTX and his 5850 that the minimum FPS in CoH was the same for both in the bench: 9. Seems a little low.
     
  18. Binge

    Binge Overclocking Surrealism

    Joined:
    Sep 15, 2008
    Messages:
    6,981 (2.41/day)
    Thanks Received:
    1,754
    Location:
    PA, USA
    What does that have to do with anything? I'm talking about how the 5870 is marketed. It's not marketed as anything but an enthusiast gamer card that has triple monitor support. The 5750 and 5670 will have it as well, but they aren't marketed to run Dirt 2 on 3 screens with decent frames. I said the 5870 is an enthusiast gamer card. If you want to mince words then at least get the flavor right, and take my quotes in context.
     
  19. Zubasa

    Zubasa

    Joined:
    Oct 1, 2006
    Messages:
    3,993 (1.10/day)
    Thanks Received:
    466
    Location:
    Hong Kong
    We have yet to see how optimized Dirt 2 is, and we don't know much about the 5670.
    But as far back as I remember, the enthusiast market has usually referred to cards over $400 since the X1800 days.
    That includes the 8800GTX/Ultra/GTX280 at launch, and the HD 4870X2 and 4850X2 to a certain extent.
    The 5750 is still a decent gaming card, mind you, and it may well be possible that it can run Dirt 2 on 3 monitors at reasonable settings.
     
  20. bobzilla2009 New Member

    Joined:
    Oct 7, 2009
    Messages:
    455 (0.18/day)
    Thanks Received:
    39
    I'd imagine 3 screens at any decent resolution that would warrant 3 screens would choke a 5750, tbh. It's not a slight against the card, but the resolutions we're talking about are higher than 2560x1200 across the 3 screens (for it to be worthwhile at least). I can't see the 5750 doing that with anything other than low-medium settings with no AA.
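
    A rough pixel-count comparison (Python sketch; the triple-screen layouts are illustrative Eyefinity configurations, not figures from the post):

    Code:
    # Rough pixel counts for the resolutions being discussed. The triple-screen
    # layouts are illustrative Eyefinity configurations, not figures from the post.

    resolutions = {
        "2560x1600 (single 30-inch)": (2560, 1600),
        "3x 1680x1050 (5040x1050)":   (5040, 1050),
        "3x 1920x1200 (5760x1200)":   (5760, 1200),
    }

    base = 1920 * 1080
    for name, (w, h) in resolutions.items():
        pixels = w * h
        print(f"{name}: {pixels / 1e6:.1f} MP "
              f"({pixels / base:.1f}x the pixels of 1920x1080)")
    # 2560x1600: 4.1 MP (2.0x)   5760x1200: 6.9 MP (3.3x)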

    Still, maybe Dirt 2 will run and scale fantastically well, and considering Codemasters' generally pro-ATI approach (ATI cards have always been better in GRID, for example), maybe it will be a possibility. But personally, if I had spent the money on 3 monitors for gaming, I would not have skimped on the GPU and would have gone for the 5850 or 5870 :) [or waited for the 5970 or whatever it's going to be called].

    Also, the HD 5870 is DEFINITELY an enthusiast card in the UK; it costs us about $490 (~£300). It'll be nice when I eventually move to the US (aiming to go into the processor industry on a research basis :) ) - you guys get everything so much cheaper!
     
  21. Bo_Fox

    Bo_Fox New Member

    Joined:
    May 29, 2009
    Messages:
    480 (0.18/day)
    Thanks Received:
    57
    Location:
    Barack Hussein Obama-Biden's Nation
    There's only been one hardware review site (after checking more than 20) that directly compares a 5870 against 2x 4890's in CF:

    Many thanks to FiringSquad, an excellent review site (http://www.firingsquad.com/hardware/ati_radeon_hd_5870_performance_preview); here are the benchmark snapshots from FS. A Core i7 975 @ 3.33 GHz was used, but Crossfire has additional CPU overhead, so CF should be limited by the CPU before anything else, and 2560x1600 is chosen here to ensure that the CPU bottleneck is avoided in as many games as possible. 2560x1600 is also a good indicator of upcoming games next year, as some of the games out right now are not so demanding on these cards.

    [benchmark chart]

    A 5870 does 72.3 fps
    A 4870X2 does 82.5 fps
    2x 4890's CF does 85.8 fps (18.7% increase over a 5870)


    [benchmark chart]

    A 5870 does 58.6 fps
    A 4870X2 does 61.7 fps
    2x 4890's CF does 72.4 fps (23.5% increase over a 5870)


    [benchmark chart]

    A 4870X2 does 31.3 fps
    A 5870 does 33.8 fps
    2x 4890's CF does 37.1 fps (18.5% increase over a 5870)


    [benchmark chart]

    A 4870X2 does 20.2 fps
    A 5870 does 20.6 fps
    2x 4890's CF does 23.6 fps (16.8% increase over a 5870)


    [benchmark chart]

    A 5870 does 47.8 fps
    A 4870X2 does 50.6 fps
    2x 4890's CF does 61.2 fps (28% increase over a 5870)


    [benchmark chart]

    A 5870 does 32.1 fps
    A 4870X2 does 35 fps
    2x 4890's CF does 39.4 fps (18.7% increase over a 5870)


    [benchmark chart]

    A 5870 does 63 fps
    A 4870X2 does 63.2 fps
    2x 4890's CF does 73.8 fps (22.7% increase over a 5870)


    [benchmark chart]

    A 5870 does 72.6 fps
    A 4870X2 does 88.9 fps
    2x 4890's CF does 101 fps (39.1% increase over a 5870)


    [benchmark chart]

    A 5870 does 72.1 fps
    A 4870X2 does 79.2 fps
    2x 4890's CF does 92.7 fps (28.6% increase over a 5870)


    [benchmark chart]

    A 5870 does 43.8 fps
    A 4870X2 does 49.5 fps
    2x 4890's CF does 58.3 fps (33.1% increase over a 5870)


    [benchmark chart]

    A 5870 does 55 fps
    A 4870X2 does 74 fps
    2x 4890's CF does 89 fps (61.8% increase over a 5870)!!!!!


    [benchmark chart]

    A 4870X2 does 52.3 fps
    A 5870 does 54.6 fps
    2x 4890's CF does 61.3 fps (17.2% increase over a 5870)


    The results are also quite similar with 8x AA instead of 4x AA, where 2x 4890's remain the undisputed king over a 5870. A 5870 does gain about 2% overall on a 4870X2 when using 8x AA, but that is still not enough to beat the 4870X2 overall.

    In every single game of the benchmark test suite done by Firingsquad, a 5870 has lost to 2x 4890's in CF. Sometimes, 2x 4890's in CF beats out a 5870 by more than 50-60%.
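
    A quick aggregate of the per-game deltas quoted above (Python; the percentages are the ones listed in this post, nothing re-measured):

    Code:
    # Quick aggregate of the per-game deltas quoted above
    # (2x HD 4890 CF vs. a single HD 5870, FiringSquad numbers).

    from statistics import mean

    cf_lead_percent = [18.7, 23.5, 18.5, 16.8, 28.0, 18.7,
                       22.7, 39.1, 28.6, 33.1, 61.8, 17.2]

    print(f"games: {len(cf_lead_percent)}")
    print(f"average CF lead over the 5870: {mean(cf_lead_percent):.1f}%")
    print(f"range: {min(cf_lead_percent)}% to {max(cf_lead_percent)}%")
    # average CF lead over the 5870: 27.2%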

    Are the drivers to blame? Many would like to hope for a miracle boost in performance from driver optimizations, but ATI has never before delivered a 15% increase across the board from drivers alone. Nvidia has done it a couple of times in the past several years, where a new driver set brought a 10-20% increase in performance in a handful of games, usually after optimizing for a new GPU architecture or for new graphical features in games. However, in the case of 4890's in Crossfire and a 5870, both are very similar architectures, if not the same minus DX11.

    A 5870 chip actually looks like two 4890's fused into one chip, with two halves of 800 shader units each. In theory, it should perform identically to two 4890's, given the identical 850 MHz clock speed and exactly 2x the number of shader units, texture mapping units (TMUs), and render back-ends (ROPs). Actually, in theory, a 5870 should perform better, as it removes the dependence on Crossfire.
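
    A side-by-side of that "two 4890's fused into one chip" view (Python sketch; the specs are the commonly quoted reference figures and should be treated as approximate):

    Code:
    # Side-by-side of the "two 4890s fused into one chip" view. Reference specs
    # as commonly quoted; treat them as approximate.

    specs = {
        #                     HD 5870   2x HD 4890 (combined)
        "stream processors": (1600,     2 * 800),
        "texture units":     (80,       2 * 40),
        "ROPs":              (32,       2 * 16),
        "core clock (MHz)":  (850,      850),
        "memory bw (GB/s)":  (153.6,    2 * 124.8),  # 256-bit @ 4.8 vs. 3.9 Gbps
    }

    for name, (hd5870, two_4890) in specs.items():
        note = "  <-- the one resource that does not double" if hd5870 < two_4890 else ""
        print(f"{name:18s} 5870: {hd5870:>7}   2x 4890: {two_4890:>7}{note}")
    # Everything doubles except memory bandwidth: 153.6 GB/s vs. 249.6 GB/s combined.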

    Let's see whether 2x 4890's would no longer beat a 5870 in all of the above games if, and only if, the total memory bandwidth of both cards were exactly the same as that of a 5870.

    That would mean downclocking the memory all the way down from 3.9GHz to 2.4GHz effective for each 4890 card, so that the total bandwidth matches that of a 5870 card with 4.8GHz memory.
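
    The downclock arithmetic, spelled out (Python; assumes both cards use a 256-bit memory bus):

    Code:
    # The downclock target above, worked out: the per-card bandwidth that makes
    # two 4890s total exactly one 5870's worth. Assumes 256-bit buses on both.

    BUS_BITS = 256
    hd5870_bw       = BUS_BITS * 4.8 / 8   # 153.6 GB/s at 4.8 Gbps effective
    target_per_4890 = hd5870_bw / 2        # 76.8 GB/s each
    target_rate     = target_per_4890 * 8 / BUS_BITS

    print(f"HD 5870: {hd5870_bw:.1f} GB/s -> each 4890 needs {target_per_4890:.1f} GB/s")
    print(f"required effective data rate: {target_rate:.1f} Gbps (down from 3.9 Gbps)")
    # required effective data rate: 2.4 Gbps -- matching the 2.4GHz figure above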

    Only then can we know for sure...


    EDIT: A grand hypothesis: I expect that two 4890's in crossfire will not perform any better than a 5870 in any of the above games if the total memory bandwidth is reduced to that of a 5870. However, the performance would be within 96-100% of a 5870 in several games.
     
    Last edited: Nov 4, 2009
    Mussels and Fishymachine say thanks.
  22. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,941 (0.73/day)
    Thanks Received:
    417
    Location:
    Singapore
    First of all, as 1313231 people before me said, it's not the memory that's the limiting factor. We're talking about over 150 gigaBYTES a second here. That's 150MB per millisecond (for a theoretical maximum of 1000fps). Now I'm not very knowledgeable in the field, but I highly doubt that a single frame can be over 150MB. Lower that to a more realistic number like 100 fps and you'd end up with 1.5GB of bandwidth per frame...
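
    The per-frame arithmetic from that paragraph, spelled out (Python sketch; note that peak bandwidth divided by frame rate is a ceiling on traffic per frame, since textures and framebuffer data are typically touched many times per frame):

    Code:
    # The per-frame arithmetic from the post above, spelled out.

    bandwidth_gb_s = 150.0  # ~150 GB/s of peak memory bandwidth

    for fps in (1000, 100, 60):
        per_frame_mb = bandwidth_gb_s / fps * 1000  # GB/s -> MB available per frame
        print(f"at {fps:4d} fps: {per_frame_mb:,.0f} MB of bandwidth per frame")
    # at 1000 fps:   150 MB per frame
    # at  100 fps: 1,500 MB (~1.5 GB) per frame
    # at   60 fps: 2,500 MB (~2.5 GB) per frame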

    Secondly you can't simply compare single card results to CFX ones because the two use entirely different rendering algorithms.
     
    Last edited: Nov 3, 2009
  23. Bo_Fox

    Bo_Fox New Member

    Joined:
    May 29, 2009
    Messages:
    480 (0.18/day)
    Thanks Received:
    57
    Location:
    Barack Hussein Obama-Biden's Nation
    Actually, 1328234023509203975 people said that the world was flat! :nutkick::D

    Both are nearly identical architectures (other than DX11, it's practically identical). If anything, it's software Crossfire that should be less efficient than a hardware solution with both processing cores fused into one physical chip.


    EDIT: Also, there's a huge difference between a 4870 with GDDR5 memory and a 4850 with stock GDDR3 memory and a core overclocked to the same speed as a 4870. Yep, 150GB/s of memory bandwidth can actually be a bottlenecking factor in several games for chips that can do 1.2 TFLOPS, or whatever a good measure of the chip's performance is. My post before my last one above should have already dispelled your theory.. I'll go ahead and quote the sentence:
     
    Last edited: Nov 3, 2009
  24. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,941 (0.73/day)
    Thanks Received:
    417
    Location:
    Singapore
    I'm not talking about the architecture or the API, but about the fact that two separate cards render two separate frames (or part-frames) simultaneously, in this process removing some software limitations. Currently software (both drivers and game engines) hasn't evolved enough to successfully utilize all available resources, just like your fancy i7 isn't 100% stressed during gaming...
     
  25. Bo_Fox

    Bo_Fox New Member

    Joined:
    May 29, 2009
    Messages:
    480 (0.18/day)
    Thanks Received:
    57
    Location:
    Barack Hussein Obama-Biden's Nation
    That would be micro-stuttering, which is no longer an issue as of 48xx and GT200 architecture.

    You cannot exclude CF/SLI solutions from this "utilization" issue any more than you can a bigger chip, which can also render more frames/part-frames thanks to 2x more shader units, TMUs, and ROPs, just as can be done with two separate chips (though less efficiently in real-world applications).

    EDIT: (I edited my above post before this one so that it answers your #122 post more fully).
     
    Last edited: Nov 3, 2009
