
Official AMD Radeon 6000 Series Discussion Thread

Discussion in 'AMD / ATI' started by TechPowerUp Forums, Jul 28, 2010.

  1. sliderider

    Joined:
    Sep 20, 2010
    Messages:
    282 (0.11/day)
    Thanks Received:
    47
    The GTX 500 is a refresh of the GTX 400; not much has changed at all. Clocks are about 10% higher, and the GTX 480 also had a shader cluster laser-cut, same as the GTX 460, so they might have fixed that, though it doesn't seem likely, otherwise it should be a helluva lot faster than testing is showing. Testing shows only a 4-10% improvement over the GTX 480, but if all the clocks are boosted 10%, that alone should account for the difference. A 10% clock boost plus an additional shader cluster should be 20-25% faster, so something is still wrong with the Fermi picture. The extra shader cluster is either still locked, or they found some way to unlock it but hobbled the clusters so they aren't doing as much work as before. It's still mostly the same GF100 core with a few efficiencies stolen from GF104 to reduce power and heat, that's all. Move along, nothing to see here.
     
    Last edited: Jan 7, 2011
  2. campb292 New Member

    Joined:
    Mar 8, 2010
    Messages:
    19 (0.01/day)
    Thanks Received:
    2
    Quite a bit changed. Prior to the 580 launch, the single most powerful card was the dual-GPU 5970. Now the most powerful card is a tossup between the $600 dual-GPU 5970 and the single-GPU 580. The 580 does what it takes AMD a two-GPU card to do.
     
  3. Kaiser Kraus

    Kaiser Kraus

    Joined:
    May 31, 2010
    Messages:
    62 (0.02/day)
    Thanks Received:
    47
    Yeah and it only took them a year just to do that....:roll:
     
  4. campb292 New Member

    Joined:
    Mar 8, 2010
    Messages:
    19 (0.01/day)
    Thanks Received:
    2
    I like the competition between the two - it is good for business and good for development. If AMD can get back in the race I will happily float back over that way... but right now nvidia is just ahead in development, performance, technology, and outlook.
     
  5. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.27/day)
    Thanks Received:
    69
    This is such utter bullshit. The cards are competing on price; dollar for dollar it's neck and neck. If you want to go with performance based on die size, AMD is slaughtering Nvidia. So from a raw technology standpoint, AMD could hand Nvidia its ass if it chose to.
     
  6. sliderider

    Joined:
    Sep 20, 2010
    Messages:
    282 (0.11/day)
    Thanks Received:
    47
    Yeah, on a die almost twice as big as the Cypress die. Technically, AMD manages to pack more circuitry into less die space than nVidia uses, so if AMD made a single GPU with the same die size as nVidia's, they'd kill them. Also, with the much better Crossfire scaling of the HD 6000 cards, the HD 6990 is going to trash the GTX 580, and likely for the same price or a little less. The additional video memory in the HD 6990 will also future-proof it a bit more than the GTX 580; 1 GB and 1.2 GB cards are already being swamped by newer games at the highest settings. Including only 1.5 GB on the GTX 580 was a little short-sighted on the part of nVidia.
     
    Last edited: Jan 7, 2011
  7. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,680 (0.91/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Check your math and the reviews you read... :shadedshu

    580 is 16% faster than 480 according to W1zzard's review, see:

    [chart: GTX 580 vs. GTX 480 performance summary from W1zzard's review]

    Now, 512 SPs is exactly 6.7% more shaders than 480, and the GTX 580's clocks are 10% faster than the 480's (like you said): 10% + 6.7% ≈ 17% (compounded: 1.10 × 1.067 ≈ 1.17).

    That's perfect scaling, and a far cry from the poor scaling Cayman is showing: the HD 6970 should be 20% faster than the HD 6950 (10% clock, ~9% shader count) and 30% faster than the HD 5870 to show the same scaling.
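
    The clock-plus-shader arithmetic above can be sketched in a few lines; the clocks and SP counts below are the reference specs as commonly cited, so treat them as assumptions:

    ```python
    # Compounded speedup if performance scales linearly with both clock
    # and shader count (the "perfect scaling" case discussed above).

    def expected_speedup(base_clock, new_clock, base_sp, new_sp):
        """Return the multiplicative speedup under perfect scaling."""
        return (new_clock / base_clock) * (new_sp / base_sp)

    # GTX 480 -> GTX 580: 700 -> 772 MHz core, 480 -> 512 SPs
    gtx = expected_speedup(700, 772, 480, 512)
    print(f"GTX 480 -> 580: +{(gtx - 1) * 100:.1f}%")  # ~ +17.6%

    # HD 6950 -> HD 6970: 800 -> 880 MHz, 1408 -> 1536 SPs
    hd = expected_speedup(800, 880, 1408, 1536)
    print(f"HD 6950 -> 6970: +{(hd - 1) * 100:.1f}%")  # +20.0%
    ```

    W1zzard's measured ~16-17% for the 580 matches the first number; Cayman falling short of the second is the scaling gap being argued about.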

    The truth is that neither of them is using a new architecture for this "new" generation. GF110 is just a fixed GF100; it was never meant to be a new chip (which was a huge disappointment for me), but at least GF100 was a completely new architecture with no resemblance to any previous GPU and, at least for now, it scales 100% as SP count or clocks are increased. If previous generations are anything to go by, Fermi will easily scale to 1024 or even up to 1536 SPs with >90% efficiency.

    Meanwhile on AMD's side, the HD 68xx's architecture is exactly the same one they have been using since R600, and Cayman has only been tweaked here and there. This is where the efficiency fails badly: the architecture was created for 320 SPs (4 SIMDs) and they are now putting 24 SIMDs in there; it was simply not designed for that. Yes, on Cayman they changed to VLIW4, but that's not really an architecture change in the overall picture (and it didn't even bring any improvement). The VLIW4 shaders replace the old VLIW5 shaders in form, function and placement; they are essentially the same thing and are arranged exactly the way they were previously. The front end is almost identical too, except for a dual geometry engine, and everything in the front end was doubled in Cypress too. Both changes were supposed to bring massive efficiency gains and failed miserably at that; that's where the disappointment comes from.

    People who still think Cayman is good will face reality when the GTX 560 is launched and it matches the HD 6950 (maybe even beats it by a few percent). I mean, a refreshed ~1.9 billion transistor Nvidia GPU will match a 2.6 billion transistor AMD GPU on average performance, while decimating it on tessellation. AMD clearly has the better manufacturing and transistor density, Nvidia is at least one step behind in manufacturing R&D, and yet they will score a clear win on perf/area.

    I just don't want to think what would have happened to this "generation" if Nvidia had chosen to abandon GPGPU (continuing with GF100 for that instead of EOLing it) and had released a GF110 aimed purely at gaming, using the same 48-SP layout as GF104 (or GF114)... With 576 SPs and 128 TMUs, we would be talking about a card more than 20% faster than the GTX 580 in the same power envelope and die area (without half-rate DP, ECC and so on, a lot of area and power would be saved; just compare Barts to Cypress/Cayman).
    I completely disagree. Not with their current architecture. I'd have agreed before Cayman was released, but now that statement is almost 100% false. Yes, if AMD had 500 mm^2 it would be able to create a 2400+ SP monstrosity, but the problem is that, as Cypress, Barts and Cayman have demonstrated, the architecture just does not scale. AT ALL. They could put 20000 SPs in there, it doesn't matter; it would not be even slightly faster than Cayman is. IMO.

    Also, comparing Fermi to Cayman the way you're doing is stupid. In order to have better DP and better tessellation, AMD had to give up a lot of die area and 80 W of power consumption compared to Cypress, to the point that its perf/watt is the same as a GTX 570's (unbelievable just 3 months ago). And even then, its DP rate is still 1/4 of SP, while Fermi does 1/2, and tessellation on Fermi is something like 5 times faster. I know you won't see the difference in current games, but the capabilities are there and are demonstrated by TessMark. Until AMD offers the same capabilities, an apples-to-apples comparison cannot be made. Again, just look at how much AMD had to give up in the perf/watt and perf/area departments in order to add some DP and tessellation capability; there's no way to know how much more they would have needed to give up to match Fermi on DP and tessellation...
     
    Last edited: Jan 7, 2011
    HalfAHertz says thanks.
  8. the54thvoid

    the54thvoid

    Joined:
    Dec 14, 2009
    Messages:
    6,483 (2.28/day)
    Thanks Received:
    5,755
    Location:
    Glasgow - home of formal profanity
    Most of the DX11 gains from the 58xx to the 69xx series come from the tessellation improvements. I know this is a 6-series thread, but my single 580 plays Stalker: COP better than my dual 5850s, all down to the DX11 superiority of the 580. If AMD had pushed the rest of the 69xx specs to complement the tessellation processing improvements, it would have equalled the 580 if not beaten it.

    Just think what the 28nm process will allow for the same power envelope.....
     
  9. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,716 (2.81/day)
    Thanks Received:
    3,618
    Location:
    04578
    Yeah, let's be honest:
    The 5970 released in November 2009.
    The GTX 580 released in November 2010.

    So it took Nvidia a year to match ATI's dual-GPU card, and Nvidia was unable to bring out a dual-GPU card in the 400 series; maybe they will in the 500 series. Right now, from all reports, AMD's 6900 series is actually scaling better at the high end than Nvidia.

    Right now both companies are set to release dual-GPU cards in the first quarter of 2011.

    So as of right now, Nvidia has better single-GPU cards and AMD has better dual-GPU scaling at the high end. All speculation aside, a GTX 580 is still $500; a 6990 should come in around $650 and be around 6950 Crossfire performance, maybe a bit less. This means that if things continue as they are, AMD has the edge in multi-GPU tech. The key words to watch are *for now*, as Nvidia has the ability to deliver driver improvements over time while AMD seems to be all over the place. That said, right now, depending on how you look at things, they're about equal in terms of performance.

    Nvidia wins the single-GPU crown; ATI/AMD wins the multi-GPU segment in terms of performance to cost. The big thing for me: I don't give a shit; I bought what I could afford and got way more bang for the buck. If a GTX 580 had been in the $350 range I'd have bought two of them; sadly they're not, and as such I went AMD. Take what you can get for the best price you can find; that's all that really matters.

    And no, it's not the DirectCompute improvements. Sure, tessellation has improved, but the performance hit from tessellation in Metro 2033 is less than 2 fps on the 5800 series between on and off, so that's moot. The fact is DX11 games respond better to VLIW4 and better tessellation performance. For example, even if I turn off all DX11 features in Metro 2033, the 6900 series is still nearly double the performance of the 5800 series.

    And in Stalker, the game you mentioned, dual 6900s completely waste dual 580s. So again, Nvidia wins the single-GPU segment hands down; in dual-GPU setups they're getting matched by far cheaper GPUs.

    A good example: two 6970s are $750, two 580s are $1000, yet at the resolutions you're going to run those 2x GPU setups at, the 580s are more expensive for equal performance, meaning in that sense they don't compete. And at 2560x1600 they get decimated in DX11 titles due to a lack of VRAM. That's one of the reasons Metro 2033 runs like crap with DOF on: it uses a lot of VRAM; that filter alone uses more VRAM than most games released in 2006-2008 do overall.
     
    Last edited: Jan 7, 2011
  10. the54thvoid

    the54thvoid

    Joined:
    Dec 14, 2009
    Messages:
    6,483 (2.28/day)
    Thanks Received:
    5,755
    Location:
    Glasgow - home of formal profanity
    Yeah... but....

    It doesn't matter to me. My GPU history is:

    Nvidia 6800?
    ATI 9800 Pro (or XT) I forget....
    7950GT
    7950 GT sli
    8800 GT
    8800GTX
    GTX 260
    GTX 295
    HD 5850 crossfire
    GTX 580.

    I had BSODs with my 7950 GT SLI, low fps often with the GTX 295, and occasional driver failures on boot (plus dips in min fps) with my 5850s. Despite how good the dual-GPU improvements are, I wanted a single-card solution. I held out for the 6970 but was disappointed by it (thought it might beat the 580).

    I have no doubt AMD are easily winning the current dual gpu battle. My preference is simply one card for now. Plus, one card is quieter and cooler than two.

    But when I upgrade at the end of 2011 (such an upgrade whore) I'll see who's got the best pitch and I'll make a new gfx card friend. :)
     
  11. DigitalUK

    DigitalUK

    Joined:
    Oct 16, 2009
    Messages:
    510 (0.18/day)
    Thanks Received:
    73
    Location:
    UK South
    Has anyone else noticed any stutters on the 6970? They are only small and happen every so often, and I've tried everything to get rid of them: overclocked the CPU, then set it back to stock; overclocked the card, then back to stock with power management increased from 10% to 20%. In games like Mafia 2, BFBC2 and Dirt 2 the frame rates I'm getting are excellent. It almost looks like microstutter, but I'm not using Crossfire.
     
  12. sliderider

    Joined:
    Sep 20, 2010
    Messages:
    282 (0.11/day)
    Thanks Received:
    47
    Where do you fanboys get this stuff? You're actually trying to insinuate that Barts is just a really fast HD 2900? Please. You aren't even worth talking to if you really believe that.

    OK, here are the facts.

    GTX 580: 529 mm^2, 3 billion transistors. Density = about 5.6-5.7 million transistors per mm^2.

    HD 6970: 389 mm^2, 2.64 billion transistors, for a density of 6.7-6.8 million transistors per mm^2.

    A simple extrapolation shows that if AMD's GPUs used a 529 mm^2 die like nVidia does, they would contain almost 3.6 billion transistors.

    Whose GPUs are packing more power into a smaller space again? And AMD's GPUs are clocked higher and STILL manage to produce less heat and draw less power than nVidia's, in spite of the higher transistor density.
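
    The density arithmetic above checks out; here is a quick sketch using the die sizes and transistor counts as quoted in the post:

    ```python
    # Transistor density and extrapolation, using the figures quoted above.

    def density(transistors_billion, die_mm2):
        """Millions of transistors per mm^2."""
        return transistors_billion * 1000 / die_mm2

    gf110 = density(3.0, 529)    # GTX 580
    cayman = density(2.64, 389)  # HD 6970
    print(f"GF110:  {gf110:.2f} M transistors/mm^2")   # ~5.67
    print(f"Cayman: {cayman:.2f} M transistors/mm^2")  # ~6.79

    # A chip at Cayman's density but GF110's die size:
    extrapolated = cayman * 529 / 1000  # back to billions
    print(f"Extrapolated: {extrapolated:.2f} B transistors")  # ~3.59
    ```

    Of course, density alone says nothing about how well an architecture uses those transistors, which is the other half of this argument.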
     
    Last edited: Jan 7, 2011
  13. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,716 (2.81/day)
    Thanks Received:
    3,618
    Location:
    04578
    I haven't had any issues with that at all on my end; the cards have run flawlessly.

    Then again, after the nForce 500 series, Nvidia's AMD chipsets weren't that great to begin with, the 750a and 980a being the only ones still purchasable today and neither really being worth a damn. I don't know what's up with your system or what could be causing it, but I've yet to encounter that, and I'm up to 50+ games tested and played, 140+ hours in BC2, and about 12-15 hours since I got the 6900s, with no issues at all. Doesn't mean you're not having problems, but I can't really spot what could be causing it either.
     
  14. DigitalUK

    DigitalUK

    Joined:
    Oct 16, 2009
    Messages:
    510 (0.18/day)
    Thanks Received:
    73
    Location:
    UK South
    Thanks. I was planning to get a Crossfire board and another 6970, but with Bulldozer right around the corner I don't really want to buy another board at the moment. I did look around and there are a few others mentioning stutters, but like you say, most are saying the same as you. I'm not saying anything bad about the card; it's an animal, the frame rate whatever I do is amazing. So maybe it is a chipset thing; I was hoping it was a driver issue (10.12a).
     
  15. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,716 (2.81/day)
    Thanks Received:
    3,618
    Location:
    04578
    Worst case, try running with vsync on. Unless I'm benchmarking I tend to leave vsync on, and my frame rate stays pegged at 60 fps and never drops, except for Metro 2033, which averages 55, and Crysis, which averages 51 due to a CPU bottleneck.
     
  16. DigitalUK

    DigitalUK

    Joined:
    Oct 16, 2009
    Messages:
    510 (0.18/day)
    Thanks Received:
    73
    Location:
    UK South
    I never run without v-sync on @ 1920x1080. As you have a very similar CPU, I'm gonna go with the chipset theory, or I may do a fresh install just to be sure.
     
  17. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,716 (2.81/day)
    Thanks Received:
    3,618
    Location:
    04578
    You can try a reinstall. Also, if you can, try a 3rd-party tool; it might be an issue with triple buffering. I remember someone having issues with the 4800 series, and using a 3rd-party app to force triple buffering in DX, instead of the OpenGL version ATI offers, fixed their stuttering. Just something for you to Google and look into. I think Mussels has tried it and it solved his issues once before as well, so worth a shot.
     
  18. Johnny5

    Johnny5

    Joined:
    Feb 2, 2007
    Messages:
    222 (0.06/day)
    Thanks Received:
    8
    Sorry, but I don't have time to search for my answers, so I would hope someone can answer some quick questions.

    I'm thinking about getting a 6950 to replace a 5850 that died. My power supply only comes with two 6-pin VGA extensions; do the cards come with a 6-to-8-pin connector?

    Also:

    How hard is it to flash the card to a 6970, and could someone point me in the right direction to get started for when I receive the card?
     
    10 Year Member at TPU
  19. bear jesus

    bear jesus New Member

    Joined:
    Aug 12, 2010
    Messages:
    1,534 (0.59/day)
    Thanks Received:
    200
    Location:
    Britland
    The 6950 only has two 6-pin plugs, so you are fine with that. Here is the guide on how to flash from a 6950 to a 6970.

    If you have flashed a GPU BIOS before it is very easy; if not, it is still pretty easy, as the guide tells you exactly what to do.
     
    Johnny5 says thanks.
  20. theoneandonlymrk

    theoneandonlymrk

    Joined:
    Mar 10, 2010
    Messages:
    4,818 (1.75/day)
    Thanks Received:
    1,478
    Location:
    Manchester uk
    I wonder why no testers have ever tested Crossfire anything (5870, 6870, 6970), or a 6970 with hybrid PhysX via a cheap GT240, 50 quid, making 350 tops with a 6970. Since I've got a 5870 with a GT240, IMHO it would fully piss on a GTX 580 even in the NVIDIA games, for less money. 2x 6970 plus PhysX in Metro would be v nice.

    oh and being a fanboys str8 lame(why care about 2 mega rich cos) and fully gay get a bird or football team or sumat at least thats worth a shit:nutkick:

    perhaps nosmuch the bird if she nags
     
  21. inferKNOX

    inferKNOX

    Joined:
    Jul 17, 2009
    Messages:
    921 (0.31/day)
    Thanks Received:
    123
    Location:
    SouthERN Africa
    Ok, so the confusion hasn't ended for me yet.
    After those drivers (10.11, I think) that were supposed to add 5-monitor Eyefinity support to the HD 6000s, is it now possible to run Eyefinity3 with just the 2x DVI + 1x HDMI (on the HD 6900s specifically)?
    Please guys, really need this conclusive info (not speculation please), I'm on the cusp of getting the 6950.
    Thanks in advance
     
  22. bear jesus

    bear jesus New Member

    Joined:
    Aug 12, 2010
    Messages:
    1,534 (0.59/day)
    Thanks Received:
    200
    Location:
    Britland
    As far as I know, no card supports Eyefinity without a DisplayPort monitor, or one on an adapter, so if you want Eyefinity you need a DisplayPort adapter.

    At only £20, it did not bother me having to use one.
     
  23. the54thvoid

    the54thvoid

    Joined:
    Dec 14, 2009
    Messages:
    6,483 (2.28/day)
    Thanks Received:
    5,755
    Location:
    Glasgow - home of formal profanity
    I'm far too old to understand the structure, lack of punctuation and general street talk in this post - can we have an age limit so oldies like me know when it's time to go and read the funeral pages?

    You sound like that bird from Little Britain.

    And for ref, PhysX isn't worth the bother of having so many cards stuffed in your PCI slots. You can run Metro 2033 with 2x 5850s (with reasonable fps at 1650x1080 res). The PhysX doesn't make a massive difference.

    I think tessellation is a better feature, one the 69xx series addresses competently compared to the 58xx series. You notice good tessellation use in a game; it's all around. PhysX is something you kinda have to look for and isn't universal; not all items can be interacted with. Yes, I can kick up some papers in Batman: AA but can't knock over a desk fan, etc. Bit lame.
     
    yogurt_21 says thanks.
  24. pantherx12

    pantherx12

    Joined:
    Jan 2, 2009
    Messages:
    9,767 (3.06/day)
    Thanks Received:
    1,804
    Location:
    Suffolk/Essex, England
    I'm curious: has anyone tried soldering different power connectors onto a modded 6950 to see if its power draw matches a full 6970 yet?
     
  25. Over50 New Member

    Joined:
    Jan 2, 2011
    Messages:
    48 (0.02/day)
    Thanks Received:
    3
    This is a comment on GPU temperatures.

    In May 2008 I built my own rig using two HD 3870s in CF. The first was a Sapphire Toxic HD 3870 (with Vapor-X cooling technology); the second was a Visiontek HD 3870.
    Since day one, the Sapphire Toxic, connected to my monitor by DVI, always idled 10°C higher than the Visiontek (48°C vs 37°C). Playing WoW was pretty much the most demanding load on the GPUs, typically 4 hours per day and 8 to 14 hours on weekends. The Sapphire Toxic hovered around the mid-70s to 80°C, while the Visiontek would be lucky to go higher than 57-60°C.
    After nearly 30 months, on Dec. 22, 2010, the Sapphire Toxic emitted an odor and its fan was clanking, probably because it was at 100% speed. I immediately checked the sensors with Everest Ultimate Edition, SIW, and SiSoft Sandra... ouch! A whopping 119°C.

    Visually, everything on screen appeared fine, except that in WoW I was getting artifacts and jumpy FPS.
    But anyway, I quickly shut down my PC when I saw 119°C.

    There is more to this, but the point is: Sapphire's cooling technology sucks; Vapor-X never worked. BTW, I thought a GPU's maximum temp was 110°C, and anything over 90°C is living dangerously. I am surprised I didn't see arcs, sparks, smoke, and a big bang of capacitors exploding. The Sapphire card shows no visual signs of damage and still works under a very low load, like internet browsing or playing a DVD.

    I replaced the Sapphire + Visiontek HD 3870 CF pair with an MSI R6870 Twin Frozr II Radeon HD 6870 1GB 256-bit GDDR5. It has very quiet fans and runs cooler than any GPU with stock cooling. True, I paid more, just like I did for that Sapphire Toxic, but at least it works as advertised.
     
