
NVIDIA GeForce GTX 480 1536 MB Fermi

Discussion in 'Reviews' started by W1zzard, Mar 19, 2010.

  1. TheMailMan78

    TheMailMan78 Big Member

    Joined:
    Jun 3, 2007
    Messages:
    20,974 (7.88/day)
    Thanks Received:
    7,532
    Hmm, but Nvidia doesn't have that feature, so it will in fact run hot and go WAY past its listed power draw, which equals a lie. Plus that's a 5890 you're talking about. Dual GPU? I'm afraid you're grasping at straws here, man.
     
  2. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.47/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    It's not a lie, since only under Furmark will it go beyond the specified TDP. AMD has put protection in place so that Furmark does not stress the GPU to those limits. If anything, it's AMD who is lying in that respect, because Furmark is not showing the real max consumption of their cards, while Nvidia's numbers do. Except that it is not lying either, since a card will never reach those limits in any REAL application. The same goes for Nvidia cards: they will never reach those levels under gaming or CUDA apps or whatever you throw at them, as long as it is not a synthetic app specifically designed to stress the GPU that far.

    Any real application does much more than just stress the shader processors: the SPs do their work, but that work has to go somewhere, be processed there, and then move elsewhere, etc. That's why AMD's raw flop numbers are totally meaningless for real apps: although the SPs can essentially work that hard, the data generated would never be able to get out and be useful. And that's what Furmark does: it stresses the shaders without the generated data needing to be useful, without the data needing to leave the SPs.

    Taking the above into account, along with the links I posted: which cards fare worse under Furmark before throttling kicks in? Well, both the HD4850 and the HD5970 went way above 100C before throttling kicked in, in just 40 seconds of Furmark!!! God knows how far they could go after some minutes under full load. On the other hand, the GTX480 stays at around 95C even with no throttling going on, so Nvidia took action in the hardware itself to keep the card cool, while AMD used artificial measures. Both are what I would call legit measures, since both are rightly "assuming" that nobody will reach those limits under any real condition, and after several years of these cards being out there, it's obvious they are right. HD4850s have not died while gaming, right? Fermi won't either.

    Now if you have to run Furmark all day... yeah, you'd need a card that artificially cripples its performance to prevent it from burning up inside your PC. Maybe Nvidia should add similar protection in their next drivers? Would you all be happy? Thing is, I doubt it. In fact I bet that although AMD did it first AND is still doing it, if Nvidia did that in their next drivers and Fermi's power consumption went lower, especially in Furmark, we would see many, many complaints about how Nvidia is cheating, because, well, it's Nvidia. Sad but true.
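    The distinction drawn above, ALU work whose results never leave the shaders versus work that has to flow through the rest of the pipeline, is essentially what is now called arithmetic intensity. A minimal sketch of the idea, with purely hypothetical per-pixel figures chosen only for illustration:

    ```python
    def arithmetic_intensity(flops, bytes_moved):
        """FLOPs performed per byte of off-chip traffic; higher = more ALU-bound."""
        return flops / bytes_moved

    # Hypothetical figures, not measurements:
    # a FurMark-style shader loops through a lot of math per pixel but writes
    # only one small colour value, so nearly all the work stays inside the SPs...
    furmark_like = arithmetic_intensity(flops=2000, bytes_moved=4)   # 500.0
    # ...while a game shader interleaves texture fetches and buffer writes
    # with its math, so far fewer flops happen per byte moved.
    game_like = arithmetic_intensity(flops=100, bytes_moved=64)      # ~1.6
    print(furmark_like > game_like)  # True
    ```

    The higher that ratio, the closer the ALUs run to their theoretical peak, which is why a synthetic stresser can push power and temperature past anything a real workload produces.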
     
  3. Super XP

    Super XP

    Joined:
    Mar 23, 2005
    Messages:
    2,754 (0.79/day)
    Thanks Received:
    538
    Location:
    Ancient Greece, Acropolis
    Do you know why FurMark, along with other synthetic benchmarks, is one of the best ways to determine a GPU's power consumption? So far it's the only way to measure a card's true power consumption in a consistent manner, by using a workload that is constant and won't suddenly vary the GPU's stress level the way real-world gaming does. This is the one thing that is great about synthetic benchmarks: it's the best way to measure a card's performance against previous-gen performance.

    I believe the results speak for themselves ;)
     
  4. Super XP

    Super XP

    Joined:
    Mar 23, 2005
    Messages:
    2,754 (0.79/day)
    Thanks Received:
    538
    Location:
    Ancient Greece, Acropolis
    There's no denying the facts; it's evident that the GTX 480 & 470 are hot, power hungry, and sound like a jet engine. Anyway, we've got 25+ reviews proving this with different test methods. No point in defending something that cannot be defended. All we can do right now is live with the results and wait for a possible Fermi refresh, if and when it gets released. But we won't see anything for another 6+ months IMO. Until then, everybody enjoy your nice cool-running HD 5800 series cards, O.K.
    And I would have to agree 100%
     
  5. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,888 (0.96/day)
    Thanks Received:
    378
    Location:
    Singapore
    @Benetanegia: Haha, well let's see, what do I prefer... to have a fried GPU, or to be "cheated" and have my GPU throttle down to safe temps... hmm, what a hard decision. We've had throttling CPUs since the P4 days and nobody complained; I don't think anyone ever will, considering the consequences otherwise. I think Nvidia's solution is simpler but just as effective.

    About Furmark - yes, it is a power virus because, just as you said, it stresses the SPs beyond what they were meant to do by bypassing the rest of the GPU pipeline and overloading them with calculations - something no real-life workload does. You wouldn't see that in any other GPU application.

    And why do you insist on saying that Ati's gflops numbers are wrong? I remember we had a similar discussion before. They aren't; the only problem is that you need smart coding to get at the low-level hardware functionality. Remember the SPs are in groups of 5 - 1 for complex calculations and 4 for simple calcs. But if you want to do only double-precision calculations, you can gang the 4 simple ones together and simulate a second complex SP, effectively reducing the SP count to 640 - which is why the DP gflops figure is only 2/5 of the SP gflops number. The numbers stated by Ati are indeed achievable, but only with smart coding aimed specifically at their architecture.
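    The 640 figure above lines up with Cypress (HD 5870) arithmetic. A quick sanity check of the grouping as described in the post - treating "2 DP slots per VLIW group" as the post's reading, not an official AMD spec:

    ```python
    SHADERS = 1600            # Cypress / HD 5870 (assumed: the count that yields 640)
    GROUP_SIZE = 5            # VLIW5 group: 1 complex ALU + 4 simple ALUs

    groups = SHADERS // GROUP_SIZE      # 320 VLIW5 groups
    # Per the post, for double precision the 4 simple ALUs gang together to
    # act as one more DP-capable unit, so each group offers 2 DP slots:
    dp_units = groups * 2               # 640
    dp_to_sp_ratio = dp_units / SHADERS # 0.4, i.e. the 2/5 ratio cited
    print(dp_units, dp_to_sp_ratio)     # 640 0.4
    ```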

    Edit: SuperXP stop spamming your negative propaganda :p I remember that the single slot HD4850s and the original 4870x2 also ran at 90+ degrees and nobody complained as much...
     
  6. spud107

    spud107

    Joined:
    Feb 12, 2007
    Messages:
    1,194 (0.43/day)
    Thanks Received:
    131
    Location:
    scotland
    sometimes not disengaging safety features is a good thing . . .

     
    Last edited: Mar 31, 2010
    Solaris17 says thanks.
  7. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.47/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    I have no option but to wonder why you have to take everything personally... I talk about Ati fanboys in one post and you reply with what you think. I talk about how people would complain and you reply with what you'd prefer. I'm thinking of an F word and it's not f--k.

    I never said they were wrong. They are not achievable in any real application; not even AMD's internal apps achieve anything beyond 75% or so, and that's in very, very specific apps. Why do I insist? I wasn't insisting on the matter; I mentioned it because normal usage is way below the raw "potential", and that's why under normal usage an HD4850 would not go much higher than 90C, but under Furmark, where artificial stressing pushes usage close to its potential, well, we don't know how high it could reach; all we know is that it would pass 105C and get fried. The reason is simple, and it's where I was going when I mentioned it: typical AMD shader usage is around 40%, which is around 7% higher than the usage found under SGEMM. That's real usage. Under Furmark it probably reaches something close to 100%; tbh I have no idea, and probably nobody knows the exact number or even an approximation, except AMD. From 33-40% to 100% is a long way, though; enough to put temps through the roof.

    As to the performance side of things, if it can't be achieved in normal scenarios, it can't be achieved, period. Sure, you can create an app that uses 4 simple units and 1 complex one and bla bla bla, but that's not an application; that's a benchmark, a demo, a showcase. No single application (not even games, transcoding, SGEMM...) will come close to being able to do that. Real apps need what they need at the exact moment they need it, and AMD's architecture simply isn't suited for that. Period; you can argue as much as you want.
     
  8. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,888 (0.96/day)
    Thanks Received:
    378
    Location:
    Singapore
    Uhm, I was actually trying to agree with you in the above post. I was basically saying that I don't care exactly how they prevent a GPU from frying - be it a throttling function or a powerful fan, as long as my expensive GPU doesn't turn into an expensive paperweight :eek:

    I'm no mathematician nor a coder, so I don't really know how hard it is to write code utilizing all that hardware, but I'll tell you one thing - there are many people much smarter than you and I who can and will, given enough incentive to do so...
     
  9. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.47/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    You don't get what I'm trying to say, but it's probably my problem; it's usually difficult for me to explain such complicated things in a foreign language. It's not only whether you can use all the shaders; it's that using all those shaders won't always yield a true benefit. Take games, for example: average shader (ALU) use has been established (Beyond3D, Devnet...) to be around 3.8 out of 5 on AMD SPs, which is 76%, but even that number is not exact or true by any means. Let me explain: of course 76% of the shaders are working, but not all of them are producing genuine results; many of them are duplicating work (I couldn't find a better word than genuine).

    This becomes obvious as soon as you realize that 76% of 1.2 TFlops (HD4870) is 912 Gflops, way more than the theoretical 708 Gflops of a GTX285 or 536 Gflops of a GTX260, and those don't have 100% efficiency either, not at all. Basically the HD4870 is calculating twice as much for the same task; otherwise, if every flop operation were genuine, that would mean 900+ Gflops was required for a certain level of performance, and the GTX cards would be seriously bottlenecked by their shaders. What most probably happens is that, like I said, the AMD card duplicates many of the calculations, and it makes sense if you think about it: when you have many spare ALUs and your bandwidth is more limited, it doesn't pay to store some results in vram, even if you know you will need them later, because you know you will have spare ALUs too, so you just recalculate most things as they come. Nvidia, on the other hand, prefers efficiency over throughput, so they store the output, and as a result they need better caches and interconnects. Like I have always said: two different ways of achieving the same thing.
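    The GFLOPS figures in that post can be reproduced with the usual peak formula, shaders × ops per clock × shader clock. A quick check, assuming 2 flops per MAD per cycle (counting the MAD only, as the post's Nvidia numbers do) and the 216-shader GTX 260 variant for the 536 Gflops figure:

    ```python
    def peak_gflops(shaders, ops_per_clock, shader_clock_ghz):
        """Theoretical peak: every ALU issuing one MAD (2 flops) each cycle."""
        return shaders * ops_per_clock * shader_clock_ghz

    hd4870 = peak_gflops(800, 2, 0.750)   # 1200.0 GFLOPS
    gtx285 = peak_gflops(240, 2, 1.476)   # ~708.5 GFLOPS (MAD only)
    gtx260 = peak_gflops(216, 2, 1.242)   # ~536.5 GFLOPS (Core 216 variant)

    # 76% average ALU utilisation on the HD 4870, per the post:
    print(0.76 * hd4870)                  # 912.0, already above the GTX 285's peak
    ```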
     
  10. shevanel

    shevanel New Member

    Joined:
    Jul 27, 2009
    Messages:
    3,479 (1.85/day)
    Thanks Received:
    406
    Location:
    Leesburg, FL
     
  11. dir_d

    dir_d

    Joined:
    Sep 1, 2009
    Messages:
    848 (0.46/day)
    Thanks Received:
    110
    Location:
    Manteca, Ca
  12. shevanel

    shevanel New Member

    Joined:
    Jul 27, 2009
    Messages:
    3,479 (1.85/day)
    Thanks Received:
    406
    Location:
    Leesburg, FL
    yeah, wonderful solution... ramp that fan up, boys!
     
  13. SK-1

    SK-1

    Joined:
    May 15, 2005
    Messages:
    3,200 (0.94/day)
    Thanks Received:
    330
    Thanks, kids. Thanks for pissing off the admin so much he's leaving TPU. Hope you're all proud of yourselves.
     
    scamps says thanks.
  14. mlee49

    mlee49

    Joined:
    Dec 27, 2007
    Messages:
    8,485 (3.46/day)
    Thanks Received:
    2,103
    No shit, way to go. Bash on the reviewer and look at what happens.
     
    Last edited by a moderator: Mar 31, 2010
  15. trickson

    trickson OH, I have such a headache

    Joined:
    Dec 5, 2004
    Messages:
    6,494 (1.82/day)
    Thanks Received:
    956
    Location:
    Planet Earth.
    Great review.
    I was thinking of getting one; now I just may. :D
     
  16. freaksavior

    freaksavior To infinity ... and beyond!

    Joined:
    Dec 11, 2006
    Messages:
    8,069 (2.85/day)
    Thanks Received:
    907
    Honestly, after seeing the review, I have no idea which card I want my girlfriend to buy me.

    I really like the non-reference-cooler 5870s, but it looks like the GTX480 does just about as well, and we all know how the driver game goes.

    btw, thanks w1zzard :D
     
    Crunching for Team TPU
  17. Paulieg

    Paulieg The Mad Moderator Staff Member

    Joined:
    Feb 19, 2007
    Messages:
    11,914 (4.31/day)
    Thanks Received:
    2,979
    Location:
    Wherever I can find the iron.
    Stop the retarded arguing. If the negativity continues, I'll be handing out major custom infractions.
     
    Last edited: Mar 31, 2010
  18. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,888 (0.96/day)
    Thanks Received:
    378
    Location:
    Singapore
    Ok, now I see what you mean. So basically, if I got what you're saying, the built-in scheduler sucks and does some of the calculations multiple times, thus wasting SP cycles?
     
  19. Andy77 New Member

    Joined:
    May 7, 2009
    Messages:
    119 (0.06/day)
    Thanks Received:
    11
    Hm... long story? I hope it was not because of the 9.12!

    When I first saw the review I was like "9.12? WTF?!?!" But then I thought, well, he has lots of cards and lots of tests, and they go way back, so no reason to bother him about it, knowing there would be kids who'd do just that, and I hoped somehow that 10.3 would be released... and it was! :) Now I need one for the 5970... probably I'll find something out there in time.
     
  20. qubit

    qubit Overclocked quantum bit

    Joined:
    Dec 6, 2007
    Messages:
    9,822 (3.97/day)
    Thanks Received:
    3,481
    Location:
    Quantum well (UK)
    Eh? Who's leaving TPU? :confused:
     
  21. Fitseries3

    Fitseries3 Eleet Hardware Junkie

    Joined:
    Oct 6, 2007
    Messages:
    15,509 (6.11/day)
    Thanks Received:
    3,107
    Location:
    Republic of Texas
    Am I the only person who gets this?

    the march 26th "event" was another conundrum** to help nvidia further delay the REAL release of the retail version of the 4-series cards, so that nvidia could buy more time to fix the issues at hand.

    the "review" samples that were given out were known to be faulty in several ways but it gave everyone something to talk about to shut the fuck up about "when is fermi comming out? its 6months late"

    yes, maybe it looks "bad" that the cards are hot and draw a ton of power but it alleviates one problem and starts another.

    the big thing i see here is... NO ONE CAN EVER BE HAPPY ABOUT A DAMN THING.

    if the gtx480 was 30x faster than the 5970 you would still bitch because the price is too high. but why is the price so high? because it's bleeding-edge technology, and that's the price you pay.

    i notice a lot of you guys bitching about "well you shoulda used the 10.X driver for ATI... it's better". yes... perhaps it is, but why is that? because ATI has had time to fix and optimize their drivers for better performance. has nvidia had time to do that? NO. does it cross your mind that perhaps the older driver was used so that both ati's and nvidia's offerings could be compared as they were released?

    if you were comparing 2 brand-new cars off the showroom floor, would it be "fair" to let company A fix a bunch of their problems before the comparison, while company B is forced to be judged on what they brought to the plate as it stands? NO.

    these reviews are done with immature cards and immature drivers. why do you expect so much from them?

    perhaps i'm being an asshole here, but i just want to remind you that you should take these early reviews with a grain of salt.

    if you think you can do a better review, then do it yourself. oh wait... you can't... you don't have any gtx480s or gtx470s.

    give the man some respect.




    **Conundrum: a logical postulation that evades resolution; an intricate and difficult problem.
     
    Dead_Again, HTC, JrRacinFan and 5 others say thanks.
  22. mdsx1950

    mdsx1950 New Member

    Joined:
    Nov 21, 2009
    Messages:
    2,107 (1.20/day)
    Thanks Received:
    413
    Location:
    In a gaming world :D
    You're kidding, right? :wtf: :mad:
     
  23. DOM

    DOM

    Joined:
    May 30, 2006
    Messages:
    7,552 (2.49/day)
    Thanks Received:
    828
    Location:
    TX, USA
    fit, don't waste your time; some ppl will never change, that's a fact. look at the world we live in today: ppl bitch about everything.

    and he does retest every time he does a new review, so idk what all the crying was about. if i had the money i would have two of every card to play with, but i don't :ohwell:
     
  24. tigger

    tigger I'm the only one

    Joined:
    Mar 20, 2006
    Messages:
    10,183 (3.28/day)
    Thanks Received:
    1,399
    Close this thread to stop the arguing and bitching.
     
    scamps says thanks.
  25. [I.R.A]_FBi

    [I.R.A]_FBi New Member

    Joined:
    May 19, 2007
    Messages:
    7,664 (2.86/day)
    Thanks Received:
    540
    Location:
    c:\programs\kitteh.exe
