
NVIDIA GeForce GTX 580 Reference Design Graphics Card Pictured

I don't. As long as my PSU is up to the task, I don't care. I only care about its performance; price comes next. Heat and power consumption on a top-tier card mean absolutely nothing to me.

Now, go down a notch, and power becomes an issue, but if I'm paying over $400 for a card, it's no concern. Only performance.

+1
 
You didn't pay over $400 for a single card. There is a big difference. And your two 5850s consume more than a single 480 anyway. But enough about that.

You can't call Fermi a failure because it consumes more for the same performance when it excels in other areas where AMD can't begin to compete, like GPGPU or heavy tessellation. i7 quads run hotter than a Phenom X6 for similar performance, yet aren't considered a failure. You aren't taking into account all aspects of the cards.


Run MilkyWay@home on an NVIDIA card and on an ATI card.

Generally, ATI comes out on top.

ATI just needs some dev support :laugh:
 
The 480 underperforms on account of its power requirement. It SHOULD perform better.
That is the entire goal of the GF110 - to make it perform better at lower power consumption.

You didn't pay over $400 for a single card. There is a big difference. And your two 5850s consume more than a single 480 anyway. But enough about that.

You can't call Fermi a failure because....

Read my post again, Wile E. I didn't call Fermi a failure. I'm saying it should perform better for its power usage. NVIDIA thinks the same; that's why they're doing the GTX 580. They knew there were power issues from the problems in its manufacture. Now they are going to fix it.

I chose not to buy a 5870 and go for two 5850s because I wanted super performance. A 5870 wasn't enough. I spent £400 on a graphics solution - don't think that makes it any less valid than buying one single card. Your logic is irrelevant. If I had bought 2 x 5770s, then yeah, big difference. But I bought 5850s in Nov '09, when they were the second-fastest single-chip solutions. I couldn't buy a GF100 because, well, someone forgot how to make them on time.

You seem to miss that I'm not slagging off GF100. I'm simply saying it should perform better, and the GF110 is exactly that fix. It's also where NV want to go with Kepler and Maxwell: they have stated they want to produce 'x' times performance gains with a marked improvement in efficiency.

And on power - going by W1zz's readings:
http://www.techpowerup.com/reviews/HIS/Radeon_HD_6870/27.html
One 5850 = 150 Watts (two, even doubled, = 300).
One 480 = 320 Watts.
That's a comparison of maximum power.
Now I'm not arguing that point anymore; take it up with W1zz.
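A quick tally of those numbers (a minimal Python sketch; the wattages are just W1zz's maximum-power readings quoted above, nothing else is assumed):

```python
# Max-power figures quoted above, from W1zzard's HD 6870 review.
hd5850_max_w = 150   # one HD 5850
gtx480_max_w = 320   # one GTX 480

# Naive worst case: assume CrossFire simply doubles one card's draw.
crossfire_max_w = 2 * hd5850_max_w

print(f"2x HD 5850 (doubled): {crossfire_max_w} W")  # 300 W
print(f"1x GTX 480:           {gtx480_max_w} W")     # 320 W
print(f"Delta: {gtx480_max_w - crossfire_max_w} W in the CrossFire setup's favour")
```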

As for i7s consuming more power than the Phenom X6:
http://www.anandtech.com/show/3674/amds-sixcore-phenom-ii-x6-1090t-1055t-reviewed/10
Yeah, my 920 consumes 2 Watts more in this test, yet performance-wise:
http://www.anandtech.com/show/3674/amds-sixcore-phenom-ii-x6-1090t-1055t-reviewed/9
3 wins for the i7 920, 2 draws and one loss.
And we all know clock speeds mean jack between AMD and Intel.

I spent £1500 on my self-built rig. I get to call myself an enthusiast. I'm not a water-cooler or into shiny neon. My money goes on quality for noise/performance optimisation, as I watch TV through my PC. For me, performance must be met with acceptable noise levels. I had to use RivaTuner on my GTX 295 to lower idle fan speeds. I don't on my CrossFire setup. People have different priorities - accept it; enthusiasts don't all care about pure power at the expense of everything else.

So, if you're going to quote me again, quote this:

GF100 isn't a failure - it should have performed better; even JHH said so in his modest interview. The GTX 580 is there to fix that. That's what I've been posting all along.
 
Now, for those who think the GTX 480 is still only a little bit better than the 5870? Well, you're wrong. Now that NVIDIA has been pushing out performance improvements like bunny offspring, the GTX 480 is quite a bit faster than a 5870 and can play in HD 5970 territory when overclocked. That being said, even the GTX 470s are now faster than the 5870 in almost every game other than Vantage and Crysis (Crysis by like 4%).

Are you talking about the 2GB HD 5970? Because I highly doubt an overclocked GTX 480 can reach the performance of a 4GB 5970.

I don't. As long as my PSU is up to the task, I don't care. I only care about its performance; price comes next. Heat and power consumption on a top-tier card mean absolutely nothing to me.

Now, go down a notch, and power becomes an issue, but if I'm paying over $400 for a card, it's no concern. Only performance.

Agreed! You pay for performance! :rockout:
Except I do care about heat, because I usually tend to pair up cards ;)

You'd expect the GTX 480 to perform better than an HD 5970 when it consumes more power.

Exactly what I thought when I heard about the power consumption of the GTX 480.
 
Read my post again, Wile E. I didn't call Fermi a failure. I'm saying it should perform better for its power usage. NVIDIA thinks the same; that's why they're doing the GTX 580. They knew there were power issues from the problems in its manufacture. Now they are going to fix it.

I chose not to buy a 5870 and go for two 5850s because I wanted super performance. A 5870 wasn't enough. I spent £400 on a graphics solution - don't think that makes it any less valid than buying one single card. Your logic is irrelevant. If I had bought 2 x 5770s, then yeah, big difference. But I bought 5850s in Nov '09, when they were the second-fastest single-chip solutions. I couldn't buy a GF100 because, well, someone forgot how to make them on time.

You seem to miss that I'm not slagging off GF100. I'm simply saying it should perform better, and the GF110 is exactly that fix. It's also where NV want to go with Kepler and Maxwell: they have stated they want to produce 'x' times performance gains with a marked improvement in efficiency.

And on power - going by W1zz's readings:
http://www.techpowerup.com/reviews/HIS/Radeon_HD_6870/27.html
One 5850 = 150 Watts (two, even doubled, = 300).
One 480 = 320 Watts.
That's a comparison of maximum power.
Now I'm not arguing that point anymore; take it up with W1zz.

As for i7s consuming more power than the Phenom X6:
http://www.anandtech.com/show/3674/amds-sixcore-phenom-ii-x6-1090t-1055t-reviewed/10
Yeah, my 920 consumes 2 Watts more in this test, yet performance-wise:
http://www.anandtech.com/show/3674/amds-sixcore-phenom-ii-x6-1090t-1055t-reviewed/9
3 wins for the i7 920, 2 draws and one loss.
And we all know clock speeds mean jack between AMD and Intel.

I spent £1500 on my self-built rig. I get to call myself an enthusiast. I'm not a water-cooler or into shiny neon. My money goes on quality for noise/performance optimisation, as I watch TV through my PC. For me, performance must be met with acceptable noise levels. I had to use RivaTuner on my GTX 295 to lower idle fan speeds. I don't on my CrossFire setup. People have different priorities - accept it; enthusiasts don't all care about pure power at the expense of everything else.

So, if you're going to quote me again, quote this:

GF100 isn't a failure - it should have performed better; even JHH said so in his modest interview. The GTX 580 is there to fix that. That's what I've been posting all along.

The fact that you brought up an ATI card as a comparison in the original post I quoted suggested to me you were comparing the two in terms of power consumption, painting NV in a negative light.

I think I would've understood your point better if ATI was left out of the original comment altogether.

Meh. Whatever. It's over and done. Point taken.

Agreed! You pay for performance! :rockout:
Except I do care about heat, because I usually tend to pair up cards ;)

Meh. I water cool. Total non-issue for me.
 
Are you talking about the 2GB HD 5970? Because I highly doubt an overclocked GTX 480 can reach the performance of a 4GB 5970.

The 4GB HD 5970s are a bad comparison. They're nothing but overclocked HD 5970s with more VRAM, and he was talking about an OC'd GTX 480 being close to a reference HD 5970. When OC'd, nothing can touch the HD 5970, but with a drawback: power consumption scales horribly, even more so than on the GTX 480. The 4GB HD 5970s consume 50% more than the reference HD 5970; in W1zzard's review the Asus ARES consumed 436 Watts!
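For what it's worth, the post's own figures imply a baseline (a minimal Python sketch; the 50% scaling number is the poster's claim, not a measured spec):

```python
# Working backwards from the figures in the post above.
ares_max_w = 436        # Asus ARES (4GB HD 5970) max draw, per W1zzard's review
claimed_scaling = 1.50  # "4GB HD 5970s consume 50% more than reference"

implied_reference_w = ares_max_w / claimed_scaling
print(f"Implied reference HD 5970 max draw: ~{implied_reference_w:.0f} W")  # ~291 W
```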
 
The 4GB HD 5970s are a bad comparison. They're nothing but overclocked HD 5970s with more VRAM

Not exactly true. The VRAM plays a part when it comes to high resolutions. And the 4GB HD 5970 has more room to OC.

and he was talking about an OC'd GTX 480 being close to a reference HD 5970. When OC'd, nothing can touch the HD 5970, but with a drawback: power consumption scales horribly, even more so than on the GTX 480. The 4GB HD 5970s consume 50% more than the reference HD 5970; in W1zzard's review the Asus ARES consumed 436 Watts!

That's the ARES. Not all 4GB HD 5970s consume so much. Anyway, I don't care about the power consumption, as I only care about the performance. :)
 
That's the ARES. Not all 4GB HD 5970s consume so much. Anyway, I don't care about the power consumption, as I only care about the performance. :)

Well, I've been looking at reviews of the Sapphire and XFX ones, and they also consume like 100+ watts more than the 2GB version, but I don't want to argue about this. You are right, people buying such high-end cards don't care about power consumption anyway; my comment was just following the context in which Fermi and now this GTX 580 are mostly judged: performance-per-watt.
 
We should be seeing much better efficiency from the GTX 580; NVIDIA is already aware of the power and heat complaints that have been voiced perpetually since Fermi's launch. I have no doubt that the 580 should have much better performance per watt and a decent decrease in heat output (or at least I hope so).
 
I've got a gut feeling this will perform very much like a GTX 460 1GB SLI setup.
 
... my comment was just following the context in which Fermi and now this GTX 580 are mostly judged: performance-per-watt.

Fermi (mainly the GTX 480) is a great card, yet it couldn't deliver HD 5970 performance with the same or lower power consumption. If there were no HD 5970 card, there wouldn't have been so many complaints about the GTX 480 and its power consumption. :(
 
GTX 580

[three leaked photos of the GTX 580]

http://itbbs.pconline.com.cn/diy/12074471.html
 
If there were no HD 5970 card, there wouldn't have been so many complaints about the GTX 480 and its power consumption. :(

Quoted for truth.
 
Fermi (mainly the GTX 480) is a great card, yet it couldn't deliver HD 5970 performance with the same or lower power consumption. If there were no HD 5970 card, there wouldn't have been so many complaints about the GTX 480 and its power consumption. :(

... using your same reasoning: if NVIDIA had done their job on Fermi properly in the first place, and hadn't been in a rush to send their product to market regardless of its clear issues, "there wouldn't have been so many complaints about the GTX 480 and its power consumption" ... NVIDIA should have waited a bit longer and issued the proper GTX 480 (i.e. the GTX 580), but this is just more evidence of their bad management :shadedshu ...
 
... using your same reasoning: if NVIDIA had done their job on Fermi properly in the first place, and hadn't been in a rush to send their product to market regardless of its clear issues, "there wouldn't have been so many complaints about the GTX 480 and its power consumption" ... NVIDIA should have waited a bit longer and issued the proper GTX 480 (i.e. the GTX 580), but this is just more evidence of their bad management :shadedshu ...

Or good management, if in fact it now, some months on, turns out to be the right product that actually manages to compete with AMD's "new" high-end offering; the delay may well have served a very strategic purpose.
 
It's all part of the same management mistake with the fabric issue; one mistake led to another, but most of them were impossible to avoid back then.

Should they have revised the fabric when they received A1 silicon and gone directly to a B1 revision, instead of doing metal respins that fixed little or nothing? That would have made them about 4 months late instead of 6, with a much better product. But it is very easy to judge now that we know where the error was made. At the time, all they knew was that the fabric, which is in fact nothing but a metal layer, didn't work, so a metal respin was what made most sense: 2 months late, with hopes of fixing it. A metal respin takes less than 2 months, so it just made sense to try to fix the mistake that way. The error was deeper than they thought, and the rest is history left for posterity. Should they have gone to B1 after A2 didn't fix anything, being 8 months late but with a card that could potentially destroy Cypress like the alleged GF110 is apparently going to? Again, maybe they should have, but that was probably too much pressure on partners, and it's again a decision that is too easy to make now that we have all the answers...

EDIT: TBH, if GF110 turns out to be just GF100 made right, and it seems most likely to, I'm going to be disappointed, regardless of how it does in comparison to Cayman. Even though I think it's going to be very competitive against Cayman, I'm 99% sure they could have made a much better card if they had used the 48-SP SMs instead of the 32-SP SMs in GF100. In fact, IMO, if they wanted to release another 512 SP chip, they could have taken GF104 and added another 16 SPs per SM. The architecture must be able to handle it, because in GF104 the 2 dispatchers per SM are working with 3 SIMDs with no performance penalty at all, so every dispatcher can surely dispatch to 2 SIMDs: 2 x 2 = 4 SIMDs of 16 lanes, i.e. 64 SPs per SM. And since GF104 has 2x the TMU/SFU/load-store of GF100 per SM, they would end up with a card with almost the same specs as GF100 but with a sub-400 mm^2 area (instead of the rumored 460-480 mm^2 of GF110). The only drawback would be tessellation, because with half the clusters it would have half the tessellation capability. Still, considering that even the GTX 460 (or GTS 450 if you stretch it) annihilates Cypress and Barts when it comes to tessellation, it would be a good compromise.
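To put numbers on that SM arithmetic, here is a minimal Python sketch (the 64-SP-per-SM entry is the hypothetical chip described in the post, not a real product; SM counts are for the full dies, and the shipping GTX 480/460 each have one SM disabled):

```python
# Shader-count arithmetic from the post above. One PolyMorph (tessellation)
# engine sits in each SM, so SM count tracks tessellation throughput.
configs = {
    "GF100 (full die)":          {"sms": 16, "sp_per_sm": 32},
    "GF104 (full die)":          {"sms": 8,  "sp_per_sm": 48},
    "hypothetical 64-SP/SM die": {"sms": 8,  "sp_per_sm": 64},  # 2 dispatchers x 2 SIMDs x 16 lanes
}

for name, cfg in configs.items():
    total_sp = cfg["sms"] * cfg["sp_per_sm"]
    print(f"{name}: {total_sp} SPs, {cfg['sms']} tessellation engines")

# GF100:        512 SPs, 16 engines
# GF104:        384 SPs,  8 engines
# hypothetical: 512 SPs,  8 engines -> GF100's SP count, half the tessellation
```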
 
NVIDIA GeForce GTX 580 priced at US$599, to be available November 9th?

Supposed GTX 580 price ($599) and availability info coming from VR-Zone:

VRzone said:
However, in an unexpected twist, we stumbled upon an online posting made no more than just a few hours ago, which suggests the GeForce GTX 580 is expected to cost US$599, and will be available on November 9th. These GeForce GTX 580 cards will come from a number of NVIDIA's partners, but all are reference boards.

http://vr-zone.com/articles/nvidia-...-599-to-be-available-november-9th-/10222.html

That price is actually a bit far out of reach for me, unfortunately (especially for a single-GPU card).
 
That can't be true... NVIDIA would be shooting themselves in the foot.
 
Especially with no job yet :( But still, $599 seems steep given what you can get a GTX 480 for, and it's only rumored to be ~20% faster.

Exactly. I have no job right now, and at that price it just pushes me away even more. Even if I had the money, I still wouldn't drop that much on a single-GPU card unless it's giving me ungodly performance.

I hope it's not that much, and if it is, I hope the 6900s are cheaper.
 
Exactly. I have no job right now, and at that price it just pushes me away even more. Even if I had the money, I still wouldn't drop that much on a single-GPU card unless it's giving me ungodly performance.

I hope it's not that much, and if it is, I hope the 6900s are cheaper.

If NVIDIA just waited till the 6900s were out, they could judge the performance of the 580 vs. the 6970 and price accordingly.

Heck, if AMD does what they did with the 5870 and goes for roughly a ~$399 price point, NV may have screwed themselves; if they wait, they will know what price will sell as many as possible.
 
Well, that's not too far from reality. The GTX 480 is at 489 USD on average, so 20% more is roughly 587 USD.

The HD 5870 is at 360 USD; the speculated 40% increase for the HD 6970 would take it to 504 USD. BUT that won't be the case, because people would prefer to buy 2x HD 6870 for 480 USD and 20% more performance.
So I guess it will land at around 450 USD.
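The speculation above in one place (a minimal Python sketch; every input is the poster's estimate, not a confirmed price):

```python
# Price speculation from the post above; all inputs are guesses.
gtx480_street = 489                  # average GTX 480 street price, USD
gtx580_est = gtx480_street * 1.20    # priced in line with the rumoured ~20% perf gain
print(f"GTX 580 estimate: ${gtx580_est:.0f}")          # ~$587

hd5870_street = 360
hd6970_naive = hd5870_street * 1.40  # speculated 40% uplift over the HD 5870
print(f"HD 6970 naive estimate: ${hd6970_naive:.0f}")  # $504

# The pressure point: two HD 6870s for ~$480 with ~20% more performance
# would undercut a $504 HD 6970, hence the ~$450 guess.
print(f"2x HD 6870: ${2 * 240}")                       # $480
```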
 
That can't be true... NVIDIA would be shooting themselves in the foot.

Remember what half the NVIDIA fanboys on the forum repeat over and over:

'I don't care about price, power, heat or noise so long as it's the fastest!'


NVIDIA knows that their fanboys (not their regular users) only care that they have the FASTEST, not about any other category.
 