Monday, March 12th 2012

GeForce GTX 580 to Get Price-Cuts

To pave the way for the GeForce GTX 680, which will arrive later this month in limited initial quantities, with wide availability in the months to come, NVIDIA is cutting the price of its GeForce GTX 580 graphics card. The GF110-based behemoth of 2011 will now start at 339.8 EUR, according to European price aggregators such as Geizhals.at. The new price makes the GeForce GTX 580 1.5 GB cheaper than the Radeon HD 7950, giving it a slightly better price-performance ratio. The 3 GB variants of the GeForce GTX 580 are priced similarly to the HD 7950. The GTX 570 now starts at 249 EUR.

54 Comments on GeForce GTX 580 to Get Price-Cuts

#26
CrAsHnBuRnXp
Why not just get the GTX 660, since its performance (according to Tom's Hardware) is on par with a GTX 580's and it's cheaper than the 580 (according to the same source)?

Source
#27
Selene
CrAsHnBuRnXpWhy not just get the GTX 660, since its performance (according to Tom's Hardware) is on par with a GTX 580's and it's cheaper than the 580 (according to the same source)?

Source
That's the whole point: the GTX 660 that was going to be around $299.99 is now going to be called the GTX 680 and sell for around $499.99.
I hope this is not what happens, and I don't blame AMD for NV doing this, but it does show us what happens when you have little or no competition.
If the chart you linked is right, that's not so bad; a little higher than it should be, but not insane.
#28
Benetanegia
CrAsHnBuRnXpWhy not just get the GTX 660, since its performance (according to Tom's Hardware) is on par with a GTX 580's and it's cheaper than the 580 (according to the same source)?

Source
According to every recent rumour, those specs and possibly everything else are false.
#29
CrAsHnBuRnXp
SeleneThat's the whole point: the GTX 660 that was going to be around $299.99 is now going to be called the GTX 680 and sell for around $499.99.
I hope this is not what happens, and I don't blame AMD for NV doing this, but it does show us what happens when you have little or no competition.
If the chart you linked is right, that's not so bad; a little higher than it should be, but not insane.
Well, hopefully that isn't the case, because I've been holding out for months on getting a new card, and if I can get a GTX 660 that has the performance of a GTX 580 for less, I'm going to grab it. If they end up calling it a 680 and pricing it at $500, I'm going to be very pissed for waiting so long.
BenetanegiaAccording to every recent rumour, those specs and possibly everything else are false.
Well, they need to start releasing information so I know whether or not to buy my GTX 570 now or wait a month until the new shit comes out.
#30
Casecutter
BenetanegiaAccording to rumors, things were not going well with the chip (GK110), so instead of releasing another cut-down chip like the GTX 480, they cancelled/postponed it. And again, IF they have done so at all, because no one really knows fuck-all about it. If GK104 couldn't compete with Tahiti, they would have been forced into another cut-down card like the GTX 480.
So, as said, why Nvidia put the GK110 on the R&D back bench we don't know, but it's totally uncharacteristic. Can you recall the last time Nvidia started with anything other than the uber offering?

While it's pretty well established that a GK104 requires "Dynamic Profiles" to make it perform and stay on top of its game while remaining within the established power envelope, Kepler is apparently so perfect that they benched their "star" and brought up the "B" league. And although you contend AMD screwed the pooch on Tahiti, it's at least not MIA.

The truth... to control Kepler, Nvidia needs to add a software "shine". I doubt you'll be able to take a GK104, disable "Dynamic Profiles", and clock it anywhere close to matching a 7970 (which will have been out almost five months before you get what's called a GTX 680 in hand).

IMO it's slow, hot, and nowhere near efficient; Kepler is worse off and late compared to AMD, although someone will pay for Nvidia's R&D to rein it in. It's always someone else's fault: last time TSMC, this time AMD. Nice try.

We wait... :D
#31
Benetanegia
CasecutterSo, as said, why Nvidia put the GK110 on the R&D back bench we don't know, but it's totally uncharacteristic. Can you recall the last time Nvidia started with anything other than the uber offering?
8800 GT? For example?

And GK107 will be the second chip in the series to be released, before GK106, and that's uncharacteristic too. Maybe it's because they decided to address the markets that need it most first (higher revenue); like, I don't know, doing what they said they would over a year ago?

Plus, it's GK100, the high-end chip, that was put back, not GK110. GK110 is a refresh that may or may not be what GK100 was.
While it's pretty well established
:roll:
The truth... to control Kepler, Nvidia needs to add a software "shine". I doubt you'll be able to take a GK104, disable "Dynamic Profiles", and clock it anywhere close to matching a 7970 (which will have been out almost five months before you get what's called a GTX 680 in hand).
:roll: Yeah, I laugh again. Care to show proof? Because I've seen your posts about the dynamic clocks and it's pretty obvious you don't understand what they are at all, so any further conclusion you think you can draw is just wrong.

And really, five months? Maybe the red tint doesn't allow you to follow the calendar, but it's not even going to be three months; two months, if cards are actually available on the 23rd.

EDIT: Oh, and regarding the HD 7970, only a real fanboy doesn't see the obvious elephant in the room: Tahiti is 60% bigger than Pitcairn and has 60% more shaders and TMUs, but it's only 25% faster. Factor in clocks and Tahiti is still 20-25% slower than it should be.
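A quick back-of-the-envelope check of that scaling claim, using only the figures cited in the post (60% more units, a ~25% observed lead); a sketch of the arithmetic, not a measurement:

```python
# Back-of-the-envelope check of the scaling argument above, using only
# the post's own figures: 60% more shaders/TMUs, but only ~25% faster.

expected = 1.60   # perfect linear scaling with 60% more units
observed = 1.25   # the ~25% real-world lead cited in the post

shortfall = 1 - observed / expected
print(f"Tahiti delivers {shortfall:.0%} less than linear scaling suggests")
# -> 22%, in line with the 20-25% figure in the post
```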
#32
xenocide
CasecutterWhile it's pretty well established that a GK104 requires "Dynamic Profiles" to make it perform and stay on top of its game while remaining within the established power envelope, Kepler is apparently so perfect that they benched their "star" and brought up the "B" league. And although you contend AMD screwed the pooch on Tahiti, it's at least not MIA.
You make it sound as though Dynamic Profiles are intended to address some kind of lack of performance, which is definitely not the case. Nvidia is just trying to address what people complain to them about the most: power consumption and heat. Why have their cards run at either 30% when idle or 100% when anything is on screen, when the card can be clocked dynamically so there is no wasted energy or extra heat generation? If a task only requires the card to run at 50%, why run it at 100%? It would be like flooring it between stop signs. It is just inefficient.
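To illustrate the idea, here is a minimal sketch of load-proportional clocking; a hypothetical illustration, not Nvidia's actual algorithm, with the 300 and 950 MHz endpoints borrowed from the rumored figures quoted later in the thread:

```python
# Hypothetical sketch of load-proportional clocking (illustration only,
# not Nvidia's actual algorithm): pick the lowest clock that still covers
# the requested work instead of running flat-out whenever anything is on
# screen.

IDLE_MHZ, MAX_MHZ = 300, 950  # endpoints borrowed from the rumored specs

def pick_clock(utilization: float) -> int:
    """Scale the core clock linearly with the fraction of the GPU in use."""
    utilization = min(max(utilization, 0.0), 1.0)  # clamp to [0, 1]
    return round(IDLE_MHZ + (MAX_MHZ - IDLE_MHZ) * utilization)

for load in (0.0, 0.5, 1.0):
    print(f"{load:4.0%} load -> {pick_clock(load)} MHz")
# 0% -> 300 MHz, 50% -> 625 MHz, 100% -> 950 MHz
```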
CasecutterIMO it's slow, hot, and nowhere near efficient; Kepler is worse off and late compared to AMD, although someone will pay for Nvidia's R&D to rein it in. It's always someone else's fault: last time TSMC, this time AMD. Nice try.
According to?

*crickets*

Yea, I thought so. People need to remember this isn't just a rework of Fermi, so it's not going to behave the same. It's more than likely going to require less power, run cooler, and perform slightly better, given the specs. Stop trying to make it sound like AMD released the greatest GPU ever, and look at the facts. Ben said it correctly: Tahiti is pretty terrible compared to Pitcairn, which is amazing.
#33
qubit
Overclocked quantum bit
Price drop? Hell yeah, I might just get myself a second one and have some SLI fun with it.

Note that this here enthusiast has no idea about being sensible with money, and I'd still get myself the latest card regardless. :D
#34
NanoTechSoldier
RejZoRWell, for 340 EUR I see it as good competition even against the current HD 7000 lineup, despite the fact that it's the older series. With the absolutely nonsensical prices of the HD 7950 and HD 7970 (both way over 400 EUR), this GTX 580 is still a good option.
What Would You Sell A World First 28nm Graphics Card For..??

I Think It's A Reasonable Price, For The "Advanced Micro Devices" HD7000 Series.. Plus They're PCIE 3.0 Cards etc.. Not PCIE 2.0, Like The Nvidia Series Cards...

Graphics Cards, Have A Lot To Do With The Drivers, To Get Performance Too..

OpenGL Drivers, Tell The OS (Or Application), What To Do & DirectX Drivers, Wait For The OS (Or Application), To Tell The Graphics Card, What To Do & Can Slow Performance Down etc.. (CPU Interupts = CPU Load etc..)

On The Other Hand.. If An OpenGL Driver, Isn't Written Properly Or Has Bugs.. It Can Cause Problems In System Too & Cause A Performance Drop..

Point Being... Don't Base Your Graphics Card Purchases, On Price... But, On The Software, Drivers & GPU/s That Run Them etc..
Drivers Can Always, Be Updated Though & An OpenGL Card Is The Best Option..
#35
Steevo
BenetanegiaGK104: the 4 at the end has always been indicative of a midrange/performance part, just like 6 is lower mainstream and 8 is low end. And the existence of GK100 and GK110, indicative of high-end, is very well known, though nothing else is really known except that at some point they were/are in the making.
Again, some proof other than what you THINK is happening?


I have seen none; Nvidia seem to be doing a great job of keeping it under wraps. I for one welcome the competition, as my wallet wins, but I haven't seen anything concrete. Just a lot of speculation and rumors that are based on rumors that are based on an idea someone had about a post they saw somewhere else.
#36
Benetanegia
SteevoAgain, some proof other than what you THINK is happening?


I have seen none; Nvidia seem to be doing a great job of keeping it under wraps. I for one welcome the competition, as my wallet wins, but I haven't seen anything concrete. Just a lot of speculation and rumors that are based on rumors that are based on an idea someone had about a post they saw somewhere else.
Sorry, but that is not rumor. Look, Nvidia has been using that code-naming scheme forever, and there's not a single reason or piece of evidence that it's different this time around. A 4 at the end means midrange/performance. 300 mm² means midrange/performance. 256-bit means midrange/performance. It's known that a GK100 was in the works and then disappeared from the rumor mill, which is why it's supposedly cancelled. And there is certainly a GK110 in the pipeline too. Do you have any VALID reason to believe this chip is anything but their performance chip? No, you don't.
#37
xenocide
BenetanegiaNvidia has been using that code-naming scheme forever, and there's not a single reason or piece of evidence that it's different this time around. A 4 at the end means midrange/performance. 300 mm² means midrange/performance. 256-bit means midrange/performance.
All valid points. I still believe the GK104 was originally intended to go into a GTX 660, and another offering (GK100 or GK110) was intended to be their high-end part. Nvidia probably just saw the HD 7970, did some internal testing, and decided they could make more money using the GK104, so they bumped a GTX 660 to GTX 680 and called it a day.
BenetanegiaDo you have any VALID reason to believe this chip is anything but their performance chip? No, you don't.
That it's now called the GTX680 :wtf:
SteevoI for one welcome the competition, as my wallet wins
Except that they will probably price the GTX 680 at $600, a GTX 670 that sits between the HD 7950 and HD 7970 at $450-500, and so on. Odds are there won't really be a price war; since AMD set their prices high, Nvidia has no reason to lower theirs.
#38
Steevo
I find it funny that you all think Nvidia/ATI set complete card prices, when they mostly manufacture a small piece of silicon that is soldered to a board with GDDR, voltage regulators, a PCB, and many other components that cost money.



ATI sells batches of 7970 GPU dies to Sapphire, HIS, XFX... at the same price, and it is up to the board maker and the retailer to set the retail price. Same for Nvidia; they have dick-all to do with retailers jacking up prices.
BenetanegiaSorry, but that is not rumor. Look, Nvidia has been using that code-naming scheme forever, and there's not a single reason or piece of evidence that it's different this time around. A 4 at the end means midrange/performance. 300 mm² means midrange/performance. 256-bit means midrange/performance. It's known that a GK100 was in the works and then disappeared from the rumor mill, which is why it's supposedly cancelled. And there is certainly a GK110 in the pipeline too. Do you have any VALID reason to believe this chip is anything but their performance chip? No, you don't.
I also don't have any reason to doubt the existence of a flying spaghetti monster, that I can fly if I believe hard enough, or that I'm Superman.

Nothing against you, Bene, but no one in ANY thread has posted anything other than "well, they did X in the past". And if the rumor mill is to be believed, they have had yield issues, heat issues, and performance issues too.
#39
CrAsHnBuRnXp
NanoTechSoldierWhat Would You Sell A World First 28nm Graphics Card For..??

I Think It's A Reasonable Price, For The "Advanced Micro Devices" HD7000 Series.. Plus They're PCIE 3.0 Cards etc.. Not PCIE 2.0, Like The Nvidia Series Cards...

Graphics Cards, Have A Lot To Do With The Drivers, To Get Performance Too..

OpenGL Drivers, Tell The OS (Or Application), What To Do & DirectX Drivers, Wait For The OS (Or Application), To Tell The Graphics Card, What To Do & Can Slow Performance Down etc.. (CPU Interupts = CPU Load etc..)

On The Other Hand.. If An OpenGL Driver, Isn't Written Properly Or Has Bugs.. It Can Cause Problems In System Too & Cause A Performance Drop..

Point Being... Don't Base Your Graphics Card Purchases, On Price... But, On The Software, Drivers & GPU/s That Run Them etc..
Drivers Can Always, Be Updated Though & An OpenGL Card Is The Best Option..
Do you write every word in capital letters when using pen and paper too? If not, why do it here? It makes no sense.
#40
dj-electric
NanoTechSoldier,

i.qkme.me/35qteg.jpg

On a more serious note, a lot of what you wrote are gimmicks more than actually helpful features.
BTW, reading your comments makes my head ache: caps and random punctuation. No offense.
#41
Horrux
BenetanegiaTo me this price cut does sound like it's going to be significantly cheaper than 500, say 350-400, or significantly faster than most of us now expect, because otherwise there would be no rush to lower the GTX 580's price to 330. If they were going to sell it for 500 and it is 25% faster than the GTX 580, there would still be a place for the GTX 580 at 400 or so; no need to go as low as 330, and 250 for the GTX 570. They would be making the new offering look very overpriced, and it forces AMD to lower prices too, BEFORE the GTX 680 launches, which is shooting themselves in the foot, because it's the GTX 680 that needs the fame, not the EOL'd card.
You are correct; the last thing nVidia wants is to force AMD to lower their prices. If they were to do that, it would mean their own offerings were underpriced, and nothing good can come of that for these companies. They maximize profits by charging comparably on a price/performance basis. But it is definitely not competition.
#42
Nihilus
Change of heart

A lot of people went from "ROAR, this Kepler will destroy AMD" to "The real Kepler will come much later." Why? Very few can afford the top-tier Nvidia cards, as opposed to the AMD cards. The whole point of the Kepler excitement is to get HD 7970 prices down, not to see who has the biggest e-peen. If the GTX 680 has similar performance and price to the GTX 580 with lower power consumption, it is still a win. :toast:
#43
xenocide
NihilusA lot of people went from "ROAR, this Kepler will destroy AMD" to "The real Kepler will come much later." Why? Very few can afford the top-tier Nvidia cards, as opposed to the AMD cards. The whole point of the Kepler excitement is to get HD 7970 prices down, not to see who has the biggest e-peen. If the GTX 680 has similar performance and price to the GTX 580 with lower power consumption, it is still a win. :toast:
7970 = $550 MSRP
GTX680 = $550 MSRP (According to Rumors)

Whaaaaaaa???
#44
Benetanegia
NihilusIf the GTX 680 has similar performance and price to the GTX 580 with lower power consumption, it is still a win. :toast:
No, it's not, unless they do something like the 8800 GT and price it accordingly, below $400 at least. The point is that high-end cards from both Nvidia (GTX 500) and AMD (HD 6000) have been selling for the same price since they launched 15 months ago. HD 7000 raised the price point instead of lowering it, and apparently Nvidia will just follow suit, which makes all the sense in the world for them, but it's just crap for us. Only 2-3 years ago, similarly sized chips with the same number of VRAM chips and similar VRM circuitry were selling for $150; now we have to pay 3-4x as much for the same thing.
xenocide7970 = $550 MSRP
GTX680 = $550 MSRP (According to Rumors)

Whaaaaaaa???
Not to mention that the GTX 570 and the GTX 560 Ti and non-Ti have always sold for less than their AMD counterparts. Only the flagship GTX 580 has been more expensive.
#45
Casecutter
Benetanegia8800 GT? For example?
Got me there... it took them five months to get to a G92-based 9800 GTX, but some would say they had no reason to rush; there was no competition. I'll give you that.
Benetanegiadoing what they said they would over a year ago?
Can't say I'd heard of a set road map with intended releases by quarter... I'll take your word on that.
BenetanegiaPlus, it's GK100, the high-end chip, that was put back, not GK110. GK110 is a refresh that may or may not be what GK100 was.
My mistype, but as you say, it "may or may not be what GK100 was"; so in other words the GK100 went in the trash... got it!
BenetanegiaCare to show proof? Because I've seen your posts about the dynamic clocks and it's pretty obvious you don't understand what they are at all, so any further conclusion you think you can draw is just wrong.
Oh, so you have knowledge of what it's actually going to do? Care to share?
BenetanegiaAnd really, five months? Maybe the red tint doesn't allow you to follow the calendar, but it's not even going to be three months; two months, if cards are actually available on the 23rd.
The 7970 released 12/27/11. I said it "will have been out almost five months before you get what's called a GTX 680 in hand". Now for most, that's reality, as any average guy who wants one will be camped out on every e-tailer hoping to get one in the basket and paid for before the other guy. Ten each isn't availability; it's called beta or pre-production. Real accessibility will be at least mid-April, so I stand by five months.
BenetanegiaEDIT: Oh, and regarding the HD 7970, only a real fanboy doesn't see the obvious elephant in the room: Tahiti is 60% bigger than Pitcairn and has 60% more shaders and TMUs, but it's only 25% faster. Factor in clocks and Tahiti is still 20-25% slower than it should be.
That's true... irrefutable data and specs, but the room also has a huge gaping hole where that "whale" went missing. Without a GK1x0 to quantify GK104 Kepler against, I suppose we aren't permitted to "speculate" without specs... we wait. :D
#46
Casecutter
xenocideAccording to?
btarunr...? (so yes, I digress) "705 MHz, which clocks down to 300 MHz when the load is lowest, and the geometric domain (de facto "core") will clock up to 950 MHz on high load."
www.techpowerup.com/forums/showthread.php?t=162035

That says it might have profiles that bump the clocks 35% over baseline, as I read it. :cool:
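The quoted numbers do support that reading; a quick check of the arithmetic, using only the clocks from the quoted post:

```python
# Arithmetic behind the "35% over baseline" reading of the quoted clocks.
baseline, floor, boost = 705, 300, 950  # MHz, all from the quoted post

print(f"boost over baseline: {boost / baseline - 1:.0%}")   # -> 35%
print(f"idle floor: {1 - floor / baseline:.0%} below base") # -> 57%
```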
xenocideBen said it correctly: Tahiti is pretty terrible compared to Pitcairn, which is amazing.
Nice opinion, though it's hard to be your own (only) competition.
CasecutterIMO
Am I not allowed an opinion just like "Benny" ^? :D
#47
Casecutter
BenetanegiaHD 7000 raised the price point instead of lowering it, and apparently Nvidia will just follow suit, which makes all the sense in the world for them, but it's just crap for us.
First, all 28 nm GPU production came with a cost increase that basically wiped out the normal incentive of moving to a die shrink; AMD is contending with that, and so is Nvidia by working from a GK104.
forums.nvidia.com/index.php?showtopic=210049

Consider that the GTX 580's MSRP was $500 with 1.5 GB and hadn't deviated much from that in 17 months, though it's now about 15% less on the market. The 7970 comes with 3 GB, a 15-18% performance increase, better efficiency, and matches the GTX 580's 384-bit bus, and for that it started out asking an extra 10%.

If Nvidia can bring itself to provide a GTX 680 with the capability to overtake the 7970 here or there for $500, that's how I figure it. But realize it would use a much more cost-effective chip, 512-bit (though on that I'm not sure; some say 256-bit) and probably just 2 GB. But for that you get those Dynamic Profiles.
#48
Benetanegia
CasecutterOh, so you have knowledge of what it's actually going to do? Care to share?
There should be no need for that. I just read the same as you did. Difference: I paid attention. When on 100% REAL load (not what software shows, which is always false), the GPU will clock to its highest clocks, ALWAYS, even going into OC territory if the 100% is maintained for a long time. A long time in GPU terms, so milliseconds, after which the 100% (again, REAL, not what Afterburner shows) load will be gone and another profile will be loaded. And when load is lower than 100%, say 50%, it will be clocked much lower, so that the chip jumps to a higher utilization rate. The basis is that, for example, 500 SPs @ 1000 MHz consume much more than 1000 SPs @ 500 MHz. This will be adjusted dynamically by the hardware, differently for each clock domain and with dozens of profiles, so for 100%, 95%, 90%, etc.
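The power argument here can be sketched numerically. A rough model (with illustrative voltages, not measured Kepler figures) where dynamic power scales with active units × V² × f, and where lower clocks permit lower voltage:

```python
# Rough sketch of the "wide and slow beats narrow and fast" point above.
# Dynamic power scales roughly with units * V^2 * f; the voltages below
# are illustrative assumptions, not measured Kepler figures.

def dynamic_power(units: int, volts: float, mhz: float) -> float:
    return units * volts**2 * mhz  # arbitrary units; relative comparison only

narrow_fast = dynamic_power(units=500,  volts=1.10, mhz=1000)  # 500 SPs @ 1000 MHz
wide_slow   = dynamic_power(units=1000, volts=0.85, mhz=500)   # 1000 SPs @ 500 MHz

print(f"500 SPs @ 1000 MHz: {narrow_fast:,.0f}")  # 605,000
print(f"1000 SPs @ 500 MHz: {wide_slow:,.0f}")    # 361,250 -> ~40% less
```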
The 7970 released 12/27/11. I said it "will have been out almost five months before you get what's called a GTX 680 in hand". Now for most, that's reality, as any average guy who wants one will be camped out on every e-tailer hoping to get one in the basket and paid for before the other guy. Ten each isn't availability; it's called beta or pre-production. Real accessibility will be at least mid-April, so I stand by five months.
Lol, yeah, and no one will get a card two months after the official release, sure... :rolleyes:
Either you compare official launches against each other or you simply don't. AMD did a paper launch like no other, but now it's time to disregard that and claim that Nvidia will do a paper launch with three months between official launch and availability? Absurd and flawed thinking based on pure speculation. Dec 22 vs March 22 == 3 months. Period.
#49
NanoTechSoldier
CrAsHnBuRnXpDo you write every word in capital letters when using pen and paper too? If not, why do it here? It makes no sense.
Dj-ElectriCNanoTechSoldier,

i.qkme.me/35qteg.jpg

On a more serious note, a lot of what you wrote are gimmicks more than actually helpful features.
BTW, reading your comments makes my head ache: caps and random punctuation. No offense.
It's more to piss you Grammer Effected off than anything... LOL.. + to Stop people, using translators etc..

I hate, having to repeat myself, to people, that can't understand English.. It wastes my time & money...

"When Life, Gives You Lemons.. Your Screwed.."

That's the only way, you take it.. With a strap-on.. :confused: LOL..
#50
CrAsHnBuRnXp
NanoTechSoldierIt's more to piss you Grammer Effected off than anything... LOL.. + to Stop people, using translators etc..

I hate, having to repeat myself, to people, that can't understand English.. It wastes my time & money...

"When Life, Gives You Lemons.. Your Screwed.."

That's the only way, you take it.. With a strap-on.. :confused: LOL..
It doesn't piss me off at all. I just find it stupid for people to do that, and it takes more time hitting Shift for every word. That's wasting time. If someone doesn't understand it because they have broken English, don't repeat yourself. Let someone else piece it together for them.