Tuesday, January 17th 2012

NVIDIA Rushing in Stopgap HD 7970 Competitor This February?

AMD's Radeon HD 7970 seems to have ruffled a few feathers at NVIDIA, and it looks like the green team doesn't want to give it too much market exposure. A fairly reliable source at ChipHell learned that NVIDIA's GeForce "GTX 680" part could be launched some time in February. The source says this part could be competitive with the HD 7970, though it is not exactly NVIDIA's fastest next-generation GPU in the works, so it has to be something other than the GeForce Kepler 110, which is reportedly slated for March-April. At least the tiny pieces of specifications trickling out seem to reinforce this theory: graphics cards based on this part apparently carry 2 GB of memory, and the core clock speed is reported to be 780 MHz.
Source: ChipHell

74 Comments on NVIDIA Rushing in Stopgap HD 7970 Competitor This February?

#26
qubit
Overclocked quantum bit
Indeed they would, if I was in the market for a card now, so I voted yes.
Posted on Reply
#27
Benetanegia
Like I've said plenty of times, if I wanted to shell out $500+ on that kind of performance, I wouldn't have waited 14 months; I would already have a GTX 580. But no, I don't think that level of performance is worth that much of my money, so I voted yes.

BTW: Was the poll question supposed to include a joke? Because "readily available" and "HD 7970" in the same sentence does look like a joke.
Posted on Reply
#28
entropy13
BenetanegiaLike I've said plenty of times, if I wanted to shell out $500+ on that kind of performance, I wouldn't have waited 14 months; I would already have a GTX 580. But no, I don't think that level of performance is worth that much of my money, so I voted yes.

BTW: Was the poll question supposed to include a joke? Because "readily available" and "HD 7970" in the same sentence does look like a joke.
lol Besides not being readily available here, it's actually more expensive than the GTX 580.
Posted on Reply
#29
Jarman
There are probably very few people who need this kind of power anyway.

My 6950 plays pretty much anything at 1920x1200 with whatever settings you want. There can't be that many people who play at a higher resolution than this at the moment.

Until consoles get their ass into gear (I hear the new Nintendo is based on a crippled R700 design, wow!), uber-high-end PC cards are largely pointless IMO.
Posted on Reply
#30
Completely Bonkers
We are at the point where GPUs consume more power than CPUs. Therefore, CPUs could be single-slot solutions and the GPU could have a socket and huge heatsink on the mainboard. ;) Seriously. Time to reconsider the form factor. Make the "PC" a huge GPU environment... and make the CPU an add-on board. ATX form factor... nV style.
Posted on Reply
#31
the54thvoid
Intoxicated Moderator
Completely BonkersWe are at the point where GPUs consume more power than CPUs. Therefore, CPUs could be single-slot solutions and the GPU could have a socket and huge heatsink on the mainboard. ;) Seriously. Time to reconsider the form factor. Make the "PC" a huge GPU environment... and make the CPU an add-on board. ATX form factor... nV style.
That's not so completely bonkers, Completely Bonkers. :laugh:
Posted on Reply
#32
entropy13
the54thvoidThat's not so completely bonkers Completely Bonkers. :laugh:
Tsk tsk tsk, he's a disappointment for that username. :shadedshu

:laugh:
Posted on Reply
#33
LiveOrDie
I've always liked NVIDIA better. Umm, if I don't sell my 480 on eBay, I think I'll hold off on getting an HD 7970, because its price will fall when this is released.
Posted on Reply
#34
Grings
I voted yes. I was tempted by a 7970 and was just waiting to see the 7950's price, but if NVIDIA's next card is coming that much sooner, it's worth waiting.
Posted on Reply
#35
Selene
I think NV is playing this smart: they will hit AMD with the 670 ($349.00) and then later in the year hit them again with the 680 ($499.99) and 690 ($699.99).
I do not think NV is having any problems, other than that they wanted to do a bottom-up launch and AMD kinda spoiled it with a top-to-bottom launch.
Posted on Reply
#36
blibba
I voted no. Lack of need and money is holding me back.
the54thvoidGF100 was resurrected by GF110. Some compute functionality (what Ben's talking about) was dropped and it resulted in a manageable 512-core card.

To counter Ben, though, the original rumours suggested a bottom-up launch for NV to get the process right. The last time I checked, the Kepler behemoth GPU isn't scheduled to arrive until 2013. GK104 is the card for now (a high-end but not top-end single card).

But folk are also forgetting AMD do plan a dual-Tahiti card in Q1 2012, which will be enormously powerful... and expensive.

So, to summarise, let's just wait and see what happens. We have two tech companies still vying to be the best, so let them fight it out and we'll all benefit.

And yes, my 7970 purchase plans are now on hold (was waiting for water blocks anyway!).
I totally agree with the spirit of your post.

Just a couple of points though - I thought GF110 was more of a manufacturing and optimisation improvement than a compute/graphics compromise weighted towards graphics. I might be wrong.

Secondly, I do believe that a dual GK104 is planned.

Also, Nvidia, please take this opportunity to have a generation that wins the hearts and minds of enthusiasts with a sensible naming scheme. You can put the letters before the numbers if it makes you happy.
Posted on Reply
#37
Benetanegia
blibbaJust a couple of points though - I thought GF110 was more of a manufacturing and optimisation improvement than a compute/graphics compromise weighted towards graphics. I might be wrong.
You are right. GF110 didn't drop any compute feature or capability. It's GF104 and below that did it. They simply made it work as it should with GF110.
the54thvoidTo counter Ben though the original rumours suggested a bottom up launch for NV to get the process right. The last time I checked, the Kepler behemoth gpu isn't scheduled to arrive until 2013. GK 104 is the card for now (a high end but not top end single card).
Oh, I didn't pay attention to this paragraph. You are not completely countering me. GK104 is the card that is going to release "now", and that's what I've been saying. I wouldn't say the end of February is now, though (just as I would even question whether Tahiti has actually launched yet), especially considering that, hard launch or not, they will not have sufficient cards to meet demand by a long shot. I remember the days in which a release with fewer than 250k cards worldwide was almost considered a paper launch. Now it's 10k cards in the US only, and people take precautions not to say paper launch. Bleh, IT IS a paper launch, of course it is.

Regarding the "Kepler behemoth", show me proof that it's going to release in 2013, and please don't link to that chart that screamed fake (i.e. when has NVIDIA EVER given dual-GPU cards a codename? Fake. Only AMD does that.), and don't mention Charlie D. and associates, who still think that no next-gen NVIDIA chip has taped out yet and hence say it's a 2013 launch (if all goes well!).

As I see it, Kepler performance is going to release in February, and the Kepler behemoth, as you called it, will launch in April, probably later. Show me any evidence that demonstrates that NVIDIA can't release their behemoth*, namely GK110, in the April-June timeframe. And again, refrain from citing the usual suspects, who still think that no Kepler chip exists yet, or that, for example, only 9000 wafers of GF100 were ever made (absurd considering the number of cards they sold).

There's simply too much confusion regarding what Kepler is. Some people refer to the high-end chip as Kepler, which will launch in April or later, and hence are calling GK104 a stopgap card, because it's maybe not completely the same architecture as the high-end chip. But that's simply because they have been spoiled by how AMD uses codenames. Other people rightfully include GK104 and hence report Kepler in February.

* How do we know it's going to be a behemoth anyway? I'm talking about a ca. 500 mm^2 behemoth like Fermi. They don't really need it this time around. First and foremost it will be a 512-bit card, hardly more than that. That's only 33% more transistors devoted to that part of the chip, ROPs and L2 included; MC+ROPs+L2 is almost half of the chip in GF100. They don't need a massive increase for Kepler high-end on that front, especially if they sorted out memory clocks. Then it really depends on how they arrange the SMs. For example, and it's just an example, GF104/114 is a 1.9 billion transistor chip, 384 SPs, 256 bits, so in theory they could simply mirror it and end up with a 768 SP, 512-bit chip. Any doubts about such a chip decimating the GTX 580 and, by extension, also easily beating Tahiti? Well, remember: 1.9 billion * 2 = 3.8 billion transistors, a fair bit less than Tahiti and hardly a behemoth. It's not a fair comparison, because GF104 lacks the compute capabilities that high-end Kepler will certainly need. But between such a card and another hypothetical 500 mm^2, 6 billion transistor behemoth, there's a world of difference and a 2 billion transistor budget to play with. NVIDIA, in theory, and if they don't go nuts with compute over what is already sufficient, could release a card with 2x the arithmetic capabilities while staying well below the 500 mm^2 mark. Will they do it and stay below? I don't know; it was just pure speculation.
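For what it's worth, the "mirroring" speculation above is just napkin math, and it can be sketched in a few lines. All the GF104/GF114 figures are the ones cited in the post, not confirmed specs; Tahiti's published count is about 4.31 billion transistors.

```python
# Napkin math from the post above: naively "mirror" GF104/GF114
# (1.9 billion transistors, 384 SPs, 256-bit bus) and compare the
# result against Tahiti. Speculative figures from the post, not specs.

def mirror_chip(transistors, sps, bus_bits, factor=2):
    """Scale a chip by simple duplication, as the post speculates."""
    return transistors * factor, sps * factor, bus_bits * factor

gf104 = (1.9e9, 384, 256)          # GF104/GF114 as cited in the post
t, sps, bus = mirror_chip(*gf104)

print(f"Mirrored chip: {t / 1e9:.1f}B transistors, {sps} SPs, {bus}-bit bus")
print(f"Tahiti for comparison: {4.31e9 / 1e9:.2f}B transistors")
```

Doubling everything lands at 3.8 billion transistors with a 768 SP, 512-bit configuration, which is the poster's point: still noticeably smaller than Tahiti, and nowhere near a 500 mm^2 / 6 billion transistor "behemoth".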
Posted on Reply
#38
Enmity
Wasn't there talk of a GTX 585 possibility at some point? It would make sense for NVIDIA to release a tweaked/tuned 580 variant with 2 GB+, which would be the perfect stopgap before launching the proper Kepler parts in the coming months.

I think it was here at TPU that I read a news post saying the 585 was in the works... haven't heard anything since then, though.
Posted on Reply
#39
the54thvoid
Intoxicated Moderator
BenetanegiaYou are right. GF110 didn't drop any compute feature or capability. It's GF104 and below that did it. They simply made it work as it should with GF110.
I got the info somewhere; this website confirms the notion:

www.nordichardware.com/news/71-graphics/41570-the-geforce-gtx-580-mystery-is-clearing.html

specifically:
GF110 focuses on retail, without HPC functions

We have had a hard time seeing how NVIDIA would be able to activate its sixteenth SM unit without severe problems with the power consumption. But with GF110 NVIDIA made an active choice and sacrificed the HPC functionality (High Performance Computing) that it talked so boldly about for Fermi, not only to make it smaller but also more efficient.

According to sources to NordicHardware it can be as many as 300 million transistors that NVIDIA has been able to cut in this way. The effect is that GF110 will be a GPU targetting only retail and will not be as efficient for GPGPU applications as the older siblings of the Fermi Tesla family. Something few users will care about.
It fares well in compute, however, as it remains cooler, so power doesn't become a limiting factor and it clocks higher (or so the article infers, anyway). They did optimise the design and tweak it very well, but they did also cut it down.
Posted on Reply
#40
MxPhenom 216
ASIC Engineer
Oh boy, could this be another Fermi fail?

Whatever NVIDIA releases, I'll still be going red team this time around. So hurry up and release your stuff, NVIDIA, so there can be a price war.
Posted on Reply
#41
TIGR
They don't need the fastest or newest card to get my money. They just need the card or set of cards that offers the best solution per dollar for the intended application. I don't open my wallet for bragging rights or hype.
Posted on Reply
#42
Benetanegia
the54thvoidIt fares well in compute however as it remains cooler, therefore power doesn't become a limiting factor and it clocks faster (infers the article anyway). They did optimise the design and tweak it very well but they did also cut it down.
That info turned out to be false, AFAIK. The transistor count for GF110 is exactly the same as GF100, as reported everywhere. GF110 is almost a GF100b, apart from some changes to the texture mapping units. Everything else remained untouched.
Posted on Reply
#43
the54thvoid
Intoxicated Moderator
BenetanegiaThat info turned out to be false, AFAIK. The transistor count for GF110 is exactly the same as GF100, as reported everywhere. GF110 is almost a GF100b, apart from some changes to the texture mapping units. Everything else remained untouched.
Yup, more checking (AnandTech) agrees: 3B transistors for both. Reminds you how botched the 480 was when they redesigned it and came up with the 580. :laugh:
Posted on Reply
#44
Patriot
BenetanegiaThat info turned out to be false, AFAIK. The transistor count for GF110 is exactly the same as GF100, as reported everywhere. GF110 is almost a GF100b, apart from some changes to the texture mapping units. Everything else remained untouched.
Indeed, a GTX 580 was just a fully functional GTX 480...
I am curious what NVIDIA is up to... other than the usual "please wait to purchase our cards."

They really can't afford to make it* extremely power-hungry unless it's much faster and priced competitively.

* your guess is as good as mine...
Posted on Reply
#45
faramir
_JP_Rushing is rarely a good idea. Especially in this market. Let's see what comes out of it.
They are not rushing anything - they put out some rumors that create FUD among potential buyers of AMD's upcoming lineup. You can see cases of affected individuals further up this thread. It is a standard tactic some companies employ (Microsoft was well known for this, never actually coming up with the product in the end, but the rumors were enough to retard sales of competing real-world products as people waited for M$'s vaporware to show up).

Unlike M$, NVIDIA will release their new line-up eventually, fearmongering rumors or not, and this "eventually" is going to be later than AMD's release of the 7xxx series.

If you need a more powerful graphics card now, there is only one product to buy. If you can afford to wait, then wait ... and wait forever as there is always something better just around the next corner ;)
Posted on Reply
#46
erocker
*
I think Nvidia likes to talk too much. Show me the product!
Posted on Reply
#47
Crap Daddy
erockerI think Nvidia likes to talk too much. Show me the product!
It's the other way around, we talk they don't!
Posted on Reply
#48
TheMailMan78
Big Member
As long as they don't give everyone an exploding 590 again Ill be happy.
Posted on Reply
#49
blibba
Crap DaddyIt's the other way around, we talk they don't!
Agreed, to be honest. I don't think there's been an official NV press release on GTX 6/7 cards at all yet.
Posted on Reply
#50
chris89
Voted No for myself.

Need to upgrade GPU at some point but can't afford one at the moment.

I just hope that however good this new NVIDIA GPU is, it makes the HD 7950 more affordable when its price is announced, or pushes down the prices of the HD 6950/6970, since they're about 3-odd gens behind now :P

Chris
Posted on Reply