Friday, October 15th 2010

NVIDIA to Counter Radeon HD 6970 ''Cayman'' with GeForce GTX 580

AMD is moving through its product development cycle at a breakneck pace; NVIDIA has trailed it in the DirectX 11 and performance-leadership race by months. This November, AMD will release "Cayman," its newest high-end GPU, which is expected to outperform NVIDIA's GF100. That is a serious cause for concern for the green team, which is back to its old tactic of talking about GPUs that haven't even taken shape in order to water down AMD's launch. Enter the GF110, NVIDIA's new high-end GPU under design, on which the GeForce GTX 580 is based.

The new GPU is speculated to have 512 CUDA cores, 128 TMUs, and a 512-bit wide GDDR5 memory interface holding 2 GB of memory, with a TDP close to that of the GeForce GTX 480. In the more immediate future, there is the prospect of a more realistic-sounding GF100b, which is basically GF100 with all 512 of its CUDA cores enabled, retaining the 384-bit GDDR5 memory interface and 64 TMUs, with a slightly higher TDP than the GTX 480.
Sources: 3DCenter.org, PCGH

195 Comments on NVIDIA to Counter Radeon HD 6970 ''Cayman'' with GeForce GTX 580

#76
cis278
What happened to Nvidia?

Look at the products Nvidia is putting out. Their TDPs are much higher than AMD's, and Nvidia is cutting ROPs from their GPUs yet keeping the same number of CUDA cores as (or more than) some last-gen DX10 cards (GTS 250, GT 240), while performing the same or worse. Luckily Nvidia has a lock on PhysX, which still leaves much to be desired, or who knows where they would be sitting.
Posted on Reply
#77
inferKNOX
CDdude55I'm just saying;):
Ha ha ha, mine is not a complaint, but a bit of a giggle at nVidia's expense. Silly me, I never read through all the specs properly, else I would have seen the new jumbo bus, etc. I just saw the 512 CUDA cores and was beside myself with laughter, thinking it was the original GTX480.;)

And actually I do hope that nVidia gives AMD a good kick in the butt! The fact that they want the 6800s to be GPUs that perform like 5800s is a testament to AMD's newfound complacency!:shadedshu
And yes, I do know that it's because they're apparently moving their mid-range up from the x700s, but where is the sense in that?! Where's the space for the high end then... x900? Rubbish, there's not enough space there for mid-high, high, and ultra-high end!
It's an overpricing ploy and nothing more, IMHO, so yes nVidia, make them look stupid by making their "best single GPU card" dominance short-lived! Serves them (AMD) right for trying to be lax!:banghead:

*phew*... rant over... for now!:rockout:
Posted on Reply
#78
CDdude55
Crazy 4 TPU!!!
Atom_AntiWe know quite a lot about it, even more than about the upcoming HD 6900:rolleyes:. GF110 = 512+ CUDA cores, 128 TMUs, 512-bit 2 GB memory. That means a 700+ mm^2 chip, which is too big for TSMC's production line, and the TDP would be a lot more than 300 watts; then I could continue with the technical problems, heat and price. No way!, Nvidia is just trying to keep their fans happy somehow:ohwell:.
Sources?, links?:confused:
Posted on Reply
#79
erocker
*
CDdude55Sources?, links?:confused:
First post of this thread. The two small links at the end of bta's post.
Posted on Reply
#80
Atom_Anti
CDdude55Sources?, links?:confused:
:D
erockerFirst post of this thread. The two small links at the end of bta's post.
Thank You:toast:!
Posted on Reply
#81
CDdude55
Crazy 4 TPU!!!
erockerFirst post of this thread. The two small links at the end of bta's post.
Any that are not in German?..:laugh:
Posted on Reply
#82
Benetanegia
Yellow&Nerdy?Erm, am I the only one who remembers the article about the "full-fledged" 512 SP GTX 480 a couple of months ago? en.expreview.com/2010/08/09/world-exclusive-review-512sp-geforce-gtx-480/9070.html 204 W of extra power consumption and 5% more performance. Not to mention the triple-slot, triple-fan cooler.

For this "GTX 580" to be possible, Nvidia has to make major changes to the GF100 architecture. And I don't see Nvidia having resources to do that, since they just got done releasing their entry-level desktop cards, are still missing a dual-GPU card and have only released their first notebook-graphics.
That was not a card from Nvidia, and it was most probably FAKE. In fact, maybe not: the pictures had the portion of the chip that states the revision number blurred. That card was probably a prototype using A1 silicon, and yes, we know A1 was not very good from the fact that A2 and A3 silicon were created. Pff. Only a clueless person can believe that enabling an extra 5% of shaders (32 SPs) is going to consume 200 W more. By that rule of thumb, the GTX 465 with 352 SPs consumes nearly 200 W, so the GTX 480 with 128 SPs more would consume 1000 W... suuuure.
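That extrapolation is easy to reproduce. A minimal back-of-envelope sketch, using only the round figures quoted in this thread (a claimed ~200 W delta for 32 extra SPs, a 352-SP GTX 465, a 480-SP GTX 480); none of these are measured values:

```python
# If 32 extra shaders really cost ~200 W, what does that imply per shader,
# and what would it predict for the 128-SP gap between GTX 465 and GTX 480?
claimed_extra_watts = 200          # power delta claimed for the 512-SP sample
extra_shaders = 512 - 480          # 32 SPs enabled over a stock GTX 480

watts_per_shader = claimed_extra_watts / extra_shaders
print(f"Implied cost per shader: {watts_per_shader:.2f} W")           # 6.25 W

predicted_delta = (480 - 352) * watts_per_shader
print(f"Predicted GTX 465 -> GTX 480 delta: {predicted_delta:.0f} W")  # 800 W
# An 800 W jump between two shipping cards is absurd, so the claimed 200 W
# cannot be explained by the extra shaders alone.
```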
Atom_AntiWe know quite a lot about it, even more than about the upcoming HD 6900:rolleyes:. GF110 = 512+ CUDA cores, 128 TMUs, 512-bit 2 GB memory. That means a 700+ mm^2 chip, which is too big for TSMC's production line, and the TDP would be a lot more than 300 watts; then I could continue with the technical problems, heat and price. No way!, Nvidia is just trying to keep their fans happy somehow:ohwell:.
We don't know anything. Those are fake specs coming from sites that are actually mentioning 4 different possibilities, each one making less sense than the last.
erockerFirst post of this thread. The two small links at the end of bta's post.
Yeah, sure, based on that we know everything about GF110...

So I followed the first link, www.3dcenter.org/news/2010-10-13, and which of the specs mentioned there are true, exactly?

1- Fully enabled GF100?
2- 512 SP, 128 TMU, 512 bit?
3- 576 SP, 96 TMU, 384 bit?
4- Two GF104s in a die? 768 SP, 128 TMU, 512 bit

The truth is they are completely clueless. And it's obvious they have no source of any kind, which becomes apparent from the fact that they are posting 4 different possible configs. If you had a source, or if Nvidia PR were behind you, you would have one spec posted, wrong or not, fake or not, hype or not. But 4? Come on...
Posted on Reply
#83
Atom_Anti
Whatever it is, Nvidia can only answer with their next production stepping. That means next summer/fall.
Posted on Reply
#84
the54thvoid
Intoxicated Moderator
Ya, neine, ich bein ein...something and all that.

One of the scenarios points to a 768-shader GTX 580 with about 50% more performance than the GTX 480.

Excuse me for laughing out my cornflakes.

We're talking 40nm process here. A GF100 variant (unlike the super GF104) isn't very practical.

It took many months to release a crippled GTX 480 (480 cores, not 512). They've yet to manage a 512-core variant. Yes, they probably will. But surely it requires a redesign, which would be a bit odd seeing as they are working on Kepler now.

Or will Kepler be a different design team, seeing as the Fermi design team managed to miss a few things in the design-to-manufacture process (just saying what JHH said).

Can I also state I think it's absolute bollocks that AMD has kept 58xx series prices so high. If NV can make a chip like the nonsense we're all talking about, it'll cost a fortune. Just like the 69xx surely will :(
Posted on Reply
#85
the54thvoid
Intoxicated Moderator
BenetanegiaThe truth is they are completely clueless. And it's obvious they have no source of any kind, which becomes apparent from the fact that they are posting 4 different possible configs. If you had a source, or if Nvidia PR were behind you, you would have one spec posted, wrong or not, fake or not, hype or not. But 4? Come on...
Completely with you there dude.

Physical products and real info are the manna of tech, not rumour and superstition.
Posted on Reply
#86
Benetanegia
GF100 failed because of a very specific problem that has already been fixed (the fabric). All this "Nvidia couldn't do 512 SPs, much less more of them" BS needs to stop (I'm not talking to anyone specifically). There was a problem and it has been fixed. (2, actually, if you count the bad TSMC process as well, and that was something even AMD suffered from.) AMD couldn't make R600 work well; it was a 720 million transistor behemoth (at the time) that was creamed by its 680 million transistor competitor (8800 Ultra) by as much as a 40% performance lead. Months later, the 959 million transistor, 2.25x-more-horsepower RV770 was born. Stop saying stupid things please...
Atom_AntiWhatever it is, but Nvidia can answer only with the next production stepping. That means next summer/fall times.
Erm, no. There's one that could be released already, and that is the 3rd one they mentioned, or the first one I mentioned. That one is 3/2 of a GF104, and it could have been in the making since the first GF104 silicon came back from TSMC 6 months ago and they realized how much better than GF100 it was, meaning full production chips could be on their way already. You could bet that Nvidia could indeed be making this, since Nvidia allocated 70%+ of 40nm capacity once again, and capacity now is like 4x higher than it was a year ago. Only for GF106 and GF108? Most probably, but is it certain there's nothing more? Methinks not.

That one is not only doable, it's a lot more doable than GF100 and better than GF100 in every possible way, and it doesn't require Nvidia going back to the drawing board at all. 50% more GPCs/shaders means less than 50% more silicon, because the PCIe interface, the video decoder portion of the chip and many other things are already there. In any case, if we take the most pessimistic number of a 50% increase in silicon, we end up with a 2.8 billion transistor, 480 mm^2 chip. Likewise, 50% more power consumption means 225-275 W.

basically "GF110" vs GF100:

transistors: 2.8 vs 3.1 billion
die area: 480 mm^2 vs 530mm^2
shaders: 576 vs 480
tmu: 96 vs 64
384 vs 384 bit
275w vs 320w
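The arithmetic behind that comparison can be sketched in a few lines. The GF104 baseline figures below are the rough numbers implied in this thread (~1.95 billion transistors, ~320 mm^2), not official die measurements, and the 1.5x factor is the deliberately pessimistic assumption that silicon scales linearly with unit count:

```python
# Pessimistic upper bound for a "3/2 GF104" chip: assume transistor count
# and die area both scale fully linearly with the 50% increase in units.
gf104_transistors = 1.95e9   # rough figure quoted in this thread
gf104_area_mm2 = 320         # baseline implied by the 480 mm^2 estimate above

scale = 1.5                  # 576 SP / 384 SP = 3/2 of a GF104

est_transistors = gf104_transistors * scale
est_area_mm2 = gf104_area_mm2 * scale

print(f"~{est_transistors / 1e9:.1f}B transistors, ~{est_area_mm2:.0f} mm^2")
# -> ~2.9B transistors, ~480 mm^2, just under GF100's ~3.1B / ~530 mm^2.
# The real chip would come in lower still, since PCIe, display and video
# blocks are not duplicated when shader count grows.
```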
Posted on Reply
#87
HalfAHertz
BenetanegiaGF100 failed because of a very specific problem that has already been fixed (the fabric). All this "Nvidia couldn't do 512 SPs, much less more of them" BS needs to stop (I'm not talking to anyone specifically). There was a problem and it has been fixed. (2, actually, if you count the bad TSMC process as well, and that was something even AMD suffered from.) AMD couldn't make R600 work well; it was a 720 million transistor behemoth (at the time) that was creamed by its 680 million transistor competitor (8800 Ultra) by as much as a 40% performance lead. Months later, the 959 million transistor, 2.25x-more-horsepower RV770 was born. Stop saying stupid things please...



Erm, no. There's one that could be released already, and that is the 3rd one they mentioned, or the first one I mentioned. That one is 3/2 of a GF104, and it could have been in the making since the first GF104 silicon came back from TSMC 6 months ago and they realized how much better than GF100 it was, meaning full production chips could be on their way already. You could bet that Nvidia could indeed be making this, since Nvidia allocated 70%+ of 40nm capacity once again, and capacity now is like 4x higher than it was a year ago. Only for GF106 and GF108? Most probably, but is it certain there's nothing more? Methinks not.

That one is not only doable, it's a lot more doable than GF100 and better than GF100 in every possible way, and it doesn't require Nvidia going back to the drawing board at all. 50% more GPCs/shaders means less than 50% more silicon, because the PCIe interface, the video decoder portion of the chip and many other things are already there. In any case, if we take the most pessimistic number of a 50% increase in silicon, we end up with a 2.8 billion transistor, 480 mm^2 chip. Likewise, 50% more power consumption means 225-275 W.

basically "GF110" vs GF100:

transistors: 2.8 vs 3.1 billion
die area: 480 mm^2 vs 530mm^2
shaders: 576 vs 480
tmu: 96 vs 64
384 vs 384 bit
275w vs 320w
If only these things worked so linearly...

Ok, let's think logically: all these new shaders have to be connected to the L2 cache, and those connections take some space. Then you'll probably need a larger (and/or faster) L2, unless you want to leave all those new shiny cores starved for data. Then the back-end with 50% more TMUs will need to be rewired, and all of these new changes will need to be tested over and over and over... I dunno man, you make it sound a lot easier than it actually is.

I still think that if they play around with the existing cores and release fully enabled ones, Nvidia should pretty much be in the clear. Get rid of the 465 because it sucks donkey a$$, release a 466 based on the full GF104 core, get rid of the 470 because the 466 will eat it up, move the 480 down to a 475, and have a 512-core 485 at the top of the line. Play with prices and voltages a bit, and et voila, problem solved.
Posted on Reply
#88
Athlon2K15
HyperVtX™
The fact that this thread is pure speculation makes me :roll:
Posted on Reply
#89
cadaveca
My name is Dave
AthlonX2The fact that this thread is pure speculation makes me :roll:
Just some nV soul-sucking on the 6-series launch. I'd be more concerned if it DIDN'T happen.
Posted on Reply
#90
Athlon2K15
HyperVtX™
Nvidia isn't going to release anything until the 2nd quarter of next year; they have no reason to.
Posted on Reply
#91
Benetanegia
HalfAHertzIf only these things worked so linearly...

Ok, let's think logically: all these new shaders have to be connected to the L2 cache, and those connections take some space. Then you'll probably need a larger (and/or faster) L2, unless you want to leave all those new shiny cores starved for data. Then the back-end with 50% more TMUs will need to be rewired, and all of these new changes will need to be tested over and over and over... I dunno man, you make it sound a lot easier than it actually is.
Nothing is wired as you say. Fermi is 100% modular and every step was designed so that you can add anything in a LEGO fashion, from SIMDs, to SMs, to complete GPCs. Buffers and pooled buses are placed between every step for that purpose and the performance penalty that Fermi suffers in terms of SP/performance in comparison with G80/G92/GT200 supposedly comes from this re-alignment. The trade off was made (just like when Ati created R600), now it's time to add the components that actually do the work.

Yeah, maybe it's not as easy as I made it out to be, but it certainly isn't as difficult as a completely new chip. It's been 6+ months since GF104 was finished (not released). 6 months is more than enough to make that thing and then some. Besides, forget about release times; it's internal timelines we have to look at, and those are unknown. Release dates for GF104, 106 and 108 were not based on when the design was finished, but on when enough of them could be made for a proper release without eating into production of the chips that make the most money, i.e. the higher-end ones. Bottom line: Fermi derivatives were probably almost finished even before GF100 cards were released. Enough time for anything.

EDIT: And no, you don't need more L2. Fermi has much more L2 than any GPU will ever need. The reason is GPGPU (GF100 is and will always be the GPGPU chip, just like G80 was always the GPGPU part; G92 only ever existed as a gaming chip). Is GF104 showing any decrease in performance due to less L2 per SP? No, not even 1%. And 50% more SPs per SM were added. Adding another 16 SPs, equalling a 33% increase, is not going to change that either.

Also:
If only these things worked so linearly...
They don't indeed, but it's actually the other way around from what you are suggesting, and in absolute favor of the "3/2 GF104" GF110:

- Doubling execution units almost never doubles transistor count or die area, especially die area. And you waste less area on "margins" (I know there's a term for that). e.g.:

Ati

Redwood = 627 million
Juniper = 1040 million
Cypress = 2150 million; more than twice, yes, but it doesn't count because it has at least one massive difference: it supports 64-bit (double precision), while Juniper and below don't.

RV730 = 514 million (remember 320 SP)
RV740 = 826 million (640 SP)
RV770 = 956 million (800 SP)

Nvidia

GF108 = 585 million
GF106 = 1170 million
GF104 = 1950 million

- Power requirement increases are almost always lower than the actual active transistor increase.
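A quick ratio check of those transistor counts backs this up. The shader counts in the labels are the commonly cited configurations and are added here for context; the transistor figures are the ones listed above:

```python
# For each pair, the second chip has twice the shaders of the first.
# Transistor counts are the figures listed in the post above.
pairs = [
    ("RV730 -> RV740 (320 -> 640 SP)", 514e6, 826e6),
    ("GF108 -> GF106 (96 -> 192 SP)",  585e6, 1170e6),
    ("GF106 -> GF104 (192 -> 384 SP)", 1170e6, 1950e6),
]

for label, small, big in pairs:
    print(f"{label}: {big / small:.2f}x transistors for 2x shaders")
# -> 1.61x, 2.00x and 1.67x respectively: doubling the shader count costs
# at most 2x the transistors, and usually noticeably less.
```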
Posted on Reply
#92
CDdude55
Crazy 4 TPU!!!
Like any other company, they are trying to garner hype for themselves and water down the hype for a competitor that is prepping a product launch; nothing new.
Posted on Reply
#93
the_pharaoh
Total fail, not news. The two sites linked as "sources" cite each other as sources :laugh: give me a break...

As far as I'm concerned until cards are on shelves it's all a bunch of hot air. NVIDIA has zero credibility left with me after the Fermi "launch".
Posted on Reply
#94
CDdude55
Crazy 4 TPU!!!
the_pharaohTotal fail, not news. The two sites linked as "sources" cite each other as sources :laugh: give me a break...

As far as I'm concerned until cards are on shelves it's all a bunch of hot air. NVIDIA has zero credibility left with me after the Fermi "launch".
The second site actually lists more than one source, including that article from 3DCenter.

3DCenter says this in that article about its source:
''However, the source of the new information on the GF110 chip can be trusted; it is the same one that previously gave us the specifications of the AMD chips Barts and Cayman, which have now apparently turned out to be correct.''
So who knows...

And as for the Fermi comment... I like my GTX 470.:)
Posted on Reply
#95
v12dock
Block Caption of Rainey Street
Are they going to be able to make the chip this time....:laugh:
Posted on Reply
#96
HalfAHertz
BenetanegiaNothing is wired as you say. Fermi is 100% modular and every step was designed so that you can add anything in a LEGO fashion, from SIMDs, to SMs, to complete GPCs. Buffers and pooled buses are placed between every step for that purpose and the performance penalty that Fermi suffers in terms of SP/performance in comparison with G80/G92/GT200 supposedly comes from this re-alignment. The trade off was made (just like when Ati created R600), now it's time to add the components that actually do the work.

Yeah, maybe it's not as easy as I made it out to be, but it certainly isn't as difficult as a completely new chip. It's been 6+ months since GF104 was finished (not released). 6 months is more than enough to make that thing and then some. Besides, forget about release times; it's internal timelines we have to look at, and those are unknown. Release dates for GF104, 106 and 108 were not based on when the design was finished, but on when enough of them could be made for a proper release without eating into production of the chips that make the most money, i.e. the higher-end ones. Bottom line: Fermi derivatives were probably almost finished even before GF100 cards were released. Enough time for anything.

EDIT: And no, you don't need more L2. Fermi has much more L2 than any GPU will ever need. The reason is GPGPU (GF100 is and will always be the GPGPU chip, just like G80 was always the GPGPU part; G92 only ever existed as a gaming chip). Is GF104 showing any decrease in performance due to less L2 per SP? No, not even 1%. And 50% more SPs per SM were added. Adding another 16 SPs, equalling a 33% increase, is not going to change that either.

Also:



They don't indeed, but it's actually the other way around from what you are suggesting, and in absolute favor of the "3/2 GF104" GF110:

- Doubling execution units almost never doubles transistor count or die area, especially die area. And you waste less area on "margins" (I know there's a term for that). e.g.:

Ati

Redwood = 627 million
Juniper = 1040 million
Cypress = 2150 million; more than twice, yes, but it doesn't count because it has at least one massive difference: it supports 64-bit (double precision), while Juniper and below don't.

RV730 = 514 million (remember 320 SP)
RV740 = 826 million (640 SP)
RV770 = 956 million (800 SP)

Nvidia

GF108 = 585 million
GF106 = 1170 million
GF104 = 1950 million

- Power requirement increases are almost always lower than the actual active transistor increase.
Well, I guess we'll have to wait and see... a while... has anyone read any good books lately? :laugh:
Posted on Reply
#97
erocker
*
I don't expect to see this "GTX 580" for a while. I don't see how a 512-core GTX 480 is going to happen, as it already has happened and failed (6-12% perf. increase and horrible power usage). Nvidia is going to need to be creative for the next half-year or so. They need to work with the GF104 chip as much as they can until 28nm is ready. It's freaking sad. I hope they can get something competitive out, otherwise prices aren't going to be very competitive at all, all around.
Posted on Reply
#98
bear jesus
Really, all I expected from Nvidia in the near future was a refresh of the current cores, fully enabled, possibly from binned chips, or possibly a refresh similar to the 280-to-285 one, although I thought that refresh was basically a drop from 65nm to 55nm, which is why I wasn't expecting anything much beyond full-fat GF100 and GF104 chips.
I was expecting nothing special from them until they move on to 28nm. I would have said the same for AMD, but it seems the 6xxx cards may prove to be quite nice; only time will tell.
erockerI don't see how a 512-core GTX 480 is going to happen, as it already has happened and failed (6-12% perf. increase and horrible power usage).
Do you think well-binned chips could possibly have everything enabled and not chow down a massive amount more power?
Posted on Reply
#99
erocker
*
bear jesusReally, all I expected from Nvidia in the near future was a refresh of the current cores, fully enabled, possibly from binned chips, or possibly a refresh similar to the 280-to-285 one, although I thought that refresh was basically a drop from 65nm to 55nm, which is why I wasn't expecting anything much beyond full-fat GF100 and GF104 chips.
I was expecting nothing special from them until they move on to 28nm. I would have said the same for AMD, but it seems the 6xxx cards may prove to be quite nice; only time will tell.



Do you think well-binned chips could possibly have everything enabled and not chow down a massive amount more power?
It's simply not cost efficient. This is what a "well binned" chip gets you: en.expreview.com/2010/08/09/world-exclusive-review-512sp-geforce-gtx-480/9070.html/1
Posted on Reply