Discussion in 'General Hardware' started by Crap Daddy, Jan 9, 2012.
Here are some specs and pricing leaked for the GK104. Take it as you wish.
If that price is true... I will be very happy.
So it should be faster than a GTX 580. I don't like that we don't have a shader clock anymore; I think it will decrease performance, but we'll have to wait and see how it performs.
If true I'd expect it to match the 7970 at stock in most cases, and I'd also expect them to raise the price instead of fighting a price war.
It's the first leak that seems close to reality. NV is holding a press conference at CES right now, so maybe we'll find out more.
It should be 10-15% over the 580, and it's a mid-to-high-range GPU, something like the 560 Ti.
I don't know. Every other leak seems to be way overshooting it but this one seems to be undershooting it. Hopefully they will say something at the conference.
Looks like GF114 +50% on everything; 12 tessellation units, meh... indeed somewhat of a midrange GPU.
If this is true, then it looks like nVidia is integrating the shader clock into the ROP/TMU clock now, just like the Radeons (or maybe it's hidden again, just like in the G70).
If the shader clock is fixed like on Fermi, it doesn't matter anyway whether it's hidden or not.
This looks like a realistic design, but flawed... and look, 4 ALUs per shader... that sounds familiar!
I think last gen showed where the optimal rop/shader ratio was, and this undershoots shaders in favor of rops...oh wait...LIKE EVERY OTHER NVIDIA DESIGN. I thought they were going to fix that?
This would be faster than the 7970 per clock, sure, but it will not be as efficient. Since the 7970 can use its 300 W TDP and overclocks pretty linearly with voltage, it would be the overall better design.
On the over-under game, this would appear to be 1-2 48sp units light of maximum efficiency. That would probably be a big-ass chip, granted, which is why it was smart of AMD to stay at 32 ROPs, even if using fewer units. I still argue they didn't really need over 30 CUs... perhaps just barely, hence it makes sense to overshoot on an enthusiast part (and to allow breathing space for a 1792 part...), and that will be proven by Pitcairn, which should have amazing efficiency, even if just barely shader-light (assuming 24 ROPs, 1408sp)... It should be within 1 CU in average use.
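To put those shader counts in CU terms, here's a quick throwaway Python snippet; it only assumes the GCN layout of 64 shaders per CU, and the 1408 figure is just my Pitcairn guess above, nothing confirmed:

# Rough CU counts for the SP figures being tossed around (GCN: 64 SPs per CU)
for sps in (2048, 1792, 1408):
    print(f"{sps} SPs = {sps // 64} CUs")
# 2048 SPs = 32 CUs (Tahiti/7970), 1792 SPs = 28 CUs, 1408 SPs = 22 CUs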
That all said, a salvage 32 rop part using this design should be perfect with 10 GPCs...lots of wasted transistors though.
IOW, this sounds like typical nVidia. What AMD would need to match it will boil down to how nVidia implements their ALUs. They will probably lose efficiency compared to Fermi, as it was super granular (2 scalar ALUs with 2x clock), but gain die savings by moving closer to AMD's (and Intel's) design. Rough guess would be AMD would need a 7970 in the 11xx MHz range.
Bring on the inevitable 1200 MHz AMD SKU with (hopefully) 0.28 ns GDDR5...
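And for the memory side, a back-of-envelope bandwidth calc (Python, purely illustrative): it assumes the usual GDDR5 speed-grade convention that the per-pin rate in Gbps is roughly 2 divided by the cycle time in ns (0.4 ns parts = 5 Gbps, so 0.28 ns would be about 7 Gbps), and the bus widths are just example figures:

# Peak GDDR5 bandwidth from speed grade and bus width
def gddr5_bandwidth_gbs(cycle_time_ns, bus_width_bits):
    data_rate_gbps = 2.0 / cycle_time_ns           # per-pin data rate
    return data_rate_gbps * bus_width_bits / 8.0   # GB/s

print(round(gddr5_bandwidth_gbs(0.28, 384)))  # ~343 GB/s on a 7970-style 384-bit bus
print(round(gddr5_bandwidth_gbs(0.28, 256)))  # ~229 GB/s on a 256-bit bus
# Stock 7970 for reference: 5.5 Gbps x 384-bit / 8 = 264 GB/s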
I believe this is from OBR's blog HERE.
If it's true, the GTX 660 could be the best video card ever made.
1536MB AGAIN? nVIDIA, get your act together, AMD has you beat in this segment...
And dayuuuuuuuum, a GTX 660 will beat my 580...
3.2 billion transistors on 28 nm = 256-bit GDDR5, for starters
If the 399 USD price is true, I will probably buy one.
Sausages in the background? LOLWUT?
The sausages are fine. I just found out that NV has some "new" mobile GPUs, the 600 series; I saw a laptop with a GT 610M on offer. They are of course rebranded mobile Fermis. Anyway, I think this is a clear indication that the new desktop GPUs (whenever they come out) will be the 700 series. Maybe I was the only one not aware of this, but anyway.
What makes you say that? nVIDIA only skips a generation when they have custom OEM cards on offer.
I heard a rumor that the desktop GPUs were going to be 700s but that is the first actual evidence I have heard that shows it could be true.
Did some research and here it is:
Now of course that doesn't mean the desktop parts will be named 700, but since they jumped from the 200 series to the 400 series with a line of mobile 300s in between, it might be so.
Now if Nvidia would just hurry up and release some concrete details we would be set. I'm thinking about waiting to see if there is a price war anyway, I need Ivy more than I need a GPU upgrade right now and anything I could save would help.
They missed out the 100 and 300 series because OEMs had their own special cards and nVIDIA didn't want to confuse consumers. There were mobile versions of the 200 series cards AND the 400 series cards after all, and nVIDIA didn't skip them.