Friday, January 9th 2015

NVIDIA GM206-300 Silicon Pictured

Here's the first picture of NVIDIA's next chip based on its "Maxwell" architecture, the GM206-300. Powering the upcoming GeForce GTX 960 graphics card, the silicon appears to have half the die size of the company's current flagship GM204, which powers the GTX 980 and GTX 970. The package itself is smaller, with a much lower pin count, owing to a memory bus half the width of the GM204's, at 128-bit, and fewer power pins. No other specs have leaked, but we won't be surprised if its CUDA core count is about half that of the GTX 980. NVIDIA plans to launch the GeForce GTX 960 on the 22nd of January, 2015.
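If the "half of GM204" reading is right, a back-of-envelope scaling of the known GTX 980 specs gives a rough picture. The GM204 figures are public; every GM206 number below is pure speculation:

```python
# Scale the known GTX 980 (GM204) figures by the rumored "half of
# GM204" ratio. GM204 numbers are public; the GM206 values are guesses.
gm204 = {"cuda_cores": 2048, "bus_width_bits": 256, "die_area_mm2": 398}

ratio = 0.5  # assumed from the die-size and bus-width observations
gm206_guess = {k: int(v * ratio) for k, v in gm204.items()}

print(gm206_guess)
# {'cuda_cores': 1024, 'bus_width_bits': 128, 'die_area_mm2': 199}
```

The 128-bit figure that falls out of this halving matches the bus width visible from the package, which is what makes the "half of GM204" guess plausible in the first place.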
Source: VideoCardz

22 Comments on NVIDIA GM206-300 Silicon Pictured

#1
nunomoreira10
If it is indeed half of the GTX 980, I don't see how it would compete with the GTX 680/770; maybe with the 760 at a lower price. Still, a 1280 SP configuration (double the 750 Ti) would be more probable, but the die size seems to disagree, lol.
Posted on Reply
#2
hojnikb
nunomoreira10 said:
If it is indeed half of the GTX 980, I don't see how it would compete with the GTX 680/770; maybe with the 760 at a lower price. Still, a 1280 SP configuration (double the 750 Ti) would be more probable, but the die size seems to disagree, lol.
Yeah, looks like the 960 is gonna be another disappointment from NVIDIA as far as midrange is concerned.
Posted on Reply
#3
blibba
The 980 competes with and beats the 780 despite having a smaller bus width and just over half the shaders.

This should match the 670 and 760 Ti imo, perhaps with a <100W TDP. As such, it'll be ideal for most gamers wanting to play at high settings on 1080p screens (that being most gamers).
Posted on Reply
#4
MxPhenom 216
Corsair Fanboy
You guys are jumping to pure speculation awfully quickly. Maxwell and Kepler are different architectures, and the lossless compression algorithm for the limited memory bandwidth seems to be working well on the 970/980. I'll reserve performance judgement till we see 960 and 960 Ti reviews.
Posted on Reply
#5
Disparia
GTX 960 = probably the best-selling GPU of 2015.
Posted on Reply
#6
newtekie1
Semi-Retired Folder
The way I see it, the GTX 980 is almost exactly double the performance of a GTX 760. So it sounds pretty reasonable that cutting it exactly in half would yield performance similar to a GTX 760 at much lower power consumption.

I mean, yeah, it isn't that exciting to go from a 760 to a 960 and get no performance improvement. But the power difference will be insane. And if NVIDIA decides to ramp up the clocks, because the power consumption allows it, then it may just be possible to get close to 770 performance.

Of course, they could have snuck a few extra shaders in there too. They could be doubling the 750 Ti, which would give it 1280 shaders.

I'm going to wait for the performance numbers, and just as importantly the price, before calling this a disappointment. And remember, they'll likely have a 960 Ti that is carved out of GM204.
Posted on Reply
#7
RealNeil
Show me the reviews...
Posted on Reply
#8
Sony Xperia S
hojnikb said:
Yeah, looks like the 960 is gonna be another disappointment from NVIDIA as far as midrange is concerned.
The first 28nm GPU was launched in 2011; that's three(!) years ago and counting. The worst node change in history. And they still try to sell you crap.

Keep buying, guys!

It's not the one who eats it that's crazy, but the one who gives it to him!

The 960 is so bad that they are afraid even to mention the process node in the articles about it.

Jizzler said:
GTX 960 = probably be the best selling GPU of 2015.
Sure :rolleyes: Because people blindly follow that company. It's like the cult Apple built with their iPhones, which actually have no competitive advantages whatsoever, yet people still blindly throw multiple times more money at them. And this has the potential to be dramatically improved by people's own cost optimisations.

http://www.extremetech.com/computing/123529-nvidia-deeply-unhappy-with-tsmc-claims-22nm-essentially-worthless

Nvidia was unhappy with TSMC 3 years ago.

However, according to the diagram, we had already been in the area of the graph that points to a logical move to a new process back in mid-2014...

Intel is now on 14nm, and GPU makers are still struggling with 28nm products.

What the... is going wrong with these guys? :(
Posted on Reply
#9
RCoon
Gaming Moderator
Sony Xperia S said:
actually have no competitive advantages whatsoever, yet people still blindly throw multiple times more money at them
The 970 had a colossal advantage. It outperformed the AMD cards, it used less power, and it cost so much less that AMD had to axe their prices severely. The 970 went on to score stupendously high in almost every review and became one of the best-selling GPUs of the decade.
So it does have a competitive advantage in literally every area, and it didn't cost multiple times more money. What's more, it was built on 28nm, which just goes to show that GPU makers can produce far more than they have been on the same node. All you have to do is create an amazing architecture.

NVIDIA (like every other company) is not here to make people happy. These people run businesses. Businesses exist to make money by making smart decisions and producing good products. They're not out here to do us a favour, they're trying to sell us a product. They did just that, and if AMD were in the same position, they'd do the same.
Posted on Reply
#10
RealNeil
RCoon said:
They're not out here to do us a favour, they're trying to sell us a product. They did just that, and if AMD were in the same position, they'd do the same.
The AMD 280s and 290s came out at fairly decent prices. Then Bitcoin miners snatched them up in a big way, inflating their prices. We suffered with those artificially high GPU prices for a long time, until the 970 and 980 were released recently.
Posted on Reply
#11
Casecutter
Does anyone know why we're seeing silicon that's rectangular, not square? That doesn't seem logical if you want to use the round wafer efficiently.
Posted on Reply
#12
Sony Xperia S
RCoon said:
The 970 had a colossal advantage. It outperformed the AMD cards, it used less power, and it cost so much less that AMD had to axe their prices severely.
It doesn't outperform the R9 290X, especially at full 4K. It uses a bit less power, but I highly doubt it would matter so much in the end.

Both are at virtually identical prices. And yes, AMD's prices needed to be adjusted accordingly, but that has always happened in both directions.

RCoon said:
The 970 went on to score stupendously high in almost every review and became one of the best-selling GPUs of the decade.
Source or statistical data published somewhere?

RCoon said:
...it was built on 28nm, which just goes to show that GPU makers can produce far more than they have been on the same node. All you have to do is create an amazing architecture.
Don't talk about 28nm anymore. These amazing architectures need to be on a fine next-gen process now.

We are lagging behind in progress.

4K should already be affordable and mainstream.

Casecutter said:
Does anyone know why we're seeing silicon that's rectangular, not square? That doesn't seem logical if you want to use the round wafer efficiently.
Haha, because a square is taken and cut into two slices. :(
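To put rough numbers on the wafer question: a quick grid-placement count suggests a rectangular die wastes barely more of a round wafer than a square die of the same area. The die dimensions below are made up for illustration, and the model ignores scribe lines and edge exclusion:

```python
import math

def gross_dies(wafer_mm, die_w, die_h):
    """Count whole die_w x die_h rectangles that fit entirely inside a
    circular wafer of diameter wafer_mm. Simple grid placement, no
    scribe lines or edge exclusion -- a rough geometric sketch only."""
    r = wafer_mm / 2.0
    n = int(wafer_mm // min(die_w, die_h)) + 2
    count = 0
    for i in range(-n, n):
        for j in range(-n, n):
            x0, y0 = i * die_w, j * die_h
            corners = [(x0, y0), (x0 + die_w, y0),
                       (x0, y0 + die_h), (x0 + die_w, y0 + die_h)]
            # A rectangle lies inside the circle iff all four corners do.
            if all(math.hypot(x, y) <= r for x, y in corners):
                count += 1
    return count

# Same ~228 mm^2 die area on a 300 mm wafer, square vs 2:1 rectangle:
square = gross_dies(300, 15.1, 15.1)
rect = gross_dies(300, 21.3, 10.65)
print(square, rect)  # the two counts come out within a few percent
```

So the aspect ratio costs very little; die shapes are mostly dictated by the chip's floorplan (memory controllers along the edges, shader blocks in the middle), not by wafer utilisation.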
Posted on Reply
#13
hojnikb
Sony Xperia S said:
It doesn't outperform the R9 290X, especially at full 4K. It uses a bit less power, but I highly doubt it would matter so much in the end.
Hahaha, that's the biggest understatement yet.
120 W more is quite a lot.

Posted on Reply
#14
newtekie1
Semi-Retired Folder
Sony Xperia S said:
It doesn't outperform the R9 290X, especially at full 4K. It uses a bit less power, but I highly doubt it would matter so much in the end.
The GTX 970 matches the 290X at full 4K, but it outperforms the 290X at 1440p and 1080p.

As for the power usage, no one seems to want to accept it, but Hawaii consumes more power than Fermi did! And not just a little bit more either; we're talking a good 40-50 W more under load. What really amazes me is that Fermi at least outperformed the competition. Hawaii doesn't, yet I don't really see any hate towards Hawaii, while everyone hated on Fermi. And the difference between Fermi and Cayman was about 40 W; the difference between Hawaii and GM204 is 100 W!

Sony Xperia S said:
Don't speak about 28nm anymore. These amazing architectures need to be on fine next-gen process now.
Talk to the foundries about that; they are the ones holding back progress to smaller nodes.
Posted on Reply
#15
Sony Xperia S
newtekie1 said:
The GTX970 matches the 290X at Full 4K, but it outperforms the 290X at 1440p and 1080p.
Double standard?

"Matches," according to TPU's own review, means 5% more performance for the R9 290X.

At lower resolutions things go the other way, but it is obvious that the R9 290X is still the more capable graphics card.
Posted on Reply
#16
hojnikb
Sony Xperia S said:
Double standard?

"Matches," according to TPU's own review, means 5% more performance for the R9 290X.

At lower resolutions things go the other way, but it is obvious that the R9 290X is still the more capable graphics card.
Just stop being a butthurt fanboy, mkay ?
Posted on Reply
#17
newtekie1
Semi-Retired Folder
Sony Xperia S said:
Double standard?

"Matches," according to TPU's own review, means 5% more performance for the R9 290X.

At lower resolutions things go the other way, but it is obvious that the R9 290X is still the more capable graphics card.
I'm not sure what review you're looking at, but:

(performance summary chart)

That looks like a 3% advantage for the 290X to me, which is completely unnoticeable. And even more, only a 1% difference for the 970 in the review. That is as close to matching performance as you are going to get from two very different cards.

Not to mention the GTX 970 overclocks better than the 290X does. But yeah, keep holding on to that dream that the 290X is the better card...
Posted on Reply
#19
newtekie1
Semi-Retired Folder
Sony Xperia S said:
This:

(performance summary chart)

http://www.techpowerup.com/reviews/Gigabyte/GeForce_GTX_980_G1_Gaming/27.html
Yep, that isn't how you read those graphs. Because the graph is normalized to the GTX 980 G1, the percentage gaps between the other cards seem larger the further away from 100% you get. That is why you have to use a graph normalized to either the GTX 970 or the 290X. A 1 or 2% gap becomes a 4 or 5% gap when you're 20% away from the reference point of the graph.
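A toy illustration of how the chart's 100% reference point changes what the same raw data looks like. The FPS figures below are invented for the example, not TPU's numbers:

```python
def normalize(scores, baseline):
    """Rescale raw scores so that `baseline` reads 100%."""
    return {card: 100.0 * v / scores[baseline] for card, v in scores.items()}

# Invented average-FPS numbers for three cards.
raw = {"GTX 980 G1": 100.0, "R9 290X": 82.0, "GTX 970": 78.0}

vs_980 = normalize(raw, "GTX 980 G1")  # 290X: 82.0, 970: 78.0
vs_970 = normalize(raw, "GTX 970")     # 290X: ~105.1, 980 G1: ~128.2

# The 980-normalized chart shows a 4.0-point gap between 290X and 970;
# renormalized to the 970, the same data reads as a ~5.1% relative gap.
print(vs_980["R9 290X"] - vs_980["GTX 970"])  # 4.0
print(vs_970["R9 290X"] - 100.0)              # ~5.13
```

The point is simply that "X% apart" on a summary chart depends on which card sits at 100%, so gaps read off charts with different baselines aren't directly comparable.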
Posted on Reply
#20
TheHunter
That leaked chart was almost spot on; this regular 128-bit GTX 960 is around GTX 760 performance, or slightly faster.
Posted on Reply
#21
xorbe
Sony Xperia S said:
The first 28nm GPU was launched in 2011, that's 3 !!! years ago and counting. The worst node change in history.
Yeah, 28nm is dragging on, but I'm pretty sure it's been refined along the way, so it's not exactly like the initial 2011 28nm runs.
Posted on Reply
#22
Casecutter
xorbe said:
Yeah, 28nm is dragging on, but I'm pretty sure it's been refined along the way, so it's not exactly like the initial 2011 28nm runs.
God, let's hope so... the initial production was so bad that TSMC halted it at the end of January/February 2012 to fix problems. They tried to hush it up by hiding it in the Chinese New Year slowdown. It took a good 3 weeks to get acceptable production restarted.
Posted on Reply