Thursday, February 16th 2012

NVIDIA Kepler Yields Lower Than Expected.

NVIDIA seems to be playing the blame game, according to an article over at Xbit. This is what they had to say: "The chief executive officer of NVIDIA Corp. said that the continuously increasing capital expenditures the company has run into in recent months will be accompanied by lower-than-expected gross margins in the forthcoming quarter. The company blames low yields of the next-generation, code-named Kepler, graphics chips that are made at TSMC’s 28nm node. “Decline [of gross margin] in Q1 is expected to be due to the hard disk drive shortage continuing, as well as a shortage of 28nm wafers. We are ramping our Kepler generation very hard, and we could use more wafers. The gross margin decline is attributable almost entirely to the yields of 28nm being lower than expected. That is, I guess, unsurprising at this point,” said Jen-Hsun Huang, chief executive officer of NVIDIA, during a conference call with financial analysts.

NVIDIA’s operating expenses have been increasing for about a year now: from $329.6 million in Q1 FY2012 to $367.7 million in Q4 FY2012, and the company expects OpEx to be around $383 million in the ongoing Q1 FY2013. At the same time, it expects its gross margin in Q1 FY2013 to decline below 50% for the first time in many quarters, to 49.2%. NVIDIA has very high expectations for its Kepler generation of graphics processing units (GPUs). The company claims to have signed contracts to supply mobile versions of GeForce “Kepler” chips with every single PC OEM in the world. In fact, NVIDIA says Kepler is the best graphics processor ever designed by the company. “[With Kepler, we] won design wins at virtually every single PC OEM in the world. So, this is probably the best GPU we have ever built, and the performance and power efficiency is surely the best that we have ever created,” said Mr. Huang.

Unfortunately for NVIDIA, yields of Kepler are lower than the company originally anticipated, and its costs are therefore high. NVIDIA’s chief executive remains optimistic and claims that the situation during the Fermi ramp-up was even worse. “We use wafer-based pricing now, so when the yield is lower, our cost is higher. We transitioned to wafer-based pricing some time ago, and our expectation, of course, is that the yields will improve as they have on previous-generation nodes, and as the yields improve, our output will increase and our costs will decline,” stated the head of NVIDIA.
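Huang's wafer-based-pricing point is simple arithmetic: the fab charges per wafer, so the effective cost of each working die scales inversely with yield. A minimal sketch, with purely illustrative numbers (not actual NVIDIA or TSMC figures):

```python
def cost_per_good_die(wafer_price, dies_per_wafer, yield_rate):
    """Under wafer-based pricing the fab charges per wafer, so the
    effective cost of each *working* die rises as yield falls."""
    good_dies = dies_per_wafer * yield_rate
    return wafer_price / good_dies

# Hypothetical 28nm wafer at $5,000 with 200 die candidates:
mature = cost_per_good_die(5000, 200, 0.80)   # mature-node yield
early = cost_per_good_die(5000, 200, 0.40)    # early-ramp yield
print(f"mature: ${mature:.2f}/die, early ramp: ${early:.2f}/die")
```

Halving the yield doubles the cost per good die, which is why Huang ties the margin decline directly to 28nm yields.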

Kepler is NVIDIA's next-generation graphics processor architecture, projected to bring considerable performance improvements and likely to make the GPU more flexible in terms of programmability, which should speed up development of applications that take advantage of GPGPU (general-purpose processing on GPU) technologies. Some of the technologies NVIDIA has promised to introduce in Kepler and Maxwell (the architecture that will succeed Kepler) include a virtual memory space (which will allow CPUs and GPUs to use "unified" virtual memory), pre-emption, and an enhanced ability of the GPU to process data autonomously without the help of the CPU. Entry-level chips may not get all the features that the Kepler architecture has to offer."
Source: Xbit Laboratories

75 Comments on NVIDIA Kepler Yields Lower Than Expected.

#1
Benetanegia
mastrdrver said:
How is it that nVidia is the only one affected by the hard drive shortage? AMD never said its GPU sales were affected by the shortage.
AMD GPU sales were affected by the shortage, and they probably did say so in their own report:

http://www.anandtech.com/show/5465/amd-q411-fy-2011-earnings-report-169b-revenue-for-q4-657b-revenue-for-2011
Meanwhile the biggest loser here was the desktop GPU segment, thanks both to a general decrease in desktop sales and the hard drive shortage. Compared to CPU sales desktop GPU sales in particular are being significantly impacted by the hard drive shortage as fewer desktop PCs are being sold and manufacturers cut back on or remove the discrete GPU entirely to offset higher hard drive prices.
Also, while AMD has a bigger market share in laptops, Nvidia has about a 60% share in desktops, so it's more affected than AMD there. In any case, Nvidia's Q4 results were much better than AMD's, so it's just a matter of explaining why their operating expenses were higher than before.
Posted on Reply
#2
TheMailMan78
Big Member
Benetanegia said:
AMD GPU sales were affected by the shortage, and they probably did say so in their own report:

http://www.anandtech.com/show/5465/amd-q411-fy-2011-earnings-report-169b-revenue-for-q4-657b-revenue-for-2011



Also, while AMD has a bigger market share in laptops, Nvidia has about a 60% share in desktops, so it's more affected than AMD there. In any case, Nvidia's Q4 results were much better than AMD's, so it's just a matter of explaining why their operating expenses were higher than before.
That and AMD has way more fab time than NVIDIA. There is a reason NVIDIA was downgraded. NVIDIA saying this is just telling you "Get ready to pay out the ass for our new GPU." Stockholders are not fanboys. They play no favorites.
Posted on Reply
#3
Prima.Vera
BlackOmega said:
This is basically a message saying "Hey guys, I know you wanted a competitively priced GPU from us, but because our yields are total suck ass, they're going to be expensive as hell. Sorry."
Exactly what I was thinking. Plus, you can add the obvious delay in launching the cards. Fermi all over again! :) ;)
Posted on Reply
#4
BlackOmega
omegared26 said:
And loool since nvidia will cost 3xtimes more than amd u should buy 4 of those but i bet u dont have money for that
Actually I have to agree with this.
radrok said:
Could you please explain this "difference" in colours?
I'm really looking forward to what you will pull out now.
He can't but I can ;).

Nvidia has sacrificed image quality in favor of performance.
Now this goes back a little bit, but back when I was using some 8800s in SLI, when I switched from the 175.19 driver to the 180.xx driver I noticed that my framerate doubled [in BF2142] but all of the colors washed out. At the time I was using a calibrated Dell Trinitron UltraScan monitor, so I immediately noticed the difference in color saturation and overall image quality.
I actually switched back to the 175.19 driver and used it as long as I possibly could. Then I made the switch to ATi and couldn't have been happier. Image quality and color saturation were back, not to mention the 4870 I bought simply SMOKED my SLI getup. :D

EDIT:
Prima.Vera said:
Exactly what I was thinking. Plus, you can add the obvious delay in launching the cards. Fermi all over again! :) ;)
Makes me wonder if the same thing that happened when Fermi came out is going to happen again. People waited and waited, then Fermi debuted, was a flop and all of the ATi cards sold out overnight.
Posted on Reply
#5
alwayssts
wolf said:
Yields on big chips, on a new node, are low? really? noooooo....... :rolleyes:
EXACTLY.

Compound this:

AMD has 32 CUs and really only needs slightly more than 28 most of the time. The 7950 is a fine design, and it doesn't really hurt the design if yields are low on the 7970. Tahiti is over-designed, probably for the exact reason mentioned: big chip on a new node. Even if GK104 did have the perfect mix of ROP:shader IPC, the wider bus and (unneeded) bandwidth of the 7950 should make up that performance versus a similar part with a 256-bit bus, because the 7950 is not far off that reality. Point AMD on flexibility to reach a certain performance level.

Again, I think the 'efficient/1080p/gk104-like' 32 ROP design will come with Sea Islands, when 28nm is mature and 1.5V 7Gbps GDDR5 is available...think something similar to a native 7950 with a 256-bit bus using higher clocks. Right now, that chip will be Pitcairn (24 ROPs), because it is smaller and lines up with market realities. Point AMD on being realistic.

nVIDIA appears to have 16 less-granular big units, which is itself a yield problem...like Fermi on a less drastic level, because the die is smaller. If the shader design is 90% ppc (2 CU versus 1 SM) or less versus AMD, every single SM is needed to balance the design. I wager that is either a reality or very close to it, considering 96sp, even with realistic use of the SFU, is not 90% of 128. Yeah, scalar is 100% efficient, but AMD's VLIW4/MIMD designs are not that far off on average. Add that Fermi should need every bit of 5GHz memory bandwidth per 1GHz core clock and 2 SM (i.e. 32 ROP/16 SM/256-bit, 28 ROP/14 SM/224-bit) and you don't have any freaking wiggle room at all if your memory controller/core design over- or under-performs.
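The 96-vs-128 shader arithmetic above is easy to sanity-check; note the unit counts are the commenter's hypotheticals, not confirmed Kepler specs:

```python
# The argument: a hypothetical 96-SP NVIDIA SM would need ~90% of the
# per-clock throughput of a 128-SP pairing of AMD CUs (2 x 64 SP) for the
# design to stay balanced. Raw shader count alone falls short of that.
nv_sp_per_sm = 96          # hypothetical shader count per SM
amd_sp_per_2cu = 128       # two 64-SP CUs
ratio = nv_sp_per_sm / amd_sp_per_2cu
print(f"{ratio:.0%}")      # 75% — short of the 90% the commenter says is needed
```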

Conclusion:

So if you are nVIDIA, you are sitting with a large die, with big units that are all needed at their maximum level to compete against the salvage design of the competition. Efficient as Fermi can be, yes; smart choices for this point in time...not even close.

Design epic fail.
Posted on Reply
#6
theoneandonlymrk
radrok said:
You should calibrate the monitor every time you switch GPU before doing any analysis on colour.
If you just plug and forget then you can't really complain about colours.
If you just plug and forget with both cards, you have a reasonable comparison untweaked, and NV looks poorer, simples.
Posted on Reply
#7
radrok
theoneandonlymrk said:
If you just plug and forget with both cards, you have a reasonable comparison untweaked, and NV looks poorer, simples.
Do you realize it makes no sense to not optimize things? If default is fine for you then okay, be my guest.
Posted on Reply
#8
theoneandonlymrk
radrok said:
Do you realize it makes no sense to not optimize things? If default is fine for you then okay, be my guest.
Read again, I never said that. I said if you plug and forget with both, that would then be a fair comparison, and NV looks worse... simples
Posted on Reply
#9
cadaveca
My name is Dave
radrok said:
Do you realize it makes no sense to not optimize things?
To me, it makes no sense TO optimize anything. The average user is going to plug and forget, so while "optimized" systems may be better, most users will do no such thing, just because it's a pain in the butt, or they do not know how.

For a professional, where colour matters, sure, calibration of your tools is 100% needed. But not all PC users use their PCs in a professional context, and most definitely not the gamer-centric market that find their way on to TPU.


You need to be able to relate to the user experience, not the optimal one, unless every user can get the same experience with minimal effort. When that requires educating the consumer, you can forget about it.
Posted on Reply
#10
radrok
I understand your point, Dave; still, I think it is a waste to not inform yourself and get the best experience you can out of your purchases.


theoneandonlymrk said:
Read again, I never said that. I said if you plug and forget with both, that would then be a fair comparison, and NV looks worse... simples
With all due respect, your sentence makes no sense to me, sorry.
Posted on Reply
#11
Benetanegia
AMD does not have "better" colors, it has "more saturated" colors. Oversaturated colors. Several studies have demonstrated that when people are presented two identical images side by side, one natural and the other oversaturated, they tend to prefer the oversaturated one; well, 70% of people do. But the thing is, it's severely oversaturated, and the colors are not natural by any means. They are not the colors you can find in real life.

So what is "better"? What is your definition of better? If you belong to the 70% of people whose definition of better is more saturated, then I guess AMD has a more appealing default color scheme. If your definition of better is "closer to reality, more natural," then you'd prefer Nvidia's scheme.

Saying that AMD has better color is like saying that fast food tastes better because they use additives to make it "taste more". I guess people who get addicted to fast food do think it tastes better, but in the end it's just a matter of taste, and so are colors.
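For anyone wondering what "oversaturated" means concretely: saturation can be modeled as the S channel in HSV color space, and boosting it leaves hue and brightness alone while pushing colors away from gray. A toy sketch using Python's standard `colorsys` module (the 1.5x factor is an arbitrary illustration, not either vendor's actual behavior):

```python
import colorsys

def saturate(rgb, factor):
    """Scale the HSV saturation of an (r, g, b) color with channels
    in [0, 1], clamping at full saturation."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, min(1.0, s * factor), v)

muted = (0.8, 0.5, 0.5)       # a dull red
vivid = saturate(muted, 1.5)  # same hue and brightness, more vivid
print(vivid)
```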
Posted on Reply
#12
TheMailMan78
Big Member
Benetanegia said:
AMD does not have "better" colors, it has "more saturated" colors. Oversaturated colors. Several studies have demonstrated that when people are presented two identical images side by side, one natural and the other oversaturated, they tend to prefer the oversaturated one; well, 70% of people do. But the thing is, it's severely oversaturated, and the colors are not natural by any means. They are not the colors you can find in real life.

So what is "better"? What is your definition of better? If you belong to the 70% of people whose definition of better is more saturated, then I guess AMD has a more appealing default color scheme. If your definition of better is "closer to reality, more natural," then you'd prefer Nvidia's scheme.

Saying that AMD has better color is like saying that fast food tastes better because they use additives to make it "taste more". I guess people who get addicted to fast food do think it tastes better, but in the end it's just a matter of taste, and so are colors.
Having used AMD for years and just now using an NVIDIA card, I can say with full confidence what you just said is BS. They look the same. I didn't even have to recalibrate for process colors.
Posted on Reply
#13
radrok
TheMailMan78 said:
Having used AMD for years and just now using an NVIDIA card, I can say with full confidence what you just said is BS. They look the same. I didn't even have to recalibrate for process colors.
I agree with you TheMailMan78, in fact no one has given us proof to strengthen their argument.
That's why I asked the person who brought the "colour" argument in the first place.
Posted on Reply
#14
Benetanegia
TheMailMan78 said:
Having used AMD for years and just now using an NVIDIA card, I can say with full confidence what you just said is BS. They look the same. I didn't even have to recalibrate for process colors.
It was true some years ago at least; I honestly don't know if it's true now, but people still say the same. In any case, my point was that there's no "better" color, just more saturated or less saturated color, and it's all about what you prefer. The one truth is that most of the media we are fed nowadays is oversaturated anyway, so it's just a matter of what degree of oversaturation you really prefer.

And I find it kinda funny that you chose to call BS on my post and not any of the preceding ones. :cool:
Posted on Reply
#15
TheMailMan78
Big Member
Benetanegia said:
It was true some years ago at least; I honestly don't know if it's true now, but people still say the same. In any case, my point was that there's no "better" color, just more saturated or less saturated color, and it's all about what you prefer. The one truth is that most of the media we are fed nowadays is oversaturated anyway, so it's just a matter of what degree of oversaturation you really prefer.

And I find it kinda funny that you chose to call BS on my post and not any of the preceding ones. :cool:
I call yours BS because I expect more out of you....;):toast:

Don't sink to it, man.
Posted on Reply
#16
pr0n Inspector
TheMailMan78 said:
I call yours BS because I expect more out of you....;):toast:

Don't sink to it, man.
There used to be a 16-235 vs. 0-255 levels issue. But that was dealt with long ago, and it was not the video card's job anyway.
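The 16-235 vs. 0-255 issue mentioned here is the limited ("video") vs. full ("PC") levels mismatch: limited-range content maps black to level 16 and white to 235, so displaying it unexpanded looks washed out. A sketch of the expansion; the clamping and rounding policy here is an assumption:

```python
def limited_to_full(y):
    """Map a limited-range (16-235) video level to full-range (0-255),
    clamping inputs outside the nominal limited range."""
    y = min(max(y, 16), 235)
    return round((y - 16) * 255 / 219)

print(limited_to_full(16), limited_to_full(235))  # 0 255
```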
Posted on Reply
#17
LAN_deRf_HA
BlackOmega said:
Actually I have to agree with this.


He can't but I can ;).

Nvidia has sacrificed image quality in favor of performance.
Now this goes back a little bit, but back when I was using some 8800s in SLI, when I switched from the 175.19 driver to the 180.xx driver I noticed that my framerate doubled [in BF2142] but all of the colors washed out. At the time I was using a calibrated Dell Trinitron UltraScan monitor, so I immediately noticed the difference in color saturation and overall image quality.
I actually switched back to the 175.19 driver and used it as long as I possibly could. Then I made the switch to ATi and couldn't have been happier. Image quality and color saturation were back, not to mention the 4870 I bought simply SMOKED my SLI getup. :D

EDIT:
Makes me wonder if the same thing that happened when Fermi came out is going to happen again. People waited and waited, then Fermi debuted, was a flop and all of the ATi cards sold out overnight.
Nvidia sacrificed IQ with the 7xxx series; that was it. Still to this day I rag on people who bought 7xxx cards, because it was empty framerates. It's the first time I can recall a new card gen having lower IQ than the previous one. The driver issue you talk about is well behind BOTH companies. Both got into the habit of releasing drivers around card-release time that had IQ errors that increased performance; namely, I can think of this happening in Crysis 1 around the time the 3870/8800 GT were being compared, but the issue was always corrected in successive driver releases.

TheMailMan78 said:
Having used AMD for years and just now using an NVIDIA card, I can say with full confidence what you just said is BS. They look the same. I didn't even have to recalibrate for process colors.
You're doing it wrong. You need screenshots. I've seen this a lot in AA quality comparison shots in reviews, as recently as Metro 2033's release. AMD cards are more saturated, at least as recently as that.
Posted on Reply
#18
sergionography
alwayssts said:
EXACTLY.

Compound this:

AMD has 32 CUs and really only needs slightly more than 28 most of the time. The 7950 is a fine design, and it doesn't really hurt the design if yields are low on the 7970. Tahiti is over-designed, probably for the exact reason mentioned: big chip on a new node. Even if GK104 did have the perfect mix of ROP:shader IPC, the wider bus and (unneeded) bandwidth of the 7950 should make up that performance versus a similar part with a 256-bit bus, because the 7950 is not far off that reality. Point AMD on flexibility to reach a certain performance level.

Again, I think the 'efficient/1080p/gk104-like' 32 ROP design will come with Sea Islands, when 28nm is mature and 1.5V 7Gbps GDDR5 is available...think something similar to a native 7950 with a 256-bit bus using higher clocks. Right now, that chip will be Pitcairn (24 ROPs), because it is smaller and lines up with market realities. Point AMD on being realistic.

nVIDIA appears to have 16 less-granular big units, which is itself a yield problem...like Fermi on a less drastic level, because the die is smaller. If the shader design is 90% ppc (2 CU versus 1 SM) or less versus AMD, every single SM is needed to balance the design. I wager that is either a reality or very close to it, considering 96sp, even with realistic use of the SFU, is not 90% of 128. Yeah, scalar is 100% efficient, but AMD's VLIW4/MIMD designs are not that far off on average. Add that Fermi should need every bit of 5GHz memory bandwidth per 1GHz core clock and 2 SM (i.e. 32 ROP/16 SM/256-bit, 28 ROP/14 SM/224-bit) and you don't have any freaking wiggle room at all if your memory controller/core design over- or under-performs.

Conclusion:

So if you are nVIDIA, you are sitting with a large die, with big units that are all needed at their maximum level to compete against the salvage design of the competition. Efficient as Fermi can be, yes; smart choices for this point in time...not even close.

Design epic fail.
Well, Nvidia did drop the hot clocks, which allowed more cores in the GPU; they'll no longer be limited in clocks, since the shaders and the cores will run at the same frequency (before, with hot clocks, they always had scaling issues). They radically changed the Fermi makeup, and it seems like they know what they are doing. As for the GTX 660, I read leaks that it was a 340mm2 chip compared to the 365mm2 of the HD 7970 and is meant to compete with and come close to the HD 7970, which seems reasonable, though I'm not sure how they will pull off a GTX 680/670 (probably like the GTX 470/480, with disabled hardware).

So while I agree with you, overall Nvidia isn't in such a bad place; only their biggest chip is.
In the worst case, Nvidia will end up with a top-end GPU that is 10-20% slower than AMD's top end, but I doubt that. Even with the 256-bit bus that everyone is all crazy about, I don't think it should be a problem in most scenarios, especially considering that most people buying Nvidia don't really do multiple-GPU setups, while for AMD it's almost a must for Eyefinity.

Also, I heard leaks that Nvidia was debating whether to call the GK104 the GTX 660 or the GTX 680, when the GK110 was supposed to fill that role but isn't coming anytime soon. I don't know whether the yield issues forced Nvidia's hand or whether they think the GK104 is sufficient. Either way, we need competition already, and cards with 340mm2 and 365mm2 die sizes should be well within the $350-400 price range, even considering TSMC's 20% more expensive wafer prices.
Posted on Reply
#19
pr0n Inspector
LAN_deRf_HA said:
Nvidia sacrificed IQ with the 7xxx series; that was it. Still to this day I rag on people who bought 7xxx cards, because it was empty framerates. It's the first time I can recall a new card gen having lower IQ than the previous one. The driver issue you talk about is well behind BOTH companies. Both got into the habit of releasing drivers around card-release time that had IQ errors that increased performance; namely, I can think of this happening in Crysis 1 around the time the 3870/8800 GT were being compared, but the issue was always corrected in successive driver releases.



You're doing it wrong. You need screenshots. I've seen this a lot in AA quality comparison shots in reviews, as recently as Metro 2033's release. AMD cards are more saturated, at least as recently as that.
I don't think we were talking about the image quality of 3D engines.
Posted on Reply
#20
TheGuruStud
I have my 7950, nvidia, so na na na boo boo. Go cry to mommy. We knew yields were low LAST YEAR (for both camps)!

Fantastic card, btw :) Runs much better than the 6950s I had. At 1,175MHz core so far. Still testing :)
With a non-reference cooler, even OCed it still won't go above the low 60s (°C). The fans are still silent.
Posted on Reply
#21
Inceptor
sergionography said:
especially considering that most people buying Nvidia don't really do multiple-GPU setups, while for AMD it's almost a must for Eyefinity.
The other way around.
Posted on Reply
#22
Wrigleyvillain
PTFO or GTFO
Inceptor said:
The other way around.
Uh idk...can't speak to multi-monitor really but offhand I know a lot more people running Crossfire than SLI and always have pretty much (if the opposite is in fact what you were saying).
Posted on Reply
#23
erocker
Senior Moderator
For Nvidia Surround (3 monitors) you need two cards. For AMD Eyefinity you only need one.
Posted on Reply
#24
m1dg3t
erocker said:
For Nvidia Surround (3 monitors) you need two cards. For AMD Eyefinity you only need one.
And with Eyefinity you can run up to 6 screens ;)
Posted on Reply
#25
sergionography
erocker said:
For Nvidia Surround (3 monitors) you need two cards. For AMD Eyefinity you only need one.
Which is why AMD graphics cards are more bandwidth-hungry than Nvidia's, and explains why Nvidia is releasing a 256-bit card to compete with AMD's HD 7970.

Inceptor said:
The other way around.
For multi-monitor on Nvidia you have to SLI, while for Eyefinity you can use one AMD card; that's what I was referring to. In other words, the HD 7970 needs that extra bandwidth more than the GK104.
Posted on Reply