
NVIDIA Kepler Yields Lower Than Expected.

How is it that nVidia is the only one affected by the hard drive shortage? AMD never said its GPU sales were affected by the shortage.

AMD GPU sales were affected by the shortage, and they probably did say so in their own report:

http://www.anandtech.com/show/5465/...ort-169b-revenue-for-q4-657b-revenue-for-2011

Meanwhile the biggest loser here was the desktop GPU segment, thanks both to a general decrease in desktop sales and the hard drive shortage. Compared to CPU sales desktop GPU sales in particular are being significantly impacted by the hard drive shortage as fewer desktop PCs are being sold and manufacturers cut back on or remove the discrete GPU entirely to offset higher hard drive prices.

Also, while AMD has a bigger market share in laptops, Nvidia has about 60% in desktops, so it's more affected than AMD there. In any case, Nvidia's Q4 results were much better than AMD's Q4, so it's just a matter of explaining why their operating expenses were higher than before.
 
AMD GPU sales were affected by the shortage, and they probably did say so in their own report:

http://www.anandtech.com/show/5465/...ort-169b-revenue-for-q4-657b-revenue-for-2011



Also, while AMD has a bigger market share in laptops, Nvidia has about 60% in desktops, so it's more affected than AMD there. In any case, Nvidia's Q4 results were much better than AMD's Q4, so it's just a matter of explaining why their operating expenses were higher than before.

That, and AMD has way more fab time than NVIDIA. There is a reason NVIDIA was downgraded. NVIDIA saying this is just telling you "Get ready to pay out the ass for our new GPU." Stockholders are not fanboys. They play no favorites.
 
This is basically a message saying "Hey guys, I know you wanted a competitively priced GPU from us, but because our yields are total suck ass, they're going to be expensive as hell. Sorry."

Exactly what I was thinking. Plus, you can add the obvious delay in launching the cards. Fermi all over again! :) ;)
 
And lol, since Nvidia will cost three times more than AMD you should buy four of those, but I bet you don't have the money for that.
Actually I have to agree with this.
Could you please explain this "difference" in colours?
I'm really looking forward to what you will pull out now.

He can't but I can ;).

Nvidia has sacrificed image quality in favor of performance.
Now this goes back a little bit, but back when I was using some 8800s in SLI, when I switched from the 175.19 driver to the 180.xx driver I noticed that my framerate doubled [in BF2142] but all of the colors washed out. At the time I was using a calibrated Dell Trinitron UltraScan monitor, so I immediately noticed the difference in color saturation and overall image quality.
I actually switched back to the 175.19 driver and used it as long as I possibly could. Then I made the switch to ATi and couldn't have been happier. Image quality and color saturation were back, not to mention the 4870 I bought simply SMOKED my SLI setup. :D

EDIT:
Exactly what I was thinking. Plus, you can add the obvious delay in launching the cards. Fermi all over again! :) ;)
Makes me wonder if the same thing that happened when Fermi came out is going to happen again. People waited and waited, then Fermi debuted, was a flop and all of the ATi cards sold out overnight.
 
Yields on big chips, on a new node, are low? really? noooooo....... :rolleyes:

EXACTLY.

Compound this:

AMD has 32 CUs and really only needs slightly more than 28 most of the time. The 7950 is a fine design, and it doesn't really hurt the design if yields are low on the 7970. Tahiti is over-designed, probably for the exact reason mentioned: big chip on a new node. Even if GK104 did have the perfect mix of ROP:shader IPC, the wider bus and (unneeded) bandwidth of the 7950 should make up that performance versus a similar part with a 256-bit bus, because the 7950 is not far off that reality. Point AMD on flexibility to reach a certain performance level.

Again, I think the 'efficient/1080p/GK104-like' 32-ROP design will come with Sea Islands, when 28nm is mature and 1.5V 7Gbps GDDR5 is available... think something similar to a native 7950 with a 256-bit bus at higher clocks. Right now, that chip will be Pitcairn (24 ROPs), because it is smaller and lines up with market realities. Point AMD on being realistic.

nVIDIA appears to have 16 less-granular big units, which is itself a yield problem... like Fermi on a less drastic level, because the die is smaller. If the shader design is 90% performance per clock (2 CUs versus 1 SM) or less versus AMD, every single SM is needed to balance the design. I wager that is either a reality or very close to it, considering 96 SPs, even with realistic use of the SFUs, is not 90% of 128. Yeah, scalar is 100% efficient, but AMD's VLIW4/MIMD designs are not that far off on average. Add that Fermi should need every bit of 5GHz memory bandwidth per 1GHz core clock and 2 SMs (i.e. 32 ROPs/16 SMs/256-bit, 28 ROPs/14 SMs/224-bit) and you don't have any freaking wiggle room at all if your memory controller or core design over- or under-performs.
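
To put rough numbers on the bandwidth side of that argument, here is a back-of-the-envelope sketch; the bus widths and data rates below are the rumored/quoted figures floating around, not confirmed specs:

# rough GB/s = (bus width in bytes) x (effective GDDR5 data rate in Gbps)
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8.0 * gbps

print(bandwidth_gbs(256, 5.0))  # rumored GK104-style 256-bit @ 5 Gbps -> 160.0 GB/s
print(bandwidth_gbs(384, 5.0))  # 7950-style 384-bit @ 5 Gbps -> 240.0 GB/s
print(bandwidth_gbs(384, 5.5))  # 7970-style 384-bit @ 5.5 Gbps -> 264.0 GB/s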

Conclusion:

So if you are nVIDIA, you are sitting with a large die, with big units that are all needed at their maximum level to compete against the salvage design of the competition. As efficient as Fermi can be, yes; smart choices for this point in time... not even close.

Design epic fail.
 
You should calibrate the monitor every time you switch GPUs, before doing any analysis on colour.
If you just plug and forget then you can't really complain about colours.

If you just plug and forget with both cards, you have a reasonable comparison untweaked, and NV looks poorer. Simples.
 
If you just plug and forget with both cards, you have a reasonable comparison untweaked, and NV looks poorer. Simples.

Do you realize it makes no sense to not optimize things? If default is fine for you then okay, be my guest.
 
Do you realize it makes no sense to not optimize things? If default is fine for you then okay, be my guest.
Read again, I never said that. I said if you plug and forget both, that would then be a fair comparison, and NV looks worse... simples.
 
Do you realize it makes no sense to not optimize things?

To me, it makes no sense TO optimize anything. The average user is going to do just that, so while "optimized" systems may be better, most users will do no such thing, just because it's a pain in the butt, or they do not know how.

For a professional, where colour matters, sure, calibration of your tools is 100% needed. But not all PC users use their PCs in a professional context, and most definitely not the gamer-centric market that finds its way onto TPU.


You need to be able to relate the user experience, not the optimal one, unless every user can get the same experience with minimal effort. When that requires educating the consumer, you can forget about it.
 
I understand your point, Dave; still, I think it is a waste not to inform yourself about things and get the best experience you can out of your purchases.


Read again, I never said that. I said if you plug and forget both, that would then be a fair comparison, and NV looks worse... simples.

With all due respect, your sentence makes no sense to me, sorry.
 
AMD does not have "better" colors, it has "more saturated" colors. Oversaturated colors. Several studies have demonstrated that when people are presented two identical images side by side, one natural and the other oversaturated, they tend to prefer the oversaturated one; well, 70% of people do. But the thing is, it's severely oversaturated and the colors are not natural by any means. They are not the colors you can find in real life.

So what is "better"? What is your definition of better? I guess if you belong to the 70% of people whose definition of better is more saturated, then AMD has a more appealing default color scheme. If your definition of better is "closer to reality, more natural", then you'd prefer Nvidia's scheme.

Saying that AMD has better color is like saying that fast food tastes better because they use additives to make it "taste more". I guess people who get addicted to fast food do think it tastes better, but in the end it's just a matter of taste, and so are colors.
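
For what it's worth, "oversaturated" is easy to picture in code. A minimal sketch in plain Python (nothing to do with either vendor's driver, just an illustration of what a saturation boost does to a single pixel):

import colorsys

def boost_saturation(rgb, factor):
    # scale the S channel in HSV space; factor > 1 oversaturates, factor < 1 washes out
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s = min(1.0, s * factor)
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r, g, b))

print(boost_saturation((180, 120, 100), 1.3))  # same hue and brightness, just "more" color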
 
AMD does not have "better" colors, it has "more saturated" colors. Oversaturated colors. Several studies have demonstrated that when people are presented two identical images side by side, one natural and the other oversaturated, they tend to prefer the oversaturated one; well, 70% of people do. But the thing is, it's severely oversaturated and the colors are not natural by any means. They are not the colors you can find in real life.

So what is "better"? What is your definition of better? I guess if you belong to the 70% of people whose definition of better is more saturated, then AMD has a more appealing default color scheme. If your definition of better is "closer to reality, more natural", then you'd prefer Nvidia's scheme.

Saying that AMD has better color is like saying that fast food tastes better because they use additives to make it "taste more". I guess people who get addicted to fast food do think it tastes better, but in the end it's just a matter of taste, and so are colors.

Having used AMD for years and just now using an NVIDIA card, I can say with full confidence that what you just said is BS. They look the same. I didn't even have to recalibrate for process colors.
 
Having used AMD for years and just now using an NVIDIA card, I can say with full confidence that what you just said is BS. They look the same. I didn't even have to recalibrate for process colors.

I agree with you, TheMailMan78; in fact, no one has given us proof to strengthen their argument.
That's why I asked the person who brought up the "colour" argument in the first place.
 
Having used AMD for years and just now using an NVIDIA card, I can say with full confidence that what you just said is BS. They look the same. I didn't even have to recalibrate for process colors.

It was true some years ago at least; I honestly don't know if it's true now, but people still say the same. In any case, my point was that there's no "better" color, just more saturated or less saturated color, and it's all about what you prefer. The one truth is that most of the media we are fed nowadays is oversaturated anyway, so it's just a matter of what degree of oversaturation you really prefer.

And I find it kinda funny that you chose to call BS on my post and not any of the preceding ones. :cool:
 
It was true some years ago at least; I honestly don't know if it's true now, but people still say the same. In any case, my point was that there's no "better" color, just more saturated or less saturated color, and it's all about what you prefer. The one truth is that most of the media we are fed nowadays is oversaturated anyway, so it's just a matter of what degree of oversaturation you really prefer.

And I find it kinda funny that you chose to call BS on my post and not any of the preceding ones. :cool:

I call yours BS because I expect more out of you....;):toast:

Don't sink to it, man.
 
Actually I have to agree with this.


He can't but I can ;).

Nvidia has sacrificed image quality in favor of performance.
Now this goes back a little bit, but back when I was using some 8800s in SLI, when I switched from the 175.19 driver to the 180.xx driver I noticed that my framerate doubled [in BF2142] but all of the colors washed out. At the time I was using a calibrated Dell Trinitron UltraScan monitor, so I immediately noticed the difference in color saturation and overall image quality.
I actually switched back to the 175.19 driver and used it as long as I possibly could. Then I made the switch to ATi and couldn't have been happier. Image quality and color saturation were back, not to mention the 4870 I bought simply SMOKED my SLI setup. :D

EDIT:
Makes me wonder if the same thing that happened when Fermi came out is going to happen again. People waited and waited, then Fermi debuted, was a flop and all of the ATi cards sold out overnight.

Nvidia sacrificed IQ with the 7xxx series; that was it. Still to this day I rag on people who bought 7xxx cards, because it was empty framerates. It's the first time I can recall a new card generation having lower IQ than the previous one. The driver issue you talk about is well behind BOTH companies. Both got into the habit of releasing drivers around card-release time that had IQ errors which increased performance. Namely, I can think of this happening in Crysis 1 around the time the 3870/8800 GT were being compared, but the issue was always corrected in successive driver releases.

Having used AMD for years and just now using an NVIDIA card, I can say with full confidence that what you just said is BS. They look the same. I didn't even have to recalibrate for process colors.

You're doing it wrong. You need screenshots. I've seen this a lot in AA quality comparison shots in reviews as recently as Metro 2033's release. AMD cards are more saturated, at least that recently.
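
If anyone wants to actually measure it instead of eyeballing it, one rough way is to compare the average HSV saturation of two captures of the same frame. A sketch using Pillow; the file names are placeholders and you'd need identical in-game settings on both cards:

import colorsys
from PIL import Image

def mean_saturation(path):
    # average HSV saturation over every pixel of a screenshot
    img = Image.open(path).convert("RGB")
    pixels = list(img.getdata())
    total = sum(colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[1] for r, g, b in pixels)
    return total / len(pixels)

# placeholder file names: capture the same frame on each card at identical settings
print(mean_saturation("amd_frame.png"), mean_saturation("nvidia_frame.png"))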
 
EXACTLY.

Compound this:

AMD has 32 CUs and really only needs slightly more than 28 most of the time. The 7950 is a fine design, and it doesn't really hurt the design if yields are low on the 7970. Tahiti is over-designed, probably for the exact reason mentioned: big chip on a new node. Even if GK104 did have the perfect mix of ROP:shader IPC, the wider bus and (unneeded) bandwidth of the 7950 should make up that performance versus a similar part with a 256-bit bus, because the 7950 is not far off that reality. Point AMD on flexibility to reach a certain performance level.

Again, I think the 'efficient/1080p/GK104-like' 32-ROP design will come with Sea Islands, when 28nm is mature and 1.5V 7Gbps GDDR5 is available... think something similar to a native 7950 with a 256-bit bus at higher clocks. Right now, that chip will be Pitcairn (24 ROPs), because it is smaller and lines up with market realities. Point AMD on being realistic.

nVIDIA appears to have 16 less-granular big units, which is itself a yield problem... like Fermi on a less drastic level, because the die is smaller. If the shader design is 90% performance per clock (2 CUs versus 1 SM) or less versus AMD, every single SM is needed to balance the design. I wager that is either a reality or very close to it, considering 96 SPs, even with realistic use of the SFUs, is not 90% of 128. Yeah, scalar is 100% efficient, but AMD's VLIW4/MIMD designs are not that far off on average. Add that Fermi should need every bit of 5GHz memory bandwidth per 1GHz core clock and 2 SMs (i.e. 32 ROPs/16 SMs/256-bit, 28 ROPs/14 SMs/224-bit) and you don't have any freaking wiggle room at all if your memory controller or core design over- or under-performs.

Conclusion:

So if you are nVIDIA, you are sitting with a large die, with big units that are all needed at their maximum level to compete against the salvage design of the competition. As efficient as Fermi can be, yes; smart choices for this point in time... not even close.

Design epic fail.

Well, Nvidia did drop the hot clocks, which allowed more cores in the GPU, and they will no longer be limited in clocks, since the shaders and the cores will run at the same frequency (before, with hot clocks, they always had scaling issues). They radically changed the Fermi makeup, and it seems like they know what they are doing. As for the GTX 660, I read leaks that it is a 340mm2 chip, compared to the 365mm2 of the HD 7970, and is meant to compete with and come close to the HD 7970, which seems reasonable, though I'm not sure how they will pull off a GTX 680/670 (probably it will be like the GTX 470/480, with disabled hardware).

So while I agree with you overall, Nvidia isn't in such a bad place; only their biggest chip is.
So in the worst case Nvidia will end up with a top-end GPU that is 10-20% slower than AMD's top end, but I doubt that. Even with the 256-bit bus bandwidth that everyone is all crazy about, I don't think it should be a problem in most scenarios, especially considering the fact that most people buying Nvidia don't really do multi-GPU setups, while for AMD it's almost a must for Eyefinity.

Also, I heard leaks that Nvidia was debating whether to call the GK104 the GTX 660 or the GTX 680, since the GK110 was supposed to take that spot but isn't coming anytime soon. So I don't know whether the yield issues forced Nvidia into it, or whether they think GK104 is sufficient. Either way, we need competition already, and for cards with 340mm2 and 365mm2 die sizes they should be well within the $350-400 price range, and that's considering TSMC's 20% more expensive wafer prices.
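
For a rough sense of why die size and yield drive that price range, here is a back-of-the-envelope sketch; the wafer cost and defect density are made-up round numbers for illustration, not real TSMC figures:

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # classic approximation: gross dies on the wafer minus edge loss
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2 - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost_usd, defects_per_cm2):
    # simple Poisson yield model; die area converted from mm^2 to cm^2
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_frac)

# assumed numbers for illustration only: $6000 per 300mm wafer, 0.5 defects/cm^2 on an immature node
for area in (340, 365):
    print(area, "mm^2 ->", round(cost_per_good_die(area, 6000, 0.5)), "USD per good die")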
 
Nvidia sacrificed IQ with the 7xxx series; that was it. Still to this day I rag on people who bought 7xxx cards, because it was empty framerates. It's the first time I can recall a new card generation having lower IQ than the previous one. The driver issue you talk about is well behind BOTH companies. Both got into the habit of releasing drivers around card-release time that had IQ errors which increased performance. Namely, I can think of this happening in Crysis 1 around the time the 3870/8800 GT were being compared, but the issue was always corrected in successive driver releases.



You're doing it wrong. You need screenshots. I've seen this a lot in AA quality comparison shots in reviews as recently as Metro 2033's release. AMD cards are more saturated, at least that recently.


I don't think we were talking about the image quality of 3D engines.
 
I have my 7950, nvidia, so na na na boo boo. Go cry to mommy. We knew yields were low LAST YEAR (for both camps)!

Fantastic card, btw :) It runs much better than the 6950s I had. At 1,175 MHz core so far. Still testing :)
With a non-reference cooler and OCed, it still won't go above the low 60s (°C). The fans are still silent.
 
especially considering the fact that most people buying Nvidia don't really do multi-GPU setups, while for AMD it's almost a must for Eyefinity.

The other way around.
 
The other way around.

Uh, I don't know... can't speak to multi-monitor really, but offhand I know a lot more people running CrossFire than SLI, and pretty much always have (if the opposite is in fact what you were saying).
 
For Nvidia Surround (3 monitors) you need two cards. For AMD Eyefinity you only need one.
 
For Nvidia Surround (3 monitors) you need two cards. For AMD Eyefinity you only need one.

And with Eyefinity you can run up to 6 screens ;)
 