Tuesday, May 5th 2020

NVIDIA Underestimated AMD's Efficiency Gains from Tapping into TSMC 7nm: Report

A DigiTimes premium report, interpreted by Chiakokhua, aka Retired Engineer, chronicles NVIDIA's move to contract TSMC for 7 nm and 5 nm EUV nodes for GPU manufacturing, and makes a startling revelation about NVIDIA's recent foundry diversification moves. Back in July 2019, a leading Korean publication confirmed NVIDIA's decision to contract Samsung for its next-generation GPU manufacturing. This was a week before AMD launched its first new-generation 7 nm products built on the TSMC N7 node, "Navi" and "Zen 2." The DigiTimes report reveals that NVIDIA underestimated the efficiency gains AMD would yield from TSMC N7.

With NVIDIA's bonhomie with Samsung underway, and Apple transitioning to TSMC N5, AMD moved in to quickly grab 7 nm-class foundry allocation and gained prominence with the Taiwanese foundry. The report also calls out a possible strategic error on NVIDIA's part. Upon realizing the efficiency gains AMD managed, NVIDIA decided to bet on TSMC again (apparently without withdrawing from its partnership with Samsung), only to find that AMD had secured a big chunk of its nodal allocation needed to support its growth in the x86 processor and discrete GPU markets. NVIDIA has hence decided to leapfrog AMD by adapting its next-generation graphics architectures to TSMC's EUV nodes, namely the N7+ and N5. The report also speaks of NVIDIA using its Samsung foundry allocation as a bargaining chip in price negotiations with TSMC, but with limited success as TSMC established its 7 nm-class industry leadership. As it stands now, NVIDIA may manufacture its 7 nm-class and 5 nm-class GPUs on both TSMC and Samsung.
Sources: Chiakokhua (Twitter), DigiTimes

66 Comments on NVIDIA Underestimated AMD's Efficiency Gains from Tapping into TSMC 7nm: Report

#1
watzupken
I think Nvidia made a good choice going for N7 instead of the rumored 10nm. And it certainly does seem like Nvidia underestimated AMD's improvement in power efficiency and performance when it comes to the current RX 5xxx series. This is apparent in how Nvidia has had to repeatedly cut prices/introduce refreshes late in the cycle to keep up with AMD's RDNA-based GPUs. I am not sure if they took the cue from Vega 7's efficiency and figured that AMD's 7nm was not going to be a big threat to them. If the rumored performance improvement for RDNA 2 is true, then it certainly is going to outperform the current 12nm RTX GPUs.
#2
moproblems99
watzupken
I am not sure if they took the cue from Vega 7's efficiency
What efficiency? It has basically the same performance as Navi but uses about 30% more power. Not sure what they were going to get out of that.
#3
GoldenX
This is not the first time this has happened. The same happened with the 8000/9000 series vs the HD4000 series. IIRC, ATI had a node, PCIe revision and memory type advantage at that time.
#4
phanbuey
GoldenX
This is not the first time this has happened. The same happened with the 8000/9000 series vs the HD4000 series. IIRC, ATI had a node, PCIe revision and memory type advantage at that time.
I was about to say... When has nvidia NOT underestimated ATI? That's basically been their MO from day one.
#5
Frick
Fishfaced Nincompoop
phanbuey
I was about to say... When has nvidia NOT underestimated ATI? That's basically been their MO from day one.
Given AMD's performance in high end GPUs (and otherwise) over the last bunch of years, I'd say Nvidia has had them correctly estimated.
#6
nguyen
The 5700 XT is only 25% more efficient than Vega 56, so what is there to underestimate? What probably surprised Nvidia was AMD banking all-in on TSMC 7nm, leaving not enough capacity for Nvidia at the time...
#7
Xex360
ATI, and later AMD, often had a node advantage, or rather they experimented first. That being said, Navi isn't very efficient, or at least it's pushed beyond its efficiency sweet spot to provide more performance compared to nVidia.
#8
R0H1T
Frick
Given AMD's performance in high end GPUs (and otherwise) over the last bunch of years, I'd say Nvidia has had them correctly estimated.
Yeah just like Intel, oh wait o_O
#9
Easo
So basically like Intel, it seems.
#10
dj-electric
NVIDIA is not Intel, since Intel is not in full control, caught in an oh-snap mode where the competition has potent products.
You know that because NVIDIA is able to compete not only well ahead of time, but on a previous node. If you are able to fight an opponent with his hand behind his back, how legitimate is the fight?
#11
phanbuey
Easo
So basically like Intel, it seems.
nah this is different. Intel just sat on the same design for a decade thinking that shrinks would be enough.

Nvidia just genuinely believes that they are the absolute best and no one can beat them ever. It's just the corporate attitude. They don't sit around and wait to get smashed like intel did.
#12
john_
AMD in the last 6-7 years was only targeting performance parity with its competitors, ignoring efficiency. They took whatever they could gain from moving to lower nanometers, but didn't really spend time on architectural improvements, because they didn't have the resources and money for both targets, performance AND efficiency. Now that they have more resources and money, they can start targeting both. Nvidia was probably expecting the 7nm Vega, but not Navi, and they must certainly be feeling a little worried about what RDNA 2 can bring to the table. Another thing changing is that AMD's brand is improving, and that can also sell more AMD GPUs. In the past Nvidia could sell the worst product ever at a higher price thanks to its strong brand against AMD's totally destroyed brand name because of that Bulldozer thing...
#13
mtcn77
AMD is actually commanding developer attention, and let's not dismiss that Nvidia tanks under ray tracing. Oh, and when did it last happen that ATi was quick to migrate to 55nm & GDDR5 both at the same time?
#14
Frick
Fishfaced Nincompoop
R0H1T
Yeah just like Intel, oh wait o_O
Intel definitely estimated AMD correctly until they released Ryzen. phanbuey said Nvidia has always underestimated AMD, which, if true, would mean AMD would have been able to compete across the board, which they have not been able to do.
#15
R0H1T
phanbuey
nah this is different. Intel just sat on the same design for a decade thinking that shrinks would be enough.

Nvidia just genuinely believes that they are the absolute best and no one can beat them ever. It's just the corporate attitude. They don't sit around and wait to get smashed like intel did.
So basically the same thing then? In case you don't remember, Intel was working on CNL & ICL for a good 4-5 years; it's just that their 10nm node development suffered delays even worse than those at 14nm, and yes, even 22nm was delayed.
Frick
Intel definitely estimated AMD correctly until they released Ryzen. phanbuey said Nvidia has always underestimated AMD, which, if true, would mean AMD would have been able to compete across the board, which they have not been able to do.
Aren't you contradicting yourself there? Also, it's not like AMD didn't smash Intel in the heyday of Athlon; heck, ATI had Nvidia beat across multiple generations. So there's precedent on either side of CPU/GPU history.

It's just that AMD was spread too thin over the last decade; if it weren't for Zen they'd literally be bankrupt right now!
#16
Frick
Fishfaced Nincompoop
R0H1T
Aren't you contradicting yourself there? Also, it's not like AMD didn't smash Intel in the heyday of Athlon; heck, ATI had Nvidia beat across multiple generations. So there's precedent on either side of CPU/GPU history.
Am I? How? Unless I misunderstood your first comment.
#17
yeeeeman
What efficiency?
RX5700XT has pretty much the same efficiency as nvidia Turing 12nm parts.
#18
Turmania
Why would they underestimate? Their current gen is more efficient than that of their rival, who is on 7nm and a year newer.
#19
Fourstaff
Turmania
Why would they underestimate? Their current gen is more efficient than that of their rival, who is on 7nm and a year newer.
Every time the performance gap gets closed, competition drives profit margins down. Nvidia was probably expecting a big fat paycheck but has to settle for a smaller one instead.
#20
riklaunim
yeeeeman
What efficiency?
RX5700XT has pretty much the same efficiency as nvidia Turing 12nm parts.
It's relative performance that matters. Nvidia likely knows how much power AMD's designs use and what their own designs use. They compared what TSMC 7nm gave AMD when moving to it against what the Samsung node gave their own design, and likely saw that AMD got more, so if they used TSMC as well they would get a noticeably better product than on Samsung. Plus they likely know more than we do about next-gen and what's being researched. Navi 2 is what will compete with Ampere. Completing the gaming/compute architecture separation could give AMD the edge over Nvidia on a Samsung node, so Nvidia wants some TSMC capacity to still come out ahead in the charts, even if 200W vs 240W vs 260W wouldn't matter that much to the 99% of customers looking at price and performance.

Nvidia saw something that made it worthwhile to contract a second fab, a fab that doesn't even have to offer discounts or preferred access to a customer the size of Nvidia.
#21
watzupken
yeeeeman
What efficiency?
RX5700XT has pretty much the same efficiency as nvidia Turing 12nm parts.
I don't deny we are comparing a 7nm GPU with a 12nm one here. However, if you look at the 7nm Vega 7, it was clearly nowhere near as power efficient as Turing. RDNA and 7nm basically allowed AMD to get within striking range of Turing. If the rumored power efficiency of RDNA 2 holds true, then perhaps they may be competitive with the 7nm Nvidia GPUs. We should get more clarity later this month on next-gen graphics from Nvidia.
moproblems99
What efficiency? It has basically the same performance as Navi but uses about 30% more power. Not sure what they were going to get out of that.
I mentioned efficiency; I did not say efficient. I think you misread/misunderstood what I meant.
#22
renz496
nvidia underestimated AMD? that is not like nvidia at all. if nvidia really did underestimate AMD, they would probably get no 7nm capacity at all, since they would 100% believe they could get away with only using samsung nodes against AMD. if what we hear recently is true, then what nvidia is doing is executing something like a plan B, because AMD was finally able to meet certain nvidia expectations.
#23
watzupken
Turmania
Why would they underestimate? Their current gen is more efficient than that of their rival, who is on 7nm and a year newer.
While it is true that AMD is on a newer 7nm node, which will surely give them an edge, I feel Nvidia was only expecting marginal improvements in power efficiency jumping from Vega to RDNA. AMD's first 7nm GPU, the Vega 7, performs better than the older Vega GPUs, but is clearly still not as power efficient or as good performing as Pascal. Even before AMD's move to 7nm, I am sure Nvidia's decision to move from 16nm to just 12nm for Turing could also have been driven by the fact that they did not expect AMD to catch up. In my opinion, it was a missed opportunity for them to widen the gap. Now, with RDNA 2 and Ampere GPUs slated for release this year, it will be a very interesting year to see if AMD can cause a stir again like they did in the CPU space.
#24
FordGT90Concept
"I go fast!1!11!1!"
NVIDIA underestimated two things:
1) RDNA which drastically improved performance per watt
2) AMD's need for TSMC wafers for CPU and GPU products

If Samsung's 7nm flops, NVIDIA is at risk of falling into second place over the next year or two.
#25
mtcn77
AMD doubled the schedulers, which fixed RDNA performance, because RPM used to be required for load-balanced performance. Now it is a side bonus.
Jumping between warps so efficiently was the underlying Pascal effect. I dunno what is new about Turing.