
NVIDIA Reportedly Moving Ampere to 7 nm TSMC in 2021

What sort of knob would buy a 3080 for 1080p gaming? It's a 1440p-ultra or 4K card all the way. 1080p will be handled with ease by the 3060 for $300 less.

Sometimes people don't upgrade their monitors until they have the graphics card to do so... that's what I would do honestly.
 
I wonder what pixie dust you're on lately but 30%? Lmao. Don't get too enthusiastic now

You have a habit of quoting my posts, I feel flattered by your attention.

Yes, 30%, Einstein. It doesn't take a physicist to work out that this is the ballpark we'd land in if the 3080 was on TSMC instead of Samsung's poor 8nm. In fact, rumour has it there were engineering samples of Ampere cards using TSMC's process node. I looked at some of the figures and let's just put it this way: they weren't worse than what we ended up getting from Nvidia in the end.
 
I swear to God, I wasn't expecting this much amusement when I started pecking at Samsung the potato.
For those of us looking at these cards, this Samsung 8nm process is making me a bit nervous compared to TSMC 7nm...
Where could we have seen this? 3... 2... 1...
You guys act like the process node is going to make a difference vs AMD's historic incompetence with energy consumption. I doubt it will, frankly.
Thanks for the company, dude. I couldn't live with myself without some juicy green team red team beef, although I got no 'steak' in the discussion.
I don't post these things just because I don't like Samsung; it's to spite you wonderful gentlemen with incendiary comments. Because it's not about Samsung's process, but about irrational teamster preferences.
I'd better take my opinion elsewhere... nobody here can see without their colored goggles.
No, but watts are.
Funny you would go there. Samsung isn't your bread and butter foundry. Expect a few hiccups.

If we continue, I'll have to resort to Mortal Kombat jokes;
- "You are still trying to win?"
Let's not go there pal, I've never missed my common troll courtesy.
You assume I am trolling. I'm not. You'll end up playing with yourself.

I just can't see the watts on Samsung 8nm being worse than Turing, which is about what it'd need to be for this to matter, I think.
Hmm, never thought of it that way. It is quite an uncommon way of thinking. Turing is neither the competition nor made at the same foundry, but hold on to your beliefs, I guess...
 
The only same-product, different-process jump I can think of was Vega64 > Radeon VII and that was about 30% faster for the same power draw (or you could save 30% power to match the Vega64's performance).

It's not a single apples-to-apples easy number, as this was GloFo to TSMC, 8GB to 16GB, and 64 CU vs 60 CU, but it's probably the closest comparison of 14nm to 7nm there is, because at least the class of product and architecture remain completely unchanged (both ~300W, 60-ish CU, HBM2, GCN 5 designs running on identical drivers and software stack, both available to the market simultaneously).

30% may not be accurate but it's kind of in line with TSMC's claims about their 7nm node - when it first launched TSMC were saying that it could go 30% faster at the same power budget or up to 50% lower power at the same performance levels. "Up to" comes with all the usual marketing-speak caveats, ofc.
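Those "up to" claims are easy to turn into rough numbers. A minimal Python sketch of the two scenarios; the 320 W baseline and the performance normalised to 100 are made-up example figures, and the 30%/50% multipliers are just TSMC's quoted marketing claims, not measurements:

```python
def tsmc_7nm_claims(base_perf, base_power_w):
    """Apply TSMC's launch-era 7nm marketing claims to a baseline part.

    Claim 1: up to 30% more performance at the same power budget.
    Claim 2: up to 50% lower power at the same performance.
    Returns the two hypothetical outcomes as (perf, power_w) tuples.
    """
    iso_power = (base_perf * 1.30, base_power_w)   # same watts, more perf
    iso_perf = (base_perf, base_power_w * 0.50)    # same perf, fewer watts
    return iso_power, iso_perf

# Hypothetical 320 W card with performance normalised to 100
iso_power, iso_perf = tsmc_7nm_claims(100, 320)
print(iso_power)  # ~30% faster at the same 320 W
print(iso_perf)   # same speed at half the power
```

The iso-power case is exactly the 30% ballpark being argued about here; whether a real GA102 port would hit the "up to" numbers is anyone's guess.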

The Radeon VII also lost about 25% of its die size... do you think this is the sort of difference between Samsung 8nm and TSMC 7nm?

Maybe I missed something, it's an honest question... but I thought the gap from 14nm to 7nm was pretty substantial.

You have a habit of quoting my posts, I feel flattered by your attention.

Yes, 30%, Einstein. It doesn't take a physicist to work out that this is the ballpark we'd land in if the 3080 was on TSMC instead of Samsung's poor 8nm. In fact, rumour has it there were engineering samples of Ampere cards using TSMC's process node. I looked at some of the figures and let's just put it this way: they weren't worse than what we ended up getting from Nvidia in the end.

Please share them, I'm genuinely interested.

Sometimes people don't upgrade their monitors until they have the graphics card to do so... that's what I would do honestly.

Exactly. The other way around is utterly shite.
 
With the current backlog of RTX 3080 orders going into 2021, and AMD's backlogs for the PS5, Xbox Series X, and RDNA 2 on 7nm, I don't see NVIDIA changing to 7nm anywhere in the first half of 2021. They might not at all, as a 3080 with 20GB of VRAM is already going to be released in December with the same core count. That is your 3080 Super. NVIDIA went Samsung because TSMC can't meet demand, and I call bullshit on this "leak". Overclocking the 3080 gets you within 5 fps of a stock 3090, which is only 10% faster than the 3080. The 20GB VRAM card is the Super; the Ti, if there is one, will be 5% faster.
 

Why are you providing an excuse for NVIDIA's low supply? They went Samsung because they tried to play their usual game of getting Samsung and TSMC to compete with each other on price, but that strategy failed miserably. AMD bought up all of Apple's 7nm orders once Apple moved on to 5nm, and they bought up all the stock left over from the Huawei ban. Plus, they are first in line for any additional 7nm capacity that becomes available. And from reports, the same trend will follow with 5nm, as they are second only to Apple there. So it's not because TSMC could not supply the chips; it's because NVIDIA squandered their opportunity to secure orders by playing games, not doing their due diligence and analyzing a market with a resurgent AMD who has tons of product to release on 7nm. It's all NVIDIA's fault and no one else's. No excuses.
 
I think I hit the nail on the head a while back. Nvidia jumped the gun too early.
Bye, bye, don't let the door hit ya on the way out, Huang.
Samsung's gate-all-around might lower parasitic capacitance. That might matter for the effective TDP, since big chips leak more.
 
The Radeon VII also lost about 25% of its die size... do you think this is the sort of difference between Samsung 8nm and TSMC 7nm?

Maybe I missed something, it's an honest question... but I thought the gap from 14nm to 7nm was pretty substantial.

TBH I haven't looked into the transistor density of Samsung vs TSMC. You can't really compare that too closely; it was revealed that Intel's 10nm is pretty damn close to TSMC's 7nm, but I haven't seen a similar article comparing Samsung's 8nm yet. One thing's for sure though: the process node name (in nm) bears very little resemblance to the actual transistor density; it's just one dimension of several that drive it. Radeon VII was just a die shrink (it still had Vega's 64 CUs, they just disabled 4 to improve yields), so if you consider that 14nm to 7nm implies a 75% area reduction (a 50% linear reduction), the fact they only got a 25% area reduction from the process shrink demonstrates how pointless and inaccurate the process node name is.
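The area arithmetic here can be sketched in a few lines of Python. The ~25% shrink for Radeon VII is the estimate quoted in this thread, not measured die data, and the scaling function just takes the node names at face value:

```python
def implied_area_scaling(old_node_nm, new_node_nm):
    """Area scaling the node *names* would imply if they were literal
    linear feature sizes: area goes with the square of the linear dim."""
    linear = new_node_nm / old_node_nm   # e.g. 7 / 14 = 0.5
    return linear ** 2                   # e.g. 0.25, i.e. 75% area saved

# What the names "14nm -> 7nm" would suggest: 25% of the original area
ideal = implied_area_scaling(14, 7)
print(f"implied area: {ideal:.0%} of original")

# ...versus the ~25% area reduction attributed to Radeon VII,
# i.e. the die only shrank to roughly 75% of Vega 64's area
observed = 0.75
print(f"claimed actual area: {observed:.0%} of original")
```

The gap between 25% and 75% of the original area is the point: the nanometre in the name is marketing, not geometry.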

What matters far more in the discussion about switching from Samsung to TSMC isn't really the name of the node but how efficient it is.

Samsung's 8nm let Nvidia build GA102 with 50% more transistors than TU102, but it used 50% more power too, making it roughly on par with TSMC's 2017 12FFN process, almost four years behind the curve. Not exactly something you want in your flagship product.
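That framing, 50% more transistors for 50% more power, implies essentially no per-transistor power improvement from the node, which is the complaint. A quick sketch using only the ratios quoted here (no real die measurements):

```python
def power_per_transistor_ratio(transistor_growth, power_growth):
    """Relative power per transistor of the new chip versus the old one.

    A value of 1.0 means the node delivered no per-transistor
    efficiency gain at all; below 1.0 means it got more efficient.
    """
    return (1 + power_growth) / (1 + transistor_growth)

# The framing above: GA102 packs ~50% more transistors than TU102
# and draws ~50% more power
ratio = power_per_transistor_ratio(0.50, 0.50)
print(f"power per transistor vs the old chip: {ratio:.2f}x")  # 1.00x
```

By contrast, a node that doubled transistors for only 50% more power would come out at 0.75x, a real efficiency gain.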

I'm not 100% sure which of TSMC's nodes RDNA2 and Zen3 are being made on. N7P (or LP) is the new normal 7nm from TSMC - it's a tweaked process that clocks a bit higher than the original N7 process that Radeon VII and Zen2 launched on, but it's the same size, density and of comparable efficiency. N7FF+ (often written as 7nm+) is the EUV lithography that comes with a ~20% transistor density improvement and supposedly a 4.5% efficiency increase going by TSMC's own figures.
 
***

...Given that RDNA1 was already on par with Turing, which is essentially the same in PPW as Ampere, NOT!

Navi is actually slightly more power efficient if you compare the 5700 to the 2060 / 2070, the cards which it was competing against.

 
I swear to God, I wasn't expecting this much amusement when I started pecking at Samsung the potato.

You do realize my points still stand, right?

AMD has yet to launch, and it has yet to be worse than Turing. You are again, playing with yourself. Have fun with that.
 
How could it not be, I pull repositories from 2 years back, but your 2 month old frameset is still not challenged... oh, I be getting ya loud and clear!
 
Not going to believe this. How much of an improvement would the TSMC 7nm process bring? It's not like the cards will become super fast or clock high; at most it will lower the power consumption. Everyone knows this boost-clock behaviour has been around since Pascal, so I'm thinking the TSMC shift won't magically make the GPUs boost to higher clocks. Look at Ryzen 5000: it's on 7nm EUV (7NP), it didn't change the clock speed, and we know Ryzen's 7nm has been pushed to the max out of the box, plus it has that boosting behaviour where the clocks change like on GPUs. And that VRM component disaster on the AIB Ampere cards is not at all because of the 8nm node, but rather the BOM.
That's wrong.
Zen 3 is on the same 7nm DUV node that Zen 2 is, not 7NP (which is also an enhanced DUV node).

Clocks have gone up a little (wait to see all-core clocks), and that came from the refined architecture and maybe the more mature 7nm DUV node. Performance per watt has gone up for the same reasons, but not from 7NP DUV, because the 5000 series is not on 7NP DUV.
RDNA2 GPUs are on 7NP DUV. No 7nm+ EUV for any AMD products.

That was an honest mistake I want to believe...

And yes, the only thing Samsung's 8nm hurt on Ampere was performance per watt. The clock crashes were/are probably due to a lack of time to refine the vBIOS and drivers and to test properly... so AIBs made some rushed moves, with the combination of all of the above plus the selected power-delivery components.
 
Why are you providing an excuse for NVIDIA's low supply? They went Samsung because they tried to play their usual game of getting Samsung and TSMC to compete with each other on price, but that strategy failed miserably. AMD bought up all of Apple's 7nm orders once Apple moved on to 5nm, and they bought up all the stock left over from the Huawei ban. Plus, they are first in line for any additional 7nm capacity that becomes available. And from reports, the same trend will follow with 5nm, as they are second only to Apple there. So it's not because TSMC could not supply the chips; it's because NVIDIA squandered their opportunity to secure orders by playing games, not doing their due diligence and analyzing a market with a resurgent AMD who has tons of product to release on 7nm. It's all NVIDIA's fault and no one else's. No excuses.

Personally, I don't think Nvidia really played games with TSMC to get the price lowered, because realistically TSMC 7nm has very high demand, to the point that some chip makers have to book their orders a year in advance. If Nvidia tried to play games, like threatening to leave TSMC unless the price was lowered, there would always be others to fill the gap; it's not a game Nvidia can win anyway. Second, the reality is that TSMC is overcrowded, and because of that, 7nm wafers still ended up very expensive in 2020 despite the node being mature. Nvidia has been expressing their displeasure about TSMC's capacity as far back as 2012; that is when they started looking at Samsung as an alternative to TSMC. Back then, Jensen even said Intel should open their fabs to others. In 2016, Samsung made a GPU for Nvidia for the first time, with the GP107 chip. One way or another, an alternative fab must exist for Nvidia. And TSMC has had its own fair share of issues before: 40nm had terrible yields, 32nm was cancelled, and 20nm was not suitable for high-performance chips, forcing Nvidia and AMD to stick with 28nm for another two years until 16nm. There is no guarantee that an upcoming shrink won't have its own problems. The most important thing for Nvidia is that they need a process for their high-performance chips; if Nvidia did not work seriously with Samsung, the alternative would never exist.

And about 7nm capacity: as I said in my other post, even if AMD has some priority over others at TSMC, they cannot take everything for themselves. Remember, Nvidia is not the only one that needs 7nm capacity; there are others. TSMC cannot burn the bridges they have with other chip makers just to satisfy AMD alone. And the news piece from DigiTimes (which is the basis of this article) says TSMC is becoming more "friendly" towards Nvidia, meaning they want Nvidia to fab their stuff with them.
 
This is simpler than it looks.
nVidia went with Samsung because it was the cheaper node. Ampere cards are so expensive to make that they cut corners wherever they could. They needed this to be price-competitive against AMD and keep their profit margin, at the expense of perf/watt.


Look how great Turing and Ampere are compared to past nVidia gens.


 
Those graphs only show how stupid the 3090 is, not Ampere. But I guess it wasn't convenient enough, nor did it fit the narrative, to add the 3080 there.
 
You are missing some points, or maybe you left those out to fit yours?

That's ok. I can say it:

Still, the performance uplift from the previous gen for Turing and Ampere is the worst of the last decade.
Performance per watt for Turing and Ampere is the worst of the last decade.
Turing was so bad at everything that it makes Ampere, and particularly the 3080, look good.

It was my bad not to provide the narrative...
Watch it and listen before you decide what fits where.

 
I did, and it's constructed to build a very specific narrative that fits that tunnel-vision perspective; just look at that price/performance, oh yes.

Anyway, I'm done giving views to pot-stirring, fanboy-aimed rumourville channels; hope you keep enjoying top-tier influence like that, I won't.

Off to my tinfoil hat wearing sessions.
 
Yeap... That's exactly what he is! You should see his AMD-related content in the past; in those he is the Intel/nVidia fanboy.
You don't know what you're talking about, and that is at least misleading.
He is just telling things as they are, for all brands.

Offtopic:
I wonder how you are watching that 27" monitor without a display output.
 
Ampere GA102 is the fastest GPU on the planet, and with the best perf/W ratio in games at 1440p and above. What else do you want to know?

Navi is actually slightly more power efficient if you compare the 5700 to the 2060 / 2070, the cards which it was competing against.
Ok, but not Ampere as you said.
[chart: performance per watt, 3840×2160]
 
When AdoredTV starts getting linked in a topic, you kinda know it's time to leave. Seems like the red haze has taken over :D
 
Can't believe people really believe this. For one, there's not enough 7nm wafers for them now.

With Huawei and several other Chinese designers out of the running, TSMC has had room open up.


Two, do you know how long it would take for this, if it's even possible, to come to market (my guess is like 6-8 months), only to be obsolete months later once Hopper comes out? Some people will believe anything these days without even stopping to think for a second. AMD has TSMC 7nm pretty much locked up.

Nvidia already had GA10x designs ready for TSMC.

They're first in line for any additional wafers that come available. Plus, it would be too costly for NVIDIA to change to TSMC for only a few months, then change to maybe 5nm for Hopper months later. :banghead:

"few months"

Hopper won't be for consumers and it won't be for 2 years as of September.
 
I doubt it's gonna be that difficult to move the other cards to TSMC, considering the GA100 is already manufactured on 7nm... still a 400W TDP, though. We'll have to wait and see how it progresses now that there's free capacity at TSMC.
 
How could it not be, I pull repositories from 2 years back, but your 2 month old frameset is still not challenged... oh, I be getting ya loud and clear!

I wish I could understand you.
 
It is understandable. It is called retrospective determinism, the "I knew it" bias.
I mean, what is a fanboy without some self-fulfilling conspiracy, which I have...
 
What a ridiculous way to spin what happened.

"The report claims that TSMC has become more "friendly" to NVIDIA."

Not that TSMC is afraid of Samsung, but they probably want to make sure Samsung does not become a formidable competitor in the future.

Yeah, and totally not because 3080 is an epic.fail.
 