
NVIDIA Reportedly Moving Ampere to 7 nm TSMC in 2021

They might pull it off for an Ampere refresh, basically a Super series, but I doubt they'll do it with the current 30xx lineup.
 
No space for NVIDIA at TSMC...
Nvidia told us that they can't keep up with orders for their Ampere GPUs because of high demand, not because of Samsung XD
Suureee, reality hits when you look at the Steam hardware survey.
 
I was planning to get an Asus TUF 3080 once available. Guess I'll wait to see what comes from this. Not in a rush anyway.
 
Although this will arouse personal attacks against me, I wish TSMC would tell Nvidia "no" so that Nvidia would be forced to lie in the bed they made for themselves. Also, simply for the sake of benefiting the consumer, any advantage AMD could gain to improve their market share and revenue/profits, and thereby invest in R&D to level the playing field, would be an overall gain for the entire community.

Now, let's not take a shortcut to thinking and accuse me of being an AMD fanboy; if AMD's and Nvidia's positions were switched, I'd STILL be advocating for the same thing. We have all witnessed how increased competition in the CPU market broke a multi-year stagnation, so how anybody would not want the same for the GPU market is beyond me.

And to be honest, AMD only managed to beat the Skylake uArch, a 5-year-old design on a 14nm node from 7 years ago, in 2020 with Ryzen 5000.


Even IF this is true, don't ignore the fact that AMD has a fraction of the R&D budget and resources that Intel has. The fact that AMD has been able to catch up with and now overtake Intel while being seriously financially hamstrung is unbelievably impressive, and is a feat that has never been accomplished before (no, Cyrix was not able to overtake Intel). Why is there this constant effort by some to minimize, reduce, and outright take this accomplishment away from AMD? Hypothetically speaking, if AMD were 10% behind Intel in every metric, it'd STILL be impressive because they're managing to do it with a fraction of the resources. Another way of saying it is that there is literally no excuse for Intel to be beaten by AMD.

I think the same can be said for AMD in the GPU market... Nvidia has at least three times the resources of AMD's graphics division, and yet AMD is still able to compete in the tiers with the largest TAM, which is seriously impressive... With Nvidia's drastic advantages in resources and even talent, it can be argued that AMD shouldn't even have a presence in the GPU market. But not only do they have a presence, they're poised to match Nvidia in performance, possibly beat them in value and probably in efficiency, all while having a quarter of Nvidia's market share and a fraction of the money. And in spite of all these material realities, a large portion of the community continually bashes AMD for not outperforming Nvidia in every metric!

I'm sure some will write this off as a fanboy rant, but these are the undeniable facts of the situation: AMD has drastically fewer resources than both Intel and Nvidia and is yet able to compete with (Nvidia) and even overtake (Intel) its competitors... How is it that there are those who are not impressed by this?
 
Can't believe people really believe this. For one, there aren't enough 7nm wafers for them now. Two, do you know how long it would take for this, if it's even possible, to come to market (my guess is 6-8 months), only to be obsolete months later once Hopper comes out? Some people will believe anything these days without even stopping to think for a second. AMD has TSMC 7nm pretty much locked up; they're first in line for any additional wafers that come available. Plus it would be too costly for NVIDIA to change to TSMC for only a few months, then change to maybe 5nm for Hopper months later. :banghead:
 
I would think they would bundle this with new VRAM from Micron and list them as a 3080 Super / 3090 Super or Ti. It's why I've held off upgrading my GTX 1080; I feel like I was screwed over with the GTX 1080 Ti. Not making that mistake again. If they don't, then Nvidia will lose out on my purchase. Not that it really matters to them.
 
What I read between the lines is that they are not happy/satisfied with Samsung, for whatever their reasons are, and are moving to TSMC, and will use the 7nm process when the rest will probably be on 5nm.
I personally want to see more competition for TSMC; at the moment they are too far ahead of the rest.
 
Am I the only one thinking NV realises they can be behind on fab process and still hang in the performance and efficiency stakes, so they can do that and save on BOM?

We all know it's all about the bottom line for them, so until they need to spend for the latest fab, they won't.
 
Maybe that's why shortages in supply of GeForce RTX 3080 and RTX 3090 graphics cards could persist into 2021? Hopefully this is not true, or else people will get mad :)
The CEO said it's a demand problem, not supply. They have better supply than at the Pascal and Turing launches.
 
What I read between the lines is that they are not happy/satisfied with Samsung, for whatever their reasons are, and are moving to TSMC, and will use the 7nm process when the rest will probably be on 5nm.
I personally want to see more competition for TSMC; at the moment they are too far ahead of the rest.

I have great skill at reading between the lines too.
My translation: the problem is the volume of production that NVIDIA can absorb. What is not cost-effective for Samsung can seem cost-effective to someone else.
 
Seems similar to the AMD Polaris 14nm-->12nm shrink. It's not going to result in more transistors in the same given area, but instead just higher power efficiency with the same number of transistors. So a typical cadence of new arch-->lower power-->rinse/repeat. Though I'm sure there are many more considerations going into this on the business side than just an increase in power efficiency on the technical side.
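To put rough numbers on the "same transistors, lower power" idea: switching power scales roughly as P = C * V^2 * f, so even a small voltage reduction from a better-tuned node pays off quadratically. A minimal back-of-the-envelope sketch in Python, where every figure is an illustrative assumption rather than measured silicon data:

```python
# Rough dynamic (switching) power model: P = C * V^2 * f
# Every number here is an illustrative assumption, not measured GPU data.

def dynamic_power_watts(cap_farads: float, voltage: float, freq_hz: float) -> float:
    """Approximate switching power for an effective capacitance, core voltage, and clock."""
    return cap_farads * voltage ** 2 * freq_hz

# Hypothetical same chip (same effective capacitance, same clock) on two nodes:
CAP = 120e-9    # effective switched capacitance in farads (assumed)
FREQ = 1.8e9    # 1.8 GHz clock (assumed)

p_old = dynamic_power_watts(CAP, 1.05, FREQ)   # original node at 1.05 V (assumed)
p_new = dynamic_power_watts(CAP, 0.95, FREQ)   # tuned node at 0.95 V (assumed)

saving = (1 - p_new / p_old) * 100
print(f"old: {p_old:.0f} W  new: {p_new:.0f} W  (~{saving:.0f}% less switching power)")
```

Under these assumed numbers, a 0.10 V drop alone cuts switching power by roughly 18% at the same clock, which is in the same ballpark as what node-refresh shrinks like Polaris 14nm-->12nm delivered.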

 
Samsung's nodes aren't up to TSMC's standards, but what about GloFo? Are they now such a mess that nobody in the CPU or GPU business dares touch them?

Not really a mess; they have just jumped off that ship and are focusing on other stuff, like 5G RF chips, which will be (or already is) a huge market. The company is [probably, as it's a private company so we don't actually know] better off for it, seeing how insanely expensive and difficult it is to get volumes on sub-10nm stuff. People seem to think that the only chips in existence are bleeding-edge CPUs and GPUs.
 
It will be a refresh that allows a bit higher boost clocks and lower power draw. But soon after that, RDNA3 will be made on 5nm. All signs show that TSMC is much better for big chips than Samsung, at least for now.
 
Not going to believe this. How much of an improvement would the TSMC 7nm process bring? It's not like it will become super fast or clock high; at most it will lower the power consumption. Everyone knows this b.s. Clock Boost has been around since Pascal, so I'm thinking the TSMC shift won't magically make the GPUs boost to higher clocks. Look at Ryzen 5000: it's on 7nm EUV/N7P and it didn't change the clock speed, and we know Ryzen's 7nm has been pushed to the max out of the box, plus that boosting behavior where the clocks change like GPUs. And that VRM component disaster of the AIBs on Ampere is not at all because of the 8nm node, but rather the BOM.

TSMC this, TSMC that; this Intel screw-up gave them too much hype, even among the MSM shill media, which is a big joke when you actually watch der8auer's video on this whole nm marketing drama. And look at the Apple A-series processors: it's like they came from outer space in Anandtech's SPEC scores and so on, but under load the power consumption is insanely high, and on top of that you have the Qcomm vs Apple application/software performance videos and benches all over YouTube to show the real-life performance / bang-for-buck comparison. And to be honest, AMD only managed to beat the Skylake uArch, a 5-year-old design on a 14nm node from 7 years ago, in 2020 with Ryzen 5000.

Also, Samsung is saying their upcoming non-custom-design Exynos 1080 (no M cores, unlike previous Exynos chips; similar to Qcomm after the 820, since the 820 is the only custom core they did after that big 810 fiasco) is on a 5nm node. So losing a big contract like Nvidia is insanely bad for them in this pure-play fab industry, as we move toward one and only one corporation doing it all.

AMD has been beating Skylake since Zen 1. Winning is more than just IPC; the MCM approach is what allowed AMD to win. Each generation for the past 3 years has pushed Intel into more and more desperate measures, way outside of their historic trends: 4 to 6 to 8 and then to 10-core flagships back to back, after 10 years of capping at 4 cores. As for Nvidia, the move to 7nm would be much needed. Ampere is a pretty bad architecture by Nvidia standards. They presented a less-than-average 30% improvement compared to last gen, but at up to 350W TDP. That's unprecedented.

Just look at those numbers...

And that performance per watt is just sad for a new-gen architecture. RDNA2 will most definitely do better in performance/watt, even if it doesn't compete at the highest end.
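Taking the post's figures at face value (roughly 30% faster at up to 350 W), the perf/watt claim is easy to sanity-check with quick arithmetic. A hedged sketch, where the 260 W last-gen baseline is an assumption for illustration, not a measured figure:

```python
# Back-of-envelope perf/watt comparison.
# The 260 W baseline is an assumed last-gen figure; 1.30x at 350 W is from the post.

base_perf, base_watts = 1.00, 260    # hypothetical last-gen flagship
new_perf, new_watts = 1.30, 350      # "30% improvement ... at up to 350W TDP"

ppw_change = (new_perf / new_watts) / (base_perf / base_watts) - 1
print(f"perf/watt change: {ppw_change * 100:+.1f}%")   # about -3.4% under these assumptions
```

Under those assumptions the new card is faster but actually slightly less efficient, which is the point being made about Ampere's perf/watt above.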
 
There is this talk of uniform load vectorization taking place inside Nvidia's and Intel's GPU drivers, so when that happens for AMD we'll get a clearer picture.
 
It's a shame we are moving to a single-fab world. Now get in line!
 
Samsung's nodes aren't up to TSMC's standards, but what about GloFo? Are they now such a mess that nobody in the CPU or GPU business dares touch them?
AMD still uses them, partly to continue fulfilling their contract and partly to produce budget parts and sections of CPUs that don't scale down well (like the I/O die). The Polaris cards are still around (both as a value proposition and to help use up wafer agreements), select Zen 1000 series CPUs were upgraded with 12nm and Zen+ refinements (the "AF" sub-series), and GloFo has shifted towards producing chips for other tech companies not chasing the bleeding edge, like 5G, IoT, and so forth.

GloFo just isn't on the leading edge any more, but they are still competitive on modern nodes (12nm and above). They had potential, with their early experimental 7nm processes rivaling early TSMC 7nm, but they couldn't afford to make the full conversion, which is why AMD was able to aggressively renegotiate the Wafer Supply Agreement and reduce their reliance on GloFo.
 
Am I the only one thinking NV realises they can be behind on fab process and still hang in the performance and efficiency stakes, so they can do that and save on BOM?

We all know it's all about the bottom line for them, so until they need to spend for the latest fab, they won't.

AMD has stated a 50% improvement in performance per watt for RDNA2. Given that RDNA1 was already on par with Turing, which is essentially the same in PPW as Ampere, it is going to be very hard for Nvidia to maintain an edge in efficiency unless AMD clocks their cards way past the sweet spot. Given that the clocks on the consoles sit between 2.1 GHz and 2.35 GHz, I'd guess that 2.1 GHz is the sweet spot whereas 2.35 GHz is closer to the higher, less efficient clocks. AMD could very well launch both a base model and an XT model following the 2.1 GHz / 2.35 GHz pattern.
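For what the +50% claim would mean in practice, here is a quick sketch under the post's own assumptions (RDNA1 roughly equal to Turing and Ampere in perf/watt, normalized to 1.0; the 300 W budget is purely illustrative):

```python
# What a +50% perf/watt uplift implies at a fixed power budget.
# Values are normalized/illustrative, following the post's assumptions.

rdna1_ppw = 1.0                 # RDNA1, and per the post roughly Ampere too
rdna2_ppw = rdna1_ppw * 1.5     # AMD's stated +50% for RDNA2

budget_w = 300                  # assumed board power budget
print(f"RDNA1-class perf at {budget_w} W: {rdna1_ppw * budget_w:.0f}")   # 300
print(f"RDNA2-class perf at {budget_w} W: {rdna2_ppw * budget_w:.0f}")   # 450
print(f"Same perf as RDNA1 needs only: {rdna1_ppw * budget_w / rdna2_ppw:.0f} W")  # 200 W
```

In other words, if the premise holds, AMD could deliver 50% more performance in the same power envelope, or match RDNA1-class performance at two-thirds the power; clocking past the sweet spot would eat into that headroom.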
 
Sometimes a smaller die has more leakage and uses more power; sometimes not. Sometimes it allows for higher clocks; sometimes not. So no one really knows anything until they actually manufacture them at TSMC.
 
Sometimes a smaller die has more leakage and uses more power; sometimes not. Sometimes it allows for higher clocks; sometimes not. So no one really knows anything until they actually manufacture them at TSMC.
I see what you did there.
 
AMD has stated a 50% improvement in performance per watt for RDNA2. Given that RDNA1 was already on par with Turing, which is essentially the same in PPW as Ampere, it is going to be very hard for Nvidia to maintain an edge in efficiency unless AMD clocks their cards way past the sweet spot. Given that the clocks on the consoles sit between 2.1 GHz and 2.35 GHz, I'd guess that 2.1 GHz is the sweet spot whereas 2.35 GHz is closer to the higher, less efficient clocks. AMD could very well launch both a base model and an XT model following the 2.1 GHz / 2.35 GHz pattern.
...Given that RDNA1 was already on par with Turing, which is essentially the same in PPW as Ampere... NOT!
 