NVIDIA "Hopper" Might Have Huge 1000 mm² Die, Monolithic Design

Just another step towards Nvidia's attempt to push everyone onto their cloud gaming scam, following the ridiculous pricing of their gaming hardware
You will own nothing and be happy.
 
Nvidia still thinking monolithic is a good idea. Saddening
It kinda is necessary for graphics. Dividing the workload between 2 dies is very tough at the performance level we have these days. It's also why SLI/Crossfire died as the cards got more powerful. Nvidia would def kill SLI for money but I am pretty sure AMD (the underdog) would not kill CrossFire unless they couldn't get it to work well enough to be worth it.
 
It kinda is necessary for graphics. Dividing the workload between 2 dies is very tough at the performance level we have these days. It's also why SLI/Crossfire died as the cards got more powerful. Nvidia would def kill SLI for money but I am pretty sure AMD (the underdog) would not kill CrossFire unless they couldn't get it to work well enough to be worth it.
MI250X and CDNA say hi
 
We'll see, I'm not gonna make assumptions like that until I see it with my own eyes. I learned my lesson about immediately assuming stuff in the tech space.

MCM in consumer gaming hardware is inevitable; the 3090 is the biggest example of that. Look at how inefficient that monolithic design is, and look at how difficult it is to get clock increases when every bump you give it has to be applied to 10k+ cores on the same die. Whether or not it works well will have to be determined.
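For a rough sense of why a given bump gets so expensive on a huge die, here is a back-of-the-envelope sketch (the core counts, voltages and the constant k are made up, not real 3090 figures) using the usual dynamic-power rule of thumb P ≈ k·N·V²·f:

```python
# Back-of-the-envelope sketch with made-up numbers (core counts, voltages and
# the constant k are invented, not real 3090 figures). Dynamic power scales
# roughly with active_silicon * V^2 * f, so the same clock/voltage bump costs
# proportionally more watts on a ~10k-core die than on a smaller one.

def dynamic_power_w(cores, volts, clock_ghz, k=0.02):
    """Crude C*V^2*f-style estimate; k is an arbitrary scaling constant."""
    return k * cores * volts ** 2 * clock_ghz

small_die = dynamic_power_w(cores=3_000,  volts=1.00, clock_ghz=1.8)
big_die   = dynamic_power_w(cores=10_000, volts=1.00, clock_ghz=1.8)

# The same +100 MHz / +50 mV bump applied to each die:
small_bump = dynamic_power_w(3_000,  1.05, 1.9) - small_die
big_bump   = dynamic_power_w(10_000, 1.05, 1.9) - big_die

print(f"extra watts, smaller die:  {small_bump:.0f}")   # ~18 W
print(f"extra watts, 10k-core die: {big_bump:.0f}")     # ~59 W for the same bump
```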
 
MCM in consumer gaming hardware is inevitable
Of this I have no doubt. I just wanted to convey that there is a very good reason GPUs are monolithic while CPUs have been able to easily follow a multi-die strategy as far back as the first Core 2 Quads. Think about a GPU generating an image: a light source in one corner of the screen can cast a shadow in the other corner, so you can't divide that workload so easily. And these days games use temporal techniques like TAA, which rely on previously generated frames to create new frames faster, so you can't divide the workload that way either. The net result is that you can't divide the workload spatially, because that's how light works, and you can't divide it temporally, because of techniques like TAA. The result is that consumer GPUs have found it impossible to support multi-GPU as they evolved to their current state.

I am not saying multi die GPU in consumer graphics is impossible. But it will take a lot more time and effort than we expect to get there.
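To make the spatial and temporal coupling concrete, here is a tiny illustrative sketch (not any real renderer; every name and number in it is invented): each "die" shades half the screen but the shadow test still needs the whole scene, and the TAA resolve for frame N needs frame N-1's finished result.

```python
# Toy sketch, not any real renderer: all names and numbers here are invented,
# purely to illustrate the spatial and temporal coupling described above.
import numpy as np

W, H = 8, 8            # tiny "framebuffer"
TAA_ALPHA = 0.1        # weight of the new frame vs. accumulated history

def occluded(pixel, light, geometry):
    # Stand-in visibility test; a real renderer would test against the
    # ENTIRE scene, which is exactly the point.
    return (pixel[0] + pixel[1] + light + len(geometry)) % 7 == 0

def shade_half(x_range, lights, full_scene):
    # Spatial coupling: this "die" only shades half the pixels, but the
    # shadow test still needs the whole scene, because a light on the far
    # side of the screen can shadow pixels over here.
    tile = np.zeros((H, len(x_range)))
    for i, x in enumerate(x_range):
        for y in range(H):
            tile[y, i] = sum(not occluded((x, y), L, full_scene) for L in lights)
    return tile

def taa_resolve(current, history):
    # Temporal coupling: frame N blends in frame N-1's finished result,
    # so alternate frames can't simply be farmed out to independent GPUs.
    return TAA_ALPHA * current + (1.0 - TAA_ALPHA) * history

lights, scene = [0, 1, 2, 3], list(range(100))
left  = shade_half(range(0, W // 2), lights, scene)   # "die 0"
right = shade_half(range(W // 2, W), lights, scene)   # "die 1"
frame = np.concatenate([left, right], axis=1)

history = np.zeros((H, W))
for _ in range(3):                # each new frame waits on the previous one
    history = taa_resolve(frame, history)
```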
 
Of this I have no doubt. I just wanted to convey that there is a very good reason GPUs are monolithic while CPUs have been able to easily follow a multi-die strategy as far back as the first Core 2 Quads. Think about a GPU generating an image: a light source in one corner of the screen can cast a shadow in the other corner, so you can't divide that workload so easily. And these days games use temporal techniques like TAA, which rely on previously generated frames to create new frames faster, so you can't divide the workload that way either. The net result is that you can't divide the workload spatially, because that's how light works, and you can't divide it temporally, because of techniques like TAA. The result is that consumer GPUs have found it impossible to support multi-GPU as they evolved to their current state.

I am not saying multi die GPU in consumer graphics is impossible. But it will take a lot more time and effort than we expect to get there.
Oh yeah, of course. When consumer GPUs will take that leap is unknown, but it will happen, and I'm interested. As for Hopper, it being monolithic on the server side is saddening.
 
We'll see, I'm not gonna make assumptions like that until I see it with my own eyes. I learned my lesson about immediately assuming stuff in the tech space.

MCM in consumer gaming hardware is inevitable; the 3090 is the biggest example of that. Look at how inefficient that monolithic design is, and look at how difficult it is to get clock increases when every bump you give it has to be applied to 10k+ cores on the same die. Whether or not it works well will have to be determined.

Don't mix up the node, the architectural choices, and efficiency as a metric; they're different things. What we've seen in larger GPUs is really that clock speeds don't suffer as much as they used to back in the day. In other words: overall yields are better, the perf/clock range the chips come out of the oven at is tighter, and clock control is highly dynamic.
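As a toy illustration of how dynamic that clock control is (this is not NVIDIA's actual GPU Boost algorithm; every limit and band below is invented), you can think of the effective clock as whatever the tightest limiter allows at that instant:

```python
# Toy boost governor (NOT NVIDIA's real GPU Boost algorithm; the limits and
# throttle bands below are invented) -- just to illustrate "the clock is
# whatever the tightest limiter allows right now".

def toy_boost_clock_mhz(power_draw_w, temp_c,
                        power_limit_w=250.0, temp_limit_c=83.0,
                        base_mhz=1600.0, max_boost_mhz=2000.0,
                        power_band_w=30.0, temp_band_c=10.0):
    """Clock picked this instant: full boost with headroom, sliding down
    as power draw or temperature approaches its limit."""
    clamp = lambda v: max(0.0, min(1.0, v))
    power_headroom = clamp((power_limit_w - power_draw_w) / power_band_w)
    temp_headroom  = clamp((temp_limit_c - temp_c) / temp_band_c)
    tightest = min(power_headroom, temp_headroom)
    return base_mhz + (max_boost_mhz - base_mhz) * tightest

print(toy_boost_clock_mhz(power_draw_w=180, temp_c=60))  # headroom -> ~2000 MHz
print(toy_boost_clock_mhz(power_draw_w=245, temp_c=79))  # near limits -> clocks sag
```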

This is echoed in many things over the past 3-5 generations of GPUs:

The best overclocker in the whole Maxwell stack (in % performance gained, not necessarily peak clock, though even then 1600 MHz was a unicorn lower in the stack) was a top-tier part. Peak clocks equaled those of lower-tier parts with power within spec, and temperature started playing a greater role as GPU Boost was introduced. These were the last 28nm GPUs, with fantastic yields.

Pascal:
Review OC clocks on a properly cooled Gaming X:
[attached screenshot: OC clock results]


Versus a GP104 part on the same cooler (1080):

[attached screenshot: GTX 1080 OC clock results]

And this was on a smaller node, straight from the get-go. GPU Boost was further refined. The architecture was stripped of anything non-gaming.

With Turing, we saw a step back in clocks on a very similar node as Nvidia added new components to CUDA, but the small gap between parts in the stack remained.
With Ampere, we saw another step back, even though it was another shrink and no clock speed was gained. That can again be attributed to the further focus on new CUDA components, but also to Samsung's 8nm node, which is definitely worse than anything TSMC has.
 