Wednesday, July 21st 2021

NVIDIA Multi-Chip-Module Hopper GPU Rumored To Tape Out Soon

Hopper is an upcoming compute architecture from NVIDIA and will be the company's first to feature a Multi-Chip-Module (MCM) design, similar to Intel's Xe-HPC and AMD's upcoming CDNA2. The Hopper architecture has been teased for over two years, but a recent leak suggests it is nearing completion, with the product expected to tape out soon. This compute GPU will likely be manufactured on TSMC's 5 nm node and could feature two dies, each with 288 Streaming Multiprocessors (SMs), which could theoretically provide a three-fold performance improvement over the Ampere-based NVIDIA A100. The first product to feature the GPU is expected to be the NVIDIA H100 data center accelerator, which will serve as a successor to the A100 and could launch in mid-2022.
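Taking the leak at face value, the rumored speedup can be sanity-checked with a back-of-envelope calculation. This is a minimal sketch assuming the leaked SM counts (2 × 288 versus the A100's 108 enabled SMs) and equal clocks and per-SM throughput, none of which is confirmed:

```python
# Back-of-envelope comparison based on rumored figures only.
a100_sms = 108          # SMs enabled on the shipping A100 (GA100)
hopper_sms = 2 * 288    # rumored MCM total: two dies of 288 SMs each

# Ideal ratio if performance scaled linearly with SM count.
raw_ratio = hopper_sms / a100_sms
print(f"Ideal SM-count ratio: {raw_ratio:.1f}x")

# A "three-fold" estimate implies far-from-perfect scaling
# (lower clocks, power limits, inter-die overhead).
implied_efficiency = 3.0 / raw_ratio
print(f"Implied scaling efficiency for a 3x result: {implied_efficiency:.0%}")
```

The ideal SM-count ratio comes out well above three-fold, so the quoted three-fold estimate already bakes in substantial real-world losses.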
Sources: @3DCenter_org, VideoCardz

32 Comments on NVIDIA Multi-Chip-Module Hopper GPU Rumored To Tape Out Soon

#1
qubit
Overclocked quantum bit
So, is this likely to make its way into graphics cards, or is it for compute only?
#2
Flanker
Will this GPU be relevant for gamers?
#3
Minus Infinity
qubit: So, is this likely to make its way into graphics cards, or is it for compute only?
Lovelace was supposed to be coming next for desktop and Hopper much later.
#4
Uskompuf
qubit: So, is this likely to make its way into graphics cards, or is it for compute only?
No, it will be compute only.
#5
Gruffalo.Soldier
I'm the only one
Minus Infinity: Lovelace
Interesting choice of name, Linda would be proud :laugh:
#6
64K
I think I recall reading that AMD's next multi-chip GPU will be coming to desktop at some point, so Nvidia should make theirs available for desktop gaming at some point in the future as well.
#7
Vya Domus
64K: I think I recall reading that the next AMD multi-chip GPU for desktop will be coming to desktop at some point so Nvidia should make them available for Desktop gaming at some point in the future.
Not these ones they won't. The A100, for example, is not a graphics-oriented chip; it doesn't even have RT cores, and most likely neither will Hopper.
#8
Vayra86
Flanker: Will this GPU be relevant for gamers?
If you cash out all your ETH by then, which is probably in about 2-3 years, you can buy one to mine ETH with, and in six to eight years you'll get to ask the same question :)

More seriously... So far this seems to be aimed at other markets, but eventually some cut-down version should reach us, especially if you consider the die sizes and power on current top-end cards. Not much headroom there.
#9
Richards
These are the same leakers that lied about the Switch Pro... these are made-up rumours; they're not NSA-level at gathering intel.
#10
R-T-B
Gruffalo.Soldier: Interesting choice of name, Linda would be proud :laugh:
Wrong Lovelace, and no, she wouldn't.
#11
BorisDG
Most people who aren't able to snag Ampere at this point will at least get fresh RTX 4****s.
#12
Franzen4Real
qubit: So, is this likely to make its way into graphics cards, or is it for compute only?
From my understanding, it is more of a software issue (OS and game engine) that is keeping us from having MCM GPUs in the consumer space; on the hardware side, they will be able to make it work once interconnect designs like Infinity Fabric or NVLink catch up to the needs of the individual cores (which they have pretty much accomplished now). I remember reading a Q&A with Lisa Su leading up to the launch of Navi where she essentially stated this. I have spent a while trying to find that exact article (I thought it was here at TPU or AnandTech, but I'm not having luck); though I was not able to, here is an article interviewing David Wang from AMD that basically covers the same thing Lisa Su did. I can't recall reading about this from an nVidia designer/engineer, but I'm sure the Green camp faces the exact same issue. On the topic of this article (Hopper not being a consumer GPU), the following link also briefly covers the fact that in compute environments the MCM limitation is different or non-existent, and it is only in gaming that the problems arise. I'll keep trying to find Lisa's response/explanation, as it was a bit more detailed.

www.pcgamesn.com/amd-navi-monolithic-gpu-design
Vayra86: More serious... So far this seems to just be oriented at other markets but eventually some cut down should reach us, especially if you consider the die sizes and power on current top end cards. Not much headroom there.
I too expect the day to come for MCM to trickle down to the consumer space, but it seems AMD/nVidia are more so at the mercy of Microsoft and game developers. It is a MUCH needed evolution of graphics card design for so many reasons: cost, scalability, and yields, just to name a few big ones.
#13
ThrashZone
Hi,
"Tapping out" is like saying something is giving up, so this is a poor choice of words; even "tape out" really makes no sense.
#14
64K
It's past time for a gaming multi-chip module. We need Nvidia and AMD to make that a reality.
#15
Vayra86
64K: It's past time for a gaming multi-chip module. We need Nvidia and AMD to make that a reality.
Latency though

Some sort of internal split-render tech should work, I suppose, but even those technologies already incur a latency hit on multi-GPU setups.
#17
InVasMani
Vayra86: Latency though

But some sort of internal split render tech should work I suppose. But even those technologies incur a latency hit on multi GPU already.
I could see them having a chip dedicated solely to post-processing, which would allow for a lot of additional performance. It might be something like an RX 6800 chip combined with an RX 6600 chip, where the RX 6600 handles the post-processing of the rendering passed along to it by the RX 6800. They could basically use the second chip for post-process inference and interpolation of effects; with a secondary GPU devoted solely to that, you can use all of its overhead without incurring a performance dip on the primary GPU by taking resources away from it. You could upscale post-processing higher and layer it more extensively with multiple configurations of the same technique, layering them much like drum samples get layered in songs, or similar to a SID chip.
#18
Anymal
BorisDG: Most people which aren't able to snap Ampere on this point, will get at least fresh RTX 4****s.
Imagine how much money for a new graphics card will be saved by then. Good times ahead ;)
#19
HisDivineOrder
Anymal: Imagine how much money for new graphics card will be saved till then? Good times ahead ;.)
Jen and Lisa sure are imagining it. Everyone's taught them both that they've been WAAAAAAAY underpricing their GPU MSRPs. They tried hitting lower price points and look what happened: everyone self-priced up. Just wait till you get a load of the new MSRPs they now realize the market can bear.

All thanks to people buying cards regardless of the price. Thanks everyone!
#20
Minus Infinity
Uskompuf: No, it will be compute only.
Based on what? Nvidia hasn't said squat about Hopper. No one's ever said it won't find its way to desktop. With Lovelace in the interim, Hopper or a revision won't be needed for desktop until probably late 2023 anyway. By then RDNA4 will be on the release schedule, and Nvidia would be fools not to have their all-new architecture ready. It's already known RDNA3 will be a very large upgrade over RDNA2, and Lovelace will have a tough job. And we might find RDNA3 has an MCM design.
#21
InVasMani
I think Nvidia will have a different SKU for desktop mGPU. If this SKU comes to desktop at all, it would probably be in the form of a Titan or Quadro card, though it might lack tensor core hardware. Honestly, that would be fine for a lot of people, especially workstation users who might not even make use of RTRT or care about the DLSS aspect; at this point, why would you need to, with FSR being "close enough", even if you want to argue and debate the differences between the two!?

I think there is about a 50% chance RDNA3 is mGPU-based. If it isn't, I suspect the next Radeon Pro/Instinct cards will be.
#22
Lindatje
Time will tell how it unfolds.
#23
Aquinus
Resident Wat-man
Vayra86: Latency though
That's what people said about CPUs using MCM. Those are problems to be solved, not ones to be avoided. With that said, I don't see MCM being unrealistic for GPUs. It'll just take time to get right.
#24
yotano211
Gruffalo.Soldier: Interesting choice of name, Linda would be proud :laugh:
I saw all her movies, such a great actress. 2 thumbs up.
#25
Anymal
She was forced into filming; nothing to brag about.
Copyright © 2004-2021 www.techpowerup.com. All rights reserved.
All trademarks used are properties of their respective owners.