Tuesday, June 26th 2018

NVIDIA GV102 Prototype Board With GDDR6 Spotted, Up to 525 W Power Delivery. GTX 1180 Ti?

Reddit user 'dustinbrooks' has posted a photo of a prototype graphics card that was "tested by a buddy of his that works for a company that tests NVIDIA boards". Dustin asked the community what he was looking at, which of course piqued the interest of tech enthusiasts.

The card is clearly made by NVIDIA, as indicated by the markings near the PCI-Express x16 slot connector. Also visible are three PCI-Express 8-pin power inputs and a huge VRM setup with four fans. Unfortunately, the GPU in the center of the board is missing, but it should be GV102, the successor to GP102, since GDDR6 support is required. The twelve GDDR6 memory chips located around the GPU's solder pads are marked D9WCW, which decodes to MT61K256M32JE-14:A. These are Micron-made 8 Gbit GDDR6 chips, specified for a 14 Gb/s data rate and operating at 1.35 V. With twelve 32-bit chips, this board has a 384-bit memory bus and 12 GB of VRAM. The memory bandwidth at a 14 Gb/s data rate is a staggering 672 GB/s, which conclusively beats the 484 GB/s that Vega 64 and the GTX 1080 Ti offer.
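As a sanity check, the capacity and bandwidth figures above follow from a few lines of arithmetic (a quick sketch; the 32-bit interface per chip is standard for GDDR6):

```python
# Back-of-the-envelope GDDR6 figures for this board.
chips = 12
chip_density_gbit = 8        # 8 Gbit per chip (MT61K256M32JE-14:A)
bus_width_per_chip = 32      # bits; GDDR6 chips use a 32-bit interface
data_rate_gbps = 14          # Gb/s per pin

bus_width_bits = chips * bus_width_per_chip           # 384-bit bus
vram_gb = chips * chip_density_gbit / 8               # 12 GB
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8   # 672 GB/s

print(bus_width_bits, vram_gb, bandwidth_gbs)
```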
Looking at the top edge of the PCB, we see a connector similar to NVIDIA's NVLink connector, but it's missing half of its pins, which means daisy-chaining more than two cards won't be possible. Maybe NVIDIA plans to segment NVLink into "up to two" and "more than two" tiers, with the latter of course being much more pricey, similar to how server processors are segmented by their multi-processor support. It could also be a new kind of SLI connector, though that seems doubtful, as GPU vendors want to get rid of this multi-GPU approach.

My take on this whole board, mostly due to the overkill power delivery (up to 525 W) and the number of test points and jumpers, is that it is used to test and qualify performance and power consumption in an unconstrained way, so that engineers and marketing can later decide on acceptable power and performance targets for release. The NVLink connector and its functionality can also be tested at this stage, and the final PCB for mass production will be designed based on the outcome of these tests. On the bottom left of the PCB we find a mini-DP connector, which should be perfectly sufficient for this kind of testing, but not for a retail board.

Near the far right of the photo, rotated by 90 degrees, we see some mechanical drawings that, to me, look like a new retention plate for the cooler. You can clearly see some open space inside, which seems to be for the graphics processor itself, surrounded by mounting holes that look like they are for a cooling solution.

Update:
I tried to estimate die size from the photo. We know from the datasheet that the GDDR6 memory chips measure 14 mm x 12 mm. Based on that information, I rescaled, warped and straightened the image so that each GDDR6 memory chip is 140 x 120 pixels. With all memory chips around the GPU now at the correct size, we can use the GPU's silkscreen print to estimate the actual size of the chip package, which I measured at 48.5 x 48.5 mm. Assuming that the inner silkscreen outline with the solder balls represents the surface of the GPU die, we get a length of 26 mm for each side of the die, which brings die size to 676 mm². This makes it a relatively large die considering NVIDIA's existing lineup: GV100 (815 mm², Titan V), GP100 (610 mm², Quadro GP100), GP102 (471 mm², GTX 1080 Ti), GP104 (314 mm², GTX 1080), GP106 (200 mm², GTX 1060). So my initial assessment that this could be the GP102 successor seems accurate, especially since the GV100 die is quite a bit bigger than the GP100 die, by roughly 33%. Our calculated GV102 die is roughly 43% bigger than GP102, which is in the same ballpark.
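The rescaling arithmetic above can be sketched as follows. Note that the pixel measurements for the package and die outlines are illustrative assumptions (the article only states the 140 x 120 px target size for the memory chips), chosen to reproduce the quoted 48.5 mm and 26 mm figures:

```python
# Photogrammetry-style die-size estimate from the rescaled photo.
GDDR6_W_MM, GDDR6_W_PX = 14.0, 140.0   # datasheet width vs. pixels after rescaling
mm_per_px = GDDR6_W_MM / GDDR6_W_PX    # 0.1 mm per pixel

package_px = 485.0                     # assumed measured package silkscreen edge
die_px = 260.0                         # assumed measured inner (die) silkscreen edge

package_mm = package_px * mm_per_px    # ~48.5 mm
die_mm = die_px * mm_per_px            # ~26.0 mm
die_area_mm2 = die_mm ** 2             # ~676 mm^2

print(f"package: {package_mm:.1f} mm, die: {die_mm:.1f} mm, "
      f"area: {die_area_mm2:.0f} mm^2")
print(f"vs GP102 (471 mm^2): +{die_area_mm2 / 471 * 100 - 100:.1f}%")
```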
Source: Reddit
Add your own comment

77 Comments on NVIDIA GV102 Prototype Board With GDDR6 Spotted, Up to 525 W Power Delivery. GTX 1180 Ti?

#26
JalleR
Maybe they are doing an Intel...... LOOK AT THIS CARD running 5 GHz...... :D
Posted on Reply
#27
efikkan
I know there was some talk of bringing NVLink to GeForce products a long time ago, but I haven't heard anything lately. NVLink on a prototype board does not mean consumer products will feature it, but at least it indicates "GV102" supports it. I do wonder why they have rotated it vs. other boards.
Posted on Reply
#28
bug
efikkanI know there was some talk of bringing NVLink to GeForce products a long time ago, but I haven't heard anything lately. NVLink on a prototype board does not mean consumer products will feature it, but at least it indicates "GV102" supports it. I do wonder why they have rotated it vs. other boards.
NVLink could be needed when the data being shuffled around increases past a certain limit. But I don't think we're there just yet.
Posted on Reply
#29
T4C Fantasy
CPU & GPU DB Maintainer
Since GV100 has 5376 CUDA cores when all SMs are unlocked, a new Titan Xv could be 5376
Posted on Reply
#30
iO
It's only a little bigger than GP102, so it's either a small Volta, which I find unlikely, or the big Turing with ~72 SMs.
Posted on Reply
#31
T4C Fantasy
CPU & GPU DB Maintainer
iOIt's only a little bigger as GP102 so it's either a small Volta, which I find unlikely, or the big Turing with ~ 72 SMs.
It's most likely a smaller Volta, like 102 Pascal was to 100, with the same core count:
610 mm² GP100
470 mm² GP102, same core count

So 800 to 600 is plausible, 5120 each
Posted on Reply
#32
Unregistered
UpgrayeddIf they made a new kickass SLI connector that fixes the downfalls I would love to buy 2
Ya, I keep seeing people note that developers, etc. want to eliminate multi-gpu...that would be kinda crappy for some gamers. I mean right now the only way I am able to play a # of my newer games in 4k / 60fps is because of SLI. Without SLI, most folks would be stuck in the 1440p / 1080p resolutions still if they want to play with excellent framerates. So I am hopeful that you're right and it's just maybe a newer connector with more bandwidth, etc.
Posted on Reply
#33
bug
Razrback16Ya, I keep seeing people note that developers, etc. want to eliminate multi-gpu...that would be kinda crappy for some gamers. I mean right now the only way I am able to play a # of my newer games in 4k / 60fps is because of SLI. Without SLI, most folks would be stuck in the 1440p / 1080p resolutions still if they want to play with excellent framerates. So I am hopeful that you're right and it's just maybe a newer connector with more bandwidth, etc.
I guess SLI/CrossFire's death is not because they're completely useless, but because the ROI is not there. Too few people use that feature.
Posted on Reply
#34
Upgrayedd
Razrback16Ya, I keep seeing people note that developers, etc. want to eliminate multi-gpu...that would be kinda crappy for some gamers. I mean right now the only way I am able to play a # of my newer games in 4k / 60fps is because of SLI. Without SLI, most folks would be stuck in the 1440p / 1080p resolutions still if they want to play with excellent framerates. So I am hopeful that you're right and it's just maybe a newer connector with more bandwidth, etc.
Yeah, I don't really see multi-GPU dying... Nvidia loves to sell extra cards, they won't let it die.
Posted on Reply
#35
efikkan
iOIt's only a little bigger as GP102 so it's either a small Volta, which I find unlikely, or the big Turing with ~ 72 SMs.
GV100 and GV102 will have 6 GPCs, 84 SMs and 5376 Cuda cores.
GV104 will have 4 GPCs, 56 SMs, and 3584 Cuda cores.
GV106 will have 2 GPCs, 28 SMs, and 1792 Cuda cores.

GV102 will have fewer fp64 units, but we don't yet know if the tensor unit count is the same.
Posted on Reply
#36
iO
efikkanGV100 and GV102 will have 6 GPCs, 84 SMs and 5376 Cuda cores.
GV104 will have 4 GPCs, 56 SMs, and 3584 Cuda cores.
GV106 will have 2 GPCs, 28 SMs, and 1792 Cuda cores.

GV102 will have fewer fp64 units, but we don't yet know if the tensor unit count is the same.
IMO, Volta was a pure AI/HPC-oriented GPU, so I doubt we'll ever see it in mainstream consumer cards. They might have a GV102, just like you said, but not the rest. That will be based on Volta_v2/Turing.

TSMC's 12nm process offers about 20% higher density... They would have to go beyond 400 mm² for 40% more cores, something they haven't done for their high-end chips.
Posted on Reply
#37
bug
UpgrayeddYeah I don't really see multi-gpu dying..
Oh it's dying, make no mistake. Support has already been moved from the drivers to DX12 and Vulkan and pretty much no developer cared enough to pick up the slack.
UpgrayeddNivida loves to sell extra cards they won't let it die.
Considering multi GPU is run by less than 1% of the users (and not all of them are running Nvidia), I'd be really surprised if extra income from that segment actually registered on Nvidia's radar.
Posted on Reply
#38
efikkan
iOIMO, Volta was a pure AI/HPC oriented GPU so I doubt we'll see it ever in mainstream consumer cards. They might have a GV102, just like you said but not the rest. That will be based on Volta_v2/Turing.

TSMCs 12nm process offers about 20% higher density.. They would have to go beyond 400mm², something they haven't done for their high end chips.
It will feature the resources needed for raytracing through Nvidia's "RTX technology", which Pascal lacks. But GV102 might not have the same amount of tensor cores as GV100.

Where do you get that TSMC's "12 nm" is 20% denser? GV100 is not denser, and TSMC "12 nm" is still the same node just with improved thermals. Pascal is not pushing the maximum density of TSMC's "16 nm". Expect the density of GV102/GV104 to be in the same range as before.
Posted on Reply
#39
iO
efikkanIt will feature the resources needed for raytracing through Nvidia's "RTX technology", which Pascal lacks. But GV102 might not have the same amount of tensor cores as GV100.

Where do you get that TSMC's "12 nm" is 20% denser? GV100 is not denser, and TSMC "12 nm" is still the same node just with improved thermals. Pascal is not pushing the maximum density of TSMC's "16 nm". Expect the density of GV102/GV104 to be in the same range as before.
SemiWiki: "12nm FFC offers a 10% performance gain or a 25% power reduction. 12nm also offers a 20% area reduction with 6T Libraries versus 7.5T or 9T."
Posted on Reply
#40
T4C Fantasy
CPU & GPU DB Maintainer
iOIMO, Volta was a pure AI/HPC oriented GPU so I doubt we'll see it ever in mainstream consumer cards. They might have a GV102, just like you said but not the rest. That will be based on Volta_v2/Turing.

TSMCs 12nm process offers about 20% higher density.. They would have to go beyond 400mm² for 40% cores, something they haven't done for their high end chips.
Volta isn't purely anything; they have always removed features for the lower end. GP100 is server-only too.
Remove the tensor cores and there's your space
Posted on Reply
#41
jabbadap
iOSemiWiki: "12nm FFC offers a 10% performance gain or a 25% power reduction. 12nm also offers a 20% area reduction with 6T Libraries versus 7.5T or 9T."
But then again, 12nm FFN is not the same as 12nm FFC, so those numbers do not apply.
Posted on Reply
#42
iO
T4C FantasyVolta isnt purely anything they always have removed features for lower end GP100 is server only too
Removed tensor cores and theres your space
That would work for a potential GV102, but what about the rest? More cores at the same density require more space, which means either lower profit margins or $800 MSRPs for 1180s, which probably isn't what they're after...
jabbadapBut then again 12nm FFN is not the same as 12nm FFC. So those numbers does not apply.
Ah OK, that slipped past me.
Posted on Reply
#43
T4C Fantasy
CPU & GPU DB Maintainer
iOThat would work for a potential GV102 but what about the rest? More cores at the same density requires more space which means either lower profit margins or $800 MSRPs for 1180s which probably isn't what they're after...

Ah OK, that slipped past me.
Same concept, with more features removed. They are different dies, it's what they always do; they are not just shrunken dies, you see everything moved to different places xD
Posted on Reply
#44
efikkan
iOThat would work for a potential GV102 but what about the rest? More cores at the same density requires more space which means either lower profit margins or $800 MSRPs for 1180s which probably isn't what they're after...
Nvidia have made large chips for the mainstream before, like GM200 at ~601mm². It really comes down to yields and production volume. GP102 struggled a lot with yields in the beginning, leading to Titan X (Pascal) being sold out despite its high price and GTX 1080 Ti being delayed from around December 2016 to March 2017. The larger GP100 didn't seem to suffer from such problems, so it comes down to mistakes in the design, and GTX 1080 Ti did eventually ship in very high volumes. Similarly, the launch dates and prices of "GV102" based products will depend on yields.

We don't even know yet if "GV104" and "GV102" will be segmented the same way as GP104 and GP102. Titan and Quadro are already using the GV100 die, so it will be interesting if Nvidia carve out more than one bin of "GV102", and if multiple of those will be consumer products.
Posted on Reply
#45
T4C Fantasy
CPU & GPU DB Maintainer
efikkanNvidia have made large chips for the mainstream before, like GM200 at ~601mm². It really comes down to yields and production volume. GP102 struggled a lot with yields, leading to Titan X (Pascal) being sold out despite its high price and GTX 1080 Ti being delayed from around December 2016 to March 2017. The larger GP100 didn't seem to suffer from such problems, so it comes down to mistakes in the design, and GTX 1080 Ti did eventually ship in very high volumes. Similarly, the launch dates and prices of "GV102" based products will depend on yields.

We don't even know yet if "GV104" and "GV102" will be segmented the same way as GP104 and GP102. Titan and Quadro are already using the GV100 die, so it will be interesting if Nvidia carve out more than one bin of "GV102", and if multiple of those will be consumer products.
There are at least 7 GV100 SKUs, and Titan V is the consumer version, believe it or not. The GV100 in Titan V isn't classified by engineers as a server or workstation chip by device ID, even though they are the same on the inside.

GV100 non-gl
Posted on Reply
#46
Upgrayedd
bugOh it's dying, make no mistake. Support has already been moved from the drivers to DX12 and Vulkan and pretty much no developer cared enough to pick up the slack.

Considering multi GPU is run by less than 1% of the users (and not all of them are running Nvidia), I'd be really surprised if extra income from that segment actually registered on Nvidia's radar.
They are a business, any extra income is welcome...

Here's 15 pages of SLI-supported games: www.geforce.com/hardware/technology/sli/games
Posted on Reply
#47
Prima.Vera
Definitely not a gaming card. Or maybe it's just one of those prototype boards to test GPUs under various circumstances...
Posted on Reply
#48
T4C Fantasy
CPU & GPU DB Maintainer
Prima.VeraDefinetly not a gaming card. Or maybe it's just one of those prototype boards to test GPUs under various circumstances...
Yea, this is a test board; it supports all features like voltage modification, and it has maxed-out power connectors to test limits.
Posted on Reply
#49
Patriot
FordGT90ConceptMe thinks it is an engineering card to test what will become an NVLink product.
Most definitely not an SXM2 (left card) proto... those are and will remain HBM2 and follow-ups... simply no room for GDDR6.
The right card is a 150 W full-Volta-core-enabled Tesla... albeit at greatly reduced clocks. SXM2 is 300 W, full-sized PCIe 250 W, and baby Volta 150 W.

The SXM2 cards are pretty nuts...

Had to go dig through my pictures; the baby V100 (150 W variant) is not NVLink-enabled, no gold fingers. The big one can do 2-way, and the SXM2 can do 4/8/16-way. Most servers are 4 or 8 SXM2 though... only DGX-2 supports 16-way... and costs, what, 400k.

My guess would be a Quadro or consumer Volta/Turing sample for testing GDDR6.
Posted on Reply
#50
bug
UpgrayeddThey are a business, any extra income is welcome...

Heres 15 pages of SLI supported games.. www.geforce.com/hardware/technology/sli/games
Here's another way to look at that: after 14 years in existence, less than 400 titles have added support for SLI. And that's when most of the work was being done in the video driver. Now that more of the burden has shifted onto the developers/game engine, you can figure out where that figure is going.
Posted on Reply