Tuesday, June 26th 2018

NVIDIA GV102 Prototype Board With GDDR6 Spotted, Up to 525 W Power Delivery. GTX 1180 Ti?

Reddit user 'dustinbrooks' has posted a photo of a prototype graphics card design that is clearly made by NVIDIA and "tested by a buddy of his that works for a company that tests NVIDIA boards". Dustin asked the community what he was looking at, which of course got tech enthusiasts interested.

The card is clearly made by NVIDIA, as indicated by the markings near the PCI-Express x16 slot connector. Also visible are three PCI-Express 8-pin power inputs and a huge VRM setup with four fans. Unfortunately, the GPU in the center of the board is missing, but it should be GV102, the successor to GP102, since the board requires a GPU with GDDR6 support. The twelve GDDR6 memory chips placed around the GPU's solder-ball footprint are marked D9WCW, which decodes to MT61K256M32JE-14:A. These are Micron-made 8 Gbit GDDR6 chips, specified for a 14 Gb/s data rate and operating at 1.35 V. With twelve chips, this board has a 384-bit memory bus and 12 GB of VRAM. The memory bandwidth at a 14 Gb/s data rate is a staggering 672 GB/s, which comfortably beats the 484 GB/s that Vega 64 and GTX 1080 Ti offer.
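For readers who want to check the math, here is a minimal sketch of the arithmetic in Python, using only the figures decoded above (chip count, per-chip interface width and capacity, and data rate); the variable names are my own:

# Memory math for the pictured board, based on the part-number decode above.
CHIPS = 12             # GDDR6 packages visible around the GPU footprint
BITS_PER_CHIP = 32     # MT61K256M32JE-14:A has a x32 interface
GBIT_PER_CHIP = 8      # 256M addresses x 32 bits = 8 Gbit per chip
DATA_RATE_GBPS = 14    # per-pin data rate in Gb/s

bus_width = CHIPS * BITS_PER_CHIP               # 384-bit memory bus
capacity_gb = CHIPS * GBIT_PER_CHIP / 8         # 12 GB of VRAM
bandwidth_gbs = bus_width * DATA_RATE_GBPS / 8  # 672 GB/s

print(f"{bus_width}-bit bus, {capacity_gb:.0f} GB, {bandwidth_gbs:.0f} GB/s")
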
Looking at the top edge of the PCB, we see a connector similar to NVIDIA's NVLink connector, but with half of its pins missing, which suggests daisy-chaining more than two cards won't be possible. Maybe NVIDIA plans to segment NVLink into "up to two" and "more than two" tiers, with the latter of course being much pricier, similar to how server processors are segmented by their multi-processor support. It could also be a new kind of SLI connector, though I'm not convinced, as GPU vendors have been moving away from that multi-GPU approach.

My take on this board, mostly because of the overkill power delivery (up to 525 W) and the number of test points and jumpers, is that it is used to test and qualify performance and power consumption in an unconstrained way, so that engineers and marketing can later decide on acceptable power and performance targets for the final product. The NVLink connector and functionality can also be tested at this stage, and the PCB for mass production will be designed based on the outcome of these tests. On the bottom left of the PCB we find a mini-DP connector, which should be perfectly sufficient for this kind of testing, but not for a retail board.

Near the far right of the photo, rotated by 90 degrees, we see some mechanical drawings that, to me, look like a new retention plate for the cooler. There is clearly some open space inside, which seems to be for the graphics processor itself, surrounded by mounting holes that look like they are for a cooling solution.

Update:
I tried to estimate the die size from the photo. We know from the datasheet that the GDDR6 memory chips measure 14 mm x 12 mm. Based on that information I rescaled, warped and straightened the image, so that each GDDR6 memory chip is 140 x 120 pixels. With all memory chips around the GPU now at the correct size, we can use the GPU's silkscreen print to estimate the actual size of the chip package, which I measured at 48.5 x 48.5 mm. Assuming that the inner silkscreen with the solder balls represents the surface of the GPU die, we get a length of 26 mm for each side of the die, which brings the die size to 676 mm². This makes it a relatively large die considering NVIDIA's existing lineup: GV100 (815 mm², Titan V), GP100 (610 mm², Quadro GP100), GP102 (471 mm², GTX 1080 Ti), GP104 (314 mm², GTX 1080), GP106 (200 mm², GTX 1060). So my initial assessment that this could be the GP102 successor seems accurate, especially since the GV100 die is quite a bit bigger than GP100, by roughly 33%. Our calculated GV102 die size is roughly 44% bigger than GP102, which is in the same ballpark.
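For those who want to reproduce the estimate, here is a minimal sketch of the scaling exercise in Python. The pixel values are back-derived from the measurements quoted above (the rescaled image puts the 14 mm chip edge at 140 pixels, i.e. 0.1 mm per pixel), not fresh readings from the original photo:

# Die-size estimate from the rescaled photo, assuming 0.1 mm per pixel.
MM_PER_PIXEL = 14 / 140       # 14 mm GDDR6 package edge rescaled to 140 px

package_px = 485              # measured edge of the GPU package silkscreen
die_px = 260                  # measured edge of the inner solder-ball area

package_mm = package_px * MM_PER_PIXEL   # ~48.5 mm package
die_mm = die_px * MM_PER_PIXEL           # ~26 mm die edge
die_area_mm2 = die_mm ** 2               # ~676 mm^2

GP102_MM2 = 471
print(f"package {package_mm:.1f} mm, die {die_mm:.0f} mm -> {die_area_mm2:.0f} mm^2, "
      f"{(die_area_mm2 / GP102_MM2 - 1) * 100:.0f}% larger than GP102")
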
Source: Reddit

77 Comments on NVIDIA GV102 Prototype Board With GDDR6 Spotted, Up to 525 W Power Delivery. GTX 1180 Ti?

#51
Upgrayedd
bug: Here's another way to look at that: after 14 years in existence, less than 400 titles have added support for SLI. And that's when most of the work was being done in the video driver. Now that more of the burden has shifted onto the developers/game engine, you can figure out where that figure is going.
That is a game every two weeks..
Posted on Reply
#52
bug
Upgrayedd: That is a game every two weeks..
Make of it what you wish.
Posted on Reply
#53
efikkan
Prototypes give a fascinating insight into the development process of products. I wish Nvidia and other hardware manufacturers either provided an online archive or donated the hardware to museums, after launch of course. Most of the public never gets to see the cool innovations and experiments behind prototypes and development kits. I find this stuff very fascinating, whether it's old Commodore, Amiga, Nintendo, 3dfx or brand new Nvidia and AMD parts.
Posted on Reply
#54
Unregistered
Upgrayedd: That is a game every two weeks..
Ya, one thing I'd add here is that *MOST* games that are released do not need SLI. SLI is really only needed for the AAA titles that are incredibly demanding on the GPU, and those are the games that should have SLI to help the users out. I think most SLI users are reasonable and aren't asking for support on every given title, just the ones that are very demanding on the GPU. So if that's one game every 2 weeks, that's actually pretty encouraging for SLI support, IMO.
Posted on Reply
#55
Upgrayedd
bug: Make of it what you wish.
Not wishing anything...that's just how it is...
bug: Make of it what you wish.
Vermintide 2 and FarCry5 are two recent SLI games and they aren't even on that list... stop trying to wish SLI away
Posted on Reply
#56
efikkan
Nvidia will keep SLI alive as long as the professional market demands it. Why do you think their Linux drivers support SLI and "stereoscopic 3D"? Let me give you a hint, it's not because of gaming…
Posted on Reply
#57
FordGT90Concept
"I go fast!1!11!1!"
Upgrayedd: Vermintide 2 and FarCry5 are two recent SLI games and they aren't even on that list... stop trying to wish SLI away
Vermintide is surprising seeing how it uses a little known engine (owned by Autodesk). Dunia engine (all FarCry games use it) has supported SLI since the beginning so ongoing support isn't surprising.
Posted on Reply
#58
T4C Fantasy
CPU & GPU DB Maintainer
FordGT90Concept: Vermintide is surprising seeing how it uses a little known engine (owned by Autodesk). Dunia engine (all FarCry games use it) has supported SLI since the beginning so ongoing support isn't surprising.
FF15 finally supports SLI, which means 60 fps 4K is possible now lol *finally*
Posted on Reply
#59
bug
efikkan: Nvidia will keep SLI alive as long as the professional market demands it. Why do you think their Linux drivers support SLI and "stereoscopic 3D"? Let me give you a hint, it's not because of gaming…
Professional usage may be legit (maybe relegated to Quadros?). But still, for consumers this technology is as good as dead.
Posted on Reply
#60
efikkan
bug: Professional usage may be legit (maybe relegated to Quadros?). But still, for consumers this technology is as good as dead.
Professionals certainly use Teslas, Quadros and Titans when there are reasons to do so. But both GeForce cards and consumer CPUs are quite common for workstations and test setups. I've seen developers order pretty decent shipments of higher-end consumer cards, and even sometimes use them in SLI. A good portion of upper mid-range and high-end consumer graphics cards are in fact sold to companies.
Posted on Reply
#61
Upgrayedd
bug: Professional usage may be legit (maybe relegated to Quadros?). But still, for consumers this technology is as good as dead.
Why do you keep saying that? It's clearly not. I just gave you clear examples of how it's not dying. Where are your examples, besides just your gut?
Posted on Reply
#62
bug
Upgrayedd: Why do you keep saying that? It's clearly not. I just gave you clear examples of how it's not dying. Where are your examples, besides just your gut?
Because after all this time it's still at around 1% adoption and clearly both Nvidia and AMD are scaling back support. But don't let that stand in your way, you have a list.
Posted on Reply
#63
Upgrayedd
bug: Because after all this time it's still at around 1% adoption and clearly both Nvidia and AMD are scaling back support. But don't let that stand in your way, you have a list.
And new games coming out don't forget that part, the part where the devs are still doing work on it....get real man...try not to let all the evidence smack you in the face too hard..
Posted on Reply
#64
bug
Upgrayedd: And new games coming out don't forget that part, the part where the devs are still doing work on it....get real man...try not to let all the evidence smack you in the face too hard..
Games still sporting support are an indication that SLI is not dead yet. Not that it's not dying.
I mean, can you name any other technology that became a success after failing to break 2% market penetration 10+ years after its commercial release?
Posted on Reply
#65
Blueberries
If Micron's GDDR6 is packaged in 2GB as rumored then there's 24GB of VRAM on that board which is probably too costly to manufacture for a consumer video card. Maybe a Tesla V?
Posted on Reply
#66
W1zzard
Blueberries: If Micron's GDDR6 is packaged in 2GB as rumored then there's 24GB of VRAM on that board which is probably too costly to manufacture for a consumer video card. Maybe a Tesla V?
Did you read my article? The memory part number is listed there, and how to decode the part number to get the memory capacity is posted as a picture. These chips are without any doubt 256M x 32 = 8 Gb = 1 GB.

Unless there are more memory chips on the other side of the PCB, which seems unlikely.
Posted on Reply
#67
John Naylor
bug: I guess SLi/Crossfire's death is not because they're completely useless, but because the ROI is not there. Too few people use that feature.
Up through the 900 series, well over half the gaming builds we did for users were SLI ... with 95% at least "SLI ready". SLI's problem had more to do with nVidia than customers ... and nVidia's problem was this: two 970s were the same price as a 980. And it had no downside. For most games, you saw a 70% average increase in FPS, and on the games that mattered, many were over 95%. That 70% gave you 40% more frames than the 980. In the few popular games that did not support it, you lost maybe 12% in fps... weighing one against the other, it was an easy decision. The problem for nVidia was that they make more money selling one 980 than they do twin 970s... even more so when game bundles are involved. With the 700, 600 and 500 series, the dollars favored SLI even more.

With the 10xx series, nVidia made it harder. For reasons never explained, SLI average scaling at 1080p was only 18% ... at 1440p, about 30%. Yet for some reason, at 2160p it was well over 55%, with some games scaling near 100%. Looking at the numbers, it does seem odd that the price equation was no longer valid at 1080p and 1440p, where the top-tier card was now keeping you well above 60 fps. But where it couldn't, SLI still made sense at 2160p. With the 4K HDR panels about to drop at $1,999 or so... they will be too high priced for mainstream users, and I don't see a change in the paradigm until these things drop into the range of affordability. But as monitor technology improvements continue to bring us more pixels, I don't see nVidia abandoning a multi-GPU option at the high end.
Posted on Reply
#68
Unregistered
John Naylor: Up through the 900 series, well over half the gaming builds we did for users were SLI ... with 95% at least "SLI ready". SLI's problem had more to do with nVidia than customers ... and nVidia's problem was this: two 970s were the same price as a 980. And it had no downside. For most games, you saw a 70% average increase in FPS, and on the games that mattered, many were over 95%. That 70% gave you 40% more frames than the 980. In the few popular games that did not support it, you lost maybe 12% in fps... weighing one against the other, it was an easy decision. The problem for nVidia was that they make more money selling one 980 than they do twin 970s... even more so when game bundles are involved. With the 700, 600 and 500 series, the dollars favored SLI even more.

With the 10xx series, nVidia made it harder. For reasons never explained, SLI average scaling at 1080p was only 18% ... at 1440p, about 30%. Yet for some reason, at 2160p it was well over 55%, with some games scaling near 100%. Looking at the numbers, it does seem odd that the price equation was no longer valid at 1080p and 1440p, where the top-tier card was now keeping you well above 60 fps. But where it couldn't, SLI still made sense at 2160p. With the 4K HDR panels about to drop at $1,999 or so... they will be too high priced for mainstream users, and I don't see a change in the paradigm until these things drop into the range of affordability. But as monitor technology improvements continue to bring us more pixels, I don't see nVidia abandoning a multi-GPU option at the high end.
Great post - I haven't gamed below 4k for about 3-5 years now so I can't comment on the scaling for 1440p / 1080p, but on 4k I can absolutely confirm the scaling is a huge help. I would not be able to play my games in 4k / 60fps like I do without SLI. I certainly hope it sticks around. When 4k starts becoming mainstream, no doubt the next resolution up will start its rise in popularity where SLI will be needed yet again.
Posted on Reply
#69
efikkan
John Naylor: Up through the 900 series, well over half the gaming builds we did for users were SLI ... with 95% at least "SLI ready". SLI's problem had more to do with nVidia than customers ... and nVidia's problem was this: two 970s were the same price as a 980. And it had no downside. For most games, you saw a 70% average increase in FPS, and on the games that mattered, many were over 95%. That 70% gave you 40% more frames than the 980. In the few popular games that did not support it, you lost maybe 12% in fps... weighing one against the other, it was an easy decision. The problem for nVidia was that they make more money selling one 980 than they do twin 970s... even more so when game bundles are involved. With the 700, 600 and 500 series, the dollars favored SLI even more.

With the 10xx series, nVidia made it harder. For reasons never explained, SLI average scaling at 1080p was only 18% ... at 1440p, about 30%.…
So, we're venturing into conspiracy theories now?
SLI has never worked as well as you described. And Nvidia has not deliberately sabotaged its Pascal cards, but SLI support in top games has decreased over the last few years, because most top games are console ports. SLI support depends heavily on how the game engine works, and Nvidia, with all its driver tricks, can't make a game that isn't suited to it scale with SLI. Current games commonly use multi-pass rendering with synchronization and post-processing effects, which limits multi-GPU scaling.

SLI has never been a good way to get "cheap high-end performance"; it has only worked well in certain games, and has been full of glitches and riddled with stutter. SLI is, and has always been, an option for those who "need" more performance than a single card can offer, in select titles.
Posted on Reply
#70
Blueberries
W1zzard: Did you read my article? The memory part number is listed there, and how to decode the part number to get the memory capacity is posted as a picture. These chips are without any doubt 256M x 32 = 8 Gb = 1 GB.

Unless there are more memory chips on the other side of the PCB, which seems unlikely.
No, sorry. I skimmed it. Good catch.
Posted on Reply
#71
TheGuruStud
Upgrayedd: That is a game every two weeks..
And only 5% work correctly lol
Posted on Reply
#72
gamerman
NVIDIA is not AMD.

And I mean massive TDP like Vega 56/64, and the watercooled 64 even more so. The loser's choice, AMD.

Vega 64's power draw gets close to 500 W with OC, so high that the GPU should just be banned and AMD handed a penalty ticket.

NVIDIA's policy is far away from that "we don't care" attitude.

300 W is the limit for NVIDIA, and even that only for special GPUs like the Titan.

A Turing 1060 will beat Vega 64.
Posted on Reply
#73
bug
gamerman: NVIDIA is not AMD.

And I mean massive TDP like Vega 56/64, and the watercooled 64 even more so. The loser's choice, AMD.

Vega 64's power draw gets close to 500 W with OC, so high that the GPU should just be banned and AMD handed a penalty ticket.

NVIDIA's policy is far away from that "we don't care" attitude.

300 W is the limit for NVIDIA, and even that only for special GPUs like the Titan.

A Turing 1060 will beat Vega 64.
It's not that Nvidia are geniuses or something. The power efficiency comes from the fact that Nvidia also makes SoCs (mostly for automotive these days) and they have a vested interest in reusing the same design. But yes, that translates into quite a nice advantage for their dGPUs.
Posted on Reply
#74
Totally
gamerman: NVIDIA is not AMD.

And I mean massive TDP like Vega 56/64, and the watercooled 64 even more so. The loser's choice, AMD.

Vega 64's power draw gets close to 500 W with OC, so high that the GPU should just be banned and AMD handed a penalty ticket.

NVIDIA's policy is far away from that "we don't care" attitude.

300 W is the limit for NVIDIA, and even that only for special GPUs like the Titan.

A Turing 1060 will beat Vega 64.
What, I don't even... oh, my brain hurts. Vega 64 has a 300 W limit (400 W for the watercooled one), so where are you getting 500 W?
Posted on Reply
#75
Midland Dog
Not a bad die size guesstimate, 754 mm² is where TU102 landed and you said 600-something, pretty close
Posted on Reply