
NVIDIA Unveils the Tesla P100 HPC Board based on "Pascal" Architecture

NVIDIA unveiled the Tesla P100, the first product based on the company's "Pascal" GPU architecture. At its core is a swanky new multi-chip module, similar in its essential layout to AMD's "Fiji." A 15 billion-transistor GPU die sits on top of a silicon interposer, through which a 4096-bit wide memory interface wires it to four 3D HBM2 stacks; the interposer in turn sits on a fiberglass substrate that's mounted to the PCB over a ball-grid array. With the GPU die, interposer, and memory dies put together, the package has a cumulative transistor count of 150 billion. The GPU die is built on the 16 nm FinFET process, and is roughly 600 mm² in area.

The P100 sits on a space-efficient PCB that looks less like a video card and more like a compact module that can be tucked away into ultra-high-density supercomputing cluster boxes, such as the new NVIDIA DGX-1. The P100 offers double-precision (FP64) compute performance of 5.3 TFLOP/s, FP32 performance of 10.6 TFLOP/s, and FP16 performance of a whopping 21.2 TFLOP/s. The chip has a register file as big as 14.2 MB, and an L2 cache of 4 MB. In addition to PCI-Express, each P100 chip is equipped with NVLink, an in-house-developed high-bandwidth interconnect from NVIDIA, with bandwidths as high as 80 GB/s per direction (160 GB/s bidirectional). This allows extremely high-bandwidth paths between GPUs, so they can share memory and work more like a single GPU. The P100 is already in volume production, with priority customers having bought up supply all the way to its OEM channel availability sometime in Q1 2017.
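In CUDA terms, that kind of GPU-to-GPU memory sharing is what the existing peer-to-peer API already exposes; on an NVLink-equipped P100 system those transfers would ride the NVLink fabric rather than PCIe. A minimal sketch of the idea follows (the device IDs and buffer size are arbitrary choices for illustration, not anything NVIDIA has specified):

```cpp
// Minimal sketch: sharing memory between two GPUs via CUDA peer-to-peer access.
// On P100 boxes with NVLink, these same calls route over the NVLink fabric
// instead of PCIe, which is where the 80 GB/s-per-direction figure matters.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);   // can GPU 0 reach GPU 1's memory?
    if (!canAccess) { printf("No P2P path between GPU 0 and GPU 1\n"); return 1; }

    float *buf0 = nullptr, *buf1 = nullptr;
    size_t bytes = 256u << 20;                   // 256 MB test buffer

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);            // let GPU 0 map GPU 1's memory
    cudaMalloc(&buf0, bytes);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);            // and the reverse direction
    cudaMalloc(&buf1, bytes);

    // Direct GPU-to-GPU copy; with peer access enabled it never stages via host RAM.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```

cudaMemcpyPeer works even without peer access enabled (it falls back to staging through the host), but enabling it lets copies and kernel loads/stores go GPU-to-GPU directly, which is the "work more like a single GPU" scenario the interconnect is built for.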


So what's its TDP then, Bta? I've seen a 300 W TDP quoted elsewhere, which seems OK if a bit higher than I personally expected. Eight of these in one (DGX-1) box must use quite an impressive cooling solution.
 
Nvidia says 300W. It seems a lot, but it also looks like those cards are meant to be mounted horizontally, so a 2U rack can accommodate decent cooling.
 
Standard rack-mount screamin' server fan layout by the looks of it. System builders already shoehorn sixteen Titans into a 4U, so I doubt eight P100s present an insurmountable problem.
[Image: 4u_hdca_web.png]
 
Come on benchmarks. I want a side by side of Pascal and Polaris already
 
I hope by next year, because it seems like both camps are stalling...
I've finished watching GTC 2016 and I really had a strong feeling of déjà vu, because 2015 was almost identical: Pascal is coming and it is going to be awesome... deep learning yada-yada-yada... revolutionary technology blah-blah-blah... and that's it.
 
This sounds like Cypress vs Fermi all over again...
 
Weird, everything is level? Is that some kind of integrated heatsink around all of the chips? They weren't level on Fiji.
 
There is definitely something filling the gaps between the HBM and the GPU itself!
 