Thursday, August 2nd 2018

NVIDIA GeForce GTX 1180 Bare PCB Pictured

Here are some of the first pictures of the bare printed circuit board (PCB) of NVIDIA's upcoming GeForce GTX 1180 graphics card (dubbed PG180), referred to by the person who originally posted them as "GTX 2080" (it seems the jury is still out on the nomenclature). The PCB looks hot off the press, with its SMT pads and vias still exposed. The traces for the GT104 GPU hint at a package that's about the size of a GP104 or its predecessors. It's wired to eight memory chips along three sides, confirming a 256-bit wide memory bus. Display outputs appear flexible, supporting either 2x DisplayPort + 2x HDMI or 3x DisplayPort + 1x HDMI configurations.
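The bus-width claim follows from simple arithmetic: each GDDR memory package exposes a 32-bit interface, so eight chips add up to 256 bits. A minimal sketch, assuming a 14 Gbps GDDR6 data rate (a rumored figure at this point, not confirmed for this card):

```python
# Back-of-the-envelope check of the 256-bit bus claim and the bandwidth
# it would imply. The 32-bit-per-chip width is standard for GDDR5/GDDR6
# packages; the 14 Gbps per-pin rate is an assumed (rumored) GDDR6 speed.

CHIPS = 8
BITS_PER_CHIP = 32           # each GDDR6 package has a 32-bit interface
DATA_RATE_GBPS = 14          # assumed per-pin data rate, in Gbps

bus_width = CHIPS * BITS_PER_CHIP                   # total bus width in bits
bandwidth_gbs = (bus_width / 8) * DATA_RATE_GBPS    # bytes per transfer x rate

print(f"{bus_width}-bit bus, ~{bandwidth_gbs:.0f} GB/s")
```

At those assumed speeds the card would land around 448 GB/s, comfortably above the GTX 1080's 320 GB/s.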

The VRM setup is surprisingly powerful for a card that's supposed to succeed the ~180W GeForce GTX 1080, which makes do with a single 8-pin PCIe power input. This card draws power from a combination of 6-pin and 8-pin PCIe power connectors. The VCore side purportedly has 10 phases, which in all likelihood is a 5-phase setup with "dumb" phase-doubling; similarly, the 2-phase memory power could be a doubled single phase. The SLI-HB fingers also make way for a new connector that looks like a single SLI finger and an NVLink finger arranged side by side, so NVIDIA still hasn't given up on multi-GPU. NVLink is a very broad interconnect in terms of bandwidth, and NVIDIA probably needs that for multi-GPU setups to work not just with high resolutions (4K, 5K, or even 8K), but also with higher bit-depths, higher refresh rates, HDR, and other exotic data. The reverse side doesn't have much action other than traces for the VRM controllers, phase doublers, and an unusually large bank of SMT capacitors (the kind seen on AMD PCBs with MCM GPUs).
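The 6-pin + 8-pin layout puts a hard ceiling on board power that is well above the outgoing card's TDP. A rough budget sketch, using the per-connector limits from the PCI Express spec and the ~180 W GTX 1080 figure quoted above:

```python
# Rough power-budget sketch for the pictured 6-pin + 8-pin connector layout.
# Per-connector limits are the PCIe specification maximums; actual draw
# on this unreleased card is unknown.

PCIE_SLOT_W = 75     # power deliverable through the PCIe slot itself
SIX_PIN_W = 75       # PCIe 6-pin auxiliary connector limit
EIGHT_PIN_W = 150    # PCIe 8-pin auxiliary connector limit

max_board_power = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W
headroom_vs_1080 = max_board_power - 180   # vs. the ~180 W GTX 1080

print(f"{max_board_power} W ceiling, {headroom_vs_1080} W above a GTX 1080")
```

A 300 W ceiling for a successor to a ~180 W card suggests either higher clocks, a bigger GPU, or simply generous overclocking headroom.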
Sources: Baidu Tieba, VideoCardz

57 Comments on NVIDIA GeForce GTX 1180 Bare PCB Pictured

#51
efikkan
FordGT90Concept: Why not? Not everyone can afford a $2000+ card, but they still want RTX. For all we know, GV102 could have two stacks of HBM2 and GV104 could have one, forming a complete professional product stack. Deep learning and neural processing require very different silicon from display tasks. It makes sense to divorce the two at some point.
We've all seen the prototype board and this new "1180" board, one with 384-bit GDDR6 and one with 256-bit GDDR6. No HBM on either of these.
#52
FordGT90Concept
"I go fast!1!11!1!"
This PCB could easily be for a Turing or Ampere GPU. What are gamers going to do with tensor cores?
#53
efikkan
FordGT90Concept: This PCB could easily be for a Turing or Ampere GPU.
I don't care about the name speculation; wccftech, videocardz & co. change their minds about that stuff every other week.

What I do know is that "GV102" and "GV104" were taped out before last summer, and that Nvidia is about to release a lineup of new GeForce cards which will be "Volta based". I don't know what product names and "architecture" names Nvidia will use for branding, and frankly I don't care.
FordGT90Concept: What are gamers going to do with tensor cores?
Many assume the RTX technology uses these. RTX is primarily designed for consumers.
Anyway, it's very common for Nvidia to reduce or eliminate features on its lower models. For instance, GP102 reduces FP64 and removes fast FP16 support compared to GP100, along with using a different memory controller, etc.
#54
FordGT90Concept
"I go fast!1!11!1!"
Turing may have been branding for mining cards, which may or may not have been scrapped. Ampere is an architecture with the pilot GPU bearing the GA100 model number.

You may be right that Volta-based GPUs are being announced in a few weeks and that, excepting the GV100 (with its Tensor cores), they are just 12 nm rehashes of Pascal. We'll find out in due time.
#55
efikkan
FordGT90Concept: Turing may have been branding for mining cards, which may or may not have been scrapped. Ampere is an architecture with the pilot GPU bearing the GA100 model number.

You may be right that Volta-based GPUs are being announced in a few weeks and that, excepting the GV100 (with its Tensor cores), they are just 12 nm rehashes of Pascal. We'll find out in due time.
GV100 is not a rehash of Pascal. Pascal was the stepping stone between Maxwell and Volta, introduced due to the delays of Volta. Pascal is Maxwell with some of the design features from Volta.

I recommend reading the white paper from Nvidia, or some of the articles from AnandTech.
Pages 10-12 of the white paper in particular are an interesting read; some snippets:
Similar to Pascal GP100, the GV100 SM incorporates 64 FP32 cores and 32 FP64 cores per SM. However, the GV100 SM uses a new partitioning method to improve SM utilization and overall performance.
Integration within the shared memory block ensures the Volta GV100 L1 cache has much lower latency and higher bandwidth than the L1 caches in past NVIDIA GPUs.
Unlike Pascal GPUs, which could not execute FP32 and INT32 instructions simultaneously, the Volta GV100 SM includes separate FP32 and INT32 cores, allowing simultaneous execution of FP32 and INT32 operations at full throughput, while also increasing instruction issue throughput.
And so much more. I highly recommend reading it, since it's a good insight into what's going to trickle down to the consumers.
#56
jabbadap
efikkan: GV100 is not a rehash of Pascal. Pascal was the stepping stone between Maxwell and Volta, introduced due to the delays of Volta. Pascal is Maxwell with some of the design features from Volta.

I recommend reading the white paper from Nvidia, or some of the articles from AnandTech.
Pages 10-12 of the white paper in particular are an interesting read; some snippets:

And so much more. I highly recommend reading it, since it's a good insight into what's going to trickle down to the consumers.
Volta was not exactly delayed; it had a target to meet with the Summit supercomputer (it was always meant to be a 16 nm product, as technically it still is: 12nm FFN is closer to 16nm FF+ than to 12nm FFC). TSMC's 20 nm manufacturing process was not suitable for GPUs, so Nvidia was forced to strip FP64 out of Maxwell and wait for 16 nm to put it back in Pascal. You could say GP100 was a very short-lived king of HPC, and I would even say Nvidia would have wanted to delay Volta just to sell GP100 products for longer... But yeah, Volta is a vastly different arch than Pascal/Maxwell.
FordGT90Concept: This PCB could easily be for a Turing or Ampere GPU. What are gamers going to do with tensor cores?
GameWorks denoisers might have use for Tensor cores; I don't think RTX itself needs them, though. And like efikkan said, Volta can do INT32 and FP32 math simultaneously, while Pascal and earlier can't (e.g. using INT8 for denoiser inferencing and FP32/FP16 for rasterizing). Nvidia might allow full-speed FP16 too, now that the expensive pro cards have Tensor cores to do such tasks.
#57
efikkan
jabbadap: Volta was not exactly delayed; it had a target to meet with the Summit supercomputer (it was always meant to be a 16 nm product, as technically it still is: 12nm FFN is closer to 16nm FF+ than to 12nm FFC). TSMC's 20 nm manufacturing process was not suitable for GPUs, so Nvidia was forced to strip FP64 out of Maxwell and wait for 16 nm to put it back in Pascal. You could say GP100 was a very short-lived king of HPC, and I would even say Nvidia would have wanted to delay Volta just to sell GP100 products for longer... But yeah, Volta is a vastly different arch than Pascal/Maxwell.
If you look at past roadmaps, Volta was scheduled for 2016, but it was postponed to late 2017, with Pascal shoved in between.
But as you say, TSMC "12nm FFN" is not any denser than 16 nm; it's just a refinement.