
NVIDIA Allegedly Preparing H100 GPU with 94 and 64 GB Memory

AleksandarK

News Editor
NVIDIA's compute and AI-oriented H100 GPU is supposedly getting an upgrade. The H100 is NVIDIA's most powerful offering and comes in a few different flavors: H100 PCIe, H100 SXM, and H100 NVL (a duo of two GPUs). Currently, the H100 comes with 80 GB of memory in both the PCIe (HBM2E) and SXM5 (HBM3) versions of the card. A notable exception is the H100 NVL, which comes with 188 GB of HBM3, but that is for two cards, making it 94 GB each. However, we could soon see NVIDIA enable 94 and 64 GB options for the H100 accelerator, as the latest PCI ID Repository entries show.

According to the PCI ID Repository listing, two requests have been posted: "Kindly help to add H100 SXM5 64 GB into 2337." and "Kindly help to add H100 SXM5 94 GB into 2339." These two messages indicate that NVIDIA could be preparing its H100 in more variations. In September 2022, we saw NVIDIA prepare an H100 variant with 120 GB of memory, but that still isn't official. These PCI IDs could simply come from engineering samples that NVIDIA is testing in its labs, and these cards may never appear on the market. So, we have to wait and see how it plays out.
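For anyone who wants to check what their own system reports, here is a minimal sketch (assumptions: a Linux host, NVIDIA's standard PCI vendor ID 0x10de, and device IDs 2337/2339 taken straight from the repository messages; the memory-size labels are only what the listing claims) that scans sysfs for matching devices:

Code:
# Minimal sketch: scan Linux sysfs for NVIDIA PCI devices and flag the
# device IDs mentioned in the PCI ID Repository messages (0x2337, 0x2339).
# The memory-size labels are assumptions based purely on that listing.
from pathlib import Path

NVIDIA_VENDOR = "0x10de"
NEW_H100_IDS = {
    "0x2337": "H100 SXM5 64 GB (per the listing)",
    "0x2339": "H100 SXM5 94 GB (per the listing)",
}

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()
    if vendor != NVIDIA_VENDOR:
        continue
    device_id = (dev / "device").read_text().strip()
    label = NEW_H100_IDS.get(device_id, "other NVIDIA device")
    print(f"{dev.name}: device {device_id} -> {label}")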



 
GPU? Even VideoCardz mostly calls it a datacenter accelerator, which is somewhat more accurate in this context.
 
GPU? Even VideoCardz mostly calls it a datacenter accelerator, which is somewhat more accurate in this context.
H100 has all the hardware to classify it as a GPU, like ROPs, TMUs, etc.

I don't see anything wrong with calling it a GPU. Something like the Instinct MI300 apparently lacks ROPs; I'd call that one a datacenter accelerator, as it's certainly not a GPU.
 
Does anyone know why some versions have a nice, binary-round number of gigabytes but others don't (94, 188)?
 
Pic looks like a drawer packed with staples.
 
H100 has all the hardware to classify it as a GPU, like ROPs, TMUs, etc.

I don't see anything wrong with calling it a GPU. Something like the Instinct MI300 apparently lacks ROPs; I'd call that one a datacenter accelerator, as it's certainly not a GPU.
The lack of a video port may be a good reason to not call it a GPU.
 
The lack of a video port may be a good reason to not call it a GPU.

It's not a good reason; GPU just means graphics processing unit, not "video output unit" or whatever. You can render something on one GPU that doesn't have any outputs and display the contents via any other method.

Your PC/laptop can already do this: you can use the display outputs connected to the iGPU and render the application on the dedicated video card.
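To make that concrete, a minimal sketch (assuming the pynvml bindings from the nvidia-ml-py package and a working NVIDIA driver) that lists every GPU, its memory, and whether a display is attached; a board that reports no attached display can still render, the output just has to be read back or forwarded to another device:

Code:
# Minimal sketch using the pynvml bindings (pip install nvidia-ml-py):
# list each NVIDIA GPU, its memory size, and whether a display is attached.
# A board with no active display can still render; the output simply goes
# to memory, a file, or another device instead of a monitor.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        display = pynvml.nvmlDeviceGetDisplayActive(handle)
        print(f"GPU {i}: {name}, {mem.total // 2**30} GiB, "
              f"display {'attached' if display else 'not attached'}")
finally:
    pynvml.nvmlShutdown()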
 
It's not a good reason; GPU just means graphics processing unit, not "video output unit" or whatever. You can render something on one GPU that doesn't have any outputs and display the contents via any other method.

Your PC/laptop can already do this: you can use the display outputs connected to the iGPU and render the application on the dedicated video card.
Besides, rendering may run offline instead of in real time, with the output going to a file, but a GPU is still a GPU.
 
Meanwhile, consumers get 8GB unless they can afford to drop more on a GPU than an entire console.
 
It's not a good reason; GPU just means graphics processing unit, not "video output unit" or whatever. You can render something on one GPU that doesn't have any outputs and display the contents via any other method.

Your PC/laptop can already do this: you can use the display outputs connected to the iGPU and render the application on the dedicated video card.

this is not intended to process graphics
 
this is not intended to process graphics

Yes it is, among other things. Otherwise it wouldn't have the dedicated hardware for it, what a weird thing to say.

MI300 isn't intended to process graphics because it's missing some of that dedicated hardware.
 
Yes it is, among other things. Otherwise it wouldn't have the dedicated hardware for it, what a weird thing to say.

MI300 isn't intended to process graphics because it's missing some of that dedicated hardware.

I guess, in the same way I can use a cement truck to daily drive the kids to school, technically it's true
 
I guess, in the same way I can use a cement truck to daily drive the kids to school, technically it's true

That's a laughable comparison, and it's not just "technically" true; it's true from a practical standpoint.

Companies buy these to be used in rendering farms, in case you didn't know. They are also used for virtualization, i.e., partitioning each physical GPU into multiple instances to give virtual machines graphics hardware acceleration. Yes, they are GPUs and are used as such.
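On the partitioning point, a quick sketch (assuming nvidia-smi is installed and on PATH; the actual slicing is configured with NVIDIA's MIG/vGPU tooling, which this does not do) that simply lists the physical GPUs and any MIG instances the driver currently exposes:

Code:
# Minimal sketch: ask nvidia-smi to list physical GPUs and any MIG instances
# the driver currently exposes ("-L" is the standard list flag). Configuring
# the partitions themselves is done with NVIDIA's MIG/vGPU tooling, not here.
import subprocess

listing = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
for line in listing.stdout.splitlines():
    print(line.strip())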
 
H100 Whitepaper said:
Note that the H100 GPUs are primarily built for executing datacenter and edge compute workloads for AI, HPC, and data analytics, but not graphics processing. Only two TPCs in both the SXM5 and PCIe H100 GPUs are graphics-capable (that is, they can run vertex, geometry, and pixel shaders)
The SXM5 variant has 66 TPCs, the PCIe variant 57.
Plus, there is precious little mention of anything else graphics-related.
 
How are they doing 94 GB of memory, disabling 2 GB from somewhere? :wtf:

I think it's just a mistake; 94 GB would be impossible.
 
How are they doing 94 GB of memory, disabling 2 GB from somewhere? :wtf:
I mean, it's NVIDIA. I wouldn't be surprised; it's not the first time they've done weird DRAM shenanigans.
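For what it's worth, 94 GB is less odd than it looks if the commonly cited memory layout is right: the H100 package carries six 16 GB HBM stacks, of which only five are enabled on the 80 GB parts. Assuming that layout,

\[
5 \times 16\,\text{GB} = 80\,\text{GB}, \qquad 6 \times 16\,\text{GB} - 2\,\text{GB} = 94\,\text{GB},
\]

with the last 2 GB presumably fused off for yield, which matches the 94 GB per GPU already quoted for the H100 NVL.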
 