Wednesday, March 4th 2020

Three Unknown NVIDIA GPUs GeekBench Compute Score Leaked, Possibly Ampere?

(Update, March 4th: Another NVIDIA graphics card has been discovered in the Geekbench database, this one featuring a total of 124 CUs. That could amount to some 7,936 CUDA cores, should NVIDIA keep 64 CUDA cores per CU - though this ratio has changed in the past, as when NVIDIA halved the number of CUDA cores per CU from Pascal to Turing. The 124 CU graphics card is clocked at 1.1 GHz, features 32 GB of HBM2e, and delivers a score of 222,377 points in the Geekbench benchmark. We again stress that these could just be engineering samples with conservative clocks, and that final performance could be even higher.)

NVIDIA is expected to launch its next-generation Ampere lineup of GPUs during the GPU Technology Conference (GTC) event happening from March 22nd to March 26th. Just a few weeks before the release of these new GPUs, Geekbench 5 compute scores measuring the OpenCL performance of unknown GPUs, which we assume are part of the Ampere lineup, have appeared. Thanks to Twitter user "_rogame" (@_rogame), who obtained the Geekbench database entries, we have some information about the CUDA core configuration, memory, and performance of the upcoming cards.
(Images: NVIDIA Ampere CUDA information, NVIDIA Ampere Geekbench listing)
In the database there are two unnamed GPUs. The first is a version with 7,552 CUDA cores running at 1.11 GHz. Equipped with 24 GB of an unknown VRAM type, this GPU is configured with 118 Compute Units (CUs) and scores an impressive 184,096 points in the OpenCL test. Compared to something like a V100, which scores 142,837 in the same test, that is an improvement of almost 30%. Next up, we have a GPU with 6,912 CUDA cores running at 1.01 GHz and featuring 47 GB of VRAM. This is the less powerful model, with 108 CUs and a score of 141,654 in the OpenCL test. One thing to note is the odd memory configuration of both models: 24 GB for the more powerful one and 47 GB (which should presumably be 48 GB) for the weaker one. The results are not recent either, dating back to October and November, so these are likely engineering samples, and the clock speeds and memory configurations might still change before launch.
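For reference, the core-count estimates and the performance delta quoted above boil down to simple arithmetic. Here is a minimal Python sketch; the 64-cores-per-CU figure is our assumption carried over from Turing, not something the Geekbench entries confirm:

```python
# Rough arithmetic behind the figures quoted in this article.
# Assumes NVIDIA keeps 64 CUDA cores per CU (as on Turing) - not confirmed.
CORES_PER_CU = 64

entries = {"118 CU entry": 118, "108 CU entry": 108, "124 CU entry (update)": 124}
for name, cus in entries.items():
    print(f"{name}: ~{cus * CORES_PER_CU} CUDA cores")

# OpenCL score of the 118 CU entry vs. a Tesla V100 in the same test
improvement = 184096 / 142837 - 1
print(f"Improvement over V100: {improvement:.1%}")  # ~28.9%, i.e. almost 30%
```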
Sources: @_rogame (Twitter), Geekbench

62 Comments on Three Unknown NVIDIA GPUs GeekBench Compute Score Leaked, Possibly Ampere?

#52
bug
ZenZimZaliben said: Not sure it is a hard rule, but I remember your GPU will carve out an equal amount of system RAM, if available, for shadow RAM. Might be a myth.

"A quick rule of thumb is that you should have twice as much system memory as your graphics card has VRAM, so a 4GB graphics card means you'd want 8GB or more system memory, and an 8GB card ideally would have 16GB of system memory "

www.pcgamer.com/best-gpu-2016/
It's not a myth. Shadow RAM is something ancient, from when video cards used to copy their BIOS routines into system RAM, because that was faster than reading from the card's own BIOS. I don't think it's been used in ages.
Or you may be thinking of VRAM mapping, which could eat into your addressable RAM on 32-bit systems. It's going to take a while till we hit that again on 64-bit.
#53
Cheeseball
Not a Potato
ARF said: Not exactly...




www.anandtech.com/show/13923/the-amd-radeon-vii-review/15

www.anandtech.com/show/14618/the-amd-radeon-rx-5700-xt-rx-5700-review/13
LSM and N-body simulations rely heavily on memory bandwidth (and on FFTs, including inverse transforms, which Radeons are good at, since we're dealing with particles and shapes at different levels), so it's not a surprise that a Radeon VII with HBM2 can surpass any of the GDDR6 cards (except the 2080 Ti). It's also the reason why the RX 5700 XT is trash at it (and inaccurate, unfortunately, due to the driver).
#54
Flanker
xkm1948 said: For all the folks who are singing praises of OpenCL
Who does that? The only people I know who use OpenCL are those who are either forced to, or have never tried other APIs.
#56
biffzinker
ZenZimZaliben said: Not sure it is a hard rule, but I remember your GPU will carve out an equal amount of system RAM, if available, for shadow RAM. Might be a myth.

"A quick rule of thumb is that you should have twice as much system memory as your graphics card has VRAM, so a 4GB graphics card means you'd want 8GB or more system memory, and an 8GB card ideally would have 16GB of system memory "

www.pcgamer.com/best-gpu-2016/
Is this what you're referring to?


When I had 16 GB installed, it showed 8 GB shared. I just recently added another 16 GB, and now it shows 16 GB shared out of the 32 GB.
#57
efikkan
Metroid said: 24 GB is about time; old 4K games with good textures require at least 8 GB right now, new games a minimum of 16 GB, and 24 GB leaves room for future demanding games.
Just because a game allocates memory doesn't mean it actually needs it.
24 GB or more would be pretty pointless for games right now, as memory bandwidth, computational power and assets would all need to scale with it for that to make sense.

Take, for instance, the RTX 2080 Ti with 616 GB/s of memory bandwidth: if you're running at 120 FPS, it can touch at most about 5.1 GB of that during a single frame. In practice it's much less than that, as memory traffic is not evenly distributed during frame rendering.

Games which actually need more than 8 GB will not use it all during a single frame, but use it for storing a larger world. This also means that some assets can be streamed.
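
A quick back-of-the-envelope check of that bandwidth argument (a rough sketch only; it ignores caching and reuse, and the figures are the ones quoted above):

```python
# Upper bound on how much unique VRAM a GPU could even read in one frame,
# limited purely by memory bandwidth. Ignores caches and uneven traffic.
bandwidth_gb_per_s = 616   # RTX 2080 Ti memory bandwidth
fps = 120

max_gb_per_frame = bandwidth_gb_per_s / fps
print(f"~{max_gb_per_frame:.1f} GB touchable per frame at {fps} FPS")  # ~5.1 GB
```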

While games are likely to slowly require more memory in the future, it will be a balancing act, and no one can accurately predict how much memory top games will actually need 3-5 years from now. So far NVIDIA has been very good at balancing resources on its GPUs, despite many predicting their cards would flop.
notb said: CUDA is a part of the ecosystem you buy into. But it's free to use.
Furthermore, you may or may not use it (since there are alternatives), so you actually have a choice (which you don't get with some other products).
And you can use it even when you don't own the hardware - this is not always the case.

In other words: there are no downsides. I honestly don't understand why people moan so much about CUDA (other than general hostility towards Nvidia).
Even the CUDA compiler is open source, so if AMD (or Intel) wanted to, they could add support themselves.

CUDA is mostly used for custom software, which runs on specific machines. CUDA offers a better ecosystem, debugging tools, and more features (which allow more efficient implementations), so the choice is easy. The ones who keep complaining about CUDA seem to be the ones who don't know the first thing about it.
ZenZimZaliben said: Not sure it is a hard rule, but I remember your GPU will carve out an equal amount of system RAM, if available, for shadow RAM. Might be a myth.

"A quick rule of thumb is that you should have twice as much system memory as your graphics card has VRAM, so a 4GB graphics card means you'd want 8GB or more system memory, and an 8GB card ideally would have 16GB of system memory "

www.pcgamer.com/best-gpu-2016/
This is not true today.

You should not care about numbers, you should care about solid benchmarks showing how much you actually need, because all resources will ultimately be pushed beyond the point of diminishing returns.
bug said: Or you may be thinking of VRAM mapping, which could eat into your addressable RAM on 32-bit systems. It's going to take a while till we hit that again on 64-bit.
The fact police have to correct you there; there are two misconceptions here:
1) 32-bit OS/hardware and the 4 GB memory limit:
There is no relation between register width (e.g. a "32-bit" CPU) and address width. It's just a coincidence that some consumer 32-bit OSes at the time supported up to 4 GB of RAM. You should read up on PAE. Windows Enterprise/Datacenter (2000/2003/2008), Linux, BSD and Mac OS (Pro) supported >4 GB on 32-bit systems, provided the CPU and BIOS supported it (e.g. Xeons).
2) VRAM address space:
Unless you're running integrated graphics, VRAM is never a part of RAM's address space.
Even on a system like Windows XP (32-bit), where the address space is limited to 32 bits, the size of VRAM will not affect it at all. The upper part of the address space (typically 0.25-0.75 GB at the time) is reserved by the BIOS for I/O with PCIe devices and the like, while the VRAM itself is not directly addressable at all.
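
To put numbers on the PAE point, here's a tiny illustrative sketch (the 36-bit figure is classic PAE; it is not tied to any specific system mentioned here):

```python
# Address-space arithmetic behind the 32-bit vs. PAE discussion above.
GIB = 2**30

linear_32bit = 2**32 // GIB   # what a flat 32-bit address can cover
pae_physical = 2**36 // GIB   # classic PAE: 36-bit physical addresses

print(f"32-bit linear address space: {linear_32bit} GiB")   # 4 GiB
print(f"PAE physical address space:  {pae_physical} GiB")   # 64 GiB
```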
#58
bug
efikkan said: The fact police have to correct you there; there are two misconceptions here:
1) 32-bit OS/hardware and the 4 GB memory limit:
There is no relation between register width (e.g. a "32-bit" CPU) and address width. It's just a coincidence that some consumer 32-bit OSes at the time supported up to 4 GB of RAM. You should read up on PAE. Windows Enterprise/Datacenter (2000/2003/2008), Linux, BSD and Mac OS (Pro) supported >4 GB on 32-bit systems, provided the CPU and BIOS supported it (e.g. Xeons).
2) VRAM address space:
Unless you're running integrated graphics, VRAM is never a part of RAM's address space.
Even on a system like Windows XP (32-bit), where the address space is limited to 32 bits, the size of VRAM will not affect it at all. The upper part of the address space (typically 0.25-0.75 GB at the time) is reserved by the BIOS for I/O with PCIe devices and the like, while the VRAM itself is not directly addressable at all.
1) I knew about that, but we were talking x86 here...
2) I think you're right, but I'm not 100% sure. It's been a while since I read about this.
#59
Midland Dog
notb said: Why is 24 "weird"? It's even and actually a multiple of 8 as well.
Nvidia has been making cards with 24 GB RAM since Maxwell.
24 GB of HBM means only three stacks: a 3072-bit bus with 8 GB per 1024-bit stack, like a double-capacity Titan V.
47 is likely a mistake, though.
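
For what it's worth, the stack arithmetic behind that guess looks like this (assuming 1024-bit-wide, 8 GB HBM2e stacks, as the comment above does; the four-stack line is just the same assumption applied to the 32 GB card from the update):

```python
# HBM2(e) stack arithmetic: each stack contributes a 1024-bit channel.
BITS_PER_STACK = 1024
GB_PER_STACK = 8       # assumed 8 GB stacks, per the comment above

for stacks in (3, 4):
    print(f"{stacks} stacks -> {stacks * BITS_PER_STACK}-bit bus, {stacks * GB_PER_STACK} GB")
# 3 stacks -> 3072-bit bus, 24 GB
# 4 stacks -> 4096-bit bus, 32 GB
```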
#60
Jayp
ratirt said: The frequencies for both cards are pretty low. Maybe these aren't gaming GPUs but workstation cards or something like that? The RAM capacities are weird too. Wonder what RAM it is.
I think the clocks are possibly base frequencies or misread by the software.
#61
ratirt
Jayp said: I think the clocks are possibly base frequencies or misread by the software.
Still pretty low, even if those are base clocks.
#62
Vayra86
If you really think we are getting a full-fat 24 GB VRAM gaming GPU, you need to get your sense of reality examined, fast.

11 > 24? Dream on. 16 is more likely, or some weirdness like 14.

These are Quadros or Teslas, I think that is clear. For the odd one out thinking 24 GB is somehow useful for gaming... k buddy.

All we know now is that NVIDIA will succeed the V100. In other news, water is wet.