
Three Unknown NVIDIA GPUs' Geekbench Compute Scores Leaked, Possibly Ampere?

AleksandarK
News Editor, Staff member
(Update, March 4th: Another NVIDIA graphics card has been discovered in the Geekbench database, this one featuring a total of 124 CUs. This could amount to some 7,936 CUDA cores, should NVIDIA keep the same 64 CUDA cores per CU - though this ratio has changed in the past, as when NVIDIA halved the number of CUDA cores per CU going from Pascal to Turing. The 124-CU graphics card is clocked at 1.1 GHz and features 32 GB of HBM2e, delivering a score of 222,377 points in the Geekbench benchmark. We again stress that these could just be engineering samples running conservative clocks, and that final performance could be even higher.)
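
For reference, the core-count arithmetic is simple; a quick sketch, where the 64 cores-per-CU figure is an assumption carried over from Volta/Turing:

```python
# CUDA core estimate from the reported Compute Unit count.
# Assumption: 64 CUDA cores per CU, as on Volta/Turing; Pascal used 128.
cus = 124

for arch, cores_per_cu in {"Volta/Turing ratio": 64, "Pascal ratio": 128}.items():
    print(f"{cus} CUs x {cores_per_cu} = {cus * cores_per_cu} CUDA cores ({arch})")
# 124 x 64 = 7936 matches the estimate above; 124 x 128 would give 15872.
```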

NVIDIA is expected to launch its next-generation Ampere lineup of GPUs during the GPU Technology Conference (GTC) happening from March 22nd to March 26th. Just a few weeks before the release of these new GPUs, Geekbench 5 compute scores measuring the OpenCL performance of unknown GPUs, which we assume are part of the Ampere lineup, have appeared. Thanks to Twitter user "_rogame" (@_rogame), who obtained the Geekbench database entries, we have some information about the CUDA core configuration, memory, and performance of the upcoming cards.

In the database, there are two unnamed GPUs. The first is a version with 7,552 CUDA cores running at 1.11 GHz. Equipped with 24 GB of an unknown VRAM type and configured with 118 Compute Units (CUs), it scores an impressive 184,096 points in the OpenCL test. Compared to something like a V100, which scores 142,837 in the same test, that is an improvement of almost 30%. Next up, we have a GPU with 6,912 CUDA cores running at 1.01 GHz and featuring 47 GB of VRAM. This is the less powerful model, with 108 CUs and a score of 141,654 in the OpenCL test. One thing to note is the odd memory configuration of both models: 24 GB for the more powerful model and 47 GB (which should presumably be 48 GB) for the weaker one. The results are not recent, dating back to October and November, so these may be engineering samples whose clock speeds and memory configurations could still change before launch.
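
A quick sanity check of these numbers, using the V100's public core count and the scores above, confirms both the 64 cores-per-CU ratio and the near-30% uplift:

```python
# Sanity-checking the leaked entries against the V100 score quoted above.
gpu_7552 = {"cores": 7552, "cus": 118, "score": 184096}
gpu_6912 = {"cores": 6912, "cus": 108, "score": 141654}
v100_score = 142837  # V100 result in the same OpenCL test

for name, gpu in [("7552-core card", gpu_7552), ("6912-core card", gpu_6912)]:
    print(f"{name}: {gpu['cores'] / gpu['cus']:.0f} cores/CU, "
          f"{gpu['score'] / v100_score - 1:+.1%} vs. V100")
# 7552-core card: 64 cores/CU, +28.9% vs. V100 ("almost 30%")
# 6912-core card: 64 cores/CU, -0.8% vs. V100
```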

View at TechPowerUp Main Site
 
The frequencies for both cards are pretty low. Maybe these aren't gaming GPUs but workstation cards or something like that? The RAM capacities are weird too. I wonder what RAM it is.
 
Why is 24 "weird"? It's even and actually a multiple of 8 as well.
Nvidia has been making cards with 24 GB RAM since Maxwell.

47 is likely a mistake, though.
 
Why is 24 "weird"? It's even and actually a multiple of 8 as well.
Nvidia has been making cards with 24 GB RAM since Maxwell.

47 is likely a mistake, though.

24 isn't weird; it's weird that the "more powerful" one is the one with less RAM.
 
Why is 24 "weird"? It's even and actually a multiple of 8 as well.
Nvidia has been making cards with 24 GB RAM since Maxwell.

47 is likely a mistake, though.

Maybe the GPU reserves 256 MB / 1280 MB of memory for some reason (the cards show 23.8 and 46.8 GB of memory), and that's why it shows less memory available. Like in a PC with an integrated GPU. A co-processor, maybe?
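
Quick math on that theory, using the reserved amounts suggested above:

```python
# If the driver reserves a slice of VRAM, the visible capacity drops:
for nominal_gb, reserved_mb in [(24, 256), (48, 1280)]:
    visible = nominal_gb - reserved_mb / 1024
    print(f"{nominal_gb} GB - {reserved_mb} MB = {visible:.2f} GB visible")
# 24 GB - 256 MB  = 23.75 GB (shows up as ~23.8 GB)
# 48 GB - 1280 MB = 46.75 GB (rounds to the odd-looking 47 GB)
```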

24 isn't weird; it's weird that the "more powerful" one is the one with less RAM.

Not if both GPUs will be available as 24GB and 48GB models.
 
24 GB is about time. Old 4K games with good textures require at least 8 GB right now, new games a minimum of 16 GB, and 24 GB leaves room for future demanding games.
 
Low frequency, sure, I guess. But am I the only one looking at shader counts vs. performance in OpenCL? If the V100 has ~5120 cores, then why does a card with 7552 cores, as in nearly 50% more cores, only score 30% higher? Frequency and more dark silicon, I guess.
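
The scaling math, for anyone who wants to check (the V100 core count is its public spec, the rest is from the leak):

```python
# How well does the leaked score scale with core count vs. the V100?
v100_cores, v100_score = 5120, 142837  # public V100 spec / score from the article
new_cores, new_score = 7552, 184096    # leaked entry

core_gain = new_cores / v100_cores - 1    # ~+47.5%
score_gain = new_score / v100_score - 1   # ~+28.9%
efficiency = (1 + score_gain) / (1 + core_gain)

print(f"+{core_gain:.0%} cores -> +{score_gain:.0%} score "
      f"({efficiency:.0%} per-core scaling)")
# ~87% per-core scaling; the low 1.11 GHz clock on an apparent
# engineering sample plausibly accounts for the rest.
```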
 
Why is 24 "weird"? It's even and actually a multiple of 8 as well.
Nvidia has been making cards with 24 GB RAM since Maxwell.

47 is likely a mistake, though.
Because it is a lot. Why would you need 24 or 47 GB in a graphics card for gaming? That is why it is weird, and maybe these cards are workstation parts of some sort, not gaming.
 
Looks like the new RTX 3060 is coming along nicely :p.
 
Probably Tesla cards. And since they're most likely engineering samples, they only give us some ballpark figures, nothing more.
 
These would be amazing for ML/DL applications. Deploying well-trained CNNs or RNNs at the bench level would be great for researchers. Not looking forward to the price, though; it has been creeping up year after year.

Still, I am surprised Nvidia allows gaming-level GPUs to access both TensorFlow and the full CUDA developer toolkit. I am seeing everything from the 1060 all the way to the Titan RTX used in genomics workstations. Maybe it is just to get all the research labs hooked on their butter-smooth computation experience.
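
It really is plug-and-play these days; a minimal sketch of that kind of sanity check, assuming a TensorFlow 2.x build with CUDA support:

```python
# Minimal check that TensorFlow sees a GeForce-class GPU (1060, Titan RTX, ...).
import tensorflow as tf  # assumes TensorFlow 2.x installed with CUDA support

gpus = tf.config.list_physical_devices("GPU")
print("CUDA-visible GPUs:", gpus or "none found")
```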
 
Link to games that "require minimum 16 GB VRAM" ?

I don't need to link. If you have at least an 8 GB GPU, you can do it yourself: set the game to 4K with everything at maximum, see how much video memory it is using, and you will have your answer.
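
If you'd rather not eyeball an overlay, one rough way to check, sketched with nvidia-smi (keep in mind this reports allocated VRAM, not what the game strictly needs):

```python
# Poll nvidia-smi for VRAM usage while the game is running.
# Caveat: this is *allocated* memory, not what the game actually requires.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.used,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "7012 MiB, 11264 MiB"
```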
 
Still, I am surprised Nvidia allows gaming-level GPUs to access both TensorFlow and the full CUDA developer toolkit. I am seeing everything from the 1060 all the way to the Titan RTX used in genomics workstations. Maybe it is just to get all the research labs hooked on their butter-smooth computation experience.
Just a guess: modern games do quite a lot of compute work on the GPU, aka compute shaders. But building compute pipelines (including OpenCL) is still not as productive as CUDA. I guess it could be an attempt to lure developers into making use of CUDA interoperability, and therefore more dependence on Nvidia GPUs.
 
I don't need to link. If you have at least an 8 GB GPU, you can do it yourself: set the game to 4K with everything at maximum, see how much video memory it is using, and you will have your answer.
Well, we do have our answer, TPU has tested several games and none of them needs more than 8GB VRAM. They rarely need more than 4, actually. See: https://www.techpowerup.com/review/wolcen-benchmark-test-performance-analysis/4.html
You're claiming otherwise, so the burden of proof is on you.
 
Link to games that "require minimum 16 GB VRAM" ?
8K texture DLCs coming right up.

What the hell, they'd probably do it if they could market the card as "8K optimized".
 
Well, we do have our answer, TPU has tested several games and none of them needs more than 8GB VRAM. They rarely need more than 4, actually. See: https://www.techpowerup.com/review/wolcen-benchmark-test-performance-analysis/4.html
You're claiming otherwise, so the burden of proof is on you.


Picking your own set of games to prove your argument is understandable; you can always pick old games, run them at 4K, and say "hey, see, 4K all maxed and only 4 GB of memory". There is a reason Nvidia will launch a 24 GB card when the minimum right now is 8 GB of memory for at least 2560x1440p. Nvidia and AMD work closely with game developers.

Check these quotes, and I do agree:

"Funny side note to prove this point; if you play the RE2 Remake it has a VRAM usage display next to the graphic settings. At 1440P with max settings the game will tell you it needs 11GB of VRAM and gives you a big warning, yet running the game itself, I never went over 7GB, and again, that's allocated not actually used. "

"Yeah, to actually run out of vram on re2 on 11gb I had to run 8k with maxed out graphics and at least "6gb high textures" option. At "4gb high textures" it would run fine. Of course fps was low but no stutters. "

Finished the RE2 remake playing at 4K and it used all my VRAM; I had to lower settings to keep it at 90%.
 
Picking your own set of games to prove your argument is understandable; you can always pick old games, run them at 4K, and say "hey, see, 4K all maxed and only 4 GB of memory". There is a reason Nvidia will launch a 24 GB card when the minimum right now is 8 GB of memory for at least 2560x1440p. Nvidia and AMD work closely with game developers.


But linking two sites that list no games needing more than 8 GB of VRAM is so much better :wtf:
Also, like I said, those 24/48 GB VRAM configurations, if real, are most likely for professional cards.
 
But linking two sites that list no games needing more than 8 GB of VRAM is so much better :wtf:
Also, like I said, those 24/48 GB VRAM configurations, if real, are most likely for professional cards.

Okay, two games that I play need more than 16 GB at 4K to play nice: the RE2 remake and Cities: Skylines. I'd say they are not new by any means; Cities is from 2015 and the RE2 remake from a year and a half ago. For you trolls that don't play at 4K, I can't make you agree with me for the sake of it; you need to play the games and see for yourself. And like I already said, Nvidia works closely with game devs. As for professional GPUs, Nvidia and AMD have their own lines of dedicated cards; you are probably thinking of workstations and deep learning.
 
Okay, two games that I play need more than 16 GB at 4K to play nice: the RE2 remake and Cities: Skylines. I'd say they are not new by any means; Cities is from 2015 and the RE2 remake from a year and a half ago.
I very much doubt that. More likely someone made texture packs for those, leading you to believe they need all that VRAM.

Actually, looking around a bit, it seems RE2 Remake is downright broken when reporting VRAM needs: https://forums.overclockers.co.uk/t...-use-of-vram-in-an-official-release.18843137/
 
Still won't support the latest OpenCL versions because NV is scared.

:laugh:
 
Still won't support the latest OpenCL versions because NV is scared.

:laugh:
Scared of what? Virtually no professional software/SDK uses OpenCL; it's all CUDA, and Nvidia has that market covered.
And I've seen AMD OpenCL 2.0 cards beaten by Nvidia OpenCL 1.2 cards in less professional apps.
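
It is easy enough to check what the driver actually exposes; a small sketch using pyopencl (an assumption on my part, not something from this thread):

```python
# Print the OpenCL version each installed platform advertises.
# NVIDIA drivers of this era typically report "OpenCL 1.2 CUDA".
import pyopencl as cl  # pip install pyopencl

for platform in cl.get_platforms():
    print(f"{platform.name}: {platform.version}")
```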
 
The clocks seem low because they are most likely just base clocks, with boost being around ~1500 MHz. At a ~1.11 GHz base it would barely be as powerful as a Quadro RTX 8000, and I think this GPU is a Quadro.
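
Rough peak-FP32 math behind that comparison (peak FP32 = cores x 2 FLOPs per clock x clock; the Quadro RTX 8000 figures are its public specs, and the ~1.5 GHz boost is just the guess above):

```python
# Peak FP32 throughput: cores x 2 FLOPs per clock x clock speed.
def tflops(cores, clock_ghz):
    return cores * 2 * clock_ghz / 1000

print(f"Leaked card @ 1.11 GHz base:   {tflops(7552, 1.11):.1f} TFLOPS")  # ~16.8
print(f"Quadro RTX 8000 @ 1.77 GHz:    {tflops(4608, 1.77):.1f} TFLOPS")  # ~16.3
print(f"Leaked card @ ~1.50 GHz boost: {tflops(7552, 1.50):.1f} TFLOPS")  # ~22.7
```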
 