Wednesday, March 4th 2020

Three Unknown NVIDIA GPUs GeekBench Compute Score Leaked, Possibly Ampere?

(Update, March 4th: Another NVIDIA graphics card has been discovered in the Geekbench database, this one featuring a total of 124 CUs. This could amount to some 7,936 CUDA cores, should NVIDIA keep the same 64 CUDA cores per CU - though this has changed in the past, as when NVIDIA halved the number of CUDA cores per CU from Pascal to Turing. The 124 CU graphics card is clocked at 1.1 GHz, features 32 GB of HBM2e, and delivers a score of 222,377 points in the benchmark. We again stress that these could be mere engineering samples running conservative clocks, and that final performance could be even higher.)
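For reference, the arithmetic behind that estimate is simple; here is a quick sketch, with the 64-cores-per-CU figure explicitly treated as an assumption carried over from Volta/Turing rather than a confirmed Ampere spec:

```cpp
// Back-of-the-envelope core count for the new 124 CU Geekbench entry.
#include <cstdio>

int main() {
    const int compute_units = 124; // CUs reported by Geekbench
    const int cores_per_cu  = 64;  // assumption: Volta/Turing-style 64 cores per CU/SM
    std::printf("Estimated CUDA cores: %d\n", compute_units * cores_per_cu); // 7936
    return 0;
}
```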

NVIDIA is expected to launch its next-generation Ampere lineup of GPUs during the GPU Technology Conference (GTC) taking place from March 22nd to March 26th. Just a few weeks before the expected debut of these new GPUs, Geekbench 5 compute scores measuring the OpenCL performance of unknown GPUs, which we assume are part of the Ampere lineup, have appeared. Thanks to Twitter user "_rogame" (@_rogame), who spotted the Geekbench database entries, we have some information about the CUDA core configuration, memory, and performance of the upcoming cards.
[Images: NVIDIA Ampere CUDA information / NVIDIA Ampere Geekbench listing]
In the database, there are two unnamed GPUs. The first is a version with 7,552 CUDA cores running at 1.11 GHz. Equipped with 24 GB of an unknown VRAM type, the GPU is configured with 118 Compute Units (CUs) and scores an impressive 184,096 points in the OpenCL test. Compared to something like a V100, which scores 142,837 in the same test, that is roughly a 29% improvement in performance. Next up, we have a GPU with 6,912 CUDA cores running at 1.01 GHz and featuring 47 GB of VRAM. This is a less powerful model, as it has 108 CUs and scores 141,654 in the OpenCL test. One thing to note is the odd memory configuration of both models: 24 GB for the more powerful model and 47 GB (which should presumably be 48 GB) for the weaker one. The results are not recent, dating back to October and November, so these are likely engineering samples, and the clock speeds and memory configurations may change before launch.
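As a sanity check, both leaked entries work out to exactly 64 CUDA cores per reported CU, and the headline comparison against the V100 is easy to reproduce. A quick sketch using only the leaked Geekbench figures quoted above:

```cpp
// Quick arithmetic check on the leaked figures quoted above.
#include <cstdio>

int main() {
    // Cores per CU for both database entries.
    std::printf("7552 cores / 118 CUs = %d per CU\n", 7552 / 118); // 64
    std::printf("6912 cores / 108 CUs = %d per CU\n", 6912 / 108); // 64

    // OpenCL score of the 118 CU card relative to the V100 result cited above.
    const double lead = (184096.0 / 142837.0 - 1.0) * 100.0;
    std::printf("Lead over V100: %.1f%%\n", lead); // ~28.9%
    return 0;
}
```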
Sources: @_rogame (Twitter), Geekbench

62 Comments on Three Unknown NVIDIA GPUs GeekBench Compute Score Leaked, Possibly Ampere?

#26
bug
gamefoo21If Intel really shows up, OpenCL will get a massive boost. Intel went Freesync or well VESA AFR, HDMI uses it too, NV suddenly stopped requiring port corrupting hardware for their AFR support.

NV has refused to support OpenCL 2.0 to force apps to use CUDA to support the newer functions. If they weren't scared, they'd enable the support.

As for performance a Radeon VII will smack around a 2080 Ti in OpenCL workloads.

For mining on GPUs, 290s, 390s, Vegas were god.

NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.
Nope, general consensus seems to be OpenCL is just bad/poorly designed.

And here's the Radeon VII "smacking around" the 2080 Ti: www.phoronix.com/scan.php?page=article&item=radeon-vii-rocm24&num=2
Keep in mind the 2080 Ti is only on OpenCL 1.2.
Posted on Reply
#27
MuhammedAbdo
T4C Fantasythe clocks seem low because most likely they are just base clocks, with boost being around 1500~; at base 1.11~ it would barely be as powerful as a Quadro 8000, and I think this GPU is a Quadro
No, Geekbench reads boost clocks as well. The 118CU GPU is beating the Titan RTX by 40% despite the low clocks.
Posted on Reply
#28
T4C Fantasy
CPU & GPU DB Maintainer
MuhammedAbdoNo, Geekbench reads boost clocks as well. The 118CU GPU is beating the Titan RTX by 40% despite the low clocks.
Ahh okay, but I think the clock tables may be different and it's reading the base clock anyway, or even the wrong clocks in general; it happened to Navi as well on launch and before it, literally reading a 1 GHz clock.
Posted on Reply
#29
bug
T4C FantasyAhh okay, but I think the clock tables may be different and it's reading the base clock anyway, or even the wrong clocks in general; it happened to Navi as well on launch and before it, literally reading a 1 GHz clock.
*cough*engineering sample*cough*
Posted on Reply
#30
xkm1948
FlankerJust a guess:
Modern games do quite a lot of compute work on the GPU, aka compute shaders. But building compute pipelines (including OpenCL) is still not as productive as CUDA. I guess it could be an attempt to lure developers to make use of CUDA interoperability, and therefore create more dependence on Nvidia GPUs.
Very few research labs in bioinformatics / biomedical use OpenCL. It is the exact opposite of user friendly. Sloppy documentation of almost everything, lack of active community engagement. As of right now it is almost abandonware, at least for us genetics/genomics researchers.

To put it simply, why would researchers devote their time, energy and resources into OpenCL when nobody will even cite and use their work afterwards?
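To illustrate the "not productive" and "opposite of user friendly" complaints above: this is roughly the host-side ceremony OpenCL demands before a single trivial kernel runs. A minimal sketch in C++ using the C API; error handling and resource cleanup are omitted, and the kernel name and sizes are purely illustrative. For comparison, the CUDA equivalent is essentially a cudaMalloc, a kernel<<<blocks, threads>>>(...) launch, and a cudaMemcpy.

```cpp
// Minimal OpenCL host program: double every element of a buffer.
// Sketch only - error checks and clRelease* cleanup omitted for brevity.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSource =
    "__kernel void scale(__global float* x) {\n"
    "    size_t i = get_global_id(0);\n"
    "    x[i] *= 2.0f;\n"
    "}\n";

int main() {
    const size_t n = 1024;
    std::vector<float> host(n, 1.0f);

    cl_platform_id platform;                                      // 1. pick a platform
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;                                          // 2. pick a GPU on it
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device,         // 3. create a context
                                     nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr); // 4. queue
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);  // 5. compile at runtime
    cl_kernel kernel = clCreateKernel(prog, "scale", nullptr);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(float), host.data(), nullptr); // 6. buffer
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr,
                           0, nullptr, nullptr);                  // 7. launch
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, n * sizeof(float),
                        host.data(), 0, nullptr, nullptr);        // 8. read back
    std::printf("host[0] = %.1f\n", host[0]);                     // expect 2.0
    return 0;
}
```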
Posted on Reply
#31
Franzen4Real
ratirtBecause it is a lot? Why would you need 24 or 47 GB in a graphics card for gaming? That is why it is weird, and maybe these cards are workstation cards of some sort, not gaming ones.
I'm not sure why someone would think that these are anything but professional cards. "Weird" RAM aside, a GTC reveal alone makes it pretty clear.
Posted on Reply
#32
TheinsanegamerN
MetroidOkay, two games that I play need more than 16 GB at 4K to play nicely: RE2 Remake and Cities: Skylines. I'd say they are not new by any means - Cities in 2015 and RE2 Remake half a year ago. For you trolls that don't play at 4K, I can't for the sake of it make you agree with me; you need to play the games and see for yourself. And like I already said, Nvidia works closely with game devs. Also, for professional GPUs, Nvidia and AMD have their own lines of dedicated GPUs. You are probably referring to workstations and deep learning.
Liar. I run Cities at 4K with all details fully maxed out. It doesn't max out the framebuffer on a Vega 64 GPU with 8 GB of VRAM. And RE2R is just broken when it comes to reporting; what it "uses" is merely allocated, not actively used.

You do not need 16 GB of VRAM to run either of these games at 4K. If you are running a ton of graphical mods on Cities, then you could push past the framebuffer. But that's mods. You could mod the likes of Skyrim to use 2-3x the VRAM of cards at the time, but that was not the native game, and mods are often not optimized like the base game is.
Posted on Reply
#33
Metroid
TheinsanegamerNLiar. I run Cities at 4K with all details fully maxed out. It doesn't max out the framebuffer on a Vega 64 GPU with 8 GB of VRAM. And RE2R is just broken when it comes to reporting; what it "uses" is merely allocated, not actively used.

You do not need 16 GB of VRAM to run either of these games at 4K. If you are running a ton of graphical mods on Cities, then you could push past the framebuffer. But that's mods. You could mod the likes of Skyrim to use 2-3x the VRAM of cards at the time, but that was not the native game, and mods are often not optimized like the base game is.
How can you call me a liar, when your own explanation agrees with my statement? First of all, 8 GB of VRAM at 4K is just not enough if you want to play those two games nicely. I'm not saying they will use 16 GB of VRAM, I'm saying you will have enough free VRAM space if those games need it. Nobody wants to play a game with stutters and other problems caused by not having enough free VRAM.

About RE2 Remake reporting memory usage wrong: I wonder if GPU-Z is also showing it wrong then, because I used GPU-Z the last time just to check, and Windows Task Manager to see the usage.
Posted on Reply
#34
Flanker
xkm1948Very few research labs in bioinformatics / biomedical use OpenCL. It is the exact opposite of user friendly. Sloppy documentation of almost everything, lack of active community engagement. As of right now it is almost abandonware, at least for us genetics/genomics researchers.

To put it simply, why would researchers devote their time, energy and resources into OpenCL when nobody will even cite and use their work afterwards?
I know, right? It's a bloody pain with little benefit. Nvidia won here with a productive API.

Edit: I didn't mean that people use OpenCL, if that is what it looked like. What I was saying is that Nvidia exposed CUDA to all gaming GPUs so game developers can use it too and see how much more productive it is than other APIs.
Posted on Reply
#35
TheinsanegamerN
MetroidHow can you call me a liar, when your own explanation agrees with my statement?
Because it doesn't agree with your statement, and you are trying to twist other people's statements to support yours. MODS are not part of the stock gameplay experience. They are made by the community. For every talented coder there are just as many mods that are poorly optimized, if optimized at all, and it is trivial to break a game by loading it with mod after mod. That is not the fault of the card, because it doesn't matter how much silicon and memory you throw at a problem, you will always be able to throw more software at it as well.

If you drop a turbo into your car and overheat it because the radiator didn't have enough capacity for the increased load, is that the fault of the radiator? No. You modded the application and ran out of capacity; for its designed use case it works perfectly.
First of all, 8 GB of VRAM at 4K is just not enough if you want to play those two games nicely.
Citation needed, something that has been asked of you multiple times and you refuse to deliver. (Here's a hint: a site with 0 benchmarks or proof of what you are claiming makes you look like a total mong.)
I'm not saying they will use 16 GB of VRAM, I'm saying you will have enough free VRAM space if those games need it.
Except those games do not need that; that has been proven to you already in this very thread by @bug, and is readily disproven by casually googling these very games being played at 4K - reviews and gameplay videos show them running just fine.
Nobody wants to play a game with stutters and other problems caused by not having enough free VRAM.
Good thing that isn't a problem with any game currently on the market; 8 GB is currently sufficient for 4K.
About RE2 Remake reporting memory usage wrong: I wonder if GPU-Z is also showing it wrong then, because I used GPU-Z the last time just to check, and Windows Task Manager to see the usage.
What did you think we were talking about? RE2R "consumes" large amounts of VRAM because it is reserving way more than it actually needs. Much of that VRAM is unused, as is evident from the fact that lower-VRAM cards run the game fine without stuttering.

Let me help you here, Metroid: you came here making claims that games need more than 8 GB of VRAM to play sufficiently at 4K. That has been proven false by information posted by other users. You have yet to post anything that backs up your claims. The burden of proof is on those making the claims. That's you.

Since you seem so sure about this, how about you record video on your computer of the games you are talking about, show the settings you are using, and run an FCAT test and FPS test for us, using MSI Afterburner to verify VRAM usage and FPS results. Shouldn't take more than 10 minutes to run the benchmarks and a bit of time to post the resulting video to YouTube. It doesn't need to be edited or anything, just as long as it contains proof of what you are claiming.
Posted on Reply
#36
ratirt
Franzen4RealI'm not sure why someone would think that these are anything but professional cards. "Weird" RAM aside, a GTC reveal alone makes it pretty clear.
Somebody did, and this was my answer, bro. Besides, as mentioned, these are samples. We don't know which segment these will end up in, or whether they will keep these RAM capacities. It all may change, you know, depending on the tiers NV goes with. Who knows what will happen? I surely don't. We know new cards from NV are around the corner.
Posted on Reply
#37
notb
ratirtBecause it is a lot? Why would you need 24 or 47 GB in a graphics card for gaming? That is why it is weird, and maybe these cards are workstation cards of some sort, not gaming ones.
But why did almost everyone here assume this is a desktop gaming card? :) Is it mentioned somewhere in the leak or what?

Both leaked cards look like next-gen top Quadro models. RTX 6000 and 8000 had 24 and 48GB RAM respectively.
Posted on Reply
#39
ratirt
notbBut why did almost everyone here assume this is a desktop gaming card? :) Is it mentioned somewhere in the leak or what?

Both leaked cards look like next-gen top Quadro models. RTX 6000 and 8000 had 24 and 48GB RAM respectively.
I said it is a professional card due to the RAM capacity.
Posted on Reply
#40
jabbadap
gamefoo21If Intel really shows up, OpenCL will get a massive boost. Intel went Freesync or well VESA AFR, HDMI uses it too, NV suddenly stopped requiring port corrupting hardware for their AFR support.

NV has refused to support OpenCL 2.0 to force apps to use CUDA to support the newer functions. If they weren't scared, they'd enable the support.

As for performance a Radeon VII will smack around a 2080 Ti in OpenCL workloads.

For mining on GPUs, 290s, 390s, Vegas were god.

NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.
Intel uses its own SYCL-based oneAPI DPC++, not OpenCL. While one can run OpenCL/CUDA/SYCL code through wrappers, it's still better to use direct oneAPI code with Intel hardware. I'm not sure how easy it will be to migrate from oneAPI to SYCL/OpenCL once you have done your coding for Intel. So all in all, Intel showing up won't necessarily give OpenCL any boost; it may rather deprecate it even further (and by deprecating I mean things like Apple moving everything to Metal, or Intel giving its OpenCL support the same second-class status Nvidia does).
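For anyone curious what "SYCL-based DPC++" actually looks like, here is a minimal sketch (assuming a SYCL 2020 compiler such as Intel's icpx; the kernel itself is illustrative and not Intel-specific). The same source can, in principle, target CPUs, Intel GPUs and, via third-party backends, other vendors' hardware.

```cpp
// Minimal SYCL/DPC++ program: double every element of a shared array.
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    sycl::queue q;                                    // selects a default device
    const size_t n = 1024;
    float* data = sycl::malloc_shared<float>(n, q);   // USM: visible to host and device
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;

    q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        data[i] *= 2.0f;                              // runs on the device
    }).wait();

    std::printf("data[0] = %.1f on %s\n", data[0],
                q.get_device().get_info<sycl::info::device::name>().c_str());
    sycl::free(data, q);
    return 0;
}
```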

And what do you mean by AFR - some multi-card rendering method, or did you mix it up with VRR? VESA VRR and HDMI Forum VRR are different things; HDMI Forum VRR is currently supported by console manufacturers and Nvidia, while AMD's support for it is still pending.

I don't think the CUDA license has any fee, but Nvidia can lock you to their hardware with CUDA.
Posted on Reply
#41
notb
gamefoo21NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.
Imagine a world where Nvidia haters actually learn something about the products/company they attack. :o

CUDA is free to use (also commercially).
Furthermore, Nvidia cards obviously can run OpenCL programs, so it's not like anyone's forced to use CUDA.
Posted on Reply
#42
gamefoo21
jabbadapIntel uses its own SYCL-based oneAPI DPC++, not OpenCL. While one can run OpenCL/CUDA/SYCL code through wrappers, it's still better to use direct oneAPI code with Intel hardware. I'm not sure how easy it will be to migrate from oneAPI to SYCL/OpenCL once you have done your coding for Intel. So all in all, Intel showing up won't necessarily give OpenCL any boost; it may rather deprecate it even further (and by deprecating I mean things like Apple moving everything to Metal, or Intel giving its OpenCL support the same second-class status Nvidia does).

And what do you mean by AFR - some multi-card rendering method, or did you mix it up with VRR? VESA VRR and HDMI Forum VRR are different things; HDMI Forum VRR is currently supported by console manufacturers and Nvidia, while AMD's support for it is still pending.

I don't think the CUDA license has any fee, but Nvidia can lock you to their hardware with CUDA.
According to Intel all of their GPUs from 2010 on support OpenCL.

I run OpenCL on my 7700K's UHD630 without any translation. I only need to have the drivers enabled. Intel also offers SDK support for their FPGAs to do OpenCL.

CUDA costs a bunch because you pay for the hardware. There is only one supplier of hardware that can run CUDA. I'd actually be really interested in wrapping CUDA and running it on non-NV hardware, but NV has never shied away from locking its software down as hard as possible.

I remember when I could have PhysX on while using a Radeon GPU to do the drawing.

I'm also very sure it would be very easy for NV to turn on OCL 2 support.

Edit: Isn't oneAPI open? I thought it was basically OpenCL 3... I'll have to look into it more.

Edit 2: oneAPI is basically a unified open standard that offers full cross-platform use. According to Phoronix articles, porting it to AMD will be easy because Intel and AMD both use open-source drivers; NV, on the other hand, locks the good stuff up with closed-source drivers on Linux.

Edit 3: Too many abbreviations in my head - I meant VRR instead of AFR. Somehow it became Adaptive Frame Rendering... LoL. I meant the lovely piece of hardware that NV required for VRR rather than just supporting the DisplayPort spec. I mean, it's kinda awesome that the Xbox One X supports VRR if you plug it into a compatible display.
notbImagine a world where Nvidia haters actually learn something about the products/company they attack. :eek:

CUDA is free to use (also commercially).
Furthermore, Nvidia cards obviously can run OpenCL programs, so it's not like anyone's forced to use CUDA.
Free to use on Nvidia hardware. AMD has to jump through hoops just to emulate some small parts.

Nvidia also completely gimps the GP-GPU performance of their more affordable GPUs. Want that performance? The cheapest option available to normal folk is the $3,000 Titan V.

Yeah, you can use the older, less functional and less capable OpenCL 1.2. Want those newer features... well, CUDA only, on NV.

Yeah... Free... :rolleyes:

At least I'm a hater who uses GeForces and Quadros. :pimp:
Posted on Reply
#43
Turmania
When they launch this generation, I think it will be very hard for AMD to climb back up. They are at least 4 years away from competing; that is like 2 generations away.
Posted on Reply
#44
notb
gamefoo21Free to use on Nvidia hardware. AMD has to jump through hoops just to emulate some small parts.
CUDA is a part of the ecosystem you buy into. But it's free to use.
Furthermore, you may or may not use it (since there are alternatives), so you actually have a choice (which you don't get with some other products).
And you can use it even when you don't own the hardware - this is not always the case.

In other words: there are no downsides. I honestly don't understand why people moan so much about CUDA (other than general hostility towards Nvidia).
Nvidia also completely gimps the GP-GPU performance of their more affordable GPUs. Want that performance? The cheapest option available to normal folk is the $3,000 Titan V.
That's absolutely not true. What you mean is FP32. But some software uses it and some doesn't. It's just an instruction set.
One could say AMD gimps AVX-512 on all of their CPUs.

Many professional/scientific scenarios are fine with FP16.
Phoronix tested some GPUs in PlaidML, which is probably the most popular non-CUDA neural network framework.
www.phoronix.com/scan.php?page=article&item=plaidml-nvidia-amd&num=4
2 things to observe here: how multiple Nvidia GPUs perform in FP16 and as a tasty bonus - how they perform in OpenCL compared to Polaris.
At least I'm a hater who uses GeForces and Quadros. :pimp:
I don't understand why people raise this argument. It simply makes you a miserable hater.
Posted on Reply
#45
bug
notbImagine a world where Nvidia haters actually learn something about the products/company they attack. :eek:

CUDA is free to use (also commercially).
Furthermore, Nvidia cards obviously can run OpenCL programs, so it's not like anyone's forced to use CUDA.
Well, I'm not sure if Nvidia drivers do OpenCL 2.0. There was preliminary support like 3 years ago, but I haven't heard anything about it since.

The point is moot though, the world seems to be set on CUDA by now. More precisely, the world seems to be set on anything that's not OpenCL.
Posted on Reply
#46
ZenZimZaliben
Wow, with that much GPU RAM, your system RAM should typically be double the GPU RAM. Finally a reason to have more than 16 GB of system RAM.
Posted on Reply
#47
Hotobu
ZenZimZalibenWow, with that much GPU RAM, your system RAM should typically be double the GPU RAM. Finally a reason to have more than 16 GB of system RAM.
I've never heard of this rule of thumb/correlation between these two.
Posted on Reply
#48
jeremyshaw
bugWell, I'm not sure if Nvidia drivers do OpenCL 2.0. There was preliminary support like 3 years ago, but I haven't heard anything about it since.

The point is moot though, the world seems to be set on CUDA by now. More precisely, the world seems to be set on anything that's not OpenCL.
After Apple, the de facto chair of the Khronos Group (the OpenCL committee/standards body), burned OpenCL - which Apple themselves created - in favor of their own proprietary "Metal," who is going to have faith in OpenCL's development?
Posted on Reply
#49
ZenZimZaliben
HotobuI've never heard of this rule of thumb/correlation between these two.
Not sure it is a hard rule, but I remember that your GPU will carve out an equal amount of system RAM, if available, for shadow RAM. Might be a myth.

"A quick rule of thumb is that you should have twice as much system memory as your graphics card has VRAM, so a 4GB graphics card means you'd want 8GB or more system memory, and an 8GB card ideally would have 16GB of system memory "

www.pcgamer.com/best-gpu-2016/
Posted on Reply
#50
xkm1948
For all the folks singing the praises of OpenCL, here is a recent GPU compute benchmark from Guru3D:

www.guru3d.com/articles-pages/gpu-compute-performance-review-with-20-graphics-cards,1.html

[Chart: OpenCL Indigo GPU render test]

The entire Radeon line got absolutely destroyed, with a 2060 beating the R7, which was hailed as "GCN, king of compute" or something. Big oof.

In Blender with OpenCL, the Radeons got creamed hard again. Surprisingly, even Navi beats out the R7.

From what I have seen so far, Radeon cards are good for mining crypto-kitties. For professional work or scientific research, their OpenCL-based approach is just too weak or too buggy for day-to-day use.
Posted on Reply