
Cyberpunk 2077 Does Not Leverage SMT on AMD Ryzen, Lower Core-Count Variants Take a Bigger Hit, Proof Included

What's the chance this was intentional and meant for console versions?
Zero, there's no need to detect CPU vendor or core count on consoles, there is just one hardware configuration
 
Zero, there's no need to detect CPU vendor or core count on consoles, there is just one hardware configuration
Wanted to make that clear, thanks.
 
Cyberpunk 2077 does not leverage simultaneous multi-threading (SMT) on AMD Ryzen processors, according to multiple technical reviews of the game that tested it with various processors. The game does leverage the analogous HyperThreading feature on rival Intel Core processors.

According to them, Cyberpunk 2077 reuses AMD GPUOpen pseudo-code to tune its scheduler for the processor. The code was originally designed to let an application use more threads when an AMD "Bulldozer" processor is present, but it has the opposite effect when a non-Bulldozer AMD processor is detected. The game looks for the "AuthenticAMD" processor brand and "family = 0x15" (AMD Family 15h, i.e. Bulldozer or a derivative), and only then engages the "logical processors" (as identified by the Windows scheduler as part of its Bulldozer optimization). When any other processor is detected, including a newer AMD one, the code makes the game's scheduler send work only to the physical cores, and not to their logical processors.
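For context, here is a minimal sketch (not CD Projekt's or GPUOpen's actual code, just an illustration assuming an x86-64 target and GCC/Clang's <cpuid.h> intrinsic) of reading the vendor string and family that this kind of check keys off:

#include <cpuid.h>   // __get_cpuid (GCC/Clang); MSVC has __cpuid in <intrin.h>
#include <cstdio>
#include <cstring>

int main() {
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;

    // Leaf 0: the vendor string comes back in EBX, EDX, ECX (in that order).
    char vendor[13] = {0};
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    std::memcpy(vendor + 0, &ebx, 4);
    std::memcpy(vendor + 4, &edx, 4);
    std::memcpy(vendor + 8, &ecx, 4);

    // Leaf 1: base family in EAX[11:8]; the extended family (EAX[27:20]) is
    // added when the base family is 0xF, which is how Bulldozer reports 0x15.
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    unsigned family = (eax >> 8) & 0xF;
    if (family == 0xF)
        family += (eax >> 20) & 0xFF;

    std::printf("vendor=%s family=0x%X\n", vendor, family);
    // The questionable logic keys off: vendor == "AuthenticAMD" && family == 0x15.
    return 0;
}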
This problem is really just about detecting the correct thread count, not about leveraging SMT. Even if you spin up 8 threads on an 8-core/16-thread CPU, you don't control whether they run on separate cores or not; scheduling is up to the OS kernel, not the game (see the sketch after this post).

I don't think this qualifies as optimization at all, it's just hardware detection. Software isn't "optimized" for SMT.
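To illustrate that first point: a game's job system typically just asks how many hardware threads exist and spawns that many workers; where each worker actually runs is the kernel's call. A minimal, hypothetical sketch (not the game's code):

#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // Logical processor count as seen by the process (SMT siblings included);
    // a return of 0 means the count could not be determined.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([i] {
            // Which physical core (or SMT sibling) this lands on is decided by
            // the OS scheduler unless the application sets affinity explicitly.
            std::printf("worker %u running\n", i);
        });

    for (auto& w : workers) w.join();
    return 0;
}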

There were some comments about Intel's compiler being involved in these shenanigans, again.
Can't help but think they're onto something.
Of course, those conspiracy theories never die.
Rest assured those theories are nonsense, and this problem has nothing to do with compilers.

How on earth has this AAA game still not been updated for the new Ryzen CPUs?
This is certainly not excusable for a studio of this size and a game with this kind of budget.
But it's still very understandable how it happened: this is not a bug that is glaringly obvious to a normal game tester, and the core development team may only have a very limited set of hardware they test continuously throughout development.

the code says something like:

if (vendor == "AuthenticAMD") {
    if (family == 0x15)              // Bulldozer and derivatives
        threadCount = logicalProcessorCount;
    else
        threadCount = physicalCoreCount;
}
Yeah, it's from 2017, when Win10 didn't yet have its threading optimizations, so nowadays it's pretty pointless to do anything like this; every game should just get all logical threads. In some very old code or rare corner cases it can be manually worked around with affinity in Task Manager or with SMT turned off, but in general that part of the 2017 code is idiotic now.
Should only take a couple of minutes/seconds to fix.
This is about reading the CPUID instruction to detect CPU features; it has nothing to do with Windows.
The same instruction is used to detect CPU features on all platforms, including cpuinfo on Linux.
Unfortunately, some extensions to CPUID are not standardized across CPU makers (and even microarchitectures), so it may require some extra interpretation to detect the correct core and thread count on various AMD, Intel and VIA CPUs.
But nevertheless, this "problem" is well known and only takes a few lines of code to handle correctly (at least for CPUs known so far; I can't guarantee anything for future CPUs).
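As an example of doing it without touching CPUID at all, on Windows the physical-core vs. logical-processor split can be read from the OS. A rough, illustrative sketch using GetLogicalProcessorInformation (error handling trimmed; systems with more than 64 logical processors need the Ex variant):

#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformation(nullptr, &len);   // first call just reports the needed size
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    GetLogicalProcessorInformation(info.data(), &len);

    int cores = 0, threads = 0;
    for (const auto& e : info) {
        if (e.Relationship == RelationProcessorCore) {
            ++cores;
            // Each set bit in the mask is one logical processor on this core.
            for (ULONG_PTR m = e.ProcessorMask; m != 0; m &= m - 1) ++threads;
        }
    }
    std::printf("physical cores: %d, logical processors: %d\n", cores, threads);
    return 0;
}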

This still makes little sense to me. Most threading frameworks used everywhere typically only expose logical threads, with no real way of targeting cores, and the reason for that is very simple: trying to take SMT into the equation is a terrible idea with mostly negative results. The OS will typically prioritize distributing threads across cores first anyway…
You are mostly right.
The bug here is only about detecting the correct number of available threads, not about whether a specific thread is using SMT or not; that's normally not controlled by the application. So the article's claim that SMT is not leveraged is technically incorrect.

AMD's technical dev support is rather like a good-looking single guy at a pub perving at all the hotties, thinking "damn, I could smash that nine times on a Sunday", then going home to jerk off in a sock, or worse, taking the drunk fatty in the corner home and smashing that!
Didn't AMD boast about having hundreds of engineers dedicated to this?
At least this bug comes down to reading a single instruction, not something that requires any major effort from anyone.

-----

One additional thought:
If this is hard, just wait until Alder Lake with its different core types arrives. There may be a lot of software relying on the CPUID instruction to fetch capabilities, but this instruction returns the features of the core it runs on, not of the whole CPU, which can get messy when different cores have different ISA features.
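A hypothetical illustration of that pitfall: since CPUID only describes the core that executes it, code that wants the whole picture on a hybrid CPU has to pin itself to each logical processor in turn and query there. Rough Windows-only sketch (assumes the MSVC-style __cpuid intrinsic and at most 64 logical processors):

#include <windows.h>
#include <intrin.h>   // MSVC-style __cpuid; GCC/Clang use <cpuid.h> instead
#include <cstdio>
#include <thread>

int main() {
    unsigned n = std::thread::hardware_concurrency();   // logical processor count
    if (n > 64) n = 64;   // single-group sketch; >64 CPUs needs processor groups

    for (unsigned i = 0; i < n; ++i) {
        // Pin this thread to logical processor i so CPUID executes on that core.
        SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << i);
        SwitchToThread();   // yield so the scheduler actually moves us there

        int regs[4] = {0};
        __cpuid(regs, 1);   // leaf 1: family/model/feature bits of *this* core
        std::printf("logical cpu %2u: leaf 1 EAX=0x%08X ECX=0x%08X\n",
                    i, (unsigned)regs[0], (unsigned)regs[2]);
        // On a hybrid part, the answer can differ between core types, so caching
        // one global result from whichever core happened to run first is risky.
    }
    return 0;
}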
 
Didn't AMD boast about having hundreds of engineers dedicated to this?
At least this bug comes down to reading a single instruction, not something that requires any major effort from anyone.

Yeah, they did, but ask most devs how helpful they are and you'll most likely get the same response: "What AMD engineers?" In this particular case Nvidia does a better job at actually helping devs, and AMD needs to be doing the same thing.
 
So, I wonder how many of you actually played the game before dropping your pants and dumping your opinion on every outlet you have available... (hint: my guess is very few)

Onto some actual discussion on the topic:

I made the modification to my exe and here is a sample scene. CPU is a 2950X (16c/32t) and GPU is a 3080, running 3440×1440.

Original: CPU is at 50% utilisation, GPU is at 65-76%. Result = 42 FPS.
Modded: CPU is at 66% utilisation, GPU is at 51-62%. Result = 33 FPS.

How many people tested this on their own rigs? Might be worth posting your specs and A/B comparisons, as higher CPU utilisation != a better experience on my end.
 
I have it and have played it. From reviews etc. I was expecting it to be pretty easy to run; it's not. I have a 1440p/75 Hz monitor, and to run it at a playable frame rate I am at medium at the monitor's resolution. If I run it at 1080p it is fine all high. I have a 5700 XT Red Devil, btw.
 
I have it and have played it. From reviews etc. I was expecting it to be pretty easy to run; it's not. I have a 1440p/75 Hz monitor, and to run it at a playable frame rate I am at medium at the monitor's resolution. If I run it at 1080p it is fine all high. I have a 5700 XT Red Devil, btw.
In my experience, AMD cards are on the struggle bus (really any non-RTX card). Without DLSS, we have a harder time running the game at 1440p and higher. I would say check out Digital Foundry's optimized settings video. I can run it at 1440p high-ish settings and get a comfortable 45-60 fps, which is playable just fine (I haven't tried the DF settings yet).
 
In my experience, AMD cards are on the struggle bus (really any non-RTX card). Without DLSS, we have a harder time running the game at 1440p and higher. I would say check out Digital Foundry's optimized settings video. I can run it at 1440p high-ish settings and get a comfortable 45-60 fps, which is playable just fine (I haven't tried the DF settings yet).

Tbh, at the moment I am not too worried. It will probably get better with a few more patches. I will play it as it is until then, or until I manage to jump on the RTX bus. Shame really that Nvidia has kinda forced RTX on the gaming industry, as it means more games will need it to run playably, and until AMD has something comparable their cards will always be on the back foot.
 
I've tried the hex fix and seem to be able to produce a less laggy mess now for my stream. :toast:
 