
"Black Myth: Wukong" Game Gets Benchmarking Tool Companion Designed to Evaluate PC Performance

RTX 3080 Gainward Phoenix V1 10 GB LHR / Ryzen 5 5600 / 32 GB DDR4-3000 / B550M / 7500 MB/s SSD / China democracy
4K, DLSS Quality at 75% resolution, frame generation on, "alto" (high) quality / ray tracing off: 85 FPS average

b1   13_08_2024 18_06_49.png



AMD: 4K, "alto" (high), ray tracing off, FSR 3 frame generation on, 75% resolution: 75 FPS average


b1   13_08_2024 19_29_39.png



Bonus: QHD (1440p), Ultra/Very High quality, DLSS at 100% (DLAA), frame generation on: 66 FPS average

b1   14_08_2024 00_31_53.png


The RTX 3080 lives again.
 
Interesting. There's one problem, though: DLSS Frame Generation doesn't work on the RTX 3080, yet it claims to be enabled here... I wonder if it falls back to FSR 3 fluid motion frames in this scenario.
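For what it's worth, here's a rough way to sanity-check what the benchmark should be allowed to enable (a minimal sketch, assuming nvidia-smi is on PATH; the "RTX 40" substring test is my own crude heuristic for Ada cards, not an official capability query):

Code:
import subprocess

# Ask the driver for the GPU name (nvidia-smi ships with the NVIDIA driver).
name = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
    text=True,
).strip()

# DLSS Frame Generation needs the optical-flow hardware on Ada (RTX 40-series).
# The substring check below is a heuristic, not an official capability query.
if "RTX 40" in name:
    print(f"{name}: DLSS-G should be available")
else:
    print(f"{name}: DLSS-G unsupported; expect FSR 3 frame generation or none")

On a 3080 this prints the "unsupported" branch, which is why frame generation showing as enabled in that screenshot looks suspicious.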
 
77 FPS average
28 FPS average with RT High
 

Attachments

  • Wukong15082024.jpg (866.3 KB)
  • WukongRT15082024.jpg (839.9 KB)
This is a news thread, not a thread to post your benchmark in.
There's a thread for that.
 
Not sure how you get higher FPS with this game, since on default settings, without RT, I only get this:

1723790932227.png
 
Not sure how you get higher FPS with this game, since on default settings, without RT, I only get this:


Your driver is a bit older (560.81 has specific optimizations for Wukong), and you're pushing 75% of 3440x1440 with cinematic settings; that's way too heavy for the 3080, and probably for the 4080 too if you're targeting 60 FPS. Also, the benchmark was updated; I don't know whether scores are invalidated across these two minor versions, but in the interest of fairness I would say yes.
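To put numbers on the resolution part (simple arithmetic; I'm assuming the 75% slider scales each axis, the way UE5's r.ScreenPercentage does):

Code:
# Internal render resolution behind a 75% super-resolution slider at 3440x1440.
target_w, target_h = 3440, 1440
scale = 0.75
render_w, render_h = int(target_w * scale), int(target_h * scale)
print(f"{render_w}x{render_h}")  # 2580x1080
print(f"{(render_w * render_h) / (target_w * target_h):.0%} of target pixels")  # 56%

So "75%" is really only about 56% of the pixels per frame, and it's still too heavy for a 3080 at cinematic settings.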
 
For some reason, TSR and FSR allow frame generation when playing on an RTX 3080 card.
I was wondering what the common opinion is: does TSR look almost as good as DLSS at 75%, or is it better?
 
This game will be a GPU seller, I guess. I will post some screenshots later showing CPU utilisation alongside the results pages (I also tested HTT vs. non-HTT (soft parking), as this game seems to use more threads than there are P-cores). CPU power usage didn't go above 40 W even with HTT, and peaked at 79 W during loading. I didn't capture it during shader compilation.

But as an early summary before I upload the pics: on my 4080 Super, it can sustain 1440p 60 FPS (with minor dips on the results page) at the default 75% super-resolution DLSS, with both no RT and low-quality RT. Increasing resolution to 100%, even without RT, drops it to the mid-50s, maybe a bit lower in places. It also drops with RT medium or higher, even at 75% super resolution. I didn't test frame generation, as HAGS is currently off, but I will test that after a reboot.

I am guessing this is going to end up in the TPU testing suite. :)

Interestingly, there is very little shimmering with the below-1440p rendering; I guess DLSS is good at that nowadays. RT makes the glass glimmer weirdly in places, which made me prefer the non-RT lighting.
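If anyone wants to reproduce the thread-usage observation, this is roughly the kind of thing I ran alongside the benchmark (a sketch using the third-party psutil package; the 25% threshold for "busy" is my own arbitrary cutoff):

Code:
import psutil

# Sample per-logical-CPU utilisation once a second while the benchmark loops.
# Cores that never leave ~0% are ones the game effectively doesn't use.
for _ in range(30):
    per_cpu = psutil.cpu_percent(interval=1.0, percpu=True)
    busy = sum(1 for p in per_cpu if p > 25)
    print(f"{busy}/{len(per_cpu)} logical CPUs busy | "
          + " ".join(f"{p:3.0f}" for p in per_cpu))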

OK, here are some screenshots from my own testing.

720p-motionblurstrong-nort-cstatesoff-5000interval.png
720p-nort-cstatesoff-5000interval.png
720p-nort-cstateson-15interval.png
900p-nort-cstateson-15interval.png
1080p-nort-cstateson-5000interval.png
1440p-dlss75-nort-cstatesoff-15interval.png
1440p-dlss75-nort-cstateson-15interval.png
1440p-dlss75-rtlow-cstateson-15interval.png
1440p-dlss75-rtmedium-cstateson-15interval.png
1440p-dlss100off-nort-cstateson-15interval.png
 

Attachments

  • 1440p-dlss75-rthigh-cstateson-15interval.png (2.8 MB)
Same problem as the Witcher 3 UE version: it barely pulls over 370 W from my GPU. Even OW2 at lowest settings pulls 450 W.

Needs more work.

Hard pass for me.
 
Is it me, or is there absolutely no difference in quality between RT on and RT off?
Actually, RT off seems to render the water way better. More evidence that Lumen lighting is better than any RT solution so far, from both a quality and a performance perspective.
 
RTX 3080 Gainward Phoenix V1 10 GB LHR / Ryzen 5 5600 / 32 GB DDR4-3000 / B550M / 7500 MB/s SSD
4K, DLSS Quality at 75% resolution, frame generation on, "alto" (high) quality / ray tracing off: 85 FPS average
(...)
The RTX 3080 lives again.

"alto", what is that?

And why can you get 85 FPS average at 4K with an RTX 3080 10 GB?
 
Same problem as the Witcher 3 UE version: it barely pulls over 370 W from my GPU. Even OW2 at lowest settings pulls 450 W. (...)

But that doesn't mean much. Lower power consumption usually comes from a bottleneck elsewhere in the system, such as the CPU.
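That's easy to check while the benchmark runs (a sketch; the query fields are standard nvidia-smi ones, but the 85% threshold for "GPU-bound" is my own rule of thumb):

Code:
import subprocess
import time

# Log GPU utilisation and board power once a second. Sustained low utilisation
# and low power while FPS is also low points at a bottleneck outside the GPU.
for _ in range(20):
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,power.draw",
         "--format=csv,noheader,nounits"],
        text=True,
    ).strip()
    util, power = (field.strip() for field in out.split(","))
    tag = "GPU-bound" if float(util) >= 85 else "likely CPU- or engine-bound"
    print(f"GPU {util}% @ {power} W ({tag})")
    time.sleep(1)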

"alto", what is that?

And why can you get 85 FPS average at 4K with an RTX 3080 10 GB?

It means "high", but I think this person's settings are completely invalid; for example, DLSS-G isn't compatible with the RTX 3080. Something there isn't quite right.
 
But that doesn't mean much. Lower power consumption usually comes from a bottleneck elsewhere in the system, such as the CPU.

You are correct, but did you read my full comment? I am pushing OW2 (and even CS:GO) on low settings at 450 W, getting over 500 FPS. This game is pushing 25 FPS at 370 W. So a CPU that can push 500 FPS is suddenly only capable of 25? C'mon, man.
 
You are correct, but did you read my full comment? I am pushing OW2 (and even CS:GO) on low settings at 450 W, getting over 500 FPS. This game is pushing 25 FPS at 370 W. So a CPU that can push 500 FPS is suddenly only capable of 25? C'mon, man.

Different engines, different workloads, so yeah. eSports games designed to run on super low end hardware will obviously hit 500 fps on anything half decent
 
Different engines, different workloads, so yeah. eSports games designed to run on super low end hardware will obviously hit 500 fps on anything half decent
The point I am trying to make is different as well? :)

A different, unoptimized engine is exactly the point. A 24-thread CPU hitting over 5 GHz and pushing 500 FPS can all of a sudden only produce 25 FPS on another engine.

Do you see the flawed argument that you are presenting here, just to dig in?
 
The point I am trying to make is different as well? :)

A different, unoptimized engine is exactly the point. A 24-thread CPU hitting over 5 GHz and pushing 500 FPS can all of a sudden only produce 25 FPS on another engine.

Do you see the flawed argument that you are presenting here, just to dig in?

The critical flaw in this entire back-and-forth is that it's apples to oranges to begin with. A 24-thread CPU? Be specific. (Although I'm guessing from the 5 GHz description that it's one of the Ryzen 900X CPUs, which are known to have topology problems that affect workloads like this; there's a reason the 5800X3D and 7800X3D leave both of their R9 900X3D counterparts in the dust.) You just can't expect the exact same behavior from every engine, especially when you are dealing with a suboptimal hardware configuration to begin with.

Wukong uses what is probably the heaviest engine on the market right now and makes extensive use of hardware resources because of the rendering techniques employed. That's simply inevitable; it doesn't mean there is a problem with the software at all.
 
A different, unoptimized engine is exactly the point. A 24-thread CPU hitting over 5 GHz and pushing 500 FPS can all of a sudden only produce 25 FPS on another engine. (...)
A 7800X3D can barely sustain 30 FPS in Lightning Returns in the late-game Ruffian area with crowded NPCs, and that's a game ported to PC 9 years ago; a 9900K can only dream of it. I suspect a 24-thread 7900X might not keep the lows above 30.

You'd be surprised how bad some engines are on the CPU side. There are many, many games that only use 1-2 threads; in those games you could have a 100-core CPU and it wouldn't help you.

This benchmark doesn't spawn enough threads to fully utilise a 24-thread CPU either. A CPU with, say, 8-16 threads and better per-core performance would do better.
 
A 7800X3D can barely sustain 30 FPS in Lightning Returns in the late-game Ruffian area with crowded NPCs, and that's a game ported to PC 9 years ago. (...)

The PC ports of FFXIII are notorious for their extremely poor quality; they're very buggy straight ports with no optimization done whatsoever. Lightning Returns is a bit better than XIII and XIII-2 in this regard, but still not stellar. The reason the Ryzen 900X CPUs (the 12-core models) suffer is their suboptimal topology: the 6+6 layout means they will at best perform like a 6-core processor in most workloads, with potential for negative scaling, since anything above a 6C/12T workload has to deal with internal inter-CCD bottlenecks. The 7900X3D is especially bad because of the resource imbalance between its two CCDs.
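If anyone with a 7900X wants to test the inter-CCD theory, you can pin the game to one CCD and re-run the benchmark (a sketch using the third-party psutil package; "b1.exe" is a placeholder for the actual benchmark executable, and the assumption that logical CPUs 0-11 map to CCD0 is the usual Windows enumeration, worth verifying in HWiNFO):

Code:
import psutil

TARGET = "b1.exe"        # placeholder: the benchmark's actual process name
CCD0 = list(range(12))   # logical CPUs 0-11, assumed to be CCD0 on a 7900X

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == TARGET:
        proc.cpu_affinity(CCD0)  # restrict scheduling to one CCD
        print(f"Pinned PID {proc.pid} to CCD0: {proc.cpu_affinity()}")

If the lows improve with half the cores masked off, the cross-CCD penalty is your answer.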
 
But that doesn't mean much. Lower power consumption usually comes from a bottleneck elsewhere in the system, such as the CPU.



It means "high", but I think this person's settings are completely invalid; for example, DLSS-G isn't compatible with the RTX 3080. Something there isn't quite right.

Yup... there is something wrong with their screenshots...


Myself, I get barely 70 FPS average at 3440x1440 with the same graphics settings...
 
The critical flaw in this entire back-and-forth is that it's apples to oranges to begin with. A 24-thread CPU? Be specific. (...) Wukong uses what is probably the heaviest engine on the market right now and makes extensive use of hardware resources because of the rendering techniques employed.
Unoptimized = heavy. Got it. And an OC'd 7900X with tight RAM timings is "suboptimal" now... LOL!

Tell me one config in Wukong that pushes max wattage on an NVIDIA GPU right now. Just one... I can wait. And if you can't, please stop trying to disprove what others have observed just because you think you know better.
 
Unoptimized = heavy. Got it. And an OC'd 7900X with tight RAM timings is "suboptimal" now... LOL!

Tell me one config in Wukong that pushes max wattage on an NVIDIA GPU right now. Just one... I can wait. And if you can't, please stop trying to disprove what others have observed just because you think you know better.

Before you get infracted for your attitude problem, allow someone who has more than 24 cores and an addiction to trash MP titles to take the time to educate you.

1724034778493.png


90+% throughout the benchmark on a 4090.

Games are a balance. Fast clocks only get you so far, and that's not the game's fault. Anyone who spends more than 10 minutes browsing Newegg knows about this issue. Take my 112-thread W9: certain workloads, even your coveted Overwatch and Valorant, will run like absolute dog shit on my machine.

The problem is not just the engine: it is your BIOS, your OS, your power settings, the engine, and your game settings.

All of these play a part. Since my machine cannot play OW2 at 500 FPS, can I call your game unoptimized?

Let me sit you down real quick. Lower-end parts (you know what I mean: anything less than 56 cores, in my case; see how being obnoxious is addictive?) are easy to run games on. Your boost clocks are controlled by your power settings, which in turn communicate with your BIOS. Those power settings relay information to the Windows kernel and the CPU registers about the level of load on your system. The level of load on your system tells Windows which power states to command; that gets sent to the kernel, which touches the various buses on the motherboard to tell your machine to clock up.

This is how you don't idle at 500 W. Sorry, you don't understand that reference; I meant more like 100 W in, what was it, the 13900K's example? The 7950X's? I get confused.

Anyway, even on lower-end systems (there I go again, being pretentious) the same effect still applies. Like others have mentioned, in my case background tasks like Overwatch 2 produce so little load on my CPU that Windows never commands the CPU to enter a higher power state or even initiate boost clocks. This load effect spans not only games but all sorts of workloads, and it affects lower-core-count systems too.

That is why in some scenarios lower-core-count CPUs perform better: they are so loaded with work that the parts responsible are commanded to run at the highest rate they possibly can to satiate demand.

To fix issues like this, it is up to you, the user, to understand these systems and adjust settings and power levels to the best of your ability if your system is otherwise not performing. For low-threaded apps, that may mean adjusting the boost clock on preferred cores. For highly threaded apps, it may mean allowing longer TAU times. In my case it is defining a minimum processor state: if I don't, my chip will happily sit at 800 MHz for the duration of your game, because the game is so unoptimized for multithreading that my OS and CPU scheduler might as well think it doesn't exist.
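For the minimum-processor-state part, on Windows that's just powercfg (this uses the documented scheme_current / sub_processor / PROCTHROTTLEMIN aliases; run it elevated, and 100 is the value I use, not a universal recommendation):

Code:
import subprocess

# Set "Minimum processor state" to 100% on AC power for the active plan,
# then re-apply the plan so the change takes effect immediately.
subprocess.run(["powercfg", "/setacvalueindex", "scheme_current",
                "sub_processor", "PROCTHROTTLEMIN", "100"], check=True)
subprocess.run(["powercfg", "/setactive", "scheme_current"], check=True)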

There is a lot that game optimizations and engine optimizations can do, but I'm sure you must already know ALL about the options available and must just be focusing on the engine and making blanket statements for some other reason. It couldn't POSSIBLY be that you somehow expected all your mid-range parts to just work together in every single load scenario, based on personal experience from playing a game no one in that particular community even likes over the original.

But I digress; I'm really bad at reading people online.

I get confused when they use terms like:

"please stop trying to disprove what others have observed just because you think you know better"


At any rate, feel free to discuss scores in the actual thread about it. Thanks!

 
Before you get infracted for your attitude problem, allow someone who has more than 24 cores and an addiction to trash MP titles to take the time to educate you. (...) At any rate, feel free to discuss scores in the actual thread about it. Thanks!

Way to comment out of context. Writing an essay about Boxing Day when asked about Christmas is close, but it's not it. Either way, I am done discussing a topic that gets people defensive about an engine.
 

Is it true that right now, with the latest driver, an RTX 3080 10 GB can use DLSS FG in this game?
 
Wukong has already become the #1 most-played single-player game on Steam, despite launching on a weekday, LOL.
 