
Horizon Forbidden West Performance Benchmark

W1zzard

Horizon Forbidden West finally brings the PS5 exclusive to the PC, with stunning visuals and an excellent gaming experience. There's also support for multiple upscalers and DLSS 3 Frame Generation. In our performance review, we're taking a closer look at image quality, VRAM usage, and performance on a wide selection of modern graphics cards.

 
A game that actually runs decently while looking good: what a foreign concept in 2024. I haven't played much, but it seems to run great on a 5950X/5800X, which are a bit dated by today's standards.

Thanks for the comprehensive benchmark.
 
Nice review.

I reckon there is a bit more performance to be pulled from the NVIDIA cards, both through patches and driver updates.

RT reflections would have been nice and would have added a lot to the visual presentation.

Honestly, it wouldn't have made that much of a difference... it takes place in nature, where there aren't all that many reflective surfaces aside from water.

A fully path-traced image would of course have looked superior, but if I could only have chosen one RT effect for this game, it would have been shadows / ambient occlusion.
 
A fully path-traced image would of course have looked superior, but if I could only have chosen one RT effect for this game, it would have been shadows / ambient occlusion.

For me, even though the cost is high, multi-bounce global illumination would be my preference had they included RT, but honestly the game is more than fine as is.
 
"The game is also DRM-free and works fine completely offline."

Great news, and hopefully it's coming to GOG soon.
 
I actually prefer the settings on medium/high. Very high just looks "meshy" and forced, not sure how else to describe it. I've noticed that happening a lot with modern games, but I can always turn settings down a notch: it looks better and I gain FPS, so it is what it is.
 
It's been running great on my 5950X and 7900 XT combo. I haven't experienced any crashes yet. I wish the FSR3 implementation had been ready in time for launch; I may pause my playthrough and wait for it or a DLSS3-to-FSR3 mod.
What resolution are you playing at that you need upscaling on a 7900XT?
 
What resolution are you playing at that you need upscaling on a 7900XT?

Or what refresh rate is he aiming for, and on what monitor? Some people like super-high refresh rates regardless of resolution. But yes, I agree with you: the 7900 XT is pretty damn close to a 4090 at 1080p and 1440p.
 
I reckon there is a bit more performance to be pulled from the NVIDIA cards, both through patches and driver updates.
This isn't all that uncommon in games where no RT is involved; the differences in raster performance aren't particularly noteworthy.
 
I'm surprised to see that the 6800XT is consistently faster than the 7800XT
72 vs. 60 CUs at similar average clocks, twice the Infinity Cache.

Considering how AMD most probably missed their target clocks on their GCDs made on N4 by a bunch, I think the 7800XT only really goes above the 6800XT when there's driver optimization for RDNA3.
If there's no hand-written code from AMD for the driver to make use of the dual-issue FP32 ALUs, the 7800XT will stay behind the 6800XT.
 
72 vs. 60 CUs at similar average clocks, twice the Infinity Cache.

Considering how AMD most probably missed their target clocks on their GCDs made on N4 by a bunch, I think the 7800XT only really goes above the 6800XT when there's driver optimization for RDNA3.
If there's no hand-written code from AMD for the driver to make use of the dual-issue FP32 ALUs, the 7800XT will stay behind the 6800XT.
The GCDs on Navi 31 and 32 are on TSMC N5; it is the APUs that are on N4.
As for the performance, I think it is more likely that AMD overestimated their software teams' ability to utilise the dual-issue FP32 on RDNA3.
Also, the reduction in Infinity Cache size does not help: Navi 21 has 128 MB, while Navi 31 only has 96 MB, and Navi 32 is down to 64 MB, less than the 96 MB in the 6700 XT.
Navi 31 has a 384-bit memory bus and significantly faster memory to compensate; Navi 32 basically got nothing vs. Navi 21.
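
To put rough numbers on the dual-issue point, here's a back-of-the-envelope sketch. The boost clocks (2.25 GHz and 2.43 GHz) are my assumptions based on typical figures for these cards, not measurements from the review:

```python
# Theoretical peak FP32 throughput: CUs x 64 lanes x 2 FLOPs per FMA
# x clock. RDNA3 can dual-issue FP32 (VOPD), doubling the peak, but
# only when the shader compiler finds co-issuable instruction pairs.
def tflops(cus, clock_ghz, dual_issue=False):
    flops = cus * 64 * 2 * clock_ghz * 1e9
    if dual_issue:
        flops *= 2
    return flops / 1e12

print(f"6800 XT, 72 CU @ ~2.25 GHz:             {tflops(72, 2.25):.1f} TFLOPS")
print(f"7800 XT, 60 CU @ ~2.43 GHz, no VOPD:    {tflops(60, 2.43):.1f} TFLOPS")
print(f"7800 XT, 60 CU @ ~2.43 GHz, dual-issue: {tflops(60, 2.43, True):.1f} TFLOPS")
```

Without dual-issue pairing, the 7800 XT's peak (~18.7 TFLOPS) sits below the 6800 XT's (~20.7 TFLOPS), which lines up with the benchmark result being discussed here.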
 
I had to use dynamic resolution at high 1440p, otherwise I'd get 45-60 FPS, which is unplayable. 60 FPS for my 2080 Ti turned out to be on the optimistic side of things, and that is not even on max settings.
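
For anyone curious what dynamic resolution is doing under the hood: conceptually it's a feedback loop that nudges the render scale toward a frame-time budget. A minimal hypothetical sketch, not the game's actual implementation:

```python
# Hypothetical dynamic-resolution controller: each frame, nudge the
# render scale based on how far the GPU frame time is from the budget.
TARGET_MS = 1000.0 / 60.0        # 60 FPS budget
MIN_SCALE, MAX_SCALE = 0.5, 1.0  # clamp range for the render scale

scale = 1.0

def update_render_scale(gpu_frame_ms):
    global scale
    # Proportional step: over budget lowers resolution, under raises it.
    error = (TARGET_MS - gpu_frame_ms) / TARGET_MS
    scale = max(MIN_SCALE, min(MAX_SCALE, scale + 0.1 * error))
    return scale

# A 20 ms frame (50 FPS) at 2560x1440 drops the scale slightly:
s = update_render_scale(20.0)
print(f"render scale {s:.2f} -> {int(2560 * s)}x{int(1440 * s)}")
```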
 
Is this test system reasonable? It seems madly overclocked to me. 330 W and 6 GHz P-cores? This is chiller territory. I cannot imagine how that 280 mm AIO can cool this.

tchpw test sys.png
 
These are GPU tests so the CPU needs to be as small a bottleneck as possible to isolate GPU performance.

The 14900K is a native 6 GHz part, and while it can peak at 330 W in all-core loads, it usually consumes less than 200 W while gaming because no game comes close to saturating it. No problem for a 280 mm AIO.

power-games-compare-vs-7800x3d.png
 
Another game that no one can run right; a 4090 should run the game at 120 FPS at 4K.
You can play the game at very high 1080p60 with a ~$330 last-gen card like the 6700 XT, but apparently that means no one can run the game?
 
I actually prefer the settings on medium/high. Very high just looks "meshy" and forced, not sure how else to describe it. I've noticed that happening a lot with modern games, but I can always turn settings down a notch: it looks better and I gain FPS, so it is what it is.
What do you mean? Do you have any more detailed examples?
 
These are GPU tests so the CPU needs to be as small a bottleneck as possible to isolate GPU performance.

The 14900K is a native 6 GHz part, and while it can peak at 330 W in all-core loads, it usually consumes less than 200 W while gaming because no game comes close to saturating it. No problem for a 280 mm AIO.

power-games-compare-vs-7800x3d.png
330 W is nowhere in Intel's specs, and 6 GHz is the maximum frequency for two selected cores; the CPU will never run at that frequency under a gaming load.

When I saw 6 GHz in the test system specs, I presumed it meant all P-cores overclocked to 6 GHz, which IS A MAD OVERCLOCK!
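
On Linux there's a direct way to check what the package actually draws while the game runs: the RAPL energy counter exposed through the powercap sysfs interface. A rough sketch (standard intel-rapl path; reading the counter usually requires root):

```python
# Rough CPU package-power probe via Linux RAPL (powercap sysfs).
# Run while the game is active; the counter usually needs root to read.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
time.sleep(5.0)                  # sampling window
e1, t1 = read_uj(), time.time()

# Counter is in microjoules and can wrap; this sketch ignores wrap.
print(f"average package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")
```

On Windows, HWiNFO's "CPU Package Power" sensor reports the same telemetry.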
 
I never cared about FSR vs. DLSS that much, being older and not having top-notch eyesight, but THE WATER LOOKS PIXELATED ON FSR!
Other than that the game runs okay. They managed to make higher-end GPUs relevant by making the shadows extremely detailed; NVIDIA is probably disappointed they couldn't do the whole works here, but who cares anymore when they're making so much money with AI.
So turning shadows down to medium brings a huge performance improvement.
 
Looks like a solid port, like God of War and Zero Dawn before it.
 
Oof, seems I will need to upgrade my GPU, just not this generation.

Luckily I have many other titles to catch up on. I enjoyed Death Stranding; it kept me busy for weeks. Next up, Control.
 
On my 12100F/3060 Ti at 2560x1080, I've had to tweak some settings, like Textures down from Very High to High, because Very High caused issues over time on my end; luckily the difference is something I cannot notice, so it's whatever.
Shadows went down to Medium, since that's a good ~10 FPS difference vs. High and it still looks okay to me, and Level of Detail went from Very High to High, though that one I could probably keep maxed. The rest is maxed.
DLSS Quality is also enabled, but that's something I do in almost every game that supports it anyway; it seems to be a decent implementation to me.

Overall I can't complain: it runs better than most new games while still looking good.
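
For reference, those upscaler quality modes boil down to fixed per-axis render-scale factors. A quick sketch using the commonly cited DLSS factors (these are the usual published numbers, not values confirmed for this particular game):

```python
# Internal render resolution per DLSS mode, using the commonly cited
# per-axis scale factors (assumed here; individual games may differ).
MODES = {
    "Quality":           2 / 3,
    "Balanced":          0.58,
    "Performance":       1 / 2,
    "Ultra Performance": 1 / 3,
}

def internal_res(width, height, mode):
    scale = MODES[mode]
    return int(width * scale), int(height * scale)

# The 2560x1080 ultrawide from the post above:
for mode in MODES:
    w, h = internal_res(2560, 1080, mode)
    print(f"{mode:>17}: {w}x{h}")
```

So DLSS Quality at 2560x1080 renders internally at roughly 1706x720 and upscales from there, which is where the FPS headroom comes from.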
 