
Hogwarts Legacy Benchmark Test & Performance Analysis

People are quick to jump to conspiracy theories on the internet, but the fact that the 3090 is barely faster than the A770 in 4K RT means something is not working right.
Do people seriously think that the A770 is supposed to match a 3080 Ti in 4K RT?
On top of that, the RT in this game is nothing special for it to run this poorly.

Edit: Apparently HUB also found a menu bug in this game.
Hardware Unboxed: "OMG! I solved the issue, it's not a Ryzen bug but rather a menu bug. Although DLSS and all forms of upscaling were disabled & greyed out in the menu, frame generation was on for just the 40 series. I had to enable DLSS, then enabled FG, then disable FG and disable DLSS to fix it!" / Twitter
This isn't the first game to have had frame generation on while the setting was greyed out, either.


Oops, we didn't mean to do that; we'll patch it later, after our sponsor finishes looking good.
 
RT, also known as the performance-killing glossifier.
 
Can someone provide a logical reason for the 7900 XTX dropping to the level of a mid-range 3060 when RT is enabled? lol
AMD cards still have less horsepower when it comes to ray tracing. The specific problematic part has so far been the actual tracing of rays. There is a base performance hit for enabling RT due to all the preparatory work that needs to happen - generally BVH generation and the associated overhead - but after that it is all about how many rays can be traced in a given time. So far, Nvidia cards can do more of that, even when comparing Ampere to RDNA3. There are some suspicions about why exactly AMD sometimes takes a bigger hit, primarily revolving around AMD potentially doing some of the work on shaders, but that is a minor detail in the big picture.

AMD does relatively better when a given RT effect involves fewer actual rays. Some AMD-sponsored games with RT do exactly that - a simpler effect, fewer rays - and things are more even. Nvidia, of course, pushes for its strengths - more effects, more rays - knowing that AMD cards will take a hit sooner than its own. And even when the number of rays needed or intended exceeds what Nvidia cards can handle - which happens often enough in games with more and more intensive RT effects - the resulting hit is still smaller for them.

Capability for more rays is obviously better, but there is a balance to be struck, given that it clearly takes dedicated hardware for reasonable RT performance in the first place. AMD has banked on RT not being relevant (yet) and skimped on capability. Nvidia - maybe slightly weirdly - has also been holding back on increasing the relative number of RT cores (only increasing their capability to some degree), so they seem to be hedging their bets a little too.
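To make the "fixed setup cost plus per-ray cost" point concrete, here is a minimal back-of-the-envelope sketch; the numbers (raster time, BVH overhead, rays per millisecond) are made-up illustrations, not measured figures for any real card:

```python
# Illustrative RT frame-time model only -- all figures below are placeholders,
# not benchmarks of real GPUs.
def rt_frame_time_ms(raster_ms, bvh_overhead_ms, rays_per_frame, rays_per_ms):
    """Frame time = raster work + fixed RT setup (BVH build/refit etc.)
    + time spent actually tracing rays."""
    return raster_ms + bvh_overhead_ms + rays_per_frame / rays_per_ms

# Two hypothetical cards: similar setup cost, but card B traces rays faster.
card_a = {"bvh_overhead_ms": 2.0, "rays_per_ms": 1.0e6}   # weaker ray throughput
card_b = {"bvh_overhead_ms": 2.0, "rays_per_ms": 2.5e6}   # stronger ray throughput

for rays in (1e6, 5e6, 20e6):   # light, medium, heavy RT workloads
    t_a = rt_frame_time_ms(8.0, card_a["bvh_overhead_ms"], rays, card_a["rays_per_ms"])
    t_b = rt_frame_time_ms(8.0, card_b["bvh_overhead_ms"], rays, card_b["rays_per_ms"])
    print(f"{rays:>12,.0f} rays: card A {t_a:5.1f} ms, card B {t_b:5.1f} ms")
```

With a light effect the two hypothetical cards land within a millisecond or two of each other; as the ray count grows, the weaker tracer's frame time balloons, which is the pattern described above.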

Edit:
Hogwarts-specific example - ComputerBase tested different RT quality levels, and the 7900 XTX only takes the relatively big hit (compared to the 4080) at Ultra:
 
The Intel Arc A770 card runs this Hogwash™ exceedingly well, perhaps finally showing its true potential. AMD Hoseron™ doesn't do all that well; looks like optimisations can still be done, we shall see.
 
I'm disappointed that DLSS wasn't used in this review for such a poorly performing title (no wonder, it's Unreal Engine).
 
I think that there is something wrong with the RT ON tests here for AMD
Definitely. I clean-installed the newest drivers and ran the game. Not sure what else I could do.
 
Just watched the HUB video; the differences in results are actually very stark. I'm honestly not sure what to believe here.
Not that I'm faulting anyone, but there's something seriously different somewhere.
If I had to point at something, I'd wager it's the CPU; 13900K vs 7700X is a major difference. I'm not so sure I agree with Steve's "I tested the 7900 XTX and the 4090 with the 13900K and saw comparable performance". Perhaps some more testing between CPUs is necessary.
 
@W1zzard, remake the test please.

Seen that one. Steve said (and showed, which was astonishing) that graphics cards should be equipped with at least 12 GB of VRAM if you want to play at any serious resolution and detail level with RT. It was so damn weird seeing the 3060 overtaking the 3080 and 3070 cards due to VRAM, even though the FPS was low.
That said, the results here and on HWUB are strangely different, and not by a few percent but by a noticeable margin. Maybe it is due to the 13900K Wizz used, but from what Steve said, there is (or should be) no difference according to his findings.
What was also weird: the 7900 XTX and 7900 XT on top of the chart at 1080p with RT enabled at Ultra quality.
Simply put, the game needs some improvement here and there.
One more thing that pops into my head: Intel and Nvidia had a game-ready driver and AMD didn't. I wonder whether the driver, once released, will change anything noticeably.
I also wonder if there is a difference between ReBAR on and off for platforms that support it.
 
Hardware Unboxed used a 7700X as their CPU.
Yes, and Steve mentioned it did not bring any significant improvement in FPS when he used the 13900K with a few GPUs. Maybe at some point there will be a follow-up video and the 13900K will be used for comparison.
 
So Hogwarts is a resource hog, who would have thought.

It looks like a fun game, but optimization doesn't seem to be the priority here. Sure makes the new GPUs look good.

@W1zzard any reason for not including the 3080 Ti in the performance charts? Clearly you can't extrapolate the results between the 3080 and 3090 due to the difference in VRAM.
 
Jeez, did they follow Nvidia's guidance for implementing RT and stick with it?
This game is on consoles too, right? And it has RT there as well, right? So what gives?
 
The HUB 1080p Ultra RT results show the Radeons in a far better light than here, massively so. It seems the area used for testing is as critical as the choice of graphics card. Further evidence the game's just broken.
 
What is going on here? The A770 is on par with the 3090 in RT and much faster than the 7900 XTX?! Bruh, this is some wild crap.
It is indeed. Intel's Alchemist is extremely good at ray tracing... not so much at anything else, unfortunately. Its drivers are also kind of crap.

HWiNFO64 is a bit heavy on monitoring (it can lower FPS a little) when used with RTSS (RivaTuner Statistics Server).
Or, if you have an AMD Radeon, use the performance metrics section of the Adrenalin control panel.
Also, for Nvidia, MSI Afterburner includes RTSS.
Those tools (or any tool, in fact) only show VRAM allocation, as far as I know. They cannot differentiate between VRAM used for assets on screen and VRAM used to store extra stuff for later.
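For what it's worth, you can read the same allocation number programmatically; below is a minimal sketch using the pynvml (nvidia-ml-py) package. NVML reports device-level allocated memory, not what the game actively touches each frame, so it has exactly the same limitation as the overlays:

```python
# Minimal VRAM-allocation readout via NVML (Nvidia GPUs only).
# Note: this is memory allocated across all processes, not the subset
# of VRAM the game actually uses per frame.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)       # first GPU
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):                         # older pynvml returns bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)        # .total / .used / .free in bytes
    print(f"{name}: {mem.used / 2**30:.1f} GiB allocated of {mem.total / 2**30:.1f} GiB")
finally:
    pynvml.nvmlShutdown()
```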

People are quick to jump to conspiracy theories on the internet, but the fact that the 3090 is barely faster than the A770 in 4K RT means something is not working right.
Do people seriously think that the A770 is supposed to match a 3080 Ti in 4K RT?
On top of that, the RT in this game is nothing special for it to run this poorly.

Edit: Apparently HUB also found a menu bug in this game.
Hardware Unboxed: "OMG! I solved the issue, it's not a Ryzen bug but rather a menu bug. Although DLSS and all forms of upscaling were disabled & greyed out in the menu, frame generation was on for just the 40 series. I had to enable DLSS, then enabled FG, then disable FG and disable DLSS to fix it!" / Twitter
So with a 40-series card, frame generation is on whether you want it or not... that's shady! Very shady! No wonder 40-series cards do so much better in reviews. :shadedshu:
 
Was ReBAR enabled on the Radeons? For these cards this is very important.
It's enabled.

 
any reason for not including the 3080 Ti in the performance charts? Clearly you can't extrapolate the results between the 3080 and 3090 due to the difference in VRAM.
I can only include so many cards. I didn't include any of the GeForce 30-series Ti cards except the 3090 Ti, because it's the fastest GeForce 30 card.
 
And this is why I don't trust commercial reviewers when they post their results. Here's a video of my 7900 XTX (default settings, no OC): at 1440p I was averaging 50 FPS, which is right below the 4080. How are you getting FPS in the teens? I don't understand.
Watch user videos with the stats overlay running. Tomorrow I'll be posting a 4K RT Ultra video comparing default vs OC. At 4K the 7900 XTX gets around 20 FPS outside; inside, the FPS shoots up to about 45.
 
This game is all over social media today for all the wrong reasons, but one thing stood out: one user found .ini files you can customise the ray tracing in, and found the defaults to be absolute garbage for both performance and quality.

The FB screenshots couldn't really show much of a difference, but the FPS values were a lot higher with his changes (and since that's some random FB user, I'm sure better guides will exist elsewhere soon enough).
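For reference, Unreal Engine titles typically read console-variable overrides from the Engine.ini in the user's save folder, so the tweak being described would look something like the sketch below. The exact cvars Hogwarts Legacy respects, the file path, and sensible values are assumptions on my part, not that user's actual settings:

```ini
; Appended to Engine.ini (path assumed:
; %LOCALAPPDATA%\Hogwarts Legacy\Saved\Config\WindowsNoEditor\Engine.ini)
[SystemSettings]
; Fewer rays per pixel for reflections/AO trades quality for performance.
r.RayTracing.Reflections.SamplesPerPixel=1
r.RayTracing.AmbientOcclusion.SamplesPerPixel=1
; Skip RT reflections on very rough surfaces where they are barely visible.
r.RayTracing.Reflections.MaxRoughness=0.4
```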


High-performance storage and RAM will alleviate that issue.
Someone on DDR4-2133 with a SATA SSD would have a stutter fest, but review-level hardware with high-speed RAM and storage behind it won't suffer anywhere near as much.


And yes, it really is that drastic an issue - I fixed a friend's system with a weak 2 GB GTX 960 and nearly doubled her FPS in DX12 games by OCing her RAM from 2133 to 2667. Higher VRAM is a buffer, but if a system can stream that data fast enough it's not needed (though slow streaming can cause those 1% and 0.1% lows to dip).

One of my Intel machines (i7 6700, locked to DDR4-2133) has great CPU performance but was *garbage* with a 4 GB GTX 980, with lots of stuttering - all gone with an 8 GB 1070 - the exact opposite fix to the same problem.
If VRAM were seriously running out, even with OC'd DDR5 DRAM and PCIe 5.0 NVMe storage, you would still see major stuttering and dropped-frame issues in the 1% lows. That card has 760 GB/s of bandwidth for a reason.
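To put rough numbers on that gap, here is a quick back-of-the-envelope sketch (theoretical peak dual-channel figures only; real-world streaming throughput, and the PCIe link in between, will be lower):

```python
# Theoretical peak dual-channel DRAM bandwidth:
# transfers/s * 8 bytes per channel * 2 channels. Illustrative only.
def dual_channel_bw_gbs(transfer_rate_mts, bus_bytes=8, channels=2):
    return transfer_rate_mts * 1e6 * bus_bytes * channels / 1e9

for speed in (2133, 2667, 3200, 6000):   # DDR4-2133/2667/3200, DDR5-6000
    print(f"DDR-{speed}: ~{dual_channel_bw_gbs(speed):.0f} GB/s")

# Even ~96 GB/s of fast DDR5 is a small fraction of the ~760 GB/s of VRAM
# bandwidth mentioned above, so streaming from system RAM can soften a VRAM
# shortfall but not hide it.
```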

The HUB 1080p Ultra RT results show the Radeons in a far better light than here, massively so. It seems the area used for testing is as critical as the choice of graphics card. Further evidence the game's just broken.
I think that's a big part of it. Getting consistent results is like pulling teeth.
 
The moment you hand over $2k for a 4090, you're told it's a high-refresh-rate 4K gaming card. It should stay that way for years to come.
 
The moment you hand over $2k for a 4090, you're told it's a high-refresh-rate 4K gaming card. It should stay that way for years to come.
It should. But it doesn't. Therein lies Nvidia's marketing power.
 