
DOOM: The Dark Ages Performance Benchmark

I want to play this on the Radeon RX 5700 XT :( Would it be possible to patch the game with software-emulated "ray tracing" / "path tracing", or to disable it entirely? That would be great! :)
 
I want to play this on the Radeon RX 5700 XT :( Would it be possible to patch the game with software-emulated "ray tracing" / "path tracing", or to disable it entirely? That would be great! :)
They managed to get Indiana Jones—built on the same engine—to run on Linux even on GPUs without ray tracing hardware, like the RX 5700 XT.
I wouldn’t pay to play a game like this, but I’ve got to admit... it’s impressive what the Linux community is pulling off:

 
The 9070 XT nearly matches the 5080? Ouch. :nutkick:

Not that I'm rooting for any brand, but considering that the 5080 costs nearly double the 9070 XT, it's a bit awkward. Where are all the "AMD should be cheaper" people now?
I am one of those people; I put that as a response to all the "why aren't people buying AMD" posts.

However, this new 9070 XT card is pretty good, and I did say that in the review thread as well.
 
I am one of those people; I put that as a response to all the "why aren't people buying AMD" posts.

However, this new 9070 XT card is pretty good, and I did say that in the review thread as well.
Pretty good? I'd say being cheaper than the equally performing 5070 Ti, and nearly reaching the almost twice as expensive 5080 in Doom, is freaking awesome! :)

But I get what you mean.
 
- I'm not sure if the 8 GB and 16 GB numbers are mixed up in the WQHD and 1080p diagrams. Maybe some special, super-overclocked 8 GB 5060 Ti was used. Note: I only looked at the numbers, not the text.

WQHD
5060 Ti 8 GB = 53.0 FPS
5060 Ti 16 GB = 50.6 FPS

FAKE 4K
5060 Ti 8 GB = 13.3 FPS
5060 Ti 16 GB = 27.6 FPS
Might be late to the party here, but Hardware Unboxed showed the same thing about the 8 gig version being a little quicker as well.
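For anyone who wants to check the gap in the quoted numbers, the relative differences work out like this (values copied straight from the post above):

```python
# Sanity check of the 5060 Ti 8 GB vs 16 GB FPS numbers quoted above.
results = {
    "WQHD": {"8GB": 53.0, "16GB": 50.6},
    "4K":   {"8GB": 13.3, "16GB": 27.6},
}

for res, fps in results.items():
    diff = (fps["16GB"] - fps["8GB"]) / fps["8GB"] * 100
    print(f"{res}: 16GB is {diff:+.1f}% vs 8GB")
# WQHD: 16GB is -4.5% vs 8GB  (8 GB card slightly ahead)
# 4K:   16GB is +107.5% vs 8GB (8 GB card clearly VRAM-starved)
```

So the 8 GB card being "a little quicker" only holds at WQHD; at 4K the 16 GB card is more than twice as fast.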
 
They managed to get Indiana Jones—built on the same engine—to run on Linux even on GPUs without ray tracing hardware, like the RX 5700 XT.
I wouldn’t pay to play a game like this, but I’ve got to admit... it’s impressive what the Linux community is pulling off:


That's because Linux has a robust, well-supported software/shader-based RT translation layer and drivers that support it really well. DXR has a similar fallback; it's just not as widely supported by drivers, not as well developed by MS, and likely not as performant because of all that.
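As an aside, on the Mesa/RADV side that software fallback can be toggled with an environment variable. A sketch of a launch wrapper, assuming a recent Mesa build (the `dxr` flag only matters for DX12 titles running under Proton and is included for completeness):

```shell
#!/bin/sh
# Sketch: launch a Vulkan game with RADV's software-emulated ray tracing
# on a GPU without RT hardware (e.g. RX 5700 XT). Requires a recent Mesa;
# expect a large performance hit compared to hardware RT.
export RADV_PERFTEST=emulate_rt

# For DX12/DXR titles going through Proton, vkd3d-proton additionally needs
# DXR switched on (not needed for id Tech games, which are native Vulkan):
export VKD3D_CONFIG=dxr

# Hand off to the actual game command, e.g. in Steam launch options:
#   ./rt-wrapper.sh %command%
exec "$@"
```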
 
Well... nice to see my current GPU can at least run it decently with a slight upscale at Ultra Nightmare. Radeon Pro W6600 8 GB, basically an RX 6600. At least it's not as bad as a 3050! Lol
 
Sounds like I'll be waiting on Doom DA; pretty lame system specs. I'm not paying $1K for a gaming GPU just to play a single game. And with Nvidia's BS 50-series drivers, no way I'm paying for crap drivers...
 
Man, looks like the 4090 continues to be a beast of a GPU.
[Chart: performance-upscaling-3840-2160.png]


120 FPS @ 4K with DLSS 4 Balanced, pretty nice.
 
Crazy how bad the 5080 is for what they ask for it.

Even the 4080 beats it at times.
 
The 9070 XT nearly matches the 5080? Ouch. :nutkick:

Not that I'm rooting for any brand, but considering that the 5080 costs nearly double the 9070 XT, it's a bit awkward. Where are all the "AMD should be cheaper" people now?
Uhm, what? Besides the fact that you don't reach conclusions based on a single game (that's insane, lol), the 9070 XT is nowhere near half the price of the 5080. The 5080 is about 50% more expensive (€1,139 vs €756). And there's already the 5070 Ti (as fast as the 9070 XT) for €849; the difference between the two cards is about €90, and the 70 Ti more than makes up for it with a better upscaler, much better PT performance, and lower power draw. Especially for people with SFF PCs where every watt matters, spending €90 for all of the above is a no-brainer. The 9070 XT is a good alternative for people who don't want to buy Nvidia for whatever reason, but if you ignore brand names, the 5070 Ti is clearly the better product.
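For what it's worth, the percentages being thrown around can be checked directly from the quoted EUR prices:

```python
# Double-checking the EUR price gaps quoted above.
prices = {"9070 XT": 756, "5070 Ti": 849, "5080": 1139}

premium_5080 = (prices["5080"] - prices["9070 XT"]) / prices["9070 XT"] * 100
gap_5070ti = prices["5070 Ti"] - prices["9070 XT"]

print(f"5080 over 9070 XT: +{premium_5080:.0f}%")  # +51%, i.e. roughly "50% more expensive"
print(f"5070 Ti over 9070 XT: +{gap_5070ti} EUR")  # +93 EUR, the "about 90" in the post
```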
 
Crazy how bad the 5080 is for what they ask for it.

Even the 4080 beats it at times.
I've only been following PC hardware since around 2011, but this is easily the weakest jump from one gen to the next I've ever seen. Regression on a high-end part like that is crazy.
 
Uhm, what? Besides the fact that you don't reach conclusions based on a single game (that's insane, lol), the 9070 XT is nowhere near half the price of the 5080. The 5080 is about 50% more expensive (€1,139 vs €756).
Here in the UK, the difference is 650 vs 1000. Not double, but close. My point was about their performance in Doom, anyway.

I'm not drawing conclusions based on one game. The 650 vs 1000 GBP difference is bad (should be more like 600 vs 750), but the cards really come close to each other in Doom. This is a nice plus for AMD.

And there's already the 5070 Ti (as fast as the 9070 XT) for €849; the difference between the two cards is about €90, and the 70 Ti more than makes up for it with a better upscaler, much better PT performance, and lower power draw. Especially for people with SFF PCs where every watt matters, spending €90 for all of the above is a no-brainer. The 9070 XT is a good alternative for people who don't want to buy Nvidia for whatever reason, but if you ignore brand names, the 5070 Ti is clearly the better product.
The difference between the 9070 XT and the 5070 Ti is £650 vs £800. The two cards are equally fast, and both come with a 300 W TDP as standard (so I don't understand where you get the lower power consumption from). FSR 4 exists and is quite good. That leaves Nvidia's only advantages as a slightly better, but still not very fast, PT mode, and CUDA, which 9 out of 10 gamers don't need. So the 9070 XT is clearly the better product.
 
Here in the UK, the difference is 650 vs 1000. Not double, but close. My point was about their performance in Doom, anyway.

I'm not drawing conclusions based on one game. The 650 vs 1000 GBP difference is bad (should be more like 600 vs 750), but the cards really come close to each other in Doom. This is a nice plus for AMD.


The difference between the 9070 XT and the 5070 Ti is £650 vs £800. The two cards are equally fast, and both come with a 300 W TDP as standard (so I don't understand where you get the lower power consumption from). FSR 4 exists and is quite good. That leaves Nvidia's only advantages as a slightly better, but still not very fast, PT mode, and CUDA, which 9 out of 10 gamers don't need. So the 9070 XT is clearly the better product.
The prices in your country are whack then; that's not normal. The TDP is also irrelevant, I'm talking about the actual power draw. FSR 4 is nowhere near as popular as DLSS.

I just checked couk; the cheapest cards in stock are 669 for the 9070 XT and 729 for the 5070 Ti.
 
The 9070 XT is a good alternative for people who don't want to buy Nvidia for whatever reason, but if you ignore brand names, the 5070 Ti is clearly the better product.
There is more to this than "the 5070 Ti is better than the 9070 XT".
The 5070 Ti is a cut-down RTX 5080, which means a roughly 39-billion-transistor chip; the 9070 XT is a 53.9-billion-transistor chip.
The 5070 Ti is already fully optimized, meaning no driver can improve performance much further. The 9070 XT, on the other hand, has the hardware ready, but the software and optimizations aren't there yet; it has machine-learning hardware, but right now it's only used for FSR 4.
Right now I don't think any game uses neural denoising or neural texture compression; once those get added to games, it could mean bye-bye Nvidia advantage in ray tracing and power consumption.
All the games you see today use less VRAM on Nvidia for the same graphics because they have better texture compression.
Also, when you see almost equal ray tracing performance between the 5070 Ti and the 9070 XT, it's with the AMD card taking a big performance hit because it doesn't do denoising in hardware yet; once that is solved, the 9070 XT could equal the RTX 5080.
 
The 5070 Ti is already fully optimized, meaning no driver can improve performance much further. The 9070 XT, on the other hand, has the hardware ready, but the software and optimizations aren't there yet; it has machine-learning hardware, but right now it's only used for FSR 4.
What? I highly doubt Blackwell, which essentially just released, is anywhere near "perfectly optimized", especially considering what we've seen from the latest driver branch NV shat out.
In any case, that's hardly relevant. We are not in the dark times of the early 2010s, where driver fixes and optimizations were nigh on necessary for games to even work. The whole point of low-level APIs like DX12/Vulkan is that the onus for making sure games use hardware efficiently is now fully on game developers.
 
There is more to this than "the 5070 Ti is better than the 9070 XT".
The 5070 Ti is a cut-down RTX 5080, which means a roughly 39-billion-transistor chip; the 9070 XT is a 53.9-billion-transistor chip.
Yikes. 38% more transistors and it's neck and neck with the 70 Ti? What the heck, are the drivers that bad?
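The 38% figure follows from the transistor counts quoted above (a quick check; the 39-billion number for the 5070 Ti's chip is the approximation cited in the thread):

```python
# Where the "38% more transistors" figure comes from (counts in billions,
# as cited in the thread; the 39 for the 5070 Ti's chip is approximate).
navi48 = 53.9
gb203_cited = 39.0

more = (navi48 / gb203_cited - 1) * 100
print(f"{more:.0f}% more transistors")  # → 38% more transistors
```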
 
What? I highly doubt Blackwell, which essentially just released, is anywhere near "perfectly optimized", especially considering what we've seen from the latest driver branch NV shat out.
In any case, that's hardly relevant. We are not in the dark times of the early 2010s, where driver fixes and optimizations were nigh on necessary for games to even work. The whole point of low-level APIs like DX12/Vulkan is that the onus for making sure games use hardware efficiently is now fully on game developers.
You have no idea what you're talking about. Nvidia did the work for ray reconstruction, lots of work for ray denoising, and they are far ahead with neural texture compression, which is huge; just look up neural texture compression versus the BC7 that's used today.
AMD, on the other hand, didn't do the work; they only have FSR 4. They have huge potential and processing power in RDNA 4 that just sits idle.
Yikes. 38% more transistors and it's neck and neck with the 70 Ti? What the heck, are the drivers that bad?
They are not bad as in crashing or not working, just very badly optimized: brute-forcing it, throwing processing power at a problem rather than optimizing for it. And it gets complicated with what is to come; many of the proposed new technologies rely on machine learning and lots of training, which takes time and money, and I don't know who will pay for that when the gaming market isn't that profitable.
 
You have no idea what you're talking about. Nvidia did the work for ray reconstruction, lots of work for ray denoising, and they are far ahead with neural texture compression, which is huge; just look up neural texture compression versus the BC7 that's used today.
Then we are talking about different things. Nvidia's RT performance stems from their tech and flat-out superior RT cores. It's not a "driver optimization". The idea that AMD could theoretically just "get" more RT performance out of their existing hardware via drivers is faulty from the get-go. This is like saying DLSS Transformer is superior to FSR 4 because of drivers.
 
Yikes. 38% more transistors and it's neck and neck with the 70 Ti? What the heck, are the drivers that bad?
Now remove the 39% bandwidth advantage that the 5080 has over the 9070 XT and see how much actual performance is left. Chips aren’t priced per transistor but per unit area, and development costs are front-loaded—so your argument misses the point. If AMD can cram more transistors into a smaller die than Nvidia, that speaks in favor of their architecture and efficiency.

If you're looking for areas to optimize, AMD should consider reducing FP64 performance to make room for more shader cores. The 9070 XT delivers nearly twice the FP64 throughput of the 5080; this has little to no impact on typical applications or gaming workloads. I understand that this is a side effect of using the same chip for the professional market, where AMD holds the FP64 niche, but in my opinion, the gaming market is larger and more significant.

And yes, it's clear that performance bottlenecks still exist at the driver level:
"CP is very slow on GFX12 and parsing the packet header is the main bottleneck. Using paired context regs reduce the number of packet headers and it should be more optimal.
It doesn't seem worth when only one context reg is emitted (one packet header and same number of DWORDS) or when consecutive context regs are emitted (would increase the number of DWORDS)."
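For reference, the bandwidth numbers being argued over come straight from bus width times per-pin data rate. A quick sketch using commonly listed specs (the data rates here are assumptions from public spec sheets, not from this thread):

```python
# Memory bandwidth falls out of bus width and per-pin data rate:
#   GB/s = (bus width in bits / 8) * data rate in Gbps
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(256, 20))  # RX 9070 XT, 256-bit GDDR6  -> 640.0 GB/s
print(bandwidth_gb_s(256, 30))  # RTX 5080,  256-bit GDDR7  -> 960.0 GB/s
```

Both cards use a 256-bit bus; the gap comes entirely from GDDR7's higher per-pin rate.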


Then we are talking about different things. Nvidia's RT performance stems from their tech and flat-out superior RT cores. It's not a "driver optimization". The idea that AMD could theoretically just "get" more RT performance out of their existing hardware via drivers is faulty from the get-go. This is like saying DLSS Transformer is superior to FSR 4 because of drivers.
What evidence do you have to support this? PT implementations are handled exclusively by Nvidia. While their hardware solution is certainly more refined in some areas, it's not to that extent. This is, quite simply, a software issue.

 
Now remove the 39% bandwidth advantage that the 5080 has over the 9070 XT and see how much actual performance is left. Chips aren’t priced per transistor but per unit area, and development costs are front-loaded—so your argument misses the point. If AMD can cram more transistors into a smaller die than Nvidia, that speaks in favor of their architecture and efficiency.
Sir, we already have the 4080 and the 4080 S, which have lower bandwidth than the 5080. The 9070 XT is much bigger than those two without a bandwidth disadvantage.

Yes, their architecture and their efficiency are what they are known for, lol.
 
Sir, we already have the 4080 and the 4080 S, which have lower bandwidth than the 5080. The 9070 XT is much bigger than those two without a bandwidth disadvantage.

Yes, their architecture and their efficiency are what they are known for, lol.
Factually false. Both have more bandwidth than the 9070XT. Next embarrassing lie to be debunked? No? That's what I thought.
 
Factually false. Both have more bandwidth than the 9070XT. Next embarrassing lie to be debunked? No? That's what I thought.
Factually irrelevant. The point is there's minimal difference in bandwidth between a 4080 and a 9070 XT, and all that bandwidth doesn't seem very important anyway, judging by the 5080. But sure, let's focus on that and ignore that the 70 XT is newer, with a 40% bigger die, struggling to catch up to the two-year-old 4080. Bruuih.
 
Factually irrelevant. The point is there's minimal difference in bandwidth between a 4080 and a 9070 XT, and all that bandwidth doesn't seem very important anyway, judging by the 5080. But sure, let's focus on that and ignore that the 70 XT is newer, with a 40% bigger die, struggling to catch up to the two-year-old 4080. Bruuih.
Another irrelevant lie. The N48 (359 mm²) is 5.5% smaller than the 4080/4080S (378 mm²) and has nearly 11% less available bandwidth. Nvidia uses custom nodes and GDDR6X, and pours a river of money into studios both directly and indirectly, yet AMD still manages to compete with less hardware.

In my view, this is a major win for AMD; they just need to keep refining their software.
 
Another irrelevant lie. The N48 (359 mm²) is 5.5% smaller than the 4080/4080S (378 mm²) and has nearly 11% less available bandwidth. Nvidia uses custom nodes and GDDR6X, and pours a river of money into studios both directly and indirectly, yet AMD still manages to compete with less hardware.

In my view, this is a major win for AMD; they just need to keep refining their software.
Lol, I wasn't the one who claimed that the XT has 38% more transistors.

"Pour a river of money"... The biggest AAA titles are AMD-sponsored, man. Stop it. Just freaking stop it. They really don't care about you; you don't have to white-knight for them.
 