
DOOM: The Dark Ages Performance Benchmark

I think we should agree to disagree and leave it at that.
Now this is important and might be of interest to some. As pointed out by some publications, the RTX 5000 series is slower than the 4000 series in some games. As usual, reviewers are brainwashed by Nvidia marketing and can't comprehend how the "new" architecture, version...higher number bullshit is slower.
The details are RIGHT IN THE SPECIFICATIONS:

RTX 4070 Ti SUPER
- Pixel Rate: 250.6 GPixel/s
- FP32 (float): 44.10 TFLOPS
- AI TOPS: 706

RTX 5070 Ti
- Pixel Rate: 235.4 GPixel/s
- FP32 (float): 43.94 TFLOPS
- AI TOPS: 1406

RTX 4080
- Pixel Rate: 280.6 GPixel/s
- FP32 (float): 48.74 TFLOPS
- AI TOPS: 780

I don't know if they measured TOPS the same way for the 4000 series and the 5000 series, but that silicon die has way more AI capability; they even cut PhysX support from the chip to make room for probably even more AI.
So the expectation for the 5070 Ti to beat or even equal the 4080 is not realistic, unless the game heavily uses tensor cores for some future neural denoising and texture compression; the DLSS transformer model might be too light a workload for the tensor cores.
One thing is clear: Nvidia's architecture is not strong in raster, and if they care to compete in the gaming space, they need to find a way to put the tensor cores to work. More complicated upscalers that need more power, more complex ray tracing that needs heavier denoising, neural texture compression, etc.
All of this can be perfect for business: make an architecture for AI customers and adapt the same architecture for gaming.
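If anyone wants to sanity-check those FP32 numbers, here's a quick back-of-the-envelope in Python; the shader counts and boost clocks are the TechPowerUp database figures, so treat them as my assumptions:

```python
# Back-of-the-envelope FP32 check: peak TFLOPS = 2 FLOPs per shader per
# clock x shader count x boost clock. Shader counts and boost clocks are
# the TechPowerUp database figures (assumptions, not measured values).
cards = {
    "RTX 4070 Ti SUPER": (8448, 2.610),  # (shaders, boost clock in GHz)
    "RTX 5070 Ti":       (8960, 2.452),
    "RTX 4080":          (9728, 2.505),
}
for name, (shaders, boost_ghz) in cards.items():
    tflops = 2 * shaders * boost_ghz / 1000
    print(f"{name}: {tflops:.2f} TFLOPS FP32")
# -> 44.10, 43.94, 48.74, matching the spec-sheet numbers above.
```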
 
One thing is clear: Nvidia's architecture is not strong in raster, and if they care to compete in the gaming space
How do people reach these outlandish conclusions? Seriously, WHAT? The 9070 XT has 54B transistors vs 45B for the 5080, and yet the latter is 25% faster in raster and 39% faster in RT. With a smaller die. But Nvidia's architecture isn't good at raster. :banghead:
 
The double standard; the times you're called out (with receipts) and deny it, entrench yourself, or gaslight; the false equivalences; and overall, the disparity between how you say you act and how you actually act.
Without examples, those are just words.

There is no double standard here. I've got my own standards and I'm true to them all the time. It's only that your blood is too green to accept it and move on (proven by the fact that only you got infuriated by my initial comment and keep denying that it's about Nvidia fans, not the company, even after I expressly stated so).

It's far from whenever - you're very active here, and this time it was because you said two very conflicting things within a very short timeframe.
You're the one being very active. I'm only replying to your accusations.

I don't get it... If you hate my comments so much, why don't you just put me on ignore?

It's £70 cheaper in your region, but it's a slower card, especially in RT where the difference is 20%. So again, objectively, the price isn't really any better than Nvidia's. And that's in your specific region, because in other regions it's even more expensive than the 5070 Ti. And even if we pretend they're both equal, a 70-pound difference makes you HATE Nvidia's pricing? While completely ignoring the overpriced POS that is the non-XT 9070?
I don't care about RT performance. I care much more about solid Linux support and the 8-pin power connector. Do you see how we're interested in different things? That's what makes the world great, isn't it?

The 9070 non-XT is within 10% of the XT, and it can even match it with some OC/UV in some cases. It's not as bad a card as some people think.

So basically you hate Nvidia's pricing because you live in GB, where one Nvidia model is 10% more expensive than a (let's say slightly) worse AMD card, and if you lived in the EU, you'd be hating AMD's pricing? Does that make sense to you? If yes, sure, let's just move on.
I'm not answering hypotheticals. So if you were living in the UK, and you didn't care about RT, you'd love AMD, right? What's the point of even saying that?

You want what you want from a GPU, and I want what I want, ok? There's no need to convince each other because we're both right in our own PoV. There is no objectivity in an "I hate Nvidia's pricing" statement, and that's fine.
 
How do people reach these outlandish conclusions? Seriously, WHAT? The 9070 XT has 54B transistors vs 45B for the 5080, and yet the latter is 25% faster in raster and 39% faster in RT. With a smaller die. But Nvidia's architecture isn't good at raster. :banghead:
I never mentioned AMD in that post; I am comparing Nvidia against Nvidia. Trivial, nonsense matters like AMD vs Nvidia are of no interest to me anymore.
I get it, Nvidia is the best. On that note, let's assume there is only Nvidia in this world and think about how gaming will evolve in the future.
On one hand, the ones that need chips for actual gaming, like consoles, will want a GPU architecture focused on gaming that delivers the best performance per watt at the lowest cost. Could AI make its way into consoles? Who knows.
On the other hand, the PC desktop is dominated by AI-focused GPUs.
So we have two GPU makers. One, whom I will not name so JustBenching doesn't go crazy here, wanted to make something for consoles and then the same thing overpowered for PC, and they failed on PC. Then Nvidia, at some point a gaming GPU maker, but now they make GPU architectures for the AI business (92% of revenue) and then underpower that architecture for the PC market.
What will the future hold for us? Will they battle each other? The one I will not name could be nasty and make the PS6 GPU heavy on raster with some basic tensor cores for upscaling/ray denoising, just out of spite to mess with Nvidia. Or they could agree not to complicate things and waste R&D, focus on AI, and suck on that sweet government money teat for eternity.
What do you guys think?
 
I never mentioned AMD in that post; I am comparing Nvidia against Nvidia. Trivial, nonsense matters like AMD vs Nvidia are of no interest to me anymore.
You said if Nvidia wants to compete in the gaming space, yada yada. Who would they compete against?

I still don't fully get your point. Are you saying that, hypothetically, if they removed support for AI etc. they could just make a faster GPU in pure raster? Well, sure, I guess. But shouldn't everyone else that doesn't put as much emphasis on AI, LLMs, tensor cores and the whole nine yards, while also having much bigger dies, be stomping them already? In fact, I think the fact that Nvidia isn't even putting that much emphasis on pure raster and still remains the fastest (by far, btw) at each given die size shows how great their architecture is for raster performance, no?
 
JustBenching, we already established Nvidia is the best; they should name your firstborn, and a tattoo is mandatory.
I'm just saying, if consoles have a GPU that is too different from what we have on desktop, we could get PC ports that are buggy and need a GPU way stronger than what's in the console. A win for them, but not for us.
Just think a bit: you've seen a lot of ports from consoles, and some of them have wild requirements compared to the GPU power they have on console. So that architecture is not bad, it's excellent on consoles, but when those games hit PC, suddenly we need a GPU more expensive than the whole fricking console.
Nobody wants this crap happening, meaning us consumers; they actually want this.
And please don't give me "but PC has higher resolution, better shadows"; if you put the same GPU in a PC, you would get lower performance than the console.
 
And please don't give me "but PC has higher resolution, better shadows"; if you put the same GPU in a PC, you would get lower performance than the console.
I have seen zero evidence of that being the case. I've heard people talk about it a lot, but it just never materializes in practice. We've had lots of crappy (in terms of optimization) games on PC lately, but most / all of them were super crappy on the console as well. I'm talking 648p resolution and almost single-digit framerates on a PS5 (Jedi Survivor, Forspoken, Gollum, etc.)

DF tested this, btw: a 3070 / 6700 XT runs most games faster than the PS5 does. Sure, there is the odd game here and there that's extra crappy on the PC, but the same applies in reverse as well.

Realistically, what could the PS6 even have? A 9070 XT is currently AMD's fastest card and it pulls over 300W (I'm bringing up AMD here because that's who's most likely making the PS6 hardware). A console that needs to run at less than 200W for the whole system realistically can't have a card in the next two years that's faster than a 9070 XT.
 
I have seen 0 evidence of that being the case. I've heard people talk about it a lot, but it just never materializes in practice. We've had lots of crappy (in terms of optimization) games on pc lately, but most / all of them were super crappy on the console as well.
I don't even know why I entertain this conversation. How old are you?
Let's take the latest example, The Last of Us Part 2, Digital Foundry's video [embedded]: he says both Part 1 and 2 have exaggerated requirements on PC, and that an RTX 3060 set to graphics similar to the BASE PS4, AGAIN, BASE PS4, couldn't keep a constant 60 fps despite being a much faster GPU. He even compared specs.
[screenshot: RTX 3060 vs PS4 spec comparison]

So yeah, this can happen if the architectures are too different; no game studio will completely rewrite a game so it can work on cheap PC GPUs.
Happy now?
 
I don't know if they measured TOPS the same way for the 4000 series and the 5000 series, but that silicon die has way more AI capability; they even cut PhysX support from the chip to make room for probably even more AI.
Uh, no, they did not cut PhysX support from the chip lmao. They removed support for 32-bit CUDA code from the drivers for Blackwell (GeForce and professional RTX) cards and from the latest CUDA toolkit. And since most PhysX games use 32-bit code, that CUDA code doesn't run anymore.

There's nothing specifically on the GPU to support PhysX, that's the entire point of CUDA. It still runs 64-bit CUDA (and PhysX) code just fine.

And they didn't change anything about the chip to "make room" for AI. The core architecture and space allocation is essentially unchanged. The increased AI TOPS come from FP4 support in Blackwell, while Ada only had FP8.


So the expectation for the 5070 Ti to beat or even equal the 4080 is not realistic, unless the game heavily uses tensor cores for some future neural denoising and texture compression; the DLSS transformer model might be too light a workload for the tensor cores.
It matched the 4080 Super in the general reviews, so yes, it is underperforming in this game. Blackwell "punches above its weight" compared to what the specs would suggest. For instance, the 5070 matches the 4070 Super despite having 1000 fewer CUDA cores, and barely more than the base 4070. The better power efficiency and power management really seem to benefit Blackwell.

I don't even know why I entertain this conversation. How old are you?
Let's take the latest example, The Last of Us Part 2, Digital Foundry's video [embedded]: he says both Part 1 and 2 have exaggerated requirements on PC, and that an RTX 3060 set to graphics similar to the BASE PS4, AGAIN, BASE PS4, couldn't keep a constant 60 fps despite being a much faster GPU. He even compared specs.

Happy now?
The Remastered versions are the only versions available for PC. And they were made for the PS5, which is about as good as a 3060. So this is another example of a game (in this case, the remastered version), just being crappy. Comparing it to the PS4 version is wrong, because it's a different game.
 
I don't even know why I entertain this conversation. How old are you?
Let's take the latest example, The Last of Us Part 2, Digital Foundry's video [embedded]: he says both Part 1 and 2 have exaggerated requirements on PC, and that an RTX 3060 set to graphics similar to the BASE PS4, AGAIN, BASE PS4, couldn't keep a constant 60 fps despite being a much faster GPU. He even compared specs.
[screenshot: RTX 3060 vs PS4 spec comparison]
So yeah, this can happen if the architectures are too different; no game studio will completely rewrite a game so it can work on cheap PC GPUs.
Happy now?
Uhm, the video you just posted disproves your whole point. In their own words, "this is a huuuge outlier compared to every other PC game we've tested". I mean, I'd already predicted that this would be the basis of your whole argument: you pick one or two outliers (completely ignoring the outliers that go the other way) and build your argument on them. No, sorry, this is not honest discourse. This is you leading the evidence.
 
And they didn't change anything about the chip to "make room" for AI. The core architecture and space allocation is essentially unchanged. The increased AI TOPS come from FP4 support in Blackwell, while Ada only had FP8.
I can admit I am wrong, so enlighten me. As far as I know, they mostly measure in INT8. I guess Nvidia does FP4 now, but is it measured in FP4 or FP8? Where does that very high number come from? I mean, the 9070 XT can do 1557 TOPS in FP4, and that's the worst GPU according to JustBenching. That can't be the case here; Nvidia must be superior somehow. Don't ruin JustBenching's day here.
It matched the 4080 Super in the general reviews, so yes, it is underperforming in this game. Blackwell "punches above its weight" compared to what the specs would suggest.
Not really at all. If you cherry-pick, then the 5070 Ti can match the standard RTX 4080 in some games, but if you go crazy like Hardware Unboxed and test 16 games, things change. Proof: https://www.techspot.com/review/2955-nvidia-geforce-rtx-5070-ti/
Look at the charts and you will see more games where the 5070 Ti is below the 4080. Sure, you're gonna say they are biased and all that. Classic denial.
The Remastered versions are the only versions available for PC. And they were made for the PS5, which is about as good as a 3060. So this is another example of a game (in this case, the remastered version), just being crappy. Comparing it to the PS4 version is wrong, because it's a different game.
Again, wrong. If you watched a little of that video, you would see he matched the graphics of the PS4 to prove a point. It doesn't matter what version of the game it is; it's clearly not a PS4 version emulated on PC, it's something adapted/coded/ported to run on PC.

I have seen zero evidence of that being the case. I've heard people talk about it a lot, but it just never materializes in practice. We've had lots of crappy (in terms of optimization) games on PC lately, but most / all of them were super crappy on the console as well. I'm talking 648p resolution and almost single-digit framerates on a PS5 (Jedi Survivor, Forspoken, Gollum, etc.)
Show him evidence he is wrong, and then...
Uhm, the video you just posted disproves your whole point. In their own words, "this is a huuuge outlier compared to every other PC game we've tested". I mean, I'd already predicted that this would be the basis of your whole argument: you pick one or two outliers (completely ignoring the outliers that go the other way) and build your argument on them. No, sorry, this is not honest discourse. This is you leading the evidence.
I guess ZERO EVIDENCE is not so zero after all; no matter how much evidence I bring, it wouldn't matter.
 
I can admit I am wrong, so enlighten me. As far as I know, they mostly measure in INT8. I guess Nvidia does FP4 now, but is it measured in FP4 or FP8? Where does that very high number come from? I mean, the 9070 XT can do 1557 TOPS in FP4, and that's the worst GPU according to JustBenching. That can't be the case here; Nvidia must be superior somehow. Don't ruin JustBenching's day here.
Nvidia's TOPS number for Blackwell is FP4 TOPS, because they added native 4-bit support. 8-bit and larger performance is only slightly improved.
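To put rough numbers on that, here's a minimal sketch using the TOPS figures quoted earlier in the thread; whether both marketing numbers assume sparsity is my assumption, so double-check that:

```python
# Blackwell's headline AI TOPS figure is FP4, Ada's is FP8, and halving
# precision roughly doubles peak tensor throughput. Figures below are the
# spec-sheet numbers quoted above; sparsity treatment is an assumption.
ada_fp8_tops = 706         # RTX 4070 Ti SUPER (FP8)
blackwell_fp4_tops = 1406  # RTX 5070 Ti (FP4)

# Like-for-like estimate: what the 5070 Ti would do at FP8.
blackwell_fp8_est = blackwell_fp4_tops / 2
print(f"5070 Ti at FP8 (est.): ~{blackwell_fp8_est:.0f} TOPS "
      f"vs 4070 Ti SUPER: {ada_fp8_tops} TOPS")
# ~703 vs 706: essentially the same 8-bit throughput, so the doubled
# headline number reflects the new FP4 mode, not extra tensor silicon.
```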

Not really at all. If you cherry-pick, then the 5070 Ti can match the standard RTX 4080 in some games, but if you go crazy like Hardware Unboxed and test 16 games, things change. Proof: https://www.techspot.com/review/2955-nvidia-geforce-rtx-5070-ti/
Look at the charts and you will see more games where the 5070 Ti is below the 4080. Sure, you're gonna say they are biased and all that. Classic denial.
And if you test 25 games you get an average matching performance (https://www.techpowerup.com/review/galax-geforce-rtx-5070-ti-1-click-oc-white/35.html). Obviously games differ, but in general, it pretty much matches.

Again, wrong. If you watched a little of that video, you would see he matched the graphics of the PS4 to prove a point. It doesn't matter what version of the game it is; it's clearly not a PS4 version emulated on PC, it's something adapted/coded/ported to run on PC.
You can try and tweak graphics settings to match what it looks like on the PS4, but it's not a port of the PS4 game. It's a port of the Remastered PS5 version, intended for the PS5 hardware (which is about as good as a 3060). So what's the performance when you match how it looks on the remastered PS5 version?
 
And if you test 25 games you get an average matching performance (https://www.techpowerup.com/review/galax-geforce-rtx-5070-ti-1-click-oc-white/35.html). Obviously games differ, but in general, it pretty much matches.
And now the question is: why do the numbers differ from one publication to another? Same games, different rankings, from Cyberpunk to Stalker 2, The Last of Us Part 1 and so on. The only thing both publications agree on is that the RTX 5070 Ti is weaker than the RTX 4080 when ray tracing is involved.
You can try and tweak graphics settings to match what it looks like on the PS4, but it's not a port of the PS4 game. It's a port of the Remastered PS5 version, intended for the PS5 hardware (which is about as good as a 3060). So what's the performance when you match how it looks on the remastered PS5 version?
So, from your expert position, you're telling me that because it's a PS5 port and not a PS4 port, it somehow needs more GPU power for the same PS4 graphics.
What if you took Tetris, the oldest one, made it on PS5 and ported it to PC, but turned the graphics down to look like Tetris from 1990 or 2000? Do you think it could even start? I don't know, does it need a supercomputer? Probably.
 
So, from your expert position, you're telling me that because it's a PS5 port and not a PS4 port, it somehow needs more GPU power for the same PS4 graphics.
What if you took Tetris, the oldest one, made it on PS5 and ported it to PC, but turned the graphics down to look like Tetris from 1990 or 2000? Do you think it could even start? I don't know, does it need a supercomputer? Probably.
I'm saying if you're trying to compare performance on console vs PC, you can't pick two different games. It doesn't matter if it's the same story and looks the same, it's a different game "under the hood." Compare a game on console vs that game's PC port. What is the performance of TLOU Part 2 on PS5 vs the performance on a desktop 3060?

I don't think you actually know what you're talking about, so here, I'll give you the answer. https://www.techpowerup.com/review/the-last-of-us-part-2-performance-benchmark/6.html

Turns out, the 3060 gets almost the same performance as the PS5, even at the highest settings. With upscaling (just like the PS5), the 3060 gets ~35 fps at 4K, while the PS5 gets 30 fps at 4K. At 1440p, the PS5 gets 60 fps, while the 3060 gets ~53 fps. And the PS5 is much more aggressive with upscaling to get a locked framerate, as opposed to the fixed Quality upscaling for a desktop GPU, so actually, the 3060 performs better overall when you factor that in.

In hindsight, it's honestly hilarious that you chose TLOU as the hill to die on. Even in the PlayStation community, the endless TLOU remasters with essentially no visual improvement but increased requirements and worse performance are a meme. You honestly could not find a better example of studios making bloated games that perform worse on better hardware over time.
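And just to spell out the relative numbers from the figures above (nothing new, only the fps values already quoted):

```python
# Quick arithmetic on the fps figures quoted above (no new data).
pairs = {
    "4K (upscaled)": (35, 30),  # (RTX 3060 fps, PS5 fps)
    "1440p":         (53, 60),
}
for res, (rtx3060, ps5) in pairs.items():
    print(f"{res}: 3060 is {100 * (rtx3060 / ps5 - 1):+.0f}% vs PS5")
# -> 4K (upscaled): +17%, 1440p: -12%
```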
 
I don't get it... If you hate my comments so much, why don't you just put me on ignore?
I don't hate them, I never said that; in fact, you're the one who said you were "sick of it", so consider taking your own advice if that's the case as you say it is. You may know me well enough to see that I often point these things out to make them more obvious to others.
 
Show him evidence he is wrong, and then...

I guess ZERO EVIDENCE is not so zero after all; no matter how much evidence I bring, it wouldn't matter.
Besides everything @LastDudeALive said, I already acknowledged BEFORE you made your comment that of course there are outliers both ways. That's why you pick a mix of 10-15-20 games to make a point; you don't pick the "extreme outliers", as the reviewer you posted admitted himself.

That's like picking Microsoft Flight Simulator and nothing else and then concluding that the 9800X3D is, on average, 3 times faster than the 9700X. Oh well...
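A toy example of how one outlier distorts an average; every ratio here is invented purely for illustration:

```python
# Hypothetical per-game fps ratios ("card A" / "card B"), six games with
# one 3x outlier at the end. All numbers are made up for illustration.
from statistics import geometric_mean

ratios = [1.02, 0.98, 1.05, 1.00, 0.97, 3.00]  # last one is the outlier
print(f"plain mean, outlier in: {sum(ratios) / len(ratios):.2f}x")
print(f"geomean, outlier in:    {geometric_mean(ratios):.2f}x")
print(f"geomean, outlier out:   {geometric_mean(ratios[:-1]):.2f}x")
# One 3x result drags the plain mean to ~1.34x even though the typical
# game is a tie, which is why reviews average 16-25 titles.
```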
 
Turns out, the 3060 gets almost the same performance as the PS5, even at the highest settings. With upscaling (just like the PS5), the 3060 gets ~35 fps at 4K, while the PS5 gets 30 fps at 4K. At 1440p, the PS5 gets 60 fps, while the 3060 gets ~53 fps. And the PS5 is much more aggressive with upscaling to get a locked framerate, as opposed to the fixed Quality upscaling for a desktop GPU, so actually, the 3060 performs better overall when you factor that in.
Wrong.
PS5 Fidelity mode does about 40 fps at 4K without upscaling (DF says it doesn't use dynamic resolution scaling) and with the frame rate unlocked. Very, very far from the 24 fps the 3060 12GB does in the link you gave me.
Watch the content, please. It's getting annoying constantly pointing out obvious facts.
 
Wrong.
PS5 Fidelity mode does about 40 fps at 4K without upscaling (DF says it doesn't use dynamic resolution scaling) and with the frame rate unlocked. Very, very far from the 24 fps the 3060 12GB does in the link you gave me.
Watch the content, please. It's getting annoying constantly pointing out obvious facts.
1) The 3060 gets 35 fps average, not 24.

2) It's doing it at maxed out settings, which might or might not be the same settings the PS5 uses (probably not, but need confirmation either way).

3) Again, you are STUCK on a game that the reviewer you yourself are using to prove your point calls an "EXTREME OUTLIER"!!

4) You are comparing it with a 3060 when, since it's your argument, consoles supposedly perform WAY better with the same hardware. How much is that "WAY better"? 30% better? 50% better? Because a 5-year-old 3070 is already getting 52 fps at maxed-out settings, way higher than the PS5 does, in a game that is an extreme outlier. So how much better does the actual console hardware perform vs PC?

Number 3 is so typical it's become a banality: whenever there is a PC / console debate, the whole argument for the console side ignores everything else and focuses on the most extremely wild outlier to make a point. For obvious reasons, because there is no point to be made. So let me bring my own outliers into the mix. Wanna talk about Forspoken or Jedi Survivor? Those are games that drop to 648p on the PS5...

Since we like outliers, a plain 6700 is 44% faster than the PS5. I'm going to use this huge outlier as the basis of my argument, seems very objective and fair :roll:

[screenshot from Digital Foundry: RX 6700 vs PS5 comparison]
 
1) The 3060 gets 35 fps average, not 24.
Is this a joke?
[screenshot: TechPowerUp benchmark chart]

I think this is enough. If the idea is to wear me out with false claims, then you two have won; I don't have time for this crap.
 
Is this a joke?
[screenshot: TechPowerUp benchmark chart]
I think this is enough. If the idea is to wear me out with false claims, then you two have won; I don't have time for this crap.
I was looking at the link posted above, which had the upscaled numbers. You are still focused on the extreme outlier while using different settings.

Tell us your opinion about the screenshot I posted straight from Digital Foundry (your source). At matched settings, the 6700 is 44% faster than the PS5. Best of luck.
 
Is this a joke?
[screenshot: TechPowerUp benchmark chart]
I think this is enough. If the idea is to wear me out with false claims, then you two have won; I don't have time for this crap.

I wouldn't even waste my time if I were you. These are people known for trolling on this forum, and I have a bunch of them on ignore.
 
Is this a joke?
[screenshot: TechPowerUp benchmark chart]
I think this is enough. If the idea is to wear me out with false claims, then you two have won; I don't have time for this crap.

A PC with the exact same hardware specifications as the PS5 wouldn't come close to running the same games at the same performance level; I can guarantee that. In fact, some titles might not even launch properly. That's why Digital Foundry's comparisons often miss the mark:

- The PS5's CPU isn't equivalent to a Ryzen 5 3600. While it’s technically based on the Zen 2 architecture, it’s a heavily customized and downclocked version designed to fit within strict power and thermal limits. Comparing it directly to desktop CPUs is misleading. https://chipsandcheese.com/p/the-nerfed-fpu-in-ps5s-zen-2-cores

[chart from Chips and Cheese: the PS5's cut-down Zen 2 FPU]


- The PS5's 16GB of GDDR6 is unified memory. That means it's shared between the CPU and GPU. In contrast, Digital Foundry's test PC typically uses 32GB of DDR5 system RAM paired with a GPU that has 10GB of its own VRAM. Try running a modern game on a PC with just 8GB of RAM and an 8GB GPU; you'll be lucky if it even launches, let alone performs well. (See the rough budget sketch below.)
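A rough, purely illustrative comparison of the two budgets; the ~3.5GB PS5 OS reserve is a commonly cited community figure, not an official spec:

```python
# Illustrative memory budgets only; the ~3.5 GB PS5 OS reserve is a
# commonly cited community estimate, not an official number.
ps5_unified_gb = 16.0
ps5_os_reserve_gb = 3.5
ps5_game_budget_gb = ps5_unified_gb - ps5_os_reserve_gb  # one flexible pool

pc_system_ram_gb = 32  # DF test rig, per the post above
pc_vram_gb = 10

print(f"PS5: ~{ps5_game_budget_gb:.1f} GB shared between CPU data and GPU assets")
print(f"PC:  {pc_system_ram_gb} GB RAM + a hard {pc_vram_gb} GB VRAM ceiling")
# The console can rebalance its single pool frame to frame, while a PC
# port must fit all GPU assets under the fixed VRAM ceiling.
```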


Here, Digital Foundry's (biased) analysis claims that the RX 6700 can't match the PS5's visual quality and must lower texture settings due to VRAM limitations. Despite its modest hardware, the PS5 remains an extremely efficient system (mainly in memory management).
 