
DOOM: The Dark Ages Performance Benchmark

I think we should agree to disagree and leave it at that.
Now this is important and might be of interest to some. As pointed out by some publications, the RTX 5000 series is slower than the 4000 series in some games; as usual, reviewers are brainwashed by Nvidia marketing and can't comprehend how the "new architecture, higher version number" bullshit can be slower.
The details are RIGHT IN THE SPECIFICATIONS:

RTX 4070 Ti SUPER - Pixel Rate 250.6 GPixel/s, FP32 (float) 44.10 TFLOPS, 706 AI TOPS
RTX 5070 Ti - Pixel Rate 235.4 GPixel/s, FP32 (float) 43.94 TFLOPS, 1406 AI TOPS
RTX 4080 - Pixel Rate 280.6 GPixel/s, FP32 (float) 48.74 TFLOPS, 780 AI TOPS

I don't know if they measured TOPS the same way for the 4000 and 5000 series, but that silicon die has way more AI capability; they even cut PhysX support from the chip, probably to make room for even more AI.
So the expectation for the 5070 Ti to beat or even equal the 4080 is not realistic unless the game heavily uses the tensor cores for some future neural denoising and texture compression; the DLSS transformer model might be too light a workload for the tensor cores.
One thing is clear: Nvidia's architecture is not strong in raster, and if they care to compete in the gaming space they need to find a way to put the tensor cores to work - more complicated upscalers that need more compute, more complex ray tracing that needs heavier denoising, neural texture compression, etc.
All of this can be perfect for business: make an architecture for AI customers and adapt the same architecture for gaming.
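For what it's worth, those FP32 numbers fall straight out of shader count × boost clock × 2 (one FMA per core per clock). A quick sanity-check sketch, assuming I remember the reference shader counts and boost clocks from the TPU database correctly:

```python
# Rough FP32 TFLOPS check: shaders * boost clock (GHz) * 2 FLOPs per FMA.
# Shader counts / boost clocks below are assumed reference specs, not measured values.
cards = {
    "RTX 4070 Ti SUPER": (8448, 2.610),
    "RTX 5070 Ti":       (8960, 2.452),
    "RTX 4080":          (9728, 2.505),
}

for name, (shaders, boost_ghz) in cards.items():
    tflops = shaders * boost_ghz * 2 / 1000  # GFLOP/s -> TFLOP/s
    print(f"{name}: ~{tflops:.2f} TFLOPS FP32")
# -> ~44.10, ~43.94 and ~48.74, i.e. the spec-sheet numbers above
```

So on paper the 5070 Ti really does sit a hair below the 4070 Ti SUPER in raw shader throughput.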
 
One thing is clear: Nvidia's architecture is not strong in raster, and if they care to compete in the gaming space
How do people reach these outlandish conclusions? Seriously, WHAT? The 9070xt has 54B transistors vs 45B for the 5080, and yet the latter is 25% faster in Raster and 39% in RT. With a smaller die. But nvidia architecture isn't good in raster. :banghead:
 
The double standard, the times you're called out (with receipts) and deny, entrench yourself or gaslight, the false equivalences, and overall it's the disparity between how you say you act, and how you actually act.
Without examples, those are just words.

There is no double standard here. I've got my own standards and I'm true to them all the time. It's only that your blood is too green to accept it and move on (proven by the fact that only you got infuriated by my initial comment and keep denying that it's about Nvidia fans, not the company even after I expressly stated so).

It's far from whenever - you're very active here, and this time it was because you said two very conflicting things within a very short timeframe.
You're the one being very active. I'm only replying to your accusations.

I don't get it... If you hate my comments so much, why don't you just put me on ignore?

It's £70 cheaper in your region - but it's a slower card, especially in RT, where the difference is 20%. So again, objectively, the price isn't really any better than Nvidia's. And that's in your specific region, because in other regions it's even more expensive than the 70 Ti. And even if we pretend they are both equal, a 70-pound difference makes you HATE Nvidia's pricing? While completely ignoring the overpriced POS that is the non-XT 9070?
I don't care about RT performance. I care much more about fluent Linux support and the 8-pin power connector. Do you see how we're interested in different things? That's what makes the world great, isn't it?

The 9070 non-XT is within 10% of the XT; it can even match it with some OC/UV in some cases. It's not as bad a card as some people think.

So basically you hate Nvidia's pricing because you live in GB, where one Nvidia model is 10% more expensive than a (let's say slightly) worse AMD card, and if you lived in the EU you'd be hating AMD's pricing? Does that make sense to you? If yes, sure, let's just move on.
I'm not answering hypotheticals. So if you were living in the UK, and you didn't care about RT, you'd love AMD, right? What's the point of even saying that?

You want what you want from a GPU, and I want what I want, ok? There's no need to convince each other because we're both right in our own PoV. There is no objectivity in an "I hate Nvidia's pricing" statement, and that's fine.
 
How do people reach these outlandish conclusions? Seriously, WHAT? The 9070xt has 54B transistors vs 45B for the 5080, and yet the latter is 25% faster in Raster and 39% in RT. With a smaller die. But nvidia architecture isn't good in raster. :banghead:
I never mentioned AMD in that post; I am comparing Nvidia against Nvidia. Trivial, nonsense matters like AMD vs Nvidia are of no interest to me anymore.
I get it, Nvidia is the best. On that note, let's assume there is only Nvidia in this world and think about how gaming will evolve in the future.
On one hand, the ones that need chips for actual gaming, like consoles, will want a GPU architecture focused on gaming that delivers the best performance per watt at the lowest cost. Could AI make its way into consoles? Who knows.
On the other hand, the desktop PC market is dominated by AI-focused GPUs.
So we have two GPU makers. One, whom I will not name so JustBenching doesn't go crazy here, wanted to make something for consoles and then the same thing beefed up for PC, and they failed on PC. Then there's Nvidia, at some point a gaming GPU maker, but now they make GPU architectures for the AI business (92% of revenue) and then cut that architecture down for the PC market.
What will the future hold for us? Will they battle each other? The one I will not name could be nasty and make the PS6 GPU heavy on raster with just some basic tensor cores for upscaling and ray-tracing denoising, purely out of spite to mess with Nvidia. Or they could both agree not to complicate things and waste R&D, focus on AI, and suck on that sweet government money teat for eternity.
What do you guys think?
 
I never mentioned AMD in that post; I am comparing Nvidia against Nvidia. Trivial, nonsense matters like AMD vs Nvidia are of no interest to me anymore.
You said if Nvidia wants to compete in the gaming space, yada yada. Who would they compete against?

I still don't fully get your point. Are you saying that hypothetically, if they removed support for AI etc., they could just make a faster GPU in pure raster? Well, sure, I guess. But shouldn't everyone else that doesn't put as much emphasis on AI, LLMs, tensor cores and the whole nine yards, while also having much bigger dies, be stomping them already? In fact, I think the fact that Nvidia isn't even putting that much emphasis on pure raster and still remains the fastest (by far, btw) at each given die size shows how great their architecture is for raster performance, no?
 
JustBenching, we already established Nvidia is the best; they should name your firstborn and a tattoo is mandatory.
I'm just saying, if consoles have a GPU that is too different from what we have on desktop, we could end up with PC ports that are buggy and need a GPU way stronger than what's in the console - a win for them, but not for us.
Just think about it: you've seen a lot of console ports, and some of them have wild requirements compared to the GPU power of the console they came from. So that architecture is not bad - it's excellent on consoles - but when those games hit PC, suddenly we need a GPU more expensive than the whole fricking console.
Nobody wants this crap happening - nobody meaning us consumers; they actually want this.
And please don't give me "but the PC runs higher resolution, better shadows" - if you put the same GPU in a PC, you would get lower performance than on the console.
 
And please don't give me "but the PC runs higher resolution, better shadows" - if you put the same GPU in a PC, you would get lower performance than on the console.
I have seen 0 evidence of that being the case. I've heard people talk about it a lot, but it just never materializes in practice. We've had lots of crappy (in terms of optimization) games on PC lately, but most / all of them were super crappy on the console as well. I'm talking 648p resolution and almost single-digit framerates on a PS5 (Jedi Survivor, Forspoken, Gollum, etc.).

DF tested this btw; a 3070 / 6700 XT runs most games faster than the PS5 does. Sure, there is the odd game here and there that's extra crappy on the PC, but the same applies in reverse as well.

Realistically, what could the next PS6 even have? A 9070 XT is currently AMD's fastest card and it pulls over 300 W (I'm bringing up AMD here because that's who's most likely making the PS6 hardware). A console that needs to run at less than 200 W for the whole system realistically can't have a card in the next 2 years that's faster than a 9070 XT.
 
I have seen 0 evidence of that being the case. I've heard people talk about it a lot, but it just never materializes in practice. We've had lots of crappy (in terms of optimization) games on PC lately, but most / all of them were super crappy on the console as well.
I don't even know why I entertain this conversation; how old are you?
Let's take the latest example, The Last of Us Part 2, and Digital Foundry's video on it: he says both Part 1 and Part 2 have exaggerated requirements on PC, and that an RTX 3060 set to graphics similar to the BASE PS4 (AGAIN, BASE PS4) couldn't keep a constant 60 fps despite being a much faster GPU. He even compared specs.

So yeah, this can happen if architectures are too different; no game studio will completely rewrite a game so it can run well on cheap PC GPUs.
Happy now?
 
I don't know if they measured TOPS the same way for the 4000 and 5000 series, but that silicon die has way more AI capability; they even cut PhysX support from the chip, probably to make room for even more AI.
Uh, no, they did not cut PhysX support from the chip lmao. They removed support for 32-bit CUDA code from the drivers for Blackwell (GeForce and professional RTX) cards and from the latest CUDA toolkit. And since most PhysX games use 32-bit code, that CUDA code doesn't run anymore.

There's nothing specifically on the GPU to support PhysX, that's the entire point of CUDA. It still runs 64-bit CUDA (and PhysX) code just fine.
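To make the "32-bit games" part concrete: whether a title keeps GPU PhysX on Blackwell comes down to whether its executable is 32-bit or 64-bit. A little sketch of my own (nothing official, and the game path is just a made-up example) that reads a Windows .exe's PE header to tell which one it is:

```python
# Tell whether a Windows executable is 32-bit (x86) or 64-bit (x64) from its PE header.
# Illustrative only: 32-bit titles are the ones whose GPU PhysX path is affected by
# the removal of 32-bit CUDA support; the game path below is a hypothetical example.
import struct

def pe_machine(path: str) -> str:
    with open(path, "rb") as f:
        f.seek(0x3C)                                   # DOS header: offset of PE signature
        pe_offset = struct.unpack("<I", f.read(4))[0]
        f.seek(pe_offset + 4)                          # skip "PE\0\0", land on Machine field
        machine = struct.unpack("<H", f.read(2))[0]
    return {0x014C: "32-bit (x86)", 0x8664: "64-bit (x64)"}.get(machine, hex(machine))

print(pe_machine(r"C:\Games\SomeOldPhysXGame\game.exe"))  # hypothetical path
```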

And they didn't change anything about the chip to "make room" for AI. The core architecture and space allocation is essentially unchanged. The increased AI TOPS come from FP4 support in Blackwell, while Ada only had FP8.


So the expectation for the 5070 Ti to beat or even equal the 4080 is not realistic unless the game heavily uses the tensor cores for some future neural denoising and texture compression; the DLSS transformer model might be too light a workload for the tensor cores.
It matched the 4080 Super in the general reviews, so yes, it is underperforming in this game. Blackwell "punches above its weight" compared to what the specs would suggest. For instance, the 5070 matches the 4070 Super despite having about 1,000 fewer CUDA cores, and barely more than the base 4070. The better power efficiency and power management really seem to pay off for Blackwell.
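To put rough numbers on "punches above its weight" (the core counts and boost clocks here are the reference specs as I recall them, so take it as a back-of-the-envelope estimate, not measured data):

```python
# If the 5070 trades blows with the 4070 Super at similar clocks, Blackwell is getting
# more done per CUDA core. Core counts / boost clocks below are assumed reference specs.
cards = {
    "RTX 4070":       (5888, 2.475),
    "RTX 4070 Super": (7168, 2.475),
    "RTX 5070":       (6144, 2.512),
}

base = cards["RTX 5070"]
for name, (cores, boost_ghz) in cards.items():
    rel = (cores * boost_ghz) / (base[0] * base[1])
    print(f"{name}: {cores} cores, {rel:.2f}x the 5070's paper shader throughput")
# The 4070 Super carries roughly 15% more raw shader throughput on paper, yet the two
# end up about even in games, which is the "punching above its weight" part.
```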

I don't even know why I entertain this conversation; how old are you?
Let's take the latest example, The Last of Us Part 2, and Digital Foundry's video on it: he says both Part 1 and Part 2 have exaggerated requirements on PC, and that an RTX 3060 set to graphics similar to the BASE PS4 (AGAIN, BASE PS4) couldn't keep a constant 60 fps despite being a much faster GPU. He even compared specs.

Happy now?
The Remastered versions are the only versions available for PC, and they were made for the PS5, which is about as good as a 3060. So this is another example of a game (in this case, the remastered version) just being crappy. Comparing it to the PS4 version is wrong, because it's a different game.
 
I don't even know why I entertain this conversation; how old are you?
Let's take the latest example, The Last of Us Part 2, and Digital Foundry's video on it: he says both Part 1 and Part 2 have exaggerated requirements on PC, and that an RTX 3060 set to graphics similar to the BASE PS4 (AGAIN, BASE PS4) couldn't keep a constant 60 fps despite being a much faster GPU. He even compared specs.
So yeah, this can happen if architectures are too different; no game studio will completely rewrite a game so it can run well on cheap PC GPUs.
Happy now?
Uhm, the video you just posted disproved your whole point. In their own words, "this is a huuuge outlier compared to every other PC game we've tested". I mean, I already predicted that this would be the basis of your whole argument: you'll pick one or two outliers (completely ignoring the outliers that go the other way) and build your case on them. No, sorry, this is not honest discourse. This is you leading the evidence.
 
And they didn't change anything about the chip to "make room" for AI. The core architecture and space allocation is essentially unchanged. The increased AI TOPS come from FP4 support in Blackwell, while Ada only had FP8.
I can admit I am wrong, so enlighten me. As far as I know they mostly measure in INT8; I guess Nvidia does FP4 now, but is it measured in FP4 or FP8? Where does that very high number come from? I mean, the 9070 XT can do 1557 TOPS in FP4, and that's the worst GPU according to JustBenching, so that can't be the case here; Nvidia must be superior somehow, don't ruin JustBenching's day here.
It matched the 4080 Super in the general reviews, so yes, it is underperforming in this game. Blackwell "punches above its weight" compared to what the specs would suggest.
Not really, at all. If you cherry-pick, the 5070 Ti can match the standard RTX 4080 in some games, but if you go crazy like Hardware Unboxed and test 16 games then things change; proof - https://www.techspot.com/review/2955-nvidia-geforce-rtx-5070-ti/
Look at the charts and you will see more games where the 5070 Ti is below the 4080. Sure, you are going to say they are biased and all that, classic denial.
The Remastered versions are the only versions available for PC, and they were made for the PS5, which is about as good as a 3060. So this is another example of a game (in this case, the remastered version) just being crappy. Comparing it to the PS4 version is wrong, because it's a different game.
Again, wrong. If you watched a little of that video you would see he matched the PS4's graphics to prove a point. It doesn't matter what version of the game it is; it's clearly not the PS4 version emulated on PC, it's something adapted/coded/ported to run on PC.

I have seen 0 evidence of that being the case. I've heard people talk about it a lot, but it just never materializes in practice. We've had lots of crappy (in terms of optimization) games on PC lately, but most / all of them were super crappy on the console as well. I'm talking 648p resolution and almost single-digit framerates on a PS5 (Jedi Survivor, Forspoken, Gollum, etc.).
Show him evidence he is wrong, and then...
Uhm, the video you just posted disproved your whole point. In their own words, "this is a huuuge outlier compared to every other PC game we've tested". I mean, I already predicted that this would be the basis of your whole argument: you'll pick one or two outliers (completely ignoring the outliers that go the other way) and build your case on them. No, sorry, this is not honest discourse. This is you leading the evidence.
I guess 0 EVIDENCE is not so zero after all; no matter how much evidence I bring, it wouldn't matter.
 
I can admit I am wrong, so enlighten me. As far as I know they mostly measure in INT8; I guess Nvidia does FP4 now, but is it measured in FP4 or FP8? Where does that very high number come from? I mean, the 9070 XT can do 1557 TOPS in FP4, and that's the worst GPU according to JustBenching, so that can't be the case here; Nvidia must be superior somehow, don't ruin JustBenching's day here.
Nvidia's TOPS number for Blackwell is FP4 TOPS, because they added native 4-bit support. 8-bit and larger performance is only slightly improved.
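Which is also where the big headline number comes from: tensor throughput roughly doubles each time the precision is halved, so quoting FP4 instead of FP8 doubles the figure even though the 8-bit rate barely moves. A quick sketch of that reading, assuming the simple doubling and no sparsity games:

```python
# Spec-sheet "AI TOPS": Ada is quoted at FP8, Blackwell at FP4.
# Assuming throughput simply doubles per precision halving, convert the
# Blackwell figure back to FP8 to compare like with like.
ada_4070ti_super_tops_fp8 = 706    # quoted at FP8
bw_5070ti_tops_fp4        = 1406   # quoted at FP4

bw_5070ti_tops_fp8_est = bw_5070ti_tops_fp4 / 2
print(f"5070 Ti at FP8 (estimated): ~{bw_5070ti_tops_fp8_est:.0f} TOPS "
      f"vs {ada_4070ti_super_tops_fp8} TOPS for the 4070 Ti SUPER")
# -> ~703 vs 706: basically unchanged at 8-bit, exactly as stated above.
```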

Not really, at all. If you cherry-pick, the 5070 Ti can match the standard RTX 4080 in some games, but if you go crazy like Hardware Unboxed and test 16 games then things change; proof - https://www.techspot.com/review/2955-nvidia-geforce-rtx-5070-ti/
Look at the charts and you will see more games where the 5070 Ti is below the 4080. Sure, you are going to say they are biased and all that, classic denial.
And if you test 25 games you get an average matching performance (https://www.techpowerup.com/review/galax-geforce-rtx-5070-ti-1-click-oc-white/35.html). Obviously games differ, but in general, it pretty much matches.
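And the summary number depends heavily on which games are in the list, which is most of why different outlets land on slightly different figures. A minimal sketch of how such a relative-performance average is typically built (geometric mean of per-game ratios; the fps numbers here are made up purely to show the mechanics):

```python
# Relative performance summary: per-game fps ratio of card A vs card B, then a
# geometric mean so no single title dominates. The fps values are invented,
# only meant to show why a different game selection shifts the average.
import math

fps_5070_ti = {"Game A": 120, "Game B": 88, "Game C": 141, "Game D": 65}
fps_4080    = {"Game A": 118, "Game B": 95, "Game C": 138, "Game D": 70}

ratios = [fps_5070_ti[g] / fps_4080[g] for g in fps_5070_ti]
geomean = math.exp(sum(map(math.log, ratios)) / len(ratios))
print(f"5070 Ti vs 4080 over this game set: {geomean * 100:.1f}%")
# Swap a couple of titles in or out and this number moves by a few percent,
# without either card actually getting any faster or slower.
```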

Again, wrong. If you watched a little of that video you would see he matched the PS4's graphics to prove a point. It doesn't matter what version of the game it is; it's clearly not the PS4 version emulated on PC, it's something adapted/coded/ported to run on PC.
You can try and tweak graphics settings to match what it looks like on the PS4, but it's not a port of the PS4 game. It's a port of the Remastered PS5 version, intended for the PS5 hardware (which is about as good as a 3060). So what's the performance when you match how it looks on the remastered PS5 version?
 
And if you test 25 games you get an average matching performance (https://www.techpowerup.com/review/galax-geforce-rtx-5070-ti-1-click-oc-white/35.html). Obviously games differ, but in general, it pretty much matches.
And now the question is, why do the numbers differ from one publication to another? Same games, different rankings - from Cyberpunk to Stalker 2, The Last of Us Part 1 and so on. The only thing both publications agree on is that the RTX 5070 Ti is weaker than the RTX 4080 when ray tracing is involved.
You can try and tweak graphics settings to match what it looks like on the PS4, but it's not a port of the PS4 game. It's a port of the Remastered PS5 version, intended for the PS5 hardware (which is about as good as a 3060). So what's the performance when you match how it looks on the remastered PS5 version?
So, from your expert position, you're telling me that because it's a PS5 port and not a PS4 port, it somehow needs more GPU power for the same graphics as the PS4.
What if you took Tetris, one of the oldest versions, made it on PS5 and ported it to PC, but turned the graphics down to look like Tetris from 1990 or 2000? Do you think it could even start? I don't know, does it need a supercomputer? Probably.
 
So, from your expert position, you're telling me that because it's a PS5 port and not a PS4 port, it somehow needs more GPU power for the same graphics as the PS4.
What if you took Tetris, one of the oldest versions, made it on PS5 and ported it to PC, but turned the graphics down to look like Tetris from 1990 or 2000? Do you think it could even start? I don't know, does it need a supercomputer? Probably.
I'm saying if you're trying to compare performance on console vs PC, you can't pick two different games. It doesn't matter if it's the same story and looks the same, it's a different game "under the hood." Compare a game on console vs that game's PC port. What is the performance of TLOU Part 2 on PS5 vs the performance on a desktop 3060?

I don't think you actually know what you're talking about, so here, I'll give you the answer. https://www.techpowerup.com/review/the-last-of-us-part-2-performance-benchmark/6.html

Turns out, the 3060 gets almost the same performance as the PS5, even at the highest settings. With upscaling (just like the PS5), the 3060 gets ~35 fps at 4K, while the PS5 gets 30 fps at 4K. At 1440p, the PS5 gets 60 fps, while the 3060 gets ~53 fps. And the PS5 is much more aggressive with upscaling to hold a locked framerate, as opposed to the fixed Quality upscaling on a desktop GPU, so actually, the 3060 performs better overall when you factor that in.
 