
DOOM: The Dark Ages Performance Benchmark

We are talking about different things, then. Nvidia's RT performance stems from their tech and flat-out superior RT cores. It's not a "driver optimization".
Well, it's not anyone's fault that people are not informed. You see a benchmark, you see Nvidia winning in ray tracing and path tracing, and that's it; the person who reviewed the game/GPU probably doesn't understand himself why an RTX 4060 beats a 7900 XTX.
Not everyone is stupid and takes it at face value. Once you understand that Nvidia does RAY TRACING DENOISING IN HARDWARE AND AMD IN SOFTWARE, then you understand why AMD loses so badly in Black Myth: Wukong and Cyberpunk.
Another thing you should understand is that no AMD GPU except RDNA4 is capable of hardware denoising. I could actually believe Nvidia went this far with ray tracing just to render AMD completely useless, since light ray tracing could actually work on AMD.
I bet you that the moment AMD implements hardware denoising for ray tracing, Nvidia completely abandons path tracing and moves to neural stuff.
Inform yourself and you will understand.
The idea that AMD could theoretically just "get" more RT performance out of their existing hardware via drivers is faulty from the get-go. This is like saying that DLSS Transformer is superior to FSR 4 "because drivers".
My head actually hurts from how stupid this conversation got.
Take the RTX 2080: with a driver update, it went from the DLSS CNN model to the DLSS Transformer model. We are talking about a driver update, OK? Why can it be done on Nvidia but not on AMD?
I hope you don't start separating GPU driver components now. A GPU driver contains the upscaling model/files, whether it's FSR 2.1, 3.1 or DLSS 2, 3, 4. So yeah, a driver can change a lot, and so can the work behind training that model, just like you need to train one for ray tracing denoising.
 
My head actually hurts from how stupid this conversation got.
Take the RTX 2080: with a driver update, it went from the DLSS CNN model to the DLSS Transformer model. We are talking about a driver update, OK? Why can it be done on Nvidia but not on AMD?
I hope you don't start separating GPU driver components now. A GPU driver contains the upscaling model/files, whether it's FSR 2.1, 3.1 or DLSS 2, 3, 4. So yeah, a driver can change a lot, and so can the work behind training that model, just like you need to train one for ray tracing denoising.
Well, that's not really what happened though. Supposedly, the transformer model was trained on a supercomputer. The card is just running the results of the training basically. That's not the same as a driver unlocking performance.
 
Well, that's not really what happened though. Supposedly, the transformer model was trained on a supercomputer. The card is just running the results of the training basically. That's not the same as a driver unlocking performance.
So if AMD trains a model for ray tracing denoising, drops a file in the driver, and enables better RT performance, does that count? Of course, this has to be coupled with the developer actually supporting the denoiser.
Are we really going to that level of "not understanding" and playing stupid?
 
The TDP is also irrelevant; I'm talking about the actual power draw.
TDP is the actual power draw. It's a hard limit on both AMD (starting from the RX 7000 series) and Nvidia that the card can only exceed for spikes of a few milliseconds.

FSR 4 is nowhere near as popular as DLSS.
Is this a popularity contest, or a measure of value? :confused:

I just checked .co.uk; the cheapest cards in stock are £669 for the 9070 XT and £729 for the 5070 Ti.
That's a £60 difference. It's pretty much a coin toss at that point, imo.
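For what it's worth, a quick back-of-the-envelope in Python using the two prices quoted above puts that gap at roughly a 9% premium:

```python
# Quick arithmetic on the two UK prices quoted above (£669 vs £729).
price_9070xt = 669
price_5070ti = 729

diff = price_5070ti - price_9070xt
premium = diff / price_9070xt * 100

print(f"Absolute difference: £{diff}")        # £60
print(f"Relative premium:   {premium:.1f}%")  # ~9.0%
```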
 
So if AMD trains a model for ray tracing denoising, drops a file in the driver, and enables better RT performance, does that count? Of course, this has to be coupled with the developer actually supporting the denoiser.
Are we really going to that level of "not understanding" and playing stupid?
Well no, not really, because it doesn't unlock the extra performance that you are supposedly claiming isn't utilized.

TDP is the actual power draw. It's a hard limit on both AMD (starting from the RX 7000 series) and Nvidia that the card can only exceed for spikes of a few milliseconds.
No, it is not. My 4090 has a 450 W power limit. It rarely reaches that, even at 4K. Same with the 5070 Ti: check the review, it's usually ~260-270 W.
Is this a popularity contest, or a measure of value? :confused:
A popularity contest, since if it doesn't exist in games it's useless regardless of how good it is.
 
No, it is not. My 4090 has a 450 W power limit. It rarely reaches that, even at 4K.
That's because you're not utilising your GPU to 100% due to a frame rate cap, or the game engine not needing it, or for some other reason.

Same with the 5070 Ti: check the review, it's usually ~260-270 W.
Um...
[attachment: power draw chart]


Not that a 20-30 W difference would matter at this level anyway.

A popularity contest, since if it doesn't exist in games it's useless regardless of how good it is.
FSR 4 is there in every game that has FSR 3.1. I'm sure that new games will mostly all have FSR 4, and you don't need it for old games.
 
That's because you're not utilising your GPU to 100% due to a frame rate cap, or the game engine not needing it, or for some other reason.
No, with the GPU at 100% and no frame cap, not many games can even come close to 450 W. Only heavy PT games can do that, sometimes. 370 W is where it usually sits.
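If anyone wants to verify this on their own card instead of arguing from screenshots, a minimal sketch like the one below (assuming an Nvidia GPU with nvidia-smi available on the PATH) logs the reported board power against the enforced power limit once per second while a game runs:

```python
# Minimal sketch: poll nvidia-smi once per second and print board power vs.
# the software power limit, to see how close a real game gets to the cap.
# Assumes an Nvidia GPU with nvidia-smi on the PATH; stop with Ctrl+C.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=power.draw,power.limit",
         "--format=csv,noheader,nounits"]

while True:
    draw, limit = (float(x) for x in
                   subprocess.check_output(QUERY, text=True).split(","))
    print(f"board power: {draw:6.1f} W / limit: {limit:6.1f} W "
          f"({draw / limit * 100:4.1f}% of cap)")
    time.sleep(1)
```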
Um...
[attachment: power draw chart]

Not that a 20-30 W difference would matter at this level anyway.
Bro, come on. I'm not even going to comment on that; we both know what you are doing. Stop it. It makes you look like a fan, what's the point? The 9070 XT is more power hungry, it is what it is.

FSR 4 is there in every game that has FSR 3.1. I'm sure that new games will mostly all have FSR 4, and you don't need it for old games.

Ah, I didn't know that was the case. I thought every game needed an update to get it. That's good.
 
Don't forget that any time you see a game using Nvidia's method of path tracing, you're going to see everything but their hardware perform worse. It's developed, coded and designed explicitly for their hardware and no one else's. It's like expecting hardware PhysX to run well on anything but hardware that can run its CUDA code. So remember that the performance gaps in those circumstances have very little to do with hardware capability and a lot to do with Nvidia's code (code the developer may not be able to optimize or change, either).

FSR 4 is there in every game that has FSR 3.1. I'm sure that new games will mostly all have FSR 4, and you don't need it for old games.
and with tools like Optiscaler... you can use it almost anywhere there is FSR 2.x and newer, DLSS 2.x or newer or XeSS.
 
Don't forget that any time you see a game using Nvidia's method of path tracing, you're going to see everything but their hardware perform worse. It's developed, coded and designed explicitly for their hardware and no one else's. It's like expecting hardware PhysX to run well on anything but hardware that can run its CUDA code. So remember that the performance gaps in those circumstances have very little to do with hardware capability and a lot to do with Nvidia's code (code the developer may not be able to optimize or change, either).
I really doubt that that's true, because if it were, we wouldn't be seeing super light RT effects in AMD-sponsored games running at 1/4th resolution. The logical explanation is that those cards are not good at RT; that's why AMD-sponsored games avoid using it at higher quality like the plague. Furthermore, why is AMD slower in normal RT as well? How did they get such a huge jump from RDNA 2 to 3 if it's all about "coded, developed and designed" yada yada?
 
No, with the GPU at 100% and no frame cap, not many games can even come close to 450 W. Only heavy PT games can do that, sometimes. 370 W is where it usually sits.
There must still be something going on with your utilisation.
Every single Nvidia GPU I own consumes exactly its TDP in games. The 4090 is a huge chip with huge performance and power draw. I wouldn't be surprised if your reported 100% utilisation wasn't entirely accurate.

Bro, come on. I'm not even going to comment on that; we both know what you are doing. Stop it. It makes you look like a fan, what's the point? The 9070 XT is more power hungry, it is what it is.
We're arguing about peanuts of difference, that's what I'm saying. 300 W or 320 W or 290 W... what's the difference, c'mon? None of these values are ideal for SFF, you know that.

Ah, I didn't know that was the case. I thought every game needed an update to get it. That's good.
Yep, it's done through the driver for every game that has FSR 3.1. :)
 
There must still be something going on with your utilisation.
Every single Nvidia GPU I own consumes exactly its TDP in games. The 4090 is a huge chip with huge performance and power draw. I wouldn't be surprised if your reported 100% utilisation wasn't entirely accurate.


We're arguing about peanuts of difference, that's what I'm saying. 300 W or 320 W or 290 W... what's the difference, c'mon? None of these values are ideal for SFF, you know that.


Yep, it's done through the driver for every game that has FSR 3.1. :)
I can guarantee you it doesn't consume anywhere near its TDP. Just check TPU's review: he averaged 346 W for the card (which is rather low, to be fair, but again, it's game dependent).

[attachment: power-gaming.png (TPU gaming power consumption chart)]


We are arguing about peanuts, but you still went and picked an overclocked 70 Ti model, compared it against a stock 70 XT, and not even in gaming (in gaming the 70 XT has higher power draw than even the overclocked 70 Ti model!!) but in MAX, which is useless (is that Furmark or something?). That 70 Ti model, btw, is 10% faster in raster and 24% faster in RT compared to the 70 XT, while drawing less power in games...
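To put some rough numbers on the "peanuts" argument, here is a hedged back-of-the-envelope using only figures quoted in this exchange (a ~304 W reference 9070 XT, an assumed ~280 W 5070 Ti, and the claimed 10% raster / 24% RT leads); treat it as illustrative, not measured data:

```python
# Back-of-the-envelope efficiency comparison using the figures quoted in this
# thread (illustrative only): ~304 W gaming draw for a reference 9070 XT,
# ~280 W assumed for a 5070 Ti, and the claimed 10% raster / 24% RT leads.
watts_9070xt = 304
watts_5070ti = 280

perf = {
    "raster": (100, 110),  # 9070 XT normalised to 100; "10% faster in raster"
    "RT":     (100, 124),  # "24% faster in RT"
}

def perf_per_watt(performance, watts):
    return performance / watts

for workload, (p_amd, p_nv) in perf.items():
    ratio = perf_per_watt(p_nv, watts_5070ti) / perf_per_watt(p_amd, watts_9070xt)
    print(f"{workload}: 5070 Ti delivers {ratio:.2f}x the performance per watt "
          f"of the 9070 XT (under these assumptions)")
```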
 
I can guarantee you it doesn't consume anywhere near its TDP. Just check TPU's review: he averaged 346 W for the card (which is rather low, to be fair, but again, it's game dependent).

[attachment: power-gaming.png (TPU gaming power consumption chart)]


We are arguing about peanuts, but you still went and picked an overclocked 70 Ti model, compared it against a stock 70 XT, and not even in gaming (in gaming the 70 XT has higher power draw than even the overclocked 70 Ti model!!) but in MAX, which is useless (is that Furmark or something?). That 70 Ti model, btw, is 10% faster in raster and 24% faster in RT compared to the 70 XT, while drawing less power in games...
I picked an overclocked model, because no factory default model was tested here on TPU - same with 9070 XTs, by the way. I picked the Zotac Amp because I assumed that model is the closest to an FE.

My point stands: reference model 9070 XT and reference model 5070 Ti consume roughly the same amount of power, and there's no point arguing about peanuts.

Mine is a bone stock Powercolor Reaper with a power limit of 304 W, and guess what, it consumes exactly 304 W.
Don't tell me that it's miles more than a 5070 Ti, or that a 5070 Ti is suited for SFF, because that would make you look like a blind fan. Even with let's say 280 W, it's within the same league.
 
I really doubt that that's true, because if it were, we wouldn't be seeing super light RT effects in AMD-sponsored games running at 1/4th resolution. The logical explanation is that those cards are not good at RT; that's why AMD-sponsored games avoid using it at higher quality like the plague. Furthermore, why is AMD slower in normal RT as well? How did they get such a huge jump from RDNA 2 to 3 if it's all about "coded, developed and designed" yada yada?

There is totally a hardware element, I'm not saying there isn't. But when you see games using methods made, developed and coded by Nvidia, you are always going to see a bias, because they coded it for their own hardware. And often that code comes with agreements that prevent the game dev from changing, optimizing or altering it (to varying degrees). Not a conspiracy, it's just part of how closed-source or proprietary software/code works.

They both do this to different degrees, but the history and scale of Nvidia doing it is not really comparable. The history of why goes back decades at this point. If you want to dive down an interesting history rabbit hole, this all started around the time programmable shaders became a thing and Nvidia made CUDA.
 
Well no, not really, because it doesn't unlock the extra performance that you are supposedly claiming isn't utilized.
First of all, I know you're playing stupid, but let's go along with it.
Let me give you an example from my world. For many years, Adobe Premiere Pro decoded video in software (H.264, ProRes, etc.), even though the hardware for decoding these codecs had been in GPUs for many, many years. At some point Adobe added hardware decoding on the GPU, and now you can edit 4-5 streams of these files with no problem.
Does Adobe run much faster now? Of course it does. Did the GPU get faster because Adobe added functionality that had existed in GPUs for years? No, the GPU is the same.
So, in AMD's case, ray tracing denoising is done on the compute units and general processing power that could otherwise be used for raster, while Nvidia does denoising on Tensor cores, with little to no impact on the rest of the GPU.
Adding that functionality as intended on AMD frees up the GPU for faster rendering and makes the GPU as fast as it was meant to be.
And if you say "but AMD doesn't have tensor cores", yeah, it does in RDNA4. You get it now? It's a copy-paste job for AMD.
So the part that is not being utilized on AMD is the TENSOR CORES.
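To put the offloading argument in concrete terms, here is a hypothetical frame-time sketch (all numbers invented for illustration; this models neither AMD's nor Nvidia's actual pipeline): if the denoiser competes with raster/RT work on the same compute units, it adds to the frame time, whereas if it runs on separate matrix/AI units and overlaps with the other work, it can be largely hidden.

```python
# Hypothetical frame-time budget illustrating the offloading argument above.
# All numbers are invented for illustration; this models neither AMD's nor
# Nvidia's actual pipeline.
raster_ms  = 10.0  # raster/shading work per frame on the shader cores
rt_ms      = 4.0   # ray traversal/shading cost per frame
denoise_ms = 3.0   # cost of the denoiser itself

# Case 1: the denoiser runs on the same compute units as everything else,
# so it simply adds to the frame time.
shared_frame = raster_ms + rt_ms + denoise_ms

# Case 2: the denoiser runs on separate matrix/AI units and overlaps with
# the other work, so in the ideal case it is fully hidden behind it.
overlapped_frame = max(raster_ms + rt_ms, denoise_ms)

for name, ms in [("denoise on shader cores     ", shared_frame),
                 ("denoise offloaded/overlapped", overlapped_frame)]:
    print(f"{name}: {ms:.1f} ms per frame -> {1000.0 / ms:.0f} fps")
```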
 
There is totally a hardware element, I'm not saying there isn't. But when you see games using methods made, developed and coded by Nvidia, you are always going to see a bias, because they coded it for their own hardware. And often that code comes with agreements that prevent the game dev from changing, optimizing or altering it (to varying degrees). Not a conspiracy, it's just part of how closed-source or proprietary software/code works.

They both do this to different degrees, but the history and scale of Nvidia doing it is not really comparable. The history of why goes back decades at this point. If you want to dive down an interesting history rabbit hole, this all started around the time programmable shaders became a thing and Nvidia made CUDA.

[attachment: Norman Rockwell meme]
 
Is that supposed to be some inspirational quote or something? It seems like a child/fanboy wrote it.
People these days are so gullible; because they don't read or research, they almost think some corporations make MAGIC.
Words like "Retina Display" and "Apple BIONIC", imagine :laugh: a silicon chip is now BIONIC. It's so stupid, just like how Nvidia brainwashed some people who need everyone to validate their choice of hardware.
It's not magic, and the nasty people from AMD and Intel are coming to take some of Nvidia's AI money.
 
Bigger and richer doesn't equal better.
If being bigger and richer means they deliver a better product to the consumer (me), then yes, it does. For instance, having the money to run a giant supercomputer 24/7/365 for years to improve DLSS means I (as the consumer) get a better upscaling and frame generation product, and a fancy new transformer model.

Now, there absolutely are times when being bigger and richer doesn't produce a better product. For instance, Intel got complacent, failed to improve its architecture for multiple CPU generations, and fell behind on its fab schedules, while AMD, being smaller, was pushed to take risks and innovate with Ryzen and 3D cache, and could contract out to TSMC to make its products, which meant it provided the better product.

I work for one of the UK's major companies in the sector, so trust me, I know.
Oh my gosh, I'm so sorry for disagreeing with you, you obviously are in the right here, please, please, accept my apology, I will do anything to say I'm sorry, oh my gosh, I didn't know, geez I'm so sorry, wow.

lmao, who cares where you work, man?
 
If being bigger and richer means they deliver a better product to the consumer (me), then yes, it does.
It doesn't necessarily mean that, though.

For instance, having the money to run a giant supercomputer 24/7/365 for years to improve DLSS means I (as the consumer) get a better upscaling and frame generation product, and a fancy new transformer model.
That sounds nice, but you have to evaluate things on a user level, not just believe the marketing.

Oh my gosh, I'm so sorry for disagreeing with you, you obviously are in the right here, please, please, accept my apology, I will do anything to say I'm sorry, oh my gosh, I didn't know, geez I'm so sorry, wow.
Are you done being a child?

lmao, who cares where you work, man?
I'm guessing about the same number of people who care about you posting a random stupid quote.

We're on a forum. We're here to talk about things with other people who come from different points of view. If you don't like it, I suggest you leave before someone hurts your pretty little feelings.

You're free not to care about anything I say, but the feeling is mutual.
 
It doesn't necessarily mean that, though.
Very true, I even provided an example of when it doesn't. But with Nvidia investing in making games perform better on their GPUs through game-specific optimizations or better upscaling and frame generation, which is what this entire discussion is about, it obviously has.
That sounds nice, but you have to evaluate things on a user level, not just believe the marketing.
I have, and it's very nice. I can run Cyberpunk 2077 with maxed-out graphics and ray tracing on my 3440x1440 monitor with a 12 GB GPU (4070 Super), because the transformer model means I can use more aggressive upscaling while it still looks just as nice and I get the same framerate as the old model did on my former 2560x1440 monitor.
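For a sense of what "more aggressive upscaling" means in raw pixel counts, here is a quick sketch using the commonly cited DLSS per-axis scale factors (roughly 0.667 for Quality, 0.58 for Balanced, 0.5 for Performance; treat the exact values as approximate):

```python
# Approximate internal render resolutions for the two monitors mentioned
# above, using commonly cited DLSS per-axis scale factors (approximate).
modes = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.5}
monitors = {"3440x1440": (3440, 1440), "2560x1440": (2560, 1440)}

for name, (w, h) in monitors.items():
    for mode, scale in modes.items():
        iw, ih = round(w * scale), round(h * scale)
        share = iw * ih / (w * h) * 100
        print(f"{name} {mode:<11}: renders ~{iw}x{ih} "
              f"({share:.0f}% of the output pixels)")
```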
I'm guessing about the same number of people who care about you posting a random stupid quote.
What quote? The Norman Rockwell meme was my own opinion lol, have you never seen that meme before?
 
Very true, I even provided an example of when it doesn't. But with Nvidia investing in making games perform better on their GPUs through game-specific optimizations or better upscaling and frame generation, which is what this entire discussion is about, it obviously has.
You can see it that way. You can also see it as Nvidia bribing game developers to optimise for their technology to gain an unfair advantage. Take your pick.

I have, and it's very nice. I can run Cyberpunk 2077 with maxed-out graphics and ray tracing on my 3440x1440 monitor with a 12 GB GPU (4070 Super), because the transformer model means I can use more aggressive upscaling while it still looks just as nice and I get the same framerate as the old model did on my former 2560x1440 monitor.
That's cool. Personally, I don't like upscaling, be it DLSS or FSR. They both have made big strides, but they'll never be as sharp as native.

What quote? The Norman Rockwell meme was my own opinion lol, have you never seen that meme before?
I'm not big on memes. Regardless, opinions are fine as long as they're not presented as facts, which they're not.
 
You can see it that way. You can also see it as Nvidia bribing game developers to optimise for their technology to gain an unfair advantage. Take your pick.
Again, as the consumer, what I get is better performance. Intel, Ryzen, Radeon, and Nvidia all work with certain devs on flagship games. As long as I get better performance for my money, I don't really care how.
 
Crazy how they can go from Doom Eternal, which has amazing optimization, straight back to the terrible, sloppy optimization so many games come out with now.
This is because people keep buying Nvidia software, ahem, I mean GPUs. Games don't have to be optimized when the devs are pandering to team green. The only thing that matters is the Steam charts and DLSS. If people would wake up and see what they are doing to the market, then we might see games built better.

8 GB is faster unless it runs out of VRAM, which it will at 4K. Extra chips use power budget for no performance advantage unless the VRAM is actually needed.
This is why the 5060 8 GB will sell lots. There are too many 1080p budget gamers; these cards will keep being made as long as people still game at 1080p.
 
Just a note for those that don't like to research before posting: AMD is training FSR 4 on supercomputers 24/7. They weren't doing this before because previous versions of FSR were not AI-based. It's probably one of the main reasons they've been able to catch up in one generation. I wasn't expecting FSR 4 to be better than DLSS 3.x, and yet here we are. Hopefully we'll have FSR 4 in Doom by the time it hits 50% off on Steam.
 