
DOOM Eternal Benchmark Test & Performance Analysis

What are the graphics driver settings, though (Radeon Software and Nvidia Control Panel)?
There is quite a noticeable difference in the results if you tweak one or two settings.
For all my testing I use out-of-the-box settings, as that represents what 99.9% or more of people use
 
For all my testing I use out-of-the-box settings, as that represents what 99.9% or more of people use

Well, I always change the settings; on AMD, historically, setting Texture Filtering Quality to High gives higher performance.
One needs to test the settings and see where the gains are. I always do it, because every frame is precious on lower-end hardware.

But for apples-to-apples comparison the default might be right. :)

 
We don't even need to read CPU/GPU game tests to know which brands will top the charts lolz
 
Is anyone else having trouble with DE flicking their 144Hz monitor down to 60Hz? It doesn’t seem to matter what I do, every time I launch the game in either full screen or borderless window modes it changes my refresh rate to 60Hz.

I’m using the latest Radeon drivers which came out the other day with Windows 10 all patched up.
 
Can't find any Ryzen core-scaling CPU analysis for Doom Eternal, but it's clear the game doesn't need more than a 4-core/8-thread CPU, which is just disappointing :eek:
Doom 2016 didn't run badly on the old FX processors back in 2016.

I am sure a patch will rectify that core count. Next Gen Gaming Consoles WILL utilize more than 8 cores and up to even 16 threads if they can.
 
I'm sure this game is great and I plan to play it, but honestly, I'm more interested in Doom 64 on PC!
 
So it seems the 980 Ti will run more than fine at 1080p. I need to get this game soon, since the last Doom was hella great!
 
So it seems the 980 Ti will run more than fine at 1080p. I need to get this game soon, since the last Doom was hella great!

Not just the 980 Ti; the RX 580 & RX 590 as well.
 
Good to know that an RTX 2060 is plenty for Ultra Nightmare at both 1080p & 1440p.
 
Is anyone else having trouble with DE flicking their 144Hz monitor down to 60Hz? It doesn’t seem to matter what I do, every time I launch the game in either full screen or borderless window modes it changes my refresh rate to 60Hz.

I’m using the latest Radeon drivers which came out the other day with Windows 10 all patched up.

If you bought the game on Steam and if you have the built-in Steam Overlay FPS counter enabled, disable it.
 
If you bought the game on Steam and if you have the built-in Steam Overlay FPS counter enabled, disable it.
Thanks, yes I bought it on Steam. I always have the overlay set to off though, so no such luck for me yet.
 
Good to know that an RTX 2060 is plenty for Ultra Nightmare at both 1080p & 1440p.

There is a VRAM limiter in the game though, so you might not be able to max out the textures.
At least I don't see any way to bypass it; the game won't let you apply the settings unless they're within the VRAM limit, even if they're only a few MB over.


This is why I asked how the game was tested on supposedly 'highest' settings on 3-4GB cards.
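
For what it's worth, the behaviour described above looks like a simple hard budget check on the application side. Here is a minimal sketch in C of what such a gate might look like; every name and number below is invented for illustration and is not id's actual code:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical texture preset with an estimated VRAM cost in MB. */
typedef struct {
    const char *name;
    unsigned    vram_cost_mb;
} TexturePreset;

/* Hard cutoff: a preset is refused if it exceeds the budget at all,
 * even by only a few MB. */
static bool can_apply(const TexturePreset *p, unsigned vram_budget_mb)
{
    return p->vram_cost_mb <= vram_budget_mb;
}

int main(void)
{
    const TexturePreset ultra_nightmare = { "Ultra Nightmare", 4400 };
    const unsigned budget_mb = 4096;  /* e.g. a 4 GB card */

    if (!can_apply(&ultra_nightmare, budget_mb))
        printf("'%s' needs %u MB but the budget is %u MB - blocked.\n",
               ultra_nightmare.name, ultra_nightmare.vram_cost_mb, budget_mb);
    return 0;
}

A check like this would explain why the menu refuses to apply a preset that is only a few MB over the limit: there is no tolerance band, just a straight comparison against the reported budget.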
 
I thought that Vulkan and DX12, being modern APIs that supposedly only require a thin driver, would make GPU-manufacturer babysitting of games through the driver a thing of the past!

But no, it doesn't look like that's completely the case. The down-to-the-metal optimizations should ALL be handled by the game developers, just as the newer APIs promised more access and more responsibility, but it seems that hasn't fully happened; it's still an abstraction, just a better one.
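
To illustrate the "thin driver" idea, here is a minimal sketch using the standard Vulkan C API: the application itself creates the instance, enumerates the physical devices, and later manages queues and memory explicitly, work that older driver stacks did behind the scenes. Nothing here is game-specific, and error handling is trimmed.

#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void)
{
    /* With Vulkan the application, not the driver, spells out what it needs. */
    VkApplicationInfo app = {0};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "thin-driver-demo";
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ci = {0};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    /* Device selection, queue families, memory heaps and synchronization are
     * the application's responsibility under Vulkan; that is what the
     * "thin driver" promise means in practice. */
    uint32_t gpu_count = 0;
    vkEnumeratePhysicalDevices(instance, &gpu_count, NULL);
    printf("Vulkan-capable GPUs found: %u\n", gpu_count);

    vkDestroyInstance(instance, NULL);
    return 0;
}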

It's so weird when these GPU drivers contain fixes for some edge case in some game, like "corruption is seen in XYZ game when opening a menu". Why on earth would that ever be a driver problem? If it is, the approach is wrong; and if it isn't, the fix shouldn't live in the driver but in whichever component the root cause actually sits in. Most likely the game is doing something wrong, but with the current system it always gets framed as "the driver isn't doing enough", which misses the point; nobody asks whether the driver should be doing this at all. Even when everyone knows it's the game's fault, we choose to fix it in the driver, which feels like a cheap way to get past the problem. The devs then become less motivated because they expect the GPU manufacturer to fix it, and the system is built so that for many issues the driver is the only place a fix can go. It's weird on so many levels: the driver shouldn't carry such wide-ranging responsibilities, and there are no industry rules about what can and cannot go into a driver to keep things simple. GPU drivers are among the biggest drivers there are; just look at how many megabytes, that's MEGABYTES, the DLLs weigh in at: 20-40 MB, a freak show compared to everything else.
What API calls (or whatever else) is the game issuing that trigger the bug in the first place? GAMES ARE THE BIG BULK; games are the cargo of weight and complexity. It should always be the game checking its compatibility with the OS/API/driver/HW, not the other way around. Is the cargo strapped into the airplane correctly? If not, fix the cargo; don't add another engine, extend the wing, or bolt on counterweights to balance a badly loaded hold. If it's compatible, it JUST WORKS. If the GPU freezes or crashes, it should never be the API/OS/driver's fault, because those layers should be designed to be as reliable, smooth, and simple as possible; the only place for nitty-gritty down-to-the-metal optimization is the game itself, and nowhere else.
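
This is also why Vulkan ships validation layers: incorrect API usage is meant to be caught on the application side during development rather than silently tolerated, or patched around, inside the driver. A short generic sketch of enabling the standard Khronos validation layer (not tied to any particular game):

#include <vulkan/vulkan.h>

/* Create an instance with the standard validation layer enabled; during
 * development, bad API usage is reported back to the application instead
 * of being worked around inside the driver. */
VkInstance create_debug_instance(void)
{
    static const char *layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkInstanceCreateInfo ci = {0};
    ci.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.enabledLayerCount   = 1;
    ci.ppEnabledLayerNames = layers;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&ci, NULL, &instance);  /* check the VkResult in real code */
    return instance;
}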

Obviously one company can't serve 1,000 games out there to their fullest potential; this whole driver-babysitting model is a fundamentally sub-optimal approach for practical end-user usage.

The industry keeps chugging along with this terrible method of piling responsibility onto the driver, babysitting each and every game and having to "support" every new release. Give me a break: the game supports the API, the game supports the OS, the game supports the GPU, the GPU supports the API, and the driver translates the game's instructions through the API into GPU instructions. It ought to JUST WORK, right?! Why not? Why so much fiddling and diddling with the driver, the mystery middleman? Why so much drama around the transporter? From an outside, practical point of view this makes no sense, but sometimes it takes that kind of distant view rather than expertise in the details; the individual experts may not see or realize it and just go along with it as if that's how it's supposed to be. The transport/conversion layer should always be smooth, fast, reliable, and simple.


It could also be a GPU hardware problem: if a game or two happens to use some pattern of commands that causes the GPU to produce corrupted output... guess what, IT'S THE GPU'S FAULT, not the driver's. Leave the driver alone and fix your broken hardware, you cheapskates. Of course, in the practical, economic world they would rather poke the driver to fix it, and users wouldn't want to replace a GPU they bought recently, but that's just the reality.
If such things were to happen, it would be a failure of quality assurance and testing, of not testing enough combinations of commands fed into the GPU; with today's AI and supercomputer automation that's practically a non-issue to test for, so such fixes would be very rare.

Continuing: nothing else requires such an insane amount of driver maintenance as the graphics department. This has been plaguing the field, and I think it's why there's so much drama around benchmarks and performance.

The only things a thin GPU-manufacturer driver would handle are general concerns such as HW/OS/API compatibility, so that fullscreen modes, super resolution, scaling, support for newer API versions and other infrastructural things keep working as the OS and hardware evolve. You would need to update it much less frequently than now; you would update it to support a new API version and that's it, not each new GAME!!! :/ And that update should be enough for all new games using the updated API version!
If it were done properly and tested right, there wouldn't be much room for bugs anyway, and the bugs that did exist wouldn't affect specific games in such specific ways; these general, larger bugs would also be very noticeable, affect a lot of people, and get traced down and fixed relatively fast. The driver should never go into nitty-gritty, extremely game-specific details; that is what turns this from a GPU war into a DRIVER WAR!!!



Do you have to update the mouse driver to make the mouse "support" a game that runs over 300 FPS?
Do you have to optimize the mouse driver when you choose a new mouse pointer style?
Do you have to optimize the keyboard driver so you can press 10x more keys in a highly competitive FPS game?
Do you have to upgrade the CPU driver when you load a new program that uses modern instructions?
Do you have to update the network driver to support a new Cat.7 Ethernet cable?
Do you have to

No, you don't! Everywhere else, IT JUST WORKS for what it was designed for, unless the driver is simply badly made by low-paid devs, usually on cheap peripherals from Asia.
 
It's so weird when there are fixes in the driver for some edge case in some game, like "corruption is seen in XYZ game when opening a menu". Why is that a driver problem? It should be a game problem, or the OS's, or whatever it is; a thin driver shouldn't have that kind of responsibility IMO. And because GPU manufacturers take it upon themselves, everyone sits and waits for their fixes, and obviously one company can't serve 1,000 games out there to their fullest potential. This whole driver babysitting is a fundamentally wrong approach.

Yes, but it would give AMD or Nvidia a bad rep if a newly released game doesn't function properly, wouldn't it? I mean, you read "the 5700 XT drivers blah blah" everywhere; no sir, it is the game that was done badly, where the drivers end up having to fix issues that were caused by the game in the first place.

Vulkan, Mantle, DX12: it's nothing new, really. Back in the C64 days they were already applying 'tactics' to get the utter best out of that tiny base hardware.


With all the computational power a GPU such as the 580/590 has, you'd think you could do even better than what Eternal now manages at Ultra / WQHD or so. It all depends on how far a programmer is willing to go. But they don't, really, because they have to take so many different PC configurations into account just to make the game run in the first place.

Console gaming could actually look better than PC in a way, because consoles have a fixed set of hardware, and to get the best out of it you have to program as if you're talking to the chip itself. This is why Vulkan is such a wonderful concept: you can simply extract more out of the hardware, and AMD chips tend to perform best with it.


The PS2 only had a 4 MB, 150 MHz GPU, but once devs put the work in, they really extracted whatever was possible from such a tiny, 32 MB console.


PS3, same story: a G70-based GPU, aka a 7800 or so. But once devs got down into it, they really pulled out the potential the GPU had.

Bottom line: game devs have schedules, targets, and timespans in which there's profit to be made. So they usually go for a generic approach, leaving lots of potential behind or to be patched in later. PUBG was a good example: it ran like crap at the beginning and runs perfectly now.
 
Can't find any Ryzen core-scaling CPU analysis for Doom Eternal, but it's clear the game doesn't need more than a 4-core/8-thread CPU, which is just disappointing :eek:
Why is it "disappointing" for a developer to be so good at coding that they can hit 200fps in 2020 games with just a 4/8 CPU? A genuinely well optimised game is one that "does the most with the least", not one which has 16x threads filled with cr*ppy code or because the publisher wanted 10x layers of CPU-heavy virtualisation based DRM in. I have far more respect for id Software who produce amazingly well optimized 200fps Vulkan games, seem to consistently get +2.0-2.5x fps per core and end up universally GPU bottlenecked than I do certain other lazy developers like Ubisoft who can't even hit half that frame-rate given twice the horsepower even when reusing the same engine they're supposed to have a decade's worth of 'experience' with...
 
Why is it "disappointing" for a developer to be so good at coding that they can hit 200fps in 2020 games with just a 4/8 CPU? A genuinely well optimised game is one that "does the most with the least", not one which has 16x threads filled with cr*ppy code or because the publisher wanted 10x layers of CPU-heavy virtualisation based DRM in. I have far more respect for id Software who produce amazingly well optimized 200fps Vulkan games, seem to consistently get +2.0-2.5x fps per core and end up universally GPU bottlenecked than I do certain other lazy developers like Ubisoft who can't even hit half that frame-rate given twice the horsepower even when reusing the same engine they're supposed to have a decade's worth of 'experience' with...

Because more cores being used means more realism, more AI, more physics.
Because the mainstream is at least 6-core/12-thread today, with many people already rocking 12-core/24-thread and 16-core/32-thread.
 
Because more cores being used means more realism, more AI, more physics.
Because the mainstream is at least 6-core/12-thread today, with many people already rocking 12-core/24-thread and 16-core/32-thread.

Point is: with a game like Doom there's so much potential to be extracted from all those cores, and a race over who has the most cores, yet once it's properly optimized you can get away with a 4-core/8-thread and still get 200 FPS in-game.

This is why Mantle was created in the first place.
 
Point is: with a game like Doom there's so much potential to be extracted from all those cores, and a race over who has the most cores, yet once it's properly optimized you can get away with a 4-core/8-thread and still get 200 FPS in-game.

This is why Mantle was created in the first place.

With outdated graphics. This engine is like 5-year-old technology.
 
TechPowerUp remains my go-to for benchmarks; TechPowerUp's benchmarks are always right on the mark in terms of FPS. Also, congratulations to the id Tech team for making an amazing gem of a series, and to other game developers: you guys need to take some notes, because this is how you make a game for the PC platform. 45 fps at 4K max settings on a 1660 Ti is amazing.

I honestly can't believe how well both Eternal and Doom perform; they are both excellent ports. Developers really need to start taking notes from these guys. It just goes to show you don't need expensive hardware to pull in good numbers. If a PC port is in working condition inside and out, it should perform well on a variety of hardware.
 
Why don't the GPU manufacturers pay or send people over to game devs to get it right in the first place then? :p
 
Well, I always change the settings; on AMD, historically, setting Texture Filtering Quality to High gives higher performance.
One needs to test the settings and see where the gains are. I always do it, because every frame is precious on lower-end hardware.

But for apples-to-apples comparison the default might be right. :)
Apples to apples. Now you're thinking! Reviews can't cover settings for every user; most leave these things at default. Me, I actually set texture filtering to High Quality from High (Nvidia), but that performance impact is negligible anyway.
 
Because more cores being used means more realism, more AI, more physics.
It doesn't, though. It has the potential to mean that, but hardly any devs code for it; modern gaming is the same "Lowest Common Denominator" it's been since PC exclusives turned into "console-first cross-platforms" in the 2000s, and half the time it's a case of "the more you give them, the more they waste", variable-quality ports, or simply conflicting priorities. Even today, ask people who've been gaming on PC since the 90s which games are memorable for great AI (or cleverly done scripts that spoof the feel of enemies doing clever stuff while you play) and you still hear "FEAR 1" or "Half-Life 2" more than the latest titles. Even No One Lives Forever (2000, same LithTech engine family as FEAR) had enemies flipping over tables and hiding behind them, reacting to lights being turned on in adjacent rooms or doors left open, tracking your footprints in the snow, etc., on one 1 GHz Pentium 3 core. Thief (1998) had 11 visibility states and some of the most accurate sound-propagation physics in PC gaming history. Thief (2014), in comparison, was dumbed down to 3 visibility states and half-sized levels despite having 128x more RAM to play with, plus a super buggy audio engine. Fully destructible environment physics? Red Faction (2001) did that on a P3 with 256 MB RAM...

Likewise, the real bottleneck to "more realism", like having 1,000 unique NPCs each with their own personality, isn't CPU; it's development time and budget: paying for 1,000 voice actors, quadruple the writers, more mo-cap actors (to avoid having a 'crowd of clones' all moving the same way at once), etc., versus 10% of the effort bringing in +200% more profit by churning out skins, lootboxes, pay2win "Booster Packs", DLC, and so on. This comment isn't aimed at you personally, but people who've just bought themselves a new 8C/16T toy to play with, thinking it'll magic up some super AI out of thin air to fill up those 50-75% idling cores, are being staggeringly naive about what really drives game development. We're not short of CPU horsepower; we're short of quality, non-lazy developers, and all the Threadrippers in the world won't cure that...

As for Doom Eternal, if even a 4C/8T CPU hits a GPU bottleneck, it may well mean that an 8C/16T could potentially get more fps, but you simply can't test for that until future GPUs twice as powerful appear. 200 fps on lower-end hardware is literally the exact opposite of a "poorly optimised" game, though, and a lot of people who've just bought an enthusiast CPU fall into the trap of thinking that "a rising tide lifts all boats" (a game so efficient it simply doesn't need more cores to hit 144 Hz) is somehow a "bad thing", simply because it doesn't "demo" their new purchase against older hardware that well to other enthusiasts.
 
I am sure a patch will rectify that core count. Next Gen Gaming Consoles WILL utilize more than 8 cores and up to even 16 threads if they can.
How can next-gen consoles utilize "more than 8 cores" if they only come with eight cores? :cool:

Because more cores being used means more realism, more AI, more physics.
Because the mainstream is at least 6-core/12-thread today, with many people already rocking 12-core/24-thread and 16-core/32-thread.
I don't think mainstream means what you think it means, and more cores do not equal more realism, AI, or physics.
 