
DOOM Eternal Benchmark Test & Performance Analysis

W1zzard

DOOM Eternal is the long-awaited sequel to the epic DOOM series. There's even more carnage, and gameplay is super fast-paced. Built upon the id Tech 7 engine, visuals are excellent, and graphics performance is outstanding. We tested the game on all modern graphics cards at Full HD, 1440p and 4K Ultra HD.

 
Another game that fills up 8GB of VRAM but doesn't actually use it.
 
Why are the two most popular cards (GTX 1650/Super) missing?
 
Why are the two most popular cards (GTX 1650/Super) missing?
Just not part of my benchmarking routine, didn't think they were that popular. Let me see if I can get some runs in for those.
 
As I see it, there's a 19-21% gap between the RTX 2080 and GTX 1080 Ti. How? It looks like Nvidia is crippling the GTX 1080 Ti through the driver.
Turing shaders can do FP + INT at the same time
 
What are FP and INT?
Floating Point + Integer calculations

so you can do 1.0 + 1.0 = 2.0 at the same time as 1 + 1 = 2, effectively running two operations at the same time, for each GPU core. If game (or driver) code is properly crafted to optimize for that capability you can gain a lot of performance. That's why most recent games run much better on Turing.
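To make that concrete, here's a minimal CUDA-style sketch (purely illustrative, not taken from the game or driver): the integer index/address math and the floating-point shading math are independent, so a Turing SM can issue them side by side instead of serializing them on a single datapath the way Pascal does.

```cuda
// Hypothetical kernel that mixes INT32 index/address math with FP32 shading math.
// On Turing, the dedicated INT32 pipe lets the scheduler issue the integer work
// concurrently with the floating-point work within the same SM partition.
#include <cuda_runtime.h>

__global__ void shade(const float* __restrict__ in, float* __restrict__ out,
                      int width, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // INT32: thread index
    if (i >= n) return;

    int x = i % width;                              // INT32: texel coordinates
    int y = i / width;
    float lum = in[i] * 0.7f + 0.3f;                // FP32: shading arithmetic
    out[y * width + x] = lum * lum;                 // INT32 addressing + FP32 multiply
}
```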
 
The charts show a 5600 XT with 8 GB?
Wondering how the 5500 XT with 8 GB compares to the 4 GB version and the 580 and 590 in performance
 
@W1zzard - Is there an integrated benchmark here? If not, how did you test? Apologies if I missed it while glancing over the article.
 
Floating Point + Integer calculations

so you can do 1.0 + 1.0 = 2.0 at the same time as 1 + 1 = 2, effectively running two operations at the same time, for each GPU core. If game (or driver) code is properly crafted to optimize for that capability you can gain a lot of performance. That's why most recent games run much better on Turing.

Now I see. Thanks. That explains a lot.
 
As I see it, there's a 19-21% gap between the RTX 2080 and GTX 1080 Ti. How? It looks like Nvidia is crippling the GTX 1080 Ti through the driver.
In addition to FP+INT, there is also Variable Rate Shading that idTech definitely supports and very likely uses.

Edit:
There might be other features they are using, Rapid Packed Math (2*FP16 in place of FP32) comes to mind.
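For the Rapid Packed Math part, a minimal sketch of the idea in CUDA terms (illustrative only, not what idTech actually does): two FP16 values are packed into one 32-bit register and processed by a single instruction, doubling throughput wherever half precision is accurate enough.

```cuda
// Hypothetical FP16x2 kernel: one __hmul2 performs two half-precision multiplies.
#include <cuda_fp16.h>

__global__ void scale_fp16x2(const __half2* in, __half2* out, __half2 k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hmul2(in[i], k);  // two FP16 multiplies in one instruction
}
```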
 
"The good thing is that our results show no major loss of performance (due to VRAM) for GTX 1060 3 GB and RTX 570 4 GB. What's surprising though is that RX 5500 XT 4 GB is doing much worse than expected. My best guess is that AMD's VRAM management for Navi isn't as refined yet as that for Polaris. At least the game doesn't crash when VRAM is exceeded, and continues to run fine."

The RX 5500 XT 4 GB, despite supporting PCIe 4.0 (and 3.0), is only physically x8 lanes. On the test setup, it's running PCIe 3.0 x8, whereas the 1060 and 570 are x16 lane cards, so they can run PCIe 3.0 x16.
 
Now I see. Thanks. That explains a lot.
The technical terminology for it is concurrent execution of floating point and integer operations. It is actually only made possible by a hardware change in Turing, by moving the INT32 blocks to be separate. https://hexus.net/tech/reviews/grap...g-architecture-examined-and-explained/?page=2

As support matures for new hardware features, performance will pull further away from the last generation, which leaves a bit of a bitter aftertaste too.
 
The charts show a 5600 XT with 8 GB?
Whoops, fixed

Wondering how the 5500 XT with 8 GB compares to the 4 GB version and the 580 and 590 in performance
Should be roughly between RX 580 and RX 590 I'd say

Variable Rate Shading that idTech definitely supports and very likely uses.
I doubt they would secretly enable that as it would reduce image quality (if only a small bit)

Is there an integrated benchmark here? If not, how did you test?
No integrated benchmark, just play the game, find a good scene and keep playing that.

The RX 5500 XT 4 GB, despite supporting PCIe 4.0 (and 3.0), is only physically x8 lanes. On the test setup, it's running PCIe 3.0 x8, whereas the 1060 and 570 are x16 lane cards, so they can run PCIe 3.0 x16.
Very good point, let me mention that in the review
 
I have a question about the VRAM limit when the game was tested on Ultra Nightmare.

Was the texture quality lowered on 3-4 GB cards and everything else left on max?

I'm playing the game on an RX 570 4 GB at 2560x1080, and with that I'm unable to use High textures because the in-game counter goes over by 11 (yes, 11) MB of VRAM and it tells me to lower stuff, else it won't let me apply the settings.
So now I'm playing with Medium textures. I could lower Shadows to Low and use High textures, but I kind of prefer a more balanced setting (luckily I can't really see a difference between Medium and High, but still).
 
"The good thing is that our results show no major loss of performance (due to VRAM) for GTX 1060 3 GB and RTX 570 4 GB. What's surprising though is that RX 5500 XT 4 GB is doing much worse than expected. My best guess is that AMD's VRAM management for Navi isn't as refined yet as that for Polaris. At least the game doesn't crash when VRAM is exceeded, and continues to run fine."

The RX 5500 XT 4 GB, despite supporting PCIe 4.0 (and 3.0), is only physically x8 lanes. On the test setup, it's running PCIe 3.0 x8, whereas the 1060 and 570 are x16 lane cards, so they can run PCIe 3.0 x16.

Would be nice to see the numbers for the 5500 XT in a Ryzen system, as PCIe 4.0 x8 = PCIe 3.0 x16.

But the test rig is Intel, so hopefully another site runs this on a PCIe 4.0 board.
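A quick back-of-the-envelope check of that equivalence (theoretical link bandwidth per direction only, not measured numbers):

```cuda
// Host-only arithmetic: theoretical PCIe bandwidth per direction.
// PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding; PCIe 4.0 doubles the rate.
#include <cstdio>

int main()
{
    const double gen3_lane = 8.0  * 128.0 / 130.0 / 8.0;  // ~0.985 GB/s per lane
    const double gen4_lane = 16.0 * 128.0 / 130.0 / 8.0;  // ~1.969 GB/s per lane

    printf("PCIe 3.0 x8 : %.1f GB/s\n", gen3_lane * 8.0);   // ~7.9 GB/s
    printf("PCIe 3.0 x16: %.1f GB/s\n", gen3_lane * 16.0);  // ~15.8 GB/s
    printf("PCIe 4.0 x8 : %.1f GB/s\n", gen4_lane * 8.0);   // ~15.8 GB/s, same as 3.0 x16
    return 0;
}
```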
 
@W1zzard Looks like your benchmarking is in line with what I'm getting on my 2080 Super (442.74) and RX 5700 XT (Pro 20.Q1.1). :rockout:

I'm sure if I were on the latest Adrenalin, it would see more FPS from the driver optimizations.
 
Floating Point + Integer calculations

so you can do 1.0 + 1.0 = 2.0 at the same time as 1 + 1 = 2, effectively running two operations at the same time, for each GPU core. If game (or driver) code is properly crafted to optimize for that capability you can gain a lot of performance. That's why most recent games run much better on Turing.

Just for clarification, it's not that for each floating-point operation Turing can also do an integer operation; it's that the two can occur concurrently within the same clock cycle. Before, the scheduling logic was simpler and allowed either floating-point or integer computations within one clock cycle.

To be fair, I reckon the real-world gain in performance from this is modest, because usually after one clock cycle of doing something floating-point related you probably had to compute a set of addresses in the next clock cycle anyway, which is why they never bothered with this until now.
 
Just not part of my benchmarking routine, didn't think they were that popular. Let me see if I can get some runs in for those.
According to the Steam survey, the GTX 1650 alone is more popular than the RX 570, even the RX 580. So it deserves its place in the benchmark chart.

As I see it, there's a 19-21% gap between the RTX 2080 and GTX 1080 Ti. How? It looks like Nvidia is crippling the GTX 1080 Ti through the driver.
Because Turing is the first Nvidia architecture to fully support low-level APIs like D3D12/Vulkan, and as a result it doesn't suffer a performance penalty the way Maxwell/Pascal do.
 
Seems... optimized? For 1920x1200 60 Hz, it looks like my GTX 1080 will be good enough for a long time yet.
 
As I see it, there's a 19-21% gap between the RTX 2080 and GTX 1080 Ti. How? It looks like Nvidia is crippling the GTX 1080 Ti through the driver.
Turing is better at the id Tech engine than Pascal. This is not that surprising; you can see a similar performance difference with DOOM 2016 as well.
 
RX 5700 XT is 10% faster than Radeon VII at 1920x1080.
While the Radeon VII is 5% faster than RX 5700 XT at 3840x2160.

:eek:

(Attached: relative performance charts at 1920x1080 and 3840x2160.)
 