
DOOM with Vulkan Renderer Significantly Faster on AMD GPUs

btarunr

Editor & Senior Moderator
Staff member
Over the weekend, Bethesda shipped the much-awaited update to "DOOM" that lets the game take advantage of the Vulkan API. A performance investigation by ComputerBase.de comparing the game's Vulkan renderer to its default OpenGL renderer reveals that Vulkan benefits AMD GPUs far more than NVIDIA ones. At 2560 x 1440, an AMD Radeon R9 Fury X with Vulkan is 25 percent faster than a GeForce GTX 1070 with Vulkan; with the OpenGL renderer on both GPUs, the R9 Fury X is 15 percent slower than the GTX 1070. Vulkan increases the R9 Fury X's frame rates over OpenGL by a staggering 52 percent! Similar performance trends were noted at 1080p. Find the review in the link below.
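As a quick back-of-the-envelope check (not from the article; it only assumes the three percentages quoted above), the figures also imply how much the GTX 1070 itself gains from Vulkan:

```python
# Back-of-the-envelope check of the percentages quoted above.
# Relative numbers only; baseline is the GTX 1070 under OpenGL = 1.0.
gtx1070_gl = 1.0
furyx_gl = 0.85 * gtx1070_gl    # "15 percent slower" than GTX 1070 under OpenGL
furyx_vk = 1.52 * furyx_gl      # "+52 percent" for the Fury X from Vulkan
gtx1070_vk = furyx_vk / 1.25    # Fury X is "25 percent faster" under Vulkan
print(f"Implied GTX 1070 Vulkan gain: {(gtx1070_vk / gtx1070_gl - 1) * 100:.1f}%")
# a gain of only ~3 percent for the GTX 1070
```

So the three quoted percentages are internally consistent with NVIDIA seeing roughly the same performance under either API.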



View at TechPowerUp Main Site
 
No surprise, as it's the API they developed for the GCN arch.; the question is whether they'll also pay other developers to implement it.

As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...
 
No surprise, as it's the API they developed for the GCN arch.; the question is whether they'll also pay other developers to implement it.

As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...

Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
So why would a dev not just build the game in Vulkan to begin with? There is no negative there.
 
No surprise, as it's the API they developed for the GCN arch.; the question is whether they'll also pay other developers to implement it.

As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...

With that kind of performance boost I would be throwing money and engineers at developers. Let's also not forget that multi-GPU in Vulkan is so much better than in previous APIs as well.
 
This is good competition for NVIDIA which is good for customers. We need more of this.
 
So, GCN cards are faster in Mantle, DirectX 12 (Mantle), and also Vulkan (Mantle).
 
Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
So why would a dev not just build the game in Vulkan to begin with? There is no negative there.

A dev "supported" by NV? Cutting off the branch you are sitting on?
 
I hope that this poor result on Nvidia's GPUs is only because Pascal is still not optimised for Vulkan. It would be highly inappropriate for Khronos to favor AMD in a multiplatform API.
 
Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
So why would a dev not just build the game in Vulkan to begin with? There is no negative there.

Because M$?
 
I'll quote myself (thread):

Both the Fury X and the Nano have 4,096 ALUs and a 4,096-bit bus. That architecture is literally begging for async compute. The gains are certainly impressive! I wonder what Nvidia is planning to do, since Asynchronous Compute and Asynchronous Shader Pipelines are AMD proprietary hardware IP...
Right now, all Nvidia can do is emulate it at the software level. It'll be interesting to see whether that software emulation leads to higher FPS once Nvidia supports "async compute" in Doom.

It's even more incredible how the Vulkan API handles CPU bottlenecks, though. PC Games Hardware tested an i7-5820K that they manually locked to a lower power state @ 1.2 GHz (in tandem with an overclocked Titan X @ 1500/4200). At a resolution of 1,280 x 720 without AA/AF, this setup pulled 89 FPS running on OpenGL and 152 FPS (+71%) running on Vulkan.
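The +71% figure checks out against the raw FPS numbers quoted above (a trivial sanity check, nothing more):

```python
# Sanity check of the PCGH figures quoted above (i7-5820K @ 1.2 GHz, 720p, no AA/AF).
opengl_fps = 89
vulkan_fps = 152
gain_pct = (vulkan_fps - opengl_fps) / opengl_fps * 100
print(f"Vulkan gain over OpenGL: {gain_pct:.0f}%")  # matches the quoted +71%
```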

As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...
Translated from the original German: "Bethesda points out that Asynchronous Compute works, even on AMD graphics cards, only when no anti-aliasing is used or when, as in ComputerBase's benchmarks, TSSAA is used for anti-aliasing."

In short:
Asynchronous Compute only works on AMD's GPUs 1) when no anti-aliasing is being used or 2) when TSSAA is being used instead.
 
On another note, we see that 1000-series Nvidia cards behave like 900-series Nvidia cards.
Who bought a Pascal card because it now supports async? Raise your hands please. Don't be shy.

One more marketing lie from Nvidia. They were going to give Maxwell users Async Compute support through driver updates, right? Instead, what they did in my opinion was keep that software emulation for Pascal and present it as a new feature. They also didn't use "async" as the name of the feature, probably so they don't get sued. Instead they used the term "Dynamic Load Balancing" and let users and tech sites speculate that this is Nvidia's async implementation in Pascal. Finally, Pascal was offering async. Well, even with Nvidia's perfect driver optimizations, async should be offering at least 5% more performance to Pascal cards. It doesn't seem to do anything like that.

Maxwell's biggest marketing disadvantage was the lack of async support, and they couldn't send Pascal into the market with a Founders Edition price tag without at least the illusion that it supports async. People would have been less willing to pay $700 for just a better Maxwell.
Not the first time Nvidia has dressed up specs, knowing that something like this influences the potential buyer's psychology.

Just my opinion of course.
 
If Vulkan is so biased towards AMD cards, I doubt any major developer will risk lower sales just to make games run faster on AMD by offering renderers based solely on Vulkan.
 
If Vulkan is so biased towards AMD cards, I doubt any major developer will risk lower sales just to make games run faster on AMD by offering renderers based solely on Vulkan.
It doesn't make games run slower or look worse on Nvidia cards. We are not talking about PhysX here. Nvidia users lose nothing in visuals or performance.

It's even more incredible how the Vulkan API handles CPU bottlenecks, though. PC Games Hardware tested an i7-5820K that they manually locked to a lower power state @ 1.2 GHz (in tandem with an overclocked Titan X @ 1500/4200). At a resolution of 1,280 x 720 without AA/AF, this setup pulled 89 FPS running on OpenGL and 152 FPS (+71%) running on Vulkan.
In the first presentation of Mantle's advantages over DirectX 11 in AoTS, AMD was using a system with an FX 8350 clocked down to 2GHz.
 
Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
So why would a dev not just build the game in Vulkan to begin with? There is no negative there.
Because like everything else, Nvidia will pay devs not to use Vulkan, as AMD destroys them with those 50% gains.
It's standard NV practice.
 
Lol, Vulkan isn't "biased". AMD GPUs are just more advanced when it comes to more direct GPU access (which Vulkan and DX12 allow); the reason they weren't shining before is that software wasn't taking advantage of any of that yet. Until now. I mean, AMD has had partial async compute since the HD 7000 series and full support since the R9 290X. NVIDIA still doesn't even have partial support in the GTX 1080, from the looks of it. Async is when you can seamlessly blend graphics, audio and physics computation on a single GPU. Something AMD has been aiming at basically the whole time since they created GCN: they support graphics, they added audio on the R9 290X, and they've been working on physics for ages, some with Havok and some with Bullet.

R9 Fury X users don't feel that let down anymore :P In fact, R9 Fury cards in general shine in DX12 and apparently also in Vulkan. While I love my GTX 980, I kinda regret not going with the R9 Fury/Fury X.

Also, for the people saying "async emulation": there is no such thing; either you have a hardware implementation or you don't. You can't emulate a feature whose sole purpose is a massive performance boost through the seamless interleaving of graphics and compute tasks. This is the same as emulating pixel shaders when they became a thing with DirectX 8. Either you had them or you didn't. There were some software emulation techniques, but they were so horrendously slow that it just wasn't feasible to use them for real-time rendering in games. Async is no different. And NVIDIA apparently doesn't have it. Which kinda sucks when you pay 700+ € for a brand-new graphics card...

It doesn't make games run slower or look worse on Nvidia cards. We are not talking about PhysX here. Nvidia users lose nothing in visuals or performance.

In the first presentation of Mantle's advantages over DirectX 11 in AoTS, AMD was using a system with an FX 8350 clocked down to 2GHz.

I guess that's how NVIDIA fanboys comfort themselves after buying a super-expensive GTX 1000-series graphics card (or GTX 900) that sucks against the last generation of AMD cards, which weren't particularly awesome even back then. "Uh oh, it doesn't lose any performance." Well, you also gain none. What's the point then? The whole point of Vulkan/DX12 is to boost performance. When devs cram more effects into games assuming all these gains, your performance will actually tank where AMD's will remain unchanged. How will you defend NVIDIA then?
 
Damage Control, incoming...!!
 
I guess that's how NVIDIA fanboys comfort themselves after buying a super-expensive GTX 1000-series graphics card (or GTX 900) that sucks against the last generation of AMD cards, which weren't particularly awesome even back then. "Uh oh, it doesn't lose any performance." Well, you also gain none. What's the point then? The whole point of Vulkan/DX12 is to boost performance. When devs cram more effects into games assuming all these gains, your performance will actually tank where AMD's will remain unchanged. How will you defend NVIDIA then?


STOP THE PRESS.

FIRST PAGE MATERIAL.

john_ IS DEFENDING NVIDIA.

Are you serious? Read again what I wrote. Damn...
 
Also read again what I wrote. I haven't even directed it at you... XD
 
I hope that this poor result on Nvidia's GPUs is only because Pascal is still not optimised for Vulkan. It would be highly inappropriate for Khronos to favor AMD in a multiplatform API.
Nah, AMD's OpenGL implementation has been subpar for years. That's what we see here: with the dirty work moved out of the drivers and into the hands of capable programmers (id), the cards finally perform as they should.
 
Also read again what I wrote. I haven't even directed it at you... XD

You didn't? And what exactly is this?

How will you defend NVIDIA then?

When you quote someone and are just using their post as an opportunity to make a general comment, don't ask questions that appear to be aimed at them.
 
I'm an nGreedia user, but I love this kind of news. Keep it up, AMD. If more games use Vulkan, they will bitch-smack Nvidia's prices in the face.
 
Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
So why would a dev not just build the game in Vulkan to begin with? There is no negative there.

They are, now. It's just that Doom wasn't developed overnight, and Vulkan only just came out. New games from this point forward will probably be developed in either Vulkan or DX12 from the ground up, but games that have already been in development for years didn't even have the API to start with, so those are done in DX11 or OpenGL.
 