Wednesday, July 13th 2016

DOOM with Vulkan Renderer Significantly Faster on AMD GPUs

Over the weekend, Bethesda shipped the much-awaited update to "DOOM" which can now take advantage of the Vulkan API. A performance investigation by ComputerBase.de comparing the game's Vulkan renderer to its default OpenGL renderer reveals that Vulkan benefits AMD GPUs far more than NVIDIA ones. At 2560 x 1440, an AMD Radeon R9 Fury X with Vulkan is 25 percent faster than a GeForce GTX 1070 with Vulkan. With the OpenGL renderer on both GPUs, the R9 Fury X is 15 percent slower than the GTX 1070. Vulkan increases the R9 Fury X frame-rates over OpenGL by a staggering 52 percent! Similar performance trends were noted at 1080p. Find the review in the link below.
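The percentage comparisons above all fall out of simple frame-rate ratios. A quick sketch (the OpenGL FPS value here is hypothetical, chosen only to illustrate the arithmetic; the article reports relative deltas, not raw FPS):

```python
# Relative performance deltas. The baseline FPS value is hypothetical and
# chosen only to demonstrate the arithmetic behind the reported percentages.
def pct_faster(a_fps, b_fps):
    """How much faster A is than B, in percent."""
    return (a_fps / b_fps - 1.0) * 100.0

fury_x_opengl = 66.0                    # hypothetical baseline
fury_x_vulkan = fury_x_opengl * 1.52    # the reported +52% Vulkan uplift

print(round(pct_faster(fury_x_vulkan, fury_x_opengl)))  # 52
print(round(pct_faster(125.0, 100.0)))                  # 25, e.g. Fury X vs GTX 1070
```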
Source: ComputerBase.de
Add your own comment

200 Comments on DOOM with Vulkan Renderer Significantly Faster on AMD GPUs

#2
laszlo
No surprise, as it's the API they developed for the GCN architecture; the question is, will they pay other developers to implement it as well?

As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...
Posted on Reply
#3
ZoneDymo
laszlo
No surprise, as it's the API they developed for the GCN architecture; the question is, will they pay other developers to implement it as well?

As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...
Well what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
Soooo why would a dev not just build the game in Vulkan to begin with? There's no negative there.
Posted on Reply
#4
evernessince
laszlo
No surprise, as it's the API they developed for the GCN architecture; the question is, will they pay other developers to implement it as well?

As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...
With those kinds of performance boosts I would be throwing money and engineers at developers. Let's also not forget that multi-GPU in Vulkan is much better than in previous APIs as well.
Posted on Reply
#5
qubit
Overclocked quantum bit
This is good competition for NVIDIA which is good for customers. We need more of this.
Posted on Reply
#6
john_
So, GCN cards are faster in Mantle, in DirectX 12 (Mantle), and also in Vulkan (Mantle).
Posted on Reply
#7
laszlo
ZoneDymo
Well what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
Soooo why would a dev not just build the game in Vulkan to begin with? There's no negative there.
A dev "supported" by NV? That would be cutting off the branch you're sitting on.
Posted on Reply
#8
Ubersonic
laszlo
the question is, will they pay other developers to implement it as well?
I doubt they had to pay id; historically id has always been an OpenGL developer, and Vulkan (previously called glNext) is the successor to OpenGL 4.
Posted on Reply
#9
nienorgt
I hope this poor result on Nvidia's GPUs is only because Pascal is still not optimised for Vulkan. It would be highly inappropriate for Khronos to favor AMD in a multiplatform API.
Posted on Reply
#10
the54thvoid
ZoneDymo
Well what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
Soooo why would a dev not just build the game in Vulkan to begin with? There's no negative there.
Because M$?
Posted on Reply
#11
Dethroy
I'll quote myself (thread):

Dethroy
Both the Fury X and the Nano have 4,096 ALUs and a 4,096-bit bus. That architecture is literally begging for async compute. The gains are certainly impressive! I wonder what Nvidia is planning to do, since Asynchronous Compute and Asynchronous Shader Pipelines are AMD proprietary hardware IP...
Right now, all Nvidia can do is emulate it at the software level. It'll be interesting to see whether that software emulation will lead to higher FPS once Nvidia supports "async compute" in Doom.

It's even more incredible how the Vulkan API handles CPU bottlenecks, though. PC Games Hardware tested an i7-5820K that they manually forced into a lower power state @ 1.2GHz (in tandem with an overclocked Titan X @ 1500/4200). At a resolution of 1,280 x 720 w/o AA/AF, this setup pulled 89 FPS running on OpenGL and 152 FPS (+71%) running on Vulkan.
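The +71% figure falls straight out of the two frame rates quoted from the PC Games Hardware test; in per-frame terms the saving on that throttled CPU is even clearer:

```python
# PCGH numbers quoted above: i7-5820K throttled to 1.2 GHz, 720p, no AA/AF.
opengl_fps, vulkan_fps = 89, 152

uplift_pct = (vulkan_fps / opengl_fps - 1) * 100
print(f"uplift: {uplift_pct:.0f}%")            # ~71%

# Per-frame budget in milliseconds: lower CPU/driver overhead per frame
# submission is where Vulkan recovers time on a slow CPU.
opengl_ms = 1000 / opengl_fps                  # ~11.2 ms per frame
vulkan_ms = 1000 / vulkan_fps                  # ~6.6 ms per frame
print(f"saved per frame: {opengl_ms - vulkan_ms:.2f} ms")
```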
laszlo
As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...
Translated from the original German:
Bethesda points out that Asynchronous Compute works on AMD graphics cards only when either no anti-aliasing is used, or when TSSAA is used for anti-aliasing (as in ComputerBase's benchmarks).
Posted on Reply
#12
john_
On another note, we see that 1000 series Nvidia cards behave like 900 series Nvidia cards.
Who bought a Pascal card because it now supports Async? Raise your hands, please. Don't be shy.

One more marketing lie from Nvidia. They were going to give Maxwell users Async Compute support through driver updates, right? Instead, what they did in my opinion was keep that software emulation for Pascal and present it as a new feature. They also didn't use "async" as the name of that feature, probably so they don't get sued. Instead they used the term "Dynamic Load Balancing" and let users and tech sites speculate that this is Nvidia's async implementation in Pascal. Finally, Pascal was offering async. Well, even with Nvidia's perfect driver optimizations, async should be offering at least 5% more performance to Pascal cards. It doesn't seem to be doing anything like that.

Maxwell's biggest marketing disadvantage was the lack of async support, and they couldn't send Pascal into the market with a Founders Edition price tag without at least the illusion that it supports async. People would have been less willing to pay $700 for just a better Maxwell.
This isn't the first time Nvidia has fudged specs, knowing that something like this influences a potential buyer's psychology.

Just my opinion of course.
Posted on Reply
#13
Parn
If Vulkan is so biased towards AMD cards, I doubt any major developers will risk lower sales just to make games run faster on AMD by offering renderers based solely on Vulkan.
Posted on Reply
#14
john_
Parn
If Vulkan is so biased towards AMD cards, I doubt any major developers will risk lower sales just to make games run faster on AMD by offering renderers based solely on Vulkan.
It doesn't make games run slower or look worse on Nvidia cards. We are not talking about PhysX here. Nvidia users lose nothing in visuals or performance.

Dethroy
It's even more incredible how the Vulkan API handles CPU bottlenecks, though. PC Games Hardware tested an i7-5820K that they manually forced into a lower power state @ 1.2GHz (in tandem with an overclocked Titan X @ 1500/4200). At a resolution of 1,280 x 720 w/o AA/AF, this setup pulled 89 FPS running on OpenGL and 152 FPS (+71%) running on Vulkan.
In the first presentation of Mantle's advantages over DirectX 11 in AoTS, AMD was using a system with an FX 8350 clocked down to 2GHz.
Posted on Reply
#15
ShurikN
ZoneDymo
Well what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
Soooo why would a dev not just build the game in Vulkan to begin with? There's no negative there.
Because, like everything else, Nvidia will pay devs not to use Vulkan, since AMD destroys them with those 50% gains.
It's standard NV practice.
Posted on Reply
#16
RejZoR
Lol, Vulkan isn't "biased". AMD GPUs are just more advanced when it comes to the more direct GPU access that Vulkan and DX12 allow; the reason they weren't shining is that software wasn't taking advantage of all that yet. Till now. I mean, AMD has had partial async compute since the HD 7000 series and full support since the R9 290X. NVIDIA still doesn't have even partial support in the GTX 1080, from the looks of it. Async is when you can seamlessly blend graphics, audio and physics computation on a single GPU. Something AMD has been aiming at basically the whole time since they created GCN: they support graphics, they added audio on the R9 290X, and they've been working on physics for ages, some with Havok and some with Bullet.

R9 Fury X users don't feel that let down anymore :P In fact, R9 Fury cards in general shine in DX12 and apparently also in Vulkan. While I love my GTX 980, I kinda regret not going with an R9 Fury/Fury X.

Also, for people saying "async emulation", there is no such thing; either you have the hardware implementation or you don't. You can't emulate a feature whose sole purpose is a massive performance boost through seamless interleaving of graphics and compute tasks. This is the same as emulation of pixel shaders when they became a thing with DirectX 8. Either you had them or you didn't. There were some software emulation techniques, but they were so horrendously slow it just wasn't feasible to use them for real-time rendering in games. Async is no different. And NVIDIA apparently doesn't have it. Which kinda sucks when you pay 700+ € for a brand new graphics card...
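The scheduling idea behind async compute can be shown with a toy model (plain Python threads standing in for GPU queues; a conceptual analogy only, not real GPU code): two independent workloads submitted back-to-back on one queue take the sum of their times, while overlapping them lets each fill the other's idle gaps.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Toy model only: sleep() stands in for GPU work that leaves some units idle.
def graphics_pass():
    time.sleep(0.2)   # e.g. shadow-map rendering, rasterizer-bound

def compute_pass():
    time.sleep(0.2)   # e.g. post-processing, ALU-bound

# Serial submission: compute waits for graphics to finish.
t0 = time.perf_counter()
graphics_pass(); compute_pass()
serial = time.perf_counter() - t0

# "Async" submission: both in flight at once, overlapping each other.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    pool.submit(graphics_pass)
    pool.submit(compute_pass)
overlapped = time.perf_counter() - t0

print(f"serial {serial:.2f}s vs overlapped {overlapped:.2f}s")
```

The overlapped run finishes in roughly the time of the longer workload rather than the sum of both, which is exactly the gain a hardware async scheduler offers and a software shim can't fake.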

john_
It doesn't make games run slower or look worse on Nvidia cards. We are not talking about PhysX here. Nvidia users lose nothing in visuals or performance.

In the first presentation of Mantle's advantages over DirectX 11 in AoTS, AMD was using a system with an FX 8350 clocked down to 2GHz.
I guess that's how NVIDIA fanboys comfort themselves after buying a super expensive GTX 1000 series graphics card (or GTX 900) that sucks against the last generation of AMD cards, which weren't particularly awesome even back then. "Uh oh, it doesn't lose any performance." Well, you also gain none. What's the point then? The whole point of Vulkan/DX12 is to boost performance. When devs cram more effects into games counting on all these gains, your performance will actually tank while AMD's will remain unchanged. How will you defend NVIDIA then?
Posted on Reply
#18
fynxer
Parn
If Vulkan is so biased towards AMD cards, I doubt any major developers will risk lower sales just to make games run faster on AMD by offering renderers based solely on Vulkan.
Posted on Reply
#19
john_
RejZoR
I guess that's how NVIDIA fanboys comfort themselves after buying a super expensive GTX 1000 series graphics card (or GTX 900) that sucks against the last generation of AMD cards, which weren't particularly awesome even back then. "Uh oh, it doesn't lose any performance." Well, you also gain none. What's the point then? The whole point of Vulkan/DX12 is to boost performance. When devs cram more effects into games counting on all these gains, your performance will actually tank while AMD's will remain unchanged. How will you defend NVIDIA then?
STOP THE PRESS.

FIRST PAGE MATERIAL.

john_ IS DEFENDING NVIDIA.

Are you serious? Read again what I wrote. Damn...
Posted on Reply
#20
RejZoR
Also, read again what I wrote. I didn't even direct it at you... XD
Posted on Reply
#21
bug
nienorgt
I hope this poor result on Nvidia's GPUs is only because Pascal is still not optimised for Vulkan. It would be highly inappropriate for Khronos to favor AMD in a multiplatform API.
Nah, AMD's implementation of OpenGL has been subpar for years. That's what we see here: with the dirty work taken out of the drivers and put into the hands of capable programmers (id), the cards finally work as they should.
Posted on Reply
#22
john_
RejZoR
Also read again what I wrote. I haven't even directed it at you... XD
You didn't? Then what exactly is this?

RejZoR
How will you defend NVIDIA then?
When you quote someone and are just using their post as an opportunity to make a general comment, don't ask questions that appear to be aimed at them.
Posted on Reply
#23
Prima.Vera
I'm an nGreedia user, but I love this kind of news. Keep it up, AMD. If more games use Vulkan, they'll bitch-smack Nvidia's prices in the face.
Posted on Reply
#24
deemon
ZoneDymo
Well what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
Soooo why would a dev not just build the game in Vulkan to begin with? There's no negative there.
They are, now. It's just that Doom didn't get developed overnight, and Vulkan only just came out. New games from this point forward will probably be developed on either Vulkan or DX12 from the ground up, but games that have already been in development for years didn't even have the API to start with, so those are done in DX11 or OpenGL.
Posted on Reply
#25
Dethroy
Aside from the obvious gains on AMD's side one can observe something else as well...
According to this test done by PC Games Hardware (updated for the 4th time now), a GTX 980 Ti pulls ahead of a GTX 1070 by ~20% (averaged over the 4 resolutions tested) thanks to Vulkan.

Vulkan really does utilize architectural advantages way better than the OpenGL implementation does. I wonder what kind of performance gains one would see with Vega...

It will be interesting to see if Pascal's faster pre-emption and its dynamic load balancing (which is Nvidia's current answer to async compute) achieve similar results once id, with the help of Nvidia, is done implementing it.
Posted on Reply