
DOOM with Vulkan Renderer Significantly Faster on AMD GPUs

Isn't that just texture streaming? That happens on both brands and has been a "feature" of modern titles for quite some time now.

Edit: Just downloaded the demo. Ran fine on my 290X 4GB. Didn't notice any streaming issues. Probably just a game patch or driver update needed for NV cards.
 
This dude's video explains where the extra performance comes from:


The relevant part starts at around 8:46.

The reason AMD cards are gaining so much is not so much that Vulkan is so much better, but rather that AMD's OpenGL driver is so much worse than Nvidia's: look at the CPU overhead in both camps under OpenGL. That explains why Nvidia's gains are so much lower: there's much less room for improvement on Nvidia.
That's because Nvidia's OpenGL path is non-standard, and developers foolishly relied on that hacked method.
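The "room for improvement" point can be sanity-checked with a toy model (all per-draw costs below are invented for illustration, not measured driver figures): if frame time is roughly the max of CPU submission time and GPU render time, a driver that already has low per-draw overhead gains little from a thinner API, while a heavy one gains a lot.

```python
# Toy model: frame time = max(CPU submission time, GPU render time).
# Per-draw CPU costs are made-up numbers, purely for illustration.

def fps(draws, per_draw_us, gpu_ms):
    cpu_ms = draws * per_draw_us / 1000.0   # CPU cost of submitting the frame
    return 1000.0 / max(cpu_ms, gpu_ms)     # the slower side limits FPS

draws, gpu_ms = 3000, 7.0
ogl_slow = fps(draws, 5.0, gpu_ms)  # inefficient OpenGL driver: CPU-bound
ogl_fast = fps(draws, 2.0, gpu_ms)  # efficient OpenGL driver: already GPU-bound
vulkan   = fps(draws, 1.0, gpu_ms)  # thin Vulkan path: GPU-bound

print(round(ogl_slow), round(ogl_fast), round(vulkan))  # 67 143 143
```

In this made-up case the card with the slow OpenGL driver more than doubles its FPS moving to a thin API, while the card with the fast OpenGL driver gains nothing, since it was already GPU-limited.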
 
Has async been implemented in Doom Vulkan yet?
 
Has async been implemented in Doom Vulkan yet?
Already available when turning TSAA on, as pointed out by some readers with AMD graphics cards (R9 380, Fury, RX 480...).
Nvidia cards use preemption to compensate for the lack of asynchronous compute in their hardware.
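The scheduling difference between the two approaches can be sketched with a simple timeline model (the millisecond figures and idle fraction below are invented, just to show the mechanism):

```python
# Toy timeline model of one frame's graphics + compute work.
# All numbers are invented purely to illustrate the scheduling difference.

graphics_ms = 10.0   # graphics queue work for the frame
compute_ms = 3.0     # compute work (e.g. a post-process pass like TSAA)
idle_fraction = 0.4  # fraction of the graphics timeline where shader units sit idle

# Serialized (or preemption-style switching): compute runs outside graphics.
serial_ms = graphics_ms + compute_ms

# Async compute: compute fills idle gaps in the graphics workload concurrently,
# so up to that idle time is hidden for free.
hidden = min(compute_ms, graphics_ms * idle_fraction)
async_ms = graphics_ms + (compute_ms - hidden)

print(serial_ms, async_ms)  # 13.0 10.0
```

In this sketch the async path hides the entire compute pass inside graphics idle time, which is the effect people see when enabling TSAA on GCN.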
 
Already available when turning TSAA on, as pointed out by some readers with AMD graphics cards (R9 380, Fury, RX 480...).
Nvidia cards use preemption to compensate for the lack of asynchronous compute in their hardware.

Compensate is a strong word. More like marketing speak.

I highly doubt async compute is being properly leveraged yet, either, Vulkan being a pretty new API and async support being even newer.
 
Compensate is a strong word. More like marketing speak.

I highly doubt async compute is being properly leveraged yet, either, Vulkan being a pretty new API and async support being even newer.

Also, developers that offload the PC version to a third party are less likely to implement those features. With the first wave we are seeing stripped-down versions and DX11 regressions. Ports being ports.
 
Compensate is a strong word. More like marketing speak.

I highly doubt async compute is being properly leveraged yet, either, Vulkan being a pretty new API and async support being even newer.
Read Anandtech's GTX 1080 review. It explains in depth the state of async compute and why it currently makes more sense for AMD's hardware.
 
Read Anandtech's GTX 1080 review. It explains in depth the state of async compute and why it currently makes more sense for AMD's hardware.

I can only handle so much shill shrimpi (the intel paychecks are just too much).
Nvidia would benefit, but they wanted to cut power usage for today's perf/watt.
 
I can only handle so much shill shrimpi (the intel paychecks are just too much).
Nvidia would benefit, but they wanted to cut power usage for today's perf/watt.
Well, you couldn't be more wrong, but if you won't be bothered reading, there's nothing else I can add.
 
I just ran Talos Principle benchmark on my R9 390 and 16.7.3 drivers (ultra, medium AA, 1920x1200):
Vulkan: 80.1 fps
DX11: 96.6 fps

It doesn't have the massive boost on GCN cards that Doom got--at least not yet. It is still beta.
 
It doesn't have the massive boost on GCN cards that Doom got--at least not yet. It is still beta.
Hm, getting closer... too bad it can't profit from CPU-bound scenarios, since there are none here.
 
Since this isn't in this thread yet, I'll just add it:
[image: JF7ngP5.png]
 
Since this isn't in this thread yet, I'll just add it:
[image: JF7ngP5.png]
Yeah, AMD's OpenGL implementation has sucked for ages. It's one of the reasons I've stuck with Nvidia.
 
Since this isn't in this thread yet, I'll just add it:
[image: JF7ngP5.png]
Wait a minute ... is this just Doom-specific, or is it like this in general? Are you telling me AMD's Vulkan implementation requires a beefy CPU to really shine? Talk about negating all the benefits of Vulkan's low CPU overhead.
 
Wait a minute ... is this just Doom-specific, or is it like this in general? Are you telling me AMD's Vulkan implementation requires a beefy CPU to really shine? Talk about negating all the benefits of Vulkan's low CPU overhead.
It's something to remember, and it definitely needs more testing in the future with more games, to see whether it's isolated to Doom or could be a recurring issue in other titles. I can't remember where I read it since it was years ago, but there was a story about AMD GPUs/drivers producing higher CPU load in the same game than an Nvidia card did, which could explain this if the issue turns out to be real.

links to couple articles they had:
http://www.hardwareunboxed.com/gtx-1060-vs-rx-480-in-6-year-old-amd-and-intel-computers/
http://www.hardwareunboxed.com/amd-vs-nvidia-low-level-api-performance-what-exactly-is-going-on/
 
Wait a minute ... is this just Doom-specific, or is it like this in general? Are you telling me AMD's Vulkan implementation requires a beefy CPU to really shine? Talk about negating all the benefits of Vulkan's low CPU overhead.

The reason you are seeing the delta between the brand new Intel CPU and the ancient one is because the more frames your GPU outputs the more CPU power is required. This is why, if you haven't noticed, benchmarkers use the best CPU possible. If they didn't the frame-rate would cap.

All we can draw from that benchmark is that it's pretty amazing that an ancient CPU can even play that game over 60 FPS. You could drop $50 on a G3258 and it would easily double the performance. If you don't have that yet and are buying an RX 480, there is something wrong.
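The "benchmarkers use the best CPU possible" point can be put in rough numbers (the millisecond figures are assumptions, not measurements): per-frame CPU work is approximately fixed, so the CPU imposes its own FPS ceiling no matter how fast the GPU is.

```python
# Rough model: the CPU spends a roughly fixed amount of time per frame
# (game logic + draw submission), so it caps FPS regardless of the GPU.
# Millisecond figures are assumptions for illustration.

def capped_fps(gpu_fps, cpu_ms_per_frame):
    cpu_cap = 1000.0 / cpu_ms_per_frame  # the most frames the CPU can feed per second
    return min(gpu_fps, cpu_cap)

# A GPU capable of 140 FPS behind an old CPU needing 12 ms/frame:
print(capped_fps(140, 12.0))  # ~83 FPS; the CPU hides a third of the GPU's output
# The same GPU with a fast CPU needing 5 ms/frame:
print(capped_fps(140, 5.0))   # 140.0; now genuinely GPU-limited
```

This is exactly why review sites pair every GPU with the fastest CPU they have: otherwise the chart measures the CPU cap, not the card.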
 
I think the clockspeed is more important than the processor. 2.67/3.2 versus 4.5 GHz. Not only did they use the highest clocked Intel processor available, it's overclocked too (4.0 GHz stock). The two combined make the old processors look worse on GCN than they really are. The fact every test was over 60 FPS says it all.
 
The reason you are seeing the delta between the brand new Intel CPU and the ancient one is because the more frames your GPU outputs the more CPU power is required. This is why, if you haven't noticed, benchmarkers use the best CPU possible. If they didn't the frame-rate would cap.

That is exactly NOT the case.
Lowering CPU overhead is meant to let weaker CPUs push more FPS. Too early to tell just by looking at one title, though.
 
... This is why ...
Dude, look at the graph ... it also shows the same game and the same CPU setups with the GTX 1060, without the FPS drop. :slap:
links to couple articles they had ...
As far as I can see, this is specific to a couple of games ... and also specific to the new APIs ... and the new APIs are much more low-level, so it's all understandable. But Bethesda/id did both the AMD and Nvidia Vulkan renderer implementations for the idTech engine, so I'd expect them to want less of an FPS drop on older CPUs with AMD GPUs. Also, GCN is in all the consoles, so I'd expect them to optimize for it. It almost looks like whatever they did in the code that benefited consoles (compiling with optimizations for Jaguar cores and a more heterogeneous environment) hurts performance on CPUs with older cores and older PCIe controllers, but I'm just guessing. It may all be in the driver, too. With APIs this low-level, you never know.
 
... It may all be in the driver, too. With APIs this low-level, you never know.

Unlikely. Lower level APIs mean drivers become thinner. The driver does less work, the application has to do the heavy lifting now.
This is why I will not draw any conclusions based on one title. Where we had AMD and Nvidia doing the optimizations until now, we now (potentially) have every single developer to account for. In theory, everyone now only has to optimize for the API (kind of like coding for HTML5, not for a specific browser), but sadly, there will always be calls that work better on one piece of hardware than on the next. To be honest, it's not clear that smaller developers are even moving to Vulkan/DX12 at all. Interesting times ahead, though.
 
It's something to remember, and it definitely needs more testing in the future with more games, to see whether it's isolated to Doom or could be a recurring issue in other titles. I can't remember where I read it since it was years ago, but there was a story about AMD GPUs/drivers producing higher CPU load in the same game than an Nvidia card did, which could explain this if the issue turns out to be real.

links to couple articles they had:
http://www.hardwareunboxed.com/gtx-1060-vs-rx-480-in-6-year-old-amd-and-intel-computers/
http://www.hardwareunboxed.com/amd-vs-nvidia-low-level-api-performance-what-exactly-is-going-on/

Well read this too:
http://www.hardwareunboxed.com/gtx-1060-vs-rx-480-fx-showdown/
[image: ONkarN9.png]

[image: 97VRmUz.png]

The results are quite contradictory: at 1080p with the FX-8350, the RX 480 wins on OpenGL but loses to the GTX 1060 on Vulkan, while at 1440p it's the other way around. I would say the RX 480 is CPU-limited at 1080p and GPU-limited at 1440p on Vulkan, while the GTX 1060 is GPU-limited at both resolutions. On OpenGL the GTX 1060 is CPU-limited at 1080p but GPU-limited at 1440p, and AMD is GPU-limited at both.
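That CPU-limited/GPU-limited reading follows from simple frame-time logic (the millisecond values below are illustrative guesses, not numbers from the charts): raising resolution increases GPU time per frame but leaves CPU time roughly unchanged, which is why a card can flip from CPU-bound at 1080p to GPU-bound at 1440p.

```python
# Bottleneck test: FPS = 1000 / max(cpu_ms, gpu_ms).
# Raising resolution raises gpu_ms but leaves cpu_ms roughly unchanged.
# All millisecond values are invented for illustration.

def bottleneck(cpu_ms, gpu_ms):
    return "CPU-bound" if cpu_ms > gpu_ms else "GPU-bound"

cpu_ms = 9.0  # fixed per-frame CPU cost on this platform
for res, gpu_ms in [("1080p", 7.0), ("1440p", 12.0)]:
    fps = 1000.0 / max(cpu_ms, gpu_ms)
    print(res, round(fps), bottleneck(cpu_ms, gpu_ms))
# 1080p 111 CPU-bound
# 1440p 83 GPU-bound
```

Same card, same CPU: only the GPU side of the max() changed with resolution, so the bottleneck flipped.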
 
Unlikely. Lower level APIs mean drivers become thinner. The driver does less work, the application has to do the heavy lifting now.
The GTX 1060 loses 4 FPS moving from Skylake to Bulldozer, while the RX 480 loses 30 FPS. Relatively speaking, it's obvious something CPU-heavy is happening on the Radeon code path, be it in the game or in the driver ... you shouldn't rule out the drivers yet, because it's still early enough for suboptimal critical code segments to exist. I mean, drivers are thinner and game code has more control over the GPU, but it's not like drivers do absolutely nothing; in fact, since less time per frame is spent running driver code overall, any suboptimal work going on there has proportionally more effect in CPU-bound scenarios if the game code is properly optimized. The 480 chews through a single 1080p frame incredibly fast, and Doom is well optimized ...
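The asymmetry in those deltas can be converted into rough per-frame CPU time with a back-of-envelope calculation (the FPS figures below are placeholders, not the actual benchmark numbers):

```python
# Back-of-envelope: convert an FPS drop on a slower CPU into extra CPU
# milliseconds per frame. The FPS inputs are placeholder values, not the
# real benchmark figures from the thread.

def extra_ms(fps_fast_cpu, fps_slow_cpu):
    return 1000.0 / fps_slow_cpu - 1000.0 / fps_fast_cpu

# Card A barely drops on the old CPU; card B drops hard:
print(round(extra_ms(120, 116), 2))  # 0.29 ms of extra per-frame CPU work
print(round(extra_ms(120, 90), 2))   # 2.78 ms, roughly ten times more
```

Framed this way, a 30 FPS loss against a 4 FPS loss really does imply something on one code path is burning far more CPU time per frame, whether that sits in the game or in the driver.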
 
The reason you are seeing the delta between the brand new Intel CPU and the ancient one is because the more frames your GPU outputs the more CPU power is required. This is why, if you haven't noticed, benchmarkers use the best CPU possible. If they didn't the frame-rate would cap.

All we can draw from that benchmark is that it's pretty amazing that an ancient CPU can even play that game over 60 FPS. You could drop $50 on a G3258 and it would easily double the performance. If you don't have that yet and are buying an RX 480, there is something wrong.
If it were purely a matter of "more frames from the GPU means more CPU power required," the 1060's FPS would have dropped like the 480's, which wasn't the case. I think as we move further into DX12 we'll have to start testing more realistic rig options based on the card: most people buying a card like the 480/1060 don't have a 6/8-core high-end Intel chip, it's more likely a mid- to low-range CPU where things are a bit more limited. A 4-year-old CPU isn't really useless yet.
 