DOOM with Vulkan Renderer Significantly Faster on AMD GPUs

......better to quote @Tatty_One before it comes: "children are misbehaving in the nursery yet again, reply bans for this thread will be issued if it continues, followed by free holiday passes" :laugh:
Happily, free holiday passes have been issued, I am happy to issue more if needed, thank you.
 
You can't "make something" for low level, at least not hardware wise. It goes against the definition of low level. You optimize low level to a platform, you target your hardware. That is what low level means.
You can gear the architecture for it. In this case, the extra async compute queues in GCN give AMD a clear advantage when they're utilized. You optimize your API usage during development toward the architectural strengths in play. Somebody else said earlier that AMD has been preparing for this shift for quite a while. For once the industry is moving in a direction that helps AMD and its decisions rather than hurting them.
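For anyone curious what "targeting the async queues" actually looks like from the developer's side, here's a minimal sketch against stock Vulkan 1.0 (not taken from id's renderer; the function name is made up): an engine checks whether the GPU exposes a compute-only queue family, which is how GCN's extra ACE queues show up through the API.

```cpp
// Minimal sketch (not id Software's code): detect a dedicated compute queue
// family, the hardware feature that async compute in Vulkan builds on.
#include <vulkan/vulkan.h>
#include <vector>

// Returns the index of a queue family that supports compute but not graphics,
// or -1 if the GPU only exposes combined graphics+compute queues.
int findDedicatedComputeQueue(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags flags = families[i].queueFlags;
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT)) {
            return static_cast<int>(i);  // dedicated compute queue family
        }
    }
    return -1;  // compute shares the graphics queue on this GPU
}
```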
 
Well, I played Doom for 5 minutes before going to work after enabling Vulkan, and everything seems to run nicely, just as it does in OpenGL 4.5. I might go so far as to say it even looks a little nicer, but that could just be the placebo effect. When I get home, I'm going to see if Eyefinity is playable now, because if the performance improvement is that big, 5760x1080 might be realistic where before it kind of struggled.

It makes you wonder if this is a preview of things to come with DX12. I guess time will tell. Until then, I'm going to enjoy this.
 
You can gear the architecture for it. In this case, the extra async compute queues in GCN give AMD a clear advantage when they're utilized. You optimize your API usage during development toward the architectural strengths in play. Somebody else said earlier that AMD has been preparing for this shift for quite a while. For once the industry is moving in a direction that helps AMD and its decisions rather than hurting them.

Reading your post, a thread from two months ago came to mind....AMD - The Master Plan....(https://www.techpowerup.com/forums/threads/amd-the-master-plan.222334/), and slowly the gossip is turning into reality....

Wondering if older hardware (GCN 1st/2nd gen) shows this improvement; if it does, it means those generations are still good to use for a while.
 
You can gear the architecture for it. In this case, the extra async compute queues in GCN give AMD a clear advantage when they're utilized. You optimize your API usage during development toward the architectural strengths in play.

If someone wrote a low-level API game that used tons of Nvidia-friendly tessellation, Nvidia would suddenly appear to be awesomely optimized for low level as well.

The fact is you don't gear an architecture to be low level. You gear your game for an architecture.
 
....why no 1080 test?
 
I'm sorry but Maxwell/Pascal will be the greatest prank Nvidia has ever pulled.

This fall the old Fury X will come close to a 1080 in most games, and a $200 budget card from AMD will nearly match Nvidia's 1070.
 
If someone wrote a low-level API game that used tons of Nvidia-friendly tessellation, Nvidia would suddenly appear to be awesomely optimized for low level as well.

The fact is you don't gear an architecture to be low level. You gear your game for an architecture.

Exactly.
 
If someone wrote a low-level API game that used tons of Nvidia-friendly tessellation, Nvidia would suddenly appear to be awesomely optimized for low level as well.

The fact is you don't gear an architecture to be low level. You gear your game for an architecture.

That's just flat-out not true. There is a difference between optimizing a game to be good at things like tessellation/texturing/lighting, and just straight up not putting hardware like RAM/ACEs/SPs in a card. The fact is AMD has always given their cards more compute than needed for gaming so that they could have one unified arch, and so they could beat out the competition once compute gaming was ready - and now it is, buddy!

Nvidia has had plenty of time to prepare their cards for the future, but just as they gave Fermi cards half as much VRAM as they needed, they then stripped Kepler/Maxwell/Pascal of any useful compute hardware. This was done so that 1) they can operate more efficiently in today's inefficient games, and 2) people will be forced to upgrade to Volta.
 
Not sure why there is so much talk here about async shaders. It's not clear that Doom even uses them. This is mostly about Vulkan being faster than OpenGL (which we knew) and, in AMD's case, Vulkan is a lot faster than OpenGL because AMD never had the best support of OpenGL in the first place.

Remember that most developers still used DirectX 11 on Windows while Mac and Linux releases used OpenGL. Why? DirectX 11 was faster.
 
Not sure why there is so much talk here about async shaders. It's not clear that Doom even uses them.
https://community.bethesda.net/thread/54585?start=0&tstart=0

Does DOOM support asynchronous compute when running on the Vulkan API?

Asynchronous compute is a feature that provides additional performance gains on top of the baseline id Tech 6 Vulkan feature set.
Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.
This is mostly about Vulkan being faster than OpenGL (which we knew) and, in AMD's case, Vulkan is a lot faster than OpenGL because AMD never had the best support of OpenGL in the first place.
Some of the gains are surely because AMD's OpenGL support wasn't as good as Nvidia's. But how come a Fury X beats a GTX 1070 all of a sudden?
 
Lol, Vulkan isn't "biased". AMD GPU's are just more advanced when it comes to more direct GPU access (that Vulkan and DX12 allow), the fact they weren't shining is because software wasn't taking any advantage of all that yet. Till now. I mean, AMD had partial async compute since HD7000 series and full in R9 290X. NVIDIA still doesn't have even partial in GTX 1080 from the looks of it. Async is when you ca seamlessly blend graphics, audio and physics computation on a single GPU. Something AMD was aiming basically the whole time since they created GCN. They support graphics, they've added audio on R9 290X and they've been working with physics for ages, some with Havok and some with Bullet.

R9 Fury X users don't feel that let down anymore :p In fact, R9 Fury cards in general shine in DX12 and apparently also in Vulkan. While I love my GTX 980, I kinda regret not going with an R9 Fury/Fury X.

Also, for people saying "async emulation": there is no such thing; either you have a hardware implementation or you don't. You can't emulate a feature whose sole purpose is a massive performance boost through seamless mixing of graphics and compute tasks. This is the same as emulation of pixel shaders when they became a thing with DirectX 8. Either you had them or you didn't. There were some software emulation techniques, but they were so horrendously slow it just wasn't feasible to use them in real-time rendering within games. Async is no different. And NVIDIA apparently doesn't have it. Which kinda sucks when you pay 700+ € for a brand new graphics card...
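To make the "seamless mixing of graphics and compute" a bit more concrete, here's a rough sketch of what an async compute submission looks like in Vulkan 1.0. It's not DOOM's actual code; the queues, command buffers, semaphore and fence are hypothetical placeholders, and error checking is omitted.

```cpp
// Sketch: graphics and compute command buffers go to two different hardware
// queues, so a GPU with independent compute queues (e.g. GCN's ACEs) can run
// both at the same time instead of serializing them.
#include <vulkan/vulkan.h>

void submitFrame(VkQueue graphicsQueue, VkQueue computeQueue,
                 VkCommandBuffer gfxCmd, VkCommandBuffer compCmd,
                 VkSemaphore computeDone, VkFence frameFence) {
    // Compute work (e.g. a post-process or simulation pass) goes to the
    // dedicated compute queue and signals a semaphore when finished.
    VkSubmitInfo compSubmit{};
    compSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    compSubmit.commandBufferCount = 1;
    compSubmit.pCommandBuffers = &compCmd;
    compSubmit.signalSemaphoreCount = 1;
    compSubmit.pSignalSemaphores = &computeDone;
    vkQueueSubmit(computeQueue, 1, &compSubmit, VK_NULL_HANDLE);

    // Graphics work waits on the compute result only at the stage that needs
    // it, so the two queues can overlap for the rest of the frame.
    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    VkSubmitInfo gfxSubmit{};
    gfxSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    gfxSubmit.waitSemaphoreCount = 1;
    gfxSubmit.pWaitSemaphores = &computeDone;
    gfxSubmit.pWaitDstStageMask = &waitStage;
    gfxSubmit.commandBufferCount = 1;
    gfxSubmit.pCommandBuffers = &gfxCmd;
    vkQueueSubmit(graphicsQueue, 1, &gfxSubmit, frameFence);
}
```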



I guess that's how NVIDIA fanboys are comforting themselves after buying a super expensive GTX 1000 series graphics card (or GTX 900) that sucks against the last generation of AMD cards, which weren't even particularly awesome back then. "Uh oh, it doesn't lose any performance." Well, you also gain none. What's the point then? The whole point of Vulkan/DX12 is to boost performance. When devs cram more effects into games assuming all these gains, your performance will actually tank while AMD's will remain unchanged. How will you defend NVIDIA then?

Couldn't have said it better mate! :toast:
 
Some of the gains are surely because AMD's OpenGL support wasn't as good as Nvidia's. But how come a Fury X beats a GTX 1070 all of a sudden?
Most of it undoubtedly comes from Vulkan. Judging by Ashes of the Singularity, 5-10% is coming from async compute.
 
RX 480 vs GTX 1070 and 970
 
so even my R9 290 should see significant gains using Vulkan? I see the 390 does. The question then becomes how many games will start to utilize it? The performance is nice but changing drivers for each game doesn't sound fun at all.
 
I guess I hear champagne pops coming from the red camp :D
 
Well, I tried it out, and it definitely helps, that's for sure. Kinda surprised actually...

Though I didn't have time to try it in full, I only saw the numbers for a couple of minutes. I will have to give it a more in-depth try tonight as I am still finishing that game.
 
Here are my testing results. My system is in my specs to the left.

In the limited and very quick peek I took, in one specific area I was getting about 60 FPS with everything maxed out at 1440p, except motion blur set to low (don't like motion blur). When I switched to Vulkan, I saw 100 FPS at the lowest in the same area.

I'd say it improved for me quite a bit.
 
Here are my testing results. My system is in my specs to the left.

In the limited and very quick peek I took, in one specific area I was getting about 60 FPS with everything maxed out at 1440p, except motion blur set to low (don't like motion blur). When I switched to Vulkan, I saw 100 FPS at the lowest in the same area.

I'd say it improved for me quite a bit.
Those gains are probably due to Vulkan handling CPU bottlenecks much better.
 
The CPU usage in that video clearly shows it is well multithreaded, all of it. That is impressive. The future of gaming has finally arrived.
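For the curious, this is roughly how a Vulkan engine spreads that work across cores: command buffers can be recorded on many threads at once as long as each thread uses its own command pool. A minimal sketch, assuming Vulkan 1.0; the per-thread draw recording is left as a hypothetical helper, and cleanup/error handling is skipped.

```cpp
// Sketch: parallel command buffer recording, one command pool per thread
// (pools are externally synchronized, so per-thread pools need no locks),
// with a single submission at the end from one thread.
#include <vulkan/vulkan.h>
#include <thread>
#include <vector>

void recordFrameInParallel(VkDevice device, uint32_t queueFamily,
                           VkQueue queue, unsigned workerCount) {
    std::vector<VkCommandPool> pools(workerCount);
    std::vector<VkCommandBuffer> cmds(workerCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            VkCommandPoolCreateInfo poolInfo{};
            poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
            poolInfo.queueFamilyIndex = queueFamily;
            vkCreateCommandPool(device, &poolInfo, nullptr, &pools[i]);

            VkCommandBufferAllocateInfo allocInfo{};
            allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
            allocInfo.commandPool = pools[i];
            allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
            allocInfo.commandBufferCount = 1;
            vkAllocateCommandBuffers(device, &allocInfo, &cmds[i]);

            VkCommandBufferBeginInfo begin{};
            begin.sType = VK_COMMAND_BUFFER_BEGIN_INFO_STRUCTURE_TYPE_PLACEHOLDER;
            begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
            vkBeginCommandBuffer(cmds[i], &begin);
            // recordDrawsForChunk(cmds[i], i);  // hypothetical per-thread draws
            vkEndCommandBuffer(cmds[i]);
        });
    }
    for (auto& t : workers) t.join();

    // One thread gathers everything the workers recorded and submits it.
    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = static_cast<uint32_t>(cmds.size());
    submit.pCommandBuffers = cmds.data();
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```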
 
RX 480 vs GTX 1070 and 970
The same guy has a video comparing RX 480 OGL/Vulkan, and by the looks of it, Vulkan is a lot less stressful on the CPU, which is great for budget gamers.
If only more devs would pick Vulkan (or true DX12)
 
The same guy has a video comparing RX 480 OGL/Vulkan, and by the looks of it, Vulkan is a lot less stressful on the CPU, which is great for budget gamers.
If only more devs would pick Vulkan (or true DX12)

Well, it's good for everyone, because now a lot of CPU power is available for future games.
 
Has anyone tried it on HD7XXX cards?
 
so even my R9 290 should see significant gains using Vulkan? I see the 390 does. The question then becomes how many games will start to utilize it? The performance is nice but changing drivers for each game doesn't sound fun at all.

Changing drivers? Vulkan has been in the drivers for months.
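If anyone wants to sanity-check their install, a tiny test program like this (just a sketch, no validation layers or extensions) will tell you whether the installed driver exposes Vulkan at all, no game-specific driver swap needed:

```cpp
// Sketch: confirm a Vulkan 1.0 loader/driver is present by creating an
// instance and counting the physical devices it reports.
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) {
        std::printf("No Vulkan-capable driver/loader found.\n");
        return 1;
    }

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::printf("Vulkan is available; %u physical device(s) reported.\n", gpuCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```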
 