
DOOM with Vulkan Renderer Significantly Faster on AMD GPUs

Edit: I may not skip this generation after all; my 660 Ti is starting to show its age. And I say "may" because the 480 doesn't cut it for me. The 1060, I think, will provide enough horsepower, but I won't buy it at FE+ prices.

That 1060 will age poorly compared to the RX 480. All the newer APIs favor the 480 heavily, and most big new games will be using those APIs within the coming year. Will you be happy next year watching your more expensive 1060 play games nowhere near as well as the cheaper RX 480?

AMD cards simply age better.
 
There's a lot of compute potential in AMD GPUs in general, which is why they are favoured for things like Bitcoin mining: they provide a much higher hashrate than Nvidia cards. What you are seeing here is the full potential of AMD GPUs being exploited, so if your average AMD GPU is running 10 degrees hotter than usual, it means you are fully utilizing it.

I would be curious about power consumption when running Vulkan. Obviously they are pumping more wattage if they are making more heat. This probably means I'll need to drop my GPU clocks down.
 
The power usage thing can't be right. The cards are designed and sold with a power consumption figure, and in DX11 reviews they generally run at that figure. If a game gets a 40-50% performance uplift, that can't equate to a linear power increase - it would go against the design.

Surely the API is using the compute hardware more efficiently instead.
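That arithmetic can be sketched in a few lines (a minimal illustration; the frame rates and the 150 W TDP below are hypothetical numbers, not measurements from the article):

```python
# Performance per watt before and after an API-only change.
# A frame-rate uplift at an unchanged power limit (TDP) is an
# efficiency gain, not a linear power increase.

def perf_per_watt(fps, watts):
    """Frames per second delivered per watt drawn."""
    return fps / watts

# Hypothetical card capped at a 150 W board power limit.
tdp = 150.0
fps_dx11 = 60.0    # hypothetical OpenGL/DX11 frame rate
fps_vulkan = 87.0  # hypothetical ~45% uplift under Vulkan

eff_before = perf_per_watt(fps_dx11, tdp)
eff_after = perf_per_watt(fps_vulkan, tdp)

# The uplift shows up entirely as efficiency, since power is capped.
uplift = fps_vulkan / fps_dx11 - 1.0
print(f"perf/W before: {eff_before:.2f}, after: {eff_after:.2f}")
print(f"efficiency gain: {uplift:.0%}")
```

With the power limit fixed, the entire 45% frame-rate gain is a 45% perf/W gain, which matches the "same TDP, better performance per watt" observation below.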

On that front, OcUK have Fury X cards on pre-order at £150 under normal pricing. I'm tempted to build a Skylake rig running a Fury X. But then I think, why sell so cheap unless there's a Fury X replacement incoming...
 
I was also interested in power consumption and found something interesting: Vulkan increases performance per watt within the same TDP:

[attachment: power consumption chart]


 
If anyone remembers the Amiga 1200: that thing had less than 2MB of RAM, and not even a GPU but an AGA-capable graphics chip with up to 256 colors, lol. But programmers still managed to squeeze graphics and tech demos out of that piece of antique hardware like never before.

You make it sound like the A1200 was weak, but in reality it had a pretty beefy spec that dwarfed most of the other PCs available at the time and laughed at the consoles. Its 14MHz CPU was faster than pretty much anything bar Intel's flagship 486, and hardly anybody had those at the time. By comparison, when it launched in 1992 I had an 8MHz IBM compatible with 640KB of RAM, a 27MB HDD and VGA graphics, and that was considered a pretty good system. Hell, I think it was '92 when a buddy of mine got a 286 with a whopping 1MB of RAM :O
 
That 1060 will age poorly compared to the RX 480. All the newer APIs favor the 480 heavily, and most big new games will be using those APIs within the coming year. Will you be happy next year watching your more expensive 1060 play games nowhere near as well as the cheaper RX 480?

AMD cards simply age better.
I have never seen a first-generation card provide adequate performance for any new DX generation.
The FX5000 and Radeon 9000 series were not adequate for DX9 gaming, despite having support in place. Nvidia's 8 series and ATI's HD2000 series didn't have enough horsepower for DX10. And we have DX11 titles bringing cards to their knees today.
So my guess is we'll need something better than Pascal/Polaris for proper DX12/Vulkan gaming.
 
Honestly, I don't. I just have a better understanding of what "low level" means, being a programmer, and am trying to share my knowledge. Async != low level. I will leave it at that for now.
In this context it does. You're arguing with me assuming I have a point to argue, when I don't. The other guy thanked my post and stopped posting on that subject because of that right there; I was merely extending it. As somebody working with the media group and ADP (at Lockheed) on a UE4 project, helping them understand DX12 better with regard to hardware, I'd say I have a pretty firm grasp of the subject as well.
 
In this context it does. You're arguing with me assuming I have a point to argue, when I don't. The other guy thanked my post and stopped posting on that subject because of that right there; I was merely extending it. As somebody working with the media group and ADP (at Lockheed) on a UE4 project, helping them understand DX12 better with regard to hardware, I'd say I have a pretty firm grasp of the subject as well.
I think it would help if you guys used the proper terminology. There is no "low-level" per se; there is low-level programming and there are low-level programming languages, as defined here: https://en.wikipedia.org/wiki/Low-level_programming_language
So yes, Vulkan is lower level than OpenGL, yet neither is a low-level programming language by any means (a "low-level API" is actually an oxymoron). And no, hardware cannot be "low-level ready". Hardware is always accessed at a low level.
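To illustrate what "lower level" buys you, here is a toy model (all class and method names here are hypothetical, not real OpenGL or Vulkan calls): a higher-level API pays driver validation work on every draw call, while a lower-level API lets the application record a command buffer once and replay it with that cost already paid.

```python
# Toy model of API overhead. Hypothetical names; not real API calls.

class HighLevelAPI:
    """Driver re-validates state on every draw (OpenGL-style)."""
    def __init__(self):
        self.validations = 0

    def draw(self, state):
        self.validations += 1  # per-call driver work
        return f"draw({state})"

class LowLevelAPI:
    """Application records commands once, then replays them (Vulkan-style)."""
    def __init__(self):
        self.validations = 0
        self.command_buffer = []

    def record(self, state):
        self.validations += 1  # validation paid once, at record time
        self.command_buffer.append(f"draw({state})")

    def replay(self):
        return list(self.command_buffer)  # no per-frame validation

# 1000 frames of the same 3 draws:
hi, lo = HighLevelAPI(), LowLevelAPI()
for s in ("shadow", "opaque", "ui"):
    lo.record(s)
for _ in range(1000):
    for s in ("shadow", "opaque", "ui"):
        hi.draw(s)
    lo.replay()

print(hi.validations)  # 3000: validated on every call
print(lo.validations)  # 3: validated only when recorded
```

The hardware is identical in both cases; what changes is where the bookkeeping happens, which is the sense in which one API is "lower level" than the other.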
 
I have never seen a first-generation card provide adequate performance for any new DX generation.
The FX5000 and Radeon 9000 series were not adequate for DX9 gaming, despite having support in place. Nvidia's 8 series and ATI's HD2000 series didn't have enough horsepower for DX10. And we have DX11 titles bringing cards to their knees today.
So my guess is we'll need something better than Pascal/Polaris for proper DX12/Vulkan gaming.
This was true for previous DX generations because they were all about new features/effects. Both sides were designing cards for effects that were not entirely set in stone, with no reasonable way of predicting how they would be used. DX12 is different because it focuses on lowering overhead and increasing the efficiency of existing effects. Hardware DX12 support will actually increase the lifetime of a GPU, not decrease it like previous generations did. The difference will show in a couple of years, when Pascal GPUs are insufficient to keep on gaming while their AMD counterparts are able to just nudge by.

This is where AMD and nVidia differ most, IMO: AMD thinks being as forward-looking as possible is the way to go, while nVidia designs cards for games that are already out, with little regard for how the card will perform on APIs that are yet to be released.

Both ideas have merit, and which one you prefer depends mostly on how frequently you change hardware.
 
I think it would help if you guys used the proper terminology. There is no "low-level" per se; there is low-level programming and there are low-level programming languages, as defined here: https://en.wikipedia.org/wiki/Low-level_programming_language
So yes, Vulkan is lower level than OpenGL, yet neither is a low-level programming language by any means (a "low-level API" is actually an oxymoron). And no, hardware cannot be "low-level ready". Hardware is always accessed at a low level.
We can take it a step further and argue AMD geared their architectures for future advancements in API. Again, not really what that was all about. The guy just didn't get where I was going with it and that's understandable.
 
In this context it does. You're arguing with me assuming I have a point to argue, when I don't. The other guy thanked my post and stopped posting on that subject because of that right there; I was merely extending it. As somebody working with the media group and ADP (at Lockheed) on a UE4 project, helping them understand DX12 better with regard to hardware, I'd say I have a pretty firm grasp of the subject as well.

I may have confused you with someone else (thought it was the same person this whole time, lol). My apologies, I get lost in these threads. :)
 
This was true for previous DX generations because they were all about new features/effects. Both sides were designing cards for effects that were not entirely set in stone, with no reasonable way of predicting how they would be used. DX12 is different because it focuses on lowering overhead and increasing the efficiency of existing effects. Hardware DX12 support will actually increase the lifetime of a GPU, not decrease it like previous generations did. The difference will show in a couple of years, when Pascal GPUs are insufficient to keep on gaming while their AMD counterparts are able to just nudge by.

This is where AMD and nVidia differ most, IMO: AMD thinks being as forward-looking as possible is the way to go, while nVidia designs cards for games that are already out, with little regard for how the card will perform on APIs that are yet to be released.

Both ideas have merit, and which one you prefer depends mostly on how frequently you change hardware.
Well, in a couple of years I hope I'll be gaming at 4K, so none of today's cards will be able to deliver that (I don't SLI/CrossFire). I'll need to upgrade anyway; I just don't play the "futureproofing" game.
 
This was true for previous DX generations because they were all about new features/effects. Both sides were designing cards for effects that were not entirely set in stone, with no reasonable way of predicting how they would be used. DX12 is different because it focuses on lowering overhead and increasing the efficiency of existing effects.

It will be true for this and future generations too. The increased efficiency we are now seeing from Vulkan/D3D12 is no different from the uplift D3D11 once brought; it isn't here to stay, and once development leaves OpenGL/D3D11 behind, things will return to normal.

E.g. World of Warcraft launched with Direct3D 7/8/9 modes (and OpenGL) to ensure a wide range of GPU support. The higher the version used, the better the effects but the bigger the performance hit (leading some users to manually set a lower version to boost their FPS). However, when Direct3D 11 support was added it didn't bring any new effects, just increased FPS over Direct3D 9 due to higher efficiency.

If a developer designs their game to run at 60 FPS and look as good as possible on high-end hardware using OpenGL/Direct3D 11, then adding Vulkan/Direct3D 12 support will cause a big FPS bump. If they instead design it to run at 60 FPS and look as good as possible on high-end hardware using Vulkan/Direct3D 12, there will be no big FPS bump, just a big effects/visual-quality bump.
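That trade-off is really just frame-time budgeting. Here is a minimal sketch, assuming hypothetical per-frame driver overheads of 5 ms on the old API and 1 ms on the new one:

```python
# Frame-time budget at a 60 FPS target: ~16.7 ms per frame.
# If API/driver overhead drops under a lower-level API, the freed
# time can become either extra FPS or extra effects, not both.

budget_ms = 1000.0 / 60.0         # 16.67 ms per frame at 60 FPS
overhead_old = 5.0                # hypothetical D3D11/OpenGL driver cost
overhead_new = 1.0                # hypothetical D3D12/Vulkan driver cost
render_ms = budget_ms - overhead_old  # game tuned to fill the old budget

# Option A: keep the same visuals, convert the freed time into FPS.
fps_bump = 1000.0 / (render_ms + overhead_new)

# Option B: keep 60 FPS, convert the freed time into heavier effects.
extra_effects_ms = overhead_old - overhead_new

print(f"FPS if visuals unchanged: {fps_bump:.0f}")
print(f"extra effects budget at 60 FPS: {extra_effects_ms:.1f} ms")
```

Under these made-up numbers, a game tuned for the old API jumps to roughly 79 FPS on the new one, while a game tuned for the new API from the start stays at 60 FPS but gets 4 ms more per frame to spend on visuals.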
 
AMD cards simply age better.

I would actually disagree with you here, sadly, and I'm not a fanboy of any GPU brand. I would agree with you if AMD kept supporting their older cards, but they have stopped doing that, which really annoyed me. Nvidia seems to continue supporting older cards for a lot longer than AMD does, and this is why I think Nvidia cards have a longer life than AMD cards, purely because of driver support.
 
I was also interested in power consumption and found something interesting: Vulkan increases performance per watt within the same TDP:

[attachment: power consumption chart]


Of course it does. Unfortunately it is still a little behind the 1070, and especially the 1080.

But I am guessing one of the main reasons the reference 480 is so cheap is that these are the low-binned yields GloFo can spit out quickly, and their process isn't quite as mature as TSMC's slightly larger 16nm process.
 
I would actually disagree with you here, sadly, and I'm not a fanboy of any GPU brand. I would agree with you if AMD kept supporting their older cards, but they have stopped doing that, which really annoyed me. Nvidia seems to continue supporting older cards for a lot longer than AMD does, and this is why I think Nvidia cards have a longer life than AMD cards, purely because of driver support.

What card are you referring to?

In my experience AMD cards age WAY, WAY better than Nvidia's. The only exception, IMO, is the power-hungry Fermi cards, but even that is only if you ignore the paltry amounts of VRAM on the high-end offerings (which is a big deal).
 
I would agree with you if AMD kept supporting their older cards, but they have stopped doing that, which really annoyed me. Nvidia seems to continue supporting older cards for a lot longer than AMD does, and this is why I think Nvidia cards have a longer life than AMD cards, purely because of driver support.
I got 6 years of driver support for my 6870s; I wouldn't call that bad. On the other hand, you have products like the E-350, which they dropped support for pretty quickly. I suspect that support for most GCN GPUs will last as long as my 6870s' did.
 
Navi might not be of GCN lineage (which means 7### will lose support sooner rather than later). It's hard to tell at this point.
 
From the perspective of the API, no there is not. AMD is severely constrained in tessellation. You could counter your argument by saying NVIDIA hardware is ready for future games featuring massive tessellation and AMD just lacks the hardware to compete, etc.

That AMD has better compute (they do) does not make them better in low-level APIs; it makes them better at games that utilize compute heavily. The API? The API just facilitates access to the hardware. It has no brand loyalties.

And with that statement you just poked a hole in your own argument: "The API just facilitates access to the hardware." Yeah, hardware Nvidia just straight up doesn't have.

I know you'll come back with "Oh, but AMD doesn't have tess..." - let me cut you off right there. AMD can run tessellation just fine (in fact better than most Nvidia cards at this point), because they don't have to emulate the hardware.

I will make another analogy: what you are saying is the equivalent of someone going "This API is AMD-biased because it allows the game to use 3GB of VRAM instead of just 2GB." A lot of people said this about their 680s when BF4 came out. Again, having larger textures isn't bias; it just allows the use of more hardware. If Nvidia users wanted Ultra textures they should have bought a card with more VRAM, but don't worry, because you can simply turn the setting down. Nvidia cards have RAM, just not as much. You can't "turn down" async; you are better off just turning it off, because Nvidia doesn't have the hardware in any way.

 
Navi might not be of GCN lineage (which means 7### will lose support sooner rather than later). It's hard to tell at this point.

I am guessing Navi will be an entirely new arch as well, but that doesn't mean the old ones will lose support. Also keep in mind the 7000 series is nearly 5 years old, and will be 7 years old by the time Navi launches.
 
I am guessing Navi will be an entirely new arch as well, but that doesn't mean the old ones will lose support. Also keep in mind the 7000 series is nearly 5 years old, and will be 7 years old by the time Navi launches.
Also keep in mind that Nvidia recently (i.e. this year) released a driver for the GeForce 8 series. And that's 10 years old.
 
I was also interested in power consumption and found something interesting: Vulkan increases performance per watt within the same TDP:

[attachment: power consumption chart]


This single software update made the RX 480 20% more efficient than the 970. So for DX12 and Vulkan games, Vega will have a walk in the park vs nVidia GPUs for the next two years at least.

I also hope W1z will include Vulkan DOOM in his reviews from now on.
 
Most future games will be built around the PS4.5, Xbox, and Nintendo NX, all running an AMD GPU. So more games will be built around AMD hardware, less so the green team's.
 
for DX12 and Vulkan games Vega will have a walk in the park vs nVidia GPUs for the next 2 years at least.

Two points.
1) That park has green apples as well as red apples. Don't be so naive as to think a company as large as Nvidia is 'out'.
2) On Vega: the GTX 1080 smokes everything, and it's only the 980's replacement. The GP100/GP102 chip is 'rumoured' to be 50% faster. That is Vega's competition.

Even without compute, Pascal's highly optimised and efficient core can run DX12 and Vulkan just fine. I'll wager £10 with you, through PayPal, that Vega won't beat the full Pascal chip.
If I lose, I'll be happy because it should start a price war. If I win, I'll be unhappy because Nvidia prices will reach the stratosphere.
 