
AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"

And Microsoft just comes in and starts building off it. The same way Sony used BSD to make the OS for the PS4; damn BSD licensing.
I was referring to game engine development work.

Well-known 3D engines already have DX12 versions, e.g. Epic's Unreal Engine 4.9, Crytek's CryEngine, Unity, Square Enix's Luminous Engine, etc.
 
Last edited:
AMD just released Mantle before DX12 as a bit of a PR stunt
I don't think they had the resources and money for that kind of PR stunt. They didn't just throw out a new "Wonder Driver", they created an API. I don't know, maybe this is something simple that anyone can do? Microsoft was delaying a low-level API, so I think AMD came out with Mantle to guarantee there would be pressure on Microsoft to include DX12 with Windows 10. AMD was the only company losing because of the absence of a low-level API. I also find it funny that people believe AMD came out with a low-level API in zero time while Microsoft took almost two more years. And I find it funny because everyone and their dog thinks that AMD is completely incompetent at creating anything in software. Not to mention the difference between AMD and Microsoft: one company with no money, the other swimming in money; one a hardware company, the other a software company.
 
AMD wanted to prove that Mantle's technology worked, so that others would adopt it.

Microsoft, with their Xbox One (running AMD hardware) and DX12, being the big example - Mantle was *proof* the existing hardware would benefit.
 
I wouldn't describe creating an API as simple, but they also worked with a bunch of very skilled game developers, and from the looks of it grabbed stuff those developers were already planning to contribute to DX12.
 
Yeah, it's not DirectX 12 that needs to be looked at, it's the Direct3D feature level the game implements. Do we even know what feature level they're using?

https://en.wikipedia.org/wiki/Graphics_Core_Next

GCN 1.0 is 11_1:
Oland
Cape Verde
Pitcairn
Tahiti

GCN 1.1 is 12_0:
Bonaire
Hawaii
Temash
Kabini
Liverpool
Durango
Kaveri
Godavari
Mullins
Beema

GCN 1.2 is 12_0:
Tonga (Volcanic Islands family)
Fiji (Pirate Islands family)
Carrizo

Only NVIDIA's GM2xx chips are 12_1 compliant (which the GTX 980 Ti is). Even Skylake's GPU is 12_0.

So if the game supports feature level 12_1 and is using it on the GTX 980 but using 12_0 on the 290X, it's not an apples-to-apples comparison. We'd have to know that both cards are running feature level 12_0.
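
For anyone who wants to check what their own card reports, here's a minimal C++ sketch (assuming the Windows 10 SDK / D3D12 headers; error handling trimmed) that creates a device and asks for the highest supported feature level via CheckFeatureSupport:

Code:
// Minimal sketch: query the highest Direct3D feature level a GPU reports.
// Assumes Windows 10 SDK headers; link against d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

int main()
{
    ID3D12Device* device = nullptr;
    // Create the device at the minimum level, then ask what it can really do.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
    info.NumFeatureLevels        = sizeof(levels) / sizeof(levels[0]);
    info.pFeatureLevelsRequested = levels;

    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &info, sizeof(info))))
        // 0xb000 = 11_0, 0xb100 = 11_1, 0xc000 = 12_0, 0xc100 = 12_1
        printf("Max feature level: 0x%x\n", info.MaxSupportedFeatureLevel);

    device->Release();
    return 0;
}
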
http://www.dsogaming.com/news/oxide...to-disable-certain-settings-in-the-benchmark/

Oxide Developer on Nvidia's request to turn off certain settings:

“There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.”

“Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown Async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports.




NVIDIA is just ticking the box for Async compute without any real practical performance behind it.
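
To make the quote above concrete: the "vendor specific" check and the Tier 2/Tier 3 binding query Oxide describes boil down to something like the following C++ sketch. The variable names and the final toggle are illustrative assumptions, not Oxide's actual code; error handling is trimmed.

Code:
// Illustrative sketch of the checks Oxide describes: read the adapter's PCI
// vendor ID through DXGI, then query the D3D12 resource binding tier.
// Assumes Windows 10 SDK headers; link d3d12.lib and dxgi.lib.
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <cstdio>

int main()
{
    IDXGIFactory4* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    IDXGIAdapter1* adapter = nullptr;
    factory->EnumAdapters1(0, &adapter);          // first GPU in the system

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    // Well-known PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD, 0x8086 = Intel.
    const bool isNvidia = (desc.VendorId == 0x10DE);

    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) return 1;

    // Tier 2 vs. Tier 3 resource binding, as mentioned in the quote.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));
    printf("Vendor 0x%04X, resource binding tier %d\n",
           desc.VendorId, (int)opts.ResourceBindingTier);

    // Hypothetical toggle in the spirit of what Oxide says they did:
    const bool useAsyncCompute = !isNvidia;
    (void)useAsyncCompute;

    device->Release(); adapter->Release(); factory->Release();
    return 0;
}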
 
NVIDIA is just ticking the box for Async compute without any real practical performance behind it.

and blaming all problems on the game dev for not building the game around their hardware.
 
and blaming all problems on the game dev for not building the game around their hardware.
That reminds me so much of a certain company that competes with Nvidia.
 
That reminds me so much of a certain company that competes with Nvidia.

Lol! Too true. I have to say, both companies play that card equally often.
 
and blaming all problems on the game dev for not building the game around their hardware.
The difference is the Async feature on Maxwell v2 is faked, AND Oxide has given Intel, AMD, NVIDIA and MS equal access to the source code.

In general, NVIDIA GameWorks restricts source-code access for Intel and AMD.

From slide 23 https://developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_DirectX Advancements in the Many-Core Era Getting the Most out of the PC Platform.pdf
NVIDIA talks about DX12's Async.


This Oxide news really confirmed for me that DX12 has Mantle origins, since Async Compute was a core feature of that API. It must have caught NV off guard when Mantle was to become DX12, so they appended some Maxwell features into DX12.1 and called their GPUs "DX12 compatible," which isn't entirely true. The base features of DX12 are Async Compute and better CPU scaling.


Oxide's full reply from
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995


AMD's reply on Oxide's issue
https://www.reddit.com/r/AdvancedMi...ide_games_made_a_post_discussing_dx12/cul9auq
 
Last edited:
The difference is the Async feature on Maxwell v2 is faked, AND Oxide has given Intel, AMD, NVIDIA and MS equal access to the source code. ...
Here is the question about that "equal" access. I doubt that access included DX12, since they couldn't have added DX12 back then, which means what you see here is AMD and Oxide doing exactly what AMD whined NVIDIA was doing: crippling performance on the other vendor's cards. The problem is that even with source access, a DX12 exe for the game likely wasn't an option til recently, but since the game had Mantle in it from day one, that let them set the game up for AMD cards specifically and, in this case, cripple performance on NVIDIA.

Cue the claims that what I said is BS, but in reality it's pretty damn plausible. So now Mantle, in its dead form, could be crippling performance.

This Oxide news really confirmed for me that DX12 has Mantle origins, since Async Compute was a core feature of that API. It must have caught NV off guard when Mantle was to become DX12,
^ pretty much confirmation of it.
Unlike GameWorks, it doesn't look like it can be turned off?

I will head this one off before it's said: I bet someone will say "well, it's a standard". Maybe it is, but so is DX11 tessellation, and that didn't stop AMD from whining about it when HairWorks used it.
 
Last edited:
Here is the question about that "equal" access. I doubt that access included DX12, since they couldn't have added DX12 back then ...
You can question the equal access and you can guess that the game favors AMD.

With GameWorks on the other hand there is CERTAINTY that NO ONE has access but Nvidia to the source code and that the game ABSOLUTELY favors specific Nvidia hardware (I wouldn't say all Nvidia hardware here, because Kepler owners could have a different opinion on that).

Can you see the difference?
 
Here is the question about that "equal" access. ... The problem is that even with source access, a DX12 exe for the game likely wasn't an option til recently ...
Your "even with source access, a DX12 exe for the game likely wasn't an option til recently" statement is false.

From http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

Being fair to all the graphics vendors

Often we get asked about fairness, that is, usually if in regards to treating Nvidia and AMD equally? Are we working closer with one vendor then another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone’s machine, regardless of what hardware our players have.

To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.

We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn’t move the engine architecture backward (that is, we are not jeopardizing the future for the present).



THAT says "for over a year", hence your "wasn't an option til recently" assertion is wrong.
 
That game Ashes of the Singularity sure is getting a lot of free publicity. I bet Stardock is loving it. I hadn't even heard of it before this.
 
The difference is the Async feature on Maxwell v2 is faked, AND Oxide has given Intel, AMD, NVIDIA and MS equal access to the source code. ...
Makes sense. Maxwell doesn't get the FPS boost that AMD GCN 1.0 and newer cards get in DX12. That, in turn, explains why the 290X goes from about equal to the GTX 970 in DX11 to about 30% faster in DX12. NVIDIA will probably get it fixed for Pascal, but NVIDIA users aren't going to see the major performance jump AMD users are until then.


NVIDIA did what they always do when contacted by AMD: hang up. AMD got the last laugh this time.
 
Your "even with source access, a DX12 exe for the game likely wasn't an option til recently" statement is false.

From http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/
The sad part about that is the story you posted didn't prove what I said was false; there is no date listed for when it was available. So what I said is still valid.

To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.
The question with that is whether it was back when DX11 and (proprietary, locked) Mantle were the two options for the game. Wouldn't shock me if it was.

NVIDIA did what they always do when contacted by AMD: hang up. AMD got the last laugh this time.
That is, if games even use async; it's still up for debate how many will, outside the ones paid for by AMD.
 
The sad part about that is the story you posted didn't prove what I said was false; there is no date listed for when it was available. So what I said is still valid.


The question with that is whether it was back when DX11 and (proprietary, locked) Mantle were the two options for the game. Wouldn't shock me if it was.


That is, if games even use async; it's still up for debate how many will, outside the ones paid for by AMD.
The sad part is it's "more than a year" from when the blog was posted, i.e. back to at least August 16, 2014. Furthermore, NVIDIA made changes to their own code path.

It doesn't need to be paid for by AMD, since the XBO will get its DirectX 12 with its Windows 10 update, which will in turn influence Async usage in PS4 multi-platform games. Once the XBO gains full-featured Async APIs, it will be the new baseline programming model. If Pascal gains proper Async, Maxwell v2 will age like the Kepler GTX 780.
 
Last edited:
The sad part is it's "more than a year" from when the blog was posted, i.e. back to at least August 16, 2014. Furthermore, NVIDIA made changes to their own code path.

It doesn't need to be paid for by AMD, since the XBO will get its DirectX 12 with its Windows 10 update, which will in turn influence Async usage in PS4 multi-platform games. Once the XBO gains full-featured Async APIs, it will be the new baseline programming model. If Pascal gains proper Async, Maxwell v2 will age like the Kepler GTX 780.
As much as people love to point out that "more than a year" crap: Async may have been enabled in the Mantle version of the game, but that was AMD's proprietary API, which was closed source, so yeah. Async on console could be useful, but on the desktop it's not needed, since those console APUs are pretty low-end, weak AMD hardware that they have to squeeze every possible thing out of. As I said, Async was in the Mantle version but likely wasn't in the DX version of the game until the DX12 exe was released, hence my point, if you don't ignore that fact, which wouldn't shock me if you did.
 
That is, if games even use async; it's still up for debate how many will, outside the ones paid for by AMD.
Async compute is a cornerstone of Direct3D 12/Mantle/Vulkan. It isn't required (commands will simply be executed synchronously), but having it available means pretty big framerate gains because less of the GPU sits idle.
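
At the API level, "async compute" just means a second, compute-only command queue next to the graphics queue, with fences to synchronize where they share data. A minimal C++ sketch (assuming `device` is an already-created ID3D12Device*; error handling omitted):

Code:
// Minimal sketch: an async-compute setup in D3D12. Work submitted to the
// compute queue *may* overlap the graphics queue if the hardware supports
// it; otherwise the driver serializes it (the "executed synchronously"
// fallback mentioned above). Assumes `device` is a valid ID3D12Device*.
ID3D12CommandQueue* graphicsQueue = nullptr;
ID3D12CommandQueue* computeQueue  = nullptr;

D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

// Where the graphics queue consumes results from the compute queue, a fence
// keeps the hand-off ordered without stalling unrelated work:
ID3D12Fence* fence = nullptr;
device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
computeQueue->Signal(fence, 1);   // mark compute work complete at value 1
graphicsQueue->Wait(fence, 1);    // graphics waits for that point on the GPU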
 
As much as people love to point out that "more than a year" crap: Async may have been enabled in the Mantle version of the game, but that was AMD's proprietary API, which was closed source, so yeah. ...
All this time that Nvidia looked superior, you and others really seemed to enjoy trolling AMD fans from an advantageous position. Now you put your head in the sand and try to ignore reality.
The "more than a year" argument is not crap. Async compute is huge, not useless. It is useless when you try to fake it in the drivers, but not when it is implemented in the hardware. And no, it is not just for the consoles.
Also, your convenient stories about Mantle.exe and DX12.exe are not facts, only not-so-believable excuses.
 