Monday, August 31st 2015

Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12

It turns out that NVIDIA's "Maxwell" architecture has an Achilles' heel after all, one that tilts the scales in favor of AMD's competing Graphics CoreNext architecture as the one better prepared for DirectX 12. "Maxwell" lacks support for async compute, one of the three highlight features of Direct3D 12, even as the GeForce driver "exposes" the feature's presence to apps. This came to light when game developer Oxide Games alleged that it was pressured by NVIDIA's marketing department to remove certain features from its "Ashes of the Singularity" DirectX 12 benchmark.

Async compute is a standardized API-level feature added to Direct3D by Microsoft, which allows an app to better exploit the number-crunching resources of a GPU by submitting compute workloads on separate queues that run concurrently with its graphics rendering tasks. Since the NVIDIA driver tells apps that "Maxwell" GPUs support it, Oxide Games simply built its benchmark with async compute support, but when the benchmark attempted to use the feature on Maxwell, the result was an "unmitigated disaster." During the course of its developer correspondence with NVIDIA to try and fix the issue, Oxide learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that the NVIDIA driver bluffs its support to apps. NVIDIA instead started pressuring Oxide to remove the parts of its code that use async compute altogether, it alleges.
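To see why this matters for performance, here is a toy model (plain Python, not Direct3D code; the numbers and function names are purely illustrative) of why async compute only pays off when the hardware genuinely overlaps the two queues:

```python
def serial_time(graphics_ms, compute_ms):
    """Frame time when the GPU runs graphics and compute back to back."""
    return graphics_ms + compute_ms

def async_time(graphics_ms, compute_ms, hardware_concurrent=True):
    """Frame time when the app submits compute on a second queue.

    With genuine async compute the two workloads overlap, so the frame
    is bounded by the longer of the two. If the driver accepts the
    second queue but serializes the work internally (as Oxide says
    happened on Maxwell), there is no overlap and hence no gain.
    """
    if not hardware_concurrent:
        return graphics_ms + compute_ms
    return max(graphics_ms, compute_ms)

# A frame with 10 ms of graphics work and 4 ms of compute work:
print(serial_time(10, 4))                            # 14 ms without async compute
print(async_time(10, 4))                             # 10 ms with real hardware overlap
print(async_time(10, 4, hardware_concurrent=False))  # 14 ms when support is merely "exposed"
```

In practice the serialized case can even be slower than never using the second queue at all, once scheduling overhead is added, which matches Oxide's "unmitigated disaster" description.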
"Personally, I think one could just as easily make the claim that we were biased toward NVIDIA as the only "vendor" specific-code is for NVIDIA where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that. The only other thing that is different between them is that NVIDIA does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports," writes Oxide, in a statement disputing NVIDIA's "misinformation" about the "Ashes of the Singularity" benchmark in its press communications (presumably to VGA reviewers).
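The vendor-specific fallback Oxide describes boils down to a driver capability check overridden by a blacklist keyed on the PCI vendor ID. A minimal sketch of that logic (Python for brevity; the two vendor ID constants are the real PCI IDs, but the function and parameter names are hypothetical, not Oxide's actual code):

```python
VENDOR_NVIDIA = 0x10DE  # real PCI vendor IDs
VENDOR_AMD = 0x1002

def should_use_async_compute(driver_reports_support, vendor_id):
    """Decide whether to take the async-compute rendering path.

    Normally an engine simply trusts the capability flag the driver
    reports. Oxide's workaround adds one vendor-specific exception:
    on NVIDIA hardware the flag was set, but actually using the
    feature hurt performance and conformance, so it is forced off.
    """
    if not driver_reports_support:
        return False
    if vendor_id == VENDOR_NVIDIA:
        return False  # reported as supported, but unusable in practice
    return True

print(should_use_async_compute(True, VENDOR_AMD))     # True: trust the reported flag
print(should_use_async_compute(True, VENDOR_NVIDIA))  # False: vendor-specific shutdown
```

This is what Oxide means by "responding to capabilities the driver reports": the vendor check only exists because the reported capability and the actual hardware behavior disagreed.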

Given its growing market-share, NVIDIA could use similar tactics to keep game developers away from industry-standard API features that it doesn't support and which rival AMD does. NVIDIA drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1. We wonder how much of that support is faked at the driver level, like async compute. The company is already drawing flak for borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game. Sources: DSOGaming, WCCFTech

196 Comments on Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12

#1
Alexandrus
Who the heck writes these and how do they pass to be posted on the website ?
Pressurized, pressurizing, does the writer even know what that means ?
I mean really.....
Posted on Reply
#2
Shihabyooo
"Alexandru Spataru said:
Who the heck writes these and how do they pass to be posted on the website ?
Pressurized, pressurizing, does the writer even know what that means ?
I mean really.....
Huh, I thought it was a pun. "Oxide" 'n all..

Need a new humour-meter...
Posted on Reply
#3
jihadjoe
That's interesting, and now I wonder how well the old Kepler cards do in that DX12 benchmark. Presumably those come with their compute capabilities intact.
Posted on Reply
#4
RejZoR
I think NVIDIA just couldn't be bothered with the driver implementation till now because, frankly, async compute units weren't really needed till now (or shall I say, till DX12 games are here). Maybe the drivers "bluff" the support just to prevent crashing if someone happens to try and use it now, and they'll implement it properly at a later time. Until NVIDIA confirms that the GTX 900 series has no async units, I call it BS.
Posted on Reply
#5
Cybrnook2002
Sneaky Sneaky

So, what you're saying is it's easy to shine, so long as the path has been laid for you. The moment you take a detour, flaws get exposed. Have to commend AMD on this one: while they are hurting, at least they are truly baking in support for the features they claim to. Not just a quick once-over to get to market (obviously they are not rushed on that front...)

Bluffing driver.... Wonder what other features we "think" we are using.
Posted on Reply
#6
Ferrum Master
Oxide guys truly ignited some fire in nVidia turf...

Bravo guys... bravo... I need the damn prices going down!
Posted on Reply
#7
ZeppMan217
"Shihabyooo said:
Huh, I thought it was a pun. "Oxide" 'n all..

Need a new humour-meter...
It's should be pressured.
Posted on Reply
#8
cadaveca
My name is Dave
reminds me of good ol' Gabe Newell handing the smack down on NVidia many many moons ago. (like, well over a decade ago).

"ZeppMan217 said:
It's should be pressured.
Which the original copy (text) had. It was changed on purpose, as humor, it seems.
Posted on Reply
#10
FrustratedGarrett
Here's the original article at WCCFTECH: http://wccftech.com/oxide-games-dev-replies-ashes-singularity-controversy/

There are two points here to look at: first, Oxide is saying that Nvidia has had access to the source code of the game for over a year and that they've been getting updates to the source code on the same day as AMD and Intel.

"P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."

Second, they claim they're using the so-called async compute units found on AMD's GPUs. These are parts of AMD's recent GPUs that can be used to asynchronously schedule work to underutilized GCN clusters.

"AFAIK, Maxwell doesn’t support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it then to not. Weather or not Async Compute is better or not is subjective, but it definitely does buy some performance on AMD’s hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to it’s scheduler is hard to say."
Posted on Reply
#11
AsRock
TPU addict
Well, I believe nVidia is already on to this and knows full well there are no games yet; they will just add it later when there are some, and just say tough crap to those who got the current hardware.
Posted on Reply
#12
64K
Looks like AMD got the opening salvo in DX12. I'm not sure what Nvidia can do at this point or how much they will care even if Maxwell sales do fall off some. The entry level Maxwells have been out for a year and a half and the mid range Maxwells for about a year. They've probably sold a ton of them already and the profit was made to cover R&D by now I would imagine.

We are late in the game on Maxwell. Pascal should be here in maybe 6 months and I think I read that they planned to add compute back in on the Pascals.

Hope it helps AMD to sell more cards.
Posted on Reply
#13
EarthDog
If people had half a clue, they would hold on to current NVIDIA cards until the next generation lands in 2016 that will likely perform better. There are going to be a couple of DX12 titles out by that time at least.
Posted on Reply
#14
TheMailMan78
Big Member
"Alexandru Spataru said:
Who the heck writes these and how do they pass to be posted on the website ?
Pressurized, pressurizing, does the writer even know what that means ?
I mean really.....
Well how it is written makes more sense....
NVIDIA instead started pressuring Oxide to remove parts of its code that use async compute altogether, it alleges
If you wanted to use "pressured" it would read like this....
Oxide alleges, NVIDIA pressured them to remove parts of its code that use async compute altogether.
Posted on Reply
#15
ShurikN
This came to light when game developer Oxide Games claimed that it was pressured by NVIDIA's marketing department to remove certain features in its "Ashes of the Singularity" DirectX 12 benchmark.
it learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that NVIDIA driver bluffs its support to apps.
NVIDIA instead started pressuring Oxide to remove parts of its code that use async compute altogether, it alleges.
Given its growing market-share, NVIDIA could use similar tactics to keep game developers away from industry-standard API features that it doesn't support, and which rival AMD does.
NVIDIA drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1. We wonder how much of that support is faked at the driver-level, like async compute.
The company is already drawing flack for using borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game.
Why am I not surprised by any of this...
Posted on Reply
#16
rtwjunkie
PC Gaming Enthusiast
"Alexandru Spataru said:
Who the heck writes these and how do they pass to be posted on the website ?
Pressurized, pressurizing, does the writer even know what that means ?
I mean really.....
Welcome to TPU. Quite an auspicious start:
-Complain about article content, Check.
-Insult moderator, Check.
Posted on Reply
#17
DeathtoGnomes
"Cybrnook2002 said:
Sneaky Sneaky

*

Bluffing driver.... Wonder what other features we "think" we are using.
This. We all know Nvdia has cheated at benchmarks in the past.

so its like this:

Oxide: Look! we got benchmarks!
AMD: oh my we almost beat The Green Meanies
Nvidia: @oxide you cheated on the benchmarks
Oxide: did not. nyah.
Nvidia: disable competitive features so our non-async bluff works right!
Oxide: not gonna happen
Nvidia: F*** you AMD! you're not better than us! we'll fix you with our l33t bluffing skillz
AMD: *poops pants* and hides in a corner eating popcorn.
Posted on Reply
#18
Assimilator
I don't understand why a game company would write a benchmark that favours or doesn't favour a specific card... that seems like the opposite purpose of a benchmark.

Be that as it may, I'd also like to know which version of Maxwell allegedly doesn't support async shaders - v1, v2, or both?

If this is true, then it's a massive c**k-up on nVIDIA's part, but one that probably won't affect them until the end of this year, when big-name DX12 games arrive for the holiday season. Even so, said games will still probably have DX11 rendering paths, and once Pascal arrives in March/April 2016 this will all be forgotten... assuming Pascal isn't delayed.
Posted on Reply
#19
yogurt_21
"ShurikN said:
Why am I not surprised by any of this...
Because its been done time and time again. Whichever company doesn't have support for new shiney goes out of their way to downplay its importance and states that their competition is foolish for being early adopters by supporting it.

Later when they do support new shiney they go out of their way to claim its now more important and you should want it.

From a business standpoint not supporting the latest and greatest features of direct x makes sense. How much did nv spend on SM3.0 support in the their 6000 series? By the time the games that supported it were out, most were too resource heavy for anything short of the 6800 Ultra.

What about the Direct X 10 fiasco? how much was spent on getting compliance only to have next to no games support it and instead stick with Direct X 9 until 11 came out?

Though I am curious, like the previous poster, whether Kepler also suffers from this or if it's something that was nerfed for Maxwell.
Posted on Reply
#20
TheMailMan78
Big Member
"EarthDog said:
If people had half a clue, they would hold on to current NVIDIA cards until the next generation lands in 2016 that will likely perform better. There are going to be a couple of DX12 titles out by that time at least.
Still rocking two 670's here ;)

Not that I have a clue........I'm just broke.
Posted on Reply
#21
TRWOV
Maybe that explains Ars Technica's results?

Posted on Reply
#22
RejZoR
It's again a SINGLE game. Until I see more (that aren't exclusive to either camp like this one is to AMD), then I'll accept the info...
Posted on Reply
#23
Mr McC
No support, no problem, just pay them to gimp it on AMD cards: welcome to the wonderful world of the nVidia console, sorry, pc gaming, the way it's meant to be paid.
Posted on Reply
#24
Legacy-ZA
*Sigh*

I am so tired of things like this...
Posted on Reply
#25
Casecutter
So where does this leave Nvidia? I thought it had been pretty well purported that Nvidia "confirmed" some time ago that Maxwell 2 (GM200, 204, 206) can utilize the feature. Is it now that they don't have async compute (or not enough of it), tried to emulate it through software, and these are the bad teething pains we're seeing? Is it now surfacing that perhaps Nvidia covered up the truth to sell GPUs?

It's been known that AMD has baked async compute into their hardware since GCN, and even into the PS/Xbox consoles. And it was known well before Maxwell that async compute was going to be something DX12 would be able to leverage.

So Nvidia has been selling a shitload of cards, and now all those cards are found to offer, at best, emulated support in due time. Did they intend to design hardware that appears to offer no native support (or so little) that one might see it as negligent? So what were they expecting most Maxwell (and even Kepler) owners to do: wait for Pascal and be happy to throw more money at them, while they watch the resale value of their cards plummet? Sure, right now owners can just convince themselves they can make do with some half-baked support till more DX12 games start to show.

Nvidia must provide a clear and truthful statement as to the goings on with this, or… IDK
Posted on Reply