Monday, August 31st 2015

Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12

It turns out that NVIDIA's "Maxwell" architecture has an Achilles' heel after all, one that tilts the scales in favor of AMD's competing Graphics Core Next (GCN) architecture as the one better prepared for DirectX 12. "Maxwell" lacks support for async compute, one of the three highlight features of Direct3D 12, even as the GeForce driver "exposes" the feature to apps. This came to light when game developer Oxide Games alleged that NVIDIA's marketing department pressured it to remove certain features from its "Ashes of the Singularity" DirectX 12 benchmark.

Async compute is a standardized API-level feature added to Direct3D by Microsoft, which lets an app better exploit the number-crunching resources of a GPU by running compute workloads alongside its graphics rendering tasks, instead of queuing everything serially. Since the NVIDIA driver tells apps that "Maxwell" GPUs support it, Oxide Games simply built its benchmark with async compute support, but attempting to use it on Maxwell proved an "unmitigated disaster." During the course of its developer correspondence with NVIDIA to try and fix the issue, it learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that the NVIDIA driver bluffs its support to apps. NVIDIA instead started pressuring Oxide to remove parts of its code that use async compute altogether, it alleges.
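For context, this is roughly what using the feature looks like from the application's side; a minimal sketch (not Oxide's actual code) of creating a dedicated compute queue in Direct3D 12 alongside the graphics queue:

```cpp
// Minimal sketch (not Oxide's code): a D3D12 app creates a dedicated compute
// queue next to its graphics queue, so compute command lists can be submitted
// for potential overlap with rendering. Whether the work actually runs
// concurrently is up to the hardware and driver; the API only expresses the opportunity.
#include <windows.h>
#include <d3d12.h>

void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** graphicsQueue,
                  ID3D12CommandQueue** computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(computeQueue));

    // Command lists of type COMPUTE are submitted to computeQueue; ID3D12Fence
    // objects synchronize the two queues wherever the workloads actually depend
    // on each other.
}
```

On GCN, the hardware's Asynchronous Compute Engines can pick work off such a compute queue while the graphics queue is still busy; the allegation here is that Maxwell cannot do the equivalent at the hardware level, even though its driver accepts the queue.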
"Personally, I think one could just as easily make the claim that we were biased toward NVIDIA as the only "vendor" specific-code is for NVIDIA where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that. The only other thing that is different between them is that NVIDIA does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports," writes Oxide, in a statement disputing NVIDIA's "misinformation" about the "Ashes of Singularity" benchmark in its press communications (presumably to VGA reviewers).

Given its growing market-share, NVIDIA could use similar tactics to keep game developers away from industry-standard API features that it doesn't support, and which rival AMD does. NVIDIA drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1. We wonder how much of that support is faked at the driver level, like async compute. The company is already drawing flak for borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game.
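For what it's worth, the feature level itself is also just something the driver reports through the runtime; a minimal sketch of how an app would query it (illustrative only):

```cpp
// Sketch: querying the highest D3D12 feature level the driver claims to support.
// The value is self-reported by the driver through the runtime, which is exactly
// the reporting path questioned above.
#include <windows.h>
#include <d3d12.h>

D3D_FEATURE_LEVEL QueryMaxFeatureLevel(ID3D12Device* device)
{
    static const D3D_FEATURE_LEVEL candidates[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = static_cast<UINT>(sizeof(candidates) / sizeof(candidates[0]));
    levels.pFeatureLevelsRequested = candidates;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));
    return levels.MaxSupportedFeatureLevel;    // e.g. D3D_FEATURE_LEVEL_12_1
}
```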
Sources: DSOGaming, WCCFTech

196 Comments on Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12

#51
ShurikN
GhostRyder: Whether or not this becomes a big deal has yet to be seen at least from a more than one game standpoint. I believe more information is needed on the subject from both sides before we can make our final decisions, but so far this is just another bit of "Misinformation" that has been put out from their side which seems to be happening a lot recently (Just like how there was a "Misunderstanding" about the GTX 970).

Either way, its definitely nothing we have not seen before as far as any of the sides have made up their own shares of lies over the last few years.
That's a lot of "misunderstanding" from NV in a short amount of time...
When AMD does it it's the fucking guillotine for em. When NV does it, it's a misunderstanding.
Posted on Reply
#52
TheMailMan78
Big Member
lilhasselhoffer: Fair. My only response is that too often not explaining yourself makes you look like an ass.

More than once I've been guilty of that...sigh....
Me looking like an ass to people on the internet is the least of my concerns. I make my point and bolt. Either you get it or don't. That's not directed at you by the way. I'm just saying when you respond you don't HAVE to explain yourself unless people ask you to elaborate. FYI I read your posts, but I can tell you that most people do not. Rule of thumb in marketing is most people don't read past the first paragraph unless it pertains to their wellbeing (money or entertainment).

That's why I tell people to hit and run when you "debate" on the interwebz. You stick longer in people's heads. My degree is in illustration. However, I have worked in marketing/apparel for a LONG time. Why do you think I troll so well? It's all about the delivery. You'll either love me or hate me, but YOU WILL READ what I say.
Posted on Reply
#53
Loosenut
lilhasselhoffer: Fair. My only response is that too often not explaining yourself makes you look like an ass.

More than once I've been guilty of that...sigh....
Damned if you do, damned if you don't... o_O
Posted on Reply
#54
64K
ShurikN: That's a lot of "misunderstanding" from NV in a short amount of time...
When AMD does it it's the fucking guillotine for em. When NV does it, it's a misunderstanding.
AFAIK Nvidia is still dealing with a class-action suit over the GTX 970 in California. I haven't seen any recent news on it, but you never know with juries. It could be an expensive lesson for them.
Posted on Reply
#55
Fluffmeister
Seems like a bit of a storm in a teacup to me; this is based on one dev's engine after all, the same dev AMD used to pimp Mantle.

Are AMD seeing bigger gains? Definitely, but then they are coming from further back because the DX11 performance is much worse compared to the meanies at nV.

It would be nice to at least have a baseline of 5 or so titles using different engines before jumping to any real conclusions.
Posted on Reply
#56
geon2k2
Ikaruga: Why do people surprised by this? Directx12 (and Vulcan too) both have AMD roots in them (Microsoft has two consoles in the market with AMD GPUs).
Its about time for the console deal "with no profit in it" to pay back.

It's on NV that they are out of it. Next time they should get more involved in the gaming ecosystem if they want a say in its development, even if it doesn't bring them the big bucks immediately.
Posted on Reply
#57
rtwjunkie
PC Gaming Enthusiast
Fluffmeister: Seems like a bit of a storm in a teacup to me; this is based on one dev's engine after all, the same dev AMD used to pimp Mantle.

Are AMD seeing bigger gains? Definitely, but then they are coming from further back because the DX11 performance is much worse compared to the meanies at nV.

It would be nice to at least have a baseline of 5 or so titles using different engines before jumping to any real conclusions.
What? Rational thought?! Thank-you!!
Posted on Reply
#58
Xzibit
Ikaruga: There is no misinformation at all, most of the dx12 features will be supported by software on most of the cards, there are no GPU on the market with 100% top tier dx12 support (and I'm not sure if the next generation will be one, but maybe). This is nothing but a very well directed market campaign to level the fields, but I expected more insight into this from some of the TPU vets tbh (I don't mind it btw, AMD needs all the help he can get anyways).
There are bigger implications if either side is up to mischief.

If the initial game engine is being developed under AMD GCN with async compute for consoles, then we PC gamers are getting even more screwed.

We are already getting bad ports: non-optimized, downgraded from improved API paths, GameWorks middleware, driver-side emulation, developers who don't care = the PC version of Batman: Arkham Knight.
Posted on Reply
#59
GreiverBlade
oohhh time to go back to the Reds ... not that i am unhappy with my GTX 980 ... but if i find a good deal on a Fury/Fury X/Fury Nano i would gladly re-switch camp ... (it's not like i care what GPU is in my rig tho ... )
Posted on Reply
#60
mastrdrver
The reason async compute is important is that it bears directly on why some people get nauseated when using VR.

David Kanter says the response needs to be under 20 ms; John Carmack agrees.

Some quotes from articles linked on the AnandTech forum:
David Kanter:
Asynchronous shading is an incredibly powerful feature that increases performance by more efficiently using the GPU hardware while simultaneously reducing latency. In the context of VR, it is absolutely essential to reducing motion-to-photon latency, and is also beneficial for conventional rendering. ... To avoid simulation sickness in the general population, the motion-to-photon latency should never exceed 20 milliseconds (ms).

John Carmack:
20 milliseconds or less will provide the minimum level of latency deemed acceptable.
In other words, if you want VR without motion sickness, then AMD's LiquidVR is currently the only way. If nVidia does not add a quicker way to do async compute in Pascal (whether in hardware or software), it's going to hurt them when it comes to VR.
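To put that 20 ms figure in perspective, here is a rough back-of-the-envelope budget; the refresh rate is a typical headset figure and the other component numbers are illustrative assumptions, not measurements:

```cpp
// Back-of-the-envelope motion-to-photon budget. The 20 ms ceiling comes from the
// quotes above; the 90 Hz refresh is typical of VR headsets, and the remaining
// component figures are illustrative assumptions, not measurements.
#include <cstdio>

int main()
{
    const double budget_ms  = 20.0;                  // Kanter/Carmack ceiling
    const double refresh_hz = 90.0;                  // typical VR headset refresh
    const double frame_ms   = 1000.0 / refresh_hz;   // ~11.1 ms just to render one frame
    const double sensors_ms = 1.0;                   // assumed head-tracking sample/read
    const double scanout_ms = 4.0;                   // assumed compositor + display scan-out
    const double slack_ms   = budget_ms - (frame_ms + sensors_ms + scanout_ms);
    std::printf("Slack left in the 20 ms budget: %.1f ms\n", slack_ms);  // ~3.9 ms
    return 0;
}
```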
Posted on Reply
#61
Random Murderer
The Anti-Midas
Wow, this thread has been a fun read. Not because of the blatant fanboyism, or the supremely amusing yet subtle trolling, but because everybody seems to be missing the point here.
Both sides have lied, neither side is innocent, this isn't a win for AMD nor a loss for NV, but most importantly (and I'm putting this in all caps for emphasis, not because I'm yelling) :
NOBODY BUYS A GPU THINKING "THIS IS GOING TO BE THE BEST CARD IN A YEAR." You buy for the now and hope that it'll hold up for a year or two (or longer, depending on your build cycles). Nobody can accurately predict what the tech front holds in the next few years. Sure, we all have an idea, but even the engineers that are working on the next two or three generations of hardware at this very moment cannot say for certain what's going to happen between now and release, or how a particular piece of hardware will perform in a year's time, whether the reason is drivers, software, APIs, power restrictions (please Intel, give up the 130W TDP limit on your "Extreme" chips and give us a proper Extreme Edition again!), etc., or something completely unforeseeable.

TL;DR: Whether NV lied about its DX12 support on Maxwell is a moot point, the hardware is already in the wild. I would be extremely surprised if, by the time we have at least three AAA DX12 titles, today's current top-end cards still perform on-par with the top-tier cards of the future that will have DX12 support. As far as DX12 stands right now, we have a tech demo and a pre-beta game that is being run as a benchmark. Take the results with a grain of salt, and only as an indicator of how the landscape might look once DX12 is widely-adopted.

And now, before I get flamed to death by all the fanboys, RM OUT!
Posted on Reply
#62
Roph
They should just leave the benchmark trying to use async compute whenever the card in the system claims to support it.

It's nvidia's fault, not Oxide's.
Posted on Reply
#63
semantics
What, you mean the game engine that started off as a Mantle tech demo runs better on AMD cards than on Nvidia's? Who would have thunk it?
Posted on Reply
#64
ensabrenoir
...........:eek: THOSE FIENDS!!!!!!!!!!!!! Anyone running a 980Ti or Titan X should immediately sell me their card for next to nothing and go buy one from Amd.........that'll show'em:D. But seriously...............when this becomes relevant to gaming.......a whole new architecture will be out......so while this is kinda dastardly...............its massively moot


wow by the time i typed this.....Random Murderer said it a lot better.
Posted on Reply
#65
NC37
AMD may be a bunch of deceptive scumbags when it comes to their paper launches, but nVidia is still the king of dirty trick scumbaggery. Ha :D
Posted on Reply
#66
rtwjunkie
PC Gaming Enthusiast
ensabrenoir: ...........:eek: THOSE FIENDS!!!!!!!!!!!!! Anyone running a 980Ti or Titan X should immediately sell me their card for next to nothing and go buy one from Amd.........that'll show'em:D. But seriously...............when this becomes relevant to gaming.......a whole new architecture will be out......so while this is kinda dastardly...............its massively moot
Love your sarcasm! :laugh:

Seriously though, if this does become a real issue; while a whole new architecture will be out, there will be a huge number of people with Maxwells, since they have sold so well. So, not really moot either, though I see your thought process.
Posted on Reply
#67
Sasqui
TRWOV: Maybe that explains Ars Technica's results?

Wow.
Posted on Reply
#68
HD64G
IF Oxide's dev team is correct about nVidia not having the async compute feature embedded in their hardware, and just letting everyone think they do by exposing it at the driver level only, we are talking about fraud here. AMD has made many mistakes at the PR level by trying to hide disadvantages in raw performance. But the 970 VRAM fiasco and this (if true) should make anyone who wants to see PC gaming or computing advance in general MAD at the green team's practices. On the other side, we need to praise AMD for all the features they have brought us over the years: native 64-bit support on CPUs, 4-core CPUs on a single die, tessellation, DX 10.1, HBM VRAM, GCN, APUs with some graphics power at a reasonable price, etc. They sometimes fail at the performance level due to a low R&D budget, but they try to advance the PC world and they bring forward open-source features too. FreeSync, OpenCL and Mantle are the recent ones.

To sum up, if I were in a position to buy a new GPU now, I would choose a GCN one without second thoughts, just to be sure I will enjoy the most features and a higher performance level in DX12 games over the next 2-3 years.
Posted on Reply
#69
the54thvoid
Intoxicated Moderator
KarymidoN: Today in DX12 = AMD > Nvidia.
When DX12 got Popular and have a lot of games = Nvidia (new Generation) > AMD.

Nvidia Marketing department is so stupid, they said Maxwell will be compatible with all DX12 feature levels... haha, jokes on you Nvidia fanboy.
I can't seem to lower my IQ far enough to understand anything you have said. If I could I would perhaps converse with the slow growing slime that forms around my bath plug. Until then I'll just stick to communicating with slugs, which after all, are still an evolutionary jump over your synaptic development.
Posted on Reply
#70
ShurikN
Random Murderer: Whether NV lied about its DX12 support on Maxwell is a moot point, the hardware is already in the wild. I would be extremely surprised if, by the time we have at least three AAA DX12 titles, today's current top-end cards still perform on-par with the top-tier cards of the future that will have DX12 support. As far as DX12 stands right now, we have a tech demo and a pre-beta game that is being run as a benchmark. Take the results with a grain of salt, and only as an indicator of how the landscape might look once DX12 is widely-adopted.
And because of that way of thinking, NV can get away with this type of bullshit over and over and over again:
During the course of its developer correspondence with NVIDIA to try and fix the issue, it learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that the NVIDIA driver bluffs its support to apps. NVIDIA instead started pressuring Oxide to remove parts of its code that use async compute altogether, it alleges.
Posted on Reply
#71
N3M3515
the54thvoid: I can't seem to lower my IQ far enough to understand anything you have said. If I could I would perhaps converse with the slow growing slime that forms around my bath plug. Until then I'll just stick to communicating with slugs, which after all, are still an evolutionary jump over your synaptic development.
Wow.......what's up with the hostility man. Is it really you? Doesn't seem like it.

I wonder why Humansmoke hasn't commented on this news....
Posted on Reply
#72
ValenOne
FrustratedGarrett: Here's the original article at WCCFTECH: wccftech.com/oxide-games-dev-replies-ashes-singularity-controversy/

There are two points here to look at: first, Oxide is saying that Nvida has had access to the source code of the game for over a year and that they've been getting updates to the source code on the same day as AMD and Intel.

"P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."

Second, they claim they're using the so called async-compute units found on AMD's GPUs. These are parts on AMD's recent GPUs that can be used to asynchronously schedule work to underutilized GCN clusters.

"AFAIK, Maxwell doesn’t support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it then to not. Weather or not Async Compute is better or not is subjective, but it definitely does buy some performance on AMD’s hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to it’s scheduler is hard to say."
Here is the original post from Oxide: www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995
Posted on Reply
#73
Frick
Fishfaced Nincompoop
Cherry picked benchmarks are the only benchmarks from a marketing perspective. This is true.

Me I'm sailing the seas of cheese, not caring and obviously being better off by it.
Posted on Reply
#74
Xzibit
How long before AMD puts a Free-aSync logo on the box.

How long before people start saying AMD's inclusion of async compute was a stupid move, that they should have charged more for it, and that Nvidia is doing the right thing by making you upgrade for it, if Pascal incorporates it?
Posted on Reply
#75
HumanSmoke
N3M3515: Wow.......what's up with the hostility man. Is it really you? Doesn't seem like it.
I wonder why Humansmoke hasn't commented on this news....
1. Because the whole thing is based upon a demo of an unreleased game, which may or may not have any significant impact on PC gaming.
2. Because as others have said, the time to start sweating is when DX12 games actually arrive.
3. As I've said in earlier posts, there are going to be instances where game engines favour one vendor or the other - it has always been the case, and it will very likely continue to be so. Nitrous is built for GCN. No real surprises, since Oxide's Star Swarm was the original Mantle demo poster child. AMD gets its licks in early. Smart marketing move. It will be interesting to see how they react when they are at a disadvantage, and what games draw on what mix of the hardware and software features available to DX12.
4. With the previous point in mind, Unreal launched UE 4.9 yesterday. The engine supports a number of features that AMD has had problems with (GameWorks), or has architectural/driver issues with. 4.9 I believe has VXGI support, and ray tracing. My guess is that the same people screaming "Nvidia SUCK IT!!!!!" will be the same people crying foul when a game emerges that leverages any of these graphical effects.....of course, Unreal Engine 4 might be inconsequential WRT AAA titles, but I very much doubt it.

PC Gaming benchmarks and performance - vendors win some, lose some. Wash. Rinse. Repeat. I just hope the knee-jerk comments keep on coming - I just love bookmarking (and screencapping for those who retroactively rewrite history) for future reference.
The UE4.9 notes are pretty extensive, so here's an editor shot showing the VXGI support.
Posted on Reply