
Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12

WALL OF TEXT ACTIVATE!

I love reading your posts lilhasselhoffer but, man, you gotta learn how to articulate your points into torpedoes and not a long spray of .22 caliber rat shot.

The only person who types more in a response than you is Ford.

Learn to hit hard and fast.

Fair. My only response is that too often not explaining yourself makes you look like an ass.

More than once I've been guilty of that...sigh....
 
Whether or not this becomes a big deal has yet to be seen, at least beyond a single game. I believe more information is needed on the subject from both sides before we can make our final decisions, but so far this is just another bit of "misinformation" put out from their side, which seems to be happening a lot recently (just like how there was a "misunderstanding" about the GTX 970).

Either way, it's definitely nothing we haven't seen before; both sides have told their own share of lies over the last few years.
That's a lot of "misunderstanding" from NV in a short amount of time...
When AMD does it, it's the fucking guillotine for 'em. When NV does it, it's a misunderstanding.
 
Fair. My only response is that too often not explaining yourself makes you look like an ass.

More than once I've been guilty of that...sigh....
Me looking like an ass to people on the internet is the least of my concerns. I make my point and bolt. Either you get it or you don't. That's not directed at you, by the way. I'm just saying that when you respond, you don't HAVE to explain yourself unless people ask you to elaborate. FYI, I read your posts, but I can tell you that most people do not. The rule of thumb in marketing is that most people don't read past the first paragraph unless it pertains to their wellbeing (money or entertainment).

That's why I tell people to hit and run when they "debate" on the interwebz. You stick longer in people's heads. My degree is in illustration; however, I have worked in marketing/apparel for a LONG time. Why do you think I troll so well? It's all about the delivery. You'll either love me or hate me, but YOU WILL READ what I say.
 
Fair. My only response is that too often not explaining yourself makes you look like an ass.

More than once I've been guilty of that...sigh....

Damned if you do, damned if you don't... o_O
 
That's a lot of "misunderstanding" from NV in a short amount of time...
When AMD does it, it's the fucking guillotine for 'em. When NV does it, it's a misunderstanding.

AFAIK Nvidia is still dealing with a class-action suit over the GTX 970 in California. I haven't seen any recent news on it, but you never know with juries. It could be an expensive lesson for them.
 
Seems like a bit of a storm in a teacup to me; this is based on one dev's engine after all, the same dev AMD used to pimp Mantle.

Are AMD seeing bigger gains? Definitely, but then they are coming from further back because their DX11 performance is much worse compared to the meanies at nV.

It would be nice to at least have a baseline of 5 or so titles using different engines before jumping to any real conclusions.
 
Why are people surprised by this? DirectX 12 (and Vulkan too) both have AMD roots in them (Microsoft has two consoles on the market with AMD GPUs).

It's about time for the console deal "with no profit in it" to pay off.

It's on NV that they are out of it. Next time they should get more involved in the gaming ecosystem if they want to have a word in its development, even if it doesn't bring them the big bucks immediately.
 
Seems like a bit of a storm in a teacup to me; this is based on one dev's engine after all, the same dev AMD used to pimp Mantle.

Are AMD seeing bigger gains? Definitely, but then they are coming from further back because their DX11 performance is much worse compared to the meanies at nV.

It would be nice to at least have a baseline of 5 or so titles using different engines before jumping to any real conclusions.

What? Rational thought?! Thank you!!
 
There is no misinformation at all; most of the DX12 features will be supported in software on most of the cards, and there is no GPU on the market with 100% top-tier DX12 support (and I'm not sure the next generation will have one either, but maybe). This is nothing but a very well-directed marketing campaign to level the field, but I expected more insight into this from some of the TPU vets, tbh (I don't mind it, btw; AMD needs all the help it can get anyway).

There are bigger implications if either side is up to mischief.

If the initial game engine is being developed on AMD GCN with async compute for consoles, then we PC gamers are getting even more screwed.

We are already getting bad ports: non-optimized, downgraded from improved API paths, GameWorks middleware, driver-side emulation, developers who don't care = the PC version of Batman: Arkham Knight.
 
Oohhh, time to go back to the Reds... not that I am unhappy with my GTX 980... but if I find a good deal on a Fury/Fury X/Fury Nano I would gladly switch camps again... (it's not like I care what GPU is in my rig, though...)
 
The reason async compute is important is that it has to do with why some people get nauseated when using VR.

David Kanter says motion-to-photon latency needs to be under 20 ms, and John Carmack agrees.

A couple of quotes from articles linked on the AnandTech forums:
David Kanter:
Asynchronous shading is an incredibly powerful feature that increases performance by more efficiently using the GPU hardware while simultaneously reducing latency. In the context of VR, it is absolutely essential to reducing motion-to-photon latency, and is also beneficial for conventional rendering... To avoid simulation sickness in the general population, the motion-to-photon latency should never exceed 20 milliseconds (ms).

John Carmack:
20 milliseconds or less will provide the minimum level of latency deemed acceptable.

In other words, if you want VR without motion sickness, then AMD's LiquidVR is currently the only way. If nVidia does not add a quicker way to do async compute in Pascal (whether in hardware or software), it's going to hurt them when it comes to VR.
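To put some rough numbers behind that 20 ms figure (my own back-of-the-envelope math with assumed stage timings, not anything from Kanter or Carmack): a 90 Hz headset only refreshes every ~11.1 ms, so whether a late reprojection/compute pass can slip in right before scanout, instead of queuing behind a render that missed vsync, is roughly the difference between sitting comfortably inside the budget and brushing right up against it.

```cpp
// Illustrative motion-to-photon budget arithmetic only; the stage timings
// below are assumptions for the example, not measurements from any headset.
#include <cstdio>

int main() {
    const double refresh_hz   = 90.0;                 // typical VR headset refresh rate
    const double frame_ms     = 1000.0 / refresh_hz;  // ~11.1 ms per refresh
    const double sensor_ms    = 1.0;                  // assumed head-tracking sample + transfer
    const double reproject_ms = 2.0;                  // assumed late reprojection pass (compute work)
    const double scanout_ms   = frame_ms / 2.0;       // average wait until the pixel lights up

    // If the reprojection pass can run on a separate high-priority/async queue
    // just before vsync, it gets to use the freshest head pose:
    const double late_reproject = sensor_ms + reproject_ms + scanout_ms;

    // If it has to queue behind a render that missed vsync, an extra refresh
    // interval gets added on top:
    const double missed_vsync = sensor_ms + frame_ms + reproject_ms + scanout_ms;

    std::printf("budget:                 20.0 ms\n");
    std::printf("late reprojection:      %4.1f ms\n", late_reproject);
    std::printf("missed vsync, no async: %4.1f ms\n", missed_vsync);
    return 0;
}
```

With those assumed numbers the "no async" case lands at roughly 19-20 ms, i.e. right at the ceiling, which is why being able to interleave that compute work matters so much for VR.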
 
Wow, this thread has been a fun read. Not because of the blatant fanboyism, or the supremely amusing yet subtle trolling, but because everybody seems to be missing the point here.
Both sides have lied, neither side is innocent, this isn't a win for AMD or a loss for NV, but most importantly (and I'm putting this in all caps for emphasis, not because I'm yelling):
NOBODY BUYS A GPU THINKING "THIS IS GOING TO BE THE BEST CARD IN A YEAR." You buy for the now and hope that it'll hold up for a year or two (or longer, depending on your build cycles). Nobody can accurately predict what the tech front holds in the next few years. Sure, we all have an idea, but even the engineers working on the next two or three generations of hardware at this very moment cannot say for certain what's going to happen between now and release, or how a particular piece of hardware will perform in a year's time, whether the reason is drivers, software, APIs, power restrictions (please Intel, give up the 130 W TDP limit on your "Extreme" chips and give us a proper Extreme Edition again!), etc., or something completely unforeseeable.

TL;DR: Whether NV lied about its DX12 support on Maxwell is a moot point, the hardware is already in the wild. I would be extremely surprised if, by the time we have at least three AAA DX12 titles, today's current top-end cards still perform on-par with the top-tier cards of the future that will have DX12 support. As far as DX12 stands right now, we have a tech demo and a pre-beta game that is being run as a benchmark. Take the results with a grain of salt, and only as an indicator of how the landscape might look once DX12 is widely-adopted.

And now, before I get flamed to death by all the fanboys, RM OUT!
 
They should just leave the benchmark trying to use async compute whenever the card in the system claims to support it.

It's Nvidia's fault, not Oxide's.
 
What, you mean the game engine that started off as a Mantle tech demo runs better on AMD cards than Nvidia's? Who would have thunk it?
 
... :eek: THOSE FIENDS!!!!!!!!!!!!! Anyone running a 980 Ti or Titan X should immediately sell me their card for next to nothing and go buy one from AMD... that'll show 'em :D. But seriously... when this becomes relevant to gaming... a whole new architecture will be out... so while this is kinda dastardly... it's massively moot.


Wow, by the time I typed this... Random Murderer said it a lot better.
 
AMD may be a bunch of deceptive scumbags when it comes to their paper launches, but nVidia is still the king of dirty trick scumbaggery. Ha :D
 
... :eek: THOSE FIENDS!!!!!!!!!!!!! Anyone running a 980 Ti or Titan X should immediately sell me their card for next to nothing and go buy one from AMD... that'll show 'em :D. But seriously... when this becomes relevant to gaming... a whole new architecture will be out... so while this is kinda dastardly... it's massively moot.

Love your sarcasm! :laugh:

Seriously though, if this does become a real issue; while a whole new architecture will be out, there will be a huge number of people with Maxwells, since they have sold so well. So, not really moot either, though I see your thought process.
 
IF the Oxide dev team is correct that nVidia doesn't have the async compute feature embedded in their hardware and is just letting everyone think they do by exposing it at the driver level only, we are talking about fraud here. AMD has made plenty of PR-level mistakes by trying to hide disadvantages in raw performance. But the 970 VRAM fiasco and this (if true) should make anyone who wants to see PC gaming, or computing in general, advance MAD at the green team's practices. On the other side, we need to praise AMD for all the features they have brought us over the years: native 64-bit support on CPUs, 4-core CPUs on a die, tessellation, DX 10.1, HBM VRAM, GCN, APUs with some graphics power at a reasonable price, etc. They sometimes fall short in performance due to a low R&D budget, but they try to advance the PC world and they bring forward open-source features too; FreeSync, OpenCL and Mantle are the recent ones.

To sum up, if I were in a position to buy a new GPU right now, I would choose a GCN one without second thoughts, just to be sure I would enjoy the most features and a higher performance level in DX12 games over the next 2-3 years.
 
Today in DX12 = AMD > Nvidia.
When DX12 gets popular and has a lot of games = Nvidia (new generation) > AMD.

Nvidia's marketing department is so stupid; they said Maxwell would be compatible with all DX12 feature levels... haha, joke's on you, Nvidia fanboys.

I can't seem to lower my IQ far enough to understand anything you have said. If I could I would perhaps converse with the slow growing slime that forms around my bath plug. Until then I'll just stick to communicating with slugs, which after all, are still an evolutionary jump over your synaptic development.
 
Whether NV lied about its DX12 support on Maxwell is a moot point, the hardware is already in the wild. I would be extremely surprised if, by the time we have at least three AAA DX12 titles, today's current top-end cards still perform on-par with the top-tier cards of the future that will have DX12 support. As far as DX12 stands right now, we have a tech demo and a pre-beta game that is being run as a benchmark. Take the results with a grain of salt, and only as an indicator of how the landscape might look once DX12 is widely-adopted.

And because of that way of thinking, NV can get away with this type of bullshit over and over and over again:
During the course of its developer correspondence with NVIDIA to try and fix this issue, it learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that the NVIDIA driver bluffs its support to apps. NVIDIA instead started pressuring Oxide to remove parts of its code that use async compute altogether, it alleges.
 
I can't seem to lower my IQ far enough to understand anything you have said. If I could I would perhaps converse with the slow growing slime that forms around my bath plug. Until then I'll just stick to communicating with slugs, which after all, are still an evolutionary jump over your synaptic development.

Wow... what's up with the hostility, man? Is it really you? Doesn't seem like it.

I wonder why Humansmoke hasn't commented on this news....
 
Here's the original article at WCCFTECH: http://wccftech.com/oxide-games-dev-replies-ashes-singularity-controversy/

There are two points here to look at: first, Oxide is saying that Nvidia has had access to the source code of the game for over a year and that they've been getting updates to the source code on the same day as AMD and Intel.

"P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."

Second, they claim they're using the so-called async compute units found on AMD's GPUs. These are parts of AMD's recent GPUs that can be used to asynchronously schedule work onto underutilized GCN clusters.

"AFAIK, Maxwell doesn’t support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it then to not. Weather or not Async Compute is better or not is subjective, but it definitely does buy some performance on AMD’s hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to it’s scheduler is hard to say."
Here is the original post from Oxide: http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995
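For anyone wondering what "using async compute" actually looks like from the engine side: the gist, as a minimal D3D12 sketch (illustrative only, not Oxide's code), is that the engine creates a second command queue of the COMPUTE type next to the usual DIRECT queue and syncs the two with a fence. The runtime accepts this on any DX12 GPU; whether the hardware actually executes both queues concurrently, the way GCN's ACEs do, is up to the driver and architecture, and that's exactly the part being disputed for Maxwell.

```cpp
// Minimal sketch of the D3D12 plumbing behind "async compute": a compute-type
// queue beside the graphics (direct) queue, synchronized with a fence.
// Illustrative only, not Oxide's code; error handling and setup omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics queue: accepts draw, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate compute queue: this is the "async compute" path. The API lets
    // you create it regardless of whether the GPU can actually run it
    // concurrently with graphics work.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Cross-queue sync: the graphics queue waits (GPU-side) on a fence that
    // the compute queue signals, so e.g. a lighting pass can consume the
    // results of a compute pass submitted on the other queue.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    // ... record and ExecuteCommandLists(...) on computeQueue ...
    computeQueue->Signal(fence.Get(), 1);
    gfxQueue->Wait(fence.Get(), 1);   // waits on the GPU, not the CPU
    // ... record and ExecuteCommandLists(...) on gfxQueue ...
    return 0;
}
```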
 
Cherry-picked benchmarks are the only benchmarks from a marketing perspective. This is true.

Me, I'm sailing the seas of cheese, not caring, and obviously being better off for it.
 
How long before AMD puts a Free-aSync logo on the box?

How long before people start saying AMD's inclusion of async compute was a stupid move, that they should have charged more, and that Nvidia is doing the right thing by making you upgrade for it, if Pascal incorporates it?
 