
AMD Cripples Older GCN GPUs of Async-Compute Support?

And when was the 290X released? :p ;) Wasn't the 280 a tweaked 7970 or something like that? So yes, it's older than that, but it's still GCN, and devs should support it for a few more years at least; the card is still not a bad card today.

I think this is less AMD and more lazy devs.

Well, as you said, the 280X is nothing more than a 7970 GHz Edition; the 290X is a newer architecture. A four-year lifespan for a card is more than enough. AMD is not running a charity, nor do they have the luxury to do so. The effort to keep supporting a four-year-old architecture is best spent elsewhere.
 
Seems like nowadays both nVidia and AMD only support the current and -1 generations, while intentionally crippling performance on older generations. It's just been confirmed; no need to bitch about it. Both companies are doing it.
In all truth, nVidia was worse, crippling performance on the 7xx series while the 9xx series was still their current generation...
 
I see a lot of misunderstanding since I posted my findings.

In the first place, I started to search for why GCN 1.0 was not supported by the latest Rise of the Tomb Raider patch.

When Maxwell was accused of not supporting Async Compute, someone created a program for testing Async Compute. Back then, it worked on GCN 1.0, 1.1 and 1.2.

So I just wanted to verify whether that was still the case, and that's how I found that Async Compute was disabled in newer drivers. Then I posted on Reddit.

After that, many asked me to test a game that supports Async Compute on GCN 1.0, so I tried Ashes of the Singularity. You know the end: it just confirmed that Async Compute was disabled in newer drivers. Old drivers perform way better because of Async Compute.

DirectX 12, driver 16.3.1, Async Compute off: http://i.imgur.com/aiV1pSg.png

DirectX 12, driver 16.3.1, Async Compute on: http://i.imgur.com/CGrb4yM.png

DirectX 12, drivers newer than 16.9.2, Async Compute off: http://i.imgur.com/yiSSRCE.png

DirectX 12, drivers newer than 16.9.2, Async Compute on: http://i.imgur.com/Fch5V8w.png
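For anyone wondering what a test program like the one mentioned above actually measures: the usual idea is to time a graphics-style workload and a compute workload run back-to-back versus submitted concurrently on separate queues. This is not D3D12 code, just a minimal Python sketch of that measurement logic, with sleeps standing in for GPU work:

```python
import threading
import time

def fake_graphics_work():
    time.sleep(0.1)  # stands in for a long graphics workload

def fake_compute_work():
    time.sleep(0.1)  # stands in for a compute dispatch

def timed_serial():
    # Both workloads on one "queue", one after the other.
    start = time.perf_counter()
    fake_graphics_work()
    fake_compute_work()
    return time.perf_counter() - start

def timed_concurrent():
    # Compute submitted on a second "queue" while graphics runs.
    start = time.perf_counter()
    t = threading.Thread(target=fake_compute_work)
    t.start()
    fake_graphics_work()
    t.join()
    return time.perf_counter() - start

serial = timed_serial()
concurrent = timed_concurrent()
# If async compute really overlaps, concurrent time is well under serial;
# if the driver serializes the queues, the two times come out about equal.
print(serial, concurrent)
```

On real hardware you would compare GPU timestamps rather than wall-clock sleeps, but the pass/fail logic is the same shape.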

For all you know, at this time it could be that AMD tweaked the drivers and is waiting on the game devs to update their code. Although if that is the case they could have said something, but they are not required to.

Shit, maybe AMD accidentally broke it. Who knows for sure yet?


Well, as you said, the 280X is nothing more than a 7970 GHz Edition; the 290X is a newer architecture. A four-year lifespan for a card is more than enough. AMD is not running a charity, nor do they have the luxury to do so. The effort to keep supporting a four-year-old architecture is best spent elsewhere.

Don't know about that, it all depends. Dropping the 290 from support would be a terrible thing to do at this time.
 
It's an indicator, but it's certainly not definitive, since software can check that a driver version is below a certain version. For example, he said he tested:

16.3.1 to 16.9.2, which is a really big gap, and if it stopped working at a minor-version update, it could still be the case that games only enable certain features when the driver is in a certain version range. I don't want to make assumptions about the software, though. The simple fact is that we don't really know. An interesting observation, though, is that the 16.9.2 results sit somewhere between the 16.3.1 results.
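The version-gating idea can be sketched quickly. The version numbers below come from this thread, but the gate itself is purely hypothetical, just to show how an engine could whitelist a known-good driver range:

```python
def parse_ver(s):
    # "16.4.2" -> (16, 4, 2), so versions compare numerically, not as strings
    return tuple(int(p) for p in s.split("."))

def feature_enabled(driver, lo="16.3.1", hi="16.4.2"):
    # Hypothetical gate: only turn the feature on when the reported
    # driver version falls inside a known-good half-open range [lo, hi).
    return parse_ver(lo) <= parse_ver(driver) < parse_ver(hi)

print(feature_enabled("16.3.1"))  # inside the range
print(feature_enabled("16.9.2"))  # outside the range
```

If a game shipped with a check like this, the feature would look "removed" on every newer driver even if the driver itself still supported it.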

Either way, I think more testing is in order to determine whether AMD actually gimped their drivers or not, or if AMD gimped async compute like they gimped HDMI. ;)

Hence why I said "he believes". My point is that many seem to be once again jumping to conclusions: some believing it could be the drivers, some thinking not. Your statement earlier led me to believe you thought it was just more anti-AMD FUD (you may be right, of course), however the only actual tangible information we appear to have indicates otherwise. Now, that evidence may be inaccurate, but it seems to have been dismissed by some already without consideration, so when I read the thread all I see is those defending AMD and those not. Either way, it's unbalanced.
 
I spent all night installing every driver between 16.3.1 and 16.9.2. The break point is driver 16.4.2 (released in April); after this driver, no more Async Compute on GCN 1.0. So Nixxes was aware that Async Compute was not active on GCN 1.0: they released the Async Compute patch for Rise of the Tomb Raider in July, specifying that only GCN 1.1 and later can take advantage of Async Compute.
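Installing every driver in sequence works, but a binary search over the release list reaches the break point in far fewer installs. A sketch, where the driver list is illustrative (not the complete set of 2016 releases) and works() stands in for actually installing a driver and running an async test, hard-coded here to the 16.4.2 break point reported above:

```python
def ver(s):
    # Compare versions numerically rather than as strings.
    return tuple(int(p) for p in s.split("."))

# Illustrative subset of the release history, oldest to newest.
drivers = ["16.3.1", "16.3.2", "16.4.1", "16.4.2", "16.5.2", "16.9.2"]

def async_works(driver):
    # Stand-in for "install this driver and run the async test";
    # hard-coded to the finding that 16.4.2 is where it broke.
    return ver(driver) < ver("16.4.2")

def first_broken(versions, works):
    # Binary search; assumes works(versions[0]) is True
    # and works(versions[-1]) is False.
    lo, hi = 0, len(versions) - 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if works(versions[mid]):
            lo = mid
        else:
            hi = mid
    return versions[hi]  # first version where the feature is gone

print(first_broken(drivers, async_works))  # -> 16.4.2
```

With a real driver archive this turns roughly a dozen installs into three or four.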

I received a good amount of results from users all around the world, and their results just confirmed my findings. Thank you all.

All credit to you for actually making the effort to test multiple drivers.
 
Hence why I said "he believes". My point is that many seem to be once again jumping to conclusions: some believing it could be the drivers, some thinking not. Your statement earlier led me to believe you thought it was just more anti-AMD FUD (you may be right, of course), however the only actual tangible information we appear to have indicates otherwise. Now, that evidence may be inaccurate, but it seems to have been dismissed by some already without consideration, so when I read the thread all I see is those defending AMD and those not. Either way, it's unbalanced.
Sure, this problem is a little harder to gauge than the HDMI one, which was obviously misrepresented. It's very possible that the drivers were gimped, but I think it's also possible that AMD could have done some async compute magic in the driver outside of the application explicitly using it (hence why the non-async compute score on the newer driver is halfway in between the non-async and async results on the old driver). So is it really not using async compute, or is it using it differently? We don't know. I do find it interesting, though, that on the newer driver the scores don't change when enabling/disabling async compute, but the results are (in general) higher than the non-async results on older drivers.

Either way, even if something did happen, GCN 1.0 parts are starting to get to the age that many VLIW parts were at when AMD decided to ditch support. It wouldn't surprise me if they actually did remove support to further optimize for GCN 1.1+, but even with a claim like that, we still don't know.

Edit: The only reason the title is okay is because there is a question mark at the end. :p
 
That shows the gains from async compute were already very minor in the first place. The same test would need to be run multiple times to see how close the results are to each other, to determine whether (in this case) it actually gained anything or the numbers were just a little different. If the performance loss is 6% and the margin of error is something like 3-5%, then you're not really showing much of a difference between the runs.

Something other than Ashes of the Singularity probably should be used to confirm this. We're putting a lot of faith in a single application to make a very broad claim if we take this at face value.
 
That shows the gains from async compute were already very minor in the first place. The same test would need to be run multiple times to see how close the results are to each other, to determine whether (in this case) it actually gained anything or the numbers were just a little different. If the performance loss is 6% and the margin of error is something like 3-5%, then you're not really showing much of a difference between the runs.

Something other than Ashes of the Singularity probably should be used to confirm this. We're putting a lot of faith in a single application to make a very broad claim if we take this at face value.
Margin of error is between 0.5% and 2% max.

The gain from Async Compute is between 7% and 10% on the R9 270X, and between 5% and 7% on the R9 280X (maybe more with the same drivers as the R9 270X).

The Async Compute implementation in Ashes is very light.

Sure, not that much (http://images.anandtech.com/doci/9124/Async_Perf.jpg), but higher than that.
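One way to make the noise-versus-gain argument concrete: average repeated runs and compare the mean gain against the run-to-run spread. The FPS numbers below are made up purely for illustration; only the shape of the comparison matters:

```python
from statistics import mean

def pct_gain(baseline_runs, improved_runs):
    # Percentage improvement of the mean of one set of runs over another.
    b, a = mean(baseline_runs), mean(improved_runs)
    return (a - b) / b * 100

def spread_pct(runs):
    # Run-to-run spread as a percentage of the mean: a crude noise floor.
    return (max(runs) - min(runs)) / mean(runs) * 100

# Hypothetical FPS numbers, not real benchmark data:
async_off = [40.1, 40.5, 39.9]
async_on  = [43.2, 43.0, 43.5]

gain = pct_gain(async_off, async_on)
noise = max(spread_pct(async_off), spread_pct(async_on))

# The gain only means something if it clearly exceeds the noise floor.
print(round(gain, 1), round(noise, 1), gain > noise)
```

With a ~7.6% gain against a ~1.5% spread, as in this made-up data, the difference would be real; a 6% gain against a 3-5% spread, as questioned above, would be much shakier.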
 
http://imgur.com/Xr0Jhjo

AMD presentation from June 2016; Async Compute was retired on GCN 1.0 in April.
I'm still not sure where you're pulling the proof that they retired Async Compute from that image. All I see from it is that they introduced async compute in GCN 1.0 and that it was going to continue to be updated as time progressed, and the same exists in the same image for GCN 1.1. Nothing here says anything about its retirement or about dropping support for anything, only that certain features were going to be updated over time. You really need to stop pulling assumptions out of thin air.
 
I'm still not sure where you're pulling the proof that they retired Async Compute from that image. All I see from it is that they introduced async compute in GCN 1.0 and that it was going to continue to be updated as time progressed, and the same exists in the same image for GCN 1.1. Nothing here says anything about its retirement or about dropping support for anything, only that certain features were going to be updated over time. You really need to stop pulling assumptions out of thin air.
Did you miss the part where Kwee tested every driver since March and found the support is not there past April?
 
Did you miss the part where Kwee tested every driver since March and found the support is not there past April?
No, I'm just not sure how the image he posted ties into all of this. Once again, how about testing with something other than Ashes of the Singularity? Once again, version ranges can have a lot to do with what a game engine thinks the GPU is capable of. And how about the improved performance without async compute enabled? As I said before, it's possible that the driver could be automagically using it under the hood when it can, as opposed to it just being able to be turned on. Simply put, there are too many variables to rule out the application or changes in how the driver operates.
 
No, I'm just not sure how the image he posted ties into all of this. Once again, how about testing with something other than Ashes of the Singularity? Once again, version ranges can have a lot to do with what a game engine thinks the GPU is capable of. And how about the improved performance without async compute enabled? As I said before, it's possible that the driver could be automagically using it under the hood when it can, as opposed to it just being able to be turned on. Simply put, there are too many variables to rule out the application or changes in how the driver operates.
How about staying on topic? When Sony removed a feature from the PS4 (Linux support), they caught a lot of flak for it. I think it turned into a class action or something.
What AMD seems to have done here is awfully similar. No one can sue, though, since I think async was not an advertised feature; it was enabled down the road. As for "they improved performance to make up for it", that is what Nvidia has done too, yet people still burn them at the stake because they don't have "proper async".

Personally, I couldn't care less if AMD dropped async support. Yet I can't help noticing that they drop everything they can, as fast as they can (just try to see which hardware is supported by which driver and with which kernel under Linux). They're short on resources, and this trend has me worried.
 
No, I'm just not sure how the image he posted ties into all of this. Once again, how about testing with something other than Ashes of the Singularity? Once again, version ranges can have a lot to do with what a game engine thinks the GPU is capable of. And how about the improved performance without async compute enabled? As I said before, it's possible that the driver could be automagically using it under the hood when it can, as opposed to it just being able to be turned on. Simply put, there are too many variables to rule out the application or changes in how the driver operates.

The first tool I used to determine whether Async Compute worked or not wasn't Ashes of the Singularity; it was a tool developed just for testing Async Compute. Only after that did I test Ashes of the Singularity, which showed a performance regression. Many tools were tested after that, like D3D12NBodyGravity.
 
The first tool I used to determine whether Async Compute worked or not wasn't Ashes of the Singularity; it was a tool developed just for testing Async Compute. Only after that did I test Ashes of the Singularity, which showed a performance regression. Many tools were tested after that, like D3D12NBodyGravity.
Don't bother. You literally drew him a picture and he still (pretends he) doesn't get it.
 
The first tool I used to determine whether Async Compute worked or not wasn't Ashes of the Singularity; it was a tool developed just for testing Async Compute. Only after that did I test Ashes of the Singularity, which showed a performance regression. Many tools were tested after that, like D3D12NBodyGravity.
If that's the case, why aren't we seeing the results of those as well, to make this more definitive rather than treating it as speculation?
Don't bother. You literally drew him a picture and he still (pretends he) doesn't get it.
A picture showing that async compute will get updates over time and says nothing about retiring it? Okay, buddy.
 
If that's the case, why aren't we seeing the results of those as well, to make this more definitive rather than treating it as speculation?

A picture showing that async compute will get updates over time and says nothing about retiring it? Okay, buddy.

Just click on the source, mate; I'm not going to take your hand and show you what's going on. If you don't want to know, then be my guest. http://img4.hostingpics.net/pics/489141lestroissinges3.jpg
 
Just click on the source, mate; I'm not going to take your hand and show you what's going on. If you don't want to know, then be my guest. http://img4.hostingpics.net/pics/489141lestroissinges3.jpg

Hi Kwee,
I think we should all thank you for your tests.

I'm very disappointed to see AMD pulling such a bad trick. If the new driver simply isn't optimized for older GCN 1.0 cards, I can understand. But the functionality was already there, and now it's disabled by the new driver? That's a whole other story.
Nvidia used to do that; I was disappointed and came to AMD. Now this has happened, so next time I buy a new video card I should think again. Who knows, maybe one day AMD will secretly disable another function on another card, because they think it is "old"?
 
No, I'm just not sure how the image he posted ties into all of this. Once again, how about testing with something other than Ashes of the Singularity? Once again, version ranges can have a lot to do with what a game engine thinks the GPU is capable of. And how about the improved performance without async compute enabled? As I said before, it's possible that the driver could be automagically using it under the hood when it can, as opposed to it just being able to be turned on. Simply put, there are too many variables to rule out the application or changes in how the driver operates.

Is there another way to test, other than installing the whole of Ashes of the Singularity, to quickly find out whether a new driver version fixed async or not?
 