Tuesday, March 6th 2018

AMD Project ReSX is an eSports Gamer Outreach, Making Radeon More Responsive

AMD's late-Monday release of Radeon Software Adrenalin 18.3.1 includes a big change not chronicled in its change-log: the first implementation of ReSX. Short for "Radeon eSports Experience," ReSX is the code-name of a major effort within the Radeon Software team to address some of the fundamental complaints competitive eSports gamers have had with Radeon GPUs - click-to-response times and frame-times. As an eSports gamer chasing a million-dollar prize-pool, you want your graphics hardware to deliver the lowest frame-times (the most fluid output), the highest frame-rates, and of course the lowest click-to-response times (the infinitesimally short time it takes for a click of your mouse to register as an action, be sent to the game-server, and be rendered on-screen).

AMD stated that it has approached these problems on two fronts: by working with the developers of the biggest eSports titles to optimize their game-engines for Radeon, and by making under-the-hood changes to Radeon Software. The company is announcing not just marginally higher frame-rates in certain eSports titles, but also significant improvements to frame-times (99th percentile) and lower click-to-response times. According to the performance numbers put out by AMD, while these improvements may not amount to double-digit percentage differences, they could still translate into a vastly improved gaming experience.
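For context on the "99th percentile" frame-time metric AMD cites: reviewers summarize stutter by sorting a run's frame times and reading the value below which 99% of frames fall, so occasional spikes show up even when the average looks smooth. A minimal sketch of the idea, using invented sample data and the simple nearest-rank method (not AMD's exact methodology):

```python
# Illustrative sketch: how 99th-percentile frame time exposes stutter
# that a plain average hides. Sample data below is invented.

def percentile(samples, p):
    """Nearest-rank percentile of a list of frame times (ms)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Mostly a steady 60 FPS (16.7 ms), plus a few frame-time spikes.
frame_times_ms = [16.7] * 97 + [33.4, 40.0, 50.0]

avg = sum(frame_times_ms) / len(frame_times_ms)
p99 = percentile(frame_times_ms, 99)

print(f"average frame time: {avg:.1f} ms (~{1000 / avg:.0f} FPS)")
print(f"99th-percentile frame time: {p99:.1f} ms")
```

Here the average still looks close to 60 FPS, but the 99th-percentile figure reveals the worst frames a player actually perceives as stutter, which is why AMD highlights it alongside raw frame-rate.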

30 Comments on AMD Project ReSX is an eSports Gamer Outreach, Making Radeon More Responsive

#1
dj-electric
AMD needs to stop giving new versions of its software new names.

This is ridiculous. Some small but respectable changes, and that's it.
Posted on Reply
#2
theoneandonlymrk
dj-electric said:
AMD needs to stop giving new versions of its software new names.

This is ridiculous. Some small but respectable changes, and that's it.
The name change is a yearly occurrence, not more often??

As for minor versions, if you just bought Vermintide like me, you'd be fine with it, I'd guess.

What's ridiculous is people bitching about tangential points on any positive PR piece AMD puts out.

Yeh and fix yo driver's And ???
Posted on Reply
#3
Readlight
The keyboard has a much faster response than the mouse. Do you think they're also logging how everything works in the PC?
It's almost impossible to find a game that I like for a long time and that works perfectly. Maybe I need to switch to Intel.
Posted on Reply
#4
RejZoR
I miss the days when it was just called ATi/AMD Catalyst, and everyone knew it was the driver for RADEON cards. Now we got ReLive and then Adrenalin, where you still have a component called ReLive to record video, and now we got ReSX inside all this. What's next? AMD doesn't seem to understand branding. It's why ADIDAS, Nike or Coca Cola are so recognizable. They don't change or add anything. They made a brand and they are sticking with it. I really don't understand what was wrong with AMD Catalyst. It was a really cool name and even had a meaning, as a "catalyst" is an accelerator that gets the reaction going, aka making Radeon hardware function. It was cool and made sense!
Posted on Reply
#5
bug
It's the first time I've heard about this group, and all they have for us are 2, 4 and 6% increases. Impressive :wtf:
Posted on Reply
#6
Aldain
dj-electric said:
AMD needs to stop giving new versions of its software new names.

This is ridiculous. Some small but respectable changes, and that's it.
This is not a new software version name, it is the name of the project. How about you read first, then comment.
Posted on Reply
#7
dj-electric
Aldain said:
This is not a new software version name, it is the name of the project. How about you read first, then comment.
I read first, of course. It's part of my own project, Project G.Y.S.T., where I tell companies in public to get their $*(& together and stop doing PR stunts to get some attention for what is considered a normal day in the office.
Posted on Reply
#8
cucker tarlson
How about working on that friggin dx11 cpu overhead first ?
Posted on Reply
#9
Casecutter
I say it's good to see them still spending some of that "mining market share" on gaming!
Posted on Reply
#10
kruk
cucker tarlson said:
How about working on that friggin dx11 cpu overhead first ?
How about game developers doing some friggin optimizations, or switching to another, better API?

There is simply no excuse for sh*tty programming, because we have several visually stunning games (DOOM, BF1, etc.) that run excellently on both AMD and nVidia ...
Posted on Reply
#11
bug
kruk said:
How about developers doing some friggin optimizations, or switching to another, better API?

There is simply no excuse for sh*tty programming, because we have several visually stunning games (DOOM, BF1, etc.) that run excellently on both AMD and nVidia ...
AMD's drivers have historically been more CPU intensive than Nvidia's when it comes to DX (and OpenGL). How can a developer work around that? What's the job of a driver programmer if not squeezing everything that can be squeezed from the hardware?
Posted on Reply
#13
HD64G
I cannot get why some people here bash AMD for making gaming better in a game genre so popular. Some do the same just because AMD GPUs are better in compute performance and are preferred by miners. But AMD GPUs have been best in compute power thanks to the GCN arch since the start (7970), before mining became a thing. I guess haters gonna hate... :shadedshu:
Posted on Reply
#14
bug
HD64G said:
I cannot get why some people here bash AMD for making gaming better on a game genre so popular.
Better? All they have to show for their efforts is 11% in one game, in one specific configuration. I don't know where you've been living, but in real life this is called... wait for it... a regular driver update.
Posted on Reply
#15
HD64G
bug said:
Better? All they have to show for their efforts is 11% in one game, in one specific configuration. I don't know where you've been living, but in real life this is called... wait for it... a regular driver update.
An increase in performance in any game is considered better than no increase, even if it is less than 10%. Where is the bad thing here that deserves criticism?
Posted on Reply
#16
renz496
HD64G said:
I cannot get why some people here bash AMD for making gaming better in a game genre so popular. Some do the same just because AMD GPUs are better in compute performance and are preferred by miners. But AMD GPUs have been best in compute power thanks to the GCN arch since the start (7970), before mining became a thing. I guess haters gonna hate... :shadedshu:
If AMD cards really were the best in compute, then their FirePros would be used more in the professional space. AMD GPUs were better at mining not because they have better compute performance than Nvidia, but because more mining algos were built to take advantage of AMD's architecture; the same can also be done on Nvidia hardware. I don't know why some people look at AMD's superior performance at mining (and mining only) and then assume AMD is better at compute compared to Nvidia. In reality, while AMD GPUs theoretically have much higher peak compute performance than Nvidia's, most often it is hard for them to properly utilize all that raw power. There were two supercomputers ranked almost next to each other on the TOP500 list, both using accelerators. The one using AMD accelerators (Tahiti-based) had much higher peak performance than the one using Nvidia Teslas, but the Nvidia-based machine ended up ranked much better because it could utilize its raw power better, delivering much faster real-world performance.
Posted on Reply
#17
HD64G
renz496 said:
If AMD cards really were the best in compute, then their FirePros would be used more in the professional space. AMD GPUs were better at mining not because they have better compute performance than Nvidia, but because more mining algos were built to take advantage of AMD's architecture; the same can also be done on Nvidia hardware. I don't know why some people look at AMD's superior performance at mining (and mining only) and then assume AMD is better at compute compared to Nvidia. In reality, while AMD GPUs theoretically have much higher peak compute performance than Nvidia's, most often it is hard for them to properly utilize all that raw power. There were two supercomputers ranked almost next to each other on the TOP500 list, both using accelerators. The one using AMD accelerators (Tahiti-based) had much higher peak performance than the one using Nvidia Teslas, but the Nvidia-based machine ended up ranked much better because it could utilize its raw power better, delivering much faster real-world performance.
You are clearly confusing hw and sw. AMD is usually better in raw compute power due to its arch, and nVidia is better in specific tasks due to their focus on sw. The 1080 Ti is the exception.
Posted on Reply
#18
bug
HD64G said:
An increase in performance in any game is considered better than no increase, even if it is less than 10%. Where is the bad thing here that deserves criticism?
The part that deserves criticism is taking an ordinary driver update and presenting it like the revolutionary work of an elite team.
We've had average increases of 5% for years, decades even, without making much fuss about it. (Remember how, each time someone's favourite GPU maker brought to market a video card that didn't compare favourably to the competition, the "immature drivers" card was always played?)
Posted on Reply
#19
renz496
HD64G said:
You are clearly confusing hw and sw. AMD is usually better in raw compute power due to its arch, and nVidia is better in specific tasks due to their focus on sw. The 1080 Ti is the exception.
And they build their hardware in a way that makes it more difficult for the software to utilize the raw performance. Just look at async compute, for example: they need specific support from the API before they can use it. Nvidia's primary improvement in Maxwell (over Kepler) was also increasing GPU utilization, but they did it in a way more transparent to the game, hence regardless of the API used by the game, it can benefit from the improvements made in Maxwell.

Also, most of the complaints here are not so much about the minimal performance increase but more about AMD coming out with fancy names for everything they do. Nvidia also did something similar before (reducing latency in esports games) when they launched the GTX 950, but they never gave it any fancy name. It was just a standard driver improvement.
Posted on Reply
#20
theoneandonlymrk
renz496 said:
And they build their hardware in a way that makes it more difficult for the software to utilize the raw performance. Just look at async compute, for example: they need specific support from the API before they can use it. Nvidia's primary improvement in Maxwell (over Kepler) was also increasing GPU utilization, but they did it in a way more transparent to the game, hence regardless of the API used by the game, it can benefit from the improvements made in Maxwell.

Also, most of the complaints here are not so much about the minimal performance increase but more about AMD coming out with fancy names for everything they do. Nvidia also did something similar before (reducing latency in esports games) when they launched the GTX 950, but they never gave it any fancy name. It was just a standard driver improvement.
What fancy name, 18.3.1??

This is an internal work-effort code name; even in the PR blurb it is not a new driver name. Calm down.

Also, it's Nvidia that has the preference for special coding a la CUDA, and some scenarios favour Nvidia because Nvidia works very hard on the software.

And people call Nvidia out on compute because they culled some of it entirely out of the low to mid range, and also now charge a premium for its use, or in some cases more like a triple premium, a la Quadro.
Posted on Reply
#21
bug
renz496 said:
And they build their hardware in a way that makes it more difficult for the software to utilize the raw performance. Just look at async compute, for example: they need specific support from the API before they can use it. Nvidia's primary improvement in Maxwell (over Kepler) was also increasing GPU utilization, but they did it in a way more transparent to the game, hence regardless of the API used by the game, it can benefit from the improvements made in Maxwell.

Also, most of the complaints here are not so much about the minimal performance increase but more about AMD coming out with fancy names for everything they do. Nvidia also did something similar before (reducing latency in esports games) when they launched the GTX 950, but they never gave it any fancy name. It was just a standard driver improvement.
Tbh what you're describing is an old problem. Intel had the same problem with Itanium. Programming a RISC CPU used to be way harder than programming a CISC CPU.
AMD having the same problem doesn't mean they don't know how to build their hardware. It's just that things don't always work out in practice as well as they do on paper. And on top of that, you're supposed to be designing for years ahead.
Still, their driver overhead problem is old and they should have addressed it by now, imho.
Posted on Reply
#22
renz496
theoneandonlymrk said:
What fancy name, 18.3.1??

This is an internal work-effort code name; even in the PR blurb it is not a new driver name. Calm down.

Also, it's Nvidia that has the preference for special coding a la CUDA, and some scenarios favour Nvidia because Nvidia works very hard on the software.

And people call Nvidia out on compute because they culled some of it entirely out of the low to mid range, and also now charge a premium for its use, or in some cases more like a triple premium, a la Quadro.
"Project" is the fancy name. Regarding compute, honestly I find it really funny: when Nvidia included lots of compute stuff with Fermi, people said Nvidia was selling useless features to gamers. Then when AMD followed suit and Nvidia started separating compute features from consumer cards, suddenly compute became important for a gaming card.

bug said:
Tbh what you're describing is an old problem. Intel had the same problem with Itanium. Programming a RISC CPU used to be way harder than programming a CISC CPU.
AMD having the same problem doesn't mean they don't know how to build their hardware. It's just that things don't always work out in practice as well as they do on paper. And on top of that, you're supposed to be designing for years ahead.
Still, their driver overhead problem is old and they should have addressed it by now, imho.
Not saying that AMD does not know how to make their hardware, but they might need to rethink the way they build it. Just because there is DX12 now doesn't mean DX11 is useless. Just look at Vega: when a game is specifically built to take advantage of its features, it can be very fast. But the reality is gamers do not play new games only. New features are nice, but they also need to think about existing software.
Posted on Reply
#23
theoneandonlymrk
renz496 said:
"Project" is the fancy name. Regarding compute, honestly I find it really funny: when Nvidia included lots of compute stuff with Fermi, people said Nvidia was selling useless features to gamers. Then when AMD followed suit and Nvidia started separating compute features from consumer cards, suddenly compute became important for a gaming card.



Not saying that AMD does not know how to make their hardware, but they might need to rethink the way they build it. Just because there is DX12 now doesn't mean DX11 is useless. Just look at Vega: when a game is specifically built to take advantage of its features, it can be very fast. But the reality is gamers do not play new games only. New features are nice, but they also need to think about existing software.
You getting on topic at some point??

You brought up compute, not I; I just corrected some of your misdirection.

I find it really funny when trolls work tangential arguments.
Posted on Reply
#24
1stn00b
18.3.1 = Year.Month.Number

It's the 1st driver of March 2018 :> the next ones will be 18.3.2, 18.3.3, or even 18.4.1 if launched next month ;>
They are comparing to 17.12.1, the 1st driver of December 2017.
Posted on Reply
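As an aside, the Year.Month.Number scheme 1stn00b describes maps to a trivial parser. A sketch for illustration only (`parse_driver_version` is a hypothetical helper, not a real AMD tool):

```python
# Hypothetical parser for Radeon Software's Year.Month.Number scheme,
# e.g. "18.3.1" -> the first release of March 2018.
def parse_driver_version(version: str) -> dict:
    year, month, number = (int(part) for part in version.split("."))
    return {"year": 2000 + year, "month": month, "release": number}

print(parse_driver_version("18.3.1"))   # the Adrenalin release in the article
print(parse_driver_version("17.12.1"))  # the baseline AMD compares against
```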
#25
kruk
bug said:
AMD's drivers have historically been more CPU intensive than Nvidia's when it comes to DX (and OpenGL). How can a developer work around that? What's the job of a driver programmer if not squeezing everything that can be squeezed from the hardware?
My point is that some devs obviously can make games run faster on DX11 than others (just look at the benchmarks here on TPU from around 2016). If you can't optimize like they do in DX11, why not go Vulkan/DX12? Several engines already support it. Why should AMD resort to driver hacks if there is a much better native alternative for both the red and green teams?
Posted on Reply