Tuesday, February 28th 2017

NVIDIA Announces DX12 Gameworks Support

NVIDIA has announced DX12 support for its proprietary GameWorks SDK, including some new exclusive effects such as "Flex" and "Flow." Most interestingly, NVIDIA claims that simulation effects get a massive boost from Async Compute, nearly doubling performance on a GTX 1080 for that style of effect. Async Compute is, of course, a DX12-exclusive technology. Performance gains in an area where NVIDIA is normally perceived not to do so well are encouraging, even if only within its exclusive ecosystem. Whether GCN-powered cards will see similar gains when running GameWorks titles remains to be seen.
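As a rough illustration of what Async Compute means at the API level, here is a minimal Direct3D 12 sketch (not actual GameWorks code; all names are illustrative): simulation work is recorded on a dedicated compute queue, which the GPU may schedule alongside the regular graphics queue.

```cpp
// Minimal sketch: Direct3D 12 exposes async compute by letting an application
// create a COMPUTE command queue in addition to the DIRECT (graphics) queue.
// Work submitted to the compute queue may overlap graphics work on the GPU.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only; may run async
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
    // Error handling (HRESULT checks) omitted for brevity.
}
```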

31 Comments on NVIDIA Announces DX12 Gameworks Support

#1
Camm
I desperately hope that developers avoid this shit this generation. No gamer wants this crap, and most examples usually end up having tanky performance on all but the latest high ends.
#2
EzioAs
Camm: I desperately hope that developers avoid this shit this generation. No gamer wants this crap, and most examples usually end up having tanky performance on all but the latest high ends.
:confused: o_O :wtf:
#3
sweet
TIL "GameWorks" is called an ecosystem. This is fucking shame for the PC master race.
#4
thesmokingman
That's the whole point: tank your system so you have to buy the new Ti. On a serious note, wasn't one of the points of DirectX 12 to free ourselves from crap like GameWorks?
#5
evernessince
Camm: I desperately hope that developers avoid this shit this generation. No gamer wants this crap, and most examples usually end up having tanky performance on all but the latest high ends.
Couldn't agree more; it didn't earn the title Gimpworks for nothing. It's essentially a black box that doesn't allow AMD to optimize the code. It tanks performance on last-gen Nvidia and all AMD cards.

On the other hand you have AMD technology like TressFX that works well on both AMD and Nvidia.

GameWorks is equivalent to that one time Intel "asked" software devs to compile code using a specially provided compiler that gimped AMD CPUs. The only difference is that Nvidia GameWorks is long-running, and Nvidia fanboys don't seem to care about the damage it is doing to the industry.

"Whether GCN powered cards will see similar gains when running GameWorks titles remains to be seen"

Let me answer that for you: no. I can say with 100% confidence that Nvidia has never and will never let AMD get a boost from GameWorks.
#6
evernessince
thesmokingman: That's the whole point: tank your system so you have to buy the new Ti. On a serious note, wasn't one of the points of DirectX 12 to free ourselves from crap like GameWorks?
It is, but Nvidia wants to make its own version of everything. If it includes DX12 support in GameWorks, it can send its engineers to game devs to implement Nvidia-favoring DX12 code into the game. When Nvidia does this, the game dev has to sign a contract not to share Nvidia proprietary code with anyone else, meaning AMD is unable to optimize anything Nvidia engineers do. Often this can mean AMD cannot optimize very important parts of the code. Thus Nvidia can cover its hardware's failures by essentially paying game devs off with free engineers.
#7
R-T-B
sweet: TIL "GameWorks" is called an ecosystem. This is a fucking shame for the PC master race.
It is an ecosystem of sorts. A commercial, proprietary one, yes (and in my personal opinion a buggy one too), but a system of products all the same.
#8
AndreiD
I'm surprised by the sheer amount of dumb comments on here; it's like people just fell for whatever turds and speculation the rumor mills throw out.
The reason why some GameWorks features impacted other vendors' hardware more than Nvidia's is that features like HairWorks, FurWorks, Volumetric Lighting, etc. make heavy use of tessellation, which is something Nvidia GPUs excel at.
evernessince: Let me answer that for you: no. I can say with 100% confidence that Nvidia has never and will never let AMD get a boost from GameWorks.
HBAO+ having a lower performance impact on AMD's Fury X than it does on a 980 Ti is one example.
#9
RejZoR
This is why I like AMD more. They have their own stuff as well, but it's always open, meaning anyone can fiddle with it and optimize it.
#10
FordGT90Concept
"I go fast!1!11!1!"
R-T-B: Whether GCN powered cards will see similar gains when running GameWorks titles remains to be seen.
Psssh! More like CPU. PhysX (a component of GameWorks) won't run at all on GCN.
RejZoR: This is why I like AMD more. They have their own stuff as well, but it's always open, meaning anyone can fiddle with it and optimize it.
Especially NVIDIA and Intel. But NVIDIA, you know, has to sell you that bridge to nowhere.


Maybe NVIDIA will do an about-face because it has to pony up to get any developers to use it the way they want, but I certainly won't be holding my breath. It's at least plausible, because Direct3D 12 is a standard API, so it may use standard compute calls to operate on the GPU instead of CUDA-specific code. I mean, they are touting that Direct3D 12 did something right, aren't they? They never claimed a 200% increase in compute + graphics before, did they? Maybe that's incentive enough to let Intel and AMD accelerate it.
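To illustrate the "standard compute calls" point, here is a hedged sketch (names and group size are assumptions, not Flex/Flow code) of how a Direct3D 12 compute dispatch works. Because the shader is compiled HLSL bytecode rather than a CUDA kernel, any vendor's driver can execute it:

```cpp
// Hypothetical sketch: recording a simulation step as a standard D3D12
// compute dispatch. simulationPSO wraps a compiled HLSL compute shader.
#include <d3d12.h>

void RecordSimulationStep(ID3D12GraphicsCommandList* cl,
                          ID3D12PipelineState* simulationPSO,
                          ID3D12RootSignature* rootSig,
                          UINT particleCount)
{
    cl->SetPipelineState(simulationPSO);     // vendor-agnostic compute pipeline
    cl->SetComputeRootSignature(rootSig);    // binds buffers/constants
    // One thread per particle; 256 threads per group is an assumed shader value.
    cl->Dispatch((particleCount + 255) / 256, 1, 1);
}
```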
#11
evernessince
AndreiD: I'm surprised by the sheer amount of dumb comments on here; it's like people just fell for whatever turds and speculation the rumor mills throw out.
The reason why some GameWorks features impacted other vendors' hardware more than Nvidia's is that features like HairWorks, FurWorks, Volumetric Lighting, etc. make heavy use of tessellation, which is something Nvidia GPUs excel at.
HBAO+ having a lower performance impact on AMD's Fury X than it does on a 980 Ti is one example.
There's nothing Nvidia can do about AMD being better at HBAO+; AMD cards are just better at the calculations required for most methods of SSAO, including Nvidia's own. Vega is likely to take that up another notch.
#12
W1zzard
RejZoR: This is why I like AMD more. They have their own stuff as well, but it's always open, meaning anyone can fiddle with it and optimize it.
Lots of NVIDIA GameWorks stuff has been open-sourced for this GDC.

#13
FordGT90Concept
"I go fast!1!11!1!"
About-face it is then, good. :D

I hope some games (like Witcher 3) and game engines (UE3/UE4) get retroactively patched for GCN support.
#14
Prima.Vera
evernessince: There's nothing Nvidia can do about AMD being better at HBAO+; AMD cards are just better at the calculations required for most methods of SSAO, including Nvidia's own. Vega is likely to take that up another notch.
Tbh, I could never tell which one is better, HBAO+ or SSAO. They look different in every single game, and I cannot tell which one is more realistic. HBAO also performs a little worse.
#15
dogen
evernessince: Couldn't agree more; it didn't earn the title Gimpworks for nothing. It's essentially a black box that doesn't allow AMD to optimize the code. It tanks performance on last-gen Nvidia and all AMD cards.
evernessince: There's nothing Nvidia can do about AMD being better at HBAO+; AMD cards are just better at the calculations required for most methods of SSAO, including Nvidia's own. Vega is likely to take that up another notch.
So much for a "black box" lol.

If AMD had no power over what runs on their cards, Nvidia could easily check the GPU vendor and run whatever inefficient SSAO method they could come up with (see the sketch below).

Luckily, your story is 100% wrong and ignores the entire purpose of a GPU driver.
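For what it's worth, a vendor check like the one described is trivial at the API level. A minimal sketch using the standard DXGI adapter descriptor (the function name is illustrative; the PCI vendor IDs are real):

```cpp
// Sketch: identifying the GPU vendor from the DXGI adapter descriptor.
// Vendor IDs are PCI-SIG assigned: 0x1002 = AMD, 0x10DE = NVIDIA, 0x8086 = Intel.
#include <dxgi.h>

bool IsAmdAdapter(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);
    return desc.VendorId == 0x1002;
}
```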
#16
SetsunaFZero
Does PhysX 3.x make use of the CPU's SSE/SSE2 instruction set now? Or is NV still crippling PhysX CPU performance with the stone-age x87 instruction set?
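For readers unfamiliar with the distinction, a rough illustration (not PhysX code) of why SSE matters: x87 processes one scalar float at a time, while a single SSE instruction operates on four packed floats.

```cpp
// Illustrative only: scalar math vs. SSE packed math on arrays of floats.
#include <xmmintrin.h>

void add_scalar(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];   // one float per iteration (x87/scalar code path)
}

void add_sse(const float* a, const float* b, float* out, int n)
{
    // Assumes n is a multiple of 4 for simplicity.
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);            // load 4 floats
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb)); // 4 additions per instruction
    }
}
```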
#17
the54thvoid
Did nobody read yesterday's DX12 post? It requires more coding than DX11 and still needs to be tuned for each vendor.
The level of instant Nvidia hate is amusing.

As W1zzard has said, GW has been opened up to a degree, but even before that they released SDKs for their ecosystem.
AMD have played open source because of their position, not because they love everyone. I don't fully trust their partnership with Bethesda to fully optimise their games for AMD.
#18
RejZoR
What this means in a nutshell: we won't see games performing better with current PhysX; they'll just cram more objects into games, and we'll see ZERO gains while also not really seeing any visual improvement either. What difference does it make between 100 and 200 shards of shattered glass? Visually, not much. But calculation-wise, you're taxing it twice as much. The times of 4 shards of glass vs 20 are long gone. When numbers are this high, you don't have to go stupid on it just because you can; you should leave the gain for players to enjoy in terms of higher framerate. But silly me, always thinking backwards by removing polygons instead of adding them with tessellation, and wanting PhysX to perform better while looking the same instead of just cramming a gazillion of everything to make it more real than reality. Ugh...
#19
Camm
W1zzard: Lots of NVIDIA GameWorks stuff has been open-sourced for this GDC.

The last time I looked at this, the GitHub repo still held blobs/object code that, according to the license, you are not able to disassemble. If it's fully open now, I have no problem with random APIs for devs to use. Contrary to the above posts calling it Nvidia hatred, it's simply the broader point that GameWorks was incredibly detrimental to the PC ecosystem and games as a whole whilst it was in favour. I for one do not want to see a repeat.
#20
microsista
I want to see 1080 Ti OC 4K performance so bad.
#21
renz496
SetsunaFZero: Does PhysX 3.x make use of the CPU's SSE/SSE2 instruction set now? Or is NV still crippling PhysX CPU performance with the stone-age x87 instruction set?
PhysX 3 (CPU-based PhysX) is actually quite good, to the point that Havok had to make noise and remind everyone that they are still the best when it comes to third-party physics solutions for games. And the funny thing is, despite people always complaining about Nvidia's proprietary tendencies, PhysX right now is more open than Havok. It is still not open source, but you can look at the source code now without needing to pay Nvidia for access, unlike with Havok.
#22
cowie
You guys kidding me?
There has been shit-all, nada, nothing with new or even just better visuals in DX12.
Go get a console; you will love the performance and looks.
#23
medi01
Camm: I desperately hope that developers avoid this shit this generation
Couldn't agree more.
#24
GhostRyder
Camm: I desperately hope that developers avoid this shit this generation. No gamer wants this crap, and most examples usually end up having tanky performance on all but the latest high ends.
I agree; I've never been a big fan of GameWorks, and never will be.
dogen: If AMD had no power over what runs on their cards, Nvidia could easily check the GPU vendor and run whatever inefficient SSAO method they could come up with.
The difference is, if they did do that, people would find out and it would be a PR nightmare. At least this way they can keep up the argument "It's better because Nvidia is better" rather than "We're not even going to let you try".

Either way, this just means we're moving closer and closer to DX12 replacing DX11 as the main DirectX being used, which is also a good thing.
#25
FordGT90Concept
"I go fast!1!11!1!"
RejZoR: What this means in a nutshell: we won't see games performing better with current PhysX; they'll just cram more objects into games, and we'll see ZERO gains while also not really seeing any visual improvement either. What difference does it make between 100 and 200 shards of shattered glass? Visually, not much. But calculation-wise, you're taxing it twice as much. The times of 4 shards of glass vs 20 are long gone. When numbers are this high, you don't have to go stupid on it just because you can; you should leave the gain for players to enjoy in terms of higher framerate. But silly me, always thinking backwards by removing polygons instead of adding them with tessellation, and wanting PhysX to perform better while looking the same instead of just cramming a gazillion of everything to make it more real than reality. Ugh...
If NVIDIA did open-source it so AMD and Intel can GPU-accelerate it, then GameWorks could be used for important things in games, like destroying buildings, instead of just cosmetic things like shattering glass, litter flying around, fancy hair/fur, and realistic capes. Because GameWorks wasn't vendor-agnostic, developers could only use it for visuals.