Wednesday, March 23rd 2016

AMD Announces Exciting DirectX 12 Game Engine Developer Partnerships

AMD today once again took the pole position in the DirectX 12 era with an impressive roster of state-of-the-art DirectX 12 games and engines, each with extensive tuning for the Graphics Core Next (GCN) architecture at the heart of modern Radeon GPUs.

"DirectX 12 is poised to transform the world of PC gaming, and Radeon GPUs are central to the experience of developing and enjoying great content," said Roy Taylor, corporate vice president, Content and Alliances, AMD. "With a definitive range of industry partnerships for exhilarating content, plus an indisputable record of winning framerates, Radeon GPUs are an end-to-end solution for consumers who deserve the latest and greatest in DirectX 12 gaming."
"DirectX 12 is a game-changing low overhead API for both developers and gamers," said Bryan Langley, Principal Program Manager, Microsoft. "AMD is a key partner for Microsoft in driving adoption of DirectX 12 throughout the industry, and has established the GCN Architecture as a powerful force for gamers who want to get the most out of DirectX 12."

Optimized for AMD Radeon Graphics
  • Ashes of the Singularity by Stardock and Oxide Games
  • Total War: WARHAMMER by Creative Assembly
  • Battlezone VR by Rebellion
  • Deus Ex: Mankind Divided by Eidos-Montréal
  • Nitrous Engine by Oxide Games
Total War: WARHAMMER
A fantasy strategy game of legendary proportions, Total War: WARHAMMER combines an addictive turn-based campaign of epic empire-building with explosive, colossal, real-time battles, all set in the vivid and incredible world of Warhammer Fantasy Battles.
Sprawling battles with high unit counts are a perfect use case for the uniquely powerful GPU multi-threading capabilities offered by Radeon graphics and DirectX 12. Additional support for DirectX 12 asynchronous compute will also encourage lightning-fast AI decision making and low-latency panning of the battle map.

Battlezone VR
Designed for the next wave of virtual reality devices, Battlezone VR gives you unrivalled battlefield awareness, a monumental sense of scale and breathless combat intensity. Your instincts and senses respond to every threat on the battlefield as enemy swarms loom over you and super-heated projectiles whistle past your ears.
Rolling into battle, AMD and Rebellion are collaborating to ensure Radeon GPU owners will be particularly advantaged by low-latency DirectX 12 rendering that's crucial to a deeply gratifying VR experience.

Ashes of the Singularity
AMD is once again collaborating with Stardock, in association with Oxide, to bring gamers Ashes of the Singularity. This real-time strategy game, set in the far future, redefines the possibilities of RTS with the unbelievable scale provided by Oxide Games' groundbreaking Nitrous engine. The fruits of this collaboration have resulted in Ashes of the Singularity being the first game to release with DirectX 12 benchmarking capabilities.

Deus Ex: Mankind Divided
Deus Ex: Mankind Divided, the sequel to the critically acclaimed Deus Ex: Human Revolution, builds on the franchise's trademark choice-and-consequence, action-RPG gameplay to create a memorable and highly immersive experience. AMD and Eidos-Montréal have engaged in a long-term technical collaboration to build and optimize DirectX 12 support in the game's engine, including special support for GPUOpen features like PureHair (based on TressFX Hair) and Radeon-exclusive features like asynchronous compute.

Nitrous Engine
Radeon graphics customers the world over have benefitted from unmatched DirectX 12 performance and rendering technologies delivered in Ashes of the Singularity via the natively DirectX 12 Nitrous Engine. Most recently, Benchmark 2.0 was released with comprehensive support for DirectX 12 asynchronous compute, delivering unquestionably dominant performance from Radeon graphics.

With massive interplanetary warfare at our backs, Stardock, Oxide and AMD announced that the Nitrous Engine will continue to serve a roster of franchises in the years ahead. Starting with Star Control and a second unannounced space strategy title, Stardock, Oxide and AMD will continue to explore the outer limits of what can be done with highly-programmable GPUs.

Premier Rendering Efficiency with DirectX 12 Asynchronous Compute
Important PC gaming workloads like shadowing, lighting, artificial intelligence, physics and lens effects often require multiple stages of computation before determining what is rendered onto the screen by a GPU's graphics hardware.

In the past, these steps had to happen sequentially. Step by step, the graphics card would follow the API's process of rendering something from start to finish, and any delay in an early stage would send a ripple of delays through future stages. These delays in the pipeline are called "bubbles," and they represent a brief moment in time when some hardware in the GPU is paused to wait for instructions.

What sets Radeon GPUs apart from their competitors, however, is the Graphics Core Next architecture's ability to pull in useful compute work from the game engine to fill these bubbles. For example: if there's a bubble while rendering complex lighting, Radeon GPUs can fill the gap by computing the behavior of AI instead.

Radeon graphics cards don't need to follow the step-by-step process of the past or of their competitors; they can do this work together, or concurrently, to keep things moving.
Filling these bubbles improves GPU utilization, efficiency and performance, and reduces input latency for the user, by minimizing or eliminating the ripple of delays that could stall other graphics cards. Only Radeon graphics currently support this crucial capability in DirectX 12 and VR.
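To make the mechanism concrete, below is a minimal C++/Direct3D 12 sketch (not AMD's code; the device setup is assumed and the MakeQueue helper is illustrative) of how an engine creates a dedicated compute queue alongside its graphics queue, so that independent compute work can overlap with rendering:

```cpp
// Minimal D3D12 sketch: one DIRECT (graphics) queue plus one COMPUTE queue.
// Work submitted to the compute queue may execute concurrently with draws,
// letting the GPU fill rendering "bubbles" with useful compute work.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Illustrative helper; assumes 'device' is an already-initialized ID3D12Device.
ComPtr<ID3D12CommandQueue> MakeQueue(ID3D12Device* device,
                                     D3D12_COMMAND_LIST_TYPE type)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type  = type;   // DIRECT = graphics + compute, COMPUTE = compute only
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

// Usage sketch: record graphics and compute command lists separately and
// submit each to its own queue. Whether the two streams actually overlap
// is up to the hardware and driver; a fence (ID3D12Fence) synchronizes
// the point where the graphics pass consumes the compute results.
//   auto gfxQueue     = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_DIRECT);
//   auto computeQueue = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_COMPUTE);
```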

An Undeniable Trend
With five new DirectX 12 game and engine partnerships, unmatched DirectX 12 performance in every test thus far, and exclusive support for the radically powerful DirectX 12 asynchronous compute functionality, Radeon graphics and the GCN architecture have rapidly ascended to their position as the definitive DirectX 12 content creation and consumption platform.

This unquestionable leadership in the era of low-overhead APIs emerges from a calculated and virtuous cycle of distributing the GCN architecture throughout the development industry, then partnering with top game developers to design, deploy and master Mantle's programming model. Through the years that followed, open and transparent contribution of source code, documentation and API specifications ensured that AMD philosophies remained influential in landmark projects like DirectX 12.

40 Comments on AMD Announces Exciting DirectX 12 Game Engine Developer Partnerships

#1
Prima.Vera
Deus Ex is kinda the only game worth mentioning from that list....
Posted on Reply
#2
ZoneDymo
Prima.Vera: Deus Ex is kinda the only game worth mentioning from that list....
but what about BLOOD FOR THE BLOOD GOD, SKULLS FOR THE SKULLTHRONE!!!
Posted on Reply
#3
Prima.Vera
ZoneDymo: but what about BLOOD FOR THE BLOOD GOD, SKULLS FOR THE SKULLTHRONE!!!
Posted on Reply
#4
TheGuruStud
Wake me up when devs switch to Vulkan and drop DX completely.
Posted on Reply
#5
jigar2speed
TheGuruStud: Wake me up when devs switch to Vulkan and drop DX completely.
You will need a lot of sleeping pills to stay asleep that long.
Posted on Reply
#6
jigar2speed
Prima.Vera: Deus Ex is kinda the only game worth mentioning from that list....
Warhammer has a lot of following. I just came back from a Last Stand online session (from one of the several DLCs on Steam) that had 12K guys playing.
Posted on Reply
#7
Prima.Vera
TheGuruStud: Wake me up when devs switch to Vulkan and drop DX completely.
Posted on Reply
#8
RejZoR
Great, async compute for Deus Ex. Which means it'll run like shit on GTX 900 cards. Thanks NVIDIA for your "complete" DX12 support.
Posted on Reply
#9
the54thvoid
Intoxicated Moderator
RejZoR: Great, async compute for Deus Ex. Which means it'll run like shit on GTX 900 cards. Thanks NVIDIA for your "complete" DX12 support.
You need to relax. It simply means, at best given AoS benchmarks, that the DX12 path for AMD will yield greater benefits for them over DX11.
Nvidia will get minimal async compute benefits. It depends whether, knowing NV don't do async well, AMD leverage heavier use of it, much as tessellation hampered AMD in Crysis 3.

Chill dude, it's all going to be fine.
Posted on Reply
#10
RejZoR
My system runs all my games at max possible settings at all times, it's this specific shit that always fucks up everything. And I'm not even blaming AMD here. They've done DX12 right, it's NVIDIA that was lazy. But I love Deus Ex franchise, that's why I'm worrying.

Then again, Deus Ex Human Revolution looked amazing so if I get that level of graphics I'm fine with it anyway. So yeah, chilling...
Posted on Reply
#11
the54thvoid
Intoxicated Moderator
RejZoR: My system runs all my games at max possible settings at all times, it's this specific shit that always fucks up everything. And I'm not even blaming AMD here. They've done DX12 right, it's NVIDIA that was lazy. But I love Deus Ex franchise, that's why I'm worrying.

Then again, Deus Ex Human Revolution looked amazing so if I get that level of graphics I'm fine with it anyway. So yeah, chilling...
Remember, Nvidia won't take a hit for DX12 if AoS is the baseline; it just won't improve under the current implementation of its warp schedulers until they (ironically) driver-optimise each title.
Posted on Reply
#12
HalfAHertz
RejZoR: Great, async compute for Deus Ex. Which means it'll run like shit on GTX 900 cards. Thanks NVIDIA for your "complete" DX12 support.
And you deduced this from what exactly? The AoS demo performs perfectly fine on GTX 900 and it supposedly uses AC extensively...
Posted on Reply
#13
RejZoR
There is a difference between "fine" and "great". Fine might be 60-ish fps for you. Great is 144 fps V-Synced for me.
Posted on Reply
#14
ZoneDymo
RejZoR: There is a difference between "fine" and "great". Fine might be 60-ish fps for you. Great is 144 fps V-Synced for me.
Wait, so 60-ish is not fine with you?
Your comparison is a bit off; you added a factor, namely yourself versus the other person.
Posted on Reply
#16
silentbogo
@RejZoR, I think you are overreacting. Your GTX980 will do just fine.

Source
Putting the blinders on and looking specifically at Ashes of the Singularity, is Nvidia as doomed as AMD fanboys would have you believe? No, we wouldn't say so. Although Nvidia does go backwards in DX12 as AMD goes forward, the margins are far from catastrophic. Take the GTX 980 Ti vs. Fury X battle at 1080p for example. Under DX11 the 980 Ti is 15% faster while it is just 2% faster when using DX12. The exact same thing was seen when comparing the GTX 980 and R9 390X.
That's from November.
Gamers should bear in mind that this is still just a single DX12 game and is unlikely to represent DX12 performance as a whole -- no single game could. As is the case with DX11 gaming, it's likely that some DX12 games will favor AMD while others prefer Nvidia.
Take into account results from Hitman, Fable Legends and Rise of the Tomb Raider, and you get a much clearer picture.
Posted on Reply
#17
NeoGalaxy
silentbogo: @RejZoR, I think you are overreacting. Your GTX980 will do just fine.

Source

That's from November.

Take into account results from Hitman, Fable Legends and Rise of the Tomb Raider, and you get a much clearer picture.
Indeed, Fable Legends will help us get the clearest picture ever, since... it was cancelled :P
Posted on Reply
#18
silentbogo
NeoGalaxy: Indeed, Fable Legends will help us get the clearest picture ever, since... it was cancelled :p
Holy crap! How did I miss that?
I was just now reading the announcement from March 7th and it sucks...
Posted on Reply
#19
PP Mguire
RejZoR: There is a difference between "fine" and "great". Fine might be 60-ish fps for you. Great is 144 fps V-Synced for me.
You do realize, of course, that regardless of "async compute" performance your single 980 won't give you 144 fps all the time in new titles maxed... right? I think your expectations are just a bit high.
Posted on Reply
#20
trog100
PP Mguire: You do realize, of course, that regardless of "async compute" performance your single 980 won't give you 144 fps all the time in new titles maxed... right? I think your expectations are just a bit high.
"I think your expectations are just a bit high."

they seem to be linked to the refresh rates of the latest monitors .. he he

trog
Posted on Reply
#21
FordGT90Concept
"I go fast!1!11!1!"
RejZoR: My system runs all my games at max possible settings at all times, it's this specific shit that always fucks up everything. And I'm not even blaming AMD here. They've done DX12 right, it's NVIDIA that was lazy. But I love Deus Ex franchise, that's why I'm worrying.

Then again, Deus Ex Human Revolution looked amazing so if I get that level of graphics I'm fine with it anyway. So yeah, chilling...
On AoS, it's about a 30% difference in FPS at most (less with async compute off). Might just have to set it to High instead of Ultra. Nothing to get too worked up about. :)
Posted on Reply
#22
PP Mguire
trog100"I think your expectations are just a bit high."

they seem to be linked to the refresh rates of the latest monitors .. he he

trog
Mmmm expectation vsync. Esync. :roll:
Posted on Reply
#23
Tom_
Prima.Vera: Deus Ex is kinda the only game worth mentioning from that list....
That is the only uninteresting game on the list.
Posted on Reply
#24
BiggieShady
Yeah, about async compute ... it's super easy on GCN because it is perfectly happy to accept compute commands in the 3D queue. There is no penalty for mixing draw calls and compute commands in the 3D queue.
With Maxwell you have performance penalties from using compute commands concurrently with draw calls, so compute queues are mostly used to offload and execute compute commands in batch.
Essentially if you want to use async compute efficiently on nvidia, you gotta cleanly separate the render pipeline into batches and even consider including CUDA.dll to fully use high priority jobs and independent scheduling (with GK110 and later, CUDA bypasses the graphics command processor and is handled by a dedicated function unit in hardware which runs uncoupled from the regular compute or graphics engine. It even supports multiple asynchronous queues in hardware). It's a complete mess, and all detailed here: ext3h.makegames.de/DX12_Compute.html
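For illustration, here is a rough D3D12 sketch of that batching pattern (names are illustrative and the queues, command lists and fence are assumed to be created beforehand): the frame's compute work goes out as one batch on its own queue, and the graphics queue waits only at the point it consumes the results.

```cpp
// Rough sketch of the "batch the compute work" pattern: instead of
// interleaving compute between individual draws (cheap on GCN, costly
// on Maxwell), all compute is submitted as one batch on its own queue.
#include <d3d12.h>

void SubmitFrame(ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* gfxList,      // pre-recorded draw calls
                 ID3D12CommandList* computeList,  // pre-recorded compute batch
                 ID3D12Fence* fence,
                 UINT64& fenceValue)
{
    // 1. One batched compute submission for the whole frame.
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence, ++fenceValue);

    // 2. The graphics queue waits once, where it consumes the compute
    //    results, rather than stalling around every mixed draw/dispatch.
    gfxQueue->Wait(fence, fenceValue);
    gfxQueue->ExecuteCommandLists(1, &gfxList);
}
```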
Posted on Reply
#25
the54thvoid
Intoxicated Moderator
BiggieShady: Yeah, about async compute ... it's super easy on GCN because it is perfectly happy to accept compute commands in the 3D queue. There is no penalty for mixing draw calls and compute commands in the 3D queue.
With Maxwell you have performance penalties from using compute commands concurrently with draw calls, so compute queues are mostly used to offload and execute compute commands in batch.
Essentially if you want to use async compute efficiently on nvidia, you gotta cleanly separate the render pipeline into batches and even consider including CUDA.dll to fully use high priority jobs and independent scheduling (with GK110 and later, CUDA bypasses the graphics command processor and is handled by a dedicated function unit in hardware which runs uncoupled from the regular compute or graphics engine. It even supports multiple asynchronous queues in hardware). It's a complete mess, and all detailed here: ext3h.makegames.de/DX12_Compute.html
Nice read. Sort of.

It's possible for a dev to work with CUDA to make async work, then. That would require Nvidia to sponsor titles and help with coding for CUDA to prioritise the batches to suit the hardware. The article said that would mean the worst case for AMD but good gains for Nvidia, as the CUDA route allows the hardware to do async batches better. Vice versa is the hardware-only solution that AMD has designed GCN for, which is the worst case for Nvidia.

So, AMD can sponsor titles and Nvidia loses out, or Nvidia can sponsor titles and AMD loses out.

No change then! :rolleyes:
Posted on Reply