Wednesday, March 23rd 2016

AMD Announces Exciting DirectX 12 Game Engine Developer Partnerships

AMD today once again took the pole position in the DirectX 12 era with an impressive roster of state-of-the-art DirectX 12 games and engines, each with extensive tuning for the Graphics Core Next (GCN) architecture at the heart of modern Radeon GPUs.

"DirectX 12 is poised to transform the world of PC gaming, and Radeon GPUs are central to the experience of developing and enjoying great content," said Roy Taylor, corporate vice president, Content and Alliances, AMD. "With a definitive range of industry partnerships for exhilarating content, plus an indisputable record of winning framerates, Radeon GPUs are an end-to-end solution for consumers who deserve the latest and greatest in DirectX 12 gaming."
"DirectX 12 is a game-changing low overhead API for both developers and gamers," said Bryan Langley, Principal Program Manager, Microsoft. "AMD is a key partner for Microsoft in driving adoption of DirectX 12 throughout the industry, and has established the GCN Architecture as a powerful force for gamers who want to get the most out of DirectX 12."

Optimized for AMD Radeon Graphics
  • Ashes of the Singularity by Stardock and Oxide Games
  • Total War: WARHAMMER by Creative Assembly
  • Battlezone VR by Rebellion
  • Deus Ex: Mankind Divided by Eidos-Montréal
  • Nitrous Engine by Oxide Games
Total War: WARHAMMER
A fantasy strategy game of legendary proportions, Total War: WARHAMMER combines an addictive turn-based campaign of epic empire-building with explosive, colossal, real-time battles, all set in the vivid and incredible world of Warhammer Fantasy Battles.
Sprawling battles with high unit counts are a perfect use case for the uniquely powerful GPU multi-threading capabilities offered by Radeon graphics and DirectX 12. Additional support for DirectX 12 asynchronous compute will also encourage lightning-fast AI decision making and low-latency panning of the battle map.

Battlezone VR
Designed for the next wave of virtual reality devices, Battlezone VR gives you unrivalled battlefield awareness, a monumental sense of scale and breathless combat intensity. Your instincts and senses respond to every threat on the battlefield as enemy swarms loom over you and super-heated projectiles whistle past your ears.
Rolling into battle, AMD and Rebellion are collaborating to ensure Radeon GPU owners will be particularly advantaged by low-latency DirectX 12 rendering that's crucial to a deeply gratifying VR experience.

Ashes of the Singularity
AMD is once again collaborating with Stardock, in association with Oxide, to bring gamers Ashes of the Singularity. This real-time strategy game, set in the far future, redefines the possibilities of RTS with the unbelievable scale provided by Oxide Games' groundbreaking Nitrous Engine. The fruits of this collaboration have made Ashes of the Singularity the first game to release with DirectX 12 benchmarking capabilities.

Deus Ex: Mankind Divided
Deus Ex: Mankind Divided, the sequel to the critically acclaimed Deus Ex: Human Revolution, builds on the franchise's trademark choice-and-consequence, action-RPG gameplay to create a memorable and highly immersive experience. AMD and Eidos-Montréal have engaged in a long-term technical collaboration to build and optimize DirectX 12 support in their engine, including special support for GPUOpen features like PureHair, based on TressFX Hair, and Radeon-exclusive features like asynchronous compute.

Nitrous Engine
Radeon graphics customers the world over have benefitted from unmatched DirectX 12 performance and rendering technologies delivered in Ashes of the Singularity via the natively DirectX 12 Nitrous Engine. Most recently, Benchmark 2.0 was released with comprehensive support for DirectX 12 asynchronous compute, delivering unquestionably dominant performance from Radeon graphics.

With massive interplanetary warfare at our backs, Stardock, Oxide and AMD announced that the Nitrous Engine will continue to serve a roster of franchises in the years ahead. Starting with Star Control and a second unannounced space strategy title, Stardock, Oxide and AMD will continue to explore the outer limits of what can be done with highly-programmable GPUs.

Premier Rendering Efficiency with DirectX 12 Asynchronous Compute
Important PC gaming effects like shadowing, lighting, artificial intelligence, physics and lens effects often require multiple stages of computation before determining what is rendered onto the screen by a GPU's graphics hardware.

In the past, these steps had to happen sequentially. Step by step, the graphics card would follow the API's process of rendering something from start to finish, and any delay in an early stage would send a ripple of delays through future stages. These delays in the pipeline are called "bubbles," and they represent a brief moment in time when some hardware in the GPU is paused to wait for instructions.

What sets Radeon GPUs apart from their competitors, however, is the Graphics Core Next architecture's ability to pull in useful compute work from the game engine to fill these bubbles. For example: if there's a bubble while rendering complex lighting, Radeon GPUs can fill in the blank by computing the behavior of AI instead.

Radeon graphics cards don't need to follow the step-by-step process of the past, or of their competitors, and can do this work together, or concurrently, to keep things moving.
Filling these bubbles improves GPU utilization, reduces input latency, and raises efficiency and performance for the user by minimizing or eliminating the ripple of delays that could stall other graphics cards. Only Radeon graphics currently support this crucial capability in DirectX 12 and VR.
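For developers, this concurrency is exposed in DirectX 12 through multiple command queues. The sketch below is purely illustrative and is not drawn from any engine named above: it assumes an already-created device and two pre-recorded command lists, and the SubmitFrame helper is invented for the example. It shows a direct (graphics) queue and a compute queue being fed independently, with a fence expressing only the real dependency instead of serializing the whole frame.

```cpp
// Minimal sketch (not from the article): feeding a DIRECT queue and a COMPUTE
// queue independently so the GPU may overlap the work and fill pipeline
// "bubbles". Error handling, resource setup and command-list recording are
// omitted; queues would normally be created once at startup, not per frame.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitFrame(ID3D12Device* device,
                 ID3D12GraphicsCommandList* gfxList,      // e.g. shadow + lighting passes
                 ID3D12GraphicsCommandList* computeList)  // e.g. AI / physics / post work
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

    // Submit both workloads; the driver/GPU is free to run them concurrently.
    ID3D12CommandList* cmp[] = { computeList };
    ID3D12CommandList* gfx[] = { gfxList };
    computeQueue->ExecuteCommandLists(1, cmp);
    gfxQueue->ExecuteCommandLists(1, gfx);

    // Where later graphics work genuinely depends on the compute results,
    // a fence expresses just that dependency rather than stalling everything.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1);
    gfxQueue->Wait(fence.Get(), 1);
    // ...anything submitted to gfxQueue after this Wait runs once the compute work is done.
}
```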

An Undeniable Trend
With five new DirectX 12 game and engine partnerships, unmatched DirectX 12 performance in every test thus far, plus exclusive support for the radically powerful DirectX 12 asynchronous compute functionality, Radeon graphics and the GCN architecture have rapidly ascended to their position as the definitive DirectX 12 content creation and consumption platform.

This unquestionable leadership in the era of low-overhead APIs emerges from a calculated and virtuous cycle of distributing the GCN architecture throughout the development industry, then partnering with top game developers to design, deploy and master Mantle's programming model. Through the years that followed, open and transparent contribution of source code, documentation and API specifications ensured that AMD philosophies remained influential in landmark projects like DirectX 12.

40 Comments on AMD Announces Exciting DirectX 12 Game Engine Developer Partnerships

#26
FordGT90Concept
"I go fast!1!11!1!"
To use CUDA is to disable the 3D pipeline. NVIDIA cards require switching between the two. That doesn't go over well in games because the 3D pipeline is far more important.

GCN doesn't care if it is compute or 3D, it queues it into the same pipeline.

NVIDIA needs to fix it, otherwise games just won't use it.
#27
Vayra86
Getting worked up about perf differences with current gen cards in DX12 mode:

1. Pointless
2. Predictable
3. Rather naive

I have underlined this, ever since the release of Maxwell and the PR about the current generation being "DX12 ready", every single time: NONE of the current-gen cards are really ready, and buying into them expecting the opposite is pretty short-sighted. We all knew big changes were close, and here they are. The new architecture/next-gen GPUs will be ready for it, and current-gen is already bottlenecked in many other ways, most notably CPU load and VRAM.

Stop worrying for nothing, because it only underlines how uneducated your purchase has been. And it displays a lack of insight into the way the industry works.

Next-gen selling points are going to be bigger leaps in DX12 performance; that's how they get us to buy the new product.
#28
Xzibit
Straight console ports will benefit AMD. Nvidia would have to sponsor titles to null it (DX12 back to DX11) and implement its beneficial code, like the new Tomb Raider.
#29
the54thvoid
Intoxicated Moderator
Xzibit: Straight console ports will benefit AMD. Nvidia would have to sponsor titles to null it (DX12 back to DX11) and implement its beneficial code, like the new Tomb Raider.
Straight console ports are abominations. Frame locked, key bindings screwed and PC graphics settings not there.
#30
TheHunter
RejZoR: My system runs all my games at max possible settings at all times, it's this specific shit that always fucks up everything. And I'm not even blaming AMD here. They've done DX12 right, it's NVIDIA that was lazy. But I love the Deus Ex franchise, that's why I'm worrying.

Then again, Deus Ex Human Revolution looked amazing so if I get that level of graphics I'm fine with it anyway. So yeah, chilling...
Deus Ex: Human Revolution was an AMD Evolved game too and it ran perfectly fine on Nvidia... actually it ran much better than AMD later.
I have no doubt it will be the same now.

As for this list of games... nothing but a yawn fest... only the new Deus Ex looks interesting, the rest isn't worth mentioning.
#31
FYFI13
TheGuruStud: Wake me up when devs switch to Vulkan and drop DX completely.
DirectX 12 performs better than Vulkan according to the first tests, BUT... Vulkan works on most available platforms, and for that reason alone DirectX should be ditched for good!
#32
BiggieShady
FordGT90Concept: To use CUDA is to disable the 3D pipeline. NVIDIA cards require switching between the two. That doesn't go over well in games because the 3D pipeline is far more important.
It is doable, Just Cause 2 had a water simulation done in pure CUDA.
Quoted from the article:
Ask your personal Nvidia engineer for how to share GPU side buffers between DX12 and CUDA.
FordGT90Concept: NVIDIA needs to fix it, otherwise games just won't use it.
Agreed. After all, Just Cause 2 is the only game I can think of that used CUDA.
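A rough, hypothetical sketch of what sharing GPU-side buffers between DX12 and CUDA can look like: the MapD3D12BufferIntoCuda helper is invented for illustration, and it relies on CUDA's external-memory interop (cudaImportExternalMemory), which NVIDIA only added in a later CUDA release, so it illustrates the concept rather than anything these 2016-era titles could have shipped with.

```cpp
// Hypothetical sketch: make one GPU buffer visible to both a DirectX 12
// renderer and a CUDA kernel via CUDA external-memory interop.
// Error handling and cross-API synchronization (fences/semaphores) omitted.
#include <windows.h>
#include <d3d12.h>
#include <cuda_runtime.h>

void* MapD3D12BufferIntoCuda(ID3D12Device* device, ID3D12Resource* sharedBuffer,
                             size_t sizeInBytes)
{
    // The D3D12 resource must have been created on a heap with D3D12_HEAP_FLAG_SHARED.
    HANDLE win32Handle = nullptr;
    device->CreateSharedHandle(sharedBuffer, nullptr, GENERIC_ALL, nullptr, &win32Handle);

    // Import the shared handle into CUDA as external memory.
    cudaExternalMemoryHandleDesc memDesc = {};
    memDesc.type = cudaExternalMemoryHandleTypeD3D12Resource;
    memDesc.handle.win32.handle = win32Handle;
    memDesc.size = sizeInBytes;
    memDesc.flags = cudaExternalMemoryDedicated;

    cudaExternalMemory_t extMem = nullptr;
    cudaImportExternalMemory(&extMem, &memDesc);

    // Obtain a device pointer a CUDA kernel (e.g. a water simulation) can read and write.
    cudaExternalMemoryBufferDesc bufDesc = {};
    bufDesc.offset = 0;
    bufDesc.size = sizeInBytes;

    void* devPtr = nullptr;
    cudaExternalMemoryGetMappedBuffer(&devPtr, extMem, &bufDesc);
    return devPtr;
}
```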
#33
Hiryougan
RejZoR: Great, async compute for Deus Ex. Which means it'll run like shit on GTX 900 cards. Thanks NVIDIA for your "complete" DX12 support.
I don't think you understand how AC works.
For example, if it wasn't used at all in AoS, the results for Nvidia would be EXACTLY THE SAME, not better. It simply helps the performance on GCN cards. It doesn't make cards that are not able to do it perform worse.
#34
applejack
BiggieShady: It is doable, Just Cause 2 had a water simulation done in pure CUDA.

Agreed. After all, Just Cause 2 is the only game I can think of that used CUDA.
If by "pure CUDA" you mean game devs leveraging CUDA directly (no middleware), add these:
CUDA/GPU transcode is used in Rage & Wolfenstein (maybe available to other id Tech 5 titles via an ini tweak)
NASCAR '14 uses CUDA to accelerate in-house particle effects.

However, around 45 other games use CUDA for advanced PhysX acceleration. A middleware designed by the creators of the architecture itself is "pure" enough for me.
#35
FordGT90Concept
"I go fast!1!11!1!"
Hiryougan: I don't think you understand how AC works.
For example, if it wasn't used at all in AoS, the results for Nvidia would be EXACTLY THE SAME, not better. It simply helps the performance on GCN cards. It doesn't make cards that are not able to do it perform worse.
Async on causes NVIDIA cards to lose performance across the board: www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/6


Developers need to enable async compute on GCN cards and disable it on NVIDIA cards to get the best framerate for their respective platforms.
#36
RejZoR
It's the same situation as with tessellation. If you use it to gain performance, you'd have gains on better hardware and no penalty on "unsupported" hardware. But when you use a feature to cram more "details" into a game, that simply isn't true anymore. And the same goes for async compute. It's not there solely to boost identical graphics quality on all graphics cards; I bet they'll use it to cram more details into the game thanks to those gains, and in that case performance just won't be the same.

What I'm saying is that they won't be using async compute to achieve insane framerates across the board, they'll use it to add more detail and sacrifice performance with it. It has always been like this. Instead of making the current game accessible to more players, they want to make it more appealing to the rich crowd with beefed-up PCs. And then they wonder why sales aren't up there. Lol...
#37
Hiryougan
FordGT90Concept: Async on causes NVIDIA cards to lose performance across the board: www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/6


Developers need to enable async compute on GCN cards and disable it on NVIDIA cards to get the best framerate for their respective platforms.
Afaik Nvidia cards lost some performance there because they tried to emulate async compute (and we see how that went).
#38
BiggieShady
applejack: If by "pure CUDA" you mean game devs leveraging CUDA directly (no middleware), add these:
CUDA/GPU transcode is used in Rage & Wolfenstein (maybe available to other id Tech 5 titles via an ini tweak)
NASCAR '14 uses CUDA to accelerate in-house particle effects.

However, around 45 other games use CUDA for advanced PhysX acceleration. A middleware designed by the creators of the architecture itself is "pure" enough for me.
I meant "pure" CUDA in the sense that one part of the rendering pipeline (e.g. the simulated water geometry in JC2) that would normally be done via a geometry shader is instead done via shared side buffers using CUDA only.
Incidentally, that code in JC2 was written by Nvidia themselves, as was probably most of the CUDA-specific code in your examples and half of the code in those ~45 middleware integrations. The other half came from SDK examples, also written by them. So yes, it's "pure" that way too: untainted by all the different middleware modifications or independent CUDA+DirectX engine implementations throughout the world.
#39
FordGT90Concept
"I go fast!1!11!1!"
Hiryougan: Afaik Nvidia cards lost some performance there because they tried to emulate async compute (and we see how that went).
They did not. Async shaders are part of the DirectX 12 API. AMD cards handle async workloads asynchronously, whereas NVIDIA cards handle them synchronously; hence the performance boost for the former and the performance hit for the latter. If those 2-4% more frames are important, the only solution on NVIDIA is to not use async shaders at all.
#40
BiggieShady
FordGT90Concept: NVIDIA cards handle async workloads synchronously ... the only solution on NVIDIA is to not use async shaders at all.
That's way too oversimplified. Both architectures benefit from async shaders, and they will be used simply because they're in DirectX. The difference is that GCN is more efficient when 3D and compute commands are mixed, which is the worst case for Nvidia. The Nvidia architecture likes them separated (in the lowest number of batches possible), which is the worst case for AMD.
All serious engines will commit the workload for a single frame differently on different GPU architectures to achieve peak efficiency.
GCN is great because it allows a simpler, more flexible approach where you can just keep brute-force saturating the command queues. With Nvidia, currently you have to treat async task invocations like draw calls, managing and batching them because of the time overhead from the context switch. And it's not only a time overhead, because the synchronicity here is a side effect of a context switch that may wait on longer-running async tasks to finish.
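A minimal, hypothetical sketch of that difference in submission strategy; SubmitPolicy and SubmitAsyncCompute are invented names, and the two policies stand in for the mixed-versus-batched approaches described above rather than any shipping engine's code.

```cpp
// Hypothetical engine-side sketch: the same frame's async compute work handed
// to the compute queue differently depending on how a GPU prefers to be fed.
#include <d3d12.h>
#include <vector>

enum class SubmitPolicy { InterleavedWithGraphics, SeparatedBatch };

void SubmitAsyncCompute(SubmitPolicy policy,
                        ID3D12CommandQueue* computeQueue,
                        const std::vector<ID3D12CommandList*>& computeLists)
{
    if (policy == SubmitPolicy::InterleavedWithGraphics) {
        // GCN-style: keep the queue saturated, submit each list as it is recorded.
        for (ID3D12CommandList* list : computeLists)
            computeQueue->ExecuteCommandLists(1, &list);
    } else {
        // Batch-style: treat async invocations like draw calls and hand them over
        // in as few submissions as possible, keeping compute separated from 3D
        // to limit costly context switches.
        computeQueue->ExecuteCommandLists(
            static_cast<UINT>(computeLists.size()), computeLists.data());
    }
}
```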