
Ashes of the Singularity DirectX 12 Mixed GPU Performance


Introduction



A true multi-GPU utopia has long eluded us. While railroad companies can get locomotives of different make and horsepower to haul their trains in perfect harmony, combining two or more GPUs of any brand, model, or architecture (as long as they support the same API or the feature-level the app renders on) has seemed impossible.

There have been past efforts to make this work through GPU virtualization, such as Lucid Hydra, but while the idea was noble, its implementation never really took off. Microsoft has now taken a bold step in this direction by implementing API-level multi-GPU technology of the kind we have dreamed of in DirectX 12, an API that made its debut with Windows 10.

Ashes of the Singularity, developed by Stardock, is a real-time, large-scale strategy game similar to Supreme Commander or Total Annihilation. The game created quite a buzz in enthusiast circles during its development because it was one of the first titles to take advantage of DirectX 12. This also makes it an interesting GPU benchmark for DirectX 12 capable hardware, which has been on the market for the past 18 months.

In this review, we'll be making guacamole with video cards. We will compare the single-GPU performance of performance-through-enthusiast segment cards to their respective SLI or CrossFire configurations, their DirectX 12 multi-GPU configurations, and several multi-GPU configurations that mix and match various GPUs.

New DirectX 12 Multi-GPU Features

One of the highlights of DirectX 12 is its explicit multi-adapter support, which lets you harness various GPUs for accelerated rendering performance. Unlike SLI or CrossFire, it is vendor agnostic and can combine the performance of any GPU with that of another, instead of requiring GPUs of the same architecture or with a set of similar performance characteristics. All you need is DirectX 12 API support.



In DirectX 12, each video card has its own unique memory storage, unlike SLI or CrossFire, where everything is mirrored. Gone are the days when the total video memory your 3D app could see was that of a single card. Two 4 GB cards make 8 GB of usable video memory, for example, not two copies of 4 GB. Microsoft introduced a new method of rendering with multiple GPUs called "Split Frame Rendering" (SFR), which breaks the 3D scene into tiles that each GPU renders. The best part? Resources need not be perfectly mirrored between the two GPUs, so they have independent memory pools that add up to the machine's total available video memory.
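To illustrate the idea behind SFR, here is a minimal, hypothetical sketch in Python (deliberately not real DirectX 12 code, which is a C++ API; the function name, tile size, and GPU weights are all assumptions for illustration). It splits a frame into vertical tile strips and hands each GPU a share proportional to its relative performance, which is why two mismatched cards can both contribute usefully:

```python
def assign_tiles(frame_width, tile_width, gpu_weights):
    """Assign vertical tile strips to GPUs in proportion to their weight.

    gpu_weights maps a GPU name to a rough relative-performance figure;
    a faster card gets more tiles, so both finish at about the same time.
    """
    num_tiles = frame_width // tile_width
    total = sum(gpu_weights.values())
    assignment = {}
    start = 0
    for gpu, weight in gpu_weights.items():
        share = round(num_tiles * weight / total)
        assignment[gpu] = list(range(start, min(start + share, num_tiles)))
        start += share
    # Hand any tiles left over from rounding to the last GPU.
    last = list(gpu_weights)[-1]
    assignment[last] += list(range(start, num_tiles))
    return assignment

# Example: a card roughly twice as fast as its partner splitting a
# 3840-pixel-wide frame into 240-pixel strips.
tiles = assign_tiles(3840, 240, {"fast_gpu": 2.0, "slow_gpu": 1.0})
```

In a real engine the driver and API manage the actual tile scheduling and cross-adapter copies; the sketch only captures the proportional-split principle.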



Another key DirectX 12 feature is asynchronous shaders (async shaders), which allow a GPU's SIMD number-crunching resources to be utilized more fully by overlapping compute and graphics workloads.

Benchmark Video



We recorded a video of the benchmark in 4K to show you how amazingly well it runs, without any rendering errors or hiccups. It was particularly surprising to find the rendering extremely smooth given the sizable performance uplift.

You don't need any special drivers to take advantage of DirectX 12's native multi-GPU tech. Simply enable multi-GPU in the settings, and the game will use the card connected to the monitor as its primary GPU while the other is used to improve rendering performance. You will, however, have to disable CrossFire/SLI in the driver so that both GPUs are unlinked and separately addressable.
