
Ashes of the Singularity DirectX 12 Mixed GPU Performance

More specifically, NV hasn't revealed the full architecture of Pascal as it is under NDA, same as Arctic Islands. The assumption is that, knowing the move to DX12 would bring a more low-level API, and having seen AMD use Mantle to reasonable effect, NV won't exactly have rested on their laurels. With Maxwell, the drive was clearly to scale back CUDA compute (which is great at parallelism) in favour of power efficiency and faster clocks. That gave the 980 Ti enormous headway in DX11, which is still the current and ruling API. Betting GCN on a DX12 advantage while Nvidia focused Maxwell on DX11 wasn't a fantastic move by AMD. Latest figures show that despite Fiji parts being readily available, they are not selling as well as Maxwell parts.

http://hexus.net/business/news/comp...t-share-expected-hit-new-low-current-quarter/

I have no idea how Pascal will fare against Polaris. Perhaps Polaris (or whatever the arch is called) will have enough tweaks to finally and resoundingly become the gold standard in gfx architecture. Maybe Pascal will be a Maxwell/Volta bastard son that simply holds on till Volta arrives proper?

What is for sure is that this single DX12 bench isn't any revelation. If async isn't a dev's priority (for whatever reason), then GCN loses its edge. If Nvidia buys into some AAA titles before Pascal (with its assumed parallelism) is out, they'll be sure to 'persuade' devs to put less focus on async.

Roll on Summer - another round of gfx wars :cool:

It already started with Rise of the Tomb Raider

The last PC UE4 build still didn't have async either, but the console version does.
 
If I understood correctly, that would be available in all DX12 games? So this technology allows using two Nvidia graphics cards on SLI-uncertified motherboards without DifferentSLI / HyperSLI (which don't give a 100% guarantee of working fine)?

Yes. DX12 titles with built-in multi-GPU support do not require SLI or CrossFire X. That's not to say those won't exist or perform better, just that there is a new option.
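To make the distinction concrete: under D3D12 explicit multi-adapter the engine itself can hand alternating frames to two dissimilar GPUs, which SLI/CrossFire never allowed. A toy model of that alternate-frame rendering (AFR) throughput, with invented frame times and none of the real transfer or pacing overheads:

```cpp
#include <algorithm>

// Toy AFR model: even frames go to GPU A, odd frames to GPU B.
// Each card only has to finish before its next turn, two frames
// later, so sustained throughput is two frames per slowest-card
// frame time. Milliseconds here are purely illustrative.
double afr_fps(double frame_ms_a, double frame_ms_b) {
    double slower_ms = std::max(frame_ms_a, frame_ms_b);
    return 2000.0 / slower_ms;  // two frames per slower-GPU cycle
}
```

In this idealised sketch, pairing a 16 ms card with a 20 ms card yields 100 FPS, versus 50 FPS on the slower card alone; real mixed pairings pay synchronisation costs the model ignores.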
 
game looks boring.
 
Last time the Ashes benchmark was released, AMD had an edge too, but after new drivers from both companies the AMD edge shrank. AMD has now delivered an optimized driver, so wait for Nvidia to release theirs and do the whole circus again; maybe the same will happen.

All in all, it's somewhat pointless: it's just one game and it's only a beta. Just interesting (again) that Fury X + 980 Ti is the best combo, not 980 Ti + Fury X, 980 Ti + 980 Ti, or Fury X + Fury X.
 

I'm not "disagreeing" with what you're saying, and you did qualify your statements, first with "supposedly" and then "assumption."

To the best of my admittedly little knowledge, NV would almost have to go back to the drawing board to add the hardware for asynchronous compute, when I look at the difference between their Hyper-Q and AMD's ACE scheduling.

If so, I don't see that happening within two generations of GPUs if Mantle caught their attention, or one generation if async compute in DX12 raised an eyebrow. Still, my thanks for a comprehensive reply.
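The scheduling difference being argued about can be sketched with a toy model (invented millisecond figures, not benchmarks): if independent compute work has to queue up behind graphics, a frame costs the sum of the two, whereas hardware that feeds compute queues concurrently, as GCN's ACEs are described as doing, ideally pays only the longer of the two.

```cpp
#include <algorithm>

// Back-to-back execution: the compute pass waits behind graphics.
double serial_frame_ms(double gfx_ms, double compute_ms) {
    return gfx_ms + compute_ms;
}

// Ideal async overlap: both run concurrently with zero contention,
// so the frame costs only the longer of the two workloads.
double async_frame_ms(double gfx_ms, double compute_ms) {
    return std::max(gfx_ms, compute_ms);
}
```

With, say, 12 ms of graphics and 4 ms of compute, serialising costs 16 ms per frame while perfect overlap costs 12 ms; real hardware lands somewhere between the two extremes.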
 
 
Showing multi-GPU scaling only with the 970 sucked big time... =(
 

The design specifics (and I mean specifics, not vague knowledge) of Pascal, the numbers of units and the type of design per CUDA core, aren't known. We know GCN has supported async for some time; it's fair to say that goes back years, to GCN 1.1? I don't think Nvidia will have designed Pascal without the foresight to address new and upcoming changes to the way the APIs work.

To be fair, DX12 uptake will be slow; it will be bits and bobs, and maybe next year we'll have DX12 aplenty. By then, if my card doesn't pass muster, I'll upgrade. I'll happily buy a Polaris card if it's what ticks the boxes. What is mildly interesting is that despite AMD's flair for developing the new stuff, their recent release doesn't cater for certain DX12 characteristics:


From Guru 3D: Single GPU frame pacing.



The final results from their benchmark are flawed for all Radeon cards. Internally, the 3D engine might even be rendering 900 frames per second, but if VSYNC remains enabled at the end of the pipeline, 60 FPS (or whatever matches your refresh rate) will be your upper limit, and VSYNC itself will affect the overall framerate, as demonstrated in the FCAT results above. With one and the same DX12 code path, shouldn't the behaviour then be the same for both AMD and NVIDIA?


Update: hours before the release of this article we got word back from AMD. They have confirmed what we are seeing. Radeon Software 16.1 / 16.2 does not support DirectFlip in DX12, which is mandatory to solve this specific situation/measurement. AMD intends to resolve this issue in a future driver update. Once that happens we'll revisit FCAT.
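The cap Guru3D is describing is simple enough to state in code: with vsync forced at the end of the pipeline, the presented framerate can never exceed the refresh rate, however fast the engine renders internally. A minimal sketch of that relationship:

```cpp
#include <algorithm>

// With vsync forced on, the displayed rate is clamped to the panel's
// refresh rate; a 900 FPS internal rate on a 60 Hz panel still
// presents only 60 frames per second.
double displayed_fps(double internal_fps, double refresh_hz) {
    return std::min(internal_fps, refresh_hz);
}
```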

Remember - async is not DX12, it's a feature of it. More balance is required to draw any concrete conclusions, and as we keep saying, there's not enough evidence. Sure, the odd brand-loyalty card holder will chip in with irrelevant things here or there, but I'm not aware anyone has a crystal ball. I think things will be far more interesting this time round than Maxwell/Fiji. HBM was great, but the changes for DX12 are more interesting.
 
What is mildly interesting is that despite AMD's flair for developing the new stuff, their recent release doesn't cater for certain DX12 characteristics
Ah, thankfully somebody else also noticed ... as I pointed out in post number 7
 
As for me, this bench shows the AMD cards have the better lifespan. Summer is EVERLASTING in Hawaii :D
 
A thought crossed my mind: AMD cards are doing exceptionally well in this benchmark now because of async compute but how does async compute work in a multi-GPU or Crossfire scenario? Is async compute disabled in the second card? Are async tasks handed to the second card as if multi-GPU/Crossfire wasn't in use? Do both cards get and perform the same async compute tasks? Is the driver lording over all of the ACEs and assigning tasks to each card individually based on load?
 
Look at the AnandTech review. They tested multi-GPU, and I believe it has your answer. :)
 
The AnandTech review I'm looking at doesn't look into how async works on multi-GPU setups, only that AMD benefits and NVIDIA does not.
 
Well, it's clear it works (one of your questions, LOL!). I would imagine, since DX12 can use AFR now, that it would not be disabled on the second GPU... but that is just a guess.
 
Oh my. So the forced-vsync thing isn't exclusive to the Windows Store; it is also mandatory when using D3D12? If Microsoft doesn't change their mind about D3D12, Vulkan is about to get VERY popular. It even appears Microsoft wants to kill adaptive sync.
 
That's a little worrying. They're making GPUs render at full speed, above vsync, and wasting the frames?

That's kind of stupid.
 
If Microsoft doesn't change their mind about D3D12, Vulkan is about to get VERY popular.
I'd like that a lot... Vulkan also has no silly OS restrictions, and all it needs now is for major engines to support a Vulkan render path out of the box.
 
Imagine a world where you don't have to sell or get rid of your old GPU when you need to upgrade: just buy a new one and add it to your current GPU. I'd love it.
 

After 3 upgrade cycles you'd be out of PCIe slots.
 

... by which time the rest of the system is hopelessly outdated anyway, so that really can't be an issue.
 