
Ashes of the Singularity DirectX 12 Mixed GPU Performance

[hyperventilation begins, before even reading the article]

Edit: Having read the article, yeah - it's exciting stuff. As mentioned, if we can adjust the rendering split on each card (60/40, 30/70, etc.) then we're in for the golden era of PC gaming.
Even something as simple as putting AA onto GPU 2 would be enough.

I see this happening more at the engine level (look at how popular the Unity engine has become for indie games), so that suddenly a dozen games can use it at once.
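For anyone curious what engine-level support actually means at the API level, here's a rough C++ sketch of the starting point for DX12 explicit multi-adapter: the application enumerates every adapter itself and creates an independent device on each, and from there the engine (not the driver) decides what each GPU renders. This is generic D3D12/DXGI boilerplate, not anything from the article or the Nitrous engine.

```cpp
// Minimal sketch: enumerate every hardware adapter and create a D3D12 device on each.
// With explicit multi-adapter, splitting work (e.g. 60/40, or AA on GPU 2) is then the
// application's responsibility. Error handling trimmed for brevity.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // one independent device per GPU, vendor-agnostic
    }
    return devices;
}
```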
 
It seems we might as well follow the same rules as before. A single 980Ti is faster than a 980Ti paired with the R9 380. In fact, a single 980Ti is even faster than the 980Ti paired with a GTX 960, so it's not a vendor-specific or even architecture-specific issue.
 
It clearly shows the % gains in FPS from DX11 to DX12 for each card.

And to add to something some of us talked about when someone asked about buying a new GPU to keep for the long term: look at the link below and check how much more VRAM DX12 asks for. And then rethink the 970 3.5GB vs 390 8GB question...

http://www.overclock3d.net/reviews/..._beta_phase_2_directx_12_performance_review/5
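On the VRAM point, the numbers in reviews like that one can also be checked on your own machine: DXGI 1.4 (the Windows 10 / DX12-era interface) lets an application query how much local video memory the OS is budgeting for it and how much it is actually using. Rough sketch below, assuming you already have the IDXGIAdapter1 for your card; nothing here is specific to AotS.

```cpp
// Sketch: query the process's local VRAM budget and current usage via DXGI 1.4.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

void PrintVramUsage(IDXGIAdapter1* adapter1)
{
    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter1->QueryInterface(IID_PPV_ARGS(&adapter3))))
        return; // IDXGIAdapter3 requires Windows 10 / DXGI 1.4

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);
    std::printf("VRAM budget: %llu MB, in use: %llu MB\n",
                info.Budget / (1024ull * 1024), info.CurrentUsage / (1024ull * 1024));
}
```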

You mean, look at how much VRAM this single beta DirectX 12 engine uses. Making a blanket statement about the entirety of DX12 based on a single beta engine shows astounding ignorance, which you then confirmed with your "OMG GTX 970 HAZ 3.5GB" comment.
 
Would this work with something like an HD 520 and a 930M?

Yeah, I know this is a crap laptop solution, but every additional fps matters :D:D:D
 
It seems we might as well follow the same rules as before. A single 980Ti is faster than a 980Ti paired with the R9 380. In fact, a single 980Ti is even faster than the 980Ti paired with a GTX 960, so it's not a vendor-specific or even architecture-specific issue.

Check out Anandtech. They conclude this title is actually CPU limited and the CPU literally cannot do the work of keeping both GPUs working.

Edit: The title is also in beta, so there's no telling whether the end result will be the same. Maybe there's a lot of debugging code in there, maybe additional optimizations are still to land.
 
As W1zzard says though, it's up to the devs to code for it to make it work, and how many will bother?
Sweet bugger all. Given the current state of QA testing of games, I really can't see many devoting resources to a feature that won't be a primary selling point for games - and you can forget about Nvidia/AMD sponsorship to make it happen. Nvidia is happy with its ecosystem, and the last thing AMD needs is to be seen endorsing Nvidia features like PhysX and the rest of GameWorks. Mixed-GPU looks like one of those promising ideas that will probably be fraught with issues until something else takes its place and it gets buried in a shallow grave - much the same as Lucidlogix's Hydra.
AoS is a known quantity so far, but in the world of consoles I'm not so sure it'll catch on. As far as the card differences go, we know AMD has a better DX12 Async architecture, so the results aren't too surprising. But once again, we still need more DX12 game benchmarks. I have zero interest in AoS as a purchase, so other titles' implementations of DX12 features will have varying results and more interest for me. It's very annoying Deus Ex: Mankind Divided was pushed back, as that was going to incorporate DX12, was it not? Plus it was an AMD-sponsored title, so that would be a great platform for AMD/RTG to shout about their achievements.
I think AotS is a best-case scenario for AMD and a worst-case scenario for Nvidia. The Nitrous game engine was developed on and with GCN and Mantle in mind for Star Swarm, so I would reserve judgement until we see a few more game engines and dev implementations before drawing a conclusion.
Still, by the time DX12 and Vulkan actually mean gameplay I'll more than likely be on the next generation of cards from one (or possibly both if mixed-GPU actually takes off), so wake me when it is actually relevant.
 
Check out Anandtech. They conclude this title is actually CPU limited and the CPU literally cannot do the work of keeping both GPUs working.

Edit: The title is also in beta, so there's no telling whether the end result will be the same. Maybe there's a lot of debugging code in there, maybe additional optimizations are still to land.

If that were the case, then why would we see such significant drops, and why would we see improvements with other setups? If it were CPU limited that badly, I'd expect to see minimal changes. The title still being in beta remains a valid point, though; I suppose that remains to be seen in the final product.
 
If that were the case, then why would we see such significant drops, and why would we see improvements with other setups? If it were CPU limited that badly, I'd expect to see minimal changes. The title still being in beta remains a valid point, though; I suppose that remains to be seen in the final product.

I haven't had the time to read their review properly (just glanced over it while at work), but the explanation is somewhere on this page: http://anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/4
 
Makes you wonder though - will Nvidia cock-block this blender feature? They aren't keen on sharing, especially now when AMD is weak.
 
Makes you wonder though - will Nvidia cock-block this blender feature? They aren't keen on sharing, especially now when AMD is weak.


They'll more than likely work on a new DX12 version of SLI, possibly one that makes the game engine think it's a single GPU but allows *them* to customise which parts are rendered on which GPU (forcing AA to GPU 2, PhysX to GPU 3, etc.).
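Rough idea of what that "engine thinks it's a single GPU" model already looks like in D3D12: with a driver-linked (SLI/CrossFire-style) adapter, one device exposes multiple nodes and work is steered to a specific GPU through node masks. Purely illustrative sketch, not anything Nvidia has announced:

```cpp
// Sketch: on a linked-node adapter, one ID3D12Device covers several physical GPUs and
// a NodeMask picks which one a queue (and the work submitted to it) runs on.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateQueueOnNode(ID3D12Device* device, UINT nodeIndex)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
    desc.NodeMask = 1u << nodeIndex; // which GPU in the link gets this queue

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

// Usage idea: if device->GetNodeCount() > 1, keep the main scene on node 0 and push
// post-processing/AA to a queue created with nodeIndex = 1.
```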
 
AMD's long-term strategic choices are FINALLY coming to fruition with the changes in DX12. Async, the early investments in Mantle - they may very well get their money's worth out of it, and I, along with (I'm sure) many others, have long thought otherwise.

I very, very much like the fact that AMD's cards are now overtaking their Nvidia counterparts. There is finally a performance gap at several price points that Nvidia can no longer 'fix' through GameWorks optimizations and just sending engineers around to devs to 'work on code'. This is exactly the way in which AMD can overtake Nvidia in the long run: not by code-specific adjustments, but by tech on the hardware level that is well suited to a new era in gaming. Having Nvidia play catch-up is good, very good for the market, and the fact that an underdog can do this shows how much there is still to win in terms of efficiency, performance and a healthy marketplace.

Go AMD. For the first time in years, you've got me interested beyond a few marketing slides. Put this performance to work in practical solutions and games, and you may very well be back in the game. I really like seeing Fury cards becoming worth the money; before this it was way too easy to think HBM had no real purpose. However, it all depends so much on how well they manage to port this performance boost to games outside Ashes.

@Mussels That seems extremely Nvidia-like for a solution that both keeps their SLI contracts intact and at the same time gives them a feature to 'market'. If they do this, the current 780ti is the last Nvidia card for me ;)
 
And when Pascal releases that (supposedly) fixes/improves the async performance.....................

I don't think HBM has much to do with it... it's still not 'needed' except for 4K res and VR - both of which hold a nearly non-existent market share at this time.

I have to say, I think NVIDIA came into the party at the right time with their HBM2 on Pascal.
 
And when Pascal releases that (supposedly) fixes/improves the async performance.....................

We get new conspiracy theories, duh.

I'm just glad that my $150 280X-turned-290 (via warranty) is going to be even more awesome in DX12; I got my money's worth :D
 
And when Pascal releases that (supposedly) fixes/improves the async performance.....................

Supposedly. Hopefully. Maybe.

The last Pascal demo I saw was actually running on Maxwell cards. I think Nvidia's PowerPoint slides are miles ahead of reality and they may very well run into some trouble. Will Nvidia 'do an AMD'? I've got popcorn at the ready :)
 
They'll more than likely work on a new DX12 version of SLI, possibly one that makes the game engine think it's a single GPU but allows *them* to customise which parts are rendered on which GPU (forcing AA to GPU 2, PhysX to GPU 3, etc.).
Is it not possible to do that now? I thought people were using one weak Nvidia GPU for PhysX and another one for the hard work for some time now.
 
Is it not possible to do that now? I thought people were using one weak Nvidia GPU for PhysX and another one for the hard work for some time now.

That is not SLI; that is just assigning PhysX to whatever you want, like the CPU or a GPU of choice.

But yes, as far as implementation goes, I would much rather see Nvidia do that for other things like post-processing, AA and whatnot, to make it SLI-independent. I mean, they can already run their AA across SLI and PhysX on a component of choice; now they just need to marry the two and remove the SLI requirement. Doesn't seem like a stretch to me.
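For what it's worth, the DX12 plumbing for "remove the SLI requirement" already exists: a resource created on one device can be opened on a completely separate adapter through an NT handle, so a second GPU's output (AA, post-processing, whatever) can be handed back to the primary. Very rough sketch below; heap-tier / committed-vs-placed details vary by hardware and the shared-fence synchronization you'd need in practice is glossed over.

```cpp
// Sketch: create a cross-adapter shareable buffer on device A and open it on device B.
// Real code also needs shared fences to synchronize the two GPUs, and some hardware
// tiers want this done as a placed resource in a cross-adapter heap instead.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> ShareBufferAcrossAdapters(ID3D12Device* deviceA,
                                                 ID3D12Device* deviceB,
                                                 UINT64 sizeInBytes)
{
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = sizeInBytes;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    desc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;

    ComPtr<ID3D12Resource> bufferOnA;
    deviceA->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_SHARED | D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER,
        &desc, D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS(&bufferOnA));

    // Export as an NT handle, then open the same memory on the second device.
    HANDLE handle = nullptr;
    deviceA->CreateSharedHandle(bufferOnA.Get(), nullptr, GENERIC_ALL, nullptr, &handle);

    ComPtr<ID3D12Resource> bufferOnB;
    deviceB->OpenSharedHandle(handle, IID_PPV_ARGS(&bufferOnB));
    CloseHandle(handle);
    return bufferOnB; // same memory, now visible to the second GPU
}
```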
 
Is it not possible to do that now? I thought people were using one weak Nvidia GPU for PhysX and another one for the hard work for some time now.

To a super limited extent, where the PhysX GPU can't be used for any other task, yes. I was suggesting they may expand on that model, since it 'works' for them already.
 
Whoa, whoa, wait just a minute. Does this mean I can run my GTX 560 4GB and my GTX 760 4GB together? And ATI and Nvidia cards, WTF? Wow, things have come a long way in my 2-year absence.
 
You mean, look at how much VRAM this single beta DirectX 12 engine uses. Making a blanket statement about the entirety of DX12 based on a single beta engine shows astounding ignorance, which you then confirmed with your "OMG GTX 970 HAZ 3.5GB" comment.
So the ignorant one is the person who posts facts, and not the one who predicts things against the facts, which clearly show the tendency for DX12 features to increase the demand for VRAM. OK, keep trolling instead of finding proof, since there isn't any (at least for now)...
 
And when Pascal releases that (supposedly) fixes/improves the async performance....................

Not too sure about that; I didn't hear a word about Pascal's async until the AotS benches hit the internet and exposed NV's weakness.
 
Not too sure about that; I didn't hear a word about Pascal's async until the AotS benches hit the internet and exposed NV's weakness.

More specifically, NV haven't mentioned the full architecture of Pascal as it is under NDA. Like Arctic Islands. The assumption is that, knowing the move to DX12 would bring a lower-level API and having seen AMD utilise Mantle to reasonable effect, NV aren't exactly going to have sat on their laurels. With Maxwell, the drive was clearly to knock down CUDA's compute (which is great at parallelism) to focus on power efficiency with faster clocks. That gave the 980 Ti enormous headway for DX11, which is still the current and ruling API. Betting on GCN's DX12 advantage against Nvidia's DX11-focused Maxwell hasn't been a fantastic move by AMD so far: the latest figures show that despite Fiji parts being readily available, they are not selling as well as Maxwell parts.

http://hexus.net/business/news/comp...t-share-expected-hit-new-low-current-quarter/

I have no idea how Pascal will fare against Polaris. Perhaps Polaris (or whatever the arch is called) will have enough tweaks to finally and resoundingly become the gold standard in gfx architecture. Maybe Pascal will be a Maxwell Volta bastard son and simply hold on till Volta arrives proper?

What is for sure is that this single DX12 bench isn't any revelation. If Async isn't a dev's priority (for whatever reason), then GCN loses its edge. If Nvidia buy into some AAA titles before Pascal is out (with assumed parallelism), they'll be sure to 'persuade' a lower focus on Async.

Roll on Summer - another round of gfx wars :cool:
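Since "Async" gets thrown around a lot in these threads, this is all it means at the API level: the game creates a separate compute queue next to its graphics queue, and whether the two actually execute concurrently is entirely up to the hardware/driver, which is why GCN and Maxwell land so differently. Generic D3D12 sketch, nothing AotS-specific:

```cpp
// Sketch: a DIRECT (graphics) queue plus a COMPUTE queue on the same device.
// Overlap between the two is a hardware/driver scheduling decision, not an API guarantee.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateGraphicsAndComputeQueues(ID3D12Device* device,
                                    ComPtr<ID3D12CommandQueue>& gfxQueue,
                                    ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```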
 
What is for sure is that this single DX12 bench isn't any revelation. If Async isn't a dev's priority (for whatever reason), then GCN loses its edge.
DX12 redux:
AotS pre-alpha gets released, AMD rulez, AMD predicted to rule the known world even though the engine is tailored to GCN and isn't slated for wide uptake by game studios
Fable Legends (using the much more widely used UE4 engine) beta DX12 benchmark arrives and adds some perspective to the subject
A few months on, AotS does the rounds again and the AMD cheerleaders and doomsayers are back doing their Async/web version of the Nuremberg Rally.

It's almost as though there is an epidemic of attention deficit disorder.

If Nvidia buy into some AAA titles before Pascal is out (with assumed parallelism), they'll be sure to 'persuade' a lower focus on Async.
If it is seen as a weak area for the architecture then most assuredly. If those AAA titles are built on the UE4 engine, it would almost be a certainty that they would do exactly as AMD/Oxide have done with AotS, and make the game settings at their highest level (the marketing/tech-site bench level) heavy with DX12 transparency and custom blending features, since AMD's current architectures require software emulation for concerted use of conservative rasterization/ROVs.
I have no idea how Pascal will fare against Polaris. Perhaps Polaris (or whatever the arch is called) will have enough tweaks to finally and resoundingly become the gold standard in gfx architecture.
You aren't the only one with no idea - you can count virtually everyone else in on that particular list. History says that both Nvidia and AMD/ATI have had comparable performance down their product stacks for the best part of twenty years. With the exception of a particularly well executed G80 and a not particularly well executed R600 at the dawn of the unified shader architecture era, it has been largely give and take depending upon IHV feature emphasis, even when the companies have used different manufacturing partners (such as ATI using TSMC's 130nm Lo-K and Nvidia using IBM's 130nm FSG (fluorosilicate glass) process). I really don't see that trend changing in the space of a single GPU generation. TSMC is already shipping commercial 16nmFF+ products, and Samsung/GloFo are ramping 14nmLPP, so aside from wafer start availability, the manufacturing side of the equation shouldn't be an issue either.
 
If I understood correctly, that would be available in all DX12 games? So this technology allows using two Nvidia graphics cards on SLI-uncertified motherboards without DifferentSLI/HyperSLI (those don't give a 100% guarantee that it works fine)?
 