
Why AMD will perform better than NVIDIA in DirectX 12

Do you have hard evidence that Bohemia are making Arma 3 DX12? I've heard the gossip but haven't seen any facts that it's actually going to happen with Arma 3, just the possibility with the new map they're doing.
 
All you can really go on is the developer's word, and as far as I know, the word is that ArmA 3 will get DX12 support.

Well, that would explain why I never noticed it; I keep away from such sites. The point that bothers me is that it's there and not on their own website/forum. Maybe they're trying to get it added but need to see how it goes, so I don't believe they know at this time.
 
Getting a list together might be a more worthwhile exercise than what is presently being argued. As far as I'm aware:

Fable Legends (Unreal Engine 4)
Gears of War Ultimate (Unreal Engine 3)
Ashes of the Singularity (Nitrous engine)
Deus Ex: Mankind Divided (Dawn Engine)
Ark: Survival Evolved (Unreal Engine 4)
DayZ (Real Virtuality 3 engine)
ArmA 3 (Real Virtuality 3 engine)
Star Citizen (CryEngine 4)
Doom (id Tech 6 engine)

You would think that Frostbite would patch DX11 games to DX12 in addition to shipping new DX12 titles, given their Mantle association, but I haven't heard anything concrete - just PR-speak from the developer.

Much obliged!


I'm sorry to say this, and it kinda makes me feel dirty. The developer streams from Digital Extremes indicated that DX12 support is on their long list of future upgrades for Warframe. As PBR has been on that list for the better part of a year now, they may well have it in place by the time Pascal and Arctic Islands GPUs are available... maybe... half implemented... sigh... failure...
 
In that position they'll have a harder time selling Fiji. If Hawaii performs as well as the 980 Ti in DX12, Fiji becomes a white elephant, bearing in mind that on some of the AoS benches Hawaii matches Fiji.

To be fair, I don't think AMD has relied on their high-end cards making them money for some time. Their mid-range price cuts have been crazy in the past 5 years. I honestly don't think AMD's business model targets the people in this forum.
 
To be fair, I don't think AMD has relied on their high-end cards making them money for some time. Their mid-range price cuts have been crazy in the past 5 years. I honestly don't think AMD's business model targets the people in this forum.
While that's true, the old saying goes, "if you want to sell station wagons, you need to have a sports car in the window." So while we're not the consumer base, their consumer base is affected when they don't compete well at the enthusiast level.

besides, amd shot themselves in the foot with their code names. everyone knows Hawaii is better than Fiji ;)
 
Fury X PCBs are still designed by partners; they're just restricted to the reference design. Meaning they make the whole thing, but they have to stick to blueprints provided by AMD.

that's like saying "you can choose any color you want, but it has to be blue." the PCBs aren't designed by partners, they are built by partners. everything is identical, from the worthless Cooler Master pump they all use to the capacitors, VRMs, etc.

besides, amd shot themselves in the foot with their code names. everyone knows Hawaii is better than Fiji ;)

tahiti >>>>>>>>>>>>>> hawaii and fiji
 
Wow, bad case of déjà vu... :p


Halfway down the page:
http://www.extremetech.com/gaming/2...he-singularity-amd-and-nvidia-go-head-to-head


nem → Joel Hruska · 11 days ago

OK people, what do you think about this great explanation of why AMD should be better than NVIDIA in DirectX 12 thanks to its better support for asynchronous shaders? Check it out - this is not my argument, but it seems well argued.

https://a.disquscdn.com/upload...

first the source: http://www.overclock.net/t/156...

Well I figured I'd create an account in order to explain away what you're all seeing in the Ashes of the Singularity DX12 benchmarks. I won't divulge too much of my background information, but suffice to say that I'm an old veteran who used to go by the handle ElMoIsEviL.

First off, nVidia is posting their true DirectX 12 performance figures in these tests. Ashes of the Singularity is all about parallelism, and although Maxwell 2 does better in that department than previous nVidia architectures, it is still inferior compared to the likes of AMD's GCN 1.1/1.2 architectures. Here's why...
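
The full breakdown is at the overclock.net link above, but the core of it at the API level: DX12 lets a game feed the GPU through separate graphics and compute command queues, and GCN's hardware schedulers (the ACEs) can run work from both side by side. A bare-bones sketch of what that looks like - purely illustrative, with error handling trimmed:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Default adapter, minimum feature level for a DX12 device.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // One queue for graphics work...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...and an independent queue for compute work.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Work submitted to the two queues is only ordered where you say so
    // (via ID3D12Fence); otherwise the GPU is free to run it concurrently.
    // That concurrency is what "async compute" refers to.
}

Whether the two queues actually overlap is up to the hardware: GCN's ACEs can interleave compute with graphics, while (per the quoted argument) Maxwell 2 has to context-switch between them, which is the difference the AoS numbers seem to expose.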



And....
http://wccftech.com/amd-major-driver-update-catalyst-157-dx12/




And....

http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/400#post_24321843


All credit for perseverance though. :P
 
that's like saying "you can choose any color you want, but it has to be blue." the PCBs aren't designed by partners, they are built by partners. everything is identical, from the worthless Cooler Master pump they all use to the capacitors, VRMs, etc.



tahiti >>>>>>>>>>>>>> hawaii and fiji

You clearly don't understand what "reference design" means... It actually means "you can use whatever color of stickers you want, as long as the hardware is built to the designer's (AMD in this case) reference specs".
 
You clearly don't understand what "reference design" means... It actually means "you can use whatever color of stickers you want, as long as the hardware is built to the designer's (AMD in this case) reference specs".

i think the analogy confused you. forget the colors and stickers. you said that the fury x boards are "still designed by partners." then you said "they are just restricted to reference design." clearly, both things cannot be true. you either design it, or you follow the blueprints. all the Fury X cards use the exact same components, down to the worthless Cooler Master pump and the VRMs. which is why you don't see a fury x with "military grade" components or some stupid shit like that. at best, the different cards may have different fans on the radiators. since this is the case, no one is designing anything.
 
I may have used the incorrect word for that (which is why you're confused). I meant manufacture. Design would be making their own board layout...
 
I may have used the incorrect word for that (which is why you're confused). I meant manufacture. Design would be making their own board layout...

ehh, whatever. i guess we are on the same page after all.

in any case, back to the original conversation. i read all this crap and after all of that i assumed the 980 Ti would get stomped into the ground. but that's not what i'm seeing in actual comparisons...

[image: DX12-Batches-4K-4xMSAA.png]


those benches are from Ashes of the Singularity, and these are stock cards. the 980 Ti, and I have 3 of them, OCs like a bat out of hell. all 3 of my 980 Tis can do at least 1475 MHz without ever throttling. one of them can do 1520 MHz.

am i missing something? am i looking at the wrong thing?
 
What did ya expect from someone who uses the AMD logo for their avatar? Pascal is far from done and will likely have it. AMD's future on DX12 looks good, but I wouldn't bank on it given AMD's track record over the last few years of taking things that look good and turning them into a turd (cough Hawaii and Fiji launches cough). I would bet money on Nvidia way before AMD.

I use an AMD logo as an avatar because I respect the company. They have always been forward-looking. The Athlon 64 made 64-bit computing a reality for the masses, and forced Intel to make 64-bit x86 CPUs at a time when they were going to move to Itanium and cut off competition. AMD saw that multi-threaded software was the future, and built the Bulldozer series of chips for that multi-threaded future, along with baking asynchronous compute into their GCN GPUs. nVidia and Intel, on the other hand, only ever seem to cheat, throw their weight around, or pay off companies NOT to innovate. It disgusts me. If that makes me some kind of 'fan' of AMD, then I guess that's what I am.

Back on to the topic at hand, what's going on here with DX12 supporting Mantle's features is all part of AMD's long-term game plan. AMD has not turned anything into a 'turd'. Instead, Microsoft has for years been dragging its feet by holding on to a single-threaded API (DX11 and older) that can't take advantage of multi-core CPUs (hence the familiar 'only one core is heavily loaded' syndrome in all DX11 and older games), and AMD, by porting the game console API over to the PC in the form of Mantle, has once again pushed the industry forward, kicking and screaming. nVidia simply didn't expect a Mantle-like API to become the standard so quickly, and they've been caught with their pants down, plain and simple. This isn't fanboy-ism; it's a simple, empirical observation supported by overwhelming evidence. nVidia can try to cheat and pay off game companies to use Gimp-works, but the game companies will have to ask themselves whether it's worth it to ignore the consoles and only make a game run OK on an nVidia card on PC, instead of making it run well on all the consoles and Radeon GPUs. AMD can and will also pay companies to optimize for Radeon (Battlefield 4, Civilization, etc.).
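
To make the single-threaded vs. multi-threaded point concrete: under DX11, rendering commands funnel through one immediate context on one core, while DX12 command lists can be recorded on as many threads as you like and then submitted in one cheap call. A rough sketch, assuming a DX12 device and with the actual per-thread recording elided:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qDesc = {};
    qDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    const int kThreads = 4;
    ComPtr<ID3D12CommandAllocator> alloc[kThreads];
    ComPtr<ID3D12GraphicsCommandList> list[kThreads];
    std::vector<std::thread> workers;

    for (int i = 0; i < kThreads; ++i)
    {
        // Each thread gets its own allocator + command list; allocators
        // are not thread-safe, so nothing is shared between threads.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&alloc[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  alloc[i].Get(), nullptr,
                                  IID_PPV_ARGS(&list[i]));
        workers.emplace_back([&, i] {
            // ...record this thread's slice of the frame here...
            list[i]->Close();   // recording happens in parallel, per thread
        });
    }
    for (auto& w : workers) w.join();

    // One submission of everything the threads built.
    ID3D12CommandList* raw[kThreads];
    for (int i = 0; i < kThreads; ++i) raw[i] = list[i].Get();
    queue->ExecuteCommandLists(kThreads, raw);
}

That free-threaded recording is the part DX11 structurally couldn't offer, and it's a big part of why DX12 draw-call throughput can scale with core count instead of bottlenecking on one thread.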

I just see a performance boost for AMD GCN-based cards. What actually worries me is whether I should buy a game at all if it's nVidia-backed, as we all know what nVidia is like.

Game companies should support both nVidia and AMD, but the chances of that happening are very unlikely. I know that when a DX12 game is released and it's nVidia-backed, I'll be holding back a while to make sure I get what I pay for.

And to me this is not nVidia vs. AMD; it's about AMD making GCN work better for their hardware, and the only real comparison they can use is nVidia, as they are not going to use Intel's IGP lmao.

Yes, exactly. It's all fine and well to 'optimize' a game to run especially well on a given architecture, but when you deliberately sabotage a game so it'll run badly on the other GPU company's cards, owners of the targeted cards simply won't buy the game, and no game developer wants to limit their sales. nVidia's gimp-works is likely only going to alienate most game developers even further from nVidia. It's a short-term, desperate strategy to try to drive sales of already-obsolete Maxwells, because nVidia knows full well they've been caught with their pants down by the unexpected release of DX12 with its multi-threaded access to the GPU (remember, it takes a couple of years to design and manufacture a GPU), without the hardware-level context switching and asynchronous shader support to take advantage of it.

If you somehow believe AMD was "lucky" here, you're an idiot. AMD made Mantle and their hardware symbiotic. Instead of the usual Nvidia douche-baggery, they let Khronos and MS adopt their tech and made it open source. AMD may have flaws, but they know that the Nvidia specialty stuff (HairWorks, etc.) is poison for the industry.

Yes, exactly. Trouble is, many nVidia card owners WANT to believe that AMD is 'weak' and couldn't possibly have made such a deft strategic manoeuvre. They want the world to be simple, with 'winners' and 'losers', and the world isn't always simple. AMD was playing the long game, betting on multi-threaded software and hardware, and is now the big fish; nVidia is the company on the ropes here. I would imagine that with Zen-based APUs sporting HBM and GCN 2.0 cores next year, many gamers will have less need to buy an add-in board at all. If Intel steps up its game with faster integrated graphics, nVidia will slowly be crushed between high-performance Intel and AMD APUs.

Keep drinking the crazy juice. Maxwell is certainly not obsolete now, nor will it be when DX12 is everywhere. While it may not perform as well as Fiji, it'll still do the required work, perhaps only performing as well as AMD's Hawaii rebrands (if AoS is the benchmark for DX12).
To think Nvidia can't address Maxwell's shortcomings in DX12 with Pascal is also quite naive. Trust in the big nasty team green; they'll get their act together.
Certainly AMD have a very bright DX12 future, and they definitely have a great theoretical advantage over Nvidia now, but again, time will tell what's really going to happen.

Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them. As for DX12 being everywhere, Windows 10 is seeing an adoption rate that's unprecedented in Windows history. Steam is reporting that 17% of users are already using Windows 10 after only 1 month! That's higher than Windows 7's adoption rate. If it keeps going like this, nearly every gamer will have upgraded to Windows 10 by Christmas, just in time for the release of a bunch of DX12 titles that will run better on Radeons than nVidia cards.
 
Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them. As for DX12 being everywhere, Windows 10 is seeing an adoption rate that's unprecedented in Windows history. Steam is reporting that 17% of users are already using Windows 10 after only 1 month! That's higher than Windows 7's adoption rate. If it keeps going like this, nearly every gamer will have upgraded to Windows 10 by Christmas, just in time for the release of a bunch of DX12 titles that will run better on Radeons than nVidia cards.
pls use the edit and multi-quote options. double posting isn't tolerated, triple even less.
 
ehh, whatever. i guess we are on the same page after all.

in any case, back to the original conversation. i read all this crap and after all of that i assumed the 980 Ti would get stomped into the ground. but that's not what i'm seeing in actual comparisons...

[image: DX12-Batches-4K-4xMSAA.png]


those benches are from Ashes of the Singularity, and these are stock cards. the 980 Ti, and I have 3 of them, OCs like a bat out of hell. all 3 of my 980 Tis can do at least 1475 MHz without ever throttling. one of them can do 1520 MHz.

am i missing something? am i looking at the wrong thing?
It's a synthetic benchmark. Nobody cares.....
 
Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them. As for DX12 being everywhere, Windows 10 is seeing an adoption rate that's unprecedented in Windows history. Steam is reporting that 17% of users are already using Windows 10 after only 1 month! That's higher than Windows 7's adoption rate. If it keeps going like this, nearly every gamer will have upgraded to Windows 10 by Christmas, just in time for the release of a bunch of DX12 titles that will run better on Radeons than nVidia cards.
that's half the battle... now what about DX12 games being released before Pascal? A handful?
 
that's half the battle... now what about DX12 games being released before Pascal? A handful?
Probably.
Just to put this whole "AMD is going to rule the world" jag that anubis44 seems to be on into perspective - and probably send him into an apoplectic triple-post frenzy into the bargain - the first DX12 (patched) game will be ARK: Survival Evolved, I believe - that's the one with GameWorks baked in. Now, I'm reasonably sure that AMD's Gaming Evolved program won't go overboard on async compute for compute's sake - not just because it hobbles performance for owners of its competitors' cards, but because independent devs probably aren't going down that road either, and because in a "shots fired" scenario, no one wins. Sure as hell, if it comes to that, there is a pretty big chance that UE4 (which is already due to power over a hundred games, some of which will be AAA) - with its close association with Nvidia and baked-in support for conservative rasterization, raster ordered views, and hybrid ray tracing - could very well return the favour in spades. So, bearing that in mind, I kind of doubt that either side has much to gain by exposing the architectural shortcomings of the other.
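
(For reference, conservative rasterization and ROVs are optional DX12 features with support tiers rather than baseline requirements, so an engine has to ask the driver at runtime and fall back when the tier isn't there - roughly like this, error checks omitted:)

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Ask the driver which optional DX12 features this GPU exposes.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));

    std::printf("Raster ordered views:            %s\n",
                opts.ROVsSupported ? "yes" : "no");
    std::printf("Conservative rasterization tier: %d\n",
                (int)opts.ConservativeRasterizationTier);
    std::printf("Resource binding tier:           %d\n",
                (int)opts.ResourceBindingTier);
}

On 2015 hardware the answers split along vendor lines, which is exactly why neither camp can lean too hard on "its" features without cutting off the other's customers.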

By the time DX12 gains momentum (a lot of announced AAA titles are still DX11 going forward), we will be looking at new architectures from both vendors. It isn't much different from the move from DX9. ATI and Nvidia both had unified shader architectures (R600/G80) on the drawing board years before D3D could take advantage of them - and they didn't eventuate until they were required. Under DX9, separate pixel and vertex pipelines gave adequate performance with a low power-use penalty.
 
My opinion right now is that the AMD cards out there are going to run a little better in DX12 than the Nvidia cards, but that is quite a stretch from saying that DX12 games are going to run like shit on Nvidia cards. I say that for two reasons, and both are rooted in real-world business. Publishers aren't going to give up billions of dollars in sales by allowing 75% of their PC gamer customers to be screwed out of buying their games. The second is uglier. While AMD went $400 million into the red last year and owes more than it's worth, Nvidia made a profit of $630 million. If it came down to it, Nvidia would pay some publishers to make games run better on their cards. Yes, that's shitty, but that's business, and God help AMD if they take too much food off of Intel's plate with Zen. Intel made a profit of $11.7 billion last year - over 5 times what AMD is even worth.
 
Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them. As for DX12 being everywhere, Windows 10 is seeing an adoption rate that's unprecedented in Windows history. Steam is reporting that 17% of users are already using Windows 10 after only 1 month! That's higher than Windows 7's adoption rate. If it keeps going like this, nearly every gamer will have upgraded to Windows 10 by Christmas, just in time for the release of a bunch of DX12 titles that will run better on Radeons than nVidia cards.

We're on the same philosophical side, yet it's time for a reality check.

Windows 10 has high adoption numbers because it's a free upgrade for anyone who has a recent version of Windows (the last 2 major releases, 7 and 8). The adoption rate is not, I repeat NOT, a factor of anything else. As such, adoption rates will be artificially inflated as people either not inclined to technology, or those who love being early adopters, get their "free" upgrade. Even now, the adoption rate has slowed to the point where the servers don't have a huge queue of W10 downloads; if you want it, you just click that stupid little flag and initiate the download. This means the 17% figure is likely only going to creep toward 18% going forward. 10.1, or whatever the first service pack is called, will buoy the numbers again, but we're past the point of early adoption and "free" upgrading making adoption rates soar.

As far as being ready in time for the holidays (December 2015), that's laughable. DX12 is currently being tested using software that exercises a handful of DX12 features. The cited async shader testing is only looking at shaders. The tests showing AMD is advantaged in draw calls demonstrated only that. The simple truth is that writing code to test one metric is relatively easy, but making that code a fundamental part of a game engine in less than a year (DX12 was "finalized" this year) is pretty insane. And assuming it's somehow implemented, you've still got to be able to use it. As little as I like to say it, these are AMD drivers we're talking about. They've gotten better, but realistically, tackling a whole new revision of DX with your drivers is a challenge. Even with the experience from developing Mantle, I'll wait until real-world testing bears out AMD being superior before I swallow the red pill. Heck, I'm saying this from a 7970, if you can believe that.

Moving on, Maxwell isn't a slouch. When TSMC basically gave up on a die shrink and doomed us to a third generation of 28 nm lithography, both Nvidia and AMD had a decision to make. Nvidia said: we're going to invest time and resources and make Maxwell the ultimate architecture for DX11. They invested in finding all sorts of DX11 optimizations, and Maxwell's performance bears that out. It's an excellent demonstration of squeezing the last bit of use out of the 28 nm node, barring the obvious controversy with the memory structure. AMD went the other way. They basically rebranded cards, focused on new memory technology (HBM1), and aimed at new standards (DX12, which was Mantle at the time). If you remove Nano and Fury from AMD's 3xx stack, you'll find there's nothing new. The minor increases in clocks, combined with increases in power consumption, basically highlight how little AMD's current cards are really aimed at DX11.




In short, AMD has a minor lead in DX12 right now, but that's because they've given up on DX11. Their current product stack bears that out, and it isn't an advantage that will last very long. Between DX12 being a new standard, Pascal and Arctic Islands dropping in 2016, and completely new memory arriving with HBM2, the AMD vs. Nvidia debate is pointless. Both companies' current offerings are good, but not good enough to warrant upgrading from any card you've bought in the last three years. 2016 will change that argument, but DX12 early-adoption advantages aren't really going to sell AMD over Nvidia today.
 
Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them.

Who exactly is in denial, when I said this as part of my post?

Certainly AMD have a very bright DX12 future and they definitely have a great theoretical advantage over Nvidia

@lilhasselhoffer and @HumanSmoke have explained in great detail the very reasons why async is not super duper relevant RIGHT NOW. And when it is relevant, both red and green will have new architectures to sell to the masses.
 
Assuming Pascal will be better at async, that is. I've heard all along that Pascal will be better at compute because their professional cards need it, but I have no idea whether that translates to being better at async. The rumor I've been seeing is that Big Pascal has already taped out and will come out before the mid-range and entry-level Pascal GPUs, but that's just rumor. Hopefully it's better at async, because the design is a done deal now.

I think what some people are focusing on is that a lot of people aren't going to buy a Pascal or Arctic Islands card if they own a Maxwell right now. A lot of people only upgrade every other generation, so they will be stuck with Maxwell for a couple more years at least. There will undoubtedly be several DX12 games over that lifespan that they will want to play. I still don't believe they will be drastically affected by a lack of async, though. It will probably get worked through somehow.
 
I think what some people are focusing on is that a lot of people aren't going to buy a Pascal or Arctic Islands card if they own a Maxwell right now. A lot of people only upgrade every other generation, so they will be stuck with Maxwell for a couple more years at least. There will undoubtedly be several DX12 games over that lifespan that they will want to play. I still don't believe they will be drastically affected by a lack of async, though. It will probably get worked through somehow.

I think everyone's just losing sight of the fact that we're talking about current-gen cards. Right now, the only people who benefit are the ones who got lucky with a high-end AMD card, since their investment drags out a little longer before needing an upgrade.
 
To be fair, I don't think AMD has relied on their high-end cards making them money for some time. Their mid-range price cuts have been crazy in the past 5 years. I honestly don't think AMD's business model targets the people in this forum.

That's an apologist's remark of no value. I remember the same being said about CPUs when FX turned out to suck from here to infinity, but there is no truth in it; case in point, Zen, which AMD says is a return to the high end... Every company that wants to play ball has flagships that carry the lower tiers of products. If your flagships suck, the rest won't sell, very simple. If BMW only made 3-door mom's grocery carts, they would never sell any cars. Hell, even ultra-budget Dacia has a flagship car, go figure.
 