
AMD GPUs See Lesser Performance Drop on "Deus Ex: Mankind Divided" DirectX 12

Both the frame time and average FPS results in the TechReport review are different from what Hilbert measured at Guru3D. Perhaps TechPowerUp can run these tests and give us the final word.
TechReport and ComputerBase used actual gameplay scenarios in their testing, while Guru3D used the built-in benchmark.
 
TechReport and ComputerBase used actual gameplay scenarios in their testing, while Guru3D used the built-in benchmark.

The benchmark might be, dare I say it, favourable to one side.
 
I'm pretty sure AMD's poor DX 11 performance is due to the lack of multi-threaded command list ("deferred context") support, a DX 11 feature NVIDIA GPUs do have. It's got little to do with driver quality otherwise.
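Anyone curious can check what their own card reports: whether the driver accelerates DX 11 command lists is exposed through CheckFeatureSupport. A minimal sketch, assuming a plain D3D11 device on the default adapter and with error handling kept to the bare minimum:

```cpp
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main()
{
    // Create a default hardware D3D11 device on the primary adapter.
    ID3D11Device* device = nullptr;
    D3D_FEATURE_LEVEL level;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, &level, nullptr)))
        return 1;

    // Ask the driver whether it supports concurrent resource creation and,
    // more relevant here, driver-accelerated command lists (DX11 multi-threaded rendering).
    D3D11_FEATURE_DATA_THREADING threading = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &threading, sizeof(threading));

    printf("DriverConcurrentCreates: %d\n", threading.DriverConcurrentCreates);
    printf("DriverCommandLists:      %d\n", threading.DriverCommandLists);

    device->Release();
    return 0;
}
```

If DriverCommandLists comes back FALSE, the runtime still accepts deferred contexts but serialises them in software, which is where the single-threaded DX 11 submission bottleneck talk comes from.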

Nvidia's claims of DX 12 support are definitely shaky; it's similar to the 3.5 GB on the 970. Nvidia "has" DX 12 support, but it's mostly software-based rather than hardware. They support very few DX 12 features in hardware.
From https://developer.nvidia.com/dx12-dos-and-donts

On DX11 the driver does farm off asynchronous tasks to driver worker threads where possible.


https://developer.nvidia.com/unlocking-gpu-intrinsics-hlsl

None of the intrinsics are possible in standard DirectX or OpenGL. But they have been supported and well-documented in CUDA for years. A mechanism to support them in DirectX has been available for a while but not widely documented. I happen to have an old NVAPI version 343 on my system from October 2014 and the intrinsics are supported in DirectX by that version and probably earlier versions. This blog explains the mechanism for using them in DirectX.

Unlike OpenGL or Vulkan, DirectX unfortunately doesn't have a native mechanism for vendor-specific extensions. But there is still a way to make all this functionality available in DirectX 11 or 12 through custom intrinsics. That mechanism is implemented in our graphics driver and accessible through the NVAPI library.



NVIDIA has had hit-the-metal intrinsics and Direct3D API kitbashing for a long time, while AMD only gained a similar capability with Doom's Vulkan renderer.
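For what it's worth, the host-side part of that NVAPI mechanism is tiny: initialise NVAPI and reserve a "fake" UAV slot that the driver intercepts, then the shader includes NVIDIA's extension header against the same slot. A rough sketch; the entry-point names and the slot number below are from memory of the NVAPI SDK, so treat them as assumptions rather than gospel:

```cpp
#include <d3d11.h>
#include "nvapi.h"   // ships with the NVAPI SDK, not the Windows SDK

// Reserve a UAV slot for NVIDIA's HLSL extensions on an existing device.
// The HLSL side is expected to do:
//   #define NV_SHADER_EXTN_SLOT u7      // must match the slot chosen here
//   #include "nvHLSLExtns.h"
// after which shaders can call the extension intrinsics (shuffle, ballot, etc.).
bool EnableNvIntrinsics(ID3D11Device* device)
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;                    // most likely not an NVIDIA driver

    const NvU32 kExtensionSlot = 7;      // arbitrary choice for this sketch
    return NvAPI_D3D11_SetNvShaderExtnSlot(device, kExtensionSlot) == NVAPI_OK;
}
```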
 
While of course a possibility, I wouldn't put any money/stock/holding of breath in that happening. Call me cynical if you must, but Nvidia won't lose significant market share in the next two generations even if they deserve to, for the same reason Apple doesn't lose market share even though they deserve to: they have a refined and sadly effective hype and marketing machine that constantly builds and stokes the perception that they are the superior choice.

Though I hope you're right about the price drops that need to happen for all consumers, in the age of Likes, Views and Trending, mindshare is the real metric that maintains the market-share status quo, and Nvidia is throwing too much TWIMTBP money around for that to change any time soon.
I mostly agree, but the thing is that the market can go either way sometimes. Who expected ARM to dominate the smartphone/device market, for example? But it did, and now the giant Intel is feeling the "heat" in that segment.
 
Because in Germany ComputerBase is called "NVidiaBase", similar to the Polish PC Lab (called "PC LOL") and a few other sites around the world where the bias can't simply be explained by an error in the benchmarks, reviews, etc.

Truth.
 
Because in Germany ComputerBase is called "NVidiaBase", similar to the Polish PC Lab (called "PC LOL") and a few other sites around the world where the bias can't simply be explained by an error in the benchmarks, reviews, etc.

But you didn't call them biased when they showed AMD's Doom Vulkan results.

GameGPU, TechReport and ComputerBase showed completely different results from Guru3D, yet somehow they are the ones who are wrong, not Guru3D?
 
Another DX12 thread, another bunch of willy-waving.

Can we maybe rather wait until a non-AMD and non-NVIDIA-affiliated company actually writes a game that uses DX12 from the ground up, and then compare performance? Because every DX12 renderer until this point has been a half-assed second thought at best and terrible at worst.

Mankind Divided is a particularly poor example: a beta DX11 renderer port of a console port. Like, seriously.
 
Another DX12 thread, another bunch of willy-waving.

Can we maybe rather wait until a non-AMD and non-NVIDIA-affiliated company actually writes a game that uses DX12 from the ground up, and then compare performance? Because every DX12 renderer until this point has been a half-assed second thought at best and terrible at worst.

Mankind Divided is a particularly poor example: a beta DX11 renderer port of a console port. Like, seriously.

Games aren't built for APIs; they're built because there is a business case and economic incentive. They're built from new dev kits and applications, and those are here, have been here, for a while now. You know, engines too - like CryEngine, which has been in use for over a decade, showing us 'true next gen' before anyone could run it, and it is only NOW that we are starting to see console games catch up to that.

Consoles have had close-to-the-metal programming for a long time.
PCs have had massive amounts of processing power (compared to consoles) for a long time.

The irony is that PC gaming doesn't NEED DX12 to progress; the consoles need it. VR and mobile need it. Systems with weak, low-power CPUs need it. And at the top end, PC gaming needs it to keep surpassing everything else. I suppose that last little bit is what you're looking for, but let's face it, we won't see another Crysis for years.

You know how those high end GPUs are sold today? With inferior technology that really has no merit in real gaming, like VR, like 4K on crappy panels, like HDR with high latencies. These techs are all about moving hardware, not about a great experience, and they're all cash cows right now. None of them are mature.

In the end, the only thing that truly carries a platform and technological progress is an honest investment in stuff we can do with it. That doesn't mean more hardware, it means good, lovingly crafted software. Apple and Blizzard are some of the very few companies that understood this. MS has a love-hate relationship with it. A few smaller game developers also have it.
 
I have to quote you both because I agree with points in both...

Another DX12 thread, another bunch of willy-waving.

Can we maybe rather wait until a non-AMD and non-NVIDIA-affiliated company actually writes a game that uses DX12 from the ground up, and then compare performance? Because every DX12 renderer until this point has been a half-assed second thought at best and terrible at worst.

Mankind Divided is a particularly poor example: a beta DX11 renderer port of a console port. Like, seriously.

This is my take on DX12 right now too. There has been no "standard" as far as what devs are "picking and choosing" from the DX12 feature set, so there is just no consistency for anybody or any game so far. It just ends up like this game, with a lot of "WTF?" and "What's the point?" One game uses "this feature" and the next game uses "another one", and we end up with these oddball bench results and wacky performance...
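Part of that inconsistency is baked into the API itself: a lot of DX12 capability is optional and reported per GPU as feature tiers, so every game/vendor combination ends up on a different subset. A minimal sketch of how an engine queries a few of those options, assuming an already-created ID3D12Device (field names as I remember them from the D3D12 headers):

```cpp
#include <d3d12.h>
#include <cstdio>

// Print a few of the optional DX12 capabilities that games "pick and choose" from.
void PrintDx12Options(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        printf("Resource binding tier:      %d\n", opts.ResourceBindingTier);
        printf("Tiled resources tier:       %d\n", opts.TiledResourcesTier);
        printf("Conservative raster tier:   %d\n", opts.ConservativeRasterizationTier);
        printf("Rasterizer-ordered views:   %d\n", opts.ROVsSupported);
    }
}
```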






Games aren't built for APIs. They're built from new dev kits and applications, and those are here, have been here, for a while now. You know, engines too - like CryEngine, which has been in use for over a decade, showing us 'true next gen' before anyone could run it, and it is only NOW that we are starting to see console games catch up to that.
Consoles have had close-to-the-metal programming for a long time.
PCs have had massive amounts of processing power (compared to consoles) for a long time.

The irony is that PC gaming doesn't NEED DX12 to progress; the consoles need it. VR and mobile need it. Systems with weak, low-power CPUs need it. And at the top end, PC gaming needs it to keep surpassing everything else. I suppose that last little bit is what you're looking for, but let's face it, we won't see another Crysis for years.

You know how those high end GPUs are sold today? With inferior technology that really has no merit in real gaming, like VR, like 4K on crappy panels, like HDR with high latencies. These techs are all about moving hardware, not about a great experience, and they're all cash cows right now. None of them are mature.

In the end, the only thing that truly carries a platform and technological progress is an honest investment in stuff we can do with it. That doesn't mean more hardware, it means good, lovingly crafted software. Apple and Blizzard are some of the very few companies that understood this. MS has a love-hate relationship with it. A few smaller game developers also have it.

CryEngine is/was a great example of "pushing the limits", and sadly the only recent thing that comes to mind as coming close is The Witcher 3, which lets you turn it up to "ludicrous" and use pretty much every bell and whistle in the "toolbox" if you have the hardware to handle it.

I also agree about the pushing of "immature" tech and selling overpriced cards to try to "brute force" through it; sadly, it's still selling the hardware...
 
I don't understand - the article says AMD sees a lesser performance drop in Deus Ex: MD on DX 12 and links to Guru3D, but when you click the link, AMD cards are actually gaining performance.

How is gaining performance = seeing a lesser performance drop?
 
Well, is it just me or is there no beta tag on these articles? Deus Ex's DX12 mode is in BETA, aka unfinished.
You have to switch to a beta branch to get DX12 enabled... the proper DX12 version is in next week's patch, around September 19th?
 
I don't understand - the article says AMD sees a lesser performance drop in Deus Ex: MD on DX 12 and links to Guru3D, but when you click the link, AMD cards are actually gaining performance.

How is gaining performance = seeing a lesser performance drop?

Try looking at more benchmarks; from what I can see, the results are all over the place. But yes, not all AMD cards gain performance; some even see a performance decrease. And it seems Guru3D only used the built-in benchmark tool for their testing, while some other sites used real gameplay instead.
 
Well, is it just me or is there no beta tag on these articles? Deus Ex's DX12 mode is in BETA, aka unfinished.
You have to switch to a beta branch to get DX12 enabled... the proper DX12 version is in next week's patch, around September 19th?

This is true, and it has been pointed out in this thread.
 
Actually, currently it does. While AMD's current architecture is better placed, or better optimized, for DX12, Nvidia dominates in raw clocks and max TFLOPS.

And no matter how well you optimize, there is a limit to how much the hardware can do. Like with cars, those numbers can be taken with a grain of salt since manufacturers may inflate them, but if you look at the published figures, Nvidia's are higher... Period.

Oops, a typo. What I actually mean is that it doesn't beat Nvidia at the high end if you take max theoretical throughput (TFLOPS) into consideration.
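For what that "max theoretical throughput" figure is even worth, it's just shaders × 2 (one FMA per clock) × clock speed. A quick back-of-the-envelope sketch with roughly the published specs of current cards; the clocks are approximate and boost behaviour muddies things further:

```cpp
#include <cstdio>

// Theoretical single-precision throughput: shaders * 2 ops (FMA) * clock in GHz
// gives GFLOPS; divide by 1000 for TFLOPS.
double TheoreticalTflops(int shaders, double clockGHz)
{
    return shaders * 2.0 * clockGHz / 1000.0;
}

int main()
{
    // Approximate published specs; real sustained clocks vary with boost and thermals.
    printf("GTX 1080 : %.1f TFLOPS\n", TheoreticalTflops(2560, 1.733)); // ~8.9
    printf("Fury X   : %.1f TFLOPS\n", TheoreticalTflops(4096, 1.050)); // ~8.6
    printf("RX 480   : %.1f TFLOPS\n", TheoreticalTflops(2304, 1.266)); // ~5.8
    printf("GTX 1060 : %.1f TFLOPS\n", TheoreticalTflops(1280, 1.709)); // ~4.4
    return 0;
}
```

Which is exactly why paper numbers only get you so far: a Fury X and a GTX 1080 sit within a few hundred GFLOPS of each other on that metric and still don't perform alike in games.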
 
Oops, a typo. What I actually mean is that it doesn't beat Nvidia at the high end if you take max theoretical throughput (TFLOPS) into consideration.

Obviously because AMD hasn't put out their high end range yet.

TFLOPS are way too rough an indicator to be of any use in these kinds of comparisons. They're an indicator, nothing else, and I don't see how they are ever relevant in any GPU discussion. It's a shortcut to avoid getting into details, that is all.

Just because something appears on a marketing slide doesn't make it informative.
 