
Hitman 2 to Get DirectX 12 Renderer Through a Patch Later Today

the results are nearly identical across all sites, maybe the benchmark skews the results in favor of nvidia.
watching actual gameplay comparisons on youtube, the RVII is still outperformed by the 2080 and 1080 Ti, but nowhere near by the margin shown in the benchmark.
this is just sad. the built-in benchmark should be pulled from tpu's list in favor of just testing a 10-second gameplay sequence.

@W1zzard, what do you say?
look at this and compare against the benchmark results, where the 1070 outperforms the V64. Here the V64 matches the 1080!

though the purepc link I posted does not test the in-game benchmark, it still shows the 1070 beating the V64. I just don't know if it represents the actual experience in all locations; probably not.
they're pretty thorough in finding the absolute worst-case scenario for both their gpu and cpu testing. however, I don't think this is consistent with tpu's approach.

I just meant that as an Nvidia-sponsored game, it's obviously a casualty of their "optimization". Hitman 1 was poor perf all around (I had to turn stuff down to get a stable 60 on my OCed 980 Ti).

This chart makes much more sense, though. Polaris does well, but Vega seems to be gimped by something (kinda looks like heavy throttling).
 
I still think this is mostly an in-game benchmark problem.
 
Hitman 2 is just a train wreck in terms of optimization.
Hitman 1 is actually not too bad in terms of optimization; as for the 980 Ti having issues in DX12, that is pretty much a Maxwell issue.
Pascal does not lose performance in DX12, and Turing actually gains as much performance as AMD does in some cases.
 
Hitman 1 was definitely a train wreck in terms of optimization. Hitman 1 is an AMD title, based on the Hitman: Absolution engine (the Dawn engine for Deus Ex: Mankind Divided was forked from the same thing), and was/is a DX12 showcase. It was patched, improved, and optimized throughout its life, on both the game and driver side of things. It took a long while for its DX12 renderer to get stable, and then performance was hit and miss. Early on, AMD had better DX12 performance and Nvidia had better DX11 performance. Now both are about even; this is a heavily benchmarked game, after all.

Hitman 2 did not start from scratch; it is the same engine as the previous game, although obviously improved.
Pretty sure that wasn't the case; Hitman ran faster on a wide variety of hardware in DX12.
Eventually, yes. It is one of very few games where DX12 is almost universally better.

The argument that keeps being made is that a DX12 engine needs to be substantially different from, and incompatible with, a DX11 implementation. I do not see why this would have to be the case. DX12 allows some different types of things at the API level and considerably lower-level optimization, but it does not have to change how a renderer works, just the implementation.
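
As a rough sketch of what I mean (hypothetical buffers and helper names, not from any actual engine), the same renderer-level "draw this mesh" step looks like this in both APIs; only the submission model underneath changes:

Code:
#include <d3d11.h>
#include <d3d12.h>

// Hypothetical sketch: the same "draw one mesh" step in both APIs.
// The renderer logic that calls these functions can stay identical.

void DrawMeshD3D11(ID3D11DeviceContext* ctx, ID3D11Buffer* vb,
                   UINT stride, UINT vertexCount)
{
    UINT offset = 0;
    // Immediate context: the driver tracks state and hazards for us.
    ctx->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
    ctx->Draw(vertexCount, 0);
}

void DrawMeshD3D12(ID3D12GraphicsCommandList* cmd,
                   const D3D12_VERTEX_BUFFER_VIEW& vbView, UINT vertexCount)
{
    // Command list: we record now and submit later; resource barriers and
    // synchronization become the engine's job instead of the driver's.
    cmd->IASetVertexBuffers(0, 1, &vbView);
    cmd->DrawInstanced(vertexCount, 1, 0, 0);
}

The high-level renderer doesn't care which one it calls; the pain is all in what has to happen around these calls.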
 
great, except there are locations where the Vega 64 drops below 1070 performance, under 50 fps.

lol, I'm joking. I don't like Hitman for more reasons than just that.

There are some rare occurrences that I have no clue why they happen, but I'd say 98% of the time I'm well above 80 fps.
It performs below what I'd expect, but I love the game though.
Running an undervolt, +15% power limit, water cooling, and 1100 MHz HBM2.

Edit: I was reacting more to "abysmal"; "lower than expected" is more correct, I'd say :)
 
DX12: Vega 64 + 2700X vs Vega 64 + 9900K = 9% slower
DX11: Vega 64 + 2700X vs Vega 64 + 9900K = 43% slower

Unbelievable!

We are not talking about "A" vs "B" here, just about the boost for the Ryzen CPU with DX12 in comparison with DX11, yes?
 
the gtx 1080 in dx11 beats amd's v64 in dx12 by 10%.
what is amazing is how inconsistent v64 performance is.
in all honesty, it should already have left the gtx 1080 in the dust; it's been almost 2 years since launch.
hard to assess 2700x vs 9900k performance here, though the 2700x is where it should be, pushing a V64 card nicely at 1080p.
but the V64 is lower than the 1080 here, and the 1080 is a 2060 by today's standards.
 

DX11: GTX 1080 + Ryzen 2700 vs GTX 1080 + 9900K: 45%
DX12: GTX 1080 + Ryzen 2700 vs GTX 1080 + 9900K: 8%

See the pattern?
Intel + Nvidia = DX11
Ryzen + Nvidia = DX12
Now the next Ryzen 3000 should fix it.
I find it funny that when you look at the GTX 1080 + 9900K in both DX12 and DX11 (the 99th percentile test), the difference is 48%, with a 7% difference in fps.

Edit: Both are CPU tests, not GPU tests.
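
In case the percentages are confusing, here is how a gap like that reads, with made-up fps numbers (hypothetical, not the review's actual figures):

Code:
#include <cstdio>

// Made-up fps numbers, chosen only to show how a "45%" or "8%" gap is read;
// the real figures are in the linked charts.
int main()
{
    double dx11_2700 = 100.0, dx11_9900k = 145.0;  // 9900K 45% faster in DX11
    double dx12_2700 = 100.0, dx12_9900k = 108.0;  // only 8% faster in DX12
    std::printf("DX11 gap: %.0f%%\n", (dx11_9900k / dx11_2700 - 1.0) * 100.0);
    std::printf("DX12 gap: %.0f%%\n", (dx12_9900k / dx12_2700 - 1.0) * 100.0);
    return 0;
}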
 
D3D11 and D3D12 are effectively forks. To do D3D12 right, the D3D11 code is pretty much worthless. The D3D12 implementation in these cases (like The Division) is often underwhelming. I don't know that we've actually seen a pure-bred D3D12 game yet.
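
To make that concrete, here is a sketch (assumed device objects, not anything from an actual game) of the kind of bookkeeping a D3D12 renderer must do itself that the D3D11 driver used to hide:

Code:
#include <windows.h>
#include <d3d12.h>

// In D3D11, the immediate context and Present() hid all of this.
// In D3D12 the renderer itself must submit work and wait on a fence.
void SubmitAndWait(ID3D12CommandQueue* queue, ID3D12GraphicsCommandList* cmd,
                   ID3D12Fence* fence, UINT64& fenceValue, HANDLE fenceEvent)
{
    cmd->Close();
    ID3D12CommandList* lists[] = { cmd };
    queue->ExecuteCommandLists(1, lists);

    // Explicit CPU/GPU synchronization: signal a fence, then block until the
    // GPU reaches it. Getting this (and resource barriers) wrong is exactly
    // where half-ported DX12 renderers fall over.
    const UINT64 value = ++fenceValue;
    queue->Signal(fence, value);
    if (fence->GetCompletedValue() < value)
    {
        fence->SetEventOnCompletion(value, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
}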

An API is a means, not a goal. I think it's a simple matter of economics. There is too little profit in spending dev time on it, at least at launch, and also at a basic level.

When MS launched DX12, my main point about it actually was that its purpose was to drive new types of content on smaller/mobile devices and, for example, things like VR. It offers great support for efficient use of many-core CPUs, which can run at lower clocks to compete. That is an economic incentive: you want to lower the bar in terms of hardware requirements.
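
That many-core point is the one concrete win: each thread can record its own command list (with its own allocator) and everything gets submitted in one go. A rough sketch, assuming the lists and allocators were created elsewhere:

Code:
#include <d3d12.h>
#include <thread>
#include <vector>

// Each worker records its slice of the frame into its own command list.
// D3D11's immediate context serializes this; D3D12 does not.
void RecordFrameInParallel(ID3D12CommandQueue* queue,
                           std::vector<ID3D12GraphicsCommandList*>& lists)
{
    std::vector<std::thread> workers;
    for (ID3D12GraphicsCommandList* cmd : lists)
    {
        workers.emplace_back([cmd] {
            // ... record draws for this thread's chunk of the scene ...
            cmd->Close();
        });
    }
    for (std::thread& t : workers)
        t.join();

    // One cheap submission of everything the worker threads recorded.
    queue->ExecuteCommandLists(
        static_cast<UINT>(lists.size()),
        reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
}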
 