Tuesday, February 9th 2016

Rise of the Tomb Raider to Get DirectX 12 Eye Candy Soon?

Rise of the Tomb Raider could be among the first AAA games to take advantage of DirectX 12, with developer Crystal Dynamics planning a massive update that adds a new renderer and new content (VFX, geometry, textures). The latest version of the game features an ominously named "DX12.dll" library in its folder, and while it doesn't support DirectX 12 at the moment, a renderer selection has appeared in the game's launcher. DirectX 12 is currently only offered on Windows 10, with hardware support on NVIDIA "Kepler" and "Maxwell" GPUs, and on AMD Graphics Core Next 1.1 and 1.2 GPUs.
Source: TweakTown

39 Comments on Rise of the Tomb Raider to Get DirectX 12 Eye Candy Soon?

#26
RejZoR
kn00tcnwhat a load of CRAP, every single one? i'm not sorry, you're spreading FUD

which games or benchmarks (that aren't crysis 2 or hairworks enabled... or ubisoft unoptimized) have so much 'tessellation' (as if that's the only new feature devs use) causing worse than dx9 or even ogl performance, with no option to turn off or override in the driver?

star trek mmo's dx11 update with identical visuals brought so much performance that they had to make PR about it

even in the dx10 days, when i had a 4870x2, crysis warhead was smoother in dx10 than dx9 even if the fps counter was a few numbers lower, i have never seen a regression with a new dx version unless it's brand new buggy drivers or OS... plus, nvidia has proven multithreaded gains

the main detail i see that hurts fps the most is HBAO/HDAO, so i switch to SSAO or off instead, given that my 570m or 660 aren't powerful gpus... don't you think i would be one of the first to notice problems with dx11? how can i trust your word, with your high end specs, without real examples

by the way, crysis 2 runs great for how it looks, you are free to turn off tessellation or tweak the cvars rather than switching to dx9

also breit's logic is fine if you're only turning down the newly added details that are already disabled in the previous dx version of the game


but that's an optional feature... if it's running dx12 it IS dx12, what else can they call it?? 'getting dx12' is different from 'is dx12'

your dream should be 'built from the ground up for dx12', aka exclusive
If you read back, I actually haven't mentioned Crysis 2 ANYWHERE. And yes, that's exactly what they do with tessellation. Instead of using it on the most critical elements that tend to be blocky, they use it on things where you could use just 1/10th of the polygons and achieve the same results using poly bump mapping to give the illusion of a high poly count. Instead they waste actual poly count on that stuff and ignore things that can't be faked with bump mapping. Yeah, they do that, and it's not just 3 studios that do that. It's basically everyone who uses DX11 engines.
Posted on Reply
#27
rtwjunkie
PC Gaming Enthusiast
RejZoRI wonder if DX12 will bring better performance or they'll fuck it up just like with EVERY single bloody "new and faster DX" version of a game.
DX11 also promised better performance compared to DX9 and 10, and it ended up being slower just because they crammed 500 trillion metric tons of unnecessary tessellation into games. And I think they'll do exactly the same nonsense with DX12. All the advertised boost will be gone because they'll stuff in a bunch of useless detail you have to look for with a magnifying glass instead of just letting it be the way it is and boosting performance significantly.

I don't understand their logic. Sure, fancier graphics sell a bit better with enthusiast gamers, but better overall performance sells well with average gamers as well as enthusiasts, because it means the game will probably run super fast even at 4K with max details. Which also counts for something these days. Not sure why they bet everything on enthusiasts but cry in the same breath about how PC gamers don't buy in big enough numbers. I wonder why. The latest TR games already look good; no need to overbloat them with quasi quality and derp the performance entirely.
Isn't DX12 basically DX11 in the appearance department? I thought DX12 was only about keeping the same visuals, but making it easier to render by offloading more of the work onto the CPU than is presently done?

To me, this sounds like the recipe for better performance with the same visual fidelity you get now (whatever level people set their game at).
Posted on Reply
#28
PP Mguire
rtwjunkieIsn't DX12 basically DX11 in the appearance department? I thought DX12 was only about keeping the same visuals, but making it easier to render by offloading more of the work onto the CPU than is presently done?

To me, this sounds like the recipe for better performance with the same visual fidelity you get now (whatever level people set their game at).
Offload to GPU m8, while reducing overhead on the CPU so it has more left over to do more with things like AI.
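
For a concrete picture of the CPU-overhead side of this, here's a minimal D3D12 sketch (not from any shipping engine; the recording step is left as a hypothetical comment): each worker thread records its own command list, and the main thread submits everything in one cheap call instead of funneling every draw through a single driver thread as in D3D11.

```cpp
// Minimal D3D12 sketch: spread command recording across CPU threads, then
// submit everything at once. Error handling and pipeline setup omitted.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue, int workers)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocs(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread> threads;

    for (int i = 0; i < workers; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        threads.emplace_back([&lists, i] {
            // (Hypothetical) record this thread's share of the scene's draw
            // calls here, then close the list so it can be executed.
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    // One submission for all recorded work; the per-call driver validation
    // that D3D11 serialized on one thread is gone.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```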
Posted on Reply
#29
rtwjunkie
PC Gaming Enthusiast
PP MguireOffload to GPU m8, while reducing overhead on the CPU so it has more left over to do more with things like AI.
No, the GPU is already doing most of the work. The premise is to offload a shitload more calls to the CPU and have it do some more of the work. Unless all the tech review editors have been wrong?

EDIT: Nevermind. I've been misinterpreting the direction of the calls, as well as the term "better use of the CPU cores" to mean more involvement by the entire CPU, not just one or two. :)

Heck, this means I have even less incentive than I already had to upgrade my underworked (unless I'm playing Total War games) CPU.
Posted on Reply
#30
PP Mguire
rtwjunkieNo, the GPU is already doing most of the work. The premise is to offload a shitload more calls to the CPU and have it do some more of the work. Unless all the tech review editors have been wrong?

EDIT: Nevermind. I've been misinterpreting the direction of the calls, as well as the term "better use of the CPU cores" to mean more involvement by the entire CPU, not just one or two. :)

Heck, this means I have even less incentive than I already had to upgrade my underworked (unless I'm playing Total War games) CPU.
Yes, the whole point is to take more graphical load off the CPU so it has more room to do actual CPU work, hence "less CPU overhead by going bare metal". More on the GPU is coming via Async Compute, where tasks can be queued to both the graphics and compute sides so the compute hardware isn't left sitting idle. That's where the "30% more performance" is coming from on the consoles utilizing it.
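
To make the two-queue idea concrete, here is a minimal sketch (names invented, error handling omitted) of how D3D12 exposes a separate compute queue alongside the graphics queue:

```cpp
// Minimal D3D12 sketch: a dedicated compute queue for async compute.
// On hardware that supports it, work on this queue can execute while the
// graphics (DIRECT) queue is busy, instead of compute units sitting idle.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    return computeQueue;
}

// When the graphics side depends on a compute result, a fence orders them:
//   computeQueue->Signal(fence.Get(), value); // compute marks "done"
//   graphicsQueue->Wait(fence.Get(), value);  // graphics waits only if needed
```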
Posted on Reply
#32
PP Mguire
xfiathere is a bit more to it than that but your perspective is helpful @PP Mguire
www.amd.com/en-us/innovations/software-technologies/directx12

no one does car analogies better than amd haha
I've had lunch with Nvidia on the technical aspects of DX12 and how it could help us with flight simulations. I was keeping it simple so he could understand, since he admitted to being slightly confused by the tech articles about it. I also feel that, because of the AoS shitstorm, there's been a lot of misinformation about the general tech that people spew, so keeping it simple resolves this.
Posted on Reply
#33
rtwjunkie
PC Gaming Enthusiast
PP MguireI also feel that, because of the AoS shitstorm, there's been a lot of misinformation about the general tech that people spew, so keeping it simple resolves this.
AoS is probably not the best choice they could have used for demonstration purposes, since it's more CPU-heavy anyway, being an RTS.
Posted on Reply
#34
RejZoR
rtwjunkieIsn't DX12 basically DX11 in the appearance department? I thought DX12 was only about keeping the same visuals, but making it easier to rended by offloading more of the work onto the CPU than presently done?

To me, this sounds like the recipe for better performance with the same visual fidelity you get now (whatever level people set their game at).
That's my logic and yours. Their logic is like this:
"Woohoo, we gained 30FPS with DX12, let's spend 45FPS on pointless garbage props on the scene." Because reasons. And so that we can brag about how epic our engine is because it runs like shit.

I'm saying this because tessellation had the SAME purpose, and they fucked it up. Tessellation is a seamless replacement for multi-model LOD. Remember how in the past car and enemy models shifted from a low poly model to a high poly model as you approached them? That was the point of tessellation: a performance optimization, just like LOD, to achieve good performance without compromising visuals, because there are no sudden jumps between models since polygons gradually decrease or increase in number.

And what have devs done? "Look, 3 million polygons on this gas mask," while 2 meters away a bunch of railings and other elements are all blocky because no one bothered to make them high poly in the first place, so there's nothing for the LOD to actually take away. You can always create high detail models and let tessellation decrease the number. You can't do it in the reverse order, expecting a poly multiplier to predict how things should look when you increase the polygon count. ATi's TruForm did that, and we all know how things looked when you enabled it in an otherwise unsupported game: everything became like an inflated balloon.

Or, the tessellation engine pumps out polygons on a surface where they don't yield any visual gains. Whether a flat surface is made of 4 polygons or 3 million, it'll still look flat in both cases, but performance will sink like crazy. After all, polygon count is still very much a factor, even though pixels are a bigger issue these days with all the shader effects. Burdening a graphics card with excessive polygons doesn't help as a whole when a game scene is moving through the rendering pipeline.
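
The difference between old pop-in LOD and the gradual reduction described above can be sketched in a few lines of C++ (all thresholds and constants are invented for illustration):

```cpp
#include <algorithm>

// Old-style discrete LOD: the mesh visibly "pops" when the camera crosses
// a distance threshold, because a whole different model is swapped in.
int SelectLodLevel(float distance)
{
    if (distance < 10.0f) return 0; // high-poly model
    if (distance < 30.0f) return 1; // medium model
    return 2;                       // low-poly model
}

// Tessellation-style continuous detail: the subdivision factor shrinks
// smoothly with distance, so the polygon count ramps down with no visible
// jump; that is exactly what makes it a seamless LOD replacement.
float TessellationFactor(float distance, float maxFactor = 64.0f)
{
    const float fullDetailDist = 10.0f; // full detail inside this range
    return std::clamp(maxFactor * fullDetailDist / distance, 1.0f, maxFactor);
}
```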
Posted on Reply
#35
Breit
RejZoRThat's my logic and yours. Their logic is like this:
"Woohoo, we gained 30FPS with DX12, let's spend 45FPS on pointless garbage props on the scene." Because reasons. And so that we can brag about how epic our engine is because it runs like shit.

[...]
I think by now we all know that you don't like tessellation, at least in the way it is implemented by ALL studios and in ALL games as of today. So why don't you leave it at that and start discussing the topic of this thread?
Posted on Reply
#36
RejZoR
Which is EXACTLY what I'm doing: discussing how they'll fuck up all the performance gains brought by DX12. If the updated DX12 game actually runs faster than the DX11 one while at least looking the same (or better), I'll be shockingly amazed.
Posted on Reply
#37
PP Mguire
I think we'll see that in the early implementations at least, due to developers trying to grab a little more performance. It won't be until we have games built from the ground up for DX12 that we might start seeing issues, if we do.
Posted on Reply
#38
kn00tcn
RejZoRIf you read back, I actually haven't mentioned Crysis 2 ANYWHERE. And yes, that's exactly what they do with tessellation. Instead of using it on the most critical elements that tend to be blocky, they use it on things where you could use just 1/10th of the polygons and achieve the same results using poly bump mapping to give the illusion of a high poly count. Instead they waste actual poly count on that stuff and ignore things that can't be faked with bump mapping. Yeah, they do that, and it's not just 3 studios that do that. It's basically everyone who uses DX11 engines.
so you're going to be like that, a bunch of air with zero examples

you are COMPLETELY describing crysis 2 & all its negative press about tessellating flat walls, it doesn't describe tomb raider/saints row/homefront/battlefield/bioshock/cod/civilization/dirt/deus ex/f1/far cry/gta5/grid/max payne/medal of honor/metro/sleeping dogs/witcher3 (with specifically hairworks disabled)/hawx2/the list keeps going...

once again, you can disable tessellation or cap it, which is extremely easy in amd drivers for example, and the result is no more performance drop from your tessellation boogeyman!

this premise is insane... every time there is more gpu power, devs add more graphical effects and things get more demanding, completely unrelated to what API is being used, since APIs don't come out every year

want to know why i had a 4870x2 back in 2008? there was no choice! you could not get 1080p 60fps in the latest games on a single gpu from either side... nowadays a single gpu gets you that at lower power while looking great

the pace of performance gains has been greater than the increased load from newly added effects that you're free to disable, not to mention the fact that most devs don't go nuts since they still have to be scalable on consoles, so huffing & puffing isn't going to change reality
Posted on Reply
#39
RejZoR
And then he lists a bunch of DX11 games that do exactly that. LoL. And Metro 2033 XD. Also, you cannot know what the engine is doing unless you can switch to wireframe mode or look specifically at how important and unimportant objects look depending on distance. You just listing a bunch of games is just as much hot air as me not listing anything.

And it does matter what API is used. DX10 and DX11 have things that DX9 doesn't, particularly behind-the-scenes things that don't affect visuals as much as they do performance. The premise of tessellation is that you input high detail models into the game and the engine then cuts their detail down. It can be distance, user settings or even framerate based. Instead, the large majority of games that I know of (Metro and AvP being two I'm sure of) do the opposite: buff up the poly count on models that don't actually benefit much from it. In such cases it's better to go more aggressive on the poly culling than to enhance visuals, since low end users will benefit more from it. For example in Unreal Tournament sorts of games, where things happen so fast you don't have time to admire nice curved surfaces, having extra performance from a lower poly count where it doesn't matter is of greater benefit for everyone.
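
As a sketch of that last point (distance, user settings, or framerate all feeding the factor; every name and constant here is invented, not from any real engine), the engine-side decision could look like:

```cpp
#include <algorithm>

// Illustrative only: combine a distance-based tessellation factor with a
// user/driver cap (similar in spirit to the tessellation limit AMD's
// driver exposes) and back off when the frame is running long.
float FinalTessFactor(float distanceFactor, float userCap, float frameMs)
{
    // Over the 16.6 ms budget (60 fps), scale detail down proportionally.
    float framerateScale = (frameMs > 16.6f) ? 16.6f / frameMs : 1.0f;
    return std::min(distanceFactor * framerateScale, userCap);
}
```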
Posted on Reply