Thursday, June 4th 2015

NVIDIA Could Capitalize on AMD Graphics CoreNext Not Supporting Direct3D 12_1

AMD's Graphics CoreNext (GCN) architecture does not support Direct3D feature-level 12_1 (DirectX 12.1), according to a ComputerBase.de report. The architecture only supports Direct3D up to feature-level 12_0. Feature-level 12_1 adds three features over 12_0, namely Volume-Tiled Resources, Conservative Rasterization and Rasterizer Ordered Views.

Volume Tiled Resources is an evolution of Tiled Resources (analogous to OpenGL's mega-texture), in which the GPU seeks out and loads only those portions of a large texture that are relevant to the scene it's rendering, rather than loading the entire texture into memory. Think of it as a virtual-memory system for textures. This greatly reduces video memory usage and bandwidth consumption. Volume Tiled Resources extends this seeking of texture portions beyond the X and Y axes by adding a third dimension. Conservative Rasterization is a means of rasterizing polygons with additional pixels - a pixel counts as covered if the polygon touches any part of it, not just its center - which makes it easier for two polygons to interact with each other in dynamic objects. Rasterizer Ordered Views are a means of processing raster loads in the order in which they appear in an object. Practical applications include improved shadows.
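As an aside for the programmers in the audience, the difference between standard and conservative coverage can be sketched in a few lines of plain C++. This is a toy model (it uses an axis-aligned box instead of a triangle, on a unit pixel grid), not how any GPU actually implements the feature:

```cpp
#include <cassert>

// Simplified illustration: the "primitive" is an axis-aligned box.
// Standard rasterization covers a pixel only if the pixel's CENTER falls
// inside the primitive; conservative rasterization covers every pixel the
// primitive touches at all, so thin geometry can never fall between samples.

struct Box { float x0, y0, x1, y1; };

static int coveredPixels(const Box& b, int gridW, int gridH, bool conservative) {
    int count = 0;
    for (int y = 0; y < gridH; ++y)
        for (int x = 0; x < gridW; ++x) {
            if (conservative) {
                // pixel square [x, x+1) x [y, y+1) overlaps the box at all
                bool overlap = b.x0 < x + 1 && b.x1 > x &&
                               b.y0 < y + 1 && b.y1 > y;
                if (overlap) ++count;
            } else {
                // pixel center (x+0.5, y+0.5) lies inside the box
                float cx = x + 0.5f, cy = y + 0.5f;
                if (cx >= b.x0 && cx < b.x1 && cy >= b.y0 && cy < b.y1)
                    ++count;
            }
        }
    return count;
}
```

A sliver 0.3 units wide that avoids every pixel center rasterizes to nothing under the standard rule but still produces pixels conservatively, which is exactly why the feature matters for voxelization and occlusion work.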
Given that GCN doesn't feature bare-metal support for D3D feature-level 12_1, its implementation will be as limited as feature-level 11_1's was when NVIDIA's Kepler didn't support it. This is compounded by the fact that GCN is a more popular GPU architecture than Maxwell (which supports 12_1), thanks to the new generation of game consoles. It could explain why NVIDIA dedicated three-fourths of its GeForce GTX 980 Ti press deck to talking about the features of D3D 12_1 at length. The company probably wants to make a few new effects that rely on D3D 12_1 part of GameWorks, and deflect accusations of exclusivity by pointing out that the competition (AMD) doesn't support certain API features that are open to it. Granted, AMD GPUs, and modern game consoles such as the Xbox One and PlayStation 4, don't support GameWorks, but that hasn't stopped big game devs from implementing it. Source: ComputerBase.de

79 Comments on NVIDIA Could Capitalize on AMD Graphics CoreNext Not Supporting Direct3D 12_1

#51
HumanSmoke
Pap1er, post: 3292895, member: 86238"
What does it mean in terms of performance if GCN supports or does not support Direct3D 12_1?
I would appreciate short and clear explanation.
Would it affect performance at all? If so, how much?
How long is a piece of string?
If the card is saving GPU horsepower by better use of rasterization resources, then the amount of gain depends upon the scenes being rendered. Not all games or game engines are created equal, and that doesn't take into account a myriad of other graphical computation levels also needing to be taken into consideration (i.e. tessellation). Even if you could quantify the gains/deficits, they are then affected by how different architectures handle post-rasterization image quality features (post-process depth of field, motion blur, global illumination, etc.)
Basically what you want is a set figure, when the actuality is that won't ever be the case unless every other variable becomes a constant - and every architecture, and every part within every architecture, handles every facet of the game to a varying degree.
Posted on Reply
#52
RejZoR
So, out of this whole clusterfuck of info, when it comes to D3D12_0, old AMD GPUs still support way more than old NVIDIA GPUs (Kepler and Maxwell 1 support none of the feature levels, and the ones that do are all Tier 1). Excluding Maxwell 2, since it's the newest one and was built for D3D12 to begin with anyway. Now it's just a question of how far Fiji will go with support. But seeing that GCN 1.0 already supports some, we can quite safely assume they'll support more than Maxwell 2. Not doing so would be kinda stupid from AMD, considering how late they are releasing Fiji compared to Maxwell 2...
Posted on Reply
#53
R-T-B
Pap1er, post: 3292895, member: 86238"
What does it mean in terms of performance if GCN supports or does not support Direct3D 12_1?
I would appreciate short and clear explanation.
Would it affect performance at all? If so, how much?
I can provide a short one (no offense HumanSmoke)

If only one brand supports it, as is suggested, no one in their right mind will code for it.

So short answer is no, it won't make much difference at all.
Posted on Reply
#54
_larry
lilhasselhoffer, post: 3292789, member: 94231"
Sniff....sniff.....sniff.... Does anyone else smell a turd? I think I remember that smell... Vista, is that you?
Ah yes, DirectX 10. The redheaded step-child between DX9 and DX11.

Aren't they making a new Doom? (I bought Wolfenstein: The New Order the day of release and STILL have an unused Doom beta key sitting on my desk...)
What if they came out swinging with Doom on their OpenGL-based id Tech engine when DX12 launched? They've had a good bit of time to optimize it since Rage came out.
Posted on Reply
#55
wiak
Octopuss, post: 3292603, member: 74316"
What kind of nonsense is this? Dx12 is not even out yet.
and AMD can still have support in hardware; nobody knew about some of their past-generation features, like the tessellation in the 2900 XT that went unused in DX10.

but wait for Windows 10, DX12 and DX12 games to find out..
Posted on Reply
#56
HumanSmoke
R-T-B, post: 3292947, member: 41983"
I can provide a short one (no offense HumanSmoke)
If only one brand supports it, as is suggested, no one in their right mind will code for it.
So short answer is no, it won't make much difference at all.
No offense taken, and you're right: most PC games are developed for consoles - and consoles don't support FL 12_1.
The only caveats are game engines developed primarily on, or in tandem with, the PC, where the features could go unused in the console version - and OpenGL game engines, of course.
wiak, post: 3293197, member: 804"
and AMD can still have support in hardware, nobody knew of some of their past generation product like tessellation in 2900 XT that was unused in DX10.
Not really. The tessellator in the R600 was known about from the moment it was launched - it wasn't some kind of secret squirrel hidden capability.
The whole point of this article and thread is that the GCN 1.x architecture cannot undertake conservative rasterization in hardware. It can, however, emulate it in software using the compute ability of the architecture's built-in asynchronous compute engines.
Posted on Reply
#57
Steevo
HumanSmoke, post: 3293205, member: 98425"
[---]
Not really. The tessellator in the R600 was known about from the moment it was launched - it wasn't some kind of secret squirrel hidden capability. The whole point of this article and thread is that the GCN 1.x architecture cannot undertake conservative rasterization in hardware. It can, however, emulate it in software using the compute ability of the architecture's built-in asynchronous compute engines.
wiak, post: 3293197, member: 804"
and AMD can still have support in hardware, nobody knew of some of their past generation product like tessellation in 2900 XT that was unused in DX10.
TruForm: the ATI 8500 had hardware tessellation, and no one used it, as no competitors had it or wanted to invest in it at the time.
Posted on Reply
#58
HumanSmoke
Steevo, post: 3293233, member: 19251"
TruForm ATI 8500 had hardware tessellation, and no one used it as no competitors had it or wanted to invest in it at the time.
TruForm did have a reasonable amount of game support - including a number of AAA titles.

Same old ATI/AMD tune, isn't it?
At least ATI worked to get TruForm integrated as a game feature. AMD got involved and immediately turned an R600 feature into a footnote in history by tossing ATI's game development program into the nearest dumpster.
Posted on Reply
#59
xfia
i think i would have to agree this means little to nothing.. so game devs will continue to load textures the same and some will let you decide like forever now. by the time gamers actually need a full dx12.1+ or whatever we will be talking about dx13. assuming dx is still the way to go for gaming by then.

p.s didnt amd have a hand in developing gpu tessellation and had fully supporting dx11 gpu's before nvidia? would that not also be around the same time gcn was proving to be more powerful than kepler in direct compute and gpu acceleration?

well i know for sure tessellation works fine on my amd gpu's and amd optimized tessellation looks great.. especially in evolved games.
Posted on Reply
#60
HumanSmoke
xfia, post: 3293287, member: 154575"
i think i would have to agree this means little to nothing.. so game devs will continue to load textures the same and some will let you decide like forever now.
It won't be a major factor, but the consensus amongst developers seems to be that 12_1 features such as conservative rasterization, rasterizer ordered views (ROV)/ order-independent transparency (OIT), voxelization, and adaptive volumetric shadow maps are the way forward for more realistic portrayal of gameplay, reduction of GPU compute overhead, and greater developer control. These may be slow in coming to fruition with DirectX (thanks to consoles not supporting the features natively, or not at all), but OpenGL already has them enabled. In a way, AMD can thank Nvidia and Intel for making Vulkan that much more relevant - how's that for irony.
xfia, post: 3293287, member: 154575"
p.s didnt amd have a hand in developing gpu tessellation and had fully supporting dx11 gpu's before nvidia?
No and Yes.
No. ATI's TruForm was the first GPU tessellation hardware followed by Matrox's Parhelia line (N-Patch support), then ATI's Xenos (R500 / C1) graphics chip for the Xbox 360. All of these pre-date AMD's involvement with ATI.
Yes. AMD's Evergreen series were the first DirectX 11 compliant GPUs. They arrived just over six months before Nvidia's own DX11 cards.
would that not also be around the same time gcn was proving to be more powerful than kepler in direct compute and gpu acceleration?
Do tell? You're starting to sound like AMD's Roy Taylor.
DirectCompute, like most computation, depends upon the workload, the software, and, just as importantly, software support (drivers). It also depends heavily upon the emphasis placed upon the designs by the architects. A case in point is the tessellation you seem very keen to explore. ATI pioneered it, but it went largely unused. Under AMD's regime tessellation wasn't prioritized, where Nvidia made Maxwell a tessellation monster. Neither DC nor tessellation on their own define the architecture, or are indicators of much besides that facet.

xfia, post: 3293287, member: 154575"
by the time gamers actually need a full dx12.1+ or whatever we will be talking about dx13. assuming dx is still the way to go for gaming by then.
Well, if Vulkan and the new OpenGL extensions take off like people are expecting, the DirectX coding arena may have their hand forced. If the new OGL turns into the old OGL, Microsoft can probably wait ten years before updating DirectX.
Posted on Reply
#61
xfia
thanks smoke.. i honestly just didnt even know what to believe with all the stuff people say around. thinking about it.. i think you shared a interview with amd's "gaming scientist" some time ago and he explained something of a tessellation war going on or rather over tessellation. before i started getting into it but still interesting.
10 years haha yeah dx12 should be easier to work with and i mean what are they even going to add to make it more complex that is a real game changer like tessellation was.
Posted on Reply
#63
xfia
DX12 FTW MICROSOFT!

1990 REHASHED HAHA

DX13? :)
Posted on Reply
#64
rvalencia
btarunr, post: 3292535, member: 43587"
AMD's Graphics CoreNext (GCN) architecture does not support Direct3D feature-level 12_1 (DirectX 12.1), according to a ComputerBase.de report. [---]
CR (Conservative Rasterization):

Read http://devgurus.amd.com/message/1308511



Question:

I need, for my application, that every draw produces at least one pixel of output (even if this is an empty triangle = 3 identical points). NVIDIA has an extension (GL_NV_conservative_raster) to enable such a mode (on Maxwell+ cards). Is there a similar extension on AMD cards?

Answer (from AMD):

Some of our hardware can support functionality similar to that in the NVIDIA extension you mention, but we are currently not shipping an extension of our own. We will likely hold off until we can come to a consensus with other hardware vendors on a common extension before exposing the feature, but it will come in time.
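For reference, the extension check the questioner would be performing boils down to a string lookup. Here is a minimal, API-independent sketch - the extension list is passed in explicitly, whereas a real application would gather it at runtime via glGetIntegerv(GL_NUM_EXTENSIONS, ...) and glGetStringi(GL_EXTENSIONS, i):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Returns true if the named extension appears in the driver's extension list.
// The list is injected here so the sketch stays testable without a GL context.
static bool hasExtension(const std::vector<std::string>& extensions,
                         const std::string& name) {
    return std::find(extensions.begin(), extensions.end(), name)
           != extensions.end();
}
```

On a 2015-era Maxwell 2.0 driver, looking up "GL_NV_conservative_raster" this way would succeed; on GCN, per AMD's answer above, it would not.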

Even Nvidia's first-gen Maxwell card, the 750 Ti, which launched in early 2014, doesn't support 12_1.


From Christophe Riccio
https://twitter.com/g_truc/status/581224843556843521

It seems that shader invocation ordering is proportionally a lot more expensive on GM204 than S.I. or HSW.
Posted on Reply
#65
rvalencia
HumanSmoke, post: 3293321, member: 98425"
[---] A case in point is the tessellation you seem very keen to explore. ATI pioneered it, but it went largely unused. Under AMD's regime tessellation wasn't prioritized where Nvidia made Maxwell a tessellation monster. [---]

AMD is aware of the extreme-tessellation issue, hence the improvements in the R9 285 (28 CU).
Posted on Reply
#66
mastrdrver
Let's get clarification from someone who actually has some knowledge about this: Link.

Qualifications of the writer: Alessio Tommasini, DirectX 12 early access program member.
At this time, we are witnessing an incredible work of disinformation about the DirectX 12 features supported by the various GPUs currently available from AMD and NVIDIA. Personally, we do not know whether some company arranged this sort of campaign, or whether it is all just down to some uninformed journalists.

What is for sure is that people need some clarification. First of all, Direct3D 12 is an API designed to run on currently available hardware, as long as it supports virtual memory and tiled resources.
The new API has been largely shaped around a new resource-binding model, which defines the management of textures and buffers in the physical and virtual memory (dedicated or system-shared) of the graphics hardware.

In order to ensure that Direct3D 12 could support the widest range of hardware, without significant compromises that could limit the longevity of the new API, Microsoft and its partners agreed to divide support for the new resource-binding model into three “tiers”.
Each tier is a superset of its predecessor: tier 1 hardware comes with the strongest constraints on the resource-binding model, tier 3 conversely has no limitations, while tier 2 represents an intermediate level of constraint.

If we talk about the hardware on sale, the situation about the resource-binding tiers is the following:
  • Tier 1: INTEL Haswell and Broadwell, NVIDIA Fermi
  • Tier 2: NVIDIA Kepler, Maxwell 1.0 and Maxwell 2.0
  • Tier 3: AMD GCN 1.0, GCN 1.1 and GCN 1.2
Regarding resource binding, currently only AMD GPUs come without hardware limitations, which has been erroneously described as “full support” by some sources.

In addition to the three resource-binding tiers, Direct3D 12 exposes four “feature levels”, that is, four levels of capability, each of which states a well-defined set of hardware rendering features.
It is important to specify that these feature levels are not directly related to the resource-binding tiers; moreover, they cover only some of the rendering capabilities exposed by Direct3D 12.
Some of these capabilities are not covered at all even by the highest feature level, and all others can be individually supported by the graphics hardware (if the GPU architecture and the drivers allow them) regardless of the supported feature level.

If we talk again about the hardware on sale, the situation about the feature levels is the following:
  • Feature level 11.0: NVIDIA Fermi, Kepler, Maxwell 1.0
  • Feature level 11.1: AMD GCN 1.0, INTEL Haswell and Broadwell
  • Feature level 12.0: AMD GCN 1.1 and GCN 1.2
  • Feature level 12.1: NVIDIA Maxwell 2.0
The first two feature levels roughly coincide with the DirectX 11 levels of the same name (with some differences due to the new resource-binding model), while feature levels 12.0 and 12.1 are new to Direct3D 12.
Despite being pleonastic, it is worth restating that feature level 12.1 does not amount to an imaginary “full/complete DirectX 12 support”, since it does not cover many important or secondary features exposed by Direct3D 12.
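For readers who think better in code, the ordering of feature levels (each a superset of the last) and the table above can be sketched in plain C++. The enum values below mirror the actual D3D_FEATURE_LEVEL constants; the table itself is just the mid-2015 hardware list from this article:

```cpp
#include <cassert>
#include <map>
#include <string>

// Feature levels are ordered: supporting 12_1 implies supporting everything
// 12_0 and below. The numeric values match the D3D_FEATURE_LEVEL enum.
enum FeatureLevel { FL_11_0 = 0xb000, FL_11_1 = 0xb100,
                    FL_12_0 = 0xc000, FL_12_1 = 0xc100 };

// Maximum feature level per architecture, per the article's table.
static const std::map<std::string, FeatureLevel> maxFeatureLevel = {
    {"Fermi",       FL_11_0}, {"Kepler",    FL_11_0}, {"Maxwell 1.0", FL_11_0},
    {"GCN 1.0",     FL_11_1}, {"Haswell",   FL_11_1}, {"Broadwell",   FL_11_1},
    {"GCN 1.1",     FL_12_0}, {"GCN 1.2",   FL_12_0},
    {"Maxwell 2.0", FL_12_1},
};

// A GPU "supports" a feature level if its maximum level is at least that high.
static bool supportsLevel(const std::string& gpu, FeatureLevel fl) {
    auto it = maxFeatureLevel.find(gpu);
    return it != maxFeatureLevel.end() && it->second >= fl;
}
```

Note that this models only the feature-level axis; the resource-binding tiers are an independent axis, which is exactly why "GCN = tier 3 but FL 12_0" and "Maxwell 2.0 = tier 2 but FL 12_1" can both be true at once.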

Reading the various “exclusive news” found around the internet, additional confusion comes precisely from the individual capabilities: some of them are once again pooled together into groups of two or three tiers, but it has to be specified that each of these is completely independent of the others, of the three resource-binding tiers, and of course of the four feature levels.

In the end, as regards support for every single capability, it is currently not possible, nor appropriate, to draw up a complete and well-defined table showing the support of the hardware on sale.
Unless your name is AMD, INTEL or NVIDIA, you cannot present such a report with the drivers currently available on the public channels, nor with non-NDA documentation; everything else is only to be considered pure ranting.
Posted on Reply
#67
HumanSmoke
mastrdrver, post: 3293609, member: 66727"
Lets clarify this by someone who actually has some knowledge about this: Link.
Qualifications of the writer: Written by Alessio Tommasini, Directx 12 early access program member.
The same information - and informed opinion by programmers (including the author you just quoted and Andrew Lauritzen of Intel), and tech reviewers such as Anandtech's Ryan Smith is being actively discussed in a less FUD-orientated thread at B3D if anyone is actually interested. (Link is the last current page of the discussion but I would recommend reading the whole thread).
Posted on Reply
#68
R-T-B
HumanSmoke, post: 3293627, member: 98425"
The same information - and informed opinion by programmers (including the author you just quoted and Andrew Lauritzen of Intel), and tech reviewers such as Anandtech's Ryan Smith is being actively discussed in a less FUD-orientated thread at B3D if anyone is actually interested. (Link is the last current page of the discussion but I would recommend reading the whole thread).
But brain hurts! I just want video game.

In seriousness, I may browse through it when I get a chance...
Posted on Reply
#69
mastrdrver
HumanSmoke, post: 3293627, member: 98425"
The same information - and informed opinion by programmers (including the author you just quoted and Andrew Lauritzen of Intel), and tech reviewers such as Anandtech's Ryan Smith is being actively discussed in a less FUD-orientated thread at B3D if anyone is actually interested. (Link is the last current page of the discussion but I would recommend reading the whole thread).
Thanks, didn't realize that discussion was going on over there.

What it seems to be is that no GPU out now or coming soon fully supports DX12. Nvidia's 12_1 is not better than AMD's 12_0 (or vice versa); it just seems to be a different feature sub-subset.

edit: Even these sub-subset numbers (12_0 or 12_1) do not cover all of DX12's features. Also, it would seem that DX11 is a subset of DX12. Therefore, while GCN 1.0 may not support some of the new features in DX12, it still "supports" DX12. Even Fermi appears to "support" DX12, but only at the 11_0 feature level.
Posted on Reply
#70
rvalencia
mastrdrver, post: 3294005, member: 66727"
Thanks, didn't realize that discussion was going on over there.

What it seems to be is that no GPU out now or coming soon fully supports DX12. Nvidia's 12_1 is not better than AMD's 12_0 (or vice versa); it just seems to be a different feature sub-subset.

edit: Even these sub-subset numbers (12_0 or 12_1) do not cover all of DX12's features. Also, it would seem that DX11 is a subset of DX12. Therefore, while GCN 1.0 may not support some of the new features in DX12, it still "supports" DX12. Even Fermi appears to "support" DX12, but only at the 11_0 feature level.
From http://www.bitsandchips.it/52-english-news/5661-clarifications-about-tier-and-feature-levels-of-the-directx-12

•Tier 1: INTEL Haswell and Broadwell, NVIDIA Fermi
•Tier 2: NVIDIA Kepler, Maxwell 1.0 and Maxwell 2.0
•Tier 3: AMD GCN 1.0, GCN 1.1 and GCN 1.2

•Feature level 11.0: NVIDIA Fermi, Kepler, Maxwell 1.0
•Feature level 11.1: AMD GCN 1.0, INTEL Haswell and Broadwell
•Feature level 12.0: AMD GCN 1.1 and GCN 1.2
•Feature level 12.1: NVIDIA Maxwell 2.0


The max DX12 support would be Tier 3 and Feature Level 12.1.

Tier-level and feature-level support would be useless if said features are slow, i.e. act as decelerators.
Posted on Reply
#71
mastrdrver
DX11 is a subset of DX12, thus you have different tiers. A GPU is considered to support DX12 "as long as it supports virtual memory and tiled resources", to quote the article. The feature level is not a DX version support, but a classification of what features of DX12 said GPU supports. "The first two feature levels roughly coincide to the DirectX 11 levels with the same name (with some differences due the new resource binding model), while feature level 12.0 and 12.1 are new to Direct3D 12."

The feature level is a classification of what DX12 features said GPU supports. So, Fermi, Kepler, Maxwell 1.0 all support feature level 11.0 of DX12. GCN 1.0, Haswell, and Broadwell support feature level 11.1 and so on.

With all that said, "Despite being pleonastic, it is worth to restate that feature level 12.1 does not coincide with an imaginary “full/complete DirectX 12 support” since it does not cover many important or secondary features exposed by Direct3D 12."
Posted on Reply
#72
xfia
its been my understanding that what all dx11 gpu's will support of dx12 is the way the gpu and cpu communicate.. draw calls i believe? and if a feature is gpu specific then well its just like normal and you need a new gpu for that feature but it doesnt seem like any gamer especially casual is going to need to be rushing out to upgrade even a year from now if they have a system from the past few generations.

Posted on Reply
#73
HumanSmoke
xfia, post: 3294144, member: 154575"
its been my understanding that what all dx11 gpu's will support of dx12 is the way the gpu and cpu communicate.. draw calls i believe? and if a feature is gpu specific then well its just like normal and you need a new gpu for that feature but it doesnt seem like any gamer especially casual is going to need to be rushing out to upgrade even a year from now if they have a system from the past few generations.
Sort of. AMD's VLIW architecture (HD 5000, 6000 series) isn't compatible with DX12, but anything GCN as well as any Nvidia DX11 capable card is DX12 compatible. All the cards support DX12 - it is just a matter of the level of support. No cards at the present time support every facet of the API or its complete feature set.

The basic features of DX12 will be available to most of these GPUs (as well as DirectX 11.3). It will be up to game developers as to which additional features they might include - but I would say that if there is not broad-based support among existing cards, the additional features will be options within the game code rather than mandatory.
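That "optional feature" pattern can be sketched in a few lines. The capability struct and technique names below are hypothetical stand-ins, not real engine code; in D3D12 the flags would come from ID3D12Device::CheckFeatureSupport:

```cpp
#include <cassert>
#include <string>

// Hypothetical capability flags a renderer might fill in at startup.
struct GpuCaps {
    bool conservativeRaster;        // an FL 12_1 feature
    bool rasterizerOrderedViews;    // another FL 12_1 feature
};

// The pattern described above: ship both paths and choose at runtime,
// so a 12_1 feature stays a graphics option rather than a hard requirement.
static std::string chooseShadowTechnique(const GpuCaps& caps) {
    if (caps.conservativeRaster)
        return "conservative-raster-shadows";   // hypothetical 12_1 path
    return "classic-shadow-maps";               // runs on any DX11-class GPU
}
```

This is why a 12_1-only effect is likely to appear as a toggle in a game's graphics menu rather than as a minimum system requirement.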
Posted on Reply
#75
rvalencia
Wshlist, post: 3294804, member: 56121"
People here act like somebody is fabricating stuff, but the level of support of currently available AMD cards has been confirmed by AMD.
But of course we don't know about the soon-to-be-released new card.

Anyway the german site has a nice table: http://www.computerbase.de/2015-06/directx-12-amd-radeon-feature-level-12-0-gcn/
Again, feature level 12.1 does not coincide with an imaginary “full/complete DirectX 12 support” since it does not cover many important or secondary features exposed by Direct3D 12

Microsoft allocated a lecture on Resource Binding tier levels during GDC 2015.
http://channel9.msdn.com/Events/GDC/GDC-2015/Advanced-DirectX12-Graphics-and-Performance
Time stamp: 8:09
Posted on Reply