Monday, July 3rd 2017

NVIDIA Adds DirectX 12 Support to GeForce "Fermi" Architecture

With its latest GeForce 384-series graphics drivers, NVIDIA has quietly added DirectX 12 API support for GPUs based on its "Fermi" architecture, as discovered by keen-eyed users on the Guru3D Forums. These include the GeForce 400-series and 500-series graphics cards. The support appears sufficient to run today's Direct3D feature-level 12_0 games and applications, and completes WDDM 2.2 compliance for GeForce "Fermi" graphics cards on Windows 10 Creators Update (version 1703), which could be NVIDIA's motivation for extending DirectX 12 support to these 5+ year-old chips. Whether they meet your games' minimum system requirements is an entirely different matter.
Source: Guru3D Forums

58 Comments on NVIDIA Adds DirectX 12 Support to GeForce "Fermi" Architecture

#26
P4-630
The Way It's Meant to be Played
Could this be the reason I had so many problems installing/running Windows 10 on an older (2008) Asus laptop with a GT425M before?
It would simply freeze up all the time, either during installation or in the short moments I was in Windows...
#27
bug
P4-630 said:
Could this be the reason I had so many problems installing/running Windows 10 on an older (2008) Asus laptop with a GT425M before?
It would simply freeze up all the time, either during installation or in the short moments I was in Windows...
I doubt it. Win10 supports DX12; it does not require it. It was probably Asus and their "famous" ACPIs.
#28
TheOne
rtwjunkie said:
On topic, this quiet Fermi rollout mystifies me. Why would Nvidia not tell Fermi owners? It makes no sense to release the capability and not publicize it.
There was someone harassing the GeForce forum for most of the last 2 years; I don't remember who, but he would always ask about Fermi getting DX12 support in driver threads.

Also, 3DMark says my GTX 470 doesn't have enough memory and may not complete Time Spy.
#29
GoldenX
Kaotik said:
It's not 12_0 capable, it's 11_0 capable. The only thing that really changed is that it can now run the API itself.
(For reference, even Kepler and Maxwell gen 1 are 11_0, not even 11_1 like GCN1 and Haswell/Broadwell, let alone 12_0 like GCN2-4 or 12_1 like GCN5, Maxwell gen 2, Pascal, and Skylake/Kaby Lake.)

edit:

No they didn't. GCN1 is 11_1, 2-4 are 12_0, 5 is 12_1.
D3D12 supports feature levels 11_0, 11_1, 12_0, 12_1
So? That makes it totally capable of running DX12 code. The problem here is that Nvidia took a couple of years to fulfill its promise of bringing DX12 to Fermi. They promised Vulkan too, and later decided it was "not worth it".
AMD was very clear from the beginning that only GCN would get it, and best of all is Intel, giving support to Ivy Bridge and up.
#30
KainXS
Tried it on an old Fermi card I had lying around.

#31
bug
GoldenX said:
So? That makes it totally capable of running DX12 code. The problem here is that Nvidia took a couple of years to fulfill its promise of bringing DX12 to Fermi. They promised Vulkan too, and later decided it was "not worth it".
AMD was very clear from the beginning that only GCN would get it, and best of all is Intel, giving support to Ivy Bridge and up.
I have to repeat my question: AMD left GCN 1.0 out initially. When did they add DX12 support for GCN 1.0?
#33
Red_Machine
This is like back in the old days, when companies would regularly add support for newer versions of DirectX in software. My old GeForce2 MX was a DX7 card, but the latest drivers made it report DX9 in dxdiag.
#34
bug
P4-630 said:
AMD Confirms GCN Cards Don’t Feature Full DirectX 12 Support – Feature Level 11_1 on GCN 1.0, Feature Level 12_0 on GCN 1.1/1.2
http://wccftech.com/amd-confirms-gcn-cards-feature-full-directx-12-support-feature-level-111-gcn-10-feature-level-120-gcn-1112/

That article only says what the hardware can do; it does not confirm support (notice how Fermi is listed in there, despite receiving support only now). I honestly can't find any info about when GCN 1.0 received support.
#35
agent_x007
Last time I checked, I couldn't even launch the API test :)
Also, I'm sorry W1zzard.... ("fix" needs to be fixed): LINK.

Scores for my GTX 580 Lightning (1.5GB):
Time Spy: LINK (Score: 962)
API Overhead test 2.0: LINK
Gears of War 4*:

*Have a problem with a resolution scaling lock... (trying to fix).
#36
GoldenX
bug said:
I have to repeat my question: AMD left GCN 1.0 out initially. When did they add DX12 support for GCN 1.0?
From day one, same with Vulkan.
#37
efikkan
Two pages of posts, and most of them negative. How can Direct3D 12 support for Fermi be a bad thing? If the shoe were on the other foot and this were the competitor, the tone would be completely different. Nvidia offers great driver support, >10 years for their hardware, and even backports some features to their legacy drivers, which far exceeds the driver support of their competitor, who has been known to drop support for 2-3 year-old hardware.

Many still use Fermi hardware; even I have one in daily use. But it seems all of you have missed the most important part of this news: accelerating the adoption of new APIs. Even those who don't own a Fermi card should be really happy about this, since it removes the last reason for developers to spend resources on pre-Direct3D 12 APIs. Accelerating the adoption of new APIs serves everyone, even if the new APIs don't show their full potential on old hardware. This is something worth celebrating.
#38
bug
GoldenX said:
From day one, same with Vulkan.
Thanks for the source. I know for a fact that, at least on Linux, Vulkan was only supported on GCN 2.0 at launch. It has trickled down since, but it still requires the closed-source driver.
#39
GoldenX
On Linux it's different: AMD is implementing a new open driver in parallel with a closed one. AMDGPU is based on the current open Mesa radeonsi, while AMDGPU-PRO uses the Windows OpenGL and Vulkan components, the idea being to get the profile optimizations that professional programs need. The open one has alpha support for GCN 1.0 and beta for GCN 1.1; AMDGPU-PRO has full support, but is a worse driver than the open AMDGPU. Vulkan support on AMDGPU-PRO is a given, as it's the same one used on Windows, but with AMDGPU you have to use the open RADV driver, which is not made by AMD (they did say they will open their Vulkan driver and merge it with RADV, along with OpenCL). As GCN 1.0 is not compatible with AMDGPU by default, you don't get a Vulkan driver unless you use AMDGPU-PRO.
#40
GoldenX
efikkan said:
Two pages of posts, and most of them negative. How can Direct3D 12 support for Fermi be a bad thing? If the shoe were on the other foot and this were the competitor, the tone would be completely different. Nvidia offers great driver support, >10 years for their hardware, and even backports some features to their legacy drivers, which far exceeds the driver support of their competitor, who has been known to drop support for 2-3 year-old hardware.

Many still use Fermi hardware; even I have one in daily use. But it seems all of you have missed the most important part of this news: accelerating the adoption of new APIs. Even those who don't own a Fermi card should be really happy about this, since it removes the last reason for developers to spend resources on pre-Direct3D 12 APIs. Accelerating the adoption of new APIs serves everyone, even if the new APIs don't show their full potential on old hardware. This is something worth celebrating.
It IS great news, but it comes too late, and on an API that is not as good as the one they already said they are not implementing (Vulkan).
Don't forget how Nvidia drops performance optimizations for older generations of "supported" cards: you get a new driver with Fermi or Kepler, but don't expect those 50% improvements in the release notes to apply to your card. Personal experience with a currently "supported" 7600GT.
#41
MrGenius
bug said:
I have to repeat my question: AMD left GCN 1.0 out initially. When did they add DX12 support for GCN 1.0?
I guess at least since Windows 10 was released, or even before then, actually. I've been able to run DX12 programs on my 280X since I first installed one of the earliest pre-release Insider Preview builds. One of the first things I tried was Star Swarm in D3D12 mode, to see how it compared to D3D11 and Mantle modes. Long story short, it worked. And I found that D3D12 was ~2x as fast as D3D11, but still ~50% slower than Mantle on my card.
#42
Red_Machine
Did the GeForce 200 series ever get DX11, or are they still stuck on DX10/10.1?
#43
efikkan
GoldenX said:
It IS great news, but it comes too late, and on an API that is not as good as the one they already said they are not implementing (Vulkan).
Don't forget how Nvidia drops performance optimizations for older generations of "supported" cards: you get a new driver with Fermi or Kepler, but don't expect those 50% improvements in the release notes to apply to your card. Personal experience with a currently "supported" 7600GT.
Nobody is expecting major architecture specific optimizations on >5 year old architectures, but the fact that you still get support and some new features and optimizations is far better than the competition.

Red_Machine said:
Did the GeForce 200 series ever get DX11, or are they still stuck on DX10/10.1?
No, pre-Fermi never got Direct3D 11 or OpenGL 4.x, since major hardware features are lacking. Nvidia does still support all the parts of OpenGL 4.x it can on GeForce 8000->300, though.

The reason Nvidia is able to implement Direct3D 12 on Fermi, even though it's very different from Kepler, is that there are very few new hardware requirements from Direct3D 11 to 12 (base requirements). Since Fermi uses different modules internally in the driver, most of the support had to be rewritten from scratch, and since it's a low-priority issue, it has taken some time. But Direct3D 12 will never work as well on Fermi as on newer architectures.
#44
GoldenX
Red_Machine said:
Did the GeForce 200 series ever get DX11, or are they still stuck on DX10/10.1?
If the GPU is designed for DX10/OGL3, there is no changing that. You can ask the Linux Mesa team about emulating OpenGL 4.0; good luck with tessellation.
DX12 is DX11 with a low-overhead design, which is why it can be implemented on old DX11 hardware; same with Vulkan, which can be implemented on anything that can run OpenGL 4.2 (we are at 4.5+). On the other hand, DX11 is a bunch of extensions on top of DX10; if you want to run DX11 apps on a card designed for DX10, you have to emulate them on the CPU, with the corresponding performance penalty. That is why it's not possible to make an 8800GT or an HD3850 run DX11.

efikkan said:
Nobody is expecting major architecture specific optimizations on >5 year old architectures, but the fact that you still get support and some new features and optimizations is far better than the competition.
Old architectures don't even get game optimizations; they're practically zombies in the drivers. It's no different from having a frozen old driver. DX12 is not a "new feature": it was promised years ago, and it is very late compared to the rest of the supported archs.
#45
efikkan
GoldenX said:

Old architectures don't even get game optimizations; they're practically zombies in the drivers.
The usefulness of game optimizations is exaggerated; they usually only help edge cases. In fact, I would prefer vendors stopped doing them altogether.
Nevertheless, they add support for new versions of operating systems: Windows, Linux, and BSD.

GoldenX said:
It's no different from having a frozen old driver. DX12 is not a "new feature": it was promised years ago, and it is very late compared to the rest of the supported archs.
Fermi uses a different code path from Kepler and newer, so it is a new feature for the Fermi driver. There is a reason Nvidia's Direct3D 12 performance has improved a lot since its introduction: tweaking the API implementation in the architecture-specific driver code paths. Creating optimal code paths for Vulkan, Direct3D, and OpenGL for each architecture is the optimization that matters; it benefits all software, unlike random (stupid) tweaks to a shader program of a specific game, which might help 7% in that edge case.
#46
GoldenX
Yeah, the thing with Linux or BSD is that the closed drivers are always playing catch-up with kernel changes. But yeah, if you want proper support outside of Windows, go Nvidia; there is no other way.
#47
0x4452
agent_x007 said:
Last time I checked, I couldn't even launch the API test :)
Also, I'm sorry W1zzard.... ("fix" needs to be fixed): LINK.

Scores for my GTX 580 Lightning (1.5GB):
Time Spy: LINK (Score: 962)
API Overhead test 2.0: LINK
Gears of War 4*:

*Have a problem with a resolution scaling lock... (trying to fix).
I have one of those somewhere in the basement too. I got it second hand :-)

To everyone saying the AMD 7xxx series supported DX12 from the start, unlike Fermi: Fermi was released about the same time as the AMD 5xxx series of GPUs, which still does not support DX12, fwiw.
#48
SaltyFish
TheOne said:
There was someone harassing the GeForce forum for most of the last 2 years; I don't remember who, but he would always ask about Fermi getting DX12 support in driver threads.
At that rate, Vulkan support for XP isn't far behind!

On a more serious note, it's nice to see hardware that's been out of production for more than five years get a feature boost. It understandably doesn't happen often (poor return on investment). Now to find a decent DX12 game that can run on a 560 Ti... oh wait, I'm not on Win10. So much for that I guess. Still nice to see this though.
#49
eidairaman1
The Exiled Airman
Who uses those hot bricks still?

Imho they rate with the 5800 and 2900.
#50
agent_x007
0x4452 said:
Fermi was released about the same time as the AMD 5xxx series of GPUs, which still does not support DX12, fwiw.
Fermi was LATE to the party.
AMD's 5870/5850 were on the market for half a year before the first Fermi was released (September 2009 vs. March 2010).
So a fair comparison would be between it and the Radeon HD 6000 series.
Still, those Radeons can't have DX12 support, because doing it for the VLIW5/4-based architectures is too expensive and unnecessary at this point from AMD's point of view.

I'm really glad NV added DirectX 12 to Fermi, because it means you don't need a PCI-e 3.0 GPU to have it :)