
Is Radeon REALLY better for old PCs? | Driver Overhead

GTX 660 3GB here; I have a spare HD 7970, which would be a better choice, I guess.
HD 7970 is always a winner unless you are running severely nVidia-biased software. It's stronger than GTX 680 in games from 2016 and more recent ones.
 
HD 7970 is always a winner unless you are running severely nVidia-biased software. It's stronger than GTX 680 in games from 2016 and more recent ones.
Given that Kepler has some of the same quirks that RDNA3 has, it isn't a surprise and in fact the 7970 is capable of giving the 780 a run for its money in newer games.
 
Like I mentioned on the Valley thread, AMD's DirectX 11 driver operates in an immediate context. This means that it issues commands through a limited number of CPU threads, primarily just one.
This is outdated info; it's obvious by now that the driver works very comparably to what Nvidia achieved some time ago (in fact a very long time ago). The big performance increases in older games that Radeon achieved over a year ago with the optimized driver they delivered make it obvious that AMD changed a lot in their drivers, i.e. made them more Nvidia-like. This is about DX11 and older games, especially DX11, NOT low-level API games, which are basically optimized to run on Radeon.
It isn't a screw-up, it's a tradeoff that Nvidia opted for, (wisely) deducing that in the future, CPUs with an ample number of threads and high clock frequencies would be available. This means that DirectX 11 games
Not much to do with "wisdom". When Nvidia developed their hardware and software there was no DX12 / VK / Mantle; they did what made sense at the time, and it later became sub-optimal. This is why Radeon scales better with weaker CPUs in DX12 / VK / Mantle, i.e. low-level APIs that use their superior hardware scheduler, which was developed FOR Mantle, a precursor of DX12 and VK and a trailblazer. In the end, AMD's forward-looking hardware prevailed in this regard. Old news to everyone who regularly digests tech reviews.

You're basically giving NVIDIA more credit than they deserve, but this doesn't make much sense.

Given that Kepler has some of the same quirks that RDNA3 has, it isn't a surprise and in fact the 7970 is capable of giving the 780 a run for its money in newer games.
That generation generally outscaled the GTX 600/700 series (aside from the GK110, obviously) easily down the line... it has better DX12 feature support, so the tech is simply more modern, and more VRAM in many cases. The GTX 680 is a clear loser vs. the 7970 in the long run, but it was faster and more efficient when it came out. Again, AMD's hardware was too forward-looking; the shaders simply weren't properly utilized by the old games back then, and 3 GB of VRAM wasn't an upside - yet.
 
Last edited:
This is outdated info; it's obvious by now that the driver works very comparably to what Nvidia achieved some time ago (in fact a very long time ago). The big performance increases in older games that Radeon achieved over a year ago with the optimized driver they delivered make it obvious that AMD changed a lot in their drivers, i.e. made them more Nvidia-like. This is about DX11 and older games, especially DX11, NOT low-level API games, which are basically optimized to run on Radeon.

Not much to do with "wisdom". When Nvidia developed their hardware and software there was no DX12 / VK / Mantle; they did what made sense at the time, and it later became sub-optimal. This is why Radeon scales better with weaker CPUs in DX12 / VK / Mantle, i.e. low-level APIs that use their superior hardware scheduler, which was developed FOR Mantle, a precursor of DX12 and VK and a trailblazer. In the end, AMD's forward-looking hardware prevailed in this regard. Old news to everyone who regularly digests tech reviews.

You're basically giving NVIDIA more credit than they deserve, but this doesn't make much sense.


That generation generally outscaled the GTX 600/700 series (aside from the GK110, obviously) easily down the line... it has better DX12 feature support, so the tech is simply more modern, and more VRAM in many cases. The GTX 680 is a clear loser vs. the 7970 in the long run, but it was faster and more efficient when it came out. Again, AMD's hardware was too forward-looking; the shaders simply weren't properly utilized by the old games back then, and 3 GB of VRAM wasn't an upside - yet.

Bring up GPU-Z: does your Radeon card support this? If not, then it is not outdated information.

[GPU-Z screenshot: Advanced > DirectX 11 tab showing driver command list support]


We are talking about DirectX 11 contexts, on which DirectX 12 and Vulkan have no bearing. Nvidia's driver did not always support this feature either; it was the star feature of the 337.50 beta driver released in March 2014, which claimed huge performance improvements on a range of hardware back then. It's a feature many took for granted at the time, but after years of refinement, with hardware evolving alongside it, it has caused a pretty dramatic shift in API overhead performance.
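
For anyone wondering what "driver command lists" and a "deferred" context actually look like in code, here is a minimal sketch of the D3D11 pattern: record on a deferred context (normally on a worker thread), then replay on the immediate context. The `device`, `immediateContext` and `RecordDrawCalls` names are assumptions/hypothetical helpers; this only shows the API shape, not how any particular driver or game implements it.

```cpp
// Sketch: record commands on a deferred context, replay on the immediate one.
// 'device', 'immediateContext' and 'RecordDrawCalls' are assumed to exist.
ID3D11DeviceContext* deferred = nullptr;
if (SUCCEEDED(device->CreateDeferredContext(0, &deferred)))
{
    RecordDrawCalls(deferred);            // state changes + draws are only recorded here

    ID3D11CommandList* commandList = nullptr;
    deferred->FinishCommandList(FALSE, &commandList);    // close the recording

    // Playback has to happen on the immediate context (the render thread).
    immediateContext->ExecuteCommandList(commandList, TRUE);

    commandList->Release();
    deferred->Release();
}
```

The GPU-Z flag above tells you whether the driver handles this natively; if it doesn't, the D3D11 runtime emulates the feature in software and the expensive per-command driver work still ends up on a single thread, which is exactly the bottleneck being discussed.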

It got so extreme that even low-end NVIDIA GPUs can produce and manage far more draw calls than AMD GPUs several generations newer. This is why the UL/Futuremark guys phased out the 3DMark API overhead test: it's just not representative of the GPU's overall performance and doesn't serve any meaningful comparison purpose, because it's so incredibly skewed towards Nvidia, and it gets ever more severe the faster the CPU involved is. I'm confident enough of this that I'll issue you the challenge of beating this score with a 7900 XTX and the fastest CPU you can find; it's from an old Fermi GTX 580 on a Ryzen 3900XT, and the score I have with my 13900KS and old 3090 is almost twice as high.
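
To make the draw-call angle concrete, this is roughly all an API overhead style test boils down to: hammer the driver with as many trivial draw calls as possible and count them. A rough sketch only, assuming `immediateContext` and `swapChain` already exist with shaders and buffers bound (hypothetical setup, nothing to do with the actual 3DMark code):

```cpp
#include <d3d11.h>
#include <windows.h>
#include <cstdio>

// Rough sketch: counts how many trivial draw calls the CPU/driver can push
// per second. Assumes the pipeline (shaders, input layout, vertex buffer,
// render target) is already bound on 'immediateContext'.
void MeasureDrawThroughput(ID3D11DeviceContext* immediateContext, IDXGISwapChain* swapChain)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    const int kDraws = 100000;
    for (int i = 0; i < kDraws; ++i)
        immediateContext->Draw(3, 0);     // one tiny triangle per call, on purpose

    swapChain->Present(0, 0);             // flush the queued work to the driver

    QueryPerformanceCounter(&end);
    const double seconds = double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
    printf("~%.0f draw calls per second\n", kDraws / seconds);
}
```

Roughly speaking, a driver that can offload the per-draw work to its own worker threads will post much higher numbers here than one that has to do everything on the submitting thread.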


Now, Mantle? Why do AMD fans keep bringing up that obsolete API, which was supported in a grand total of maybe five games, like it's the bedrock of modern graphics and an outstanding red team achievement? It failed to gain ANY traction back in the day, was discontinued on newer generations of AMD's own hardware (it did not work on anything newer than the R9 Fury X and was entirely removed from the old cards that did support it around 2019), and was "donated" to the Khronos Group, who developed the first few versions of Vulkan on top of its concepts. Calling Vulkan as we have it today "the evolution of Mantle" is a stretch beyond the wildest imagination, to be honest. OpenGL predated DirectX; does that mean every graphics API is derived from it? Come on now.

Like I said in my initial post and in this one: it's largely irrelevant to modern APIs, where both vendors are much closer together. Often, AMD is even ahead. But DX11 is Nvidia land.
 
Yeah, it's more of a bottleneck if anything. Older CPUs can't feed today's faster GPUs quickly enough, hence lower FPS. I tried this on an FX CPU with an RX 580 8GB.
 
I don't even have a Radeon card. We also don't need to talk further; I see where this is going. I skipped the rest.

Suit yourself, but make an argument if you're going to tell other people that they're wrong. If you do that and bring nothing to the table, might as well not say anything at all.

Particularly when whatever you're trying to convey has already been covered: rewriting the UMD with the PAL abstraction layer optimizes communication between the hardware and the DirectX 11 KMD, optimizing software calls so they are ordered more efficiently. These huge gains have been even more visible in their OpenGL driver rewrite, which leapfrogged AMD from a joke to being actually faster than NVIDIA's implementation - and their OGL driver was their pride and joy. But it does not implement driver command lists with deferred contexts; that's a battle in itself, and AMD reaped what they sowed when they chose not to support it when first developing their implementation (it was, and is, considered an optional feature of the API).
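
For what it's worth, the same flag GPU-Z reads can be queried in a few lines through ID3D11Device::CheckFeatureSupport; a small self-contained sketch (Windows/MSVC assumed):

```cpp
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main()
{
    // No swap chain needed for a capability query; create a device on the default adapter.
    ID3D11Device* device = nullptr;
    D3D_FEATURE_LEVEL level;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, &level, nullptr)))
        return 1;

    // D3D11_FEATURE_THREADING reports whether the *driver* natively supports
    // command lists and concurrent resource creation. If not, the runtime
    // emulates them instead (the optional-feature situation described above).
    D3D11_FEATURE_DATA_THREADING threading = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &threading, sizeof(threading));

    printf("Driver concurrent creates: %s\n", threading.DriverConcurrentCreates ? "yes" : "no");
    printf("Driver command lists:      %s\n", threading.DriverCommandLists ? "yes" : "no");

    device->Release();
    return 0;
}
```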
 
Last edited:
From what I understand, DX9 games work better on Nvidia, as Nvidia did good optimisation there. DX11 I'm not sure about; DX12 seems better for CPU utilisation on AMD.

So I think it depends on what games you're playing.
 
So I think it depends on what games you're playing.
Of course it does. Not all games are created the same in how they work with different cards and DirectX versions.
 
From what I understand, DX9 games work better on Nvidia, as Nvidia did good optimisation there. DX11 I'm not sure about; DX12 seems better for CPU utilisation on AMD.

So I think it depends on what games you're playing.

Older APIs such as DX9 and 11 are currently better on Nvidia. OpenGL will depend, but AMD's new OpenGL on PAL tends to be faster, if less accurate. It is a much newer code base that AMD has rewritten from scratch, so this last part is quite forgivable. Lower-level APIs such as Vulkan and DirectX 12 are equal or better on AMD as far as performance overhead goes.
 
HUB already covered this, but taking a fresh look at it is always good.

Right now, on my i7 920 overclocked from its stock base/turbo of 2.66/2.93 GHz to 3.8 GHz on all cores, with 12 GB of DDR3 RAM at 1444 MHz, I'm getting FAR better scaling/performance with my RX 7900 XT than with my RTX 4080 Super. It's not even close; the 7900 XT is dismantling the 4080 Super on this X58 platform.
 
Right now, on my i7 920 overclocked from its stock base/turbo of 2.66/2.93 GHz to 3.8 GHz on all cores, with 12 GB of DDR3 RAM at 1444 MHz, I'm getting FAR better scaling/performance with my RX 7900 XT than with my RTX 4080 Super. It's not even close; the 7900 XT is dismantling the 4080 Super on this X58 platform.

Nvidia probably no longer optimizes their drivers for something as weak as the lowest-end Core i7 from 16 years ago. It probably won't be as bad if you test on an i7-990X, but I wouldn't expect miracles either. Those CPUs shouldn't be used by anyone trying to play video games nowadays.
 
Nvidia probably no longer optimizes their drivers for something as weak as the lowest-end Core i7 from 16 years ago.

lol, since when does nvidia optimise CPU performance? :nutkick:
 
lol, since when does nvidia optimise CPU performance? :nutkick:

Read the necro'd thread. It contains the answers.
 
Read the necro'd thread. It contains the answers.

I don't understand it that way. There is no way they can optimise for different CPU architectures. It's simply that their driver waits on the CPU too much, no matter which one it is...
 
I don't understand it that way. There is no way they can optimise for different CPU architectures. It's simply that their driver waits on the CPU too much, no matter which one it is...

Because you have no grasp of the technical concept behind it... of course there is a way. By using more advanced instructions that are not present on earlier generation chips.
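
Just to illustrate the general idea (a hypothetical example, not a claim about what Nvidia's driver actually does internally): any piece of software can check CPUID at runtime and only take a faster code path when the instructions are there. A Nehalem-era i7 920, for instance, predates AVX entirely, so it would always land on the baseline path.

```cpp
#include <intrin.h>   // MSVC __cpuid / __cpuidex
#include <cstdio>

// Hypothetical dispatch check: does this CPU expose AVX2?
// (A production check would also verify OS support via OSXSAVE/XGETBV.)
bool CpuHasAvx2()
{
    int regs[4] = {};
    __cpuid(regs, 0);
    if (regs[0] < 7) return false;        // CPUID leaf 7 not available (e.g. Nehalem)
    __cpuidex(regs, 7, 0);
    return (regs[1] & (1 << 5)) != 0;     // EBX bit 5 = AVX2
}

int main()
{
    printf(CpuHasAvx2() ? "AVX2 code path\n" : "baseline SSE2 code path\n");
    return 0;
}
```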
 
Because you have no grasp of the technical concept behind it... of course there is a way. By using more advanced instructions that are not present on earlier generation chips.

I think ARF is still right regarding the CPU usage; look at this older post of mine, for example:
The thing is, it seems that this performance delta doesn't only show up when low-end CPUs are maxed out (or very close to it) on all threads, as I came across a video in which an RX 480 is seen beating the RTX 3060 in average and min* framerate in Hogwarts Legacy and in The Witcher 3 remaster (in the latter case also in the 1% lows) with a Ryzen 5 3600X:


*PS: I still find it weird that the min figure is higher than the 1% low; maybe "low average", "sustained min" or something like that would be a better label for what it measures, because when I read a minimum figure I expect the absolute lowest frametime translated into frames per second.

Some time has passed since I wrote that, and maybe Nvidia changed something in their drivers or in newer GPU hardware; that I don't know. Still, an RX 480 should not be able to beat an RTX 3060 in any case (except maybe in some special AMD software shenanigan, but certainly not in mainstream videogames), especially if the CPU is entry-level / mid-low range rather than pure low-end.
 
Last edited:
I think ARF is still right regarding the CPU usage; look at this older post of mine, for example:

Some time has passed since I wrote that, and maybe Nvidia changed something in their drivers or in newer GPU hardware; that I don't know. Still, an RX 480 should not be able to beat an RTX 3060 in any case (except maybe in some special AMD software shenanigan, but certainly not in mainstream videogames), especially if the CPU is entry-level / mid-low range rather than pure low-end.

That's... because it doesn't, unless there is something seriously wrong with the RTX 3060 system.
 
The situation is always evolving, but in short, yes, Radeon's design is friendlier towards machines with limited CPU power, with the tradeoff of having less potential when ample CPU power is available. Like I mentioned on the Valley thread, AMD's DirectX 11 driver operates in an immediate context. This means that it issues commands through a limited number of CPU threads, primarily just one. These commands are then processed by the GPU's hardware command processor itself. It is highly optimized, and thus quite fast, but there are situations where this approach doesn't work as well, notably when you're talking about Bethesda's Creation Engine games.

Nvidia's driver, on the other hand, is capable of detecting when an immediate context is created, generating driver command lists and deferring these commands across multiple CPU threads. This results in an increased CPU load, but significantly higher potential, as it will be able to handle a much, much, *much* higher number of draw calls than an immediate context would. While a little older now, this is a good technical explanation by Intel; I had been talking about this with another forum member shortly before you joined:


Support for driver command lists is detected by GPU-Z and shown in the Advanced > DirectX 11 tab:

[Attachment: GPU-Z screenshot of the Advanced > DirectX 11 tab]

3DMark's old API overhead test is a useful tool to show the result of this behavior; you will find that NVIDIA GPUs have substantially higher performance in that benchmark, despite that not being reflected in real-world applications. Thus, the UL guys dropped support for it, as it's a very technical tool which does not really serve to compare different setups. The DirectX SDK samples also contain examples where this is quite apparent.

In DirectX 12 or Vulkan, this is irrelevant: the two vendors should be much closer together, with NVIDIA's driver again using a little more CPU in exchange for slightly stronger performance where it can, but nowhere near the drastic difference between vendors when it comes to DirectX 11. This is because they are low-level APIs and this abstraction is done in a completely different manner. Also, for the record: Intel does not support driver command lists in their Gen 12.2 architecture integrated graphics (such as UHD 770). I'm unsure if Arc does, but I would guess it does not either. I would appreciate it if someone could clarify.
Wow
 
Linux and BSD have the advantage for old Radeon GPUs.
At some point, AMD turns its focus to new cards and the development of drivers for old cards is reduced or completely stopped.
With Linux, these old AMD GPU drivers can receive updates 20 years later and this has already happened occasionally.

I used to have a PowerColor HD 5830. In its day, this was a powerful GPU.
11 years after the purchase date, I was still able to use this GPU to run Dota 2 on Medium settings in Linux, which I thought was very impressive at the time.
 
This is wild, because it used to be that AMD had way bigger driver overhead. I wonder how Nvidia screwed this up.
I think the hardware caught up with the old reality, more so than anything else.

We have much better single-thread performance on CPUs now, so their DX11 drawbacks are mitigated, while in DX12 the issues don't exist and there are sufficient cores/threads.
 