
AMD Radeon VII Retested With Latest Drivers

Prey (DX11, NVIDIA Bias, 2017)
Oh come on. This game was launched with Vega as a confirmation of AMD-Bethesda cooperation.
Counting Prey as favouring AMD (i.e., it could be even worse), it's:
AMD: 4
Nvidia: 14
undecided: 4

AMD: 4/18 ~= 22%
Nvidia: 14/18 ~= 78%
which is basically the market share these companies have in discrete GPU.

Do you think it should be 50:50? Or what? And why?
 
Really, all they should do is focus on optimizing for the major game engines.
No, not at all. They should focus on optimizing the driver in general, not on workarounds to "cheat" benchmarks.

Many have misconceptions about what optimizations really are. Games are rarely specifically optimized for targeted hardware, and likewise drivers are rarely optimized for specific games at their core. The few exceptions are cases dealing with major bugs or bottlenecks.

Games should not be written for specific hardware; they are written using vendor-neutral APIs. Game developers should focus on optimizing their engine for the API, and driver developers should focus on optimizing their driver for the API, because when they try to cross over, that's when things start to get messy. When driver developers "optimize" for games, they usually manipulate general driver parameters and replace some of the game's shader code, and in most cases it's not so much optimization as them trying to remove stuff without you seeing the degradation in quality. Games have long development cycles and are developed and tested against API specs, so it's obviously problematic when a driver suddenly diverges from spec and manipulates the game, and generally this causes more problems than it solves. If you have ever experienced a new bug or glitch in a game after a driver update, then you now know why…
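To make that concrete, here is a toy sketch of what a per-game "optimization" profile conceptually does: detect the running executable, then quietly override parameters or substitute shaders the game asked for. This is purely illustrative Python with made-up names, not any vendor's actual driver code, which typically does this in native code inside the shader compiler.

```python
# Toy illustration of per-game driver "optimization" profiles.
# Hypothetical names throughout -- real drivers do this deep inside
# their shader compilers, not in a lookup table like this.

GAME_PROFILES = {
    "game_a.exe": {
        "tessellation_cap": 16,  # silently clamp what the game requested
        "shader_overrides": {
            # swap the game's own shader for a cheaper hand-tuned one
            "water_ps_v2": "water_ps_v2_fast",
        },
    },
}

def apply_profile(exe_name, requested_settings, shader_name):
    """Return the settings and shader the driver will actually use."""
    profile = GAME_PROFILES.get(exe_name, {})
    settings = dict(requested_settings)
    # Override general parameters behind the game's back
    if "tessellation_cap" in profile:
        settings["tessellation"] = min(settings.get("tessellation", 64),
                                       profile["tessellation_cap"])
    # Swap in a replacement shader if one exists for this game
    shader = profile.get("shader_overrides", {}).get(shader_name, shader_name)
    return settings, shader

if __name__ == "__main__":
    print(apply_profile("game_a.exe", {"tessellation": 64}, "water_ps_v2"))
    # -> ({'tessellation': 16}, 'water_ps_v2_fast')
```

The point is that none of this touches the game itself; the driver diverges from what the game (and the API spec) asked for, which is exactly why a driver update can introduce new glitches in a game that hasn't changed.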

This game "optimization" stuff is really just about manipulating benchmarks, and have been going on since the early 2000s. If only the vendors spend this effort on actually improving their drivers instead, then we'll be far better off!

Isn't it the case - the problem for AMD - that ALL games are tested and optimized for Nvidia GPUs?
Not at all. Many AAA titles are developed exclusively for consoles and then ported to PC; if anything, there are many more games with a bias favoring AMD than Nvidia.

Most people don't understand what causes games to be biased. First of all, a game is not biased just because it scales better on vendor A than vendor B. Bias is when a game has severe bottlenecks or special design considerations, either intentional or "unintentional", that give one vendor a disadvantage it shouldn't have. Some games scaling better on one vendor and other games scaling better on the other isn't a problem by itself; games are not identical, and different GPUs have various strengths and weaknesses, so we should use a wide selection to determine real-world performance. Significant bias happens when a game is designed around one specific feature and scales badly on different hardware configurations. A good example of this is games which are built for consoles but don't really scale well with much more powerful hardware. But in general, games are much less biased than most people think, and just because a benchmark doesn't confirm your presumptions doesn't mean the benchmark is biased.

AMD perf improves over time; Nvidia falls behind not only AMD, but also its own newer cards.
Over the past 10+ years, every generation has improved ~5-10% over its lifecycle.
AMD is no better at driver improvements than Nvidia, this myth needs to die.

hahaha....
but I hate their (NVIDIA's) approach of reducing performance on older GPUs through driver updates.
FUD which has been disproven several times. I don't believe Nvidia has ever intentionally sabotaged older GPUs.
 
The game list is wrong. The actual games tested in Radeon VII review along with year, API and engine name are:
  • 2018 - DX11 - Assassin's Creed Odyssey (AnvilNext 2.0)
  • 2018 - DX11 - Battlefield V (Frostbite 3)
  • 2016 - DX11 - Civilization VI (Firaxis)
  • 2018 - DX11 - Darksiders 3 (Unreal Engine 4)
  • 2016 - DX12 - Deus Ex: Mankind Divided (Dawn)
  • 2017 - DX11 - Divinity Original Sin II (Divinity Engine)
  • 2018 - DX11 - Dragon Quest XI (Unreal Engine 4)
  • 2018 - DX11 - F1 2018 (EGO Engine 4.0)
  • 2018 - DX11 - Far Cry 5 (Dunia)
  • 2017 - DX11 - Ghost Recon Wildlands (AnvilNext)
  • 2015 - DX11 - Grand Theft Auto V (RAGE - Rockstar Advanced Game Engine)
  • 2017 - DX11 - Hellblade: Senua's Sacrifice (Unreal Engine 4)
  • 2018 - DX11 - Hitman 2 (Glacier 2.0)
  • 2018 - DX11 - Just Cause 4 (Apex)
  • 2018 - DX11 - Monster Hunter World (MT Framework)
  • 2017 - DX11 - Middle-earth: Shadow of War (LithTech)
  • 2015 - DX11 - Rainbow Six: Siege (AnvilNext)
  • 2018 - DX12 - Shadow of the Tomb Raider (Foundation)
  • 2018 - DX12 - Strange Brigade (Asura Engine)
  • 2015 - DX11 - The Witcher 3 (REDengine 3)
  • 2017 - Vulkan - Wolfenstein II (idTech6)
 
The game list is wrong. The actual games tested in Radeon VII review along with year, API and engine name are:
  […]
  • 2017 - Vulkan - Wolfenstein II (idTech5)
Isn't Wolfenstein idTech 7 or 6+? I remember Doom was idTech 6, and then the devs said Wolfenstein was a big technical advancement over that. It supports half precision and variable rate shading.
 
This is an accusation you could take to court and become a millionaire. Can you prove it?

Accusation? Really?
I tested various driver versions on my GPU and benchmarked it. Pfftt..
 
Looking at the list of games:
- Assassin's Creed, Battlefield, Civilization, Far Cry, GTA, Just Cause, Tomb Raider, Witcher and Wolfenstein belong in the list as the latest iterations of long-running game series, along with their engines. The same applies to Hitman and possibly F1.
- Strange Brigade as a game is a one-off, but its engine is a newer iteration of the one behind Sniper Elite 4, which is one of the best DX12 implementations to date.
- Divinity: Original Sin 2, Monster Hunter and Shadow of War are something of one-offs, as relevant and popular games running unique engines.
- Hellblade is a game that is artistically important and actually has a good implementation of Unreal Engine 4.
- R6: Siege is a unique case: despite its release year it is a current and competitive game that is fairly heavy on GPUs.
- Deus Ex: Mankind Divided is a bit of a concession to AMD and DX12. It runs a modified version of the same Glacier 2 engine that is behind the Hitman games.
- I am not too sure about the choice or relevance of Darksiders 3, Dragon Quest XI and Wildlands. The latest big UE4 releases and one of Ubisoft's non-Assassin's Creed open-world games?

It is not productive to color games based on whether they use Nvidia GameWorks. The main problem with GameWorks, as far as Nvidia vs AMD is concerned, was that it was closed source, making it impossible for AMD to optimize for it if needed. GameWorks has been open source since 2016 or so. AMD does not have a branded program in the same way; GPUOpen and the tools and effects provided through it are non-branded but are present in a lot of games.
Isn't Wolfenstein idTech 7 or 6+? I remember Doom was idTech 6, and then the devs said Wolfenstein was a big technical advancement over that. It supports half precision and variable rate shading.
Wolfenstein II is idTech6. Fixed. Thanks.
 
Good to see they sorted things out. Would have been nice to have this upfront considering the architecture isn't exactly new or anything.

It seems to me that Nvidia makes better graphics cards than AMD, which leaves those in the red camp having a difficult time justifying exactly why they are there.

The real problem is that anyone cares. Grow up and move on. (Not directed at you.)

I tested various driver versions on my GPU and benchmarked it.

They also could have introduced an issue accidentally and didn't circle back because it is not current gen.
 
I don't think there's much in it

[Attached chart: GTX 780 Ti vs R9 290X vs Titan performance across driver versions]


*There's no performance degradation for the 780 Ti; there's a very slight improvement.
*780 Ti vs 290X on 2016 drivers: the 780 Ti wins 18 runs out of 28, the 290X 10 out of 28.
 
Both AMD and Nvidia have a list of featured games:
https://www.amd.com/en/gaming/featured-games
https://www.nvidia.com/en-us/geforce/games/latest-pc-games/#gameslist

I doubt W1zzard had that in mind or checked it when choosing games, but there are 6 games benchmarked from each vendor's featured list, and the games were not what I really expected:
AMD: Assassin's Creed Odyssey, Civilization VI, Deus Ex: Mankind Divided, Far Cry 5, Grand Theft Auto V, Strange Brigade.
Nvidia: Battlefield V, Darksiders 3, Divinity Original Sin II, Dragon Quest XI, Monster Hunter World, Shadow of the Tomb Raider.
:)
 
From https://www.techpowerup.com/reviews/AMD/Radeon_VII/

Assassin's Creed Origins (NVIDIA Gameworks, 2017)

Battlefield V RTX (NVIDIA Gameworks, 2018)

Civilization VI (2016)

Darksiders 3 (NVIDIA Gameworks, 2018), old game remaster, where's Titan Fall 2.

Deus Ex: Mankind Divided (AMD, 2016)

Divinity Original Sin II (NVIDIA Gameworks, 2017)

Dragon Quest XI (Unreal 4 DX11, large NVIDIA bias, 2018)

F1 2018 (2018), Why? Microsoft's Forza franchise is larger than this Codemaster game.

Far Cry 5 (AMD, 2018)

Ghost Recon Wildlands (NVIDIA Gameworks, 2017), missing Tom Clancy's The Division

Grand Theft Auto V (2013)

Hellblade: Senua's Sacrifice (Unreal 4 DX11, NVIDIA Gameworks)

Hitman 2

Monster Hunter World (NVIDIA Gameworks, 2018)

Middle-earth: Shadow of War (NVIDIA Gameworks, 2017)

Prey (DX11, NVIDIA Bias, 2017)

Rainbow Six: Siege (NVIDIA Gameworks, 2015)

Shadows of Tomb Raider (NVIDIA Gameworks, 2018)

SpellForce 3 (NVIDIA Gameworks, 2017)

Strange Brigade (AMD, 2018),

The Witcher 3 (NVIDIA Gameworks, 2015)

Wolfenstein II (2017, NVIDIA Gameworks), Results different from https://www.hardwarecanucks.com/for...a-geforce-rtx-2080-ti-rtx-2080-review-17.html when certain Wolfenstein II map exceeded RTX 2080'
Except it's AC Odyssey and it's AMD, but good effort regardless...
 
You just need to stop projecting. It isn't hard.



I mean, just how shameless can it become, seriously?
You got what performance upfront, when the 960 beat the 780 Ti ($699)? Come again?

AMD perf improves over time; Nvidia falls behind not only AMD, but also its own newer cards.
As the card you bought gets older, NV doesn't give a flying sex act.
It takes quite a bit of twisting to turn this into something positive.

What's rather unusual this time is AMD being notably worse on the perf/$ front, at least with the game list picked at TPU.

The 290X was slower than the 780 Ti at launch, but it cost $549 vs $699, so there goes "I get 10% at launch" again.

The Kepler architecture aged badly for some reason, but Maxwell has aged as it should.
The 1060 and 980 were at the same level of performance back in 2016, and they still are in 2019.
Nothing has changed (except for games that need more than 4GB of VRAM).

I don't think someone who spends $700 on a card even cares about its performance after ~3 years or so; a high-end owner needs high-end performance and generally upgrades sooner than a mid-range user.
 
I've heard of some sort of undervolting that greatly improves the card's thermals. The whole process seems very easy, so why is no one bothering to use it?
 
While all of this is very interesting, I fail to see how it impacts this GPU. New thread maybe? Perhaps some conspiracy can be brought to light from it.
 
The Kepler architecture aged badly for some reason, but Maxwell has aged as it should.
The 1060 and 980 were at the same level of performance back in 2016, and they still are in 2019.
Nothing has changed (except for games that need more than 4GB of VRAM).

Kepler aged badly because of VRAM, and that is exactly where AMD had more to offer in the high-end at the time.

They had a 7970 with 3GB VRAM to compete with GTX 670/680 2GB.
And later they had a 290X with 4GB VRAM to compete with GTX 780/780ti 3GB.

At the same time, mainstream resolution started slowly moving to 1440p as Korean IPS panels were cheap overseas and many enthusiasts imported one. This heavily increased VRAM demands, alongside the consoles being released with roughly 6GB of addressable memory, which meant the mainstream would quickly move to higher VRAM demands, and it happened across just 1.5 generations of GPUs; even in the Nvidia camp the VRAM almost doubled across the whole stack, and then doubled again with Pascal. That is why people are liable to think AMD cards 'aged well' and Nvidia cards lost performance over time. This culminated in the release of the 3.5GB 'fast' VRAM GTX 970. That little bit of history ALSO underlines why AMD now releases a 16GB HBM card. They are banking on the idea that people THINK it might double again over time, which is why you can find some people advocating the 16GB as a good thing for gaming. And of course there is the expense of having to alter the card to consider.

If any supporter of Red needed confirmation bias, there it is :toast:. But it doesn't make it any less of an illusion that Nvidia drivers handicap performance over time.
 
I've heard of some sort of undervolting that greatly improves the card's thermals. The whole process seems very easy, so why is no one bothering to use it?
They are... but not in reviews... the majority of people don't bother. There is also the question of why anyone should have to do this in the first place.
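For anyone curious what the process actually looks like: on Windows it is a few sliders in Wattman, and on Linux the amdgpu driver exposes the same controls through sysfs. Below is a minimal sketch of the Linux route, assuming overdrive is enabled via the amdgpu.ppfeaturemask boot parameter and the script runs as root; the clock/voltage numbers are placeholders, since every chip needs its own stable values found by lowering in small steps and stress-testing between changes.

```python
# Minimal undervolting sketch for a Radeon VII on Linux (amdgpu driver).
# Assumes overdrive is enabled (amdgpu.ppfeaturemask) and root privileges.
# The values below are examples only -- find your own stable undervolt.

OD_FILE = "/sys/class/drm/card0/device/pp_od_clk_voltage"

def od_write(cmd: str) -> None:
    """Write a single overdrive command to the sysfs interface."""
    with open(OD_FILE, "w") as f:
        f.write(cmd + "\n")

if __name__ == "__main__":
    # Print the current clock/voltage curve and allowed ranges first
    with open(OD_FILE) as f:
        print(f.read())

    od_write("vc 2 1750 1020")  # top voltage-curve point: 1750 MHz at 1020 mV (example)
    od_write("c")               # commit the new curve
```

Why it can help: lower voltage means lower power draw at the same clocks, which is where the reported thermal improvement comes from.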
 
Low quality post by THANATOS
Kepler aged badly because of VRAM, and that is exactly where AMD had more to offer in the high-end at the time. […]

True and untrue.
It's not just VRAM. Yeah, VRAM requirements have risen significantly since then, but don't forget you can easily remove the VRAM bottleneck by lowering the texture quality.
The 970 was slower than the 780 Ti at launch, but that's not the case today; it's actually ahead, even in cases where VRAM isn't a limiting factor.
 
True and untrue.
It's not just VRAM. Yeah, VRAM requirements have risen significantly since then, but don't forget you can easily remove the VRAM bottleneck by lowering the texture quality.
The 970 was slower than the 780 Ti at launch, but that's not the case today; it's actually ahead, without VRAM being a limiting factor.
Pay $350+ just to play with lower texture quality!!!!
I have better advice: buy a console (or two).
 
Low quality post by M2B
Pay $350+ just to play with lower texture quality!!!!
I have better advice: buy a console (or two).

What the hell are you smoking?
 
But yeah, it's not just VRAM. If you speak of aging with respect to the 3GB cards, we've also seen the introduction of delta color compression and a much-improved GPU Boost for Nvidia starting with Maxwell, while AMD was busy rebranding everything. Maxwell was that crucial moment where they actually gave up, and it created the gaping hole in performance and perf/watt we're still looking at today.
 
Kepler aged badly because of VRAM, and that is exactly where AMD had more to offer in the high-end at the time. […]

Not only that, it does not have tiled rasterization either, so memory bandwidth became a problem for it too. I would be interested to see the original Kepler Titan added to the TPU benchmark suite. Performance should be close to the RX 570 / GTX 1060 3GB at 1080p. @W1zzard still has one? One interesting bit though: there were 6GB GTX 780s too, but the GTX 780 Ti was always 3GB, as the 6GB card was the Titan Black... But yeah, we are going way off topic now.

The Radeon VII has a lot of VRAM; the only weakness in its memory subsystem is its rather low ROP count. In normal tasks that is more than enough, but using MSAA can really tank the performance. Luckily for AMD, MSAA is a dying breed; fewer and fewer games support that AA method anymore.
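As a rough back-of-envelope for why MSAA leans so hard on the ROPs and the memory subsystem: the unresolved color and depth buffers grow with the sample count, so every blended or overdrawn pixel touches several times more data. The numbers below are a simplified illustration (4 bytes color + 4 bytes depth per sample, ignoring the compression real GPUs use to claw much of this back):

```python
# Rough illustration of how MSAA multiplies render-target size (and the
# ROP/memory traffic needed to fill it). Simplified: 4 B color + 4 B depth
# per sample, no compression.

def framebuffer_mib(width: int, height: int, samples: int,
                    bytes_per_sample: int = 4 + 4) -> float:
    """Approximate size of an unresolved color+depth target in MiB."""
    return width * height * samples * bytes_per_sample / 2**20

for samples in (1, 2, 4, 8):
    print(f"{samples}x MSAA @ 4K: ~{framebuffer_mib(3840, 2160, samples):.0f} MiB")
# -> ~63, ~127, ~253, ~506 MiB: the working set scales linearly with samples.
```

With the comparatively low ROP count mentioned above having to push all of that, it is easy to see why heavy MSAA settings hit this card harder than its raw bandwidth would suggest.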
 
W1zzard says:
I'd focus on optimizations for UE4 next.

Nvidia is working closely with UE4 right now. I don't think AMD would risk working on UE4 optimizations only to have all that labor wiped out by a patch that lets AMD cards fall behind again, just like Nvidia did with the DX11 HAWX game.
 
Low quality post by cucker tarlson
God damn, this thread is just too fun of a read for those who enjoy conspiracy theories.
 