Wednesday, June 22nd 2022

Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks

Intel's Arc A380 desktop graphics card is now generally available in China, and real-world gaming benchmarks of the card by independent media paint a vastly different picture than what synthetic benchmarks had led us to believe. The entry-mainstream graphics card, selling for under the equivalent of $160 in China, is shown beating the AMD Radeon RX 6500 XT and RX 6400 in the 3DMark Port Royal and Time Spy benchmarks by a significant margin, yet it loses to even the RX 6400 in each of the six games tested by the source.

The tests in the source's graph are, in order: League of Legends, PUBG, GTA V, Shadow of the Tomb Raider, Forza Horizon 5, and Red Dead Redemption 2. In the first three, which are DirectX 11 titles, the A380 is 22 to 26 percent slower than the NVIDIA GeForce GTX 1650 and the Radeon RX 6400. The gap narrows in the DirectX 12 titles Shadow of the Tomb Raider and Forza Horizon 5, where it is within 10% of the two cards. The card's best showing is in the Vulkan-powered RDR 2, where it trails the GTX 1650 by 7% and the RX 6400 by 9%. The RX 6500 XT performs in a different league altogether. With these numbers, and with GPU prices cooling down in the wake of the 2022 cryptocalypse, we're not entirely sure what Intel is trying to sell at $160.
Sources: Shenmedounengce (Bilibili), VideoCardz

190 Comments on Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks

#151
AusWolf
sweetWhy would anyone expect anything, to be honest? When talking about game performance you need to count in the drivers, which I doubt even Intel has the resources to pull off for a brand new architecture.
It's not brand new. It's based on current gen Xe, which you can find in Rocket Lake / Alder Lake CPUs.
Posted on Reply
#152
Psyclown
Disappointing but far from surprising.
Posted on Reply
#153
john_
sweetWhy would anyone expect anything, to be honest? When talking about game performance you need to count in the drivers, which I doubt even Intel has the resources to pull off for a brand new architecture.
Have I said something different?
AusWolfIt's not brand new. It's based on current gen Xe, which you can find in Rocket Lake / Alder Lake CPUs.
It's not the same. Let's, for example, consider a case where an Intel iGPU has huge bugs when feature A is enabled in a game. If that feature also reduces framerates from 20 fps to 10 fps, gamers will just avoid enabling it because of the performance hit, not because of the bugs. If a gamer wants to enable it anyway, a tech support person could still insist in their reply that the solution is simply to "disable feature A so the game runs at reasonable framerates". Also, a game running at low fps because of a lack of optimization will probably pass unnoticed, with the majority thinking that it's normal for a slow iGPU to perform like that.

But when someone is trying to be competitive in the discrete GPU market, they can't avoid situations like this. They will have to fix the bugs, and they will have to optimize performance. While Intel has been building GPUs, and drivers for GPUs, for decades, I doubt they have thrown the necessary resources at optimization and bug fixing. That "heavy optimization and fixing ALL bugs" situation is probably "brand new" for Intel's graphics department.
Posted on Reply
#154
AusWolf
john_It's not the same. Let's, for example, consider a case where an Intel iGPU has huge bugs when feature A is enabled in a game. If that feature also reduces framerates from 20 fps to 10 fps, gamers will just avoid enabling it because of the performance hit, not because of the bugs. If a gamer wants to enable it anyway, a tech support person could still insist in their reply that the solution is simply to "disable feature A so the game runs at reasonable framerates". Also, a game running at low fps because of a lack of optimization will probably pass unnoticed, with the majority thinking that it's normal for a slow iGPU to perform like that.

But when someone is trying to be competitive in the discrete GPU market, they can't avoid situations like this. They will have to fix the bugs, and they will have to optimize performance. While Intel has been building GPUs, and drivers for GPUs, for decades, I doubt they have thrown the necessary resources at optimization and bug fixing. That "heavy optimization and fixing ALL bugs" situation is probably "brand new" for Intel's graphics department.
I'll just say what I have said in many other related threads: There's no reason to be overly negative or positive - we'll see when it comes out. ;)
Posted on Reply
#155
efikkan
john_But when someone is trying to be competitive in the discrete GPU market, they can't avoid situations like this. They will have to fix the bugs, and they will have to optimize performance. While Intel has been building GPUs, and drivers for GPUs, for decades, I doubt they have thrown the necessary resources at optimization and bug fixing. That "heavy optimization and fixing ALL bugs" situation is probably "brand new" for Intel's graphics department.
If you take a GPU architecture that works reasonably well and scale it, let's say, 10x, but the performance doesn't scale accordingly, then you have a hardware problem, not a driver problem. The driver actually does far less than you think, and has fairly little to do with the scale of the GPU. Nvidia and AMD scale fairly consistently from low-end GPUs with just a few "cores" up to massive GPUs on the very same driver, like Pascal from the GT 1010 at 256 cores up to the Titan Xp at 3840. The reason this works is that the management of hardware resources is done by the GPU scheduler: allocating (GPU) threads, queuing memory operations, etc. If these things were done by the driver, the CPU overhead would grow with GPU size and large GPUs would just not perform at all.

My point is, Intel's architecture is not fundamentally new and they have a working driver from their integrated graphics, so if they have problems with scalability then it's a hardware issue.
I'm not saying there can't be minor bugs and tweaks to the driver, but the bigger problem lies in hardware, and will probably take them a couple more iterations to sort out.

Don't buy a product expecting the drivers to suddenly add performance later, that has not panned out well in the past.
Posted on Reply
#156
john_
efikkanIf you take a GPU architecture that works reasonably well and scale it, let's say, 10x, but the performance doesn't scale accordingly, then you have a hardware problem, not a driver problem. The driver actually does far less than you think, and has fairly little to do with the scale of the GPU. Nvidia and AMD scale fairly consistently from low-end GPUs with just a few "cores" up to massive GPUs on the very same driver, like Pascal from the GT 1010 at 256 cores up to the Titan Xp at 3840. The reason this works is that the management of hardware resources is done by the GPU scheduler: allocating (GPU) threads, queuing memory operations, etc. If these things were done by the driver, the CPU overhead would grow with GPU size and large GPUs would just not perform at all.

My point is, Intel's architecture is not fundamentally new and they have a working driver from their integrated graphics, so if they have problems with scalability then it's a hardware issue.
I'm not saying there can't be minor bugs and tweaks to the driver, but the bigger problem lies in hardware, and will probably take them a couple more iterations to sort out.

Don't buy a product expecting the drivers to suddenly add performance later, that has not panned out well in the past.
I wasn't describing what you understood. You didn't understand my point, and my English is probably the problem here.
Let me try to explain it with an example (in poorer English).


Let's say that Intel is producing only iGPUs, and those iGPUs perform poorly in game title A and also have a bug (image corruption) with graphics setting X in that game.
Do you throw resources at optimizing the driver for game title A, to move fps from 20 to 22, and also fix graphics setting X, especially when enabling that setting means dropping the framerate from 20 fps to 12 fps? Probably not. If that game is a triple-A title you might spend resources to optimize it, but at the same time the solution for graphics setting X will simply be to ask gamers to keep it disabled (if the bug is difficult to fix). If it's a less-publicized game, you probably wouldn't even spend resources to move that fps counter from 20 to 22 fps.

Now let's say that Intel is producing discrete GPUs and targets at least the mid-range market against AMD and Nvidia. Well, now you will have to hire more programmers for your driver department, and optimization in game title A will probably move fps from 50 to 60. You also need to achieve this optimization, because you are competing with other discrete GPUs. You also can't go out and tell gamers "please keep setting X disabled in the game, because it does not work properly with Arc". No. You will have to throw resources at fixing that bug, or sales of your discrete GPUs will fall. People can ignore low performance and bugs from an iGPU that comes for "free" with the CPU. It's a different situation for a discrete GPU that people paid $150-$400 for. People expect the best performance and bugs fixed.

I wasn't describing a scaling problem. I was saying that building graphics drivers for low-performing iGPUs is probably very different from building drivers for discrete GPUs. You can bypass/ignore some driver issues when you support "free" and slow iGPUs; you can't when you support expensive discrete GPUs.
Posted on Reply
#157
efikkan
john_I wasn't describing a scaling problem. I was saying that building graphics drivers for low-performing iGPUs is probably very different from building drivers for discrete GPUs. You can bypass/ignore some driver issues when you support "free" and slow iGPUs; you can't when you support expensive discrete GPUs.
I know ;). I was trying to show you (and others in this thread) who assume that a driver for an integrated GPU and a dedicated GPU must be fundamentally different that in reality they would be mostly the same. The main difference will be in the hardware and the firmware which controls it. That's why I mentioned that Nvidia has low-end GPUs performing roughly comparably to integrated GPUs and high-end GPUs running the very same driver, and the same goes for AMD, which also runs the same driver for its integrated GPUs. So it's important to understand that this scaling has little to nothing to do with the driver.

Most of you in here attribute way too much to drivers in general, when the driver really does as little as possible, since anything the driver spends CPU time on adds overhead; it's a trade-off. So let me explain how a driver works for rendering. While this holds true for DirectX/OpenGL/Vulkan and others, I will use OpenGL as an example since it's the simplest to understand and I've used it for nearly two decades.
The main responsibility of the driver is to take generic API calls and translate them into the low-level API of the GPU architecture. This is not done one API call at a time, but in queues of operations. A typical block of code to render an "object" in OpenGL would look something like this:
glBindTexture(GL_TEXTURE_2D, ...);          // bind the object's texture
glBindBuffer(GL_ARRAY_BUFFER, ...);         // bind the vertex buffer
glVertexAttribPointer(...);                 // describe the vertex layout
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ...); // bind the index buffer
glDrawElements(GL_TRIANGLES, ...);          // queue the draw call
What kind of low-level operations this is translated into will vary depending on the GPU architecture, but it will be the same whether the GPU is integrated or high-end. And to make it clear, the driver will operate the same regardless of whether the application is a AAA game title or a hobby project.

And to your point of Intel not having to prioritize performance or driver quality overall for integrated GPUs vs. dedicated GPUs, I strongly disagree, and I have some solid arguments as to why:
1) AMD has offered horrible OpenGL support for ages, while Intel's support has been mostly fine. And while it took a while for Intel to catch up on OpenGL 4.x features, the ones they've implemented have seemingly worked. AMD's support has been really bad; around ten years ago they even managed to ship two drivers in a row which mostly broke GLSL shader compilation (essentially breaking nearly all OpenGL games and applications).
2) The overall quality and stability of Intel's drivers has been better than AMD's for years. Graphics APIs are not just used for games; today they are used by the desktop environment itself, CAD/modelling applications, photo and video editing, and even some multimedia applications. And it's not just in the forums that we hear about way more issues with AMD than the others; those who do graphics development quickly get a feel for the quality of the drivers by how little "misbehaving code" is needed to crash the system. While this is of course totally anecdotal, none of my main systems run AMD graphics for this very reason; it's quite annoying to get something done when the system crashes up to several times per day during development.

Now to answer even more specifically:
john_Let's say that Intel is producing only iGPUs, and those iGPUs perform poorly in game title A and also have a bug (image corruption) with graphics setting X in that game.
Do you throw resources at optimizing the driver for game title A, to move fps from 20 to 22, and also fix graphics setting X, especially when enabling that setting means dropping the framerate from 20 fps to 12 fps? Probably not. If that game is a triple-A title you might spend resources to optimize it, but at the same time the solution for graphics setting X will simply be to ask gamers to keep it disabled (if the bug is difficult to fix). If it's a less-publicized game, you probably wouldn't even spend resources to move that fps counter from 20 to 22 fps.
Drivers aren't really optimized for specific games, at least not the way you think. When you see driver updates offer up to X% more performance in <selected title>, it's usually tweaking game profiles or sometimes overriding shader programs. These aren't so much optimizations as "cheats" that reduce image quality very slightly to get a few percent more performance in benchmarks.

When they do real performance optimizations, it's usually one of these:
a) General API overhead (tied to the internal state machine of an API) - Will affect anything that uses this API.
b) Overhead of a specific API call or parameter - Will affect anything that uses this API call.
So I reject your premise of optimizing performance for a specific title.
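To make point (a) a bit more concrete, here is a minimal, hypothetical sketch of one classic way a driver trims general API overhead: filtering out redundant state changes before any GPU command is emitted. This is not Intel's, AMD's or Nvidia's actual code; the drv_ names and the structure are invented purely for illustration.

/* Hypothetical driver-side state cache - names invented for illustration.
 * Skips work when the application re-binds state that is already current. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t bound_texture_2d;   /* last texture bound to GL_TEXTURE_2D */
    uint32_t bound_array_buffer; /* last buffer bound to GL_ARRAY_BUFFER */
} drv_state_cache;

static drv_state_cache g_cache;

/* Would be called from the driver's glBindTexture(GL_TEXTURE_2D, ...) entry point. */
static bool drv_bind_texture_2d(uint32_t texture)
{
    if (g_cache.bound_texture_2d == texture)
        return false;                /* redundant bind: emit no GPU command */
    g_cache.bound_texture_2d = texture;
    /* ...emit the architecture-specific bind packet into the command queue... */
    return true;
}

int main(void)
{
    drv_bind_texture_2d(42);         /* first bind: a command would be emitted */
    return drv_bind_texture_2d(42);  /* second bind: filtered out, returns false */
}

An optimization like this helps every application that hammers the API with redundant binds, which is why it tends to show up as a small across-the-board gain rather than a per-game tweak.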
Posted on Reply
#158
john_
efikkanAMD has offered horrible OpenGL support for ages
Why? Probably because it was not their priority? Just asking. How many games out there need OpenGL? Probably very few. On the other hand, I guess there are pro apps using OpenGL. Intel was targeting office PCs, so OpenGL could be more important to them.
efikkanThe overall quality and stability of Intel's drivers has been better than AMD's for years.
If you don't try to optimize for every game, app, or probable scenario out there, and don't implement a gazillion features in your drivers, I guess you have a better chance of offering something more stable. Much simpler, but more stable. Also, people who have an integrated Intel GPU but do all their work on a discrete Nvidia or AMD GPU, I bet they will have no problems with their Intel iGPU. Because, well, it's disabled.
efikkanDrivers aren't really optimized for specific games,
Well, in Intel's driver FAQ you will read about games crashing and image quality problems. So, Intel might have thrown resources at their media engine, OpenGL performance and driver stability in office applications, but it doesn't look like they cared much about games. They have to now that they are trying to become a discrete graphics card maker. That's what I've been saying all along, and while you started your post saying you understand my point, I am not sure about that.
Posted on Reply
#159
TheoneandonlyMrK
And yet with a driver update AMD improved OpenGL performance by around 50% just recently; not bad for optimization.
Posted on Reply
#160
efikkan
john_Why? Probably because it was not their priority? Just asking. How many games out there need OpenGL?
While DirectX is certainly more widespread, there are "minor" successes such as Minecraft (original version), most indie games, and most emulators. Considering that AMD has really struggled to maintain market share for the past decade while having decent value options, this should have been pretty low-hanging fruit to gain some extra percentage points of market share. And as for the stability issues of AMD drivers, those are not limited to OpenGL and have been a persistent problem for over a decade (we keep hearing about it every time there is new hardware).
john_If you don't try to optimize for every game, app, or probable scenario out there, and don't implement a gazillion features in your drivers
Well, the answer is they don't; that's the point you still can't grasp.
The graphics APIs have a spec, and the driver's responsibility is to behave according to that spec. If, for example, Nvidia wanted to deviate from that spec to boost the performance of a particular game, that would add bloat and overhead to the driver and risk introducing bugs. On top of that, if the API no longer behaves according to the spec, game programmers are likely to introduce "bugs" which are very hard to track down and waste a lot of the developers' time.
The driver developers don't know the game's internal state or the assumptions of the programmers who wrote it. All the driver sees is a stream of API calls; it doesn't have the context to optimize differently from frame to frame.

So this idea of the driver doing all kinds of wizardry to gain performance is just utter nonsense. As I've said, the driver does as little as possible to quickly translate a queue of API calls into the native instructions of the GPU; the GPU scheduler internally does the heavy lifting.
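As a purely conceptual illustration of that translation step (not any vendor's real code; every type and function name below is invented), the hot path of a driver boils down to a loop that turns a queue of recorded API commands into architecture-specific packets and hands them to the GPU, whose own scheduler then does the heavy lifting:

/* Conceptual sketch only: how a driver might flush a queue of recorded
 * API calls into native GPU command packets. All names are invented. */
#include <stdio.h>
#include <stddef.h>

typedef enum { CMD_BIND_TEXTURE, CMD_BIND_BUFFER, CMD_DRAW } api_cmd_kind;
typedef struct { api_cmd_kind kind; unsigned arg; } api_cmd;

/* Stand-in for writing an architecture-specific packet into the ring buffer. */
static void emit_packet(unsigned opcode, unsigned payload)
{
    printf("packet %02x %u\n", opcode, payload);
}

static void flush_command_queue(const api_cmd *cmds, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        switch (cmds[i].kind) {
        case CMD_BIND_TEXTURE: emit_packet(0x10, cmds[i].arg); break;
        case CMD_BIND_BUFFER:  emit_packet(0x11, cmds[i].arg); break;
        case CMD_DRAW:         emit_packet(0x20, cmds[i].arg); break;
        }
    }
    /* From here the GPU's own scheduler decides how threads, caches and
     * memory requests are juggled; the CPU-side driver is out of the loop. */
}

int main(void)
{
    api_cmd frame[] = { {CMD_BIND_TEXTURE, 7}, {CMD_BIND_BUFFER, 3}, {CMD_DRAW, 36} };
    flush_command_queue(frame, sizeof frame / sizeof frame[0]);
    return 0;
}

The cost of this translation on the CPU is per API call, not per shader core, which is why the same driver can feed a tiny iGPU and a huge dGPU alike.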
Most people in forums like this think Nvidia's advantage is mostly due to game optimization and drivers optimized for those games, when in reality these optimizations are a myth. Nvidia has achieved most of its upper hand over AMD thanks to better scheduling of its GPUs' resources, which is why it has often managed to extract more performance out of fewer computational resources (TFLOPS, fillrate, etc.). When I say the following I mean it in a loving way: please try to get this into your heads - when something performs better, it's usually because it's actually better. Stop using optimizations (or the lack thereof) as an excuse when there isn't evidence to support that.
Posted on Reply
#161
john_
efikkanWell, the answer is they don't; that's the point you still can't grasp.
Well, don't worry, I can see where you are going, or maybe, to be more accurate, where you are standing.

Anyway, let's keep the questions simple here.

Why does Arc perform on par with the competition in 3DMark yet lose badly in games?
Why are most bugs in Arc bugs that lead to application crashes or texture corruption? In AMD's and Nvidia's driver FAQs you read about strange behavior when doing very specific things. In the Arc FAQ, half the bugs are application crashes or broken textures from just running the game.
Posted on Reply
#162
AusWolf
john_Why does Arc perform on par with the competition in 3DMark yet lose badly in games?
I'm not a programmer by far, but from an average user's point of view, I'd say 3DMark stresses a very specific part of your hardware. I don't know what it is, but I see all of my graphics cards behaving very differently under 3DMark compared to games in terms of clock speed, power consumption, etc. The part of Arc GPUs that 3DMark stresses the most must be strong, while other parts fall behind the competition. Games, on the other hand, use a much broader range of your hardware's capabilities. To put it simply: 3DMark is designed to stress a specific part of your hardware; games are designed to use whatever you have.

I might be wrong, but these are my observations through GPU behaviour.
john_Why are most bugs in Arc bugs that lead to application crashes or texture corruption?
Does it really do that? Do you have sources? If so, I believe it must be some bug in the driver that can be ironed out - and not an issue of optimisation. But I'm curious about a proper answer, as I don't know much about driver code myself.
Posted on Reply
#163
john_
AusWolfDo you have sources?
Go to AMD's, Intel's and Nvidia's pages, go to download the latest version of the driver, but don't download it - just read the release notes.
Posted on Reply
#164
AusWolf
john_Go to AMD's, Intel's and Nvidia's pages, go to download the latest version of the driver, but don't download it - just read the release notes.
A fair point. Personally, I think that's down to how the driver communicates with the API, and specific portions of the API the game uses. Like I said, bugs that can be ironed out. It's not an "optimisation" thing.

But let's wait for a proper answer from someone who knows more than I do.
Posted on Reply
#165
efikkan
john_Why does Arc perform on par with the competition in 3DMark yet lose badly in games?
AusWolf's reply is pretty good in layman's terms.
To add to that: games usually try to render things with reasonable efficiency, while synthetic benchmarks try to simulate "future" gaming workloads. They usually end up stressing the GPU much more than a normal game would, but honestly I don't think the performance scores here have any use to consumers. I use them for stress testing after setting up a computer. I do think synthetics can be useful for driver developers, though, to try to provoke bugs.
john_Why are most bugs in Arc bugs that lead to application crashes or texture corruption? In AMD's and Nvidia's driver FAQs you read about strange behavior when doing very specific things. In the Arc FAQ, half the bugs are application crashes or broken textures from just running the game.
If there is texture corruption across multiple games, and the same games don't have the problem on other hardware, then it means the driver doesn't behave according to spec. Finding the underlying reason would require more details, though; it could be either the driver or the hardware. This might surprise you, but when it comes to software bugs it's actually better if a bug occurs across many use cases: that usually means it's easier to reproduce and precisely locate. Such bugs are usually caught and fixed once there are enough testers. A rare and obscure bug is in many ways worse, as it will lead to very poor bug reports, which in turn means large efforts to track it down.
Posted on Reply
#166
john_
efikkanI don't think the performance scores here have any use to consumers.
Those are the numbers that will be printed on advertising material. That's why Intel is concentrating on those apps. While you say optimization is a myth, it seems Intel is focusing on that myth.
efikkanIf there is texture corruption across multiple games, and the same games don't have the problem on other hardware, then it means the driver doesn't behave according to spec. Finding the underlying reason would require more details, though; it could be either the driver or the hardware. This might surprise you, but when it comes to software bugs it's actually better if a bug occurs across many use cases: that usually means it's easier to reproduce and precisely locate. Such bugs are usually caught and fixed once there are enough testers. A rare and obscure bug is in many ways worse, as it will lead to very poor bug reports, which in turn means large efforts to track it down.
I guess I have to provide a link after all
downloadmirror.intel.com/733544/ReleaseNotes_101.1736.pdf
DRIVER VERSION: 30.0.101.1736
DATE: June 14, 2022
GAMING HIGHLIGHTS:
• Launch driver for Intel® Arc™ A380 Graphics (Codename Alchemist).
• Intel® Game On Driver support for Redout 2*, Resident Evil 2*, Resident Evil 3*, and Resident Evil 7: Biohazard* on Intel® Arc™ A-Series Graphics.
Get a front row pass to gaming deals, contests, betas, and more with Intel Software Gaming Access.
FIXED ISSUES:
• Far Cry 6* (DX12) may experience texture corruption in water surfaces during gameplay.
• Destiny 2* (DX11) may experience texture corruption on some rock surfaces during gameplay.
• Naraka: Bladepoint* (DX11) may experience an application crash or become unresponsive during training mode.
KNOWN ISSUES:
• Metro Exodus: Enhanced Edition* (DX12), Horizon Zero Dawn* (DX12), Call of Duty: Vanguard* (DX12), Tom Clancy’s Ghost Recon Breakpoint (DX11), Strange Brigade* (DX12) and Forza Horizon 5* (DX12) may experience texture corruption during gameplay.
• Tom Clancy’s Rainbow Six Siege* (DX11) may experience texture corruption in the Emerald Plains map when ultra settings are enabled in game. A workaround is to select the Vulkan API in game settings.
• Gears 5* (DX12) may experience an application crash, system hang or TDR during gameplay.
• Sniper Elite 5* may experience an application crash on some Hybrid Graphics system configurations when Windows® “Graphics Performance Preference” option for the application is not set to “High Performance”.
• Call of Duty: Black Ops Cold War* (DX12) may experience an application crash during gameplay.
• Map textures may fail to load or may load as blank surfaces when playing CrossFire*.
• Some objects and textures in Halo Infinite* (DX12) may render black and fail to load. Lighting may also appear blurry or over exposed in the multiplayer game menus.
What doesn't surprise me is how the glass is half empty or half full, depending on the situation.
Posted on Reply
#167
AusWolf
john_Those are the numbers that will be printed on advertising material. That's why Intel is concentrating on those apps. While you say optimization is a myth, it seems Intel is focusing on that myth.
Even if a certain architecture performs better in one app than another, there's nothing to suggest that it's due to a magical driver rather than the hardware itself.

AMD CPUs have been famous for being better at productivity apps, while Intel is (or used to be) better at games. Is this due to some driver magic as well?
john_I guess I have to provide a link after all
downloadmirror.intel.com/733544/ReleaseNotes_101.1736.pdf



What doesn't surprise me is how the glass is half empty or half full, depending on the situation.
No one said that there can't be bugs in the driver-API communication. AMD is notorious for leaving bugs in for a long time. The argument was that these bugs in no way mean that games are "optimised" for a certain architecture or god forbid, manufacturer.
Posted on Reply
#168
john_
Damn, you are both on a crusade to just insist that it is not how I assume it is, but different, even though you don't have concrete proof of that. And by the way, let me remind everyone here that we are just guessing. ALL of us.

Having said that, let's see why I said that.
AusWolfEven if a certain architecture performs better in one app than another, there's nothing to suggest that it's due to a magical driver rather than the hardware itself.
A driver does play a role. It's not a myth. When a new driver fixes performance in a game or multiple games, then something was changed in that driver. What was it? I am NOT a driver developer. Are you? A lack of knowledge doesn't mean that the phrase "nothing to suggest" has any real value here. A man from 100 BC would insist that there is "nothing to suggest" that a 10-tonne helicopter stays in the air by pushing air down with its rotor blades, lacking all the necessary knowledge of physics.*
AusWolfAMD CPUs have been famous for being better at productivity apps, while Intel is (or used to be) better at games. Is this due to some driver magic as well?
AMD CPUs have been famous for being better at productivity apps because they had more cores until Alder Lake. On the other hand, Intel almost always had the advantage in IPC, and many apps were also optimized for Intel CPUs, not AMD CPUs.
AusWolfNo one said that there can't be bugs in the driver-API communication. AMD is notorious for leaving bugs in for a long time. The argument was that these bugs in no way mean that games are "optimised" for a certain architecture or god forbid, manufacturer.
I am not going to comment on the "notorious" AMD. It's boring after so many years of reading the same stuff. People having the need to bash AMD even when using its products is not my area of expertise. I am also not going to play with words with someone who will never, ever accept something different. I have been reading for decades, even from Intel/AMD/Nvidia representatives, about app/game optimizations and apps/games being developed on specific platforms, and I have seen how Nvidia's perfect image was ruined for a year or two, somewhere around 2014 I think, when games were optimized for the consoles, meaning GCN, and the PC versions had a gazillion problems, especially those games paid by Nvidia to implement GameWorks in their PC versions.

So, I am stopping here. There's no reason to lose more time with people who insist that it is not A, it is B, without ANY REAL arguments as to why it is B and not A.

Have a nice day.

PS * Just remembered Carl Sagan

Posted on Reply
#169
AusWolf
john_A driver does play a role. It's not a myth. When a new driver fixes performance in a game or multiple games, then something was changed in that driver. What was it? I am NOT a driver developer. Are you? A lack of knowledge doesn't mean that the phrase "nothing to suggest" has any real value here. A man from 100 BC would insist that there is "nothing to suggest" that a 10-tonne helicopter stays in the air by pushing air down with its rotor blades, lacking all the necessary knowledge of physics.*
I'm not a driver developer either, but I'm willing to learn from someone who knows a lot more about the topic than I do, for example:
efikkanDrivers aren't really optimized for specific games, at least not the way you think. When you see driver updates offer up to X% more performance in <selected title>, it's usually tweaking game profiles or sometimes overriding shader programs. These aren't so much optimizations as "cheats" that reduce image quality very slightly to get a few percent more performance in benchmarks.

When they do real performance optimizations, it's usually one of these:
a) General API overhead (tied to the internal state machine of an API) - Will affect anything that uses this API.
b) Overhead of a specific API call or parameter - Will affect anything that uses this API call.
So I reject your premise of optimizing performance for a specific title.
This. @efikkan presented a clear explanation with technical details as to why his claim is right. You didn't.
john_AMD CPUs have been famous for being better at productivity apps because they had more cores until Alder Lake. On the other hand, Intel almost always had the advantage in IPC, and many apps were also optimized for Intel CPUs, not AMD CPUs.
There you go. That's down to differences in the hardware, isn't it?
john_I am not going to comment on the "notorious" AMD. It's boring after so many years of reading the same stuff. People having the need to bash AMD even when using its products is not my area of expertise. I am also not going to play with words with someone who will never, ever accept something different. I have been reading for decades, even from Intel/AMD/Nvidia representatives, about app/game optimizations and apps/games being developed on specific platforms, and I have seen how Nvidia's perfect image was ruined for a year or two, somewhere around 2014 I think, when games were optimized for the consoles, meaning GCN, and the PC versions had a gazillion problems, especially those games paid by Nvidia to implement GameWorks in their PC versions.
1. You clearly misread my point. I never intended to criticise AMD. I merely stated the fact that bugs CAN be found in a driver, like in any software. It's not proof that drivers are specifically optimised for certain games.
2. Who said that you can't write a game to favour the hardware resources of a certain architecture? It's not the same thing as "optimising" a new driver for a game that's already been made.
Posted on Reply
#170
john_
AusWolfThis. @efikkan presented a clear explanation with technical details as to why his claim is right. You didn't.
No, he didn't. He just wrote a lot of stuff that isn't necessarily on topic or correct. If you know NOTHING about driver development, how can you assume that what he wrote is in fact correct? You can't. And he was trying to support a specific argument while constantly changing the point of view, which in my book doesn't make him objective or his arguments correct. You can give him all the credit you want, seeing that he supports your idea of a notorious AMD, but I am someone who needs more specific and more concrete arguments than five lines of code.

OK, that's more than enough from me, having said that I would stop and not make another post. Especially when the other person keeps moving the goalposts.
Posted on Reply
#171
Vayra86
KeatsWell, I suppose it would be more accurate to say that it cripples performance on non-Nvidia platforms, but the end result is the same.
No, it does not. FXAA originates from GameWorks, for example, and it runs fine on AMD. The same applies to HBAO+ and numerous other features.
stackoverflow.com/questions/12170575/using-nvidia-fxaa-in-my-code-whats-the-licensing-model

All it requires is a bit of code so the GPU knows what to do. In the end, it's a processing unit working through an API, and the API just serves stuff to translate. If you have the full vocabulary on your GPU, you can have it translated. If not, you'll resort to something doing the same thing but slower. Or not at all, because it is somehow locked.

The end result might be the same, but the reasons are different, and the REASONS are the core of the GameWorks argument. There is absolutely nothing stopping AMD from providing GameWorks-like solutions and support, and it hasn't stopped them either. The real question is: what features do you really need, and how do they help gaming? The ones we really can use definitely get copied, and you're not missing much with or without GameWorks support. FXAA is a great example of that.
john_Well, don't worry, I can see where you are going, or maybe, to be more accurate, where you are standing.

Anyway, let's keep the questions simple here.

Why does Arc perform on par with the competition in 3DMark yet lose badly in games?
Why are most bugs in Arc bugs that lead to application crashes or texture corruption? In AMD's and Nvidia's driver FAQs you read about strange behavior when doing very specific things. In the Arc FAQ, half the bugs are application crashes or broken textures from just running the game.
The fact that we don't have ready-made answers to these questions, only guesses, is quite simply because we don't know for sure. Perhaps there are monkeys disguised as humans building their code. Perhaps they have hardware issues that they are working around as we speak. Workarounds are going to be inefficient.

A benchmark is reproducible. Games are more variable in what they want at any given point in time.
john_A driver does play a role. It's not a myth.
It does. Here's a car analogy: the driver is the DRIVER, but the car is the car. It has limits; it can accelerate to 100 in a defined number of seconds. But if the driver of the car is crap at shifting gears, it certainly won't meet that spec. A better driver - or hey, let's use the at-one-point-implemented shader cache as an example: a more experienced driver, having driven the car a few times in that situation - will know exactly when to shift gears and therefore meet the spec.

Now, let's consider the car and the driver on a new road (a new game). The job at hand is to accelerate as fast as possible, then hit the brakes to come to a full stop as fast as possible so he can accelerate again to full speed (clocks/boost!). One driver has experience on fresh roads, knowing they can be smoother and more slippery, so he applies a different braking action, while the other is oblivious to road types. The brakes on the cars are identical. The driver determines when to hit them and how hard.

So yes, drivers play a role. And so does experience. Experience is pretty much scheduling: using the hardware resources in the best possible way at the best possible time. The other part of drivers, where they apply trickery to hit bigger numbers, usually comes at the cost of image quality. That could be called optimization, but that's a choice of semantics; the reality is, you render less, so you produce more frames.

So what does that all mean? It means that if a driver update tells you it suddenly got a major perf boost in a select number of applications, you should be on the lookout for what work it is they're not doing anymore. And if the driver update tells you there is a major increase in perf across the board, scheduling likely improved.

Calling either of those optimization is not really accurate, is it? The first is cheating; the latter is basically dev work on your GPU (hardware) that wasn't done prior to its release. And bugs... are bugs - again, a matter of experience with the hardware. How does it behave, and why? Intel seems to have arrived at a point where they have documented the how and haven't quite found the why for most situations. When they find that why, there will be numerous much smaller hows and whys underneath, for those very specific situations you speak of in the AMD/Nvidia driver FAQs. Refinement happens over time.
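Since the shader cache came up in the analogy: as a rough sketch of what a cache like that does at the API level, the first run compiles and links the shader program as usual and saves the resulting binary, while later runs load it back and skip the expensive compile. Drivers do this transparently and at a lower level, but the idea is the same. This sketch uses the standard glGetProgramBinary/glProgramBinary calls (OpenGL 4.1 / ARB_get_program_binary); the file format and error handling are simplified, and it assumes a current GL context with a loader such as GLEW already initialized.

/* Simplified shader-binary cache sketch; assumes a current OpenGL 4.1+ context. */
#include <GL/glew.h>
#include <stdio.h>
#include <stdlib.h>

/* Try to restore a previously compiled program from disk; returns 0 on a miss. */
GLuint load_cached_program(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;

    GLenum format = 0;
    GLsizei size = 0;
    fread(&format, sizeof format, 1, f);
    fread(&size, sizeof size, 1, f);
    void *blob = malloc((size_t)size);
    fread(blob, 1, (size_t)size, f);
    fclose(f);

    GLuint program = glCreateProgram();
    glProgramBinary(program, format, blob, size);    /* skips shader recompilation */
    free(blob);

    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    if (!ok) { glDeleteProgram(program); return 0; } /* driver/GPU changed: rebuild */
    return program;
}

/* After compiling and linking normally, store the binary for the next run. */
void save_cached_program(GLuint program, const char *path)
{
    GLint size = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &size);
    void *blob = malloc((size_t)size);
    GLenum format = 0;
    GLsizei written = 0;
    glGetProgramBinary(program, size, &written, &format, blob);

    FILE *f = fopen(path, "wb");
    if (f) {
        fwrite(&format, sizeof format, 1, f);
        fwrite(&written, sizeof written, 1, f);
        fwrite(blob, 1, (size_t)written, f);
        fclose(f);
    }
    free(blob);
}

Nothing in the frame is rendered differently; the second run simply skips work the driver has already done once, which is the "experience" part of the analogy.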
Posted on Reply
#172
john_
Vayra86Calling either of those optimization is not really accurate, is it?
Considering we are not programmers, we might use words that are not really accurate. But most of the time we will be describing the same thing, considering most of us have the same teachers and the same books (YouTube, forums, tech sites).
Posted on Reply
#173
RH92
kapone32What is wrong with the 6500XT for Gaming?
What is wrong? It would be a far shorter list to name what is right, so that tells you everything. It's literally the worst GPU money can buy right now; you would be better served by a six-year-old RX 580!

Posted on Reply
#174
john_
RH92What is wrong? It would be a far shorter list to name what is right, so that tells you everything. It's literally the worst GPU money can buy right now; you would be better served by a six-year-old RX 580!

Two remarks here:

These videos were made before there was any indication that Intel's Arc 3 would be worse than the RX 6400.
These videos were made before Nvidia introduced the GTX 1630, which is far worse than even the RX 6400.

If any of those Steves makes a review of the GTX 1630 - they will not, for one obvious reason - they will have to change their conclusion about the RX 6500 XT. But for now, YOU can change your post, because it is totally incorrect. A month ago it would have been correct, but today the RX 6500 XT is a huge upgrade over the A380 and the GTX 1630, two DESKTOP GPUs that came AFTER the RX 6500 XT's introduction and availability. It's literally NOT, BY FAR, the worst GPU money can buy right now.
Posted on Reply
#175
Vayra86
john_Considering we are not programmers, we might use words that are not really accurate. But most of the time we will be describing the same thing, considering most of us have the same teachers and the same books (YouTube, forums, tech sites).
Yes, and nobody is flaming over it; rather, I think we use our collective knowledge to refine our use of terminology. I don't write drivers either ;) But I do know how important it is to call things what they truly are. Especially in technology, but also in anything in daily life.

Before you know it, a perceived reality catches on with crowds that simply isn't reality, but some weird combination of 'bits we heard'. That is the vast majority of YouTube and social media 'press' right now and over the past ten to twenty years. The facts are clear: we see tons of individuals believing the most diverse bullshit like it's a new Bible, from Pizzagate to all sorts of nonexistent threats to keto diets. It's all static, and it serves no one. On our forum, the closest example is how people flash a BIOS to gain some performance. They heard on 'Tube that it was a good thing, and they follow like lemmings.

In terms of our subject, there is a pretty big difference between optimization and 'just producing a product as it should be'. And yes, it's difficult to avoid the difference between camps here. Team Green has a much higher driver quality on release, and Team Red gets there eventually - bar exceptions in both a positive and a negative sense. Somehow, pop media started calling the latter 'Fine Wine' ;) But when you read the above about optimization, is that really the best way to describe what it truly is?

A question, not a verdict :)
Posted on Reply