
Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks

Joined
Jan 14, 2019
Messages
9,872 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
It would be great if you could run Cyberpunk with ray tracing on High at 35 FPS.
That's not gonna happen with a budget graphics card. Not in 2022, anyway.
 
Joined
Oct 1, 2013
Messages
250 (0.06/day)
3DMark performance shows possible potential. Games show current reality.

In any case, Intel will be selling millions of those to OEMs to be used in their prebuilt systems, meaning that cards like Nvidia's MX line and AMD's RX 6400/6500 XT are out of Intel-based systems. And that's what Intel cares about. Those 3DMark scores are enough to convince consumers that they are getting a fast card.

Now if only someone could clear things up about Arc's hardware compatibility, that would be nice. Let's hope that Intel doesn't start a new trend of cards being incompatible with some systems. If they do start that kind of trend, then I wish they had NEVER re-entered the market and had completely failed.
Why would anyone expect anything different, to be honest? When talking about gaming performance you need to factor in the drivers, which I doubt that even Intel has the resources to pull off for a brand-new architecture.
 
Joined
Jan 14, 2019
Why would anyone expect anything different, to be honest? When talking about gaming performance you need to factor in the drivers, which I doubt that even Intel has the resources to pull off for a brand-new architecture.
It's not brand new. It's based on current-gen Xe, which you can find in Rocket Lake / Alder Lake CPUs.
 
Joined
Sep 6, 2013
Messages
2,978 (0.77/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years later got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Νoctua U12S / Segotep T4 / Snowman M-T6
Memory 16GB G.Skill RIPJAWS 3600 / 16GB G.Skill Aegis 3200 / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, NVMes everywhere / NVMes, more NVMes / Various storage, SATA SSD mostly
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / Sharkoon Rebel 9 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Software Windows 10 / Windows 10 / Windows 7
Why would anyone expect anything different, to be honest? When talking about gaming performance you need to factor in the drivers, which I doubt that even Intel has the resources to pull off for a brand-new architecture.
Have I said something different?
It's not brand new. It's based on current-gen Xe, which you can find in Rocket Lake / Alder Lake CPUs.
It's not the same. Let's, for example, consider a case where an Intel iGPU has serious bugs when feature A is enabled in a game. If that feature also cuts the framerate from 20 fps to 10 fps, gamers will just avoid enabling it because of the performance hit, not because of the bugs. If a gamer wants to enable it anyway, a tech support person can still insist that the solution is to just "disable feature A so the game runs at reasonable framerates". Also, a game running at low fps because of a lack of optimization will probably pass unnoticed, with the majority thinking it's normal for a slow iGPU to perform like that.

But when someone is trying to be competitive in the discrete GPU market, they can't avoid situations like this. They will have to fix the bugs, and they will have to optimize performance. While Intel has been building GPUs, and drivers for them, for decades, I doubt they have thrown the necessary resources at optimization and bug fixing. That "heavy optimization and fixing ALL the bugs" situation is probably "brand new" for Intel's graphics department.
 
Joined
Jan 14, 2019
It's not the same. Let's, for example, consider a case where an Intel iGPU has serious bugs when feature A is enabled in a game. If that feature also cuts the framerate from 20 fps to 10 fps, gamers will just avoid enabling it because of the performance hit, not because of the bugs. If a gamer wants to enable it anyway, a tech support person can still insist that the solution is to just "disable feature A so the game runs at reasonable framerates". Also, a game running at low fps because of a lack of optimization will probably pass unnoticed, with the majority thinking it's normal for a slow iGPU to perform like that.

But when someone is trying to be competitive in the discrete GPU market, they can't avoid situations like this. They will have to fix the bugs, and they will have to optimize performance. While Intel has been building GPUs, and drivers for them, for decades, I doubt they have thrown the necessary resources at optimization and bug fixing. That "heavy optimization and fixing ALL the bugs" situation is probably "brand new" for Intel's graphics department.
I'll just say what I have said in many other related threads: There's no reason to be overly negative or positive - we'll see when it comes out. ;)
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
But when someone is trying to be competitive in the discrete GPU market, they can't avoid situations like this. They will have to fix the bugs, and they will have to optimize performance. While Intel has been building GPUs, and drivers for them, for decades, I doubt they have thrown the necessary resources at optimization and bug fixing. That "heavy optimization and fixing ALL the bugs" situation is probably "brand new" for Intel's graphics department.
If you take a GPU architecture that works reasonably well and scale it, let's say, 10x, but the performance doesn't scale accordingly, then you have a hardware problem, not a driver problem. The driver actually does far less than you think, and it has fairly little to do with the scale of the GPU. Nvidia and AMD scale fairly consistently from low-end GPUs with just a few "cores" up to massive GPUs on the very same driver, like Pascal from the GT 1010 at 256 cores up to the Titan Xp at 3840. The reason this works is that the management of the hardware resources is done by the GPU scheduler: allocating (GPU) threads, queuing memory operations, etc. If these things were done by the driver, then the CPU overhead would grow with GPU size, and large GPUs would just not perform at all.

My point is, Intel's architecture is not fundamentally new, and they have a working driver from their integrated graphics, so if they have problems with scalability, it's a hardware issue.
I'm not saying there can't be minor bugs and tweaks to the driver, but the bigger problem lies in hardware and will probably take them a couple more iterations to sort out.

Don't buy a product expecting the drivers to suddenly add performance later; that has not panned out well in the past.
 
Joined
Sep 6, 2013
If you take a GPU architecture that works reasonably well and scale it, let's say, 10x, but the performance doesn't scale accordingly, then you have a hardware problem, not a driver problem. The driver actually does far less than you think, and it has fairly little to do with the scale of the GPU. Nvidia and AMD scale fairly consistently from low-end GPUs with just a few "cores" up to massive GPUs on the very same driver, like Pascal from the GT 1010 at 256 cores up to the Titan Xp at 3840. The reason this works is that the management of the hardware resources is done by the GPU scheduler: allocating (GPU) threads, queuing memory operations, etc. If these things were done by the driver, then the CPU overhead would grow with GPU size, and large GPUs would just not perform at all.

My point is, Intel's architecture is not fundamentally new, and they have a working driver from their integrated graphics, so if they have problems with scalability, it's a hardware issue.
I'm not saying there can't be minor bugs and tweaks to the driver, but the bigger problem lies in hardware and will probably take them a couple more iterations to sort out.

Don't buy a product expecting the drivers to suddenly add performance later; that has not panned out well in the past.
I wasn't describing what you think I was. You didn't understand my point, and my English is probably the problem here.
Let me try to explain it with an example (in even poorer English).


Let's say that Intel produces only iGPUs, and those iGPUs perform poorly in game title A and also have a bug (image corruption) with graphics setting X in that game.
Do you throw resources at optimizing the driver for game title A to move the fps from 20 to 22, and at fixing graphics setting X, especially when enabling that setting means dropping the framerate from 20 fps to 12 fps? Probably not. If the game is a triple-A title, you might spend resources optimizing it, but at the same time the solution for graphics setting X will simply be to ask gamers to keep it disabled (if the bug is difficult to fix). If it's a less advertised game, you probably wouldn't even spend resources moving that fps counter from 20 to 22.

Now let's say that Intel produces discrete GPUs and targets at least the mid-range market against AMD and Nvidia. Well, now you have to hire more programmers for your driver department, and optimization in game title A will probably move the fps from 50 to 60. You now need to achieve this optimization because you are competing with other discrete GPUs. Also, you can't go out and tell gamers "please keep setting X disabled because it does not work properly with Arc". No. You will have to throw resources at fixing that bug, or sales of your discrete GPUs will fall. People can ignore low performance and bugs from an iGPU that comes for "free" with the CPU. It's a different situation for a discrete GPU that people paid $150-$400 for. People expect the best performance, and bugs fixed.

I wasn't describing a scaling problem. I was saying that building graphics drivers for low-performing iGPUs is probably very different from building drivers for discrete GPUs. You can bypass/ignore some driver issues when you support "free" and slow iGPUs; you can't when you support expensive discrete GPUs.
 
Joined
Jun 10, 2014
I wasn't describing a scaling problem. I was saying that building graphics drivers for low-performing iGPUs is probably very different from building drivers for discrete GPUs. You can bypass/ignore some driver issues when you support "free" and slow iGPUs; you can't when you support expensive discrete GPUs.
I know ;) I was trying to show you (and others in this thread) who assume that drivers for an integrated GPU and a dedicated GPU must be fundamentally different that, in reality, they would be mostly the same. The main difference is in the hardware and the firmware which controls it. That's why I mentioned that Nvidia has low-end GPUs performing roughly comparably to integrated GPUs and high-end GPUs running the very same driver, and the same goes for AMD, which also runs the same driver on its integrated GPUs. So it's important to understand that this scaling has little to nothing to do with the driver.

Most of you in here attribute way too much to drivers in general, when the driver really does as little as possible; anything the driver spends CPU time on adds overhead, so it's a trade-off. So let me explain how a driver works for rendering. While this holds true for DirectX/OpenGL/Vulkan and others, I will use OpenGL as an example, since it's the simplest to understand and I've used it for nearly two decades.
The main responsibility of the driver is to take generic API calls and translate them into the low-level API of the GPU architecture. This is not done one API call at a time, but on queues of operations. A typical block of code to render an "object" in OpenGL would look something like this:
C:
    glBindTexture(GL_TEXTURE_2D, ...);          /* select this object's texture */
    glBindBuffer(GL_ARRAY_BUFFER, ...);         /* select its vertex data       */
    glVertexAttribPointer(...);                 /* describe the vertex layout   */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ...); /* select its index data        */
    glDrawElements(GL_TRIANGLES, ...);          /* queue the actual draw        */
What kind of low-level operations this is translated into will vary depending on the GPU architecture, but it will be the same whether the GPU is integrated or a high-end card. And to make it clear: the driver operates the same regardless of whether the application is an AAA game title or a hobby project.

And to your point of Intel not having to prioritize performance or driver quality for integrated GPUs vs. dedicated GPUs, I strongly disagree, and I have some solid arguments as to why:
1) AMD has offered horrible OpenGL support for ages, while Intel's support has been mostly fine. And while it took a while for Intel to catch up on OpenGL 4.x features, the ones they've implemented have seemingly worked. AMD's support has been really bad; around ~10 years ago they even managed to ship two drivers in a row which mostly broke GLSL shader compilation (essentially breaking nearly all OpenGL games and applications).
2) The overall quality and stability of Intel's drivers have been better than AMD's for years. Graphics APIs are not just used for games; today they are used by the desktop environment itself, CAD/modelling applications, photo and video editing, and even some multimedia applications. And it's not just in the forums that we hear about way more issues with AMD than the others; those who do graphics development quickly get a feel for the quality of the drivers by how little "misbehaving code" is needed to crash the system. While this is of course totally anecdotal, none of my main systems run AMD graphics for this very reason; it's quite annoying to get something done when systems crash up to several times per day during development.

Now to answer even more specifically:
Let's say that Intel produces only iGPUs, and those iGPUs perform poorly in game title A and also have a bug (image corruption) with graphics setting X in that game.
Do you throw resources at optimizing the driver for game title A to move the fps from 20 to 22, and at fixing graphics setting X, especially when enabling that setting means dropping the framerate from 20 fps to 12 fps? Probably not. If the game is a triple-A title, you might spend resources optimizing it, but at the same time the solution for graphics setting X will simply be to ask gamers to keep it disabled (if the bug is difficult to fix). If it's a less advertised game, you probably wouldn't even spend resources moving that fps counter from 20 to 22.
Drivers aren't really optimized for specific games, at least not the way you think. When you see driver updates offering up to X% more performance in <selected title>, it's usually tweaking the game profiles or sometimes overriding shader programs. These aren't really optimizations so much as "cheating": reducing image quality very slightly to get a few percent more performance in benchmarks.

When they do real performance optimizations, it's usually one of these:
a) General API overhead (tied to the internal state machine of an API) - will affect anything that uses the API.
b) Overhead of a specific API call or parameter - will affect anything that uses that API call.
So I reject your premise of optimizing performance for a specific title.
 
Joined
Sep 6, 2013
AMD has offered horrible OpenGL support for ages
Why? Probably because it was not their priority? Just asking. How many games out there need OpenGL? Probably very few. On the other hand, I guess there are pro apps using OpenGL. Intel was targeting office PCs, so OpenGL could be more important for them.
The overall quality and stability of Intel's drivers have been better than AMD's for years.
If you don't try to optimize for every game, app, or probable scenario out there, and don't implement a gazillion features in your drivers, I guess you have a better chance of offering something more stable. Much simpler, but more stable. Also, people who have an integrated Intel GPU but do all their work on a discrete Nvidia or AMD GPU - I bet they have no problems with their Intel iGPUs. Because, well, they are disabled.
Drivers aren't really optimized for specific games,
Well, in Intel's driver FAQ you will read about games crashing and image quality problems. So Intel might have thrown resources at their media engine, OpenGL performance, and driver stability in office applications, but it doesn't look like they were caring about games. They have to now that they are trying to become a discrete graphics card maker. That's what I have been saying all along, and while you started your post saying you understand my point, I am not sure about that.
 
Joined
Mar 10, 2010
Messages
11,878 (2.30/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
And yet with a recent driver update AMD improved OpenGL performance by ~50% - not bad for optimization.
 
Joined
Jun 10, 2014
Why? Probably because it was not their priority? Just asking. How many games out there need OpenGL?
While DirectX is certainly more widespread, there are "minor" successes such as Minecraft (the original version), most indie games, and most emulators. Considering that AMD has really struggled to maintain market share for the past decade, even with decent value options, this should have been pretty low-hanging fruit to gain some extra percentage points of market share. And as for the stability issues of AMD drivers, those are not limited to OpenGL and have been a persistent problem for over a decade (we keep hearing about it every time there is new hardware).

If you don't try to optimize for every game, app, or probable scenario out there, and don't implement a gazillion features in your drivers
Well, the answer is they don't; that's the point you still can't grasp.
The graphics APIs have a spec, and the driver's responsibility is to behave according to that spec. If e.g. Nvidia wanted to deviate from the spec to boost the performance of a particular game, that would add bloat and overhead to the driver and risk introducing bugs. On top of that, if the API no longer behaves according to the spec, the game programmers are likely to introduce "bugs" which are very hard to track down and waste a lot of the developers' time.
The driver developers don't know the game's internal state, and they don't know the assumptions made by the programmers who wrote the game. All the driver sees is a stream of API calls; it has no context to optimize differently from frame to frame.

So this idea of the driver doing all kinds of wizardry to gain performance is just utter nonsense. As I've said, the driver does as little as possible to quickly translate a queue of API calls into the native instructions of the GPU; the GPU scheduler internally does the heavy lifting.
Most people in forums like this think Nvidia's advantage is mostly due to game optimization and drivers optimized for those games, when in reality these optimizations are a myth. Nvidia has achieved most of its upper hand vs. AMD thanks to better scheduling of its GPUs' resources, which is why it has often managed to extract more performance out of less computational resources (Tflops, fillrate, etc.). When I say the following, I mean it in a loving way: please try to get this into your heads - when something performs better, it's usually because it's actually better. Stop using optimizations (or the lack thereof) as an excuse when there isn't evidence to support that.
 
Joined
Sep 6, 2013
Well, the answer is they don't; that's the point you still can't grasp.
Well, don't worry - I can see where you're going, or maybe, to be more accurate, where you're standing.

Anyway, let's keep the questions simple here.

Why does Arc perform on par with the competition in 3DMark but lose badly in games?
Why are most bugs in Arc ones that lead to application crashes or texture corruption? In AMD's and Nvidia's driver FAQs you read about strange behaviour when doing very specific stuff. In the Arc FAQ, half the bugs are application crashes or corrupted textures just from running the game.
 
Joined
Jan 14, 2019
Why does Arc perform on par with the competition in 3DMark but lose badly in games?
I'm not a programmer by far, but from an average user's point of view, I'd say 3DMark stresses a very specific part of your hardware. I don't know what it is, but I see all of my graphics cards behaving very differently under 3DMark compared to games in terms of clock speed, power consumption, etc. The part of an Arc GPU that 3DMark stresses the most must be strong, while other parts fall behind the competition. Games, on the other hand, use a much broader range of your hardware's capabilities. To put it simply: 3DMark is designed to stress a specific part of your hardware; games are designed to use whatever you have.

I might be wrong, but these are my observations through GPU behaviour.

Why are most bugs in Arc ones that lead to application crashes or texture corruption?
Does it really do that? Do you have sources? If so, I believe it must be some bug in the driver that can be ironed out - not an issue of optimisation. But I'm curious about a proper answer, as I don't know much about driver code myself.
 
Joined
Sep 6, 2013
Do you have sources?
Go to AMD's, Intel's, and Nvidia's pages as if to download the latest version of the driver - but don't download it, just read the release notes.
 
Joined
Jan 14, 2019
Go to AMD's, Intel's, and Nvidia's pages as if to download the latest version of the driver - but don't download it, just read the release notes.
A fair point. Personally, I think that's down to how the driver communicates with the API, and which specific portions of the API the game uses. Like I said: bugs that can be ironed out. It's not an "optimisation" thing.

But let's wait for a proper answer from someone who knows more than I do.
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Why does Arc perform on par with the competition in 3DMark but lose badly in games?
AusWolf's reply is pretty good in layman's terms.
To add to that: games usually try to render things with reasonable efficiency, while synthetic benchmarks try to simulate "future" gaming workloads, so they usually end up stressing the GPU much more than a normal game would. Honestly, I don't think the performance scores here have any use to consumers; I use them for stress testing after setting up a computer. I do think synthetics can be useful for driver developers, though, to try to provoke bugs.
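To put that "benchmarks stress harder than games" idea in toy form: a minimal sketch in plain Python, nothing GPU-specific, with all function names and numbers invented for illustration. A benchmark issues work back-to-back, while a game paces itself around frames, input, and vsync:

```python
import time

def synthetic_benchmark(n=1000):
    """Toy 'synthetic' workload: issue work back-to-back with no pacing,
    the way a benchmark tries to keep the GPU saturated."""
    busy = 0.0
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        sum(i * i for i in range(200))   # stand-in for a draw call
        busy += time.perf_counter() - t0
    return busy / (time.perf_counter() - start)  # fraction of time spent working

def game_loop(n=100, frame_time=0.005):
    """Toy 'game' workload: do some work, then idle until the next frame
    is due (vsync, input, game logic), leaving the 'GPU' partly idle."""
    busy = 0.0
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        sum(i * i for i in range(200))
        busy += time.perf_counter() - t0
        time.sleep(frame_time)           # wait for the next frame
    return busy / (time.perf_counter() - start)

print(f"synthetic utilisation ~{synthetic_benchmark():.0%}")
print(f"game-loop utilisation ~{game_loop():.0%}")
```

The synthetic loop keeps utilisation near its ceiling, while the paced loop spends most of its wall time idle, which is one reason a chip can look strong in a saturating benchmark yet behave differently under real game pacing.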

Why are most bugs in Arc ones that lead to an application crash or texture corruption? In AMD's and Nvidia's driver FAQs, you read about strange behaviour when doing very specific things. In the Arc FAQ, half the bugs are application crashes or broken textures from just running the game.
If there is texture corruption across multiple games, and the same games don't have the same problem on other hardware, then the driver doesn't behave according to spec. Finding the underlying reason would require more details though; it could be either the driver or the hardware. This might surprise you, but when it comes to software bugs it's actually better if the bug occurs across many use cases. That usually means the bug is easier to reproduce and precisely locate, and such bugs are usually caught and fixed once there are enough testers. A rare and obscure bug is in many ways worse, as it leads to very poor bug reports, which in turn means a large effort to track it down.
 
Joined
Sep 6, 2013
Messages
2,978 (0.77/day)
Location
Athens, Greece
I don't think the performance scores here have any use to consumers.
Those are the numbers that will be printed on advertising material. That's why Intel is concentrating on those apps. While you say optimization is a myth, it seems Intel is focusing on that myth.
If there is texture corruption across multiple games, and the same games don't have the same problem on other hardware, then the driver doesn't behave according to spec. Finding the underlying reason would require more details though; it could be either the driver or the hardware. This might surprise you, but when it comes to software bugs it's actually better if the bug occurs across many use cases. That usually means the bug is easier to reproduce and precisely locate, and such bugs are usually caught and fixed once there are enough testers. A rare and obscure bug is in many ways worse, as it leads to very poor bug reports, which in turn means a large effort to track it down.
I guess I have to provide a link after all

DRIVER VERSION: 30.0.101.1736
DATE: June 14, 2022
GAMING HIGHLIGHTS:
• Launch driver for Intel® Arc™ A380 Graphics (Codename Alchemist).
• Intel® Game On Driver support for Redout 2*, Resident Evil 2*, Resident Evil 3*, and Resident Evil 7:
Biohazard* on Intel® Arc™ A-Series Graphics.
Get a front row pass to gaming deals, contests, betas, and more with Intel Software Gaming Access.
FIXED ISSUES:
• Far Cry 6* (DX12) may experience texture corruption in water surfaces during gameplay.
• Destiny 2* (DX11) may experience texture corruption on some rock surfaces during gameplay.
• Naraka: Bladepoint* (DX11) may experience an application crash or become unresponsive during training
mode.
KNOWN ISSUES:
• Metro Exodus: Enhanced Edition* (DX12), Horizon Zero Dawn* (DX12), Call of Duty: Vanguard* (DX12), Tom
Clancy’s Ghost Recon Breakpoint (DX11), Strange Brigade* (DX12) and Forza Horizon 5* (DX12) may
experience texture corruption during gameplay.
• Tom Clancy’s Rainbow Six Siege* (DX11) may experience texture corruption in the Emerald Plains map when
ultra settings are enabled in game. A workaround is to select the Vulkan API in game settings.
• Gears 5* (DX12) may experience an application crash, system hang or TDR during gameplay.
• Sniper Elite 5* may experience an application crash on some Hybrid Graphics system configurations when
Windows® “Graphics Performance Preference” option for the application is not set to “High Performance”.
• Call of Duty: Black Ops Cold War* (DX12) may experience an application crash during gameplay.
• Map textures may fail to load or may load as blank surfaces when playing CrossFire*.
• Some objects and textures in Halo Infinite* (DX12) may render black and fail to load. Lighting may also appear
blurry or over exposed in the multiplayer game menus

What doesn't surprise me is how the glass is half empty or half full, depending on the situation.
 
Last edited:
Joined
Jan 14, 2019
Messages
9,872 (5.12/day)
Location
Midlands, UK
Those are the numbers that will be printed on advertising material. That's why Intel is concentrating on those apps. While you say optimization is a myth, it seems Intel is focusing on that myth.
Even if a certain architecture performs better in one app than another, there's nothing to suggest that it's due to a magical driver rather than the hardware itself.

AMD CPUs have been famous for being better at productivity apps, while Intel is (or used to be) better at games. Is this due to some driver magic as well?

I guess I have to provide a link after all



What doesn't surprise me is how the glass is half empty or half full, depending on the situation.
No one said that there can't be bugs in the driver-API communication. AMD is notorious for leaving bugs in for a long time. The argument was that these bugs in no way mean that games are "optimised" for a certain architecture or god forbid, manufacturer.
 
Joined
Sep 6, 2013
Messages
2,978 (0.77/day)
Location
Athens, Greece
Damn, you two are on a crusade to insist that it is not how I assume it is but something different, even though you don't have concrete proof of that either. And by the way, let me remind everyone here that we are all just guessing. ALL of us.

Having said that, let's see why I said that.
Even if a certain architecture performs better in one app than another, there's nothing to suggest that it's due to a magical driver rather than the hardware itself.
A driver does play a role. It's not a myth. When a new driver fixes performance in a game or in multiple games, then something was changed in that driver. What was that? I am NOT a driver developer. Are you? A lack of knowledge doesn't mean the phrase "nothing to suggest" has any real value here. A man from 100 BC would insist that there is "nothing to suggest" that a 10-tonne helicopter stays in the air by pushing that air down with its rotor blades, lacking all the necessary knowledge of physics.*
AMD CPUs have been famous for being better at productivity apps, while Intel is (or used to be) better at games. Is this due to some driver magic as well?
AMD CPUs have been famous for being better at productivity apps because they had more cores until Alder Lake. On the other hand, Intel almost always had the advantage in IPC, and many apps were optimized for Intel CPUs, not AMD CPUs.
No one said that there can't be bugs in the driver-API communication. AMD is notorious for leaving bugs in for a long time. The argument was that these bugs in no way mean that games are "optimised" for a certain architecture or god forbid, manufacturer.
I am not going to comment on the "notorious AMD". It's boring after so many years of reading the same stuff; people feeling the need to bash AMD even while using its products is not my area of expertise. I am also not going to play word games with someone who will never accept anything different. I have been reading for decades, even from Intel/AMD/Nvidia representatives, about app/game optimizations and about apps/games being developed on specific platforms. I have seen how Nvidia's perfect image was ruined for a year or two, somewhere around 2014 I think, when games were optimized for the consoles, meaning GCN, and the PC versions had a gazillion problems, especially those games paid by Nvidia to implement GameWorks in their PC versions.

So, I am stopping here. There is no reason to lose more time with people who insist that it is not A but B, without ANY REAL arguments for why it is B and not A.

Have a nice day.

PS * Just remembered Carl Sagan

 
Last edited:
Joined
Jan 14, 2019
Messages
9,872 (5.12/day)
Location
Midlands, UK
A driver does play a role. It's not a myth. When a new driver fixes performance in a game or in multiple games, then something was changed in that driver. What was that? I am NOT a driver developer. Are you? A lack of knowledge doesn't mean the phrase "nothing to suggest" has any real value here. A man from 100 BC would insist that there is "nothing to suggest" that a 10-tonne helicopter stays in the air by pushing that air down with its rotor blades, lacking all the necessary knowledge of physics.*
I'm not a driver developer either, but I'm willing to learn from someone who knows a lot more about the topic than I do, for example:
Drivers aren't really optimized for specific games, at least not the way you think. When you see driver updates offering up to X% more performance in <selected title>, it's usually tweaking the game profiles or sometimes overriding shader programs. These aren't really optimizations so much as "cheating": trying to reduce image quality very slightly to get a few percent more performance in benchmarks.

When they do real performance optimizations, it's usually one of these;
a) General API overhead (tied to the internal state machine of an API) - Will affect anything that uses this API.
b) Overhead of a specific API call or parameter - Will affect anything that uses this API call
So therefore, I reject your premise of optimizing performance for a specific title.
This. @efikkan presented a clear explanation with technical details as to why his claim is right. You didn't.
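The (a)/(b) split above can be put in toy form. This is a minimal sketch in plain Python, not real driver code; the class and method names are all invented. It models category (a): overhead tied to the API's internal state machine, which a driver can reduce for every application at once by caching validation instead of repeating it on each call:

```python
# Toy model of category (a): general API overhead. A naive driver
# re-validates its internal state on every draw call; a tuned one only
# re-validates after a state change. Every application using the API
# benefits, with no game-specific code involved.

class ToyDriver:
    def __init__(self, validate_every_call: bool):
        self.validate_every_call = validate_every_call
        self.state_valid = False
        self.validations = 0            # counts the 'expensive' work done

    def _validate_state(self):
        self.validations += 1           # stand-in for costly state checking
        self.state_valid = True

    def set_texture(self):
        self.state_valid = False        # any state change invalidates caches

    def draw(self):
        if self.validate_every_call or not self.state_valid:
            self._validate_state()

naive, tuned = ToyDriver(True), ToyDriver(False)
for drv in (naive, tuned):
    for frame in range(100):            # 100 frames...
        drv.set_texture()               # ...one state change per frame...
        for _ in range(50):             # ...fifty draw calls per frame
            drv.draw()

# naive validated on every draw; tuned validated once per state change
print(naive.validations, tuned.validations)
```

The tuned driver does the expensive check 50x less often here, yet nothing in it knows which "game" is calling it, which is the sense in which such work is API-level rather than per-title optimization.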

AMD CPUs have been famous for being better at productivity apps because they had more cores until Alder Lake. On the other hand, Intel almost always had the advantage in IPC, and many apps were optimized for Intel CPUs, not AMD CPUs.
There you go. That's down to differences in the hardware, isn't it?

I am not going to comment on the "notorious AMD". It's boring after so many years of reading the same stuff; people feeling the need to bash AMD even while using its products is not my area of expertise. I am also not going to play word games with someone who will never accept anything different. I have been reading for decades, even from Intel/AMD/Nvidia representatives, about app/game optimizations and about apps/games being developed on specific platforms. I have seen how Nvidia's perfect image was ruined for a year or two, somewhere around 2014 I think, when games were optimized for the consoles, meaning GCN, and the PC versions had a gazillion problems, especially those games paid by Nvidia to implement GameWorks in their PC versions.
1. You clearly misread my point. I never intended to criticise AMD. I merely stated the fact that bugs CAN be found in a driver, like in any software. It's not proof that drivers are specifically optimised for certain games.
2. Who said that you can't write a game to favour the hardware resources of a certain architecture? It's not the same thing as "optimising" a new driver for a game that's already been made.
 
Joined
Sep 6, 2013
Messages
2,978 (0.77/day)
Location
Athens, Greece
This. @efikkan presented a clear explanation with technical details as to why his claim is right. You didn't.
No, he didn't. He just wrote a lot of stuff that isn't necessarily on topic or correct. If you know NOTHING about driver development, how can you assume that what he wrote is in fact correct? You can't. And he was trying to support a specific argument while constantly changing his point of view, which in my book doesn't make him objective or his arguments correct. You can give him all the credit you want, seeing that he supports your idea of a notorious AMD, but I am someone who needs more specific and more concrete arguments than five lines of code.

OK, that's more than enough from me having said that I would stop and not make another post. Especially when the other person keeps moving the goalposts.
 
Joined
Sep 17, 2014
Messages
20,944 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Well, I suppose it would be more accurate to say that it cripples performance on non-Nvidia platforms, but the end result is the same.

No, it does not. FXAA originates from GameWorks, for example, and it runs fine on AMD. The same applies to HBAO+ and numerous other features.

All it requires is a bit of code so the GPU knows what to do. In the end, it's a processing unit working through an API, and the API just serves to translate. If you have the full vocabulary on your GPU, you can have it translated. If not, you'll fall back to something doing the same thing but slower, or not at all, because it is somehow locked.

The end result might be the same, but the reasons are different, and the REASONS are the core of the GameWorks argument. There is absolutely nothing stopping AMD from providing GameWorks-like solutions and support, and it hasn't stopped them either. The real question is: what features do you really need, and how do they help gaming? The ones we can really use definitely get copied, and you're not missing much between having GameWorks support or not. FXAA is a great example of that.

Well, don't worry, I can see where you're going, or maybe, to be more accurate, where you're standing.

Anyway let's keep questions simple here.

Why does Arc perform on par with the competition in 3DMark but lose badly in games?
Why are most bugs in Arc ones that lead to an application crash or texture corruption? In AMD's and Nvidia's driver FAQs, you read about strange behaviour when doing very specific things. In the Arc FAQ, half the bugs are application crashes or broken textures from just running the game.
The fact that we don't have ready-made answers to these questions, only guesses, is quite simply because we don't know for sure. Perhaps there are monkeys disguised as humans building their code. Perhaps they have hardware issues they are working around as we speak. Workarounds are going to be inefficient.

A benchmark is reproducible. Games are more variable in what they want at any given point in time.

A driver does play a role. It's not a myth.
It does. Here's a car analogy. The driver is the DRIVER. But the car is the car. It has limits, it can accelerate to 100 in a defined number of seconds. But if the driver of the car is crap at shifting gears, it certainly won't meet that spec. A better driver, or hey, let's use the at-one-point implemented Shader cache as an example: a more experienced driver, having driven the car a few times in that situation; will know exactly when to shift gears and therefore meets the spec.

Now, let's consider the car and the driver on a new road (new game). The job at hand is to accelerate as fast as possible, and then hit the brakes to come to full stop as fast as possible so he can accelerate again to full speed (clocks/boost!). One driver has experience on fresh roads, knowing they can be more smooth and slippery, so he applies different braking action while the other is oblivious to road types. The brakes on the cars are identical. The driver determines when to hit them and how hard.

So yes, drivers play a role. And so does experience. Experience is pretty much scheduling: using the hardware resources in the best possible way at the best possible time. The other part of drivers, where they apply trickery to hit bigger numbers, usually comes at the cost of image quality. That could be called optimization, but that's a choice of semantics; the reality is you render less, so you produce more frames.

So what does that all mean? It means, that if driver updates tell you they suddenly got a major perf boost within a select number of applications, you should be on the lookout for what work it is they're not doing anymore. And if the driver update tells you there is a major increase of perf across the board, scheduling likely improved.

Calling either of it optimization is not really accurate, is it? The first is cheating, the latter is basically dev work on your GPU (hardware) that wasn't done prior to its release. And bugs.. are bugs, again, a matter of experience with the hardware. How does it behave, and why? Intel seems to have arrived at a point where they documented the how and haven't quite found the why for most situations. When they find that why, there's going to be numerous much smaller how's and why's underneath for those very specific situations you speak of in AMD/Nv driver FAQs. Refinement happens over time.
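The shader-cache part of the analogy above can be sketched as simple memoisation. This is a toy Python model, not how real drivers are implemented; the shader names, timings, and counters are all made up. The point is only that the first encounter with a "shader" pays a compile cost, and every later frame reuses the cached result, so the driver gets faster in a game it has "seen" before without any game-specific optimization:

```python
import functools
import time

COMPILES = {"count": 0}   # how many 'expensive' compiles actually ran

@functools.lru_cache(maxsize=None)
def compile_shader(source: str) -> str:
    """Pretend shader compile: slow the first time, then served from cache."""
    COMPILES["count"] += 1
    time.sleep(0.01)                    # simulate expensive compilation
    return f"binary({source})"

def render_frame(shaders):
    return [compile_shader(s) for s in shaders]

frame_shaders = ["water", "rock", "skin"]   # hypothetical shaders in one scene

t0 = time.perf_counter()
render_frame(frame_shaders)                 # first frame: 3 compiles
first = time.perf_counter() - t0

t0 = time.perf_counter()
render_frame(frame_shaders)                 # later frame: all cache hits
cached = time.perf_counter() - t0

print(f"first frame {first*1000:.1f} ms, cached frame {cached*1000:.1f} ms")
```

The second call renders the same frame orders of magnitude faster purely because the work was remembered, which is roughly the "experienced driver on a familiar road" in the car analogy.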
 
Last edited:
Joined
Sep 6, 2013
Messages
2,978 (0.77/day)
Location
Athens, Greece
Calling either of it optimization is not really accurate, is it?
Considering we are not programmers, we may use words that aren't really accurate, but most of the time we are describing the same thing, considering most of us have the same teachers and the same books (YouTube, forums, tech sites).
 
Top