Wednesday, June 22nd 2022

Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks

Intel's Arc A380 desktop graphics card is generally available in China, and real-world gaming benchmarks of the card by independent media paint a vastly different picture than the one we'd been led to expect by synthetic benchmarks. The entry-mainstream graphics card, sold for the equivalent of under $160 in China, is shown beating the AMD Radeon RX 6500 XT and RX 6400 in the 3DMark Port Royal and Time Spy benchmarks by a significant margin. The gaming results, however, see it lose to even the RX 6400 in each of the six games tested by the source.

The tests in the graph below are, in order: League of Legends, PUBG, GTA V, Shadow of the Tomb Raider, Forza Horizon 5, and Red Dead Redemption 2. In the first three tests, which are DirectX 11 based, the A380 is 22 to 26 percent slower than the NVIDIA GeForce GTX 1650 and Radeon RX 6400. The gap narrows in the DirectX 12 titles SoTR and Forza Horizon 5, where it's within 10% of the two cards. The card's best showing is in the Vulkan-powered RDR 2, where it's 7% slower than the GTX 1650 and 9% behind the RX 6400. The RX 6500 XT performs in a different league altogether. With these numbers, and given that GPU prices are cooling down in the wake of the 2022 cryptocalypse, we're not entirely sure what Intel is trying to sell at $160.
Sources: Shenmedounengce (Bilibili), VideoCardz

190 Comments on Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks

#126
ravenhold
AusWolfWho said they don't allow us? Just buy a used 2060 and call it a day. The 6400 / 6500 XT pair are in a different league. Even though they technically support RT, they're clearly not meant for it.
I was thinking of a brand-new card from the RX 6000 / RTX 3000 series. There's no rule that says a GPU can't be budget-priced and still deliver good ray-traced performance for its frame time; that rule is only man-made. That is my opinion.
Posted on Reply
#127
AusWolf
ravenholdI was thinking of a brand-new card from the RX 6000 / RTX 3000 series. There's no rule that says a GPU can't be budget-priced and still deliver good ray-traced performance for its frame time; that rule is only man-made. That is my opinion.
Ray tracing hardware isn't advanced enough to give us decent performance at the budget level. This is more or less a fact.
Posted on Reply
#128
Keats
efikkanThe fact that a piece of software scales better on one piece of hardware is not evidence of the software being optimized for that particular hardware.
There are basically two ways to optimize for specific hardware (these principles hold true for CPUs as well):
1) Using hardware-specific low-level API calls or instructions. (the few examples in games you will find of this will be to give extra eye-candy, not to give better performance)
2) Writing code where the code is carefully crafted to give an edge to a specific class of hardware. You will struggle to find examples of this being done intentionally. And even attempting to write code this way would be stupid, as the resource advantages of current gen. GPUs are likely to change a lot 1-2 generations down the road, and the competition is likely going to respond to any such advantage. So writing code that would give e.g. Nvidia an advantage years from now will be very hard, and could just as easily backfire and do the opposite. For these reasons this is never done, and the few examples where you see a clear advantage it's probably the result of the opposite effect; un-optimized code running into a hardware bottleneck. And as mentioned, most games today use generic or abstracted game engines, have very little if any low-level code, and are generally not optimized at all.

As a good example, a while ago I got to test some code that I had optimized on Sandy Bridge/Haswell/Skylake hardware for years on a Zen 3, and to my delight the optimizations showed even greater gains on AMD hardware, with the greatest example showing roughly double performance on Zen 3 vs. 5-10% on Intel hardware.
So this would mean that I either have supernatural powers to optimize for hardware that I didn't yet have my hands on, or you just don't understand how software optimizations work at all! ;)

In reality, the notion of games being "optimized" for Nvidia or AMD is a myth.
You forgot 3) Use tools provided by a hardware vendor which do all of that for you automatically.

Admittedly 3 sometimes works by crippling the competition instead, as famously demonstrated by ICC...
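For anyone who missed the ICC saga: the complaint was that the compiler's runtime dispatcher picked code paths based on the CPU's vendor string rather than on the features the CPU actually reports. A rough sketch of that dispatch pattern (not ICC's actual code, just the idea, using GCC/Clang builtins on x86):

// Sketch of vendor-string dispatch -- the pattern ICC was criticized for:
// gating the fast path on "GenuineIntel" instead of on reported feature flags.
#include <cpuid.h>
#include <cstdio>
#include <cstring>
#include <string>

static std::string cpu_vendor() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);   // leaf 0: vendor string lives in EBX, EDX, ECX
    char v[13] = {};
    std::memcpy(v + 0, &ebx, 4);
    std::memcpy(v + 4, &edx, 4);
    std::memcpy(v + 8, &ecx, 4);
    return std::string(v);
}

int main() {
    const bool has_avx2 = __builtin_cpu_supports("avx2"); // what dispatch *should* key on
    if (cpu_vendor() == "GenuineIntel" && has_avx2)
        std::puts("fast AVX2 path");          // vendor-gated fast path
    else
        std::puts("generic baseline path");   // everyone else lands here, AVX2 or not
    return 0;
}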
Posted on Reply
#129
efikkan
KeatsYou forgot 3) Use tools provided by a hardware vendor which do all of that for you automatically.

Admittedly 3 sometimes works by crippling the competition instead, as famously demonstrated by ICC...
Which "tools" are there which supposedly optimizes game engine code for specific hardware?
Posted on Reply
#130
Keats
efikkanWhich "tools" are there which supposedly optimizes game engine code for specific hardware?
Any of the ones provided under the Nvidia GameWorks umbrella, for starters.
Posted on Reply
#131
efikkan
KeatsAny of the ones provided under the Nvidia GameWorks umbrella, for starters.
Aaah, the old "GameWorks makes games optimized for Nvidia" nonsense again; old BS never dies…
Anyone with a rudimentary understanding of what debugging and profiling tools do for development will see through this. And no, these tools do not optimize the engine code; the developer does that.
Posted on Reply
#132
Dr. Dro
efikkanAaah, the old "GameWorks makes games optimized for Nvidia" nonsense again; old BS never dies…
Anyone with a rudimentary understanding of what debugging and profiling tools do for development will see through this. And no, these tools do not optimize the engine code; the developer does that.
I suppose it can't be helped; it's a marketing trick. Nvidia is very keen on its closed-source ecosystem, to the point that it keeps GeForce as much of a black box as it can (somewhat like Apple does with iOS), while AMD's technologies may be subpar at times, but they've got a decent track record of open-sourcing large parts of their software (somewhat like Android).

The latter alone is enough to gather a lot of goodwill. Now add people's tendency to defend their choices and purchases no matter the cost, and the inherent need to feel accepted among their peers, and you'll find that the AMD vs. NVIDIA war is no different from iOS vs. Android or Pepsi vs. Coke: it's just people perpetuating lies and hearsay and spreading FUD about it :oops:
Posted on Reply
#133
Keats
efikkanAaah, the old "GameWorks makes games optimized for Nvidia" nonsense again; old BS never dies…
Anyone with a rudimentary understanding of what debugging and profiling tools do for development will see through this. And no, these tools do not optimize the engine code; the developer does that.
Well, I suppose it would be more accurate to say that it cripples performance on non-Nvidia platforms, but the end result is the same.
Posted on Reply
#134
simlife
Worse than a 1060? Well, duh. Intel is betting hard on AI upscaling, something that card can't do.
Posted on Reply
#135
eidairaman1
The Exiled Airman
This is called the Raja Koduri effect, because everything he touches turns into chitlins.
Posted on Reply
#136
efikkan
KeatsWell, I suppose it would be more accurate to say that it cripples performance on non-Nvidia platforms, but the end result is the same.
What specifically cripples non-Nvidia products?
If you knew how debugging and profiling tools worked, you wouldn't come up with something like that. These tools will not optimize (or sabotage) the code. The code is still written by the programmer.
And BTW, AMD offer comparable tools too.

Performance optimization for specific hardware in modern PC games is a myth.
Posted on Reply
#137
TheoneandonlyMrK
efikkanWhat specifically cripples non-Nvidia products?
If you knew how debugging and profiling tools worked, you wouldn't come up with something like that. These tools will not optimize (or sabotage) the code. The code is still written by the programmer.
And BTW, AMD offer comparable tools too.

Performance optimization for specific hardware in modern PC games is a myth.
Not sure you're right. Do you have proof? Because looking at one example, ray tracing performance, it's quite clear that games made for Nvidia work best only on Nvidia, while AMD-sponsored titles perform well on both but don't let the Nvidia hardware quite stretch its lead.
Posted on Reply
#138
efikkan
TheoneandonlyMrKNot sure you're right. Do you have proof? Because looking at one example, ray tracing performance, it's quite clear that games made for Nvidia work best only on Nvidia, while AMD-sponsored titles perform well on both but don't let the Nvidia hardware quite stretch its lead.
That's a nonsensical anecdote. And how can you even measure whether a game is "made for Nvidia"?

As of now Nvidia have stronger RT capabilities, so games which utilize RT more heavily will scale better on Nvidia hardware. Once AMD releases a generation with similar capabilities, they will perform just as well, perhaps even better.

Firstly, as mentioned earlier, in order to optimize for e.g. Nvidia, we would have to write code targeting specific generations (e.g. Pascal, Turing, Ampere…), as the generations change a lot internally, and there could not be a universal Nvidia-optimization vs. AMD-optimization, as newer GPUs from competitors might be more similar to each other than to their own GPUs two-three generations ago. This means the game developer needs to maintain multiple code paths to ensure their Nvidia chips outperform their AMD counterparts. But this all hinges on the existence of a GPU-specific low-level API to use. Does any such API exist publicly? Because if not, the whole idea of specific optimizations is dead. (The closest you will find is experimental features (extensions to OpenGL and Vulkan), but these are high-level API functions and are usually new features, and I've never seen them used in games. And these are not exclusive either, as anyone can implement them if needed.)

Secondly, optimizing for future or recently released GPU architectures would be virtually impossible. Game engines are often written/rewritten 2-3 years ahead of a game release date, and even top game studios rarely have access to new engineering samples more than ~6 months ahead of a GPU release. And we all know how badly game studios screw up when they try to patch in some new big feature at the end of the development cycle.

Thirdly, most games use third-party game engines, which means the game studio doesn't even write any low-level rendering code. The big popular game engines might have many advanced features, but their rendering code is generic and not hand-tailored to the specific needs of the objects in a specific game. So any optimized game would have to use a custom game engine without bloat and abstractions.

As for proof, 1 is provable to the extent that these mystical GPU-specific APIs are not present on Nvidia's and AMD's developer websites. 2 is a logical deduction. 3 is provable in a broad sense as few games use custom game engines. The remaining would require disassembly to prove 100%, but is pointless unless you disprove 1 first.
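To make the first point concrete, this is roughly what "vendor-specific" looks like in practice: the engine asks the standard API whether an optional, publicly documented extension is present, and falls back to the portable path if not. A rough sketch, assuming a VkPhysicalDevice you've already created elsewhere; the path-selection function names are made up for illustration.

// Sketch: detect an optional vendor extension (here VK_NV_mesh_shader) through
// the standard Vulkan API and fall back to the portable path. This is feature
// detection, not a hidden "Nvidia-optimized" code path.
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool device_has_extension(VkPhysicalDevice gpu, const char* name) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, name) == 0)
            return true;
    return false;
}

// Hypothetical usage during renderer setup (made-up function names):
// if (device_has_extension(gpu, VK_NV_MESH_SHADER_EXTENSION_NAME))
//     enable_mesh_shader_path();   // optional vendor path
// else
//     enable_vertex_shader_path(); // portable default every GPU can run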
Posted on Reply
#139
TheoneandonlyMrK
efikkanThat's a nonsensical anecdote. And how can you even measure whether a game is "made for Nvidia"?

As of now Nvidia have stronger RT capabilities, so games which utilize RT more heavily will scale better on Nvidia hardware. Once AMD releases a generation with similar capabilities, they will perform just as well, perhaps even better.

Firstly, as mentioned earlier, in order to optimize for e.g. Nvidia, we would have to write code targeting specific generations (e.g. Pascal, Turing, Ampere…), as the generations change a lot internally, and there could not be a universal Nvidia-optimization vs. AMD-optimization, as newer GPUs from competitors might be more similar to each other than to their own GPUs two-three generations ago. This means the game developer needs to maintain multiple code paths to ensure their Nvidia chips outperform their AMD counterparts. But this all hinges on the existence of a GPU-specific low-level API to use. Does any such API exist publicly? Because if not, the whole idea of specific optimizations is dead. (The closest you will find is experimental features (extensions to OpenGL and Vulkan), but these are high-level API functions and are usually new features, and I've never seen them used in games. And these are not exclusive either, as anyone can implement them if needed.)

Secondly, optimizing for future or recently released GPU architectures would be virtually impossible. Game engines are often written/rewritten 2-3 years ahead of a game release date, and even top game studios rarely have access to new engineering samples more than ~6 months ahead of a GPU release. And we all know how badly game studios screw up when they try to patch in some new big feature at the end of the development cycle.

Thirdly, most games use third-party game engines, which means the game studio doesn't even write any low-level rendering code. The big popular game engines might have many advanced features, but their rendering code is generic and not hand-tailored to the specific needs of the objects in a specific game. So any optimized game would have to use a custom game engine without bloat and abstractions.

As for proof, 1 is provable to the extent that these mystical GPU-specific APIs are not present on Nvidia's and AMD's developer websites. 2 is a logical deduction. 3 is provable in a broad sense as few games use custom game engines. The remaining would require disassembly to prove 100%, but is pointless unless you disprove 1 first.
www.nvidia.com/en-gb/geforce/news/nvidia-rtx-games-engines-apps/

Where have you been? Can AMD cards run RTX code? No.

No rant here, though I disagree, and your opinion isn't enough to change that; it's just an opinion.
Posted on Reply
#140
AusWolf
TheoneandonlyMrKwww.nvidia.com/en-gb/geforce/news/nvidia-rtx-games-engines-apps/

Where have you been? Can AMD cards run RTX code? No.

No rant here, though I disagree, and your opinion isn't enough to change that; it's just an opinion.
Yes, they can. It's only that Nvidia's RT hardware is stronger at the moment.

The only vendor-specific code I can think of is GameWorks. If you enable Advanced PhysX in a Metro game, an Nvidia card will be OK, but AMD just dies.

Other than that, why and how would games be optimised for a vendor (and not an architecture)?
Posted on Reply
#141
Unregistered
Am I the only one who expected this kind of result from a first-gen Intel GPU?
It will take years for them to make something competitive.
As for their drivers, it will take them forever.
I said it before and I say it again: as a gamer, I will never buy their GPUs.
But they will probably come in handy for office computers without integrated graphics :D
#142
efikkan
TheoneandonlyMrKwww.nvidia.com/en-gb/geforce/news/nvidia-rtx-games-engines-apps/
Where have you been? Can AMD cards run RTX code? No.
No rant here, though I disagree, and your opinion isn't enough to change that; it's just an opinion.
While you are entitled to your own opinion, this subject is a matter of facts, not opinions, so both of our opinions are irrelevant. And I mean no disrespect, but this is a deflection from your end, instead of facing the facts that prove you wrong.

"RTX" is a marketing term for their hardware, which you can clearly see uses DXR or Vulkan as the API front-end.
Direct3D 12 ray-tracing details: docs.microsoft.com/en-us/windows/win32/direct3d12/direct3d-12-raytracing
The Vulkan ray tracing spec: VK_KHR_ray_tracing_pipeline is not Nvidia specific, and includes contributions from AMD, Intel, ARM and others.

And as you can see from Nvidia's DirectX 12 tutorials and Vulkan tutorial, this is vendor-neutral, high-level API code. And as their web page clearly states:
RTX Ray-Tracing APIs
NVIDIA RTX brings real time, cinematic-quality rendering to content creators and game developers. RTX is built to be codified by the newest generation of cross platform standards: Microsoft DirectX Ray Tracing (DXR) and Vulkan from Khronos Group.
I haven't looked into how Intel's ARC series compares in ray tracing support level vs. Nvidia and AMD.

So in conclusion, again: modern PC games are not optimized for specific hardware. Some games may feature optional special effects which are vendor specific, but these are not low-level hardware-specific optimizations, and they are not the basis for comparing performance between products. If an Nvidia card performs better in a game than AMD or Intel, it's not because the game is optimized for that Nvidia card. Claiming it's an optimization would be utter nonsense.
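If you want to see how vendor-neutral this is in code, here's a minimal sketch (assuming a valid ID3D12Device obtained from any vendor's driver) of how an engine queries DXR support. There is no separate "RTX path" to call; it's the same D3D12 feature check on GeForce, Radeon and Arc.

// Sketch: querying ray tracing support through the vendor-neutral DXR API.
// The same check runs on any D3D12 GPU; only the reported tier differs.
#include <d3d12.h>

bool supports_dxr(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    // Tier 1.0 or higher means the DXR pipeline is available on this GPU.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}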
AusWolfThe only vendor-specific code I can think of is GameWorks. If you enable Advanced PhysX in a Metro game, an Nvidia card will be OK, but AMD just dies.
GameWorks is the large suite of developer tools, samples, etc. Nvidia provides for game developers. They have some special effects in there that may only work on Nvidia hardware, but the vast majority is plain DirectX/OpenGL/Vulkan.
AMD have their own Developer tool suite, which is pretty much the same deal, complete with some unique AMD features.
Posted on Reply
#143
TheoneandonlyMrK
efikkanWhile you are entitled to your own opinion, this subject is a matter of facts, not opinions, so both of our opinions are irrelevant. And I mean no disrespect, but this is a deflection from your end, instead of facing the facts that prove you wrong.

"RTX" is a marketing term for their hardware, which you can clearly see uses DXR or Vulkan as the API front-end.
Direct3D 12 ray-tracing details: docs.microsoft.com/en-us/windows/win32/direct3d12/direct3d-12-raytracing
The Vulkan ray tracing spec: VK_KHR_ray_tracing_pipeline is not Nvidia specific, and includes contributions from AMD, Intel, ARM and others.

And as you can see from Nvidia's DirectX 12 tutorials and Vulkan tutorial, this is vendor-neutral, high-level API code. And as their web page clearly states:


I haven't looked into how Intel's ARC series compares in ray tracing support level vs. Nvidia and AMD.

So in conclusion, again: modern PC games are not optimized for specific hardware. Some games may feature optional special effects which are vendor specific, but these are not low-level hardware-specific optimizations, and they are not the basis for comparing performance between products. If an Nvidia card performs better in a game than AMD or Intel, it's not because the game is optimized for that Nvidia card. Claiming it's an optimization would be utter nonsense.


GameWorks is the large suite of developer tools, samples, etc. Nvidia provides for game developers. They have some special effects in there that may only work on Nvidia hardware, but the vast majority is plain DirectX/OpenGL/Vulkan.
AMD have their own Developer tool suite, which is pretty much the same deal, complete with some unique AMD features.
See, the timeline was: proprietary RTX came out, DX12 was announced, games came out with no fallback support for non-RTX hardware, then DX12 Ultimate was released with the DX12 ray tracing API, and eventually Nvidia moved towards DX12's implementation.

But you're pulling Vulkan and the like into it.

Parity may have been achieved now, but Microsoft worked with Nvidia first on DXR, so gains were made, and used.

So believe what you want.
Posted on Reply
#144
efikkan
TheoneandonlyMrKSee, the timeline was: proprietary RTX came out, DX12 was announced, games came out with no fallback support for non-RTX hardware, then DX12 Ultimate was released with the DX12 ray tracing API, and eventually Nvidia moved towards DX12's implementation.

But you're pulling Vulkan and the like into it.

Parity may have been achieved now, but Microsoft worked with Nvidia first on DXR, so gains were made, and used.

So believe what you want.
You got it all mixed up.
At GDC in March 2018, MS announced DXR as the front-end to Nvidia's RTX, which was announced at the same conference. So it was DXR long before Turing launched later the same year. The initial API draft may have been a little different from the final version, but that's irrelevant for the games which shipped with DXR support much later. Drafts and revisions are how graphics APIs are developed.

The games which use ray tracing today use DXR (or Vulkan, if there are any). So this bogus claim that these games are optimized for Nvidia hardware should be put to rest once and for all. Please stop spreading misinformation, as you clearly don't comprehend this subject.
Posted on Reply
#145
TheoneandonlyMrK
efikkanYou got it all mixed up.
At GDC in March 2018, MS announced DXR as the front-end to Nvidia's RTX, which was announced at the same conference. So it was DXR long before Turing launched later the same year. The initial API draft may have been a little different from the final version, but that's irrelevant for the games which shipped with DXR support much later. Drafts and revisions are how graphics APIs are developed.

The games which use ray tracing today use DXR (or Vulkan, if there are any). So this bogus claim that these games are optimized for Nvidia hardware should be put to rest once and for all. Please stop spreading misinformation, as you clearly don't comprehend this subject.
So, September 20, 2018: RTX was released.

October 10, 2018: DXR came out with Windows update 1809.

Hmmnnn.
Posted on Reply
#146
kapone32
RH92You could buy anything that exists except for the 6500XT ....
What is wrong with the 6500XT for Gaming?
Posted on Reply
#147
AusWolf
kapone32What is wrong with the 6500XT for Gaming?
Nothing, except if you want to put it in a PCI-e 3.0 or older motherboard and you want to play a game that's sensitive to PCI-e bandwidth.

I have one, and I assure you, the card is fine (and extremely quiet).
Posted on Reply
#148
ravenhold
kapone32What is wrong with the 6500XT for Gaming?
It would be great if you could run Cyberpunk with ray tracing set to High at 35 FPS.
Posted on Reply
#149
AusWolf
ravenholdIt would be great if you could run Cyberpunk with ray tracing set to High at 35 FPS.
That's not gonna happen with a budget graphics card. Not in 2022, anyway.
Posted on Reply
#150
sweet
john_3DMark performance shows possible potential. Games show current reality.

In any case, Intel will be selling millions of these to OEMs, to be used in their prebuilt systems, meaning that cards like Nvidia's MX line and AMD's RX 6400/6500 XT are out of Intel-based systems. And that's what Intel cares about. Those 3DMark scores are enough to convince consumers that they are getting a fast card.

Now if only someone could clear things up about ARC's hardware compatibility, that would be nice. Let's hope that Intel doesn't start a new trend of cards being incompatible with some systems. If they do start that kind of trend, then I wish they had NEVER re-entered the market and had completely failed.
Why would anyone expect anything else, to be honest? When talking about gaming performance you have to factor in the drivers, which I doubt even Intel has the resources to pull off for a brand-new architecture.
Posted on Reply