
Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks

They will be available in the West eventually. According to Wikipedia these Arc cards were supposed to launch in Q2 *or* Q3, so they are doing just fine. For some reason Intel has decided to launch in China first, and I am sure they have strategic reasons for that: perhaps they figure the Chinese market will be more receptive to a new dGPU player, or more interested in low-end cards, or that Intel has stronger brand recognition there compared to AMD and Nvidia. It does not really matter; we can hate Intel for any number of reasons, but I don't think they are strategically incompetent, despite what some (IMO ignorant) people may think. The same applies to the people saying Raja Koduri does not know what he is doing, e.g. because GCN was not good enough for gaming or something like that. Well, maybe gaming was not their main focus? Maybe they were making a ton of money selling cards for compute in datacenters? Some people struggle to look beyond their own perspective, which I frankly find hard to understand at this point. It should have dawned on people by now that enthusiast/gamer desktop users are not the most important market for these large corporations, after the mobile and server markets have been prioritized time and time again.
I agree with that. I'm still on a "wait and see" approach, and will be until the worldwide launch of the whole product line.

I must admit I am surprised by that. Note that my GTX 1050 was a non-Ti, though (people often seem to forget those even existed). Still, my GTX 1050 at least had hardware encoding, unlike the RX 6500 "XT". Mine was an EVGA low-profile, single-slot card (it went into a used M92p ThinkCentre). I am not knocking the RX 6400, just the RX 6500 XT.
I think the biggest problem of the 6500 XT is the price. I absolutely love my Asus TUF. It can push all the frames I need at 1080p, and it barely makes a whisper over my Be Quiet case fans, even fully overclocked. I don't even care about hardware encoding. In fact, I only know one single person who does. It's a niche thing, imo. Most people just want to play games. The only reason I still wouldn't recommend anyone buy one is the price (which wasn't an issue for me, because I'm generally curious about any PC hardware).

Still, the 1050 (Ti) and the 6400 / 6500 XT duo are in entirely different leagues, and should not be compared. The 1650 and 1650 Super are their main competition.
 
I think hardware encoding is useful for recording gameplay at least. Not that I do that a whole lot, but I am interested in doing it every now and then, and with my Polaris card I know that I can. I am not exactly running a Threadripper, so depending on the game, it may be nice to avoid straining my CPU. I don't know whether these RX cards have hardware decoding or not, but hardware AV1 decoding certainly interests me. Actually, hardware H.264 and H.265 decoding interests me as well, since technically I do not even have patent licenses for the FFmpeg software decoders (used by applications such as MPlayer and VLC media player), which means it is technically illegal in the US for me to use those software decoders on my desktop PC. (If I had a Windows license for it, I could at least claim that I had already paid for them that way.)

Anyway, I think people should at least be happy that the A380 will, one way or another, probably lower the pricing of the RX 6400. Personally, I do not have the money to buy a new GPU right now, but it is nice to know that there are at least some good (low-end, efficient, PCIe-power-only) options available that I can purchase if I need or want to.
 
Fair enough. Like I said, I don't record anything, and I know only one person who does, so for me, it's not an issue. :)

The RX 6000 series has full decode. Navi 24 is only missing AV1 (it still has H.264 and H.265).

Agreed. The lower end shouldn't be neglected, especially now that Nvidia and AMD (especially Nvidia) are doing everything they can to close in on the magical 1.21 jigawatt boundary.
 
Who said they don't allow us? Just buy a used 2060 and call it a day. The 6400 / 6500 XT pair is in a different league: even though they technically support RT, they're clearly not meant to do it.
I was thinking of a brand-new card from the RX 6000/RTX 3000 series. There isn't any rule that says a GPU can't be low-budget and still a great ray-tracing performer for its frame time; that rule is man-made. That is my opinion.
 
Ray tracing hardware isn't advanced enough to give us decent performance at the budget level. This is more or less a fact.
 
The fact that a piece of software scales better on one piece of hardware is not evidence of the software being optimized for that particular hardware.
There are basically two ways to optimize for specific hardware (these principles hold true for CPUs as well):
1) Using hardware-specific low-level API calls or instructions. (The few examples of this you will find in games are there to give extra eye-candy, not better performance.)
2) Writing code that is carefully crafted to give an edge to a specific class of hardware. You will struggle to find examples of this being done intentionally, and even attempting it would be stupid: the resource advantages of current-gen GPUs are likely to change a lot 1-2 generations down the road, and the competition is likely to respond to any such advantage. So writing code that would give e.g. Nvidia an advantage years from now would be very hard, and could just as easily backfire and do the opposite. For these reasons this is never done, and in the few cases where you see a clear advantage, it's probably the result of the opposite effect: un-optimized code running into a hardware bottleneck. And as mentioned, most games today use generic or abstracted game engines, have very little if any low-level code, and are generally not optimized at all.
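Point 2 can be illustrated with a small CPU-side sketch (my own illustration, not code from any game): a cache-blocked matrix transpose. Note that the tuning knob here, the tile size, targets a *class* of hardware (how much fits in cache), not a vendor, which is why this kind of optimization tends to carry over between Intel, AMD, and ARM cores, with only the size of the win differing per microarchitecture:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Naive transpose: writes stride through `dst` column-wise, which thrashes
// the cache once the matrix no longer fits in it.
void transpose_naive(const std::vector<int>& src, std::vector<int>& dst,
                     std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            dst[j * n + i] = src[i * n + j];
}

// Blocked transpose: processes the matrix in tile-by-tile chunks so both the
// source row and the destination column stay cache-resident. The tile size
// (32 here) is an assumption about cache capacity, not about any vendor.
void transpose_blocked(const std::vector<int>& src, std::vector<int>& dst,
                       std::size_t n, std::size_t tile = 32) {
    for (std::size_t ii = 0; ii < n; ii += tile)
        for (std::size_t jj = 0; jj < n; jj += tile)
            for (std::size_t i = ii; i < std::min(ii + tile, n); ++i)
                for (std::size_t j = jj; j < std::min(jj + tile, n); ++j)
                    dst[j * n + i] = src[i * n + j];
}
```

Both functions produce identical results; only the memory access order differs, which is exactly why such an optimization cannot "lock out" a competitor's hardware.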

As a good example, a while ago I got to test some code that I had optimized on Sandy Bridge/Haswell/Skylake hardware for years on a Zen 3, and to my delight the optimizations showed even greater gains on AMD hardware, with the greatest example showing roughly double performance on Zen 3 vs. 5-10% on Intel hardware.
So this would mean that I either have supernatural powers to optimize for hardware that I didn't yet have my hands on, or you just don't understand how software optimizations work at all! ;)

In reality, games "optimized" for Nvidia or AMD is a myth.
You forgot 3) Use tools provided by a hardware vendor that do all of that for you automatically.

Admittedly 3 sometimes works by crippling the competition instead, as famously demonstrated by ICC...
 
Which "tools" are there that supposedly optimize game engine code for specific hardware?
 
Any of the ones provided under the Nvidia GameWorks umbrella, for starters.
Aaah, the old "GameWorks makes games optimized for Nvidia" nonsense again; old BS never dies…
Anyone with a rudimentary understanding of what debugging and profiling tools do for development will see through this. And no, these tools do not optimize the engine code; the developer does that.
 
I suppose it can't be helped; it's a marketing trick. Nvidia is very keen on its closed-source ecosystem, to the point that they keep GeForce as much of a black box as they can (somewhat like Apple and iOS), while AMD's technologies may be subpar at times, but they've got a decent track record of open-sourcing large parts of their software (somewhat like Android).

That alone is enough to gather a lot of goodwill. Now add people's tendency to defend their choices and purchases no matter the cost, and the inherent need to feel accepted among their peers, and you'll find that the AMD vs. Nvidia war is no different from iOS vs. Android or Pepsi vs. Coke: it's just people perpetuating lies and hearsay and spreading FUD about it :oops:
 
Well, I suppose it would be more accurate to say that it cripples performance on non-Nvidia platforms, but the end result is the same.
 
This is called the Raja Koduri effect, because everything he touches turns into chitlins.
 
What specifically cripples non-Nvidia products?
If you knew how debugging and profiling tools worked, you wouldn't come up with something like that. These tools will not optimize (or sabotage) the code; the code is still written by the programmer.
And BTW, AMD offer comparable tools too.

Performance optimizations for specific hardware in modern PC-games is a myth.
 
I'm not sure you are right; do you have proof? Because looking at one example, ray-tracing performance, it's quite clear that games made for Nvidia work best only on Nvidia, while AMD-sponsored titles perform well on both but don't let the Nvidia hardware quite stretch its lead.
 
That's a nonsensical anecdote. And how can you even measure whether a game is "made for Nvidia"?

As of now Nvidia have stronger RT capabilities, so games which utilize RT more heavily will scale better on Nvidia hardware. Once AMD releases a generation with similar capabilities, they will perform just as well, perhaps even better.

Firstly, as mentioned earlier, in order to optimize for e.g. Nvidia, we would have to write code targeting specific generations (e.g. Pascal, Turing, Ampere…), as the generations change a lot internally, and there could not be a universal Nvidia-optimization vs. AMD-optimization, as newer GPUs from competitors might be more similar to each other than to their own GPUs two-three generations ago. This means the game developer would need to maintain multiple code paths to ensure their Nvidia chips outperform their AMD counterparts. But this all hinges on the existence of a GPU-specific low-level API to use. Does any such API exist publicly? Because if not, the whole idea of specific optimizations is dead. (The closest you will find is experimental features (extensions to OpenGL and Vulkan), but these are high-level API functions and usually new features, and I've never seen them used in games. They are not exclusive either, as anyone can implement them if needed.)
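To make the "multiple code paths" burden concrete, here is a hypothetical sketch (all names are invented; this is not a real engine or driver API): every per-generation branch is code that would have to be written, tested, and kept current separately, and that may be obsolete a generation or two later.

```cpp
#include <string>

// Hypothetical architecture tags. A real engine would have to detect these
// from device/driver IDs and extend the list every hardware generation.
enum class GpuArch { Pascal, Turing, Ampere, RDNA2, Unknown };

// One dispatch point per "optimized" feature. Each case is a separate
// maintenance liability; the fallback is what shipping games actually use.
std::string pick_render_path(GpuArch arch) {
    switch (arch) {
        case GpuArch::Turing:
        case GpuArch::Ampere: return "path_tuned_for_turing_ampere";
        case GpuArch::RDNA2:  return "path_tuned_for_rdna2";
        default:              return "generic_path";
    }
}
```

Multiply this by every hot spot in a renderer and it becomes clear why no studio maintains such matrices of paths, even before asking whether a low-level API to exploit them exists at all.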

Secondly, optimizing for future or recently released GPU architectures would be virtually impossible. Game engines are often written/rewritten 2-3 years ahead of a game release date, and even top game studios rarely have access to new engineering samples more than ~6 months ahead of a GPU release. And we all know how badly game studios screw up when they try to patch in some new big feature at the end of the development cycle.

Thirdly, most games use third-party game engines, which means the game studio doesn't even write any low-level rendering code. The big popular game engines might have many advanced features, but their rendering code is generic and not hand-tailored to the specific needs of the objects in a specific game. So any optimized game would have to use a custom game engine without bloat and abstractions.

As for proof: 1 is provable to the extent that these mystical GPU-specific APIs are not present on Nvidia's and AMD's developer websites; 2 is a logical deduction; 3 is provable in a broad sense, as few games use custom game engines. The rest would require disassembly to prove 100%, but that is pointless unless you disprove 1 first.
 
Where have you been? Can AMD cards run RTX code? No.

No rant here, though I disagree, and your opinion isn't enough to change that opinion.
 

Yes, they can. It's only that Nvidia's RT hardware is stronger at the moment.

The only vendor-specific code I can think of is GameWorks. If you enable Advanced Physics in a Metro game, an Nvidia card will be OK, but AMD just dies.

Other than that, why and how would games be optimized for a vendor (and not an architecture)?
 
Am I the only one who expected this kind of result from a first-gen Intel GPU?
It will take years for them to make something competitive.
As for their drivers, it will take them forever.
I said it before and I'll say it again: as a gamer, I will never buy their GPUs.
But they will probably come in handy for office computers without integrated graphics :D
 
While you are entitled to your own opinion, this subject is a matter of facts, not opinions, so both of our opinions are irrelevant. And I mean no disrespect, but this is a deflection from your end, instead of facing the facts that prove you wrong.

"RTX" is a marketing term for their hardware, which you can clearly see uses DXR or Vulkan as the API front-end.
Direct3D 12 ray-tracing details: https://docs.microsoft.com/en-us/windows/win32/direct3d12/direct3d-12-raytracing
The Vulkan ray tracing spec: VK_KHR_ray_tracing_pipeline is not Nvidia specific, and includes contributions from AMD, Intel, ARM and others.

And as you can see from Nvidia's DirectX 12 tutorials and Vulkan tutorial, this is vendor-neutral high-level API code. And as their web page clearly states:
RTX Ray-Tracing APIs
NVIDIA RTX brings real time, cinematic-quality rendering to content creators and game developers. RTX is built to be codified by the newest generation of cross platform standards: Microsoft DirectX Ray Tracing (DXR) and Vulkan from Khronos Group.
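For what it's worth, the vendor-neutral discovery the Khronos spec implies can be sketched like this. The extension list is mocked here so the snippet runs anywhere; a real application would obtain it from vkEnumerateDeviceExtensionProperties. The point is that the check is by standardized extension name, with nothing vendor-specific in it:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Extension name fixed by the Khronos VK_KHR_ray_tracing_pipeline spec.
const std::string kRayTracingExt = "VK_KHR_ray_tracing_pipeline";

// `device_extensions` stands in for the list a Vulkan driver reports for a
// physical device (mocked in this sketch). Any vendor's driver that
// implements the KHR extension passes the same check.
bool supports_ray_tracing(const std::vector<std::string>& device_extensions) {
    return std::find(device_extensions.begin(), device_extensions.end(),
                     kRayTracingExt) != device_extensions.end();
}
```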

I haven't looked into how Intel's Arc series compares in ray-tracing support level vs. Nvidia and AMD.

So in conclusion again: modern PC games are not optimized for specific hardware. Some games may feature optional special effects which are vendor-specific, but these are not low-level hardware-specific optimizations, and they are not the basis for comparing performance between products. If an Nvidia card performs better in a game than AMD or Intel, it's not because the game is optimized for that Nvidia card. Claiming it's an optimization would be utter nonsense.

GameWorks is the large suite of developer tools, samples, etc. that Nvidia provides for game developers. It has some special effects that may only work on Nvidia hardware, but the vast majority is plain DirectX/OpenGL/Vulkan.
AMD have their own developer tool suite, which is pretty much the same deal, complete with some unique AMD features.
 
See, the timeline was: proprietary RTX came out, DX12 was announced, games came out with no fallback support for non-RTX hardware, then DX12 Ultimate was released with the DX12 ray-tracing API, and eventually Nvidia moved towards DX12's implementation.

But you're pulling out Vulkan and the like.

Parity may have been achieved now, but Microsoft worked with Nvidia first on DXR, so gains were made, and used.

So believe what you want.
 
You got it all mixed up.
MS announced their DXR at GDC in March 2018 as the front-end to Nvidia's RTX, which was announced at the same conference. So it was DXR long before Turing launched later the same year. The initial API draft may have been a little different from the final version, but that's irrelevant for the games which shipped with DXR support much later. Drafts and revisions are how the graphics APIs are developed.

The games which use ray tracing today use DXR (or Vulkan, if there are any). So this bogus claim that these games are optimized for Nvidia hardware should be defeated once and for all. Please stop spreading misinformation, as you clearly don't comprehend this subject.
 
So, September 20, 2018: RTX was released.

October 10, 2018: DXR came out with Windows update 1809.

Hmmnnn.
 
What is wrong with the 6500XT for Gaming?
Nothing, except if you want it in a PCIe 3.0 or older motherboard and you want to play a game that's sensitive to PCIe bandwidth.

I have one, and I assure you, the card is fine (and extremely quiet).
 