
Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks

Joined
Jan 14, 2019
Messages
9,872 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
They will be available in the West eventually. According to Wikipedia, these Arc cards were supposed to launch in Q2 *or* Q3, so they are doing just fine. For some reason, Intel has decided to launch in China first, and I am sure they have strategic reasons for that: perhaps they figure the Chinese market will be more receptive to a new dGPU player, or more interested in low-end cards, or that Intel has stronger brand recognition there compared to AMD and Nvidia. It does not really matter; we can hate Intel for any number of reasons, but I don't think they are strategically incompetent, despite what some (IMO ignorant) people may think. The same applies to the people saying Raja Koduri does not know what he is doing, e.g. because GCN was not good enough for gaming or something like that. Well, maybe gaming was not their main focus? Maybe they were making a ton of money selling cards for compute in datacenters? Some people struggle to look beyond their own perspective, which I frankly find hard to understand at this point. It should have dawned on people by now that enthusiast/gamer desktop users are not the most important market for these large corporations, after the mobile and server markets have been prioritized time and time again.
I agree with that. I'm still on a "wait and see" approach, and will be until the worldwide launch of the whole product line.

I must admit I am surprised by that. Note that my GTX 1050 was a non-Ti, though (people often seem to forget those even existed). Still, my GTX 1050 at least had hardware encoding, unlike the RX 6500 "XT". Mine was an EVGA low-profile, single-slot card (it lived in a used M92p ThinkCentre). I am not knocking the RX 6400, just the RX 6500 XT.
I think the biggest problem of the 6500 XT is the price. I absolutely love my Asus TUF. It can push all the frames I need at 1080p, and it barely makes a whisper over my Be Quiet case fans, even fully overclocked. I don't even care about hardware encoding; in fact, I only know one single person who does. It's a niche thing, imo. Most people just want to play games. The only reason I still wouldn't recommend anyone buy one is the price (which wasn't an issue for me, because I'm generally curious about any PC hardware).

Still, the 1050 (Ti) and the 6400 / 6500 XT duo are in entirely different leagues, and should not be compared. The 1650 and 1650 Super are their main competition.
 
Joined
Jun 2, 2022
Messages
349 (0.50/day)
System Name HP EliteBook 725 G3
Processor AMD PRO A10-8700B (1.8 GHz CMT dual module with 3.2 GHz boost)
Motherboard HP proprietary
Cooling pretty good
Memory 8 GB SK Hynix DDR3 SODIMM
Video Card(s) Radeon R6 (Carrizo/GCNv3)
Storage internal Kioxia XG6 1 TB NVMe SSD (aftermarket)
Display(s) HP P22h G4 21.5" 1080p (& 768p internal LCD)
Case HP proprietary metal case
Audio Device(s) built-in Conexant CX20724 HDA chipset -> Roland RH-200S
Power Supply HP-branded AC adapter
Mouse Steelseries Rival 310
Keyboard Cherry G84-5200
Software Alma Linux 9.1
Benchmark Scores Broadcom BCM94356 11ac M.2 WiFi card (aftermarket)
I agree with that. I'm still on a "wait and see" approach, and will be until the worldwide launch of the whole product line.


I think the biggest problem of the 6500 XT is the price. I absolutely love my Asus TUF. It can push all the frames I need at 1080p, and it barely makes a whisper over my Be Quiet case fans, even fully overclocked. I don't even care about hardware encoding; in fact, I only know one single person who does. It's a niche thing, imo. Most people just want to play games. The only reason I still wouldn't recommend anyone buy one is the price (which wasn't an issue for me, because I'm generally curious about any PC hardware).

Still, the 1050 (Ti) and the 6400 / 6500 XT duo are in entirely different leagues, and should not be compared. The 1650 and 1650 Super are their main competition.
I think hardware encoding is useful for recording gameplay, at least. Not that I do it a whole lot, but I am interested in doing it every now and then, and with my Polaris card I know that I can. I am not exactly running a Threadripper, so depending on the game, it may be nice to be able to avoid straining my CPU. I don't know if the RXs have decoding or not, but hardware AV1 decoding certainly interests me. Actually, hardware H.264 and H.265 decoding interests me as well, since technically I do not even have patent licenses for the FFmpeg software decoders (used by applications such as mplayer and VLC media player), which means it is technically illegal in the US for me to use those software decoders on my desktop PC (if I had a Windows license for it, I could at least claim that I had already paid for them that way).
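
As a rough illustration (untested sketch; the /dev/dri/renderD128 render node and the file names are assumptions on my part), this is how I would offload an H.264 encode to the GPU with ffmpeg's VAAPI path, driven from Python:

[CODE]
# Untested sketch: offload an H.264 encode to the GPU via VAAPI so the CPU
# isn't strained while recording. Assumes ffmpeg was built with VAAPI
# support and that /dev/dri/renderD128 is the right render node.
import subprocess

def vaapi_encode(src: str, dst: str, device: str = "/dev/dri/renderD128") -> None:
    subprocess.run([
        "ffmpeg",
        "-vaapi_device", device,        # open the VAAPI render node
        "-i", src,                      # input file (decoded in software here)
        "-vf", "format=nv12,hwupload",  # convert frames and upload them to the GPU
        "-c:v", "h264_vaapi",           # VAAPI hardware H.264 encoder
        dst,
    ], check=True)

vaapi_encode("gameplay_raw.mkv", "gameplay.mp4")  # hypothetical file names
[/CODE]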

Anyway, I think people should at least be happy that the A380 will, one way or another, probably push the RX 6400's price down. Personally, I do not have the money to buy a new GPU right now, but it will be nice to know that there are at least some good (low-end, efficient & PCIe-power-only) options available to purchase if I need/want to.
 
Joined
Jan 14, 2019
Messages
9,872 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
I think hardware encoding is useful for recording gameplay, at least. Not that I do it a whole lot, but I am interested in doing it every now and then, and with my Polaris card I know that I can. I am not exactly running a Threadripper, so depending on the game, it may be nice to be able to avoid straining my CPU.
Fair enough. Like I said, I don't record anything, and I know only one person who does, so for me, it's not an issue. :)

I don't know if the RXs have decoding or not, but hardware AV1 decoding certainly interests me. Actually, hardware H.264 and H.265 decoding interests me as well, since technically I do not even have patent licenses for the FFmpeg software decoders (used by applications such as mplayer and VLC media player), which means it is technically illegal in the US for me to use those software decoders on my desktop PC (if I had a Windows license for it, I could at least claim that I had already paid for them that way).
The RX 6000 series has full decode support. Navi 24 is only missing AV1 (it has H.264 and H.265).
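
If you're on Linux, a quick and dirty way to check what your card advertises for hardware decode is parsing vainfo. A sketch, assuming the vainfo tool from libva-utils is installed:

[CODE]
# Sketch: list the codecs the GPU advertises hardware *decode* for, by
# parsing vainfo output (libva-utils). Lines pairing a VAProfile with
# VAEntrypointVLD are decode profiles; VAEntrypointEncSlice would be encode.
import subprocess

out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout
decode = sorted({line.split(":")[0].strip()
                 for line in out.splitlines()
                 if "VAEntrypointVLD" in line})
print("\n".join(decode))  # e.g. VAProfileH264Main, VAProfileHEVCMain, ...
[/CODE]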

Anyway, I think people should at least be happy that the A380 will, one way or another, probably push the RX 6400's price down. Personally, I do not have the money to buy a new GPU right now, but it will be nice to know that there are at least some good (low-end, efficient & PCIe-power-only) options available to purchase if I need/want to.
Agreed. The lower end shouldn't be neglected, especially now that Nvidia and AMD (especially Nvidia) are doing everything they can to close in on the magical 1.21 Jiggawatt boundary.
 
Joined
Jun 12, 2022
Messages
64 (0.09/day)
Who said they don't allow us? Just buy a used 2060 and call it a day. The 6400 / 6500 XT pair are in a different league. Even though they technically support RT, they're clearly not meant for it.
I had in mind a brand-new card from the RX 6000/RTX 3000 series. There isn't any rule which says a GPU can't be low-budget and still great at ray tracing for its frame time; any such rule is man-made. That is my opinion.
 
Joined
Jan 14, 2019
Messages
9,872 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
I had in mind a brand-new card from the RX 6000/RTX 3000 series. There isn't any rule which says a GPU can't be low-budget and still great at ray tracing for its frame time; any such rule is man-made. That is my opinion.
Ray tracing hardware isn't advanced enough to give us decent performance at the budget level. This is more or less a fact.
 

Keats

New Member
Joined
Jun 26, 2022
Messages
3 (0.00/day)
The fact that a piece of software scales better on one piece of hardware is not evidence of the software being optimized for that particular hardware.
There are basically two ways to optimize for specific hardware (these principles hold true for CPUs as well):
1) Using hardware-specific low-level API calls or instructions. (The few examples of this you will find in games are there to give extra eye candy, not better performance.)
2) Writing code that is carefully crafted to give an edge to a specific class of hardware. You will struggle to find examples of this being done intentionally. Even attempting to write code this way would be stupid, as the resource advantages of current-gen GPUs are likely to change a lot 1-2 generations down the road, and the competition is likely to respond to any such advantage. So writing code that would give e.g. Nvidia an advantage years from now would be very hard, and could just as easily backfire and do the opposite. For these reasons this is never done, and in the few examples where you see a clear advantage, it's probably the result of the opposite effect: un-optimized code running into a hardware bottleneck. And as mentioned, most games today use generic or abstracted game engines, have very little if any low-level code, and are generally not optimized at all.

As a good example: a while ago I got to run, on a Zen 3, some code that I had spent years optimizing on Sandy Bridge/Haswell/Skylake hardware, and to my delight the optimizations showed even greater gains on AMD hardware, with the best case showing roughly double performance on Zen 3 vs. 5-10% on Intel hardware.
So this would mean that I either have supernatural powers to optimize for hardware that I didn't yet have my hands on, or you just don't understand how software optimizations work at all! ;)

In reality, games "optimized" for Nvidia or AMD is a myth.
You forgot 3) Use tools provided by a hardware vendor which do all of that for you automatically.

Admittedly, 3) sometimes works by crippling the competition instead, as famously demonstrated by ICC...
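
Roughly this pattern, sketched in hypothetical Python (the kernels are placeholders; ICC did this in compiled dispatch code, not like this):

[CODE]
# Hypothetical Python rendition of the pattern ICC was criticized for:
# dispatch on the CPU *vendor string* instead of on the features the CPU
# actually supports, so non-Intel parts get the slow path regardless.
def fast_kernel(data):     # stand-in for a hand-vectorized implementation
    return sum(data)

def generic_kernel(data):  # stand-in for the unoptimized fallback
    return sum(data)

def cpu_vendor() -> str:
    # On Linux, /proc/cpuinfo exposes the CPUID vendor string.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("vendor_id"):
                return line.split(":")[1].strip()
    return "unknown"

def pick_kernel():
    # "GenuineIntel" gets the fast path; "AuthenticAMD" et al. do not,
    # even when they support the exact same instruction set extensions.
    return fast_kernel if cpu_vendor() == "GenuineIntel" else generic_kernel
[/CODE]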
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
You forgot 3) Use tools provided by a hardware vendor which do all of that for you automatically.

Admittedly, 3) sometimes works by crippling the competition instead, as famously demonstrated by ICC...
Which "tools" are there which supposedly optimizes game engine code for specific hardware?
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Any of the ones provided under the Nvidia GameWorks umbrella, for starters.
Aaah, the old "GameWorks makes games optimized for Nvidia" nonsense again; old BS never dies…
Anyone with a rudimentary understanding of what debugging and profiling tools do for development will see through this. And no, these tools do not optimize the engine code; the developer does that.
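
For the record, this is all a profiler gives you: a report of where the time went, not rewritten code. A trivial example with Python's standard cProfile module:

[CODE]
# A profiler *reports* where the time went; it does not rewrite anything.
# Acting on the report is the developer's job.
import cProfile

def slow_path(n):
    return sum(i * i for i in range(n))

def frame():
    slow_path(1_000_000)  # the hotspot the report will point at

cProfile.run("frame()")   # prints per-function call counts and cumulative time
[/CODE]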
 
Joined
Dec 25, 2020
Messages
4,625 (3.80/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Logitech G305 Lightspeed K/DA
Keyboard Logitech K400 Plus
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
Aaah, the old Gameworks makes games optimized for Nvidia nonsense again, old BS never dies…
Anyone with a rudimentary understanding of what debugging and profiling tools do for development will see through this. And no, these tools do not optimize the engine code, the developer does that.

I suppose it can't be helped; it's a marketing trick. Nvidia is very keen on its closed-source ecosystem, to the point that they keep GeForce as much of a black box as they can (somewhat like Apple and iOS), while AMD's technologies may be subpar at times, but they've got a decent track record of open-sourcing large parts of their software (somewhat like Android).

That latter point alone is enough to gather a lot of goodwill; now add people's tendency to defend their choices and purchases no matter the cost, and the inherent need to feel accepted among their peers, and you'll find that the AMD vs. NVIDIA war is no different from iOS vs. Android or Pepsi vs. Coke: it's just people perpetuating lies, hearsay and FUD about it :oops:
 

Keats

New Member
Joined
Jun 26, 2022
Messages
3 (0.00/day)
Aaah, the old "GameWorks makes games optimized for Nvidia" nonsense again; old BS never dies…
Anyone with a rudimentary understanding of what debugging and profiling tools do for development will see through this. And no, these tools do not optimize the engine code; the developer does that.
Well, I suppose it would be more accurate to say that it cripples performance on non-Nvidia platforms, but the end result is the same.
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
40,435 (6.58/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
This is called the Raja Koduri effect, because everything he touches turns into chitlins.
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Well, I suppose it would be more accurate to say that it cripples performance on non-Nvidia platforms, but the end result is the same.
What specifically cripples non-Nvidia products?
If you knew how debugging and profiling tools worked, you wouldn't come up with something like that. These tools will not optimize (or sabotage) the code. The code is still written by the programmer.
And BTW, AMD offer comparable tools too.

Performance optimizations for specific hardware in modern PC games are a myth.
 
Joined
Mar 10, 2010
Messages
11,878 (2.30/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
What specifically cripples non-Nvidia products?
If you knew how debugging and profiling tools worked, you wouldn't come up with something like that. These tools will not optimize (or sabotage) the code. The code is still written by the programmer.
And BTW, AMD offer comparable tools too.

Performance optimizations for specific hardware in modern PC games are a myth.
Not sure you are right. Do you have proof? Because looking at one example, ray-tracing performance, it's quite clear that games made for Nvidia work best only on Nvidia, while AMD-sponsored titles perform well on both but don't allow the Nvidia hardware to quite stretch its lead.
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Not sure you are right. Do you have proof? Because looking at one example, ray-tracing performance, it's quite clear that games made for Nvidia work best only on Nvidia, while AMD-sponsored titles perform well on both but don't allow the Nvidia hardware to quite stretch its lead.
That's a nonsensical anecdote. And how can you even measure whether a game is "made for Nvidia"?

As of now, Nvidia have stronger RT capabilities, so games which utilize RT more heavily will scale better on Nvidia hardware. Once AMD releases a generation with similar capabilities, they will perform just as well, perhaps even better.

Firstly, as mentioned earlier, in order to optimize for e.g. Nvidia, we would have to write code targeting specific generations (e.g. Pascal, Turing, Ampere…), as the generations change a lot internally, and there could not be a universal Nvidia optimization vs. AMD optimization, as newer GPUs from competitors might be more similar to each other than to their own GPUs two or three generations back. This means the game developer would need to maintain multiple code paths to ensure their Nvidia chips outperform their AMD counterparts. But this all hinges on the existence of a GPU-specific low-level API to use. Does any such API exist publicly? Because if not, the whole idea of specific optimizations is dead. (The closest you will find is experimental features (extensions to OpenGL and Vulkan), but these are high-level API functions and are usually new features, and I've never seen them used in games. They are not exclusive either, as anyone can implement them if needed.)

Secondly, optimizing for future or recently released GPU architectures would be virtually impossible. Game engines are often written/rewritten 2-3 years ahead of a game's release date, and even top game studios rarely have access to new engineering samples more than ~6 months ahead of a GPU release. And we all know how badly game studios screw up when they try to patch in some big new feature at the end of the development cycle.

Thirdly, most games use third-party game engines, which means the game studio doesn't even write any low-level rendering code. The big popular game engines might have many advanced features, but their rendering code is generic and not hand-tailored to the specific needs of the objects in a specific game. So any optimized game would have to use a custom game engine without bloat and abstractions.

As for proof: (1) is provable to the extent that these mystical GPU-specific APIs are not present on Nvidia's and AMD's developer websites; (2) is a logical deduction; (3) is provable in a broad sense, as few games use custom game engines. The rest would require disassembly to prove 100%, but that is pointless unless you disprove (1) first.
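
To make the first point concrete, here is a deliberately toy sketch (hypothetical Python, invented names throughout) of what per-generation optimization would force a developer to carry:

[CODE]
# Purely hypothetical sketch (invented names) of what per-generation
# "optimization" would force on a developer: one tuned code path per GPU
# generation, written, tested and maintained forever -- and there is no
# public low-level API to hang any of these branches on in the first place.
def render_pascal(scene):  return f"{scene}: path tuned for Pascal"
def render_turing(scene):  return f"{scene}: path tuned for Turing"
def render_ampere(scene):  return f"{scene}: path tuned for Ampere"
def render_rdna2(scene):   return f"{scene}: path tuned for RDNA2"
def render_generic(scene): return f"{scene}: generic DX12/Vulkan path"

RENDERERS = {
    "pascal": render_pascal,
    "turing": render_turing,
    "ampere": render_ampere,
    "rdna2":  render_rdna2,
}

def pick_renderer(generation: str):
    # Every new generation, from either vendor, adds another branch --
    # and yesterday's "optimization" may pessimize tomorrow's chip.
    return RENDERERS.get(generation, render_generic)

print(pick_renderer("ampere")("frame"))  # tuned path
print(pick_renderer("ada")("frame"))     # unknown generation -> generic path
[/CODE]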
 
Joined
Mar 10, 2010
Messages
11,878 (2.30/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
That's a nonsensical anecdote. And how can you even measure whether a game is "made for Nvidia"?

As of now, Nvidia have stronger RT capabilities, so games which utilize RT more heavily will scale better on Nvidia hardware. Once AMD releases a generation with similar capabilities, they will perform just as well, perhaps even better.

Firstly, as mentioned earlier, in order to optimize for e.g. Nvidia, we would have to write code targeting specific generations (e.g. Pascal, Turing, Ampere…), as the generations change a lot internally, and there could not be a universal Nvidia optimization vs. AMD optimization, as newer GPUs from competitors might be more similar to each other than to their own GPUs two or three generations back. This means the game developer would need to maintain multiple code paths to ensure their Nvidia chips outperform their AMD counterparts. But this all hinges on the existence of a GPU-specific low-level API to use. Does any such API exist publicly? Because if not, the whole idea of specific optimizations is dead. (The closest you will find is experimental features (extensions to OpenGL and Vulkan), but these are high-level API functions and are usually new features, and I've never seen them used in games. They are not exclusive either, as anyone can implement them if needed.)

Secondly, optimizing for future or recently released GPU architectures would be virtually impossible. Game engines are often written/rewritten 2-3 years ahead of a game's release date, and even top game studios rarely have access to new engineering samples more than ~6 months ahead of a GPU release. And we all know how badly game studios screw up when they try to patch in some big new feature at the end of the development cycle.

Thirdly, most games use third-party game engines, which means the game studio doesn't even write any low-level rendering code. The big popular game engines might have many advanced features, but their rendering code is generic and not hand-tailored to the specific needs of the objects in a specific game. So any optimized game would have to use a custom game engine without bloat and abstractions.

As for proof: (1) is provable to the extent that these mystical GPU-specific APIs are not present on Nvidia's and AMD's developer websites; (2) is a logical deduction; (3) is provable in a broad sense, as few games use custom game engines. The rest would require disassembly to prove 100%, but that is pointless unless you disprove (1) first.

Where have you been? Can AMD cards run RTX code? No.

No rant here, though. I disagree, and your opinion isn't enough to change that opinion.
 
Joined
Jan 14, 2019
Messages
9,872 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.

Where have you been? Can AMD cards run RTX code? No.

No rant here, though. I disagree, and your opinion isn't enough to change that opinion.
Yes, they can. It's only that Nvidia's RT hardware is stronger at the moment.

The only vendor-specific code I can think of is GameWorks. If you enable Advanced Physics in a Metro game, an Nvidia card will be OK, but AMD just dies.

Other than that, why and how would games be optimised for a vendor (and not an architecture)?
 
D

Deleted member 211755

Guest
Am I the only one who expected this kind of result from a 1st-gen Intel GPU?
It will take years for them to make something competitive.
As for their drivers, it will take them forever.
I've said it before and I'll say it again: as a gamer, I will never buy their GPUs.
But they will probably come in handy for office computers without integrated graphics :D
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Where have you been? Can AMD cards run RTX code? No.
No rant here, though. I disagree, and your opinion isn't enough to change that opinion.
While you are entitled to your own opinion, this subject is a matter of facts, not opinions, so both of our opinions are irrelevant. And I mean no disrespect, but this is a deflection from your end, instead of facing the facts that prove you wrong.

"RTX" is a marketing term for their hardware, which you can clearly see uses DXR or Vulkan as the API front-end.
Direct3D 12 ray-tracing details: https://docs.microsoft.com/en-us/windows/win32/direct3d12/direct3d-12-raytracing
The Vulkan ray tracing spec, VK_KHR_ray_tracing_pipeline, is not Nvidia-specific, and includes contributions from AMD, Intel, ARM and others.

And as you can see from Nvidia's DirectX 12 tutorials and Vulkan tutorial, this is vendor-neutral, high-level API code. As their web page clearly states:
RTX Ray-Tracing APIs
NVIDIA RTX brings real time, cinematic-quality rendering to content creators and game developers. RTX is built to be codified by the newest generation of cross platform standards: Microsoft DirectX Ray Tracing (DXR) and Vulkan from Khronos Group.

I haven't looked into how Intel's Arc series compares in ray tracing support level vs. Nvidia and AMD.

So, in conclusion, again: modern PC games are not optimized for specific hardware. Some games may feature optional special effects which are vendor-specific, but these are not low-level hardware-specific optimizations, and they are not the basis for comparing performance between products. If an Nvidia card performs better in a game than AMD or Intel, it's not because the game is optimized for that Nvidia card. Claiming it's an optimization would be utter nonsense.
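
You can verify the vendor-neutral API surface yourself. A naive sketch (assuming the vulkaninfo tool from vulkan-tools is installed) that checks whether the installed driver exposes the Khronos ray-tracing extensions:

[CODE]
# Naive check: does the installed driver expose the Khronos (vendor-neutral)
# ray-tracing extensions? Assumes the vulkaninfo tool from vulkan-tools is
# installed; matching on the raw text dump is deliberately crude.
import subprocess

out = subprocess.run(["vulkaninfo"], capture_output=True, text=True).stdout
for ext in ("VK_KHR_ray_tracing_pipeline", "VK_KHR_acceleration_structure"):
    print(f"{ext}: {'exposed' if ext in out else 'not found'}")
[/CODE]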

The only vendor-specific code I can think of is GameWorks. If you enable Advanced Physics in a Metro game, an Nvidia card will be OK, but AMD just dies.
GameWorks is the large suite of developer tools, samples, etc. that Nvidia provides to game developers. They have some special effects in there that may only work on Nvidia hardware, but the vast majority is plain DirectX/OpenGL/Vulkan.
AMD have their own developer tool suite, which is pretty much the same deal, complete with some unique AMD features.
 
Joined
Mar 10, 2010
Messages
11,878 (2.30/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
While you are entitled to your own opinion, this subject is a matter of facts, not opinions, so both of our opinions are irrelevant. And I mean no disrespect, but this is a deflection from your end, instead of facing the facts that prove you wrong.

"RTX" is a marketing term for their hardware, which you can clearly see uses DXR or Vulkan as the API front-end.
Direct3D 12 ray-tracing details: https://docs.microsoft.com/en-us/windows/win32/direct3d12/direct3d-12-raytracing
The Vulkan ray tracing spec, VK_KHR_ray_tracing_pipeline, is not Nvidia-specific, and includes contributions from AMD, Intel, ARM and others.

And as you can see from Nvidia's DirectX 12 tutorials and Vulkan tutorial, this is vendor-neutral, high-level API code. As their web page clearly states:


I haven't looked into how Intel's Arc series compares in ray tracing support level vs. Nvidia and AMD.

So, in conclusion, again: modern PC games are not optimized for specific hardware. Some games may feature optional special effects which are vendor-specific, but these are not low-level hardware-specific optimizations, and they are not the basis for comparing performance between products. If an Nvidia card performs better in a game than AMD or Intel, it's not because the game is optimized for that Nvidia card. Claiming it's an optimization would be utter nonsense.


GameWorks is the large suite of developer tools, samples, etc. that Nvidia provides to game developers. They have some special effects in there that may only work on Nvidia hardware, but the vast majority is plain DirectX/OpenGL/Vulkan.
AMD have their own developer tool suite, which is pretty much the same deal, complete with some unique AMD features.
See, the timeline was: proprietary RTX came out, DX12 was announced, games came out with no fallback support for non-RTX hardware, then DX12 Ultimate was released with the DX12 ray-tracing API, and eventually Nvidia moved towards DX12's implementation.

But you're pulling Vulkan and similar out.

Parity may have been achieved now, but Microsoft worked with Nvidia first on DXR, so gains were made, and used.

So believe what you want.
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
See, the timeline was: proprietary RTX came out, DX12 was announced, games came out with no fallback support for non-RTX hardware, then DX12 Ultimate was released with the DX12 ray-tracing API, and eventually Nvidia moved towards DX12's implementation.

But you're pulling Vulkan and similar out.

Parity may have been achieved now, but Microsoft worked with Nvidia first on DXR, so gains were made, and used.

So believe what you want.
You got it all mixed up.
MS announced their DXR at GDC in March 2018, as the front-end to Nvidia's RTX, which was announced at the same conference. So DXR existed long before Turing launched later the same year. The initial API draft may have been a little different from the final version, but that's irrelevant for the games which shipped with DXR support much later. Drafts and revisions are how the graphics APIs are developed.

The games which use ray tracing today use DXR (or Vulkan, if there are any). So this bogus claim that these games are optimized for Nvidia hardware should be put to rest once and for all. Please stop spreading misinformation, as you clearly don't comprehend this subject.
 
Joined
Mar 10, 2010
Messages
11,878 (2.30/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
You got it all mixed up.
MS announced their DXR at GDC in March 2018, as the front-end to Nvidia's RTX, which was announced at the same conference. So DXR existed long before Turing launched later the same year. The initial API draft may have been a little different from the final version, but that's irrelevant for the games which shipped with DXR support much later. Drafts and revisions are how the graphics APIs are developed.

The games which use ray tracing today use DXR (or Vulkan, if there are any). So this bogus claim that these games are optimized for Nvidia hardware should be put to rest once and for all. Please stop spreading misinformation, as you clearly don't comprehend this subject.
So, September 20, 2018 is when RTX was released.

October 10, 2018 is when the DXR Windows update (1809) came out.

Hmmnnn.
 
Joined
Jun 2, 2017
Messages
7,935 (3.15/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Joined
Jan 14, 2019
Messages
9,872 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
What is wrong with the 6500XT for Gaming?
Nothing, except if you want to put it in a PCI-e 3.0 or older motherboard and play a game that's sensitive to PCI-e bandwidth.

I have one, and I assure you, the card is fine (and extremely silent).
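
For context, Navi 24 only has a x4 link, so the slot's PCI-e generation directly halves or doubles the card's bandwidth. Rough numbers:

[CODE]
# PCIe 3.0 runs 8 GT/s per lane and PCIe 4.0 runs 16 GT/s, both with
# 128b/130b encoding, and Navi 24 only has 4 lanes -- so a 3.0 slot
# simply halves the card's link bandwidth.
def pcie_gb_per_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * (128 / 130) * lanes / 8  # GT/s -> GB/s after encoding

print(f"PCIe 4.0 x4: {pcie_gb_per_s(16, 4):.2f} GB/s")  # ~7.88 GB/s
print(f"PCIe 3.0 x4: {pcie_gb_per_s(8, 4):.2f} GB/s")   # ~3.94 GB/s
[/CODE]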
 