
AMD's Radeon RX 9070 XT Shatters Sales Records, Outperforming Previous Generations by 10X

CUDA is great and all, but virtually all video game code is DirectX, DirectX Raytracing or DirectCompute.

The exceptions are Vulkan, Vulkan Raytracing and Vulkan compute.

CUDA (and HIP) is for AI and Prosumer (ex: DaVinci Resolve, Blender Cycles) applications. Not video games.

Games have a ton of NVAPI calls in them, which is essentially CUDA.

 
If they had released a GPU with 50% more performance for the same price ratio, it wouldn't have sold as well, proving their point and hitting their stated goal.
> "And frankly, 10x the first week sales of the 7900 XT and XTX is like taking candy from a baby, nobody wanted those things compared to the 4080 and 4090.

What’s the 4080 better at than the 7900XTX for 90% of gamers out there? I play the games I want at 4K 60+ FPS with all high settings. I have played with RT/PT but it doesn’t move the scale for me any, it’s still a mediocre implementation of a new technology. Higher resolution textures in games make a far larger difference in visual fidelity.
The 7900XTX vs 4080 when I bought mine was closer to 30% more performance per dollar.

 
When we look at DirectX 12, all DirectX 12 technologies are automatically compatible with DirectX 12 GPUs; the latest cards support them as standard, no matter whether they are AMD, Intel, or Nvidia.
If I owned an AMD 7000 or 6000 series card right now, I would use OptiScaler (there are numerous tutorials on YouTube). It lets you enable DLSS/XeSS/FSR 4 in both older and newer games, even though it is beta software.

While the 7900 XTX is a very powerful GPU with much more VRAM, its AI capabilities only go up to FP16. Yes, it is more powerful than the 9070 XT at FP16, but the 9000 series also supports FP8, and there it is more powerful than any previous generation, because the 7000/6000 series cannot do FP8 at all.

The RX 9000 series uses no chiplets; they are monolithic GPUs like Nvidia's.
The 9070 XT that I own holds up well at 4K depending on the game you want to play. There are games where not even a 5080 can exceed 50 fps at 4K; in those cases the GPU upscales from 1440p or 1080p, and whether it is a 5080, a 5070 Ti, or a 9070 XT, the performance will be higher.

Frame multipliers (frame generation) on both AMD and Nvidia really only work well if you average at least 30 fps, and even then I would not use them: first, they put extra load on the GPU; second, they increase latency; third, they drastically increase the GPU's power consumption.


If you already have 30 fps or 60 fps natively, you don't need more; if you can get 80 fps, that would be awesome. If you have an 80 fps or higher display, 60 fps is actually excellent; any additional fps increases latency.

What you need to worry about is whether your GPU can run textures not only in ultra mode but also in cinematic mode, which is a higher standard than ultra, at the resolution you're playing at, and whether your GPU is designed for that.

People think ray tracing or path tracing makes their game look better.
To explain: cinematic-quality textures have nothing to do with ray tracing or path tracing. But let's clarify something here: there are games created and optimized for NVIDIA where cinematic-quality textures were removed as an option and only the ray tracing option was included, to justify the technology by implying that ray tracing improves the game's textures, which is not the case. In games not sponsored by NVIDIA, cinematic-quality textures are an additional option, and ray tracing does nothing more than illuminate with a specific light beam and create shadow paths.

Ray tracing and path tracing don't improve textures; they simply add extra lighting to the game. But that won't make a square Minecraft texture into a work of art like the Mona Lisa.

Games sponsored by Nvidia will always be optimized for higher performance on Nvidia, and games sponsored by AMD will be optimized for AMD: for example, Cyberpunk 2077 for Nvidia, or Black Ops 6 for AMD, where the 9070 XT gets +30 fps over the 5070 Ti and matches a 5080.
 
Poor old MSI. I bet they regret not releasing an AMD GPU, particularly in a generation like this one for Nvidia and AMD.
 
> "The 9070 XT has been a fantastic success—it's the No. 1 selling AMD Radeon GPU in its first week, with sales 10 times higher than past generations."
Maybe AMD cards previously just sold poorly at launch. Maybe this time, after the launch was postponed, many people waited... and it also helped that this was the moment when RTX 50xx cards disappeared from stores, and the ones still available were priced several dozen to several hundred percent higher.
 
There are no GPUs in the 9070's performance class in consistent stock in stores, so all 9070s will sell, and it was delayed by about 10 weeks. So 10x sales in the first week isn't news; it's just what we all assumed.
 
> "It's the No. 1 selling AMD Radeon GPU in its first week, with sales 10 times higher than past generations."

This is just the exact same thing as Nvidia's "double the amount" claim, comparing your mid-tier launches to your previous flagship launches. And frankly, 10x the first week sales of the 7900 XT and XTX is like taking candy from a baby; nobody wanted those things compared to the 4080 and 4090.
It does say "past generations". Not previous generation. So, if this statement is accurate, we would want to at least include RDNA series. RDNA2 launched the 6800/6800XT first. RDNA the 5700/5700XT launched first.

So there could be some truth. If it did 10x better than the 6800/6800XT, that would be pretty nice.
 
It does say "past generations". Not previous generation. So, if this statement is accurate, we would want to at least include RDNA series. RDNA2 launched the 6800/6800XT first. RDNA the 5700/5700XT launched first.

So there could be some truth. If it did 10x better than the 6800/6800XT, that would be pretty nice.
I assume they're probably throwing the sales of RDNA 3 in there to make it seem better than it realistically is, if we were to compare to RDNA 2, Polaris, etc. But probably not as bad as NVIDIA's claims (nor as egregious, I'd assume), since AMD seems to like to intentionally be more transparent and honest than NVIDIA to seemingly one-up them.

I still think any improvement in sales (like 2x or 4x over RDNA 3) is a huge success though, considering RDNA 3 didn't sell super well compared to previous AMD generations, IIRC. At least not at launch.
 
What’s the 4080 better at than the 7900XTX for 90% of gamers out there? I play the games I want at 4K 60+ FPS with all high settings. I have played with RT/PT but it doesn’t move the scale for me any, it’s still a mediocre implementation of a new technology. Higher resolution textures in games make a far larger difference in visual fidelity.
The 7900XTX vs 4080 when I bought mine was closer to 30% more performance per dollar.
I didn't say anything about performance. I'm talking about sales. And the 7900 XTX and 7900 XT sold HORRIBLY against the 4090 and the 4080 and even the underwhelming 4070-Ti. Even 10x the sales wouldn't have come close to matching the 40 series first week. Same thing with the RX 6000 series. Nobody wanted the RX 6800 and 6800 XT compared to the 3080.

This launch *SHOULD* be different, since they're supposed to be competing in a different price bracket (cheaper cards should mean more sales), AND they actually have a competitive product this time that can match Nvidia on most features.

So when AMD says "10x the first week sales," that's a very, VERY low bar to clear given their past launches. Frankly, only 10x the sales of the 7900 XTX / XT or the 6800 / 6800 XT is worryingly low.

It does say "past generations". Not previous generation. So, if this statement is accurate, we would want to at least include RDNA series. RDNA2 launched the 6800/6800XT first. RDNA the 5700/5700XT launched first.

So there could be some truth. If it did 10x better than the 6800/6800XT, that would be pretty nice.
A comparison to the RDNA 2 launch is even less encouraging! The 6800 / 6800 XT sold worse! The RX 6800 / 6800 XT didn't even show up on the Steam Hardware Survey until 2022, 2 years after they were launched! And that was only because they were literally dirt-cheap and stock was clearing out for the next-gen launch. The RX 6000 flagship launch card literally didn't sell until it was put on fire sale.
 
It does say "past generations". Not previous generation. So, if this statement is accurate, we would want to at least include RDNA series. RDNA2 launched the 6800/6800XT first. RDNA the 5700/5700XT launched first.

So there could be some truth. If it did 10x better than the 6800/6800XT, that would be pretty nice.

It's likely accurate. Outside of the Linux community and perhaps tech forums such as TPU or Guru3D, you would find little love for Radeon - and the only geographical region where AMD held a consistent level of performance is Europe. Although definitely not a universal truth, European customers tend to have lighter pockets and are willing to concede on having the latest and greatest if it means they get to save a few euros; it will not have escaped those with an observant eye that the most vocal people complaining tend to give themselves away by using the € and £ signs.

The timing is right, the product is decent, the pricing is (at least on paper) fair, and perhaps even more importantly, gamers are fed up with Nvidia pushing the boundary of giving customers the least it can for the most money possible. Exceptionally few people meet both conditions of being passionate enough about games to drop 2k+ (realistically 3k+) on an RTX 5090 and having the funds to do so, especially given that the AAA market no longer takes any risks. Production costs are sky-high, publishers are risk-averse, and as such pretty much all new releases have become "safe" and, in trying to offend no one, manage to please nobody either.

I wish AMD a well-earned and well-deserved success. We need their horse in this race.

I still think any improvement in sales (like 2x or 4x over RDNA 3) is a huge success though, considering RDNA 3 didn't sell super well compared to previous AMD generations, IIRC. At least not at launch.

RDNA 3 sales picked up after DeepSeek AI launched; it can utilize the Navi 31/32 hardware very well, and you get the kind of performance that AMD was hoping to get in games from the get-go. That makes the 7900 XTX a more than adequate competitor for the 4090... on this workload.
 
Hint: Do you know what the D3D12 in the API calls means?

[Attached screenshot: the API calls in question]

Do you know what HLSL is? You might want to Google that. You literally linked to a DirectX API.
Facepalm.

I linked you to the NVAPI documentation, not the DirectX documentation. You didn’t know DirectX is extensible?
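
Here is roughly what that extensibility looks like in practice: a minimal, illustrative sketch (not taken from the linked docs), assuming the NvAPI_Initialize and NvAPI_D3D12_SetNvShaderExtnSlotSpace entry points from the public NVAPI SDK headers; the UAV slot and register space values are arbitrary placeholders.

```cpp
#include <d3d12.h>
#include "nvapi.h"   // NVIDIA's public NVAPI SDK header

// Illustrative sketch: enable NVAPI's HLSL extension intrinsics on an existing
// ID3D12Device. The slot/space values (u7, space1) are arbitrary assumptions;
// a real engine uses whatever its shaders declare.
bool EnableNvShaderExtensions(ID3D12Device* device)
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;   // no NVIDIA driver present, or NVAPI unavailable

    // The extension entry point takes the D3D12 device directly; the HLSL
    // shaders then use the matching intrinsics on that slot/space.
    const NvU32 uavSlot = 7, registerSpace = 1;
    return NvAPI_D3D12_SetNvShaderExtnSlotSpace(device, uavSlot, registerSpace) == NVAPI_OK;
}
```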
 
Facepalm.

I linked you to the NVAPI documentation, not the DirectX documentation. You didn’t know DirectX is extensible?

CUDA is great and all, but virtually all video game code is DirectX, DirectX Raytracing or DirectCompute.

The exceptions are Vulkan, Vulkan Raytracing and Vulkan compute.

CUDA (and HIP) is for AI and Prosumer (ex: DaVinci Resolve, Blender Cycles) applications. Not video games.

What the hell are you arguing with me for? Did you even read my post?
 
What the hell are you arguing with me for? Did you even read my post?

You‘re clueless dude. CUDA is used all over the place in games. Just look at the Nvidia branch of UE to start with. It’s open, so assuming you know how to read code you’ll be able to see all the NVAPI calls.

The above assumes you know what an API does of course.
 
There is not a single lick of CUDA in the entire page you linked to. Not one lick.

But don't mind me. I'm just someone who can read CUDA and DirectX.

Do YOU know what cuda<<<>>> calls look like? Because they don't look like the page you selected randomly.
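
For reference, a minimal sketch of what actual CUDA code and its triple-chevron launch syntax look like (the kernel and names here are just an illustration, not taken from any linked page):

```cpp
#include <cuda_runtime.h>

// A trivial CUDA kernel, compiled by nvcc to PTX/SASS.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Host-side launch: the <<<grid, block>>> syntax is specific to CUDA.
// D3D12 has nothing like it; there you record Dispatch() on a command list.
void launch_saxpy(int n, float a, const float* d_x, float* d_y)
{
    const int block = 256;
    const int grid  = (n + block - 1) / block;
    saxpy<<<grid, block>>>(n, a, d_x, d_y);
    cudaDeviceSynchronize();   // wait for completion (sketch only)
}
```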
 
Since video games are generally a commercial endeavor and must support the existing install base, developers choose to support a baseline and that baseline has almost always been what older AMD hardware with much older drivers can do. Just throwing it out there that video games not using these compute runtimes isn't necessarily a positive: it's a compatibility constraint.
 
Since video games are generally a commercial endeavor and must support the existing install base, developers choose to support a baseline and that baseline has almost always been what older AMD hardware with much older drivers can do. Just throwing it out there that video games not using these compute runtimes isn't necessarily a positive: it's a compatibility constraint.

DirectCompute and Vulkan Compute offer plenty of cross-GPU compute APIs.

If your programmers are already masters of HLSL (aka: DirectX Shader language), it's just easier to write HLSL than to switch everyone to CUDA.

DirectX is a beast. And for all other video game purposes, Vulkan is acceptable (mostly Linux / SteamOS).

If your video game is already cross-GPU compatible because it's HLSL DirectX and you want a big compute shader, the answer is simply DirectCompute (a subcomponent of DirectX). Besides, all your data is already in DirectX Buffer Objects so it's much more natural.
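
To make that concrete, here is a minimal hypothetical sketch of the host side of a DirectCompute pass, assuming a pipeline state already built from an HLSL compute shader and a root signature whose parameter 0 is a root UAV; the data never leaves ordinary D3D12 resources.

```cpp
#include <d3d12.h>   // DirectCompute is just the compute pipeline of D3D12

// Hypothetical helper: record a compute dispatch on an already-open command list.
// computePso was compiled from HLSL (DXIL), not CUDA.
void RecordComputePass(ID3D12GraphicsCommandList* cmdList,
                       ID3D12PipelineState*       computePso,
                       ID3D12RootSignature*       rootSig,
                       ID3D12Resource*            uavBuffer,
                       UINT                       threadGroupsX)
{
    cmdList->SetPipelineState(computePso);
    cmdList->SetComputeRootSignature(rootSig);
    // Root parameter 0 is assumed to be a root UAV pointing at our buffer.
    cmdList->SetComputeRootUnorderedAccessView(0, uavBuffer->GetGPUVirtualAddress());
    cmdList->Dispatch(threadGroupsX, 1, 1);   // same buffers the graphics passes already use
}
```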
 
DirectCompute and Vulkan Compute offer plenty of cross-GPU compute APIs.

If your programmers are already masters of HLSL (aka: DirectX Shader language), it's just easier to write HLSL than to switch everyone to CUDA.

DirectX is a beast. And for all other video game purposes, Vulkan is acceptable (mostly Linux / SteamOS).

If your video game is already cross-GPU compatible because it's HLSL DirectX and you want a big compute shader, the answer is simply DirectCompute (a subcomponent of DirectX). Besides, all your data is already in DirectX Buffer Objects so it's much more natural.

See where my bone to pick with this lies: AMD almost never implements the optional extensions to DirectX in their driver, either. So that point gets kind of moot fast, the game industry today is almost universally bound to AMD's limitations.
 
There is not a single lick of CUDA in the entire page you linked to. Not one lick.

But don't mind me. I'm just someone who can read CUDA and DirectX.

Do YOU know what cuda<<<>>> calls look like? Because they don't look like the page you selected randomly.
Dude. NVAPI is implemented in CUDA. How many times do you have to be told?

DirectCompute and Vulkan Compute offer plenty of cross-GPU compute APIs.
Sure, that’s why everyone uses them, right?

Everyone as in nobody of significance.
 
Dude. NVAPI is implemented in CUDA. How many times do you have to be told?

Oh no. You've told me plenty about your ignorance on this subject. That's why this is so amusing to me.

CUDA doesn't need ID3D12Device btw. I'm just waiting to see if you ever notice. At this point, it seems like you're beyond my help though.

FYI: CUDA compiles to PTX. DirectX/HLSL compiles down to DXIL, a completely different technology. They are different. If your GPU code is in HLSL, it CANNOT be in CUDA. There are a few interop extensions that allow data to be passed between the two if you need to, but it's not at all common to mix DirectX and CUDA code.

It's one or the other. But please, tell me how little GPU programming I know. It's amusing to me.
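
Those interop extensions are the exception that proves the rule: you explicitly import a D3D12 resource into CUDA rather than mixing the two languages. A hedged sketch of the CUDA side, assuming the shared NT handle was created elsewhere with ID3D12Device::CreateSharedHandle:

```cpp
#include <cuda_runtime.h>

// Sketch: map an existing D3D12 buffer into CUDA via the external-memory
// interop API. 'sharedHandle' is assumed to come from
// ID3D12Device::CreateSharedHandle on the D3D12 side (not shown here).
void* ImportD3D12Buffer(void* sharedHandle, size_t sizeBytes)
{
    cudaExternalMemoryHandleDesc memDesc = {};
    memDesc.type                = cudaExternalMemoryHandleTypeD3D12Resource;
    memDesc.handle.win32.handle = sharedHandle;
    memDesc.size                = sizeBytes;
    memDesc.flags               = cudaExternalMemoryDedicated;

    cudaExternalMemory_t extMem = nullptr;
    if (cudaImportExternalMemory(&extMem, &memDesc) != cudaSuccess)
        return nullptr;

    cudaExternalMemoryBufferDesc bufDesc = {};
    bufDesc.offset = 0;
    bufDesc.size   = sizeBytes;

    void* devPtr = nullptr;   // usable by CUDA kernels; the HLSL side never sees this pointer
    if (cudaExternalMemoryGetMappedBuffer(&devPtr, extMem, &bufDesc) != cudaSuccess)
        return nullptr;
    return devPtr;
}
```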
See where my bone to pick with this lies: AMD almost never implements the optional extensions to DirectX in their driver, either. So that point gets kind of moot fast, the game industry today is almost universally bound to AMD's limitations.

Hmmm. My experience is that AMD does implement them but they tend to be slower (ex: DirectX Raytracing).
 
Hmmm. My experience is that AMD does implement them but they tend to be slower (ex: DirectX Raytracing).

RT isn't optional; without it, hardware won't qualify for DirectX 12 Ultimate. Granted, the situation has greatly improved as of late, but some of their legacy API stuff still suffers quite a bit.
 
FYI: CUDA compiles to PTX. DirectX/HLSL compiles down to DXIL, a completely different technology.
Those are different intermediate languages - both DXIL and PTX are virtual ISAs and do not contain native code. Nvidia's native assembly is SASS; PTX is compiled down to SASS microcode by a JIT compiler in the driver.

It's one or the other. But please, tell me how little GPU programming I know. It's amusing to me.
I don't have to; you are doing perfectly fine on your own. For example, anyone working in PTX would know about SASS, because PTX breakpoints are SASS addresses.
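
For anyone who wants to see that split for themselves, a quick sketch (file names and the sm_86 architecture are just examples): compile a trivial kernel, then dump the virtual PTX and the native SASS separately.

```cpp
// kernel.cu - a one-line kernel to inspect both layers.
// Typical workflow (commands shown as comments):
//   nvcc -arch=sm_86 -ptx   kernel.cu -o kernel.ptx     # virtual ISA (PTX)
//   nvcc -arch=sm_86 -cubin kernel.cu -o kernel.cubin
//   cuobjdump --dump-sass kernel.cubin                   # native SASS for that GPU
__global__ void scale(float* data, float s)
{
    data[threadIdx.x] *= s;
}
```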
 
Good stuff, now please backport FSR 4 to more existing titles.



RT isn't optional; without it, hardware won't qualify for DirectX 12 Ultimate.
And who will give a flying F about it?
 