
AMD Details DeepSeek R1 Performance on Radeon RX 7900 XTX, Confirms Ryzen AI Max Memory Sizes

btarunr

Editor & Senior Moderator
AMD today put out detailed guides on how to get DeepSeek R1 distilled reasoning models to run on Radeon RX graphics cards and Ryzen AI processors. The guide confirms that the new Ryzen AI Max "Strix Halo" processors come in hardwired to LPCAMM2 memory configurations of 32 GB, 64 GB, and 128 GB, and there won't be a 16 GB memory option for notebook manufacturers to cheap out with. The guide goes on to explain that "Strix Halo" will be able to locally accelerate DeepSeek-R1-Distill-Llama with 70 billion parameters on the 64 GB and 128 GB memory configurations of "Strix Halo" powered notebooks, while the 32 GB model should be able to run DeepSeek-R1-Distill-Qwen-32B. Ryzen AI "Strix Point" mobile processors should be capable of running DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Llama-14B on their RDNA 3.5 iGPUs and NPUs. Meanwhile, older processors based on the "Phoenix Point" and "Hawk Point" silicon should be capable of running DeepSeek-R1-Distill-Llama-14B. The company recommends running all of the above distills in Q4_K_M quantization.
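
As an illustration (this isn't part of AMD's guide, which uses the LM Studio GUI), the same Q4_K_M GGUF distills can also be driven from a few lines of Python via the llama-cpp-python bindings; the model path below is a placeholder for whichever distill fits your memory budget:

```python
# Hedged sketch: running a Q4_K_M DeepSeek R1 distill locally with
# llama-cpp-python instead of LM Studio. The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-14B-Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload all layers to the GPU/iGPU
    n_ctx=4096,       # context window; reasoning chains can run long
)

# create_chat_completion applies the chat template embedded in the GGUF,
# so the R1 distill's chain of thought (inside <think> tags) comes back
# ahead of the final answer.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    max_tokens=512,
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])
```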

Switching gears to the discrete graphics cards, AMD is only recommending its Radeon RX 7000 series for now, since the RDNA 3 graphics architecture introduces AI accelerators. The flagship Radeon RX 7900 XTX is recommended for the DeepSeek-R1-Distill-Qwen-32B distill, while all SKUs with 12 GB to 20 GB of memory (the RX 7600 XT, RX 7700 XT, RX 7800 XT, RX 7900 GRE, and RX 7900 XT) are recommended for models up to DeepSeek-R1-Distill-Qwen-14B. The mainstream RX 7600 with its 8 GB of memory is only recommended for models up to DeepSeek-R1-Distill-Llama-8B. You will need LM Studio 0.3.8 or later and Radeon Software Adrenalin 25.1.1 beta or later drivers. AMD put out first-party LM Studio 0.3.8 tokens/second performance numbers for the RX 7900 XTX, comparing it with the NVIDIA GeForce RTX 4080 SUPER and the RTX 4090.
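
A quick back-of-the-envelope check (our rough figures, not AMD's) shows why the tiers line up the way they do: Q4_K_M averages roughly 4.8 bits per weight, or about 0.6 bytes per parameter for the weights alone, before the KV cache and buffers:

```python
# Rough sizing sketch: approximate Q4_K_M weight footprint per distill.
# Assumes ~4.8 bits/weight on average; real GGUF files vary, and the
# KV cache and buffers add a couple of GB on top.
BITS_PER_WEIGHT = 4.8

def approx_weights_gb(params_billion: float) -> float:
    return params_billion * BITS_PER_WEIGHT / 8

for name, params in [("Llama-8B", 8), ("Qwen-14B", 14),
                     ("Qwen-32B", 32), ("Llama-70B", 70)]:
    print(f"{name}: ~{approx_weights_gb(params):.1f} GB of weights")

# Llama-8B:  ~4.8 GB  -> the 8 GB RX 7600
# Qwen-14B:  ~8.4 GB  -> the 12 GB to 20 GB cards
# Qwen-32B: ~19.2 GB  -> the 24 GB RX 7900 XTX
# Llama-70B: ~42.0 GB -> the 64 GB and 128 GB "Strix Halo" configurations
```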



When compared to the RTX 4080 SUPER, the RX 7900 XTX posts up to 34% higher performance with DeepSeek-R1-Distill-Qwen-7B, up to 27% higher performance with DeepSeek-R1-Distill-Llama-8B, and up to 22% higher performance with DeepSeek-R1-Distill-Qwen-14B. Next up, the big face-off between the RX 7900 XTX and the GeForce RTX 4090 with its 24 GB of memory. The RX 7900 XTX is shown to prevail in 3 out of 4 tests, posting up to 13% higher performance with DeepSeek-R1-Distill-Qwen-7B, up to 11% higher performance with DeepSeek-R1-Distill-Llama-8B, and up to 2% higher performance with DeepSeek-R1-Distill-Qwen-14B. It only falls behind the RTX 4090 by 4% with the larger DeepSeek-R1-Distill-Qwen-32B model.

Catch the step-by-step guide on getting DeepSeek R1 distilled reasoning models to run on AMD hardware in the source link below.

View at TechPowerUp Main Site | Source
 
What? AMD for once in their life getting their timing right to capitalize on something?
 
Hmm, been meaning to try this.

Thanks for the link.
 
The guide confirms that the new Ryzen AI Max "Strix Halo" processors come in hardwired to LPCAMM2 memory configurations of 32 GB, 64 GB, and 128 GB, and there won't be a 16 GB memory option for notebook manufacturers to cheap out with.

I combed the source page for this language or any clarification on the matter; saying it is “hardwired” to LPCAMM2 is a bit counterintuitive. Was it supposed to read LPDDR5 instead?

Either way, the 32 GB mandatory minimum is a welcome sight. I’m a bit surprised (hence the confusion above) that 48 and 96 GB capacities weren’t also mentioned, as those capacities should be possible via LPCAMM2.
 
I combed the source page for this language or any clarification on the matter; saying it is “hardwired” to LPCAMM2 is a bit counterintuitive. Was it supposed to read LPDDR5 instead?

Either way, the 32 GB mandatory minimum is a welcome sight. I’m a bit surprised (hence the confusion above) that 48 and 96 GB capacities weren’t also mentioned, as those capacities should be possible via LPCAMM2.
Afaik, 48G isn't achievable in a quad-channel configuration (4x12G?), but 96G should be (as 4x24G modules are available).
 
I combed the source page for this language or any clarification on the matter; saying it is “hardwired” to LPCAMM2 is a bit counterintuitive. Was it supposed to read LPDDR5 instead?

Either way, the 32 GB mandatory minimum is a welcome sight. I’m a bit surprised (hence the confusion above) that 48 and 96 GB capacities weren’t also mentioned, as those capacities should be possible via LPCAMM2.
LPCAMM2 uses LPDDR5(X) modules still.
Afaik, 48G isn't achievable in a quad-channel configuration (4x12G?), but 96G should be (as 4x24G modules are available).
IIRC each LPCAMM2 module is 128-bit; for Strix Halo you'll need 2 of those, so for 48GB you could go for 24GB modules.
However, Crucial only lists 32 and 64GB modules on their page:

So it'd mean either 64 or 128GB for Strix Halo. I'm too lazy to look into other manufacturers.
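
For what it's worth, the capacity arithmetic is easy to sketch; the module sizes below are just the ones discussed here, and vendor lineups will differ:

```python
# Capacity math for this sub-thread: Strix Halo's 256-bit bus would take
# two 128-bit LPCAMM2 modules, so the total is twice the module size.
for module_gb in [24, 32, 64]:  # sizes mentioned above; availability varies
    print(f"2 x {module_gb} GB LPCAMM2 -> {2 * module_gb} GB total")
# -> 48, 64 and 128 GB. A 32 GB total would need 16 GB modules,
#    which don't seem to be listed anywhere.
```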
 
I think an upgrade path for my 7900XT has just opened up right here.

Thanks AMD I guess?
 
It's crazy because hardware capable of running decently sized models fast enough costs WAY WAY less to make than what it's sold for. It should become much more affordable to run interesting models in the next few decades.
 
LPCAMM2 uses LPDDR5(X) modules still.

IIRC each LPCAMM2 module is 128-bit; for Strix Halo you'll need 2 of those, so for 48GB you could go for 24GB modules.
However, Crucial only lists 32 and 64GB modules on their page:

So it'd mean either 64 or 128GB for Strix Halo. I'm too lazy to look into other manufacturers.
The 32 GB SKU might be using soldered LPDDR5X; that is the norm for laptops after all.
 
A small niche of enthusiasts has been asking for years for more VRAM on consumer GPUs to run bigger AI models; hopefully the current DeepSeek craze is going to make manufacturers reconsider their stance of just providing the bare minimum needed for running games at the resolution the GPUs are primarily intended to be used with.
 
A small niche of enthusiasts has been asking for years for more VRAM on consumer GPUs to run bigger AI models; hopefully the current DeepSeek craze is going to make manufacturers reconsider their stance of just providing the bare minimum needed for running games at the resolution the GPUs are primarily intended to be used with.
Honestly, I believe that for inference, Apple's approach is better; the unified DRAM pool allows memory capacities that consumer GPUs just can't match. A lot of people use laptops, so a bigger Strix Halo with a 512-bit bus could have 256 GB of RAM with 76% of a desktop RTX 4080's bandwidth.
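
The bandwidth figure checks out on the back of an envelope (assumed numbers: LPDDR5X-8533 on the hypothetical 512-bit bus, against the RTX 4080's published 716.8 GB/s):

```python
# Peak bandwidth = bus width in bytes x transfer rate.
def bandwidth_gb_s(bus_bits: int, mt_per_s: int) -> float:
    return bus_bits / 8 * mt_per_s / 1000

strix = bandwidth_gb_s(512, 8533)  # hypothetical 512-bit LPDDR5X-8533
rtx_4080 = 716.8                   # 256-bit GDDR6X at 22.4 Gbps, per spec sheet
print(f"{strix:.0f} GB/s is {strix / rtx_4080:.0%} of an RTX 4080")
# -> 546 GB/s is 76% of an RTX 4080
```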
 
Honestly, I believe that for inference, Apple's approach is better; the unified DRAM pool allows memory capacities that consumer GPUs just can't match.
That could be a path forward too with mixture-of-experts (MoE) LLMs similar to DeepSeek V3/R1, but merely providing non-upgradable systems with relatively large amounts of RAM (e.g. 128GB) at mediocre-to-low bandwidth (~250-300 GB/s, still below the level of a low-end discrete GPU) isn't going to help a lot. Memory doesn't just have to be abundant, but fast too.
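
To put rough numbers on that: single-stream token generation is close to memory-bound, since each generated token reads all active weights once, so bandwidth divided by the active-weight footprint gives a speed ceiling (illustrative figures, not measurements):

```python
# Rule-of-thumb ceiling: tokens/s <= bandwidth / bytes of weights read per token.
def max_tok_per_s(bandwidth_gb_s: float, active_weights_gb: float) -> float:
    return bandwidth_gb_s / active_weights_gb

dense_70b_q4 = 42  # ~GB of Q4_K_M weights for a dense 70B model
print(max_tok_per_s(273, dense_70b_q4))  # ~6.5 tok/s at ~273 GB/s (256-bit LPDDR5X-8533)
print(max_tok_per_s(960, dense_70b_q4))  # ~22.9 tok/s at the RX 7900 XTX's ~960 GB/s
# An MoE model like DeepSeek V3/R1 reads only its active experts per token,
# which is why it tolerates a big-but-slower memory pool better than a dense model.
```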
 
LPCAMM2 uses LPDDR5(X) modules still.

Right, that's where the confusion comes from: the use of the phrase "hardwired to LPCAMM2 configurations of [fixed sizes]". The word "hardwired" implies that it is in fact soldered.
 
That could be a path forward too with mixture-of-experts (MoE) LLMs similar to DeepSeek V3/R1, but merely providing non-upgradable systems with relatively large amounts of RAM (e.g. 128GB) at mediocre-to-low bandwidth (~250-300 GB/s, still below the level of a low-end discrete GPU) isn't going to help a lot. Memory doesn't just have to be abundant, but fast too.
Yes, there's a tradeoff, and for inference, memory bandwidth trumps all. The trend for GPUs is clear though. GDDR leads to low memory capacities; HBM allows exceeding that capacity at infeasible cost. Upgradeable RAM allows the most capacity, but that comes at the expense of bandwidth as well.
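
To put spec-sheet numbers on that trend (public figures, rounded; the parts named are just convenient examples):

```python
# Illustrative peak bandwidth vs. capacity for the three approaches above.
memory_options = [
    ("GDDR6X, 384-bit (RTX 4090)",    1008, "24 GB, fixed"),
    ("HBM3, 5120-bit (H100)",         3350, "80 GB, fixed, very expensive"),
    ("DDR5-5600, dual-channel DIMMs",   90, "192 GB+, upgradeable"),
]
for name, gb_s, capacity in memory_options:
    print(f"{name}: ~{gb_s} GB/s, {capacity}")
```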
 
Great time to uncancel Navi 41 and 42, then? Bring them to market with 30 and 36 GB of VRAM.
 
Yes, there's a tradeoff, and for inference, memory bandwidth trumps all. The trend for GPUs is clear though. GDDR leads to low memory capacities; HBM allows exceeding that capacity at infeasible cost. Upgradeable RAM allows the most capacity, but that comes at the expense of bandwidth as well.
The main issue with HBM is the fact that it requires an interposer to sit on and communicate with the main die. That drastically increases the cost, as HBM needs to be on-package, on silicon.

But there is work being done to produce 3D DRAM that wouldn't necessarily be HBM in order to increase capacities, but from what I see, it's still a few years in the making.

Note that it looks like they are also working on stacked DRAM that would use the same bus width as GDDR* and would probably be a drop-in solution while we wait.
 
What? AMD for once in their life getting their timing right to capitalize on something?
Yeah, upon reading this I was lauding their reactivity... then I remembered that it's probably thanks to the marketing department not being in charge.
 
This is awesome. YESSSS
 
The main issue with HBM is the fact that it requires an interposer to sit on and communicate with the main die. That drastically increases the cost, as HBM needs to be on-package, on silicon.
There are more issues. An HBM memory cell takes up twice as much space as a DDR cell. Then there's TSV stacking, which seems to be incredibly expensive, possibly because there's insufficient manufacturing capacity everywhere.
DRAM dies are also stacked in large capacity server DIMMs. That used to be the case for really, really expensive 128 GB DIMMs and up, but now as larger capacity dies exist, it's probably 256 GB and up. Going by the price, I assume it's TSV stacking.
LPDDR dies are also stacked in some designs, for example Apple's M chips. Probably TSV again because speed matters and cost doesn't.
A case of non-TSV stacked dies (with old style wire bonding instead) would be NAND, for several reasons: lower speed, small number of wires due to 8-bit bus, and requirement for low cost.

But there is work being done to produce 3D DRAM that wouldn't necessarily be HBM in order to increase capacities, but from what I see, it's still a few years in the making.
Thanks for the link. Semiengineering posted this nice overview of current tech in 2021 ... and later I occasionally checked and found nothing. Yes, we'll wait some more for 3D. Someone will eventually modify the NAND manufacturing tech so that those capacitors, well, quickly charge and discharge. And when they succeed, they will try everything to compress four bits into one cell.

Note that it looks like they are also working on stacked DRAM that would use the same bus width as GDDR* and would probably be a drop-in solution while we wait.
What sort of stacked DRAM do you mean here? Again, due to high speed, it would have to be TSV stacked, so in a different price category.
 
The 395 looks more and more interesting by the day, and I can see it replacing low/mid-end GPUs in the laptop space in the future. Please AMD, release one on the desktop. Or Turin Threadripper. These two are a lot more interesting than the shit these three companies have been spitting out the last couple of years, and I'd love to tweak them out.

Fast forward a few years, and a 16-core with V-Cache + UDNA + CAMM2 should be awesome. HBM remains a pipe dream because its prices rose sharply and TSV stacking remains prohibitively expensive.
 
The 395 looks more and more interesting by the day, and I can see it replacing low/mid-end GPUs in the laptop space in the future. Please AMD, release one on the desktop. Or Turin Threadripper. These two are a lot more interesting than the shit these three companies have been spitting out the last couple of years, and I'd love to tweak them out.

Fast forward a few years, and a 16-core with V-Cache + UDNA + CAMM2 should be awesome. HBM remains a pipe dream because its prices rose sharply and TSV stacking remains prohibitively expensive.

I'm positive that AMD and companies like Minisforum will release mini motherboards with the SoC embedded for system builders.
 
Please AMD, release one on the desktop.
And its name shall be 10980XG. It would only fit in a TR socket though, with its four channels.

We have yet to see what becomes of CAMM2 and LPCAMM. Either of these may become a commodity in a couple years. Or they may remain a rarity, with poor availability, mostly available through OEMs.
 
The 7900 XTX being better at AI than the 4090? Good joke! :laugh: Wait... Seriously? :wtf:
 
So if the 7900 XTX is faster for AI than the 4090, and AMD mentions that RDNA3 specifically can run this model well because of hardware advantages over RDNA2, explain to me why the new FSR version was supposed to be exclusive to their new GPUs? I mean, even an RTX 2000 GPU can benefit from DLSS, so I'm just confused about this stuff.
 
So if the 7900 XTX is faster for AI than the 4090, and AMD mentions that RDNA3 specifically can run this model well because of hardware advantages over RDNA2, explain to me why the new FSR version was supposed to be exclusive to their new GPUs? I mean, even an RTX 2000 GPU can benefit from DLSS, so I'm just confused about this stuff.
FSR 4 could be vastly different from DeepSeek in how it runs. RDNA 3's AI accelerators are part of the shader engine. RDNA 4 may be getting dedicated units. Who knows.

Also, DLSS hasn't changed much in its base operation, so it can run on anything with tensor cores. FSR hasn't needed AI cores so far, but FSR 4 does.

My other theory is that Nvidia hasn't touched the RT and tensor cores much since RTX 2000 (judging by performance data). We know very little about what an AI/tensor core actually is and how it works.
 