Monday, February 3rd 2025

NVIDIA GeForce RTX 50 Series AI PCs Accelerate DeepSeek Reasoning Models

The recently released DeepSeek-R1 model family has brought a new wave of excitement to the AI community, allowing enthusiasts and developers to run state-of-the-art reasoning models with problem-solving, math and code capabilities, all from the privacy of local PCs. With up to 3,352 trillion operations per second of AI horsepower, NVIDIA GeForce RTX 50 Series GPUs can run the DeepSeek family of distilled models faster than anything on the PC market.

A New Class of Models That Reason
Reasoning models are a new class of large language models (LLMs) that spend more time "thinking" and "reflecting" to work through complex problems, describing the steps required to solve a task. The fundamental principle is that any problem can be solved with deep thought, reasoning and time, just as humans tackle problems. By spending more time, and thus compute, on a problem, the LLM can yield better results. This phenomenon is known as test-time scaling, where a model dynamically allocates compute resources during inference to reason through problems.

Reasoning models can enhance user experiences on PCs by deeply understanding a user's needs, taking actions on their behalf and allowing them to provide feedback on the model's thought process, unlocking agentic workflows for solving complex, multi-step tasks such as analyzing market research, solving complicated math problems and debugging code.
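One simple form of test-time scaling is self-consistency: sample several independent reasoning chains and take a majority vote over the final answers, so spending more compute at inference time buys reliability. A toy sketch of that idea (the sampler below is a stand-in for a real model's stochastic chain of thought, and the answer strings are purely illustrative):

```python
import random
from collections import Counter

def sample_answer(rng):
    """Stand-in for one stochastic reasoning chain; a real model would
    generate a full chain of thought and return its final answer."""
    return rng.choices(["42", "41", "40"], weights=[0.7, 0.2, 0.1])[0]

def best_of_n(n, seed=0):
    """Sample n independent answers and return the majority vote.
    Larger n = more inference-time compute = a more reliable answer."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(rng) for _ in range(n))
    return votes.most_common(1)[0][0]

# A single sample can land on a wrong answer; with many samples the
# majority vote converges on the answer the "model" most often produces.
print(best_of_n(1))
print(best_of_n(101))
```

This is only one strategy; reasoning models like DeepSeek-R1 instead lengthen a single chain of thought, but the compute-for-quality trade-off is the same.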

The DeepSeek Difference
The DeepSeek-R1 family of distilled models is based on a large 671-billion-parameter mixture-of-experts (MoE) model. MoE models consist of multiple smaller expert models, each specialized for part of a problem; DeepSeek models further divide the work, assigning subtasks to smaller sets of experts. DeepSeek employed a technique called distillation to build a family of six smaller student models, ranging from 1.5 billion to 70 billion parameters, from the large 671-billion-parameter DeepSeek-R1 model. The reasoning capabilities of the larger model were taught to smaller Llama and Qwen student models, resulting in powerful, compact reasoning models that run locally on RTX AI PCs with fast performance.
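Distillation of this kind is typically framed as training the student to match the teacher's output distribution over next tokens. A minimal sketch of the standard soft-target loss (this is not DeepSeek's actual training code; the temperature value and logits are illustrative):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution at a given temperature."""
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this trains the student's next-token distribution to
    mimic the teacher's, which is how reasoning behaviour is transferred.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Identical logits give zero loss; diverging logits give a positive loss.
t = np.array([2.0, 1.0, 0.1])
print(distillation_loss(t, t))                               # 0.0
print(distillation_loss(t, np.array([0.1, 1.0, 2.0])) > 0)   # True
```

In practice this term is summed over every token position and usually combined with a standard cross-entropy loss on ground-truth text, but the matching-the-teacher idea is the core of it.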

Peak Performance on RTX
Inference speed is critical for this new class of reasoning models. GeForce RTX 50 Series GPUs, built with dedicated fifth-generation Tensor Cores, are based on the same NVIDIA Blackwell GPU architecture that fuels world-leading AI innovation in the data center. RTX fully accelerates DeepSeek, offering maximum inference performance on PCs.

[Chart: throughput performance of the DeepSeek-R1 family of distilled models across GPUs on the PC]

Experience DeepSeek on RTX in Popular Tools
NVIDIA's RTX AI platform offers the broadest selection of AI tools, software development kits and models, opening access to the capabilities of DeepSeek-R1 on over 100 million NVIDIA RTX AI PCs worldwide, including those powered by GeForce RTX 50 Series GPUs. High-performance RTX GPUs make AI capabilities always available—even without an internet connection—and offer low latency and increased privacy because users don't have to upload sensitive materials or expose their queries to an online service.

Experience the power of DeepSeek-R1 and RTX AI PCs through a vast ecosystem of software, including Llama.cpp, Ollama, LM Studio, AnythingLLM, Jan.AI, GPT4All and OpenWebUI, for inference. Plus, use Unsloth to fine-tune the models with custom data.
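As one concrete path among the tools listed above, Ollama exposes the distilled models under its deepseek-r1 tags. A minimal sketch, assuming Ollama is installed and the 14B tag is available (tag names may change over time):

```shell
# Download the 14B DeepSeek-R1 distill; after this, inference runs
# entirely locally with no internet connection required.
ollama pull deepseek-r1:14b

# Ask a question interactively; the model prints its "thinking" steps
# before the final answer.
ollama run deepseek-r1:14b "Solve step by step: what is 17 * 24?"
```

The other listed front ends (LM Studio, AnythingLLM, Jan.AI, GPT4All, OpenWebUI) wrap the same GGUF-format model files with a graphical interface.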
Source: NVIDIA

24 Comments on NVIDIA GeForce RTX 50 Series AI PCs Accelerate DeepSeek Reasoning Models

#1
Daven
"NVIDIA GeForce RTX 50 Series AI PCs Accelerate DeepSeek Reasoning Models"
...and so does about every other accelerator provider, apparently.
#2
tvshacker
Wasn't it shown that it runs fine on lower-specced systems?
Aren't the 50 series vaporware right now? Why promote this if there is no way to (independently) do it?
#3
bobsled
They’re espousing the benefits over 7900XTX, yet the AMD card is much cheaper and is readily available on shelves.
#4
john_
AMD has shown different results, with the 7900 XTX beating the RTX 4090.
Now, in the above benchmarks from Nvidia, the Radeon card is running Vulkan. Is this optimal, or is Nvidia sabotaging the 7900 here?

Also, even with the above results from Nvidia, the 7900 wins on performance per dollar easily.
#5
Legacy-ZA
I saw an article about 5090s bricking after a driver update; does anyone know if this happens with the 5080s?
#7
Rightness_1
And just like that, NV owns the mindshare in less than a week. Let the purchasing of NV chips re-commence!
#8
mb194dc
It does but other chips accelerate them better...
#9
igormp
tvshacker said:
Wasn't it shown that it runs fine on lower-specced systems?
Aren't the 50 series vaporware right now? Why promote this if there is no way to (independently) do it?
The distilled models, yeah. Smaller models can run better on lower-specced systems.
The bigger ones require more hardware.

No consumer hardware is going to run the actual big MoE model tho.
john_ said:
AMD has shown different results, with the 7900 XTX beating the RTX 4090.
Now, in the above benchmarks from Nvidia, the Radeon card is running Vulkan. Is this optimal, or is Nvidia sabotaging the 7900 here?

Also, even with the above results from Nvidia, the 7900 wins on performance per dollar easily.
Vulkan is quite a bit slower, but it's way easier to get up and running than ROCm.
Nonetheless, those results from AMD were really weird, as even a 3090 usually beats a 7900XTX:

source
#10
Raiden85
Great, so now the Chinese will be grabbing these by the truckload. At this rate they'll never be in stock.
#11
ty_ger
Raiden85 said:
Great, so now the Chinese will be grabbing these by the truckload. At this rate they'll never be in stock.
Chinese? Why Chinese? Why not everyone else?
DeepSeek is open-source software which can be run anywhere in the world.
#12
Raiden85
True, but the current leaders in AI for now seem to be the US first and then China, so if someone is going to buy hardware that accelerates this, then I'd bet on China buying in bulk first.
#13
Tom Yum
I'm running the 14B distill of DeepSeek on my 7735HS laptop with just the integrated 680M. Inference doesn't require a lot of compute; it just needs a lot of memory (I have 32GB). It isn't the fastest, but it is still usable. 7B is much faster but doesn't perform well enough as an AI in my experience.

So many people still think that AI has to run in the 'cloud' because of the compute requirements, or requires high-end NVIDIA cards. Training an AI does, but most people aren't training AIs; they are just running a pre-trained model (inferencing). Any semi-recent GPU or CPU can do that for small to mid-sized AI models. If you want to run the full 671B model then yeah, you will need a high-end workstation (mainly because of the RAM requirements), but a 14B distill can meet the majority of people's LLM needs and will run on consumer hardware.
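The memory-not-compute point above can be put in rough numbers. A back-of-the-envelope sketch (the 4-bit quantization level and the 1.2x runtime overhead factor are assumptions, not measured figures):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int = 4,
                    overhead: float = 1.2) -> float:
    """Approximate memory footprint of an LLM's weights for inference.

    params_billion:  parameter count in billions (e.g. 14 for a 14B model)
    bits_per_weight: quantization level (4-bit is common for local use)
    overhead:        fudge factor for KV cache and runtime buffers
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 14B distill at 4-bit fits comfortably within 32 GB of system RAM...
print(round(model_memory_gb(14), 1))    # 8.4
# ...while the full 671B MoE model is far beyond any consumer machine.
print(round(model_memory_gb(671), 1))   # 402.6
```

These are weight-storage estimates only; long contexts grow the KV cache further, which is why the overhead factor is only a rough guess.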
#14
kapone32
This used to be called Damage Control. The narrative is so precious.
ty_ger said:
Chinese? Why Chinese? Why not everyone else?
DeepSeek is open-source software which can be run anywhere in the world.
Look at how many 4090s were sold to China vs the rest of the world. The rest of the world gets 5090s, but China gets 5090Ds. Are those not China-only variants? People act like China is inert after North America gave China its manufacturing all those years ago.
#15
Luminescent
Well, the Chinese threw a wrench in the US AI business by making it free. I don't know how much money AI makes right now, but I know it eats up a lot of billions. Nvidia is certainly hurting now that some investors question the billions they threw at this, just for DeepSeek to make it free.
If you were a billionaire, would you invest billions into a business that has the potential to be free for all?
#16
SamuelL
The distilled models are not DeepSeek R1. They are modified Qwen models that behave similar(ish) but lack the reasoning component that makes the “real” R1 model a competitor with OpenAI.

In other words: any marketing that says, “deepseek runs on X consumer card” is a load of BS.
#17
ty_ger
kapone32 said:
Look at how many 4090s were sold to China vs the rest of the world. The rest of the world gets 5090s, but China gets 5090Ds. Are those not China-only variants? People act like China is inert after North America gave China its manufacturing all those years ago.
This is an article about deepseek. There is another article about smuggling in nvidia gpus through Singapore. Seems that's a better place to discuss that.
If that's the discussion in this thread, I assume the person is confused about where deepseek can be used.
#18
kapone32
ty_ger said:
This is an article about deepseek. There is another article about smuggling in nvidia gpus through Singapore. Seems that's a better place to discuss that.
If that's the discussion in this thread, I assume the person is confused about where deepseek can be used.
Yes, an article theorizing on how good DeepSeek will be on 5000-series GPUs. I don't say this in a vacuum, but with stories on TPU stating that Nvidia shifted sales to Eastern vendors once the sanctions were announced. Do you remember the laptop 5090s that made their way onto dGPU PCBs once whatever they were used for did not pan out? I already mentioned the 4090D, but the 5090D is already in production. DeepSeek is still the property of the Chinese govt.
#19
ty_ger
1) It came from a Chinese company, not the Chinese government.
2) If it is open source, is it the property of anybody?

So, again, if we are going to comment about 'great, China is going to scalp all our GPUs', why not post in the thread dedicated to that topic? Posting that in every thread is counterproductive.
#20
R-T-B
ty_ger said:
It came from a Chinese company, not the Chinese government.
One and the same, really, considering the Chinese government holds a 51% stake in every Chinese company.
#21
ty_ger
R-T-B said:
One and the same, really, considering the Chinese government holds a 51% stake in every Chinese company.
In general, that is simply untrue. Ownership ranges across a huge spectrum, and if you think China's government has a majority stake in all companies in China, you are way off base. When it comes to this private equity company, I don't know. Maybe. What difference does it make? What about the other half of the quote you left out? Like the reason this whole claim is nonsense?

The person I quoted was clearly clueless. No need to carry on with this devil's advocate thing.

Carry on that sort of conversation here: www.techpowerup.com/forums/threads/us-investigates-possible-singapore-loophole-in-chinas-access-to-nvidia-gpus.331912/
Where it belongs.

If someone thinks that DeepSeek means that China will steal GPUs, they are clueless. Simple as that. China will steal GPUs for a variety of reasons, not for something that has already been publicly released and creates no income.

Let people call out nonsense without playing this weird devil's advocate crap.
#22
R-T-B
ty_ger said:
In general, that is simply untrue.
Seems you are correct; apologies for perpetuating a rumor.
#23
kapone32
ty_ger said:
In general, that is simply untrue. Ownership ranges across a huge spectrum, and if you think China's government has a majority stake in all companies in China, you are way off base. When it comes to this private equity company, I don't know. Maybe. What difference does it make? What about the other half of the quote you left out? Like the reason this whole claim is nonsense?

The person I quoted was clearly clueless. No need to carry on with this devil's advocate thing.

Carry on that sort of conversation here: www.techpowerup.com/forums/threads/us-investigates-possible-singapore-loophole-in-chinas-access-to-nvidia-gpus.331912/
Where it belongs.

If someone thinks that DeepSeek means that China will steal GPUs, they are clueless. Simple as that. China will steal GPUs for a variety of reasons, not for something that has already been publicly released and creates no income.

Let people call out nonsense without playing this weird devil's advocate crap.
I am not saying they are using it to steal GPUs. All I am stating is that in China there is no such thing as a free market, and I have no idea what China is doing with all the 4090s they are buying. This is based on what I have read and seen, but Hong Kong is a good example of what I am talking about.
#24
ty_ger
kapone32 said:
I am not saying they are using it to steal GPUs. All I am stating is that in China there is no such thing as a free market, and I have no idea what China is doing with all the 4090s they are buying. This is based on what I have read and seen, but Hong Kong is a good example of what I am talking about.
Then why are you replying in defense of the person who did?