
NVIDIA Ports CUDA to RISC-V, Betting Big on Open-Source ISA

AleksandarK

News Editor
Staff member
During the RISC-V Summit in China, NVIDIA's Frans Sijstermans, vice president of hardware engineering, officially announced that NVIDIA has ported its massive CUDA software stack to the RISC-V ISA, signaling that the company sees RISC-V adoption accelerating across the hardware stack and believes it could be a perfect match for NVIDIA GPUs. Historically, NVIDIA has ported its CUDA platform to every major instruction set, including x86, Arm, PowerPC, and even Sun's SPARC. This is done to ensure that if any platform takes off in the enterprise or any other segment, NVIDIA is there from the start to support it and allow its GPUs to run at full speed. Now, the $4 trillion company is betting that RISC-V will be the next ISA worth supporting in full.

NVIDIA envisions systems where its GPUs are the center of any acceleration, and supplementary RISC‑V CPUs oversee CUDA drivers, application logic, and the operating system, orchestrating parallel workloads entirely within the CUDA ecosystem. The diagram shown at the summit highlights a DPU handling networking tasks, creating a cohesive trio of compute, control, and data‑movement elements. If you recall, NVIDIA already uses its own NV-RISC-V cores inside its GPUs for handling control logic on the GPU itself. That demonstrates NVIDIA's strategy of building heterogeneous platforms that combine RISC-V controllers with its GPUs, DPUs, and networking silicon. Now that CUDA is fully supported on RISC-V, NVIDIA could explore RISC-V-based alternatives to its Grace CPU. As the open-source ISA slowly breaks into the server space with the RVA23 profile, which NVIDIA mandates for CUDA support, we could see some interesting heterogeneous designs.



RISC-V International CEO Andrea Gallo, in an interview with TechPowerUp, confirmed that "There's a team that is working on a server SoC and a server platform. This includes things like having the same interfaces for timers, clock, IOMMU, RAS and the related error reporting mechanisms. We all agree that we should use the same interfaces for specific peripherals that are part, for example, of a server platform." This gives confidence that companies are preparing a major debut of RISC-V CPUs for servers and HPC as well. We can't wait to see what the market will bring now, given that the world's most valuable company is now backing the world's largest open-source ISA industry movement.

View at TechPowerUp Main Site | Source
 
“We can't wait to see what the market will bring now, given that the world's largest company is now backing the world's largest open-source ISA industry movement.”

*sigh* I thought we went through that already in that market cap thread. Now even the news posters fail to understand what it is? NVidia is not, in fact, the largest company in anything except specifically GPGPUs, let alone the overall world’s largest company. Unless one also believes that Tesla is the world’s largest automaker.
 
Will AMD invest in something like this so they can also include CUDA on their GPUs? Or will Nvidia keep it at arm's length?
First thing that came to my mind, was the thought that AMD and Intel might rush to integrate RISC-V into their upcoming (GP)GPUs...
 
I did not realize that CUDA was an Nvidia thing until someone mentioned it a few days ago on a different point. But in my head, if you corner the market with this and then start spreading it out, this domain is a large part of why they are worth four trillion dollars.

Interesting!
 
A driver is part of a hardware purchase. Looks like they are just offering drivers. Maybe binary closed-source drivers?

NVIDIA is there from the start to support it and allow its GPUs to run at full speed.

Maybe someone who is deeper in the NVIDIA field can answer this. I would have written the title as "Initial NVIDIA GPU drivers for RISC-V CPU platform".
Or, if they are not the first batch ... "Improved NVIDIA GPU ..."
 
Will AMD invest in something like this so they can also include CUDA on their GPUs? Or will Nvidia keep it at arm's length?

The parts of "CUDA" that execute on the GPU (running the typically embarrassingly parallel algorithms you need a GPU for) are not what's being talked about here.
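To make that split concrete, here is a minimal, hypothetical CUDA C++ sketch (not anything NVIDIA has published): the __global__ kernel is the part that executes on the GPU and does not care about the host ISA, while everything else is host-side code that would now be compiled for, and run on, a RISC-V CPU.

#include <cuda_runtime.h>
#include <cstdio>

// Device code: executes on the GPU, identical whether the host CPU is
// x86, Arm, or RISC-V.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float *host = new float[n];
    for (int i = 0; i < n; ++i)
        host[i] = 1.0f;

    // Host code: these allocations, copies and the kernel launch are the
    // part of CUDA that has to be ported to a RISC-V host.
    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);  // expect 2.000000

    cudaFree(dev);
    delete[] host;
    return 0;
}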
 
I did not realize that CUDA was an Nvidia thing until someone mentioned it a few days ago on a different point. But in my head, if you corner the market with this and then start spreading it out, this domain is a large part of why they are worth four trillion dollars.

Interesting!
Nvidia has stated openly for a long time that they are not a GPU company. Nvidia is CUDA and has been since the 8800 GTX came out. PC gamers are the only idiots on the planet who haven't realized this and keep getting upset about it.
 
Will AMD invest in something like this so they can also include CUDA on their GPUs? Or will Nvidia keep it at arm's length?
Your question doesn't make much sense.
This OP is about Nvidia's compute stack being able to be built and run on RISC-V, alongside ARM and x86_64.
AMD's equivalent compute stack, ROCm, is already able to be built for RISC-V, among other ISAs as well.
(or maybe you just did some nice puns and I missed the point of the joke)

First thing that came to my mind, was the thought that AMD and Intel might rush to integrate RISC-V into their upcoming (GP)GPUs...
Both Intel's and AMD's stacks are already able to run on RISC-V.

A driver is part of a hardware purchase. looks like just offering drivers. Maybe binary closed source drivers?
Nvidia's kernel modules are already open source, but could not really be built for RISC-V. Seems like this will be changing (there's even an open issue in their GitHub repo, IIRC).
Most of their userspace components are proprietary, seems like those will also have RISC-V options from now on.
Given that CUDA is a mix of both drivers + userspace components, the title in the OP is still accurate.
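As a rough illustration of that driver/userspace split (a minimal sketch, assuming a standard CUDA runtime install), the two halves can even be queried separately from a program: cudaDriverGetVersion reports the kernel-side driver, while cudaRuntimeGetVersion reports the userspace runtime the application links against, and both halves need RISC-V builds.

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int driverVer = 0, runtimeVer = 0;

    // Version of the kernel-side driver stack (the open kernel modules).
    cudaDriverGetVersion(&driverVer);

    // Version of the userspace CUDA runtime this binary links against.
    cudaRuntimeGetVersion(&runtimeVer);

    printf("driver API: %d, runtime API: %d\n", driverVer, runtimeVer);
    return 0;
}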
 
“We can't wait to see what the market will bring now, given that the world's largest company is now backing the world's largest open-source ISA industry movement.”

*sigh* I thought we went through that already in that market cap thread. Now even the news posters fail to understand what it is? NVidia is not, in fact, the largest company in anything except specifically GPGPUs, let alone the overall world’s largest company. Unless one also believes that Tesla is the world’s largest automaker.
In market capitalisation, Nvidia is rated at 4.2 trillion USD, about 3% of the value of all traded companies worldwide.
A serious chunk of change.
 
Well now they have access to Government admin systems for AI adoption.

In market capitalisation, Nvidia is rated at 4.2 trillion USD, about 3% of the value of all traded companies worldwide.
A serious chunk of change.
That has not been good for the average PC gamer. Intel issued 4-core CPUs twice a year for 10 years when they were in the leadership position. Don't get me started on Sun Microsystems or Compaq. When a company is in a position like this, they get plenty of hubris. Speaking of RISC-V, do you know how big IBM was for 40 years? Now they give us GPUs that are fast but eye-wateringly priced. Think about this: a 5090 is 43% faster than a 7900 XTX for 4 or 5 times the cost depending on where you live, and the Chinese government has more access to Nvidia GPUs than the average gamer.
 
In market capitalisation, Nvidia is rated at 4.2 trillion USD, about 3% of the value of all traded companies worldwide.
A serious chunk of change.
It literally says nothing about how “large” a company is. It’s, for the most part, make-believe. Again, market cap is NOT a direct indicator of what assets companies possess, what their revenue is, how many people they employ, or anything else that would be a sane indicator of how “large” a company is. It’s just a speculative assessment of how much the STOCK of the company is THEORETICALLY worth.
 
supplementary RISC‑V CPUs oversee CUDA drivers, application logic, and the operating system, orchestrating parallel
What we really need is a piece of silicon that can run CUDA parallel workload code. Then marry it with hundreds of GBs of speedy (V)RAM.
 
Is this whole news just to say that Nvidia will release RISC-V SoCs or RISC-V CPUs compatible with Nvidia GPUs that will run CUDA on those GPUs?
 
Shame nGreedia can't "port" it to open source.
Why would they? They invested heavily in CUDA. Nvidia has been a software company for well over a decade. That’s like asking Microsoft to open source Office, or Windows.

What we really need is a piece of silicon that can run CUDA parallel workload code. Then marry it with hundreds of GBs of speedy (V)RAM.
So like a GPU?
 
What we really need is a piece of silicon that can run CUDA parallel workload code. Then marry it with hundreds of GBs of speedy (V)RAM.
I'm probably wrong, but...
I can't help but feel like this was nVidia's secret weapon against AMD and Intel gaining traction in the 'scalable' AI market.
Companies like Tiny have been (figuratively) screaming for hardware options beyond nVidia. Seems like this move helps satisfy those demands, without giving nVidia's competitors an inch.

So like a GPU?
More like a RISC-V co-processor/accelerator card? Kinda like a RISC-V Intel Larrabee (conceptually)
 
More like a RISC-V co-processor/accelerator card? Kinda like a RISC-V Intel Larrabee (conceptually)
RISC-V isn’t a high performance core, and isn’t meant to be. It’s small and cheap.
 
Well now they have access to Government admin systems for AI adoption.


That has not been good for the average PC gamer. Intel issued 4-core CPUs twice a year for 10 years when they were in the leadership position. Don't get me started on Sun Microsystems or Compaq. When a company is in a position like this, they get plenty of hubris. Speaking of RISC-V, do you know how big IBM was for 40 years? Now they give us GPUs that are fast but eye-wateringly priced. Think about this: a 5090 is 43% faster than a 7900 XTX for 4 or 5 times the cost depending on where you live, and the Chinese government has more access to Nvidia GPUs than the average gamer.
"43% faster"

Lol

It's almost twice as fast. More than twice as fast in RT. Even more than that in PT.

 
OpenCL is the compute benchmark comparable to CUDA/ROCm.

The 5090 is 1.2x the MI300X.

The MI350X hasn't been tested yet and is substantially faster than the MI300X. It does FP4 (3.5x the Int8 rate) for AI, has 50% faster memory, and twice the TDP at 1,000 W.

RISC-V can be Nvidia's own IP that no one can take away; that's the benefit versus working with MediaTek/Arm or an x86 CPU.

 
Keep your eyes open; another driver should show up sometime today.
 
Is this whole news just to say that Nvidia will release RISC-V SoCs or RISC-V CPUs compatible with Nvidia GPUs that will run CUDA on those GPUs?
No, more like Nvidia will support running an existing Nvidia GPU, with the entire CUDA stack, on existing server-grade RISC-V platforms.
No immediate news of a RISC-V CPU from Nvidia, just speculation.
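In other words, a plain deviceQuery-style check like the hypothetical sketch below, compiled for a RISC-V host, should simply see the attached NVIDIA GPU once the ported stack is installed; the host ISA never shows up in the result.

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU visible to this host.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        // Only the GPU itself is described here; whether the host CPU is
        // x86, Arm, or RISC-V is irrelevant to the output.
        printf("GPU %d: %s, SM %d.%d, %zu MB\n", i, prop.name,
               prop.major, prop.minor, prop.totalGlobalMem >> 20);
    }
    return 0;
}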

RISC-V isn’t a high performance core, and isn’t meant to be. It’s small and cheap.
RISC-V is an ISA, not a core. Many different core designs can be made using a specific ISA.
An ISA has no direct relationship to whether a core is designed for performance, efficiency, or size.

OpenCL is the compute benchmark comparable to CUDA/ROCm.
OpenCL has been pretty much abandoned by both Nvidia and AMD. Only Intel still puts some love into it.
Comparing the performance numbers for either AMD or Nvidia products on such a stack is throwing performance away. Even with Vulkan you can easily get 2~10x more performance.
Using the vendor-specific stack (like CUDA or ROCm) nets an even bigger performance uplift. Also, lots of features aren't even supported within OpenCL, such as Nvidia not having fp16 support in that stack.
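For example (a minimal sketch of the vendor-stack side of that comparison, assuming an sm_53 or newer GPU), CUDA exposes half precision directly through cuda_fp16.h, which is exactly the kind of feature you give up when benchmarking Nvidia hardware through OpenCL.

// Build with something like: nvcc -arch=sm_70 half_add.cu
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

// Half-precision add, expressed natively in CUDA; per the post above,
// there is no equivalent fp16 path in Nvidia's OpenCL stack.
__global__ void half_add(const __half *a, const __half *b, __half *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = __hadd(a[i], b[i]);
}

int main()
{
    const int n = 256;
    __half *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(__half));
    cudaMallocManaged(&b, n * sizeof(__half));
    cudaMallocManaged(&c, n * sizeof(__half));
    for (int i = 0; i < n; ++i) {
        a[i] = __float2half(1.5f);
        b[i] = __float2half(0.5f);
    }

    half_add<<<1, n>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", __half2float(c[0]));  // expect 2.000000
    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}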
 