
NVIDIA Contributes CUDA Compiler to Open Source Community

btarunr

Editor & Senior Moderator
NVIDIA today announced that LLVM, one of the industry's most popular open source compilers, now supports NVIDIA GPUs, dramatically expanding the range of researchers, independent software vendors (ISVs) and programming languages that can take advantage of the benefits of GPU acceleration.

LLVM is a widely used open source compiler infrastructure, with a modular design that makes it easy to add support for new programming languages and processor architectures. The CUDA compiler provides C, C++ and Fortran support for accelerating applications on massively parallel NVIDIA GPUs. NVIDIA has worked with LLVM developers to contribute CUDA compiler source code changes to the LLVM core and the Parallel Thread Execution (PTX) backend. As a result, programmers can develop applications for GPU accelerators using a broader selection of programming languages, making GPU computing more accessible and pervasive than ever before.
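As a rough illustration (not taken from the article), the snippet below is the kind of CUDA C code that the compiler front end parses and the PTX backend lowers to GPU machine code. The SAXPY kernel, array sizes and launch configuration are purely illustrative:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative SAXPY kernel: y = a*x + y, one array element per GPU thread.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch a grid of 256-thread blocks covering all n elements
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // 2*1 + 2 = 4

    cudaFree(dx);
    cudaFree(dy);
    delete[] hx;
    delete[] hy;
    return 0;
}
```

Compiling this with `nvcc --ptx` emits the PTX intermediate representation, which is the output stage of the backend NVIDIA has contributed.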



LLVM supports a wide range of programming languages and front ends, including C/C++, Objective-C, Fortran, Ada, Haskell, Java bytecode, Python, Ruby, ActionScript, GLSL and Rust. It is also the compiler infrastructure NVIDIA uses for its CUDA C/C++ architecture, and it has been widely adopted by leading companies such as Apple, AMD and Adobe.

"Double Negative has ported their fluid dynamics solver over to use their domain-specific language, Jet, which is based on LLVM," said Dan Bailey, researcher at Double Negative and contributor to the LLVM project. "In addition to the existing architectures supported, the new open-source LLVM compiler from NVIDIA has allowed them to effortlessly compile highly optimized code for NVIDIA GPU architectures to massively speed up the computation of simulations used in film visual effects."

"MathWorks uses elements of the LLVM toolchain to add GPU support to the MATLAB language," said Silvina Grad-Freilich, senior manager, parallel computing marketing, MathWorks. "The GPU support with the open source LLVM compiler is valuable for the technical community we serve."

"The code we provided to LLVM is based on proven, mainstream CUDA products, giving programmers the assurance of reliability and full compatibility with the hundreds of millions of NVIDIA GPUs installed in PCs and servers today," said Ian Buck, general manager of GPU computing software at NVIDIA. "This is truly a game-changing milestone for GPU computing, giving researchers and programmers an incredible amount of flexibility and choice in programming languages and hardware architectures for their next-generation applications."

To download the latest version of the LLVM compiler with NVIDIA GPU support, visit the LLVM site. To learn more about GPU computing, visit the NVIDIA website. To learn more about CUDA or download the latest version, visit the CUDA website.

View at TechPowerUp Main Site
 
This is very nice news. It will be of great help and will make at least me think twice about buying NVIDIA cards.
 
Very interdasting.
 
Wow, last year the CUDA Compiler gets revamped using LLVM (which resulted in a 20% reduction in compile time), now LLVM itself supports NVIDIA GPUs.

This basically means that almost all compilers that use the LLVM core libraries just got a huge speed boost if you use an NVIDIA card for code generation.
 
Very interesting, since I've begun working with CUDA and OpenCL parallel processing :cool:, ingredients: one NVIDIA card and one AMD card :rockout:
 
This might be a bit stupid question, but does this mean anything at all to the Intel and AMD GPU users?
 
This might be a bit stupid question, but does this mean anything at all to the Intel and AMD GPU users?

No, Intel and AMD have had LLVM compilers for a while.

This news indicates that NVIDIA has joined its competitors in offering this feature.
 
Uh-huh, I seem to have found out why they need a performance boost (from Wikipedia's entry on Clang):

"Although Clang's overall compatibility with GCC is very good, and its compilation speed typically better than GCC's, as of early 2011 the runtime performance of clang/LLVM output is sometimes worse than GCC's."

So basically LLVM is a faster and more compact compiler with worse optimisation? Nothing new here; this has been a trade-off since... forever?
 
Hopefully Apple picks these changes up for Xcode. My MBA has a little bit of a spike when I compile :/
 
CUDA (kuda) in my country means horse.
 