Monday, February 28th 2011

New CUDA 4.0 Release Makes Parallel Programming Easier

NVIDIA today announced the latest version of the NVIDIA CUDA Toolkit for developing parallel applications using NVIDIA GPUs. The NVIDIA CUDA 4.0 Toolkit was designed to make parallel programming easier and to enable more developers to port their applications to GPUs. The release centers on three main features:
  • NVIDIA GPUDirect 2.0 Technology -- Offers support for peer-to-peer communication among GPUs within a single server or workstation, enabling easier and faster multi-GPU programming and better application performance.
  • Unified Virtual Addressing (UVA) -- Provides a single merged-memory address space for the main system memory and the GPU memories, enabling quicker and easier parallel programming.
  • Thrust C++ Template Performance Primitives Libraries -- Provides a collection of powerful open source C++ parallel algorithms and data structures that ease programming for C++ developers. With Thrust, routines such as parallel sorting are 5X to 100X faster than with Standard Template Library (STL) and Threading Building Blocks (TBB).
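To give a sense of what the Thrust style looks like in practice (a minimal sketch, not taken from the release itself; it assumes the CUDA toolkit with Thrust headers is installed and an NVIDIA GPU is present), a parallel sort can be written in a few lines of standard-looking C++:

```cpp
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdlib>

int main() {
    // Generate 1M random integers on the host.
    thrust::host_vector<int> h_vec(1 << 20);
    for (size_t i = 0; i < h_vec.size(); ++i)
        h_vec[i] = std::rand();

    // Copy to the GPU, sort there in parallel, copy back.
    thrust::device_vector<int> d_vec = h_vec;       // host-to-device transfer
    thrust::sort(d_vec.begin(), d_vec.end());        // parallel sort on the GPU
    thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());
    return 0;
}
```

The STL-like iterator interface is the point: the same call shape as `std::sort`, with the data movement and kernel launches handled by the library.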


"Unified virtual addressing and faster GPU-to-GPU communication make it easier for developers to take advantage of the parallel computing capability of GPUs," said John Stone, senior research programmer, University of Illinois, Urbana-Champaign.

"Having access to GPU computing through the standard template interface greatly increases productivity for a wide range of tasks, from simple cashflow generation to complex computations with Libor market models, variable annuities or CVA adjustments," said Peter Decrem, director of Rates Products at Quantifi. "The Thrust C++ library has lowered the barrier of entry significantly by taking care of low-level functionality like memory access and allocation, allowing the financial engineer to focus on algorithm development in a GPU-enhanced environment."

The CUDA 4.0 release includes a number of other key features and capabilities:
  • MPI Integration with CUDA Applications -- Modified MPI implementations automatically move data to and from GPU memory over InfiniBand when an application makes an MPI send or receive call.
  • Multi-thread Sharing of GPUs -- Multiple CPU host threads can share contexts on a single GPU, making it easier for multi-threaded applications to share a single GPU.
  • Multi-GPU Sharing by Single CPU Thread -- A single CPU host thread can access all GPUs in a system. Developers can easily coordinate work across multiple GPUs for tasks such as "halo" exchange in applications.
  • New NPP Image and Computer Vision Library -- A rich set of image transformation operations that enable rapid development of imaging and computer vision applications.
  • New and Improved Capabilities
    o Auto performance analysis in the Visual Profiler
    o New features in cuda-gdb and added support for MacOS
    o Added support for C++ features like new/delete and virtual functions
    o New GPU binary disassembler
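To illustrate how the multi-GPU, peer-to-peer, and UVA features above fit together (a hedged sketch against the CUDA 4.0 runtime API; it assumes two peer-capable Fermi-class GPUs on a 64-bit platform and omits error checking, which real code should do via return codes and `cudaDeviceCanAccessPeer`), a single host thread can drive both devices and copy between them directly:

```cpp
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1 << 20;
    void *buf0, *buf1;

    // One host thread drives both GPUs (new in CUDA 4.0).
    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaDeviceEnablePeerAccess(1, 0);   // let GPU 0 access GPU 1's memory

    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // GPUDirect 2.0: copy between GPU memories, going directly over
    // PCIe when peer access is enabled instead of staging through host RAM.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);

    // With UVA, a plain cudaMemcpy with cudaMemcpyDefault also works:
    // the runtime infers which memory each pointer belongs to.
    cudaMemcpy(buf1, buf0, bytes, cudaMemcpyDefault);

    cudaSetDevice(0);
    cudaFree(buf0);
    cudaSetDevice(1);
    cudaFree(buf1);
    return 0;
}
```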
A release candidate of CUDA Toolkit 4.0 will be available free of charge beginning March 4, 2011, by enrolling in the CUDA Registered Developer Program. The program provides a wealth of tools, resources, and information for parallel application developers to maximize the potential of CUDA.

For more information on the features and capabilities of the CUDA Toolkit and on GPGPU applications, please visit: http://www.nvidia.com/cuda

77 Comments on New CUDA 4.0 Release Makes Parallel Programming Easier

#1
ToTTenTranz
by: Fourstaff
And abandon the CUDA community to rot? That is harsh, seeing that there are quite a few people already using CUDA. Supporting and gradually releasing CUDA away is the correct route, imo, but supporting both CUDA and OpenCL at the same time shouldn't hurt anybody, should it?
Of course I don't mean firing off every CUDA-related support engineer and suddenly killing support.
I meant assisting the same CUDA community in transferring their knowledge to OpenCL instead, and gradually retiring the CUDA brand.


This new CUDA 4.0 announcement shows they're doing the exact opposite. It shows they're still trying to force a vendor-specific API, while there are already capable and "open" alternatives.
They're actively trying to cock-block every other GPU maker from doing GPGPU, and their support for OpenCL is just the Plan B.
That's why I call it "evil".
Posted on Reply
#2
Fourstaff
by: ToTTenTranz
Of course I don't mean firing off every CUDA-related support engineer and suddenly killing support.
I meant assisting the same CUDA community in transferring their knowledge to OpenCL instead, and gradually retiring the CUDA brand.


This new CUDA 4.0 announcement shows they're doing the exact opposite. It shows they're still trying to force a vendor-specific API, while there are already capable and "open" alternatives.
They're actively trying to cock-block every other GPU maker from doing GPGPU, and their support for OpenCL is just the Plan B.
That's why I call it "evil".
Well, the release of CUDA 4.0 gives everybody confidence that Nvidia is not going to abandon CUDA, and therefore they can continue with their work without worrying about the near future. But assisting the CUDA community to transition to OpenCL is going to be costly and bring negligible benefits, and as a business you would choose to avoid that. Providing the infrastructure (as in support for OpenCL) for the community to naturally migrate over is good enough. Also, keep in mind that CUDA "came first", in the sense that it saw widespread adoption earlier than OpenCL, so I believe "legacy support" is in order.

Edit: I do not believe that Nvidia is forcing the adoption of CUDA and cock-blocking other GPU makers, because while they have not offered CUDA to the public, Intel has not made x86 "freeware" either, despite the overwhelming penetration of x86-based "general computers". Only AMD does not pay Intel a license for x86, because of some fancy agreement they had back then, and even VIA pays Intel licensing on a product which should have been "public domain".
Posted on Reply
#3
Jack Doph
Awesome.
This kind of tit-for-tat is exactly why I like TPU.
One never stops learning about the pros and cons.

A question, if I may (as I'm not brand-loyal to any camp - hell, give me Matrox purity anytime):
why is there such a disparity between the different camps?
Is it truly because one is classed as a more established platform (free or not), or because one platform has a more established user-base over the other (CUDA vs OpenCL)?
/me is curious
Posted on Reply
#4
pr0n Inspector
by: ToTTenTranz
Of course I don't mean firing off every CUDA-related support engineer and suddenly killing support.
I meant assisting the same CUDA community in transferring their knowledge to OpenCL instead, and gradually retiring the CUDA brand.


This new CUDA 4.0 announcement shows they're doing the exact opposite. It shows they're still trying to force a vendor-specific API, while there are already capable and "open" alternatives.
They're actively trying to cock-block every other GPU maker from doing GPGPU, and their support for OpenCL is just the Plan B.
That's why I call it "evil".
If you love OpenCL so much go join the developers or blow up the Khronos Group.

It should be pretty obvious that a bunch of bickering vendors naturally produces something inferior. Quit hoping for the superior solution to die and actually improve your favorite one.
Posted on Reply
#5
ToTTenTranz
by: pr0n Inspector
If you love OpenCL so much go join the developers or blow up the Khronos Group.

It should be pretty obvious that a bunch of bickering vendors naturally produces something inferior. Quit hoping for the superior solution to die and actually improve your favorite one.
And what exactly do you consider "something inferior" by a "bunch of bickering vendors"?
Posted on Reply
#6
Fourstaff
by: Jack Doph

A question, if I may (as I'm not brand-loyal to any camp - hell, give me Matrox purity anytime):
why is there such a disparity between the different camps?
Is it truly because one is classed as a more established platform (free or not), or because one platform has a more established user-base over the other (CUDA vs OpenCL)?
/me is curious
Well, it's generally the case that TPU members like to support the underdog, and also believe in "righteousness". It's not a bad thing, but sometimes it gets incited into a fanatical "my god is holier than thou" rage, in which case it rots and Mods come in and clean up our mess.

Well, Nvidia offered CUDA first, and advertised it heavily (and also provided quite a bit of support). OpenCL came later (ever so slightly later), and since OpenCL is "open", as in everybody can use it as long as the hardware supports it, with no royalty, people viewed it as "the right path", and they are largely right. But as you correctly pointed out, by the time OpenCL became mainstream, the CUDA base had grown to quite a large proportion of GPGPU users, and hence OpenCL was almost to the point of being ignored. Hence the community decided to become free adverts for the greater good: supporting OpenCL, and this is where it got ugly. CUDA users still want their support, but OpenCL should be the future. Kudos to Nvidia for supporting both, but people thought Nvidia was still stifling OpenCL. That might be the case, I do not know, but for now I am content with the fact that Nvidia supports OpenCL, regardless of the amount of flame Nvidia is throwing at OpenCL (I have yet to see any).

We need to have less of these posts:

by: pr0n Inspector
If you love OpenCL so much go join the developers or blow up the Khronos Group.

It should be pretty obvious that a bunch of bickering vendors naturally produces something inferior. Quit hoping for the superior solution to die and actually improve your favorite one.
If you know something, or want to voice your opinion in a sensible way (even if it is wrong), by all means, do it. But coming in and shouting "blow up the Khronos Group" (exaggerated for effect) and things like that should better be kept in GN; I do not wish for this thread to descend into amateurish egg-pelting.
Posted on Reply
#7
Benetanegia
by: ToTTenTranz
Of course I don't mean firing off every CUDA-related support engineer and suddenly killing support.
I meant assisting the same CUDA community in transferring their knowledge to OpenCL instead, and gradually retiring the CUDA brand.


This new CUDA 4.0 announcement shows they're doing the exact opposite.
No, they are not. They are just making CUDA better for those who use CUDA.

It's not Nvidia's (nor AMD's, nor Intel's, nor Apple's) responsibility to make the shift to OpenCL; it's developers' responsibility. It's not even their right to do so, forcing developers to spend more money and time on something they don't really need at this point (by stopping support). All the people who invested in CUDA (and right now that's a lot of people in the scientific and accounting businesses, to name a few) invested in Nvidia cards too, for obvious reasons*, so there's absolutely no need for them to move to an alternative that would cost them more (because of the change) and would have zero benefits, or even hurt their performance.

Developers will move to OpenCL when and if they want to, which is going to be when that change supposes an advantage to them.

* In case it's not so obvious, it was the only alternative back then.



I don't know about there, but here the Voodoo 3 sold much, much more than any other card, including the Riva TNT and TNT2. The only thing the TNT2 did better was 32-bit support, and that's all.
At 16-bit (90%+ of games) the Voodoo3 was a lot faster, and back at the time that made it more successful, again, at least here. The Glide mode that was present in every game I owned back then was far superior to the OpenGL or DirectX counterparts. Granted, you may call those games old by the time the Voodoo3 launched, since I'm referring to UT and Half-Life...

http://www.guru3d.com/review/3dfx/voodoo3_3000/index3.html

The thing is that back at the time I bought a TNT2 because the seller advised me to, but I had to return it soon after because the drivers sucked (artifacts) and there was some kind of incompatibility or something with my Athlon PC. After returning the card to the store twice because we couldn't find the problem, and even bringing my PC there to see if they could fix it**, nothing worked, so they gave me the option to get a Voodoo 3 and I never looked back. It was significantly faster in the games I had (I played mainly UT, Half-Life and DID flight simulators EF2000 and F-22 ADF) and had superb antialiasing, which I don't remember the TNT having.


**it was not a normal store, they were geeks that helped you, an amazing concept for consumers, that apparently failed because they helped you with the best deals you could get and not the best deals for them.
They're actively trying to cock-block every other GPU maker from doing GPGPU, and their support for OpenCL is just the Plan B.
That's why I call it "evil".
If they were doing so they wouldn't be the first ones giving out drivers to everybody... even one of the AMD's most mentioned OpenCL application started with Nvidia's OpenCL drivers before they got AMD drivers. Bullet Physics.

Your own biased perception of how things are (i.e. OpenCL is plan B, etc.) does not make it true. It is not true, at all, and if you have the smallest proof of that, please, you are free to post it. In the meantime the facts point out that you are wrong. Nvidia is the first one releasing OpenCL drivers for every OpenCL version, and that lets everybody develop for OpenCL months in advance of what they could do if they had to wait for others' drivers. How releasing OpenCL drivers 3 months earlier than the competition is hurting OpenCL to the benefit of CUDA just escapes my comprehension. You would think that if they wanted to slow down OpenCL they would release after the competition, or maybe 1 week before the competition in order to brag about it, but 3 months? No, no.
Posted on Reply
#8
ToTTenTranz
by: Fourstaff
Well, Nvidia offered CUDA first, and advertised it heavily (and also provided quite a bit of support). OpenCL came later (ever so slightly later), and since OpenCL is "open", as in everybody can use it as long as the hardware supports it, with no royalty, people viewed it as "the right path", and they are largely right. But as you correctly pointed out, by the time OpenCL became mainstream, the CUDA base had grown to quite a large proportion of GPGPU users, and hence OpenCL was almost to the point of being ignored. Hence the community decided to become free adverts for the greater good: supporting OpenCL, and this is where it got ugly. CUDA users still want their support, but OpenCL should be the future. Kudos to Nvidia for supporting both, but people thought Nvidia was still stifling OpenCL. That might be the case, I do not know, but for now I am content with the fact that Nvidia supports OpenCL, regardless of the amount of flame Nvidia is throwing at OpenCL (I have yet to see any).
That is a good summary, but it's missing the fact that CUDA is only supported by nVidia cards -> the MAIN complaint about CUDA in the first place.
This means only some 46% of discrete desktop cards support it, and an even smaller number of IGPs (and this is obviously going to drop drastically, since nVidia has quit the IGP business).

And if we talk about mobile SoCs, Tegra 3 is probably the only next-gen mobile GPU that's not OpenCL-capable (they're not even going unified shaders for T3).
And OpenCL should become BIG in handhelds, in years to come.
Posted on Reply
#9
Cheeseball
by: ToTTenTranz
Yes, and my argument is that they should be supporting only one: OpenCL.
It's not NVIDIA's responsibility to develop OpenCL as a whole, but it is their responsibility to assist with its growth, since they are part of the Khronos Group. The reasons for this are:

1. As you said, to promote their (NVIDIA) own cards.
2. Further GPGPU development.

What people (average users, and even the above average users) don't know is that NVIDIA is backporting features from CUDA into OpenCL development. This is evident in the OpenCL 1.1 man pages, in regards to address space and the built-in functions.

And why are you guys arguing about Glide and non-related topics in a CUDA thread? :P
Posted on Reply
#10
Fourstaff
by: ToTTenTranz
That is a good summary, but it's missing the fact that CUDA is only supported by nVidia cards -> the MAIN complaint about CUDA in the first place.
This means only some 46% of discrete desktop cards support it, and an even smaller number of IGPs (and this is obviously going to drop drastically, since nVidia has quit the IGP business).

And if we talk about mobile SoCs, Tegra 3 is probably the only next-gen mobile GPU that's not OpenCL-capable (they're not even going unified shaders for T3).
And OpenCL should become BIG in handhelds, in years to come.
Yes, of course CUDA is supported by Nvidia cards only; I don't really see DirectX being supported by Apple, or x86 supported by ARM-based chips. That Nvidia is willing to implement support for OpenCL means a lot to me. To me it means that they are not ignoring the supposed "future standard", but to others it might mean that Nvidia is having a Plan B. CUDA has always been an "added functionality" to me: "get an Nvidia card and you can have this awesome API! Don't worry, we still provide OpenCL if you don't like our product".

Tegra 3 might not be supporting OpenCL, but that's Nvidia's fault: it's almost like AMD not supporting x86, and that's just plain stupid rather than being an ass.
Posted on Reply
#11
pr0n Inspector
by: Fourstaff


If you know something, or want to voice your opinion in a sensible way (even if it is wrong), by all means, do it. But coming in and shouting "blow up the Khronos Group" (exaggerated for effect) and things like that should better be kept in GN; I do not wish for this thread to descend into amateurish egg-pelting.
You probably heard about the hilarious story of OpenGL 3.0 (massive plan-reality discrepancy), thanks to the committees in the Khronos Group. Or that in recent years OpenGL is more or less just matching D3D in headline features, which is predictable since major members have vested interests in D3D.
Posted on Reply
#12
Cheeseball
Exactly, CUDA is just the "added functionality" bonus. :P They already support OpenCL fully.

It just so happens that CUDA is the preferred one among "the masses", so of course they're going to roll with it since it's popular.

Again, if you need reasons why it's popular among "the masses":

1. Open source, as in software, not in hardware.
2. Dedicated development and support.
3. Ease of implementation to existing GPGPU applications.
4. Easy portability between itself and OpenCL.

To summarize, it's because it works as it should in an efficient manner, not because NVIDIA is helping humanity or some other bullshit.

by: pr0n Inspector
You probably heard about the hilarious story of OpenGL 3.0 (massive plan-reality discrepancy), thanks to the committees in the Khronos Group. Or that in recent years OpenGL is more or less just matching D3D in headline features, which is predictable since major members have vested interests in D3D.
History might repeat itself with OpenCL here, where CUDA = DirectX and OpenCL = OpenGL. :wtf:

All AMD has to do is allow full low-level access to the memory buffer on their cards (not possible as of the HD 6xxx series) and support bitwise and integer functions (Why they don't, I have no idea. Probably has to do with their stream processor implementation.). Intel got it right with Sandy Bridge.
Posted on Reply
#13
Benetanegia
by: Cheeseball
To summarize, it's because it works as it should in an efficient manner, not because NVIDIA is helping humanity or some other bullshit.
Because this is related to my posts, let me explain that I don't think that Nvidia is trying to help humanity or anything like that. It's that CUDA (which coincidentally happens to be owned by Nvidia) is being used in apps that do help humanity, and because of that we should support it. In the end,

- Nvidia supports CUDA for the obvious reasons you mentioned.

- Developers who use CUDA use it for their own reasons, which you already explained.

- Users. I'm neither Nvidia nor a GPGPU developer so I just explained why I, as a potential user, support it, it's because I find that the apps being made with CUDA (and that I'm sure will be ported to OpenCL as soon as it becomes an equal ecosystem) are very beneficial to humanity and that's something worth supporting.

I thought I had to make this point clear.
Posted on Reply
#14
Cheeseball
Understood, but the first time you wrote it sounded confusing, and it was actually a reply to ToTTenTranz's reply to you. :cool:

I think we can all agree that PhysX is the evil one here, since it is proprietary and wholly restricted to NVIDIA cards. :mad:
Posted on Reply
#15
JEskandari
by: Mr McC
Haven't seen the film yet.

Don't read too much into the comparison, my point is that open-source alternatives exist and can rapidly achieve mass acceptance, provided they are given the chance.
Well, an open-source alternative to Windows is Linux; why don't you support it?
An open-source alternative to Photoshop is GIMP; why not support it?
An open-source alternative to DirectX is OpenGL; why not support it?
Why not use OpenOffice instead of Office?

Why? Because you can't use a program on your video card, you bash CUDA. Have you considered that you are not the target market for this technology, and that the target population has the correct tools to run these programs? More importantly, they don't care whether the program is based on open-source technology; they want it to be fast and to have the features they need. They are paying money for it and expect to receive what they paid for.
By the way, the irony is that NVIDIA's implementation of OpenCL is better than other solutions.
Posted on Reply
#16
JEskandari
by: ToTTenTranz
Yes, and my argument is that they should be supporting only one: OpenCL.
Why abandon the superior and support the inferior?
Your argument does not make sense to me.
CUDA must vanish only when it no longer has the ability to compete with OpenCL, but right now the situation is completely different.
Posted on Reply
#17
Solaris17
Creator Solaris Utility DVD
by: Benetanegia
Nvidia is doing as much for OpenCL as AMD, if not more; AMD is just being more vocal about it now that they can finally use it as an advantage (i.e. they have Fusion and Intel has nothing). But because Nvidia supports OpenCL, that does not mean they should stop development on CUDA. It's the absolute opposite. Creating and evolving an open standard API takes a lot of time, because of all the parties involved: not only WHAT the API does matters but also HOW things are done, and everyone involved wants it their way, so it takes time and the API is always one step behind what the actual users NEED. This is less of a problem in mature markets and APIs like DirectX/OpenGL*, because the market is "stagnated" and it's the users who are one step behind. But in an emerging market like GPGPU, new needs are created on a daily basis, and for the actual people using these APIs it's critical to get them ASAP. Nvidia actually helps them by evolving CUDA and exposing to their hardware all those things that developers need, without the requirement to go through months or years of certifications and whatnot. It's because of this that CUDA is successful and REQUIRED in the industry. For actual users it is imperative to have those features now. Let's discuss this in a few years.

*And even then it's more than known that OpenGL has been 1 or even 2 steps behind, and still is in many ways. It's also known how that has affected the market, and most people would agree that advancement in DX has been a good thing. Well, it is.



That works the other way around too. That's the most stupid thing that people don't seem to understand: OpenCL may be cross-platform, but its optimizations certainly aren't. Code optimized for Nvidia GPUs would be slow on AMD GPUs, and code optimized for AMD would be slow on Nvidia. Developers still have to code specifically for every platform, so what's so bad about Nvidia offering a much better and more mature solution again? Nvidia should deliberately hobble their development so that the open-for-all platform can catch up? The enterprise world (i.e. medical/geological imaging) should wait 2 more years in order to get what they could have now, just because you don't want to feel at a disadvantage in that little meaningless application or that stupid game? Come on...

"To hell with the ability to best diagnose cancer or predict earthquakes/tornados, I want this post-process filter to run as fast on my card as on that other one. That surely should be way up on their list, and to hell with the rest. After all, I spend millions helping the development of GPGPU and/or paying for the program afterwards... NO. Wait. That's the enterprises :banghead:, I'm actually the little whiny boy that demands that the FREE feature I get with my $200 GPU is "fair".
Hit it on the head. Fair or not (I take no side), I think it's important and required for far more than gaming. CUDA brings innovations in the medical field, not to mention work done for new levels of effects in movies, or computational times on large mathematical equations.

by: Cheeseball
Understood, but the first time you wrote it sounded confusing, and it was actually a reply to ToTTenTranz's reply to you. :cool:

I think we can all agree that PhysX is the evil one here, since it is proprietary and wholly restricted to NVIDIA cards. :mad:
PhysX isn't evil, that's BS fanboy speak. That's like saying VTEC is evil because everyone buys a Honda for it when almost every other manufacturer has VVT-i, they just don't market it. Not to mention Havok physics is used in far more games than Nvidia PhysX; we have all seen the supported-games charts. Why are people complaining? PhysX ruins nothing. It isn't even used in the majority of games that developers implement a physics engine in. Are people blind?
Posted on Reply
#18
Fourstaff
by: Solaris17
Are people blind?
Yes
Posted on Reply
#19
Mr McC
by: JEskandari
Well, an open-source alternative to Windows is Linux; why don't you support it?
An open-source alternative to Photoshop is GIMP; why not support it?
An open-source alternative to DirectX is OpenGL; why not support it?
Why not use OpenOffice instead of Office?

Why? Because you can't use a program on your video card, you bash CUDA. Have you considered that you are not the target market for this technology, and that the target population has the correct tools to run these programs? More importantly, they don't care whether the program is based on open-source technology; they want it to be fast and to have the features they need. They are paying money for it and expect to receive what they paid for.
By the way, the irony is that NVIDIA's implementation of OpenCL is better than other solutions.
I don't use Linux because DirectX is exclusive to Windows and I play games.
I use Paint.net rather than Gimp.
I use Abiword rather than Office...

I dislike proprietary marketing, and everybody should care whether or not something is open source, unless they have a product to sell or feel that a particular company's merchandise deserves blind adulation. I don't mind being dependent on a specific technology, but I do not want to be dependent on a specific corporation, where I can help it.

CUDA might work and might work well, but the primacy of an open alternative would be better for all of us, from a consumer's point of view. I hope that clarifies my position.
Posted on Reply
#20
Wile E
Power User
by: ToTTenTranz
Windows allows hardware differentiation and promotes hardware competitiveness.
CUDA does not.
At most, you could compare it to Mac OS X, since it only supports whatever hardware Apple chooses to include in their computers at a given time.

Regardless of how well seen it is from a developer's point of view, it's just one more method for nVidia to try to sell more hardware with an exclusive computing API.
There's nothing wrong with that. That's the entire point of building and selling a product as a business. To offer something your competitors do not at a price consumers are willing to pay. How do you think car makers sell cars? By offering features other car makers don't in one way or another.

by: ToTTenTranz
Completely wrong.
Glide was abandoned when relevant GPU makers appeared that used non-vendor-specific APIs. It was abandoned when PowerVR, nVidia, ATI, S3, Matrox, Rendition, etc. started to take a significant share of the consumer 3D graphics card market, and 3dfx wasn't the stand-alone king of 3D accelerators anymore.

It had nothing to do with technological advance or OpenGL/DirectX catching up in terms of features <- that's what you fail to understand.




lol




And 3DFx was the first with DirectX drivers, despite having Glide. And Creative was the first with OpenAL drivers, despite having EAX.
Point being?
And when something better for developers and end users comes along, CUDA will disappear. OpenCL does not yet qualify as something better.
Posted on Reply
#21
mdm-adph
by: Wile E
And when something that's promoted heavily and is cross-platform better for developers and end users comes along, CUDA will disappear
Fixed that for you. :P It doesn't matter if it's better -- just that it's promoted heavily and is cross platform. That's how Windows caught on. It was cheap, it was promoted, and it worked on everything. Android's the same way, I guess.

Even if CUDA is faster at some things, if it only works on one specific type of hardware, it's never going to be as widely used as something cross-platform would be.
Posted on Reply
#22
Wile E
Power User
by: mdm-adph
Fixed that for you. :P It doesn't matter if it's better -- just that it's promoted heavily and is cross platform. That's how Windows caught on. It was cheap, it was promoted, and it worked on everything. Android's the same way, I guess.

Even if CUDA is faster at some things, if it only works on one specific type of hardware, it's never going to be as widely used as something cross-platform would be.
In terms of hardware, Windows isn't cross platform either. It doesn't run on PPC, or ARM or anything like that. Only x86 (and x64). CUDA's success depends strictly on adoption by devs and marketing. If software is good enough, people will buy the hardware needed to run it.
Posted on Reply
#23
RejZoR
I think they have to go the unified-standard way. Programs also don't care if you run Intel or AMD; they just run. But for GPGPU, you need CUDA for NVIDIA and AMD Stream for AMD. And neither supports the competitor's GPU or will run code made for the other. In a way, they are both useless, because only each camp is hyping and using them. If there were one standard, everyone would be using it rather easily, resulting in loads of apps being available, as opposed to the current state where you find few of them and most of the time none of them is any good really. And if one is, you're in the end forced to use that GPU vendor even if some series in between are crap.
Posted on Reply
#24
Fourstaff
by: RejZoR
But for GPGPU, you need CUDA for NVIDIA and AMD Stream for AMD.
Alternatively you can code using OpenCL and it runs on both ;)
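(For illustration, here is a minimal vendor-neutral sketch of what that looks like with the OpenCL 1.x C API: a vector-add kernel compiled at runtime for whichever GPU's driver is installed. Error checking is omitted, and it assumes an OpenCL-capable GPU and headers are present.)

```c
#include <CL/cl.h>
#include <stdio.h>

/* Kernel source is plain text, compiled at runtime by the vendor's driver. */
static const char *src =
    "__kernel void add(__global const float *a,\n"
    "                  __global const float *b,\n"
    "                  __global float *c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "add", NULL);

    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    size_t n = 4;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    for (int i = 0; i < 4; i++) printf("%g ", c[i]);
    return 0;
}
```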
Posted on Reply
#25

by: pr0n Inspector
You probably heard about the hilarious story of OpenGL 3.0 (massive plan-reality discrepancy), thanks to the committees in the Khronos Group. Or that in recent years OpenGL is more or less just matching D3D in headline features, which is predictable since major members have vested interests in D3D.
I don't know where you get your news, but OpenGL is the standard in the professional 3D graphics industry. Starting with Maya, 3D Studio, SolidWorks, CATIA, Pro/E, and continuing with proprietary software like that from Pixar or Industrial Light & Magic, ALL use OpenGL, not Direct3D. ;):laugh: