They aren't. Microsoft is going to have a DirectX physics API (using OpenCL, which isn't owned by MS but which they will use). It is not PhysX, and it will be implementable in hardware on both ATI and Nvidia GPUs: all each vendor has to do is map the API commands to the equivalent CUDA commands on Nvidia cards, and to the equivalent Stream commands on ATI GPUs (Stream being ATI's equivalent of CUDA).
That won't work; game developers will not waste resources on an API that can't be used with all graphics cards when there's a free, fully DirectX-integrated API available that will work with all of them. And against the combined might of Microsoft, DirectX and OpenCL, Nvidia has a lot less influence over developers.
OpenCL is "Open Computing Language", and yes, MS plans their own PhysX-type interface that would work with third-party engines like Havok and PhysX (though Intel owns Havok, so I honestly don't see it becoming GPU accelerated; Intel insists the GPU is on the way out and everything's going to run on the CPU).
First of all, congratulations on the driver tweak. BTW, has no one tried it yet?
Now, about CUDA and PhysX: there's a lot of inaccurate, incorrect and some borderline false info being given here.
- The only game with GPU PhysX released to date is Mirror's Edge, and when the GPU PhysX option is disabled, nothing extra is offloaded onto the CPU: with it disabled, both Nvidia and ATI cards show exactly the same CPU usage.
- CUDA can't offload ANY work from the CPU unless the program was specifically written for that purpose. CUDA, for instance, has to be initialized in the program itself to work, just as you have to initialize DirectX or any other API.
- CUDA, OpenCL and Stream are virtually the same thing (all three come from Brook+); each just has its own syntax. There's no such thing as Nvidia cards running CUDA better than they run OpenCL, nor ATI cards running Stream better. Will there be differences? Yes, of course, some small ones, but if an Nvidia card is faster with CUDA than a comparable ATI card is with Stream, it will also be faster with OpenCL, and vice versa.
- PhysX is first and foremost a physics API and engine, exactly like Havok, and on a second level it has an optional interface to communicate with CUDA, with the Ageia PhysX card, or even with the PS3's SPEs. By default it runs on the CPU, just like Havok; running it on any other platform requires an explicit condition.
- Because PhysX is independent of the platform, and GPGPU solutions are so similar to one another, there won't be any problem running PhysX through OpenCL or DX11. It took Nvidia two months to adapt PhysX to CUDA; it will take about the same (or less) to adapt it to OpenCL.
- AFAIK DX11 will not have a physics engine implemented, nothing similar to Havok or PhysX. It will just have some shaders and extensions to make things easier, but it won't be any different from D3D in that sense. You still have to write the renderer and the game engine yourself, right? The same will be true of physics: DX11 will make GPU-accelerated physics possible, but companies will still need an engine, and they will most probably outsource to Havok or PhysX as they have always done. Now, who you'd prefer to dominate the physics world and its future is your own decision: Intel (they own Havok) or Nvidia? And don't be naive: if you think Nvidia would make PhysX run better on their own hardware, consider that Intel will do exactly the same (and that hardware is Larrabee, which has nothing in common with ATI/Nvidia GPUs). Intel has a much longer and more intense history of antitrust behavior, plus it's something like ten times bigger, omnipotent and omnipresent, so think twice.
Actually, I hate to have to tell you this, Dark, but it's not DX11; it's now going to be called D3D11, as MS is mainly updating the 3D portion of the suite (just bug fixes for DirectSound and such).
And PhysX is an OPEN API; ATI/AMD could choose to support it directly, but they don't, mostly I think because it's got the Nvidia name on it... same as Nvidia won't allow CrossFire on their chipsets or SLI on ATI/AMD's.
Now, about your false info regarding PhysX: with current PhysX drivers from Nvidia, yes, CUDA can offload PhysX work from the CPU in PhysX games. UT3, Mass Effect, Warmonger, and a stack of other games I have all benefit when I enable GPU PhysX, despite these games coming out before CUDA PhysX support was available.
CUDA PhysX works just like slapping in a standalone PhysX card once the drivers are updated. In a few cases you need to copy files from the PhysX driver folder into the game's folder, overwriting the older (pre-GPU) PhysX files, but even then it's CAKE, and yes, it works.
As to the argument about IQ with AA settings: it depends on the game. In some games Nvidia's 8x and ATI's 8x are going to look the same; in others you need 16x or 8xQ to come close to ATI's IQ. It's VERY dependent on the game you're playing.
Transparency AA on Nvidia sucks in multisample mode; it's blotchy/spotty in most games I play. Supersampling has a bigger performance hit but looks FAR better.
8800 GTS 512 @ 757/1900/2200, in case you're wondering, and I have directly compared ATI and Nvidia IQ: sometimes it's even, other times ATI is clearly ahead.
Want an easy example?
WoW. In WoW I could set 4x on my 1900 XTX and it looked SMOOTH, no jaggies at all (1600x1200). On this Nvidia card, 16x or 8xQ is a must to get it mostly smooth, and you DO NOT WANT TO USE MULTISAMPLE for transparency AA; it looks HORRIBLE.
On the other hand, in UT2K4 you can use any AA setting and it looks fine, though it still looks far better with supersampled transparency AA.
Fable: The Lost Chapters HATES multisample transparency AA but looks NICE with supersampled; it also doesn't care which AA mode you use, it still benefits nicely from any of them.
Oh, and a lot of games like combined mode (supersample AA + multisample AA). If anybody wants to try this, grab nHancer (Google it), especially for older games; combined mode can really make them look light-years better.