Tuesday, April 15th 2008

NVIDIA CUDA PhysX Engine Almost Complete

Although NVIDIA bought AGEIA Technologies only two months ago (on February 13, 2008), the GeForce creator recently announced that the port of AGEIA's PhysX engine to CUDA, the programming language that interfaces with its GPUs, is almost complete. Upon completion, owners of GeForce 8 and 9 series graphics cards will be able to play PhysX-enabled games without the need for an additional AGEIA PhysX PCI card. The big question here is how much this PhysX addition will hurt frame rates in games. For now we only know that NVIDIA showed off a particle demo at its recent analyst day that was apparently similar to Intel's Nehalem physics demo from IDF 2008. For the record, the Nehalem demo managed 50,000 - 60,000 particles at 15-20 fps (without a GPU), while NVIDIA's demo on a GeForce 9800 card achieved the same number of particles at an amazing 300 fps, quite a boost. NVIDIA's next-gen parts (G100: GT100/200) could in theory double this score to top 600 fps. Manju Hegde, co-founder and former CEO of AGEIA, added that in-game physics will be the "second biggest thing" in 2008.
Source: TG Daily
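
To put the numbers in perspective: a particle demo like this is almost perfectly parallel, with one GPU thread integrating one particle per frame. Below is a minimal CUDA sketch of that idea; the kernel, its parameters, and the launch configuration are illustrative assumptions, not NVIDIA's actual demo code.

// Minimal sketch: one CUDA thread integrates one particle (illustrative
// only, not NVIDIA's demo code). Simple Euler step under gravity.
__global__ void integrateParticles(float3 *pos, float3 *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].y -= 9.81f * dt;      // accumulate gravity
    pos[i].x += vel[i].x * dt;   // advance position
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// Host side, once per frame for a 60,000-particle demo:
//   integrateParticles<<<(60000 + 255) / 256, 256>>>(d_pos, d_vel, 60000, dt);

Every particle runs the same few instructions, which is exactly the workload a GPU's hundreds of stream processors are built for, and why a scene that crawls on a CPU can run at hundreds of frames per second.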

53 Comments on NVIDIA CUDA PhysX Engine Almost Complete

#1
mab1376
my next video card will be a GT200 based card
#2
Weer
mab1376: my next video card will be a GT200 based card
Suuure it will. And mine an R700 based. :cool:
#3
jbunch07
Weer: Suuure it will. And mine an R700 based. :cool:
likewise

i hope ATI is making note of this!
#4
alexp999
Staff
malware: ... Upong completeion of CUDA ...
:roll: I know it's a typo but Upong just sounds so funny! :roll:

Anyway, on a more serious note :p

I thought AGEIA ran on computers without an Ageia PCI card anyway??
And I'm sure we will all have something similar be it through CPU or GPU, once game developers start needing something more than a quad core to do their physics calculations...
#5
Thompson5439
So my question is, what about those of us (fools) who own a PhysX card? Are our cards obsolete, or will they work with CUDA?
#6
choppy
Thompson5439: So my question is, what about those of us (fools) who own a PhysX card? Are our cards obsolete, or will they work with CUDA?
LOL!! boy do i feel sorry for you guys who forked out on such items...and those that bought into hd-dvd too :p
#7
Deleted member 3
Weer: Suuure it will. And mine an R700 based. :cool:
So, considering this news item is about CUDA and has nothing to do with ATI, care to give us your insight into why we should go for ATI instead?
#8
Necrofire
AGEIA's PhysX runs in software mode when used without the card. I remember playing one of the demo games, and it ran fine until I saw a large cloth object, after which I couldn't play anymore due to slowdown. A GPU would be much more efficient at handling that work. I can't wait to get it and try to find the differences in UT3.

Software mode uses CPU cycles, and isn't as powerful as using the dedicated card.
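
For reference, the PhysX 2.x SDK of that era let a game ask for hardware simulation and fall back to software when no accelerator was found. A rough sketch from memory (the identifiers are recalled from the 2.x documentation and should be treated as assumptions, not verified code):

// Rough sketch of PhysX 2.x scene creation with a software fallback.
// Identifiers recalled from memory of the 2.x SDK; treat as illustrative.
#include "NxPhysics.h"

NxPhysicsSDK *sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

NxSceneDesc sceneDesc;
sceneDesc.simType = NX_SIMULATION_HW;        // ask for the accelerator
NxScene *scene = sdk->createScene(sceneDesc);
if (!scene) {
    sceneDesc.simType = NX_SIMULATION_SW;    // no card: burn CPU cycles instead
    scene = sdk->createScene(sceneDesc);
}

The CUDA port presumably just swaps what sits behind that hardware path, so games written against the same API would pick up the GPU automatically.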
#9
lemonadesoda
If PhysX + GPU = slowdown due to extra rendering, then:

PhysX card replaced by PhysX on GPU + GPU = one less processor designed for this kind of stuff = even bigger slowdown.

eek. I don't like that.

So go SLI/Crossfire? Well guess what, a second GPU is more expensive than a PhysX card. And it should be too... all that extra hardware for driving a VDU out, large RAM requirements for texture and frame buffers, etc. No need for that on a plain FPU/PhysX solution.

I'll be interested to see the benchmarks on this. At this point I'm not convinced. Better for nvidia to have designed a chipset/mainboard with an open socket that would allow a PLCC drop-in with the PhysX engine on it, just like the FPUs (387, 487) of yesteryear. en.wikipedia.org/wiki/Plastic_leaded_chip_carrier
en.wikipedia.org/wiki/Intel_80387

I'll probably get one of these :D FTW :D = the power of PhysX acceleration ;)
#10
Necrofire
But then whatever game you're playing will, hopefully, see the CUDA-based AGEIA implementation as a real card. Then those with a card that can handle UT3 at 1600 x 1200 with everything up can take a small frame drop, and not have to purchase a completely separate card with limited use just to see the goodies in UT3 or whatever other game.
#11
newtekie1
Semi-Retired Folder
My question is, do I have to use the same card for the physx as I do for the graphics rendering? Or can I have an 8800GTS 512MB running the graphics and put an 8600GT or even 8500GT in there to run the physx?
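
Nothing in CUDA itself rules that out: the runtime enumerates every CUDA-capable card and lets an application direct work at whichever device it likes. Whether the PhysX drivers will expose that choice is unknown, but here is a minimal CUDA sketch of the idea (the "physics on device 1" split is an assumption, not announced NVIDIA behaviour):

// Minimal sketch: steer compute work to a second CUDA device if present.
// Whether PhysX-on-CUDA will offer this split is an open question here.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    // Assume device 0 renders; use device 1 (say, an 8600GT) for physics.
    int physicsDevice = (count > 1) ? 1 : 0;
    cudaSetDevice(physicsDevice);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, physicsDevice);
    std::printf("Physics kernels will run on device %d: %s\n",
                physicsDevice, prop.name);
    return 0;
}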
#12
mab1376
i don't think they're ever gonna do that; i'm pretty sure a portion of the shader units will do the physics calculations alongside the GPU, or a second GPU on the same card will.
#13
Solaris17
Super Dainty Moderator
i still can't wait to get my physx card, i just think it will be cool to have, but i can't wait till CUDA is out so i can use my 9600 too!!! obviously i won't be able to use them at the same time, but doing a performance comparison would be cool... hmm, i wonder if CUDA is beta...

damn, just checked: it seems you can get the CUDA drivers/SDK for XP but not Vista. i can't wait
#14
mab1376
There was also a demonstration of cloth: a quad-core Intel Core 2 Extreme processor was working at 12 fps, while a GeForce 8800 GTS board came in at 200 fps. Former AGEIA employees did not compare it to AGEIA's own PhysX card, but if we remember correctly, that demo ran at 150-180 fps on an AGEIA card.
seems the PhysX card is way less powerful compared to the estimated 600 fps from a GT200 card.
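
Cloth maps to the GPU for the same reason particles do: each solver pass applies identical constraint math to thousands of cloth points in parallel. A toy CUDA sketch of one distance-constraint pass (the grid layout, names, and red-black scheme are illustrative assumptions, not the demo's actual solver):

// Toy sketch: relax the horizontal links of a w x h cloth grid.
// Red-black scheme: each pass touches only even or only odd links,
// so no two threads ever write the same cloth point at once.
__global__ void relaxHorizontalLinks(float3 *p, int w, int h,
                                     float rest, int parity)
{
    int x = 2 * (blockIdx.x * blockDim.x + threadIdx.x) + parity;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w - 1 || y >= h) return;

    int a = y * w + x, b = a + 1;               // two neighbouring points
    float dx = p[b].x - p[a].x;
    float dy = p[b].y - p[a].y;
    float dz = p[b].z - p[a].z;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);
    if (len < 1e-6f) return;                    // degenerate link, skip
    float k = 0.5f * (len - rest) / len;        // pull both points halfway
    p[a].x += k * dx;  p[a].y += k * dy;  p[a].z += k * dz;
    p[b].x -= k * dx;  p[b].y -= k * dy;  p[b].z -= k * dz;
}

A CPU walks those links a handful at a time; the GPU fires thousands of them per pass, which is roughly the gap between 12 fps and 200 fps in that demo.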
#16
eidairaman1
The Exiled Airman
wouldn't be surprised if it's using Geometry Instancing (every particle being the same)
#17
Solaris17
Super Dainty Moderator
Morgoth: what's the use of this PhysX? not many games use it,
and it especially doesn't increase fps in Crysis or any other Havok physics game

Nehalem Fire demo www.tgdaily.com/content/view/36726/135/
isn't the reason to use physics that it makes the game more real? saying there is no point in PhysX is like saying there's no point in making the graphics better. We want better graphics to have a better, more "feelable" visual experience... physx is the exact same. sure, physx brings the fps down, but doesn't improving the graphics do the same thing? saying physx is useless is also saying better graphics are. it's like, why have better graphics or physx if we have a good storyline? wait, nvm, let's not even have that. most ppl beyond the age of 35 think video games are mindless time traps, so let's scratch it all, physx, graphics and storyline; if we want the kids to rot away on something mindless, let's make Pong 23 the next big hit

riddle me this.
#18
Morgoth
Fueled by Sapphire
yea, why use PhysX when we have Havok?
#19
Solaris17
Super Dainty Moderator
to offload it from the CPU?... because ppl who aren't rich boys can't afford a Nehalem or any other 4-8 core rig that can do it for us
#20
imperialreign
TBH - I really hope ATI stays out of this right now. Physics is a great idea and all - and we're seeing growing engines in games and more enhanced realism every year . . .

but, with nVidia spearheading their own project, backed by their TWIMTBP campaign, ATI would currently get their ass handed to them in the physics arena.

Although, I think ATI GPUs are much better suited for physics processing than nVidia's are. ATI even demonstrated a dominance in the past when they were showcasing their 2+1 and 1+1 Crossfire GPU/PPU implementations. I still think they would demonstrate that same dominance today - except nVidia now has their own physics engine, and can rally support for it in games quicker through their campaign than ATI could. Unless we see both manufacturers designing around a 3rd party physics engine (i.e. Havok), competition will be severely one-sided.

And on top of that - further development of in-game physics like this is still iffy. Sure, having a GPU manufacturer like nVidia supporting and pushing game devs toward further implementation is great for the whole - but what if there are major game devs who refuse to fall prey to nVidia and instead go with someone else's engine?

If ATI does jump aboard this market upon wind of this from the nVidia camp - it wouldn't surprise me to see ATI partner with Intel and their physics engine.
#21
mab1376
imperialreign: Unless we see both manufacturers designing around a 3rd party physics engine (i.e. Havok), competition will be severely one-sided.
for the next set of games coming out, people would need a Skulltrail system to keep their framerates up with advanced physics without a PPU, whether it be a proprietary PPU or a GPU acting as one. personally i would like to use a tri-SLI motherboard for SLI plus a physics-dedicated GPU like an 8600GT or something.
#22
imperialreign
mab1376: for the next set of games coming out, people would need a Skulltrail system to keep their framerates up with advanced physics without a PPU, whether it be a proprietary PPU or a GPU acting as one. personally i would like to use a tri-SLI motherboard for SLI plus a physics-dedicated GPU like an 8600GT or something.
exactly - which is why I'm hoping that ATI will stay out of it; unless they decide to partner with Intel so they can implement physics capability in their GPUs. Still a possibility, considering ATI and Intel are major competitors of nVidia. We'll just have to see how this unfolds. ATI has proven themselves capable of excelling at physics processing (IIRC, even besting AGEIA's PPUs when they were independent), so they can hang in there - just without a campaign to push their technology (if they were to go into physics processing), they'd get crushed by nVidia in the market.

It could also go over like it did last time and just dead-end altogether.
#23
Wile E
Power User
nVidia is keeping the AGEIA physics API open. ATI is free to use it on their cards as well.
#24
[I.R.A]_FBi
Wile E: nVidia is keeping the AGEIA physics API open. ATI is free to use it on their cards as well.
wh00tles!
#25
imperialreign
Wile E: nVidia is keeping the AGEIA physics API open. ATI is free to use it on their cards as well.
that's great and all - but I still wouldn't be surprised at all if AGEIA physics runs better in-game on nVidia hardware compared to ATI's, based solely on nVidia's TWIMTBP campaign. Although ATI has shown dominance in the past with physics processing, when there is a manufacturer-supported campaign to optimize for certain hardware, ATI won't have that edge. e.g., comparing to the open OpenGL standard, both manufacturers' hardware is capable of running OGL 2.0 on par with each other, but we've seen some games in the past where ATI has struggled due to optimizations for nVidia hardware (Doom3 comes to mind).

But, this is all in theory on my part as well - can't say for sure until we actually see the technology hitting our motherboards, y'know?