
NVIDIA CUDA PhysX Engine Almost Complete

malware

Although NVIDIA bought AGEIA Technologies only two months ago (on February 13, 2008), the GeForce creator recently announced that the port of AGEIA's PhysX API engine to CUDA, the programming interface for its GPUs, is almost complete. Upon completion, owners of GeForce 8 and 9 series graphics cards will be able to play PhysX-enabled games without the need for an additional AGEIA PhysX PCI card. The big question is how much this PhysX addition will hurt the frame rate in games. For now we only know that NVIDIA showed off a particle demo at its recent analyst day that was apparently similar to Intel's Nehalem physics demo from IDF 2008. For the record, the Nehalem demo managed 50,000 - 60,000 particles at 15-20 fps (without a GPU), while NVIDIA's demo on a GeForce 9800 card achieved the same number of particles at an amazing 300 fps, quite a boost. NVIDIA's next-gen parts (G100: GT100/200) could in theory double that score to top 600 fps. Manju Hegde, co-founder and former CEO of AGEIA, added that in-game physics will be the "second biggest thing" in 2008.

View at TechPowerUp Main Site
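For a sense of why the GPU runs away with a particle demo like this: each particle's update is independent, so CUDA can hand one thread to every particle instead of looping over them on a CPU core. Below is a minimal sketch of that idea (purely illustrative; the kernel name, structure layout, and launch numbers are made up here, this is not NVIDIA's actual PhysX port):

#include <cuda_runtime.h>

struct Particle { float3 pos; float3 vel; };

// One thread per particle: 60,000 particles become 60,000 parallel
// threads instead of one serial loop on the CPU.
__global__ void stepParticles(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vel.y -= 9.81f * dt;        // gravity
    p[i].pos.x += p[i].vel.x * dt;   // explicit Euler integration
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

int main()
{
    const int n = 60000;             // particle count from the demo
    Particle* d_p;
    cudaMalloc((void**)&d_p, n * sizeof(Particle));
    cudaMemset(d_p, 0, n * sizeof(Particle));
    // 256 threads per block, enough blocks to cover every particle.
    stepParticles<<<(n + 255) / 256, 256>>>(d_p, n, 1.0f / 300.0f);
    cudaDeviceSynchronize();
    cudaFree(d_p);
    return 0;
}

At 60,000 particles that is only a couple hundred thread blocks, which is exactly the kind of data-parallel work a shader array is built for, and plausibly why the 9800 demo above posts such a large lead over the CPU.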
 
My next video card will be a GT200-based card.
 
... Upong completeion of CUDA...

:roll: I know it's a typo, but "Upong" just sounds so funny! :roll:

Anyway, on a more serious note :p

I thought AGEIA ran on computers without an AGEIA PCI card anyway??
And I'm sure we will all have something similar, be it through CPU or GPU, once game developers start needing more than a quad core to do their physics calculations...
 
So my question is: what about those of us (fools) who own a PhysX card? Are our cards obsolete, or will they work with CUDA?
 
So my question is: what about those of us (fools) who own a PhysX card? Are our cards obsolete, or will they work with CUDA?

LOL!! Boy, do I feel sorry for you guys who forked out for such items... and those that bought into HD DVD too :p
 
AGEIA runs in software mode when used without the card. I remember playing one of the demo games, and it ran fine until I saw a large cloth object, after which I couldn't play anymore due to slowdown. A GPU would be much more efficient at handling the workload. I can't wait to get it and try to find the differences in UT3.

Software mode uses CPU cycles and isn't as powerful as using the dedicated card.
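To make that concrete, here is a rough sketch (hypothetical code, not AGEIA's) of what software mode has to do for a big cloth object: walk every spring constraint serially on the CPU, every frame. A 100x100 cloth is on the order of 40,000 springs, which is where the slowdown comes from; on a GPU the same loop can become one thread per spring.

#include <cuda_runtime.h>   // only for the float3 type
#include <cmath>

struct Spring { int a, b; float rest; };

// Serial distance-constraint relaxation: one core, one spring at a time.
void relaxSpringsOnCpu(float3* p, const Spring* s, int n)
{
    for (int i = 0; i < n; ++i) {
        float3 d = { p[s[i].b].x - p[s[i].a].x,
                     p[s[i].b].y - p[s[i].a].y,
                     p[s[i].b].z - p[s[i].a].z };
        float len = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
        if (len < 1e-6f) continue;                 // degenerate spring, skip
        float k = 0.5f * (len - s[i].rest) / len;  // split the correction between both ends
        p[s[i].a].x += k * d.x; p[s[i].a].y += k * d.y; p[s[i].a].z += k * d.z;
        p[s[i].b].x -= k * d.x; p[s[i].b].y -= k * d.y; p[s[i].b].z -= k * d.z;
    }
}

int main()
{
    float3 p[2] = { {0.f, 0.f, 0.f}, {2.f, 0.f, 0.f} };
    Spring s    = { 0, 1, 1.0f };    // one spring, stretched to twice its rest length
    relaxSpringsOnCpu(p, &s, 1);     // pulls both ends back toward rest length
    return 0;
}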
 
If PhysX + GPU = slowdown due to extra rendering, then:

PhysX card replaced by PhysX on GPU + GPU = one less processor designed for this kind of stuff = even bigger slowdown.

Eek. I don't like that.

So go SLI/CrossFire? Well, guess what: a second GPU is more expensive than a PhysX card. And it should be, too... all that extra hardware for driving a VDU output, large RAM requirements for texture and frame buffers, etc. No need for that on a plain FPU/PhysX solution.

I'll be interested to see the benchmarks on this. At this point I'm not convinced. It would have been better for NVIDIA to design a chipset/mainboard with an open socket that would allow a PLCC drop-in with the PhysX engine on it, just like the FPUs (387, 487) of yesteryear. http://en.wikipedia.org/wiki/Plastic_leaded_chip_carrier
http://en.wikipedia.org/wiki/Intel_80387

I'll probably get one of these :D FTW :D = the power of PhysX acceleration ;)
 
But then whatever game you're playing will, hopefully, see the CUDA AGEIA implementation as a real card, and then those with a card that can handle UT3 at 1600x1200 with everything turned up can take a small frame drop and not have to purchase a completely separate card with limited use just to see the goodies in UT3 or whatever other game.
 
My question is, do I have to use the same card for the PhysX as I do for the graphics rendering? Or can I have an 8800 GTS 512 MB running the graphics and put an 8600 GT or even an 8500 GT in there to run the PhysX?
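Nobody outside NVIDIA can answer that yet, but nothing in CUDA itself ties compute work to the card doing the rendering: a process can enumerate devices and pin its kernels to whichever one it likes. Whether the PhysX drivers will expose that choice is NVIDIA's call. A hypothetical sketch using the standard CUDA runtime calls:

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // e.g. "device 0: GeForce 8800 GTS", "device 1: GeForce 8600 GT"
        printf("device %d: %s\n", i, prop.name);
    }
    if (count > 1)
        cudaSetDevice(1);   // aim all subsequent physics kernels at the second card
    return 0;
}

On a system with an 8800 GTS as device 0 and an 8600 GT as device 1, the cudaSetDevice(1) call would send the physics kernels to the 8600 GT while the 8800 GTS keeps rendering.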
 
I don't think they are ever gonna do that; I'm pretty sure a second shader unit will handle physics calculations alongside the GPU, or a second GPU on the same card.
 
I still can't wait to get my PhysX card; I just think it will be cool to have. But I can't wait till CUDA is out so I can use my 9600 too!!! Obviously I won't be able to use them at the same time, but doing a performance comparison would be cool... hmmm, I wonder if CUDA is in beta...


Damn, just checked: it seems you can get the CUDA drivers/SDK for XP but not Vista. I can't wait.
 
There was also a demonstration of cloth: a quad-core Intel Core 2 Extreme processor managed 12 fps, while a GeForce 8800 GTS board came in at 200 fps. Former AGEIA employees did not compare it to AGEIA's own PhysX card, but if we remember correctly, that demo ran at 150-180 fps on an AGEIA card.

Seems the physics card is way less powerful compared to the estimated 600 fps from a GT200 card.
 
I wouldn't be surprised if it's using Geometry Instancing (every particle being the same).
 
What's the use of this PhysX? Not many games use it.
Especially since it doesn't increase fps in Crysis or any other Havok physics game.

Nehalem Fire demo http://www.tgdaily.com/content/view/36726/135/

Isn't the reason to use physics that it makes the game more real? Saying there is no point in PhysX is like saying there's no point in making the graphics better. We want better graphics to have a better, more "feelable" visual experience... PhysX is the exact same. Sure, PhysX brings the fps down, but doesn't improving the graphics do the same thing? Saying PhysX is useless is also saying better graphics are. It's like asking why have better graphics or physics if we have a good storyline? Wait, never mind, let's not even have that; most people beyond the age of 35 think video games are mindless time traps, so let's scratch it all - physics, graphics, and storyline. If we want the kids to rot away on something mindless, let's make Pong 23 the next big hit.

Riddle me this.
 
Yea, why use PhysX when we have Havok?
 
To offload from the CPU?... Because people who aren't rich boys can't afford a Nehalem or any other 4-8 core rig that can do it for us.
 
TBH - I really hope ATI stays out of this right now. Physics is a great idea and all - and we're seeing growing engines in games and more enhanced realism every year...

but with nVidia spearheading their own project, backed by their TWIMTBP campaign, ATI would currently get their ass handed to them in the physics arena.



Although, I think ATI GPUs are much better suited for physics processing than nVidia's are. ATI even demonstrated dominance in the past when showcasing their 2+1 and 1+1 CrossFire GPU/PPU implementations. I still think they would demonstrate that same dominance today - except nVidia now has their own physics engine, and can rally support for it in games quicker through their campaign than ATI could. Unless we see both manufacturers designing around a 3rd-party physics engine (i.e. Havok), competition will be severely one-sided.

And on top of that - further development of in-game physics like this is still iffy. Sure, having a GPU manufacturer like nVidia supporting and pushing game devs toward further implementation is great for the whole - but what if there are major game devs who refuse to fall prey to nVidia and instead go with someone else's engine?



If ATI does jump aboard this market upon getting wind of this from the nVidia camp - it wouldn't surprise me to see ATI partner with Intel and their physics engine.
 
Unless we see both manufacturers designing around a 3rd-party physics engine (i.e. Havok), competition will be severely one-sided.

For the next set of games coming out, people would need a Skulltrail system to keep their framerates up with advanced physics without a PPU, whether it be a proprietary PPU or a GPU acting as one. Personally, I would like to use a tri-SLI motherboard for SLI plus a physics-dedicated GPU like an 8600 GT or something.
 
For the next set of games coming out, people would need a Skulltrail system to keep their framerates up with advanced physics without a PPU, whether it be a proprietary PPU or a GPU acting as one. Personally, I would like to use a tri-SLI motherboard for SLI plus a physics-dedicated GPU like an 8600 GT or something.

Exactly - which is why I'm hoping that ATI will stay out of it, unless they decide to partner with Intel so they can implement physics capability into their GPUs. Still a possibility, considering ATI and Intel are major competitors of nVidia. We'll just have to see how this unfolds. ATI has proven themselves capable of excelling at physics processing (IIRC, even besting AGEIA's PPUs when they were independent), so they can hang in there - it's just that without a campaign to push their technology (if they were to go into physics processing), they'd get crushed by nVidia in the market.


It could also go over like it did last time and just dead-end altogether.
 
nVidia is keeping the AGEIA physics API open. ATI is free to use it on their cards as well.
 