
PhysX only using one cpu core?

http://techreport.com/articles.x/17618/13

http://forums.overclockers.co.uk/showpost.php?p=14687943&postcount=4

I'll quote:
You can see the performance hit caused by enabling PhysX at this resolution. On the GTX 295, it's just not worth it. Another interesting note for you... As I said, enabling the extra PhysX effects on the Radeon cards leads to horrendous performance, like 3-4 FPS, because those effects have to be handled on the CPU. But guess what? I popped Sacred 2 into windowed mode and had a look at Task Manager while the game was running at 3 FPS, and here's what I saw, in miniature:

[Screenshot: Task Manager showing ~14% CPU utilization while Sacred 2 runs at 3 FPS (s2-physx-cpu-util-620.jpg)]


Ok, so it's hard to see, but Task Manager is showing CPU utilization of 14%, which means the game—and Nvidia's purportedly multithreaded PhysX solver—is making use of just over one of our Core i7-965 Extreme's eight front-ends and less than one of its four cores. I'd say that in this situation, failing to make use of the CPU power available amounts to sabotaging performance on your competition's hardware. The truth is that rigid-body physics isn't too terribly hard to do on a modern CPU, even with lots of objects. Nvidia may not wish to port its PhysX solver to the Radeon, even though a GPU like Cypress is more than capable of handling the job. That's a shame, yet one can understand the business reasons. But if Nvidia is going to pay game developers to incorporate PhysX support into their games, it ought to work in good faith to optimize for the various processors available to it. At a very basic level, threading your easily parallelizable CPU-based PhysX solver should be part of that work, in my view.
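To make TR's point about an "easily parallelizable" solver concrete, here is a toy sketch (hypothetical code, not anything from the actual PhysX SDK) of a per-body integration step split across however many CPU threads the machine has:

Code:
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct RigidBody {
    float pos[3];
    float vel[3];
};

// Integrate one contiguous slice of bodies for a single timestep.
void integrateSlice(std::vector<RigidBody>& bodies, std::size_t begin,
                    std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        bodies[i].vel[1] -= 9.81f * dt;                    // gravity
        for (int a = 0; a < 3; ++a)
            bodies[i].pos[a] += bodies[i].vel[a] * dt;     // Euler step
    }
}

// Split the body array into one slice per hardware thread and run them all.
void integrateAll(std::vector<RigidBody>& bodies, float dt) {
    const std::size_t n = bodies.size();
    const std::size_t workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (std::size_t w = 0; w < workers; ++w) {
        const std::size_t begin = n * w / workers;
        const std::size_t end   = n * (w + 1) / workers;
        pool.emplace_back(integrateSlice, std::ref(bodies), begin, end, dt);
    }
    for (auto& t : pool) t.join();   // wait for the whole step to finish
}

int main() {
    std::vector<RigidBody> bodies(10000, RigidBody{{0, 10, 0}, {0, 0, 0}});
    integrateAll(bodies, 1.0f / 60.0f);   // one 60 Hz step spread over all cores
}

The hard part a real solver deals with is contacts between touching bodies, which have to be batched into independent islands before they can be threaded, but the integration side shown here really is embarrassingly parallel.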

In the Batman demo:
PhysX is only using one core on the CPU in that game, so that may be contributing to the slowdown.
 
I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.
 
I don't understand what they expect exactly. If one core is giving 3 fps, 8 cores would give 24 fps, still very far from a good experience. And that's on a fast Core i7. An i7 920 would give what, 20 fps? And what about Core 2, which is what most people have? The i7 has like 50% more raw floating point power than Core 2 per core, and with only 4 cores we would be moving in the 10-12 fps realm. Want to start talking about dualies, which is what most people still have?

What do they expect? That developers create 8 different paths with different levels of detail and threading, so that every machine without a GeForce can have increasingly better physics that will still be very far off from what the weakest of GeForces can do??

No, friends, what makes more sense is to make 2 paths: one that will run on almost anything out there, and one that uses GPU accelerated physics on GeForces, which by market share are 2/3 of the graphics cards out there (rough sketch of what I mean right after the list). Hence it uses only 1 CPU core, because:

1- The GPU is much more powerful anyway; the power that even a quad would add is irrelevant.
2- It can run on everything out there, including old dualies, whose second core may already be busy with background processes, etc.
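Roughly what I mean by the two paths, as a sketch (hypothetical code, not any real engine; gpuPhysicsSupported stands in for whatever capability check an engine would actually do, it is not a real API call):

Code:
#include <memory>

struct PhysicsBackend {
    virtual ~PhysicsBackend() = default;
    virtual void step(float dt) = 0;
};

// Baseline path: modest effects, single-threaded, runs on anything.
struct CpuPhysics : PhysicsBackend {
    void step(float /*dt*/) override { /* simple rigid bodies only */ }
};

// Enhanced path: extra debris, cloth, fluids dispatched to the GPU.
struct GpuPhysics : PhysicsBackend {
    void step(float /*dt*/) override { /* hand the work to the GPU here */ }
};

// 'gpuPhysicsSupported' is a stand-in for the engine's real capability check.
std::unique_ptr<PhysicsBackend> makeBackend(bool gpuPhysicsSupported) {
    if (gpuPhysicsSupported)
        return std::make_unique<GpuPhysics>();
    return std::make_unique<CpuPhysics>();
}

int main() {
    auto physics = makeBackend(/*gpuPhysicsSupported=*/false);
    physics->step(1.0f / 60.0f);   // one fixed timestep, same call either way
}

The CpuPhysics baseline keeps the game playable everywhere, and GpuPhysics layers the extra eye candy on top when the hardware is there, with the game calling step() the same way in both cases.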

Enabling GPU accelerated physics and expecting to see CPU load, when that's highly unnecessary, is dumb IMHO. I like TechReport as much as anyone, it's the second site I go to for reviews after TPU, but here they are just being a little bit dumb.
 

Suck some NV elsewhere. Multi-threaded applications are no longer the odd man out in an age of 12-core processors and common machines having at least two cores.



Task Manager shows the CPU isn't even close to maxed out on one HT core, let alone any more of the eight HT cores available. It looks to be using spikes of CPU power, and only on one HT core. The point of the article is to show that Nvidia is causing the game to run poorly, and the game developer is causing issues by allowing such crap to be produced.


So if one HT core is giving 3 FPS, times 8 HT cores that's 24 FPS, the same framerate many consoles are limited to. But let's just throw that out and focus instead on how to insult other members.
 
jah know nv at it again ...
 

Nvidia is not causing poor performance; enabling a mode that is only supposed to run on the GPU is.

If you disable the GPU accelerated PhysX the game will play like a charm. Again, you can't enable GPU accelerated physics mode and then expect to see CPU load, you can't. You can't expect to have the same improved PhysX running on the CPU either, simply because the CPU can't keep up with the power needed. Not even the i7 965 could keep up, let alone a Core 2 6400, for example. They are saying 14%, and that's more than one "core" on the i7: 100/8 = 12.5.

BTW, any CPU will show those spikes with single-threaded applications. The OS scheduler keeps moving the one busy thread from core to core, so the load shows up as short spikes spread across the cores, but it still adds up to the equivalent of about one core in use. Moving the work around also helps spread the heat and keeps temperatures down.
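If you want to see the scheduler effect for yourself, here is a tiny Windows-only sketch (nothing to do with PhysX itself) that pins a busy loop to core 0 using SetThreadAffinityMask. Pinned, Task Manager shows one core pegged; remove the pin and the same single-threaded load shows up as spikes wandering across all the cores:

Code:
#include <windows.h>
#include <cstdio>

int main() {
    // Pin the current thread to logical processor 0 (mask bit 0).
    if (SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1) == 0) {
        std::printf("failed to set affinity\n");
        return 1;
    }
    // Burn CPU for a while; watch the per-core graphs in Task Manager.
    volatile double x = 0.0;
    for (long long i = 0; i < 2000000000LL; ++i)
        x += 1e-9;
    std::printf("done: %f\n", x);
    return 0;
}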

And like erocker said, it was AMD who refused to use PhysX; Nvidia offered it to AMD for free. They refused for obvious reasons, none of them being to do the best for the consumer. They didn't even allow third parties to do the work, even though Nvidia supported them.

http://www.bit-tech.net/news/2008/07/09/nvidia-helping-to-bring-physx-to-ati-cards/1

However, an intrepid team of software developers over at NGOHQ.com have been busy porting Nvidia's CUDA-based PhysX API to work on AMD Radeon graphics cards, and have now received official support from Nvidia - who is no doubt delighted to see its API working on a competitor's hardware (as well as seriously threatening Intel's Havok physics system).

As cheesed off as this might make AMD, which is unsurprisingly not supporting NGOHQ's work, it could certainly be for the betterment of PC gaming as a whole. If both AMD and Nvidia cards support PhysX, it'll remove the difficult choice for developers of which physics API to use in games. We've been growing more and more concerned here at bit-tech at the increasingly fragmented state of the physics and graphics markets, and anything that has the chance to simplify the situation for consumers and developers can only be a good thing.
 
Last edited:
Suck some NV elsewhere. Multi-threaded applications are no longer the odd man out in an age of 12-core processors and common machines having at least two cores.



Task Manager shows the CPU isn't even close to maxed out on one HT core, let alone any more of the eight HT cores available. It looks to be using spikes of CPU power, and only on one HT core. The point of the article is to show that Nvidia is causing the game to run poorly, and the game developer is causing issues by allowing such crap to be produced.


So if one HT core is giving 3 FPS, times 8 HT cores that's 24 FPS, the same framerate many consoles are limited to. But let's just throw that out and focus instead on how to insult other members.
I agree. Physics calculations are largely independent per object, so they should multithread excellently. NVIDIA decided to focus on PhysX for GPU only, so without the GPU, PhysX is a massive burden, being poorly optimized for CPU load. Most games don't even use physics enough to warrant using a GPU anyway. I question everything about PhysX (the premise, the execution, the strategy, etc.).

DirectX 11 will murder PhysX because it will work on CPU and GPU. Instead of using, for instance, Havok for CPU and PhysX for GPU, just use DX11 and be done with it.
 
Last edited:
I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.

I don't know why everybody keeps saying this.
If you mean compute shaders, then it won't matter until someone actually uses them for a physics engine. Compute shaders by themselves are nothing more than what OpenCL & CUDA are.

My opinion about this matter is: what can be threaded easily, should be threaded. No excuses. I mean, the CUDA version is obviously already threaded, isn't it ...
Doesn't matter how you look at it, nVidia is holding the CPU version back in favour of the CUDA version. Havok uses multiple cores BTW.
 
Nothing is multithreaded easily, but there are a lot of things that benefit greatly from doing so; physics for games is one of them.
 
Honestly, people should learn the difference between PhysX and GPU accelerated PhysX and stop bashing something they don't know about.

EDIT: When I say people I'm not directing this to anyone in particular. There's a lot of people in TPU and outside of it that don't understand that difference.

PhysX is a physics API that can run on the CPU like Havok or any other, but it has one particularity: it can run on the GPU too, allowing for much more detail thanks to the vastly superior floating point power of GPUs.

http://www.brightsideofnews.com/new...-bullet3b-are-havok-and-physx-in-trouble.aspx

In case you wondered, according to Game Developer Magazine the world's most popular physics API is nVidia PhysX, with 26.8% market share [if there was any doubt that nVidia PhysX isn't popular, which was the defense line from many AMD employees], followed by Intel's Havok and its 22.7% - but the open-sourced Bullet Physics Library is third with 10.3%.
We have no doubt that going OpenCL might be the golden ticket for Bullet. After all, Maxon chose Bullet Physics Library for its Cinema 4D Release 11.5 [Cinebench R11 anyone?].

A lot of games use PhysX nowadays, to name a few: Unreal Tournament (without the add-on too), Gears of War, Mass Effect, NFS: Shift. All of them use the CPU PhysX and I have yet to see a complaint about those games.

What makes less sense to me is that the same people who support OpenCL physics, because it will make developers' lives easier by only having to make one physics path, are asking developers to make a physics engine with very different paths so that it can run on different CPUs, and to design the game with very different levels of physics, the latter being the part that would add more work. An experienced coder can implement a physics engine in under a week (optimizing it is another story), but changing the level of detail of the physics on all the levels (maps, whatever) takes months for a lot of artists.

EDIT: As much as this might surprise you, it would be much easier for a developer to maintain one level of physics detail and code the engine for two different languages (CUDA and Stream, for example) to move those physics than to create two different levels.

I don't know why everybody keeps saying this.
If you mean compute shaders, then it won't matter until someone actually uses them for a physics engine. Compute shaders by themselves are nothing more than what OpenCL & CUDA are.

My opinion about this matter is: what can be threaded easily, should be threaded. No excuses. I mean, the CUDA version is obviously already threaded, isn't it ...
Doesn't matter how you look at it, nVidia is holding the CPU version back in favour of the CUDA version. Havok uses multiple cores BTW.

I doubt it. Unless they make different paths with different requirements, a developer can't make the engine use many cores by default, because it wouldn't run on slower CPUs. I don't remember seeing a physics slider in any game using Havok either, but there could be one.
 
Last edited:
I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.

Not sure how it all went down, but I do know ATI did not refuse. They tried to get Nvidia to agree to one system for physics calculations and leave it open to the developers. This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree to this, so they told ATI to piss off. ATI then put a lot of effort behind helping Havok as a purely software physics solution.

How is Havok doing? I haven't heard anything from them in a while.
 
Havok is still used in many games today, many more than physx is

The problem with PhysX is that even though you can run it on a CPU, on newer games like Batman try turning it up and see what happens: either A. you will get horrible fps unless you have a very good ATI card, or B. it will crash.

And if you have an ATI card on 7 or higher you're kinda screwed, because in newer drivers Nvidia has killed support if you have an ATI card present, period. If you have an ATI card and an Nvidia card, then you're not getting PhysX acceleration even if your Nvidia card supports it, unless it's the primary adapter, which renders the ATI card useless.

And what does Nvidia say about this?
Nvidia supports GPU accelerated PhysX on NVIDIA GPUs while using NVIDIA GPUs for graphics. NVIDIA performs extensive engineering, development, and QA work that makes PhysX a great experience for customers. For a variety of reasons - some development expense, some quality assurance, and some business reasons - NVIDIA will not support GPU accelerated PhysX with NVIDIA GPUs while GPU rendering is happening on non-NVIDIA GPUs.

And the ones who get hurt the most by this are not the customers; I think it's the developers.

But you have an 8800GTS, so I am confused as to why it's not working right.
 
Not sure how it all went down, but I do know ATI did not refuse. They tried to get Nvidia to agree to one system for physics calculations and leave it open to the developers. This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree to this, so they told ATI to piss off. ATI then put a lot of effort behind helping Havok as a purely software physics solution.

How is Havok doing? I haven't heard anything from them in a while.

Havok is owned by Intel; it's not like a separate thing or anything, it's just software owned by Intel. It's pretty basic, most games use it, it's nothing like proper physics.
Wait, yes it is a separate thing, it's an Irish company, but they are owned by Intel I think?
 
Not sure how it all went down, but I do know ATI did not refuse. They tried to get Nvidia to agree to one system for physics calculations and leave it open to the developers. This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree to this, so they told ATI to piss off. ATI then put a lot of effort behind helping Havok as a purely software physics solution.

How is Havok doing? I haven't heard anything from them in a while.

What AMD was asking for was to drop PhysX altogether and postpone GPU accelerated physics until there was an open API that would run on everything. They did say that Nvidia could offer PhysX to an open standardization body (i.e. Khronos) and then they would "use PhysX", although indirectly. Obviously Nvidia didn't want to wait 2+ years to offer something they could offer back then, so they continued with PhysX through CUDA, while working closely with Khronos on OpenCL and with MS on DX11. What I still don't understand is how they can support Havok without the need for any standardization process.

Havok is still used in many games today, many more than physx is.

Nope, read post #10. Before reading that a week ago or so, I thought Havok was more widely used too. But it's not. PhysX is much, much cheaper for developers than Havok, BTW. It's even free if you don't want access to the source code...
 
Last edited:
8800GTS so I am confused as to why it's not working right.

It may be disabled in the Nvidia Control Panel as it is by default.

Havok is owned by Intel; it's not like a separate thing or anything, it's just software owned by Intel. It's pretty basic, most games use it, it's nothing like proper physics.
Wait, yes it is a separate thing, it's an Irish company, but they are owned by Intel I think?

Not sure who owns them, but if they come through on the physics calculation plugin thingy they were talking about last time I checked on Havok, it will be proper physics. It will just be designed to run on whatever GPU you are using in your system. That project was a joint venture between Havok, Intel, and AMD/ATI. It was called Havok FX.
 
Last edited by a moderator:

Havok FX was another thing. It was effects physics on ATI and Nvidia GPUs, before Intel bought Havok (guess why) and much earlier than Nvidia buying Ageia. Havok FX was the response from ATI and Nvidia to Ageia's PPU. Then Intel bought Havok and everything became, how to say... cloudy. :)

Havok=Intel and AMD are working on GPU accelerated physics, but it's not called Havok FX, unless they reused the name for something that is almost completely different. GPU physics = PhysX = whatever that project is called != effects physics. Effects physics are the ones in which there are no interactions; that is, you could break something into hundreds of pieces (which would trigger a change from a solid to a bunch of particles) and those would fall to the floor realistically, and maybe even the wind or something could deviate them, but once on the floor the player wouldn't be able to move them; nothing would.
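In code terms the difference is roughly this (a toy sketch, not Havok FX or PhysX): gameplay physics feeds its results back into collision and game state, while effects physics is integrated for looks only and nothing ever reads it back:

Code:
#include <vector>

struct Particle  { float pos[3]; float vel[3]; };   // debris shard, spark, leaf...
struct RigidBody { float pos[3]; float vel[3]; };   // crate, barrel, ragdoll limb...

struct World {
    std::vector<RigidBody> gameplayBodies;  // the player can push these; they block movement
    std::vector<Particle>  effectDebris;    // purely visual; game logic never reads it

    void step(float dt) {
        // Gameplay bodies: integrated AND fed back into collision/game state,
        // so the player can kick a barrel and the barrel can block a doorway.
        for (auto& b : gameplayBodies) {
            b.vel[1] -= 9.81f * dt;
            for (int a = 0; a < 3; ++a) b.pos[a] += b.vel[a] * dt;
            // ...resolve contacts against the level and the player here...
        }
        // Effect debris: integrated for looks only. Gravity or wind can move it,
        // but nothing reads it back -- once it lands, nobody can nudge it.
        for (auto& p : effectDebris) {
            p.vel[1] -= 9.81f * dt;
            for (int a = 0; a < 3; ++a) p.pos[a] += p.vel[a] * dt;
        }
    }
};

int main() { World w; w.step(1.0f / 60.0f); }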

Havok=Intel is hardly the answer anyway. When it comes to making a fair engine that runs well on everything, I would trust Nvidia over Intel any day; remember who's been paying and blackmailing PC vendors not to use competitors' products. Besides, Larrabee will be so different that no doubt they will optimize Havok to run on that architecture much, much better than on ATI and Nvidia GPUs. Nvidia's and ATI's architectures are like twins compared to what Larrabee will be.
 
Last edited:
You know, I am getting sick of these random pissing contests between those three. Granted, it is funny at times, especially when Intel and Nvidia ignore AMD while AMD is steadily pulling itself up by the bootstraps (to coin a phrase).

Personal favorite pissing contest moment: Intel and Nvidia openly arguing about integrated graphics "solutions", then AMD quietly releasing the 780G, which at the time made all other IGPs look pathetic.
 

Yeah, I'm sick of it too, but reality is reality, and the reality is that there's no physics developer without vested interests that has a solution capable of fighting those two. Before Intel and Nvidia bought them, both Havok and Ageia bought almost every serious competitor, including the one that was best IMO, Meqon physics (bought by Ageia). Those guys were offering things similar to Euphoria back in 1999. Duke Nukem Forever was going to use it for a level of physics and interactivity never seen before, purportedly much, much better than Half-Life 2.

I personally want much better physics in games, the kind of physics that only hardware accelerated physics can offer, and I want them the sooner the better. It's not the first time I've said this. That's why I always supported PhysX: because they were the only ones offering a revolution, and they were offering it now (erm, back then :)). Honestly, even today, I don't care what OpenCL or DX11 will offer in that regard, because whatever they offer, even if it's 10 times better and 10 times easier for developers, it won't happen until late 2010 at the soonest. That doesn't mean that I don't support them, but from a distance, because as things are now, supporting them means burying PhysX, and I want something until OpenCL and DX11 accelerated solutions are ready. They are nothing more than a paper launch today.
 
Interesting OP. I wonder if that is happening in other CPU PhysX games like Batman (retail) or Shift? From that author's POV he calls it flat-out sabotage. And it's interesting that if you reduce the number of threads you can reduce the frame rate.
 
Check out the non-GPU-accelerated physics in Red Faction: G, by Havok I believe.

I was so unimpressed with the physics in BM:AA..

GRID has all of the things BM:AA has (smoke, flags, breakables), and BM:AA did not even have breakable objects... (why would anyone want to run the suggested 9800 GTX for physics... so useless)
 
PhysX is emulated on CUDA, which is then being emulated in CPU code. That isn't exactly efficient; what did you all expect?

PhysX is inherently NOT multi-threaded. It is designed to run on a single PPU. Why would you expect it to suddenly become multi-threaded when run on a CPU?
 
Counter-Strike: Source is old, I know.. but I remember the very first time I ever played it, coming from 1.6, and I was amazed at how the barrels could be knocked over.. debris could be kicked.. ragdoll deaths...

It's been a long time since I've played a game that had such a great physics "feel".

Even Half-Life 2.. the first time I played and shot one of those guards.. it felt so realistic. We never needed old rebranded cards to do the job next to our main GPU. I know it's apples and oranges comparing Source to today's games, but I'm just saying..
 
Saying what? That the physics in old games don't even come close to current games?
 
Which games are you referring to that have such great physics?

Or have I just gotten spoiled?
 
You know who I would like to step in here? Microsoft. For the most part, Windows is the road these cards drive on, like we drive our cars down the road. We as motorists are restricted to a set of standards (speed limits, safety equipment, etc.) that we must conform to. I'd like to see Microsoft step up and say "OK, this is the way it is going to be done." Set up standards for Windows and work in collaboration with hardware manufacturers. Have unified physics and the like, and let the video card companies duel it out through performance.
 