
PhysX only using one cpu core?

Discussion in 'Graphics Cards' started by MrMilli, Sep 27, 2009.

  1. MrMilli

    MrMilli

    Joined:
    Mar 1, 2008
    Messages:
    216 (0.09/day)
    Thanks Received:
    35
    Location:
    Antwerp, Belgium
    http://techreport.com/articles.x/17618/13

    http://forums.overclockers.co.uk/showpost.php?p=14687943&postcount=4

    I'll quote:
    In Batman demo:
     
  2. erocker

    erocker Super Moderator Staff Member

    Joined:
    Jul 19, 2006
    Messages:
    39,828 (13.16/day)
    Thanks Received:
    14,202
    I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way, with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.
     
  3. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.43/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    I don't understand what they expect exactly. If one core gives 3 fps, 8 cores would give 24 fps, still very far from a good experience. And that's on a fast Core i7. An i7 920 would give what, 20 fps? And what about Core 2, which is what most people have? The i7 has roughly 50% more raw floating-point power per core than Core 2, and with only 4 cores we would be in the 10-12 fps realm. Want to start talking about dualies, which is what most people still have?

    What do they expect? That developers create 8 different paths with different levels of detail and threading, so that every machine without a GeForce can have increasingly better physics, which will still be very far off from what the weakest of GeForces can do??

    No, friends, what makes more sense is to make 2 paths: one that will run on almost everything out there, and one that uses GPU-accelerated physics on GeForces, which by market share are 2/3 of the graphics cards out there. Hence it uses only 1 CPU core, because:

    1- The GPU is much more powerful anyway; the power that even a quad would add is irrelevant.
    2- It can run on everything out there, including old dualies, whose second core could be full of secondary processes, etc.

    Enabling GPU-accelerated physics and expecting to see CPU load, when that's highly unnecessary, is dumb IMHO. I like TechReport as much as anyone; it's the second site I go to for reviews after TPU, but there they are just being a little bit dumb.
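
    The core-scaling arithmetic traded back and forth in this thread (3 fps on one core, times 8 cores) can be sketched in a few lines. This is a hypothetical back-of-the-envelope model assuming perfectly linear scaling, which real physics workloads rarely achieve:

    ```python
    # Idealized projection of CPU PhysX frame rates if the work scaled
    # perfectly across cores. All inputs are illustrative figures from
    # the discussion, not measurements.

    def projected_fps(single_core_fps, cores, per_core_ratio=1.0):
        """Best-case fps under perfectly linear multi-core scaling."""
        return single_core_fps * cores * per_core_ratio

    # 3 fps on one fast Core i7 core, spread over 8 cores/threads:
    i7_best_case = projected_fps(3.0, 8)        # 24 fps, still low

    # Core 2 quad, assuming ~2/3 of the i7's per-core FP throughput:
    core2_quad = projected_fps(3.0, 4, 2 / 3)   # 8 fps
    ```

    Even under this unrealistically optimistic model, the projected frame rates stay well below a playable target, which is the point being argued here.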
     
  4. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    8,380 (2.55/day)
    Thanks Received:
    1,230
    Suck some NV elsewhere. Multi-threaded applications are no longer the odd man out in an age of 12-core processors and common machines having at least two cores.



    The process manager isn't even close to maxed out on one CPU HT core, let alone any more of the eight HT cores available. It looks to be using spikes of CPU power, and only on one HT core. The point of the article is to show that Nvidia is causing the game to run poorly, and the game developer is causing issues by allowing such crap to be produced.


    So one HT core giving 3 FPS, times 8 HT cores, is 24 FPS, the same framerate many consoles are limited to. But let's just throw that out and focus instead on how to insult other members.
     
    10 Million points folded for TPU
  5. [I.R.A]_FBi

    [I.R.A]_FBi New Member

    Joined:
    May 19, 2007
    Messages:
    7,664 (2.82/day)
    Thanks Received:
    540
    Location:
    c:\programs\kitteh.exe
    jah know nv at it again ...
     
  6. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.43/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Nvidia is not causing poor performance; enabling a mode that is only supposed to run on the GPU is.

    If you disable the GPU-accelerated PhysX, the game will play like a charm. Again, you can't enable GPU-accelerated physics mode and expect to see CPU load, you can't. You can't expect to have the same improved PhysX running on the CPU either, simply because the CPU can't keep up with the power needed. Not even the i7 965 could keep up, let alone a Core 2 6400, for example. They are saying 14%; that's more than 1 "core" on the i7: 100/8 = 12.5.
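
    The 100/8 figure above is just the share that one fully loaded core contributes to overall CPU usage on an 8-thread chip; as a quick check:

    ```python
    # On an 8-thread CPU, one fully busy core shows up in a task
    # manager as 100/8 = 12.5% overall usage, so a reported 14% is
    # slightly more than one core's worth of load.
    threads = 8
    one_core_share = 100 / threads            # 12.5 (percent)
    cores_equivalent = 14.0 / one_core_share  # ~1.12 cores busy
    ```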

    BTW, any CPU will show those spikes in single-threaded applications. That's because it uses the next ALU available, and that's always the one in the next core, but since it's single-threaded it has to wait for the previous one to finish, amounting to the equivalent of one core being in use. That method increases reliability and improves temperatures too.

    And like erocker said, it was AMD who refused to use PhysX; Nvidia offered it to AMD for free. They refused for obvious reasons, none of them being to do the best for the consumer. They didn't even allow third parties to do the work, even though Nvidia supported them.

    http://www.bit-tech.net/news/2008/07/09/nvidia-helping-to-bring-physx-to-ati-cards/1

     
    Last edited: Sep 27, 2009
  7. FordGT90Concept

    FordGT90Concept "I go fast!1!11!1!"

    Joined:
    Oct 13, 2008
    Messages:
    13,838 (6.26/day)
    Thanks Received:
    3,706
    Location:
    IA, USA
    I agree. Physics calculations are highly parallel in nature, so they should multithread excellently. NVIDIA decided to focus on PhysX for GPU only, so without the GPU, PhysX is a massive burden, being poorly optimized for CPU load. Most games don't even use physics enough to warrant using a GPU anyway. I question everything about PhysX (the premise, the execution, the strategy, etc.).

    DirectX 11 will murder PhysX because it will work on CPU and GPU. Instead of using, for instance, Havok for CPU and PhysX for GPU, just use DX11 and be done with it.
     
    Last edited: Sep 27, 2009
    Crunching for Team TPU
  8. MrMilli

    MrMilli

    Joined:
    Mar 1, 2008
    Messages:
    216 (0.09/day)
    Thanks Received:
    35
    Location:
    Antwerp, Belgium
    I don't know why everybody keeps saying this.
    If you mean compute shaders, then it won't matter until someone actually uses them for a physics engine. Compute shaders by themselves are nothing more than what OpenCL & CUDA are.

    My opinion about this matter is: what can be threaded easily should be threaded. No excuses. I mean, the CUDA version is obviously already threaded, isn't it ...
    Doesn't matter how you look at it, nVidia is holding the CPU version back in favour of the CUDA version. Havok uses multiple cores, BTW.
     
  9. FordGT90Concept

    FordGT90Concept "I go fast!1!11!1!"

    Joined:
    Oct 13, 2008
    Messages:
    13,838 (6.26/day)
    Thanks Received:
    3,706
    Location:
    IA, USA
    Nothing is multithreaded easily, but there are a lot of things that benefit greatly from it, and physics for games is one of them.
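
    One reason game physics parallelizes well is that per-body integration steps within a frame are independent of each other. A minimal toy sketch of that shape (hypothetical, not how PhysX is actually structured; note that in CPython the GIL limits real thread speedup, but the data-parallel structure is what maps onto native threads or GPU lanes):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    DT = 0.016  # one 60 Hz frame, in seconds

    def integrate(body):
        """Advance one body by a simple Euler step under gravity."""
        pos, vel = body
        return (pos + vel * DT, vel - 9.81 * DT)

    bodies = [(0.0, 1.0)] * 10_000  # (position, velocity) pairs

    # Each update touches only its own body, so the list splits
    # cleanly across worker threads with no locking between elements.
    with ThreadPoolExecutor(max_workers=4) as pool:
        updated = list(pool.map(integrate, bodies))
    ```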
     
    Crunching for Team TPU
  10. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.43/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Honestly, people should learn the difference between PhysX and GPU accelerated PhysX and stop bashing something they don't know about.

    EDIT: When I say people I'm not directing this to anyone in particular. There's a lot of people in TPU and outside of it that don't understand that difference.

    PhysX is a physics API that can run on the CPU like Havok or any other, but it has one particularity: it can run on the GPU too, allowing for much more detail thanks to the vastly superior floating-point power of GPUs.

    http://www.brightsideofnews.com/new...-bullet3b-are-havok-and-physx-in-trouble.aspx

    A lot of games use PhysX nowadays; to name a few: Unreal Tournament (without the add-on too), Gears of War, Mass Effect, NFS: Shift. All of them use CPU PhysX, and I have yet to see a complaint about those games.

    What makes less sense to me is that the same people who support OpenCL physics, because it will make developers' lives easier by having to make only one physics path, are asking developers to make a physics engine with very different paths so that it can run on different CPUs, and to design games with very different levels of physics, the latter being the thing that would add more work. An experienced coder can implement a physics engine in under a week (optimizing it is another story), but changing the level of detail of the physics on all the levels (maps, whatever) takes months for a lot of artists.

    EDIT: As much as this might surprise you, it would be much easier for a developer to maintain the level of physics detail and code the engine in two different languages (CUDA and Stream, for example) to move those physics than to create two different levels.

    I doubt it. Unless they make different paths with different requirements, a developer can't make the engine use many cores by default, because it wouldn't run on slower CPUs. I don't remember having a physics slider in any game using Havok either, but I could be wrong.
     
    Last edited: Sep 27, 2009
  11. TheLaughingMan

    TheLaughingMan

    Joined:
    May 7, 2009
    Messages:
    4,998 (2.50/day)
    Thanks Received:
    1,291
    Location:
    Marietta, GA USA
    Not sure how it all went down, but I do know ATI did not refuse. They tried to get Nvidia to agree to a single system for physics calculations and leave it open to the developers. This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree to this, so they told ATI to piss off. Then ATI put a lot of effort behind helping Havok as a purely software physics solution.

    How is Havok doing? I haven't heard anything from them in a while.
     
  12. KainXS

    KainXS

    Joined:
    Sep 25, 2007
    Messages:
    5,601 (2.16/day)
    Thanks Received:
    502
    Havok is still used in many games today, many more than PhysX is.

    The problem with PhysX is that even though you can run it on a CPU, on newer games like Batman, try turning it up and see what happens: either A. you will get horrible fps unless you have a very good ATI card, or B. it will crash.

    And if you have an ATI card on 7 or higher you're kinda screwed, because in newer drivers Nvidia has killed support if you have an ATI card, period. If you have an ATI card and an Nvidia card, you're not getting PhysX acceleration even if your Nvidia card supports it, unless it's the primary adapter, which renders the ATI card useless.

    And what does Nvidia say about this? The ones who get hurt the most by this are not the customers; I think it's the developers.

    But you have an 8800GTS, so I am confused as to why it's not working right.
     
  13. KieranD

    KieranD

    Joined:
    Aug 16, 2007
    Messages:
    8,043 (3.05/day)
    Thanks Received:
    822
    Location:
    Glasgow, Scotland
    Havok is owned by Intel; it's not like a separate thing or anything, it's just software owned by Intel. It's pretty basic, most games use it, it's nothing like proper physics.
    Wait, yes it is, it's an Irish company, but they are owned by Intel I think?
     
  14. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.43/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    What AMD was asking for was to leave PhysX altogether and postpone GPU-accelerated physics until there was an open API that would run on everything. They did say that Nvidia could offer PhysX to an open standardization body (i.e. Khronos) and then they would "use PhysX", although indirectly. Obviously Nvidia didn't want to wait 2+ years to offer something they could offer back then, so they continued with PhysX through CUDA, while working closely with Khronos on OpenCL and with MS on DX11. What I still don't understand is how they support Havok without the need of any standardization process.

    Nope, read post #10. Before reading that a week ago or so, I thought Havok was more widely used too. But it's not. PhysX is much, much cheaper for developers than Havok, BTW. It's even free if you don't want access to the source code...
     
    Last edited: Sep 27, 2009
  15. TheLaughingMan

    TheLaughingMan

    Joined:
    May 7, 2009
    Messages:
    4,998 (2.50/day)
    Thanks Received:
    1,291
    Location:
    Marietta, GA USA
    It may be disabled in the Nvidia Control Panel as it is by default.

    Not sure who owns them, but if they come through on the physics calculation plugin thingy they were talking about last time I checked on Havok, it will be proper physics. It will just be designed to run on whatever GPU you are using in your system. That project was a joint venture between Havok, Intel, and AMD/ATI. It was called Havok FX.
     
  16. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.43/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Havok FX was another thing. It was effects physics on ATI and Nvidia GPUs, before Intel bought Havok (guess why) and much earlier than Nvidia bought Ageia. Havok FX was the response from ATI and Nvidia to Ageia's PPU. Then Intel bought Havok and everything became, how to say... cloudy. :)

    Havok=Intel and AMD are working on GPU-accelerated physics, but it's not called Havok FX, unless they reused the name for something that is almost completely different. GPU physics = PhysX = whatever that project is called != effects physics. Effects physics are the ones in which there are no interactions; that is, you could break something into hundreds of pieces (that would trigger a change from a solid to a bunch of particles) and those would fall to the floor realistically, and maybe even the wind or something could deviate them, but once on the floor the player wouldn't be able to move them; nothing would.

    Havok=Intel is hardly the answer anyway. For making a fair engine that runs well on everything, I would trust Nvidia over Intel any day; remember who's been paying and blackmailing PC vendors not to use competitors' products. Besides, Larrabee will be so different that no doubt they will optimize Havok to run on that architecture much, much better than on ATI and Nvidia GPUs. Nvidia's and ATI's architectures are like twins compared to what Larrabee will be.
     
    Last edited: Sep 27, 2009
  17. TheLaughingMan

    TheLaughingMan

    Joined:
    May 7, 2009
    Messages:
    4,998 (2.50/day)
    Thanks Received:
    1,291
    Location:
    Marietta, GA USA
    You know, I am getting sick of these random pissing contests between those 3. Granted, it is funny at times, especially when Intel and Nvidia ignore AMD while AMD is steadily pulling itself up by the bootstraps (to coin a phrase).

    Personal favorite pissing contest moment: Intel and Nvidia openly arguing about integrated graphics "solutions", then AMD quietly releasing the 780G, which at the time made all other IGPs look pathetic.
     
  18. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.43/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Yeah, I'm sick of it too, but reality is reality, and the reality is that there's no disinterested physics developer with a solution capable of fighting those two. Before Intel and Nvidia bought them, both Havok and Ageia bought almost every serious competitor, including the one that was best IMO, Meqon physics (bought by Ageia). Those guys were offering things similar to Euphoria back in 1999. Duke Nukem Forever was going to use it for a level of physics and interactivity never seen before, purportedly much, much better than Half-Life 2.

    I personally want much better physics in games, the kind that only hardware-accelerated physics can offer, and I want it the sooner the better. It's not the first time I've said this. That's why I always supported PhysX: because they were the only ones offering a revolution, and they were offering it now (erm, back then :)). Honestly, even today, I don't care what OpenCL or DX11 will offer in that regard, because whatever they offer, even if it's 10 times better and 10 times easier for developers, it won't happen until late 2010 at the soonest. That doesn't mean I don't support them, but from a distance, because as things are now, supporting them means burying PhysX, and I want something until the OpenCL and DX11 accelerated solutions are ready. They are nothing more than a paper launch today.
     
  19. EastCoasthandle

    EastCoasthandle New Member

    Joined:
    Apr 21, 2005
    Messages:
    6,889 (1.98/day)
    Thanks Received:
    1,505
    Interesting OP. I wonder if that is happening in other CPU PhysX games like Batman (retail) or Shift? From that author's POV it's flat-out sabotage. And interestingly, if you reduce the number of threads you reduce the frame rate.
     
  20. shevanel

    shevanel New Member

    Joined:
    Jul 27, 2009
    Messages:
    3,479 (1.81/day)
    Thanks Received:
    406
    Location:
    Leesburg, FL
    Check out the non-GPU-accelerated physics in Red Faction: Guerrilla, by Havok I believe.

    I was so unimpressed with the physics in BM:AA..

    GRID has all of the things BM:AA has (smoke, flags, breakables), and BM:AA did not even have breakable objects... (why would anyone want to run the suggested 9800GTX for physics... so useless)
     
  21. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    20,063 (6.14/day)
    Thanks Received:
    6,122
    PhysX is emulated on CUDA, which is then being emulated in CPU code. That isn't exactly efficient; what did you all expect?

    PhysX is inherently NOT multi-threaded. It is designed to run on a single PPU. Why would you expect it to suddenly become multi-threaded when run on a CPU?
     
    Crunching for Team TPU 50 Million points folded for TPU
  22. shevanel

    shevanel New Member

    Joined:
    Jul 27, 2009
    Messages:
    3,479 (1.81/day)
    Thanks Received:
    406
    Location:
    Leesburg, FL
    Counter-Strike: Source is old, I know... but I remember the very first time I ever played it, coming from 1.6, and I was amazed at how the barrels could be knocked over, debris could be kicked, the rag-doll deaths...

    It's been a long time since I've played a game that had such a great physics "feel".

    Even Half-Life 2... the first time I played and shot one of those guards, it felt so realistic. We never needed old rebranded cards to do the job next to our main GPU. I know it's apples and oranges comparing Source to today's games, but I'm just saying...
     
  23. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    20,063 (6.14/day)
    Thanks Received:
    6,122
    Saying what? That the physics in old games don't even come close to current games?
     
    Crunching for Team TPU 50 Million points folded for TPU
  24. shevanel

    shevanel New Member

    Joined:
    Jul 27, 2009
    Messages:
    3,479 (1.81/day)
    Thanks Received:
    406
    Location:
    Leesburg, FL
    Which games are you referring to that have such great physics?

    Or have I just gotten spoiled?
     
  25. erocker

    erocker Super Moderator Staff Member

    Joined:
    Jul 19, 2006
    Messages:
    39,828 (13.16/day)
    Thanks Received:
    14,202
    You know who I would like to see step in here? Microsoft. For the most part, Windows is the road these cards drive on, like we drive our cars down the road. As motorists we are restricted to a set of standards (speed limits, safety equipment, etc.) that we must conform to. I'd like to see Microsoft step up and say, "OK, this is the way it is going to be done": set up standards for Windows, work in collaboration with hardware manufacturers, have unified physics and the like, and let the video card companies duel it out through performance.
     
    tigger says thanks.
