
PhysX only using one cpu core?

EastCoasthandle

New Member
Joined
Apr 21, 2005
Messages
6,885 (0.99/day)
System Name MY PC
Processor E8400 @ 3.80Ghz > Q9650 3.60Ghz
Motherboard Maximus Formula
Cooling D5, 7/16" ID Tubing, Maze4 with Fuzion CPU WB
Memory XMS 8500C5D @ 1066MHz
Video Card(s) HD 2900 XT 858/900 to 4870 to 5870 (Keep Vreg area clean)
Storage 2
Display(s) 24"
Case P180
Audio Device(s) X-fi Plantinum
Power Supply Silencer 750
Software XP Pro SP3 to Windows 7
Benchmark Scores This varies from one driver to another.
It is my understanding that the use of PhysX (as an API) shows up more in I/O activity than in plain CPU activity. Has anyone looked into this?
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
It is my understanding that the use of PhysX (as an API) shows up more in I/O activity than in plain CPU activity. Has anyone looked into this?

It depends. If the calculations are done on the CPU, CPU activity will obviously outweigh I/O; but when the GPU is doing the calculations, then probably yes, although the branching work still stays on the CPU. Anyway, I have always understood that reported CPU activity reflects load on the instruction decoder rather than on the ALUs.

Anyway, I just tried Batman and I guess it uses as many cores as it needs on my PC, because I saw 60%+ CPU load with the PhysX extensions enabled in-game and PhysX acceleration turned off in the control panel, so everything ran on the CPU. It was not playable. Here:



It's curious, because maximum CPU load happened close to the smoke, yet even there the framerate is better than anywhere else you move. Apparently the cape eats up more GPU resources.

CPU load was around 50% with PhysX acceleration on, and gameplay was smooth on my 8800 GT (30 fps low, 60 high). Everything maxed out, including physics. The recommended GTX 260 + 9800 GTX+ dedicated to PhysX is certainly a marketing joke.
 
Joined
Mar 1, 2008
Messages
282 (0.05/day)
Location
Antwerp, Belgium
Man, a lot of the rendering process happens on the CPU. The graphics card needs to be fed by the CPU, among other things, and a faster CPU almost always gives better fps.

I don't think the zombies collide with each other, which would add a lot of processing, so there isn't a lot of physics going on there IMO. I've just played to make sure, and they do not collide. Collisions in Source are based on the typical hit-box scheme anyway; it's fairly simple. I don't know if there is any slowdown when the zombie hordes attack, because it never goes below 60 fps. But the fact that it always remains above 60 fps on a single core tells it all anyway.

It's curious, because maximum CPU load happened close to the smoke, yet even there the framerate is better than anywhere else you move. Apparently the cape eats up more GPU resources.

You really don't need to take me to school. I know how rendering in DirectX is done. :)
While a lot of the rendering used to be done on the CPU in the pre-DX8 era, with each new DX iteration less and less work is done by the CPU for the actual rendering (and there is less CPU overhead).

While I have no doubt that a single core can run 60+ fps most of the time in L4D, I highly doubt that it can maintain that during a final battle (or you must be running a Core 2 close to 4 GHz).

About the smoke: these days that's done through particle effects, which run on the physics engine. Cape simulation is done by the physics engine too (unlike older games, where such a thing was pre-calculated).

This is a nice read for you guys:
http://arstechnica.com/gaming/news/2006/11/valve-multicore.ars
Basically, they need multi-threading for physics and AI.
 

Benetanegia

You really don't need to take me to school. I know how rendering in DirectX is done. :)
While a lot of the rendering used to be done on the CPU in the pre-DX8 era, with each new DX iteration less and less work is done by the CPU for the actual rendering (and there is less CPU overhead).

While I have no doubt that a single core can run 60+ fps most of the time in L4D, I highly doubt that it can maintain that during a final battle (or you must be running a Core 2 close to 4 GHz).

About the smoke: these days that's done through particle effects, which run on the physics engine. Cape simulation is done by the physics engine too (unlike older games, where such a thing was pre-calculated).

This is a nice read for you guys:
http://arstechnica.com/gaming/news/2006/11/valve-multicore.ars
Basically, they need multi-threading for physics and AI.

Who's taking who to school now? :laugh:

Seriously, we both know what we are talking about; we just have a different view of the weight of each task within the pipeline. After reading your link I'm even more convinced of my POV regarding L4D and the topic at hand, physics. Rendering and AI (mainly pathfinding) heavily outweigh physics processing in that game, so IMO any performance increase from adding multiple threads comes from the expanded rendering and AI capabilities and much less from physics processing.

Another thing is that physics calculations (like sound and AI) are not part of the rendering pipeline, at least in most games. That's why I didn't assume that multi-threaded rendering meant the use of more than one core. I thought the game was already multi-threaded (with physics and AI in other threads), but that when MR was enabled the rendering tasks were split into more threads too. What they are calling multi-threaded rendering, I would call multi-threaded execution or something like that.

:toast:
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
What gets me is that the CPUs are advancing just as fast as GPUs. Why move tasks off the CPU just to leave the CPU bored and wasting power? Optimizing a game means taking a big workload and finding ways to make it consume much less. If it takes PhysX 1 billion cycles to do something Havok can emulate with 1 million cycles, why not use the option that is 1000x more efficient? Few would be the wiser.

Oh, and zombie AI is pretty simple. Most of the work in a game like L4D comes from none other than animations. When you have a horde of zombies running at you, that's a crapload of triangles to update in a frame. Model/character animation is a joint CPU and GPU task: the GPU takes care of most of the visuals while the CPU handles synchronizing the visuals with the backend (AI, position, etc.). The GPU obviously bears the majority of the load because the CPU has much more complex things to be concerned with (like collision detection).
 

Benetanegia

What gets me is that the CPUs are advancing just as fast as GPUs. Why move tasks off the CPU just to leave the CPU bored and wasting power?

First of all, because CPUs have not advanced as fast as GPUs. GPUs also have much greater floating-point power to boot and are parallel by default, so they are much better suited for physics. Anyway, GPU-accelerated PhysX doesn't use much less CPU; the idea is that while still using all the available CPU power, you also use all the available GPU power to do 20x more physics calculations.

Optimizing a game means taking a big workload and finding ways to make it consume much less. If it takes PhysX 1 billion cycles to do something Havok can emulate with 1 million cycles, why not use the option that is 1000x more efficient? Few would be the wiser.

PhysX doesn't require any more power than Havok.

Oh, and zombie AI is pretty simple. Most of the work in a game like L4D comes from none other than animations. When you have a horde of zombies running at you, that's a crapload of triangles to update in a frame. Model/character animation is a joint CPU and GPU task: the GPU takes care of most of the visuals while the CPU handles synchronizing the visuals with the backend (AI, position, etc.). The GPU obviously bears the majority of the load because the CPU has much more complex things to be concerned with (like collision detection).

Path finding >> collision detection. At least if the game has been properly coded. Position updates for objects/characters are always active, for obvious reasons. Path finding must also be active at all times, as long as the AI is within the player's zone (as specified by the devs). Collision detection only needs to kick in when two objects are close, and most of the time only if the object/character is on screen.
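
To put that in concrete terms, here's a minimal sketch (hypothetical names, not taken from any particular engine) of the kind of cheap broad-phase check that lets the expensive collision work kick in only when two objects are actually close and relevant:

Code:
struct Entity {
    float x, y, z;      // world position
    float radius;       // bounding-sphere radius
    bool  onScreen;     // set by the renderer's visibility pass
};

// Cheap broad-phase test: only entities that are near each other (and,
// for purely visual effects, visible) get handed to the expensive
// narrow-phase collision/physics code.
bool needsNarrowPhase(const Entity& a, const Entity& b) {
    if (!a.onScreen && !b.onScreen) return false;          // skip off-screen pairs
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    const float reach = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= reach * reach;   // squared distance, no sqrt needed
}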

The rest of what you said is correct, which supports my original claim:

In games like L4D, physics <<<<<<<<< everything else.
 
Joined
May 5, 2009
Messages
2,270 (0.42/day)
Location
the uk that's all you need to know ;)
System Name not very good (wants throwing out window most of time)
Processor xp3000@ 2.17ghz pile of sh** /i7 920 DO on air for now
Motherboard msi kt6 delta oap /gigabyte x58 ud7 (rev1.0)
Cooling 1 green akasa 8cm(rear) 1 multicoloured akasa(hd) 1 12 cm (intake) 1 9cm with circuit from old psu
Memory 1.25 gb kingston hyperx @333mhz/ 3gb corsair dominator xmp 1600mhz
Video Card(s) (agp) hd3850 not bad not really suitable for mobo n processor/ gb hd5870
Storage wd 320gb + samsung 320 gig + wd 1tb 6gb/s
Display(s) compaq mv720
Case thermaltake XaserIII skull / coolermaster cm 690II
Audio Device(s) onboard
Power Supply corsair hx 650 w which solved many problems (blew up) /850w corsair
Software xp pro sp3/ ? win 7 ultimate (32 bit)
Benchmark Scores 6543 3d mark05 ye ye not good but look at the processor /uknown as still not benched
Here is a random screen shot!!!!

It will prove a point. I'm not sure what, but it will. :toast:

Yeah, thanks Steevo, it reminded me that GPU-Z is out; available here.
 
Joined
Jan 2, 2009
Messages
9,899 (1.77/day)
Location
Essex, England
System Name My pc
Processor Ryzen 5 3600
Motherboard Asus Rog b450-f
Cooling Cooler master 120mm aio
Memory 16gb ddr4 3200mhz
Video Card(s) MSI Ventus 3x 3070
Storage 2tb intel nvme and 2tb generic ssd
Display(s) Generic dell 1080p overclocked to 75hz
Case Phanteks enthoo
Power Supply 650w of borderline fire hazard
Mouse Some wierd Chinese vertical mouse
Keyboard Generic mechanical keyboard
Software Windows ten
Play the game without PhysX and notice that physics still happen!

Having it off means the calculations run on the CPU. Simples!

I've always been happy with the physics in games running just off the CPU.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
First of all, because CPUs have not advanced as fast as GPUs. GPUs also have much greater floating-point power to boot and are parallel by default, so they are much better suited for physics. Anyway, GPU-accelerated PhysX doesn't use much less CPU; the idea is that while still using all the available CPU power, you also use all the available GPU power to do 20x more physics calculations.
The GPU is already burdened with millions of triangles while the CPU is at < 50% load. Why add a physics workload to a device that is already strained?


PhysX doesn't require any more power than Havok.
Then there's no reason to use PhysX at all.


Path finding >> collision detection. At least if the game has been properly coded. Position updates for objects/characters are always active, for obvious reasons. Path finding must also be active at all times, as long as the AI is within the player's zone (as specified by the devs). Collision detection only needs to kick in when two objects are close, and most of the time only if the object/character is on screen.

The rest of what you said is correct, which supports my original claim:

In games like L4D, physics <<<<<<<<< everything else.
Collision detection is what keeps you, all movable entities, and perhaps AI characters from falling out of the play area. The number of calculations climbs when one entity (player or object) impacts another. Collision detection itself is a very simple greater-than/less-than check that happens frequently; it's what happens after a collision has been detected (entity velocity transferred to another entity) that can increase the workload dramatically.
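
For what it's worth, the "greater-than/less-than check" and the more expensive response step might look something like this minimal sketch (illustrative only, not taken from any shipping engine):

Code:
struct Box {
    float minX, maxX, minY, maxY, minZ, maxZ;   // axis-aligned bounds
    float vx, vy, vz;                           // velocity
};

// The cheap part: a pure greater-than/less-than test on each axis.
bool overlaps(const Box& a, const Box& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

// The part that grows after a hit is detected: resolving it. Even this toy
// version (swap velocities, like an equal-mass elastic collision) is more
// work than the test itself, and real engines add contact points, friction,
// stacking, and so on.
void resolve(Box& a, Box& b) {
    float tx = a.vx, ty = a.vy, tz = a.vz;
    a.vx = b.vx; a.vy = b.vy; a.vz = b.vz;
    b.vx = tx;   b.vy = ty;   b.vz = tz;
}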


Yes, L4D is a bad platform for the benchmarking of physics. Red Faction: Guerrilla would have been a better choice.
 

Benetanegia

The GPU is already burdened with millions of triangles while the CPU is at < 50% load. Why add a physics workload to a device that is already strained?

The GPU is not strained at all. The most powerful cards render games at 100 fps when 60 is the maximum you really need. But that point of view misses the point anyway: GPUs are not using all of their SPs when rendering, so it makes sense to use them for something. MIMD will make that even easier. Also, a Core i7 running at 4 GHz doesn't reach 100 GFLOPS, while current-gen graphics cards have almost 3000 GFLOPS. You could use 2500 for rendering and 500 for physics and you would never notice it.
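
As a rough sanity check on those figures (peak single-precision numbers, assuming a quad-core Nehalem-class i7 with 4-wide SSE and an HD 5870-class card):

\[
4\ \text{cores} \times 4\ \text{GHz} \times 8\ \tfrac{\text{FLOP}}{\text{cycle}} = 128\ \text{GFLOPS peak (sustained is well below that)}
\]
\[
1600\ \text{SPs} \times 0.85\ \text{GHz} \times 2\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 2720\ \text{GFLOPS peak}
\]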

Then there's no reason to use PhysX at all.

Whaaat? That makes no sense. Havok and PhysX use almost the same CPU resources to offer the same functionality when doing physics on the CPU, so how does that mean there's no need for PhysX? Is there no need for Havok either? As I said, it makes no sense.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
What good is 3000 GFLOPS if all they ever do is a bunch of stupid debris?! Ever seen Havok in action?

LINK:
http://www.havok.com/index.php?page=showcase

They don't mention doing it on the GPU, so I assume it's done on the CPU. Yet I can see ALL the effects done with Havok, plus many I've never seen with PhysX. Especially lame is the lack of simple stuff like flags in PhysX games when running on the CPU, whereas Havok simulates something like thousands of flags at once. The interior destruction also looks impressive. Or the cloth simulation.

But noooo, NVIDIA wants their crappy exclusive physics that no one wants except them. I know many GeForce users who said the effects are cool the first time you see them, but they become lame and boring very fast.
The only way to really evolve physics is to make an open standard. Otherwise developers have no incentive to waste time implementing something that only half of the users will be able to use.
And those who do are just paid by NVIDIA to do it. Nothing else.
People at NVIDIA obviously don't seem to understand the need for physics to evolve.

We've come a long way with graphics in 10 years, from cartoonish to near photo-realistic.
But what has happened with physics in those 10 years? We've gone from realistic ragdolls (most games), debris (Red Faction, Max Payne 2, Half-Life 2), destructible environments (Red Faction in 2001! and partially Max Payne 2 and Half-Life 2) and gameplay and puzzles that rely on physics (Half-Life 2 and partially Max Payne 2) to ragdolls and fucking debris only!? Where is everything else!?
What the hell!? Are we going backwards? Physics haven't evolved. They have devolved.
If you don't believe me, just look at the games and when it started happening.
Max Payne 2 was insane as far as overall physics are concerned. Ragdolls add an insane cinematic feeling, along with small objects like cans, tires, gas cans and bottles, and wall destruction (scripted, but when the walls came down they did it with realistic physics). Everything flies around in firefights, and when you slow down it looks even more dramatic. You shoot a guy with a shotgun and he flies over a table, sweeping glasses and bottles off it in all directions. And best of all, all the physics were done on the CPU back in freaking 2003! But today, in late 2009, NVIDIA is feeding us GPU-only bullshit that can only showcase a bunch of useless and boring effects like flags, papers flying around and some smoke. Are you kidding me?! You needed an uber-powerful GPU and 10 years for that!? I think that's just embarrassing at best, but only people at NVIDIA and some fanboys think PhysX is a great thing. I think it's not, and it's just damaging games, physics effects and gameplay.

We could already have physics that give the player the feeling of an actually believable world, where you step into it and everything just feels real. But because of "awesome" PhysX, we still have a 99% static world and 1% of the useless debris we saw a decade ago.
Way to go NVIDIA. Way to go (are you feeling the sarcasm here? :D ).

So spread the word and educate gamers about the damage PhysX and NVIDIA are doing to the gaming community. It's the only way to stop this crap.
 

Benetanegia

What good is 3000 GFLOPS if all they ever do is a bunch of stupid debris?! Ever seen Havok in action?

LINK:
http://www.havok.com/index.php?page=showcase

They don't mention doing it on the GPU, so I assume it's done on the CPU. Yet I can see ALL the effects done with Havok, plus many I've never seen with PhysX. Especially lame is the lack of simple stuff like flags in PhysX games when running on the CPU, whereas Havok simulates something like thousands of flags at once. The interior destruction also looks impressive. Or the cloth simulation.

But noooo, NVIDIA wants their crappy exclusive physics that no one wants except them. I know many GeForce users who said the effects are cool the first time you see them, but they become lame and boring very fast.
The only way to really evolve physics is to make an open standard. Otherwise developers have no incentive to waste time implementing something that only half of the users will be able to use.
And those who do are just paid by NVIDIA to do it. Nothing else.
People at NVIDIA obviously don't seem to understand the need for physics to evolve.

We've come a long way with graphics in 10 years, from cartoonish to near photo-realistic.
But what has happened with physics in those 10 years? We've gone from realistic ragdolls (most games), debris (Red Faction, Max Payne 2, Half-Life 2), destructible environments (Red Faction in 2001! and partially Max Payne 2 and Half-Life 2) and gameplay and puzzles that rely on physics (Half-Life 2 and partially Max Payne 2) to ragdolls and fucking debris only!? Where is everything else!?
What the hell!? Are we going backwards? Physics haven't evolved. They have devolved.
If you don't believe me, just look at the games and when it started happening.
Max Payne 2 was insane as far as overall physics are concerned. Ragdolls add an insane cinematic feeling, along with small objects like cans, tires, gas cans and bottles, and wall destruction (scripted, but when the walls came down they did it with realistic physics). Everything flies around in firefights, and when you slow down it looks even more dramatic. You shoot a guy with a shotgun and he flies over a table, sweeping glasses and bottles off it in all directions. And best of all, all the physics were done on the CPU back in freaking 2003! But today, in late 2009, NVIDIA is feeding us GPU-only bullshit that can only showcase a bunch of useless and boring effects like flags, papers flying around and some smoke. Are you kidding me?! You needed an uber-powerful GPU and 10 years for that!? I think that's just embarrassing at best, but only people at NVIDIA and some fanboys think PhysX is a great thing. I think it's not, and it's just damaging games, physics effects and gameplay.

We could already have physics that give the player the feeling of an actually believable world, where you step into it and everything just feels real. But because of "awesome" PhysX, we still have a 99% static world and 1% of the useless debris we saw a decade ago.
Way to go NVIDIA. Way to go (are you feeling the sarcasm here? :D ).

So spread the word and educate gamers about the damage PhysX and NVIDIA are doing to the gaming community. It's the only way to stop this crap.

:slap: Considering that you linked to a page showing Havok Reactor, which is the offline physics engine integrated into 3ds Max, Maya and the like, I won't bother reading your whole post. The part in bold already tells me the tone of the post, and seriously... never mind.

Reactor takes seconds and sometimes minutes to calculate a single frame. That's what is being shown there and has nothing to do with Havok in games, except that it belongs to the same company.

Overall, nice try. :laugh:

And TBH, I'm really tired of people saying we don't need PhysX because we have Havok. That's the most stupid thing I have ever heard. We don't need Ati because there is Nvidia, or vice versa? We don't need AMD because of Intel, or vice versa? Have you ever heard of competition?

I'll say this one more time in big bold letters; maybe this way you guys can understand this simple fact. I don't know if it has gone unnoticed until now, but if that has been the case, now you will be able to read it easily:

PhysX is an API, just like Havok that can run on the CPU and offer the same functionality that Havok does on the CPU.

It can also run on the GPU, offering the possibility of much better physics/graphics effects, something that Havok doesn't offer (yet).

If you don't want the added detail, you can just disable it and have the same lacking physics that all the Havok games have (e.g. Fallout 3, L4D...). Nvidia and developers don't force you to use that option; it's entirely your choice and it is free.

If developers wanted to use more than one core, they would. But they don't. As I have demonstrated above, most developers only use one core. That's because they have to cater to the largest possible audience, and that audience still has single-core or dual-core CPUs, or a triple-core that is weaker than a single-core Athlon 64 (Xbox 360).


*Red Faction: Guerrilla uses its own proprietary physics engine for destruction and special details; Havok is only used for common physics, the kind you see in Fallout 3, etc. Another game that has circulated and been used for comparison with PhysX is Star Wars: The Force Unleashed; this game also uses a proprietary engine based on Havok Euphoria, which is another offline physics engine. Note that I said based on, which means it has been heavily modified. That is always an option, but it is almost irrelevant where you start if you are going to tweak that much. You can even start from scratch like Crytek did, but using a third-party engine is more productive. There are only two big players and a third smaller one: PhysX, Havok and Bullet. Wanting any of them to disappear is like wanting a CPU manufacturer to disappear. It's STUPID.
 
Joined
Apr 2, 2009
Messages
582 (0.11/day)
System Name Flow
Processor AMD Phenom II 955 BE
Motherboard MSI 790fx GD70
Cooling Water
Memory 8gb Crucial Ballistix Tracer Blue ddr3 1600 c8
Video Card(s) 2 x XFX 6850 - Yet to go under water.
Storage Corsair s128
Display(s) HP 24" 1920x1200
Case Custom Lian-Li V2110b
Audio Device(s) Auzentech X-Fi Forte 7.1
Power Supply Corsair 850HX
Yo, can someone tell me something?

What the HELL is the point of running physics calculations on GPU power when 99% of new games today are GPU limited? Meanwhile our expensive multi-core, multi-thread CPUs are sitting around using 1.75 of their 4-8 possible cores (threads).

Yeah, that's what I thought... GPU physics is a bad idea in the first place; we need our GPUs to do GRAPHICS PROCESSING, not a whole bunch of other crap.
 

Benetanegia

Yo, can someone tell me something?

What the HELL is the point of running physics calculations on GPU power when 99% of new games today are GPU limited? Meanwhile our expensive multi-core, multi-thread CPUs are sitting around using 1.75 of their 4-8 possible cores (threads).

Yeah, that's what I thought... GPU physics is a bad idea in the first place; we need our GPUs to do GRAPHICS PROCESSING, not a whole bunch of other crap.

Have you read the thread? NO.

OK, I'll give you a summary that explains it:

GPUs have a lot of spare shader processing power. Most times when a program says the GPU is at 100% load, it's NOT. Most of the time not even 75% of the shaders are being used*. The only program so far that uses all of them is FurMark. Modern GPUs, the HD 5xxx and GT300, can take advantage of that spare processing power, which accounts for 500+ GFLOPS, to do something. What the hell is wrong with that? Your expensive CPU using all of its 8 cores can only do 80-90 GFLOPS. A <<$100 GPU can do orders of magnitude more. That CPU power is much better used to improve the idiotic AI that most games still have, for example. Anyway, the idea is that by moving all the parallel work to the GPU, you wouldn't need an expensive CPU at all. In fact, in the near future very few people will need an expensive CPU, whether they are gamers, designers or artists.

* It says 100% load because either all the ROPs or, more commonly, all the texture units are in use, and since the DX pipeline requires one free unit at every stage, the tool reports 100% load.
 
Joined
Apr 2, 2009
Messages
582 (0.11/day)
Yes, I read the thread... lol

Anyway, tell me then: why does PhysX drop the performance of the video card when it's used?


And saying we will stop using expensive CPUs... lol, do you kiss a framed picture of Jen-Hsun Huang when you wake up in the morning?
 

Benetanegia

Yes, I read the thread... lol

Anyway, tell me then: why does PhysX drop the performance of the video card when it's used?
And saying we will stop using expensive CPUs... lol, do you kiss a framed picture of Jen-Hsun Huang when you wake up in the morning?

Because a lot more detail is being rendered? Let me rephrase that question:

Anyway, tell me then: why does anti-aliasing drop the performance of the video card when it's used?

Regarding the demise of expensive CPUs: first of all, no, I kiss no one's photos, and second, I use my brain. There are several programs that use GPU parallel processing right now, and all of them are much faster than CPU-only programs. Many more are being developed for OpenCL, DX Compute, CUDA and Stream. Soon every bit of floating-point power in GPUs is going to be used. That means that most common tasks like video encoding, image processing, data sorting and a long list of others are going to take advantage of 3000 GFLOPS of power, while the CPU will only have less than 150 GFLOPS. Games are going to take advantage of that too, for physics, AI, positional sound... Tell me then, what are you going to use those 150 GFLOPS for, when you have 3, 5 or 10 TFLOPS on the GPU? Web browsing? Chatting? Word processing? In the most demanding applications, where the power is really needed, the difference between an expensive CPU and a cheap one is going to be 3050 GFLOPS for the cheap $100 one (equivalent to a heavily OCed Core 2 Quad) and 3150 GFLOPS for the expensive $600 one. Wow, big difference.

Yeah, Intel is going to try to implement parallel computing inside the CPU to fight that, but that is very stupid, because it will make CPUs very big and expensive and GPUs are going to be more powerful anyway. Adding any sort of parallel units to the CPU is adding unnecessary and redundant power (and silicon) to a system that won't need it.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
The GPU is not strained at all. The most powerful cards render games at 100 fps when 60 is the maximum you really need. But that point of view misses the point anyway: GPUs are not using all of their SPs when rendering, so it makes sense to use them for something. MIMD will make that even easier. Also, a Core i7 running at 4 GHz doesn't reach 100 GFLOPS, while current-gen graphics cards have almost 3000 GFLOPS. You could use 2500 for rendering and 500 for physics and you would never notice it.
The most powerful cards are quite irrelevant because few gamers have them. Only the mid-range cards matter, and most of them struggle to hold 30 fps at the high resolutions of common LCDs. Developing games for the most powerful hardware of the day could well have led to the consumer exodus from PCs to consoles; the budget was just too much to justify.

As to the rest of your point, it is imperative every SP be put to work on frames if the fps is not at least 30.

x86/x86-64 processors have SSE which can take complex instructions/tasks and complete them in very few clocks--an advantage GPUs don't have. The problem is, few SSE instructions are dedicated to games. Anyway, CPUs are designed to handle complex, multi-faceted tasks while GPUs are limited to simple, linear tasks.
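
As a tiny illustration of that 4-wide SSE throughput, here is a hedged sketch in plain C++ (not game code from any particular engine):

Code:
#include <xmmintrin.h>   // SSE intrinsics

// Scale four floats (say, one velocity component for four entities)
// with a single multiply instruction instead of four scalar ones.
void scaleBy(float* values, float factor) {
    __m128 v = _mm_loadu_ps(values);          // load 4 floats (unaligned)
    __m128 f = _mm_set1_ps(factor);           // broadcast the scalar to all 4 lanes
    _mm_storeu_ps(values, _mm_mul_ps(v, f));  // 4 multiplies in one instruction
}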

You would notice it if that 500 wasn't enough for physics or that 2500 wasn't enough to get decent framerates. PhysX in most games only takes, what, maybe 2 GFLOPS? Better to put it on the CPU, which is far better at multitasking.



Whaaat? That makes no sense. Havok and PhysX use almost the same CPU resources to offer the same funtionality when doing physics in the CPU, how is there's no need for PhysX then, based on that? There's no need for Havok either? As I said it makes no sense.
How do you know they aren't falling back on a different, CPU-based physics engine (Havok, or something similar) when no PhysX-enabled device is present (for the sake of not killing performance)? I haven't heard/seen any commentary on what developers think of PhysX and how they implement it.


Um, and DirectX only includes compute shaders which could accelerate physics calculations; I see nothing that suggests DirectX includes an open standard for calculating physics. That said, NVIDIA could make PhysX compute shader compatible (which they won't) so that it could run on AMD cards too. That doesn't mean DirectX 11 will "kill" PhysX or Havok, or any other physics engine out there. Kind of sad, actually... :(
 

Benetanegia

The most powerful cards are quite irrelevant because few gamers have them. Only the mid-range cards matter, and most of them struggle to hold 30 fps at the high resolutions of common LCDs. Developing games for the most powerful hardware of the day could well have led to the consumer exodus from PCs to consoles; the budget was just too much to justify.

The high end of today is the mainstream of tomorrow. You can have a GTX 260 for less than $150 today, and that will handle any current game. My 8800 GT handles Batman with GPU PhysX on high at 1280x1024 with 4xAA. Not a high resolution, but with the GTX 260 being twice as powerful and 1920x1200 having 75% more pixels, it must run the game at high resolutions better than mine does at low resolutions.

As to the rest of your point, it is imperative every SP be put to work on frames if the fps is not at least 30.

If the game doesn't use more shaders, and uses more textures or more Z work instead, it's not imperative to put more SPs to work on frames; it would be pointless. In fact, most games don't use all the SPs, with maybe Crysis being the exception, and with a very big question mark over it. Only FurMark stresses the SPs to the maximum.

x86/x86-64 processors have SSE which can take complex instructions/tasks and complete them in very few clocks--an advantage GPUs don't have. The problem is, few SSE instructions are dedicated to games. Anyway, CPUs are designed to handle complex, multi-faceted tasks while GPUs are limited to simple, linear tasks.

SSE is used a lot in games; when SSE3 was released (I don't remember in which processor), it improved performance by 5-10% on an otherwise identical processor, the only difference being SSE3. The most demanding applications are based on repetitive, simple, but heavily parallel tasks. Tell me a task that requires a heavy amount of complex calculations that can't be split into simple ones, as is the case with F@H.

You would notice it if that 500 wasn't enough for physics or that 2500 wasn't enough to get decent framerates. PhysX in most games only takes, what, maybe 2 GFLOPS? Better to put it on the CPU, which is far better at multitasking.

It takes way more than that. In Batman my quad jumped to 60% load with the "simple"** smoke, as I showed above; that's around 30 GFLOPS. No game that has been released is representative of GPU physics anyway; that's all developers want to do right now, considering the lack of support. Because of the SIMD nature of current GPUs, it's just easier for them to dedicate one SP cluster (16-24 SPs) to PhysX, around 100-120 GFLOPS that obviously weren't being used otherwise. In that sense, on this generation of cards you are losing one cluster all the time, but games are not using all the available power anyway. That won't happen in the future thanks to new architectures and especially MIMD.

** Simple compared to what GPU PhysX can do, but it's still way more complex than any other smoke seen in a game to date.

How do you know they aren't falling back on a different, CPU-based physics engine (Havok, or something similar) when no PhysX-enabled device is present (for the sake of not killing performance)? I haven't heard/seen any commentary on what developers think of PhysX and how they implement it.

How many times do I have to explain this? PhysX is a multiplatform API that can run on the CPU or the GPU (or the Ageia PPU, Cell, Xenos). It will take as much as it can from everything available. If there is no CUDA-compatible card or Ageia PPU, it runs everything on the CPU***. There's no difference between the (GPU) expanded mode and the standard mode, except that it adds a lot of detail*. Why do you think you can run the enhanced mode without an Nvidia card otherwise?

*It's no different from object detail or texture size. The game will try to run identically, but performance will suffer if there is not enough available power.

*** And because it's an API (even though it's a complete physics engine, it comes in API form), it's integrated into the game engine and follows the rules specified by that engine. That means it will be limited to using as many cores as the game developer targeted, based on their intended audience.
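
To illustrate the point about the API picking whatever hardware is available, here is a purely hypothetical sketch of engine-side dispatch; the names are made up for illustration and are not the actual PhysX SDK calls:

Code:
enum class PhysicsBackend { Cpu, Gpu };

// Hypothetical startup check: prefer an accelerator if one exists,
// otherwise fall back to the CPU. The game issues the same simulation
// calls either way.
PhysicsBackend pickBackend(bool cudaCardPresent, bool ageiaPpuPresent) {
    return (cudaCardPresent || ageiaPpuPresent) ? PhysicsBackend::Gpu
                                                : PhysicsBackend::Cpu;
}

// Hypothetical per-frame step: the backend only changes where the work
// runs and how much "enhanced" detail fits in the budget, not the rules
// of the simulation.
void stepSimulation(PhysicsBackend backend, float dt) {
    if (backend == PhysicsBackend::Gpu) {
        // run the solver on the accelerator, with the extra effects enabled
    } else {
        // run the same solver on however many cores the engine allots
    }
    (void)dt; // placeholder body in this sketch
}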

Um, and DirectX only includes compute shaders which could accelerate physics calculations; I see nothing that suggests DirectX includes an open standard for calculating physics. That said, NVIDIA could make PhysX compute shader compatible (which they won't) so that it could run on AMD cards too. That doesn't mean DirectX 11 will "kill" PhysX or Havok, or any other physics engine out there. Kind of sad, actually... :(

The existence of third-party physics developers is a good thing, actually. Not only do they save developers a lot of money and time, they also have greater expertise. Would you want Dell to start making CPUs, GPUs, RAM, etc., instead of buying them from other companies that are 100% dedicated to their respective products? Outsourcing is essential nowadays.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
@Benetanegia
You're missing just one fact. I (we) have seen all these fancy physics effects done on the CPU a decade ago. And now they're feeding us some proprietary crap that works only on GeForce cards and offers us absolutely NOTHING new or exciting. Don't tell me a few flying papers and crappy fog make you hard. All this was done a decade ago.

Also, the "it's open yada yada" is complete BS. Yeah, it's open to developers but closed to end users. So even though developers can implement it, users with ATI, Intel, S3 or anything other than NVIDIA cards cannot use it at all. I'm sure ATI would add support for PhysX if it were truly an open standard like DirectX or OpenGL. But would you as a developer waste months of development just so half of your users could taste what you've done? I think not. It's just not worth the effort. And so NVIDIA is showing us their crap, developers are hesitating, and physics are stagnating or even going backwards. Because I'm not seeing any progress whatsoever.
 

Benetanegia

@Benetanegia
You're missing just one fact. I (we) have seen all these fancy physics effects done on the CPU a decade ago. And now they're feeding us some proprietary crap that works only on GeForce cards and offers us absolutely NOTHING new or exciting. Don't tell me a few flying papers and crappy fog make you hard. All this was done a decade ago.

Also, the "it's open yada yada" is complete BS. Yeah, it's open to developers but closed to end users. So even though developers can implement it, users with ATI, Intel, S3 or anything other than NVIDIA cards cannot use it at all. I'm sure ATI would add support for PhysX if it were truly an open standard like DirectX or OpenGL. But would you as a developer waste months of development just so half of your users could taste what you've done? I think not. It's just not worth the effort. And so NVIDIA is showing us their crap, developers are hesitating, and physics are stagnating or even going backwards. Because I'm not seeing any progress whatsoever.

Show me where you have seen those effects a decade ago, please. And more importantly, in such high quantities and definition. Please don't come at me with "in the first Splinter Cell..."*. Fact is, there is nothing.

*A cloth simulation that had 6 nodes and 6 polygons. Please...

Show me realistic and detailed smoke that you (or any object) can displace, with many small particles rather than a few big ones. Show me the meta-particle water. Show me lots and lots of sparks that bounce off actual geometry instead of falling through the ground. Show me cloth with more than...

http://www.youtube.com/watch?v=g_11T0jficE - Come on tell me both sides are equally compelling. Show me something like this in any game.
http://www.youtube.com/watch?v=luSAnouAFJs - Come on, show me smoke, show me cloth.
http://www.youtube.com/watch?v=vrUYX7R53LY
http://www.youtube.com/watch?v=FcqDzdwzaEU&NR=1

Show me this running on a CPU :laugh:: http://www.youtube.com/watch?v=IJ0HNHO5Uik - Especially the second half of the video.

Of course you can simulate the same effects on a CPU, but not to the extent that you can on a GPU, and not in the quantities that are necessary for realism. And again, the games that have been released are not representative of what can be done, because of the lack of support, and that's AMD's and only AMD's fault. Current games barely use 25% of the power that a 16-SP cluster can give; GTX 2xx cards have 15 times that, and the GT300 will have 30 times that.

And it is open, and it would have been open to Ati users if they had said yes when Nvidia offered PhysX to Ati for free with no conditions, or if they had supported the guy from ngohq.com when he made a modded driver that made it possible, like Nvidia did, instead of scaring the hell out of him with demands. But of course, back then Ati had nothing to compete with in that arena and Nvidia cards would have destroyed theirs, so they said no, no, nooo! And now poor Ati users can't do anything but cry and say it's not that great. :laugh:

Anyway, they are NOT FORCING you to use the expanded physics mode, and the normal mode is NOT any worse than other games' physics; that is the fact, so why all this crying, I ask? Yeah, exactly: because Nvidia users can and you can't. It's that simple. I don't pay anything more to have those effects; you don't need to pay either, nor do you have to enable them if you don't want to or you think they add nothing. Again, if they add nothing, why all this crying? Ahh, jealousy...
 

Benetanegia

I just saw this in TechReport:

http://techreport.com/discussions.x/17671

In summary, for what would have required 8000 CPU cores (1000 8-socket servers), they are using 48 servers with two Tesla GPUs each. Orders of magnitude cheaper to buy, and cheaper to maintain, cool and power.
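
Roughly, taking those TechReport numbers at face value:

\[
48\ \text{servers} \times 2\ \text{Teslas} = 96\ \text{GPUs}, \qquad \frac{8000\ \text{CPU cores}}{96\ \text{GPUs}} \approx 83\ \text{cores replaced per GPU}
\]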

Say goodbye on your way out, expensive CPUs. Be polite, die with honor.
 
Joined
Sep 25, 2007
Messages
5,965 (0.99/day)
Location
New York
Processor AMD Ryzen 9 5950x, Ryzen 9 5980HX
Motherboard MSI X570 Tomahawk
Cooling Be Quiet Dark Rock Pro 4(With Noctua Fans)
Memory 32Gb Crucial 3600 Ballistix
Video Card(s) Gigabyte RTX 3080, Asus 6800M
Storage Adata SX8200 1TB NVME/WD Black 1TB NVME
Display(s) Dell 27 Inch 165Hz
Case Phanteks P500A
Audio Device(s) IFI Zen Dac/JDS Labs Atom+/SMSL Amp+Rivers Audio
Power Supply Corsair RM850x
Mouse Logitech G502 SE Hero
Keyboard Corsair K70 RGB Mk.2
VR HMD Samsung Odyssey Plus
Software Windows 10
The future is not going to be CPUs or GPUs; it's going to be a mix.

I.e., I hate to say it, but a design similar to Larrabee's will be the CPU of the future, even though I pray Larrabee fails.
 
Joined
May 4, 2009
Messages
1,970 (0.36/day)
Location
Bulgaria
System Name penguin
Processor R7 5700G
Motherboard Asrock B450M Pro4
Cooling Some CM tower cooler that will fit my case
Memory 4 x 8GB Kingston HyperX Fury 2666MHz
Video Card(s) IGP
Storage ADATA SU800 512GB
Display(s) 27' LG
Case Zalman
Audio Device(s) stock
Power Supply Seasonic SS-620GM
Software win10
PhysX is not open source, just like DX. Programmers don't have to pay to code it in now, but if at a later point in time Nvidia decides to cash in on it (and they will), they will be backed by every legal system out there. The only difference here is that DX is already the widespread standard. Only Microsoft can change the code for DX, because they own it. This means that even if ATi adopted the PhysX API, they wouldn't be able to make any changes or optimizations to it for their hardware. The only one that can change the code is Nvidia. This in turn means that Ati would always have to play second fiddle to Nvidia due to poorly optimized code.
 