
PhysX only using one cpu core?

Joined
Nov 4, 2005
Messages
11,681 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
You know who I would like to step in here? Microsoft. For the most part, Windows is the road these cards drive on, like we drive our cars down the road. We as motorists are restricted to a set of standards (speed limits, safety equipment, etc.) that we must conform to. I'd like to see Microsoft step up and say "Ok, this is the way it is going to be done." Set up standards for Windows and work in collaboration with hardware manufacturers. Have unified physics and the like, and let the video card companies duel it out through performance.

They did, it's called DX, and Nvidia ignored the evolution of it years ago, and ATI and MS are holding hands like school kids.


Now DX11 will allow the direct implementation of physics calculations on any brand of GPU. However, I believe this is just the start of the next large idea: doing all complex rendering on the GPU, movie, audio, pictures, etc.... everything but the basic functions passed on to the faster processor, with expansive capabilities built into the software.
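To make that concrete, here is a rough sketch of what "physics on the GPU through DX11" means in practice with DirectCompute: a trivial compute shader that does one Euler integration step over a particle buffer, dispatched from C++. The API calls are standard D3D11, but the shader, particle layout and numbers are purely illustrative, not code from any real game or engine.

```cpp
// Illustrative only: a DirectCompute "physics step" -- a trivial Euler
// integration over a particle buffer -- dispatched from C++ on DX11 hardware.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <cstdio>
#include <cstring>
#include <vector>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "d3dcompiler.lib")

static const char* kCS = R"(
struct Particle { float3 pos; float3 vel; };
RWStructuredBuffer<Particle> particles : register(u0);
[numthreads(64, 1, 1)]
void main(uint3 id : SV_DispatchThreadID)
{
    Particle p = particles[id.x];
    p.vel.y -= 9.8f * 0.016f;     // gravity over one 16 ms step
    p.pos   += p.vel * 0.016f;    // integrate position
    particles[id.x] = p;
})";

int main()
{
    ID3D11Device* dev = nullptr; ID3D11DeviceContext* ctx = nullptr;
    D3D_FEATURE_LEVEL fl;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION, &dev, &fl, &ctx)))
        return 1;

    // Compile the compute shader (cs_5_0 = DX11 compute).
    ID3DBlob* cso = nullptr; ID3DBlob* err = nullptr;
    if (FAILED(D3DCompile(kCS, strlen(kCS), nullptr, nullptr, nullptr,
                          "main", "cs_5_0", 0, 0, &cso, &err)))
        return 1;
    ID3D11ComputeShader* cs = nullptr;
    dev->CreateComputeShader(cso->GetBufferPointer(), cso->GetBufferSize(), nullptr, &cs);

    // 1024 particles in a structured buffer the GPU can read and write.
    struct Particle { float pos[3]; float vel[3]; };
    std::vector<Particle> init(1024, Particle{{0, 10, 0}, {0, 0, 0}});
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth           = UINT(sizeof(Particle) * init.size());
    bd.Usage               = D3D11_USAGE_DEFAULT;
    bd.BindFlags           = D3D11_BIND_UNORDERED_ACCESS;
    bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bd.StructureByteStride = sizeof(Particle);
    D3D11_SUBRESOURCE_DATA sd = { init.data() };
    ID3D11Buffer* buf = nullptr;
    dev->CreateBuffer(&bd, &sd, &buf);
    ID3D11UnorderedAccessView* uav = nullptr;
    dev->CreateUnorderedAccessView(buf, nullptr, &uav);

    // One simulation step: 1024 particles / 64 threads per group = 16 groups.
    ctx->CSSetShader(cs, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
    ctx->Dispatch(16, 1, 1);
    printf("one physics step dispatched to the GPU\n");
    return 0;
}
```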
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
This might have already been said: Havok is a wholly owned subsidiary of Intel and may be part of Intel's motivation to create Larrabee.
 

shevanel

New Member
Joined
Jul 27, 2009
Messages
3,464 (0.64/day)
Location
Leesburg, FL
They did, it's called DX, and Nvidia ignored the evolution of it years ago, and ATI and MS are holding hands like school kids.


Now DX11 will allow the direct implementation of physics calculations on any brand of GPU. However, I believe this is just the start of the next large idea: doing all complex rendering on the GPU, movie, audio, pictures, etc.... everything but the basic functions passed on to the faster processor, with expansive capabilities built into the software.

This might have already been said: Havok is a wholly owned subsidiary of Intel and may be part of Intel's motivation to create Larrabee.

Couldn't have said it better.
 

Sonido

New Member
Joined
Aug 25, 2008
Messages
356 (0.06/day)
Location
USA
System Name Sonido
Processor E6600 @ 3.20 GHz (1600 FSB)
Motherboard abit IX38 QuadGT
Cooling Tt SpinQ (load @ 29c)
Memory TopRam SpeedRAM A.K.A "El Cheapo" @ 800 MHz
Video Card(s) Diamond HD 4870 (@ Stock)
Storage Primary: 500 GB WD
Display(s) 32" Samsung 1080p HDTV
Case Antec Twelve hundred (Modded Tri-cool fans)
Audio Device(s) On-board HD Audio
Power Supply 700W Rocketfish
Software Vista\7\Linux\Hackintosh
I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way, with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.

Well, it's complicated. They won't do it themselves, but they aren't against someone else doing it. Meaning, AMD won't, but if you can, great!
 

shevanel

New Member
Joined
Jul 27, 2009
Messages
3,464 (0.64/day)
Location
Leesburg, FL
With this new era of gaming and GPUs in general.. and DX11 and faster CPUs/GPUs, I think we are in for a treat in the world of gaming. I was at Best Buy tonight looking for a game to play.. I left empty handed.. nothing looks interesting enough to pay $20-50 for right now..
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
They did, it's called DX, and Nvidia ignored the evolution of it years ago, and ATI and MS are holding hands like school kids.

I couldn't agree more on what's in bold, but the rest is not true. Not wanting to implement all of the features that were unilaterally developed by MS and Ati doesn't mean they ignored the evolution of DX. Nvidia cards were 99% DX10.1 compliant, but since the introduction of DX10 MS decided that you need the 100% to call your card compliant. Apart from being 99% DX10.1 compliant, Nvidia cards were 100% or 110% compliant with what game developers wanted, which is what matters in the end. Basically Nvidia decided to depart a bit from DX, because DX had departed from what game developers really wanted.

DX10.1 was what MS and Ati decided DX10 had to be before Nvidia or game developers had the opportunity to say something; it was a direct evolution of the XB360's API and thus a joint venture between MS and Ati on their own. Hence it had to be changed afterwards to fit what Nvidia and game developers wanted (of the three companies Nvidia is the one closest to game developers, that's something not even an Ati fanboy can deny). MS has always done DX unilaterally, deciding what to implement and what not, and more importantly how, with very little feedback from game developers. It's because of that that most big game developers in the past preferred OpenGL (iD, Epic, Valve...). That sentiment has not really changed in recent years, but game developers had no option but to resign themselves and play by MS rules, because the OpenGL project was almost dead until Khronos tried to revive it (without much success BTW).
 
Joined
Nov 4, 2005
Messages
11,681 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I couldn't agree more on what's in bold, but the rest is not true. Not wanting to implement all of the features that were unilaterally developed by MS and Ati doesn't mean they ignored the evolution of DX. Nvidia cards were 99% DX10.1 compliant, but since the introduction of DX10 MS decided that you need the 100% to call your card compliant. Apart from being 99% DX10.1 compliant, Nvidia cards were 100% or 110% compliant with what game developers wanted, which is what matters in the end. Basically Nvidia decided to depart a bit from DX, because DX had departed from what game developers really wanted.

DX10.1 was what MS and Ati decided DX10 had to be before Nvidia or game developers had the opportunity to say something; it was a direct evolution of the XB360's API and thus a joint venture between MS and Ati on their own. Hence it had to be changed afterwards to fit what Nvidia and game developers wanted (of the three companies Nvidia is the one closest to game developers, that's something not even an Ati fanboy can deny). MS has always done DX unilaterally, deciding what to implement and what not, and more importantly how, with very little feedback from game developers. It's because of that that most big game developers in the past preferred OpenGL (iD, Epic, Valve...). That sentiment has not really changed in recent years, but game developers had no option but to resign themselves and play by MS rules, because the OpenGL project was almost dead until Khronos tried to revive it (without much success BTW).

Eat the green pill, it's only 1% poison. ;)


Nvidia ignored DX years ago, and is doing it again now: with their refusal to do anything with what DX10.1 brought to the table, they are either ignoring DX11 for their own selfish gains, or have nothing to bring and only want the competition to fail. I lean towards the latter.

As far as the game developers go, how many games are made for the 360? Lots. They are easy to port and have advanced functionality compared to the lame duck PS3 offering. DX is what game developers want, and thus we have DX11, and I'm sure a lot of other enhancements that are going to be wanted for the next installment. If developers really wanted more or something different, they would use OpenGL a lot more than they do now, and coding for the DX11 API is much easier as it is the GPU's job to decode the high level language and run it. So how is something with wide support, a set-in-stone standard for coding, thousands of features, a huge userbase, multi-platform readiness, and waiting hardware bad? Not to mention it is free..... you just have to download it; the beta is available and open to the public, so there is time to get ready.


Unless you have nothing, and only want to throw shit, it isn't.
 

shevanel

New Member
Joined
Jul 27, 2009
Messages
3,464 (0.64/day)
Location
Leesburg, FL
There are several threads about this topic, which is always off topic to the OP.. once this topic is touched, the thread dies quickly.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Eat the green pill, it's only 1% poison. ;)


Nvidia ignored DX years ago, and is doing it again now: with their refusal to do anything with what DX10.1 brought to the table, they are either ignoring DX11 for their own selfish gains, or have nothing to bring and only want the competition to fail. I lean towards the latter.

As far as the game developers go, how many games are made for the 360? Lots. They are easy to port and have advanced functionality compared to the lame duck PS3 offering. DX is what game developers want, and thus we have DX11, and I'm sure a lot of other enhancements that are going to be wanted for the next installment. If developers really wanted more or something different, they would use OpenGL a lot more than they do now, and coding for the DX11 API is much easier as it is the GPU's job to decode the high level language and run it. So how is something with wide support, a set-in-stone standard for coding, thousands of features, a huge userbase, multi-platform readiness, and waiting hardware bad? Not to mention it is free..... you just have to download it; the beta is available and open to the public, so there is time to get ready.


Unless you have nothing, and only want to throw shit, it isn't.

Neither Nvidia nor Ati followed MS's ways strictly until DX10, so you are completely wrong. It's not only a matter of implementing the features, it's a matter of how those features are implemented. Before DX10, Nvidia and Ati always made their own approaches to some of the features, and those features were eventually implemented in DX months if not years after they had been implemented in hardware by either of the GPU companies. That's how DX8.1 and DX9.0 a, b and c were born, just to name a few. For many developers that was a far better approach, and you can read blogs from people like Carmack, Tim Sweeney and the like if you don't believe me and want to learn a thing or two in the meantime. Why did they prefer that approach? Because it offered much better performance and much more control over how things worked in every architecture (jack of all trades, master of none, they say). Of course the DX10 approach is better for lazy developers that don't care shit about optimization, but that's all. Most important developers make a different rendering path for the different architectures anyway, so they wouldn't care as much if they had to do things differently for one or another. In fact, because of the closed nature of DX10, many developers have to deal more with HLSL and assembly in order to better fit their tech to the various architectures. DX10 is to graphics as Java is to general programming. Java is supposed to be able to run the same code on every platform, but the reality is that developers create different code for different cell phones, for example. Otherwise they wouldn't be able to be competitive on all of them.

Because of how strict DX10 is, hardware can only be done one way, which doesn't mean it's going to be better. Ati or Nvidia could come up with a better way of doing graphics (along with game developers), but they will never be able to do that because of DX's closed nature. That in no way helps first tier developers, because they will not be able to access any new features or ways of doing those features until MS decides it's time to implement them in DX12. Take tessellation for example: the way that MS wants it done is different from what Ati wanted it to be and still very, very different from what a perfect design would be, but even if Ati/Nvidia/Intel come up with a better design in the future they will not be able to do it that way.

When it comes to DX10.1 features, Nvidia decided to use their own approach to many things like reading from the back buffer and cube map arrays, but they couldn't say their cards were DX10.1 because they were not doing them the way that MS had specified. If it had been DX9, not only would Nvidia have been compliant with DX9, but the development of a new DX9.0d would have started to include that way of doing things. In the end MS's DirectX API is nothing more than an interface and bridge between software developers and hardware developers; it's those two who have, or should have, the last word, and not MS. If you want a comparison with cars, it's as if M$ specified that all engines had to be in a V shape: V6, V8, V12... That is good and all, and most sports cars use that, but then here comes Porsche saying that for their cars the Boxer (opposed cylinders) is much better. They would be way outside the standard, but the reality is that Porsche 911s with their 6-cylinder Boxer engines are much better than most V8s out there and certainly better cars than any V6. Boxer engines might not fit every car, but they certainly make Porsches the best supercars for the money.

Now regarding the Xbox 360: again, the fact that a game can be ported to PC straight from the Xbox 360 is in no way a good thing. It surely is better for lazy developers that don't care to properly port a game to a platform that is 4 times more powerful and that will not get anything in return because of how poorly optimized it is. It certainly isn't good for us, who have been getting subpar console ports in recent years.

And last, about OpenGL. Developers don't use OpenGL because it didn't evolve over the years; the project was almost abandoned, and before that things were not going well inside the forum, because there was too much fragmentation about how things had to be done. That's the absolute opposite of what MS did with DX10, where they said things are going to be done this way, period. Well, neither of the approaches is good. Another reason OGL was abandoned in favor of DX, and maybe the most important one, is that most developers had to use DirectInput and DirectSound for compatibility reasons anyway, even if they were using OpenGL for the graphics, so it just became easier to use only one of the APIs.

I'm not a fanboy, so you can take this as it is, a statement of how things truly happened, or you can swallow another of your own red pills, which it's obvious you already did.
 
Joined
Nov 4, 2005
Messages
11,681 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
Joined
May 5, 2009
Messages
2,270 (0.42/day)
Location
the uk that's all you need to know ;)
System Name not very good (wants throwing out window most of time)
Processor xp3000@ 2.17ghz pile of sh** /i7 920 DO on air for now
Motherboard msi kt6 delta oap /gigabyte x58 ud7 (rev1.0)
Cooling 1 green akasa 8cm(rear) 1 multicoloured akasa(hd) 1 12 cm (intake) 1 9cm with circuit from old psu
Memory 1.25 gb kingston hyperx @333mhz/ 3gb corsair dominator xmp 1600mhz
Video Card(s) (agp) hd3850 not bad not really suitable for mobo n processor/ gb hd5870
Storage wd 320gb + samsung 320 gig + wd 1tb 6gb/s
Display(s) compaq mv720
Case thermaltake XaserIII skull / coolermaster cm 690II
Audio Device(s) onboard
Power Supply corsair hx 650 w which solved many problems (blew up) /850w corsair
Software xp pro sp3/ ? win 7 ultimate (32 bit)
Benchmark Scores 6543 3d mark05 ye ye not good but look at the processor /uknown as still not benched
I'm only interested in knowing if I can play Pong using it, he he. Now where's my cookie? rofl lol
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.

Resorting to jokes and name calling means you have nothing to say. No arguments in your favor. Well, I won't say I didn't expect that.

On topic, I have done the same on L4D and Fallout 3, two games using Havok, and only one core is being used, so no difference. I have tried Mirror's Edge and it always uses 2 cores, 50% load, no matter if PhysX was enabled, disabled in game, or disabled in the control panel but enabled in game. The performance difference between the modes was notable though.




Mirror's Edge doesn't have a windowed mode option, so I can only post the task manager; if someone knows how to enable windowed mode in Mirror's Edge please tell me and I will redo it. The level is the next one after you talk to your sister; it's perfect because soon after you start some soldiers shoot out a lot of crystals. I stopped soon after that shootout:

EDIT: here http://www.youtube.com/watch?v=V_B3_upOvmc - the one that starts at 0:50.

With PhysX disabled in control panel and GPU accelerated PhysX enabled in game, didn't run FRAPS but fps were below 10 for sure.



This one is with PhysX completely enabled.


^^ NOTE: Even if it says 29%, the actual average usage was around 40-45%, a little bit less than when PhysX acceleration was disabled (50-55%), but the difference is negligible. I posted the SS so that the graphs can be seen, as those tell the story better. When PhysX is enabled there's more variance (and spikes) in CPU usage between different cores. That 29% also confirms the variance; when PhysX was disabled it never came lower than 50%.

So that pretty much says it all. It depends on the developer how many cores are used.
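If anyone wants harder numbers than eyeballing Task Manager graphs, here is a small sketch of how you could measure it yourself: it polls GetProcessTimes for the game's process and prints how many cores' worth of CPU time it is burning. The PID argument and the one-second sample interval are just my choices for illustration, not anything from the article.

```cpp
// Minimal sketch: report how many CPU cores' worth of time a process uses.
// Windows only; pass the game's PID (from Task Manager) on the command line.
#include <windows.h>
#include <cstdio>
#include <cstdlib>

static unsigned long long ToU64(const FILETIME& ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;                       // 100 ns units
}

int main(int argc, char** argv)
{
    if (argc < 2) { printf("usage: coreload <pid>\n"); return 1; }
    DWORD pid = (DWORD)atoi(argv[1]);
    HANDLE h = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (!h) { printf("OpenProcess failed\n"); return 1; }

    FILETIME create, exit_, kPrev, uPrev, kNow, uNow;
    GetProcessTimes(h, &create, &exit_, &kPrev, &uPrev);
    for (;;)
    {
        Sleep(1000);                         // sample once per second
        GetProcessTimes(h, &create, &exit_, &kNow, &uNow);
        unsigned long long cpu100ns =
            (ToU64(kNow) - ToU64(kPrev)) + (ToU64(uNow) - ToU64(uPrev));
        // One second of wall time = 10,000,000 units of 100 ns.
        double coresBusy = cpu100ns / 10000000.0;
        printf("~%.2f cores busy\n", coresBusy); // ~1.0 means one core maxed
        kPrev = kNow; uPrev = uNow;
    }
}
```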
 
Joined
Nov 4, 2005
Messages
11,681 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I called you a name? You said you were not a fanboy, and I just searched your posts, most being anti-ATI, and pro Nvidia. If that isn't then what is?


As to your screen shots, a game on pause doesn't use physics, and if you use any one of the many available apps you can watch your in game GPU, CPU, memory, Vmem, temps and much else.


The article in question is merely showing that a system with a faster processor can run a game with physics enabled on the CPU, but due to Nvidia having their hands in with developers, the consumer suffers unless you want to purchase their, and only their, proprietary hardware and run their proprietary software.


http://www.ngohq.com/graphic-cards/16223-nvidia-disables-physx-when-ati-card-is-present.html



Nvidia disables PhysX when ATI card is present

--------------------------------------------------------------------------------

Well, for all those who have used Nvidia cards for PhysX and ATI cards to render graphics in Windows 7... All that is about to change.

Since the release of the 186 graphics drivers, Nvidia has decided to disable PhysX anytime a non-Nvidia GPU is even present in the same PC. Nvidia has again shot themselves in the foot here and showed they are not customer oriented. Since they are pushing PhysX, they will not win over any ATI fanboys with this latest decision.

Here is a copy of the email I received from Nvidia support confirming what they have done.

"Hello JC,

Ill explain why this function was disabled.

Physx is an open software standard any company can freely develop hardware or software that supports it*. Nvidia supports GPU accelerated Physx on NVIDIA GPUs while using NVIDIA GPUs for graphics. NVIDIA performs extensive Engineering, Development, and QA work that makes Physx a great experience for customers. For a variety of reasons - some development expense some quality assurance and some business reasons NVIDIA will not support GPU accelerated Physx with NVIDIA GPUs while GPU rendering is happening on non- NVIDIA GPUs. I'm sorry for any inconvenience caused but I hope you can understand.

Best Regards,
Troy
NVIDIA Customer Care"

------------------------------------
*So as long as you have their card, you can run any of their open source hardware or software, on their card. Not any other card, just theirs, being open source and all, for all hardware, that is theirs, and only theirs, since it is still open to all, and everyone who has their software, and hardware, of course.


Enabling PhysX in game creates MORE demand on system components, not less.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8862&Itemid=1
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
I called you a name? You said you were not a fanboy, and I just searched your posts, most being anti-ATI, and pro Nvidia. If that isn't then what is?


As to your screen shots, a game on pause doesn't use physics, and if you use any one of the many available apps you can watch your in game GPU, CPU, memory, Vmem, temps and much else.


The article in question is merely showing that a system with a faster processor can run a game with physics enabled on the CPU, but due to Nvidia having their hands in with developers, the consumer suffers unless you want to purchase their, and only their, proprietary hardware and run their proprietary software.


http://www.ngohq.com/graphic-cards/16223-nvidia-disables-physx-when-ati-card-is-present.html



Nvidia disables PhysX when ATI card is present

--------------------------------------------------------------------------------

Well, for all those who have used Nvidia cards for PhysX and ATI cards to render graphics in Windows 7... All that is about to change.

Since the release of the 186 graphics drivers, Nvidia has decided to disable PhysX anytime a non-Nvidia GPU is even present in the same PC. Nvidia has again shot themselves in the foot here and showed they are not customer oriented. Since they are pushing PhysX, they will not win over any ATI fanboys with this latest decision.

Here is a copy of the email I received from Nvidia support confirming what they have done.

"Hello JC,

Ill explain why this function was disabled.

Physx is an open software standard any company can freely develop hardware or software that supports it*. Nvidia supports GPU accelerated Physx on NVIDIA GPUs while using NVIDIA GPUs for graphics. NVIDIA performs extensive Engineering, Development, and QA work that makes Physx a great experience for customers. For a variety of reasons - some development expense some quality assurance and some business reasons NVIDIA will not support GPU accelerated Physx with NVIDIA GPUs while GPU rendering is happening on non- NVIDIA GPUs. I'm sorry for any inconvenience caused but I hope you can understand.

Best Regards,
Troy
NVIDIA Customer Care"

------------------------------------
*So as long as you have their card, you can run any of their open source hardware or software, on their card. Not any other card, just theirs, being open source and all, for all hardware, that is theirs, and only theirs, since it is still open to all, and everyone who has their software, and hardware, of course.


Enabling PhysX in game creates MORE demand on system components, not less.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8862&Itemid=1

Nvidia offered to let Ati run PhysX on their cards. They said no, end of story. You can't run two drivers in Vista; that's why they decided not to support Ati+Nvidia for PhysX.

EDIT: The games were not paused. :roll:

About PhysX requiring more CPU when the GPU is being used for PhysX instead of Ageia's PPU, there's no doubt about that, but it uses much less CPU than if the CPU had to do the physics. You proved nothing, mate. BTW, the reason the Ageia PPU used less CPU is because the PPU had a CPU incorporated; it was a CPU + parallel processors, so it could run ALL the physics code on the PPU, including branching. GPUs can't do branching (yet) and hence it requires some CPU, but much less than if the CPU was doing the physics.

About fanboys and not fanboys, you need a reality check. Here in these forums many people directly bash Nvidia, and no one says anything; those people are never fanboys. But if someone who looks at things from all the angles even dares to explain why Nvidia shouldn't be bashed for that, providing all the reasons behind his statement... he is a fanboy. Honestly, I even doubt any of you have ever read my arguments; you just keep saying the same things and resorting to jokes and whatnot. You give no reasons, and links that have nothing to do with the topic at hand. You don't comment about the results that I just posted, because, frankly, they do no good to your POV. You are a fanboy and I am an anti-fanboy; that's why I will always jump in when a fanboy bashes one company. It just happens that in these forums that's always Nvidia. Check these forums better and you will see how Ati is never bashed like Nvidia is. In real life I always side with the one taking an unjustified beating too; that even led me to a fine once, because I tried to help a guy who was being hit by some guards at the disco, and the guards were friends with the police patrol that was in charge that day. So I got hit by the guards, later by the police, and then I got a fine. Nice, eh? But that's how I am. So as I said, if I see someone bashing a company or technology for something it doesn't deserve, I will always jump in. I just can't stand the fanboys' mindless lack of reasoning, and this thread is a good example of that lack of reasoning. I mean, almost no game is multi-threaded (and I proved it somewhat above), but one game is not multi-threaded AND uses PhysX and "OMFG, Nvidia cheating, Nvidia paying developers so that they don't make their games multi-threaded". It's absurd, and the fact that you seem unable to see that after all the reasons I gave for the games not using more than one core just shows how willingly you believe in that nonsense.
 

EastCoasthandle

New Member
Joined
Apr 21, 2005
Messages
6,885 (0.99/day)
System Name MY PC
Processor E8400 @ 3.80Ghz > Q9650 3.60Ghz
Motherboard Maximus Formula
Cooling D5, 7/16" ID Tubing, Maze4 with Fuzion CPU WB
Memory XMS 8500C5D @ 1066MHz
Video Card(s) HD 2900 XT 858/900 to 4870 to 5870 (Keep Vreg area clean)
Storage 2
Display(s) 24"
Case P180
Audio Device(s) X-fi Plantinum
Power Supply Silencer 750
Software XP Pro SP3 to Windows 7
Benchmark Scores This varies from one driver to another.
You know who I would like to step in here? Microsoft. For the most part, Windows is the road these cards drive on, like we drive our cars down the road. We as motorists are restricted to a set of standards (speed limits, safety equipment, etc.) that we must conform to. I'd like to see Microsoft step up and say "Ok, this is the way it is going to be done." Set up standards for Windows and work in collaboration with hardware manufacturers. Have unified physics and the like, and let the video card companies duel it out through performance.

Can't argue with that. :D This is how consumers win in the long run.
 
Joined
Mar 1, 2008
Messages
282 (0.05/day)
Location
Antwerp, Belgium
On topic, I have done the same on L4D and Fallout 3, two games using Havok, and only one core is being used, so no difference. I have tried Mirror's Edge and it always uses 2 cores, 50% load, no matter if PhysX was enabled, disabled in game, or disabled in the control panel but enabled in game. The performance difference between the modes was notable though.

Did you enable 'Multicore rendering' in L4D?
I'm asking this because when they introduced MR in TF2, the explosions ran much better than before. But it's possible that Valve just split off the physics processing to a separate core (and not really thread the physics).
What bugs me about PhysX is that the PPU and GPU versions are already heavily threaded.
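For clarity, here is a toy sketch of the difference between those two approaches (splitting physics onto one extra core vs. actually threading the physics step). It is illustrative C++ with std::thread, under a made-up "body" model, and has nothing to do with Source's or PhysX's real internals.

```cpp
// "Split off" = the whole physics step runs on one extra thread.
// "Really threaded" = one physics step is divided across all available cores.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Body { float pos = 0.0f, vel = 1.0f; };

static void Integrate(std::vector<Body>& bodies, size_t begin, size_t end, float dt)
{
    for (size_t i = begin; i < end; ++i)
    {
        bodies[i].vel -= 9.8f * dt;          // toy gravity
        bodies[i].pos += bodies[i].vel * dt; // toy integration
    }
}

// (a) Physics "split off" to a single worker: uses at most one extra core.
static void StepOnOneWorker(std::vector<Body>& bodies, float dt)
{
    std::thread worker(Integrate, std::ref(bodies), size_t{0}, bodies.size(), dt);
    worker.join();                           // main thread could render meanwhile
}

// (b) Physics genuinely threaded: the same step split across N cores.
static void StepAcrossAllCores(std::vector<Body>& bodies, float dt)
{
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    size_t chunk = (bodies.size() + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t)
    {
        size_t begin = t * chunk;
        size_t end   = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(Integrate, std::ref(bodies), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}

int main()
{
    std::vector<Body> bodies(100000);
    StepOnOneWorker(bodies, 0.016f);         // looks "multicore" in Task Manager
    StepAcrossAllCores(bodies, 0.016f);      // actually scales with core count
    printf("body[0].pos after two steps: %f\n", bodies[0].pos);
}
```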
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Wolfenstein, another Havok game.



Crysis Warhead.



NFS:Shift



One uses Havok, another uses their own proprietary physics engine, and the last one uses CPU PhysX; none of them use more than one core. Of all the games that I have installed right now, only Mirror's Edge, a PhysX game, used 2 cores. So what's the reasoning behind that?

As I explained in my first post in this thread, game developers have to cater to the biggest audience possible, so making their games use more than one core would cripple their user base dramatically, or they would have to make two or more different versions to run on different numbers of cores, and probably different target performance, as a slow quad could well be worse than a fast dualie. They simply cannot use 4-8 cores because only a very small portion of people have quads or better. It's their call whether they want to use two cores or one core, depending on their targeted audience. I think that I have provided enough samples of new games to show that most games are single threaded no matter which API they use.

Physics is an added feature on top of that version of the game that is meant to run everywhere. That way it's easy for them to implement it. The GPU accelerated PhysX version is also meant to run on almost any Nvidia GPU; it's very basic compared to what newer cards would be capable of. I have an 8800GT in a Quad (this PC) and a 9800GTX+ on an Athlon X2, and both systems run all PhysX enabled games perfectly, doing PhysX and rendering (1600x1200 4xAA) smoothly. That's how developers work; very few of them want to risk losing user base because they asked for too-high requirements. Hence they always develop for the least common denominator: 1 core.
 
Joined
Jul 19, 2006
Messages
43,587 (6.72/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
This is with multicore enabled in L4D

 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Did you enable 'Multicore rendering' in L4D?
I'm asking this because when they introduced MR in TF2, the explosions ran much better than before. But it's possible that Valve just split off the physics processing to a separate core (and not really thread the physics).
What bugs me about PhysX is that the PPU and GPU versions are already heavily threaded.

L4D, no, I don't use MR, because for some reason it stutters heavily on my PC. I have better overall fps with one core anyway. Besides, that's rendering, so I think it's better to have it that way. If the game used more than one core, then it would be because of the physics. If I had both enabled, how would we know if the CPU usage comes from the rendering or the physics?

The problem is not in making the engine multi-threaded. The problem is that most PCs don't have that many cores, so you would have to make a lot of versions. If you make your engine use 8 threads, but in your game you only put enough objects to use 1 of them so that the game can run everywhere, that's pointless. If you put in enough scenery to load all 8 cores, anything slower will be incapable of running that game. So with PhysX, instead of doing a basic (for all) 1-core version and then one for 4 or 8 cores, they decided to make one with 1 core and one on the GPU, which is comparable to a 20-core CPU and still uses only 1 CPU core, so that someone with a slow CPU but a decent GPU (9600GT, 8600GT?) can take advantage of that.

Having 4x the power will not enable much better physics than what a single core can do anyway. Instead of the usual < 20 physically enabled objects (count them if you don't believe me), it would enable the use of 80, but that won't make any difference; a single glass panel in Mirror's Edge shatters into 4x that amount of pieces, and it still looks like it's not enough.
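To put the "how many versions do we ship?" problem in code form, here is a hedged sketch of the alternative: one build that scales its physics budget at runtime from the detected core count (or a GPU path). The structure, names and numbers are entirely made up for illustration.

```cpp
// Instead of separate 1-core / 4-core builds, scale the effect budget at
// runtime. The budget numbers below are invented purely for illustration.
#include <cstdio>
#include <thread>

struct PhysicsBudget
{
    int debrisPieces;   // how many chunks a breaking object spawns
    int clothSegments;  // resolution of cloth/banner simulation
};

static PhysicsBudget PickBudget(bool gpuPhysicsAvailable)
{
    unsigned cores = std::thread::hardware_concurrency();
    if (gpuPhysicsAvailable) return {400, 64}; // GPU path: effects cranked up
    if (cores >= 4)          return {80, 16};  // quad core CPU fallback
    if (cores >= 2)          return {40, 8};   // dual core
    return {20, 4};                            // single core baseline
}

int main()
{
    PhysicsBudget b = PickBudget(false);
    printf("debris=%d cloth=%d\n", b.debrisPieces, b.clothSegments);
}
```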


So it uses 2 cores, and I suppose you get better framerates? That's what I have heard, although it doesn't work for me.
 
Joined
Nov 4, 2005
Messages
11,681 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
Here is a random screen shot!!!!

It will prove a point. I'm not sure what, but it will. :toast:
 

Attachments

  • random screenshot.jpg (208.1 KB)

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Here is a random screen shot!!!!

It will prove a point. I'm not sure what, but it will. :toast:

Is that sarcasm? Again... :shadedshu

If it is, maybe you can explain to me what makes the screenshot in the OP any less random and useless? (Not deprecating the OP, just making a point.)

And if it's not sarcasm, I suppose it could demonstrate either of these two possibilities:

1- NFS:Shit (edit: <-- that was accidental I swear, but... I think I'll leave it that way) uses PhysX and, when running on a Radeon card, uses two cores, even though it only uses one with an Nvidia card. Absolutely destroying what is being suggested in this thread.

2- It uses 2 cores under Win7, but only one on XP. I'll take the effort of installing the game on my Win 7 partition tomorrow and see if this one is a possibility.
 
Joined
Mar 1, 2008
Messages
282 (0.05/day)
Location
Antwerp, Belgium

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Well, that's what I thought. MR spawns four threads for your quad core.
Actually Valve is making a big push towards MR. The Valve particle simulation benchmark is one example of their efforts. But since Valve uses a modified Havok engine, it's going to be hard to compare it to other Havok based games.

But IMHO rendering != physics; nothing in the meaning of rendering includes physics. That added CPU usage is because of the improved rendering and not from physics calculations, IMO. There's very little physics in L4D anyway; it wouldn't max out a core.
 
Joined
Mar 1, 2008
Messages
282 (0.05/day)
Location
Antwerp, Belgium
But IMHO rendering != physics; nothing in the meaning of rendering includes physics. That added CPU usage is because of the improved rendering and not from physics calculations, IMO. There's very little physics in L4D anyway; it wouldn't max out a core.

Rendering is happening on the GPU, not the CPU.
Of course having four threads doesn't mean two of them are for physics, but that's the most likely explanation obviously. There is AI, sound processing, game mechanics, ... too.
Actually L4D contains a lot of physics (and AI & the Director)! All characters have complete interaction with their surroundings (movable objects, line of sight, collision detection, ...). Especially when a 100+ horde rushes at you.

This shows the difference between single and dual core pretty well:


The difference between the 4000+ and the 5000+ is huge.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Rendering is happening on the GPU, not the CPU.
Of course having four threads doesn't mean two of them are for physics, but that's the most likely explanation obviously. There is AI, sound processing, game mechanics, ... too.
Actually L4D contains a lot of physics (and AI & the Director)! All characters have complete interaction with their surroundings (movable objects, line of sight, collision detection, ...). Especially when a 100+ horde rushes at you.

This shows the difference between single and dual core pretty well:
http://www.pcgameshardware.de/screenshots/original/2008/11/L4D_CPUs_1280_181108_x.PNG

The difference between the 4000+ and the 5000+ is huge.

Man, a lot of the rendering process happens on the CPU. The graphics card needs to be fed by the CPU, among other things, and a faster CPU almost always gives better fps.

I don't think the zombies collide with each other, which would add a lot of processing, so there's not a lot of physics going on there IMO. I've just played to be sure about this and they do not collide. Collisions in Source are based on the typical hit-box scheme anyway; it's fairly simple. I don't know if there is any slowdown when the zombie hordes come, because it never goes below 60fps. But the fact that it always remains above 60fps with a single core tells it all anyway.
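To illustrate what I mean by the CPU feeding the card: here is a bare-bones sketch of a frame loop. Everything before the submit call is CPU work the GPU has to wait on each frame; all the names and numbers are placeholders, not a real engine.

```cpp
// Sketch: per-frame CPU work that has to finish before the GPU can render.
#include <cstdio>
#include <vector>

struct Object   { int meshId = 0; };
struct DrawCall { int meshId = 0; };

// Game logic, physics, animation -- all CPU time.
static std::vector<Object> UpdateGameLogicAndPhysics(float /*dt*/)
{
    return std::vector<Object>(1000);
}
// Visibility culling: decide what to draw -- CPU time.
static std::vector<Object> CullAgainstCamera(const std::vector<Object>& world)
{
    return std::vector<Object>(world.begin(), world.begin() + world.size() / 2);
}
// Building draw calls / driver work per object -- CPU time.
static std::vector<DrawCall> BuildDrawCalls(const std::vector<Object>& visible)
{
    std::vector<DrawCall> calls;
    for (const Object& o : visible) calls.push_back({ o.meshId });
    return calls;
}
// Stand-in for the graphics API submit; only here does the GPU get work.
static void SubmitToGpu(const std::vector<DrawCall>& calls)
{
    printf("submitted %zu draw calls\n", calls.size());
}

int main()
{
    // One frame at ~60 fps: the slower the CPU, the later the GPU starts.
    SubmitToGpu(BuildDrawCalls(CullAgainstCamera(UpdateGameLogicAndPhysics(0.016f))));
}
```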
 