• Welcome to TechPowerUp Forums, Guest! Please check out our forum guidelines for info related to our community.

PhysX only using one cpu core?

Joined
Mar 1, 2008
Messages
281 (0.05/day)
Location
Antwerp, Belgium
http://techreport.com/articles.x/17618/13

http://forums.overclockers.co.uk/showpost.php?p=14687943&postcount=4

I'll quote:
You can see the performance hit caused by enabling PhysX at this resolution. On the GTX 295, it's just not worth it. Another interesting note for you... As I said, enabling the extra PhysX effects on the Radeon cards leads to horrendous performance, like 3-4 FPS, because those effects have to be handled on the CPU. But guess what? I popped Sacred 2 into windowed mode and had a look at Task Manager while the game was running at 3 FPS, and here's what I saw, in miniature:



Ok, so it's hard to see, but Task Manager is showing CPU utilization of 14%, which means the game—and Nvidia's purportedly multithreaded PhysX solver—is making use of just over one of our Core i7-965 Extreme's eight front-ends and less than one of its four cores. I'd say that in this situation, failing to make use of the CPU power available amounts to sabotaging performance on your competition's hardware. The truth is that rigid-body physics isn't too terribly hard to do on a modern CPU, even with lots of objects. Nvidia may not wish to port its PhysX solver to the Radeon, even though a GPU like Cypress is more than capable of handling the job. That's a shame, yet one can understand the business reasons. But if Nvidia is going to pay game developers to incorporate PhysX support into their games, it ought to work in good faith to optimize for the various processors available to it. At a very basic level, threading your easily parallelizable CPU-based PhysX solver should be part of that work, in my view.
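The quote's arithmetic can be sanity-checked with a small sketch (a hypothetical helper, not anything from the article): on a chip exposing 8 hardware threads, each fully-busy thread shows up as 100/8 = 12.5% of total utilization in Task Manager, so 14% is just over one thread.

```python
# Convert a whole-machine CPU percentage into an equivalent number of
# fully-busy hardware threads. On a 4-core/8-thread Core i7, each
# hardware thread accounts for 100/8 = 12.5% of total utilization.

def threads_in_use(total_cpu_percent: float, hw_threads: int) -> float:
    """Equivalent count of saturated hardware threads for a given load."""
    per_thread_share = 100.0 / hw_threads
    return total_cpu_percent / per_thread_share

# The i7-965 case from the article: 14% total load across 8 threads.
equiv = threads_in_use(14.0, 8)
print(f"{equiv:.2f} hardware threads busy")  # just over one thread
```

This matches the article's reading: roughly 1.12 of the 8 hardware threads, i.e. barely more than a single core doing any work.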

In the Batman demo:
PhysX is only using one core on the CPU in that game, so that may be contributing to the slowdown.
 
Joined
Jul 19, 2006
Messages
43,585 (6.74/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way, with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
I don't understand what they expect exactly. If one core is giving 3 fps, 8 cores would give 24 fps, still very far from a good experience. And that's on a fast Core i7. An i7 920 would give what, 20 fps? And what about Core 2, which is what most people have? The i7 has like 50% more raw floating point power than Core 2 per core, and with only 4 cores we would be moving in the 10-12 fps realm. Want to start talking about dualies, which is what most people still have?

What do they expect? That developers create 8 different paths with different levels of detail and threading, so that every machine without a GeForce can have increasingly better physics that will still be very far off from what the weakest of GeForces can do?

No, friends, what makes more sense is to make 2 paths: one that will run on almost everything out there, and one that uses GPU-accelerated physics on GeForces, which by market share are 2/3 of the graphics cards out there. Hence it uses only 1 CPU core, because:

1- the GPU is much more powerful anyway, the power that even a Quad would add is irrelevant.
2- It can run on everything out there, including old dualies, whose second core could be full of secondary processes, etc.

Enabling GPU-accelerated physics and expecting to see CPU load, when that's highly unnecessary, is dumb IMHO. I like TechReport as much as anyone, it's the second site I go looking for reviews after TPU, but there they are just being a little bit dumb.
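The linear-scaling argument above (3 fps on one core, at best 24 fps on eight) can be sketched as an upper bound. This is a hypothetical back-of-envelope model using Amdahl's law, not anything from the thread; perfect scaling (parallel fraction = 1.0) is the most optimistic case, and real workloads fall short of it.

```python
# Best-case frame-rate estimate for CPU physics starting from a
# single-core baseline, using Amdahl's law: speedup = 1 / ((1-p) + p/n),
# where p is the parallelizable fraction and n the core count.

def best_case_fps(single_core_fps: float, cores: int,
                  parallel_fraction: float = 1.0) -> float:
    """Amdahl's-law upper bound on fps when spreading work over cores."""
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return single_core_fps * speedup

# Even assuming perfect scaling (p = 1.0), the 3 fps baseline never
# reaches a comfortable frame rate on consumer core counts.
for cores in (2, 4, 8):
    print(cores, "cores ->", best_case_fps(3.0, cores), "fps")
```

With any realistic parallel fraction below 1.0 the numbers only get worse, which is the post's point: threading alone cannot rescue a 3 fps starting point.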
 
Joined
Nov 4, 2005
Messages
11,654 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I don't understand what they expect exactly. If one core is giving 3 fps, 8 cores would give 24 fps, still very far from a good experience. And that's on a fast Core i7. An i7 920 would give what, 20 fps? And what about Core 2, which is what most people have? The i7 has like 50% more raw floating point power than Core 2 per core, and with only 4 cores we would be moving in the 10-12 fps realm. Want to start talking about dualies, which is what most people still have?

What do they expect? That developers create 8 different paths with different levels of detail and threading, so that every machine without a GeForce can have increasingly better physics that will still be very far off from what the weakest of GeForces can do?

No, friends, what makes more sense is to make 2 paths: one that will run on almost everything out there, and one that uses GPU-accelerated physics on GeForces, which by market share are 2/3 of the graphics cards out there. Hence it uses only 1 CPU core, because:

1- the GPU is much more powerful anyway, the power that even a Quad would add is irrelevant.
2- It can run on everything out there, including old dualies, whose second core could be full of secondary processes, etc.

Enabling GPU-accelerated physics and expecting to see CPU load, when that's highly unnecessary, is dumb IMHO. I like TechReport as much as anyone, it's the second site I go looking for reviews after TPU, but there they are just being a little bit dumb.

Suck some NV elsewhere. Multi-threaded applications are no longer the odd man out in an age of 12-core processors and common machines having at least two.



Task Manager isn't even close to maxed out on one CPU HT core, let alone any more of the eight HT cores available. It looks to be using spikes of CPU power, and only on one HT core. The point of the article is to show that Nvidia is causing the game to run poorly, and the game developer is causing issues by allowing such crap to be produced.


So if one HT core is giving 3 FPS, times 8 HT cores that's 24 FPS, the same framerate many consoles are limited to. But let's just throw that out and focus instead on how to insult other members.
 
Joined
May 19, 2007
Messages
7,662 (1.24/day)
Location
c:\programs\kitteh.exe
Processor C2Q6600 @ 1.6 GHz
Motherboard Anus PQ5
Cooling ACFPro
Memory GEiL2 x 1 GB PC2 6400
Video Card(s) MSi 4830 (RIP)
Storage Seagate Barracuda 7200.10 320 GB Perpendicular Recording
Display(s) Dell 17'
Case El Cheepo
Audio Device(s) 7.1 Onboard
Power Supply Corsair TX750
Software MCE2K5
jah know nv at it again ...
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Suck some NV elsewhere. Multi-threaded applications are no longer the odd man out in an age of 12-core processors and common machines having at least two.



Task Manager isn't even close to maxed out on one CPU HT core, let alone any more of the eight HT cores available. It looks to be using spikes of CPU power, and only on one HT core. The point of the article is to show that Nvidia is causing the game to run poorly, and the game developer is causing issues by allowing such crap to be produced.


So if one HT core is giving 3 FPS, times 8 HT cores that's 24 FPS, the same framerate many consoles are limited to. But let's just throw that out and focus instead on how to insult other members.

Nvidia is not causing poor performance; enabling a mode that is only supposed to run on the GPU is.

If you disable the GPU-accelerated PhysX the game will play like a charm. Again, you can't enable GPU-accelerated physics mode and expect to see CPU load, you can't. You can't expect to have the same improved PhysX running on the CPU either, simply because the CPU can't keep up with the power needed. Not even the i7 965 could keep up, let alone a Core 2 6400, for example. They are saying 14%, and that's more than 1 "core" on the i7: 100/8 = 12.5.

BTW, any CPU will show those spikes with single-threaded applications. The OS scheduler keeps moving the thread to the next available core, but since the workload is single-threaded only one core is busy at any given moment, which adds up to the equivalent of one core in use. That migration improves reliability and temperatures too, by spreading the load across the die.

And like erocker said, it was AMD who refused to use PhysX; Nvidia gave it to AMD for free. They refused for obvious reasons, none of them being to do the best for the consumer. They didn't even allow third parties to do the work, even though Nvidia supported them.

http://www.bit-tech.net/news/2008/07/09/nvidia-helping-to-bring-physx-to-ati-cards/1

However, an intrepid team of software developers over at NGOHQ.com have been busy porting Nvidia's CUDA-based PhysX API to work on AMD Radeon graphics cards, and have now received official support from Nvidia - who is no doubt delighted to see its API working on a competitor's hardware (as well as seriously threatening Intel's Havok physics system).

As cheesed off as this might make AMD, which is unsurprisingly not supporting NGOHQ's work, it could certainly be for the betterment of PC gaming as a whole. If both AMD and Nvidia cards support PhysX, it'll remove the difficult choice for developers of which physics API to use in games. We've been growing more and more concerned here at bit-tech at the increasingly fragmented state of the physics and graphics markets, and anything that has the chance to simplify the situation for consumers and developers can only be a good thing.
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.65/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Suck some NV elsewhere. Multi-threaded applications are no longer the odd man out in a age of 12 core processors and common machines having at least two.



The process mamager isn't even close to maxed out on one CPU HT core, let alone any more of the eight HT cores available. It looks to be using spikes of CPU power, and only on one HT core. The point of the article is to show that Nvidia is causing the game to run poorly, and the game developer is causing issues by allowing such crap to be produced.


So if one HT core is giving 3 FPS times 8 HT cores, 24 FPS, the same framerate many consoles are limited too. But lets just throw that out and focus instead on how to insult other members.
I agree. Physics calculations are highly parallel in nature, so they should multithread excellently. NVIDIA decided to focus on PhysX for GPU only, so without the GPU, PhysX is a massive burden, being poorly optimized for CPU load. Most games don't even use physics enough to warrant using a GPU anyway. I question everything about PhysX (the premise, the execution, the strategy, etc.).

DirectX 11 will murder PhysX because it will work on CPU and GPU. Instead of using, for instance, Havok for CPU and PhysX for GPU, just use DX11 and be done with it.
 
Last edited:
Joined
Mar 1, 2008
Messages
281 (0.05/day)
Location
Antwerp, Belgium
I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way, with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.

I don't know why everybody keeps saying this.
If you mean compute shaders, then it won't matter until someone actually uses them for a physics engine. Compute shaders by themselves are nothing more than what OpenCL & CUDA are.

My opinion about this matter is: what can be threaded easily, should be threaded. No excuses. I mean, the CUDA version is obviously already threaded, isn't it ...
Doesn't matter how you look at it, nVidia is holding the CPU version back in favour of the CUDA version. Havok uses multiple cores BTW.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.65/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Nothing is multithreaded easily but there's a lot of things that benefit greatly by doing so--physics for games is one of them.
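The point that per-object physics work parallelizes well can be illustrated with a minimal sketch. This is a hypothetical toy integrator, not PhysX or Havok code: each body's integration step depends only on its own state, so the body list can simply be split across workers (real engines use native threads and SIMD, and the collision phase needs far more care than this).

```python
# Toy demonstration that independent rigid-body integration is easy to
# split across workers: each chunk of bodies is stepped in parallel.
from concurrent.futures import ThreadPoolExecutor

def integrate(bodies, dt):
    """Semi-implicit Euler step for a chunk of independent bodies."""
    for b in bodies:
        b["vel"] = [v + a * dt for v, a in zip(b["vel"], b["acc"])]
        b["pos"] = [p + v * dt for p, v in zip(b["pos"], b["vel"])]
    return bodies

def step_parallel(bodies, dt, workers=4):
    """Split the body list into chunks and integrate each in a worker."""
    chunk = max(1, len(bodies) // workers)
    chunks = [bodies[i:i + chunk] for i in range(0, len(bodies), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for fut in [pool.submit(integrate, c, dt) for c in chunks]:
            fut.result()  # propagate any worker exception
    return bodies

# 1000 bodies dropped under gravity; one 16 ms frame step.
falling = [{"pos": [0.0, 10.0], "vel": [0.0, 0.0], "acc": [0.0, -9.81]}
           for _ in range(1000)]
step_parallel(falling, dt=0.016)
print(falling[0]["pos"][1])  # every body has started to fall
```

The split works because the bodies don't interact in this phase; it's the constraint/collision solve, where bodies do interact, that makes full engine threading genuinely hard.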
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Honestly, people should learn the difference between PhysX and GPU accelerated PhysX and stop bashing something they don't know about.

EDIT: When I say people I'm not directing this to anyone in particular. There's a lot of people in TPU and outside of it that don't understand that difference.

PhysX is a physics API that can run on the CPU like Havok or any other, but it has one particularity: it can run on the GPU too, allowing for much more details thanks to the vastly superior floating point power of GPUs.

http://www.brightsideofnews.com/new...-bullet3b-are-havok-and-physx-in-trouble.aspx

In case you wondered, according to Game Developer Magazine the world's most popular physics API is nVidia PhysX, with 26.8% market share [if there was any doubt that nVidia PhysX isn't popular, which was the defense line from many AMD employees], followed by Intel's Havok and its 22.7% - but the open-sourced Bullet Physics Library is third with 10.3%.
We have no doubt that going OpenCL might be the golden ticket for Bullet. After all, Maxon chose Bullet Physics Library for its Cinema 4D Release 11.5 [Cinebench R11 anyone?].

A lot of games use PhysX nowadays; to name a few: Unreal Tournament (without the add-on too), Gears of War, Mass Effect, NFS: Shift. All of them use CPU PhysX and I have yet to see a complaint about those games.

What makes less sense to me is that the same people that support OpenCL physics, because it will make developers' lives easier by having to make only one physics path, are asking developers to make a physics engine with very different paths so that it can run on different CPUs, and to design games with very different levels of physics, the latter being the thing that would add more work. An experienced coder can implement a physics engine in under a week (optimizing it is another story), but changing the level of detail of the physics on all the levels (maps, whatever) takes months for a lot of artists.

EDIT: As much as this might surprise you, it would be much easier for a developer to maintain the level of physics detail and code the engine in two different languages (CUDA and Stream, for example) to move those physics than to create two different levels of detail.

I don't know why everybody keeps saying this.
If you mean compute shaders, then it won't matter until someone actually uses them for a physics engine. Compute shaders by themselves are nothing more than what OpenCL & CUDA are.

My opinion about this matter is: what can be threaded easily, should be threaded. No excuses. I mean, the CUDA version is obviously already threaded, isn't it ...
Doesn't matter how you look at it, nVidia is holding the CPU version back in favour of the CUDA version. Havok uses multiple cores BTW.

I doubt it. Unless they make different paths with different requirements, a developer can't make the engine use many cores by default, because it wouldn't run on slower CPUs. I don't remember having a physics slider in any game using Havok either, but there could be one.
 
Last edited:
Joined
May 7, 2009
Messages
5,392 (0.99/day)
Location
Carrollton, GA
System Name ODIN
Processor AMD Ryzen 7 5800X
Motherboard Gigabyte B550 Aorus Elite AX V2
Cooling Dark Rock 4
Memory G Skill RipjawsV F4 3600 Mhz C16
Video Card(s) MSI GeForce RTX 3080 Ventus 3X OC LHR
Storage Crucial 2 TB M.2 SSD :: WD Blue M.2 1TB SSD :: 1 TB WD Black VelociRaptor
Display(s) Dell S2716DG 27" 144 Hz G-SYNC
Case Fractal Meshify C
Audio Device(s) Onboard Audio
Power Supply Antec HCP 850 80+ Gold
Mouse Corsair M65
Keyboard Corsair K70 RGB Lux
Software Windows 10 Pro 64-bit
Benchmark Scores I don't benchmark.
I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way, with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.

Not sure how it all went down, but I do know ATI did not refuse. They tried to get Nvidia to agree to one system for physics calculations and leave it open to the developers. This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree to this, so they told ATI to piss off. Then ATI put a lot of effort behind helping Havok as a purely software physics solution.

How is Havok doing? I haven't heard anything from them in a while.
 
Joined
Sep 25, 2007
Messages
5,965 (0.99/day)
Location
New York
Processor AMD Ryzen 9 5950x, Ryzen 9 5980HX
Motherboard MSI X570 Tomahawk
Cooling Be Quiet Dark Rock Pro 4(With Noctua Fans)
Memory 32Gb Crucial 3600 Ballistix
Video Card(s) Gigabyte RTX 3080, Asus 6800M
Storage Adata SX8200 1TB NVME/WD Black 1TB NVME
Display(s) Dell 27 Inch 165Hz
Case Phanteks P500A
Audio Device(s) IFI Zen Dac/JDS Labs Atom+/SMSL Amp+Rivers Audio
Power Supply Corsair RM850x
Mouse Logitech G502 SE Hero
Keyboard Corsair K70 RGB Mk.2
VR HMD Samsung Odyssey Plus
Software Windows 10
Havok is still used in many games today, many more than PhysX is.

The problem with PhysX is that even though you can run it on a CPU, in newer games like Batman, try turning it up and see what happens: either A) you will get horrible fps unless you have a very good CPU, or B) it will crash.

And if you have an ATI card on Windows 7 or higher you're kinda screwed, because in newer drivers Nvidia has killed support if you have an ATI card, period. If you have an ATI card and an Nvidia card, then you're not getting PhysX acceleration even if your Nvidia card supports it, unless it's the primary adapter, which renders the ATI card useless.

And what does Nvidia say about this?
Nvidia supports GPU-accelerated PhysX on NVIDIA GPUs while using NVIDIA GPUs for graphics. NVIDIA performs extensive engineering, development, and QA work that makes PhysX a great experience for customers. For a variety of reasons - some development expense, some quality assurance, and some business reasons - NVIDIA will not support GPU-accelerated PhysX with NVIDIA GPUs while GPU rendering is happening on non-NVIDIA GPUs.

And the ones who get hurt the most by this are not the customers; I think it's the developers.

But you have an 8800GTS, so I am confused as to why it's not working right.
 
Joined
Aug 16, 2007
Messages
7,180 (1.18/day)
Not sure how it all went down, but I do know ATI did not refuse. They tried to get Nvidia to agree to one system for physics calculations and leave it open to the developers. This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree to this, so they told ATI to piss off. Then ATI put a lot of effort behind helping Havok as a purely software physics solution.

How is Havok doing? I haven't heard anything from them in a while.

Havok is owned by Intel; it's not like a separate thing or anything, it's just software owned by Intel. It's pretty basic, most games use it, it's nothing like proper physics.
Wait, yes it is a separate thing, it's an Irish company, but they are owned by Intel, I think?
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Not sure how it all went down, but I do know ATI did not refuse. They tried to get Nvidia to agree to one system for physics calculations and leave it open to the developers. This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree to this, so they told ATI to piss off. Then ATI put a lot of effort behind helping Havok as a purely software physics solution.

How is Havok doing? I haven't heard anything from them in a while.

What AMD was asking for was to abandon PhysX altogether and postpone GPU-accelerated physics until there was an open API that would run on everything. They did say that Nvidia could offer PhysX to an open standardization body (i.e. Khronos) and then they would "use PhysX", although indirectly. Obviously Nvidia didn't want to wait 2+ years to offer something they could offer back then, so they continued with PhysX through CUDA, while working closely with Khronos on OpenCL and with MS on DX11. What I still don't understand is how they support Havok without the need of any standardization process.

Havok is still used in many games today, many more than physx is.

Nope, read post #10. Before reading that a week ago or so, I thought Havok was more widely used too. But it's not. PhysX is much, much cheaper for developers than Havok, BTW. It's even free if you don't need access to the source code...
 
Last edited:
Joined
May 7, 2009
Messages
5,392 (0.99/day)
Location
Carrollton, GA
System Name ODIN
Processor AMD Ryzen 7 5800X
Motherboard Gigabyte B550 Aorus Elite AX V2
Cooling Dark Rock 4
Memory G Skill RipjawsV F4 3600 Mhz C16
Video Card(s) MSI GeForce RTX 3080 Ventus 3X OC LHR
Storage Crucial 2 TB M.2 SSD :: WD Blue M.2 1TB SSD :: 1 TB WD Black VelociRaptor
Display(s) Dell S2716DG 27" 144 Hz G-SYNC
Case Fractal Meshify C
Audio Device(s) Onboard Audio
Power Supply Antec HCP 850 80+ Gold
Mouse Corsair M65
Keyboard Corsair K70 RGB Lux
Software Windows 10 Pro 64-bit
Benchmark Scores I don't benchmark.
8800GTS, so I am confused as to why it's not working right.

It may be disabled in the Nvidia Control Panel as it is by default.

Havok is owned by Intel; it's not like a separate thing or anything, it's just software owned by Intel. It's pretty basic, most games use it, it's nothing like proper physics.
Wait, yes it is a separate thing, it's an Irish company, but they are owned by Intel, I think?

Not sure who owns them, but if they come through on the physics calculation plugin thingy they were talking about last time I checked on Havok, it will be proper physics. It will just be designed to run on whatever GPU you are using in your system. That project was a joint venture between Havok, Intel, and AMD/ATI. It was called Havok FX.
 
Last edited by a moderator:

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
It may be disabled in the Nvidia Control Panel as it is by default.



Not sure who owns them, but if they come through on the Physics calculation plugin thingy they were talking about last time I checked on Havok, it will be proper physics. It will just be design to run on whatever GPU you are using in your system. That project was a joint ventor between Havok, Intel, and AMD/ATI. It was called Havok FX.

Havok FX was another thing. It was effects physics on ATI and Nvidia GPUs, before Intel bought Havok (guess why) and much earlier than Nvidia bought Ageia. Havok FX was the response from ATI and Nvidia to Ageia's PPU. Then Intel bought Havok and everything became, how to say... cloudy. :)

Havok=Intel and AMD are working on GPU-accelerated physics, but it's not called Havok FX unless they reused the name for something that is almost completely different. GPU physics = PhysX = whatever that project is called != effects physics. Effects physics are the kind in which there are no interactions; that is, you could break something into hundreds of pieces (which would trigger a change from a solid to a bunch of particles) and those would fall to the floor realistically, and maybe even the wind or something could deflect them, but once on the floor the player wouldn't be able to move them; nothing would.

Havok=Intel is hardly the answer anyway. In making a fair engine that runs well on everything, I would trust Nvidia over Intel any day; remember who's been paying and blackmailing PC vendors to not use competitors' products. Besides, Larrabee will be so different that no doubt they will optimize Havok to run on that architecture much, much better than on ATI and Nvidia GPUs. Nvidia's and ATI's architectures are like twins compared to what Larrabee will be.
 
Last edited:
Joined
May 7, 2009
Messages
5,392 (0.99/day)
Location
Carrollton, GA
System Name ODIN
Processor AMD Ryzen 7 5800X
Motherboard Gigabyte B550 Aorus Elite AX V2
Cooling Dark Rock 4
Memory G Skill RipjawsV F4 3600 Mhz C16
Video Card(s) MSI GeForce RTX 3080 Ventus 3X OC LHR
Storage Crucial 2 TB M.2 SSD :: WD Blue M.2 1TB SSD :: 1 TB WD Black VelociRaptor
Display(s) Dell S2716DG 27" 144 Hz G-SYNC
Case Fractal Meshify C
Audio Device(s) Onboard Audio
Power Supply Antec HCP 850 80+ Gold
Mouse Corsair M65
Keyboard Corsair K70 RGB Lux
Software Windows 10 Pro 64-bit
Benchmark Scores I don't benchmark.
You know, I am getting sick of these random pissing contests between those three. Granted, it is funny at times, especially when Intel and Nvidia ignore AMD while AMD is steadily pulling itself up by the bootstraps (to coin a phrase).

Personal favorite pissing contest moment: Intel and Nvidia openly arguing about integrated graphics "solutions", then AMD quietly releasing the 780G, which at the time made all other IGPs look pathetic.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
You know, I am getting sick of these random pissing contests between those three. Granted, it is funny at times, especially when Intel and Nvidia ignore AMD while AMD is steadily pulling itself up by the bootstraps (to coin a phrase).

Personal favorite pissing contest moment: Intel and Nvidia openly arguing about integrated graphics "solutions", then AMD quietly releasing the 780G, which at the time made all other IGPs look pathetic.

Yeah, I'm sick of it too, but reality is reality, and the reality is that there's no disinterested physics developer with a solution capable of competing with those two. Before Intel and Nvidia bought them, both Havok and Ageia bought almost every serious competitor, including the one that was best IMO, Meqon Physics (bought by Ageia). Those guys were offering things similar to Euphoria back in 1999. Duke Nukem Forever was going to use it for a level of physics and interactivity never seen before. Purportedly much, much better than Half-Life 2.

I personally want much better physics in games, the kind of physics that only hardware-accelerated physics can offer, and I want them the sooner the better. It's not the first time I've said this. That's why I always supported PhysX: because they were the only ones offering a revolution and they were offering it now (erm, back then :)). Honestly, even today, I don't care what OpenCL or DX11 will offer in that regard, because whatever they offer, even if it's 10 times better and 10 times easier for developers, it won't happen until late 2010 at the soonest. That doesn't mean that I don't support them, but from a distance, because as things are now, supporting them means burying PhysX, and I want something until OpenCL and DX11 accelerated solutions are ready. They are nothing more than a paper launch today.
 

EastCoasthandle

New Member
Joined
Apr 21, 2005
Messages
6,885 (1.00/day)
System Name MY PC
Processor E8400 @ 3.80Ghz > Q9650 3.60Ghz
Motherboard Maximus Formula
Cooling D5, 7/16" ID Tubing, Maze4 with Fuzion CPU WB
Memory XMS 8500C5D @ 1066MHz
Video Card(s) HD 2900 XT 858/900 to 4870 to 5870 (Keep Vreg area clean)
Storage 2
Display(s) 24"
Case P180
Audio Device(s) X-fi Plantinum
Power Supply Silencer 750
Software XP Pro SP3 to Windows 7
Benchmark Scores This varies from one driver to another.
Interesting OP. I wonder if that is happening in other CPU PhysX games like Batman (retail) or Shift? From that author's POV he calls it flat-out sabotage. And it's interesting that if you reduce the number of threads you reduce the frame rate.
 

shevanel

New Member
Joined
Jul 27, 2009
Messages
3,464 (0.65/day)
Location
Leesburg, FL
Check out the non-GPU-accelerated physics in Red Faction:G, by Havok I believe.

I was so unimpressed with the physics in BM:AA..

GRID has all of the things BM:AA has (smoke, flags, breakables), and BM:AA did not even have breakable objects... (why would anyone want to run the suggested 9800GTX for physics... so useless)
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.25/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
PhysX is emulated on CUDA, which is then in turn being emulated in CPU code. That isn't exactly efficient; what did you all expect?

PhysX is inherently NOT multi-threaded. It is designed to run on a single PPU. Why would you expect it to suddenly become multi-threaded when run on a CPU?
 

shevanel

New Member
Joined
Jul 27, 2009
Messages
3,464 (0.65/day)
Location
Leesburg, FL
Counter-Strike: Source is old, I know.. but I remember the very first time I ever played it, coming from 1.6, and I was amazed at how the barrels could be knocked over.. debris could be kicked.. rag-doll deaths...

It's been a long time since I've played a game that had such a great physics "feel".

Even Half-Life 2.. the first time I played and shot one of those guards.. it felt so realistic. We never needed old rebranded cards to do the job next to our main GPU. I know it's apples and oranges comparing Source to today's games, but I'm just saying..
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.25/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Saying what? That the physics in old games don't even come close to current games?
 

shevanel

New Member
Joined
Jul 27, 2009
Messages
3,464 (0.65/day)
Location
Leesburg, FL
Which games are you referring to that have such great physics?

Or have I just gotten spoiled?
 
Joined
Jul 19, 2006
Messages
43,585 (6.74/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
You know who I would like to step in here? Microsoft. For the most part, Windows is the road these cards drive on, like we drive our cars down the road. We as motorists are restricted to a set of standards (speed limits, safety equipment, etc.) that we must conform to. I'd like to see Microsoft step up and say, "OK, this is the way it is going to be done." Set up standards for Windows and work in collaboration with hardware manufacturers. Have unified physics and the like, and let the video card companies duel it out through performance.
 