
NVIDIA PhysX and Flow Made Fully Open-Source

Short answer: yes. 32-bit PhysX is all legacy code at this point. No one is using it; more advanced, 64-bit versions replaced the legacy code many years ago. This is why Nvidia has dropped support for it.
That's the amusing thing about this issue. Nvidia has been clear since 2022 that 32-bit CUDA would not be supported on Blackwell, and it had started deprecating it long before that. What's amusing is that nobody seemed to put two and two together and figure out that this also meant ye olde PhysX wouldn't run on Blackwell, until the latter hit the shelves.
But I suppose this should also be blamed on Nvidia. Failing to communicate the repercussions to non-technical consumers was obviously stupid and short-sighted.

CPU PhysX is literally baked into Unity as the default physics engine, for better or for worse (mostly worse, because yes, it is single-threaded dogshit in CPU-collidable form).
Man, KSP did a number on you, didn't it? :laugh:
Recent Unity physics engines (including PhysX-powered ones) are multithreaded. Unity itself, though, is (or was, the last time I checked) notoriously single-threaded, so that could be the bottleneck. Perhaps things have gotten better since they started getting serious with ECS/DOTS, though...

(GPU PhysX does not support collisions, to my knowledge)
Collisions can be parallelized, and AFAIK GPU PhysX supports them. Interaction of cloth, fluids and debris with the player/environment was a major marketing point for GPU-powered PhysX games. Remember Batman: Arkham Asylum's smoke and flying paper/trash?
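(If anyone's curious why they parallelize so well: the broad-phase pair tests are independent of each other, so they split cleanly across workers, or across thousands of GPU threads. Here's a toy Python sketch; everything in it, names included, is illustrative, not actual PhysX code.)

Code:
from itertools import combinations
from multiprocessing import Pool

def overlaps(pair):
    # Two AABBs overlap iff they overlap on every axis.
    (amin, amax), (bmin, bmax) = pair
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def broad_phase(boxes):
    # Every pair is an independent test, so Pool.map can fan them out.
    pairs = list(combinations(boxes, 2))
    with Pool() as pool:
        hits = pool.map(overlaps, pairs)
    return [p for p, h in zip(pairs, hits) if h]

if __name__ == "__main__":
    boxes = [((0, 0, 0), (1, 1, 1)),
             ((0.5, 0.5, 0.5), (2, 2, 2)),
             ((5, 5, 5), (6, 6, 6))]
    print(len(broad_phase(boxes)))  # -> 1 overlapping pair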
 
I suspect they would have fixed it themselves if it was trivial. Now they're hoping modders will do it for free.
Modders are more likely to do a better job than the entire AMD, Intel and nVidia development team combined.
Modders drive the QoL mods for all of our favorite games
Corpos see that, screech, outright steal the code, then paywall shittier implementations after YEARS of playing whack-a-mole with everyone who used them first.
Those egregious behaviors are 90% of the reason I bailed on thousands of friends, fans and multiple games that could not be bothered to get it together.
So we'll see how this one goes. Give it two weeks.
 
Do you people just, like, not read?

It's version V4; you're not going to find source code for older SDKs because there isn't any. Those old games that run like crap, the ones people are complaining about, were using SDKs prior to V4.
You asked for the version 4 source code. That is version 4. Seriously?
 
Modders are more likely to do a better job than the entire AMD, Intel and nVidia development team combined.
Modders drive the QoL mods for all of our favorite games
Some modders would give Chris Sawyer and John Carmack a run for their money, but modders are collectively "more likely" to do a better job?
Well, if we're talking nudity mods and cheats...

People tend to forget that mods complement mountains of development and design in the base game. Impressive as they are/were, Fallout: London and Counter-Strike would have achieved nothing without the work (and decades of collective experience) put into FO4 and Half-Life.
 
So the patent that Ageia was awarded has expired, then? Hopefully someone will be brave enough to make dedicated PhysX accelerators once again.

Nah, just one more thing to power and cause more heat. Blah.
 
Can't see why it wouldn't. They opened the source and now devs/modders can patch all of the 32bit code to work with modern hardware.
They can't; as mentioned before, those older games use older versions that precede a major refactor of the framework, and those versions have not been open-sourced.
You asked for the version 4 source code. That is version 4. Seriously?
They did ask for an older version, pre-v4.
 
Nvidia was trying to use PhysX to make its hardware look superior to the competition, even when the competition had hardware that was just as fast as Nvidia's. Look familiar? Yes, they keep doing it.

Anyway, CPU PhysX was made so badly that people wanting to enjoy PhysX effects had to pay for an Nvidia card. And not just any Nvidia card: a fast Nvidia card. Why? Because Nvidia locked the driver so that if a competing GPU was primary, PhysX and CUDA were instantly disabled. So a fast AMD GPU plus a mid/low-end Nvidia GPU for PhysX was not possible (unless someone bypassed the driver lock). Yeah, they're the good guys. They opened up PhysX, hoorah!

Physx uses x87 code? | TechPowerUp Forums
 
No. Modern CPUs and GPUs will run circles around the older hardware. Not sure where this logic is coming from, but it has no merit.

You should have checked publicly available databases before posting the above misleading statement. The truth is that the FLOPS of a modern CPU such as the Ryzen 7 9700X (2024), with all cores running at 100% (5 GHz × 8 cores × 2 ops/cycle × 512/32 lanes = 1280 GFLOPS, AVX-512), is roughly equivalent to a mid-range GPU from 2011 (when Batman: Arkham City, which utilized PhysX, was released), such as the GTX 560, which is capable of about 1 TFLOPS. The memory bandwidth of a desktop CPU with DDR5 memory in 2025 is slightly lower than the memory bandwidth of a mid-range GPU from 2011.
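If you want to check the arithmetic yourself, here it is spelled out; the 2 ops per cycle is my assumption:

Code:
clock_ghz   = 5.0        # assumed all-core clock
cores       = 8          # Ryzen 7 9700X
ops_per_cyc = 2          # assumption: 2 x 512-bit FP ops per cycle
lanes       = 512 // 32  # 16 FP32 lanes per AVX-512 op

print(clock_ghz * cores * ops_per_cyc * lanes)  # 1280.0 GFLOPS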

I'm aware of that fact. I was referring to interoperability between 32-bit apps and 64-bit libraries, as explained in my previous post.

ROCm is 64-bit only.
Also aware of that fact. Again, check my previous posts. :)

Of course. You are always right, never mistaken. You hate opinions of other people if they are contradicting your fairy-tales.
 
Of course. You are always right, never mistaken. You hate opinions of other people if they are contradicting your fairy-tales.
While you are up there in my head, mind finding out where I put those papers I'm looking for? Can't for the life of me remember.
 
I suspect that even recent CPUs such as Ryzen 9700X might not be fast enough to outperform older GPUs when running PhysX simulations.
My R5 7600 performs about the same as my GT 710.
 
While you are up there in my head, mind finding out where I put those papers I'm looking for? Can't for the life of me remember.

Do this, please: ask ChatGPT/Gemini/etc. whether it is possible for a single Linux/Windows process to utilize both 32-bit and 64-bit libraries.
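Or skip the chatbot and just try it. A minimal sketch, assuming a Linux box with a 32-bit .so lying around (the path below is hypothetical): a 64-bit Python process will refuse to load it.

Code:
import ctypes

try:
    # Hypothetical path to a 32-bit build of some library.
    ctypes.CDLL("/usr/lib32/libexample.so")
except OSError as err:
    print(err)  # "wrong ELF class: ELFCLASS32" on 64-bit Python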
 
They are pulling a Bethesda, it seems.
Is that just about this particular Nvidia move? A huge number of publishers have already pulled a "Bethesda" and have put their entire game-making on the "rails" of that arrogant approach for many years now. They ship players barely functional, buggy barebones and "let" modders do the entire QA and QoL work "for free", then grab all the merit for themselves later. Or even sell the mods as their own paid DLC.

Judging by the train wreck that is Blackwell, I'm going to say this is more about Nvidia just not caring about the gaming market right now than about it being challenging to fix.
nV is doing it now because the cost of maintaining it is larger than the benefit of keeping it in-house. No more, no less, and certainly no malice beyond the required corporate greed.
Exactly! It seems like Nvidia has completely turned toward the enterprise segment, and this was just a farewell gesture from them. And of course the main reason, as described above, is money: Nvidia simply cuts costs for "unsupported" hardware/software. They really don't care anymore, since they won't be able to make any profit from the old hardware and the closed "eco"system. There's simply no point in keeping support for it.
The only issue here is that Nvidia might still hypocritically sell AI/enterprise GPUs under the disguise of gaming SKUs.

Dunno. Methinks this would be more helpful for recent and future products than for past ones.


It's not a matter of the technical issue being trivial or not; it's a financial one, and they obviously gauged it to be of no value to solve.

What they are offloading to others seems to be QA of future releases. Although this probably targets studios and engine devs more than it does modders.
Yeah, they've pretty much drawn the line: 64-bit PhysX is the only viable option for "future" games, if any get developed with it at all. Abandoning the proprietary tech after keeping it inside their sick walled garden for almost a decade and a half doesn't exactly look like a future-proof move.

Also, this puts unnecessary burden and overhead on developers to support the 32-bit API, if there even is one left. So it seems to call for another unified solution, akin to what OpenAL and OpenSpy have become, maybe even some open "universal" translation layer, rather than each studio doing it individually.
However, the game devs have already been baited by the "new" and "fancier" toys, such as AI, RT, frame gen and horrible TAA, which were supposed to give them a leg up in development.

But who knows: now that Nvidia has given up, maybe the Radeons, even older ones, might one day receive the "PhysX treatment". One can only dream... :laugh:
Only about a decade late. But I wonder if someone out there can make PhysX work on Blackwell. If they manage to, then Nvidia should be ashamed of themselves.
Blackwell gaming cards might as well end up being just a symbolic move and a one-off limited event, considering the non-existent stock and the complete silence about the amounts sold.

The first truly positive Nvidia-related news in recent years. I hope developers will make good use of it. I'm also wondering if this will improve things on Linux and/or maybe PhysX on AMD?
It's only positive because Nvidia has dropped it from their scope of interests. But you can wait; there may be a solution... maybe in another decade :rolleyes:
About 15 years too late.
As usual for any company once it has dropped the hostage-milking of its property. They kept a predatory grip on PhysX for as long as it gave their brand a boost (even Intel let Havok be CPU-neutral). Once it was deemed less attractive than AI-this and AI-that, it was dumped. This isn't a surprise, since Nvidia took its first steps toward making PhysX open source back in 2018.
 
You should have checked publicly available databases before posting the above misleading statement. The truth is that the FLOPS of a modern CPU such as the Ryzen 7 9700X (2024), with all cores running at 100% (5 GHz × 8 cores × 2 ops/cycle × 512/32 lanes = 1280 GFLOPS, AVX-512), is roughly equivalent to a mid-range GPU from 2011 (when Batman: Arkham City, which utilized PhysX, was released), such as the GTX 560, which is capable of about 1 TFLOPS. The memory bandwidth of a desktop CPU with DDR5 memory in 2025 is slightly lower than the memory bandwidth of a mid-range GPU from 2011.



Of course. You are always right, never mistaken. You hate opinions of other people if they are contradicting your fairy-tales.
You're underestimating Zen 5 and undercounting an FMA as one op when it's actually 2 ops. A 9700X can execute 2 AVX-512 FMAs and 2 FADDs per cycle. That works out to 96 (2×512×2/32 + 2×512/32) 32-bit floating-point operations per cycle. Across the entire CPU, at 5 GHz, the peak throughput would be 3840 GFLOPS, which is more than a GTX 680, let alone a GTX 560.
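Spelled out, with each FMA counted as the two ops it actually is:

Code:
lanes     = 512 // 32            # 16 FP32 lanes per AVX-512 op
fma_ops   = 2 * lanes * 2        # 2 FMA pipes, each FMA = 2 FLOPs -> 64
fadd_ops  = 2 * lanes            # 2 FADD pipes                    -> 32
per_cycle = fma_ops + fadd_ops   # 96 FLOPs per core per cycle

print(per_cycle * 8 * 5.0)       # 8 cores @ 5 GHz -> 3840.0 GFLOPS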
 
Nvidia was trying to use PhysX to make its hardware look superior to the competition, even when the competition had hardware that was just as fast as Nvidia's. Look familiar? Yes, they keep doing it.

Anyway, CPU PhysX was made so badly that people wanting to enjoy PhysX effects had to pay for an Nvidia card. And not just any Nvidia card: a fast Nvidia card. Why? Because Nvidia locked the driver so that if a competing GPU was primary, PhysX and CUDA were instantly disabled. So a fast AMD GPU plus a mid/low-end Nvidia GPU for PhysX was not possible (unless someone bypassed the driver lock). Yeah, they're the good guys. They opened up PhysX, hoorah!

Physx uses x87 code? | TechPowerUp Forums
You didn’t answer the question…

Why do you care so much about Nvidia when you don't use their products? Do you just get off on moaning on the internet about things that don't impact you?
 
You're underestimating Zen 5 and undercounting an FMA as one op when it's actually 2 ops. A 9700X can execute 2 AVX-512 FMAs and 2 FADDs per cycle. That works out to 96 (2×512×2/32 + 2×512/32) 32-bit floating-point operations per cycle. Across the entire CPU, at 5 GHz, the peak throughput would be 3840 GFLOPS, which is more than a GTX 680, let alone a GTX 560.

Yes, but the source you cited also states that AVX-512 on Zen 5 will be severely limited by cache and memory bandwidth. In other words, in a real-world scenario it is unlikely for AVX-512 on Zen 5 to outperform AVX-512 on Zen 4 by a large margin, unless the dataset fits in Zen 5's L1D or L2 cache.
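A crude roofline-style check shows why; the bandwidth figure below is a rough assumption, not a measurement:

Code:
peak_gflops = 3840.0  # paper peak from the post above
ddr5_gbs    = 80.0    # assumed dual-channel DDR5 bandwidth in GB/s

# FLOPs needed per byte of memory traffic to stay compute-bound:
print(peak_gflops / ddr5_gbs)  # ~48 FLOPs per byte

# A streaming kernel like a[i] = b[i] + c[i] moves ~12 bytes per FLOP,
# so out of main memory it tops out near:
print(ddr5_gbs / 12)           # ~6.7 GFLOPS, a tiny fraction of peak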
 
You go ahead and show me where I can find the source code for versions older than PhysX V4. Unity is using CPU PhysX post-V4, which is a lot more optimized; that's why it runs relatively well.
Well, no duh, I specifically excluded GPU-accelerated PhysX in my statement. I don't think it's even updated post-V4, though I could be wrong. Either way, the trees are completely separate, for somewhat obvious reasons. The open-sourcing of the GPU-compute CUDA kernel tree is basically what just happened.

Also, having worked with Unity's PhysX implementation a lot, I have to say I find the idea that it "works well" humorous. But then, maybe it isn't really intended for planetarium n-body sims. I do know it is AVX-oblivious, using mostly SSE2.
 
You should have checked publicly available databases before posting the above misleading statement. The truth is that the FLOPS of a modern CPU such as the Ryzen 7 9700X (2024), with all cores running at 100% (5 GHz × 8 cores × 2 ops/cycle × 512/32 lanes = 1280 GFLOPS, AVX-512), is roughly equivalent to a mid-range GPU from 2011 (when Batman: Arkham City, which utilized PhysX, was released), such as the GTX 560, which is capable of about 1 TFLOPS. The memory bandwidth of a desktop CPU with DDR5 memory in 2025 is slightly lower than the memory bandwidth of a mid-range GPU from 2011.
This is the funniest thing I've read this week... and it's been a great week for funny as hell statements.
 
This is the funniest thing I've read this week... and it's been a great week for funny as hell statements.

"Funny" isn't a measure. Only "true" and "false" are measures.
 
Do this, please: ask ChatGPT/Gemini/etc. whether it is possible for a single Linux/Windows process to utilize both 32-bit and 64-bit libraries.
And take it seriously? Sorry, I'm not that stupid.
 
Nvidia was trying to use PhysX to make its hardware look superior to the competition, even when the competition had hardware that was just as fast as Nvidia's. Look familiar? Yes, they keep doing it.

Anyway, CPU PhysX was made so badly that people wanting to enjoy PhysX effects had to pay for an Nvidia card. And not just any Nvidia card: a fast Nvidia card. Why? Because Nvidia locked the driver so that if a competing GPU was primary, PhysX and CUDA were instantly disabled. So a fast AMD GPU plus a mid/low-end Nvidia GPU for PhysX was not possible (unless someone bypassed the driver lock). Yeah, they're the good guys. They opened up PhysX, hoorah!

Physx uses x87 code? | TechPowerUp Forums
It uses SSE2 now, but for practically forever it was x87, yeah.

Try it and then judge it. Don't judge it before trying it out.
No, I know well enough to recognize a bad source when I see it (not to mention I have tried it). Even a broken clock is right twice a day.

Anyway guys, while you are all happily debating theory here, I am going to go back to my OSS KSP planetarium work:

 
You know PhysX is "dead" when Nvidia open-sources it, lol.
 