
AMD Publishes FEMFX Deformable Physics Library on GPUOpen

btarunr

Editor & Senior Moderator
FEMFX is a multithreaded CPU library for deformable material physics, using the Finite Element Method (FEM). Solid objects are represented as a mesh of tetrahedral elements, and each element has material parameters that control stiffness, how volume changes with deformation, and stress limits where fracture or plastic (permanent) deformation occur. The model supports a wide range of materials and interactions between materials. We intend for these features to complement rather than replace traditional rigid body physics. The system is designed with the following considerations:



  • Fidelity: realistic-looking wood, metal, plastic, even glass, because they bend and break according to stress as real materials do.
  • Deformation effects: non-rigid use cases such as soft-body objects, bending or warping objects. It is not just a visual effect, but materials will resist or push back on other objects.
  • Changing material on the fly: you can change the settings to make the same object behave very differently, e.g., turn gelatinous or melt.
  • Interesting physics interactions for gameplay or puzzles.
The library uses extensive multithreading to utilize multicore CPUs and benefit from the trend of increasing CPU core counts.

Features
  • Elastic and plastic deformation
  • Implicit integration for stability with stiff materials
  • Kinematic control of mesh vertices
  • Fracture between tetrahedral faces
  • Non-fracturing faces to control shape of cracks and pieces
  • Continuous collision detection (CCD) for fast-moving objects
  • Constraints for contact resolution and to link objects together
  • Constraints to limit deformation
  • Dynamic control of tetrahedron material parameters
  • Support for deforming a render mesh using the tetrahedral mesh
To maximize the value for developers, we're providing the implementation source code as part of GPUOpen under the MIT license. The full release includes the library source code, sample code, and, for Unreal Engine developers, source for a plugin that demonstrates custom rendering and scene creation.

View at TechPowerUp Main Site
 
physics on cpu in 2020 :rolleyes:
get back to us when you have something better than we've had for years.
and a friggin screenshot in the OP. :laugh:
 
physics on cpu in 2020 :rolleyes:
get back to us when you have something better than we've had for years.
and a friggin screenshot in the OP. :laugh:

But you really don't have anything. Nvidia has crippled PhysX, and I guess no one cares about Havok anymore, apparently.
 
But you really don't have anything. Nvidia has crippled PhysX, and I guess no one cares about Havok anymore, apparently.
well, at least this is multithreaded and open source.
still, physics should be done on gpu.
physx is pretty good. played Control this year, and the environmental destruction is absolutely ridiculous in boss fights. naturally it's not widely adopted tho.
 
Last edited:

But you really don't have anything. Nvidia has crippled PhysX, and I guess no one cares about Havok anymore, apparently.
PhysX isn't crippled, and Havok is still widely used (especially in multiplatform titles). The only downside is that MS and Havok haven't done anything to improve it since 2011, and DX physics is taking too long to come to fruition.

physics on cpu in 2020 :rolleyes:
get back to us when you have something better than we've had for years.
Properly threaded, scalable and not platform-dependent? You've got to share what you are smoking.
 
well, at least this is multithreaded and open source.
still, physics should be done on gpu.
physx is pretty good. played Control this year, and the environmental destruction is absolutely ridiculous in boss fights. naturally it's not widely adopted tho.
Because you say so, Nvidia backer says gpu physx only please.
 
physics on cpu in 2020 :rolleyes:
get back to us when you have something better than we've had for years.
and a friggin screenshot in the OP. :laugh:

Back then, you had mostly dual- or quad-core CPUs with no extra threads: the penalty for using it would significantly impact the game's performance, which is why a dedicated card was required. Nvidia integrated PhysX into their GPUs, but that still had a significant impact on performance when used, though orders of magnitude less than via CPU.

Now, 8c/16t is "normal" and you can already get 16c/32t. There's little to no performance benefit from that many cores in games but, if you can take advantage of those extra cores for physics, the GPU is less affected by the performance penalty associated with doing those computations on the GPU.
 
Back then, you had mostly dual- or quad-core CPUs with no extra threads: the penalty for using it would significantly impact the game's performance, which is why a dedicated card was required. Nvidia integrated PhysX into their GPUs, but that still had a significant impact on performance when used, though orders of magnitude less than via CPU.

Now, 8c/16t is "normal" and you can already get 16c/32t. There's little to no performance benefit from that many cores in games but, if you can take advantage of those extra cores for physics, the GPU is less affected by the performance penalty associated with doing those computations on the GPU.
yes, but you've got GPUs with absolutely ridiculous compute power too. why spend extra on an 8c/16t when a 6c/6t is plenty and your gpu packs so much power? how much does a 5700xt/2070 super pack? 8-9 tflops? probably 10 overclocked. And both can do fp+int or fp16. Your rdna2 console gpu will probably be close to that too.
 
yes, but you've got GPUs with absolutely ridiculous compute power too. why spend extra on an 8c/16t when a 6c/6t is plenty and your gpu packs so much power?

Said compute power, when utilized for physics, can have the negative effect of introducing higher frame times. If you can offload that portion of the game's computations to the CPU and its unused cores / threads, that helps, no?
So long as it doesn't affect frame times more than what's currently available via GPUs, it's a viable alternative, IMO.
 
Said compute power, when utilized for physics, can have the negative effect of introducing higher frame times. If you can offload that portion of the game's computations to the CPU and its unused cores / threads, that helps, no?
So long as it doesn't affect frame times more than what's currently available via GPUs, it's a viable alternative, IMO.
it is an alternative, but I'd rather max out my budget on the gpu and keep the cpu a good value option rather than go buy 8c/16t just cause it's there.
the 3700x is nice as far as cost per core, but 8c/16t is not even close to being fully utilized. I never spent as much on any of my i7s as the 3700x costs, and I think most pc gamers don't intend to either. I never even wanted an i7, but 2015 came, I got a 144hz display, games got multithreaded and there was no other option than to get a 4790k. seriously, whatever utility software most of us home/gaming rig owners run does well on a 9400f/3500x or even ryzen 3/core i3. it's for gaming that we buy the CPU.


but seriously, @btarunr, can we get at least a video? people laughed at rtx demos. I guess screenshots are preferred now. for physics.
 
Last edited:
The video @silentbogo posted has demos of the FEMFX Deformable Physics Library at 3:52.

 
Low quality post by TheoneandonlyMrK
no, because it's better.
keep the invectives to yourself.
Keep your shite opinion to yourself. Re-read the OP: it's not doing the same physics as PhysX, nuanced but different.
You can't possibly have tried or used both either, so subjective bullshit statements like "physx is better" are just an opinion backed up by f all.

And in an AMD PR piece you're coming off as a troll, or a butthurt pc owner who doesn't like PCs progressing or, as importantly, games progressing beyond his own rig.

Or do you have some facts to add that we don't know about this new technique?
 
Last edited:
no, because it's better.


So a technology that used an ASIC is better when it delivers lower performance on generic hardware that is already being fully utilized for its primary function? Tell me more about how going slower wins the race... Games rarely use more than 6 cores; we have fully utilized GPU hardware and underutilized CPU cores, but somehow they shouldn't be used?

I guess that's what happens when you buy hype.
 
Games rarely use more than 6 cores (...) underutilized CPU cores
No. Can't be more wrong.
take a game that uses some sort of cpu physics, bf5 as a good example, and see what happens to cpu loads during an explosion.

what we have is gpu architectures that pack more and more compute power into smaller and smaller power envelopes.

I guess that's what happens when you buy hype.
Exactly, like recommending buying 8c/16t workstation cpus for gaming cause of physics.

you got $700 to spend? get a $200 cpu and a $500 gpu instead of packing a $350 cpu in there.
 
No. Can't be more wrong.
take a game that uses some sort of cpu physics, bf5 as a good example, and see what happens to cpu loads during an explosion.

what we have is gpu architectures that pack more and more compute power into smaller and smaller power envelopes.


Exactly, like recommending buying 8c/16t workstation cpus for gaming cause of physics.


By that idea we should still all have single core 256MB machines.

Also, a lot of the PhysX libraries aren't real time; most of the "GPU" work was precooked and prerendered, meaning any GPU could render it, or any CPU could.

Out-of-order at 4 GHz is better than out-of-order on a GPU at 2 GHz; that's just how silicon design and cost work. And yes, I guess if I have the choice of a CPU with 20 cores that's faster and costs the same as a competitive CPU with 4, I will buy it.
 
By that idea we should still all have single core 256MB machines.
no, but that's your opinion and I'll defend your right to voice it.

let's wait and see how this thing turns out.
 
yes, but you've got GPUs with absolutely ridiculous compute power too. why spend extra on an 8c/16t when a 6c/6t is plenty and your gpu packs so much power? how much does a 5700xt/2070 super pack? 8-9 tflops? probably 10 overclocked. And both can do fp+int or fp16. Your rdna2 console gpu will probably be close to that too.

Maybe now with int capable GPUs, some new doors will open?

It's more wishful thinking than anything, mind; I am still baffled we're exploring RT while proper physics is still in its infancy after so many years.

But the more likely route is that CPUs will simply keep gaining cores, and once the mainstream has come up to 8c (we're closing fast) a CPU library becomes very useful. AMD's timing here is quite right, and it will further enforce their core/thread advantage vs Intel too. It's probably better too; we don't need another PhysX with ditto adoption.
 
Maybe now with int capable GPUs, some new doors will open?
It's more wishful thinking than anything, mind; I am still baffled we're exploring RT while proper physics is still in its infancy after so many years.
yup, you get a game that looks beautiful and then the physics look like crap.

as for the rt, since what I wrote above very much relates to shadows, I'm glad rt came along. we're wasting resources on incredibly accurate and sharp shadows, while the goal should be somewhere else entirely: smooth, life-like and dynamic.

look at reflections too. SSR looks like crap in many cases. want high quality ssr reflections? in rdr2 they perfected it at the cost of a 40% performance hit. ridiculous, might as well get the rtx option, it would run the same and look better.
 
It's 2019 - of course it's called FEMFX ;)
 
yup, you get a game that looks beautiful and then the physics look like crap.

as for the rt, since what I wrote above very much relates to shadows, I'm glad rt came along. we're wasting resources on incredibly accurate and sharp shadows, while the goal should be somewhere else entirely: smooth, life-like and dynamic.

look at reflections too. SSR looks like crap in many cases. want high quality ssr reflections? in rdr2 they perfected it at the cost of a 40% performance hit. ridiculous, might as well get the rtx option, it would run the same and look better.


True, I remember how crappy old games, and hell, even new ones are when you get stuck in places due to faulty game physics. Swings in GTA were deadly.

I think a combined approach could work: "precooked" tables and vector data, which can be handled easily on a CPU core, handed to the GPU for the Z-depth pass; lookup tables of reflectivity values while running the ray tracing; then use the rendered angle values for objects and store them as long as they're in frame, only updating the angle relative to the "user" to refresh the shadow and reflection map. It's going to take new hardware, and it's still computationally expensive, but so was AF for a long time, and then we found the right way to do it in hardware with almost no performance penalty.

Physics can do the same. It's all just math, and a lot of it, but hardware acceleration for other things is just data tables or actual physical transistors in the right pattern to match an algorithm.
 
True, I remember how crappy old games, and hell, even new ones are when you get stuck in places due to faulty game physics. Swings in GTA were deadly.

I think a combined approach could work: "precooked" tables and vector data, which can be handled easily on a CPU core, handed to the GPU for the Z-depth pass; lookup tables of reflectivity values while running the ray tracing; then use the rendered angle values for objects and store them as long as they're in frame, only updating the angle relative to the "user" to refresh the shadow and reflection map. It's going to take new hardware, and it's still computationally expensive, but so was AF for a long time, and then we found the right way to do it in hardware with almost no performance penalty.

Physics can do the same. It's all just math, and a lot of it, but hardware acceleration for other things is just data tables or actual physical transistors in the right pattern to match an algorithm.
I mean, we're getting so much compute power even with entry level hardware, and mid range has built-in asic accelerators.

apparently the people who were screaming that amd cards were superior in terms of compute performance conveniently forgot about it for the sake of arguing (not you)
 
One guy tried to bullshit about FEMFX:
"garbage, physx been doing this for a grip"

then one answered:
Wrong.
Nvidia FleX is doing this, but it is not in the default UE4. You have to install a fork made by Nvidia or make your own.
PhysX is only used to manage collisions between solid meshes. It does not allow you to use soft bodies out of the box.
 