Look at the R24 MT score. If someone needs that much MT without a Threadripper, and can cool it, then this is the chip. So not exactly the worst, is it?
If you NEED that MT performance, why don't you have a Threadripper?
NEEDING that much performance implies that time = money for your renders, and even a 24-core Threadripper will pay for itself in very little time.
IMO, CPU rendering is something that's easily distributed, because it's only animations that really take time. The time it takes to render a single frame is never so long that your workflow is being delayed by your render previews - but the time it takes to render 5 minutes of 60fps 4K video can be hours or even days on a single processor. For a long time now, we've been farming CPU renders out to a dedicated renderfarm consisting of dozens of cheap boxen with the most cost-effective CPU in them. Most of them are still 5950X/64GB boxen, though we have a couple of 8-boxen render nodes with Zen 3 Threadrippers for extremely heavy 50-megapixel final static renders that go on A0 posters and the like.
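To make that concrete, here's the back-of-the-envelope math. Animation renders parallelise per-frame, so farm size divides wall-clock time almost linearly. The per-frame render time below is a made-up assumption for illustration, not a benchmark:

```python
# 5 minutes of 60fps footage = a lot of independent frames.
frames = 5 * 60 * 60          # 5 min * 60 s * 60 fps = 18,000 frames
minutes_per_frame = 10        # assumed per-frame 4K render time (hypothetical)

def wall_clock_hours(nodes: int) -> float:
    """Total wall-clock hours if frames are farmed out evenly across nodes."""
    return frames * minutes_per_frame / nodes / 60

print(wall_clock_hours(1))    # single box: 3000.0 hours (~125 days)
print(wall_clock_hours(40))   # 40-node farm: 75.0 hours
```

The single-frame time (minutes) never blocks an artist's preview workflow, but the aggregate total is what kills you on one machine - which is exactly why frame-parallel farming works so well here.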
For the $10,000 asking price of a 96-core Threadripper 7995WX, likely to go into a $2500 platform, you can buy 10+ complete systems with 16-core 7950X processors in them.
Yes, you have to pay for 10+ motherboards, RAM kits, cases, and power supplies, but the rendering software usually grants you 10 render node licenses per workstation seat at zero or minimal cost, distributed rendering management is free (eg Backburner) or affordable (Deadline/Nuke etc), and the farm is more granular. If you can only afford three Threadripper systems, then only three people can render jobs simultaneously - whilst you could have, say, 40x 7950X systems in a renderfarm for the same money and allocate boxes to anywhere from 1 to 40 people at the same time, dynamically, with spare capacity going back to the pool as it finishes work for those additional people.
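Rough arithmetic on those figures - the 7950X street price and the per-node cost of board/RAM/case/PSU below are my assumptions for illustration, not quotes:

```python
# Cost of one big workstation vs a farm of cheap nodes, using the
# post's $10,000 CPU + $2,500 platform figures.
tr_system = 10_000 + 2_500    # 7995WX CPU + workstation platform

node_cpu = 550                # assumed 7950X street price (hypothetical)
node_parts = 450              # assumed board + 64GB RAM + case + PSU (hypothetical)
node_cost = node_cpu + node_parts

nodes_for_same_money = tr_system // node_cost
total_cores = nodes_for_same_money * 16

print(nodes_for_same_money)   # 12 complete systems
print(total_cores)            # 192 cores vs the Threadripper's 96
```

Even with generous per-node overhead, the same money buys roughly double the core count - and, per the licensing point above, the licences for those extra nodes typically come nearly free with the workstation seat.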
There's no right or wrong way to CPU render, but looking at costs, and looking at the number of tools in the market (both first-party and third-party) to facilitate cloud rendering and local distributed rendering, it's clear that there's a huge market for CPU rendering to lots of smaller, less powerful systems.
As for simulations and ML/LLM stuff - the Threadrippers are definitely worth it for that, solely for the additional RAM channels, but we rarely use them for it, since in my primary office we run virtualised servers like most people do - specifically high-availability setups, where spare capacity is reserved to keep serving uninterrupted through downtime, hypervisor updates, hardware failure etc. When your server pool has several hundred cores and terabytes of RAM, you can afford to allocate the unused reserve to compute VMs that users can book time on. In the exceedingly rare event of a failure that requires that reserve, the compute VMs get suspended and the servers fail over to the reserve hypervisors. Once the server emergency is resolved, you un-suspend the compute VMs and they pick up where they left off. In 4+ years of operating like this across two different AEC companies, suspending user VMs has happened zero times unexpectedly, and only ever occurred as part of planned, scheduled maintenance or failover testing.
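The reserve policy above boils down to a tiny state machine. This is a minimal sketch of the idea with hypothetical class and method names - not any real hypervisor API:

```python
# Hypothetical model of "spare failover capacity doubles as bookable compute".
class ComputeVM:
    """A user-bookable VM living on the HA reserve capacity."""
    def __init__(self, name: str):
        self.name = name
        self.state = "running"

class ReservePool:
    """Spare hypervisor capacity: compute by default, failover target on demand."""
    def __init__(self):
        self.vms: list[ComputeVM] = []

    def book(self, name: str) -> ComputeVM:
        # A user books time on the otherwise-idle reserve.
        vm = ComputeVM(name)
        self.vms.append(vm)
        return vm

    def failover(self):
        # Server emergency: suspend compute VMs so production
        # workloads can fail over onto the reclaimed reserve.
        for vm in self.vms:
            vm.state = "suspended"

    def recover(self):
        # Emergency resolved: compute VMs pick up where they left off.
        for vm in self.vms:
            vm.state = "running"

pool = ReservePool()
sim = pool.book("cfd-sim-01")
pool.failover()
print(sim.state)   # suspended
pool.recover()
print(sim.state)   # running
```

The point of the design is that the reserve is never truly idle: the suspend/resume cycle costs the compute users nothing but time, so the HA guarantee and the "free" compute capacity coexist.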