
AMD Radeon RX 9070 XT Alleged Benchmark Leaks, Underwhelming Performance

SAM and ReBAR are the same thing?
Yes, but not quite; there are some minor specifics to SAM (SAM is basically ReBAR with some extra optimizations on top).
The way you said it made it sound like you could enable ReBAR, get 10~17% more performance, and then add SAM to get another 10~17% on top of that, which is not the case.
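For anyone curious what ReBAR/SAM actually changes from a program's point of view: with it enabled, the driver typically exposes the whole VRAM heap as host-visible instead of the old 256 MB BAR window, and SAM is AMD's tuned take on that same PCIe capability. Below is a rough, driver-dependent heuristic check via Vulkan. It is a sketch, not an official detection method, and the 256 MiB threshold is an assumption based on the classic BAR size:

```cpp
// Heuristic ReBAR/SAM check: with Resizable BAR enabled, drivers usually expose a
// DEVICE_LOCAL memory type that is also HOST_VISIBLE and whose heap spans (nearly)
// the whole VRAM instead of the classic 256 MiB window. Driver-specific behaviour,
// so treat the result as a hint only.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(gpu, &mem);

        // Find the largest heap backing a memory type that is both device-local
        // (VRAM) and host-visible (CPU-mappable).
        VkDeviceSize largestMappableVram = 0;
        for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
            VkMemoryPropertyFlags f = mem.memoryTypes[i].propertyFlags;
            if ((f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) &&
                (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
                VkDeviceSize heapSize = mem.memoryHeaps[mem.memoryTypes[i].heapIndex].size;
                if (heapSize > largestMappableVram) largestMappableVram = heapSize;
            }
        }
        // More than ~256 MiB of CPU-mappable VRAM usually means ReBAR/SAM is active.
        printf("%s: host-mappable VRAM heap = %llu MiB -> ReBAR likely %s\n",
               props.deviceName,
               (unsigned long long)(largestMappableVram >> 20),
               largestMappableVram > (256ull << 20) ? "ON" : "OFF");
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```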
 

Thanks for that. I read up on it.
 
I don't get why AMD would go this route; RT and upscaling are just Nvidia snake oil. Just call them out as such. The tech press not calling out what's in front of them is the other problem. Is pressure being applied?

The performance hit from RT for what amounts to slightly better shadows and reflections isn't worth it at all.

Upscaling is another step backwards, reducing image quality and introducing shimmering and other artifacting.

AMD shouldn't bother with either, and should say why. They'll never get anywhere copying Nvidia's sales tactics while doing it much worse.

Good for AMD then that you aren't leading it; you'd run it into the ground in 6 months lol

RT is the single biggest leap in photorealism in real-time rendering, EVER. Either you use AMD and can't enjoy the feature properly without FSR Performance, or you are simply trolling. It's here to stay. Games are starting to use RT features that can't even be disabled anymore. AMD is lagging behind sorely because of RT, upscaling, and the other bells and whistles. RDNA2 was very competitive in raster and often faster, and yet it still got beaten because the vast majority wants these new features; only a very tiny minority on the Internet moans about RT being a gimmick.

Upscaling has surpassed native in more and more ways ever since DLSS 2.0 launched; if you still get artifacting or shimmering, then you are an FSR user.

And anyone who calls it a gimmick has their brain shoved where the sun doesn't shine.

RT, upscaling, and AI are the future, and any company that does not implement better solutions will be left in the dust.
 

Don't agree at all. It's amazing to me how Nvidia has been able to take image quality backwards while imposing a huge performance penalty for barely noticeable reflection and lighting improvements.

Unfortunately it's very hard to do image quality comparisons between engines and different technologies. It's more subjective than fps... There are games from 5+ years ago that look and run better than what's coming out now, imo.

It's not a gimmick, it's snake oil that exists so Nvidia can sell more GPUs. It's as simple as this: if a four-year-old card can run every game at 4K 120 fps+, no one will have any reason to upgrade.

So they came up with RT and added upscaling to the mix.
 
Agreed. Real-time ray tracing is impressive tech, but it's not worth the performance trade-off to get more reflections and more realistic lighting.
And then there's the fact that upscaling reduces image quality, combined with games coming out now with reduced texture quality that looks like vaseline smeared all over the screen, in order to push the RT marketing even on those with low-end cards. Also, some games are coming out with RT on by default, or with no option to turn RT off, which makes AMD cards look worse than they really are. I think Nvidia realized they made a mistake with the GTX 10 series: those cards lasted for years without needing an upgrade, so now Nvidia sells software features, some of which are only available if you buy the latest GPU.
 
Do you have all your points saved in Notepad so you can just copy-paste them multiple times a day? Or do you use OneNote?
I see experience here. Which one is better for the job you describe, Notepad or OneNote?
I am only asking out of curiosity. I prefer enriching my ignore list to doing this as a part-time job.

Agreed. Real-time ray tracing is impressive tech, but it's not worth the performance trade-off to get more reflections and more realistic lighting.
RT is going to become a necessity really soon, and even people who insist on raster will be forced to reconsider. The way to do it is the same as with hardware PhysX. When hardware PhysX came out, programmers suddenly forgot how to program physics effects on CPUs. You either had physics effects with hardware PhysX, or almost nothing at all without it.
Alice was one of my favorite games back then, and I flatly refused to play it without PhysX at High. So a second mid-range Nvidia GPU was used in combination with an AMD primary card to play the game. Of course it was also necessary to crack Nvidia's driver lock for this to work. Hardware PhysX failed to gain traction in the end, so now we again enjoy high-quality physics effects without needing to pay Nvidia for what was completely free in the past.

Now about RT. In that latest video from Tim of HUB, where he spotted full-screen noise when enabling RT (maybe they rejected his application to work at Nvidia? Strange that no other site, including TPU, investigated these findings), there was at least one comparison in Indiana Jones that completely shocked me. In both images RT was on; the difference was that one was RT NORMAL and the other RT FULL.
[Attached screenshot: Indiana Jones comparison, RT NORMAL vs. RT FULL]

The difference in lighting is shocking, to say the least. On NORMAL it is as if the graphics engine is malfunctioning, not working as expected, or as if there is a huge bug somewhere in the code. In the past, games that were targeting both audiences who wanted RT and audiences who were happier with high-fps raster graphics had lighting differences between RT and raster modes, but you had to pause the image and start scrutinizing the lighting to really see them.
Here, in a game where RT is the only option, the difference can be spotted with your eyes closed. Lighting is totally broken when leaving RT on the NORMAL setting, the same as having PhysX on Low 15 years ago.

I think Nvidia will try to push programmers into using libraries where the gamer only gets correct lighting with FULL RT. With Medium, Normal, Low, or whatever other option the gamer chooses in settings, lighting will be completely broken. That will drive people to start paying four-digit prices just to get the lighting they enjoy for free today.

Maybe TPU would like to investigate it, .....or maybe not?
 
I have a problem with that.
Every single game I've seen with CPU PhysX has the same problem on DX12: a stutter fest from synchronization problems. It's all synced on the same main thread. GPUs are still many magnitudes faster than CPUs at PhysX; my 2080 Ti is literally 4 times faster than my 5800X3D in PhysX. The problem is that Nvidia has now taken GPU PhysX away from the 4000 series. No one noticed until they tried to use it in an older game that can enable GPU PhysX, and it runs like crap on an RTX 4090.
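For context on the main-thread point: the PhysX SDK's own frame pattern is to kick the step off, keep the main thread busy with other work, and only then block for the results. A minimal sketch of that pattern is below (it assumes a PxScene created elsewhere). If a game instead calls these back to back on the render thread, every slow physics step stalls the whole frame, which would look exactly like the stutter described above:

```cpp
#include <PxPhysicsAPI.h>

// Sketch of the intended overlap between the physics step and the rest of the frame.
void gameFrame(physx::PxScene& scene, float dt) {
    scene.simulate(dt);        // kicks the physics step off onto worker threads
    // ... main thread continues: input, animation, building draw calls ...
    scene.fetchResults(true);  // 'true' blocks here until the step has finished
}
```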
 
My 2080 Ti is literally 4 times faster than my 5800X3D in PhysX.
Because the software version was always crap: slow, single-threaded, and using MMX, at least in its first versions.
GPU PhysX that can be enabled & runs like crap on an RTX 4090.
Didn't know that. Why do it? Maybe they didn't want to spend any more time supporting it? The only option then is a secondary GPU in a second PCIe x16 slot. Maybe even a 1-lane PCIe slot would be enough for PhysX.
 
Did you not read that my RTX 2080 is 4 times faster than a CPU at PhysX? A GPU on PCI Express x1 would have no hope of coming close to that.
 
Nvidia doesn't dictate anything. The company will have problems once TSMC stops releasing new processes, which will inevitably happen because Moore's law has been dead for a while already.
Nvidia relies on the older 6 nm and 4 nm nodes, and this is a disaster for them.
First of all...

Treating Moore's Law as anything more than a tactic to convince investors is simply uninformed.

And second, the slowed rate of transistor density doubling is something that affects ALL COMPANIES that design chip architectures. If anything, NVIDIA has traditionally been one of the companies that has not relied on a superior process node to make its products competitive.

And finally, let's remember that AMD, Intel, and NVIDIA are all American companies. They aren't each other's enemies, and none of them is any less of a greedy corporation than the others.
 
Once again Nvidia removes a feature and does not tell anyone.
 
Dude, PhysX has been open source for a decade. They didn’t take anything away, they gave it to everyone.



From my research it's only "open source" for the CPU & nothing else.
Once again Nvidia removes a feature and does not tell anyone.

I noticed this when the RTX 4090 got reviewed; I even mentioned it to W1zzard in relation to his GPU-Z. Seems more like no one cared to say anything about it.
 

GPU support was removed around the same time. When CPUs became fast enough to run physics, there was no point in running it on the GPU and incurring all the communication overhead.

Edit: I looked it up; Nvidia stopped GPU development of PhysX in 2018 because nobody was using it anymore.
 
From my research it's only "open source" for the CPU & nothing else. Once again Nvidia removes a feature and does not tell anyone.
No, that repo has the GPU code as well (reliant on CUDA), although I don't think anyone uses that.
O3DE's fork also has support for GPU via CUDA.
FWIW, there are many other physics engines nowadays that are really good, and doing those physics calculations on the CPU has become way better given the advancements in SIMD and whatnot.
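To make the CPU/GPU split concrete, this is roughly where it lives when you build against the open PhysX SDK (4.x/5.x headers): the scene always gets a CPU dispatcher, and the GPU rigid-body path is an opt-in that needs a CUDA context manager, which is why the repo's GPU bits still require an Nvidia card. A sketch only; treat the exact flags and header paths as assumptions for your SDK version:

```cpp
#include <PxPhysicsAPI.h>
#include <gpu/PxGpu.h>   // CUDA context manager; only relevant when GPU PhysX is built in
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

PxScene* createScene(bool tryGpu) {
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc desc(physics->getTolerancesScale());
    desc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    desc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);        // CPU path: 4 worker threads
    desc.filterShader  = PxDefaultSimulationFilterShader;

    if (tryGpu) {
        // GPU path is CUDA-only, hence Nvidia-only even in the open-source repo.
        PxCudaContextManagerDesc cudaDesc;
        PxCudaContextManager* cuda = PxCreateCudaContextManager(*foundation, cudaDesc);
        if (cuda && cuda->contextIsValid()) {
            desc.cudaContextManager = cuda;
            desc.flags            |= PxSceneFlag::eENABLE_GPU_DYNAMICS;
            desc.broadPhaseType    = PxBroadPhaseType::eGPU;
        }
        // If no valid CUDA context exists, the scene silently falls back to the CPU dispatcher.
    }
    return physics->createScene(desc);
}
```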
 
I noticed that in the 4080 and 4090 reviews the PhysX box is missing, but in my case it is ticked.
What am I missing?

[Attachment: 4080.gif, screenshot showing the PhysX box ticked]
 
There is more overhead on the CPU compared to the GPU, so that statement contradicts the actual facts of how it works in games when it's applied, when it is still locked to the main thread for anything done in gaming and synced to that same thread.

That edit is funny, because I can play a bunch of games where PhysX will show up when I enable it to show up in the Nvidia Control Panel, and I have games from about 2020 onwards that show it.


If it's reliant on CUDA, then it won't work on anything but an Nvidia GPU.
So why bother calling it open source when it's not?

That's why I said it only works on the "CPU" for the open-source part, as that isn't limited to CUDA.
 
It is open source; you're free to use it anywhere you want and modify it to your will.
You're even free to port the CUDA-specific parts to Vulkan, OpenCL, or whatever you may want.

Many machine learning frameworks are also open source but mostly work on CUDA on the GPU side, because that's what most users use and it's the best API to work with. No one stops calling those "open source" because of that.
 
Did you not read that my RTX 2080 is 4 times faster than a CPU at PhysX? A GPU on PCI Express x1 would have no hope of coming close to that.
Did you not read what I wrote? I gave you an explanation of why the CPU version is so slow. And please, first try the 2080 for physics, then come and tell me it is slow when you limit the PCIe lanes. It's not processing graphics, and it comes with a huge amount of VRAM for physics alone. I haven't checked it, but I doubt it becomes so slow that you have to avoid it completely. Just run a couple of benchmarks and see what happens. It wouldn't harm you.
 
My 5800X3D is still 4 times slower than a single RTX 2080 Ti at PhysX calculations.
 

How does it affect real world game performance? Which gives better performance, running a current version of PhysX on a CPU, or a 3.x (or earlier) version on a GPU?
 

CPU-to-GPU communication goes over the incredibly slow PCIe lanes. Putting physics on the GPU only made sense if it had zero effect on the gameplay or game engine (e.g. emulating wind, cloth physics, or hair movement). But if you wanted those physics changes to actually affect the engine, the data needed to traverse the PCIe lanes back to CPU-land.

The GPU might be faster than the CPU, but PCIe is slower than both of them. GPUs are only fast because all the graphics data lives on the GPU side and never leaves. If data needs to travel back over PCIe, it slows down to the point of not being worth it at all.

CPUs were never that slow at physics (even if GPUs are better at it). It's a matter of PCIe more than anything else.
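A toy way to see the readback cost being described: step a dummy particle buffer on the GPU and compare leaving the results in device memory (the "effects-only" case) against copying them back to the host every frame so CPU-side game logic could react to them. The kernel is deliberately trivial and the particle count is made up; the point is the extra per-frame cudaMemcpy over PCIe, not the physics:

```cpp
// Compile with nvcc. Case A keeps results on the GPU; case B adds a per-frame
// device-to-host copy over PCIe, which is the round trip the post above describes.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void step(float3* pos, const float3* vel, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}

int main() {
    const int n = 1 << 20;        // ~1M particles (arbitrary)
    const int frames = 240;
    float3 *dPos, *dVel;
    cudaMalloc(&dPos, n * sizeof(float3));
    cudaMalloc(&dVel, n * sizeof(float3));
    cudaMemset(dPos, 0, n * sizeof(float3));
    cudaMemset(dVel, 0, n * sizeof(float3));
    std::vector<float3> hPos(n);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    // Case A: results stay on the GPU (the renderer reads them directly).
    cudaEventRecord(t0);
    for (int f = 0; f < frames; ++f)
        step<<<(n + 255) / 256, 256>>>(dPos, dVel, n, 1.0f / 60.0f);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float msGpuOnly = 0;
    cudaEventElapsedTime(&msGpuOnly, t0, t1);

    // Case B: same kernels, plus a device-to-host copy every frame so the
    // CPU-side game engine could see the positions.
    cudaEventRecord(t0);
    for (int f = 0; f < frames; ++f) {
        step<<<(n + 255) / 256, 256>>>(dPos, dVel, n, 1.0f / 60.0f);
        cudaMemcpy(hPos.data(), dPos, n * sizeof(float3), cudaMemcpyDeviceToHost);
    }
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float msWithReadback = 0;
    cudaEventElapsedTime(&msWithReadback, t0, t1);

    printf("GPU-only: %.1f ms, with per-frame PCIe readback: %.1f ms\n",
           msGpuOnly, msWithReadback);
    cudaFree(dPos);
    cudaFree(dVel);
    return 0;
}
```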
 
Except having PhysX on the GPU netted big gains in the games that had PhysX. Adding in a dedicated smaller GPU did the same. So I don't think your point stands, given that it was tested extensively back then. The current issues are likely due to a software bug.
 

My point is that the games where PhysX was faster carefully crafted the physics to be doable on graphics only.

E.g. waving flags, cloth, or hair that moves on the GPU without the CPU ever being told those positions.

In other words, the game engine was blind to the physics. If you calculate physics on the GPU, you need to leave it on the GPU.
 