
AMD Announces FidelityFX Super Resolution 3 (FSR 3) Fluid Motion Rivaling DLSS 3, Broad Hardware Support

That's proof that it works better on Ada. It's not proof that it doesn't work on Turing and Ampere.
If Nvidia could enable RT on cards that have zero dedicated RT hardware (GTX 1000 series), then this shouldn't be a problem, either.

My guess is they're in a damned-if-they-do, damned-if-they-don't situation and decided against releasing it on Turing/Ampere.

Let's say they did release it and it performs like crap and has a ton of artifacts; people will say they gimped it on purpose, so it's basically the same situation they're in now.

Whenever I buy a GPU, I buy it for the performance it gives me that day. I think most people just buy whatever performs best within their budget. Anyone buying a GPU because the box is red or green when there are better options at the same price point is only doing themselves a disservice.
 
I buy it for the RGB :pimp:
Anyway, future-proofing or gaining more features over time is hardly something potential buyers shouldn't pursue! This is also why a lot of phone makers these days promise 3-4 years of major OS updates, even on mid-range phones. Which is to say that expecting/hoping for more, or a lot more, out of your GPU (or CPU) down the line is not unfair as far as I'm concerned.
 
In my GTX 970 days I had the daily black screen with the "Display Driver Stopped Responding and Has Recovered" error for years.
Don't talk to me about Nvidia's perfect driver stability. Just check their driver & hotfix release changelogs.
 
That's proof that it works better on Ada. It's not proof that it doesn't work on Turing and Ampere.
If Nvidia could enable RT on cards that have zero dedicated RT hardware (GTX 1000 series), then this shouldn't be a problem, either.

I think the difference is that one is adding a new feature, RT. The other is not, and is instead adding perceived smoothness, FrameGen.

RT sucks the frames out of your card no matter which one you use, so having frame rates cut to 20-25% on a GTX 10-series card was merely 2-3x worse than on a Turing card, but it still allowed the user to "see what they were missing." It's a decent advertising gimmick.

Frame Generation exists specifically to make more frames to increase perceived smoothness. If adding FrameGen to Turing and Ampere ends up adding few or no additional frames, then you are getting nothing, yet taking a hit on latency in the process.

One (RT) adds something while the other (FG) adds nothing on "unsupported" cards, which is why RT got added to those cards and not FG.
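A rough way to put the latency point into numbers (a simplified sketch; the frame rates below are assumptions, not measurements from either vendor):

```python
# Simplified model: interpolation-based frame generation has to hold back the
# newest real frame until the frame generated between it and the previous one
# has been displayed, so input-to-photon latency grows by roughly one real
# frame time even though the displayed frame rate doubles.
base_fps = 60                          # assumed real render rate
real_frame_time_ms = 1000 / base_fps   # ~16.7 ms per real frame
displayed_fps = base_fps * 2           # one generated frame per real frame
added_latency_ms = real_frame_time_ms  # roughly one extra real frame of delay
print(f"~{displayed_fps} fps displayed, ~{added_latency_ms:.1f} ms added latency")
```

If an older card can only produce those in-between frames slowly, the displayed-fps gain shrinks while the held-back-frame latency remains, which is exactly the "nothing gained, latency lost" scenario described above.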
 
I think the difference is that one is adding a new feature, RT. The other is not, and is instead adding perceived smoothness, FrameGen.

RT sucks the frames out of your card no matter which one you use, so having frame rates cut to 20-25% on a GTX 10-series card was merely 2-3x worse than on a Turing card, but it still allowed the user to "see what they were missing." It's a decent advertising gimmick.

Frame Generation exists specifically to make more frames to increase perceived smoothness. If adding FrameGen to Turing and Ampere ends up adding few or no additional frames, then you are getting nothing, yet taking a hit on latency in the process.

One (RT) adds something while the other (FG) adds nothing on "unsupported" cards, which is why RT got added to those cards and not FG.

The argument now is that if AMD, despite being nearly a year late, can get it working with asynchronous compute, surely Nvidia could too. We'll have to wait until the technology is actually out before critiquing how close it actually comes to the competing technology. So far, according to Digital Foundry's hands-off demonstration, it looks promising.
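For anyone wondering what a "generated" frame actually is, here is a deliberately naive sketch. Real FSR 3 and DLSS 3 interpolate using motion vectors / optical flow (with FSR 3 reportedly scheduling the work via async compute) rather than a plain blend; the function and frame sizes below are made up for illustration only.

```python
import numpy as np

def naive_interpolated_frame(prev_frame: np.ndarray, next_frame: np.ndarray,
                             t: float = 0.5) -> np.ndarray:
    """Blend two rendered frames; t=0.5 gives the midpoint 'generated' frame."""
    blended = (1.0 - t) * prev_frame.astype(np.float32) + t * next_frame.astype(np.float32)
    return blended.astype(prev_frame.dtype)

# Hypothetical 1080p RGB frames: an all-black one and an all-white one.
prev_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
next_frame = np.full((1080, 1920, 3), 255, dtype=np.uint8)
mid = naive_interpolated_frame(prev_frame, next_frame)
print(mid[0, 0])  # -> [127 127 127], shown between the two real frames
```

A plain blend like this smears anything that moves, which is why the shipping implementations lean on motion data (and, on Ada, dedicated optical flow hardware), and why the quality question on older cards stays open until someone actually tests it.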
 
My guess is they're in a damned-if-they-do, damned-if-they-don't situation and decided against releasing it on Turing/Ampere.

Let's say they did release it and it performs like crap and has a ton of artifacts; people will say they gimped it on purpose, so it's basically the same situation they're in now.

Whenever I buy a GPU, I buy it for the performance it gives me that day. I think most people just buy whatever performs best within their budget. Anyone buying a GPU because the box is red or green when there are better options at the same price point is only doing themselves a disservice.
I think the difference is that one is adding a new feature, RT. The other is not, and is instead adding perceived smoothness, FrameGen.

RT sucks the frames out of your card no matter which one you use, so having frame rates cut to 20-25% on a GTX 10-series card was merely 2-3x worse than on a Turing card, but it still allowed the user to "see what they were missing." It's a decent advertising gimmick.

Frame Generation exists specifically to make more frames to increase perceived smoothness. If adding FrameGen to Turing and Ampere ends up adding few or no additional frames, then you are getting nothing, yet taking a hit on latency in the process.

One (RT) adds something while the other (FG) adds nothing on "unsupported" cards, which is why RT got added to those cards and not FG.
Maybe, maybe not. We won't know unless they do decide to roll it out for Turing and Ampere in the future.

Personally, I don't like all this "new tech" Nvidia introduces with every generation. One may see it as something new and exciting, but to me, it's just gimmicks to make people spend money on an upgrade even if they wouldn't have to otherwise. I'm more of an advocate of unified, hardware-agnostic standards, and a level playing field where the only major qualities of a graphics card are its computing power and price. If Nvidia is really a software company, as some claim, then they should develop software that runs on everything instead of hardware dedicated to taking away people's choice when buying a GPU.
 
That's proof that it works better on Ada. It's not proof that it doesn't work on Turing and Ampere.
If Nvidia could enable RT on cards that have zero dedicated RT hardware (GTX 1000 series), then this shouldn't be a problem, either.
But nvidia themselves said that yes, it can work on older hardware. It will just look like crap.
 
But nvidia themselves said that yes, it can work on older hardware. It will just look like crap.
I'd rather judge that for myself than believe Nvidia without any evidence presented.
 
Maybe, maybe not. We won't know unless they do decide to roll it out for Turing and Ampere in the future.

Personally, I don't like all this "new tech" Nvidia introduces with every generation. One may see it as something new and exciting, but to me, it's just gimmicks to make people spend money on an upgrade even if they wouldn't have to otherwise. I'm more of an advocate of unified, hardware-agnostic standards, and a level playing field where the only major qualities of a graphics card are its computing power and price. If Nvidia is really a software company, as some claim, then they should develop software that runs on everything instead of hardware dedicated to taking away people's choice when buying a GPU.
You need to imagine how someone who isn't a big tech nerd might react to a lesser implementation of DLSS 3: they will toggle the setting out of curiosity, see that it looks/performs like crap, and base their whole opinion of the tech on that personal experience. They are not going to research why DLSS 3 performs best starting from a specific generation because the hardware used for FG is more powerful there. Letting customers try out everything they want is a double-edged sword: if it doesn't work, they will still expect you to fix it somehow; if you don't fix it, you hurt your brand image and the product will be deemed crap. Nvidia isn't the first brand that would rather not poke that bear. The more mainstream something is meant to be, the less control you'll have over it, because tech support doesn't want to get swarmed by people who don't understand what "not officially supported" means :D

Sometimes the industry needs a push. Vulkan was born from Mantle, and everything that TressFX and GameWorks did is now a standard feature in game engines.
 
Sometimes the industry needs a push. Vulkan was born from Mantle, and everything that TressFX and GameWorks did is now a standard feature in game engines.
Yes, but they were all hardware-agnostic, just like they are now. The industry needs a push, but not by X company to buy only X company's cards.

As for the longer part of your post: I guess I see the point. It's just not how I would prefer it. Nvidia could at least release some footage of a Turing GPU running FG like crap.
 
Yes, but they were all hardware-agnostic, just like they are now. The industry needs a push, but not by X company to buy only X company's cards.
Honestly, the impression I'm getting from the GPU market right now is that there are growing pains around machine-learning hardware. For a while Nvidia was alone in having it, Intel followed them, but Intel's ML hardware isn't software-compatible with Nvidia's, and AMD doesn't seem to think that dev-accessible AI on a consumer GPU is the future, and it's MIA on the consoles.
- So Nvidia wants to power everything with machine learning.
- Intel wants to do it as well, but they still propose an agnostic solution because apparently they can't make XeSS work with Nvidia's tensor cores.
- AMD just wants to use the basic GPU hardware, since that seems to be the only workable agnostic solution at the moment.
- DirectML is supposed to be hardware-agnostic, but no one uses it for upscaling and frame generation? (genuine question; a minimal sketch follows this post)

Upscaling/FG seems to suffer from a difference of philosophy about the means to achieve it, and from the fact that each company seems unable to make use of the other's specialised hardware. So there's something to clean up and standardise there... but I think Microsoft would need to make a DirectX 12_3 ("DirectX Ultimate ML") where every vendor has a guideline about what the ML hardware needs to be able to do to be compliant.
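On the DirectML question above: the hardware-agnostic path does exist today, most easily through ONNX Runtime's DirectML execution provider rather than raw DirectML calls. A minimal sketch, assuming Windows with the onnxruntime-directml package installed; "model.onnx" and the input shape are placeholders, not any real upscaler:

```python
import numpy as np
import onnxruntime as ort

# "DmlExecutionProvider" appears in this list when DirectML is usable.
print(ort.get_available_providers())

# Run a placeholder model on DirectML, falling back to the CPU provider.
session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
dummy_input = np.random.rand(1, 3, 540, 960).astype(np.float32)  # e.g. a low-res frame
outputs = session.run(None, {session.get_inputs()[0].name: dummy_input})
```

So the plumbing is vendor-neutral; whether it would be fast enough for per-frame upscaling or frame generation on every vendor's hardware is exactly the open question raised above.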
 
3dfx killed itself. Stop making up stupid bullshit to justify your lack of actual argument.

Ah yes the good old "I don't actually have an argument so I'm going to bring up everything that I think NVIDIA has ever done wrong". I could do the same for ATI/AMD, but I won't, because I'm smart enough to know that that's not an argument, it's just stupid whining by a butthurt AMD fanboy.
You asked for a history lesson, don't be annoyed you got one.
 
Are you not happy with your 13900KS/4090 setup? The inevitable and excessive crying never ends for team green/blue. :D
 
That's proof that it works better on Ada. It's not proof that it doesn't work on Turing and Ampere.
Of course it isn't. But you keep claiming that it will work on Turing and Ampere, also without any proof. Do you see your hypocrisy?

You asked for a history lesson, don't be annoyed you got one.
I didn't ask for a history lesson, and the stupid bullshit you made up in a pathetic attempt to support your not-argument wasn't one. Unless it's a history of your inability to make a coherent argument.
 
The way AMD presented it, it seems too good to be true.
 
I didn't ask for a history lesson, and the stupid bullshit you made up in a pathetic attempt to support your not-argument wasn't one. Unless it's a history of your inability to make a coherent argument.
Since you forgot,

You asked for a history of anti-consumer behavior.
And frankly I am not sure why you are in denial of it, both the historical facts and having asked for it lol.
 
Of course it isn't. But you keep claiming that it will work on Turing and Ampere, also without any proof. Do you see your hypocrisy?
No - I keep claiming that we have proof that Turing and Ampere have the necessary hardware, and that we have no proof that it doesn't work. Nvidia is kindly asking us to believe whatever they say at face value. If they provided a video to compare how it runs across Turing/Ampere/Ada, so we could see with our own eyes why they chose to only enable it on Ada, it would be the difference of night and day.

Edit: Here's a little info morsel on the topic:

Honestly, the impression I'm getting from the GPU market right now is that there are growing pains around machine-learning hardware. For a while Nvidia was alone in having it, Intel followed them, but Intel's ML hardware isn't software-compatible with Nvidia's, and AMD doesn't seem to think that dev-accessible AI on a consumer GPU is the future, and it's MIA on the consoles.
- So Nvidia wants to power everything with machine learning.
- Intel wants to do it as well, but they still propose an agnostic solution because apparently they can't make XeSS work with Nvidia's tensor cores.
- AMD just wants to use the basic GPU hardware, since that seems to be the only workable agnostic solution at the moment.
- DirectML is supposed to be hardware-agnostic, but no one uses it for upscaling and frame generation? (genuine question; a minimal sketch follows this post)

Upscaling/FG seems to suffer from a difference of philosophy about the means to achieve it, and from the fact that each company seems unable to make use of the other's specialised hardware. So there's something to clean up and standardise there... but I think Microsoft would need to make a DirectX 12_3 ("DirectX Ultimate ML") where every vendor has a guideline about what the ML hardware needs to be able to do to be compliant.
That makes perfect sense. And I agree - standardisation is needed.
 
No - I keep claiming that we have proof that Turing and Ampere have the necessary hardware, and that we have no proof that it doesn't work. Nvidia is kindly asking us to believe whatever they say at face value. If they provided a video to compare how it runs across Turing/Ampere/Ada, so we could see with our own eyes why they chose to only enable it on Ada, it would be the difference of night and day.


That makes perfect sense. And I agree - standardisation is needed.
I would say AMD didn't go the DirectML route as that would cut out all their old cards; RX 7000 is the first with "tensor cores".
https://gpuopen.com/learn/wmma_on_rdna3/ Edit: it looks like DirectML could work, just slowly on old cards...
 
we have proof that Turing and Ampere have the necessary hardware
It's a piece of hardware with the same name, but lesser capability by a significant amount, so at least by their own measure, it's not the necessary hardware.
 
It's a piece of hardware with the same name, but lesser capability by a significant amount, so at least by their own measure, it's not the necessary hardware.
That is the problem with a lack of transparency in product details.
AMD has this same issue with WMMA: they have dedicated AI cores, but... that's all we know; they will do things with them sometime in the future...
In the same way Nvidia doesn't mention the differences between its consumer tensor core implementation and workstation tensor cores.
 
It's a piece of hardware with the same name, but lesser capability by a significant amount, so at least by their own measure, it's not the necessary hardware.
I would still like to see how it handles (or doesn't handle) FG instead of believing Nvidia's claims without a second thought.
 
I would still like to see how it handles (or doesn't handle) FG instead of believing Nvidia's claims without a second thought.
I too would be very interested to know. The benefit of the doubt goes in all directions; I need to see certain claims tested before I'm willing to accept AMD's word for it, especially after they've shown they have extensive "we can be dodgy and anti-consumer" chops, especially recently.
 
I too would be very interested to know. The benefit of the doubt goes in all directions; I need to see certain claims tested before I'm willing to accept AMD's word for it, especially after they've shown they have extensive "we can be dodgy and anti-consumer" chops, especially recently.
Absolutely. Marketing material is never to be believed from any company.
 
I admit, I said mean things about the queen and some other royal family and was told by someone on Twitter they were going to phone me into the local police. However living in Montana I really don't give a fuck who they call, and even offered to donate to get the local constable a row boat to make the trip.
Some places aren't speech nazis.
Montana? The voice from the depth of whale's belly.
 
So AMD just boosted Ampere cards on behalf of Nvidia lol.

Nvidia meanwhile will continue to use software to sell hardware.

In my GTX 970 days I had the daily black screen with the "Display Driver Stopped Responding and Has Recovered" error for years.
Don't talk to me about Nvidia's perfect driver stability. Just check their driver & hotfix release changelogs.
I used to get that; I now routinely increase the driver timeout out of paranoia.
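For reference, "increasing the driver timeout" here means raising Windows' TDR (Timeout Detection and Recovery) delay, the mechanism behind the "Display Driver Stopped Responding and Has Recovered" message. A minimal sketch using Python's winreg; it needs an elevated prompt plus a reboot, and registry edits are at your own risk:

```python
import winreg

# Raise the GPU timeout from the default 2 seconds to 10 before Windows resets
# the display driver (HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers, TdrDelay).
key_path = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TdrDelay", 0, winreg.REG_DWORD, 10)
```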
 