
NVIDIA Details DLSS 4 Design: A Complete AI-Driven Rendering Technology

but but but nvidia's motto is "better than native resolution"?
DLSS is better than native at iso-performance, which is all that matters. Native looking better while running at 30 fps vs DLSS running at 60 is useless. Just compare 1440p native to 4K DLSS (you'll get the same performance) and DLSS will look insanely better; it's not even a comparison.
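As a side note on why that 1440p-native vs 4K-DLSS comparison comes out roughly performance-neutral: at the commonly cited Quality preset, the upscaler renders internally at about two-thirds of the output resolution per axis, which at 4K lands near 2560x1440. A minimal sketch, assuming those per-axis scale factors (presets and factors vary per title and are not official constants):

```python
# Rough sketch: internal render resolutions behind common upscaler presets,
# using the commonly cited per-axis scale factors. The preset names and
# factors here are assumptions for illustration, not official constants.

PRESET_SCALE = {
    "Quality": 1 / 1.5,            # ~66.7% per axis
    "Balanced": 1 / 1.72,          # ~58% per axis
    "Performance": 1 / 2.0,        # 50% per axis
    "Ultra Performance": 1 / 3.0,  # ~33.3% per axis
}

def internal_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    """Approximate internal render resolution for a given output and preset."""
    s = PRESET_SCALE[preset]
    return round(out_w * s), round(out_h * s)

if __name__ == "__main__":
    out_w, out_h = 3840, 2160  # 4K output
    for preset in PRESET_SCALE:
        w, h = internal_resolution(out_w, out_h, preset)
        saving = (out_w * out_h) / (w * h)
        print(f"4K {preset:17} -> renders ~{w}x{h} (~{saving:.1f}x fewer shaded pixels)")
    # 4K Quality lands at ~2560x1440, which is why "1440p native vs 4K DLSS"
    # ends up in roughly the same performance ballpark, minus upscaler overhead.
```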
 
MFG is snake oil, don't buy into that crap. Source? Me.
 
You do realize AMD and Intel were busy with developing a bunch of this stuff way before Nvidia ever released any of it right?

It's ok. People don't have reading comprehension.
You said AMD and Intel were Developing this stuff before Nvidia Released it.

but people read... AMD and Intel were Developing this stuff before Nvidia Developed it.

:shadedshu:
 
You do realize AMD and Intel were busy with developing a bunch of this stuff way before Nvidia ever released any of it right?
What's the point in being "busy developing a bunch of this stuff way before Nvidia" if you take too long to release it, or can't get it as good as your competitors do?
As an example, AMD has had Matrix cores since CDNA1, but those are useless to the regular consumer market and have only now reached a consumer product. Even in the enterprise market it's kind of moot, since their software stack can't make proper use of them.
 
Agree 100%, it was much better back when they pushed every year to get the best of the best (performance, technologies, efficiency, etc.)!

Unfortunately, all those companies only care about maximizing profits now. They're not pushing the limits anymore and are just relying on AI to do all the work (that they don't want to do because they're lazy). Look at the industry as a whole: there have been huge layoffs everywhere since 2021 and it's still going. Also, the people staying are doing the work of 2-3 jobs at once but barely make more money than before; it's like "modern slavery", and AI is only going to replace more and more people anyway.
Look, I am with you that the old days of improvement were much better than what is coming out year-to-year these days.

As for higher prices, you can blame the death of Dennard scaling, the growth of AI and crypto, the maturation and mass adoption of CUDA, ever-slowing and ever-more-costly fab node advances, the higher proportion of sales now going to deep-pocketed corporate buyers instead of retail, the pandemic changing habits, EVs and other products requiring more chips, higher interest rates, more expensive shipping, inflation in general, etc., etc.

But if you are implying that these companies did not care about maximizing profits back in 2005 or 2010 or 2015, but now they do, and that this is why prices are so high now when they were not in the past, I have a massive AI-enhanced bridge to sell you. :kookoo:

Define properly? It's not like they can conjure unlimited computing power out of thin air. The RTX 5090 is already unreasonably powerful; it's the first graphics card to break the single-precision (FP32) 100 TFLOPS mark, rated at around ~105 TFLOPS at its nominal clock speeds (it runs faster in practice). A half-rack IBM Blue Gene/L supercomputer with 512 nodes installed (8192 cores) from the early 2010s still falls utterly short of its performance - and you can run that on your PC, off a common desktop power supply, too.

This stuff is unreal. The point is that real-time ray-traced graphics at super high resolutions and super high frame rates is every bit the "holy grail" that Jensen Huang calls it, and then some. It's insanely advanced technology that will still take years to fully achieve.
I took a look at Anandtech's review of the HD 7970 back in 2012 and it achieves 3.79 TFLOPS (of what? FP32? It's not specified at all) vs the 48.7 TFLOPS (of FP32) that the 9070 XT can achieve. Pretty nice improvement in performance considering that Moore's law has been dying the whole time.
 
This IS what Machine learning SHOULD be used for, not to spy or enable human rights abuse.
 

I took a look at Anandtech's review of the HD 7970 back in 2012 and it achieves 3.79 TFLOPS (of what? FP32? It's not specified at all) vs the 48.7 TFLOPS (of FP32) that the 9070 XT can achieve. Pretty nice improvement in performance considering that Moore's law has been dying the whole time.

Yeah, it's 3.79 TF of single-precision float (FP32). The 5090 is so absurdly powerful that it's faster at FP32 than the 7900 XTX is at FP16 (double rate, so 102 TF) - and the 7900 XTX is still the 3rd most powerful gaming GPU on paper, behind only the 5090 and 4090. It's faster than the 5080 and 4080S at raw compute.

Also to correct myself: the computer I meant to compare was the Blue Gene/Q. The /L is the one from 2004, which in its highest configuration was just about as powerful as a single RTX 4090 (actually a little slower).
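For anyone who wants to sanity-check these on-paper numbers: they all come from the usual peak-throughput estimate (shader count x 2 FLOPs per clock for a fused multiply-add x clock speed), not from measurement. A minimal sketch, with approximate shader counts and assumed clocks; the exact figure shifts depending on which clock you plug in:

```python
# Rough sketch of the usual peak-throughput estimate (not vendor data):
# peak FP32 TFLOPS ~= shaders * 2 FLOPs/clock (one FMA) * clock_GHz / 1000.
# Shader counts and clocks below are approximate public specs, not measurements.

def peak_tflops(shaders: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical peak throughput in TFLOPS from a simple FMA-based estimate."""
    return shaders * flops_per_clock * clock_ghz / 1000.0

if __name__ == "__main__":
    # HD 7970: 2048 stream processors at 925 MHz -> ~3.79 TFLOPS FP32,
    # matching the figure quoted from the old Anandtech review.
    print(f"HD 7970  FP32: {peak_tflops(2048, 0.925):.2f} TFLOPS")

    # RTX 5090: 21760 CUDA cores at an assumed ~2.41 GHz boost -> ~105 TFLOPS FP32.
    print(f"RTX 5090 FP32: {peak_tflops(21760, 2.41):.1f} TFLOPS")

    # RDNA3 runs FP16 at double rate, so its FP16 peak is simply 2x the FP32
    # figure; the exact number depends heavily on which clock you assume.
    xtx_fp32 = peak_tflops(12288, 2.3)  # 7900 XTX ALUs at an assumed ~2.3 GHz
    print(f"7900 XTX FP32: {xtx_fp32:.1f} TFLOPS, FP16 (2x rate): {2 * xtx_fp32:.1f} TFLOPS")
```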
 
Nvidia should invest more into improving DLSS further now that FSR4 SR is already pretty close to DLSS4 SR. Nvidia can't afford to fall asleep at the wheel like Intel did.
 
Nvidia should invest more into improving DLSS further now that FSR4 SR is already pretty close to DLSS4 SR. Nvidia can't afford to fall asleep at the wheel like Intel did.
There's a pretty decent chance that the fighting will start to die down and there will be some sort of alignment, both about the hardware needed and the algorithms/models used. Everyone is on the same level as far as technology goes, and while models can undoubtedly be improved, it does seem that we are heavily in diminishing-returns territory. Microsoft trying to standardize the feature will also play a part.
 
There's a pretty decent chance that the fighting will start to die down and there will be some sort of alignment, both about the hardware needed and the algorithms/models used. Everyone is on the same level as far as technology goes, and while models can undoubtedly be improved, it does seem that we are heavily in diminishing-returns territory. Microsoft trying to standardize the feature will also play a part.

I once thought DLSS3 was reaching diminishing returns, but the Transformer model changed all that; now it's only the beginning.
Even FSR4 is a hybrid CNN/Transformer model according to AMD.

The fight for technological superiority must go on :p
 
I once thought DLSS3 was reaching diminishing returns
Maybe quality did, but there was a lot of room for improvement with, e.g., ultra performance. The Transformer model made the performance modes usable, which is insane.
 
Came here for the moaning, wasn't disappointed.

DLSS is the best thing since baked bread. I'll gladly sacrifice the minuscule loss of quality (which most people hardly ever notice when actually playing the game, rather than poring over comparison screenshots or videos) for the massive fps gains that let me play the latest games in 4K, often maxed out and with RT to boot.

Yeah, sure, it'd be great if you could play them at native res. But the fact that a GPU that lets you do that doesn't even exist should tell you something about the state of affairs.
 
Been using the DLSS 4 transformer model (profiles J and K) in FF7 Rebirth and it looks and runs fantastic. It's the best part of this new GPU gen to me so far; no new hardware needed.
 
What's the point in being "busy developing a bunch of this stuff way before Nvidia" if you take too long to release it, or can't get it as good as your competitors do?
As an example, AMD has had Matrix cores since CDNA1, but those are useless to the regular consumer market and have only now reached a consumer product. Even in the enterprise market it's kind of moot, since their software stack can't make proper use of them.

Well, what is the point of releasing something that does not work well yet, like DLSS1?
 
Well, what is the point of releasing something that does not work well yet, like DLSS1?
Gathering adoption, customer feedback and iterating over time. DLSS1 was shit but served to carve the path.
Being first to market has tons of benefits in any kind of business.
 
Well, what is the point of releasing something that does not work well yet, like DLSS1?

What was the point of FSR 1?

Everything has a 1.0 release.
 
Gathering adoption, customer feedback and iterating over time. DLSS1 was shit but served to carve the path.
Being first to market has tons of benefits in any kind of business.

Sure, but there's also a lot of risk, and again, it's just a case of when you feel it's good enough to show to the audience, and that bar is different for different people.

DE allows people to play the pre-alpha version of their upcoming game Soulframe, but it's such a nothing product at the moment that I would not recommend it to anyone. There is virtually nothing to play because, obviously, it's pre-alpha. I know they want to gather feedback, but I would not have exposed this to the public at this time... and it's been accessible for like 2 years now.

What was the point of FSR 1?

Everything has a 1.0 release.

Re-read the conversation; this is not the type of thing you would ask if you understood what it was about, sorry.
 
There are two main ways to speed up image rendering:
  1. Increase the number of transistors in the GPU chip.
  2. Develop new algorithmic techniques (AI being one of them).
So, what's the result? More transistors mean exponentially higher costs, making the hardware unaffordable for many users. On the other hand, improving the architecture and optimizing algorithms mainly requires a time investment, without significantly increasing manufacturing costs.
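To put a rough number on point 2, here is a back-of-the-envelope sketch. It assumes a purely resolution-bound workload and a made-up fixed per-frame upscaler cost, so treat the figures as illustrative only, not as measurements:

```python
# Back-of-the-envelope sketch of point 2 (purely illustrative numbers): if
# shading cost scales roughly with shaded pixel count and the upscaler adds a
# fixed per-frame cost, rendering at a lower internal resolution buys most of
# the speedup without any extra transistors.

def estimated_frame_time_ms(native_ms: float, scale_per_axis: float,
                            upscaler_overhead_ms: float) -> float:
    """Crude model: resolution-bound work shrinks with pixel count (scale^2),
    then a fixed upscaling cost is added back on top."""
    return native_ms * (scale_per_axis ** 2) + upscaler_overhead_ms

if __name__ == "__main__":
    native_ms = 33.3   # assume a game running at 30 fps at native resolution
    overhead_ms = 1.5  # assumed fixed upscaler cost per frame (made up)

    for label, scale in [("Quality (2/3)", 2 / 3), ("Performance (1/2)", 0.5)]:
        ms = estimated_frame_time_ms(native_ms, scale, overhead_ms)
        print(f"{label:17}: ~{ms:.1f} ms/frame (~{1000 / ms:.0f} fps)")
    # Real games are never 100% resolution-bound, so actual gains are smaller
    # than this idealized model suggests.
```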
 
what is what in that analogy?
An axe to grind; there's little to glean from it aside from that.
The fight for technological superiority must go on :p
Absolutely, there is clearly a lot more room to take the technology, so let the race continue. We're not ready to settle for the okay-but-open option just yet, especially since the latest models from all 3 major camps hold so much promise.
 
Came here for the moaning, wasn't disappointed.

DLSS is the best thing since baked bread. I'll gladly sacrifice the minuscule loss of quality (which most people hardly ever notice when actually playing the game, rather than poring over comparison screenshots or videos) for the massive fps gains that let me play the latest games in 4K, often maxed out and with RT to boot.

Yeah, sure, it'd be great if you could play them at native res. But the fact that a GPU that lets you do that doesn't even exist should tell you something about the state of affairs.

Perfectly valid way of looking at it. I think the issue is when we require upscaling to run games decently at resolutions below 4K. 4K and above are the "peak" of current display technology, so obviously you're going to need absurdly powerful hardware OR software trickery like DLSS/FSR.

But I shouldn't have to enable DLSS/FSR for something like Monster Hunter Wilds to get 60 fps and higher at 1440p! The visuals in the majority of modern games that require DLSS/FSR just don't justify the absurd resource cost they ask for, that's all.
 
Perfectly valid way of looking at it. I think the issue is when we require upscaling to run games decently at resolutions below 4K. 4K and above are the "peak" of current display technology, so obviously you're going to need absurdly powerful hardware OR software trickery like DLSS/FSR.

But I shouldn't have to enable DLSS/FSR for something like Monster Hunter Wilds to get 60 fps and higher at 1440p! The visuals in the majority of modern games that require DLSS/FSR just don't justify the absurd resource cost they ask for, that's all.

The amount of effort needed to implement upscaling in games today is minuscule compared to doing any sort of optimization for the vast range of hardware currently in the PC space, LOL.

Do you think Monster Hunter Wilds would be better optimized had upscaling not been implemented?

Also, you are not forced to enable upscaling if 30 FPS is all you need anyway.

Absolutely, there is clearly a lot more room to take the technology, so let the race continue. We're not ready to settle for the okay-but-open option just yet, especially since the latest models from all 3 major camps hold so much promise.

Yup, as a tech lover I'm always excited for the breakthrough tech that could improve my gaming experience further, whether it's from a new CPU, GPU or monitor (a 4K 480 Hz OLED screen would be sick).
 
But I shouldn't have to enable DLSS/FSR for something like Monster Hunter Wilds to get 60 fps and higher at 1440p! The visuals in the majority of modern games that require DLSS/FSR just don't justify the absurd resource cost they ask for, that's all.
I'm sure there's an element of using upscaling as a development crutch these days. But to what extent? It's impossible to tell, because most of the internet commentary is based on projection and hyperbole. I don't know whether it's really the "majority" of modern games. Haven't played MHW, but the gfx looks fairly advanced and it's an open world, so...
 