
NVIDIA GeForce "Ampere" Hits 3DMark Time Spy Charts, 30% Faster than RTX 2080 Ti

You mean sharpening is an alternative to image recreation :laugh: and a better one :laugh:
And if you claim otherwise, you're a fanboy :roll:


Sweetie, image sharpening is hardly a feature; calling it that is a stretch in 2020. It's the easiest trick a GPU can do to produce an image that may look better to the untrained eye.
Nvidia has had that feature via Freestyle for years, but "folks from the known camp" tend to forget about it, until AMD has it two years later, that is.
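For a sense of why it's such a cheap trick, here is a minimal unsharp-mask sketch in Python, assuming a greyscale frame normalised to [0, 1]; purely illustrative, the function name and parameters are made up, and this is not how Freestyle or any vendor's driver sharpening is actually implemented:

```python
# Minimal unsharp-mask sharpening sketch (illustrative only; NOT how Freestyle
# or any vendor's driver-level sharpening is actually implemented).
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, strength: float = 0.5, sigma: float = 1.0) -> np.ndarray:
    """Sharpen by adding back the difference between the image and a blurred copy."""
    blurred = gaussian_filter(image, sigma=sigma)   # cheap low-pass filter
    detail = image - blurred                        # high-frequency content (edges)
    return np.clip(image + strength * detail, 0.0, 1.0)

# Example: sharpen a random greyscale "frame" normalised to [0, 1]
frame = np.random.rand(1080, 1920).astype(np.float32)
sharpened = unsharp_mask(frame, strength=0.6)
```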

And Hardware Unboxed has their own video on DLSS 2.0, I guess you missed that one. I advise you not to watch it because it'll just make you sad how good they say it is.

Let me inform you, since you're slow to learn new facts

Provided we get the same excellent image quality in future DLSS titles, the situation could be that Nvidia is able to provide an additional 30 to 40 percent extra performance when leveraging those tensor cores. We'd have no problem recommending gamers to use DLSS 2.0 in all enabled titles because with this version it’s basically a free performance button.

The visual quality is impressive enough that we’d have to start benchmarking games with DLSS enabled -- provided the image quality we’re seeing today holds up in other DLSS games -- similar to how we have benchmarked some games with different DirectX modes based on which API performs better on AMD or Nvidia GPUs. It’s also apparent from the Youngblood results that the AI network tensor core version is superior to the shader core version in Control.
 
image recreation
I hope this was sarcasm.

that feature
No, not "that feature" but Nvidia's upscaling tech, called DLSS 1.0, was beaten, according to the review linked above.
Nobody directly compared it to the image upscaler called DLSS 2.0.

If you are so concerned about the naming:



It is notable that using neural-network-like processing to upscale images is not something either of the GPU manufacturers pioneered:

it'll just make you sad how good they say it is.
What's sad is the reality-distortion thing; there is nothing sad about upscaling doing its job.
 
Nobody directly compared it to the image upscaler called DLSS 2.0.
What are you talking about? There have been a dozen reviews of DLSS 2.0 from every major site. FOR MONTHS.
Here, even your favorite AMD-leaning, data-manipulating channel has to admit it:

Provided we get the same excellent image quality in future DLSS titles, the situation could be that Nvidia is able to provide an additional 30 to 40 percent extra performance when leveraging those tensor cores. We'd have no problem recommending gamers to use DLSS 2.0 in all enabled titles because with this version it’s basically a free performance button.

The visual quality is impressive enough that we’d have to start benchmarking games with DLSS enabled -- provided the image quality we’re seeing today holds up in other DLSS games -- similar to how we have benchmarked some games with different DirectX modes based on which API performs better on AMD or Nvidia GPUs. It’s also apparent from the Youngblood results that the AI network tensor core version is superior to the shader core version in Control.
 
This chart is a good indicator of how the 3080 will compare to the 2080 Ti: it clearly shows that the GTX 1080 was 26% (stock) to 34% (slightly overclocked) faster than the GTX 980 Ti, and this Ampere leak shows the 3080 as 31% faster than the 2080 Ti.

[chart: relative performance at 3840×2160]
 
This chart is a good indicator of how the 3080 will compare to the 2080 Ti: it clearly shows that the GTX 1080 was 34% faster than the GTX 980 Ti, and this Ampere leak shows the 3080 as 31% faster than the 2080 Ti.

[chart: relative performance at 3840×2160]
Mostly due to VRAM and bandwidth at 4K.
At 1440p that was maybe 25%.
 
No, it shows that an unknown Ampere card is 31% faster. Could be the 3080, could be the 3090, heck, could be the 3070.
Exactly.
The thread title should say "RTX whangdoodle".
 
Let me inform you, since you're slow to learn new facts
Let's be clear about your "facts" ~ we'll need a lot more data & a lot more (in-game) comparisons to see whether there's a discernible loss in IQ compared to when DLSS is off. So when you're quoting them, remember to highlight this very important part ~
The visual quality is impressive enough that we’d have to start benchmarking games with DLSS enabled -- provided the image quality we’re seeing today holds up in other DLSS games
And I'll add again ~ we need a lot more data, not just one-off comparisons from YouTubers.
 
Let's be clear about your "facts" ~ we'll need a lot more data & a lot more (in-game) comparisons to see whether there's a discernible loss in IQ compared to when DLSS is off. So when you're quoting them, remember to highlight this very important part ~
And I'll add again ~ we need a lot more data, not just one-off comparisons from YouTubers.
I quoted one source.
That does not mean there is just one DLSS 2.0 review out there.
And DLSS 2.0 is not trained on a per-game basis.
But how would you know that? There have been reviews for four months and you don't even know.
 
Yes & that's why I said a lot more data than "just one-off comparisons from YouTubers", so I know there are multiple reviews out there. I'd also like more print-media comparisons featuring DLSS 2.0, including from yours truly, TPU. Again: more games & a lot more data.
But how would you know that? There have been reviews for four months and you don't even know.
You don't wanna go down that rabbit hole with me, trust me :rolleyes:
 
What are you talking about? There have been a dozen reviews of DLSS 2.0 from every major site. FOR MONTHS.

Comprehending written statements is hard, I guess. What was your interpretation of "compared it"? What is that "it", insightful one?
 
Comprehending written statements is hard, I guess. What was your interpretation of "compared it"? What is that "it", insightful one?
What? Can you start making sense, please? Quote the part you're referring to, maybe?

Why would anyone compare image sharpening to image reconstruction, really? Make sense much?

Nvidia vs. AMD driver image sharpening makes sense.

DLSS vs. image sharpening? Why?

To get a 40% performance uplift from a resolution drop plus sharpening, you have to drop the resolution by 40% and apply tons of sharpening that may look good to an untrained eye but looks really bad on closer inspection.
You're not getting the same image quality as native resolution vs. the DLSS quality preset with that performance uplift.
Maybe DLSS performance mode vs. a resolution-scale drop plus sharpening would be comparable in terms of quality, but then again, with the performance preset you're getting double the framerate.
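To put rough numbers on that trade-off, here's a back-of-the-envelope sketch under the simplifying assumption that frame rate scales inversely with the number of shaded pixels (real games are only partly resolution-bound, so treat it as an optimistic upper bound; the helper name is made up for the example):

```python
# Back-of-the-envelope: per-axis render scale needed for a target fps uplift,
# assuming frame time scales linearly with shaded pixel count (idealised).
def render_scale_for_uplift(target_uplift: float) -> float:
    """E.g. target_uplift=0.40 means +40% fps."""
    pixel_fraction = 1.0 / (1.0 + target_uplift)  # fewer pixels -> shorter frame time
    return pixel_fraction ** 0.5                  # per-axis scale = sqrt(pixel fraction)

for uplift in (0.30, 0.40, 1.00):
    scale = render_scale_for_uplift(uplift)
    print(f"+{uplift:.0%} fps needs ~{scale:.0%} per-axis scale "
          f"({scale ** 2:.0%} of the native pixel count)")
```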

Sorry, but what you're arguing here is just irrelevant.
 
I think a few of you are going off topic.

This is an Ampere performance rumour thread.


So, on topic:
Did you see the PCIe version of the A100?

250 watts, not 400 like the SXM version, and only a 10% performance loss.

To me this indicates they really are pushing the silicon past its efficiency curve: 150 watts for that last 10%.

We're expecting up to 300 watts TDP on a 3080 Ti, so it seems like they're pushing that curve.
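A quick sanity check of that efficiency-curve argument, using only the figures quoted in this thread (400 W SXM vs. 250 W PCIe at roughly 90% of the performance); plain arithmetic, not official numbers:

```python
# Rough perf-per-watt comparison from the A100 figures quoted above
# (400 W SXM4 vs. 250 W PCIe at ~90% of the performance); illustrative only.
sxm_perf, sxm_watts = 1.00, 400.0
pcie_perf, pcie_watts = 0.90, 250.0

sxm_eff = sxm_perf / sxm_watts    # relative performance per watt
pcie_eff = pcie_perf / pcie_watts

print(f"SXM : {sxm_eff:.5f} perf/W")
print(f"PCIe: {pcie_eff:.5f} perf/W")
print(f"PCIe is ~{pcie_eff / sxm_eff - 1:.0%} better perf/W; "
      f"the last ~{sxm_perf - pcie_perf:.0%} of performance costs an extra "
      f"{sxm_watts - pcie_watts:.0f} W")
```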
 
We'll have to wait on that one; I'm pretty sure Nvidia can dial down clocks if they see that RDNA2's top tiers can't compete with them on price or performance. Clocks can literally change after a launch, as AMD showed us o_O
 
I think a few of you are going off topic.

This is an Ampere performance rumour thread.


So, on topic:
Did you see the PCIe version of the A100?

250 watts, not 400 like the SXM version, and only a 10% performance loss.

To me this indicates they really are pushing the silicon past its efficiency curve: 150 watts for that last 10%.

We're expecting up to 300 watts TDP on a 3080 Ti, so it seems like they're pushing that curve.
TBP.
TDP is unknown.
 
I didn't say it was, I said up to.

Based on cooling and hypothetical common sense.
The 2080 Ti is 285 W and it's the most power-efficient Turing card.

[chart: performance per watt at 3840×2160]


So yeah, 400 W and up to 300 W are kinda different, right?
 
Quote the part you're referring to, maybe?
Radeon Susan, dude, it ain't hard:

[screenshot attachment]

Why would anyone compare image sharpening to image reconstruction, really?
Why would anyone call upscaling different names?
Because marketing.

Did you see the PCIe version of the A100?

250 watts, not 400 like the SXM version, and only a 10% performance loss.

To me this indicates they really are pushing the silicon past its efficiency curve: 150 watts for that last 10%.
I'm puzzled where you've seen the performance and power-consumption figures.
 
Radeon Susan, dude, it ain't hard:

[screenshot attachment]

Why would anyone call upscaling different names?
Because marketing.


I'm puzzled where you've seen the performance and power-consumption figures.
You didn't answer me.
Why compare a resolution drop with image reconstruction to just a pure, simple resolution-scale drop?
Is dropping resolution a new feature now? :laugh:
 
TPU>


SXM uses 400 watts, it says so.
PCIe uses 250 watts, it says so.

The 2080 Ti uses near 300 watts, as TPU showed.

The cooler we saw a week or two ago was rated for 300 watts.
 
Turing was very disappointing (overpriced) for the performance uplift compared to Pascal; hopefully Ampere and RDNA2 will bring a real generational jump.
It was both.
I mean, I could understand a $1000 2080 Ti because the competition wasn't there (still isn't two years later, and might not be this year), but not for a +30-35% uplift over the 1080 Ti.
 