
NVIDIA GeForce RTX 2080 Ti Founders Edition 11 GB

The only thing to complain about is the price, and it's fine letting Nvidia know it's too high.
But the price doesn't change the fact that Turing is a great achievement in performance gains, efficiency, etc. A 39% gain over the previous generation is a substantial step forward, and still more than most generational changes in the past. If AMD had achieved this, no one would complain at all.

What's not to understand?
Nvidia introduces GameWorks, it catches flak for using proprietary technologies.
Nvidia introduces Pascal, it catches flak for introducing a mere refinement over Maxwell.
Nvidia introduces Turing, it catches flak for RTX not being in widespread use already.
Is the pattern more obvious now? ;)
Nvidia introduces CUDA, it catches flak for using proprietary technologies.
Nvidia introduces G-Sync, it catches flak for using proprietary technologies.

AMD introduces Mantle, a proprietary AMD-only alternative to Direct3D and OpenGL with "support" in a handful of games. Even after it turned out that Mantle only had marginal gains for low-end APUs, and that it would never catch on among developers, AMD is still praised.

I wonder if there is a double standard?
 
Only DLSS can save Nvidia. They bet on a large die with this generation, and they cannot sell it at bargain prices. They simply cannot afford to sell a 2080 Ti at 1080 Ti prices, because they would make almost no profit: the 2080 Ti is on a newer node and its die is much bigger than the 1080 Ti's (because of the RT and tensor cores). If DLSS can add at least 25% more performance, they might sell pretty well.
 
@efikkan Yeah, "price is too high" is an understatement.
But even that is only in part because of greed and/or lack of competition. The price is high because chips this large are that expensive.
Like you said, it's OK to let Nvidia know they went too far, because at the end of the day, justified or not, these prices mean extremely few gamers will be able to afford these cards. Even then, low sales will convey the message better than any forum whining.
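To put rough numbers on "chips this large are that expensive", here is a quick back-of-envelope sketch. The die sizes are the public figures (GP102 ≈ 471 mm², TU102 ≈ 754 mm²); the wafer price is a made-up placeholder and yields are ignored, so treat the output purely as an illustration of how per-die cost climbs faster than die area.

```python
# Rough back-of-envelope: why a 754 mm^2 die (TU102) costs much more to make
# than a 471 mm^2 die (GP102). Wafer price is a placeholder assumption, not a
# real quote; yield losses are ignored, so real costs diverge even further.
import math

WAFER_DIAMETER_MM = 300   # standard 300 mm wafer
WAFER_COST_USD = 8000     # placeholder assumption

def gross_dies_per_wafer(die_area_mm2: float) -> int:
    """Classic gross dies-per-wafer approximation (no yield applied)."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

for name, area in [("GP102 (1080 Ti)", 471), ("TU102 (2080 Ti)", 754)]:
    dies = gross_dies_per_wafer(area)
    print(f"{name}: ~{dies} dies/wafer, ~${WAFER_COST_USD / dies:.0f} per die")
```

With those placeholder inputs you get roughly 119 candidate dies per wafer for GP102 versus about 69 for TU102, i.e. the big die costs roughly 1.7x as much per chip before yield even enters the picture.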
 
I mean, Vega was essentially AMD's first time in the high-end GPU market, because previously they just heavily saturated the mid-tier market, but how did AMD do? Well, Sapphire's Nitro+ Vega 64 consistently beats the GTX 1080 FE, so I think AMD did pretty well for their first time in the high-end market.

Are you serious? Vega was the first high-end card from AMD?

What about Fury X? What about 290X? What about HD7970/Ghz Edition? What about the multiple dual-GPU cards they released?

AMD has been competing in the high-end market for a long time.
 
The only thing to complain about is the price, and it's fine letting Nvidia know it's too high.
But the price doesn't change the fact that Turing is a great achievement in performance gains, efficiency, etc. A 39% gain over the previous generation is a substantial step forward, and still more than most generational changes in the past. If AMD had achieved this, no one would complain at all.


Nvidia introduces CUDA, it catches flak for using proprietary technologies.
Nvidia introduces G-Sync, it catches flak for using proprietary technologies.

AMD introduces Mantle, a proprietary AMD-only alternative to Direct3D and OpenGL with "support" in a handful of games. Even after it turned out that Mantle only had marginal gains for low-end APUs, and that it would never catch on among developers, AMD is still praised.

I wonder if there is a double standard?


So, CUDA I don't know enough about, so I won't comment. However, FreeSync is essentially just AMD's name for adaptive sync, a VESA standard. It's something Nvidia could support, but instead they choose to continue with G-Sync, which is basically identical except for the added price premium.

Your statement about Mantle is entirely false, though. Mantle was not designed to be a proprietary AMD-only API. Vulkan is what we got from AMD's API development, an open-standard API that every single gamer should support (unless they work for Microsoft).
 
I mean, Vega was essentially AMD's first time in the high-end GPU market, because previously they just heavily saturated the mid-tier market, but how did AMD do? Well, Sapphire's Nitro+ Vega 64 consistently beats the GTX 1080 FE, so I think AMD did pretty well for their first time in the high-end market.

Navi is supposed to be even better than Vega, with next-gen memory, and I guarantee you that Navi will be cheaper than the new Nvidia cards. Unless you are solely gaming at 4K, the GTX 1080 Ti and the RTX 2080 Ti are complete overkill. I personally think that 1440p is the ideal resolution for gaming, and both the Vega 56 and 64 handle 1440p like a champ.
So the 9800 Pro, 2900 XT, 4870, 4870 X2, 5870, 5970, 6970, 6990, 7970, 290X, 295X2, Fury X, etc., do NONE of those exist anymore?
 
@efikkan Yeah, "price is too high" is an understatement.

But even that is only in part because of greed and/or lack of competition. The price is high because chips this large are that expensive.

Like you said, it's OK to let Nvidia know they went too far, because at the end of the day, justified or not, these prices mean extremely few gamers will be able to afford these cards. Even then, low sales will convey the message better than any forum whining.
We still have to remember that Nvidia is trying to dump its remaining stock of Pascal cards as well. It remains to be seen whether the price stays unchanged in a few months as the Pascal cards run out.

Nvidia should be cautious, though: driving up the cost of desktop gaming might hurt the long-term gains we've seen in user base vs. consoles.

Still, we have to remember that AMD has done similar things that have been long forgotten. When Vega launched, it was priced $100 over "MSRP" with "$100 worth of games included", but they quickly changed this after massive criticism.

So, CUDA I don't know enough about, so I won't comment. However, FreeSync is essentially just AMD's name for adaptive sync, a VESA standard. It's something Nvidia could support, but instead they choose to continue with G-Sync, which is basically identical except for the added price premium.

Your statement about Mantle is entirely false, though. Mantle was not designed to be a proprietary AMD-only API. Vulkan is what we got from AMD's API development, an open-standard API that every single gamer should support (unless they work for Microsoft).
G-Sync still relies on adaptive sync, with advanced processing on the monitor side, which FreeSync doesn't have. If it's worth it or not is up to you, but they are not 100% comparable.

You're wrong about Mantle. It was a proprietary API designed around GCN. Direct3D and Vulkan inherited some of the function names from it, but not the underlying structure; they are only similar on the surface. Vulkan is built on SPIR-V, which is derived from OpenCL. The next version of Direct3D plans to copy this compiler structure from Vulkan, which is something Mantle lacked completely.
 
Are you serious? Vega was the first high-end card from AMD?

What about Fury X? What about 290X? What about HD7970/Ghz Edition? What about the multiple dual-GPU cards they released?

AMD has been competing in the high-end market for a long time.

So neither the Fury X nor the 290X (or its multiple revisions) is considered high end, though they may be in the upper echelon of mid-tier cards. By today's standards, you don't start hitting the high-end GPU market until you get to GTX 1070 levels. The Fury X's performance is slightly above a GTX 1060 6GB, but it is nowhere near a GTX 1070. The 290X and its family can't even compete with the GTX 1060 6GB, which is a mid-tier card. If these cards were released 4-6 years ago that would be a different story, but when they were released they weren't really high end.

The HD7970 is a different story because for its time it was definitely high end, but that was also 7 years ago. AMD hasn't made high-end GPUs for a very long time, so I guess it was a poor choice of wording on my end: what I should have said was that Vega was essentially AMD's return to the high-end GPU market. Would that be more acceptable?
 
CUDA means absolutely nothing to the average consumer.

So neither the Fury X nor the 290X (or its multiple revisions) is considered high end,

Absolutely not true: the 290X was faster than anything else at the time of release; you cannot get more high end than that.

I think you are confusing it with something else.
 
So neither the Fury X nor the 290X (or its multiple revisions) is considered high end, though they may be in the upper echelon of mid-tier cards. By today's standards, you don't start hitting the high-end GPU market until you get to GTX 1070 levels. The Fury X's performance is slightly above a GTX 1060 6GB, but it is nowhere near a GTX 1070. The 290X and its family can't even compete with the GTX 1060 6GB, which is a mid-tier card. If these cards were released 4-6 years ago that would be a different story, but when they were released they weren't really high end.

The HD7970 is a different story because for its time it was definitely high end, but that was also 7 years ago. AMD hasn't made high-end GPUs for a very long time, so I guess it was a poor choice of wording on my end: what I should have said was that Vega was essentially AMD's return to the high-end GPU market. Would that be more acceptable?
So Nvidia never competed in the high end either, because the 780 Ti wasn't anywhere near the 1070, right?

I think you just don't understand what the term "high-end GPU" means.

The 290X was faster than the 780, and traded blows with the 780 Ti. In case you don't remember, the 290X made Nvidia panic: the 780 Ti was not slated for a consumer release and was rushed out after the 290X turned out to be way better than the 780. The Fury X was trading blows with the 980 Ti after a few months of driver updates. Ergo, those cards are "high end". The 9800 Pro was king of the hill for over a year after its launch and absolutely embarrassed Nvidia. The 5870 at launch was the fastest GPU available.

Also, it's interesting that you have gone from "AMD never competed in the high end" to "Well, AMD DID compete in the high end, but that was like 7 years ago". Which is it?
 
So neither the Fury X nor the 290X (or its multiple revisions) is considered high end, though they may be in the upper echelon of mid-tier cards. By today's standards, you don't start hitting the high-end GPU market until you get to GTX 1070 levels. The Fury X's performance is slightly above a GTX 1060 6GB, but it is nowhere near a GTX 1070. The 290X and its family can't even compete with the GTX 1060 6GB, which is a mid-tier card. If these cards were released 4-6 years ago that would be a different story, but when they were released they weren't really high end.

The HD7970 is a different story because for its time it was definitely high end, but that was also 7 years ago. AMD hasn't made high-end GPUs for a very long time, so I guess it was a poor choice of wording on my end: what I should have said was that Vega was essentially AMD's return to the high-end GPU market. Would that be more acceptable?

I ask again, are you serious or trolling at this point?
 
CUDA means absolutely nothing to the average consumer.



Absolutely not true: the 290X was faster than anything else at the time of release; you cannot get more high end than that.

I think you are confusing it with something else.

I mean, the 290X was not faster than the GTX 780 Ti, and they were released within a month of each other. From the GTX 780 Ti, Nvidia went on to the 900 series, and how did AMD respond? By rehashing the 290X as the 390X.
 
If these cards were released 4-6 years ago that would be a different story, but when they were released they weren't really high end.

Did I miss the Fury or 290X release the other day? I could have sworn I read something about them many, many moons ago.
 
So Nvidia never competed in the high end either, because the 780 Ti wasn't anywhere near the 1070, right?

I think you just don't understand what the term "high-end GPU" means.

The 290X was faster than the 780, and traded blows with the 780 Ti. In case you don't remember, the 290X made Nvidia panic: the 780 Ti was not slated for a consumer release and was rushed out after the 290X turned out to be way better than the 780. The Fury X was trading blows with the 980 Ti after a few months of driver updates. Ergo, those cards are "high end". The 9800 Pro was king of the hill for over a year after its launch and absolutely embarrassed Nvidia. The 5870 at launch was the fastest GPU available.

Also, it's interesting that you have gone from "AMD never competed in the high end" to "Well, AMD DID compete in the high end, but that was like 7 years ago". Which is it?

The Fury X got absolutely stomped on by the GTX 980 Ti... it wasn't really trading blows.

http://gpu.userbenchmark.com/Compare/Nvidia-GTX-980-Ti-vs-AMD-R9-Fury-X/3439vs3498

The last AMD card that I had was the R9 390X, which was released around the same time as the GTX 980 Ti, but Nvidia's card was a lot better in terms of performance. The Fury X got smoked, and the 290X / 390X got smoked. I sold my R9 390X and got a GTX 1070, yet I am about to sell my GTX 1070 because I just got a Vega 64. Either way you spin it, it boils down to this: AMD hasn't been competitive in the high-end GPU market for 3+ years or so.
 
I mean, the 290X was not faster than the GTX 780 Ti, and they were released within a month of each other. From the GTX 780 Ti, Nvidia went on to the 900 series, and how did AMD respond? By rehashing the 290X as the 390X.

That has nothing to do with your claim that the 290X wasn't high end, which is undoubtedly not true.

The Fury X got absolutely stomped on by the GTX 980 Ti... it wasn't really trading blows.

http://gpu.userbenchmark.com/Compare/Nvidia-GTX-980-Ti-vs-AMD-R9-Fury-X/3439vs3498

Userbenchmark, ehm ... OK. Though have a look at TPU's own rankings:

[attachment: TPU relative performance chart]

7%. We certainly have different definitions of "stomped", still a high-end card nonetheless.
 
That has nothing to do with your claim that the 290X wasn't high end, which is undoubtedly not true.



Userbenchmark, ehm ... OK. Though have a look at TPU's own rankings:

[attachment: TPU relative performance chart]

7%. We certainly have different definitions of "stomped", still a high-end card nonetheless.

Instead of showing me some useless benchmark, why don't you look at this and compare and contrast.

http://gpu.userbenchmark.com/Compare/Nvidia-GTX-980-Ti-vs-AMD-R9-Fury-X/3439vs3498

The funny thing is that I am an AMD fan, but until Vega, they hadn't really been relevant in the high-end GPU market. Why buy a Fury X when I can buy a GTX 980 Ti that is not only cheaper but completely outperforms it?
 
G-Sync still relies on adaptive sync, with advanced processing on the monitor side, which FreeSync doesn't have. If it's worth it or not is up to you, but they are not 100% comparable.

You're wrong about Mantle. It was a proprietary API designed around GCN. Direct3D and Vulkan inherited some of the function names from it, but not the underlying structure; they are only similar on the surface. Vulkan is built on SPIR-V, which is derived from OpenCL. The next version of Direct3D plans to copy this compiler structure from Vulkan, which is something Mantle lacked completely.

G-Sync and FreeSync are absolutely directly comparable. They do the same basic thing, synchronizing a monitor's refresh rate with the frames the GPU is producing; that's it. There are multiple videos, both objective and subjective, showing that the hardware Nvidia adds simply isn't needed. Simply put, Nvidia doesn't need to use G-Sync modules.

As for Mantle, we know AMD gave their Mantle code to Khronos. Given where AMD was in the market in 2014, it's unlikely they thought they could get away with Mantle being a proprietary AMD-only API. Mantle was essentially the awakening of low-level APIs, and it would be more sane to praise AMD for developing Mantle than to criticize them for it.

AMD simply doesn't get a lot of flak because they support many open standards (Adaptive Sync, OpenCL, etc.), and maybe, because of where they are in the market, they need to; the same cannot be said about Nvidia (G-Sync, CUDA, etc.).
 
Performs well, but Nvidia went overboard with the prices. Also, I can't help but wonder how much better these would have been on 10 nm. Lower power usage and higher clocks? Ten to twenty percent more performance?
 
So, AdoredTV was right, or was he not?
Adored was wrong - this is WORSE. He predicted a +40% increase over the GTX 1080 Ti, but obviously that was leaked from Nvidia's "testing guidelines".

Adored was not the only one who predicted this; everyone did. Everyone was saying the 2080 Ti would be 35-40% faster than the 1080 Ti. This review showed it was 38% faster; other reviewers got 35% or so. Seems spot on to me. The 1080p and 1440p benches don't count, as many of them were bottlenecked by the CPU, and in the case of Doom a lot of the cards were hitting the 200 FPS hard limit. If you were picking either of these up for 1080p, you need to get your head examined. The 2080 Ti still makes sense for ultrawide monitors (performance-wise) if you absolutely must get 100+ FPS in every game with everything maxed out.

Only the 2080 Ti is amazing; the regular 2080 has no place in the market at this moment. It trades blows with the 1080 Ti, which you can get for far less money.

Are we comparing prices of used 1080 Tis? In the US, new 1080 Tis cost about $700, while a 2080 can be picked up for as little as $750 and custom models with beefy coolers start at $800 (links in prices). So for $50-$100 extra you get a card that's a little faster BUT, most importantly, it does not choke in properly optimized DX12 titles and async compute, which has been a problem for Nvidia for a while now. In productivity benchmarks, both Turing cards destroy everything. If I were getting into high-end GPUs now, I would buy a 2080 over a 1080 Ti. If you're already spending $700+, what's another $50 for a card that won't get owned by $500 AMD cards anytime async is involved?


And you can't blame Nvidia alone for these prices, either. We, the consumers, are not obligated to buy anything. However, Nvidia's board partners still have millions of Pascal cards they have to move; these high RTX prices will allow them to do so. If the 2080 Ti were the same price as the 1080 Ti, a lot of the partners would take massive hits to their profits.
 
Thanks for spamming the same link; I didn't notice it the first time. Still quite irrelevant, but have it your way: the 980 Ti stomped the Fury X.

It's completely relevant because it shows that the 980 Ti beats the Fury X (sometimes significantly) in almost every performance category. The Fury X can't even run The Witcher 3 at max settings at 1080p at a stable 60 FPS, but the 980 Ti can.
 
HAHAHAHA oh wow. That is just pathetic.

To top that off, we have the clueless hordes here crying that a $1,200 card (2080 Ti) that's twice as fast as a $600 card (Vega 64) is overpriced. Who wants to bet half of them will own a Turing card in 6 months' time?


Man, the Red Guards are out in droves this time.

When RTG GPU delivers: OMFG world peace happily ever after

When RTG GPU sucks: They have smaller budget, underdog, FineWine(tm) etc etc

When NV GPU delivers: We demand 1000fps at 8k for $200, while still complaining about “big bad green”

When NV GPU sucks: Hell yeah death the devil green!

Meanwhile in reality when you check their system profiles, most of them rock a Nvidia “evil” GPU

All you haters should go buy a Vega64 to support RTG today.
 
Seems like your numbers are quite a ways off; for example, Hardware Unboxed, on the same games and resolutions, found the RTX 2080 much closer to the GTX 1080 Ti, and even losing in two games.

Again, maybe you should recheck your results, as they seem off compared to other, more reputable reviewers.

The cards themselves are impressive, no doubt about it, pure performance-wise. Value-wise they are really bad, though: the performance gains don't keep up with the price increases. Both the RTX 2080 and RTX 2080 Ti offer worse value than the GTX 1080 and GTX 1080 Ti, respectively.
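To make the value point concrete, here is a minimal perf-per-dollar sketch. The relative-performance and price figures are just the rough numbers thrown around in this thread (2080 Ti about 38% faster than a 1080 Ti; roughly $700 / $800 / $1,200 street prices), so treat them as assumptions for illustration rather than exact review data.

```python
# Quick sanity check of the "worse value" claim: performance per dollar.
# Figures below are rough numbers quoted in this thread, used as assumptions.

# (card, relative performance with GTX 1080 Ti = 100, street price in USD)
cards = [
    ("GTX 1080 Ti", 100, 700),
    ("RTX 2080",    102, 800),    # roughly trades blows with the 1080 Ti
    ("RTX 2080 Ti", 138, 1200),   # ~38% faster per this review
]

baseline_value = 100 / 700        # 1080 Ti performance per dollar

for name, perf, price in cards:
    value = perf / price          # performance per dollar
    delta = (value / baseline_value - 1) * 100
    print(f"{name}: {value:.3f} perf/$  ({delta:+.0f}% vs GTX 1080 Ti)")
```

With those inputs the 2080 lands around 11% worse performance per dollar than a 1080 Ti and the 2080 Ti around 20% worse, which is all "worse value" means here.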
 
Man, the Red Guards are out in droves this time.

When RTG GPU delivers: OMFG world peace happily ever after

When RTG GPU sucks: They have smaller budget, underdog, FineWine(tm) etc etc

When NV GPU delivers: We demand 1000fps at 8k for $200, while still complaining about “big bad green”

When NV GPU sucks: Hell yeah death the devil green!

Meanwhile in reality when you check their system profiles, most of them rock a Nvidia “evil” GPU

All you haters should go buy a Vega64 to support RTG today.

I wish I could like this 1000 times. I've owned more AMD hardware than any of the morons calling me an Nvidia shill. I put a Vega 64 into my VR rig over a 1080 not because it's faster (it is slower) but because I wanted to support the little guy. Meanwhile they're all salivating over used 1080 Ti mining cards on eBay.
 
Seems like your numbers are quite a ways off; for example, Hardware Unboxed, on the same games and resolutions, found the RTX 2080 much closer to the GTX 1080 Ti, and even losing in two games.

Again, maybe you should recheck your results, as they seem off compared to other, more reputable reviewers.

The cards themselves are impressive, no doubt about it, pure performance-wise. Value-wise they are really bad, though: the performance gains don't keep up with the price increases. Both the RTX 2080 and RTX 2080 Ti offer worse value than the GTX 1080 and GTX 1080 Ti, respectively.


You call a well-established reviewer and review site with 10+ years of history less reputable than some self-proclaimed "YouTuber"? WTF are you smoking? Seriously?
 