Friday, June 28th 2019

NVIDIA RTX SUPER Lineup Detailed, Pricing Outed

NVIDIA has officially confirmed pricing and SKU availability for its refreshed Turing lineup featuring the SUPER graphics cards we've been talking about for ages now. Primed as a way to steal AMD's Navi release thunder, the new SUPER lineup means previously released NVIDIA graphics cards will reach EOL status as soon as their souped-up SUPER versions become available, come July 2nd.

The RTX 2060 and RTX 2080 Ti will live on, for now, as the cheapest and the most powerful entries into the world of hardware-accelerated raytracing, respectively. The RTX 2070 and RTX 2080, however, will be superseded by the corresponding 2070 SUPER and 2080 SUPER offerings, with an additional RTX 2060 SUPER positioned to compete with AMD's RX 5700 ($399 for NVIDIA's new RTX 2060 SUPER vs. $379 for the AMD RX 5700, which is itself sandwiched at the low end by the RTX 2060 at $349).
The RTX 2070 SUPER will be positioned at a higher price point than AMD's upcoming RX 5700 XT ($499 vs. $449), which should put it slightly ahead in performance - just today we've seen benchmarks showing AMD's RX 5700 XT trading blows with the non-SUPER RTX 2070. The NVIDIA RTX 2080 SUPER will get improved performance as well as a price cut, down to $699 from the original's $799 (exorbitant compared to the GTX 1080's launch price of $549).
Source: Videocardz

152 Comments on NVIDIA RTX SUPER Lineup Detailed, Pricing Outed

#101
Pumper
FluffmeisterSo is the RX 5700 replacing the RX 570?
If the "leaked" trademarks are correct, then yes, the 5700 should be a 570 replacement, as AMD is working on RX 58xx and RX 59xx Navi cards.
Posted on Reply
#102
Fluffmeister
Okay, it's just that this is quite a jump in price too. I guess that's the market these days.
Posted on Reply
#103
Vayra86
Vya DomusAnd they don't plan to. Undercutting your competitor while trying to offer a better product has been a losing strategy for them. Whether or not they've done all that was possible with that doesn't matter; they've opted out of that battle, and the $450 5700 XT is a clear indication of that. We are looking at a complete reversal of mindset from AMD: they are letting Nvidia fight itself trying to sell more cards to the masses of people who already have them.



We both know stuff such as mesh shaders and RT mean jack shit if market share is your goal.
I'm not really seeing that, to be honest. All I'm seeing is AMD trying to catch up with minimal resources and failing time and time again. Nothing's changed, and it's not like AMD was the value option all the time in past years either. They weren't; despite endless back and forth about driver quality, promised features, and a whole lot of wishful thinking about MSRP versus real-world pricing with a constant supply problem - from Vega interposers to HBM to even something as stupidly simple as a Polaris card... You can blame mining for that, or you can blame Nvidia for that, but the fact remains the value proposition was never really there even on the most basic product. And if it ever was, you'd get a hot, noisy card in return, more often than not. Funny how that works, huh... while in the meantime Nvidia kept shelves stocked with x50s and x60s and even pushed optimized Pascal 9/11Gbps VRAM versions out for good measure (ring a bell? :D). Price? It was always much closer to MSRP because of good supply. The bottom line was that Nvidia was the value option in the end, in many regions and at many moments. And the result is indeed Nvidia fighting itself. Is that a reversal of mindset? I think it's just an inevitable conclusion to events that have occurred, and AMD/RTG focusing on CPU and console.

Sure, there are a few Nvidia halo cards that have pushed prices up, but below that, Nvidia and AMD have been toe to toe like they are now all the time. What is really happening here is that AMD is riding along on Nvidia's price hikes with much smaller GPUs, and even though that might help their bottom line a bit, it certainly does not help us - it's actually the polar opposite of what Nvidia does with the larger Turing dies. AMD right now does not innovate, does not bring absolute performance up to a new level, and does not have a value option except in its 2-3 year old leftovers - and probably has a better margin on Navi than Nvidia has on Turing from the 2060 and up.

It's easy to shit on Nvidia (not you per se) for pushing the envelope, but really? And you know that even I don't like the RT nonsense in GPUs...
AnymalSimilar situation as in 1999 with T&L and the GeForce 256. Maybe Nvidia is not that stupid. People hate what they don't understand. BTW, the 1660 for 220€ and the 1660 Ti in DE are great p/p against the 1060 3GB and 1060 6GB. Peace out!
Small difference: it's not 1999 anymore. We have 20 years of graphics development to work with and can get almost similar results with much less horsepower. In those 20 years we also saw production costs for games explode, and market demand did the same. With that demand, the current state of graphics is really good already. Any new technology is fighting an uphill battle, while back in 1999 even a blind man could see there was a lot to improve. And then there's that nasty little bugger called Moore's Law and the limited potential for shrinks.
Posted on Reply
#104
Vya Domus
AnymalSimilar situation as in 1999 with T&L and Geforce 256.
You mean hardware that was used to accelerate something which had already been implemented in software for years by that point in time? Yeah, that's definitely the same thing. Peace out!
Vayra86but really?
But really what? I am calling it for what it is: AMD has been doing the same thing over and over for the past 7-8 years while Nvidia pulled ahead in revenue and market share; clearly they can't and won't keep doing that forever. The technology they have at their disposal right now is fine; a large Navi chip clocked in its optimal power envelope would be plenty fast, there is no need to catch up anymore. While I don't think we'll see one, should something like that be released, its price tag is going to be eye-watering.
Posted on Reply
#105
Aquinus
Resident Wat-man
I'm pretty sure this makes me want to sell off my remaining stock in nVidia and put it somewhere else.
Posted on Reply
#106
Vayra86
Vya DomusYou mean hardware that was used to accelerate something which had already been implemented in software for years by that point in time? Yeah, that's definitely the same thing. Peace out!



But really what? I am calling it for what it is: AMD has been doing the same thing over and over for the past 7-8 years while Nvidia pulled ahead in revenue and market share; clearly they can't and won't keep doing that forever. The technology they have at their disposal right now is fine; a large Navi chip clocked in its optimal power envelope would be plenty fast, there is no need to catch up anymore. While I don't think we'll see one, should something like that be released, its price tag is going to be eye-watering.
While I get what you are saying about Navi, the key point is timing. Nvidia can still move to 7nm, and Navi cannot catch Nvidia's top end even today. Unless you believe Navi 20 is capable of topping the 2080 Ti...

Honestly, they can price that halo card up to the moon; it's still better than nothing. Even the 2080 Ti is helping the trickle-down of performance. But releasing 'plenty fast' sub-top-end cards does not, and we see proof of that right now.
Posted on Reply
#107
Vya Domus
Vayra86Unless you believe Navi 20 is capable of topping the 2080 Ti...
I don't know what Navi 20 is or will be; I look at die sizes, and 400mm^2 would easily boost Navi into 2080 Ti territory. Turing won't scale well size-wise on 7nm because it's already huge. You are forgetting AMD is now on an even playing field process-wise.
Posted on Reply
#108
efikkan
With Navi 10's 40 CUs reaching a TDP of 225W, Navi 2x could get really hot. Navi 2x might reach beyond the RTX 2080 in performance, but at what cost? And by the time it arrives, Nvidia's next gen will be right around the corner.
Posted on Reply
#109
medi01
efikkanWith Navi 10's 40 CUs reaching a TDP of 225W, Navi 2x could get really hot. Navi 2x might reach beyond RTX 2080 in performance, but at what cost?
Remind me, why do people who are after a $300-400-ish card need to wait for the release of a $700-800 card?

TDP of the 5700 is around 180W (a 250mm^2 chip).
The 2080 is what, 30%-ish faster than that at 545mm^2 (minus the node advantage)?
Hardly something unreachable, even ignoring high-yield rumors.
Vayra86Nvidia can still move to 7nm and Navi cannot catch Nvidia top end even today
Nvidia cannot move to 7nm overnight, for starters.
Elaborate why AMD "can't" catch Nvidia's "top end", please.
Posted on Reply
#110
efikkan
medi01Remind me, why do people who are after a $300-400-ish card need to wait for the release of a $700-800 card?
TDP of the 5700 is around 180W (a 250mm^2 chip).
The 2080 is what, 30%-ish faster than that at 545mm^2 (minus the node advantage)?
Hardly something unreachable, even ignoring high-yield rumors.
No, I was more thinking along the lines of what sacrifices they have to make to achieve it, like a >300W TDP, noise, etc., not monetary cost.
Right now Nvidia offers RTX 2080 at $700 and 215W, and RTX 2080 Ti costing >$1000 and 250W.
Competing with these will be hard enough, but remember that Navi 2x will primarily compete with the successor of Turing on 7nm, and I assume by that time Nvidia will push down those performance tiers and improve efficiency further.
Posted on Reply
#111
Vayra86
medi01Remind me, why do people who are after a $300-400-ish card need to wait for the release of a $700-800 card?

TDP of the 5700 is around 180W (a 250mm^2 chip).
The 2080 is what, 30%-ish faster than that at 545mm^2 (minus the node advantage)?
Hardly something unreachable, even ignoring high-yield rumors.


Nvidia cannot move to 7nm overnight, for starters.
Elaborate why AMD "can't" catch Nvidia's "top end", please.
The bigger you go, the lower the payoff from additional die size and shaders, because clocks tend to suffer. This may be less apparent on 7nm and depends on the architecture as well, but it won't ever be the reverse of that; even on Pascal and Turing you see the midrange SKUs clock somewhat higher than the top end.

But as @efikkan points out, the TDP budget is going to be a problem once again. 7nm doesn't change that all that much, and if you remove the node and just look at the architecture, AMD still has work to do. Perf/watt is still a thing, and again, we're only comparing all this to OLD Nvidia stuff - while Navi 20 is yet to release. Timing. Time to market. Relevance. Did you seriously think Nvidia is only now taking a look at what to do with 7nm? I surely hope not... If you were, be ready for another Kepler refresh >>> Maxwell curb stomp, because that is very likely the jump we will see there.

The reason people with a $300-400 card budget are looking up at the halo cards is that those indicate how worthwhile that $300-400 purchase really is. After all, if performance just about flatlines after, say, a 2060, why would you spend $700-800 on the 2080? At the same time, today's $700-800 card is tomorrow's $300-400 card (simply put).

Progress in the high end matters; it is essential to keep the market moving forward. What we have been seeing since Turing is not that, and the result is price and performance stagnation. Since Navi will be too late to even matter in that sense, even Navi 20 catching up to the 2080 Ti is unlikely to make a difference - unless, again, AMD is willing to play the value game they really could play with these GPUs due to their size.
Posted on Reply
#112
bug
Vayra86The bigger you go, the lower the payoff from additional die size and shaders, because clocks tend to suffer. This may be less apparent on 7nm and depends on the architecture as well, but it won't ever be the reverse of that; even on Pascal and Turing you see the midrange SKUs clock somewhat higher than the top end.

But as @efikkan points out, the TDP budget is going to be a problem once again. 7nm doesn't change that all that much, and if you remove the node and just look at the architecture, AMD still has work to do. Perf/watt is still a thing, and again, we're only comparing all this to OLD Nvidia stuff - while Navi 20 is yet to release. Timing. Time to market. Relevance. Did you seriously think Nvidia is only now taking a look at what to do with 7nm? I surely hope not... If you were, be ready for another Kepler refresh >>> Maxwell curb stomp, because that is very likely the jump we will see there.

The reason people with a $300-400 card budget are looking up at the halo cards is that those indicate how worthwhile that $300-400 purchase really is. After all, if performance just about flatlines after, say, a 2060, why would you spend $700-800 on the 2080? At the same time, today's $700-800 card is tomorrow's $300-400 card (simply put).

Progress in the high end matters; it is essential to keep the market moving forward. What we have been seeing since Turing is not that, and the result is price and performance stagnation. Since Navi will be too late to even matter in that sense, even Navi 20 catching up to the 2080 Ti is unlikely to make a difference - unless, again, AMD is willing to play the value game they really could play with these GPUs due to their size.
What he said. I always buy cards around the $250 mark, but at the same time I always read about the high end, just so I know what to expect in a generation or two. And yes, that's correct: I don't expect the high end to automagically transform into next year's mid-range, because experience has taught me that doesn't always happen (no matter how much I whine about it).
Posted on Reply
#113
Mescalamba
They might be worth it over the current cards, simply because of later batches. My very late revision of the 1080 core is quite impressive when it comes to OC, while early batches were much less so.
Posted on Reply
#114
erixx
I'd love to turbo-charge my build, as always, but the basic info contrasted with the GTX 1080 Ti does not look revolutionary.
Posted on Reply
#115
medi01
Vayra86The bigger you go, the lower the payoff from additional die size and shaders, because clocks tend to suffer. This may be less apparent on 7nm and depends on the architecture as well, but it won't ever be the reverse of that; even on Pascal and Turing you see the midrange SKUs clock somewhat higher than the top end.

But as @efikkan points out, the TDP budget is going to be a problem once again. 7nm doesn't change that all that much, and if you remove the node and just look at the architecture, AMD still has work to do. Perf/watt is still a thing, and again, we're only comparing all this to OLD Nvidia stuff - while Navi 20 is yet to release. Timing. Time to market. Relevance. Did you seriously think Nvidia is only now taking a look at what to do with 7nm? I surely hope not... If you were, be ready for another Kepler refresh >>> Maxwell curb stomp, because that is very likely the jump we will see there.
Sure, performance doesn't grow 1:1 with chip size, but we are talking about a 30%-ish performance gap for a 180W-ish, 250mm^2 card (I'm talking about the non-XT 5700!).
Going from 180W to 280-ish W and doubling the chip size should get one well past a 30%-ish performance bump.
efikkanNavi 2x will primarily compete with the successor of Turing on 7nm,
I think we'll see the 5800 and 5900 by the end of the year, while Turing's successor would come no earlier than Q2 next year, given Huang's comments.
Besides, even if that successor is good, the pricing on it hardly will be.
Posted on Reply
#116
Vayra86
medi01Sure, performance doesn't grow 1:1 with chip size, but we are talking about a 30%-ish performance gap for a 180W-ish, 250mm^2 card (I'm talking about the non-XT 5700!).
Going from 180W to 280-ish W and doubling the chip size should get one well past a 30%-ish performance bump.
Yes. But there are a few caveats:
- Memory bandwidth: AMD's delta compression is still behind the curve, and they will need a lot of bandwidth to work with 2080 Ti levels of data transfer - something that even the Radeon VII with 16GB of HBM hasn't had to do yet, even though it should be more than capable. Navi carries GDDR6, and we've seen that even HBM-equipped Vega benefits from memory tweaks... The best thing AMD could achieve on GDDR5 was GTX 1060 6GB performance, give or take. Not exactly a feat.
- We have yet to see a proper GPU Boost implementation, though I believe Navi does offer that, or at least improves on it. But as good as GPU Boost 3.0? Fingers crossed.
- If they go very big and lower clock rates as a result, that will rapidly destroy their die size advantage and therefore margins; ideally they'd go the other way around: higher clock rates while keeping die size under control. They've only just begun on the 7nm node. Exploding die size this early is a huge long-term problem if you intend to remain competitive. Turing's large dies are built on a 12nm node with no future; on 7nm, Nvidia will have a lot of breathing room even with dedicated RT hardware.
- Time to market. Nvidia is already releasing the SUPER cards now... and they still have 7nm to work with. So by then, AMD once again has a 280-300W (OC) card with probably a large die fighting Nvidia's sub-top end that will probably need about 180-210W. History repeats...

I'm finding it hard to be optimistic about this. The numbers don't lie, and unless AMD pulls an architectural rabbit out of the hat, they're always going to lag behind. And note: that is even while completely lacking RT hardware. If the shit really hits the fan, Nvidia could just shrink their die by 20% and nobody would ever notice :rolleyes:
Posted on Reply
#117
medi01
Vayra86AMD's delta compression is still behind the curve.
Possibly, but we haven't seen how well Navi handles it yet and we know Navi isn't Vega.
Vayra86The best thing AMD could achieve on GDDR5 was GTX 1060 6GB performance, give or take. Not exactly a feat.
Hold on, AMD simply didn't try to develop a bigger Polaris, instead focusing on Vega.
That doesn't at all mean AMD could not do it; besides, it could simply have used a wider memory bus.
Vayra86- if they go very big and lower clockrate as a result, that will rapidly destroy their die size advantage and therefore margins; and ideally they'd go the other way around: higher clockrates while keeping die size under control.
Keep in mind that a sizable part of the mentioned 180W is consumed by the memory and memory controller. So doubling the chip size should land at around 280-300W, I think (as it was with Vega 64 vs. Polaris; in fact, Vega 64 is more than twice the size of Polaris).
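That split can be put into a quick back-of-the-envelope model. Note the ~55W fixed share for memory, VRM, and fan is an illustrative assumption, not a measured figure:

```python
# Rough board-power model for a scaled-up Navi 10. Assumption: of the
# 5700's ~180 W board power, a fixed share (GDDR6, memory controller,
# VRM and fan losses) does not grow with the chip, while the rest
# scales linearly with CU count at constant clocks and voltage.

FIXED_W = 55                # assumed non-scaling share (illustrative)
CORE_W = 180 - FIXED_W      # assumed CU-dependent share of a 40 CU part

def board_power(cu_count, base_cus=40):
    """Estimate board power when only the CU array is scaled up."""
    return FIXED_W + CORE_W * cu_count / base_cus

print(board_power(80))  # a doubled chip lands around 305 W
```

Under these assumptions a doubled chip ends up near the 280-300W range argued above; a different fixed share shifts the result accordingly.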
Vayra86- Time to market. Nvidia is already releasing the SUPER cards now... and they still have 7nm to work with. So by then, AMD once again has a 280-300W (OC) card with probably a large die fighting Nvidia's sub-top end that will probably need about 180-210W. History repeats...
The 5700/5700 XT will be available starting July 7th, 2019; the bigger guys probably later on, but the most attractive thing about them will be the price.
Obscene margins on the 2080 and beyond mean AMD has lots of room to maneuver: downclocking, bigger size, dropping the price.

Heck, anything will be better than that 16GB HBM2 Radeon VII at $699.
Posted on Reply
#118
bug
medi01Hold on, AMD simply didn't try to develop a bigger Polaris, instead focusing on Vega.
That doesn't at all mean AMD could not do it; besides, it could simply have used a wider memory bus.
Yes, with the RX 590 drawing almost as much power as a 2080, there was definitely room for more powerful Polaris chips :wtf:
Posted on Reply
#119
Vayra86
medi01Hold on, AMD simply didn't try to develop a bigger Polaris, instead focusing on Vega.
That doesn't at all mean AMD could not do it; besides, it could simply have used a wider memory bus.

Keep in mind that a sizable part of the mentioned 180W is consumed by the memory and memory controller. So doubling the chip size should land at around 280-300W, I think (as it was with Vega 64 vs. Polaris).
We'll have to see, but can you see the problem in this combination of statements? I can... Pre-Polaris, we had a Fury X that used HBM1 to reach GTX 1070 (980 Ti) performance levels. They could have used a wider bus for Polaris... but then what do you really have? Hawaii (XT) with a new name and a problem with power and perf/watt. Not something you can scale further - not viable for iterative improvement. If AMD could really have made a viable GPU with a wider GDDR5 bus, they would have, but we have absolutely not a shred of evidence they were ever capable of doing so - the performance simply wouldn't be there.

So yes, I would agree that Navi's (5700/XT) selling point will be price. And that is another case of history repeating, unfortunately.
Posted on Reply
#120
medi01
Vayra86Not something you can scale further - not viable for iterative improvement. If AMD could really have made a viable GPU with a wider GDDR5 bus, they would have, but we have absolutely not a shred of evidence they were ever capable of doing so - the performance simply wouldn't be there.
I don't see how any magic is needed to do a bigger Polaris.
It's just a matter of allocating money to the project.

Back to the bigger chip discussion: the 5700/XT are 40 CU.
A 60 CU chip would have a size of about 350mm^2 (with a couple of CUs disabled).
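A rough way to sanity-check that die-size figure: assume some fraction of Navi 10's 251mm^2 is the CU array and the rest (memory controllers, display, I/O) stays fixed when CUs are added. The 60% split below is an illustrative guess, not a published figure:

```python
# Die-size extrapolation for a hypothetical 60 CU Navi part, under the
# assumption that ~60% of Navi 10's 251 mm^2 is the CU array (scales
# with CU count) and the remainder is fixed-function/uncore area.

NAVI10_MM2 = 251
CU_FRACTION = 0.6            # assumed share of die spent on the 40 CUs

def die_size(cu_count, base_cus=40):
    """Extrapolate die size when only the CU array grows."""
    fixed = NAVI10_MM2 * (1 - CU_FRACTION)
    per_cu = NAVI10_MM2 * CU_FRACTION / base_cus
    return fixed + per_cu * cu_count

print(round(die_size(60)))  # ~326 mm^2, in the ballpark of ~350
```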

40 CU @1700 MHz = 8.7TF
60 CU @1700 MHz = 13TF (+50% vs 5700 XT) - at around 250W perhaps?
60 CU @1600 MHz = 12.2TF (+40% vs 5700 XT)
60 CU @1500 MHz = 11.5TF (+32% vs 5700 XT)

Ain't outlook quite rosy in team red?
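Those TF figures follow the standard GCN/RDNA arithmetic - CUs times 64 stream processors times 2 FLOPs per clock (FMA) times clock speed; a quick sketch to check them:

```python
# FP32 throughput: each CU has 64 stream processors, each doing one
# fused multiply-add (2 FLOPs) per clock.

def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(round(tflops(40, 1.7), 1))  # 8.7  - 40 CU @ 1700 MHz
print(round(tflops(60, 1.7), 1))  # 13.1 - 60 CU @ 1700 MHz
print(round(tflops(60, 1.5), 1))  # 11.5 - 60 CU @ 1500 MHz
```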
Posted on Reply
#121
bug
medi01I don't see how any magic is needed to do a bigger Polaris.
It's just a matter of allocating money to the project.

Back to the bigger chip discussion: the 5700/XT are 40 CU.
A 60 CU chip would have a size of about 350mm^2 (with a couple of CUs disabled).

40 CU @1700 MHz = 8.7TF
60 CU @1700 MHz = 13TF (+50% vs 5700 XT) - at around 250W perhaps?
60 CU @1600 MHz = 12.2TF (+40% vs 5700 XT)
60 CU @1500 MHz = 11.5TF (+32% vs 5700 XT)

Ain't outlook quite rosy in team red?
It's so rosy you'll have to remind me how much a 50% bigger chip will cost.
Posted on Reply
#122
Vayra86
medi01I don't see how any magic is needed to do a bigger Polaris.
It's just a matter of allocating money to the project.

Back to the bigger chip discussion: the 5700/XT are 40 CU.
A 60 CU chip would have a size of about 350mm^2 (with a couple of CUs disabled).

40 CU @1700 MHz = 8.7TF
60 CU @1700 MHz = 13TF (+50% vs 5700 XT) - at around 250W perhaps?
60 CU @1600 MHz = 12.2TF (+40% vs 5700 XT)
60 CU @1500 MHz = 11.5TF (+32% vs 5700 XT)

Ain't outlook quite rosy in team red?
It would look rosy if there were no competitor, yes. What we're missing is the time to market and the aforementioned bandwidth constraints. I used the GDDR5 Polaris example because it shines a light on the GDDR6 Navi situation - a repeat of that is on the horizon, and Navi is on a fast track to become the next Hawaii XT: a card that has the performance but falls short in everything else (noise, heat, die size). And note: that is not a huge problem when a release comes at the END of a node (like 28nm Hawaii), but when you're right at the start...
medi01I simply don't see it. There is transistor-on-transistor parity with Nvidia, to begin with.
How come a 350mm^2 chip taking on the 2080 with roughly the same power consumption will "fall short"?
It can well fall short at sales, because the clueless buy green.

It's more a matter of perception than anything else; just look at how many refer to Fury as a "power hog" when, in fact, it was on par with the 980 Ti.

More to it, both Sony and Microsoft have promised that upcoming consoles will support RT.
They are very likely to go with 7nm EUV, which further lowers power consumption.
People perceived the Fury quite rightly, though - maybe not so much a power hog, but the 980 Ti was all-round a better card, one that is leagues more relevant today than a Fury X thanks to its 6GB, and it even surpasses the Fury at 4K now. The Fury X aged horribly: it didn't perform as well at 1080p and lost its high-resolution performance advantage over time. And yes, it did also use more power for it.

A difference in perception is fine. Time will tell... But I will say my crystal ball has a pretty decent hit rate.
Posted on Reply
#123
medi01
Vayra86A card that has the performance, but falls short in everything else (noise, heat, die size).
I simply don't see it. There is transistor-on-transistor parity with Nvidia, to begin with.
How come a 350mm^2 chip taking on the 2080 with roughly the same power consumption will "fall short"?
It can well fall short at sales, because the clueless buy green.

It's more a matter of perception than anything else; just look at how many refer to Fury as a "power hog" when, in fact, it was on par with the 980 Ti.

More to it, both Sony and Microsoft have promised that upcoming consoles will support RT.
They are very likely to go with 7nm EUV, which further lowers power consumption.
Posted on Reply
#124
efikkan
medi0140 CU @1700 MHz = 8.7TF
60 CU @1700 MHz = 13TF (+50% vs 5700 XT) - at around 250W perhaps?
60 CU @1600 MHz = 12.2TF (+40% vs 5700 XT)
60 CU @1500 MHz = 11.5TF (+32% vs 5700 XT)

Ain't outlook quite rosy in team red?
I'm just curious: how do you figure that a 50% larger chip at similar clocks would only consume 11% more power?
Posted on Reply
#125
bug
efikkanI'm just curious, how do you figure that a 50% larger chip at similar clocks would only consume 11% more power?
He doesn't figure anything, but since he set out to paint a rosy picture of Navi, there simply wasn't room for a bigger number there.
Posted on Reply