Tuesday, May 7th 2019

AMD Radeon RX 3080 XT "Navi" to Challenge RTX 2070 at $330

Rumors of AMD's next-generation performance-segment graphics card are gaining traction following a leak of what is possibly its PCB. Tweaktown put out a boatload of information about the so-called Radeon RX 3080 XT graphics card, bound for a 2019 E3 launch shortly after a Computex unveiling. Based on the 7 nm "Navi 10" GPU, the RX 3080 XT will feature 56 compute units based on the faster "Navi" architecture (3,584 stream processors), and 8 GB of GDDR6 memory across a 256-bit wide memory bus.

The source puts out two very sensational claims: one, that the RX 3080 XT performs competitively with NVIDIA's $499 GeForce RTX 2070; and two, that AMD could start a price war against NVIDIA by aggressively pricing the card around the $330 mark, or about two-thirds the price of the RTX 2070. If either claim, let alone both, holds true, AMD will fire up the performance segment once again, forcing NVIDIA to revisit the RTX 2070 and RTX 2060.
Source: Tweaktown

208 Comments on AMD Radeon RX 3080 XT "Navi" to Challenge RTX 2070 at $330

#126
Markosz
I still heavily doubt this will be the series name...
NVIDIA's next gen would be named the same in that case, plus the 'XT' is weird.
Posted on Reply
#127
Auer
Am I the only one who thinks that AMD producing an RTX 2070 competitor for $330, a year after the RTX 2070 launched, is really not all that remarkable?
And without RTX and DLSS as well. I know some people don't care about that, but there it is all the same.
Posted on Reply
#128
notb
Markosz said:
I still heavily doubt this will be the series name...
If you read forum/reddit discussions about Ryzen chipset names, you'd notice this... well... impresses the hardcore followers.
Doubt not. AMD is capable of doing this. It's sad, but that's how they do business.
NVIDIA's next gen would be named the same in that case, plus the 'XT' is weird.
The "XT" suffix was used by ATI and AMD as well. The latest "XT" card was the China-only RX 560 XT.
Casecutter said:
How's that helping the "de-contented" GTX 1660 Ti? Even with a low TDP many run hot and aren't better on dBA; they just cheapen up what they give as a cooler to the point it looks like something you used to expect from $120 budget construction. Oh, but "gussy it up" with a plastic backing plate.
https://www.techpowerup.com/reviews/MSI/GeForce_GTX_1660_Ti_Ventus_XS/32.html
Temperature has little to do with heat emission, which is still very low. But this is a small, cheap-ish cooler. It's expected to provide less cooling than "flagships", which it does.

Also, the load temperature is still perfectly fine. Many expensive cards reach over 70°C (still within the GPU's comfort zone).
In other words: there is some potential to limit Ventus' RPM, leading to lower noise.
And there was hardly any sacrifice performance-wise. It's almost an MSI 1660Ti Gaming (within measurement error margin).
moving past the pruning dead wood from the Raja Koduri era.
I remember perfectly well that Koduri was an AMD-fanboy hero not so long ago. Funny how quickly things change.
Lisa Su will quit at some point - likely for an AMD competitor. I wonder what will happen to all those signed CPUs then...

Auer said:
Am I the only one who thinks that AMD producing an RTX 2070 competitor for $330, a year after the RTX 2070 launched, is really not all that remarkable?
As far as selling goes, they can ask $10. Making a profit is another story.
And without RTX and DLSS as well. I know some people don't care about that, but there it is all the same.
I think everyone cares now. It's just that AMD fans are still reluctant to admit it (they mocked RTX just a few months ago).
It only takes Lisa Su announcing an RT acceleration chip and they'll all praise the idea. :-)
Posted on Reply
#129
Nkd
spnidel said:
isn't the rtx 2070 pretty much a 1080 performance-wise? if so, then god damn, AMD... node shrink, and you still can't beat a 1080 ti with something that isn't HBM memory with power consumption that isn't trash. sad.
It's the same damn number of CUs or fewer, what else do you expect? Miracles? If the price is wrong, then complain.
Posted on Reply
#130
Casecutter
EarthDog said:
Well, heat and temperature are different things, mind you.
Correct, in that two things matter: what heat ends up inside your case (or keeps you warm), and perhaps noise, if you don't play with a headset or someone else in the room is getting annoyed. (But what does any of that matter... "I'm gaming here!")

Auer said:
Am I the only one who thinks that AMD producing an RTX 2070 competitor for $330, a year after the RTX 2070 launched, is really not all that remarkable?
Sure, not that remarkable, although when you consider AMD/RTG hasn't spent anywhere near the engineering R&D on graphics, and has still made it to TSMC and their 7 nm process, they're scrappy and showing they may still have competitive ability.

notb said:
Koduri was an AMD-fanboy hero
To me he was never more than a "show-boat" in all things, master of none... a self-made celebrity.

Nkd said:
If the price is wrong then complain
Complain to whom? Those floating the rumor... this isn't AMD/RTG spreading any of it...
Posted on Reply
#131
Manoa
I read all this :) but I still don't know: is this GCN or not?
Posted on Reply
#132
GreiverBlade
interested ... at $330


well ... if the RTX 2070 weren't $550+ for me ...

oh well, wait and see then
Posted on Reply
#133
notb
Manoa said:
I read all this :) but I still don't know: is this GCN or not?
GCN.
Posted on Reply
#134
Rexolaboy
The title of the article suggests that AMD or a third party has tested the RX 3080 XT and it benchmarked similarly to an RTX 2070. So far AMD hasn't said anything like this, and there are no numbers from the leak to suggest it. I think stuff like this is a little childish, or even dangerous for the product's release, creating a bigger letdown than needed. People should understand that what to expect is still a GCN GPU, and that should speak for itself.
Posted on Reply
#135
Totally
THANATOS said:
Did I mention in that quotation that the 480/580 was pitted against the 1050? I don't think so.
Is it really such an obvious example?
I didn't say you said that either. I stated a fact and implied you were wrong, because you are.

THANATOS said:

The 470/570 was (is) against the 1060 3 GB and not the 1050 Ti!
I can't say you suck at basic math, but you must have serious difficulty with word problems; the answer is right, but that is not the solution. To paraphrase, the statement is: similar performance at 2/3rds of the cost*. What's 2/3rds of $299? So (2/3) × $NVIDIA = $AMD → (2/3) × $299 = $199.

Let's see: 1060 6 GB, $299; RX 480 4 GB, $199; at 1080p they perform more or less the same, and they're GPUs, not CPUs.

*hint, keywords there: "of," not "more/less."
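As a quick sketch of the arithmetic being argued here (the two-thirds rule applied to the prices quoted in this thread; MSRPs as stated, not independently verified):

```python
# "Similar performance at 2/3 of the cost": AMD's price is claimed to be
# two-thirds of NVIDIA's MSRP. Prices are the ones quoted in this thread.
nvidia_msrp = {
    "GTX 1060 6GB": 299,  # vs. the RX 480 at $199
    "RTX 2070": 499,      # vs. the rumored RX 3080 XT at ~$330
}

for card, price in nvidia_msrp.items():
    amd_price = round(2 / 3 * price)
    print(f"2/3 of {card}'s ${price} is about ${amd_price}")
```

This lands on $199 for the 1060 comparison and about $333 for the RTX 2070, which lines up with the $330 figure in the rumor.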
Posted on Reply
#136
Casecutter
Manoa said:
I read all this :) but I still don't know: is this GCN or not?
Short answer: yes, it still is... That said, back in 2016 some of the last work on GCN was to unburden it from a lot of the compute and professional requirements that had remained all this time, while the move to GDDR6 will help, GCN being somewhat needy for bandwidth. Sure, don't expect a lot or OMG... more stripping out unused bits games didn't use, a memory bump while lowering power, and 7 nm as a mix of increased clocks balanced against power improvements.

All that said, a 56 CU part (aka Vega 56) needs to add 30% in performance to be nipping at the 2070. If none of that is aided by GDDR6 (Vega had that assistance covered with HBM), they'll need 15% from chip tweaks and 15% from 7 nm, about what pushed the Radeon VII above the existing Vega 64 with all its compute still there. I'm seriously not seeing this Navi besting an RTX 2070 at 1440p, but if it's close, at say $350, it will be competitive and can't come soon enough.
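The scaling arithmetic above can be sketched quickly (the 15%/15% split is Casecutter's estimate, not a measured figure, and the two uplifts are assumed to compound):

```python
# Back-of-envelope: does ~15% from chip tweaks compounded with ~15% from
# the 7 nm node reach the ~30% uplift a Vega 56-class part would need?
chip_tweaks = 1.15   # assumed architectural uplift
node_shrink = 1.15   # assumed process uplift
target = 1.30        # uplift needed to "nip at" the RTX 2070

combined = chip_tweaks * node_shrink
print(f"combined uplift: {combined:.2f}x")  # 1.32x, just past the 1.30x target
```

So the estimate only just clears the target, which is consistent with the "close but not besting" conclusion.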
Posted on Reply
#137
Manoa
this sounds like the Bulldozer improvements to Excavator xD
I think it's not worth it guys, it will give very little. Yeah, there may be some price change from NVIDIA, and maybe a low-cost version, but like you say, it doesn't give OMG.
Maybe for people with old cards who want higher resolution, or more speed at the same lower resolution, but it's not going to be for 3840x2160 :x
It's best to wait for something much faster, something that can give a good 60+ at 3840x2160.
I have a 780 Ti and I run anything, even new games at 1920x1080, and I don't have any performance problems :)
If you want to upgrade, I think it's best to do it for something significant and worthwhile :)
Or wait... unless it will be faster than the Radeon VII? Like 30% or 50%?

Casecutter said:
I'm seriously not seeing this Navi besting an RTX 2070 at 1440p
Wait, I don't understand. The Radeon VII is also a 2070, no? So why make a new card with the same or less performance than the old card?! What's the point?
Posted on Reply
#138
bug
Rexolaboy said:
The title of the article suggests that AMD or a third party has tested the RX 3080 XT and it benchmarked similarly to an RTX 2070. So far AMD hasn't said anything like this, and there are no numbers from the leak to suggest it. I think stuff like this is a little childish, or even dangerous for the product's release, creating a bigger letdown than needed. People should understand that what to expect is still a GCN GPU, and that should speak for itself.
You are right, but for the past few generations, raising false expectations through "unsanctioned" leaks is all that AMD could do in the GPU space. To the point that some people now pay over a grand for a video card :(
Posted on Reply
#139
Casecutter
Manoa said:
but not going to be for 3840x2160
Sure, no one should consider these for "stellar top-shelf 4K" at some supposed $350-ish price, but it's better than having to pay a 45% higher price to see basically similar immersive play.

bug said:
through "unsanctioned" leaks is all that AMD could do in the GPU space
Blame the victim much? I never saw this as AMD tamping down or igniting expectations, just folks soliciting "click bait" from someone willing to start some speculation.
Posted on Reply
#140
efikkan
Casecutter said:
Short answer: yes, it still is... That said, back in 2016 some of the last work on GCN was to unburden it from a lot of the compute and professional requirements that had remained all this time, while the move to GDDR6 will help, GCN being somewhat needy for bandwidth.
Neither bandwidth nor computational power has been the problem for GCN.
The RX 580 has 256 GB/s of memory bandwidth compared to the similarly performing GTX 1060 at 192 GB/s,
or
the Radeon VII at a massive 1 TB/s vs. the RTX 2080's 448 GB/s (which is still overkill).
The problem for GCN has always been resource utilization, and improvements to resource management will be the deciding factor for Navi, if there is anything substantial at all.
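The bandwidth comparison above can be tabulated in a few lines (figures as quoted in the post; 1 TB/s is taken as 1024 GB/s):

```python
# Similar-performing card pairs with very different memory bandwidth (GB/s),
# illustrating the claim that bandwidth is not GCN's bottleneck.
pairs = {
    "RX 580 vs GTX 1060": (256, 192),
    "Radeon VII vs RTX 2080": (1024, 448),
}

for matchup, (amd_bw, nv_bw) in pairs.items():
    ratio = amd_bw / nv_bw
    print(f"{matchup}: {ratio:.2f}x the bandwidth for similar performance")
```

Roughly 1.33x and 2.29x the bandwidth for comparable results, which is the utilization argument in numbers.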
Posted on Reply
#141
Manoa
efikkan said:
The problem for GCN has always been resource utilization, and improvements to resource management will be the deciding factor for Navi, if there is anything substantial at all.
Yeah, that's why it's the "async monster" xD
it's better than having to pay a 45% higher price to see basically similar immersive play.
+1, but it's still a waste of a card, making a card that is the same as or worse than your own older card, the Radeon VII. I really don't understand the point of it, maybe better not to make the card at all lel
Posted on Reply
#143
Caring1
eidairaman1 said:
Just wait till it's out.
But then we miss out on fun threads like this ;)
Posted on Reply
#144
Camm
notb said:
I think everyone cares now. It's just that AMD fans are still reluctant to admit it (they mocked RTX just few months ago).
I'll still mock the shit out of it, and I own a 2080 Ti. No other way to cut it, image fidelity tanks with them enabled, with the only game I even consider turning on these technologies for being Metro Exodus, with every other implementation not worth the hit to FPS and fidelity.

There needs to be a more efficient way of doing RT without chunking out a huge part of the die to do so. As for DLSS, most testing with it shows that you get better fidelity just lowering the res for the same FPS. Could it be better someday? Maybe. But not this gen.
Posted on Reply
#145
cucker tarlson
Camm said:
I'll still mock the shit out of it, and I own a 2080 Ti. No other way to cut it, image fidelity tanks with them enabled, with the only game I even consider turning on these technologies for being Metro Exodus, with every other implementation not worth the hit to FPS and fidelity.

There needs to be a more efficient way of doing RT without chunking out a huge part of the die to do so. As for DLSS, most testing with it shows that you get better fidelity just lowering the res for the same FPS. Could it be better someday? Maybe. But not this gen.
Yup, except no one reasonable could even imagine just having RT like that, going from rasterization to RT in just one generation.
You took the first step, a $1200 one at that.
Posted on Reply
#146
steen
efikkan said:
Neither bandwidth nor computational power has been the problem for GCN.
The RX 580 has 256 GB/s of memory bandwidth compared to the similarly performing GTX 1060 at 192 GB/s,
or
the Radeon VII at a massive 1 TB/s vs. the RTX 2080's 448 GB/s (which is still overkill).

The problem for GCN has always been resource utilization, and improvements to resource management will be the deciding factor for Navi, if there is anything substantial at all.
No, continual incremental updates like Fermi->Turing will do that for you. It must be acknowledged that NV has executed, especially given the relative competition vacuum across the entire product stack. Not to say GCN8/9 made no improvements over earlier GCN, but without new RTL the key shortcoming of GCN is tough to work around. IMO the front end & the entire register/cache/pipeline are 3 gens behind NV. Performance of Vega is reasonable in that context. Until AMD sorts out their front end's 4 tris/clk vs. 6 tris/clk deficit, they will likely have to keep pushing their silicon harder & lose out in perf/watt. DSBR/primitive shaders didn't pan out with Vega, & TU now has more flexible mesh shading & VRS. We'll see what Raja Koduri's fixes for GCN amount to. Other than leveraging 7 nm & supporting VRS, I am a bit sceptical.

I disagree about Vega 20/TU104 bandwidth being overkill in the context of compute use. It's the MI50. In the case of ML, there are many cases where a NN is bandwidth limited, not compute limited. More bandwidth for TU102/4 would yield closer to its theoretical max. Frame buffer size is also an issue, where >6 GB NN models make TU104/6 marginal. You can never have enough frame buffer or bandwidth.

Camm said:
There needs to be a more efficient way of doing RT without chunking out a huge part of the die to do so.
The whole point is that the RTX/tensor silicon is only ~10% of the die space. It's the redesigned TU uarch that was beefed up to support the register/cache/pipeline demands of RTX. That's why TU is 10-20% faster than GP at the same clock, but die area has blown out.
As for DLSS, most testing with it shows that you get better fidelity just lowering the res for the same FPS. Could it be better someday? Maybe. But not this gen.
I keep asking the Q, what do people think DLSS is? My not so humble view is that it's a misnomer on NV's part. They should have left it at MLAA.
Posted on Reply
#147
sutyi
Gasaraki said:
Radeon VII is 7nm... still hot as balls, just saying.
It's still mostly unchanged Vega, which was not really efficient in the first place, especially for gaming-type workloads. They used 7 nm to clock it sky high, so it's running out of the clock/power sweet spot yet again, and it can serve as a bridging product on the desktop roadmap till Navi takes its place. I'm hoping that Navi improves perf/watt somewhat, but it won't be anything revolutionary in the power department as it's still based on GCN, and that has/had some pretty hefty shortcomings, especially in the geometry/shader engines. I just hope it will be competitive price/performance-wise with a tad lower power; that would be well enough till Arcturus, or whatever the new Super-SIMD GPU design will be called.

New tech products excite me enough to stay interested and informed about them, but I've never really ridden the hype train over the years, and I would urge others to stay off it too.
Posted on Reply
#148
Camm
steen said:
The whole point is that the RTX/tensor silicon is only ~10% of the die space. It's the redesigned TU uarch that was beefed up to support the register/cache/pipeline demands of RTX. That's why TU is 10-20% faster than GP at the same clock, but die area has blown out.

I keep asking the Q, what do people think DLSS is? My not so humble view is that it's a misnomer on NV's part. They should have left it at MLAA.
What's this 10% of die space then? Just basic bucket math: you can see that tensor/RT takes up about half of an SM, and with SMs taking up about half of the die, you have at least 20% of the die dedicated to making RT work (on the basis that RT is currently too slow to run without supersampling).

As for DLSS, I don't really care on the how, more that it should be providing better fidelity at the same FPS than just using a lower resolution in the first place. Which bluntly, it doesn't.
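The "bucket math" above is just two fractions multiplied; both halves are eyeball estimates from the post, not die-shot measurements:

```python
# Rough die-area share of RT/tensor hardware under the assumptions above.
sm_share_of_die = 0.5   # assumed: SMs occupy about half the die
rt_share_of_sm = 0.5    # assumed: tensor/RT units are about half of each SM

rt_share_of_die = sm_share_of_die * rt_share_of_sm
print(f"RT/tensor share of die: {rt_share_of_die:.0%}")  # 25%
```

That gives 25%, hence the "at least 20%" figure, versus the ~10% claim being disputed.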
Posted on Reply
#149
medi01
Super XP said:
He will do the same for Intel, then move onto other ventures or simply retire.
Jim Keller is at Tesla now.

cucker tarlson said:
what's all the fuss
cucker tarlson said:
before we have any tangible indication of performance and price.
Let's not pretend that you are an impartial bystander, shall we?

It's about the greenboi inability to accept a "water is wet" kind of fact.
"AMD never undercut a competitor like that" is apparent bullshit; the cheapest 8-core by Intel was $1089 when AMD's 1800X hit at $499. End of story.
The "let's spin it" effort has come up, so far, with:

1) "But $1089 is a HEDT chip" (yay)
2) "But he meant GPU" (actually the 290(X) vs. 780 wasn't that far off, at $549 vs. a slower chip at $650)
3) "But we are not talking about GPUs" (coming from the "cartels are fine" guy)

Yay. Pathetic.

cucker tarlson said:
do you really think rtg is in the position to undercut nvidia that much?
I believe AdoredTV does have an actual insider connection, and as I see it, AdoredTV just dropped BAD NEWS, NOT GOOD NEWS. Namely:
1) Not meeting target clocks
2) Power hungry <= the worst part

3) Losing to VII CU for CU

As for whether a 7 nm chip with GDDR memory and 2070-ish performance is possible at $330: uh, is that even a question?

cucker tarlson said:
nvidia themselves have a $350 competitor
Could you guys at least hide your BH? I mean, what the fuck does "oh, but my great company has an answer to this, I don't need to hide and cry" have to do with it? Jesus.


dicktracy said:
Adored already did a 180 to this bogus rumor.
It's the 180 we are discussing here, the "I don't even read the first page"/"I only read reddit titles" kid.
The video is linked on the very first page, with the most relevant parts of it as screenshots.


Manoa said:
is this GCN or not?
Do you even understand what "is GCN" means?

Camm said:
There needs to be a more efficient way of doing RT without chunking out a huge part of the die to do so.
I recall someone estimated that 22% of the die is dedicated to it, not that much.
Posted on Reply
#150
Caring1
medi01 said:

Do you even understand what "is GCN" means?
Great Card Now?
Posted on Reply