Tuesday, May 7th 2019

AMD Radeon RX 3080 XT "Navi" to Challenge RTX 2070 at $330

Rumors of AMD's next-generation performance-segment graphics card are gaining traction following a leak of what is possibly its PCB. Tweaktown put out a boatload of information on the so-called Radeon RX 3080 XT graphics card, bound for a 2019 E3 launch shortly after a Computex unveiling. Based on the 7 nm "Navi 10" GPU, the RX 3080 XT will feature 56 compute units based on the faster "Navi" architecture (3,584 stream processors), and 8 GB of GDDR6 memory across a 256-bit wide memory bus.

The source makes two very sensational claims: one, that the RX 3080 XT performs competitively with NVIDIA's $499 GeForce RTX 2070; and two, that AMD could start a price war against NVIDIA by aggressively pricing the card around the $330 mark, or about two-thirds the price of the RTX 2070. If either, let alone both, of these holds true, AMD will fire up the performance segment once again, forcing NVIDIA to revisit the RTX 2070 and RTX 2060.
Source: Tweaktown
Add your own comment

213 Comments on AMD Radeon RX 3080 XT "Navi" to Challenge RTX 2070 at $330

#176
Midland Dog
new game: guess the TDP
1: Pleasant surprise, 150 W
2: Disappointing for 7 nm, 225 W
3: Good Ole GCN, 300 W
4: Meme status, 400 W+

Manoa, post: 4044484, member: 174695"
funny you would say that, implying that you do. hey fellas, check this out: [USER=158537]medi01[/USER] is an AMD engineer, how cool is that?
Graphics Core Next, and as far as I can tell the CU layout hasn't changed since the HD 7000 series, inclusive of Vega's "NCU"
Posted on Reply
#177
jabbadap
Midland Dog, post: 4044646, member: 168254"
new game: guess the TDP
1: Pleasant surprise, 150 W
2: Disappointing for 7 nm, 225 W
3: Good Ole GCN, 300 W
4: Meme status, 400 W+
Well, if it has a VirtualLink connector, the card's TDP might be lower than the PCIe power connectors on the card suggest. I would say one thing though: if it has a 300 W TDP, it will beat the Radeon VII.
Posted on Reply
#178
Auer
Midland Dog, post: 4044646, member: 168254"
new game: guess the TDP
1: Pleasant surprise, 150 W
2: Disappointing for 7 nm, 225 W
3: Good Ole GCN, 300 W
4: Meme status, 400 W+


Graphics Core Next, and as far as I can tell the CU layout hasn't changed since the HD 7000 series, inclusive of Vega's "NCU"
Hoping for #1

Would be nice for AMD to finally shake the "Hot, Loud and Slow" image. True or false, public perceptions matter.
Posted on Reply
#179
rvalencia
efikkan, post: 4044092, member: 150226"
The problem is not ROP performance, it's management of resources.
GCN has changed very little over the years, while Kepler -> Maxwell -> Pascal -> Turing have continued to advance and achieve more performance per core and GFlop, to the point where they have about twice the performance per watt and 30-50% more performance per GFlop.


More is usually better, except when it comes at a great cost.
16 GB of 1 TB/s HBM2 is just pointless for gaming purposes. AMD could have used 8 or even 12 GB, and priced it lower.


AMD does certainly leak information when they see a reason to. But no, these specs are not leaked by AMD, as they don't leak them until they are finalized. This thread is just another "victim" of someone's speculation on a YouTube channel…
The GeForce RTX 2080 Ti has 88 ROPs with 6 GPCs, each of which contains a geometry-raster engine.

Vega II has 64 ROPs with 4 Shader Engines, each of which contains a geometry-raster engine.

In terms of GPU basics, NVIDIA has geometry and raster superiority.

NVIDIA has memory compression superiority over AMD.

For the GPU role, TFLOPS is nothing without geometry-raster engines and ROP read/write units. Reminder for AMD: GPUs are not DSPs.
Posted on Reply
#180
BorgOvermind
VulkanBros, post: 4043078, member: 6693"
Reminds me....

And it is still running - just changed the thermal paste and the thermal pads
I still have one too.
It was the longest-lasting card from a performance perspective, remaining among the top cards for many years.
Posted on Reply
#181
rvalencia
Super XP, post: 4044222, member: 8670"
If Navi (which is based on the old 2011 GCN design) ends up anywhere near the RTX 2070 at a $300 to $350 cost, it would be an embarrassment for NVIDIA.

Can AMD give us one more crack at GCN before trashing it? We'll soon find out.

Vega 56 at 1710 MHz rivals or beats the RTX 2070 and the Vega 64 at 1590 MHz
Posted on Reply
#182
medi01
jabbadap, post: 4044056, member: 148195"
Jim Keller moved to Intel from Tesla.
Oh, wow.

Midland Dog, post: 4044646, member: 168254"
1: Pleasant surprise, 150 W
2: Disappointing for 7 nm, 225 W
#2 is most likely, given how AMD easily pushes power consumption up by a third for single-digit gains.


Midland Dog, post: 4044646, member: 168254"
Good Ole GCN
It's both an instruction set that is 7 years old (CUDA is 11 years old, x86 is 40 years old) and a microarchitecture.
The former evolves; the latter can be completely different or mostly the same. Only AMD knows.

cucker tarlson, post: 4044034, member: 173472"
Let's not pretend
Projections like that from "cartels are fine" camp are particularly appalling.
Posted on Reply
#183
Assimilator
rvalencia, post: 4044701, member: 99935"

Vega 56 at 1710 MHz rivals or beats the RTX 2070 and the Vega 64 at 1590 MHz
In some games.
... compared to stock 2070
... consuming DOUBLE the power of overclocked RTX 2070
... using a procedure that is, to quote the guy in the video, "not recommended" because he has no idea what exposing the card to a couple hundred extra watts will do to it over time.

*slow clap*

I really wonder why RTG isn't hiring you guys for its engineering department.
Posted on Reply
#184
medi01
rvalencia, post: 4044701, member: 99935"

Vega 56 at 1710 MHz rivals or beats the RTX 2070 and the Vega 64 at 1590 MHz
Very impressive minimum frame rates; the V56 starts at 275 Euro at Mindfactory (e.g. the MSI custom-cooler one), but ouch at power consumption.

Posted on Reply
#185
Totally
vega22, post: 4044322, member: 41040"
the 290X beating the first Titan at half the price?
He's gone, Jim.

Midland Dog, post: 4044646, member: 168254"
new game: guess the TDP
1: Pleasant surprise, 150 W
2: Disappointing for 7 nm, 225 W
3: Good Ole GCN, 300 W
4: Meme status, 400 W+


Graphics Core Next, and as far as I can tell the CU layout hasn't changed since the HD 7000 series, inclusive of Vega's "NCU"
1. Not happening; even NV cards don't touch this.
2. Hopefully they land under here. Not disappointing at all, because of diminishing returns on power savings when dropping nodes; any saving there is will probably be eaten up by the increased transistor count and GDDR6.
3. Really hope not
Posted on Reply
#186
Casecutter
Auer, post: 4044665, member: 186365"
Hoping for #1
If you start with a 56 CU part (Vega 56), the TDP was 210 W, though TDP is "not the end-all be-all". If we look at power usage under "gaming", we know that a 1070 was better by 57% and a 1080 by 27%, while a 2070 is 17% better than a Vega 56.

Now sure, the 2070 is higher performance, although if we postulate this Navi might nip at the heels of a 2070, it comes down to performance per watt. And yes, we should see power savings moving to 7 nm, but we know Navi at its heart is GCN architecture. AMD will be going more for clocks, but I don't think it will be at the "damn the efficiency savings" extreme they went with on the Vega VII.

So someone can find the real numbers (I looked around), but if the mix is 15% higher clocks with 15% savings in power (14 nm vs. 7 nm), what does that look like? Boost clock on the Vega 56 + 15% = ~1700 MHz (that's around what Gamers Nexus was needing). Then 15% lower power would make it 195 W in "normal gaming", same as the 2070 Founders Edition. And if we say the TDP on Vega 56 could likewise drop 15%, that equates to a 178 W TDP. That's right in line with the 175 W TDP claimed for the 2070 FE. I think a range of 170-180 W is plausible.
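The back-of-the-envelope scaling in that paragraph can be sketched in a few lines; the Vega 56 baseline boost clock (~1480 MHz) and the 15% figures are assumptions taken from the post, not confirmed specs.

```python
# Hypothetical Vega 56 -> 7 nm Navi scaling, per the speculation above.
# Baseline numbers are assumptions: ~1480 MHz boost, 210 W TDP.
CLOCK_GAIN = 0.15  # assumed clock uplift from the node shrink
POWER_SAVE = 0.15  # assumed power saving from the node shrink

vega56_boost_mhz = 1480
vega56_tdp_w = 210

navi_boost_mhz = vega56_boost_mhz * (1 + CLOCK_GAIN)  # ~1700 MHz
navi_tdp_w = vega56_tdp_w * (1 - POWER_SAVE)          # ~178 W

print(round(navi_boost_mhz), round(navi_tdp_w, 1))
```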

So working from that, there's still chip architecture resource utilization, and improvements to resource management in Navi; could that bring a little extra shine? Consider that a determined team of AMD engineers has been tasked over the last 3 years to "wring out" GCN and implement GDDR6… Is a 7-10% gain from architecture that much of a stretch goal?

I found this about TSMC's processes: compared to CLN16FF+ (TSMC's most widely used FinFET process technology), CLN7FF will enable chip designers to shrink their die sizes by 70% (at the same transistor count), drop power consumption by 60%, or increase frequency by 30% (at the same complexity).
https://www.anandtech.com/show/12677/tsmc-kicks-off-volume-production-of-7nm-chips

rvalencia, post: 4044698, member: 99935"
For GPU role
For a role as a "Gaming" GPU...
Posted on Reply
#187
Vario
Once again, I really dislike the AMD brand-numbering system where they take the competitor's chipset / model series and one-up it. It just creates confusion for the customer.
Posted on Reply
#188
rvalencia
Assimilator, post: 4044792, member: 7058"
In some games.
... compared to stock 2070
... consuming DOUBLE the power of overclocked RTX 2070
... using a procedure that is, to quote the guy in the video, "not recommended" because he has no idea what exposing the card to a couple hundred extra watts will do to it over time.

*slow clap*
I really wonder why RTG isn't hiring you guys for its engineering department.
... Vega 56 is on the older 14 nm process tech, not 7 nm
... no undervolting



Real-time power consumption comparison between a stock VII and a Vega 64 LC at a 1700 MHz overclock and undervolt.

My point with Vega 56 at 1710 MHz OC rivaling or beating the RTX 2070 is a gaming performance estimate, NOT power consumption, i.e. cut down the VII into a "VII 56" and attach GDDR6 memory chips for an estimated power consumption for the RX 3080 XT. Navi seems to have GCN raster performance behavior.


medi01, post: 4044797, member: 158537"
Very impressive min frame rates, V56 starts at 275 Euro at mindfactory (e.g. MSI custom cooler one), but ouch at power consumption.


AMD should release a cut-down VII with 56 CUs and 256-bit GDDR6 memory chips and offer competition to the RTX 2070. Navi has memory compression improvements.
Posted on Reply
#189
Midland Dog
rvalencia, post: 4044701, member: 99935"

Vega 56 at 1710 MHz rivals or beats the RTX 2070 and the Vega 64 at 1590 MHz
but at over 300 watts, and with reg edits
Posted on Reply
#190
medi01
Vario, post: 4045042, member: 18224"
Vega 56 at 1710 Mhz OC rivaling or beating RTX 2070 is gaming performance
It's not really rivaling but outright winning; check frame-rate stability: even where the 2070 has a bigger average, it's fluctuating like crazy.
Posted on Reply
#191
rvalencia
Midland Dog, post: 4045202, member: 168254"
but at over 300 watts, and with reg edits
Power consumption is not the point; i.e., it's Vega with 56 CUs at 1710 MHz reaching frame-rate performance levels like the speculated RX 3080 with 56 CUs, relative to the RTX 2070.

Are you arguing the RX 3080 is built on the old 14 nm process tech?

1st-gen 7 nm process tech helps reduce the power consumption; hence the RX 3080 is like a cut-down Vega II 56 with 256-bit GDDR6 memory.
Posted on Reply
#192
Assimilator
medi01, post: 4045259, member: 158537"
It's not really rivaling but outright winning; check frame-rate stability: even where the 2070 has a bigger average, it's fluctuating like crazy.
Midland Dog, post: 4045202, member: 168254"
but at over 300 watts, and with reg edits
Posted on Reply
#193
John Naylor
vega22, post: 4044322, member: 41040"
the 290X beating the first Titan at half the price?
This was the first instance of AMD very aggressively clocking cards out of the box. While the 290X was faster than the 780 out of the box, the tiny OC headroom left it unable to compete with the 780... with both cards overclocked, it was all 780... even Linus figured that out.

See 8:40 mark


It gets worse under water...4:30 mark


Aside from these prerelease announcements never living up to the hype ever since AMD's 200 series, there's one thing here that gives me great pause with this announcement... the name. If you want to distinguish your product from the competition because you have a better one, what's taught in Marketing 101 is "distinguish your product". The "RX 3080 XT"... they copied the RX, they went from 2 to 3 and from 70 to 80, and threw in an XT for "extra", I guess. We saw the same thing with MoBos, mimicking the Intel MoBo naming conventions by switching Z to an X. When you mimic the competition, it says "I wanna make mine sound like theirs, so buyers will see RX 3 next to their RTX 2, and 80 is bigger than 70, and infer that it's like theirs but newer, bigger, badder, faster". That was NVIDIA's whole goal with the partnering idea... "we will loosen up restrictions on our cards if you agree that we will lock down the naming so this type of thing won't cut into our sales". Regardless of what the new card line actually does, I wish they'd stake out their own naming conventions.

I do hope that AMD can actually deliver on this kind of performance.... But if they're gonna push the value claim, let's do apples to apples for a change. Right now the 2060 is faster for 100 watts less... 100 watts at 30 hours a week costs me $44.20 a year. If the new RX 3080 XT is 100 watts more... from a cost PoV...

+100 watts would add +$20 to the PSU cost (Focus Gold Plus)
+100 watts would warrant an extra $15 case fan
+$44.20 a year over 4 years is $176.80... $211.80 total... I'd rather pay the extra $170 for the 2070.

Now, my cost for electricity is way higher than most folks' in the USA, comparable to many European countries and a lot cheaper than some of those. I pay 24 cents per kWh versus the $0.11 the average US person pays... for those folks the cost would be $81.03 over 4 years.
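For what it's worth, the incremental electricity math here reduces to one formula. The helper below is a sketch (the function name is mine), plugging in the poster's own wattage and rate figures.

```python
# Yearly cost of an extra `extra_watts` of draw, at `hours_per_week`
# of gaming and a given electricity rate. Inputs are the poster's figures.
def yearly_cost_usd(extra_watts, hours_per_week, usd_per_kwh):
    kwh_per_year = extra_watts / 1000 * hours_per_week * 52
    return kwh_per_year * usd_per_kwh

# At $0.24/kWh, the quoted $44.20/year for an extra 100 W implies
# roughly 35 gaming hours a week:
implied_hours = 44.20 / (100 / 1000 * 52 * 0.24)
print(round(implied_hours, 1))                     # ~35.4
print(round(yearly_cost_usd(100, 35.4, 0.24), 2))  # ~44.18
```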

The reality is that most folks won't consider electric cost, and if that's the case, the "value" argument is no longer apples to apples. Many live in apartments where it's in the rent; some live at their parents' house... but if ya gonna make the "well, it's not as fast but it has the best value" claim, it isn't valid without including all associated costs. Those would be mine; others may not mind the extra heat and the extra inefficiency loaded onto the PSU; but whatever they are in each instance, all impacts should be considered.

Now, with "apples to apples" having been considered, I would much welcome a card that was comparable in performance, comparable in power usage, and comparable in sound and heat generated... but in each instance I'm only interested in comparisons with both cards at max overclock. I hope against hope that AMD can deliver one, but I'm wary of pre-release fanfare that consistently fails to deliver. I hope that this time they can manage to put out something that fulfills the promise, but I'm wary of following pre-release news for 6 months only to be disappointed.
Posted on Reply
#194
Casecutter
I couldn't follow all that...
John Naylor, post: 4045702, member: 156078"
Right now the 2060 is faster for 100 watts less
But using a PSU calculator, I delved into what you discussed. Working from an 8700K (not OC'd) and a normal system: 2x8 GB, SSD/HDD, combo drive, WLAN card, a nice AIO cooler (H100i), some 120 mm fans, all on a Gold+ PSU. 8 hours of gaming a day; 24¢ (US) per kWh. Yeah, that's silly high; here in So. Cal. we're at 14¢.
The 2060 (500 W recommended) would use $361 a year; step up to a 2080 = $404 (550 W). That's $43/yr, or $3.50 USD per month.
Same system with a Vega 56 (550 W) = $400; a Vega 64 = $467. Even at $63 a year more, if the V64 is $420 and the 2080 is $700, it takes over 4 years to absorb that $63/yr before a 2080 pays for itself.
That's all at 40 hours a week. If you're gaming at that level, it's really a job!
https://outervision.com/power-supply-calculator
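The payback arithmetic above can be checked directly; the card prices and the $63/year electricity delta are the poster's figures, not verified market data.

```python
# How long the V64's electricity premium takes to cancel out the
# RTX 2080's higher sticker price (poster's figures).
card_price_gap_usd = 700 - 420     # RTX 2080 vs Vega 64
power_delta_usd_per_year = 63      # extra yearly electricity for the V64

payback_years = card_price_gap_usd / power_delta_usd_per_year
print(round(payback_years, 1))     # ~4.4 years
```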

John Naylor, post: 4045702, member: 156078"
I'm weary of pre-release fanfare that consistenty fails to deliver. I hop that this time they can manage to out out something that fullfills the promise, but weary of followinmg pre-release news for 6 months only to be disappointed.
Wow. You do understand this isn't from AMD/RTG, just purely a mountain of salt? Why do you listen to folks who project the "utmost" and then let yourself be disappointed? There's a malfunction somewhere, and it's not just this rumor crap.
Posted on Reply
#195
Midland Dog
rvalencia, post: 4045444, member: 99935"
Power consumption is not the point; i.e., it's Vega with 56 CUs at 1710 MHz reaching frame-rate performance levels like the speculated RX 3080 with 56 CUs, relative to the RTX 2070.

Are you arguing the RX 3080 is built on the old 14 nm process tech?

1st-gen 7 nm process tech helps reduce the power consumption; hence the RX 3080 is like a cut-down Vega II 56 with 256-bit GDDR6 memory.
GCN is always bandwidth-starved; regardless of being 7 nm, it's GDDR not HBM, so AMD will struggle to get it any better
Posted on Reply
#196
EarthDog
Midland Dog, post: 4045899, member: 168254"
GCN is always bandwidth-starved; regardless of being 7 nm, it's GDDR not HBM, so AMD will struggle to get it any better
HBM isn't a difference-maker unless it's high res... and even then GDDR6 is plenty. HBM really hasn't played out well yet, IMO.
Posted on Reply
#197
Manoa
is the card going to be "7 nm" DUV or EUV?
Posted on Reply
#198
HenrySomeone
So, if past hyping is anything to go by (the Crapeon 7 billed as 2080-level perf and only ending up 2070-OC-like :D), and considering that the 2060 and 2070 are closer together than the 2070 and 2080, this new turdeon will be below the 2060 while consuming over 200 W, lmao! :p
Posted on Reply
#199
Midland Dog
EarthDog, post: 4045974, member: 79836"
HBM isn't a difference-maker unless it's high res... and even then GDDR6 is plenty. HBM really hasn't played out well yet, IMO.
I'm here to tell you it ain't: GTX 960 vs R9 380, GTX 1060 vs RX 480, GTX 1080 vs Vega 64, RTX 2080 vs Vega VII. The proof is there: GCN doesn't scale with teraflops and hits a bandwidth wall quickly.

The biggest bottleneck in any and every GPU is bandwidth.
Posted on Reply
#200
Manoa
it's true, [USER=186988]Henry[/USER], AMD makes dumb decisions many times :x
and if it's DUV it will be like you and many others say
but if it's EUV there is a chance it may be good
Posted on Reply
Add your own comment