Tuesday, May 7th 2019

AMD Radeon RX 3080 XT "Navi" to Challenge RTX 2070 at $330

Rumors of AMD's next-generation performance-segment graphics card are gaining traction following a leak of what is possibly its PCB. Tweaktown put out a boatload of information on the so-called Radeon RX 3080 XT graphics card, bound for a 2019 E3 launch shortly after a Computex unveiling. Based on the 7 nm "Navi 10" GPU, the RX 3080 XT will feature 56 compute units based on the faster "Navi" architecture (3,584 stream processors), and 8 GB of GDDR6 memory across a 256-bit wide memory bus.
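For context, the headline numbers hang together: 56 GCN-style compute units at 64 stream processors each give exactly 3,584, and a 256-bit bus yields a plausible bandwidth figure. A quick sketch (the 64 SPs per CU and the 14 Gbps GDDR6 per-pin data rate are assumptions drawn from prior parts, not figures from the leak):

```python
# Sanity-check the leaked RX 3080 XT specs. Assumptions (not in the leak):
# GCN's usual 64 stream processors per compute unit, and a typical
# 14 Gbps GDDR6 per-pin data rate.
compute_units = 56
stream_processors = compute_units * 64
print(stream_processors)                  # 3584, matching the rumor

bus_width_bits = 256
bandwidth_gbps = bus_width_bits / 8 * 14  # bus bytes * per-pin Gbps
print(bandwidth_gbps)                     # 448.0 GB/s theoretical peak
```

At an assumed 14 Gbps that works out to 448 GB/s, in the same ballpark as the RTX 2070's 256-bit GDDR6 setup.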

The source makes two very sensational claims: one, that the RX 3080 XT performs competitively with NVIDIA's $499 GeForce RTX 2070; and two, that AMD could start a price war against NVIDIA by aggressively pricing the card around the $330 mark, or about two-thirds the price of the RTX 2070. If either, if not both, of these claims holds true, AMD will fire up the performance segment once again, forcing NVIDIA to revisit the RTX 2070 and RTX 2060.
Source: Tweaktown

213 Comments on AMD Radeon RX 3080 XT "Navi" to Challenge RTX 2070 at $330

#151
steen
CammWhat's this 10% of die space then? Just basic bucket math: you can see that tensor/RT takes up about half of an SM, and with SMs taking up about half of the die, you have at least 20% of the die dedicated to making RT work
You looking @ comparative die shots or marketing slides? ;)
(on the basis that RT is currently too slow to run without supersampling).
Sorry, what? If RTX ran with SSAA it would be a slide show.
As for DLSS, I don't really care about the how, more that it should be providing better fidelity at the same FPS than just using a lower resolution in the first place. Which, bluntly, it doesn't.
I can appreciate that. DLSS x2 runs @ native res but performance suffers. I actually think that MLAA tech is interesting & has scope for future IQ/perf advancement, especially on the TAA front. Even MS "super resolution" via DirectML results in undersampled edges with gaps remaining. The best description of DLSS is a lossy NN image compressor with a reconstruction-process side channel (the DLSS profile).
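The "bucket math" quoted earlier in this post can be sketched directly (illustrative only; the 50% area fractions are the quoted poster's eyeball estimates, not measured die-shot figures):

```python
# Rough die-area estimate from the quote: tensor/RT logic takes ~half of an
# SM, and SMs take ~half of the die. Both fractions are eyeball guesses.
rt_tensor_share_of_sm = 0.5
sm_share_of_die = 0.5
rt_share_of_die = rt_tensor_share_of_sm * sm_share_of_die
print(rt_share_of_die)  # 0.25, i.e. "at least 20%" of the die for RT/tensor
```

Which is why comparative die shots matter: nudge either guessed fraction down and the estimate lands closer to 20%, or lower still.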
Posted on Reply
#152
cucker tarlson
medi01Jim Keller is at Tesla now.



Let's not pretend that you are an impartial bystander, shall we?

it is about greenboi's inability to accept a "water is wet" kind of fact.
"AMD never undercut a competitor like that" is apparent bullshit; the cheapest 8-core from Intel was $1089 when AMD's 1800X hit at $499. End of story.
The "let's spin it" efforts have come up, so far, with:

1) "But $1089 is a HEDT chip" (yay)
2) "But he meant GPU" (actually the 290(X) vs the 780 wasn't that far off, at $549 vs the slower chip at $650)
3) "But we are not talking about GPUs" (coming from "cartels are fine" guy)

Yay. Pathetic.


I believe AdoredTV does have an actual insider connection, and as I see it, AdoredTV just dropped BAD NEWS, NOT GOOD NEWS. Namely:
1) Not meeting target clocks
2) Power hungry <= the worst part

3) Losing to VII CU for CU

As for whether a 7 nm chip with GDDR memory and 2070-ish performance is possible at $330: uh, is it even a question?


Could you guys at least hide your BH? I mean, what the fuck does "oh, but my great company has an answer to this, I don't need to hide and cry" have to do with it? Jesus.



It's the 180 we are discussing here, the "I don't read even the first page"/"I only read reddit titles" kid.
Video is linked on the very first page, with most relevant parts of it as screenshots.



Do you even understand what "is GCN" means?


I recall someone estimated that 22% of the die is dedicated to it; not that much.
I was just pointing out that this is a rumor. Whether adtv is your guru changes nothing. What's with your rabid attitude? Did you piss your pants hearing adtv and it started to itch now?
You're only active on TPU to attack people you disagree with and bait.
Posted on Reply
#153
Vayra86
Caring1Great Card Now?
No, Great Card Next - it's always coming!
Posted on Reply
#154
Spencer LeBlanc
I miss the days of the XTs and GTs etc. Bring back the good ole days.
Posted on Reply
#155
bug
Spencer LeBlancI miss the days of the XTs and GTs etc. Bring back the good ole days.
I miss the days of Trio64 and Unreal demos :D:D:D
Posted on Reply
#156
medi01
cucker tarlsonadtv is your guru
Because stating "I believe he has insider links" makes him "my guru".
cucker tarlson<A bunch of childish insults>
Oh, get lost.
Posted on Reply
#157
cucker tarlson
medi01Because stating "I believe he has insider links" makes him "my guru".




Oh, get lost.
Sir, come on.
Let's not pretend
Posted on Reply
#158
jabbadap
medi01Jim Keller is at Tesla now.

Jim Keller moved to Intel from Tesla. Not that it really has anything to do with GPUs, not to mention Navi or RTX...
Posted on Reply
#159
Rexolaboy
bugYou are right, but for the past few generations, raising false expectations through "unsanctioned" leaks is all that AMD could do in the GPU space. To the point that some people now pay over a grand for a video card :(
You honestly think AMD would leak this information to the public? It's not even impressive information; I'm sure AMD would "leak" something a little more titillating. Clocking issues, an IPC decrease, and a silly naming scheme sound like rumors and hearsay spread by non-technical staff. Just like the MSI rep who was telling someone that his B350 board wouldn't work with Ryzen 3rd gen.
Posted on Reply
#160
bug
RexolaboyYou honestly think AMD would leak this information to the public? It's not even impressive information; I'm sure AMD would "leak" something a little more titillating. Clocking issues, an IPC decrease, and a silly naming scheme sound like rumors and hearsay spread by non-technical staff. Just like the MSI rep who was telling someone that his B350 board wouldn't work with Ryzen 3rd gen.
I don't care much about what AMD would or wouldn't leak. But at the same time I can't help noticing they never act on these leaks, therefore they must be ok with the generated word of mouth.

And just look at how the world was taken by surprise by Turing or Zen. That proves when companies want to keep something from the public, that's what they'll do.
Posted on Reply
#161
Auer
bugI don't care much about what AMD would or wouldn't leak. But at the same time I can't help noticing they never act on these leaks, therefore they must be ok with the generated word of mouth.

And just look at how the world was taken by surprise by Turing or Zen. That proves when companies want to keep something from the public, that's what they'll do.
Well it's free publicity.
Posted on Reply
#162
Rexolaboy
Zen's leaks were pretty accurate; they were based on engineering samples. We already know what Navi should perform like considering it's based on GCN, but it looks worse lol
Posted on Reply
#163
efikkan
steenNo continual incremental updates like Fermi->Turing will do that for you. It must be acknowledged NV has executed, esp given the relative competition vacuum across the entire product stack. Not to say GCN8/9 made no improvements to earlier GCN, but without new RTL the key shortcoming of GCN is tough to work around. IMO the front end & entire register/cache/pipeline are 3 gens behind NV. Performance of Vega is reasonable in that context. Until AMD sort their front end 4tris/clk to 6tris/clk deficit they will likely have to keep pushing their silicon harder & lose out in perf/watt.
The problem is not ROP performance, it's management of resources.
GCN has changed very little over the years, while Kepler -> Maxwell -> Pascal -> Turing have continued to advance and achieve more performance per core and per GFlop, to the point where they have about twice the performance per watt and 30-50% more performance per GFlop.
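The "performance per GFlop" comparison can be made concrete with hypothetical numbers (the fps and TFLOPS figures below are made up to land inside the claimed 30-50% range; they are not benchmark results):

```python
# Hypothetical throughput figures (not benchmarks) illustrating the claimed
# per-GFlop gap: card A extracts more game performance from fewer
# theoretical TFLOPS than card B does from more.
cards = {
    "A": {"fps": 90, "tflops": 8.0},   # fewer, better-fed shader cores
    "B": {"fps": 90, "tflops": 11.0},  # more raw compute, less of it used
}
perf_per_tflop = {n: c["fps"] / c["tflops"] for n, c in cards.items()}
gap = perf_per_tflop["A"] / perf_per_tflop["B"] - 1
print(f"{gap:.0%} more performance per TFLOP")  # ~38% with these numbers
```

The point of the exercise: two cards can deliver identical frame rates while one burns substantially more theoretical compute (and, typically, power) to get there.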
steenYou can never have enough frame buffer or bandwidth.
More is usually better, except when it comes at a great cost.
16 GB of 1 TB/s HBM2 is just pointless for gaming purposes. AMD could have used 8 or even 12 GB, and priced it lower.
RexolaboyYou honestly think AMD would leak this information to the public? Its not even impressive information, I'm sure AMD would "leak" something a little more titillating.
AMD certainly does leak information when they see a reason to. But no, these specs were not leaked by AMD, as they don't leak specs until they are finalized. This thread is just another "victim" of someone's speculation on a YouTube channel…
Posted on Reply
#164
Casecutter
efikkanGCN have always been resource utilization
Yea, that's an oversimplification on my part, or the wrong way of saying it... Correct, it's not "bandwidth" so much as how that memory speed/throughput is utilized in GCN, and correct, that wasn't fixed in the architecture with GDDR5; even with Vega/HBM I'm not sure what changes were brought in to free that up.

And true, the "computational power" of GCN is not its problem; however, it never truly got employed in gaming engines (DX11). Today I'm not sure its extent, even for DX12, is aiding gaming enough to warrant such a huge dependence in the architecture. I understood that they tasked engineering (2015-16) to trim that back in a way to save power, and this "Navi" is the first chip to have that.
bugI can't help noticing they never act on these leaks, therefore they must be ok with the generated word of mouth.
So they are supposed to come out and confirm/deny (argue, attest, authenticate, bear out, certify, corroborate, substantiate, support, validate, verify, vindicate) every Tom, Dick and Harry story? That's not how any smart individual or company does it. Once you start... you're giving away "something" every time you open your mouth, and where does it stop? And yes, any discussion generated from nothing but rumor is still talk keeping you or your company relevant.
The Kardashians built an empire on just that kind of crap.
Posted on Reply
#165
efikkan
CasecutterAnd true the "computational power" of GCN is not its' problem, however never truly got employed in gaming engines (DX11). Today I'm not sure its' extent even for DX12, is aiding gaming enough to warrant a huge dependence in the architecture. I understood that they tasked engineering (2015-16) to trim that back in a way to save power, and this "Navi" is the first chip to have that.
Game engines don't implement low-level GPU scheduling, dependency analysis or low-level resource management; not even the driver can do this, as it's managed on chip. While you can tune some aspects of a game engine and see how it impacts performance, you can't solve GCN's underlying problem in software.
Posted on Reply
#166
Casecutter
efikkanyou can't solve GCN's underlying problem in software
I don't see that I said anything that is fixed by the software/driver. It was game engine developers who never saw the value of, or had the tools for, constructing things in a way that makes use of such "on chip" resources. And much like the Bulldozer core implementation, it was a fault of going in a direction nobody else was looking to go.
Posted on Reply
#167
Super XP
ToxicTaZNvidia will lower the cost of the TU106 (2060 and 2070)

Nvidia will counter Navi 10/20 with the TU104

Nvidia will release the RTX 2070Ti (TU104-300A), which is (1080Ti/Radeon 7) performance for a lower price. Possibly with 8GB and 16GB options.

I heard talk about an RTX 2080+ (unlocked TU104-475A) @ 2GHz with 8GB & 16GB models. ((3072 CUDA cores))

"RTX 2070Ti" will blow away RX 3080 XT

As the "RTX 2080 Plus" blows away RX 3090 XT next February 2020

This is most likely what's going to happen
If Navi (which is based on the old 2011 GCN design) ends up anywhere near the RTX 2070 for a $300 to $350 cost, it would be an Nvidia embarrassment.

Can AMD give us one more crack at GCN before trashing it? We'll soon find out.
Posted on Reply
#168
Auer
Super XPIf Navi (which is based on the old 2011 GCN design) ends up anywhere near the RTX 2070 for a $300 to $350 cost, it would be an Nvidia embarrassment.

Can AMD give us one more crack at GCN before trashing it? We'll soon find out.
nV's level of embarrassment lessens every day that goes by with no sign of competition. The RTX 2070 has been out since October, with RT and DLSS.

If anything, if AMD doesn't have a clear rival out by the end of summer for $350, the embarrassment will be all theirs.

The RTX 2070 is current production, not new production. Shouldn't AMD at this stage release something better than an RTX 2070 for the same $$$?
Posted on Reply
#169
ToxicTaZ
Nvidia will lower their cost of the TU106 (2060/2070)

Nvidia will use their TU104 to fight against AMD's Navi 10/20 GPUs.

Nvidia is releasing the RTX 2070Ti (TU104-300A), same performance as (1080Ti/Radeon 7) at a lower cost, to deal with AMD's Navi 10.

Nvidia also has an RTX 2080U model coming (fully unlocked TU104 with the full 3072 CUDA cores @ 2GHz) to deal with AMD's Navi 20.

Both the RTX 2070Ti and RTX 2080U come with optional 16GB Plus models.

Like always, AMD has nothing to go against Nvidia's TU102 (RTX Titan/RTX 2080Ti).
Posted on Reply
#170
Casecutter
Auerfor the same $$$
Let me fix that: IF... AMD/RTG releases a 7nm part before Nvidia, even if it only comes close to or nips at the RTX 2070, at a price that's 30% less, you'll see that as embarrassing?

Though all the while they'd do it with a pittance of the R&D/financials, amid staff restructuring and a basically new relationship with a foundry. All while using an architecture that taped out in 2010 and ultimately only saw slight revisions until this (might-be) first major overhaul. I'm not seeing it... unless you mean embarrassing for Nvidia?
Posted on Reply
#171
Auer
CasecutterLet me fix that: IF... AMD/RTG releases a 7nm part before Nvidia, even if it only comes close to or nips at the RTX 2070, at a price that's 30% less, you'll see that as embarrassing?

Though all the while they'd do it with a pittance of the R&D/financials, amid staff restructuring and a basically new relationship with a foundry. All while using an architecture that taped out in 2010 and ultimately only saw slight revisions until this (might-be) first major overhaul. I'm not seeing it... unless you mean embarrassing for Nvidia?
No one cares about AMD's staff restructuring etc., except market analysts and investors. Gamers don't give a damn about that; corporate heroics don't impact in-game FPS.

7nm means nothing unless it's cheaper and faster; equal won't be good enough. AMD's market share for GPUs is abysmal atm, and they need a much bigger splash than the R7 was.

Meanwhile, I doubt nV is just doing nothing. AMD is never going to compete by releasing a matching product 6 months later for 30% less. Actually, it feels like AMD doesn't even really care that much atm. And that's a shame.
Posted on Reply
#172
Casecutter
AuerActually it feels like AMD doesn't even really care that much atm
On that we agree: AMD/RTG really has not cared to vie with Nvidia, especially in the enthusiast market. They've kept up appearances, but like Intel, which is now "sitting up straight", you can never be caught slouching in the seat of the postulating King.
Posted on Reply
#173
vega22
THANATOSBTW erocker was quoting and replying to my post and surprise I was talking only about Nvidia vs AMD and GPUs. So the whole debate was started by me and was about GPUs and not both of them.
The 290X beating the first Titan at half the price?
Posted on Reply
#174
efikkan
CasecutterI don't see that I said it anything that is fixed by the software/driver? It was game engine developers that never saw/knew the value or tools to constructed in a way to make use of such "on chip" resources. And much like Bulldozer core implementation, something that was a fault of their going toward a direction nobody was looking to go.
They couldn't even if they wanted to.
The APIs we use (Direct3D, OpenGL and Vulkan) are GPU architecture agnostic.
When it comes to "optimizing" game engines there is very little developers can do, and they certainly can't control the internal GPU scheduling even if they wanted to; optimization is largely limited to tweaking buffer sizes, resource sizes and generic operations to see what performs better, not any true low-level GPU-specific optimization like most people think.
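A sketch of that kind of black-box tuning, assuming a hypothetical `render_frame` entry point standing in for a real engine's frame submission (none of these names come from an actual engine API):

```python
import time

def render_frame(buffer_size):
    # Stand-in for an engine's frame submission; a real engine would allocate
    # staging/constant buffers of `buffer_size` bytes and issue draw calls.
    time.sleep(0.0005)

def best_buffer_size(candidates, frames=50):
    """Time each candidate buffer size over `frames` frames and return the
    fastest. This is black-box tuning: we observe throughput only, with no
    visibility into the GPU's internal scheduling."""
    timings = {}
    for size in candidates:
        start = time.perf_counter()
        for _ in range(frames):
            render_frame(size)
        timings[size] = time.perf_counter() - start
    return min(timings, key=timings.get)

print(best_buffer_size([64 * 1024, 256 * 1024, 1024 * 1024]))
```

The winner can differ per GPU architecture, which is exactly why this is tuning by observation rather than GPU-specific optimization.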
Super XPIf NAVI (That is based on old 2011 GCN Design) ends up anywhere near the RTX 2070 for a $300 to $350 cost, would be an Nvidia embarrassment.
How would anyone be embarrassed by Navi coming close to Nvidia?
AMD is the one who should be embarrassed if they can't make a "better" node and a "newer" design, with probably more cores and GFlops, beat last year's contender from Nvidia, which I don't expect them to do…
Posted on Reply
#175
Manoa
medi01Do you even understand what "is GCN" means?
Funny you would say that, implying that you do. Hey fellas, check this out: medi01 is an AMD engineer, how cool is that?
Posted on Reply