
AMD Radeon "Navy Flounder" Features 40CU, 192-bit GDDR6 Memory

Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
The only problem is, your posts reek of bias and ad hominem arguments.
A sign of low quality debate: when a person who has a history of explicitly stating their support for one actor in a market and why (in my case: I prefer AMD due to a desire to support the underdog, plus my objections to Nvidia's history of shady business practices), and who has in fact never bought a product from the competing company, is nonetheless accused of being biased in favor of that competing company.

I am, of course, biased against MLID, as well as RedGamingTech and all those other low quality rumor-mongering clickbait YouTubers. I have so far seen zero reason to trust their content to be anything more than entertainment masquerading as news.

As for the ad hominems: really? Where? Seriously, please show some quotes. As far as I'm aware I haven't commented on you personally whatsoever. (And no, saying your approach to source criticism is poor is not an ad hominem.)
 
Joined
May 15, 2020
Messages
697 (0.49/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
The thing is, with trash channels like MLID
As for the ad hominems: really? Where? Seriously, please show some quotes. As far as I'm aware I haven't commented on you personally whatsoever. (And no, saying your approach to source criticism is poor is not an ad hominem.)
That is an ad hominem right there. I never said the ad hominem was directed at me; that would simply be rude, and I really hope we're past this kind of poor behaviour.

Anyway, although I sincerely appreciate that you are trying to educate me on how to choose my sources, there's really no need for it. I suggest you apply the methodology you preach and share with us your solid sources and information about the architecture of the coming Navi 21. That would have the added benefit of being on topic.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
That is an ad hominem right there. I never said the ad hominem was directed at me; that would simply be rude, and I really hope we're past this kind of poor behaviour.

Anyway, although I sincerely appreciate that you are trying to educate me on how to choose my sources, there's really no need for it. I suggest you apply the methodology you preach and share with us your solid sources and information about the architecture of the coming Navi 21. That would have the added benefit of being on topic.
I hate having to pull out the dictionary in a debate, but no, that is not at all an ad hominem.
Wordnik said:
  • adjective: Attacking a person's character or motivations rather than a position or argument.
Saying that someone's approach to source criticism is poor in a debate where source criticism is relevant is an entirely on-topic argument. Asking you to provide proof for making a claim about personal attacks isn't an on-topic response, but it is a valid response to an unsourced claim. Neither says anything about the motivations or character of the person in question. I also fail to understand how pointing out that someone was making a personal attack against you would somehow be rude. I mean, isn't that what you do when people behave badly - call them out and ask them to change their behaviour?

As for my sources: I don't have any, as I haven't been making any claims about the architecture of these GPUs. I've speculated loosely on the basis of existing chips and known data about the nodes, and I have engaged with the speculations of others by attempting to compare them with what we know about current products and generalizable knowledge about chip production and design, but I have made zero claims about how Navi 21 will be. Especially architecturally, as there is no way of knowing that without a trustworthy source. So I don't quite see how your objection applies. If I had been making any claims, I obviously would need to source them (which, for example, I did in rebutting the RTX 3080 launch being a paper launch).

As for pointing out ad hominem arguments: If you're alluding to @Assimilator's semi-rant against MLID, that post presents an entirely valid criticism of a specific sub-genre of tech youtubers. It does of course engage with their motivations - in this case, preferring clicks, ad-views and profit over trustworthy sourcing - but that is a relevant argument when looking into the trustworthiness of a source. One would have a very, very hard time judging the quality of a source if one wasn't allowed to account for their motivations in such a judgement, and there's nothing unfair or personal about that. For example, there's a long-running debate about open access vs. paywalled publication of research in academia, a situation in which the arguments presented by publishers and paywalled journals obviously need to be contrasted with their motivations for profit and continued operation, as such motivations can reasonably cause them to be biased. Just like the statements of most career politicians obviously need to be judged against their desire to get reelected.

Now can we please move past this silly bickering?
 
Joined
May 15, 2020
Messages
697 (0.49/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
I also fail to understand how pointing out that someone was making a personal attack against you would somehow be rude. I mean, isn't that what you do when people behave badly - call them out and ask them to change their behaviour?
I really have a hard time with your semantics.
An ad hominem is attacking the person instead of their arguments. It's rude, and it's a simple way of trying to win an argument without being right. If you attacked me instead of my ideas, that would indeed be both rude and a sophism, but you're not doing that; you're just a tad patronizing, but hey, I've seen worse on fora.
I haven't alluded to anything; I've put in the quote that you asked for, and the opening of that quote is a typical ad hominem.
Now, I've already told you that you have a weird way of not putting enough effort into understanding other people's ideas while putting a lot of effort into arguing with them.
Probably the fastest way to end the bickering would be to use the ignore button, but that would be a pity because from time to time you guys do say interesting things. But at the same time most discussions end up feeling like a waste of my time so, yeah, maybe that would be the better solution.

Now back to the topic of this discussion: there are some rumors about AMD having overhauled the memory architecture of Big Navi. The 2 guys talking about that are RGT and MLID. These are just rumors, although RGT said the rumors came with photos of the real cards and showed them. As always, they might be true, they might not.
If you have a source that says otherwise, or an argument for why that is not true, please share. If you have nothing to contribute to the conversation other than personal attacks and free advice about how to check sources, there's really no need for it; you've already done that.
So, let's get on to the topic, please.
 
Joined
Mar 10, 2010
Messages
11,878 (2.30/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad + 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
The thing is, with trash channels like MLID it's not even filtering good from bad, because there is no good: it's regurgitated from legitimate sources like Gamers Nexus in order to make MLID appear legitimate, and he then abuses that apparent legitimacy to peddle his half-baked bullshit. Result, people who aren't good at discerning trash from quality believe it all and fall back on "but he was right regarding XXX (that he copied from a real source) so he must be right on YYY (nonsense that he crapped out)".

Melding your lies with the mainstream's truth in order to make your lies appear truthful is the oldest trick in the book when it comes to manipulating discourse and public opinion (see: Russia and US elections), and unfortunately most people choose news sources based on whether that source agrees with their worldview, rather than how trustworthy said source is. They also have a penchant for doubling down and defending "their" news source when the credibility of said source is brought into question (instead of holding it accountable), or handwaving the source's inaccuracy away with excuses such as "everyone gets it wrong now and then". Except the dodgy sources get it wrong time and time again.

Make no mistake though, MLID is laughing all the way to the bank with every cent of ad revenue he gets from every chump who watches his reddit clickbait videos. Anyone who wants to reward a liar for his lies, that's your business - but don't expect me to do the same.
Total nonsense from a guy who hasn't watched MLID; his sources beat Gamers Nexus time and again.
I watch them all (YouTubers, websites, etc.) and extract small amounts of trend data personally, then apply salt.

Flounders, eh? Where's that salt?
 
Joined
Oct 22, 2014
Messages
13,210 (3.81/day)
Location
Sunshine Coast
System Name Black Box
Processor Intel Xeon E3-1260L v5
Motherboard MSI E3 KRAIT Gaming v5
Cooling Tt tower + 120mm Tt fan
Memory G.Skill 16GB 3600 C18
Video Card(s) Asus GTX 970 Mini
Storage Kingston A2000 512Gb NVME
Display(s) AOC 24" Freesync 1m.s. 75Hz
Case Corsair 450D High Air Flow.
Audio Device(s) No need.
Power Supply FSP Aurum 650W
Mouse Yes
Keyboard Of course
Software W10 Pro 64 bit
An ad hominem is attacking the person instead of their arguments. It's rude, and it's a simple way of trying to win an argument without being right.
Sometimes it's telling the person the truth they don't like.
you're just a tad patronizing, but hey, I've seen worse on fora.
Pot, meet kettle. As for seeing worse: were you told you were the cause then, too?
Probably the fastest way to end the bickering would be to use the ignore button, but that would be a pity because from time to time you guys do say interesting things. But at the same time most discussions end up feeling like a waste of my time so, yeah, maybe that would be the better solution.
No doubt people will use the ignore button, against you.
As for being a waste of your time, get over yourself and stop wasting our time with your pompous attitude.
 
Deleted member 185088

Guest
Total nonsense from a guy who hasn't watched MLID; his sources beat Gamers Nexus time and again.
I watch them all (YouTubers, websites, etc.) and extract small amounts of trend data personally, then apply salt.

Flounders, eh? Where's that salt?
I watched his videos; they are mostly speculation and poor analysis, and very biased. I'm still waiting for the magical hardware upgrade for the PS5.
 
Joined
Feb 13, 2012
Messages
522 (0.12/day)
Well, I guess things change differently depending on your location. Here in the Nordics, prices have increased significantly across the board over the past decade. That's mostly due to the NOK/SEK to USD conversion rate, which made a big jump around 2015 or so, but as I said also due to knock-on effects from this. The same applies to prices in EUR though, as the same USD price jump can be seen there. This largely accounts for the change in practices where previously USD MSRP w/o tax ~= EU MSRP w/tax, simply because the EUR (and closely linked currencies) used to be worth more relative to the USD. That means that GPUs, consoles, phones, whatever - they've all become noticeably more expensive.

That is of course possible, but remember that power draw increases much faster than linearly as clock speeds increase (voltage has to rise along with frequency), so a 10-15% increase in clocks never results in a 10-15% increase in power draw - something more along the lines of 25-35% is far more likely. Which is part of why I'm skeptical of this. Sony's rated clocks are as you say peak boost clocks, but they have promised that the console will run at or near those clocks for the vast majority of use cases. That means that you're running a slightly overclocked 4900H or HS (the consoles have the same reduced cache sizes as Renoir IIRC, so let's be generous and say they manage 3.5GHz all-core at 40W) and an overclocked 5700 within the SoC TDP. That leaves X minus 40W for the GPU. Your numbers then mean they would be able to run an overclocked 5700 equivalent at just 135W. If this was approached through a wide-and-slow, more CUs but lower clocks approach (like the XSX), I would be inclined to agree with you that it would be possible given the promised efficiency improvements (up to 50%, though "up to" makes that a very loose promise) and node improvements. But for a chip of the same width with clocks pushed likely as high as they are able to get them? We have plenty of data to go on for AMD GPU implementations like that (5700 XT vs 5700, RX 590 vs 580, etc.), and what that data shows us is that power consumption makes a very significant jump to reach those higher clocks. And while Smart Shift will of course help some with power balancing, it won't have that much to work with given the likely 40-50W power envelope of the CPU. Even lightly threaded games are unlikely to drop below 30W of CPU power consumption after all, so even that gives the GPU just 155W to work with.
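
To make the budget arithmetic above concrete, here is a rough sketch; the SoC power budget, CPU draw, baseline GPU draw and voltage scaling are illustrative assumptions rather than figures from the post:

```python
# Rough sketch of the console power-budget argument above.
# All inputs are illustrative assumptions, not confirmed specifications.

soc_budget_w = 175       # assumed total SoC power budget ("X" in the post)
cpu_power_w = 40         # assumed 8-core CPU draw at ~3.5 GHz all-core
gpu_budget_w = soc_budget_w - cpu_power_w
print(f"Power left for the GPU: {gpu_budget_w} W")           # 135 W with these inputs

# Dynamic power scales roughly with V^2 * f, so raising clocks costs much more
# than a linear increase: assume +12% clocks needs ~+8% voltage.
clock_gain, voltage_gain = 1.12, 1.08
power_scale = clock_gain * voltage_gain ** 2                 # ~1.31, i.e. ~31% more power

base_gpu_w = 180         # assumed draw of the unmodified desktop GPU (RX 5700 class)
print(f"Estimated overclocked GPU draw: {base_gpu_w * power_scale:.0f} W")  # ~235 W
```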

You're also too optimistic in thinking that the 50% perf/W increase applies across the board in all cases. The wording was - very deliberately, as this was an investor call - up to 50%. That likely means a worst case vs best case scenario comparison, so something like a 5700 XT compared to the 6000-series equivalent of a 5600 XT. The PS5 GPU with its high clocks does not meet the criteria for being a best case scenario for efficiency. Of course that was stated a while ago, and they might have managed more than 50% best-case-scenario improvements, but that still doesn't mean we're likely to get a 50% improvement when clocks are pushed high.





All of those fish code names just made me think of Dr. Seuss.
https://www.reddit.com/r/Amd/comments/j06xcd And here's a leak. Apparently it's data extracted from the kernel, so who knows. It is in line with my guesstimation, with the exception of that one part at 2500 MHz. We will see soon enough.
 
Joined
Jul 10, 2015
Messages
749 (0.23/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
Both HBM and GDDR6 are bollocks.
 
Joined
Mar 21, 2016
Messages
2,197 (0.74/day)
Both HBM and GDDR6 are bollocks.
Unlikely perhaps, but very possible with a tiered cache approach: put the most important data in the HBM and less important data in the slower GDDR6. The plus side would be that GDDR6 is more affordable than HBM overall. Still, I don't know how cost-effective it would be in the end, because you've still got the additional cost of the interposer for the HBM. If that cost isn't much, it's very plausible - especially since with HBCC they could tier NVMe with the GDDR6 to bring the performance parity closer, and there is variable rate shading as well. If that could be used more heavily on the GDDR6 data, the drop-off in bandwidth relative to HBM, combined with the other things mentioned, would be lessened.

I guess it boils down to performance, cost, and efficiency and how they all interrelate. It would be viewed a bit like the GTX 970 if done in that manner, but the difference is they could have another cache tier in NVMe to soften the performance drop-off, and variable rate shading is the other major factor - had that been a thing with the GTX 970, some of those performance issues might not have been as pronounced. I only see it happening if it makes sense from a relative cost-to-performance perspective; otherwise it seems far more likely they use one or the other.

Something to speculate about: if they did do Infinity Cache, they might scale it alongside the memory bus width - something like 192-bit with 96MB Infinity Cache and 6GB VRAM, 256-bit with 128MB and 12GB, or 320-bit with 160MB and 18GB (see the sketch below). It's anyone's guess what AMD's game plan is for RDNA2, but we'll know soon enough.
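
Those speculated pairings follow a simple pattern (the cache size in MB is half the bus width in bits); a tiny sketch using only the numbers above, which are speculation rather than confirmed specs:

```python
# Speculated (not confirmed) bus width -> (Infinity Cache MB, VRAM GB) pairings
# from the post above.
speculated_configs = {
    192: (96, 6),
    256: (128, 12),
    320: (160, 18),
}

for bus_bits, (cache_mb, vram_gb) in speculated_configs.items():
    # In each speculated pairing the cache size works out to bus_width / 2.
    assert cache_mb == bus_bits // 2
    print(f"{bus_bits}-bit bus: {cache_mb} MB Infinity Cache, {vram_gb} GB VRAM")
```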
 
Joined
May 15, 2020
Messages
697 (0.49/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
The idea of having one chip with two memory controllers is not bollocks. A GPU mask set is extremely expensive, so having one die that can satisfy multiple SKUs means a lot of money saved, especially if one of the variants is not destined to be sold in huge volumes.
 
Joined
Jul 10, 2015
Messages
749 (0.23/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
But just an idea.
 
Joined
May 15, 2020
Messages
697 (0.49/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
But just an idea.
There were some rumors about the GPU coming with both types of controllers. It was just an idea, because there was no solid info on RDNA2. The info in this post is by far the most solid we've gotten so far.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I'll keep this in a spoiler tag as it's getting quite OT, but sadly it still necessitates a response.
I really have a hard time with your semantics.
An ad hominem is attacking the person instead of their arguments. It's rude, and it's a simple way of trying to win an argument without being right. If you attacked me instead of my ideas, that would indeed be both rude and a sophism, but you're not doing that; you're just a tad patronizing, but hey, I've seen worse on fora.
I haven't alluded to anything; I've put in the quote that you asked for, and the opening of that quote is a typical ad hominem.
Now, I've already told you that you have a weird way of not putting enough effort into understanding other people's ideas while putting a lot of effort into arguing with them.
Probably the fastest way to end the bickering would be to use the ignore button, but that would be a pity because from time to time you guys do say interesting things. But at the same time most discussions end up feeling like a waste of my time so, yeah, maybe that would be the better solution.

Now back to the topic of this discussion: there are some rumors about AMD having overhauled the memory architecture of Big Navi. The 2 guys talking about that are RGT and MLID. These are just rumors, although RGT said the rumors came with photos of the real cards and showed them. As always, they might be true, they might not.
If you have a source that says otherwise, or an argument for why that is not true, please share. If you have nothing to contribute to the conversation other than personal attacks and free advice about how to check sources, there's really no need for it; you've already done that.
So, let's get on to the topic, please.
Firstly, what you're doing here is what we in Norway call a hersketeknikk - the correct translation is master suppression technique, though that's a psychological term that probably isn't as well known as the original term is for us Norwegians. Anyhow, you are simultaneously claiming that I (and some of the people agreeing with me in this debate) did something wrong, then saying I actually didn't do this, but did something that might as well be that and is pretty much just as bad, then attempting to silence any counterarguments by claiming a moral high ground and saying discussion is meaningless, despite simultaneously continuing the debate yourself. I'm not saying you're doing this consciously, but techniques like these are explicitly shaped to limit actual debate and silence your opponents. I.e. it's a straightforward bad-faith line of arguing. I would appreciate if you could try to avoid that going forward, as I try to do the same.

Moving past that: I still haven't seen you actually explain how saying
The thing is, with trash channels like MLID
is actually a personal attack. (I am assuming that's the quote you were alluding to - your wording is a bit unclear as you put two quotes after each other and then said
That is an ad hominem right there
which means that the meaning of "that [...] right there" in your sentence is unclear - it could indicate either quote, or both.) If I'm right in thinking that was what you meant: again, please explain how that is an ad hominem. @Assimilator said that MLID falls into a category of "trash channels". MLID is not a person, but a YouTube channel, making it essentially impossible to level a personal attack against it. The channel is neither logically nor factually equivalent to the person in the channel's videos, regardless of whether that person is the only person involved in its production. That would just make the channel equivalent to (some of) their work, not them.

Attacking the channel, no matter how viciously and rudely, can still not be a personal attack - for that to be true, it would need to be directed at the person directly. The criteria for being "trash" must then also be related to the content of that channel - in this case, I would assume it relates to general content quality as well as reliability due to the channel's frequent content on rumors and speculation. Being "trash" in relation to any of this is still not personal - it just says the content is bad. For that descriptor to be personal, they would have had to say "the guy making MLID is trash", which isn't what that quote says. Criticizing the quality of someone's work - even by calling it trash - is not a personal attack, and it certainly doesn't reach the level of attacking a person's character or motivations. So no, you still haven't shown how this is an ad hominem. Also, you did originally address both of us ("you guys") and then said "your posts reek of bias and ad hominem arguments", strongly implying that posts from both of us did so. I'm still waiting for you to show me some actual examples of that.

This is also an example of where you (seemingly unintentionally) fall into a bad-faith argument: you are arguing as if calling MLID "trash" is the same as calling the guy making MLID trash. Not only is this a false equivalency, but by putting the line for what amounts to a personal attack there, you are essentially making criticizing the contents of the channel impossible, as there is no way for it to not be personal by the standard you've established.

I am at least glad we can agree that I haven't attacked you personally. That's a start. It's a bit weird to equate personal attacks with sophistry, though, as personal attacks are typically not "subtly deceptive".

Oh, and for being "a tad patronizing", I'll just leave this here:
from time to time you guys do say interesting things

As for this though:
Now, I've already told you that you have a weird way of not putting enough effort into understanding other people's ideas while putting a lot of effort into arguing with them.
That is an ad hominem. That sentence is directed solely at my character, motivations and intentions in this discussion. You're not criticizing the results of my work, and not even just my methods, but explicitly saying that I'm arguing just to argue and not actually interested in understanding you. You're very welcome to try to rephrase that into not being a personal attack, but that is very clearly one.

And I understand what you're saying just fine, I'm just asking you to show examples of what you're arguing where they are needed, and to clarify the parts that don't stand up to scrutiny. You seem to be treating that as a personal attack and lashing out instead of attempting to continue an actual debate, which is why this keeps escalating.

You're entirely welcome to ignore me if you want. I personally think forum ignore buttons should be reserved for harassment and other extreme cases, as willfully blocking out parts of a discussion regardless of its content is contrary to how I want a forum to work. But again, that's up to you. I'll be glad to end this if that's your choice, but if not, I'm looking forward to you actually addressing the questions I have raised to your posts (as well as the points above), as I'm genuinely interested in finding out what you meant by them.
https://www.reddit.com/r/Amd/comments/j06xcd And here's a leak. Apparently it's data extracted from the kernel, so who knows. It is in line with my guesstimation, with the exception of that one part at 2500 MHz. We will see soon enough.
That sure looks interesting. 2.5GHz for a GPU, even a relatively small one, is bonkers even if it's AMDs boost clock (=maximum boost, not sustained boost) spec. And if 40 CUs at 2.5GHz is also with a 170W TBP as some sites are reporting from this same data, that is downright insane. Also rather promising for overclocking of the 80CU SKU if that ends up being clocked lower. A lot of driver data like this is preliminary (especially clocks) but that also tends to mean that it's on the low end rather than overly optimistic. Which makes this all the more weird. I'm definitely not less interested in what they have to show in a month after this, that's for sure.

I'm pretty dubious about the chance of any dual concurrent VRAM config though. That would be a complete and utter mess on the driver side. How do you decide which data ought to live where? It also doesn't quite compute in terms of the configuration: if you have even a single stack of HBM2(e), adding a 192-bit GDDR6 bus to that ... doesn't do all that much. A single stack of HBM2e goes up to at least 12GB (though up to 24GB at the most), and does 460GB/s bandwidth if it's the top-end 3.6Gbps/pin type. Does adding another layer of GDDR6 below that actually help anything? I guess you could increase cumulative bandwidth to ~800GB/s, but that also means dealing with a complicated two-tier memory system, which would inevitably carry significant complications with it. Even accounting for the cost of a larger interposer, I would think adding a second HBM2e stack would be no more expensive and would perform better than doing a HBM2e+GDDR6 setup. If it's actually that the fully enabled SKU gets 2x HBM2e, cut-down gets 192-bit GDDR6, on the other hand? That I could believe. That way they could bake both into a single die rather than having to make the HBM SKU a separate piece of silicon like the (undoubtedly very expensive) Apple only Navi 12 last time around. It would still be expensive and waste silicon, but given the relatively limited amount of wafers available from TSMC, it's likely better to churn out lots of one adaptable chip than to tape out two similar ones.
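
To put rough numbers on the bandwidth comparison above (the 3.6 Gbps/pin HBM2e figure is from the post; the 14 Gbps GDDR6 speed is an assumption for illustration):

```python
# Napkin math for the memory-bandwidth comparison above.

def peak_bw_gb_s(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * pin_speed_gbps / 8

hbm2e_stack = peak_bw_gb_s(1024, 3.6)    # one 1024-bit HBM2e stack @ 3.6 Gbps/pin
gddr6_192bit = peak_bw_gb_s(192, 14.0)   # 192-bit GDDR6 @ 14 Gbps (assumed)

print(f"1x HBM2e stack:   {hbm2e_stack:.0f} GB/s")                    # ~461 GB/s
print(f"192-bit GDDR6:    {gddr6_192bit:.0f} GB/s")                   # ~336 GB/s
print(f"Hybrid, combined: {hbm2e_stack + gddr6_192bit:.0f} GB/s")     # ~797 GB/s
print(f"2x HBM2e stacks:  {2 * hbm2e_stack:.0f} GB/s")                # ~922 GB/s
```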
 
Joined
May 15, 2020
Messages
697 (0.49/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
I'm pretty dubious about the chance of any dual concurrent VRAM config though. That would be a complete and utter mess on the driver side. How do you decide which data ought to live where?
The rumors that I heard (and which were also mentioned by MLID, by the way) are that both controllers will exist on-chip, but only one will be active on any given SKU. So basically they would activate the HBM one for the prosumer cards and the GDDR6 one for the cheaper versions. But this is from the "take it with a huge pinch of salt" category.

For the other thing, I'll try to give you a proper answer via PM later on.
 
Joined
Feb 18, 2005
Messages
5,238 (0.75/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
That sure looks interesting. 2.5GHz for a GPU, even a relatively small one, is bonkers even if it's AMDs boost clock (=maximum boost, not sustained boost) spec. And if 40 CUs at 2.5GHz is also with a 170W TBP as some sites are reporting from this same data, that is downright insane. Also rather promising for overclocking of the 80CU SKU if that ends up being clocked lower. A lot of driver data like this is preliminary (especially clocks) but that also tends to mean that it's on the low end rather than overly optimistic. Which makes this all the more weird. I'm definitely not less interested in what they have to show in a month after this, that's for sure.

2.5GHz @ 170W TBP is impossible unless we are talking an incredibly tiny chip.

The rumors that I heard (and which were also mentioned by MLID, by the way) are that both controllers will exist on-chip, but only one will be active on any given SKU. So basically they would activate the HBM one for the prosumer cards and the GDDR6 one for the cheaper versions. But this is from the "take it with a huge pinch of salt" category.

Not gonna happen. That's a massive amount of die space and transistors to be wasting for no good reason. No designer is going to (be allowed to) do that because it's essentially throwing money away. If they want to use different memory with the same GPU, they will make a derivative design with a different memory controller, and at that stage you might as well split that derivative design off entirely and cater it entirely for the prosumer market (e.g. GA100 vs GA102). AMD's long-running focus on keeping costs down makes this even less likely.

The additional thing that makes this a "not gonna happen" is the amount of die area that's going to be needed for ray-tracing hardware. Considering how large Turing and Ampere dies are, wasting space on inactive MCs would be an exceedingly poor, and therefore unlikely, decision on AMD's part.

As for the hybrid GDDR/HBM on a single card... that's pie-in-the-sky BS, always has been.
 
Joined
May 15, 2020
Messages
697 (0.49/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Not gonna happen. That's a massive amount of die space and transistors to be wasting for no good reason. No designer is going to (be allowed to) do that because it's essentially throwing money away. If they want to use different memory with the same GPU, they will make a derivative design with a different memory controller, and at that stage you might as well split that derivative design off entirely and cater it entirely for the prosumer market (e.g. GA100 vs GA102). AMD's long-running focus on keeping costs down makes this even less likely.

The additional thing that makes this a "not gonna happen" is the amount of die area that's going to be needed for ray-tracing hardware. Considering how large Turing and Ampere dies are, wasting space on inactive MCs would be an exceedingly poor, and therefore unlikely, decision on AMD's part.

As for the hybrid GDDR/HBM on a single card... that's pie-in-the-sky BS, always has been.
I don't pretend to understand what's coming, but a 128MB cache would also be absolutely huge... None of the rumors that I have seen make complete sense to me.

2.5GHz @ 170W TBP is impossible unless we are talking an incredibly tiny chip.
The Newegg leak puts this SKU at 150W :kookoo:
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
2.5GHz @ 170W TBP is impossible unless we are talking an incredibly tiny chip.
I completely agree. As I said, absolutely insane if true - an unprecedented GPU clock speed in absolute numbers, a near unprecedented boost in clock speeds, and combined with a significant drop in power consumption compared to what ought to be a very close comparison (previous gen of same arch, slightly less mature node)? Something must be off about that.
Not gonna happen. That's a massive amount of die space and transistors to be wasting for no good reason. No designer is going to (be allowed to) do that because it's essentially throwing money away. If they want to use different memory with the same GPU, they will make a derivative design with a different memory controller, and at that stage you might as well split that derivative design off entirely and cater it entirely for the prosumer market (e.g. GA100 vs GA102). AMD's long-running focus on keeping costs down makes this even less likely.
The thing is, with a (rumored) 192-bit GDDR6 bus it's not a massive amount of die space - significant, yes, but possibly a cost it could make sense to swallow if the alternatives are a) leaving performance on the table for the top end SKU, or b) taping out two separate dice, one with HBM and one with GDDR6. And of course, HBM2 controllers are tiny, so the added cost to the cheaper (and thus more price sensitive) SKUs would be negligible. It could be that 40 CUs with HBM2 significantly outperforms 40 CUs with GDDR6, which could allow for them to get a new SKU out of this rather than, say, making a new 52 CU die - and that's a cost savings on the scale of hundreds of millions of dollars. "Wasting" a few percent die area might be cheap in comparison. I'm not saying this is happening, but there are at least somewhat reasonable arguments for it.
The additional thing that makes this a "not gonna happen" is the amount of die area that's going to be needed for ray-tracing hardware. Considering how large Turing and Ampere dies are, wasting space on inactive MCs would be an exceedingly poor, and therefore unlikely, decision on AMD's part.
We don't yet know how AMD's RT hardware is implemented, so speculating about the die area required for it is difficult, but we do have one very well detailed piece of data: The Xbox Series X SoC. We know they got what is essentially a Renoir CPU layout + 56 fully RT-enabled RDNA2 CUs into 360.4mm² in the Xbox Series X. According to their Hot Chips presentation, the CUs account for around 47% of the die area, with the CPU and memory controllers each using ~11%. The rest is likely encode/decode, display processing, decompression, audio processing, I/O, and so on. Given that the non-CU parts of the core likely scale very little if at all as the GPU grows, we can make some napkin math estimates. Let's say display processing, 16x PCIe 4.0 and other things that a GPU needs account for 25% of that die size - that's 90.1mm². Ten memory channels are 11% or 39.7mm², or 4mm² per channel (some of the XSX channels are double bandwidth but let's try to keep this simple). That means a 192-bit GDDR6 bus (6 channels) needs 24mm², plus 90.1mm² for the other non-CU parts, plus 40 CUs at (360.4/100*47 = 169.4; 169.4/56*40 =) ~121mm². For a total of 121+90+24mm² = 235mm². Admittedly that's with a narrower memory bus than Navi 10, but it's also smaller overall - though the margin of error with napkin math like this is obviously enormous.

The part where this can be relevant, rather than absolute numbers, is how it would scale: doubling the CU count to 80 and doubling the memory bus would just mean a 61% increase in die size, at 380mm². There is no doubt something missing from this calculation (for example, DF didn't mention if the 11% number for the memory accounts for the controllers, physical connections, or both, and judging by the die shot it being both seems unlikely, and it's also likely these numbers omit interconnects, internal buses and so on). But nonetheless, it would seem that AMD has some leeway in terms of die sizes for RDNA2. Are they going to use that space for some weird dual memory layout? I have no idea. It's possible, but it would also be unprecedented. I'm not dismissing it outright, but I'm not saying I believe it either.
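
A minimal sketch of the same napkin math, using the Xbox Series X numbers quoted above; the 25% allowance for the remaining blocks is the same assumption made in the text:

```python
# Napkin-math die-size estimate based on the Xbox Series X area breakdown above.
xsx_die_mm2 = 360.4
cu_fraction = 0.47                       # ~47% of the die is the 56 RDNA2 CUs
mem_fraction = 0.11                      # ~11% is the ten memory channels
other_mm2 = 0.25 * xsx_die_mm2           # display, PCIe, media blocks, etc. (assumed 25%)

area_per_cu_mm2 = cu_fraction * xsx_die_mm2 / 56        # ~3.0 mm² per CU
area_per_channel_mm2 = mem_fraction * xsx_die_mm2 / 10  # ~4.0 mm² per 32-bit channel

def estimate_die_mm2(cus: int, gddr6_bus_bits: int) -> float:
    channels = gddr6_bus_bits // 32
    return cus * area_per_cu_mm2 + channels * area_per_channel_mm2 + other_mm2

print(f"40 CU, 192-bit GDDR6: ~{estimate_die_mm2(40, 192):.0f} mm²")   # ~235 mm²
print(f"80 CU, 384-bit GDDR6: ~{estimate_die_mm2(80, 384):.0f} mm²")   # ~380 mm²
```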
 
Joined
Feb 13, 2012
Messages
522 (0.12/day)
That sure looks interesting. 2.5GHz for a GPU, even a relatively small one, is bonkers even if it's AMDs boost clock (=maximum boost, not sustained boost) spec. And if 40 CUs at 2.5GHz is also with a 170W TBP as some sites are reporting from this same data, that is downright insane. Also rather promising for overclocking of the 80CU SKU if that ends up being clocked lower. A lot of driver data like this is preliminary (especially clocks) but that also tends to mean that it's on the low end rather than overly optimistic. Which makes this all the more weird. I'm definitely not less interested in what they have to show in a month after this, that's for sure.

I'm pretty dubious about the chance of any dual concurrent VRAM config though. That would be a complete and utter mess on the driver side. How do you decide which data ought to live where? It also doesn't quite compute in terms of the configuration: if you have even a single stack of HBM2(e), adding a 192-bit GDDR6 bus to that ... doesn't do all that much. A single stack of HBM2e goes up to at least 12GB (though up to 24GB at the most), and does 460GB/s bandwidth if it's the top-end 3.6Gbps/pin type. Does adding another layer of GDDR6 below that actually help anything? I guess you could increase cumulative bandwidth to ~800GB/s, but that also means dealing with a complicated two-tier memory system, which would inevitably carry significant complications with it. Even accounting for the cost of a larger interposer, I would think adding a second HBM2e stack would be no more expensive and would perform better than doing a HBM2e+GDDR6 setup. If it's actually that the fully enabled SKU gets 2x HBM2e, cut-down gets 192-bit GDDR6, on the other hand? That I could believe. That way they could bake both into a single die rather than having to make the HBM SKU a separate piece of silicon like the (undoubtedly very expensive) Apple only Navi 12 last time around. It would still be expensive and waste silicon, but given the relatively limited amount of wafers available from TSMC, it's likely better to churn out lots of one adaptable chip than to tape out two similar ones.

I wonder if a new boost algorithm is in place to clock that high. Perhaps more like bursts or something.

As for the side memory thing: I don't believe they will use both configs at the same time; rather, it is believed to have two memory controllers, one for HBM and one for GDDR6.

RedGamingTech reported, and is adamant, that HBM is not for the gaming cards, and that the gaming cards will have some sort of side memory.
I'm curious to see if there is any truth behind that report. If side memory uses less energy than GDDR6 and is cheaper than HBM, then it's a win, I suppose. I hope that is the case, honestly, because of how useful it would be on APUs where bandwidth is limited.
It could also be a step before multi-chip gaming GPUs, where that side memory basically acts as an L4 cache to feed the chips, so beginning to move in that direction now would perhaps make the transition easier.
 
Joined
Oct 22, 2014
Messages
13,210 (3.81/day)
Location
Sunshine Coast
System Name Black Box
Processor Intel Xeon E3-1260L v5
Motherboard MSI E3 KRAIT Gaming v5
Cooling Tt tower + 120mm Tt fan
Memory G.Skill 16GB 3600 C18
Video Card(s) Asus GTX 970 Mini
Storage Kingston A2000 512Gb NVME
Display(s) AOC 24" Freesync 1m.s. 75Hz
Case Corsair 450D High Air Flow.
Audio Device(s) No need.
Power Supply FSP Aurum 650W
Mouse Yes
Keyboard Of course
Software W10 Pro 64 bit
The Newegg leak puts this SKU at 150W :kookoo:
Problem?
I don't see why that being much lower is an issue, efficiencies can improve.
 
Joined
May 8, 2020
Messages
578 (0.40/day)
System Name Mini efficient rig.
Processor R9 3900, @4ghz -0.05v offset. 110W peak.
Motherboard Gigabyte B450M DS3H, bios f41 pcie 4.0 unlocked.
Cooling some server blower @1500rpm
Memory 2x16GB oem Samsung D-Die. 3200MHz
Video Card(s) RX 6600 Pulse w/conductonaut @65C hotspot
Storage 1x 128gb nvme Samsung 950 Pro - 4x 1tb sata Hitachi 2.5" hdds
Display(s) Samsung C24RG50FQI
Case Jonsbo C2 (almost itx sized)
Audio Device(s) integrated Realtek crap
Power Supply Seasonic SSR-750FX
Mouse Logitech G502
Keyboard Redragon K539 brown switches
Software Windows 7 Ultimate SP1 + Windows 10 21H2 LTSC (patched).
Benchmark Scores Cinebench: R15 3050 pts, R20 7000 pts, R23 17800 pts, r2024 1050 pts.
Why not optimize everything (crank up the efficiency at least) and also start using HBM memory from the mid-range onwards? Seems like an impossible step for AMD, though (but only AMD can do such a thing) :\
Ah, and invest more $$$ into driver quality control... yeah
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Why not optimize everything (crank up the efficiency at least) and also start using HBM memory from the mid-range onwards? Seems like an impossible step for AMD, though (but only AMD can do such a thing) :\
Ah, and invest more $$$ into driver quality control... yeah
"Why not?" is a simple question to answer: cost. HBM is still very expensive, and even GDDR6 is cheaper and simpler to implement. If they could bring costs down to a level where this became feasible, they would be able to make some very interesting products - clearly demonstrated by the very impressive Radeon Pro 5600M. But the cost is likely still prohibitively high.

Problem?
I don't see why that being much lower is an issue, efficiencies can improve.
The problem is whether this is even remotely possible. A ~30-50% clock speed increase combined with a 33% drop in power consumption - all without a new production node to help reach this goal - would be completely unprecedented in the modern semiconductor industry. With most new nodes you're lucky if you get one of those two (clock speed gains or power consumption drop), and this isn't a new node, just a tweak of the existing one.
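
For reference, a quick sketch of those percentages against the closest current 40-CU part, the RX 5700 XT (1905 MHz boost, 225 W board power); the rumored clock and power figures are the ones discussed above:

```python
# Quick check of the percentages above, using the RX 5700 XT (40 CUs,
# 1905 MHz boost clock, 225 W total board power) as the reference part.
# The 2500 MHz and 150-170 W figures are the rumored numbers under discussion.

ref_boost_mhz, ref_tbp_w = 1905, 225
rumored_boost_mhz = 2500

clock_gain = rumored_boost_mhz / ref_boost_mhz - 1            # ~+31%
print(f"Clock increase vs 5700 XT boost: {clock_gain:+.0%}")

for rumored_tbp_w in (150, 170):
    power_drop = 1 - rumored_tbp_w / ref_tbp_w                # ~33% / ~24% lower
    print(f"At {rumored_tbp_w} W: board power {power_drop:.0%} lower than the 225 W 5700 XT")
```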
 
Joined
Mar 21, 2016
Messages
2,197 (0.74/day)
The problem is whether this is even remotely possible. A ~30-50% clock speed increase combined with a 33% drop in power consumption - all without a new production node to help reach this goal - would be completely unprecedented in the modern semiconductor industry. With most new nodes you're lucky if you get one of those two (clock speed gains or power consumption drop), and this isn't a new node, just a tweak of the existing one.
It's not necessarily a pure clock speed increase; it could be better IPC at the same clock speeds, which would also drop power consumption. It's worth noting that AMD's transistor density is still quite a bit lower than Nvidia's, so I wouldn't say it's impossible or unprecedented. Also look at what Intel's done with 14nm+++++++++++++++ to counteract and hold its ground and retain the single-threaded, high-frequency performance advantage it still carries. Sure, that's happened over a longer period of time, but there's no question AMD has had more R&D emphasis in the last five years or so devoted to Ryzen, while gradually shifting more back towards Radeon at the same time. I feel RDNA was the first major pushback from AMD on the graphics side, and RDNA2 could be a continuation of it. Nvidia's Ampere coinciding with a node shrink makes that more difficult, but let's face it, we know AMD didn't eke out all the performance that can be tapped from 7nm.

Nvidia has a higher transistor count on a larger node for starters, and we've seen what Intel's done with 14nm+++++++++++++ as well. The idea that 7nm isn't maturing is just asinine; it has definitely improved from a year ago, and it's entirely feasible that AMD can squeeze more transistors into the design - at least as many as Nvidia's previous designs, or more. We can only wait and see what happens. Let's also not forget that AMD used to be a fabrication company and spun off GlobalFoundries; the same can't be said of Nvidia. They could certainly be working closely with TSMC on improvements to the node itself for their designs, and we saw some signs that they did exactly that for Ryzen, working alongside TSMC to incorporate node tweaks to get more out of the chip designs on the manufacturing side.

It's just one of those things where everyone is going to have to wait and see what AMD came up with for RDNA2 - will it underwhelm, overwhelm, or be about what you'd expect from AMD all things considered? Nvidia is transitioning to a smaller node, so the ball is more in their court in that sense; however, AMD's transistor count is lower, so it's definitely not that simple. If AMD incorporated something clever and cost-effective, they could certainly make big leaps in performance and efficiency, and we know that AMD's memory compression already trails Nvidia's, so they have room to improve there as well. Worth noting is that AMD is transitioning toward real-time ray-tracing hardware, but we really don't know to what extent they plan to invest in it on this initial push. I think if they match a non-SUPER RTX 2080 on the ray-tracing side they're honestly doing fine; ray tracing isn't going to take off overnight, and the RDNA3 design can be more aggressive - things will have changed a lot by then, hopefully it'll be 5nm by that point, and perhaps HBM costs will have improved.
 