It seems to me that you are very good at constructing long texts devoid of any argument other than saying that everyone who thinks for themselves and asks questions is a fanboy, and that absolutely everything Nvidia does is right and the undeniable, unquestionable future. A suggestion: you might as well replace the text with a photo of you kissing a Huang statue and achieve the same effect!

My laptop's RTX 3050 does a pretty decent job, considering what it is and that it has 4 GB of VRAM.
You nailed it. Personally, I think the RT vs. no-RT debate is largely fueled by tribalism and the wider AMD vs. NVIDIA feud. It has been painfully obvious that AMD's hardware has historically been brutally incompetent at handling this type of workload: they were years late to the party, remain a full generation behind, and even with RDNA 3 they're not quite there yet. That insane gap between Nvidia and AMD breeds resentment - and Nvidia has actively exploited it in their marketing material, taking advantage of being the only vendor offering complete DirectX 12 Ultimate support and hardware-accelerated RT to keep their ASPs high during the twilight years of GCN and the bug fest that was the original RDNA.
Couple that with frame rates being relatively poor even on high-end hardware (at least until high-end Ampere came about), and budget-conscious customers who shop in the performance segment or below (the only market where AMD has any meaningful presence) dismiss RT as "superfluous" and make often baseless arguments against the tech, in a futile attempt to "reject" the change.
But it's the future, whether one likes it or not. Games will keep heading towards ray and path tracing, ray-traced global illumination, and so on - because that is the only way to push graphics further from where they currently stand. True, graphics don't make the game - but some games do need the graphics, and I'll admit that I like games with eye candy and a high degree of immersion... or I'd just have an RTX 3050 on my desktop, too.
In the Navi 21 design, each RT accelerator is bound to a compute unit, with each workgroup processor (WGP) consisting of two compute units. If you simply doubled the accelerators, they'd likely be resource-starved and even less efficient. It wouldn't necessarily be faster - possibly slower, even.
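To see why doubling the Ray Accelerators alone wouldn't help, here's a minimal back-of-envelope sketch in Python. The rates are made-up illustrative numbers, not measured hardware figures; the point is just that RT throughput is gated by the slowest stage, and on RDNA 2 the BVH traversal and shading work still runs on the CUs' ordinary shader ALUs:

```python
# Toy bottleneck model of RDNA 2-style RT. Illustrative rates only,
# not measured figures.

def rt_throughput(ray_accels, cus, isect_rate_per_ra=1.0,
                  shade_rate_per_cu=0.6):
    """Effective rays/clock = min of the intersection stage (Ray
    Accelerators) and the traversal/shading stage, which on RDNA 2
    runs on the CUs' regular shader ALUs."""
    intersection = ray_accels * isect_rate_per_ra
    traversal_and_shading = cus * shade_rate_per_cu
    return min(intersection, traversal_and_shading)

baseline = rt_throughput(ray_accels=80, cus=80)   # 1 RA per CU (Navi 21)
doubled  = rt_throughput(ray_accels=160, cus=80)  # 2 RAs per CU, same CUs

print(baseline, doubled)  # both 48.0: the CU-side stage still limits
```

With these numbers the extra intersection hardware sits idle, because the CU-bound stage was the bottleneck all along - which is the resource-starvation problem described above.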
The non-XT RX 6800 is simply a lower-quality Navi 21 die with 25% of it disabled: only three of the four shader-engine slices (each comprised of ten WGPs, i.e. 20 CUs) are enabled on this model. The only way to extract more out of this generation was to enable the remaining units (6800 XT, 6900 XT) and then clock it higher (6950 XT), trading power efficiency in an attempt to defy scaling - see the unit math below. If the 6800 achieves 60% of the RT performance of the 6950 XT, it has already done what it had to do, but it's not enough: AMD's first-generation RT design in RDNA 2 really sucks compared even to Turing.
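For reference, a quick sketch of that unit math. The clock figures are AMD's published reference boost clocks, and linear scaling with units and clocks is a simplifying assumption (real games scale worse), so treat the result as an upper bound:

```python
# Navi 21 unit counts: 4 shader-engine slices, each 10 WGPs,
# each WGP = 2 CUs, 1 Ray Accelerator per CU.
slices, wgps_per_slice, cus_per_wgp = 4, 10, 2
full_cus = slices * wgps_per_slice * cus_per_wgp       # 80 CUs / 80 RAs

rx6800_cus = 3 * wgps_per_slice * cus_per_wgp          # 3 of 4 slices -> 60
print(rx6800_cus / full_cus)                           # 0.75 -> 25% disabled

# Naive upper bound on 6800 vs. 6950 XT RT throughput, assuming it
# scaled linearly with enabled CUs and boost clock (it doesn't, quite):
boost_6800, boost_6950xt = 2105, 2310                  # MHz, AMD reference
print((rx6800_cus / full_cus) * (boost_6800 / boost_6950xt))  # ~0.68
```

So ~68% is the theoretical ceiling; landing around 60% in practice would mean the cut-down die is doing its job.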
RT is inefficient: it consumes die space, energy, and resources, particularly in the context discussed. With manufacturing processes becoming increasingly expensive and cache no longer shrinking substantially, proportional advances are unlikely in the foreseeable future. At this stage, the challenges ahead make it apparent that there's not enough margin for genuinely viable RT in average consumer GPUs.

There is no counter-argument other than generic texts, because no one refutes reality.