Friday, June 23rd 2023

Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?

AMD Radeon RX 7800 XT will be a much-needed performance-segment addition to the company's Radeon RX 7000-series, which has a massive performance gap between the enthusiast-class RX 7900 series, and the mainstream RX 7600. A report by "Moore's Law is Dead" makes a sensational claim that it is based on a whole new ASIC that's neither the "Navi 31" powering the RX 7900 series, nor the "Navi 32" designed for lower performance tiers, but something in between. This GPU will be AMD's answer to the "AD103." Apparently, the GPU features the same exact 350 mm² graphics compute die (GCD) as the "Navi 31," but on a smaller package resembling that of the "Navi 32." This large GCD is surrounded by four MCDs (memory cache dies), which amount to a 256-bit wide GDDR6 memory interface, and 64 MB of 2nd Gen Infinity Cache memory.

The GCD physically features 96 RDNA3 compute units, but AMD's product managers now have the ability to give the RX 7800 XT a much higher CU count than the "Navi 32" allows, while keeping it lower than that of the RX 7900 XT (which is configured with 84). It's rumored that the smaller "Navi 32" GCD tops out at 60 CU (3,840 stream processors), so the new ASIC would let the RX 7800 XT have a CU count anywhere between 60 and 84. The resulting RX 7800 XT could have an ASIC with a lower manufacturing cost than that of a theoretical Navi 31 with two disabled MCDs (>60 mm² of wasted 6 nm dies), and even if it ends up performing within 10% of the RX 7900 XT (and matching the GeForce RTX 4070 Ti in the process), it would do so with better pricing headroom. The same ASIC could even power the mobile RX 7900 series, where the smaller package and narrower memory bus would conserve precious PCB footprint.
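As a quick sanity check on those figures, here is a minimal sketch of the CU-to-shader arithmetic (64 stream processors per RDNA3 CU is AMD's published ratio; the specific configurations are the rumored ones from above, not confirmed specs):

```python
# Sketch of the CU-to-stream-processor arithmetic behind the figures above.
# 64 SPs per CU is AMD's published RDNA3 ratio; the configs are rumored.
SP_PER_CU = 64

configs = {
    "Navi 31 full GCD (RX 7900 XTX)": 96,
    "RX 7900 XT": 84,
    "Rumored RX 7800 XT ceiling": 84,
    "Rumored RX 7800 XT floor / Navi 32 top": 60,
}

for name, cus in configs.items():
    print(f"{name}: {cus} CU -> {cus * SP_PER_CU:,} stream processors")
```

Running it reproduces the 3,840 stream-processor figure quoted for a 60 CU part, and 5,376 for an 84 CU configuration.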
Source: Moore's Law is Dead (YouTube)

169 Comments on Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?

#101
Vayra86
john_@Vayra86 I agree with most of your post. I was screaming about RT performance and was called an Nvidia shill back then. With the 7900XTX offering 3090/Ti RT performance, how could someone be negative about that? How can 3090/Ti performance be bad today when it was a dream yesterday?
The answer is marketing. I believe people buy RTX 3050 cards over the RX 6600 because of all that RT fuss.
This is the thing. RDNA2 as well... I cannot for the life of me understand why they didn't get more out of that. I cannot understand why they aren't presenting the full value story 'while still doing 99% of all the things nicely'; instead we get random responsive blurbs about VRAM being important. At the same time, I guess with the misfires they keep having on GPU launches, what marketing can you really build on that? The chances of looking absolutely stupid are immense. 'Poor Volta'... the best marketing stunt in years, eh.

I mean really, I'm looking at a 7900XT and I have yet to find fault with it in actual usage. It's virtually the same experience as an Nvidia card, and in terms of settings/GUI it's a bit better. No GeForce Experience nagging you, but a properly functional and complete thing instead, containing all the stuff you want. In games, I see a GPU that boosts and clocks dynamically, isn't freaking out over random stuff thrown at it, and just does what it must do, while murdering any game I throw at it.
Posted on Reply
#102
Tek-Check
KellyNyanbinaryAverage MLID leak
Did we hear about a new package for Navi 31 die anywhere else before Tom leaked it?
nguyenSo there is no Navi32 GCD with the supposed bug fixes, i guess AMD would like to skip this gen and move on (while putting RDOA3 on fire sale)
"Supoosed bug fix" already?
And where is a "bug" in the first place?
In your mind?
tabascosauzThough I am curious what these clock "bugs" are, even when it comes to problems it seems like Navi31 has more relevant concerns that need to be solved first.
I have a reference model of the 7900XTX and play on an LG 4K/120Hz OLED TV with VRR over the HDMI port.

Gameplay is smooth and 10-bit images are fantastic.

Is there anything wrong with my card? I keep hearing that Navi 31 has 'issues'. I can't see them. Is anyone able to enlighten me?
john_I haven't spent much time reading stuff about AMD's older Navi 32, but with RDNA3's failure to offer (really) better performance and better efficiency over equivalent RDNA2 specs, the specs of the original Navi 32 were looking like a sure fail compared to Navi 21 and 22. Meaning we could get a 7800 that would have been at the performance level of, or even slower than, the 6800XT or even the plain 6800, and 7700 models slower than or on par with the 67X0 cards. I think the above rumors do hint that this could be the case. So, either AMD threw the original Navi 32 into the dust bin and we are waiting for mid-range Navi cards because AMD had to build a new chip, or maybe Navi 32 still exists and will be used for the 7700 series while this new one will be used for 7800 series models.

Just random thoughts ...
"Fail to offer better performance over RDNA2..." What kind of nonsense is this?

The 7900XTX is 50% faster in 4K than the 6900XT. To be 50% faster, it needed only 20% more CUs. All that for exactly the same price as in November 2020, not even counting inflation. So, the card is effectively even cheaper despite the same nominal value.

Where is the alleged "failure" in any of this?
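For what it's worth, a back-of-the-envelope sketch of that scaling argument (the 50% figure is the post's own; 80 and 96 are the public CU counts of the 6900XT and 7900XTX):

```python
# Back-of-the-envelope check of the claim above. The 50% uplift is the post's
# figure; 80 and 96 are the public CU counts of the 6900XT and 7900XTX.
cu_6900xt, cu_7900xtx = 80, 96
perf_gain = 1.50                      # claimed 4K performance uplift

cu_gain = cu_7900xtx / cu_6900xt      # 1.20 -> 20% more compute units
per_cu_gain = perf_gain / cu_gain     # ~1.25 -> ~25% more performance per CU

print(f"CU increase: {cu_gain - 1:.0%}")
print(f"Implied per-CU uplift (clocks + architecture): {per_cu_gain - 1:.0%}")
```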
Posted on Reply
#103
john_
rv8000It doesn’t matter what you’re a fan of. The opinion you keep reiterating is flat out wrong.
Your OPINION is that my OPINION is wrong.
First time in a forum?
Posted on Reply
#104
Tek-Check
kapone32there is nothing that tells what the performance of the 7800XT will be
Igor's Lab recently tested the Pro W7800 to simulate the possible performance of a future 7800XT with 70 CUs. It was up to 12% faster than the 6800XT. Treat it as a very rough approximation.
Posted on Reply
#105
john_
Tek-CheckWhat kind of nonsense is this?
Keep reading the thread. I posted more nonsense for you to enjoy.
Posted on Reply
#106
kapone32
Tek-CheckIgor's Lab tested recently Pro W7800 to simulate future 7800XT possible performance with 70CUs. It was up to 12% faster than 6800XT. Treat it as very rough attempt.
Yes, but we do not know the actual specs that it will have.
Posted on Reply
#107
Tek-Check
john_This is from the original review of TechPowerUp but I think it still remains a problem even today.
We know this. You are telling us 'news' that the water is wet... No reason to repeat what we know and what is on their list for a driver update.

Besides, anyone can try to fix this at home by lowering the refresh rate by 1 Hz.
Posted on Reply
#108
rv8000
john_Your OPINION is that my OPINION is wrong.
First time in a forum?
So because a random armchair enthusiast says there are no efficiency and/or performance gains from RDNA2 to RDNA3, we should just completely ignore the objective facts presented to us; legitimately tested data on the exact site you're making up facts on literally proves your opinion to be false. There is nothing else to say.
Posted on Reply
#109
Tek-Check
kapone32Yes but do not the actual specs that it will have
It doesn't really matter. The 7800XT will have at least 70 CUs, if not a few more. So, if lucky, it will reach performance close to the 6950XT.
Posted on Reply
#110
john_
Tek-CheckWe know this. You are telling us 'news' that the water is wet... No reason to repeat what we know and what is on their list for driver update.

Besides, anyone can try to fit this at home by lowering refresh rate by 1Hz.
Oh, I didn't know the water is wet. So maybe we should close up the forum section of this site now that we know water is wet.
rv8000So because a random arm chair enthusiast says there are no efficiency and or performance gains from RDNA2 to RDNA3 we should just completely ignore the objective facts presented to us; legitimately tested data on the exact site you’re making up facts on literally proves your opinion to be false. There is nothing else to say.
So, first time in a forum.

You jump into this thread, which is now on its 5th page, with probably ABSOLUTELY NO knowledge of what was written after page 1, and instead of hitting the brakes and telling yourself "wait, let's just see how things progressed in this thread", you keep playing the same song.
This is beyond boring. Find someone else to vent your nerves on.
Posted on Reply
#111
tabascosauz
TheoneandonlyMrKIf you're looking as hard as you did for, and at, issues and only came up with the three I just read, I wouldn't call that a shocking amount relative to Intel and Nvidia GPUs, but that's me. Fan stopping really seemed to bother you, plus multi monitor idle above all else; one of those doesn't even register on my radar most of the time. As for stuttering, I don't use your setup, you do, and some of that stuttering was down to your personal set-up, i.e. cables. Glad you got it sorted by removing the card entirely, but for many other silent users and some vocal ones, few of your issues applied.

And more importantly, this is made by the same people, but it isn't your card, so I think your issues might not apply to this rumour, personally.
Been thinking about going back, actually. The Pulse was not out when I had mine, and it has seen some good prices lately. Not out of discontent with my current card, just impulsive and ill-advised curiosity I guess. Maybe curiosity to see if they've done anything about the various different VRAM behaviours.

The multi monitor VRAM fluctuation is no longer on the latest release's bug list, but it also isn't listed as being fixed, so I'm not sure if they forgot to write it or will only add it back under issues later once it's verified to still exist.

AMD clearly cares and can achieve some excellent power optimization (Rembrandt and Phoenix), but struggles when chiplets are involved. It's no easy task, to be clear, but they did advertise the fanout link as being extremely efficient and capable of aggressive power management when Navi31 came out. Frustrating, because GCD power management is clearly outstanding, only to be squandered twice over by the MCDs and the fanout link.
Posted on Reply
#112
Assimilator
john_Unfortunately I have to agree with @Assimilator in his above comment. And I am mentioning Assimilator here because he knows my opinion about him. Simply put, he is in my ignore list!
But his comment here is correct.
Guess it goes to show that nerds can agree about something sometimes :p
Posted on Reply
#113
londiste
Dr. DroIt's an invalid comparison because they aren't the same architecture or work in a similar way, remember back in the Fermi v. TeraScale days, the GF100/GTX 480 GPU had 480 shaders (512 really but that config never shipped) while a Cypress XT/HD 5870 had 1600... nor can you go by the transistor count estimate because the Nvidia chip has several features that consume die area such as tensor cores and an integrated memory controller and on-die cache that the Navi 31 design does not (with L3 and IMCs being offloaded onto the MCDs and the GCD focusing strictly on graphics and the other SIPP blocks). It's a radically different approach in GPU design that each company has taken this time around, so I don't think it's "excusable" that the Radeon has less compute units because that's an arbitrary number (to some extent).
They work in a similar way. A shader is a shader, and the differences in the large picture are minor. There are differences in organization and a bunch of add-on functionalities, but unless there is a clear bottleneck somewhere in that - again, in the large picture of performance - it does not really matter.

RDNA3 and Ada are closer in many aspects than quite a few previous generations. Nvidia went for a 5nm-class manufacturing process for Ada, and RDNA3 is using the same; RDNA3 doubled up on compute resources similarly to what Nvidia did in Turing/Ampere. Nvidia went with a large LLC and narrower memory buses similarly to what AMD did in RDNA2.

Fermi vs TeraScale was a different time. And the DX11/DX12 transition on top of that. Off the top of my head, those 480 shaders in the GTX480 were running at twice the clock rate of the rest of the GPU. Wasn't TeraScale plagued by occupancy problems due to the VLIW approach, making those cards perform much slower than their theoretical compute capabilities? The drawbacks present back then have been figured out for quite a while now.

Transistor counts by and large still follow the shader count and, in recent times, the large cache. Other parts - even if significant, like Tensor cores - are comparatively smaller.
The RDNA3 chiplet design is not a radically different approach. It is clever and should be good for yields (=cost), but there really was no radical breakthrough here.
Dr. DroIf you ask me, I would make a case for the N31 GCD being technically a more complex design than the portion responsible for graphics in AD102.
What would that case be?
john_With 7900XTX offering 3090/Ti RT performance, how could someone be negative about that? How can 3090/Ti performance being bad today when it was a dream yesterday?
RT has become (much) more relevant, and RDNA3's competitor is not Ampere.
Also, while the 7900XTX offers 3090Ti RT performance, its raster performance is a good 20% faster...
Posted on Reply
#114
DemonicRyzen666
Tek-CheckSo, is Navi 41 going to be a chiplet GCD, inspired by MI300?

MI300 has GCD die with up to 38CUs.

Navi 41 could have three or even four of those dies.
MI300 has double the number of transistors the RTX 4090 currently has.
Posted on Reply
#115
Beginner Macro Device
Vayra86You're paying every cent twice over for Nvidia's live beta here, make no mistake.
I'm paying nothing; I've never bought an RTX card in my life. I utterly despise what jacket man is doing and I wish for nVidia to be hit by something really heavy. All they do is a negative effort.

AMD literally has a Jupiter-sized window wide open to adjust their products and drivers, yet we only see some performance improvements, nothing special, like AMD thinks they're the Intel of 2012, when they had no one to compete with, so making a product a dozen percent better than the one from three years ago is completely fine.

It's not. Neither party deserves a cake. The only way I'm buying their BS SKUs is with major discounts, because they openly disrespect customers, me included. And AMD is worse because they don't capitalize on nVidia's massive blunders.
Vayra86I can honestly not always tell if RT is really adding much if anything beneficial to the scene
This is only because we're a decade, maybe a couple of decades, too early for this technology. Nothing is powerful enough to push this art to its full beauty. RT is the answer to the complete deficit of proper mirrors and reflections in games. And the only stopping factor is hardware. It can't process it fast enough as of yet. When my post is as old as I am now, RT will be a default feature, not even gonna doubt that (unless WW3 puts us back to the iron age).
Vayra86do consider the fact Ampere 'can't get DLSS3' for whatever reason
nGreedia being nGreedia became obvious when they released their first RTX cards. It doesn't nullify my point though.

That being said, I'm just completely pessimistic about the GPU market for the nearest couple of generations. GPUs are ridiculously expensive and games are coming out in such terrible "quality" that it's them who should pay us to play, not the other way around.
Posted on Reply
#116
EatingDirt
john_Just go one page before the one you point to in that review and explain those numbers to me.
AMD Radeon RX 7600 Review - For 1080p Gamers - Power Consumption | TechPowerUp
I'm sorry, you don't seem to understand efficiency. There's a thing called performance, and there's a thing called power consumption. Different programs & scenarios use different power, and they also perform differently. I haven't read what Wizard uses to weigh efficiency, but I imagine it's probably something along the lines of average power consumption & performance over the duration of the gaming suite.

Again, this doesn't change the fact that the 7600 is, at the very least, 20% more efficient than the 6600XT.
john_Before answering remember this
The 7600 is a new arch on a new node; both should be bringing better efficiency for the 7600. But what do we see? In some cases the 6600XT with lower power consumption, in some cases the 7600 with lower power consumption.
The node change for the 7600, by the way, was mostly insignificant. TSMC 7nm to "6nm" was a density increase with little-to-no efficiency increase. This is more clearly seen on the watt-per-frame graph, where the 7900XT & 7900XTX, both 5nm chips, have frames-per-watt equal to that of Nvidia's 4000 series, and better efficiency than the 7600.

Nvidia on the other hand, went from a mediocre Samsung 8nm node to a much superior TSMC 4nm node with the transition from the 3000 series to the 4000 series. It's not surprising to see a boost in efficiency.

At this point, there's not that much separating Nvidia and AMD GPUs in terms of hardware. Everything disappointing about the current GPU generation comes down to price, along with the ridiculous naming "upsell" of all cards by both AMD & Nvidia.
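To make the frames-per-watt idea concrete, here is a minimal sketch of how such a metric could be computed; the fps and wattage values are placeholders for illustration, not TPU's measured data:

```python
# A minimal sketch of the frames-per-watt efficiency idea described above.
# The fps and wattage values are placeholders, not TechPowerUp's data.
def frames_per_watt(avg_fps: float, avg_power_w: float) -> float:
    """Average frames delivered per watt of board power (i.e. frames per joule)."""
    return avg_fps / avg_power_w

rx7600_eff   = frames_per_watt(avg_fps=60.0, avg_power_w=152.0)  # placeholder
rx6600xt_eff = frames_per_watt(avg_fps=50.0, avg_power_w=159.0)  # placeholder

print(f"RX 7600:   {rx7600_eff:.3f} frames/W")
print(f"RX 6600XT: {rx6600xt_eff:.3f} frames/W")
print(f"Relative advantage: {rx7600_eff / rx6600xt_eff - 1:.0%}")
```

The point is simply that an efficiency figure folds performance and power into one number, so two cards with similar power draw can still show a large efficiency gap if one renders more frames in the test scene.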
Posted on Reply
#117
fevgatos
TheoneandonlyMrKSo I would be interested in knowing a shocking number of other bugs, with proof.
Okay, plenty of games (Dota 2 as an example) crash the drivers when run on DX11. It was a very common issue with RDNA2, but I'm hearing it also happens with RDNA3.

Diablo 4 on RDNA2 makes your desktop flicker when you alt-tab to it; happens on my G14 laptop.

Driver installation requires you to disconnect from the internet. Which I guess is okay if you already know about it; I didn't, and it took me a couple of hours to figure out why my laptop wasn't working.
john_I mean why waste an extra of 30W of power while watching a movie?
Interesting. Are you of the same opinion with CPUs? Because AMD CPUs consume 30-40 W sitting there playing videos while Intel drops to 5 watts.
Posted on Reply
#118
john_
EatingDirtI'm sorry you don't seem to understand efficiency. There's a thing called performance, and there's a thing called power consumption. Different programs & scenario's use different power and they also perform differently. I haven't read what Wizard uses to weigh efficiency, but I imagine it's probably something along the lines of average power consumption & performance over the duration of the gaming suite.

Again, this doesn't change the fact that the 7600 is, at the very least, 20% more efficient than the 6600XT.
First you say I don't understand efficiency, then you say that you don't know what Wizard does to measure it. You just grab and hold onto that 20% number that suits your opinion.
You do realize the above shows that you don't understand efficiency either. You just throw away the power consumption numbers on the previous page of the review and keep that efficiency number because it supports your opinion.
What I see is that efficiency is measured under a very specific scenario, which is Cyberpunk 2077. So, if the 7600 enjoys an updated, optimized driver in that game, the result is in its favor.
Posted on Reply
#119
INSTG8R
Vanguard Beta Tester
Vayra86we can blame Raja"HBM"Koduri.
Poor Intel, now he's their problem with the "Raja Hype". I had Fury, I had Vega, I loved them both, but they were, for lack of a better term, just "experiments" that, while not terrible, were never great either outside of the unique tech.
Posted on Reply
#120
londiste
john_First you say I don't understand efficiency, then you say that you don't know what Wizard does to measure it. You just grab and hold that 20% number that suits your opinion.
You do realize the above shows that you don't understand efficiency either. You just throw away the power consumption numbers in the previous page of the review and keep that efficiency number because supports your opinion.
What I see is that efficiency is measured under a very specific scenario, which is Cyberpunk 2077. So, if 7600 enjoys an updated optimized driver in that game the result is in it's favor.
The methodology is there on the review pages. The efficiency page does state Cyberpunk 2077 but not the resolution/settings; the power consumption page does. The RX7600 is a full 26% faster than the RX6650XT in Cyberpunk at 2160p with ultra settings, and that probably persists in the power/efficiency results despite the lower texture setting.

The problem with this efficiency result is that overall relative performance on both RX7600 and RX6650XT at 2160p is basically the same. Similarly, power consumption for these two is also basically the same.

I do not believe it is the driver. But RDNA3 has tweaks in the architecture and setup that are likely to benefit a cutting edge game like Cyberpunk 2077.
Posted on Reply
#121
john_
londisteThe methodology is there on the review pages. Efficiency page does state Cyberpunk 2077 but not the resolution/settings but power consumption page does. RX7600 is a full 26% faster than RX6650XT in Cyberpunk at 2160p with ultra settings and that probably persists to the power/efficiency results despite lower texture setting.

The problem with this efficiency result is that overall relative performance on both RX7600 and RX6650XT at 2160p is basically the same. Similarly, power consumption for these two is also basically the same.

I do not believe it is the driver. But RDNA3 has tweaks in the architecture and setup that are likely to benefit a cutting edge game like Cyberpunk 2077.
The bold part is what makes me say that, in the end, I don't see anything meaningful from RDNA3 compared to RDNA2.

Now, going to the power consumption page, we see:
Idle: 7600 2W, 6600XT 2W
Multi monitor: 7600 18W, 6600XT 18W
Video playback: 7600 27W, 6600XT 10W
Gaming: 7600 152W, 6600XT 159W
Ray tracing: 7600 142W, 6600XT 122W
Maximum: 7600 153W, 6600XT 172W
VSync 60Hz: 7600 76W, 6600XT 112W
Spikes: 7600 186W, 6600XT 207W

From the above I see some optimizations in gaming and some odd problems. VSync 60Hz probably (IF I understand it correctly) shows that RDNA3 is way better than RDNA2 when asked to do some job that doesn't need to push the chip's performance to the maximum. Raster, maximum and spikes show that the 7600 is more optimized than the 6600XT, and the optimizations there could be in the rest of the PCB and its components, not the GPU itself. Idle at 2W can't improve further; multi monitor is at the 6600XT's level, so AMD probably didn't improve that area over the 6000 series. Ray tracing is extremely odd and video playback is problematic. Best case scenario, AMD does what it did with the 7900 series and brings video playback power consumption down to around 10-15W.

In any case, a new arch on a new node should be showing greens everywhere but idle, where it was already low. And not at almost equal average performance. The 7600 should have had the performance of the 6700 and power consumption equal to or better than the 6600XT in everything. We don't see this, especially the performance.
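For reference, a small sketch that turns the figures above into percentage gaps (same numbers as listed in this post; positive means the 7600 draws more):

```python
# Quick sketch: percentage gaps implied by the power figures listed above
# (7600 vs 6600XT, same numbers as in the post; positive = 7600 draws more).
scenarios = {
    "Idle":           (2, 2),
    "Multi monitor":  (18, 18),
    "Video playback": (27, 10),
    "Gaming":         (152, 159),
    "Ray tracing":    (142, 122),
    "Maximum":        (153, 172),
    "VSync 60 Hz":    (76, 112),
    "Spikes":         (186, 207),
}

for name, (w7600, w6600xt) in scenarios.items():
    delta = (w7600 - w6600xt) / w6600xt * 100
    print(f"{name}: 7600 {w7600} W vs 6600XT {w6600xt} W ({delta:+.0f}%)")
```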
Posted on Reply
#122
Dr. Dro
john_The bold part is what makes me saying that in the end I don't see from RDNA3 anything meaningful compared to RDNA2.

Now, going to the power consumption page we see
Idle : 7600 2W, 6600XT 2W
Multi monitor : 7600 18W, 6600XT 18W
Video Playback: 7600 27W, 6600XT 10W
Gaming: 7600 152W, 6600XT 159W
Ray Tracing: 7600 142W, 6600XT 122W
Maximum: 7600 153W, 6600XT 172W
VSync 60Hz: 7600 76W, 6600XT 112W
Spikes: 7600 186W, 6600XT 207W

From the above I see some optimizations in gaming and some odd problems. VSync 60Hz probably (IF I understand it correctly) shows that RDNA3 is way better than RDNA2 when asked to do some job that doesn't needs to push the chip's performance at maximum. Raster, maximum and spikes, that 7600 is more optimized than 6600XT. And optimizations there could be on the rest of the PCB and it's components, not the GPU itself. Idle at 2W can't help, multi monitor at 6600XT's levels, that probably AMD didn't improved that area over 6000 series. Raytracing is extremely odd and Video playback problematic. Best case scenario AMD to do what it did with 7900 series and bring the power consumption of video playback at around 10-15W.

In any case a new arch on a new node should be showing greens everywhere but idle, where it was already low. And not at almost equal average performance. 7600 should had the performance of 6700 and a power consumption equal or better in everything compared to 6600XT. We don't see this, especially the performance.
That's because there aren't any. Not that these 20-something-watt improvements even matter; you're not running a GPU like this on a system with an expensive 80+ Titanium PSU anyway, and depending on the machine it's installed in, conversion losses on budget PSUs could potentially make the 7600 worse off if the system load isn't high enough to reach the power supply's optimal conversion range. Yet here you are comparing an early stepping of the equivalent previous-generation ASIC to the newest one on the newest drivers! (This isn't a dig at you, just at how preposterous this concept is.)

I don't feel duty-bound to defend the indefensible; these GPUs are hot garbage, not that Nvidia's are any better below the 4090, and I've pointed out my bone to pick with the 4090 being heavily cut down more than once. This is a lost generation and I only hope the next one is better. I want a battle royale with Battlemage, RDNA 4 and Blackwell in the midrange, and a competent AMD solution at the high end. I literally want to give AMD my money, but they don't make it easy! Every. Single. Generation. there's some bloody tradeoff, some but or if, some feature that doesn't work or some caveat that you have to keep in mind. This is why I bought my RTX 3090 after 4 generations of being a Radeon faithful. I no longer have the time or desire to spend hours on end debugging problems, working around limitations, or missing out on new features because AMD deems them "not important", or they "can't afford to allocate resources to that right now", or "we'll eventually make an open-source equivalent" that is either worse than the competition (such as FSR) or never gets adopted. I want a graphics card I can enjoy now, not potentially some day down the road. The day AMD understands this, they will have gone halfway down the road to glory.
Posted on Reply
#123
Vayra86
Gosh I hadn't even looked at the 7600 review, because my interest in it does not even register on a scale with negatives, but yeah. Why did they release this POS?

It's an RDNA2 refresh with no USPs - look at that efficiency gap. That's not typical of RDNA3.
Beginner Macro DeviceI'm paying nothing, I've never bought no RTX card in my life. I utterly despise what jerket man is doing and I wish nVidia to be hit by something really heavy. All they do is a negative effort.

AMD literally has a Jupiter sized window wide open to adjust their products and drivers, yet we only see some performance improvements, nothing special, like AMD is thinking they're Intel of 2012 when they had no one to compete with so making a dozen percent better product than the one from three years ago is completely fine.

It's not. Neither party deserves a cake. The only way I'm buying their BS SKUs is major discounts because they openly disrespect customers, me included. And AMD is worse because they don't use massive blunders by nVidia.

This is only because we're a decade, maybe a couple decades too early for this technology. Nothing powerful enough to push this art to its beaut. RT is the answer to complete mirrors and reflections deficite in games. And the only stopping factor is hardware. It can't process it fast enough as of yet. When my post will be as old as me now RT will be a default feature, not even gonna doubt that (unless TWW3 puts us back to the iron age)


nGreedia being nGreedia became obvious when they released their first RTX cards. It doesn't nullify my point though.

That being said, I'm just completely pessimistic about GPU market of the nearest couple generations. GPUs are ridiculously expensive and games are coming in "quality" so terrible it's them who must pay us to play it, not the opposite.
My 'you' is always a royal you unless specified otherwise :)
But yeah, I share some of your pessimism. OTOH, it's been worse - during Turing and mining, for example. What an absolute shitshow we've been having. Perhaps we're still on the road to recovery altogether.
john_You have the card, we look at numbers from the TPU review. Either your card works as it should, which would be great and would mean something went wrong with the review numbers, or something else is happening.
You have been looking at the wrong numbers, as I also pointed out somewhere earlier or somewhere else - RDNA3's efficiency is very close to Ada's, and is chart-topping altogether. See above. The worst-case scenario gap between RDNA3 and Ada is 15% in efficiency, if you take the 4080 at 4.0W versus the 4.6W of the 7900. Also take note of the fact that monolithic and pretty linearly scaled Ada is itself showing gaps of 15% between cards in its own stack just the same.
AssimilatorI love how people are using this image as evidence that AMD isn't rubbish at low-load power consumption. Look at the bottom 5 worst GPUs. LOOK AT THEM. Who makes them?

Then look at the 3090 and 3090 Ti. They're near the bottom, yet their successors are in the middle of the pack - almost halving power consumption. It's like one company is concerned with making sure their product is consistently improving in all areas generation-to-generation, while the other is sitting in the corner with their thumb up their a**.

The fact that AMD has managed to bring down low-load power consumption since the 7000-series launch IS NOT something to praise them for, because if they hadn't completely BROKEN power consumption with the launch of that series (AND THEIR TWO PREVIOUS GPU SERIES), they wouldn't have had to FIX it.

Now all of the chumps are going to whine "bUt It DoESn'T mattER Y u MaKIng a fUSS?" IT DOES MATTER because it shows that one of these companies cares about delivering a product where every aspect has been considered and worked on, and the other just throws their product over the fence when they think they're done with it and "oh well it's the users' problem now". If I'm laying down hundreds or thousands of dollars on something, I expect it to be POLISHED - and right now only one of these companies does that.


It's an incredibly lazy non-argument used by those who are intellectually bankrupt. The best course of action is to ignore said people.
Reading this being said of the company that presented us with the shoddy and totally unnecessary 12VHPWR is... ironic. The same thing goes for all those 3-slot GPUs that have no business being 3-slotters given their TDPs. Ada is fucking lazy, and much like RDNA3 it barely moves forward; the similarities are staggering. There is a power efficiency jump from the past gen, and that's really all she wrote. The rest is marketing BS. Nvidia just presents its thumb-up-ass story in a better way, that is really all it is. GeForce is a hand-me-down from enterprise-first technology, now more than ever.

The vapor chamber fail on AMD's side was worse though, I agree.

But Nvidia, polished? That was three generations ago. GTX was polished. RTX is an open beta, and its featureset isn't even backwards compatible within its own short history. DLSS3 not being supported pre-Ada is absolutely not polish, a great product, or caring about customers. Ampere was a complete shitshow, and Ada is now positioned primarily to push you into a gen-to-gen upgrade path due to the lack of VRAM on the entire stack below the 4080. I think you need a reality check, and fast.
Posted on Reply
#124
Beginner Macro Device
Vayra86My 'you' is always a royal you unless specified otherwise
Sorry, it was very late at night and I'm generally retarded. Forgot this royal "you" ever existed.
Vayra86been worse. During Turing and mining for example.
Turing was a textbook example of "what the heck," whereas mining was a catalyst, not the real reason for their treason. Now it is worse. They now have no excuse at all for their sheer greed, except for the fact that it's de facto nGreedia's monopoly, since AMD haven't struck back since... the late noughties, I might guess. Their so-far-best product line, aka the RX 6000s, was borderline impossibly priced at launch. Had the mining hysteria been non-existent, AMD would have been forced to almost halve their prices in order to compete with Ampere devices. The RX 6700 XT was slower than the RTX 3070 at launch and didn't perform RT-wise at all, while also lacking DLSS. Their MSRPs are about the same (480 USD for the 6700 XT versus 500 USD for the 3070).

So, despite nGreedia deserving their greedy-men status, we all have to consider AMD even worse because they have no reason to be greedy.
Posted on Reply
#125
lexluthermiester
TriCyclopsNo I meant what I said, not what you think I said.
What you meant to say was very clear. Very transparent. But ok, whatever.
john_I mean why waste an extra of 30W of power while watching a movie?
You seem to overestimate how much 30 W is. It's not a great deal of power. However, that particular situation was solved shortly after release.
Posted on Reply