Wednesday, May 3rd 2023

AMD CEO Dr Lisa Su Confirms Mainstream RDNA3 GPUs in Q2-2023

AMD CEO Dr Lisa Su, in her Q1-2023 Financial Results call with investors and analysts, confirmed that the company plans to expand the Radeon RX 7000 series with new "mainstream" GPUs based on the RDNA3 graphics architecture this quarter (Q2-2023). This confirms the launch of the Radeon RX 7600 XT later this month, but could also hint at other SKUs the company considers mainstream, such as the RX 7500 XT. AMD has long considered the RX x700 series performance-segment, and an RX 7600 XT launch right after the high-end RX 7900 series hints that the company is still working out the economics of its RX 7700 series and RX 7800 series.

"In gaming graphics, channel sell-through of our Radeon 6000 and Radeon 7000 series GPUs increased sequentially. We saw strong sales of our high-end Radeon 7900 XTX GPUs in the first quarter, and we're on track to expand our RDNA 3 GPU portfolio with the launch of new mainstream Radeon 7000 series GPUs this quarter," said Dr Lisa Su. With GPU prices in free-fall since the GPU-accelerated crypto-mining crash, AMD is in the process of clearing out its Radeon RX 6000 series inventory as it creates room for the RX 7000 series. Enthusiast-segment SKUs of the yesteryear, such as the RX 6900 series, could be had at prices under $600.
Sources: Seeking Alpha, VideoCardz

29 Comments on AMD CEO Dr Lisa Su Confirms Mainstream RDNA3 GPUs in Q2-2023

#26
AusWolf
Avro Arrow: I don't know what you're talking about because that has never been true. The Phenom II X4 940 drew almost 220W at max load while the FX-8350 drew over 250W at max load.

Check this out:

AM2+ Era (Techspot): [total system power chart]

AM3+ Era (Techspot): [total system power chart]

And check out the FX-9590's numbers from AnandTech!

AM4 Era (Techspot): [total system power chart]
(Note that the Ryzen 7 5700X consumes 32 fewer watts than the Ryzen 7 5800X, so it would be at 174W total system draw.)

AM5 Era: [total system power chart]
In the AM5 era, Intel's CPUs just look like hyper-OC'd versions of their previous gens, while AMD has that stupid "race to 95°C" thing maxing out power use, because both of them just want their performance numbers maxed out for review benchmark charts like these. IIRC, the R7-5800X3D uses a bit more power than the R7-7700X in Eco Mode. I get the feeling that Eco Mode is the same as AMD Cool'n'Quiet, a setting that was turned on by default in all AMD CPUs and APUs before Zen4.

Other than that, it doesn't appear that CPU power usage has appreciably gone up over the years. They're (almost) all in the 150-275W total system power range between the AM2+ and AM4 eras, with power consumption in the AM5 era being artificially inflated to produce greater performance numbers. So, no, 125W did not mean 125W any more than it does today (unless you're Intel and say that the i9-13900K has a TDP of 125W). Tech advancement not only increases performance, it also increases efficiency.

The most power-hungry consumer-grade CPU before the i9-13900K was the FX-9590 from the AM3+ era. It didn't perform even close to the R7-5950X but it used a crap-tonne more power.

Hell, even with the insanely powerful video cards of today, the most power-hungry video card ever made was released nine years ago, in 2014, with a TDP of 580W. The suggested PSU for this card was 950W.
Powercolor Radeon R9 290x2 Devil 13 4GB

Things aren't nearly as bad today with regard to power use as it appears. It's just that, with the war in Ukraine and the resultant spike in energy costs across the EU (caused by terrible energy decisions made by clueless politicians), power usage has come under more of a microscope than ever before. Couple that with the artificially inflated power consumption numbers caused by AMD and Intel wanting to occupy the "top spot" on benchmark charts. Let's face it, people are just plain stupid sometimes. They behave like the top-spot CPU or GPU is somehow relevant to them even if they're not buying that specific product. Like, sure, the RTX 4090 is the fastest card in the world, but what does that have to do with the noob who bought an RTX 4070 because he assumed it must be faster than an RX 7900 XT because "It's nVidia, just like the RTX 4090!"?

This is the kind of guano-insane mindset that has brought us to where we are now.
But the diagrams you linked show total system power. I was talking about CPU-only power consumption. If you compare the numbers on your linked diagrams, you see that the FX-8150 is around the 7700X's level, which was absolutely insane back then, but it doesn't even come close to the 7950X, which sits a good 100 W higher. That's what motherboards have to deal with today, and that's (partly) why there's a bigger difference between the low and high end.
Avro Arrow: ^^^ From the post that you were responding to. Please note the bold/italic text. ^^^
Since you "like" it when I repeat myself... :roll:

Power sipped through 4 slots is not the same as power consumed through one single socket. In the first case, you just build a normal PCI-e circuitry 4 times. In the second case, you have to design an entirely new power delivery to suit the higher load.
#27
Avro Arrow
bug: @Avro Arrow If anything, more complex parts are all but expected to fail sooner. They're pricier so one would assume they undergo more thorough testing. I somehow doubt that, since added functionality/parts increase test scenarios exponentially.
Sure, that's also a possibility, but that also didn't usually happen. Companies like MSi know what they're doing when it comes to making motherboards; it's old hat to them and would've been old hat even back then. There was probably just a tiny flaw somewhere on the board that got missed and I was the unlucky recipient of the flaw. The reason that I'll never buy MSi again is that they were a$$holes about it. See, if I was running customer service for a company like that, sure, the warranty period is the warranty period, but, if a customer had purchased one of my expensive flagship products and it failed only three months after the warranty period was over, I would totally allow the customer to send the item in for examination. If it was clear that they'd done nothing to cause the problem, I would definitely take care of them. A customer who buys a flagship product is valuable, and given the choice between eating $100 to gain a loyal-as-hell customer (remember, the warranty had technically expired) who buys flagship boards, or saving $100 and possibly losing that customer (because their perception of my company would be terrible at that point), I'd choose the former seven days a week and twice on Sundays. If a flagship product fails only three months after the warranty expires, a company should be embarrassed by that. Instead, MSi was completely nonchalant.

At the end of the call, I informed them that I worked for Tiger Direct and that I would not sell another MSi-branded item for as long as I worked there. I estimate that they lost about $20,000 in sales over the next year while ASRock, ECS and Gigabyte probably gained the most benefit as a result.
bug: Also, TigerDirect... the latest to bite the dust :cry:
Yeah, but don't feel bad. It bit the dust because it was a terrible company with terrible management. The upper-level management was a bunch of crooks and cronies. Tiger Direct deserved to die.
AusWolf: But the diagrams you linked show total system power.
Yeah, they ALL do, and since pretty much everything else in the system hasn't really changed in power use, they're all relevant. What, do you think that a hard drive or some RAM uses an extra 50W?
AusWolf: I was talking about CPU-only power consumption. If you compare the numbers on your linked diagrams, you see that the FX-8150 is around the 7700X's level, which was absolutely insane back then, but it doesn't even come close to the 7950X, which sits a good 100 W higher. That's what motherboards have to deal with today, and that's (partly) why there's a bigger difference between the low and high end.
My point was that there have been high-watt CPUs in every era and so motherboards had to be made to deal with them. Hell, in the AM2 era, Phenom I CPUs sometimes melted their motherboards. Do you think that happened because they didn't use much juice? Oh hell no! :laugh:
AusWolf: Since you "like" it when I repeat myself... :roll:
It sure beats repeating me! :D
AusWolf: Power sipped through 4 slots is not the same as power consumed through one single socket.
Not at the socket site itself but it all comes from a single source that must be made more robust to handle that.
AusWolf: In the first case, you just build a normal PCI-e circuitry 4 times. In the second case, you have to design an entirely new power delivery to suit the higher load.
And what is that normal PCI-e circuitry all attached to? The power distribution circuits of the motherboard itself, where 300W flows as 300W before it's divided up into 4 circuits of 75W. That's how circuits work. No matter what the wattage is at each endpoint, the source has to carry all of them added together.
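To put rough numbers on that argument, here's a minimal sketch (the 300W and 75W figures are the illustrative values from above, not measurements):

```python
# Illustrative sketch: the board's input rail has to supply the sum of the
# downstream loads, no matter how that load is split up.
pcie_slots = [75, 75, 75, 75]   # four slots, each at the 75 W PCIe slot limit
cpu_socket = [300]              # one socket drawing 300 W (illustrative figure)

for name, loads in (("4x PCIe slots", pcie_slots), ("1x CPU socket", cpu_socket)):
    print(f"{name}: source must supply {sum(loads)} W")
# Both cases print 300 W -- the source-side total is the same; only the
# per-branch delivery circuitry differs, which is the point being argued here.
```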
#28
bug
@Avro Arrow Yeah, I avoid MSI because of some subpar interaction with their customer support as well.

As for TigerDirect, I can't say I gave them a lot of business (I am US based). It's still sad to see brick-and-mortar going through these hard times. I know online is all the rage, but if you want to try a mouse or a keyboard before buying, or just look at a monitor to gauge whether it's all the reviews make it out to be... well, good luck with that. Sure, you can return your online purchase, but that's just wasteful. And you get to pay for it.
#29
AusWolf
Avro Arrow: Yeah, they ALL do, and since pretty much everything else in the system hasn't really changed in power use, they're all relevant. What, do you think that a hard drive or some RAM uses an extra 50W?
No, but since I was talking about CPU-only power consumption, those diagrams aren't really an answer to what I said.
Avro Arrow: My point was that there have been high-watt CPUs in every era and so motherboards had to be made to deal with them. Hell, in the AM2 era, Phenom I CPUs sometimes melted their motherboards. Do you think that happened because they didn't use much juice? Oh hell no! :laugh:
Of course, because OCP, OVP and such weren't as robust as they are these days. VRMs also melted some motherboards because they were poorly built. My point stands: the difference between entry-level and high-end is much greater now than it used to be.
Avro Arrow: Not at the socket site itself but it all comes from a single source that must be made more robust to handle that.
Oh, the 24-pin cable/connector can handle that. ;)
From Wikipedia:
The 20–24-pin Molex Mini-Fit Jr. has a power rating of 600 volts, 8 amperes maximum per pin (while using 18 AWG wire).[16] As large server motherboards and 3D graphics cards have required progressively more and more power to operate, it has been necessary to revise and extend the standard beyond the original 20-pin connector, to allow more current using multiple additional pins in parallel. The low circuit voltage is the restriction on power flow through each connector pin; at the maximum rated voltage, a single Mini-Fit Jr pin would be capable of 4800 watts.
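For what it's worth, the per-pin figure in that quote is just P = V × I; here's a quick sketch (the 12 V case is an illustrative addition for an ATX rail, not part of the quote):

```python
# Worked version of the quoted Molex rating (P = V * I).
pin_current_max = 8      # amps per pin with 18 AWG wire, per the quote
for volts in (600, 12):  # 600 V = max rated voltage; 12 V = a typical ATX rail (added example)
    print(f"{volts} V x {pin_current_max} A = {volts * pin_current_max} W per pin")
# 600 V -> 4800 W per pin (the figure in the quote); at 12 V the same pin tops
# out at 96 W, which is why the quote says the low circuit voltage is the real
# restriction on power flow through each pin.
```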
Avro Arrow: And what is that normal PCI-e circuitry all attached to? The power distribution circuits of the motherboard itself, where 300W flows as 300W before it's divided up into 4 circuits of 75W. That's how circuits work. No matter what the wattage is at each endpoint, the source has to carry all of them added together.
Okay, show me how complicated the 75 W PCI-Express power delivery circuit is compared to a CPU VRM.