Sunday, August 6th 2023

AMD Retreating from Enthusiast Graphics Segment with RDNA4?
AMD is rumored to be withdrawing from the enthusiast graphics segment with its next RDNA4 graphics architecture. This means there won't be a successor to its "Navi 31" silicon that competes at the high end with NVIDIA, but rather one that competes in the performance segment and below. It's possible AMD can't justify the cost of developing high-end GPUs given the volumes it can push over a product's lifecycle. The company's "Navi 21" GPU benefited from the cryptocurrency-mining swell, but just like NVIDIA, the company isn't able to move enough GPUs at the high end.
With RDNA4, the company will focus on specific segments of the market that sell the most, which would be the x700-series and below. This generation will be essentially similar to the RX 5000 series powered by RDNA1, which did enough to stir things up in NVIDIA's lineup, and trigger the introduction of the RTX 20 SUPER series. The next generation could see RDNA4 square off against NVIDIA's next-generation, and hopefully, Intel's Arc "Battlemage" family.
Source:
VideoCardz
363 Comments on AMD Retreating from Enthusiast Graphics Segment with RDNA4?
The 7900 XT was an increase over the 6800 XT (both are the 2nd fastest).
Look, I'm not trying to say the 7900 XTX is a bad card. Performance is great. As you point out, you get more for the same money they used to charge for the 6900 XT. AMD still gives consumers more value than Nvidia when you want gaming performance. MCMs are the future, AMD has positioned itself as the clear leader in MCM tech for GPUs, and this experience will no doubt help in the future. I'm sure we are all hoping for Zen to Zen 2 to Zen 3 types of performance and efficiency gains for RDNA. I do agree with a lot of what you said. I just happen to think there have been bumps in the road with this transition, and I don't think moving to MCM was all beneficial, especially since they could not secure node parity with Nvidia, which magnifies all its shortcomings.
Are you an engineer in the computing field? I know I'm not. It'd be great to have a chance to see how a monolithic RDNA3 top-end GPU would have panned out in comparison. Again, I'm not gonna be naive and say AMD, Nvidia, or any other tech monster was flat-out wrong to design something as they did or choose a node, with absolutely zero inside information on why or how those choices affect design and costs.
I don't care which company we're talking about: to say you otherwise know better, or state as fact that something was the wrong business choice, is foolish. If you have the technological knowledge and proprietary information to prove otherwise, be my guest.
- Ampere was chosen to be produced at Samsung's fab because of price savings, and they knew they would win anyway (which they did, even if not at all resolutions).
- Turing was produced on a very, very mature node because TU102 was a gigantic chip by the standards of gaming chips from any era. At 754 mm² it's a monster, and therefore needed a very mature node to be produced without losing too many chips, since they only sold a slightly cut-down variant (2080 Ti) and the full thing (TITAN RTX).
- Historically, NV was a bit more cautious and conservative with new nodes than ATI/AMD. This is generally a wise decision, but sometimes it paid off for ATI/AMD to be faster.
edit, to make the summary complete:
- Ada was again produced at TSMC on a modified 5 nm node to be more competitive, since Samsung's fab was simply not up to par. This is basically Nvidia getting "serious" after Ampere was produced on a weaker node and ended up barely faster than RDNA 2 aside from ray tracing.
Where is your evidence that MCM is the reason for the performance and efficiency advantages over the monolithic 6900 XT? All you have are performance numbers that show that, yes, the newer, more advanced product is more efficient and performant than the previous model. Nvidia did the same thing when you compare the 3090 vs. the 4090. How did they do that without the move to MCM?! Looks like you are also guilty of being naive and saying something is happening with absolutely zero inside information on why or how. But if you have the technical knowledge to prove otherwise, be my guest.
I've been trying to find some common ground with you, but you just want to argue, so ok. I understand you want to defend AMD, but it's not unfair to say their implementation of MCM still needs work.
But I didn't think they would go this far and step back to only produce lower-end products like Polaris, and have us wait two-plus years before getting something like the 6900 XT.
Sucks, this is just gonna embolden NVIDIA to straight up come out and say the RTX 5090 is $2,000, the 5080 Ti $1,800, etc., kind of like what happened during Turing... ugh, history repeating itself in such a short time frame sucks.
In 1080p:
1. 7% slower overall (mixed RT and non-RT games)
2. 2% slower when RT titles are removed - raster is literally the same on both cards
3. 20% slower in RT only - the caveat here is that the 4060 is choked with RT on; the hardware is slow (even the title in your video shows 4050...)
In 1440p:
1. 9% slower overall (mixed RT and non-RT games)
2. 2% slower when RT titles are removed - raster is literally the same on both cards
3. 28% slower in RT only - RT is useless here, unless helped by DLSS and FG in a few titles; then latency becomes an issue... (see the sketch after this list for how such overall averages are computed)
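Purely as an illustration of how per-game results roll up into "overall" deltas like the ones above: the sketch below uses made-up FPS numbers and a geometric mean of per-game ratios, which is one common way to aggregate such results (not necessarily the exact method the source reviews use).

```python
# Illustrative only: how per-game FPS numbers roll up into an "overall"
# relative-performance figure. The FPS values here are made up, not taken
# from any review.
from math import prod

fps_7600 = {"Game A": 92, "Game B": 61, "Game C": 140, "Game D (RT)": 38}
fps_4060 = {"Game A": 95, "Game B": 66, "Game C": 138, "Game D (RT)": 49}

def relative_perf(card, baseline):
    """Geometric mean of per-game ratios (card vs. baseline)."""
    ratios = [card[g] / baseline[g] for g in card]
    return prod(ratios) ** (1 / len(ratios))

overall = relative_perf(fps_7600, fps_4060)
print(f"RX 7600 vs RTX 4060 overall: {overall:.1%} ({overall - 1:+.1%})")

# Excluding RT titles changes the picture, as in points 2 vs. 3 above:
raster_7600 = {g: v for g, v in fps_7600.items() if "RT" not in g}
raster_4060 = {g: v for g, v in fps_4060.items() if "RT" not in g}
print(f"Raster only: {relative_perf(raster_7600, raster_4060):.1%}")
```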
There are already retailers in Europe selling the 7600 below MSRP, for example for £230 in the UK plus a free copy of Starfield, which makes the Radeon 7600 more attractive with the game bundle, saving buyers over 100 bucks altogether.
Nvidia is truly ahead because they secured TSMC's next node before AMD. It doesn't matter, though; the narrative has it wrong for me. If AMD can release (and has released) a 16-core chip for laptops, chiplets will become the future in computing. For me, the 7940 has a 6500 XT GPU inside of it. As a result, I think AMD is taking a from-the-bottom approach with GPUs. Right now there is no card from Nvidia that will give you the performance of the 6800 XT for $500+, as the 4060 Ti vs. 6800 XT comparison is evidence of. I have now seen that the new 7700 XT and 7800 XT will have 4 to 6 chiplets. When the I/O die goes to 5 nm, AMD will be even faster.

The truth is that people wanted AMD to challenge Nvidia, and the release of the 7900 XTX was meh at best, but it was not that simple. There is no OC room to speak of on the 4090 because the card came turned up to 11. Meanwhile, the XT and XTX cards did not get that treatment until the AIB models arrived, but even then users get a 400-500 MHz bump in boost with just one click. I am not saying that a 4090 is not better than a 7900-series card as a graphics processing unit, but my focus is on gaming, and 4K is for the sweet visuals; you need a GPU with the grunt to make it enjoyable. I fully expect that there will always be an AMD variant that satisfies the high end, but they are going from the bottom up with the next release. I feel that the 7600 may be an actual red herring so that the price of the 6700 XT could be maintained. I expect that by the time games are too tough for the 7900 series, I may look into FSR.
This crazy news is refuted.
Even a rebranded 7950 XTX would have made more sense anyway.
Also, AMD is working on RDNA 5, now.
Also, AMD is working on the PS5 Pro, an Xbox refresh, node ports for older consoles in general, other custom SoCs like the Steam Deck and Tesla infotainment, vast amounts of FPGA work, and also next-generation server parts.
Probably a resource issue and a desire to up their AI game.
www.semianalysis.com/p/die-size-and-reticle-conundrum-cost Nvidia will be milking monolithic design on client GPUs until they are FORCED to move to chiplets, on the new high-NA EUV machines that TSMC has already ordered from ASML. In the datacenter, everybody has moved to chiplets. In the client segment, once ~600 mm² dies are no longer possible, Nvidia will have to move to chiplets. So, as the video below shows, it's not if but when. This will most probably happen after Blackwell, as the first high-NA EUV machines are to be delivered to TSMC, Intel and Samsung around 2025.
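To make the die-size argument a bit more concrete, here is a minimal back-of-the-envelope sketch assuming a simple Poisson yield model and an illustrative defect density; the numbers are assumptions for illustration, not figures from the linked article.

```python
# Rough, illustrative yield math only -- defect density and die sizes are
# assumed numbers, not figures from the linked SemiAnalysis article.
import math

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_MM2 = 0.1 / 100  # assume 0.1 defects/cm^2, expressed per mm^2

def dies_per_wafer(die_area_mm2):
    """Crude candidate count, ignoring edge losses and scribe lines."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return wafer_area / die_area_mm2

def poisson_yield(die_area_mm2):
    """Simple Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_mm2 * DEFECT_DENSITY_PER_MM2)

for area in (100, 300, 600):
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{area:>3} mm^2 die: yield ~{poisson_yield(area):.0%}, "
          f"~{good:.0f} good dies per wafer")
```

Under these assumed numbers, a ~600 mm² die loses nearly half its candidates to defects while a ~100 mm² chiplet loses under 10%, which is exactly the yield-versus-packaging trade-off being argued over in this thread.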
The question for you as a customer is whether you are prepared to pay $1,500 for a 5080 on another monolithic die. Currently, GPU prices are moving in a direction that will price more people out of the GPU market. Imagine the 5080 is ~50% faster than the 4080, which would make it ~20% faster than the 4090 in 4K. Nvidia can easily try to sell you this performance uplift as a "good deal" and say it's $1,500, so less expensive than the 4090, but still faster. Suddenly, you have a 5080 that is twice as fast as the 3080, but more than twice as expensive. Quite a sick situation in comparison to the CPU market, where doubling of performance over a few generations does not make CPUs twice as expensive.
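For what it's worth, here is a quick sketch of the arithmetic in that scenario. The relative-performance figures are rounded hypotheticals from the comment above (4080 = 1.00), the prices are launch MSRPs, and the 5080 line is pure speculation.

```python
# Hypothetical numbers only: relative 4K performance (4080 = 1.00) as rounded
# in the comment above, launch MSRPs in USD, 5080 figures speculative.
cards = {
    "RTX 3080":  (0.75, 699),
    "RTX 4080":  (1.00, 1199),
    "RTX 4090":  (1.25, 1599),
    "RTX 5080?": (1.50, 1500),  # assumed: ~50% over 4080 at $1,500
}

for name, (perf, price) in cards.items():
    print(f"{name:<10} perf {perf:.2f}  ${price:>5}  "
          f"perf per $1000: {perf / price * 1000:.2f}")

# ~50% over the 4080 works out to ~20% over the 4090...
print(f"5080? vs 4090: {1.50 / 1.25 - 1:+.0%}")
# ...and roughly twice the 3080's performance at more than twice its price.
print(f"5080? vs 3080: perf x{1.50 / 0.75:.1f}, price x{1500 / 699:.2f}")
```

Under these assumptions, performance per dollar barely moves across two generations, which is the contrast with the CPU market the comment is pointing at.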
But considering Nvidia's mindset, they are able to charge $1,500 for a 100 mm² chiplet as well.
The chiplets improve only the yields, but increase the cost and complexity, and reduce the overall performance. Correct. Hence, the GPU shipments are at an all-time low.
It's supposed to double the shader throughput, but I feel like it's not exploited at all. Isn't that what prevents the 7900 XTX from doing as well as a 4090?
www.techpowerup.com/review/amd-radeon-rx-7600/10.html
www.techpowerup.com/review/amd-radeon-rx-7600/9.html
www.techpowerup.com/review/amd-radeon-rx-7600/21.html
www.techpowerup.com/review/amd-radeon-rx-7600/24.html
Probably the best example:
www.techpowerup.com/review/amd-radeon-rx-7600/27.html (faster than 6700 XT, over 15% faster than 6650XT)
Also pretty good:
www.techpowerup.com/review/amd-radeon-rx-7600/29.html
In those it is clearly faster than the 6650/6600XT, otherwise it's often not: www.techpowerup.com/review/amd-radeon-rx-7600/11.html
7900 XTX (look at 4K, not 1080p, primarily compared to the 4080):
Prime example:
www.techpowerup.com/review/asrock-radeon-rx-7900-xtx-taichi/16.html
Also good:
www.techpowerup.com/review/asrock-radeon-rx-7900-xtx-taichi/23.html
www.techpowerup.com/review/asrock-radeon-rx-7900-xtx-taichi/24.html
:confused:
You get plenty of performance.
Also, the users are kindly asked to move up to 2160p, and this is what I like about nvidia. It does the opposite of what AMD does. And AMD is wrong in this case.
NVIDIA offers Overwatch 2 Invasion with the 4060, AMD goes with Starfield. The problem with these games is whether they appeal to you or not. Otherwise, the price difference between the cards is only 25 euros in Romania, and I deduce from the customer comments that there is no longer the enthusiasm seen with the 6000 series. Among the complaints are high noise and/or high temperatures (cheap cooling solutions, dubious quality).
The difference in power consumption must also be taken into account (115 W versus 155 W), a gap that for me equals the average consumption of the i5-13500 processor in the games run with the 3070 Ti.
All in all, AMD sells cheaper because price is the only weapon with which it can fight NVIDIA. Imagine the same price for the RTX 4090 and RX 7900 XTX. Who would still buy the RX?
With a frame limiter, the difference is within the margin of error.