I can only surmise that it's going to be amazing, considering what we know about the performance/power of the current 4nm. A 90% production yield rate is off the charts. I would like to say that we might see products at a lower cost as a result, but I am not holding my breath on that anymore.
There is a reason I didn't upgrade to the 9800X3D, impressive as that chip is (still tempted, but the 5900X is sufficient for now). The 10800X3D, however, will no doubt be on 2nm, with a single 12-core CCD, a 6GHz monster at, like, what, 50W? Not to mention, I am not too sure if they will keep using the same socket. Probable, but I'm not risking that; motherboards have become stupidly expensive for no reason, and RAM, mmmm, I want to see what the new chip can handle for a 1:1 ratio between memory and CPU.
Not enthusiastic on the GPU side, as nGreedia will just make an RTX 6090 for $10,000 and call it a day, something with what, a 10% performance increase, and tell you it's the best, and money will just be thrown at it. As for AMD, they will look at what big daddy nGreedia does price/performance-wise and follow suit instead of just knocking them on their asses.
This post is so chock-full of current internet tech bullshit that I could not just let it go. Please take mainstream sites and especially headlines - even more so techtubers and their clickbait titles - with a grain of salt.
- Performance/power improvements have been slowing down. There are a number of technical reasons why that is the case. 4nm is a variant of 5nm, and its properties over 7/6nm are good but not on the scale of the node shrinks of old. The same seems to apply to 3nm: there are good improvements there, but they have slowed down a lot. 2nm is the next generation after that, with no signs of this getting better.
- The 90% yield rate claim, according to some other news coverage, is for SRAM. That is a very regular set of transistors, relatively dense but also easy to manufacture. Also, I have not seen anything about die sizes, and without that the 90% figure is meaningless (see the yield sketch after this list). And "off the charts"? For a reasonably sized die - think a mobile SoC, an AMD CCD or an Intel tile - on a mass-production-ready node, this is more of a prerequisite than "off the charts".
- TSMC has stated that 2nm will be more expensive - compared to 3nm, which is itself already more expensive than 5/4nm and more expensive than processes at this level of maturity have historically been. No, there will not be lower-cost products. Maybe later, when 2nm becomes mainstream, but that is years away.
- The 9800X3D is on 4nm. AMD has not even used 3nm for mainstream products so far, and 3nm has been in mass production since late 2022. 2nm is a generation newer than 3nm; there is a while to go until mass production, and the delay for something like CPUs or GPUs quite likely comes on top of that. The 10800X3D is said to be on 3nm, but afaik AMD has not officially confirmed that yet. It is rumored to be shrunk to 2nm later, but see above - that will be quite a while later, if it happens at all.
- 6GHz seems to be moving further away rather than coming closer. It's about power (rough numbers in the sketch after this list). Intel got burnt, and AMD is still keeping official boost clocks in the latter half of the 5GHz range. There is a reason for that. And remember that Intel was already pushing 5.3GHz on 14nm and only barely scraped 6GHz on its 10nm-class node; the 7nm-, 5nm- and 3nm-class processes since then have not pushed frequencies any higher. Even getting to 5+GHz needs the performance-oriented variant of a manufacturing process (which is not power efficient at all at the top end) and specific tweaking of the (micro)architecture.
- The point about 50W might be a reasonable one, in the sense that there will be efficiency improvements, and 50W is in the range where those should apply quite nicely for a CPU with a desktop number of cores.
- More expensive motherboards aren't exactly for no reason. There is PCI Express 5.0, there is DDR5, and both are faster and need better, more reliable signal integrity across the motherboard. This in turn makes the board more complex and more expensive in various ways. Also, CPUs have become power hogs. Intel was like that for a while, but AMD followed suit with their own 220+W CPUs. Any motherboard that you as a manufacturer build for a socket should be able to run any CPU that uses that socket, so essentially that cheap A620 board needs to run the 220W 9950X properly, or the manufacturer gets ridiculed online. Better, bigger VRM = more cost. Manufacturers did abuse the new tech as an excuse for price hikes, but at the end of the day it is clear that all of this raised the baseline price of a motherboard significantly.
- GPUs are not exempt from the same problems, only much worse due to their larger size. In quite a few recent generations AMD did not even field a response to Nvidia's flagship, competing instead from one step lower. For example, the 5090 stands alone, while the 9070 XT (AMD's biggest chip) competes against the 5080 (Nvidia's second biggest). Same for the 4090, 7900XTX and 4080. Yes, Nvidia's flagships are expensive, but they are also huge and are pushing limits - the reticle size, for one (see the dies-per-wafer sketch after this list). Power consumption is another, and this ties into the problem I mentioned before with performance/power improvements having slowed down.
- The impression that AMD could just sail in and give us consumers GPUs that are awesome and cheap, etc., is utopian. It has no connection to reality. AMD has been struggling with the exact same problems.
- Chiplets are probably the future for GPUs, but the problem is that nobody has figured out how to use them efficiently for consumer GPUs yet. AMD's RDNA3, with its separated memory controllers and attached cache, was the best attempt so far, and it unfortunately showcased the efficiency and slight performance hits that are to be expected. Chiplets are not a better solution than monolithic and never have been. Chiplets are good for one thing: splitting up a die so it can be manufactured more efficiently and cheaply. That allows reducing cost and/or creating an ASIC that would not be possible otherwise - the canonical example is a chip so large that it exceeds the reticle limit, or one with yields that are not usable (see the monolithic-vs-chiplet sketch after this list). The downside is that moving data between dies costs power (read: a hit to efficiency) and may come with (largely mitigatable) hits to latency.
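On the yield point above, here is a minimal back-of-the-envelope sketch, assuming a simple Poisson defect model and a purely illustrative defect density - the die areas and D0 below are made-up numbers, not anything TSMC has published. It only shows how strongly the same defect density turns into different yields at different die sizes.

```python
# Toy Poisson yield model: yield = exp(-defect_density * die_area).
# Defect density and die areas are illustrative, not real TSMC data.
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies expected to come out defect-free."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.1  # hypothetical defects per cm^2
for area_mm2 in (20, 70, 300, 600):  # small test chip ... CCD-sized ... big GPU-class die
    print(f"{area_mm2:4d} mm^2 -> {poisson_yield(area_mm2, D0):5.1%} defect-free")
```

With the same hypothetical defect density, a tiny test die comes out around 98% while a 600 mm^2 die comes out around 55%, which is why a yield figure without a die size says very little.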
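On clocks and power: dynamic power scales roughly with C * V^2 * f, and the last few hundred MHz need disproportionately more voltage. The V/f pairs below are made-up illustrative numbers, not measurements from any real CPU.

```python
# Rough illustration of why chasing 6GHz hurts: dynamic power ~ C * V^2 * f,
# and higher clocks need higher voltage. The V/f curve here is hypothetical.
vf_curve = [  # (GHz, volts) - illustrative only
    (4.5, 1.10),
    (5.0, 1.20),
    (5.5, 1.35),
    (6.0, 1.50),
]

base_f, base_v = vf_curve[0]
for f, v in vf_curve:
    rel_power = (v / base_v) ** 2 * (f / base_f)  # relative to the 4.5 GHz point
    print(f"{f:.1f} GHz @ {v:.2f} V -> ~{rel_power:.2f}x the dynamic power")
```

In this toy curve, going from 4.5 GHz to 6.0 GHz is a 33% clock increase for roughly 2.5x the dynamic power, which is a big part of why official boost clocks are parked in the high-5GHz range.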
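On flagship GPU die sizes: the bigger the die, the fewer candidate dies fit on a wafer in the first place, independent of yield. Here is a rough sketch using a common dies-per-wafer approximation; the die areas are illustrative flagship-class numbers, not exact figures for any specific chip.

```python
# Rough dies-per-wafer comparison on a 300 mm wafer, using the common
# approximation DPW = pi*(d/2)^2 / S - pi*d / sqrt(2*S). Die areas are
# illustrative; the single-exposure reticle field tops out around 850 mm^2.
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    d = wafer_diameter_mm
    gross = math.pi * (d / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

for area in (300, 600, 750):  # mid-size die ... near-reticle-limit flagship
    print(f"{area:3d} mm^2 die -> ~{dies_per_wafer(area)} candidate dies per wafer")
```

A near-reticle-limit die gives roughly a third of the candidate dies of a mid-size chip, before yield losses even enter the picture - and the wafer costs the same either way.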
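And on why chiplets help manufacturing even though they are not "better" silicon: reusing the same toy Poisson model as in the first sketch, splitting one large die into four smaller ones raises the fraction of usable silicon, because good chiplets can be picked individually off the wafer. Again, purely illustrative numbers that ignore packaging cost and die-to-die interface area.

```python
# Monolithic vs chiplet, with the same toy Poisson yield model as above.
# Illustrative numbers only; ignores packaging and die-to-die interface area.
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.1                       # hypothetical defects per cm^2
mono_area = 600.0              # one big GPU-class die
chiplet_area = mono_area / 4   # the same logic split into four dies

print(f"monolithic {mono_area:.0f} mm^2 die: {poisson_yield(mono_area, D0):.1%} usable")
print(f"each {chiplet_area:.0f} mm^2 chiplet:   {poisson_yield(chiplet_area, D0):.1%} usable")
# Good chiplets are binned individually, so usable silicon tracks the
# per-chiplet yield - the price is the power/latency cost of moving data
# between dies mentioned above.
```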
/rant