Welcome to TechPowerUp Forums, Guest! Please check out our forum guidelines for info related to our community.
Things take time, and swapping everything over to a whole new RAM form factor, not just a tweak to the current design, is a massive change. I can definitely see them taking off in laptops, but it would be nice to have interchangeable modules between laptops and desktops instead of regular large DIMMs and smaller SODIMMs. People seem to love small PCs these days, and in those cramped spaces it would be another benefit.
CAMM would ironically take up more board space on mITX, and would likely have to be placed on the back of the board if there was any hope of still mounting a 24-pin ATX power connector on the front side of the PCB. So, yay, flaming-hot DIMMs and ICs with CAMM, right?
On the new IOD, there should be space for another Gen5 x4 PHY, and if USB4 is finally integrated, another Gen5 x4 PHY will be freed from existing ones.
I doubt it, as I suspect any spare I/O-die space will be given to media/GPU/NPU over PCIe, since 90% of consumers are using only a single graphics card. I wish there were more lanes to aid in, say, a quad-NVMe array or networking add-in cards, but that's a big wish.
CAMM makes sense for the APUs: the lower latency and higher speed capability mean more performance from the iGPUs. Maybe on ITX, if you can use both the front and back of the board, but I can imagine the wiring on the motherboard would be a nightmare, aka $$$$.
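The bandwidth argument above can be put in rough numbers. This is a quick sketch with illustrative transfer rates, not vendor specifications:

```python
# Rough peak-bandwidth arithmetic for the iGPU argument above.
# Transfer rates here are assumptions for illustration, not vendor specs.

def bandwidth_gbs(mt_per_s: int, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: transfers per second times bytes per transfer."""
    return mt_per_s * (bus_width_bits / 8) / 1000

# Typical dual-channel desktop DDR5 vs an assumed faster LPCAMM2 module.
ddr5_6000 = bandwidth_gbs(6000, 128)
lpcamm2_7500 = bandwidth_gbs(7500, 128)

print(f"DDR5-6000, 128-bit bus: {ddr5_6000:.0f} GB/s")                 # 96 GB/s
print(f"LPDDR5X-7500 (LPCAMM2), 128-bit bus: {lpcamm2_7500:.0f} GB/s") # 120 GB/s
```

A bandwidth-bound iGPU scales roughly with this figure, which is why faster memory matters more for APUs than for CPUs paired with discrete cards.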
I would still prefer user-replaceable parts over soldered-on memory, as then you aren't limited to what Intel/AMD/whatever SI decides you can have from day one.
Also, some of the latest AMD APUs, such as the HX 370/375, could very much use the extra bandwidth offered by CAMM in their highest-end offerings, let alone the desktop-focused Strix Halo.
I liked TSMC's 1.8 nm and 1.6 nm nomenclature. Based on their track record, it's safe to assume these will become a reality.
Intel, on the other hand, uses 'A' (officially for ångström, though it might as well stand for Abstract). They'll need to prove they can still deliver; for now, it's just marketing.
Other companies are attempting to enter the premium chip market, which may eventually drive prices down. Currently, however, costs remain prohibitively high. An upgrade isn't worthwhile yet; perhaps in three years. Anything above a Ryzen 7500F, an Intel B580 (or B770), 32 GB of DDR5, and a Gen3 SSD seems like a devastating financial decision.
I hope China, which is rising in every dimension, will deliver its own homegrown, competitive GPU. I'd gladly buy anything not tied to the U.S. (Yes, yes, there's that Moore Threads thing, but it's basically an iGPU. Hopefully, next-gen designs will deliver.) The same logic applies to CPUs: if it's Chinese, I'll buy it; if it's U.S., I'll avoid it.
Next-Gen Console Hardware: CPUs, GPUs, and Innovations
The next generation of consoles will likely establish 12-core CPUs as the new gold standard, especially since large cache sizes (X3D) are a perfect match for console architectures, optimizing performance for gaming.
On the GPU front, advancements will undoubtedly deliver significant leaps in power and efficiency. However, the performance gap between current and next-gen hardware, combined with potential price increases, could slow adoption rates even more than the transition from PS4 to PS5. Games will probably be even more expensive.
Beyond raw processing power, exciting innovations like GAAFET transistors (Gate-All-Around FETs) for improved energy efficiency and that sweet sweet transistor count, DirectStorage for near-instant game loading, and dedicated neural network optimizations for AI-driven features (e.g., upscaling, NPC behavior) will redefine console capabilities. These technologies, paired with advanced cooling solutions and streamlined software integration, could make next-gen systems truly groundbreaking—assuming developers and consumers embrace the cost.
But looking at how games are optimized for profit makes me sceptical. I won't be an early adopter for sure, though I may pick one up used after a few years.
I can't recall seeing a single instance of the AI 3xx chips not using soldered RAM so far, though. In mobile form factors I see zero issue with on-package or soldered RAM, as long as they're not charging Apple prices for an extra 8 GB.
Either way, CAMM remains DOA, and now that CUDIMM is already off to the races, there's even less of a slim chance for pointless changes to layout standards in the desktop space.
Dell created the first CAMM concept, and Lenovo and Dell have since both made laptops with it (probably all business-focused laptops), Dell with recent models as well. So it's not dead. It's just not being demanded by enough consumers to reach consumer-facing laptops. Most consumers either don't know to want it or are under the false impression that it's proprietary and inferior to SODIMM sticks.
But some of us want it. I thought Strix Halo sounded interesting enough to consider buying, right up until I learned that no Strix Halo computer will ever have LPCAMM.
My guess is there'll be a name reset. Maybe "Ryzen AI 9 X 580". "Ryzen 3" seems to be a victim of tier-inflation. The Ryzen 9 9950X ought to have been the Ryzen 7 9800X, because starting from "9" there aren't enough tiers to reach "3".
The tiers are a mess. I think the R5 7600 should have been the R3 7300; 6 cores should be normalised at the entry level by now. But hey, it probably sells more as an R5.
Are they planning on increasing 3D V-Cache capacity (has it been tested internally to see if, say, 128 MB or more of L3 would boost performance metrics in any game/workload)?
With the rumour of 12 cores per CCD:
Do we expect a regression in boost frequency for the increased core count?
Will it have more cache for 12 cores (currently capped at 96 MB, but also still 96 MB on 6-core X3Ds)?
I'm dreaming here, but I think a non-PBO boost of ~5.5 GHz with 144 MB of cache would be excellent (amp that to close to 6 GHz and 16 MB per core for 192 MB, and that would be a monster).
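The cache figures in the wish above can be sanity-checked with simple arithmetic; all the capacities here are the post's speculation, not announced specs:

```python
# Sanity-checking the speculative cache figures above.
cores_per_ccd = 12   # rumoured Zen 6 CCD core count
base_l3_mb = 48      # rumoured on-die L3 for a 12-core CCD
stacked_mb = 96      # one 96 MB V-Cache layer, as on current X3D parts

dream_total = base_l3_mb + stacked_mb
print(dream_total)       # 144, the "excellent" figure above

monster_total = 16 * cores_per_ccd
print(monster_total)     # 192, the 16 MB-per-core "monster"
```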
Did you look at the article and diagrams? For Zen 7, it's 7 MB of cache per core on the V-Cache chiplets. On a 33-core EPYC die, that's 231 MB per V-Cache chiplet.
We don't know whether the same V-Cache chiplets would be used for desktop CPUs.
Such a V-Cache chiplet would certainly improve gaming and other workloads, as more hits stay at the L3 level rather than bleeding through to memory.
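The 231 MB figure follows directly from the per-core number:

```python
# 7 MB of V-Cache per core, times a 33-core EPYC die.
mb_per_core = 7
epyc_cores_per_die = 33
print(mb_per_core * epyc_cores_per_die)  # 231 MB per V-Cache chiplet
```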
We expect boost towards 6 GHz, as the transition to a new process node will allow it.
For Zen 6, 48 MB of L3 cache per CCD, so the same amount per core.
We don't know whether the 3D cache chiplet will stay at 64 MB or evolve to 96 MB.
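The "same amount per core" point checks out arithmetically:

```python
# Rumoured Zen 6: 48 MB of L3 across a 12-core CCD.
# Current Zen 4/5: 32 MB of L3 across an 8-core CCD.
zen6_per_core = 48 / 12
current_per_core = 32 / 8
print(zen6_per_core, current_per_core)  # 4.0 MB per core either way
```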
I suspect we will see a regression in standard clocks and full-load boost, but the lightly loaded/single-core speeds will probably stay near the same.
There is no evidence of this, of course. If anything, it will be quite the opposite. Moving from N7 to N5/N4, they managed to increase single-core clocks by ~800 MHz and 32T clocks by ~900 MHz, which was a huge boost.
With Zen 6 moving to N2 (a two-node shrink) and N3 on some models (one node), single-core could certainly hit 6 GHz, and 24T on one CCD of the vanilla models could gain at least another ~300 MHz (the line below shows the current 24T on the 9950X3D).
One interesting area will be thermal density, as the two 12-core CCDs will be near each other and very close to the IOD, similar to the configuration on Strix Halo. Better coolers will be able to hold higher clocks for longer in the so-called clock-frequency occupancy range. Differences could range ~200 MHz between the most capable AIOs and average air coolers.
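A back-of-the-envelope way to frame the thermal-density point, with openly made-up die-area and power-split numbers (neither is a published figure):

```python
# Illustrative thermal-density estimate for one densely packed CCD.
# Both inputs below are assumptions made for this sketch only.

def w_per_mm2(watts: float, area_mm2: float) -> float:
    """Average power density across a die."""
    return watts / area_mm2

ccd_area_mm2 = 70.0  # assumed area of a 12-core CCD on a leading-edge node
ccd_power_w = 95.0   # assumed per-CCD share of package power under all-core load

density = w_per_mm2(ccd_power_w, ccd_area_mm2)
print(f"{density:.2f} W/mm^2")
```

Small shifts in either assumption move the result a lot, which is the point: packing 12 cores per CCD concentrates heat, and a ~200 MHz spread between coolers becomes plausible.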
I am just looking at the differences between the 7950X and 9950X and the lack of clock improvements even though there is a node change. In actual fact, the 9950X has a 200 MHz DECREASE in base clock versus the 7950X, even though it's on the better node.
Adding more cores per CCD, I suspect, will just cause it to lose boost headroom when heavily loaded, due to temperature/power draw at the CCD level.
Zen 5 has some quite significant advances over Zen 4; it's just that none of them are desktop-centric or really matter for most of the things we do on our PCs daily. The native 512-bit FP datapaths alone make this a bona fide wide core, without the drawbacks you saw on Skylake-X previously. They still managed to do that and retain the same, or even a slightly better, level of performance versus the previous-generation architecture.