
DDR6 Memory Arrives in 2027 with 8,800-17,600 MT/s Speeds

Well, a man can dream :p
Is there any actual talk about the DDR6 form factor? With the increase in bus width and frequencies, regular DIMMs on a 2-channel platform would still make sense (assuming 2DPC is still possible). The to-CAMM2-or-not question only matters if the regular DIMM form factor is out of the question.
Not that I have seen; it seems to be largely pushed by Nvidia and Micron for now, although SK Hynix is potentially another memory producer that will offer it.
Looking at Amphenol, which makes the connector/shim or whatever it is, they only mention server and data centre applications.

I don't think regular DIMMs, at least not the ones we have today, have enough pins for a wider bus.

I bet most consumer motherboards will come with only one CAMM slot, forcing you to swap memory every time you upgrade.
Of course this will segment RAM pricing by capacity, making higher capacities uber expensive compared to what will be considered standard.
Of course the amount will still be paltry compared to any server configuration; servers can easily fit terabytes of RAM now.

The workstation segment will be the most affected, after the "Platinum"/"Gold"/"Silver" fiasco of Intel CPUs. If you already have to pay extreme prices for 128 or 192 GB of RAM today, imagine with the CAMM form factor.
Of course that's because workstation RAM is registered and ECC, as if we were back in 2002 and had just left Rambus behind.

At a point in time when 256 GB could be the standard for any workstation, even consumer PCs, we're stuck where we were about 15 years ago: around 32-64 GB for the *affordable segment* (16-32 GB for the consumer market), and anything beyond that sounds *crazy*, especially to uninformed bosses and managers.

Console RAM amounts don't help either; if only the current standard were between 64 GB and 128 GB...

Anyway, now we need those amounts on GPUs, where the situation is even worse. 8 GB is "enough", and GDDR of course.
Meanwhile, in the enterprise sphere, HBM and 96 GB are more than usual.

Ah, quad channel? Octa channel? Reserved for enterprise as well. The consumer market doesn't even need the option.

A new format looks to me like an opportunity for the marketing department to squeeze the consumer market even more.
I guess you didn't bother reading my comment about dual and single channel CAMM2s?
Going single channel would be like using two DIMMs today.

 
The 2019 Mac Pro places RAM on the back side of the motherboard. Just sayin'.
 
Ah, quad channel? Octa channel? Reserved for enterprise as well. The consumer market doesn't even need the option.

A new format looks to me like an opportunity for the marketing department to squeeze the consumer market even more:
1 slot. Single channel, 16-32 GB, for the desktop standard.
1 slot. Dual channel, 16-32 GB and 48-64 GB, for enthusiasts.
1 slot. Dual channel, 96-128 GB, for ubergamers. Of course cooling those dual-sided RAM modules will be fun.

Maybe the uber-expensive Pro-Art Hawk EXTREME 1000-1200 USD motherboards will come with 2 slots, for the freaking amount of 128 GB to 256 GB (price range between 500 and 1000 USD), and we'll be in 2030.
This makes no sense.

DDR5's most common capacity is already a 32 GB kit (2x16 GB), up from 16 GB (2x8 GB) on DDR4.
DDR6 will start from that, and I would not be surprised if the most common DDR6 capacity ends up being 64 GB.
I wonder if 16 GB kits will even be made for it, considering that already with DDR5 the 64 GB kits outnumber the 16 GB kits.
 
Why would it be a problem for servers? Desktops are already limited to two channels.
High-end workstations have 4 or 8 channels.
Servers are 4, 8 or 12 channels, and Diamond Rapids will be up to 16 channels. That will take up a lot of space as CAMM2 modules.
It's another pointless standard no one is really asking for, and you know the old saying: if it ain't broke, don't fix it. It's just bad engineering.

If there's anything to upgrade to: 1DPC motherboards now support 128 GB total. Hard to imagine 128 GB being too little for most people.
Most people never upgrade memory. By the time they need to upgrade they will likely buy a new platform.
I was referring to the normal practice of buying e.g. 2 sticks of memory and then adding another 2 as needed. (In this scenario, buying 2x32 GB now and another 2x32 GB in 2-3 years, but that will result in lower performance, so in practice you either need to max out from the start or "throw away" the old memory when upgrading.)

What "most people" need is pretty irrelevant, most people hardly use their computers like a power user do. But a typical power user (not the same as enthusiast) would probably want their build to last 5+ years, and by then 128 GB is going to be a limitation. This, along with other limitations, is why I've argued many times before that mainstream platforms are increasingly useless for anything but gamers, overclocking enthusiasts and basic office users.

And the current idea of putting a hot SSD right below a chunky, hot GPU, or between the CPU and GPU, is better?
Backside access with much less heat would be so much easier than the current setup where I have to remove the GPU and then "dive in".
Most cases have no real airflow on the back side, which will definitely shorten the SSD lifespan. Most PC hardware should have some airflow, not necessarily a whole lot, but a little bit makes a huge difference.

I've long been an opponent of the stupid M.2 standard, which shouldn't have been brought to the desktop (PCIe is better).
And the stupidity of the giant metal blobs on top of SSDs, chipsets and VRMs, which do extremely little to cool anything. Just look at server motherboards, which have tiny heat sinks with proper fins that do a better job.

The workstation segment will be the most affected, after the "Platinum"/"Gold"/"Silver" fiasco of Intel CPUs. If you already have to pay extreme prices for 128 or 192 GB of RAM today, imagine with the CAMM form factor.
What fiasco?
Xeon Platinum/Gold/Silver are part of their Xeon SP-line, which are server CPUs, not workstation CPUs. Xeon W 3400/3500 series are their workstation lineup, which hasn't yet been updated with Granite Rapids.

At a point in time when 256 GB could be the standard for any workstation, even consumer PCs, we're stuck where we were about 15 years ago: around 32-64 GB for the *affordable segment* (16-32 GB for the consumer market), and anything beyond that sounds *crazy*, especially to uninformed bosses and managers.

Console RAM amounts don't help either (18?); if only the current standard were between 64 GB and 128 GB...

Anyway, now we *need* those amounts on GPUs, where the situation is even worse. 8 GB is "enough", and GDDR of course.
Meanwhile, in the enterprise sphere, HBM and 96 GB are more than usual.

Ah, quad channel? Octa channel? Reserved for enterprise as well. The consumer market doesn't even need the option.

A new format looks to me like an opportunity for the marketing department to squeeze the consumer market even more:
1 slot. Single channel, 16-32 GB, for the desktop standard.
1 slot. Dual channel, 16-32 GB and 48-64 GB, for enthusiasts.
1 slot. Dual channel, 96-128 GB, for ubergamers. Of course cooling those dual-sided RAM modules will be fun.

Maybe the uber-expensive Pro-Art Hawk EXTREME 1000-1200 USD motherboards will come with 2 slots, for the freaking amount of 128 GB to 256 GB (price range between 500 and 1000 USD), and we'll be in 2030.
I think the solution is, as I've suggested before, to lower the entry point for high-end workstation platforms (Xeon W and Threadripper), move all the >100W TDP mainstream CPUs over to that platform, and make the mainstream strictly for non-demanding users. Right now we are already paying "HEDT prices" for motherboards and CPUs, but getting terrible deals.

I don't see games needing a lot of memory, but it's certainly needed for anyone even doing heavy web browsing at this point.
More memory also helps to keep the swapping to a minimum, and helps with IO cache, so I want as much memory as is reasonable on a system.
 
The 2019 Mac Pro places RAM on the back side of the motherboard. Just sayin'.

In a custom chassis, with DDR4 running at 2933 MHz or less, and judging by most older (Intel-based) Mac products, it was probably close to cooking itself to death.

Not sure what you're going for here. If the aim is to follow Apple, why not sign up for soldered RAM and factory-only upgrades that cost triple what a normal DIY/serviceable part does?
 
High-end workstations have 4 or 8 channels.
Servers are 4, 8 or 12 channels, and Diamond Rapids will be up to 16 channels. That will take up a lot of space as CAMM2 modules.
It's another pointless standard no one is really asking for, and you know the old saying: if it ain't broke, don't fix it. It's just bad engineering.
DIMM will not disappear. CAMM2 will coexist with it.
What "most people" need is pretty irrelevant, most people hardly use their computers like a power user do. But a typical power user (not the same as enthusiast) would probably want their build to last 5+ years, and by then 128 GB is going to be a limitation. This, along with other limitations, is why I've argued many times before that mainstream platforms are increasingly useless for anything but gamers, overclocking enthusiasts and basic office users.
I consider myself a power user and I've upgraded memory exactly twice: once in the DDR1 era from 2x256 MB to 4x512 MB, and then in the DDR3 era from 2x4 GB to 4x4 GB. With DDR4 I was smarter and bought 2x16 GB right away despite the most common DDR4 kit size being 2x8 GB. And by the looks of it I will move to DDR5 before 32 GB becomes an issue. Also, not using Chrome helps...

128 GB with two slots is not going to be a limitation for 99% of people. The 1% who need more are, or will be, on Threadripper anyway.
And for those who want to stay on a mainstream platform, there are plenty of four-slot DIMM boards for 256 GB now, and 512 GB with DDR6.
Most cases have no real airflow on the back side, which will definitely shorten the SSD lifespan. Most PC hardware should have some airflow, not necessarily a whole lot, but a little bit makes a huge difference.
Since when does heat shorten SSD lifespan? It may limit max sustained performance in longer workloads, but it will not shorten SSD lifespan to any meaningful degree. Have these hot Gen 4 and Gen 5 drives started dying yet? I haven't seen it. A proper passive heatsink is enough to cool an SSD.
I've long been an opponent of the stupid M.2 standard, which shouldn't have been brought to the desktop (PCIe is better).
And the stupidity of the giant metal blobs on top of SSDs, chipsets and VRMs, which do extremely little to cool anything. Just look at server motherboards, which have tiny heat sinks with proper fins that do a better job.
M.2 is PCIe. Okay, I get it, but the alternative you suggest is no better. So we should have PCIe cards as SSDs with giant heatsinks to cool a few chips instead?
That's worse. It wastes material and space.

Where's the bandwidth and space for those cards going to come from? Let's put 4-7 PCIe slots on the board in E-ATX form factor?
There are problems with every standard. Of course server motherboards have tiny heatsinks; they also have high, noisy airflow.
Hence why many server GPUs and CPUs don't even come with active cooling.
 
And for those who want to stay on a mainstream platform, there are plenty of four-slot DIMM boards for 256 GB now, and 512 GB with DDR6.
For Arrow Lake and AM5, 128 GB is the practical limit, as most don't want to sacrifice significant memory speed. That difference is only going to increase with Nova Lake and Zen 6, BTW.

Since when does heat shorten SSD lifespan?
It's basic knowledge about electronics.
If they get hot enough to throttle, they should have air flow.

M.2 is PCIe. Okay, I get it, but the alternative you suggest is no better. So we should have PCIe cards as SSDs with giant heatsinks to cool a few chips instead?
That's worse. It wastes material and space.
All PCIe lanes should have been PCIe slots on desktops, which would have saved costs, and every lane could be used more flexibly: SSDs, controller cards, network cards, etc. A simple PCIe X4 card with good fins would be much more easily cooled; for most, the air pulled in by the graphics card would be enough, and in cases where it isn't, a simple fan would do.

Where's the bandwidth and space for those cards going to come from? Let's put 4-7 PCIe slots on the board in E-ATX form factor?
There are problems with every standard. Of course server motherboards have tiny heatsinks; they also have high, noisy airflow.
There is space for many PCIe slots even on a standard ATX motherboard. With X4, X8 and X16 slots you can expose more lanes than the CPU can offer.

Hence why many server GPUs and CPUs don't even come with active cooling.
Server parts have large fins and are designed for cases with high airflow; this also includes network cards, controller cards, etc. If you put these parts in a case with low airflow, they will overheat.
 
The only reason I stick with amd64 is its modularity. When the DDR5 socket type is gone, there will be no real reason not to buy RISC-V or similar platforms.
It may be the final nail, so other technologies will be adopted faster. For PC gaming there will be better platforms anyway by 2030 or later.
 
All PCIe lanes should have been PCIe slots on desktops, which would have saved costs, and every lane could be used more flexibly: SSDs, controller cards, network cards, etc. A simple PCIe X4 card with good fins would be much more easily cooled; for most, the air pulled in by the graphics card would be enough, and in cases where it isn't, a simple fan would do.
I felt alone thinking M.2 slots for desktops were a scam... cooling, versatility, etc.


Would anyone be surprised to learn that most desktop PCs for office use have 1x16 GB of RAM instead of, e.g., 2x8 GB? The last two batches (cycles) I saw for big and medium companies (here >250 employees counts as big, <250 as medium) had 80% of their PCs configured that way. That's where most *new* Windows 11 installations come from as well.
 
All PCIe lanes should have been PCIe slots on desktops, which would have saved costs, and every lane could be used more flexibly: SSDs, controller cards, network cards, etc. A simple PCIe X4 card with good fins would be much more easily cooled; for most, the air pulled in by the graphics card would be enough, and in cases where it isn't, a simple fan would do.

There is space for many PCIe slots even on a standard ATX motherboard. With X4, X8 and X16 slots you can expose more lanes than the CPU can offer.
Beautiful in theory, unfeasible with the current market practice of 3-to-4-slot graphics card coolers.
 
The longer I wait to buy my new desktop, the more powerful and less expensive the purchase gets.
The more you wait the less you pay. :roll:
Hopefully DDR6 fixes one of the main problems with DDR5, which has been stability with all slots occupied.
Hopefully most motherboards are made with just two slots, as the vast majority doesn't need four; with DDR5 one can have 128 GB of capacity, more than plenty even in 2035.
Might even be cheaper to manufacture, so either the MSRP will be slightly lower, or for the same budget some other features get bumped up.

Higher-end/premium motherboards, sure, four slots for power users, no problem with that; if you can afford that much RAM you can afford a more expensive mobo than what the plebs get.
But a typical power user (not the same as an enthusiast) would probably want their build to last 5+ years, and by then 128 GB is going to be a limitation.
It's interesting how, with all the ever-increasing system requirements for every piece of software, only the RAM capacity becomes a bottleneck and somehow everything else manages to keep up.

For people who make money with their PC, time is essential. Sure, the CPU can complete the task if it has enough RAM, but if it's older it's also slower, so productivity is not great. So basically the whole power-user argument is shaky: if you're willing to wait around for the CPU to finish, sure, go crazy on the RAM and make it "future proof", but that doesn't sound like a "productive power user" scenario.
 
Would anyone be surprised to learn that most desktop PCs for office use have 1x16 GB of RAM instead of, e.g., 2x8 GB? The last two batches (cycles) I saw for big and medium companies (here >250 employees counts as big, <250 as medium) had 80% of their PCs configured that way. That's where most *new* Windows 11 installations come from as well.
That's pretty normal, also for "premium" laptops (e.g. ThinkPads).
A lot of the "office PCs" from vendors like Dell, HP, Lenovo, etc. are shockingly bad deals. While they might look okay on paper, the cooling is underpowered, the storage is crap, the PSUs are bad, etc. Their "proper" workstations are a bit better, but all of them cost about twice what they should. And that's before you start adding RAM and storage; that's where they get you.

And I want to stress that most of these PCs can't even serve as "light workstations". While they might be fine for light office work, for heavier work like programming, graphics, video, etc. they are simply way slower than you might expect. I've seen, for example, those Dell Optiplexes with 65W CPUs deployed (Coffee Lake era) with no case fans (only the stock CPU cooler), so if you put more than 5 minutes of high load on the CPU it will either throttle or crash. I've seen them crash many times with larger build jobs etc. And it's really sad, because just a few tens of dollars' worth of better parts here and there would have made these machines much more usable.

It's interesting how, with all the ever-increasing system requirements for every piece of software, only the RAM capacity becomes a bottleneck and somehow everything else manages to keep up.

For people who make money with their PC, time is essential. Sure, the CPU can complete the task if it has enough RAM, but if it's older it's also slower, so productivity is not great. So basically the whole power-user argument is shaky: if you're willing to wait around for the CPU to finish, sure, go crazy on the RAM and make it "future proof", but that doesn't sound like a "productive power user" scenario.
I don't think most are aware of how RAM impacts performance and usability; the two main factors are IO cache and swapping.
IO cache is a portion of the RAM used to cache the data/applications you're working on. If you are doing a lot of multitasking or switching a lot between files, more RAM will yield noticeable performance differences. (It's worth noting that Linux users get the benefit of using all free RAM as IO cache for a very snappy experience. If you just have enough RAM, it will behave like a giant RAM drive.)

Swapping ("pagefile") happens gradually as you fill up your RAM, and the more you use, the heavier it will swap, leading to a gradual slowdown the more you multitask. This also means that as applications consume more and more RAM to do similar work, having more RAM lets you retain the same performance level even when the applications (especially browsers) consume more. (In Linux I like to tune my swapping so it only starts when the RAM is almost full, so I get top performance for as long as possible.)
 
I know, I've had one. You practically disassemble the whole computer to change an SSD.
Just because the idea isn't new doesn't change the fact that it's a terrible idea.
It can be easy with the right case design. The ASRock DeskMini X300 motherboard tray slides out for easy access.

If DDR6 is faster than DDR5 and doesn't introduce new significant issues then what's wrong with releasing it when the average Joe is on DDR4?

It's also good news because iGPUs get more room for oomph.
All I know is DDR5 UDIMM ECC is still quite expensive compared to DDR4 UDIMM ECC, but... DDR5 at 6000 MT/s makes my VMs' suspend/restore time go ZOOM! Sadly, that V-Color kit I had was incompatible with my AM5 system at any speed. Regressing back to 5600 MT/s now feels slow on my AM5, and on my TR system DDR4-3200 suspend/restore takes forever even with 8-channel memory.

I'm patiently waiting for DDR6 and AM6 and hopefully they will have ECC support out of the gate this time.
 
Mmm, sorry, but no. Shops are already offering DDR5-9600 in stock. If we're doubling that, we should get 19200.

The maximum JEDEC-approved DDR5 standard is DDR5-8800 (PC5-70400); an eventual DDR6-17600 would be exactly double that.
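As a side note on the naming: the PC5-xxxxx figure is just the peak module bandwidth in MB/s, i.e. the transfer rate times 8 bytes per 64-bit transfer. A quick Python sketch (the PC6 label assumes DDR6 keeps a 64-bit data path per module, which isn't confirmed):

Code:
# PCn rating in MB/s = data rate (MT/s) x 8 bytes per 64-bit transfer
def pc_rating(mt_per_s: int) -> int:
    return mt_per_s * 8

print(pc_rating(8800))   # 70400  -> DDR5-8800 is sold as PC5-70400
print(pc_rating(17600))  # 140800 -> a doubled DDR6-17600 would work out to PC6-140800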
 
I think that if CAMM2 allows universal compatibility between laptops and desktops, it will be a very welcome change. Dual-channel advancement to 192 bits is also a necessary step towards definitively replacing entry-level GPUs with APUs.
 
Unless motherboards only support single-channel CAMM2 modules, the current design only allows for a single module.
I guess CPU memory controllers could change and support two modules, maybe one on each side of the motherboard?
The JEDEC standards definitely get in the way of using multiple modules. The other simple option would be putting one on each side of the CPU socket like workstation boards do. Unfortunately, as I understand the standards, this isn't on the table at all (at least not without adding memory channels).
As I said, single-channel CAMM2 modules can be stacked.
I get the feeling this is only going to be viable for JEDEC-compliant memory.
It really does seem like the stacked single-channel CAMM2s share the same motherboard area though, just with different "spacers", which is going to be interesting to see implemented.
Yeah, it should just be the connector itself that splits the two channels at different heights. The mounting method looks the same as a regular CAMM2 module with one difference: the center hole used isn't as far down on the module. Only the D module supports single channel, and given that it's the large size, it is likely enterprise focused.

Seeing that variants slide reminds me: has anyone even talked about dual-die packaging for DDR5? I don't recall seeing it yet, and it would be pretty important for capacity's sake. I haven't seen anything about DDR6 memory IC capacity, so perhaps that's just solved for the next generation.
It might have to happen for the 256 GB modules, but that's not likely to be something most consumers would invest in.
128 GB is the maximum capacity for a CAMM2 module currently, and it requires 16 packages on each side. The smaller ones max out at 64 GB with 12 packages on the top and 4 on the bottom.

I think that if CAMM2 allows universal compatibility between laptops and desktops, it will be a very welcome change. Dual-channel advancement to 192 bits is also a necessary step towards definitively replacing entry-level GPUs with APUs.
The CAMM2 specification is universally compatible across everything using it. LPCAMM2 and CAMM2 are not cross-compatible though, which means a platform can support either DDR or LPDDR.
 
For Arrow Lake and AM5, 128 GB is the practical limit, as most don't want to sacrifice significant memory speed. That difference is only going to increase with Nova Lake and Zen 6, BTW.
There's always some excuse. 128 GB will be enough for 99% of people for the foreseeable future, and since this capacity can be achieved with only two sticks, the speed is not affected. Those who really go for 256 GB likely aren't trying to break any speed records anyway; at that point the capacity itself is what matters.
It's basic knowledge about electronics.
When it comes to SSDs, it's not. The last time I heard about an SSD dying from heat was when a manufacturer riddled one with too many RGB LEDs.
Most SSD deaths are due to excessive writes or firmware issues, not heat. Not that 100°C+ is healthy, but current high temps are not dangerous, just annoying.
All PCIe lanes should have been PCIe slots on desktops, which would have saved costs, and every lane could be used more flexibly: SSDs, controller cards, network cards, etc. A simple PCIe X4 card with good fins would be much more easily cooled; for most, the air pulled in by the graphics card would be enough, and in cases where it isn't, a simple fan would do.
There's not enough space even on an ATX board for what you're proposing. In order to ensure proper spacing, only a few X4 slots could be placed on the board.
It would not have saved costs; it would have increased them. Very few people actually use capture or network cards to justify the alternate use of those slots.
Again, you're speaking from a server perspective, but a desktop should not be a server with everything having its own PCIe slot.
There is space for many PCIe slots even on a standard ATX motherboard. With X4, X8 and X16 slots you can expose more lanes than the CPU can offer.
GPUs already block many of the potential slots. The X4 placed between the CPU and GPU gets cooked regardless, and the X8 or X16 below the GPU is cooked by the GPU too. On the lane count I agree that mainstream platforms should increase it. Nothing crazy that will drive up the price, but maybe to around 40 lanes from the current ~24.
Server parts have large fins and are designed for cases with high airflow; this also includes network cards, controller cards, etc. If you put these parts in a case with low airflow, they will overheat.
Desktop is desktop and server is server, as it should be. Instead of making a Frankenstein that doesn't do anything well, it's better to have specialized platforms that perform their function well.

As for the topic itself, DIMMs will not go anywhere. CAMM2 is proposed as an additional standard. I suspect it will be used mostly in laptops, where its advantages over SO-DIMM are obvious.
 
I don't think most are aware of how RAM impacts performance and usability; the two main factors are IO cache and swapping.
Definitely. If we were to discuss how many people get less RAM than they should when they build, we would not finish the discussion before the end of the world arrives.
If you are doing a lot of multitasking or switching a lot between files, more RAM will yield noticeable performance differences.
True, but realistically, how much multitasking could you do on a midrange CPU? Like, I don't know, a 7600X/7700X or 9600X/9700X, something along those lines.
Sure, if you have a LOT of RAM it will eventually finish the job(s), but that means you have to sit around waiting for it to finish (because it's slower than a flagship). We could pretend that it's sort of "productive", but honestly this sort of scenario basically shows that a CPU upgrade is valid.

I would argue that this sort of scenario, where you have a very large RAM buffer relative to the CPU muscle, kind of masks the fact that the CPU is getting long in the tooth. Because you get used to waiting around for tasks to finish, and keep adding tasks while others finish, and because it never swaps, it seems like the rig is still potent when it actually isn't that strong anymore.

But yeah honestly many people should probably add another 16 or 32GB on top of what they are getting when making a build. This way they will be safe from having to add another kit a few years down the line.
 
There's always some excuse. 128 GB will be enough for 99% of people for the foreseeable future

Not for several of the latest large AI language models, unfortunately. They can take hundreds of GB of memory even in quantized (lossily compressed) format, and they'd really benefit from GPU-level memory bandwidth. I imagine that within a few years more than a few people will be using them, rather than just enthusiasts?
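For a rough idea of why the numbers get so big: the weights alone take roughly parameter count times bytes per weight, before you even add KV cache and runtime overhead. A small Python sketch with illustrative model sizes (not tied to any specific release):

Code:
# Weight-only memory estimate: parameters x bytes per weight (ignores KV cache and runtime overhead)
def quantized_weights_gib(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

for params, bits in [(70, 4), (70, 8), (405, 4)]:
    print(f"{params}B model @ {bits}-bit ≈ {quantized_weights_gib(params, bits):.0f} GiB")
# 70B  @ 4-bit ≈ 33 GiB
# 70B  @ 8-bit ≈ 65 GiB
# 405B @ 4-bit ≈ 189 GiB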
 
This has always been a problem. If anything, DDR6 will make this issue worse.

Yeah, those supposed speeds are very conservative.

Why would it be a problem for servers? Desktops are already limited to two channels.

They are a little more expensive, but not by much. DDR5 is perfectly viable price-wise. If changing your setup costs 3/4 the price of your current one, I'd say that's perfectly acceptable considering that DDR5 doubles the capacity and speed. Not to mention the speed gain you'll get from an AM5 CPU.

If there's anything to upgrade to: 1DPC motherboards now support 128 GB total. Hard to imagine 128 GB being too little for most people.
Most people never upgrade memory. By the time they need to upgrade they will likely buy a new platform.

What prices? DDR5 doubles capacity at DDR4 prices. It's weird to see price listed as a reason for holding off on DDR5 in 2025. For PCIe 5.0 SSDs it makes sense; for DDR5 it makes zero sense.

And the current idea of putting a hot SSD right below a chunky, hot GPU, or between the CPU and GPU, is better?
Backside access with much less heat would be so much easier than the current setup where I have to remove the GPU and then "dive in".

Exactly. First gen DDR6 will be slower and more expensive than even the average DDR5 we have today. Speed will be barely faster at ~9000 and timings will likely be double that of DDR5.

Personally, I've always upgraded at the end of a DDR generation. That's when prices have been best, speeds the highest and compatibility the best.
I did it with DDR3-1866 (DDR3 went to 2100-something, and I skipped DDR2). DDR4's initial speed was 2133, and I did it again with DDR4-3733 when DDR5's initial speed was 4800. I will likely do it once more with DDR5 next year, or whenever Zen 6 launches, at hopefully around 8000 or so, when DDR6 will likely launch at around 9000.

Then hold off and watch DDR6 evolve until there's talk of DDR7 and it's clear there's nothing more to be had from DDR6.

The problem is that when upgrading to DDR5, it's everything else you need to replace as well.

So the prices of all components (M/B, CPU and RAM) need to come down. RAM is a bit cheaper now but still more expensive.

I could upgrade to AM5, but if I don't get a 7800X3D or 9800X3D, it won't be worth it for me coming from a 5700X3D.

Seeing that AM6 and DDR6 are around the corner, I might as well wait for that, or for more reasonable CPU and M/B prices, and pair them with DDR5.
 
Not for several of the latest large AI language models, unfortunately. They can take hundreds of GB of memory even in quantized (lossily compressed) format, and they'd really benefit from GPU-level memory bandwidth. I imagine that within a few years more than a few people will be using them, rather than just enthusiasts?
I very much doubt that. 1%, at most. Even that is millions of people already.

So the prices of all components (M/B, CPU and RAM) need to come down. RAM is a bit cheaper now but still more expensive.
I made some mock wish lists a while back for various AM5 platform upgrades:
Ultra-Low AM5:
89,77€ - AMD Ryzen 5 8400F, 6C/12T, 4.20-4.70GHz, tray (100-000001591 / 100-100001591MPK)
42,99€ - TeamGroup T-Force VULCAN schwarz UDIMM 16GB Kit, DDR5-5200, CL40-40-40-76 (FLBD516G5200HC40CDC01)
71,99€ - MSI PRO A620M-E (7E28-001R)
Total price: € 204.66

Budget AM5:
123,94€ - AMD Ryzen 5 7400F, 6C/12T, 3.70-4.70GHz, tray
55,98€ - Lexar THOR OC Black UDIMM 16GB Kit, DDR5-6000, CL38-48-48-96 (LD5U08G60C38LG-RGD)
73,98€ - Biostar B650MT
Total price: € 253.90

Midrange AM5:
278,00€ - AMD Ryzen 5 7600X3D, 6C/12T, 4.10-4.70GHz, boxed without cooler 100-100001721WOF
89,90€ - Patriot Viper XTREME 5 UDIMM 32GB Kit, DDR5-6000, CL30-40-40-76 (PVX532G60C30K)
208,79€ - GIGABYTE B850 AORUS Elite WIFI7
Total price: € 576.69

Enthusiast AM5:
459,00€ - AMD Ryzen 7 9800X3D, 8C/16T, 4.70-5.20GHz, boxed without cooler 100-100001084WOF
266,00€ - ADATA XPG LANCER RGB Silver Grey CUDIMM 48GB Kit, DDR5-9200, CL42-56-56 (AX5CU9200C4224G-DCLACRSG)
707,08€ - ASUS ROG Crosshair X870E Apex (90MB1KR0-M0EAY0)
Total price: € 1432.08

These were never meant to be suggestions for anyone to go out and buy these exact components.
More an example of how cheap AM5 really is.

I went overboard with the midrange motherboard pricing; it should be closer to 150, not 210.
The enthusiast tier is obviously total overkill and not even compatible with CUDIMMs.
The ultra-low tier is something I would not suggest to anyone, as it's heavily compromised, and for 50€ more the budget option is far more balanced without any motherboard or CPU limits.
I could upgrade to AM5, but if I don't get a 7800X3D or 9800X3D, it won't be worth it for me coming from a 5700X3D.
There's also the much cheaper 7600X3D. 7000-series non-X3D parts already match 5700X3D performance.
Seeing that AM6 and DDR6 are around the corner, I might as well wait for that, or for more reasonable CPU and M/B prices, and pair them with DDR5.
AM6 and DDR6 will be very expensive at the beginning, just like AM5 and DDR5 were. It took more than a year for prices to start coming down.
Not to mention that AM6 won't come out until sometime in 2029 at the earliest. Zen 6 will still be on AM5, and that's a 2026/2027 product.
So by waiting for AM6 you'll be waiting for the next five years or so. DDR6 will also be slower than the best DDR5 at the beginning, so you won't even gain noticeable performance from it.

I think it's far more reasonable to wait for Zen 6 and move to AM5 around 2027, when DDR5 prices will be at their lowest and performance likely as good as it's going to get. Same for motherboard prices. CPU prices for Zen 6 are a question mark obviously, but if it launches at the end of 2026, then by 2027 they should have stabilized too, like the 9800X3D, which took three months to stabilize in availability and price.
 
Agreed; the only problem with buying into AM5 and Zen 6 is that it's the last one in the series, with no upgrade path after that. I guess it doesn't matter much as long as M/B and CPU prices come down by then.
 
There's not enough space even on an ATX board for what you're proposing.
It's simple math. For Arrow Lake:
ATX boards theoretically have 7 "slot" positions for cards. The first is usually used for a PCIe X1 or not used at all. You can easily do e.g. X4 + X16 + blank + blank + X4, which leaves two bottom slots for all the chipset lanes, e.g. X8 + X8.
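A quick tally of that hypothetical layout, using only the slot widths mentioned above (splitting them into CPU-fed and chipset-fed slots is my own reading, for illustration):

Code:
# Tally of the hypothetical ATX layout above: X4 + X16 + blank + blank + X4, plus two chipset-fed X8 slots
cpu_slots = [4, 16, 4]       # the two blanks are left for GPU cooler clearance
chipset_slots = [8, 8]

print("slot positions used:  ", len(cpu_slots) + len(chipset_slots), "of 7")
print("CPU lanes exposed:    ", sum(cpu_slots))       # 24
print("chipset lanes exposed:", sum(chipset_slots))   # 16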

Desktop is desktop and server is server, as it should be. Instead of making a Frankenstein that doesn't do anything well, it's better to have specialized platforms that perform their function well.
You know very well that I never said that, so don't attempt any straw-man arguments.
I was replying to your post about many server parts not having active cooling, because they rely on specialized cases (sometimes with ducts) which do the active cooling for them.

True, but realistically, how much multitasking could you do on a midrange CPU? Like, I don't know, a 7600X/7700X or 9600X/9700X, something along those lines.
I can count on one hand the number of people I've seen professionally who close applications as they switch between sub-tasks. Pretty much every professional on the planet who works at least semi-efficiently has many applications/files open, but usually only one under actual load. This means people need lots of RAM and not necessarily a lot of CPU cores. As a matter of fact, most people only start to close applications, or most of their browser tabs, when the computer gets too slow.

The various companies I've either worked for or have had contact with usually use severely underpowered hardware for their engineering staff, and above all, it's almost always a lack of memory.

Sure, if you have a LOT of RAM it will eventually finish the job(s), but that means you have to sit around waiting for it to finish (because it's slower than a flagship). We could pretend that it's sort of "productive", but honestly this sort of scenario basically shows that a CPU upgrade is valid.

I would argue that this sort of scenario, where you have a very large RAM buffer relative to the CPU muscle, kind of masks the fact that the CPU is getting long in the tooth. Because you get used to waiting around for tasks to finish, and keep adding tasks while others finish, and because it never swaps, it seems like the rig is still potent when it actually isn't that strong anymore.
I never said anything about waiting for tasks to finish, and judging by the way you argue, I don't think you know what multitasking is. I'm not talking about queuing batch jobs.

When you run into heavy swapping, it's more that every tiny interaction slows down, at some point driving the user to close files or applications. Some applications may even reach the point where they crash.
Just a web browser alone can quickly consume >10 GB, plus office applications and the specific productivity tools for whatever task the worker has.
For most types of engineering, some kind of CAD tool(s) or similar is used, along with simulation tools etc. For software development it's usually some kind of "IDE", build tools, debugging tools, VMs/emulation tools, and more often than you'd think some kind of graphics tool. And it's pretty similar for anyone in creative graphics/video etc.
Most workers need to switch between files/sub-tasks to be as productive as possible. The more you have to close and reopen files/applications, the more it takes focus away from being productive. The same goes for the computer gradually slowing down: the smoother the computer performs, the less distracting it is. This affects not only overall "productivity" but also quality of work, the number of mistakes, meeting deadlines, etc.
 
One day people will figure out that every time there is a new DDR standard, the latency doubles or triples... x3D cache will be mandatory for the awful latency DDR6 will come with. But hey, a couple of benchmarks will look good.
 
One day people will figure out that every time there is a new DDR standard, the latency doubles or triples... x3D cache will be mandatory for the awful latency DDR6 will come with. But hey, a couple of benchmarks will look good.
That's not how it works. Latency depends not just on the timings but also on the clock speed, which offsets the increase in timings.
So while the timing numbers do increase, the actual latency in nanoseconds stays roughly the same.

It's wrong to say latency doubles or triples.

Some examples:
DDR3-1600 @ CL9: 11,25ns
DDR4-3200 @ CL16: 10,00ns
DDR5-6400 @ CL32: 10,00ns

So what's the point of new DDR generations if the latency stays the same?
Bandwidth. DDR3-1600 was 12,8GB/s. DDR5-6400 is 51,2GB/s (single-channel numbers).
At comparable latency.

That is why each memory generation is faster despite doubling of timings.
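If anyone wants to sanity-check those numbers, here's a quick Python sketch using the same example kits: first-word latency in nanoseconds is CL divided by the real clock (half the MT/s rate), and per-channel bandwidth is the transfer rate times 8 bytes for a 64-bit channel.

Code:
# Latency (ns) = CL / (MT/s / 2) * 1000; bandwidth (GB/s) = MT/s * 8 bytes / 1000 (one 64-bit channel)
kits = [
    ("DDR3-1600", 1600, 9),
    ("DDR4-3200", 3200, 16),
    ("DDR5-6400", 6400, 32),
]

for name, mts, cl in kits:
    clock_mhz = mts / 2                 # DDR transfers twice per clock
    latency_ns = cl * 1000 / clock_mhz  # cycles -> nanoseconds
    bandwidth_gbs = mts * 8 / 1000      # 8 bytes per transfer
    print(f"{name} CL{cl}: {latency_ns:.2f} ns, {bandwidth_gbs:.1f} GB/s")
# DDR3-1600 CL9:  11.25 ns, 12.8 GB/s
# DDR4-3200 CL16: 10.00 ns, 25.6 GB/s
# DDR5-6400 CL32: 10.00 ns, 51.2 GB/s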
 