Friday, January 26th 2024

More AMD Ryzen 9000 "Zen 5" Desktop Processor Details Emerge

AMD is looking to debut its Ryzen 9000 series "Granite Ridge" desktop processors, based on the "Zen 5" microarchitecture, sometime around May-June 2024, according to High Yield YT, a source with a reliable track record for AMD leaks. These processors will use the existing Socket AM5 package and be compatible with all existing AMD 600 series chipset motherboards. It remains to be seen whether AMD debuts a new line of motherboard chipsets. Almost all Socket AM5 motherboards come with the USB BIOS Flashback feature, which means motherboards from even the earliest production batches still in the retail channel should be able to support the new processors easily.

AMD is giving its next-gen desktop processors Ryzen 9000 series model numbering, having used the Ryzen 8000 series for its recently announced Socket AM5 desktop APUs based on the monolithic "Hawk Point" silicon. "Granite Ridge" will be a chiplet-based processor, much like the Ryzen 7000 series "Raphael." In fact, it will even retain the same 6 nm client I/O die (cIOD) as "Raphael," with some possible revisions made to increase its native DDR5 memory frequency (up from the current DDR5-5200) and improve its memory overclocking capabilities. DDR5-6400 is reportedly set to become the new "sweet spot" memory speed for these processors, up from the current DDR5-6000.
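For context on what "sweet spot" means here: on Zen 4, the best-performing configuration keeps the memory controller clock (UCLK) synced 1:1 with the memory clock (MCLK), so a higher sweet spot implies a controller that can sustain a higher synced clock. A minimal sketch of the arithmetic, assuming the common Zen 4 tuning convention carries over (Zen 5 behavior is unconfirmed):

```python
# Sketch of the AM5 memory clock relationship behind the "sweet spot" claim.
# Assumes the common 1:1 UCLK:MCLK configuration from Zen 4 tuning practice;
# Zen 5 specifics are unconfirmed rumor at this point.
def memory_clocks(ddr_rate_mtps: int) -> float:
    mclk = ddr_rate_mtps / 2  # DDR transfers data twice per memory clock
    return mclk               # in 1:1 mode, UCLK == MCLK

for rate in (5200, 6000, 6400):
    print(f"DDR5-{rate}: MCLK = UCLK = {memory_clocks(rate):.0f} MHz")
# DDR5-5200: MCLK = UCLK = 2600 MHz  (current native spec)
# DDR5-6000: MCLK = UCLK = 3000 MHz  (today's sweet spot)
# DDR5-6400: MCLK = UCLK = 3200 MHz  (what a revised cIOD would need to sustain)
```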
The "Granite Ridge" processor will feature one or two "Eldora" CPU complex dies (CCDs). Each CCD contains eight "Zen 5" CPU cores (aka "Nirvana" cores), each with 1 MB of L2 cache, and a yet undisclosed amount of on-die L3 cache. The "Zen 5" CCD will be built on the TSMC N4 (4 nm EUV) foundry node, the same node on which the company builds its "Hawk Point" monolithic silicon.

The "Zen 5" CPU core is expected by AMD to achieve a 10-15 percent IPC uplift over "Zen 4," which should put its gaming performance roughly comparable to those of Ryzen 7000X3D series processors, but without the 3D Vertical Cache, yielding higher headroom for clock speeds and overclocking. High Yield YT believes that a May-June launch of Ryzen 9000 "Granite Ridge" could give AMD free reign over the DIY gaming desktop market until Intel comes around to launch its next-generation Core "Arrow Lake-S" desktop processor in the Socket LGA1851 package, some time in September-October 2024, setting the stage for Ryzen 9000X3D processors by CES (January 2025).

It was recently reported that "Zen 5" processors are already in mass production, although this could refer to the "Eldora" CCD, which makes its way not just into the "Granite Ridge" desktop processors, but also into EPYC "Turin" server processors.
Sources: High Yield YT (Twitter), HotHardware

85 Comments on More AMD Ryzen 9000 "Zen 5" Desktop Processor Details Emerge

#51
trparky
btarunrThe "Zen 5" CPU core is expected by AMD to achieve a 10-15 percent IPC uplift over "Zen 4,"
Aww man, I was hoping it was going to be in the 20s in terms of a percent increase.
#52
phubar
rv8000Go over to the AM5 memory overclocking thread on OCN; there's been plenty of discussion and benchmark evidence outlining the CCD configuration limitations. TL;DR: there is little to no point running DDR5-8000 on a 7800X3D, whereas there is/can be on a 7950X/X3D.

Games are probably the LEAST applicable piece of software here, and not really at all what I'm talking about (as I also explicitly stated "application dependent").
So I looked at that forum, and all I can find is evidence for what I've been saying: the gains you're talking about are due to IF bus/uncore clock increases and latency tuning, NOT bandwidth.

Plenty of people there were getting their DDR5 to 7200 or 7800 but with terrible timings and seeing basically no improvement just like I said.

The only people who saw nice improvements had to push their timings HARD and had tremendous difficulty with stability no matter how much voltage they pumped into the RAM.

To me it's quite telling that your original comment cited a rather high DDR5 speed (7800) with what are VERY tight timings (CL36-38) for AMD at those speeds as 'evidence' of a DDR5 bandwidth limitation.

I suggest you look at benches with much more relaxed timings (like CL48-50, plus the crappy secondary and tertiary timings those clocks typically require) at those high clocks. Whatever performance advantages you're seeing will disappear... which shouldn't happen if there were a bandwidth limitation.

Timings affect latency, not bandwidth, after all.

This is true with Intel too BTW. You can crank the DDR5 clocks as high as you like for more bandwidth, and your AIDA or y cruncher benches will look pretty sweet, but if you don't get the latency down too it won't much matter for nearly all real world apps.
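The arithmetic behind this is simple: first-word latency in nanoseconds is CL × 2000 / (data rate in MT/s), so relaxed timings at high clocks give back the latency you gained. A minimal sketch (the CL/speed pairs are illustrative, not specific kits):

```python
# First-word CAS latency in ns: CL cycles at a clock of (data rate / 2) MHz.
def cas_latency_ns(cl: int, rate_mtps: int) -> float:
    return cl * 2000 / rate_mtps

for cl, rate in [(30, 6000), (36, 7800), (48, 7800)]:
    print(f"DDR5-{rate} CL{cl}: {cas_latency_ns(cl, rate):.2f} ns")
# DDR5-6000 CL30: 10.00 ns
# DDR5-7800 CL36:  9.23 ns  <- needs very tight, hard-to-stabilize timings
# DDR5-7800 CL48: 12.31 ns  <- relaxed timings give the latency win back
```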
#53
TumbleGeorge
A18 Bionic is already in production on TSMC N3P. AMD Zen 5 is in production on the older N3E. My rumors.
#54
rv8000
phubarSo I looked at that forum, and all I can find is evidence for what I've been saying: the gains you're talking about are due to IF bus/uncore clock increases and latency tuning, NOT bandwidth.

Plenty of people there were getting their DDR5 to 7200 or 7800 but with terrible timings and seeing basically no improvement just like I said.

The only people who saw nice improvements had to push their timings HARD and had tremendous difficulty with stability no matter how much voltage they pumped into the RAM.

To me it's quite telling that your original comment cited a rather high DDR5 speed (7800) with what are VERY tight timings (CL36-38) for AMD at those speeds as 'evidence' of a DDR5 bandwidth limitation.

I suggest you look at benches with much more relaxed timings (like CL48-50, plus the crappy secondary and tertiary timings those clocks typically require) at those high clocks. Whatever performance advantages you're seeing will disappear... which shouldn't happen if there were a bandwidth limitation.

Timings affect latency, not bandwidth, after all.

This is true with Intel too BTW. You can crank the DDR5 clocks as high as you like for more bandwidth, and your AIDA or y cruncher benches will look pretty sweet, but if you don't get the latency down too it won't much matter for nearly all real world apps.
You're not going to see the impact of additional bandwidth when you have significantly worse latency due to 2:1 mode and horrendously loose timings. Not only that, but those attempting to run higher frequencies generally also have desynced FCLK and MCLK (which is, again, why 2:1 mode is often bad).

In order to properly test for a bandwidth limitation, you need latency to be the same or relatively equal, which requires pushing timings and volts to overcome the previously mentioned penalties at high frequencies (2:1 mode).

What you're trying to pass off as proof of the opposite of what I'm saying actually proves my point. Latency and IF clocks are worse when attempting 2:1 mode, so if pushing volts and timings at those higher frequencies still provides a benefit, then dual-CCD chips and higher bandwidth DO have benefits.

*Also, there are absolutely timings that impact bandwidth (secondary and tertiary ones, like SCL, for example).
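For reference, peak theoretical bandwidth depends on the data rate alone, independent of timings; a rough sketch for a standard dual-channel (2 × 64-bit) DDR5 setup:

```python
# Peak theoretical DDR5 bandwidth: MT/s * 8 bytes per 64-bit channel * channels.
def peak_bandwidth_gbs(rate_mtps: int, channels: int = 2) -> float:
    return rate_mtps * 8 * channels / 1000  # GB/s

for rate in (6000, 6400, 7800):
    print(f"DDR5-{rate}: {peak_bandwidth_gbs(rate):.1f} GB/s")
# DDR5-6000:  96.0 GB/s
# DDR5-6400: 102.4 GB/s
# DDR5-7800: 124.8 GB/s  <- ~30% more raw bandwidth than DDR5-6000
```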
#55
Beginner Macro Device
trparkyAww man, I was hoping it was going to be in the 20s in terms of a percent increase.
If they hit ridiculously high frequencies without generating even more ridiculous amounts of heat then why not.

I'll personally skip every CPU upgrade until 2027, I suppose, but I like where AMD CPUs are going. The only thing missing is doubling the RAM channels so you could make more use of iGPUs.
#56
efikkan
Beginner Micro DeviceThe only thing missing is doubling the RAM channels so you could make more use of iGPUs.
That would be an incredible waste of additional complexity and cost, only to make a feature that barely anyone uses on the higher-tier CPUs marginally better. And it still wouldn't make integrated graphics lag-free and butter-smooth, as CPUs and "GPUs" have very different access patterns: CPUs make small accesses that are latency sensitive, while GPUs move large batches that need a lot of bandwidth (occupying the bus for a while). The only way to do this well would be to have dedicated memory for the GPU, and then you basically have defeated the point of the APU.

I propose an alternative to raising the features (and cost) of the mainstream platforms: lower the entry point for the "HEDT" platforms instead. I've long argued that the upper-tier 8+ core CPUs (i9-14900K, Ryzen 7700X-7950X) should be moved (or overlap) to the Xeon-W/Threadripper platforms respectively, as both of those platforms lack CPUs with high core speeds (like the good old Sandy Bridge-E, Haswell-E, Skylake-X and early Threadrippers did). Getting both high core speeds and lots of IO and memory bandwidth would be an ideal offering for many "workstation" buyers like developers and even content creators, who don't really need 32 "slow" cores. I'd buy into such a platform in a heartbeat. :)
#57
Beginner Macro Device
efikkanto have dedicated memory for the GPU, and then you basically have defeated the point of the APU.
Why not place this dedicated memory inside the CPU then? You totally can fit it so space is not an issue here. I'm not pretending to know what I'm talking about, I'm frankly curious.
#58
67Elco
Beginner Micro DeviceWhy not place this dedicated memory inside the CPU then? You totally can fit it so space is not an issue here. I'm not pretending to know what I'm talking about, I'm frankly curious.
I may be in a minority, but I dream of a day when dedicated GPUs are not needed. I hate these overpriced, clunky-looking things taking up space. What a shame to blight this by installing a brick...

#59
trparky
Beginner Micro DeviceIf they hit ridiculously high frequencies without generating even more ridiculous amounts of heat then why not.
Higher clocks and higher IPC would be best. As I said before, I was hoping to see a 25% increase in IPC.
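For a rough sense of how a ~20 percent gain could still materialize: single-thread performance scales roughly as IPC × frequency, so the two compound. A minimal sketch with hypothetical numbers (the 5 percent clock bump is an assumption, not a leak):

```python
# Hypothetical compounding of IPC and clock gains (performance ~ IPC * frequency).
ipc_uplift = 1.15    # upper end of the rumored 10-15% IPC gain
clock_uplift = 1.05  # assumed 5% clock bump, purely illustrative
total_gain = ipc_uplift * clock_uplift - 1
print(f"~{total_gain * 100:.0f}% overall single-thread uplift")  # ~21%
```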
#60
efikkan
Beginner Micro DeviceWhy not place this dedicated memory inside the CPU then? You totally can fit it so space is not an issue here. I'm not pretending to know what I'm talking about, I'm frankly curious.
I was thinking about adding that but thought it was unnecessary; here we go:
Like I said, it defeats the purpose of the APU, which is basically a cost saving measure for users who only need "base level" graphics performance. This saves on the cost of dedicated memory, PCB, an extra cooler, etc. (Not to mention system integrators love it, as it saves them time.) If, for instance, they added 4 GB of GDDR5/6 to the CPU die, it would certainly help, but what about those who need just a little more? Before you know it, you have three versions of every CPU model. This also adds more complexity, more things that can and will eventually fail, and potentially leads to the user having to replace it sooner.

Imagine a typical desktop PC buyer (not the most hardcore enthusiast): when they buy a decent computer, they can expect ~5-8 good years out of it, more if they're lucky and the PC is not under heavy load all the time, is well cooled and not OCed, and perhaps even more with some upgrades. During this lifecycle we should expect a storage upgrade, and perhaps a memory upgrade. But we should also expect a GPU upgrade, as new codec support is crucial (even just for YouTube), and GPU acceleration is hugely beneficial for most "heavy" workloads, even non-gaming ones like photo editing, video editing, CAD, 3D modeling, etc. And I'm still just assuming a "casual" user here. So the user should factor in a "mid-cycle" GPU upgrade anyway, even if it's a new "low-end" GPU, to get the most out of the computer. The wonderful thing about dedicated graphics cards, but often overlooked, is modularity; if your needs change, it breaks, etc., you can just replace it. It also makes troubleshooting easier. The more crap that's integrated into the CPU and motherboard, the more things can break.
#61
trparky
efikkanyou have three versions of every CPU model.
*cough* Apple *cough*
#62
67Elco
So, in other words, there is no good reason they can't provide decent graphics in CPUs... got it.
#63
Beginner Macro Device
efikkanLike I said, it defeats the purpose of the APU, which is basically a cost saving measure for users who only need "base level" graphics performance.
This purpose has been completely destroyed even without my ideas coming online.
The APU alone is 250 USD. That money buys you a fully built used PC of similar gaming performance. For what you'd spend on a motherboard, RAM, etc., you'll get even more performance. So anyone who really cares about their budget doesn't buy APUs, unless there's some mining hysteria going on and dGPUs cost like a couple of Boeings. FYI: an RX 580 outperforms any existing and planned APU and goes for <60 USD on most marketplaces. It also makes buying 32 GB of RAM unnecessary, paying off about half of its price right away.

The target audience also consists of people who absolutely don't care about the price. They want the most performance possible to fit into their mini purse. With everything so simplified, it's not so feasible for such people.
#64
efikkan
Beginner Micro DeviceThis purpose has been completely destroyed even without my ideas coming online.
The APU alone is 250 USD. That money buys you a fully built used PC of similar gaming performance. For what you'd spend on a motherboard, RAM, etc., you'll get even more performance. So anyone who really cares about their budget doesn't buy APUs, unless there's some mining hysteria going on and dGPUs cost like a couple of Boeings. FYI: an RX 580 outperforms any existing and planned APU and goes for <60 USD on most marketplaces. It also makes buying 32 GB of RAM unnecessary, paying off about half of its price right away.
To be more precise, I'm primarily thinking of the cost for the manufacturers (Dell, Lenovo, HP, Acer, etc.). The vast majority of desktop PCs are built by a few large companies, who build a large volume of PCs for both office and home. There are great savings for them in terms of assembly (time and procedural mistakes) and case design when they don't have to add in a graphics card (by default). Whether these savings trickle down to us consumers varies.
If you're talking about people who build PCs themselves or get them custom built, then there aren't any significant savings with APUs.

Comparing a new PC (of any segment) to the used market isn't completely fair. New hardware usually gets 3+ years of warranty, where used stuff does not. Additionally, these days GPUs especially have been "misused" for mining, and I'd advise staying clear of that. Depending on how old the hardware is, there might be challenges with drivers too, for current or next-gen Windows. And lastly, any included storage should be discarded and replaced with new SSDs and/or HDDs. I've had both great successes and failures with used hardware, but too often the pricing is far too high for the risk you're taking.

But if you're going to talk about great value on the used market, and want a reliable and expandable machine at a low price, look at used proper workstations from Dell, HP and Lenovo that are ~3 years old. Those generally have overkill and highly rated PSUs, capable (but noisy) cooling, and space for plenty of PCIe cards, disks and SSDs. (Unlike the "base" office PCs, which have a minimal PSU and often little cooling at all.) Throw in a new GPU and you're good to go for many years. But be aware that such systems are quirky to work with and may not be compatible with normal fans, etc. (Just installing an SSD in a Dell Precision is quite an adventure.)
#65
Beginner Macro Device
efikkanAdditionally, these days GPUs especially have been "misused" for mining, and I'd advise staying clear of that
That's overblown (pun intended: the main victims are GPU fans, not the GPUs themselves). I bought ~40 mining GPUs about 4 years ago and none of them failed so far (had to replace fans in 7 of them, no other issues). Mining GPUs, especially those used for ~15 months or so, are generally less likely to surprise the end user, because any manufacturing defect would have shown up already while the GPU was mining. With retail GPUs, you are in a "hit or miss" situation. RMAs are rare, but they happen more often than suddenly dead mining GPUs. Of course, I'd recommend avoiding Pascal/Polaris GPUs because they are greatly worn out, but there's nothing wrong with buying something from the penultimate gen.
efikkanany included storage should be discarded and replaced with new SSDs and/or HDDs
That's too extreme. Especially if we're talking SSDs manufactured <2 years ago. You can add a safety SSD but throwing every single storage unit in the rubbish bin is just a waste. Drives, unless handled with negative care, can serve you for decades.
efikkanthere might be challenges with drivers too
I didn't suggest buying Pentium II era computers. Anything semi-recent (Turing onwards; RDNA onwards) will get software updates for longer than you actually care. You wouldn't care if the game isn't supported by your decade-old GPU if reviews show that more recent GPUs of double the speed are still too slow for this title. Drivers for non-GPU parts are a PITA exclusively if you are some corporate user with extremely specific soft/hardware. Windows 11 runs perfectly fine (driver-wise) on earliest Core 2 Duo systems. That's 18 years old.
efikkanThose generally have overkill and highly rated PSUs
Last time I had experience with such systems, I noticed these PSUs are so foul it's a crime to call them PSUs. Even the KCAS line-up wasn't as bad. These pre-builts are not worth the effort if you have at least half an idea of what you're doing. Their cases and motherboards are designed such that you must have a hammer, a chainsaw, and an angle grinder to make anything semi-decent fit there.
efikkanThere are great savings for them in terms of assembly (time and procedural mistakes) and case design when they don't have to add in a graphics card (by default).
I sincerely don't know how much you need to damage your brain and limbs to make PC building mistakes possible. Every single connector, slot, and interface is designed the way you need to intentionally forfeit all the sanity to do it wrong. I am talking from a perspective of a guy who has both arms fooling around due to neural damage and both halves of the brain being abused by road accidents and being defenestrated as a child. Even I have to try to do it incorrectly. So that screams "we didn't rip our customers off enough last decade."
efikkanIf you're talking about people who build PCs themselves or get them custom built, then there aren't any significant savings with APUs
Yes, that's my initial point. Savings are ALWAYS negative, unless the market is in an unorthodox condition (2020 to 2022, for example).
#66
3valatzy
TumbleGeorgeA18 Bionic is already in production on TSMC N3P. AMD Zen 5 is in production on the older N3E. My rumors.
The article claims otherwise: a 7 nm-derived node (the 6 nm cIOD) plus a 5 nm-derived node (the 4 nm CCD). Still, using the old Intel tick-tock model, this will be a tock.
"Granite Ridge" will be a chiplet-based processor, much like the Ryzen 7000 series "Raphael." In fact, it will even retain the same 6 nm client I/O die (cIOD) as "Raphael,"
The "Zen 5" CCD will be built on the TSMC N4 (4 nm EUV) foundry node
Which will make it less appealing; AM4 will rule for some time to come.
#67
efikkan
Beginner Micro DeviceThat's overblown (pun intended: the main victims are GPU fans, not the GPUs themselves). I bought ~40 mining GPUs about 4 years ago and none of them failed so far (had to replace fans in 7 of them, no other issues).
May I ask what you did with them? Are you talking about buying and flipping them, or building your own lab for some heavy compute task (or even mining yourself)?

My concern isn't primarily DoA, but stability over time. And what people regard as stable is highly subjective. May I also ask what is your standard for a GPU passing validation? My standard is that a GPU (or any hardware) should handle your use case for at least 3 months without a single crash due to hardware issues. (It doesn't mean I stress test for 3 months though…)
Beginner Micro DeviceThat's too extreme. Especially if we're talking SSDs manufactured <2 years ago. You can add a safety SSD but throwing every single storage unit in the rubbish bin is just a waste. Drives, unless handled with negative care, can serve you for decades.
Safety-wise, sure, if you do a proper DBAN wipe it should be safe to use.
And I could agree if you find something less than 2 years old, but that's rather rare. Getting SMART errors on devices >3 years old is fairly common, and that makes them useless. SSDs especially seem to wear out after 2-3 years of daily use; I've seen a lot of them fail.
Beginner Micro DeviceI didn't suggest buying Pentium II era computers. Anything semi-recent (Turing onwards; RDNA onwards) will get software updates for longer than you actually care. You wouldn't care if the game isn't supported by your decade-old GPU if reviews show that more recent GPUs of double the speed are still too slow for this title. Drivers for non-GPU parts are a PITA exclusively if you are some corporate user with extremely specific soft/hardware. Windows 11 runs perfectly fine (driver-wise) on earliest Core 2 Duo systems. That's 18 years old.
Anything older than Skylake (2015) can be hit or miss with compatibility for later OSes, but generally speaking, anything workstation/server grade has very long driver support. Linux will generally work fine.
Part of my reasoning is that something very old might work right now, but it will not have as long a support runway as a brand-new product, so this should be factored into the value proposition of any used hardware, along with the lack of warranty (e.g., Zen 5 will offer more support going forward).

I've considered buying "old" computer parts many times, and if you know what to look for, you can find quite good deals. Local sources are usually the best, but you can even find gold on eBay, even though shipping and VAT would challenge the value proposition for some of us. Workstation parts especially can be great deals, better than the usual i7s. Just ~1.5 years ago I was looking at getting 2-3 sets of workstation boards, CPUs and RAM at a great price, I believe it was Cascade Lake or something similar, so very recent and feature-rich. I didn't buy them because I was too busy. I also almost pulled the trigger on a box of 3x Intel X550 (10G NIC) NIB for ~$80 a piece, which would have been amazing for my "home lab", but I'm just too busy with my day job.

What's perhaps more interesting is what kind of hardware would offer decent value compared to a brand-new Zen 4 or Raptor Lake. For all-round gaming and desktop use, you will get pretty far with anything from the Skylake family with boost >4 GHz; even if you pair it with an RTX 4070 or RX 7800 XT, you'll not be losing too many FPS in most titles, and if the old parts free up money for a higher-tier GPU, it might be worth it. And I don't expect this to change a lot when Zen 5 arrives; for most realistic use cases, there isn't a tremendous gain beyond "Skylake" performance in gaming (except for edge cases). But the bigger questions remain: risk and time.
Beginner Micro Device
efikkanThose generally have overkill and highly rated PSUs
Last time I had experience with such systems, I noticed these PSUs are so foul it's a crime to call them PSUs. Even the KCAS line-up wasn't as bad. These pre-builts are not worth the effort if you have at least half an idea of what you're doing. Their cases and motherboards are designed such that you must have a hammer, a chainsaw, and an angle grinder to make anything semi-decent fit there.
For clarity, I was only talking about workstation computers, not baseline office computers or home computers from retail stores; both of those generally have underpowered, low-quality PSUs and cooling.

Take for instance the Dell OptiPlex, especially the slightly smaller ones: horrible designs. Even if you put a graphics card in there, there is no PSU headroom. There is (usually) no case cooling, only the stock CPU cooler recycling case air. Even when specced with the highest 65 W i7, those are utterly useless for development work, or any kind of sustained load. I've seen them throttle like crazy or shut down from just a ~5 min compile job, and those were fairly new at the time.
Beginner Micro DeviceI sincerely don't know how much you need to damage your brain and limbs to make PC building mistakes possible. Every single connector, slot, and interface is designed the way you need to intentionally forfeit all the sanity to do it wrong.<snip>
Please be serious. Let's be civil and have a constructive and interesting discussion. ;)
If you read carefully, I'm talking about the big PC manufacturers who build systems by the hundreds of thousands. For them, avoiding an add-in card (or two), cabling, and cooling means fewer work hours and fewer potential points of failure. Every single mistake that has to be manually corrected costs them "a lot". It's completely different for us enthusiasts building computers, or even ordering a custom-built machine; these are not mass-produced on an assembly line.
#68
Beginner Macro Device
efikkanMay I ask what you did with them? Are you talking about buying and flipping them, or building your own lab for some heavy compute task (or even mining yourself)?
I was just a one-time employee at an internet café. They asked me to upgrade their computers because customers whined about lag in games, and I grabbed about 40 used GPUs for that job.

A week before NY 2024, one of the board members asked me what to do with their virus situation, but I'm not a virus expert... I ended up learning that all these GPUs are still working fine.
efikkanSSDs especially seem to wear out after 2-3 years of daily use; I've seen a lot of them fail
They are statistically about as reliable as HDDs. Only the most terrible SSDs fail after a couple of years. Data removal on SSDs is as monkey-simple as it gets: just format it 10 times and call it a day; no one will recover that data. HDDs... well, it will take a long while, but it's also doable. Still, there's a very insignificant chance of catching a spiked HDD; rarely do we see hackers and other suspicious people selling their hardware instead of destroying it.
efikkanAnything older than Skylake (2015) can be hit or miss with compatibility for later OSes
And it will also be a miss in terms of $ per FPS (especially any other performance metric if you ain't a gamer). Most feasible 2nd hand offerings are way newer. As of today, it's low and mid-tier systems from 2019 to 2021 (Ryzen 3600/5500/5600, i3-10105F, i3-12100F, i5-10400F). Driver support isn't a problem for such PCs. GPU-wise, it's GTX 1600 series (4 years newer than GTX 900 series which are still supported), RTX 3000 series (effectively almost next-gen) and RX 6000 series (same story but AMD).
efikkanMay I also ask what is your standard for a GPU passing validation?
It's valid up until it crashes for hardware reasons, no matter the use case, provided clocks/voltages/wattages/fan curves are out-of-the-box and no overheating is going on. If it can't run one specific task and it's not a software issue, the GPU is effectively broken. Even if I or another end user never uses such software, that GPU is already marked as failing, and it'll await replacement when possible. A broken cooling system doesn't count; it's the easiest thing to fix, especially if the heatsink and wiring are intact. Of course, if the failure doesn't bother the user it can be marked "OK", but y'know...
efikkanFor clarity, I was only talking about workstation computers,
These are usually overpriced on my second-hand market, so I didn't bother checking. Fair enough, I guess. I won't argue with that because I frankly have no idea about workstation PCs.
efikkanIf you read carefully, I'm talking about the big PC manufacturers who build systems by the hundreds of thousands. For them, avoiding an add-in card (or two), cabling, and cooling means fewer work hours and fewer potential points of failure. Every single mistake that has to be manually corrected costs them "a lot". It's completely different for us enthusiasts building computers, or even ordering a custom-built machine; these are not mass-produced on an assembly line.
Stop insulting me. I read it carefully. I added the line "we didn't rip our customers off enough last decade" precisely for that reason. I know how easy this task is and what corners they cut. This is an efficient business model, but it's still extensively greedy (if we account for the fact that these no-dGPU PCs ain't much cheaper than yes-dGPU ones).
#69
efikkan
Beginner Micro DeviceAnd it will also be a miss in terms of $ per FPS (especially any other performance metric if you ain't a gamer). Most feasible 2nd hand offerings are way newer. As of today, it's low and mid-tier systems from 2019 to 2021 (Ryzen 3600/5500/5600, i3-10105F, i3-12100F, i5-10400F). Driver support isn't a problem for such PCs. GPU-wise, it's GTX 1600 series (4 years newer than GTX 900 series which are still supported), RTX 3000 series (effectively almost next-gen) and RX 6000 series (same story but AMD).
I agree, as long as you can find them at a price that compensates for the lack of warranty. Like, paying only 30% off for a 3-year-old part is a bit steep.

I was actually updating my GPU support overview just yesterday (as I was figuring out which combinations of GPUs, systems and OS versions to have in test PCs for software validation), and like you said in your previous post, both Turing and RDNA have great support on newer platforms. Nvidia actually still has mainline driver support for Maxwell (10 years), and that is also the earliest generation that works decently with DX12 and Vulkan. Turing, which is still being made, is going to be a "long term support" family, as it is still used in the Quadro Txx series, as well as the relatively recent GTX 1630 (2022), and can even serve as a replacement for users who still want to use old Win 7. (Win 8 or Win 7 32-bit support is Pascal and older only.)

Despite ever-increasing demands, Pascal and newer hold up very well in non-RT workloads, and I know GTX 1080 owners who have gotten a lot of mileage out of them. At least compared to what I experienced in the late 90s and early 2000s; back then, pretty much anything up to Kepler felt "obsolete" within 3-4 years, sometimes even sooner.

But in terms of value, AM5 would be much more compelling if there were a good selection of solid motherboards at a decent price. This, along with memory costs, is probably the reason why Zen 3 is still a top seller and, I would argue, a great value deal if you can find a good CPU/motherboard combo at a low price. That would probably be a more compelling choice than a used system. It will be interesting to see if Zen 3 becomes a "problem" for Zen 5 due to its excellent value.
Beginner Micro DeviceStop insulting me. I read it carefully. I added the line "we didn't rip our customers off enough last decade" precisely for that reason. I know how easy this task is and what corners they cut. This is an efficient business model, but it's still extensively greedy (if we account for the fact that these no-dGPU PCs ain't much cheaper than yes-dGPU ones).
There are no insults, nor anything remotely condescending, in my posts, so please don't attribute to me things I don't say or mean. And I don't mind people disagreeing, or even a heated discussion, as long as it's civil. I'm very much interested in hearing other people's opinions when they have something to contribute. :)
I'm just explaining why the business works the way it does. The big PC manufacturers have a very high influence on Intel and AMD. They want to cut costs, which is why they push integrated graphics, and they also want frequent refreshes and gimmicks to sell new hardware. That's why we see, e.g., Intel adding ridiculous numbers of E-cores to their designs, and completely gimmicky "clock speeds" on their low-TDP models. For non-enthusiasts, "numbers" sell, like "cores", "GHz", etc.
But both you and I see through this.
#70
phints
9800X3D will win 2024 CPUs if it ships (maybe even the 9700X will), but can AMD fix their horrible boot times with Zen 4? I have a friend whose PC takes 1 minute to boot while my 4-year-old 5800X takes 10 seconds.
#71
67Elco
phints9800X3D will win 2024 CPUs if it ships (maybe even the 9700X will), but can AMD fix their horrible boot times with Zen 4? I have a friend whose PC takes 1 minute to boot while my 4-year-old 5800X takes 10 seconds.
My 7900 takes about that long with XMP enabled... with it disabled, it boots like everything else.
#72
trparky
phints9800X3D will win 2024 CPUs if it ships (maybe even the 9700X will), but can AMD fix their horrible boot times with Zen 4? I have a friend whose PC takes 1 minute to boot while my 4-year-old 5800X takes 10 seconds.
That's one thing about my 7700X system that I find really annoying. Mine doesn't take nearly as long to boot, but it's still longer than I'd like.
#73
efikkan
phints9800X3D will win 2024 CPUs if it ships (maybe even the 9700X will),
I wouldn't even dare to guess which CPU family will win without at least some real details about their performance characteristics. While Intel has an IPC lead now, we don't know how Zen 5 will stack up against Arrow Lake, or whether Arrow Lake might have higher IPC but very low clock speeds. But we do know one thing: throwing in a bunch of extra L3 cache only helps certain types of workloads, and it's not going to make up for shortcomings overall (if there are any).
phintsbut can AMD fix their horrible boot times with Zen 4? I have a friend whose PC takes 1 minute to boot while my 4-year-old 5800X takes 10 seconds.
And how often do you (re-)boot your PC? How long does it run before crashing?
But it's probably the POST time you're complaining about (which is technically pre-boot), which happens when you overclock your memory and the BIOS struggles to get your memory controller to behave (retraining memory). Like 67Elco alluded to, removing the memory overclock resolves the issue. This is a symptom of operating outside the specs of the controller. Run at the speed profile the controller is designed for, and the problem goes away. Intel's current memory controllers are certainly more tolerant, and AMD may improve theirs somewhat, but this is generally becoming a larger and larger "problem". The solution is to run at the designed spec.
#74
fb020997
5800X3D->9800X3D. I’d say that it’ll be quite the upgrade, even with a 7900XT!
#75
Fumero
So we will have a Ryzen 9800X3D this year?