Is there even a JEDEC spec for 3200 MHz?
Edit: Where does it say here that memory support up to 3200 MHz means JEDEC-standard 3200 MHz? As far as I know, neither Intel nor AMD specifies latency or voltage in their XMP/DOCP recommendations.
Edit 2: Also, show me a RAM kit that runs at 3200 MHz by JEDEC default, without XMP or DOCP.
You can't really get any lower power than this (the fact that I've owned both of these CPUs makes me feel nostalgic).
www.anandtech.com
Yep, JEDEC announced three 3200 specs a while after the initial DDR4 launch.
The fastest JEDEC 3200 spec is 20-20-20. That (or one of the slower standards) is what is used for 3200-equipped laptops.
There are essentially no consumer-facing JEDEC-spec 3200 kits available though - simply because this doesn't really matter to consumers, who buy whatever (and enthusiasts want faster stuff and wouldn't touch JEDEC with a ten-foot pole). This also means these DIMMs aren't generally sold at retail, but they can be found through other channels. All ECC DDR4-3200 also runs at JEDEC speeds, as do most if not all DDR4-3200 SODIMMs.
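For perspective, the absolute-latency difference between those JEDEC bins and typical XMP kits is easy to work out. A quick back-of-the-envelope sketch (the kit timings below are illustrative examples, not specific products):

```python
# Back-of-the-envelope CAS latency comparison. DDR transfers twice per
# clock, so DDR4-3200 runs a 1600 MHz I/O clock; absolute latency is
# CL cycles divided by that clock.

def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    """Absolute CAS latency in nanoseconds."""
    clock_mhz = data_rate_mts / 2        # 3200 MT/s -> 1600 MHz
    return cl / clock_mhz * 1000         # cycles / MHz -> ns

# Illustrative timings, not specific products:
for name, rate, cl in [
    ("JEDEC DDR4-3200 CL20", 3200, 20),
    ("XMP   DDR4-3200 CL16", 3200, 16),
    ("JEDEC DDR4-2666 CL19", 2666, 19),
]:
    print(f"{name}: {cas_latency_ns(rate, cl):.2f} ns")
# JEDEC 3200 CL20 -> 12.50 ns; XMP 3200 CL16 -> 10.00 ns
```

So the fastest JEDEC bin gives up about 2.5 ns of CAS latency versus a common CL16 XMP kit - measurable, but hardly a disaster for the laptops these DIMMs end up in.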
How are 2 cores not enough for background tasks? Even 1 core is totally fine. I'm not talking about having AutoCAD or BOINC open in the background, but just dealing with Windows overhead, and even a Pentium 4 is enough for that. As long as the main cores aren't getting distracted from gaming, the E cores serve their purpose. As an MT workload boost, those E cores shouldn't be expected to do anything of much value, once you realize that die space is wasted on them instead of on P cores.
Sorry, but no. Try to consider how a PC operates in the real world. Say you have a game that needs 4 fast threads, only one of which consumes a full core, but each of which can hold back performance if the core needs to switch between it and another task. You then have 4 fast cores and 1 background core, and Windows Update, Defender (or other AV software), or some other software update process (Adobe CS, an automated Steam, EGS or Origin download) kicks in. That E core is now fully occupied. What happens to other, minor system tasks? One of three scenarios:
- The scheduler kicks the update/download task to a P core, costing you performance.
- The scheduler keeps all "minor" tasks on the E core, choking it and potentially causing issues through delayed system processes.
- The scheduler starts putting tiny system processes on the P core, potentially causing stutters.
In every case, this harms performance. So, 1 E core is insufficient. Period. Two is the bare minimum, and even with a relatively low number of background processes it's not unlikely for the same scenario to play out with two.
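You can even reproduce this by hand, pinning a heavy background process to specific cores - essentially doing the scheduler's job manually. A minimal sketch using psutil; the core indices and process name are assumptions (on a 12900K, logical CPUs 16-23 would be the E cores), not anything from Intel's scheduler:

```python
# Minimal sketch: manually pinning a background task to E cores,
# i.e. doing by hand what the scheduler does automatically.
# Assumptions: psutil is installed, and the E cores are the last
# logical CPUs (e.g. on a 12900K, LPs 0-15 are the P-core threads
# and 16-23 are the E cores); adjust for your actual topology.
import psutil

E_CORES = list(range(16, 24))  # assumed E-core indices

def pin_to_e_cores(pid: int) -> None:
    """Restrict a process (e.g. an updater/downloader) to the E cores."""
    psutil.Process(pid).cpu_affinity(E_CORES)

# "SomeUpdater.exe" is a hypothetical process name for illustration.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "SomeUpdater.exe":
        pin_to_e_cores(proc.pid)
```

Pin one real download or AV scan this way and watch what happens to the remaining background tasks - with a single E core they have nowhere sensible to go.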
Also, the E cores are overall quite fast. So, in that same scenario, a 2P+8E setup is likely to perform better than a 4P+1E (or 2E) setup, as the likelihood of the game needing more than 2 "faster than a full E core" threads is very low, and you are left with more cores to handle the slightly slower game threads plus background tasks.
Intel’s messaging with its new Ice Lake Xeon Scalable (ICX or ICL-SP) steers away from simple single-core or multi-core performance; instead, the pitch is that its unique feature set - AVX-512, DL Boost, cryptography acceleration, and security - along with appropriate software optimizations, or paired with specialist Intel family products such as Optane DC Persistent Memory, Agilex FPGAs/SmartNICs, or 800-series Ethernet, offers better performance and better metrics for those actually buying the systems. This angle, Intel believes, puts it in a better position than competitors that only offer a limited subset of these features, or lack the infrastructure to unite these products under a single easy-to-use brand.
I'm not really sure that matters to any non-enterprise consumer even a tiny bit. All these features sound like they matter in a closed, temperature- and dust-controlled server room, and offer nothing for a consumer with an excessive budget.
.... so you agree that HEDT is becoming quite useless then? That paragraph essentially says as much. Servers and high-end workstations (HEDT's core market!) are moving to specialized workflows with great benefits from specialized acceleration. MSDT packs enough cores and performance to handle pretty much anything else. The classic HEDT market is left as a tiny niche, having lost its "if you need more than 4 cores" selling point, with PCIe 4.0 even eroding its I/O advantage. There are still uses for it, but they are rapidly shrinking.
I clearly said that this is what would be interesting to people with very excessive budgets. The 3970X is more interesting as a toy than the 5950X.
No you didn't. What you said was:
Chips like 5950X and 12900K are essentially pointless, as those looking for power, go with HEDT and consumers just want something sane and what works and what is priced reasonably. The current "fuck all" budget chip is TR 3970X (3990X is bit weak in single core). Things like i9 or Ryzen 9 on mainstream platform are just products made for poor people to feel rich (they aren't exactly poor, but I feel snarky). Those platforms are always gimped in terms of PCIe lanes and other features and that's why TR4 platform is ultimate workhorse and "fuck all" budget buyers platform. And if that's too slow, then you slap phase change on TR, OC as far as it goes and enjoy it. Far better, than octa core with some eco fluff.
Your argumentation here is squarely centered around the previous practical benefits of HEDT platforms - multi-core performance, RAM and I/O. Nothing in this indicates that you were speaking of people buying these as "toys" - quite the opposite. "Ultimate workhorse" is hardly equivalent to "expensive toy", even if the same object can indeed qualify as both.
You're not wrong that there has historically been a subset of the HEDT market that has bought them because they have money to burn and want the performance because they can get it, but that's a small portion of the overall HEDT market, and one that frankly is well served by a $750 16-core AM4 CPU too. Either way, this market isn't big enough for AMD or Intel to spend millions developing products for it - their focus is on high end workstations for professional applications.
And that's where the luxury of HEDT lies: it offers good performance at everything and excellent performance in what you described - those rare cases when you are memory-bandwidth constrained or need huge core counts.
That's not really true. While 3rd-gen TR does deliver decent ST performance, it's still miles behind MSDT Ryzen. I mean, look at AnandTech's benchmarks, which cover everything from gaming to tons of different workstation tasks as well as industry-standard benchmarks like SPEC. The only scenarios where the 3970X wins out are either highly memory-bound or among the few tasks that scale well beyond 16 cores and 32 threads. Sure, these tasks exist, but they are quite rare, and not typically found among non-workstation users (or datacenters).
Of course, the fact that the 3970X is significantly behind the 5950X in ST and lightly threaded tasks doesn't mean that it's terrible at these things. It's generally faster in ST tasks than a 6700K, for example, though not by much. But I sincerely doubt the people you're talking about - the ones with so much money they really don't care about spending it - would find that acceptable. I would expect them to buy (at least) two PCs instead.
I'm not seriously looking for one and wouldn't have any use for it. If being into computers were just a hobby, then performance would matter very little to me; I would rather look into unique computers or something random like Phenom IIs. Performance matters the most when it's not plentiful and when you can't upgrade frequently. If not for some rather modest gaming needs (well, wants, to be exact), I would be fine with a Celeron. Even in gaming, what I have now (i5 10400F) is simply excessive; I could be perfectly served by an i3 10100F. The latest or modern games make up maybe 30% of my library. I often play something old like UT2004, Victoria 2 or Far Cry, and those games don't need a modern CPU at all - in fact, a modern OS and hardware may even cause compatibility issues.

I used to have an Athlon 64, socket 754 era-correct rig for a while, but frequent part failures made it too expensive and too annoying to keep running. Besides that, I have tried various computers already, and at one point I had 3 desktops working and ready to use in a single room. It was nice for a while, until I realized that I only have one ass and one head and can only meaningfully use one of them. Those weren't expensive machines either, but I still learned my lesson. Beyond that, maintenance effort also increases, and at some point one or two of them will mostly sit abandoned doing nothing. Sure, you can use them for BOINC or mining, but their utility is still very limited. I certainly was more impressionable and was into acquiring things that looked interesting, but the sad reality is that beyond the initial interest, you still end up with only one daily-usage machine. I also tested this when I had no responsibilities and 24 hours all to myself for literal months; there's really not much benefit in doing that long term. If you work or study, then you really can't properly utilize more than 2 machines (main desktop and backup machine, or daily machine and cool project machine, or daily desktop and laptop).

Despite all that, I would like to test out a Quad FX machine. By that I mean that using it for 3 months would be nice, and later it might collect dust. The i5 10400F machine serves all my needs just fine, while offering some nice extras (two extra cores that I probably don't really need, but which are nice for BOINC, and really low power usage), and getting a Quad FX machine would only mean a ton of functional overlap. Perhaps all this made me mostly interested in the longest-lasting configs that don't need to be upgraded or replaced for many years - and that means I will keep using my current machine for a long time, until it really stops doing what I need and want (well, to a limited extent of course).
If you look at what many people own and what their interests are, most would say that they want a reliable, no-bullshit, long-lasting system. I think those are important criteria, and I judge many part releases by their long-term value. The i9 is weak on my scale. Sure, it's fast now, but its power consumption and heat output are really unpleasant. It will be fast for a while, but it will be the fastest for only a few months, and that's the main reason to get one. Over time you will feel its negative aspects far more than the initial positive ones; therefore I think it's a poor chip. It is also obscenely expensive to maintain: you need a just-released, overpriced board to own one, and likely an unreliable cooling solution, aka water cooling. On top of that, it's a transitional release between DDR4 and DDR5, meaning it doesn't take full advantage of the new technology. It is also the first attempt at P and E cores, and I don't think it has a great layout of those.

All in all, it's an unpleasant chip to own, with lots of potential to look a lot worse in the future (due to P/E core layouts being figured out better and DDR5 being leveraged better, or so I expect), and it's not priced right - expensive to buy and to maintain. I don't think it will last as well as the i7 2600K or 3770K/4770K. Those chips lasted for nearly a decade and started to feel antiquated only relatively recently; this i9 12900K already feels somewhat limited in potential. Therefore, I don't think it's really interesting or good. For long-term ownership with low TCO and minimal negative factors, this i9 fails hard. Performance only matters so much in that equation. I think the i5 12400 or i7 12700 would fare a lot better than the K parts and would be far more pleasant to use long term. This CPU (and, for that matter, all-hardware) evaluation mentality is certainly not common here at TPU, but I think it's valuable, and therefore I won't judge chips by their performance only. Performance matters in long-term usage, but only so much, and many other things matter just as much.
Maybe, but you have to admit that the 3970X's overclocked performance would be great. The 5950X would never beat it in multithreaded tasks. My point is that if you are looking for a luxury CPU, then buy an actual luxury CPU, not just some hyped-up mainstream stuff. I'm not shifting the frame of reference, and some slight benefit of the 5950X in single-threaded workloads won't make it the overall better chip while it gets completely obliterated in multithreaded loads. The 5950X might be more useful to the user - that's a good argument to make - but does it feel like a luxury, truly "fuck all" budget CPU? I don't think so, and I don't think people looking for a high-end workhorse CPU would actually care about the 5950X either, since the Threadripper platform was made for exactly that, and it has that exclusive feel, just like Xeons.

You know, this is a similar situation to certain cars. The Corvette is a well-known performance car. It's fast, somewhat affordable, and it looks good. Some people don't know that Vettes are actually faster and may even feel nicer to drive than some Ferraris or Lambos, so the typical Ferrari or Lambo buyer doesn't even consider getting a Vette, despite it most likely being objectively the better car while also being a lot cheaper. I think it's a similar situation here with Threadripper and the 5950X or 12900K. Threadripper feels more exclusive and has some features that make it a distinctly well-performing HEDT chip, which a mainstream one doesn't have. Despite a mainstream chip like the 5950X being more useful and better performing overall, it's just not as alluring as Threadripper. This is how I think about this.

But full disclosure: if I'm being 100% honest, I would most likely just leave my computer alone and enjoy it for what it is, rather than what it could be. The only upgrade I'd make is to a 2TB SSD, as nothing AAA except one title currently fits onto the drive, and I'm already using NTFS compression.
There were signs of this above, but man, that's a huge wall of goalpost shifting. No, you weren't presenting arguments as if they only applied to you and your specific wants and interests, nor were you making specific value arguments. You were arguing about the general performance of the 12900K, for general audiences - that's what this thread is about, and for anything else you actually need to specify the limitations of your arguments. It's a given that flagship-tier hardware is poor value - that's common knowledge for anyone with half a brain and any experience watching any market whatsoever. Once you pass the midrange, you start paying a premium for premium parts. That's how premium markets work. But this doesn't invalidate the 12900K - it just means that, like other products in this segment, it doesn't make sense economically. That's par for the course. It's expected. And the same has been true for every high-end CPU ever.
Also, you're making a lot of baseless speculation here. Why would future P/E core scheduling improvements not apply to these chips? Why would future DDR5 improvements not apply here? If anything, RAM OC results show that the IMC has plenty left in the tank, so it'll perform better with faster DDR5 - the RAM seems to be the main limitation there. It's quite likely that the Thread ... Director? is sub-optimal and will be improved in future generations, but you're assuming that this is causing massive performance bottlenecks and that software/firmware can't alleviate them. I've yet to see any major bottlenecks outside of specific applications that either seem to not run on the E cores or get scheduled only to them (and many MT applications seem to scale well across all cores of both types), and if anything there are indications that software and OS issues are the cause of this, not hardware.
You were also making arguments around absolute performance, such as an OC'd 10900K being faster, which ... well, show me some proof? If not, you're just making stuff up. Testing and reviews strongly contradict that idea. For example, in AT's SPEC2017 testing (which scales well with more cores, as some workstation tasks can), the 12900K with DDR4 outperforms the 10900K by 31%. To beat that with an OC you'd need to be running your 10900K at (depending on how high their unit boosted) somewhere between 5.8 and 6.8 GHz to catch up, let alone be faster. And that isn't possible outside of exotic cooling, and certainly isn't useful for practical tasks. And at that point, why wouldn't you just get a 12900K and OC that? You seem to be looking very hard for some way to make an unequal comparison in order to validate your opinions here. That's a bad habit, and one I'd recommend trying to break.
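The math behind those clock figures is simple, assuming (generously) that performance scales linearly with frequency - the stock boost clocks below are plausible guesses, not measurements:

```python
# Clock a 10900K would need to close a 31% gap, assuming performance
# scales linearly with frequency (optimistic - memory-bound work
# scales worse, so the real requirement is even higher).
uplift = 1.31                      # 12900K (DDR4) vs 10900K, AT SPEC2017

for boost_ghz in (4.4, 4.9, 5.2):  # plausible effective boost clocks
    needed = boost_ghz * uplift
    print(f"{boost_ghz:.1f} GHz stock -> {needed:.1f} GHz needed")
# 4.4 -> 5.8 GHz, 4.9 -> 6.4 GHz, 5.2 -> 6.8 GHz
```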
The same goes for things like saying an OC'd 3970X will outperform a 5950X in MT tasks. From your writing it seems that the 5950X is for some reason not OC'd (which is ... uh, yeah, see above). But regardless of that, you're right that the 3970X would be faster - but again, to what end, and at what (material, practical, time, money) cost? The number of real-world workloads that scale well beyond 16 cores and 32 threads is quite small (heck, few scale well past 8c16t - the quick scaling sketch below puts numbers on this). So unless what you're building is a PC meant solely for running MT workloads with near-perfect scaling (which generally means rendering, some forms of simulation, ML (though why wouldn't you use an accelerator for that?), video encoding, etc.), this doesn't make sense, as most of the time using the PC would be spent at lower-threaded loads, where the "slower" CPU would be noticeably faster. If you're building a video editing rig, ST performance for responsiveness in the timeline is generally more important than MT performance for exporting video, unless your workflow is very specialized. The same goes for nearly everything else that can make use of the performance. And nobody puts an overclocked CPU in a mission-critical render box, as that inevitably means stability issues, errors, and other problems. That's where TR-X, EPYC, Xeon-W and their likes come in - and there's a conscious tradeoff there for stability instead of absolute peak performance (as at that point you can likely just buy two PCs instead).
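To put a number on that scaling claim, here's a quick Amdahl's-law sketch; the parallel fractions are illustrative, not measured from any particular workload:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# fraction of the work that parallelizes. Doubling 16 cores to 32
# adds ~11% at p = 0.80 and ~37% even at p = 0.95 - far short of
# the ideal 2x - which is why so few workloads justify > 16c/32t.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.80, 0.95, 0.99):
    s16, s32 = speedup(p, 16), speedup(p, 32)
    gain = (s32 / s16 - 1) * 100
    print(f"p={p:.2f}: 16c -> {s16:.1f}x, 32c -> {s32:.1f}x (+{gain:.0f}%)")
```

Only at parallel fractions well above 0.95 does the second set of 16 cores pull its weight - and workloads that parallel are exactly the rendering/simulation/encoding niche described above.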
So, while your arguments might apply for the tiny group of users who still insist on buying HEDT as toys (instead of just buying $700+ MSDT CPUs and $1000+ motherboards, which are both plentiful today), they don't really apply to anyone outside of this group.