
Intel Core i9-12900K

Joined
Jun 14, 2020
Messages
1,097 (1.14/day)
System Name Mean machine
Processor 13900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torrent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
No. I'm saying that one buys a 12900K if its performance is needed, regardless of power consumed - and not for working on Excel spreadsheets.
Well I agree, it's just that only a handful of applications actually max it out. Most productivity apps (photoshop / premiere / solidworks / autocad etc.) do not.
 
Joined
May 2, 2017
Messages
7,762 (3.69/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I actually doubt that. I think the i9 10900K might be faster if it had unlocked PLs and higher clocks.
With a 30%-ish IPC deficit? Yeah, sorry, I don't think so. I mean, unless you are just outright ignoring reality and saying "if the 10900K had infinite clock speed it would be the fastest CPU ever", there are hard limits to how high those CPUs can clock regardless of power under any type of semi-conventional cooling. I would think a stock 12900K beats a well-overclocked golden-sample 10900K in all but the most poorly scheduled MT apps, and in every single ST app out there.
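To put rough numbers on that argument (a back-of-envelope sketch with illustrative figures, not benchmark data): if single-thread throughput scales roughly as IPC times clock, a ~30% per-clock deficit can't realistically be clocked away.

```python
# Crude single-thread throughput model: throughput ~ IPC x clock.
# All numbers here are illustrative assumptions, not measurements.

def relative_throughput(ipc: float, clock_ghz: float) -> float:
    """Relative single-thread throughput under an IPC x clock model."""
    return ipc * clock_ghz

# Normalize the 12900K P-core IPC to 1.0 at a ~5.2 GHz ST boost,
# and assume the 10900K core sits ~30% behind per clock (0.7).
target = relative_throughput(ipc=1.0, clock_ghz=5.2)
clock_needed = target / 0.7  # clock the older core would need to match

print(f"A 0.7-IPC core would need ~{clock_needed:.1f} GHz to match")
```

Under these assumptions the older core would need roughly 7.4 GHz, far beyond what any conventional cooling allows, which is the point being made above.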
That would be a major oversight. What's the point of a 2P/4E chip? An Atom W? It would be horrible in games and hell to schedule properly, because P cores are real cores and E ones are just padding. I personally don't see the i9 12900K as a real 16-core chip; it's just an 8-core chip with some Atoms to deal with background tasks while running games.
As was said before, the E clusters are 4 or 0 cores, nothing in between. They have a single stop on the ring bus and no internal power gating. And IMO, 2P4E would be fantastic for any low power application, whether that's a general purpose desktop or a laptop. Remember, those E cores can't hang with the P cores, but they're not your grandpa's Atom cores. Anandtech confirmed Intel's claims of them matching Skylake at the same clocks (they're slightly slower at 3.9GHz than the 4.2GHz i7-6700K).
Chips like the 5950X and 12900K are essentially pointless, as those looking for power go with HEDT, and consumers just want something sane that works and is priced reasonably. The current "fuck all" budget chip is the TR 3970X (the 3990X is a bit weak in single core). Things like i9 or Ryzen 9 on a mainstream platform are just products made for poor people to feel rich (they aren't exactly poor, but I feel snarky).
That's nonsense. I would make the opposite claim: that chips like the 5950X and 12900K have effectively killed HEDT. I mean, how much attention does Intel give to their HEDT lineup these days? They're still on the X299 chipset (four generations old!), and on 14nm Skylake still. Servers are on Ice Lake cores, but there's no indication of that coming to HEDT any time soon. Threadripper was also a bit of a flash in the pan after MSDT Ryzen went to 16 cores - there are just not enough workstation applications performed in high enough volumes that a) scale to >16+16 threads or b) need tons of memory bandwidth to make TR a viable option - especially when MSDT has a massive clock speed and IPC advantage. A 5950X will outperform a 3970X or 3990X in the vast majority of applications (as will a 5800X, tbh), and while there are absolutely applications that can make great use of TR, they are few and highly specialized.
JEDEC speeds are the speeds most of you should use; this is the speed the memory controller is designed to operate at, and running it overclocked over time will result in data loss and eventually stability issues.

Overclocking memory is still overclocking, and should never be the default recommendation for DIY builders. It should be reserved for those wanting to take the risk.

Using overclocked memory in benchmarking is also misleading, as each overclock will be different, and people usually have to decrease their overclock after a while.
I disagree here. JEDEC specs are designed for servers and for low cost, and are woefully low performance. XMP and DOCP are manufacturer-supported "OC" modes that, while technically OCing, are as out of the box as you can expect for DIY. I would very much agree that hand-tuned memory has no place in a CPU benchmark, but XMP? Perfectly fine.
To both of you:
The important question is what the real power consumption is, and under which circumstances.
Does it matter if the CPU gets a little hot under an unrealistic workload? (And putting a CPU under such a load 24/7 is going to wear it out quickly anyways.)
I'm pretty sure Noctua NH-U14S is more than sufficient for 99.9% of customers buying this CPU, and if anything an AiO is usually not going to improve that much over NH-U14S in a case, it's much better to upgrade the case fans and calibrate the fan curves. Do that, and you'll get a system that's very quiet under most real workloads, yet still can handle the extreme peaks.
AiO coolers are usually not a good choice anyways, far too much noise for little gains. If you need cooling for extreme overclocking, go custom loop.
The TPU thermal test is in Blender. Is that an unrealistic workload? No. Of course, most people running Blender aren't constantly rendering - but they might have the PC doing so overnight, for example. Would you want it bumping up against tJmax the entire night? I wouldn't.
 
Joined
Jun 10, 2014
Messages
2,783 (0.88/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1600 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
No enthusiast uses that, and these are enthusiast sites. I never said it was an invalid test, just that it is an oddball scenario nobody fits into.
Says who?
Well, it may seem that way because most reviews are oriented towards overclocking and teenagers tend to scream the loudest in the forums.

Most DIY builders (and even gamers) are 30+, have families, jobs etc. They want computers that work reliably, don't mess up their files etc., and often want capable machines to do some work, some gaming etc.

A baseline benchmark should always be stock, otherwise there is no way to have a fair comparison. How far should you otherwise push the OC? If one reviewer gets a good sample and another gets a bad sample, it may end up changing the conclusion. It should be stock vs. stock or OC vs. OC, not light OC vs. stock etc.

If you want to OC, then by all means OC and enjoy!
But when people push OC on "normal" PC buyers just looking for a good deal, it annoys me. Just look at all the first-time builders: what is their number one problem? It's memory. If they had just gotten memory at the JEDEC speeds for their CPU, they would have saved a lot of money, gotten a stable PC and only sacrificed a negligible performance difference. In most real-world scenarios it's less than 5%, and for value buyers it's much smarter to save that money and buy a higher tier GPU or CPU. Running OC memory with tight timings is relevant for those wanting to do benchmarking as a hobby, but has little real-world value, especially considering stability issues, file corruption and loss of warranty (CPU) are the trade-off for a minor performance gain.
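For context on the bandwidth side of that trade-off, here is the standard DDR peak-bandwidth arithmetic (the speeds below are example values I've picked; real-world gains are far smaller than the peak figures suggest):

```python
# Theoretical peak DDR bandwidth: transfer rate (MT/s) x 8 bytes per
# transfer (64-bit channel) x number of channels. Example speeds only.

def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    """Peak DDR bandwidth in GB/s (decimal GB) for 64-bit channels."""
    return mt_per_s * 8 * channels / 1000

jedec = peak_bandwidth_gbs(2666)  # a common JEDEC speed
xmp = peak_bandwidth_gbs(3600)    # a typical XMP kit

print(f"JEDEC 2666: {jedec:.1f} GB/s vs XMP 3600: {xmp:.1f} GB/s")
```

The peak gap here is about 35%, but as the post above notes, caches absorb most memory traffic, so the application-level difference usually lands in the low single digits.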
 
Joined
May 2, 2017
Messages
7,762 (3.69/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Says who?
Well, it may seem that way because most reviews are oriented towards overclocking and teenagers tend to scream the loudest in the forums.

Most DIY builders (and even gamers) are 30+, have families, jobs etc. They want computers that work reliably, don't mess up their files etc., and often want capable machines to do some work, some gaming etc.

A baseline benchmark should always be stock, otherwise there is no way to have a fair comparison. How far should you otherwise push the OC? If one reviewer gets a good sample and another gets a bad sample, it may end up changing the conclusion. It should be stock vs. stock or OC vs. OC, not light OC vs. stock etc.

If you want to OC, then by all means OC and enjoy!
But when people push OC on "normal" PC buyers just looking for a good deal, it annoys me. Just look at all the first-time builders: what is their number one problem? It's memory. If they had just gotten memory at the JEDEC speeds for their CPU, they would have saved a lot of money, gotten a stable PC and only sacrificed a negligible performance difference. In most real-world scenarios it's less than 5%, and for value buyers it's much smarter to save that money and buy a higher tier GPU or CPU. Running OC memory with tight timings is relevant for those wanting to do benchmarking as a hobby, but has little real-world value, especially considering stability issues, file corruption and loss of warranty (CPU) are the trade-off for a minor performance gain.
You're not wrong, but this is precisely why XMP/DOCP exists. Boot the PC, get into the BIOS (which you need to do anyway to see that everything registers properly), enable the profile, save and exit. Done. Unless you've been dumb and bought a kit with some stupidly high XMP/DOCP profile (idk, 4400c16 or something), it should work on literally every CPU out there, as none of them have IMCs so bad that they only support JEDEC speeds. XMP is an excellent middle ground.
 
Joined
Jan 27, 2015
Messages
1,572 (0.54/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage Team Group MP33 Pro 2TB M.2 2280 PCIe 3.0 x4 + WD SN770 512GB m.2 PCIe 4.0 x4
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
Says who?
Well, it may seem that way because most reviews are oriented towards overclocking and teenagers tend to scream the loudest in the forums.

Most DIY builders (and even gamers) are 30+, have families, jobs etc. They want computers that work reliably, don't mess up their files etc., and often want capable machines to do some work, some gaming etc.

A baseline benchmark should always be stock, otherwise there is no way to have a fair comparison. How far should you otherwise push the OC? If one reviewer gets a good sample and another gets a bad sample, it may end up changing the conclusion. It should be stock vs. stock or OC vs. OC, not light OC vs. stock etc.

If you want to OC, then by all means OC and enjoy!
But when people push OC on "normal" PC buyers just looking for a good deal, it annoys me. Just look at all the first-time builders: what is their number one problem? It's memory. If they had just gotten memory at the JEDEC speeds for their CPU, they would have saved a lot of money, gotten a stable PC and only sacrificed a negligible performance difference. In most real-world scenarios it's less than 5%, and for value buyers it's much smarter to save that money and buy a higher tier GPU or CPU. Running OC memory with tight timings is relevant for those wanting to do benchmarking as a hobby, but has little real-world value, especially considering stability issues, file corruption and loss of warranty (CPU) are the trade-off for a minor performance gain.

Most people who post here put their system specs up. They don't use JEDEC RAM settings. I'm sure there's one somewhere; I've just never seen it.

I'm not saying that there isn't a place for it; I have said many times that if you want to see what a system built well inside tolerances works like, you check out CNET and PCWorld and you buy a Dell Inspiron or some such. I've also said that isn't a bad way to go (many here disagree), but at least when you buy the system you get to see how it all works together (if you research it), vs piecemealing it together like DIYers typically do.

But AT isn't speaking to that crowd. I don't know who they are speaking to, because even OEMs don't do things like high-end $600 motherboards on slow RAM; Alienware, for example, has switched entirely to XMP-mode RAM, as has HP Omen. It looks to me like AT wanted to talk to the HEDT crowd, but got distracted by normal desktop hardware?

And just for the record, I get asked once or twice a year from someone what to get. I almost never recommend DIY or any kind of prebuilt enthusiast rig. If they are not gamers I usually find a good deal on a reasonable OEM system and recommend that, and I also usually recommend a refurb corporate laptop with a decent SSD if price is an issue - because those things are built like tanks. Again though, that is not what these sites are about. You're talking about people who just want it to work and don't know or need to know squat, they should be looking at CNET and PCWorld not picking components out - again that is not a slam, it just is common sense to me.
 
Last edited:
Joined
May 24, 2007
Messages
5,367 (0.94/day)
Location
Tennessee
System Name AM5
Processor AMD Ryzen R9 7950X
Motherboard Asrock X670E Taichi
Cooling EK AIO Basic 360
Memory Corsair Vengeance DDR5 5600 64 GB - XMP1 Profile
Video Card(s) AMD Reference 7900 XTX 24 GB
Storage Samsung Gen 4 980 1 TB / Samsung 8TB SSD
Display(s) Samsung 34" 240hz 4K
Case Fractal Define R7
Power Supply Seasonic PRIME PX-1300, 1300W 80+ Platinum, Full Modular
To both of you:
The important question is what the real power consumption is, and under which circumstances.
Does it matter if the CPU gets a little hot under an unrealistic workload? (And putting a CPU under such a load 24/7 is going to wear it out quickly anyways.)
I'm pretty sure Noctua NH-U14S is more than sufficient for 99.9% of customers buying this CPU, and if anything an AiO is usually not going to improve that much over NH-U14S in a case, it's much better to upgrade the case fans and calibrate the fan curves. Do that, and you'll get a system that's very quiet under most real workloads, yet still can handle the extreme peaks.
AiO coolers are usually not a good choice anyways, far too much noise for little gains. If you need cooling for extreme overclocking, go custom loop.

You took a sentence out of my post and lost the context of the whole post. The power consumption is worse than my 5950X. The multithreaded capability is worse than my 5950X. I would only see slight gains in single-threaded performance.

The context of my post, which you lost: it's not worth switching platforms.
 
Joined
May 8, 2021
Messages
1,978 (3.11/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
With a 30%-ish IPC deficit? Yeah, sorry, I don't think so. I mean, unless you are just outright ignoring reality and saying "if the 10900K had infinite clock speed it would be the fastest CPU ever", there are hard limits to how high those CPUs can clock regardless of power under any type of semi-conventional cooling. I would think a stock 12900K beats a well-overclocked golden-sample 10900K in all but the most poorly scheduled MT apps, and in every single ST app out there.
Sure mate, but that's all? A new chip just beating a tweaked two-year-old chip. And don't forget that the 10900K is actually coolable and less dense, so it's easier to extract higher clocks. 5.4GHz is doable out of a 10900K. At that point, it will nearly close the gap.


As was said before, the E clusters are 4 or 0 cores, nothing in between. They have a single stop on the ring bus and no internal power gating. And IMO, 2P4E would be fantastic for any low power application, whether that's a general purpose desktop or a laptop. Remember, those E cores can't hang with the P cores, but they're not your grandpa's Atom cores. Anandtech confirmed Intel's claims of them matching Skylake at the same clocks (they're slightly slower at 3.9GHz than the 4.2GHz i7-6700K).
2P/4E still sounds rather lethargic. I will believe that config is worthy when I see it. That also doesn't change the fact that a 2E config is impossible now; that's an oversight for sure.


That's nonsense. I would make the opposite claim: that chips like the 5950X and 12900K have effectively killed HEDT. I mean, how much attention does Intel give to their HEDT lineup these days? They're still on the X299 chipset (four generations old!), and on 14nm Skylake still.
Not much, but you can't shit on them for staying with Skylake. It's not like Rocket Lake was any good, and Comet Lake is still Skylake, so that stuff actually isn't that old. And you get many benefits of the HEDT platform.

Threadripper was also a bit of a flash in the pan after MSDT Ryzen went to 16 cores - there are just not enough workstation applications performed in high enough volumes that a) scale to >16+16 threads or b) need tons of memory bandwidth to make TR a viable option - especially when MSDT has a massive clock speed and IPC advantage. A 5950X will outperform a 3970X or 3990X in the vast majority of applications (as will a 5800X, tbh), and while there are absolutely applications that can make great use of TR, they are few and highly specialized.
Well, the 3970X would be my pick if money were no object. 32 cores are way cooler than 16, and the 3970X isn't really as antiquated as you say. And to be honest, if money were no object, I would rather find a Quad FX platform with two dual-core Athlon 64 FX chips, a server motherboard and 16GB of DDR2. But those things are really rare and certainly not so quick today; a Quad FX machine might not even beat an Athlon X4 860K. The crazy thing is that old oddware like the Athlon 64 FX-74 still goes for 250-300 USD. Used AMD Socket F dual-socket boards are super rare, and when they are listed, they cost a lot. Same deal with coolers: they don't exist, and I don't think mainstream ones fit. At least ECC DDR2 is quite common. It's 2021 and I still want to know how good AMD K8 was against K10 or BD.

Back on topic, I also mentioned a phase-change cooler with the 3970X. It would be fun to tweak a 32C/64T monster. It's a somewhat lower-clocked Ryzen, so it does have some potential if you can tame that heat output. The 5950X isn't as tweakable; it's already nearly maxed out.

Edit:
There also seems to be the WRX80 platform, which is better than the 3970X's. The 3975WX looks pretty cool too, but at stock it's slower than the 3970X.
 
Joined
May 2, 2017
Messages
7,762 (3.69/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Sure mate, but that's all? A new chip just beating a tweaked two-year-old chip. And don't forget that the 10900K is actually coolable and less dense, so it's easier to extract higher clocks. 5.4GHz is doable out of a 10900K. At that point, it will nearly close the gap.
I never said the 12900K was particularly impressive, I just said that your claim of a 10900K being faster was wrong.
2P/4E still sounds rather lethargic. I will believe that config is worthy when I see it. That also doesn't change the fact that a 2E config is impossible now; that's an oversight for sure.
2E doesn't really make sense - what would be the value of a scant two E cores, if their major point is low-power MT performance and smooth handling of background tasks? Two cores isn't enough for either. And the E cores are small enough that a cluster of four already fits on any die they want.
Not much, but you can't shit on them for staying with Skylake. It's not like Rocket Lake was any good, and Comet Lake is still Skylake, so that stuff actually isn't that old. And you get many benefits of the HEDT platform.
Servers have been on Ice Lake for half a year now, with large-scale customers (Google, Facebook and their ilk) having access to it for likely a year before that. There's nothing stopping Intel from releasing those chips for an X699 platform. But clearly they're not interested, hence the lack of updates since late 2019 for that lineup (which was even at that point just a warmed-over refresh of the 9th gen products from the previous year).
Well, 3970X would be my pick if money is no object. 32 cores are way cooler than 16.
Well, then you either have highly specialized workloads or just don't care about real-world performance. Most people don't buy CPUs based on what core count is "cooler", but through balancing what they can afford and what performs well.
3970X isn't really as antiquated as you say.
I never said it was antiquated, I said it has lower IPC and boosts lower than Zen3 MSDT chips, leaving its only advantage at workloads that either need tons of bandwidth or more than 16+16 threads, which are very rare (essentially nonexistent) for most end users.
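The threads-versus-per-core-speed trade-off above can be sketched with Amdahl's law. The per-core performance ratio below (~1.25x in favour of the 16-core chip) is my own illustrative assumption, not a measured figure:

```python
# Amdahl-style throughput: the serial fraction runs on one core, the
# parallel fraction scales across all cores. Illustrative numbers only.

def throughput(cores: int, per_core: float, parallel_fraction: float) -> float:
    """Relative throughput for a chip with the given core count and
    per-core speed, on a workload with the given parallel fraction."""
    serial = 1.0 - parallel_fraction
    return per_core / (serial + parallel_fraction / cores)

for p in (0.50, 0.90, 0.99):
    r16 = throughput(16, 1.25, p)  # 16 cores, ~25% faster per core
    r32 = throughput(32, 1.00, p)  # 32 cores, baseline per-core speed
    print(f"parallel fraction {p:.2f}: 16-core/32-core = {r16 / r32:.2f}")
```

Under these assumptions the 32-core part only pulls ahead once the workload is roughly 99% parallel, which matches the claim that such workloads are rare for most end users.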
And to be honest, if money is no object, I would rather find Quad FX platform with 2 dual core Athlon 64 FX chips, server motherboard, 16GB DDR2. But those things are really rare and certainly not so quick today. Quad FX machine might not even beat Athlon X4 860K. The crazy thing is that old oddware like Athlon 64 FX-74 goes for 250-300 USD still. Used AMD Socket F dual socket boards are super rare and when they are listed, they cost a lot. Same deal with coolers, they don't exist and I don't think that mainstream ones fit. At least ECC DDR2 is quite common. It's 2021 and I still want to know how good AMD K8 was against K10 or BD.
... so you aren't actually looking for a high performance PC at all then? I mean, that's perfectly fine. Everyone has different interests, and I also think older PCs can be really cool (I just have neither the space, time nor money to collect and use them). But using that perspective to comment on a new CPU launch? That isn't going to produce good results.
Back to topic, I also mentioned phase change cooler with 3970X. It would be fun to tweak 32C/64T monster. It's somewhat lower clocked Ryzen, so it does have some potential, if you can tame that heat output. 5950X isn't as tweakable, it's already nearly maxed out.
Again, now you're shifting the frame of reference to something completely different from what a general CPU review is about. And again, if that's your interest it is perfectly valid, but it isn't valid as general commentary on CPUs, as it completely shifts the basis for comparison away from the concerns of the vast majority of people reading this review or discussing the results.
 
Joined
May 8, 2021
Messages
1,978 (3.11/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
2E doesn't really make sense - what would be the value of a scant two E cores, if their major point is low-power MT performance and smooth handling of background tasks? Two cores isn't enough for either. And the E cores are small enough that 4 is already a small enough die area to fit on any die they want.
How are 2 cores not enough for background tasks? Even 1 core is totally fine. I'm not talking about having AutoCAD or BOINC open in the background, but just dealing with Windows overhead, and for that even a Pentium 4 is enough. As long as the main cores aren't getting distracted from gaming, the E cores have a purpose. As an MT workload boost, those E cores shouldn't be expected to do much of value, when you realize that die space is wasted on them instead of P cores.


Servers have been on Ice Lake for half a year now, with large-scale customers (Google, Facebook and their ilk) having access to it for likely a year before that. There's nothing stopping Intel from releasing those chips for an X699 platform. But clearly they're not interested, hence the lack of updates since late 2019 for that lineup (which was even at that point just a warmed-over refresh of the 9th gen products from the previous year).
Intel’s messaging with its new Ice Lake Xeon Scalable (ICX or ICL-SP) steers away from simple single core or multicore performance, and instead is that the unique feature set, such as AVX-512, DLBoost, cryptography acceleration, and security, along with appropriate software optimizations or paired with specialist Intel family products, such as Optane DC Persistent Memory, Agilex FPGAs/SmartNICs, or 800-series Ethernet, offer better performance and better metrics for those actually buying the systems. This angle, Intel believes, puts it in a better position than its competitors that only offer a limited subset of these features, or lack the infrastructure to unite these products under a single easy-to-use brand.
I'm not really sure if that matters to any non-enterprise consumer even a tiny bit. All these features sound like they matter in a closed, temperature- and dust-controlled server room, and offer nothing for a consumer with an excessive budget.


Well, then you either have highly specialized workloads or just don't care about real-world performance. Most people don't buy CPUs based on what core count is "cooler", but through balancing what they can afford and what performs well.
I clearly said that this is what would be interesting to people with very excessive budgets. The 3970X is more interesting as a toy than the 5950X.


I never said it was antiquated, I said it has lower IPC and boosts lower than Zen3 MSDT chips, leaving its only advantage at workloads that either need tons of bandwidth or more than 16+16 threads, which are very rare (essentially nonexistent) for most end users.
And that's where the luxury of HEDT lies: it offers good performance for everything and excellent performance in exactly what you said here, those rare cases when you are memory bandwidth constrained or need huge core counts.


... so you aren't actually looking for a high performance PC at all then? I mean, that's perfectly fine. Everyone has different interests, and I also think older PCs can be really cool (I just have neither the space, time nor money to collect and use them). But using that perspective to comment on a new CPU launch? That isn't going to produce good results.
I'm not seriously looking for one and wouldn't have any use for it. If being into computers were just a hobby, then performance would matter very little to me; I would rather look into unique computers or something random like Phenom IIs. Performance matters the most when it isn't plentiful and when you can't upgrade frequently. If not for some rather modest gaming needs (well, wants, to be exact), I would be fine with a Celeron. But even in gaming, what I have now (an i5 10400F) is simply excessive; I could be perfectly served by an i3 10100F. And the latest and modern games make up maybe 30% of my library. I often play something old like UT2004, Victoria 2 or Far Cry, and those games don't need a modern CPU at all; in fact, a modern OS and hardware may even cause compatibility issues.

I used to have an Athlon 64, Socket 754 era-correct rig for a while, but frequent part failures made it too expensive and too annoying to keep running. Besides it, I have tried various computers already, and at one point had 3 desktops working and ready to use in a single room. It was nice for a while, until I realized that I only have one ass and head and can only meaningfully use one of them. Those weren't expensive machines either, but still I learned my lesson. Beyond that, the maintenance effort also increases, and at some point one or two of them will mostly sit abandoned doing nothing. Sure, you can use them for BOINC or mining, but their utility is still very limited.

I certainly was more impressionable then and was into acquiring things that looked interesting, but the sad reality is that beyond the initial interest, you still end up with only one daily-usage machine. I also tested this when I had no responsibilities and 24 hours all to myself for literal months; there's really not much benefit in doing that long term. If you work or study, then you really can't properly utilize more than 2 machines (a main desktop and a backup machine, a daily machine and a cool project machine, or a daily desktop and a laptop).
Despite all that, I would like to test out a Quad FX machine. By that I mean that using it for 3 months would be nice, and later it might collect dust. The i5 10400F machine serves all my needs just fine, while offering some nice extras (an extra two cores, which I probably don't really need but are nice for BOINC, and really low power usage), and getting a Quad FX machine would only mean a ton of functional overlap. Perhaps all this made me mostly interested in the longest-lasting configs, ones that don't need to be upgraded or replaced for many years, and that means I will keep using my current machine for a long time, until it really stops doing what I need and want (well, to a limited extent, of course).

If you look at what many people own and what their interests are, most would say that they want a reliable, no-bullshit, long-lasting system. I think those are important criteria, and I judge many part releases by their long-term value. The i9 is weak on my scale. Sure, it's fast now, but that power consumption and heat output are really unpleasant. It will be fast for a while, but it will be the fastest for only a few months, and that's the main reason to get one. Over time you will feel its negative aspects far more than the initial positive ones, therefore I think that it's a poor chip.

It is also obscenely expensive to maintain: you need a just-released, overpriced board to own one, and likely an unreliable cooling solution, aka water cooling. On top of that, it's a transitional release between DDR4 and DDR5, meaning that it doesn't take full advantage of the new technology. And it is also the first attempt at P and E cores, and I don't think it has a great layout of those. All in all, it's an unpleasant chip to own, with lots of potential to look a lot worse in the future (due to figuring out the P and E core layout better and leveraging DDR5 better, or so I expect), and it is not priced right and is expensive to buy and maintain.

I don't think it will last as well as a 2600K or a 3770K/4770K. Those chips lasted for nearly a decade and started to feel antiquated only relatively recently, while this i9 12900K already feels of somewhat limited potential. Therefore, I don't think that it's really interesting or good. In long-term ownership with low TCO and minimal negative factors, this i9 fails hard. Performance only matters so much in that equation. I think that the i5 12400 or i7 12700 will fare a lot better than the K parts and will be far more pleasant to use in the long term. This CPU (and, for that matter, all hardware) evaluation mentality is certainly not common here at TPU, but I think it is valuable, and therefore I won't look at chips by their performance only.
Performance matters in long-term usage, but only so much, and many other things matter just as much as performance.


Again, now you're shifting the frame of reference to something completely different from what a general CPU review is about. And again, if that's your interest, that is perfectly valid, but it isn't valid as general commentary on CPUs, as it completely shifts the basis for comparison away from the concerns of the vast majority of people reading this review or discussing the results.
Maybe, but you have to admit that the 3970X's overclocked performance would be great; a 5950X would never beat it in multithreaded tasks. My point is that if you're looking for a luxury CPU, then buy an actual luxury CPU, not just some hyped-up mainstream stuff. I'm not shifting the frame of reference, and the 5950X's slight benefit in single-threaded workloads won't make it the better chip overall while it gets completely obliterated in multithreaded loads. The 5950X might be more useful to the user - that's a good argument to make - but does it feel like a luxury, truly "fuck all" budget CPU? I don't think so, and I don't think people looking for a high-end workhorse CPU would actually care about the 5950X either, since the Threadripper platform was made with exactly that in mind and has that exclusive feel, just like Xeons. You know, this is similar to the situation with certain cars. The Corvette is a well-known performance car: it's fast, somewhat affordable, and it looks good. Some people don't know that Vettes are actually faster, and may even feel nicer to drive, than some Ferraris or Lambos, so the typical Ferrari or Lambo buyer doesn't even consider a Vette, despite it most likely being the objectively better car while also being a lot cheaper. I think it's a similar situation with Threadripper versus the 5950X or 12900K. Threadripper feels more exclusive and has features that make it a distinctly well-performing HEDT chip, features the mainstream chip doesn't have. Even though a mainstream chip like the 5950X is more useful and performs better overall, it's just not as alluring as Threadripper. This is how I think about it. But full disclosure: if I'm being 100% honest, I would most likely just leave my computer alone and enjoy it for what it is, rather than what it could be. The only upgrade I'd make is to a 2 TB SSD, since no AAA title except one currently fits onto my drive, and I'm already using NTFS compression.
 
Joined
Jan 14, 2019
Messages
4,781 (3.23/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Silent Loop 2 280 mm
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) AMD Radeon RX 6750 XT
Storage 1 TB Crucial P5 Plus, 2 TB Corsair MP600 R2
Display(s) Samsung C24F390, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Cherry MW 8 Advanced
Keyboard MagicForce 68
Software Windows 10 Pro
Benchmark Scores Unigine Superposition 1080p Ultra: 7150, Cinebench R23 multi: 19250, single: 1975.
You're not wrong, but this is precisely why XMP/DOCP exists. Boot the PC, get into the BIOS (which you need to do anyway to see that everything registers properly), enable the profile, save and exit. Done. Unless you've been dumb and bought a kit with some stupidly high XMP/DOCP profile (idk, 4400c16 or something), it should work on literally every CPU out there, as none of them have IMCs so bad that they only support JEDEC speeds. XMP is an excellent middle ground.
My golden rule is to buy the RAM kit with the highest officially supported speed of the platform. Currently, I'm on B560 with an 11700. The highest officially supported RAM speed on this is 3200 MHz, so I'm using a 3200 MHz kit. If I were reviewing CPUs, I would do it in exactly the same way.

How are 2 cores not enough for background tasks? Even 1 core is totally fine. I'm not talking about having AutoCAD or BOINC open in the background, but just dealing with Windows overhead, and for that even a Pentium 4 is enough. As long as the main cores aren't getting distracted from gaming, the E cores have a purpose. As an MT workload boost, though, those E cores shouldn't be expected to do anything of much value, once you realize that die space is wasted on them instead of on P cores.
A lot depends on what kind of cores we're talking about. I'm not (yet) familiar with Alder Lake's E cores, so I can't agree or disagree. All I know is, Windows update doesn't even max out one thread on my 11700, while it pegs all 4 cores to 100% usage on the Atom x5 in my Compute Stick. I literally can't use it for anything while it's updating.

And that's where the luxury of HEDT lies: good performance in everything, and excellent performance in what you said here - those rare cases when you are memory-bandwidth constrained or need huge core counts.
I disagree. I think the "luxury of HEDT" mainly lies in the extra PCI-e lanes, storage and networking capabilities. If you only need raw CPU power, a mainstream platform with a 5950X is a lot cheaper and perfectly adequate.

Jokes or not, I have actually seen a Celeron (a Pentium 4-era 3 GHz model) lag horrendously in Excel. Once, when I needed some graphs, it straight up froze for like half a minute, and when it unfroze, it rendered them at less than 1 fps. It was some IT-class hell. More realistically, I have heard that some companies keep databases in Excel - thousands of entries, all labeled, with tons of formulas calculating all sorts of things. That kind of stuff might actually bring an i7 to its knees.
That was 15 years ago. Nowadays, even a Celeron can run basically everything you need in an office - unless you're using some horribly slow implementation of emulated Windows on Citrix, like we do at my job. But then, it's the software's fault and no amount of raw horsepower can fix it.
 
Joined
May 8, 2021
Messages
1,978 (3.11/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
My golden rule is to buy the RAM kit with the highest officially supported speed of the platform. Currently, I'm on B560 with an 11700. The highest officially supported RAM speed on this is 3200 MHz, so I'm using a 3200 MHz kit. If I were reviewing CPUs, I would do it in exactly the same way.
You are actually running out of spec. What's supported is JEDEC 3200 MHz, meaning RAM at 3200 MHz with timings of CL20/CL22/CL24 - technically, CL16 is out of spec. When a CPU manufacturer lists RAM speed compatibility, it's listed at JEDEC spec, JEDEC timings included. I'm not sure if that actually impacts stability or anything, but the LGA1200 platform is expected to have a CAS latency of 12.5-15 ns. If you had a B460 board, the XMP profile might not even work, since XMP is overclocking and locked Comet Lake chipsets block overclocking to an absurd degree.
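For reference, the 12.5-15 ns figure follows directly from the standard first-word latency formula (CL cycles divided by the memory clock, which is half the DDR transfer rate). A quick sketch:

```python
# First-word CAS latency in nanoseconds: CL cycles divided by the memory
# clock. DDR transfers twice per clock, so the clock is half the MT/s rate.
def cas_ns(transfer_rate_mts: float, cl: int) -> float:
    memory_clock_mhz = transfer_rate_mts / 2
    return cl / memory_clock_mhz * 1e3  # cycles / MHz -> ns

for label, cl in [("JEDEC CL20", 20), ("JEDEC CL22", 22),
                  ("JEDEC CL24", 24), ("XMP CL16", 16)]:
    print(f"DDR4-3200 {label}: {cas_ns(3200, cl):.2f} ns")
```

The three JEDEC DDR4-3200 bins land at 12.50, 13.75 and 15.00 ns, while a typical CL16 XMP kit sits at 10 ns - which is exactly why CL16 at 3200 is out of JEDEC spec.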


A lot depends on what kind of cores we're talking about. I'm not (yet) familiar with Alder Lake's E cores, so I can't agree or disagree. All I know is, Windows update doesn't even max out one thread on my 11700, while it pegs all 4 cores to 100% usage on the Atom x5 in my Compute Stick. I literally can't use it for anything while it's updating.
We all know that the E cores have Skylake IPC and more or less Skylake clocks, so they are close to your 11700's performance. 2E is plenty for background gunk.


I disagree. I think the "luxury of HEDT" mainly lies in the extra PCI-e lanes, storage and networking capabilities. If you only need raw CPU power, a mainstream platform with a 5950X is a lot cheaper and perfectly adequate.
Well, that too, plus memory bandwidth and an insane amount of supported memory. You can put 256 GB of RAM in TRX40 boards.


That was 15 years ago. Nowadays, even a Celeron can run basically everything you need in an office - unless you're using some horribly slow implementation of emulated Windows on Citrix, like we do at my job. But then, it's the software's fault and no amount of raw horsepower can fix it.
I disagree. Many of today's low-power chips fail to match Athlon 64 performance, which was better than the Pentium 4's and, consequently, the Celeron's too. So basically anything Atom or Celeron N (Silver) can actually lag in Excel - although in that case, the lag mostly had to do with broken GPU hardware acceleration on that particular computer.
 
Joined
Jan 14, 2019
Messages
4,781 (3.23/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Silent Loop 2 280 mm
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) AMD Radeon RX 6750 XT
Storage 1 TB Crucial P5 Plus, 2 TB Corsair MP600 R2
Display(s) Samsung C24F390, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Cherry MW 8 Advanced
Keyboard MagicForce 68
Software Windows 10 Pro
Benchmark Scores Unigine Superposition 1080p Ultra: 7150, Cinebench R23 multi: 19250, single: 1975.
You are actually running out of spec. What's supported is JEDEC 3200 MHz, meaning RAM at 3200 MHz with timings of CL20/CL22/CL24 - technically, CL16 is out of spec. When a CPU manufacturer lists RAM speed compatibility, it's listed at JEDEC spec, JEDEC timings included. I'm not sure if that actually impacts stability or anything, but the LGA1200 platform is expected to have a CAS latency of 12.5-15 ns. If you had a B460 board, the XMP profile might not even work, since XMP is overclocking and locked Comet Lake chipsets block overclocking to an absurd degree.
Is there even a JEDEC spec for 3200 MHz?

Edit: Where does it say here that memory support up to 3200 MHz means JEDEC standard 3200 MHz? As far as I know, neither Intel, nor AMD specify latency or voltage in their XMP/DOCP recommendations.

Edit 2: Also, show me a RAM kit that runs at 3200 MHz by JEDEC default, without XMP or DOCP. ;)

I disagree. Many low power chips of today fail to match AMD Athlon 64 performance, which was better than Pentium 4's and consequently, Celeron's too. So basically anything Atom or Celeron N (Silver) can actually lag with Excel, but that lag mostly had to do with broken GPU hardware acceleration of that particular computer.

You can't really get any lower power than this (the fact that I've owned both of these CPUs makes me feel nostalgic).

 
Last edited:
Joined
May 2, 2017
Messages
7,762 (3.69/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Is there even a JEDEC spec for 3200 MHz?

Edit: Where does it say here that memory support up to 3200 MHz means JEDEC standard 3200 MHz? As far as I know, neither Intel, nor AMD specify latency or voltage in their XMP/DOCP recommendations.

Edit 2: Also, show me a RAM kit that runs at 3200 MHz by JEDEC default, without XMP or DOCP. ;)



You can't really get any lower power than this (the fact that I've owned both of these CPUs makes me feel nostalgic).

Yep, JEDEC announced three 3200 specs a while after the initial DDR4 launch. The fastest JEDEC 3200 spec is 20-20-20. That (or one of the slower standards) is what is used for 3200-equipped laptops.

There are essentially no consumer-facing JEDEC-spec 3200 kits available though - simply because this doesn't really matter to consumers, who buy whatever (and enthusiasts want faster stuff and wouldn't touch JEDEC with a ten-foot pole). This also means these DIMMs aren't generally sold at retail, but they can be found through other channels. All ECC DDR4-3200 also runs at JEDEC speeds, as do most if not all DDR4-3200 SODIMMs.

How are 2 cores not enough for background tasks? Even 1 core is totally fine. I'm not talking about having AutoCAD or BOINC open in the background, but just dealing with Windows overhead, and for that even a Pentium 4 is enough. As long as the main cores aren't getting distracted from gaming, the E cores have a purpose. As an MT workload boost, though, those E cores shouldn't be expected to do anything of much value, once you realize that die space is wasted on them instead of on P cores.
Sorry, but no. Try to consider how a PC operates in the real world. Say you have a game that needs 4 fast threads, only one of which consumes a full core, but each of which can hold back performance if the core needs to switch between it and another task. You then have 4 fast cores and 1 background core, and Windows Update, Defender (or other AV software) or some other software update process (Adobe CS, or an automated Steam/EGS/Origin download) kicks in. That E core is now fully occupied. What happens to other, minor system tasks? One of three scenarios: the scheduler kicks the update/download task to a P core, costing you performance; the scheduler keeps all "minor" tasks on the E core, choking it and potentially causing issues through system processes being delayed; or the scheduler starts putting tiny system processes on a P core, potentially causing stutters. Whichever of the three happens, it harms performance. So, 1 E core is insufficient. Period. Two is the bare minimum, but even with a relatively low number of background processes, it's not unlikely for the same scenario to play out with two.

Also, the E cores are overall quite fast. So, in that same scenario, a 2P+8E setup is likely to perform better than a 4P+1E (or 2E) setup, as the likelihood of the game needing more than 2 "faster than a full E core" threads is very low, and you are left with more cores to handle the slightly slower game threads + background tasks.
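The placement dilemma described above can be reduced to a toy model. Purely illustrative - the greedy policy, core counts, and load numbers here are invented for the example; Windows' scheduler and Intel's Thread Director are far more sophisticated:

```python
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    fast: bool          # True = P core, False = E core
    load: float = 0.0   # fraction of the core already occupied

def place(task_load: float, cores: list) -> str:
    """Put a background task on an E core if one has headroom,
    otherwise spill it onto the least-loaded P core."""
    for c in cores:
        if not c.fast and c.load + task_load <= 1.0:
            c.load += task_load
            return f"{c.name} (background core)"
    c = min((c for c in cores if c.fast), key=lambda c: c.load)
    c.load += task_load
    return f"{c.name} (spilled onto a P core!)"

# One E core already saturated by an update: minor tasks now disturb the P cores.
cores = [Core("P0", True), Core("P1", True), Core("P2", True), Core("P3", True),
         Core("E0", False, load=1.0)]
print(place(0.2, cores))  # -> "P0 (spilled onto a P core!)"
```

With a second, idle E core in the list, the same call would land there instead - which is the "two is the bare minimum" point.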
Intel’s messaging with its new Ice Lake Xeon Scalable (ICX or ICL-SP) steers away from simple single core or multicore performance, and instead is that the unique feature set, such as AVX-512, DLBoost, cryptography acceleration, and security, along with appropriate software optimizations or paired with specialist Intel family products, such as Optane DC Persistent Memory, Agilex FPGAs/SmartNICs, or 800-series Ethernet, offer better performance and better metrics for those actually buying the systems. This angle, Intel believes, puts it in a better position than its competitors that only offer a limited subset of these features, or lack the infrastructure to unite these products under a single easy-to-use brand.
I'm not really sure that matters to any non-enterprise consumer even a tiny bit. All those features sound like they matter in a closed, temperature- and dust-controlled server room, and offer nothing to a consumer with an excessive budget.
.... so you agree that HEDT is becoming quite useless then? That paragraph essentially says as much. Servers and high end workstations (HEDT's core market!) are moving to specialized workflows with great benefits from specialized acceleration. MSDT packs enough cores and performance to handle pretty much anything else. The classic HEDT market is left as a tiny niche, having lost its "if you need more than 4 cores" selling point, and with PCIe 4.0 eroding its IO advantage, even. There are still uses for it, but they are rapidly shrinking.
I clearly said that this is what would be interesting to people who have very excessive budgets. 3970X is more interesting as a toy than 5950X.
No you didn't. What you said was:
Chips like 5950X and 12900K are essentially pointless, as those looking for power, go with HEDT and consumers just want something sane and what works and what is priced reasonably. The current "fuck all" budget chip is TR 3970X (3990X is bit weak in single core). Things like i9 or Ryzen 9 on mainstream platform are just products made for poor people to feel rich (they aren't exactly poor, but I feel snarky). Those platforms are always gimped in terms of PCIe lanes and other features and that's why TR4 platform is ultimate workhorse and "fuck all" budget buyers platform. And if that's too slow, then you slap phase change on TR, OC as far as it goes and enjoy it. Far better, than octa core with some eco fluff.
Your argumentation here is squarely centered around the previous practical benefits of HEDT platforms - multi-core performance, RAM and I/O. Nothing in this indicates that you were speaking of people buying these as "toys" - quite the opposite. "Ultimate workhorse" is hardly equivalent to "expensive toy", even if the same object can indeed qualify for both.

You're not wrong that there has historically been a subset of the HEDT market that has bought them because they have money to burn and want the performance because they can get it, but that's a small portion of the overall HEDT market, and one that frankly is well served by a $750 16-core AM4 CPU too. Either way, this market isn't big enough for AMD or Intel to spend millions developing products for it - their focus is on high end workstations for professional applications.
And that's where the luxury of HEDT lies: good performance in everything, and excellent performance in what you said here - those rare cases when you are memory-bandwidth constrained or need huge core counts.
That's not really true. While 3rd-gen TR does deliver decent ST performance, it's still miles behind MSDT Ryzen. I mean, look at Anandtech's benchmarks, which cover everything from gaming to tons of different workstation tasks as well as industry standard benchmarks like SPEC. The only scenarios where the 3970X wins out are either highly memory bound or among the few tasks that scale well beyond 16 cores and 32 threads. Sure, these tasks exist, but they are quite rare, and not typically found among non-workstation users (or datacenters).

Of course, that the 3970X is significantly behind the 5950X in ST and low threaded tasks doesn't mean that it's terrible for these things. It's generally faster in ST tasks than a 6700K, for example, though not by much. But I sincerely doubt the people you're talking about - the ones with so much money they really don't care about spending it - would find that acceptable. I would expect them to buy (at least) two PCs instead.
I'm not seriously looking for one and wouldn't have any use for it. If being into computers were just a hobby, performance would matter very little to me; I would rather look into unique computers or something random like Phenom IIs. Performance matters the most when it's not plentiful and when you can't upgrade frequently. If not for some rather modest gaming needs (well, wants, to be exact), I would be fine with a Celeron. Even for gaming, what I have now (an i5 10400F) is simply excessive; I could be perfectly served by an i3 10100F. And the latest, or even modern, games make up maybe 30% of my library. I often play something old like UT2004, Victoria 2 or Far Cry, and those games don't need a modern CPU at all - in fact, a modern OS and hardware may even cause compatibility issues.

I used to have an era-correct socket 754 Athlon 64 rig for a while, but frequent part failures made it too expensive and too annoying to keep running. Besides it, I have tried various computers already; at one point I had 3 desktops working and ready to use in a single room. It was nice for a while, until I realized that I only have one ass and one head and can only meaningfully use one of them at a time. They weren't expensive machines either, but I still learned my lesson. Beyond that, the maintenance effort also increases, and at some point one or two of them will mostly sit abandoned, doing nothing. Sure, you can use them for BOINC or mining, but their utility is still very limited. I certainly was more impressionable then and was into acquiring things that looked interesting, but the sad reality is that beyond the initial interest, you still end up with only one daily-use machine. I also tested this when I had no responsibilities and 24 hours all to myself for literal months: there's really not much benefit in doing that long term. If you work or study, you really can't properly utilize more than 2 machines (a main desktop and a backup machine, or a daily machine and a cool project machine, or a daily desktop and a laptop).
Despite all that, I would like to test out a Quad FX machine. By that I mean that using it for 3 months would be nice, and later it might collect dust. The i5 10400F machine serves all my needs just fine while offering some nice extras (two extra cores that I probably don't really need, but which are nice for BOINC, plus really low power usage), and getting a Quad FX machine would only mean a ton of functional overlap. Perhaps all this made me mostly interested in the longest-lasting configs, ones that don't need to be upgraded or replaced for many years - and that means I will keep using my current machine for a long time, until it really stops doing what I need and want (well, to a limited extent, of course).

If you look at what many people own and what their interests are, most would say they want a reliable, no-bullshit, long-lasting system. I think those are important criteria, and I judge many part releases by their long-term value. The i9 is weak on my scale. Sure, it's fast now, but its power consumption and heat output are really unpleasant. It will be fast for a while, but it will be the fastest for only a few months, and that's the main reason to get one. Over time you will feel its negative aspects far more than the initial positive ones, which is why I think it's a poor chip. It's also obscenely expensive to own: you need a just-released, overpriced board, and likely an unreliable cooling solution, aka water cooling. On top of that, it's a transitional release between DDR4 and DDR5, meaning it doesn't take full advantage of the new technology. It's also the first attempt at P and E cores, and I don't think the layout of those is great. All in all, it's an unpleasant chip to own, with lots of potential to look a lot worse in the future (once the P/E core layout is figured out better and DDR5 is leveraged better, or so I expect), and it's not priced right - expensive to both buy and maintain. I don't think it will last as well as the 2600K or 3770K/4770K did. Those chips lasted nearly a decade and started to feel antiquated only relatively recently, while this i9 12900K already feels somewhat limited in potential. So I don't think it's really interesting or good. For long-term ownership with low TCO and minimal negative factors, this i9 fails hard, and performance only matters so much in that equation. I think the i5 12400 or i7 12700 will fare a lot better than the K parts and be far more pleasant to use long term. This way of evaluating CPUs (and, for that matter, all hardware) is certainly not common here at TPU, but I think it's valuable, so I won't judge chips by performance alone.
Performance matters in long-term usage, but only so much, and many other things matter just as much.

Maybe, but you have to admit that the 3970X's overclocked performance would be great; a 5950X would never beat it in multithreaded tasks. My point is that if you're looking for a luxury CPU, then buy an actual luxury CPU, not just some hyped-up mainstream stuff. I'm not shifting the frame of reference, and the 5950X's slight benefit in single-threaded workloads won't make it the better chip overall while it gets completely obliterated in multithreaded loads. The 5950X might be more useful to the user - that's a good argument to make - but does it feel like a luxury, truly "fuck all" budget CPU? I don't think so, and I don't think people looking for a high-end workhorse CPU would actually care about the 5950X either, since the Threadripper platform was made with exactly that in mind and has that exclusive feel, just like Xeons. You know, this is similar to the situation with certain cars. The Corvette is a well-known performance car: it's fast, somewhat affordable, and it looks good. Some people don't know that Vettes are actually faster, and may even feel nicer to drive, than some Ferraris or Lambos, so the typical Ferrari or Lambo buyer doesn't even consider a Vette, despite it most likely being the objectively better car while also being a lot cheaper. I think it's a similar situation with Threadripper versus the 5950X or 12900K. Threadripper feels more exclusive and has features that make it a distinctly well-performing HEDT chip, features the mainstream chip doesn't have. Even though a mainstream chip like the 5950X is more useful and performs better overall, it's just not as alluring as Threadripper. This is how I think about it. But full disclosure: if I'm being 100% honest, I would most likely just leave my computer alone and enjoy it for what it is, rather than what it could be. The only upgrade I'd make is to a 2 TB SSD, since no AAA title except one currently fits onto my drive, and I'm already using NTFS compression.
There were signs of this above, but man, that's a huge wall of goalpost shifting. No, you weren't presenting arguments as if they only applied to you and your specific wants and interests, nor were you making specific value arguments. You were arguing about the general performance of the 12900K, for general audiences - that's what this thread is about, and for anything else you actually do need to specify the limitations of your arguments. It's a given that flagship-tier hardware is poor value - that's common knowledge for anyone with half a brain and any experience watching any market whatsoever. Once you pass the midrange, you start paying a premium for premium parts. That's how premium markets work. But this doesn't invalidate the 12900K - it just means that, like other products in this segment, it doesn't make sense economically. That's par for the course. It's expected. And the same has been true for every high-end CPU ever.

Also, you're making a lot of baseless speculation here. Why would future P/E core scheduling improvements not apply to these chips? Why would future DDR5 improvements not apply here? If anything, RAM OC results show that the IMC has plenty left in the tank, so it'll perform better with faster DDR5 - the RAM seems to be the main limitation there. It's quite likely that the Thread ... Director? is sub-optimal and will be improved in future generations, but you're assuming that this is causing massive performance bottlenecks, and that software/firmware can't alleviate these. For one, I've yet to see any major bottlenecks outside of specific applications that either seem to not run on the E cores or get scheduled only to them (and many MT applications seem to scale well across all cores of both types), and if anything there are indications that software and OS issues are the cause of this, and not hardware.

You were also making arguments around absolute performance, such as an OC'd 10900K being faster, which ... well, show me some proof? If not, you're just making stuff up. Testing and reviews strongly contradict that idea. For example, in AT's SPEC2017 testing (which scales well with more cores, as some workstation tasks can), the 12900K with DDR4 outperforms the 10900K by 31%. To beat that with an OC you'd need to be running your 10900K at (depending on how high their unit boosted) somewhere between 5.8 and 6.8GHz to catch up, let alone be faster. And that isn't possible outside of exotic cooling, and certainly isn't useful for practical tasks. And at that point, why wouldn't you just get a 12900K and OC that? You seem to be looking very hard for some way to make an unequal comparison in order to validate your opinions here. That's a bad habit, and one I'd recommend trying to break.
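To make the arithmetic behind that clock range explicit - assuming, very optimistically, that performance scales linearly with frequency, and using an illustrative 4.4-5.2 GHz guess for how high the tested 10900K may have boosted (not measured data):

```python
GAP = 1.31  # 12900K (DDR4) lead over the 10900K in AT's SPEC2017 numbers

def clock_to_tie(boost_ghz: float, gap: float = GAP) -> float:
    """Clock the 10900K would need, under linear scaling, just to tie."""
    return boost_ghz * gap

for boost in (4.4, 5.2):
    print(f"boosting at {boost} GHz -> needs ~{clock_to_tie(boost):.1f} GHz")
```

That reproduces the roughly 5.8-6.8 GHz figures - and linear scaling is generous, since real workloads lose efficiency as clocks rise (memory doesn't speed up with the core).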

The same goes for things like saying an OC'd 3970X will outperform a 5950X in MT tasks. From your writing it seems that the 5950X is for some reason not OC'd (which is ... uh, yeah, see above). But regardless of that, you're right that the 3970X would be faster, but again - to what end, and for what (material, practical, time, money) cost? The amount of real-world workloads that scale well above 16 cores and 32 threads are quite few (heck, there are few that scale well past 8c16t). So unless what you're building is a PC meant solely for running MT workloads with near-perfect scaling (which generally means rendering, some forms of simulation, ML (though why wouldn't you use an accelerator for that?), video encoding, etc.), this doesn't make sense, as most of the time using the PC would be spent at lower threaded loads, where the "slower" CPU would be noticeably faster. If you're building a video editing rig, ST performance for responsiveness in the timeline is generally more important than MT performance for exporting video, unless your workflow is very specialized. The same goes for nearly everything else that can make use of the performance as well. And nobody puts an overclocked CPU in a mission-critical render box, as that inevitably means stability issues, errors, and other problems. That's where TR-X, EPYC, Xeon-W and their likes come in - and there's a conscious tradeoff there for stability instead of absolute peak performance (as at that point you can likely just buy two PCs instead).

So, while your arguments might apply for the tiny group of users who still insist on buying HEDT as toys (instead of just buying $700+ MSDT CPUs and $1000+ motherboards, which are both plentiful today), they don't really apply to anyone outside of this group.
 
Joined
May 8, 2021
Messages
1,978 (3.11/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Is there even a JEDEC spec for 3200 MHz?
Yes

Edit: Where does it say here that memory support up to 3200 MHz means JEDEC standard 3200 MHz? As far as I know, neither Intel, nor AMD specify latency or voltage in their XMP/DOCP recommendations.

Edit 2: Also, show me a RAM kit that runs at 3200 MHz by JEDEC default, without XMP or DOCP. ;)

They don't specify it, but Intel has been particularly vocal that JEDEC is the standard, that they expect their spec to be respected, and that XMP is overclocking. XMP actually voids the Intel CPU warranty. I don't think they'd post specs that instantly void their own product's warranty, so what they expect has to be the JEDEC spec.



You can't really get any lower power than this (the fact that I've owned both of these CPUs makes me feel nostalgic).

It's not multicore performance that sucks, but rather the single-core performance of those super low-power devices. Just like this Atom, many of them have single-core performance lower than an Athlon 64's when tested in Cinebench.

BTW, I still have some Athlon 64s: an Athlon 64 3000+, an Athlon 64 3200+ (2.2 GHz), and an Athlon 64 3400+ (2.4 GHz), all socket 754. I also have a Sempron 2800+ (s754), a Sempron 3000+ (S1G1), and a Turion X2 TL-60 (S1G1). If I'm not forgetting anything, that's all the K8 stuff I have. I have an Athlon X4 845, an Athlon X4 870K, and an Athlon X4 760K too. Just to induce some nostalgia for you: in 2017, I built the ultimate 2004 Athlon system with these specs:
DFI K8T800Pro-Alf
Athlon 64 3400+ 2.4GHz (OC to 2.5GHz)
Scythe Andy Samurai Master
2x1GB DDR400 Transcend JetRAM CL3
2X WD Raptor 10k rpm 72GB drives in RAID 0
120GB WD IDE drive
80GB Samsung Spinpoint IDE drive
Asus ATi Radeon X800 Platinum Edition AGP 8X version (Asus AX X800, yep that model with anime waifu and blue LEDs)
Creative Sound Blaster Audigy 2ZS
Sony 3.5" floppy drive
TSST corp DVD drive
TP-Link PCI Wi-Fi card
Old D-Link DFM-562IS 56k modem for lolz
Creative SBS 560 5.1 speakers (era correct)
Windows XP Pro SP3 32 bit
Fractal Design Define R4 case (certainly not era correct)
Chieftec A-90 modular 550W power supply (didn't want to refurbish an old PSU)

Considered or partially made projects:
2X SATA SSDs in RAID 0 (didn't work out due to unknown compatibility problems)
ATi Silencer + voltmod --> +150 MHz overclock
Athlon 64 3700+ upgrade (materialized as 3400+ upgrade from 3200+)
DFI LANParty UT nF3-250Gb + cranking Athlon 64 to 2.8-3 GHz (never found that board for sale anywhere)
Athlon 64 3200+ overclock to 2.5 GHz (at 224 MHz bus speed, the RAID 0 Windows XP install got corrupted permanently, because VIA's K8T800Pro chipset doesn't support independent clock speed locks; that's why I wanted the LANParty board)
 
Joined
Jan 14, 2019
Messages
4,781 (3.23/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Silent Loop 2 280 mm
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) AMD Radeon RX 6750 XT
Storage 1 TB Crucial P5 Plus, 2 TB Corsair MP600 R2
Display(s) Samsung C24F390, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Cherry MW 8 Advanced
Keyboard MagicForce 68
Software Windows 10 Pro
Benchmark Scores Unigine Superposition 1080p Ultra: 7150, Cinebench R23 multi: 19250, single: 1975.
Yep, JEDEC announced three 3200 specs a while after the initial DDR4 launch. The fastest JEDEC 3200 spec is 20-20-20. That (or one of the slower standards) is what is used for 3200-equipped laptops.

There are essentially no consumer-facing JEDEC-spec 3200 kits available though - simply because this doesn't really matter to consumers, and they buy whatever (and enthusiasts want faster stuff and wouldn't touch JEDEC with a ten-foot pole). This also means these DIMMs aren't generally sold at retail, but they can be found through other channels. All ECC DDR4-3200 is also at JEDEC speeds, as are most if not all DDR4-3200 SODIMMs.
This is what I mean. Therefore, to achieve Intel/AMD's recommended maximum RAM speed of 3200 MHz, you need XMP/DOCP. You don't have a choice. Or you could go with your DIMM's standard speeds of 2400-2666 MHz, which is also advised against.

They don't specify it, but Intel was particularly vocal that JEDEC is the standard, that they expect their spec to be respected, and that XMP is overclocking. XMP actually voids the Intel CPU warranty. I don't think they would post specs that instantly void the warranty of their products, so the JEDEC spec has to be what they expect.
If they don't specify it, they can't expect it. Simple as. Also, good luck to anyone involved in an RMA process trying to prove that I ever activated XMP. ;)

It's not multicore performance that sucks, but rather the single core performance of those super low power devices. Just like this Atom, many of them have single core performance lower than that of an Athlon 64 when tested in Cinebench.
Single core performance is getting less and less relevant even in office applications.
 
Joined
May 8, 2021
Messages
1,978 (3.11/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Sorry, but no. Try to consider how a PC operates in the real world. Say you have a game that needs 4 fast threads, only one of which consumes a full core, but each of which can hold back performance if the core needs to switch between it and another task. You then have 4 fast cores and 1 background core, and Windows Update, Defender (or other AV software) or any other software update process (Adobe CS, some Steam or EGS or Origin or whatever other automated download) kicks in. That E core is now fully occupied. What happens to other, minor system tasks? One of three scenarios: The scheduler kicks the update/download task to a P core, costing you performance; the scheduler keeps all "minor" tasks on the E core, choking it and potentially causing issues through system processes being delayed; the scheduler starts putting tiny system processes on the P core, potentially causing stutters. Either way, this harms performance. So, 1 E core is insufficient. Period. Two is the bare minimum, but even with a relatively low number of background processes it's not unlikely for the same scenario to play out with two.
Why not just clamp down on background junk then? It seems cheaper and easier to do that than to buy your way out of it in the form of CPU cores. I personally would be more than fine with 2 E cores.


Also, the E cores are overall quite fast. So, in that same scenario, a 2P+8E setup is likely to perform better than a 4P+1E (or 2E) setup, as the likelihood of the game needing more than 2 "faster than a full E core" threads is very low, and you are left with more cores to handle the slightly slower game threads + background tasks.
I still want a 4P/2E chip. You won't change my mind that it isn't the best budget setup.


.... so you agree that HEDT is becoming quite useless then? That paragraph essentially says as much. Servers and high end workstations (HEDT's core market!) are moving to specialized workflows with great benefits from specialized acceleration. MSDT packs enough cores and performance to handle pretty much anything else. The classic HEDT market is left as a tiny niche, having lost its "if you need more than 4 cores" selling point, and with PCIe 4.0 eroding its IO advantage, even. There are still uses for it, but they are rapidly shrinking.
No, I just showed you why I don't care about server archs and why they have no place in the HEDT market yet. On top of that, I clearly said that the TR 3970X is my go-to choice for an HEDT chip right now, not anything Intel.


No you didn't. What you said was
And that has literally the same meaning. You need big MT performance for big tasks. Only consumers care excessively about single threaded stuff. A prosumer may be better served by a 3970X rather than a 5950X: more lanes, more RAM, HEDT benefits, etc.

Your argumentation here is squarely centered around the previous practical benefits of HEDT platforms - multi-core performance, RAM and I/O. Nothing in this indicates that you were speaking of people buying these as "toys" - quite the opposite. "Ultimate workhorse" is hardly equivalent to "expensive toy", even if the same object can indeed qualify for both.
Ultimate workhorse can be an expensive toy. Some people use Threadrippers for work, meanwhile others buy them purely for fun. Nothing opposite about that.

You're not wrong that there has historically been a subset of the HEDT market that has bought them because they have money to burn and want the performance because they can get it, but that's a small portion of the overall HEDT market, and one that frankly is well served by a $750 16-core AM4 CPU too.
Really? The crowd of Xeon bros with Sandy, Ivy and Haswell-E chips is not that small, and the whole reason to get those platforms was mostly to avoid buying a 2600K, 3770K or 4770K. Typical K chips are cool, but Xeons were next level. Nothing changes with Threadripper.

Either way, this market isn't big enough for AMD or Intel to spend millions developing products for it - their focus is on high end workstations for professional applications.
Their only job is just to put more cores on mainstream stuff. They develop an architecture and then scale it to different users. The same Zen works for an Athlon buyer and for an Epyc buyer. There aren't millions of dollars of expenditure specifically for HEDT anywhere. And unlike Athlon or Ryzen buyers, Threadripper buyers can and will pay a high profit margin, making HEDT chip development far more attractive to AMD than Athlon or Ryzen development. Those people also don't need a stock cooler or much tech support, which makes them even cheaper for AMD to make.


That's not really true. While 3rd-gen TR does deliver decent ST performance, it's still miles behind MSDT Ryzen. I mean, look at Anandtech's benchmarks, which cover everything from gaming to tons of different workstation tasks as well as industry standard benchmarks like SPEC. The only scenarios where the 3970X wins out are either highly memory bound or among the few tasks that scale well beyond 16 cores and 32 threads. Sure, these tasks exist, but they are quite rare, and not typically found among non-workstation users (or datacenters).

Of course, that the 3970X is significantly behind the 5950X in ST and low threaded tasks doesn't mean that it's terrible for these things. It's generally faster in ST tasks than a 6700K, for example, though not by much. But I sincerely doubt the people you're talking about - the ones with so much money they really don't care about spending it - would find that acceptable. I would expect them to buy (at least) two PCs instead.
Maybe two PCs are a decent idea then, but anyway, those multithreaded tasks aren't so rare in benchmarks. I personally would like to play around with a 3970X far more in BOINC and WCG. The 3970X's single core performance is decent.


There were signs of this above, but man, that's a huge wall of goal post shifting. No, you weren't presenting arguments as if they only applied to you and your specific wants and interests, nor were you making specific value arguments. You were arguing about the general performance of the 12900K, for general audiences - that's what this thread is about, and for anything else you actually do need to specify the limitations of your arguments. It's a given that flagship-tier hardware is poor value - that's common knowledge for anyone with half a brain and any experience watching any market whatsoever. Once you pass the midrange, you start paying a premium for premium parts. That's how premium markets work. But this doesn't invalidate the 12900K - it just means that, like other products in this segment it doesn't make sense economically. That's par for the course. It's expected. And the same has been true for every high-end CPU ever.
But the fact that it's impossible to cool adequately doesn't mean anything, right? And the fact that it doesn't beat the 5950X decisively is also fine, right? Premium or not, I wouldn't want a computer that fries my legs just to beat a 5950X by a small percentage.


Why would future DDR5 improvements not apply here? If anything, RAM OC results show that the IMC has plenty left in the tank, so it'll perform better with faster DDR5 - the RAM seems to be the main limitation there. It's quite likely that the Thread ... Director? is sub-optimal and will be improved in future generations, but you're assuming that this is causing massive performance bottlenecks, and that software/firmware can't alleviate these.
Well, you literally said here that it's in hardware, so sure, software can't fix that, and you clearly say it may be fixed after a few gens. Cool, I'll care about those gens then; no need to care about the experimental 12900K.

You were also making arguments around absolute performance, such as an OC'd 10900K being faster, which ... well, show me some proof? If not, you're just making stuff up.
I extrapolate.

Testing and reviews strongly contradict that idea. For example, in AT's SPEC2017 testing (which scales well with more cores, as some workstation tasks can), the 12900K with DDR4 outperforms the 10900K by 31%. To beat that with an OC you'd need to be running your 10900K at (depending on how high their unit boosted) somewhere between 5.8 and 6.8GHz to catch up, let alone be faster. And that isn't possible outside of exotic cooling, and certainly isn't useful for practical tasks. And at that point, why wouldn't you just get a 12900K and OC that? You seem to be looking very hard for some way to make an unequal comparison in order to validate your opinions here. That's a bad habit, and one I'd recommend trying to break.
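The clock math above is easy to sketch. A minimal Python check, assuming (hypothetically) that the 10900K's tested boost fell between 4.4 and 5.2 GHz and that performance scales linearly with clock:

```python
# Rough sketch of the clock-for-IPC math above: if the 12900K leads by 31%
# at a given clock, a 10900K would need a ~31% clock bump just to tie.
# The 4.4-5.2 GHz boost range is an assumption for illustration only.
GAP = 0.31  # 12900K lead over the 10900K in AT's SPEC2017 run

def clock_to_tie(stock_boost_ghz: float, gap: float = GAP) -> float:
    """Clock the slower chip would need, scaling performance linearly with clock."""
    return stock_boost_ghz * (1 + gap)

for boost in (4.4, 5.2):
    print(f"{boost:.1f} GHz stock -> {clock_to_tie(boost):.2f} GHz needed to tie")
```

Which lands in the ~5.8-6.8 GHz range quoted above, well beyond what conventional cooling allows.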
You posted a link with 11900K benchmarks, not 10900K, making all your points here invalid. The 11900K is inferior to the 10900K due to 2 cores chopped off for tiny IPC gains. 2C/4T can make a difference. Those cores close roughly 20% of the gap with the 12900K, and then you only need 10% more performance, which you can get by simply raising the PLs to 12900K levels; you might not even need to overclock the 10900K to match the 12900K.

The same goes for things like saying an OC'd 3970X will outperform a 5950X in MT tasks. From your writing it seems that the 5950X is for some reason not OC'd (which is ... uh, yeah, see above). But regardless of that, you're right that the 3970X would be faster, but again - to what end, and for what (material, practical, time, money) cost? The amount of real-world workloads that scale well above 16 cores and 32 threads are quite few (heck, there are few that scale well past 8c16t). So unless what you're building is a PC meant solely for running MT workloads with near-perfect scaling (which generally means rendering, some forms of simulation, ML (though why wouldn't you use an accelerator for that?), video encoding, etc.), this doesn't make sense, as most of the time using the PC would be spent at lower threaded loads, where the "slower" CPU would be noticeably faster. If you're building a video editing rig, ST performance for responsiveness in the timeline is generally more important than MT performance for exporting video, unless your workflow is very specialized. The same goes for nearly everything else that can make use of the performance as well. And nobody puts an overclocked CPU in a mission-critical render box, as that inevitably means stability issues, errors, and other problems. That's where TR-X, EPYC, Xeon-W and their likes come in - and there's a conscious tradeoff there for stability instead of absolute peak performance (as at that point you can likely just buy two PCs instead).
You seem to still be applying value and stability arguments to literally the maximum e-peen computer imaginable. If you have tons of cash, you can just pay others to set it up for you, particularly a well-insulated phase change cooling loop.

So, while your arguments might apply for the tiny group of users who still insist on buying HEDT as toys (instead of just buying $700+ MSDT CPUs and $1000+ motherboards, which are both plentiful today), they don't really apply to anyone outside of this group.
Maybe, but my point was about maximum computer that money can buy. Value be damned. 5950X or 12900K is not enough. Gotta OC that HEDT chip for maximum performance.
 
Joined
May 8, 2021
Messages
1,978 (3.11/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
If they don't specify it, they can't expect it. Simple as. Also, good luck to anyone involved in an RMA process trying to prove that I ever activated XMP. ;)
With an H410/H470/B460 motherboard, you couldn't even activate it in the first place. And today, if Intel cared, they could just put an e-fuse on the chip and clamp down hard on warranty trolls with overclocked and XMPed chips. Anyway, it's pretty obvious that Intel only ensures correct operation at a CAS latency of 12.5-15 ns; anything less is experimental.

Single core performance is getting less and less relevant even in office applications.
I'm not really sure about that; office software tends to be really slow to adopt new technologies, including multithreading. I wouldn't be surprised if Excel is still mostly single threaded. Performance just doesn't matter much for those applications.
 
Joined
Jun 10, 2014
Messages
2,783 (0.88/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1600 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Servers have been on Ice Lake for half a year now, with large-scale customers (Google, Facebook and their ilk) having access to it for likely a year before that. There's nothing stopping Intel from releasing those chips for an X699 platform. But clearly they're not interested, hence the lack of updates since late 2019 for that lineup (which was even at that point just a warmed-over refresh of the 9th gen products from the previous year).
There have been references in early official documentation/drivers/etc. to an "Ice Lake-X", though it never materialized. That's most likely because the Ice Lake-SP/X core was unable to reach decent clock speeds (as seen with the Xeon W-3300 family), in fact lower than its predecessor Cascade Lake-SP/X, making it fairly uninteresting for the workstation/HEDT market. Ice Lake has worked well for servers, though.

X699 will be based on Sapphire Rapids, which is in the same Golden Cove family as Alder Lake. Hopefully it will boost >4.5 GHz reliably.

There are essentially no consumer-facing JEDEC-spec 3200 kits available though - simply because this doesn't really matter to consumers, and they buy whatever (and enthusiasts want faster stuff and wouldn't touch JEDEC with a ten-foot pole). This also means these DIMMs aren't generally sold at retail, but they can be found through other channels. All ECC DDR4-3200 is also at JEDEC speeds, as are most if not all DDR4-3200 SODIMMs.
Really?
Kingston, Corsair, Crucial and most of the rest have 3200 kits. These are big sellers and usually at great prices.
My current home development machine (5900X, Asus ProArt B550-Creator, Crucial 32 GB CT2K16G4DFD832A) runs 3200 MHz at CL22 flawlessly. CL20 would be better, but that's what I could find in stock at the time. But running overclocked memory in a work computer would be beyond stupid; I've seen how much file corruption and how many compilation failures it causes over time. An overclock isn't 100% stable just because it passes a few hours of stress tests.

<snip>
Also, the E cores are overall quite fast. So, in that same scenario, a 2P+8E setup is likely to perform better than a 4P+1E (or 2E) setup, as the likelihood of the game needing more than 2 "faster than a full E core" threads is very low, and you are left with more cores to handle the slightly slower game threads + background tasks.
There is one flaw in your reasoning:
While the E cores are theoretically capable of handling a lot of lighter loads, games are super sensitive to timing issues. So even though most games only have 1-2 demanding threads and multiple light threads, the light threads may still be timing sensitive. Depending on which thread it is, delays may cause audio glitches, networking issues, IO lag, etc. Any user application should probably run only on P cores to ensure responsiveness and reliable performance. Remember that the E cores share L2, which means the worst case latency can be quite substantial.
 
Joined
Feb 1, 2019
Messages
1,119 (0.76/day)
Location
UK, Leicester
System Name Main PC
Processor 9900k@4.8ghz 1.25v
Motherboard Asrock Fatality K6 Z370
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 3080 RTX FE
Storage 980 PRO 1TB (OS, others not listed)
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Asus Xonar D2X
Power Supply Antec HCG 750 Gold
With a 30%-ish IPC deficit? Yeah, sorry, I don't think so. I mean, unless you are just outright ignoring reality and saying "if the 10900K had infinite clock speed it would be the fastest CPU ever", there are hard limits to how high those CPUs can clock regardless of power under any type of semi-conventional cooling, and I would think a stock 12900K beats a well overclocked golden sample 10900K in all but the most poorly scheduled MT apps, and every single ST app out there.

As was said before, the E clusters are 4 or 0 cores, nothing in between. They have a single stop on the ring bus and no internal power gating. And IMO, 2P4E would be fantastic for any low power application, whether that's a general purpose desktop or a laptop. Remember, those E cores can't hang with the P cores, but they're not your grandpa's Atom cores. Anandtech confirmed Intel's claims of them matching Skylake at the same clocks (they're slightly slower at 3.9GHz than the 4.2GHz i7-6700K).

That's nonsense. I would make the opposite claim: that chips like the 5950X and 12900K have effectively killed HEDT. I mean, how much attention does Intel give to their HEDT lineup these days? They're still on the X299 chipset (four generations old!), and on 14nm Skylake still. Servers are on Ice Lake cores, but there's no indication of that coming to HEDT any time soon. Threadripper was also a bit of a flash in the pan after MSDT Ryzen went to 16 cores - there are just not enough workstation applications performed in high enough volumes that a) scale to >16+16 threads or b) need tons of memory bandwidth to make TR a viable option - especially when MSDT has a massive clock speed and IPC advantage. A 5950X will outperform a 3970X or 3990X in the vast majority of applications (as will a 5800X, tbh), and while there are absolutely applications that can make great use of TR, they are few and highly specialized.

I disagree here. JEDEC specs are designed for servers and for low cost, and are woefully low performance. XMP and DOCP are manufacturer-supported "OC" modes that, while technically OCing, are as out of the box as you can expect for DIY. I would very much agree that hand-tuned memory has no place in a CPU benchmark, but XMP? Perfectly fine.

The TPU thermal test is in Blender. Is that an unrealistic workload? No. Of course, most people running Blender aren't constantly rendering - but they might have the PC doing so overnight, for example. Would you want it bumping up against tJmax the entire night? I wouldn't.
XMP is still out of spec, bear that in mind; the DIMMs are factory tested, but not the rest of the system they go into.

I had to downclock my 3200CL14 kit to 3000 MHz on my 8600K, because its IMC couldn't handle 3200 MHz.

On my Ryzen system, my 3000CL16 kit worked fine under Windows for years, but after I installed Proxmox (Linux), the RAM stopped working properly and became unstable. Sure enough, Google's stress test yielded errors until I downclocked it to 2800 MHz, which is still out of spec for the CPU.

Always stress test RAM using OS-based testing (not MemTest86, which is only good at finding hardware defects); don't assume XMP is stable.
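The "test from inside the OS" advice can be illustrated with a toy userspace pattern test. This is a minimal sketch only; real tools like memtester or Google's stressapptest lock pages in RAM, use many access patterns, and stress the memory controller far harder:

```python
import random

def pattern_test(n_bytes: int = 16 * 1024 * 1024, passes: int = 2, seed: int = 0) -> int:
    """Toy memory check: write pseudo-random 64-bit words, replay the same
    sequence, and count mismatches. Returns 0 on a stable system; any nonzero
    count means a read came back corrupted."""
    n_words = n_bytes // 8
    buf = [0] * n_words
    errors = 0
    for p in range(passes):
        rng = random.Random(seed + p)
        for i in range(n_words):
            buf[i] = rng.getrandbits(64)       # write phase
        rng = random.Random(seed + p)          # replay the identical sequence
        for i in range(n_words):
            if buf[i] != rng.getrandbits(64):  # read/verify phase
                errors += 1
    return errors

print("errors:", pattern_test(256 * 1024, passes=1))
```

The point being: corruption from marginal XMP settings shows up as mismatches under real OS load, not necessarily in a bare-metal MemTest86 pass.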
 
Joined
May 2, 2017
Messages
7,762 (3.69/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
There have been references in early official documentation/drivers/etc. to an "Ice Lake-X", though it never materialized. That's most likely because the Ice Lake-SP/X core was unable to reach decent clock speeds (as seen with the Xeon W-3300 family), in fact lower than its predecessor Cascade Lake-SP/X, making it fairly uninteresting for the workstation/HEDT market. Ice Lake has worked well for servers, though.

X699 will be based on Sapphire Rapids, which is in the same Golden Cove family as Alder Lake. Hopefully it will boost >4.5 GHz reliably.
Just goes to show that high end MSDT is taking over where HEDT used to have its niche. The space between "server/datacenter chip" and "16c24t high clocking new arch MSDT chip" is pretty tiny, both in relevant applications and customers. There are a few, but a fraction of the old HEDT market back when MSDT capped out at 4-6 cores.
Really?
Kingston, Corsair, Crucial and most of the rest have 3200 kits. These are big sellers and usually at great prices.
At JEDEC speeds? That's weird. I've literally never seen a consumer-facing kit at those speeds. Taking a quick look at Corsair DDR4-3200 kits (any capacity, any series) on a Swedish price comparison site doesn't give a single result that isn't c16 in the first page of results (48 different kits) when using the default sorting (popularity). Of course there will also be c14 kits for the higher end stuff. Looking at one of the biggest Swedish PC retailers (inet.se), all DDR4-3200, and sorting by popularity (i.e. sales), the first result that isn't c16 is the 9th one, at c14. Out of 157 listed results, the only ones at JEDEC speeds were SODIMMs, with the rest being c16 (by far the most), c14 (quite a few), and a single odd G.Skill TridentZ at c15. Of course this is just one retailer, but it confirms my previous experiences at least.
My current home development machine (5900X, Asus ProArt B550-Creator, Crucial 32 GB CT2K16G4DFD832A) runs 3200 MHz at CL22 flawlessly. CL20 would be better, but that's what I could find in stock at the time. But running overclocked memory in a work computer would be beyond stupid; I've seen how much file corruption and how many compilation failures it causes over time. An overclock isn't 100% stable just because it passes a few hours of stress tests.
XMP is generally 100% stable though, unless you buy something truly stupidly fast. Of course you should always do thorough testing on anything mission critical, and running JEDEC for that is perfectly fine - but then you have to work to actually find those DIMMs in the first place.
There is one flaw in your reasoning:
While the E cores are theoretically capable of handling a lot of lighter loads, games are super sensitive to timing issues. So even though most games only have 1-2 demanding threads and multiple light threads, the light threads may still be timing sensitive. Depending on which thread it is, delays may cause audio glitches, networking issues, IO lag, etc. Any user application should probably run only on P cores to ensure responsiveness and reliable performance. Remember that the E cores share L2, which means the worst case latency can be quite substantial.
You have a point here, though it depends on whether the game's less intensive threads are latency-sensitive or not. They don't necessarily have to be - though audio processing definitely is, and tends to be one such thing. Core-to-core latencies for E-to-E transfers aren't that bad though, at a ~15ns penalty compared to P-to-P, P-to-E or E-to-P. Memory latency also isn't that bad at just an extra 8ns or so. The biggest regression is the L2 latency (kind of strangely?), which is nearly doubled. I guess we'll see how this plays out when mobile ADL comes around - from current testing it could go either way. There's definitely the potential for a latency bottleneck there though, you're right about that.

This is what I mean. Therefore, to achieve Intel/AMD's recommended maximum RAM speed of 3200 MHz, you need XMP/DOCP. You don't have a choice. Or you could go with your DIMM's standard speeds of 2400-2666 MHz, which is also advised against.
You could always track down one of the somewhat rare JEDEC kits - they are out there. OEM PCs also use them pretty much exclusively (high end stuff like Alienware might splurge on XMP). System integrators using off-the-shelf parts generally use XMP kits though, as they don't get the volume savings of buying thousands of JEDEC kits.

Why not just clamp down on background junk then? It seems cheaper and easier to do that than to buy your way out of it in the form of CPU cores. I personally would be more than fine with 2 E cores.
...because I want to use my PC rather than spend my time managing background processes? One of the main advantages of a modern multi-core system is that it can handle these things without sacrificing too much performance. I don't run a test bench for benchmarking, I run a general-purpose PC that gets used for everything from writing my dissertation to gaming to photo and video editing to all kinds of other stuff. Keeping the background services for the various applications used for all of this running makes for a much smoother user experience.
I still want a 4P/2E chip. You won't change my mind that it isn't the best budget setup.
We'll see if they make one. I have my doubts.
No, I just showed you why I don't care about server archs and why they have no place in the HEDT market yet. On top of that, I clearly said that the TR 3970X is my go-to choice for an HEDT chip right now, not anything Intel.

And that has literally the same meaning. You need big MT performance for big tasks. Only consumers care excessively about single threaded stuff. A prosumer may be better served by a 3970X rather than a 5950X: more lanes, more RAM, HEDT benefits, etc.
But there aren't that many relevant tasks that scale well past 8 cores, let alone 16. That's what we've seen in Threadripper reviews since they launched: if you have the workload for it they're great, but those workloads are quite limited, and outside of those you're left with rather mediocre performance, often beat by MSDT parts.
Ultimate workhorse can be an expensive toy. Some people use Threadrippers for work, meanwhile others buy them purely for fun. Nothing opposite about that.
Did you even read what I wrote, the sentences you just quoted? I literally said that they can be the same, but that they aren't necessarily so. You presented them as if they were necessarily the same, which is not true. I never said anything like those being opposite.
Really? The crowd of Xeon bros with Sandy, Ivy and Haswell-E chips is not that small, and the whole reason to get those platforms was mostly to avoid buying a 2600K, 3770K or 4770K. Typical K chips are cool, but Xeons were next level. Nothing changes with Threadripper.
Except it does: back then, those chips were the only way to get >4 cores and >8 threads, which brought meaningful performance gains in common real-world tasks. There's also a reason so many of those people still use the same hardware: it keeps up decently with current MSDT platforms, even if it is slower. The issue is that the main argument - that the increase in core count is useful - is essentially gone unless you have a very select set of workloads.
Their only job is just to put more cores on mainstream stuff. They develop an architecture and then scale it to different users. The same Zen works for an Athlon buyer and for an Epyc buyer. There aren't millions of dollars of expenditure specifically for HEDT anywhere. And unlike Athlon or Ryzen buyers, Threadripper buyers can and will pay a high profit margin, making HEDT chip development far more attractive to AMD than Athlon or Ryzen development. Those people also don't need a stock cooler or much tech support, which makes them even cheaper for AMD to make.
Wait, do you think developing a new CPU package, new chipset, new platform, BIOS configuration, and everything else, is free? A quick google search tells me a senior hardware or software engineer at AMD earns on average ~$130 000. That means tasking eight engineers with this for a year is a million-dollar cost in salaries alone, before accounting for all the other costs involved in R&D. And even a "hobby" niche platform like TR isn't developed in a year by eight engineers.
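The salary arithmetic above is simple enough to sketch. The ~$130k average and the eight-engineer headcount are the rough assumptions stated in the post, not audited figures:

```python
def annual_salary_cost(engineers: int, avg_salary: int) -> int:
    """Yearly salary cost for a team, before benefits, overhead, or other R&D spend."""
    return engineers * avg_salary

# Assumptions from the post: eight engineers at ~$130,000/year each
print(f"${annual_salary_cost(8, 130_000):,}")  # → $1,040,000 in salaries alone
```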
Maybe two PCs are a decent idea then, but anyway, those multithreaded tasks aren't so rare in benchmarks. I personally would like to play around with a 3970X far more in BOINC and WCG. The 3970X's single core performance is decent.
Well, that places you in the "I want expensive tools to use as toys" group. It's not the smallest group out there, but it doesn't form a viable market for something with multi-million dollar R&D costs.
But the fact that it's impossible to cool adequately doesn't mean anything, right? And the fact that it doesn't beat the 5950X decisively is also fine, right? Premium or not, I wouldn't want a computer that fries my legs just to beat a 5950X by a small percentage.
Wait, and an overclocked 3970X is easy to cool? :rolleyes: I mean, you can at least try to be consistent with your arguments.
Well, you literally said here that it's in hardware, so sure, software can't fix that, and you clearly say here that it may be fixed after a few gens. Cool, I will care about those gens then, no need to care about the experimental 12900K.
What? I didn't say that. I said there is zero indication of there being significant issues with the hardware part of the scheduler. Do you have any data to suggest otherwise?
You posted a link with 11900K benchmarks, not 10900K, making all your points here invalid. The 11900K is inferior to the 10900K due to 2 cores chopped off for tiny IPC gains. 2C/4T can make a difference. They more or less close 20% of the gap with the 12900K, and then you only need 10% more performance, which you can get from simply raising PLs to 12900K levels; you might not even need to overclock the 10900K to match the 12900K.
The 10900K is listed in the overall result comparison at the bottom of the page, which is where I got my numbers from. The 11900K is indeed slower in the INT test (faster in FP), but I used the 10900K results for what I wrote here. The differences between the 10900K and 11900K are overall minor. Please at least look properly at the links before responding.
You seem to still apply value argument and stability argument to literally the maximum e-peen computer imaginable. If you have tons of cash, you can make others just set it up for you, particularly well insulated phase change cooling.
Wait, weren't you the one arguing that the 12900K makes no sense? You're wildly inconsistent here - on the one hand you're arguing that some people don't care about value and want extreme toys (which would imply things like exotic cooling and high OCs, no?), and on the other you're arguing for the practical value of high thread count workstation CPUs, and on the third (yes, you seem to be sprouting new hands at will) you're arguing for some weird combination of the two, as if there is a significant market of people running high-end massively overclocked workstation parts both for prestige and serious work. The way you're twisting and turning to make your logic work is rather confusing, and speaks to a weak basis for the argument.
Maybe, but my point was about maximum computer that money can buy. Value be damned. 5950X or 12900K is not enough. Gotta OC that HEDT chip for maximum performance.
But that depends entirely on your use case. And there are many, many scenarios in which a highly overclocked (say, with a chiller) 12900K will outperform an equally OC'd 3970X. And at those use cases and budget levels, the people in question are likely to have access to both, or to pick according to which workloads/benchmarks interest them.


And, to be clear, I'm not even arguing that the 12900K is especially good! I'm just forced into defending it due to your overblown dismissal of it and the weird and inconsistent arguments used to achieve this. As I've said earlier in the thread, I think the 12900K is a decent competitor, hitting where it ought to launching a year after its competition. It's impressive in some aspects (ST performance in certain tasks, E-core performance, MT in tasks that can make use of the E cores), but downright bad in others (overall power consumption, efficiency of the new P core arch, etc.). It's very much a mixed bag, and given that it's a hyper-expensive i9 chip it's also only for the relatively wealthy and especially interested. The i5-12600K makes far more sense in most ways, and is excellent value compared to most options on the market today - but you can find Ryzen 5 5600Xes sold at sufficiently lower prices for those to be equally appealing depending on your region. The issue here is that you're presenting things in a far too black-and-white manner, which is what has led us into this weird discussion where we're suddenly talking about HEDT CPUs and Athlon 64s in successive paragraphs. So maybe, just maybe, try to inject a bit of nuance into your opinions and/or how they are presented? Because your current black-and-white arguments just miss the mark.
XMP is still out of spec, bear that in mind; the DIMMs are factory tested, but not the rest of the system they go into.

I had to downclock my 3200CL14 kit to 3000 MHz on my 8600K because its IMC couldn't handle 3200 MHz.

On my Ryzen system my 3000CL16 kit worked fine on Windows for years, but after I installed Proxmox (Linux), the RAM stopped working properly and it was unstable. Sure enough, Google's stress test yielded errors until I downclocked it to 2800 MHz, which is still out of spec for the CPU.

Always stress test RAM using OS-based testing (not memtest86, which is only good at finding hardware defects); don't assume XMP is stable.
I know, I never said that XMP wasn't OC after all. What generation of Ryzen was that, btw? With XMP currently, as long as you're using reasonably specced DIMMs on a platform with decent memory support, it's a >99% chance of working. I couldn't get my old 3200c16 kit working reliably above 2933 on my previous Ryzen 5 1600X build, but that was solely down to it having a crappy first-gen DDR4 IMC, which 1st (and to some degree 2nd) gen Ryzen was famous for. On every generation since you've been able to run 3200-3600 XMP kits reliably on the vast majority of CPUs. But I agree that I should have added "as long as you're not running a platform with a known poor IMC" to the statement you quoted. With Intel at least since Skylake and with AMD since the 3000-series, XMP at 3600 and below is nearly guaranteed stable. Obviously not 100% - there are always outliers - but as close as makes no difference. And, of course, if memory stability is that important to you, you really should be running ECC DIMMs in the first place.
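The OS-based testing mentioned above (tools like Google's stressapptest) boils down to filling memory with known patterns and verifying nothing flips. A heavily simplified sketch of that idea, purely illustrative and no substitute for the real tools:

```python
# Minimal write-then-verify RAM pattern check, in the spirit of OS-based
# stress testers. Real tools hammer much more memory, for hours, with
# adversarial patterns; this only shows the core verify loop.
import random

def stress_pass(size_mb: int = 64, seed: int = 42) -> int:
    """Fill a buffer with a reproducible pattern, then verify every chunk.

    Returns the number of mismatched chunks (0 on a stable system).
    """
    rng = random.Random(seed)
    pattern = bytes(rng.randrange(256) for _ in range(4096))
    reps = (size_mb * 1024 * 1024) // len(pattern)
    buf = bytearray(pattern * reps)  # write phase: fill memory with the pattern
    # Verify phase: any flipped bit shows up as a chunk mismatch
    errors = sum(
        1 for i in range(0, len(buf), len(pattern))
        if bytes(buf[i:i + len(pattern)]) != pattern
    )
    return errors

if __name__ == "__main__":
    print(f"pattern errors: {stress_pass()}")  # a stable system reports 0
```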
 
Joined
Jun 29, 2018
Messages
339 (0.20/day)
The article includes only transient execution vulnerabilities. Both AMD and Intel have a lot more than that but those are different altogether.
Yes, and the linked Intel article also contains transient execution vulnerabilities, unless you don't consider "Speculative Code Store Bypass" a transient execution vulnerability? The wiki article is incomplete and outdated, as I wrote.
 
Joined
May 8, 2021
Messages
1,978 (3.11/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
...because I want to use my PC rather than spend my time managing background processes? One of the main advantages of a modern multi-core system is that it can handle these things without sacrificing too much performance. I don't run a test bench for benchmarking, I run a general-purpose PC that gets used for everything from writing my dissertation to gaming to photo and video editing to all kinds of other stuff. Keeping the background services for the various applications used for all of this running makes for a much smoother user experience.
Imagine the crazy thing, I actually use my PC too, I just don't leave Cinebench running while I play games.


But there aren't that many relevant tasks that scale well past 8 cores, let alone 16. That's what we've seen in Threadripper reviews since they launched: if you have the workload for it they're great, but those workloads are quite limited, and outside of those you're left with rather mediocre performance, often beaten by MSDT parts.
BOINC, HandBrake, Oracle VM, 7 zip...


Except it does: back then, those chips were the only way to get >4 cores and >8 threads, which had meaningful performance gains in common real-world tasks.
Um, the FX 8350 existed (8C/8T), so did the i7 920 (4C/8T) and the Phenom II X6 1055T (6C/6T).

Wait, do you think developing a new CPU package, new chipset, new platform, BIOS configuration, and everything else, is free? A quick google search tells me a senior hardware or software engineer at AMD earns on average ~$130 000. That means tasking eight engineers with this for a year is a million-dollar cost in salaries alone, before accounting for all the other costs involved in R&D. And even a "hobby" niche platform like TR isn't developed in a year by eight engineers.
There are problems with your statements. First of all, it's independent BIOS vendors like AMI that develop the BIOS with help from AMD, the CPU silicon is made by TSMC or previously GlobalFoundries with minimal help from AMD, and many other things are also not exclusively AMD's business. Just making a TR platform when they already have the Zen architecture likely doesn't take an army of senior engineers. Same goes for anything else. You said it takes millions of dollars, that's true, but you seem to imply that it takes hundreds of millions, which is most likely not true.


Well, that places you in the "I want expensive tools to use as toys" group. It's not the smallest group out there, but it doesn't form a viable market for something with multi-million dollar R&D costs.
I literally told you that beyond making a new arch, you just scale it across different product lines and SKUs. It's not that hard to make TR when they make Ryzen and EPYC already.

Wait, and an overclocked 3970X is easy to cool? :rolleyes: I mean, you can at least try to be consistent with your arguments.
It's still easier to cool at stock speeds with an air cooler than the 12900K. I'm very consistent, you are the one sloshing around from one argument to another. I said it's doable with a phase change cooler.

What? I didn't say that. I said there is zero indication of there being significant issues with the hardware part of the scheduler. Do you have any data to suggest otherwise?
"It's quite likely that the Thread ... Director? is sub-optimal and will be improved in future generations" - sure as hell you did. What else is "Thread Director" supposed to mean? The OS scheduler, or the CPU's own thread management logic?

The 10900K is listed in the overall result comparison at the bottom of the page, which is where I got my numbers from. The 11900K is indeed slower in the INT test (faster in FP), but I used the 10900K results for what I wrote here. The differences between the 10900K and 11900K are overall minor. Please at least look properly at the links before responding.
I literally used the search function in my web browser. Couldn't you have picked a more straightforward link? I just needed a riddle today. Anyway, that's just a single benchmark and it's super synthetic. Basically like Passmark, and Passmark's scores rarely translate to real world performance, or even to performance in other synthetic tasks. This is the link that you should have used:

How much faster the new i9 is than the old 10900K:
In Agisoft 41%
In 3Dpm -AVX 0%
In 3Dpm +AVX 503%
In yCruncher 250m Pi 30%
In yCruncher 2.5b Pi 66%
In Corona 39%
In Crysis 1%
In Cinebench R23 MT 69% (where did you get 30% performance difference here?)

And I'm too lazy to calculate the rest. So I set out to show you were full of shit and couldn't argue well, but in providing the argument I realized that I'm full of shit too. Fail. The 10900K is more like 60% behind the i9 12900K; no reasonable overclock will close a gap like that.
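Percent-faster figures like the list above are easy to sanity-check from raw benchmark scores. A quick sketch; the scores below are placeholders for illustration, not the actual review numbers:

```python
def pct_faster(new_score: float, old_score: float) -> float:
    """How much faster the new chip is, in percent (higher score = better)."""
    return (new_score / old_score - 1.0) * 100.0

# Hypothetical scores, for illustration only: a 169-vs-100 result
# corresponds to the kind of ~69% MT gap quoted above.
print(f"{pct_faster(169, 100):.0f}% faster")  # → 69% faster
```

Note the asymmetry this implies: "69% faster" also means the older chip is about 41% slower, which is why "X% faster" and "X% behind" figures never match up directly.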


Wait, weren't you the one arguing that the 12900K makes no sense? You're wildly inconsistent here - on the one hand you're arguing that some people don't care about value and want extreme toys (which would imply things like exotic cooling and high OCs, no?), and on the other you're arguing for the practical value of high tread count workstation CPUs, and on the third (yes, you seem to be sprouting new hands at will) you're arguing for some weird combination of the two, as if there is a significant market of people running high-end massively overclocked workstation parts both for prestige and serious work. The way you're twisting and turning to make your logic work is rather confusing, and speaks to a weak basis for the argument.
Not at all, it's just you who can't follow simple reasoning. If you buy a 12900K, you have strong single core perf but weaker multicore perf. If you buy a 3970X, you get weaker single core perf and strong multicore perf. You want the best of both, you overclock the 3970X to make it balanced. Simple. Except that I found out that the 12900K's advantage is much bigger than I thought, the 3970X is actually more antiquated than I thought, and it's Zen, meaning it doesn't overclock that well.


And, to be clear, I'm not even arguing that the 12900K is especially good! I'm just forced into defending it due to your overblown dismissal of it and the weird and inconsistent arguments used to achieve this. As I've said earlier in the thread, I think the 12900K is a decent competitor, hitting where it ought to launching a year after its competition. It's impressive in some aspects (ST performance in certain tasks, E-core performance, MT in tasks that can make use of the E cores), but downright bad in others (overall power consumption, efficiency of the new P core arch, etc.). It's very much a mixed bag, and given that it's a hyper-expensive i9 chip it's also only for the relatively wealthy and especially interested.
Intel could have just launched it on an HEDT platform; those guys don't care about power usage and heat output as much, and that would surely mean cheaper LGA 1700 motherboards. It would have been more interesting as a 16P/32E part.


The i5-12600K makes far more sense in most ways, and is excellent value compared to most options on the market today - but you can find Ryzen 5 5600Xes sold at sufficiently lower prices for those to be equally appealing depending on your region.
Nothing Alder Lake is sold where I live. The i5 12600K only makes sense for a wealthy buyer; the i5 12400 will deliver most of its performance at much lower cost. The Ryzen 5600X has been a complete and utter failure as a value chip since day one, but Lisa said it's the "best value" chip on the market, while ignoring the 10400 and 11400, and people bought it in droves.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,224 (2.80/day)
Is there even a JEDEC spec for 3200 MHz?

Edit: Where does it say here that memory support up to 3200 MHz means JEDEC standard 3200 MHz? As far as I know, neither Intel nor AMD specify latency or voltage in their XMP/DOCP recommendations.

Edit 2: Also, show me a RAM kit that runs at 3200 MHz by JEDEC default, without XMP or DOCP. ;)



You can't really get any lower power than this (the fact that I've owned both of these CPUs makes me feel nostalgic).

The JEDEC spec for 3200 MT/s sticks is 22-22-22.

Have them in this laptop I'm typing from.
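The practical difference between JEDEC 3200 CL22 and a typical 3200 CL14 XMP kit shows up directly in first-word latency, which you can work out from the transfer rate and CAS count:

```python
def cas_latency_ns(transfer_rate_mts: float, cas_cycles: int) -> float:
    """First-word latency in ns: CAS cycles divided by the memory clock.

    DDR transfers twice per clock, so the clock in MHz is MT/s divided by 2.
    """
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000  # cycles / MHz → nanoseconds

# JEDEC DDR4-3200 CL22 vs a common 3200 CL14 XMP kit:
print(f"JEDEC 3200 CL22: {cas_latency_ns(3200, 22):.2f} ns")  # → 13.75 ns
print(f"XMP   3200 CL14: {cas_latency_ns(3200, 14):.2f} ns")  # → 8.75 ns
```

So the XMP kit shaves roughly a third off the CAS latency at the same transfer rate, which is the whole point of paying for binned DIMMs.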
 
Joined
Apr 12, 2013
Messages
5,428 (1.51/day)
CPU development cycles for a new arch are in the ~5 year range. In other words, MS has known for at least 3+ years that Intel is developing a big+little-style chip. Test chips have been available for at least a year. If MS haven't managed to make the scheduler work decently with that in that time, it's their own fault.
You aren't making any sense here. OK, so MS has known about these chips for at least 3+ years, right? Then Apple, having made both the chips and the OS, also had at least 5+ years - on top of their experience with Axx chips and iOS - to make the M1 on desktops a real winner! Heck, there were rumors as far back as 2016-17 that these chips were coming, not to mention they are optimizing for essentially a single closed platform.

Do you have any idea about the literal gazillion different combinations of hardware & software (applications) that Win11 has to work on? You think Intel, MS or both combined can replicate this in a lab? I've been essentially beta testing Windows (releases) for 10+ years now & your posts just show how easy you make it sound ~ except it's not :rolleyes:

No it's not, stop making things up. You seem to be on a crusade to somehow make it look like this would have been child's play if MS (or Intel) had done it properly! You can have all the money in the world and it won't mean a thing; it takes time ~ that's the bottom line. There's no magic pixie dust you can sprinkle to make everything work the way it should :shadedshu:
 