
Intel Core Ultra 9 285K

This is the worst CPU launch I think I have ever seen... far worse than Bulldozer and worse than Prescott. The 285K is losing to the 7700 in some games, and not even by a small margin, despite Intel having a huge process node advantage, adopting AMD's MCM design (the same one these clowns called "glue" back in the day), costing 3x the price and having 3x the cores. Efficiency is nonexistent; even 4-year-old AM4 chips (5950X, 5800X3D) are handing it its ass. This launch was so bad that AM5 7000-series non-X3D chips just got their prices bumped up today. AM4 will probably see the same effect.

The Pentium 4 'Willamette' days were 'interesting' - socket 423 was not well received. (EDIT: Upon reflection, S423 really was a platform with virtually no redeeming features at release)

The Pentium 4 had plenty of problems:
- It was slower clock-for-clock than the Pentium III
- It needed to hit high clock speeds to offset the long pipeline (which spawned the whole Hyper-Threading workaround)
- It needed extra power to reach those clock speeds and that performance
- Its chipsets required Rambus RAM (a price premium - somewhat better transfer rates, but latency wasn't its friend)
- It was slower than the competing AMD Athlon, and at times it even lost to the budget Duron clock for clock
On top of that, Socket 423 wasn't really capable of pushing the P4 much further ahead of the P3, as it was electrically limited.

People who went big and bought P3 1GHz chips early on (especially any 100MHz FSB ones - good for running on BX chipset boards without needing to OC the FSB / AGP / PCI) may have spent more money but had a good period of enjoying a better product sipping around half the power (supposedly) - at least until Intel managed to ramp up the P4 clock rates to make it worthwhile.

It was of course the 'next architectural step', engineered for a workload that never really came... which eventually led to the 'next architectural step' being the Pentium M/Core starting from the Pentium Pro/2/3 (P6) core again...

There are some parallels with this Arrow Lake release, but at least the power requirement is lower (so perf/watt is better), and it doesn't need specific fancy RAM. That's also what makes it differ from Bulldozer - although if not for that power improvement, it would kind of be almost the same kind of launch as when AMD went from K10 to Bulldozer.

EDIT: It's easy to hate on Prescott - but it was an 'improvement' technically - Intel went all in on a lengthened pipeline, additional cache, pipeline improvements, etc... it was the most Pentium 4 of all the P4s. But I think that was also the point where they realised they were flogging a lame, soon-to-be-dead horse... that said, the last Cedar Mill P4D chips (whilst no match for an equivalent C2D) were actually not terrible (except for power consumption).
 
Core Ultra 11 286KS around the corner? After several Windows patches/updates and firmware updates/patches, of course. Or is this all there is?
 
They really outdid themselves by developing a chip on a new socket that's worse than the previous chip on the old socket, with built-in self-destruct code. At this point I don't care if Intel ends up bankrupt; at least then someone can pick up the leftovers.
 
  • Excellent performance in heavy multithreaded apps
  • Good energy efficiency
  • Easy to keep cool
  • PCIe Gen 5 SSD without compromising GPU bandwidth
  • Good memory support, well over DDR5-8000
  • Integrated GPU
  • iGPU performance doubled vs Raptor Lake
***That is all I need from new Intel CPUs in the future: energy efficiency, better DDR5 RAM support, a good integrated GPU... just for gaming, browsing, multimedia.
 
EDIT: It's easy to hate on Prescott - but it was an 'improvement' technically - Intel went all in on a lengthened pipeline, additional cache, pipeline improvements, etc... it was the most Pentium 4 of all the P4s. But I think that was also the point where they realised they were flogging a lame, soon-to-be-dead horse... that said, the last Cedar Mill P4D chips (whilst no match for an equivalent C2D) were actually not terrible (except for power consumption).
I never said they didn't go all out trying with Prescott. That was the gigahertz race back then, and despite AMD outclassing Intel completely in performance and efficiency, Intel had nothing to worry about due to their stranglehold over the market. They're not even remotely in the same market/circumstances today, though.

Intel have around 4x the employees of AMD and this was their best-case scenario for a new architecture launch -- especially by subcontracting their fab work to TSMC, on a better node than AMD, on top of the MCM design, an entirely new socket, revamped cores, increased cache, etc. And they failed this badly to even compare well to a 4-year-old architecture from AMD, or their own last balls-to-the-wall overclocked generation. Meanwhile, Apple are eating Intel's breakfast in the laptop market whilst AMD are eating their lunch in the workstation and consumer markets -- and more competition is on the way from Qualcomm, MediaTek and who knows who else (Zhaoxin in China, etc). And Intel are really delusional enough to think they have a snowball's chance in hell of competing in the GPU market whilst failing this badly in the main business that keeps them afloat as a company?

I'm just so glad I saw the writing on the wall for this company shortly after Bob Swan joined, and dumped all my stocks before I lost any money. If they can't release something decent now on TSMC's latest and greatest node, they've got no hope of doing it with their own in-house fabs after their 10nm fiasco.
 
CUDIMM was designed to be backwards compatible with UDIMM. Source: AnandTech. We'll see how it turns out in the real world.
Yeah, it's backwards compatible in the sense that you can plug it into any DDR5 UDIMM motherboard and use it, but that doesn't mean you'll be taking advantage of the clock redriver. As per the link itself:
Officially, bypass mode is only supported for speeds up to DDR5-6000 (3000MHz), so JEDEC-compliant DIMMs will be expected to use CKD mode (Single PLL or Dual PLL) at DDR5-6400 and beyond. The end result being that a CUDIMM should work with a slower/older DDR5 memory controller by going into bypass mode, whereas DIMMs without a CKD won't be available at the higher speeds that require a CKD (not at JEDEC-standard voltages and timings, at least).
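The bypass/CKD decision described in that quote can be sketched as a tiny function. The thresholds come from the quote (bypass officially up to DDR5-6000, CKD expected from DDR5-6400); the function name and return values are hypothetical, just to make the logic concrete:

```python
# Sketch of the CUDIMM bypass/CKD behaviour quoted above.
# Thresholds are the ones from the quote; the function itself is hypothetical.

def select_dimm_mode(speed_mts: int, has_ckd: bool) -> str:
    """Return the mode a DDR5 DIMM would run in at a given speed (MT/s)."""
    if speed_mts <= 6000:
        # Slower/older memory controllers: a CUDIMM drops into bypass mode
        return "bypass"
    if has_ckd:
        # DDR5-6400 and beyond: the client clock driver (single- or
        # dual-PLL mode) redrives the clock signal
        return "ckd"
    # Plain UDIMMs can't reach these speeds at JEDEC voltages/timings
    return "unsupported"

print(select_dimm_mode(5600, has_ckd=True))   # bypass
print(select_dimm_mode(8000, has_ckd=True))   # ckd
print(select_dimm_mode(8000, has_ckd=False))  # unsupported
```

The takeaway is the asymmetry: a CUDIMM degrades gracefully on an old IMC, but a plain UDIMM has no path up to CKD-class speeds.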
Yeah. Also, a bit of on-package memory plus more memory on removable modules seems like a great idea, but getting optimum performance from two pools of RAM, each with different properties, is next to impossible. The OS alone can't take care of that; applications also need to be aware of it.
Intel's Xeon Max did it, worked fine on Linux (since it has proper heterogeneous memory support).
Windows would likely be a mess, yeah.
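To illustrate why two pools with different properties are awkward, here is a toy placement policy (all names and capacities hypothetical). The point is that *something* has to know which allocations are hot, which is exactly the OS/application awareness being discussed - this is what Linux's heterogeneous memory support gave Xeon Max:

```python
# Toy model of tiered memory placement: a small fast pool (on-package)
# and a large slow pool (DIMMs). Hot data prefers the fast tier and
# spills to the slow tier when the fast one is full.

class TieredAllocator:
    def __init__(self, fast_capacity_gb: float, slow_capacity_gb: float):
        self.free = {"fast": fast_capacity_gb, "slow": slow_capacity_gb}

    def allocate(self, size_gb: float, hot: bool) -> str:
        # Hot allocations try the fast tier first; cold ones go slow-first
        order = ["fast", "slow"] if hot else ["slow", "fast"]
        for tier in order:
            if self.free[tier] >= size_gb:
                self.free[tier] -= size_gb
                return tier
        raise MemoryError("out of memory in both tiers")

alloc = TieredAllocator(fast_capacity_gb=16, slow_capacity_gb=64)
print(alloc.allocate(8, hot=True))    # lands in the fast pool
print(alloc.allocate(12, hot=True))   # fast pool too small now -> spills to slow
```

Note the policy only works because the caller passes `hot=` - without that hint (i.e. without application awareness), the allocator is guessing, which is the Windows problem in a nutshell.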
Neither Intel nor AMD mentioned the ability to use subchannels up until Arrow Lake. Did I miss something?
I mean, that's an inherent feature of DDR5. der8auer also made that claim and many folks corrected it:
Apart from that, the Alder Lake IMC could run both DDR4 and DDR5 (well, also LPDDR4x and LPDDR5). The common denominator was 64-bit channels of DDR4, and chances are the IMC was designed for that.
Now that I really don't know about, so I will refrain from commenting. I'm not really that into how IMCs work internally.
Some genius would probably be able to develop a micro-benchmark that makes good use of subchannels. Then we'd know for sure.
Yeah, that'd be nice and make it easier to be sure of.
Also the lack of DDR4 legacy.
AMD and Intel server CPUs with DDR5 memory controllers are a different story, I think they use subchannels because they run many, many processes at once.
Zen 4's desktop IOD (used for Zen 5 as well) seems to be kinda independent already:

2x 40-bit PHY.

Not sure about the Epyc ones, since I have never seen annotations for their IOD.
 
After watching/reading 5 reviews today...


 
Intel have around 4x the employees of AMD and this was their best-case scenario for a new architecture launch -- especially by subcontracting their fab work to TSMC, on a better node than AMD, on top of the MCM design, an entirely new socket, revamped cores, increased cache, etc. And they failed this badly to even compare well to a 4-year-old architecture from AMD, or their own last balls-to-the-wall overclocked generation.

I get the 'disappointment' in that sense, but for me it's more of a shame because technically this is version 2 of what they started with Meteor Lake, so it shouldn't seem too much like a beta product release. Yeah, MCM is still a kind of new thing for them to be doing, and each new generation brings new tiles which may have their own little foibles, so getting on top of it is probably a bit harder initially.
They have obviously executed the release to a schedule, partly to capture sales at a key time of year heading into the holiday season, which has led to these sorts of bugs making it to reviewers.
At this point I'm still in the 'let's give them the benefit of the doubt' stage and expect to see improvements alongside the fixes for bugs such as the W11 24H2 crashes, etc.

  • Excellent performance in heavy multithreaded apps
  • Good energy efficiency
  • Easy to keep cool
  • PCIe Gen 5 SSD without compromising GPU bandwidth
  • Good memory support, well over DDR5-8000
  • Integrated GPU
  • iGPU performance doubled vs Raptor Lake

A realistic take on it. It's their best product in terms of efficiency and the IO capabilities provided by the SoC are good.
If you have a workload that is single-threaded and can exploit the P-core boost speed, or multi-threaded and works fine on P or E cores, then it is potentially your best option.
It's all those in-between scenarios and games where it gets tricky... (Edit: and value for money becomes more subjective)
 
Not sure if this was asked here or not, but how are the power readings done?
@W1zzard was on top of this.

The ASUS Z890 Hero motherboard feeds four of the CPU VRM phases from the 24-pin ATX connector, instead of the 8-pins, which makes it impossible to measure CPU-only power with dedicated test equipment (we're not trusting any CPU software power sensors). You either lose some power because only the two 8-pin CPU connectors are measured, or you end up including power for the GPU, chips, and storage when measuring the two 8-pin connectors along with the 24-pin ATX. For this reason, we used an MSI Z890 Carbon motherboard exclusively for the power consumption tests in this review.
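A rough sketch of the measurement dilemma in that quote (all wattages hypothetical): on a board like the Hero, summing only the 8-pin EPS rails undercounts CPU power, while adding the whole 24-pin overcounts it unless you can subtract everything else hanging off those rails:

```python
# Illustration of the CPU power measurement bounds described above.
# All figures are made up; the point is the undercount/overcount gap.

def cpu_power_estimates(eps_8pin_w, atx_24pin_w, non_cpu_on_24pin_w):
    """Return (lower, upper) bounds on true CPU package power, in watts."""
    # Lower bound: the two 8-pin EPS connectors only - misses any VRM
    # phases fed from the 24-pin ATX connector
    lower = sum(eps_8pin_w)
    # Upper bound: add the whole 24-pin, then subtract what you can
    # attribute to GPU slot power, chipset, storage, etc.
    upper = sum(eps_8pin_w) + atx_24pin_w - non_cpu_on_24pin_w
    return lower, upper

lo, hi = cpu_power_estimates(eps_8pin_w=[110.0, 95.0], atx_24pin_w=60.0,
                             non_cpu_on_24pin_w=35.0)
print(lo, hi)  # 205.0 230.0
```

Since the non-CPU share of the 24-pin can't be measured cleanly, the gap between the bounds never closes - hence switching to a board (the MSI Carbon) that feeds the CPU VRM entirely from the 8-pins.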
 
New name...check
New socket...check
Performance regression...check
One off generation...check
Expensive...check

With so many boxes ticked, I bet everyone on LGA 1700 will rush to migrate.
 
Well, I don't know why you're not happy. Now that there is no HT, there is more security.
Right.It.Was.One.Of.The.Important.Advertising.Points.About.Arrow.Lake.
 
This is not Zen 1; Zen 1 murdered Kaby Lake in MT. This is equal in MT, sometimes.
Zen 1's MT was insane, and its power efficiency too. You guys remember the Ryzen 7 1700?
 
@TumbleGeorge
Lack of HT is the last of the problems with ArL. The MT performance is very much there; the issue, it seems, is that the Thread Director isn't too good at putting it on the road in many cases. I mean, sometimes it's faster than the 32-thread 9950X, sometimes it's embarrassingly slower. Same with low-threaded workloads, so it's not like IPC is lacking either. Adding HT would not help.

This is not Zen 1; Zen 1 murdered Kaby Lake in MT. This is equal in MT, sometimes.
Yes, having double the cores AND threads would do that. This one has FEWER threads than the equivalent competitor. It's a different approach.
The comparison is made because we are dealing once more with a completely different architectural approach to desktop CPUs. Not on any performance basis.
 
AMD did a favor to Intel with Zen 5.
Intel did an even bigger favor to AMD with Arrow Lake.
The thing is that AMD is coming with the 9000 X3D chips and Intel has no response to them. In the end AMD will gain even more market share in desktops, while Intel will probably win back some market share in laptops, where efficiency is important.

As for us consumers, I guess AMD knew about Arrow Lake and priced the 9000 series accordingly. And seeing that Intel offers nothing new in gaming, X3D chips are all going up in price. Even on AM4, AMD discontinued the 5800X3D, and I am pretty sure the 5700X3D is going to get pricier over time, slowly approaching the last price of the 5800X3D.
 
Let me put it this way with some examples. But first, a disclaimer: most gamers run entry-level or mid-range hardware. Even $400 just on the CPU is a lot; the reality is that the majority of people run i5/R5-class chips because they're cheap CPUs that are good enough, and they're not pairing them with a $1600 GPU where the faster CPUs can stretch their legs. So for most of these people looking to build a new PC, dropping ~$500 on X3D isn't an option. The AM4 upgrade to a 5700X3D is a great option, though - but that assumes you already have the AM4 platform. I certainly wouldn't recommend doing a new build on that dated platform; too many compromises besides the one perk, gaming performance.

Example #1, you have $300 to spend on CPU

You can, A) buy a 245K for $310, and get 80% applications performance and 94% gaming performance (relative to 285K).

Or B) buy a 5700X3D if it sells in your country, for $200, pair it with a last generation platform with all of those downsides, and get 52% applications performance and 90% gaming performance.
If you can't find a 5700X3D, you'll have to go with 5800X3D at around $250, this is 56% applications performance and 94% gaming. These percentages relative to 285K.

Example #2
You have $400 to spend on CPU.

You can, A) buy a 265K for $395, or stretch the budget for some reason to a 9900X @$430 and get 94% and 93% in productivity respectively.
In gaming with those options you'd get 97% and 100% performance. These percentages relative to 285K, tested with a 4090.

Or B) buy a 7800X3D, which costs $470, so good luck with that $400 budget, for 70% in productivity and 112% in gaming.

For ~20% more money you're getting ~25% lower productivity performance and ~15% faster gaming performance, assuming you own a 4090.

Example #3: now let's do the 7950X3D. Here it's a little less obvious, but again, this CPU price range is approx 4% of the market going by the Steam HW survey; 50% are 6/8-core CPUs, and even quad cores still have 4x the market share of 16-core CPUs.


A) 285K for $585 - 100% relative applications performance/gaming performance
B) 7950X3D for $600 - 96% applications performance, 106% gaming performance - slightly slower in applications, slightly faster in gaming (assuming you have no scheduling issues and are willing to tolerate Xbox game bar and a 3DVCache scheduling driver), for more money, on an older platform, with worse IO, and no chance of running super fast memory without switching gears, unlike ARL.
C) 9950X for $650 - 103% applications, 102% gaming.
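As a sanity check on the three examples, the quoted prices and relative scores fold into a quick perf-per-dollar table. The figures are the ones from this post (apps/gaming relative to the 285K = 100), not current prices - treat the whole thing as illustrative:

```python
# Perf-per-$100 for the CPUs compared above. Prices and relative
# performance numbers are the ones quoted in this post.

options = {
    "245K":    {"price": 310, "apps": 80,  "gaming": 94},
    "5800X3D": {"price": 250, "apps": 56,  "gaming": 94},
    "265K":    {"price": 395, "apps": 94,  "gaming": 97},
    "7800X3D": {"price": 470, "apps": 70,  "gaming": 112},
    "285K":    {"price": 585, "apps": 100, "gaming": 100},
    "7950X3D": {"price": 600, "apps": 96,  "gaming": 106},
    "9950X":   {"price": 650, "apps": 103, "gaming": 102},
}

for name, o in options.items():
    apps_per_100 = o["apps"] / o["price"] * 100
    gaming_per_100 = o["gaming"] / o["price"] * 100
    print(f"{name:8s}  apps/$100: {apps_per_100:5.1f}   gaming/$100: {gaming_per_100:5.1f}")
```

It makes the argument visible at a glance: the cheap chips dominate on value per dollar, and the X3D parts only pull ahead once you weight gaming heavily.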

So yeah, the X3D chips only make sense for gaming. If you don't game, there's pretty much no point paying the premium for them, because they're slower than the alternatives. Every other CPU from both Intel and AMD is reasonably balanced: they can game, they can do some productivity work, and they aren't too expensive.
My point remains the same. Your quote was "X3D chips are only good for gaming" before saying they're slower than the Intel competition, and than non-X3D parts as well, in non-gaming workloads. My point was that it isn't really "only good for gaming", as it loses very little productivity performance while being substantially faster in gaming.

The 7950X3D is within 4% of the 285K in productivity but 6% faster in gaming. You should add a couple of percentage points to that 6% simply because 24H2 is known to provide that uplift. And that's a CPU released close to two years back. I was fully expecting the 285K to be the new 'jack of all trades' king, with all the microarchitectural changes on TSMC's 3nm. But here we are, with the 7950X3D within 4% of the 285K in productivity and ~8-9% faster in gaming.

On another note, it's an obvious no-brainer that there's no point paying extra for X3Ds if you don't game. But that doesn't mean they're "only good for gaming". It means those buyers paid extra for something they don't do, which is on them, and lost a little bit of productivity performance. But for people who do both, there are better options than the 285K, and that even includes the 9950X.
 
@TumbleGeorge
Lack of HT is the last of the problems with ArL. The MT performance is very much there; the issue, it seems, is that the Thread Director isn't too good at putting it on the road in many cases. I mean, sometimes it's faster than the 32-thread 9950X, sometimes it's embarrassingly slower. Same with low-threaded workloads, so it's not like IPC is lacking either. Adding HT would not help.


Yes, having double the cores AND threads would do that. This one has FEWER threads than the equivalent competitor. It's a different approach.
The comparison is made because we are dealing once more with a completely different architectural approach to desktop CPUs. Not on any performance basis.
But it has many more cores, 24 vs 16 - this is not as good as you think.
Zen 1 doubled the cores but was also much faster and much better in power efficiency.
This is not the same, and not the Zen 1 moment, when AMD is limiting their consumer products to an 8-core CCD when they already have a 16-core CCD in Zen 5. You guys know nothing.
 
But it has many more cores, 24 vs 16 - this is not as good as you think.
I thought the consensus was that E-cores aren’t “real cores” and Intel shouldn’t include them, but give “muh enthusiasts” 2 more P-cores. Now we are straight up making them equal to Ps and not treating them like the HT replacement they are?
So which is it?
 
Yes, having double the cores AND threads would do that.
And that's why it isn't like Zen 1. Or Zen 2. AMD doubled Intel's CPU core counts twice in only 28 months, gave almost everyone SMT, caught up to the competition in 1T and stayed board compatible with Bristol Ridge.
Intel's transition to synthesizable cores made at TSMC isn't a Zen moment or a reset. It's just something they had to do and it generally hasn't panned out except in certain workloads here and there.
 
If you can't buy it for $600, then it doesn't really matter that it cost that at some brief point in the past (a period of two days).

For the vast majority of the time since its launch, as well as right now, it's been available for $650.

So no, no confusion here.
It literally got an official price cut to $599, not a weekly deal or anything. As you know, comparisons are to be made at MSRP, which is $599 at the time of writing.

Availability and retailer price gouging are a different matter though, and should be noted in a disclaimer like "currently out of stock at MSRP" or something along those lines.

Also, for the vast majority of the time it was actually $625 - I know, I've checked every other day. It even shows up in your own graph. So by your logic it should be $625, but MSRP-wise it's $599. Take your pick.
 
@JohH
You have a crystal ball I can borrow, so I too can be certain that in "28 months" Intel won't catch up or fix the first-gen teething issues like AMD did?
People are really quick to dismiss the entire approach less than 24 hours after the new gen has been released.

Intel's transition to synthesizable cores made at TSMC isn't a Zen moment or a reset
It absolutely is. It's their first full-on clean-sheet design since NEHALEM. I don't know how much more of a "reset" it can be. The reason Zen seemed so miraculous is that AMD was coming from a significantly shittier previous showing, which was Bulldozer. Anyone who was deluding themselves into thinking that Arrow Lake would be an instant slam dunk was on some high-grade shit, and I've warned people that it wouldn't go like they think it will. Now everyone is surprised for... some reason?
 
ECC is supported by the architecture, though not on the Z890 chipset, nor by the processor models being announced today.
@W1zzard
Still incorrect as per Intel itself. We can't check for ourselves, though, since, as stated, there is neither a compatible chipset (W890?) nor motherboards.

EDIT: Not sure why this won't get fixed in the reviews (it affects all three of them). It's not as if @W1zzard hasn't fixed other typos/errors.
 
It absolutely is. It's their first full-on clean-sheet design since NEHALEM. I don't know how much more of a "reset" it can be. The reason Zen seemed so miraculous is that AMD was coming from a significantly shittier previous showing, which was Bulldozer. Anyone who was deluding themselves into thinking that Arrow Lake would be an instant slam dunk was on some high-grade shit, and I've warned people that it wouldn't go like they think it will. Now everyone is surprised for... some reason?
It's not a clean sheet. It's clearly a Golden Cove derivative with 8-wide decode, bigger buffers and more execution resources. That's why it's Lion Cove and Skymont - they're derivatives of their predecessors. Unified Core is the closest thing Intel plans to a "reset", and it isn't a reset so much as a fusion dance or daring synthesis.
And I'm not surprised. LGA 1851 was supposed to launch with Meteor Lake S but it was late and so shit that it was cancelled. At least they got ARL out in time.
 
@JohH
I am not talking about P-cores only. I am talking holistically. As a CPU in its entirety. Lunar and Arrow are a new page for Intel.
If we talk cores only - AMD considers Zen 5 to be a clean sheet design and that is comparable in changes from GC to LC. So I am a bit more inclined to listen to actual people developing the chip on what they consider it to be rather than what the community perceives it as.
 
@JohH
I am not talking about P-cores only. I am talking holistically. As a CPU in its entirety. Lunar and Arrow are a new page for Intel.
If we talk cores only - AMD considers Zen 5 to be a clean sheet design and that is comparable in changes from GC to LC. So I am a bit more inclined to listen to actual people developing the chip on what they consider it to be rather than what the community perceives it as.
Everything Arrow Lake does holistically Meteor Lake did first. And Arrow Lake hasn't fixed any of the problems of MTL, only brought them (at last) to desktop.
Arrow Lake, as a whole, is derivative of Meteor Lake.
Lion Cove is derivative of Golden Cove/Redwood Cove.
Skymont is derivative of Goldmont/Crestmont.

And the combined package isn't a compelling degree ahead of their competition in ST or MT or gaming or efficiency or value despite now having a node advantage and a more expensive but better interconnect technology. At least Zen 1 and Zen 2 had compelling MT advantages.
 