
Intel Core Ultra 9 285K

AMD did Intel a favor with Zen 5.
Intel did AMD an even bigger favor with Arrow Lake.
The thing is, AMD is coming with the 9000X3D chips and Intel has no response to them. In the end AMD will gain even more market share in desktops, while Intel will probably win back some market share in laptops, where efficiency is important.

As for us consumers, I guess AMD knew about Arrow Lake and priced the 9000 series accordingly. And seeing that Intel offers nothing new in gaming, X3D chips are all going up in price. Even on AM4, AMD discontinued the 5800X3D, and I am pretty sure the 5700X3D will become pricier over time, slowly creeping toward the 5800X3D's last price.

"AMD Ryzen 7 9800X3D CPU Benchmarks Leak Out: Up To 22% Faster In Geekbench Versus 7800X3D"

That's going to be a bloodbath in gaming, if these leaks translate to gaming performance at all.

And as I said, for home users multi-core application performance quickly becomes just "fast enough" and stops being a deciding point when buying. That's why AMD sold tons of 5800X3D and 7800X3D chips, even though they were quite noticeably slower in productivity than the similarly priced 5900X and 7900X.

I have friends that do tons of photo editing, and only game occasionally, and they decided to buy an X3D processor - because the difference for them is just a slightly longer "rendering" time when exporting photos, everything else is similar, and the extra cache in a "gaming" CPU might make it more responsive in tasks that are hard to benchmark.
 
That's going to be a bloodbath in gaming, if these leaks translate to gaming performance at all.
It doesn't. It's about ~10% better than the 7800X3D.
 
It doesn't. It's about ~10% better than the 7800X3D.

Most probably. That is still twice the amount of "Zen 5%", and is just added on top of already leading gaming performance.
 
To be honest I like the track both AMD and Intel are taking.

AMD managed to eke out slightly better performance while increasing efficiency. If they keep doing that, can you imagine what we will be using a decade from now, as far as performance per watt goes?

Intel, yes, they took a step back to enable themselves to keep marching forward. This was a necessity, as the track they were on was literally going to melt down. Just call it a strategic withdrawal.

On a personal note, I mainly game, so my 7800X3D does more than I need. And yes, I'd be more than happy if the 9800X3D gave a 5% lift with a 10 W to 15 W power drop; just my opinion, as I live in the UK and power is expensive.
 
I have friends that do tons of photo editing, and only game occasionally, and they decided to buy an X3D processor - because the difference for them is just a slightly longer "rendering" time when exporting photos, everything else is similar, and the extra cache in a "gaming" CPU might make it more responsive in tasks that are hard to benchmark.

Came to post exactly this. Unless you have a super tight workflow where literally every second counts, CPUs have been "good enough" to meet the productivity needs of home users for like a decade.

I game and do photo editing and blender work and can't say I care about saving a few seconds on an export vs keeping my 1% frame dips up in multiplayers. :laugh:
 
Why is the 14900KS not included?? Not normal for the last flagship to be missing... With all the Intel microcode updates, it would have been important to know where that chip is at now.
 
Oh boy.
It is more efficient than the 14900 and 13900 but also slower. The new node helped somewhat, but not much. Maybe this is something Intel needs to fix with BIOS updates, or maybe the power delivery is messed up or something. The efficiency is better, but still not great, or even OK for that matter.
Still, a new mobo is required, so that lesson has not been learned. Maybe using the same socket as Raptor Lake would have helped people run away from the ticking bombs that the 13th and 14th gen CPUs are. I'm sure consumers would have appreciated that, even though Core Ultra is slower. It is not at the bottom of the chart, so that is good, right?
 
Came to post exactly this. Unless you have a super tight workflow where literally every second counts, CPUs have been "good enough" to meet the productivity needs of home users for like a decade.

I game and do photo editing and blender work and can't say I care about saving a few seconds on an export vs keeping my 1% frame dips up in multiplayers. :laugh:

But that "good enough" mentality is also a hurdle in selling new generations to buyers that don't fall for hype.

I have a 5900X and an RTX 3080. Do I gain anything in productivity by buying a 7800X3D? Nothing. Not "virtually nothing"; measurably, there isn't any difference. And I don't expect the 9800X3D to change that by more than 5%.

And gaming? It looks impressive, until you consider that you don't have an RTX 4090 and you don't game at 720p or 1080p.

But the prices of new platforms (new CPU, new motherboard, new memory) have all gone up considerably in the last two generations. It would have to be a considerable jump in performance to justify that kind of money, and it makes no sense when there is almost no productivity increase and no noticeable gaming performance increase in real-world situations (I mostly game at 4K 60 fps, with DLSS and other help to achieve that on an ageing card).
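To put rough numbers on that: here is a minimal back-of-the-envelope sketch in Python, where every price and gain figure is a made-up placeholder rather than a measurement, just to show how quickly the cost per percent of real-world improvement balloons:

```python
# Illustrative upgrade-value arithmetic; all numbers are assumed placeholders.
cpu, board, ram = 480.0, 220.0, 130.0   # assumed new-platform prices (USD)
platform_cost = cpu + board + ram       # 830 USD total

productivity_gain_pct = 5.0   # assumed real-world gain over a 5900X
gaming_gain_pct = 2.0         # assumed gain at 4K 60 fps on an RTX 3080

for workload, gain in [("productivity", productivity_gain_pct),
                       ("gaming at 4K", gaming_gain_pct)]:
    print(f"{workload}: ${platform_cost / gain:.0f} per 1% of improvement")
```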
 
@dgianstefani

Appears to be a discrepancy in the summary and conclusion. Test Setup page lists 6000 CL36, but summary mentions CL38 twice in the DDR5 Memory & CUDIMM paragraph.
Nice find, CL38 was a typo, fixed in all 3 reviews

The clock redriver will be useless on a platform that doesn't support it, and so far only ARL does on desktop. Zen 5 doesn't work with it either (I mean, you can use the DIMMs, just not take advantage of the clock redriver part).
That's my understanding, too. The other platforms can run CUDIMMs in some sort of compatibility mode, which bypasses the clock driver, so no gains over classic modules
 
FIASCO.
Intel's new 15th-generation Core Ultra processors are a complete fiasco, no offense to anyone.
It is nothing more than a revised version of the 14th generation with the addition of an NPU section and slightly reduced power consumption. I have been assembling systems and working with hardware for 25 years, and I have never seen Intel like this.
Moreover, it is said that the socket will only be supported until the end of 2025, which is complete nonsense.
There is absolutely no need for anyone with an undegraded 14th-, 13th-, or even 12th-generation Intel Core i9 or i7 to switch. I am not even talking about those with AMD Ryzen 7950X3D, 7900X3D, or 7800X3D processors.
 
Why would a guy with a solid 14900K, 14700K, 13900K, 13700K or 7950X3D, 7900X3D, 7800X3D switch to the Core Ultra series?
Unnecessary expense, most importantly a money trap.
Why would someone go from a 5800X3D to a 9800X3D either, considering the costs? Too many people conflate wants & needs here, & yes, the earth is going to sh!t as a result of that!

Not the only reason but one of the main ones :ohwell:
 
10-20 fps lower in the minimums is a little alarming, honestly.
 
But that "good enough" mentality is also a hurdle in selling new generations to buyers that don't fall for hype.

I have a 5900X and an RTX 3080. Do I gain anything in productivity by buying a 7800X3D? Nothing. Not "virtually nothing"; measurably, there isn't any difference. And I don't expect the 9800X3D to change that by more than 5%.

And gaming? It looks impressive, until you consider that you don't have an RTX 4090 and you don't game at 720p or 1080p.

But the prices of new platforms (new CPU, new motherboard, new memory) have all gone up considerably in the last two generations. It would have to be a considerable jump in performance to justify that kind of money, and it makes no sense when there is almost no productivity increase and no noticeable gaming performance increase in real-world situations (I mostly game at 4K 60 fps, with DLSS and other help to achieve that on an ageing card).
Mostly agreed, though if you play MMOs or even some large-scale sandbox shooters, even the best gaming CPUs today can still get bogged down with lots of players around.

Even my 7800X3D has dipped below 60 in crowded cities in FFXIV, while my 4080 sits largely AFK.
 
Overall, 2024 was very hyped, but the outcome is disappointing for both AMD and Intel. I like Lunar Lake (it will be my next ultrabook), but I agree it is not for everyone (most people here care about gaming, for which Lunar Lake is neither too good nor too bad).

On power consumption: it seems to me that they pushed Arrow Lake to the limit to extract every bit of performance they could, as it has a clock deficit vs Raptor Lake and high latency too (so gaming performance is not good).

So yes, efficiency at factory settings is not impressive, but that does not mean the efficiency of the architecture is bad. 10 nm vs 3 nm (even TSMC's "fake" 3 nm) should still be very obvious.

A test at 4 or 5 GHz, undervolted while remaining stable, across the different architectures (Zen 4, Zen 5, ARL, RPL, ADL) would be interesting to see the real efficiency of the underlying architecture/process.
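Something like this sketch (Python) is what that test would boil down to; `run_benchmark()` and `read_package_energy_j()` are hypothetical stand-ins for whatever benchmark and RAPL-style energy counter you would actually use, and the clock/voltage locking is assumed to happen in the BIOS beforehand:

```python
# Sketch of an iso-clock efficiency comparison. Assumes each CPU has already
# been locked to the same fixed clock (e.g. 4.5 GHz) and undervolted to
# stability in the BIOS. Both functions are hypothetical placeholders.
def run_benchmark() -> float:
    """Return a performance score (e.g. Cinebench points)."""
    raise NotImplementedError

def read_package_energy_j() -> float:
    """Return the cumulative CPU package energy in joules (RAPL-style)."""
    raise NotImplementedError

def efficiency_run() -> float:
    e0 = read_package_energy_j()
    score = run_benchmark()
    e1 = read_package_energy_j()
    return score / (e1 - e0)   # points per joule: higher = better silicon

# Repeat per CPU (Zen 4, Zen 5, ARL, RPL, ADL) at the same clock to compare
# the underlying architecture/process rather than the factory tuning.
```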

On a side note, on performance, Intel 7 is not that bad!

We can only hope Intel 18A is viable next year.
 
The thing is, as usual, the X3D chips are only good for gaming. They're slower than the non-X3D chips, as well as the Intel competition, in anything that isn't gaming, which happens to be what the vast majority of people in the world use CPUs for (not gaming). They're also more expensive.

Compare, say, the mainstream-segment $330 9700X against the $310 245K: you're essentially getting 8% more application performance per dollar with the 245K, a more modern platform, and generally better efficiency, while giving up a little gaming prowess (with 6000 MT/s memory and early firmware). The 7800X3D is both more expensive and 10% slower in applications, but 20% faster in gaming (when the 245K is tested with slow memory), for $490.
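(A quick sanity check on that perf-per-dollar claim; the application scores below are assumptions chosen to roughly reproduce the relative numbers above, not exact figures:)

```python
# Rough perf-per-dollar comparison; app scores are assumed relative values
# (245K = 100), prices in USD as quoted above.
cpus = {
    "245K":    {"price": 310, "app_score": 100},  # baseline
    "9700X":   {"price": 330, "app_score": 98},   # assumed roughly comparable
    "7800X3D": {"price": 490, "app_score": 90},   # ~10% slower in applications
}
base = cpus["245K"]["app_score"] / cpus["245K"]["price"]
for name, c in cpus.items():
    ratio = (c["app_score"] / c["price"]) / base
    print(f"{name}: {ratio:.2f}x the 245K's application performance per dollar")
```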

What I'm seeing with the $590 285K is a CPU that compares favorably against its more expensive competition ($649 9950X): 30% more efficient in ST

[attachment: single-thread efficiency chart]

essentially the same in MT

[attachment: multi-thread comparison chart]

25% less power in idle

[attachment: idle power chart]

...plus a better platform, but currently it's slightly slower in gaming despite being more efficient, when tested with memory 2000 MT/s slower than Intel's "sweet spot" of 8000 MT/s.

[attachment: gaming performance chart]
What?

Did you happen to overlook the upper sections of those graphs?
[screenshots: the efficiency charts in question]


AMD dominates in efficiency, except for idle consumption; that is indeed horrible. The 9600X might not be as efficient as the Ultra 245K, but the Ultra 245K delivers about 5% less FPS. If you take that into account, the 9600X prevails. With Intel's node advantage, one would expect the Ryzen 7900 to have competition in multi-thread efficiency. The thing is, the Ryzen 7900 and 7700 show exactly what happens when a CPU is paired with sane voltages and clocks: they deliver exceptional efficiency. (I'm not taking the X3D SKUs into account.) In terms of efficiency, the i5-13400F is Intel's only efficient SKU.
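(To illustrate why the FPS gap matters: efficiency should be performance-normalized, i.e. joules per frame rather than raw watts. The power and FPS figures below are assumptions for the sake of the arithmetic, not the review's exact numbers:)

```python
# Performance-normalized gaming efficiency: joules per frame, lower is better.
# Both power and FPS values are illustrative assumptions.
ultra_245k = {"watts": 70.0, "fps": 95.0}   # assumed draw at ~5% fewer FPS
r5_9600x   = {"watts": 72.0, "fps": 100.0}  # assumed slightly higher raw draw

for name, c in [("Ultra 245K", ultra_245k), ("9600X", r5_9600x)]:
    print(f"{name}: {c['watts'] / c['fps']:.3f} J per frame")
# A small raw-power edge can flip once the FPS deficit is accounted for.
```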

As for the 9800X3D, its base clock is significantly higher than the 7800X3D's (reportedly 4.7 GHz vs 4.2 GHz, roughly 12% from clocks alone). Estimates put the application performance improvement over the 7800X3D at about 12-16%, which means the 9800X3D should dominate gaming while being about as powerful in apps as the 9700X. Although, this might even change a bit after retesting on Win 24H2.
 
Everything Arrow Lake does holistically, Meteor Lake did first. And Arrow Lake hasn't fixed any of MTL's problems, only brought them (at last) to desktop.
Arrow Lake, as a whole, is derivative of Meteor Lake.
Lion Cove is derivative of Golden Cove/Redwood Cove.
Skymont is derivative of Gracemont/Crestmont.
Skymont is good, but Lion Cove is not. While Skymont is based on Crestmont, it also brings new ideas and is a substantial improvement, while Lion Cove expands a lot but delivers very little: a ~33% increase in major structures in many areas for just a 10% gain is sad. The Israeli Design Center that brought us Merom/Conroe, which enthusiasts pretty much worship, is now on its deathbed and should be replaced.

What should it be replaced by? Successors of Skymont.

The real problem is that Arrow Lake (and its predecessor Meteor Lake, which uses the same tile configuration) was developed during extremely troublesome times for Intel. It's said that a lot of the Meteor Lake team went to Microsoft's CPU project. So many Intel people moved to MS that the project acronym used within Intel for Meteor Lake turned up at Microsoft. Many IDC (the team) leads and members left Intel during Krzanich's era. Mooly Eden, Dadi Perlmutter, remember them? They rose to fame after Core 2.

So not only was the subpar too-many-tiles configuration kept, they didn't have the manpower/intellect/leadership to make it work to their vision either. When people are unhappy and/or leave, you don't see the ramifications right away. We're seeing them now. Projects take 3+ years to come to fruition.

The tile-based Xeon "Sapphire Rapids" suffered too, because during Krzanich's era the entirety of the validation team was fired, the people responsible for making sure things work properly, don't throw errors, and stay reliable under workloads.

Increasingly, we're coming to a point where Intel as a company is in some jeopardy of declaring bankruptcy in the future. The next few years are absolutely critical.
 
Skymont is good, but Lion Cove is not. While Skymont is based on Crestmont, it also brings new ideas and is a substantial improvement, while Lion Cove expands a lot but delivers very little: a ~33% increase in major structures in many areas for just a 10% gain is sad. The Israeli Design Center that brought us Merom/Conroe, which enthusiasts pretty much worship, is now on its deathbed and should be replaced.

What should it be replaced by? Successors of Skymont.

The real problem is that Arrow Lake (and its predecessor Meteor Lake, which uses the same tile configuration) was developed during extremely troublesome times for Intel. It's said that a lot of the Meteor Lake team went to Microsoft's CPU project. So many Intel people moved to MS that the project acronym used within Intel for Meteor Lake turned up at Microsoft. Many IDC (the team) leads and members left Intel during Krzanich's era. Mooly Eden, Dadi Perlmutter, remember them? They rose to fame after Core 2.

So not only was the subpar too-many-tiles configuration kept, they didn't have the manpower/intellect/leadership to make it work to their vision either. When people are unhappy and/or leave, you don't see the ramifications right away. We're seeing them now. Projects take 3+ years to come to fruition.

The tile-based Xeon "Sapphire Rapids" suffered too, because during Krzanich's era the entirety of the validation team was fired, the people responsible for making sure things work properly, don't throw errors, and stay reliable under workloads.

Increasingly, we're coming to a point where Intel as a company is in some jeopardy of declaring bankruptcy in the future. The next few years are absolutely critical.
Troublesome times or not, the people on that team surely did not have a gun to their heads forcing them to go for insane voltages that cause extremely fast degradation.

What will be critical for Intel is the success of its own 18A process. Pat bet his (company's) ass on this, so hopefully they will deliver something usable. Otherwise they're f*cked, meaning they can't compete in servers, in AI, in GPUs, in gaming, or in heavy app workloads. Even their mainstream network controllers have sucked for three generations in a row.
 
23H2 was used, not 24H2. Did I read that right?
Other reviewers have heavily implied that 24H2 is not friendly with Intel's Ultra chips.
 
Other reviewers have heavily implied that 24H2 is not friendly with Intel's Ultra chips.

As in "they are even more behind AMD, which gains in 24H2", or do they have problems in performance, stability in new Windows update?

Whichever it is, there is no excuse for not benchmarking in 24H2, which is in rollout, and is now basically current Windows 11 version.
 
I don't know how that would be the case, considering the architecture has been simplified.
The Windows 2152 update, which is currently a preview, has the improvement. Even if you're on 24H2, you don't have it. By all accounts it's not a massive improvement, 3-5%. But I suspect Arrow Lake will be like Zen 5, and lots of small improvements will happen.
 
Strange, the general consensus was that Intel's (insert preferred number) nm node was causing the high power consumption etc., and that the 3 nm node would be a drastic improvement... I wonder what happened.
 
As in "they are even more behind AMD, which gains in 24H2", or do they have problems in performance, stability in new Windows update?

Whichever it is, there is no excuse for not benchmarking in 24H2, which is in rollout, and is now basically current Windows 11 version.

From the conclusion page of the TPU review:

When pairing Windows 24H2 with Arrow Lake, performance will be terrible—we've seen games running at 50% the FPS vs 23H2. One solution is to turn off Thread Director or disable the "Balanced" power profile, which is why we decided to use 23H2 for the time being. Last but not least, there are some driver issues and bluescreens when both a dGPU and iGPU are active at the same time.
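For anyone wanting to script the power-profile half of that workaround, the active plan can be switched with the built-in powercfg tool; here is a minimal Python wrapper (the GUIDs are the standard built-in Windows scheme IDs, and note the Thread Director toggle itself is typically a BIOS option, not something you can flip from the OS):

```python
# Minimal sketch: move the active Windows power plan off "Balanced",
# one of the Arrow Lake + 24H2 workarounds quoted above.
import subprocess

BALANCED = "381b4222-f694-41f0-9685-ff5bb260df2e"          # built-in GUID
HIGH_PERFORMANCE = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"  # built-in GUID

def set_power_plan(guid: str) -> None:
    """Activate a power scheme via powercfg (Windows only)."""
    subprocess.run(["powercfg", "/setactive", guid], check=True)

if __name__ == "__main__":
    set_power_plan(HIGH_PERFORMANCE)  # avoid the problematic "Balanced" plan
    print(subprocess.run(["powercfg", "/getactivescheme"],
                         capture_output=True, text=True).stdout)
```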
 
From the conclusion page of the TPU review:

Wasn't 24H2 available in the Release Preview Channel since May 2024? For a lot of users, Windows has already updated; the fact that the new Intel CPUs are crap on it shouldn't be an excuse to use an outdated version. What's next, using Windows 10 if that brings any advantage to Intel?
 