
Intel Core Ultra 9 285K

Let me put it this way with some examples. But first, a disclaimer: most gamers run entry-level or mid-range hardware. Even $400 just on a CPU is a lot; the reality is that the majority of people run i5/R5-class chips because they're cheap CPUs that are good enough, and they're not pairing them with a $1,600 GPU where the faster CPUs can stretch their legs. So for most of these people who are looking to build a new PC, dropping ~$500 on X3D isn't an option. The AM4 upgrade to the 5700X3D is a great option, but that assumes you already have the AM4 platform. I certainly wouldn't recommend doing a new build on that dated platform; too many compromises besides the one perk, gaming performance.

Example #1: you have $300 to spend on a CPU.

You can, A) buy a 245K for $310 and get 80% applications performance and 94% gaming performance (relative to the 285K).

Or B) buy a 5700X3D, if it's sold in your country, for $200, pair it with a last-generation platform and all of its downsides, and get 52% applications performance and 90% gaming performance.
If you can't find a 5700X3D, you'll have to go with a 5800X3D at around $250, which is 56% applications performance and 94% gaming. These percentages are relative to the 285K.

Example #2
You have $400 to spend on a CPU.

You can, A) buy a 265K for $395, or stretch the budget for some reason to a 9900X at $430, and get 94% and 93% in productivity respectively.
In gaming with those options you'd get 97% and 100% performance. These percentages are relative to the 285K, tested with a 4090.

Or B) buy a 7800X3D, which costs $470 (so good luck with that $400 budget), for 70% in productivity and 112% in gaming.

For 20% more money you're getting 25% lower productivity performance, and 12% faster gaming performance, assuming you own a 4090.

Example #3: now let's do the 7950X3D. Here it's a little less obvious, but again, this CPU price range is roughly 4% of the market going by the Steam hardware survey; 50% are 6/8-core CPUs, and even quad cores still have 4x the market share of 16-core CPUs.

(Attachment: Steam hardware survey CPU share breakdown)

A) 285K for $585 - 100% relative applications performance/gaming performance
B) 7950X3D for $600 - 96% applications performance, 106% gaming performance - slightly slower in applications, slightly faster in gaming (assuming you have no scheduling issues and are willing to tolerate Xbox Game Bar and a 3D V-Cache scheduling driver), for more money, on an older platform, with worse I/O, and no chance of running super fast memory without switching gears, unlike ARL.
C) 9950X for $650 - 103% applications, 102% gaming.

So yeah, the X3D chips only make sense for gaming. If you don't game, there's pretty much no point paying the premium for them, because they're slower than the alternatives. Every other CPU from both Intel and AMD is reasonably balanced: they can game, they can do some productivity work, and they aren't too expensive.
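To put the three examples above on a common footing, here's a minimal Python sketch that turns the quoted prices and 285K-relative percentages into performance per dollar. It only uses the rounded figures from this post, nothing from the review itself:

```python
# Performance per dollar from the rounded figures quoted in this post.
# Percentages are relative to the 285K (= 100); prices are the US prices mentioned above.
options = {
    # name:      (price_usd, apps_pct, gaming_pct)
    "245K":      (310,  80,  94),
    "5700X3D":   (200,  52,  90),
    "5800X3D":   (250,  56,  94),
    "265K":      (395,  94,  97),
    "9900X":     (430,  93, 100),
    "7800X3D":   (470,  70, 112),
    "285K":      (585, 100, 100),
    "7950X3D":   (600,  96, 106),
    "9950X":     (650, 103, 102),
}

for name, (price, apps, gaming) in options.items():
    print(f"{name:<9} apps/$: {apps / price:.3f}   gaming/$: {gaming / price:.3f}")
```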

Just gonna point out, the 245K is a pretty poor investment vs a 9700X for anyone who is going to game and dabble in office/light productivity work. The 245K is worse in just about every metric. Not a great base example either.
 
That chart is from today. It is still a "top 3" value gaming chip 5 years later. There is no propaganda unless you want to complain to Wiz.

I wonder why you complain about dissonance while not realizing that a 90% increase in MT for a 10% decrease in gaming performance is a trade some people will make, whereas people won't trade a similar 10% decrease in gaming performance for a 5% increase in MT. It's a totally different scenario. If ARL was putting up 70K in Cinebench R23 maybe you'd have a point; it would be the modern 3950X. But it's not even close.

I wonder why you are talking about the 'value' proposition of the 3600X as a gaming chip, and the MT performance of a 3950X (which was absolutely not a value prop, at all), as if they are the same thing?

If you wanted to do MT things - like rendering or MT encoding - then as we already talked about wouldn't you use a GPU in 2019? Why would you use a CPU? So the reasoning back at the time - again 3 years before you joined AT - was that there was some slight difference in encoding quality with a CPU vs a GPU. That never really panned out in tests.

Again, the vast, vast majority of folks here primarily want a high performance CPU to game with. Very, very few do any of these rendering / encoding things. If they *did*, then they'd be all over the encoding performance of Arrow Lake. But they aren't, because they don't care about that (nor do I), and the 'care' they had in 2019 was all fake.

Why was that?
 
Yeah, but did you see some of the graphs for productivity? It's losing in Microsoft Office and antivirus.
OH MY, OH MY LOOOORDDDDDD !! it's losing in MS word, now you are limited to typing only 1500 words per minute instead of 1699 with intel, can you even imagine the HORROR ?
 
Just gonna point out, the 245K is a pretty poor investment vs a 9700X for anyone who is going to game and dabble in office/light productivity work. The 245K is worse in just about every metric. Not a great base example either.
The 9700X is slightly slower in applications, slightly more expensive, and about 8% faster for gaming (after a month of bugs being ironed out, on a mature platform, and with the Intel chip running at 6000 MT/s, not its 8000 MT/s "sweet spot").

Missing the point about X3D though. The examples, ideal or not, are sufficient to drive home that point.
 
I wonder why you are talking about the 'value' proposition of the 3600X as a gaming chip, and the MT performance of a 3950X (which was absolutely not a value prop, at all), as if they are the same thing?

If you wanted to do MT things - like rendering or MT encoding - then as we already talked about wouldn't you use a GPU in 2019? Why would you use a CPU?

Again, the vast, vast majority of folks here primarily want a high performance CPU to game with. Very, very few do any of these rendering / encoding things. If they *did*, then they'd be all over the encoding performance of Arrow Lake. But they aren't, because they don't care about that (nor do I), and the 'care' they had in 2019 was all fake.

Why was that?
Because you said this
Where'd all the people from the Zen 2 days who only cared about rendering/encoding speed and power efficiency go?? :laugh::laugh::laugh:
Zen 2 had the 3950X. It was far ahead of the competition in rendering, encoding and efficiency - around 90% ahead in embarrassingly parallel work. Arrow Lake is only a few percent better in some and behind in others.

I don't know why you are mad that Zen 2 also had a value gaming part that, depending on pricing at the time, was competitive with the next year's 10400F. It was priced competitively, and so people recommended it whenever it was on sale.

And where are the 10400F owners now? The 3600 owner could be running a 5600X3D if they wanted, which is as fast in gaming as the brand new 245K without having to buy new memory or a new board.
 
No, I do not mean latency. I mean topology - where the CPU's execution resources are physically located. This is why the 7900X3D is among the slowest of all Zen 4 Ryzens, when it clearly has the goods. Also don't underestimate cycle penalties - they multiply across the millions, if not billions, of cycles in CPUs that operate in the 4-5 GHz range.
Yep, that is why I do not enjoy gaming on it. I suggest you watch the MSI Gaming live stream and see that the 9900X is faster in every game than the 285K. It does not matter what you say vs what I feel. 4-5 GHz? The 7900X3D boosts to 5.7 GHz on the other CCD, but I guess that does not matter. There is no scenario that makes the topology of the 7900X3D bad; that idea was constructed by the community and pasted into the void for lack of reviews. I guess the 9900X is faster than the 7900X3D in games? That's right, you can't answer that. You have no 7900X3D to verify that with. I don't want to derail this thread, but you and a few others love to throw AMD into the equation as being lesser, even when the raw data shows that Intel is still behind AMD in the CPU war in every metric. Topology. All Zen chips are made the same; this one just has 128 MB of L3 cache. Hasn't it been shown that fewer cores are beneficial for heat, and hasn't the 6-core argument been debunked? Don't forget that 24H2 also benefits X3D, so what you are debating really is a nothing burger. The 7800X3D is great at gaming, the 7950X3D is great at productivity, and the 7900X3D is within 5-10% of either in gaming or productivity, yet it is foolish? When it is $50 more than the 7800X3D and $400 less than the 7950X3D, even today where I live? I have seen people comment that the MSI Claw is OK even though it is the slowest handheld PC you can buy. Are they wrong in their assessment? Or do we only deal in absolutes as humans?
 
You mean $599 for the 9950X.
No.

 
20% slower in gaming on average vs the 7800X3D (ARL was tested on 23H2 because it loses performance on 24H2, where Ryzen gains, so the review used the best OS for each arch).
A totally immature platform vs AM5, which has had its quirks solved for months now. And although ARL is more efficient than RPL, the latter was terrible, and the new chips are being compared against Zen 5, which is by far better in power draw apart from idle, where the difference is ~20W.
 
So because it's out of stock at those two stores, you're basing your comment on the stores that have it in stock for a more expensive price? I honestly can't with your posts anymore, and I can't even put you on ignore as you are "staff". I'm sick of reading your word salads and of you twisting everything to fit an obvious agenda that you spout in EVERY AMD/Intel thread. Here are some screenshots for you, as obviously that means it is fact, and this time from the UK, so does that make it better for you?

 
You seem to have it confused. Out of stock for a reason. The X870E Godlike is $1899 and already Out of stock at my local brick and mortar. All this review will do is increase AMD sales for people that were waiting to see what Intel had to offer.
If you can't buy it for $600, then it doesn't really matter if it cost that at some brief point in the past (a period of two days).

For the vast majority of the time since its launch, as well as right now, it's been available for $650.

So no, no confusion here.
 
I think all the issues in gaming will be solved in a month or so. It can't be right next to the 14900K in some games and 20% lower in others. It will be sorted sooner or later.

Regarding the comparison to the 3Ds. It's kinda unfair.
Because Intel does not have one. They should be compared to the normal 9000 CPUs.

X3D is a specialized series where AMD can sacrifice as much performance in most apps as they want, as long as they get more performance in gaming. Some of us are (more than) fine with it.
 
To be honest, I don't mind if Intel has a troublesome time for once. Though I hope that AMD doesn't abuse this with too-high pricing; instead they should lower their prices a little to gain more users.
I agree, I don't mind seeing Intel struggle a bit at all, especially after the Raptor Lake fiasco. I'd like to see AMD lower their prices on Zen 5 by another $50 then they'd have the better value as well.
You seem to have it confused. Out of stock for a reason. The X870E Godlike is $1899 and already Out of stock at my local brick and mortar. All this review will do is increase AMD sales for people that were waiting to see what Intel had to offer.
Despite reviewers bashing Zen 5, I've seen the 9950X and X870E boards go out of stock regularly; if AMD did another price cut they'd have some massive sales. Zen 5 also has the mature platform with better socket longevity: AM5 users can drop in Zen 6 when it arrives, while Intel might get an Arrow Lake refresh, but the rumors so far are that Z890 might only get one generation on socket LGA 1851.
It seems like the Ultra 200 series is like Rocket Lake 11th gen: a placeholder until the next gen. There's really no reason to buy it unless you absolutely have to have an Intel system.
 
I agree, I don't mind seeing Intel struggle a bit at all, especially after the Raptor Lake fiasco. I'd like to see AMD lower their prices on Zen 5 by another $50 then they'd have the better value as well.
I don't mind using my current 5800X, but if the price of the 9700X were cut to 300 EUR (incl. VAT), count me in.

Good CPUs, shitty pricing.
 
X3D is a specialized series where AMD can sacrifice as much performance in most apps
Bro, come on, they don't sacrifice "as much as they want" to gain extra gaming performance. It all comes down to clock speed, plain and simple: X3D can't scale as high in clocks as its non-X3D counterparts, and non-gaming workloads show this. It's linear; it's not magic sauce. The extra X3D cache can't cope with the higher voltages that non-X3D chips use to achieve higher clocks, so they run lower clock speeds. To say they sacrifice everything else to achieve this is just plain wrong: productivity workloads scale just as well on X3D chips when you factor in the lower clock speeds, and they have the same IPC as non-X3D chips.
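As a rough sketch of that clock-scaling argument, using approximate spec-sheet boost clocks for the 7800X3D and 7700X purely as an illustration (real sustained clocks will vary):

```python
# "Same IPC, lower clocks" sketch: if two chips share an architecture, throughput in
# clock-bound productivity work scales roughly with frequency. Boost clocks below are
# approximate spec-sheet values used only for illustration.
def predicted_ratio(x3d_clock_ghz: float, plain_clock_ghz: float, ipc_ratio: float = 1.0) -> float:
    """Predicted X3D performance relative to the non-X3D part (1.0 = identical)."""
    return ipc_ratio * x3d_clock_ghz / plain_clock_ghz

# 7800X3D (~5.0 GHz boost) vs 7700X (~5.4 GHz boost), same core count, same IPC
print(f"{predicted_ratio(5.0, 5.4):.2f}x")  # ~0.93x, i.e. roughly 7% slower, no extra "sacrifice"
```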

If you can't buy it for $600, then it doesn't really matter if it cost that at some brief point in the past (a period of two days).

For the vast majority of the time since its launch, as well as right now, it's been available for $650.

So no, no confusion here.
Google supply and demand
 
Regarding the comparison to the 3Ds. It's kinda unfair.

No.

You buy a package, which implies a mainboard, a CPU and RAM.
The mainboard defines which connectors you can use (see the mainboard manual with all the exceptions for processors, RAM, M.2, and so on...).
The processor and RAM define whether an instruction can be executed and how fast (please forgive me for not being 100% precise).

Also, the lower-priced processors and previous generations should be considered.

I did not bother with those 13th, 14th and 15th generation Intel processors. The 12th gen E-cores could not execute my code.
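The post doesn't say which instructions the E-cores choked on; assuming it was an ISA extension they don't implement (AVX-512 being the usual suspect on 12th gen), a minimal runtime check before dispatching the optimized path could look something like this Linux-only sketch:

```python
# Minimal sketch: pick a code path based on whether the CPU advertises a given ISA
# extension. Parses /proc/cpuinfo, so Linux-only; "avx512f" is an assumed example flag.
def cpu_has_flag(flag: str) -> bool:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return flag in line.split()
    return False

if __name__ == "__main__":
    if cpu_has_flag("avx512f"):
        print("dispatching AVX-512 path")
    else:
        print("dispatching baseline fallback path")
```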

Off topic: AMD and the mainboard manufacturers have a lot of cashback offers or game giveaways running right now. I assume people are buying mainboards and other components in advance to start building their self-built computers.
 
It's OK once you take into account that Intel is idling at 25W lower than AM5. Take away those 25W from all the load power draw numbers and it won't look as bad, sometimes even better than AMD.

The platform idles 25 W higher; actual CPU idle power consumption is close between the two.

That's important to note, as single-chipset AMD platforms seem to use less power at idle, but that data is hard to find given that every publication tests with the dual-chipset boards. We can only compare to past single-chipset AMD motherboards and see that they were more efficient at idle.

Idle power consumption is important, but full-load and mixed-scenario power consumption is much more so. You aren't dealing with heat issues at idle, and your heat output isn't impacting your efficiency and clocks; it is during mixed and heavy workloads that it does.
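For reference, the adjustment the quoted post proposes is just a constant subtraction of the claimed platform idle delta; a minimal sketch with hypothetical wall readings rather than review numbers:

```python
# Sketch of the "take the idle delta off the load numbers" adjustment from the quoted
# post. All wattages here are hypothetical placeholders, not review data.
PLATFORM_IDLE_DELTA_W = 25  # claimed board/chipset idle difference, not CPU package power

def load_minus_platform_overhead(system_load_w: float) -> float:
    """Strip the constant platform overhead so only the load-dependent draw is compared."""
    return system_load_w - PLATFORM_IDLE_DELTA_W

print(load_minus_platform_overhead(280.0))  # hypothetical AM5 system reading -> 255.0
```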

As per the review:

Now, technically there are some issues with that approach, as it's possible some of the power lines from the ATX connector may supplement the voltage lines to some parts of the CPU, depending on implementation.

EDIT: I'm wrong...

Yeah, you're right - that 25W of additional 'headroom' that AM5 uses doesn't just disappear. But on the flip side, Intel compensates by just blowing past that 25W figure when under full load.
The chiplet downside - not enough APU reviews for AM5 to compare with at TPU, but there is this:

(Attachment: APU vs chiplet CPU idle power comparison chart)

Clearly AMD's AM5 APUs with a monolithic design do much better in this regard, and Intel's Foveros is a bit of a halfway house in that sense.
In a way this graph is a good reminder of just how much more power hungry AM5 generally seems to be - I'm not entirely convinced the AMD Promontory chipset chips are their best work in that regard.

The APUs are more efficient at idle, but look at them compared to last-gen APUs: the 5700G platform uses less power. I suspect this is due to the single-chipset motherboard.

Where'd all the people from the Zen 2 days who only cared about rendering/encoding speed and power efficiency go?? :laugh::laugh::laugh:

Tom's said it best - a lateral move. Nothing here for gamers; it's mostly for people doing encoding and some niche scientific uses. People who *actually* care about that prob aren't using a CPU though.

Looks like a 14700K will be my next upgrade. LGA 1700 FTW

Those people still exist, but unless this CPU responds to power limits very well, you are better off getting a 14th gen Intel and limiting it for maximum efficiency.
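For anyone going that route, one way to cap the sustained limit on Linux is through the powercap/intel-rapl interface; a rough sketch (needs root, and the RAPL domain index may differ per system):

```python
# Sketch: cap the sustained package power limit (PL1) via Linux's intel-rapl powercap
# interface. Requires root; intel-rapl:0 is assumed to be the package-0 domain.
from pathlib import Path

RAPL_DOMAIN = Path("/sys/class/powercap/intel-rapl:0")

def set_long_term_limit_watts(watts: int) -> None:
    # constraint_0 is the long-term (PL1) constraint; the value is in microwatts.
    (RAPL_DOMAIN / "constraint_0_power_limit_uw").write_text(str(watts * 1_000_000))

if __name__ == "__main__":
    set_long_term_limit_watts(125)  # e.g. hold a 14th gen chip to a 125 W sustained limit
```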

Should have held on to my 7800X3D and sold it now (sold it when I got my 9950X).
Even though the 9800X3D is out soon, I could probably get quite a bit for a 7800X3D.

That's just the tech world for ya. Sometimes a launch is good and other times it's a complete dud.

Regarding the comparison to the 3Ds. It's kinda unfair.
Because Intel does not have one. They should be compared to the normal 9000 CPUs.

X3D is L3 cache, which Intel does have.

Can't say I understand the logic; if there's a hardware feature that makes things go faster when the output is the same, that's entirely fair.

Was it unfair that the Intel Pentium 4 Extreme Edition was the first consumer CPU to include an L3 cache, and did that invalidate its results? No.
 
I agree, I don't mind seeing Intel struggle a bit at all, especially after the Raptor Lake fiasco. I'd like to see AMD lower their prices on Zen 5 by another $50 then they'd have the better value as well.
I want competition; I don't want a situation like Nvidia in the CPU sector too. I don't want to see the years of Intel's four cores and +2% generation over generation again.

That said, the performance of the Intel processor is disappointing. 'Eh, but in the TechPowerUp application benchmark they do very well.' Yes, but the variability is very high indeed. For example, in the Stockfish test they really suck, and that's something I really care about.

I can't see the precise consumption (maybe I'm blind), but it should be around 240W.
That's definitely 40W (or so) less than the 14900K, but it still consumes more than the 9950X, despite being 23% slower.
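As a back-of-the-envelope check, performance per watt is just the ratio of the two; the 285K figures below are the ones from this post, while the 9950X wattage is a made-up placeholder purely to show the arithmetic:

```python
# Rough perf/W comparison for the Stockfish point above. The 285K figures (23% slower,
# ~240 W) come from the post; the 9950X wattage is a hypothetical placeholder.
def perf_per_watt(relative_perf: float, watts: float) -> float:
    return relative_perf / watts

arl  = perf_per_watt(0.77, 240.0)   # 285K: ~23% slower than the 9950X, ~240 W
zen5 = perf_per_watt(1.00, 200.0)   # 9950X: baseline, 200 W assumed for illustration only
print(f"285K at {arl / zen5:.0%} of the 9950X's perf/W under these assumptions")
```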


 
20% slower in gaming on average vs the 7800X3D (ARL was tested on 23H2 because it loses performance on 24H2, where Ryzen gains, so the review used the best OS for each arch).
A totally immature platform vs AM5, which has had its quirks solved for months now. And although ARL is more efficient than RPL, the latter was terrible, and the new chips are being compared against Zen 5, which is by far better in power draw apart from idle, where the difference is ~20W.
HUB didn't help themselves there; they already face accusations of being partial to AMD. While I don't think they did it for that reason - I think it was the embargo date - they either should have delayed the review until they got it working on 24H2, or provided comparison results on the same OS. But of course they had to meet that precious embargo date, so they scuffed it all together.
 
Where is the i3?? :)

Also, what's with the naming?? Why not 15k series?!?!?!?!
 
HUB didn't help themselves there; they already face accusations of being partial to AMD. While I don't think they did it for that reason - I think it was the embargo date - they either should have delayed the review until they got it working on 24H2, or provided comparison results on the same OS. But of course they had to meet that precious embargo date, so they scuffed it all together.
TPU did the same and tested on 23H2 because of performance issues with ARL on 24H2... if anything, they've done Intel a favour. Good luck to the guys who have W11 24H2 and buy ARL though; it's likely just a scheduling issue that will be worked out over the coming weeks/months.
 