
Intel's Core Ultra 7 265K and 265KF CPUs Dip Below $250

Why would the lack of AVX-512 prevent you? It's still faster than the 9700x in AVX workloads due to the extra cores. Stockfish and y-cruncher are AVX-heavy workloads and the 265k still wipes the floor with the 9700x.
Prime95/mprime PRP3/PM1 workloads have staggering performance benefits from AVX-512 over AVX2. Those workloads are also so dependent on memory timings and speeds that more cores really do more harm than good, and HT/SMT is largely useless for LL/PRP3; only physical cores matter. Prime95 is also not good at handling hybrid core layouts.
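To illustrate the physical-cores-only point, here is a minimal sketch (Python, Linux) of restricting an mprime run to one SMT thread per core. The even-numbered-logical-CPU assumption, the worker count, and the `-t` torture-test invocation are illustrative only, not a recommended configuration; check your actual topology with `lscpu -e` first.

```python
import os
import subprocess

# Assumption: logical CPUs 0, 2, 4, ... are the first SMT thread of each
# physical core (typical enumeration on Ryzen; on a hybrid Intel part you
# would instead list the P-cores explicitly). Verify with `lscpu -e`.
n_logical = os.cpu_count() or 16
first_threads = list(range(0, n_logical, 2))[:8]  # e.g. 8 physical cores

# Launch mprime with its affinity restricted to those CPUs, so each worker
# gets a whole physical core and the SMT siblings stay idle.
proc = subprocess.Popen(
    ["./mprime", "-t"],  # illustrative torture-test invocation
    preexec_fn=lambda: os.sched_setaffinity(0, set(first_threads)),
)
proc.wait()
```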
 
At some point - I hope in the not-so-distant future - people might figure out that if someone is interested in a CPU like the 285k, they are not interested in an 8-core chip like the 9800X3D. The actual competition is the 9950X and the 9950X3D.
Fair enough, but if we use microcenter prices we see that the 9950x is a better value still.

At Microcenter it is $500, and the Core Ultra 9 285K is $560. If you look at the relative application and gaming charts they are basically within 1-2% of each other for both categories, meaning the 9950x gives the same performance for about 10% less money.
[Relative application performance chart]


Gaming performance:
[Relative gaming performance chart]



So, you still wouldn't get the Intel solution over the 9950x.
 

Fair enough, but if we use microcenter prices we see that the 9950x is a better value still.

At Microcenter it is $500, and the Core Ultra 9 285K is $560. If you look at the relative application and gaming charts they are basically within 1-2% of each other for both categories, meaning the 9950x gives the same performance for about 10% less money.

Gaming performance:


So, you still wouldn't get the Intel solution over the 9950x.
That depends more on what kind of workloads you are running. There are workloads where the 9950x is much faster than the 285k, and workloads where the 285k is much faster. It's use-case dependent. Also there is the whole issue with power draw on the Ryzen chips, which just go full alcoholic for simple single-threaded / lightly threaded tasks, but I guess that's just a me problem; most people don't seem to care about that.

They are both good chips - for my use case the 285k is better - I dislike the low-load power draw of Ryzen chips, but there's nothing wrong with the 9950x either.
 
That depends more on what kind of workloads you are running. There are workloads where the 9950x is much faster than the 285k, and workloads where the 285k is much faster. It's use-case dependent. Also there is the whole issue with power draw on the Ryzen chips, which just go full alcoholic for simple single-threaded / lightly threaded tasks, but I guess that's just a me problem; most people don't seem to care about that.

They are both good chips - for my use case the 285k is better - I dislike the low-load power draw of Ryzen chips, but there's nothing wrong with the 9950x either.
Absolutely, there are certain workloads, as well as games, where the 285k is better than the 9950x, but taken as a whole, across a variety of applications and games, the 9950x is still faster than the 285k. To make this even worse for Intel, Microcenter has the 9950x3d at $630; that is 12.5% more expensive than the 285k, but in gaming the 9950x3d is a little more than 17% faster, which means cost per frame still favors the 9950x3d. When it comes to application performance the 9950x3d is 7% faster than the 285k, and there cost per performance favors the 285k, but in that case why would you go with the 285k when you can get the 9950x?

The only real answer is, as you say, when the software you run the vast majority of the time on your computer is something the 285k excels at. Outside of that, AMD has very effectively sandwiched the 285k between the 9950x and the 9950x3d.
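To make the value math concrete, here is a minimal sketch (Python) using the Microcenter prices and the rough performance deltas quoted above. The relative-performance numbers are assumptions taken straight from this thread, indexed to the 285k; they are not pulled from any review database.

```python
# Rough price-to-performance check using the numbers quoted in this thread.
# Performance is indexed to the 285K = 100 in each category.
cpus = {
    "285K":    {"price": 560, "gaming": 100.0, "apps": 100.0},
    "9950X":   {"price": 500, "gaming":  99.0, "apps":  99.0},  # "within 1-2%"
    "9950X3D": {"price": 630, "gaming": 117.0, "apps": 107.0},  # ~17% / ~7% faster
}

for name, d in cpus.items():
    gaming_per_dollar = d["gaming"] / d["price"]
    apps_per_dollar = d["apps"] / d["price"]
    print(f"{name:8s} gaming/$: {gaming_per_dollar:.3f}  apps/$: {apps_per_dollar:.3f}")
```

Run as-is, this reproduces the point above: the 9950x3d still comes out ahead of the 285k on frames per dollar despite the higher sticker price, while the 285k beats the 9950x3d on application performance per dollar but loses to the 9950x.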
 
When it comes to application performance the 9950x3d is 7% faster than the 285k, and there cost per performance favors the 285k, but in that case why would you go with the 285k when you can get the 9950x?
Didn't we already agree that there are workloads where the 285k is a lot faster? Didn't we also agree that in mixed workloads it's still a lot more efficient than Ryzen chips? That's why I would personally go for the 285k. I don't care about gaming performance that much; my 12900k is more than fast enough to drive my 4090. Actually, the 4090 is a slouch in comparison.
 
You should look at some benchmarks; there are plenty of games that are still CPU-limited by heavy physics calculations, so nothing you do with the GPU will raise that minimum FPS.
Plenty of games.....weird because I looked at lots of benchmarks on TPU for 4K results....didn't see any meaningful difference.
 
Plenty of games.....weird because I looked at lots of benchmarks on TPU for 4K results....didn't see any meaningful difference.
There are games like KCD 1, Stalker 2 and Hogwarts that are CPU-limited even at 4K. The problem is that these games are so unoptimized that it doesn't really matter; it's going to be a bad experience no matter which CPU you use. Even if a CPU is 35% faster in one of these games, that will take your 1% lows from 30 to 40. It's a big nothing burger.

Reminds me of playing Arma 3 with my friends like 10+ years ago. I had an FX-8350 and they were making fun of me for dropping to 11 fps in some heavier areas, while their uber-ultra-fast 3770K was 60% faster, dropping to... 18 fps. It really did not make a difference; both experiences were horrible :D
 
Plenty of games.....weird because I looked at lots of benchmarks on TPU for 4K results....didn't see any meaningful difference.
Physics-based games like simulators: iRacing, Flight Sim, Kerbal, BeamNG.
 
Physics-based games like simulators: iRacing, Flight Sim, Kerbal, BeamNG.
Yes, well, I have hundreds of hours testing systems in BeamNG and, fun fact, AMD CPUs, specifically the X3D ones, don't do as well. In fact Arrow Lake IS the CPU to get for Beam; it's hard for die-hard AMD fans to admit that. Beam wants fast single-thread performance and lots of cores, and Intel is currently the best for that. Sure, maybe Beam is an outlier, but I couldn't be happier with my 285K; it was a large increase in performance over my 14900K in Beam. I also have two AMD systems (a 7800X3D and a 7945HX3D), and while there is nothing wrong with them, when you want lots of cars spawned Intel is superior.

14900K with 4090 and 80 cars


7800X3D with 4090 and 80 cars

7945HX3D with 5070Ti and 80 cars

285K with 5070Ti and 80 cars

Also, fun fact: Beam is still CPU-limited at 4K with a 5090.

Another fun comparison

14900K with 4090 no cars spawned (to highlight cpu performance)


7800X3D with 4090 no cars spawned

https://www.youtube.com/watch?v=bc3Fb604IIM

285K with 5090

https://www.youtube.com/watch?v=2YKfXSwZv8A
 
Didn't we already agree that there are workloads where the 285k is a lot faster? Didn't we also agree that in mixed workloads it's still a lot more efficient than Ryzen chips? That's why I would personally go for the 285k.
For most productivity folks, there are simply not enough applications where the 285K is significantly faster for it to matter. If they work in Linux, there are even fewer reasons. Someone building a productivity PC with a 285K is buying an EOL platform with no upgrades; they are immediately stuck with such a system.

If they buy AM5 with a 9950X/9950X3D, they will already have a simple and very powerful drop-in upgrade next year, with 24 powerful Zen 6 cores. So, a 50% core-count increase within a year. This is going to push their productivity workloads quickly and far beyond current CPUs, instead of them having to buy a completely new Intel system and spend way more money. It just does not add up.

The only case I can see where the 285K is a viable option for a productivity system is when a media creator is heavily reliant on Quick Sync and earns money with it. That makes sense.
[Screenshot: Intel Core Ultra 9 285K Review - Perf vs 14900K 9950X 7950X3...]
 
For most productivity folks, there are simply not enough applications where the 285K is significantly faster for it to matter. If they work in Linux, there are even fewer reasons. Someone building a productivity PC with a 285K is buying an EOL platform with no upgrades; they are immediately stuck with such a system.

If they buy AM5 with a 9950X/9950X3D, they will already have a simple and very powerful drop-in upgrade next year, with 24 powerful Zen 6 cores. So, a 50% core-count increase within a year. This is going to push their productivity workloads quickly and far beyond current CPUs, instead of them having to buy a completely new Intel system and spend way more money. It just does not add up.

The only case I can see where the 285K is a viable option for a productivity system is when a media creator is heavily reliant on Quick Sync and earns money with it. That makes sense.
Why are you assuming people buy a $600 CPU with a plan to upgrade next year? How do you know what the prices of the next gen will be? How do you know LGA 1851 isn't getting another generation?
 
At least Panther Lake should still be inbound, unless that was already cancelled. Intel is in a complete state of disarray right now, IMHO. They will need time. Right now, I think buying Ryzen is probably the best choice for most people, but that doesn't make the 285K a bad processor, especially if the price is right.
 
Prime95/mprime PRP3/PM1 workloads have staggering performance benefits from AVX-512 over AVX2.
Ah, I get it, you play benchmarks.
 
Why are you assuming people buy a $600 CPU with a plan to upgrade next year? How do you know what the prices of the next gen will be? How do you know LGA 1851 isn't getting another generation?
If there were ever going to be a refresh for Arrow Lake, we would have heard about it by now. And even if there is a late announcement during the summer, the refresh will be similar to moving from the 13900K to the 14900K, a minor little thing. So... nothing groundbreaking there, I am afraid. There is no new architecture available as a drop-in option on 1851. And you know this very well, so why pretend otherwise?

People who do serious productivity work on a high-end desktop system have enough brain power to do the math and know how to invest in their gear. It's a no-brainer to invest in a productivity platform with a guaranteed drop-in upgrade to a new architecture. That's also why both Intel and AMD usually give at least one drop-in upgrade on HEDT/WS platforms, so that productivity folks don't need to buy motherboards and other components every time they just need a new CPU.

It's also wrong to assume that a drop-in upgrade has to be planned for next year. Drop-in upgradeability is hard-wired into the platform; owners of such systems know it, and they can buy a new CPU when they need it, not when it's released. Whatever the price is, it will still be much cheaper than buying a new high-end desktop system. There is nothing to debate here; it's pretty simple.

Intel lost their chance and momentum on the 1851 platform for productivity folks when they cancelled the first architecture meant for this socket, Meteor Lake-S. The rest is history. So, the obvious choice for building a great productivity platform with a drop-in upgrade right now is AM5. If someone can postpone that decision and keep using whatever they have, then Nova Lake will give them the next opportunity to buy such a platform with a meaningful drop-in upgrade in the future.
 
Didn't we already agree that there are workloads where the 285k is a lot faster? Didn't we also agree that in mixed workloads it's still a lot more efficient than Ryzen chips? That's why I would personally go for the 285k. I don't care about gaming performance that much; my 12900k is more than fast enough to drive my 4090. Actually, the 4090 is a slouch in comparison.
So, you presented the 285k in general as being a better value than the 9950x, and that isn't true; it is only true for overall application performance when compared to the 9950x3d. We do agree that there is certain software the 285k performs very well in, but again, you initially presented the 285k as being better in applications and a better value compared to the 9950x. Just not true.

As for the power efficiency, I have not made any posts at all on this thread about power efficiency, so you must be talking about someone else you responded to.
 
Plenty of games.....weird because I looked at lots of benchmarks on TPU for 4K results....didn't see any meaningful difference.
Why would one ever look first at 4K Ultra performance when comparing CPUs? We won't find anything interesting there. TPU has almost 30 top CPUs within 5-6% difference at 4K Ultra. It doesn't really matter which one of those CPUs you use in gaming at that setting and resolution.

Reviewers test at Ultra settings by default, as they don't have time to explore other settings. Hardware Unboxed has a few videos showing settings below Ultra, mixed in with a few CPUs and a few GPUs. This is where things start to get more interesting.

At medium settings in 4K, CPUs start to work harder again, and this is where X3D CPUs show the best mileage on average and prove the most future-proof.
 
Why would you ever look first at 4K Ultra performance when comparing CPUs? You won't find anything interesting there.

TPU has almost 30 top CPUs within 5-6% difference at 4K Ultra. It doesn't really matter which one of those CPUs you use in gaming at that setting and resolution.

Reviewers test at Ultra settings by default, as they don't have time to explore other settings. Hardware Unboxed has a few videos showing settings below Ultra, mixed in with a few CPUs and a few GPUs. This is where things start to get more interesting.

At medium settings in 4K, CPUs start to work harder again, and this is where X3D CPUs show the best mileage on average and prove the most future-proof.
Entirely my point... it was sarcastic. I game at 4K and highest settings, but somehow, because I chose a 285K, I get terrible performance (according to some people).
 
It doesn't make it any more appealing, given the broken off-die memory controller and the core-to-core latency, but it's probably worth it for $150.
 
Ah, I get it, you play benchmarks.
Prime95 is a lot more than a benchmark and wasn't intended to become one, but yeah, that's the gist of it.
 
It is super unpopular in DIY, but that's a subset of the real market.

I'm struggling to understand why anyone would go for a 9700X instead at $305, current pricing. Perhaps a 9600X at $180, but I mean, it's $60 for more than 3x the cores, and cheaper mobos mean it's more like $30 extra for the ARL chip...
The "AMD is much better for gaming" sentiment is mostly from the X3Ds, which are almost twice the price, especially if you factor in mobos. Against standard Zen 5? The 265K is faster than the 9900X in applications and is essentially a 7700X in games; the 9700X is 5% faster with a 5090. That's with essentially the same efficiency, with 200S (warranty) Boost turned off, and with ARL running slower RAM than it's rated for. Anyone able to argue for Zen 5 in this case? I find it a weak choice except at the high end, with the 9800X3D/9950X3D. Maybe a 9600X3D at ~$250 changes things...

The advent of gaming performance charts/results generated with an RTX 5090 really throws off people's understanding of the actual relative performance with the GPUs they have, I think. That's on top of the general ignoring of "application performance" charts.
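For what it's worth, here is a minimal sketch (Python) of the platform-cost angle from the post above. The CPU prices come from this thread (the 265K assumed at roughly $240, per the article headline); the motherboard prices are illustrative placeholders only, used to show how a cheaper board can shrink the effective gap.

```python
# Hypothetical platform totals: CPU + motherboard. The board prices below are
# illustrative placeholders, not quotes from any retailer.
builds = {
    "265K + budget LGA 1851 board": {"cpu": 240, "board": 130},
    "9600X + budget AM5 board":     {"cpu": 180, "board": 160},
}

totals = {name: parts["cpu"] + parts["board"] for name, parts in builds.items()}
for name, total in totals.items():
    print(f"{name}: ${total}")

gap = totals["265K + budget LGA 1851 board"] - totals["9600X + budget AM5 board"]
print(f"Effective platform gap: ${gap}")  # ~$30 rather than the $60 CPU-only gap
```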
The simple answer is upgradability. I was 99% sure I would be able to upgrade my CPU on AM5 to a new generation, even before the 9000 series was announced, and I'm 98% sure I will be able to get a Zen 6 upgrade. Can you say that for Intel? That's their fundamental problem: every Intel platform is a dead end.
AMD is most popular for its 3D cache chips, as you pointed out, plus Linux performance, SoCs, platform longevity and the all-P-core architecture. To a lesser extent, AVX-512 and SMT could be attractive to some. These attributes can be worth a premium to enough customers.

Edit: looks like Geofrancis likes the platform longevity.
As rarely as I agree with dgianstefani, he has a point here. This is unpopular with you and other TPU users experienced enough to make a viable, wise choice as DIY users. But the thing is that these CPUs are sold en masse by Intel to OEMs/SIs. This has never changed and won't change any time soon, as AMD is still a niche in the consumer market (limited mostly to DIY customers in the US and EU), since almost all of their supply goes to EPYC and MI.

Don't get me wrong, I'm not preaching for either company. There are just products, with their strong and weak points.

There are areas where Intel CPUs were and are dominant: embedded, industrial, and other manufacturing fields, where CPUs are bought in droves and AMD has basically zero availability. And also, those who have bought Intel all their lives will continue to do so. These price cuts are aimed exactly at them. And this is the backbone of all CPU sales.
At this point, Intel CPUs, unless they have serious silicon/die manufacturing defects, are pretty much a great deal. As much as I dislike Intel, I must confess that the entire x86 computing and software ecosystem revolves around them. It's basically a plug-and-play experience.

That's why most companies are lazy, or cautious about trying "new" endeavours. Their bias is also fueled by the Wintel/NVIDIA lobbies that still exist, after all these years, as if AMD's Athlon and Ryzen never happened. You guys don't have a clue to what extent...

Another factor is uneducated clients who know nothing about the IT and CPU market but have "been told", have heard the "claims", that Intel is "more stable", and other fairy tales that dominate the market and companies' decisions at the upper level, due to Intel's corporate influence.

This hurts small businesses the most. They buy the cheapest all-rounder, regardless of platform/socket longevity and upgradability.
Both companies and their clients just see the "more cores" label and buy that (more cores was originally AMD's own game, at which Intel has now completely outplayed them), having no clue about potential heterogeneous-core issues.
And they still don't know that Ryzen has had five solid generations and several refreshes, with its issues mostly ironed out. Not to mention EPYC, which has simply wiped the floor with Xeon for a good half-decade.

Another factor is that despite AMD's superiority in many areas, their products are simultaneously more expensive. And there's simply no competition at the entry-level/low end, where an entire 12600K/B660 rig can be bought for just below $500. Go try the same with "alternatives" from AMD. There's simply no competition, and AMD ends up about $50-$100 more expensive at every level. At least here.

You may say this is just one small local market, but this is exactly what Intel relies on: each particular market outside the US and EU, where they can sell their products in bulk to OEMs or simply biased suppliers. The markets that never show up in the "big" outlets and financial reports.

Unless AMD begins to treat itself as a worthy, reliable, solid rival, there's nothing consumers can do. No amount of tantrums about AMD's superiority will help when Intel's "inferior" SKUs dominate mindshare and market sales globally. AMD must invest in consumer software and hardware support and validation.

 
There are areas where Intel CPUs were and are dominant: embedded, industrial, and other manufacturing fields, where CPUs are bought in droves and AMD has basically zero availability. And also, those who have bought Intel all their lives will continue to do so. These price cuts are aimed exactly at them.

That's why most companies are lazy, or cautious about trying "new" endeavours. Their bias is also fueled by the Wintel/NVIDIA lobbies that still exist, after all these years, as if AMD's Athlon and Ryzen never happened. You guys don't have a clue to what extent... Another factor is uneducated clients who know nothing about the IT and CPU market but have "been told", have heard the "claims", that Intel is "more stable", and other fairy tales that dominate the market and companies' decisions at the upper level, due to Intel's corporate influence at upper management.
Thanks for your sentiment, I think. Lol.

You almost put 2+2 together here (seriously, no patronizing intended).

Let me elaborate.

The reason Intel is more "stable" isn't that it is generally more or less stable/secure than competing hardware at the silicon level (insert joke here about Apple claiming their fundamentally more "secure" OS/hardware against viruses, when at the time their tiny market share simply meant no one bothered to write viruses for OS X compared to Windows, followed by a harsh wake-up call when they reached critical mass and started being a target). It's that the vast majority of enterprise and legacy software/systems are validated for Intel platforms. Just like software is validated for NVIDIA GPUs because of the dominant CUDA architecture (not just because of corpo corruption, as AYYYYYMD fanboys like to shout from the rooftops, but because it's good, and NVIDIA has insanely good software support, teaching resources at every level, and pre-existing libraries to utilize). Validation is a time-consuming and expensive process that requires specific optimizations, code checking/tweaking and improvements being merged into mainline (in the case of open-source stuff). Why on earth would these industries switch platforms to save what, some small percentage on hardware costs, when the real cost of implementing professional systems is in software and validation costs, something most enthusiasts still don't get? No, the "NVIDIA minus $50" strategy doesn't work, nor is it compelling at the professional level. AMD EPYC has done well (relatively) at solidifying the AMD x86 CPU family as compelling in enterprise, but the GPUs are still a fragmented joke.

Economies of scale don't just operate on the cost of hardware but on the cost of solutions, a cost made up of risk (often risking the entire business), time (often years) and money (often millions/billions, if not in initial cost then in lost profit in case of failure). Sometimes it's not even possible to hire the engineers and experts required to make something work well; good ones are a finite resource, are fiercely in demand, and often earn six or seven figures elsewhere (hence the dream/appeal of AI that can replace them). Intel and NVIDIA are massively popular because to go against that popularity you're throwing away the vast majority of off-the-shelf/turn-key paid and free software (from simple free/paid video editing software all the way up to million-dollar scientific compute licenses, etc.), along with the vast majority of freely available open-source libraries that are supported by the vast majority of hardware already on the market. It seems AMD is finally waking up to these facts and has started to stop making "equivalent" hardware with inconsistent software, with their massive hiring of software engineers and apparent early steps at paying for prompt and consistent software support from their consumer cards to their enterprise cards/CPUs, without fail or exception. Some way to go yet, but progress is progress. ROCm, why does it suck vs CUDA? Because it's technically worse? Yes, to a small extent, but mainly because it's inconsistent and unreliable as to when support will be available and for how long, plus developers are often left on their own to figure out workarounds or problem-solving for their use case. CUDA software, assuming its fundamental x32/x64 capabilities are supported, will at least run on literally any CUDA card, whether it's a $300 5060, a tens-of-thousands-of-dollars Quadro, or a million-dollar rack of GPUs.

There's gonna be the usual "but exception" posts in response to this, but actually think - is that meaningful?
 
AFAIK Intel has excellent Linux support, so I'm not sure about that point either. It performs better in Linux than in Windows.
Not at all. Please don't spread falsehoods without checking the data first. Read the new re-testing article on Phoronix. In Linux, the 285K is even slower than the 9900X, so the 265K will be further down too. The difference in performance is more pronounced than in Windows. There's still a 16% difference between the top SKUs.
[Chart: Linux performance, AMD 9950X vs 285K, 2025]


 
Not at all. Please don't spread falsehoods without checking the data first. Read the new re-testing article on Phoronix. In Linux, the 285K is even slower than the 9900X, so the 265K will be further down too. The difference in performance is more pronounced than in Windows. There's still a 16% difference between the top SKUs.

Point out the "falsehood".

1) AFAIK Intel has excellent Linux support.
2) It (Arrow Lake) performs better in Linux than in Windows.

So far you've noted that "the difference in performance (between AMD/Intel current-gen consumer CPUs) is more pronounced than in Windows". So? That means Linux is well optimized for the hardware. It does not mean that Intel chips do not perform better in Linux than in Windows. Your deep and nuanced statement of "but AMD is still faster in Linux according to one set of benchmarks" also doesn't mean that Intel does not have "excellent Linux support", which it does. In fact, Intel's code commits are consistent, high quality and ahead of schedule, often patching the Linux kernel literal years before products are even released. The Intel Clear Linux distro is not only generally the fastest general-purpose release for Intel computers but, ironically enough, also for AMD ones.

I can go on, but I'd rather respond to someone actually understanding the statements I made, rather than replying to a straw man they dreamt up.
 
Thanks for your sentiment, I think. Lol.

You almost put 2+2 together here (seriously, no patronizing intended).

Let me elaborate.

The reason Intel is more "stable" isn't that it is generally more or less stable/secure than competing hardware at the silicon level (insert joke here about Apple claiming their fundamentally more "secure" OS/hardware against viruses, when at the time their tiny market share simply meant no one bothered to write viruses for OS X compared to Windows, followed by a harsh wake-up call when they reached critical mass and started being a target). It's that the vast majority of enterprise and legacy software/systems are validated for Intel platforms. Just like software is validated for NVIDIA GPUs because of the dominant CUDA architecture (not just because of corpo corruption, as AYYYYYMD fanboys like to shout from the rooftops, but because it's good, and NVIDIA has insanely good software support, teaching resources at every level, and pre-existing libraries to utilize). Validation is a time-consuming and expensive process that requires specific optimizations, code checking/tweaking and improvements being merged into mainline (in the case of open-source stuff). Why on earth would these industries switch platforms to save what, some small percentage on hardware costs, when the real cost of implementing professional systems is in software and validation costs, something most enthusiasts still don't get? No, the "NVIDIA minus $50" strategy doesn't work, nor is it compelling at the professional level. AMD EPYC has done well (relatively) at solidifying the AMD x86 CPU family as compelling in enterprise, but the GPUs are still a fragmented joke.

Economies of scale don't just operate on the cost of hardware but on the cost of solutions, a cost made up of risk (often risking the entire business), time (often years) and money (often millions/billions, if not in initial cost then in lost profit in case of failure). Sometimes it's not even possible to hire the engineers and experts required to make something work well; good ones are a finite resource, are fiercely in demand, and often earn six or seven figures elsewhere (hence the dream/appeal of AI that can replace them). Intel and NVIDIA are massively popular because to go against that popularity you're throwing away the vast majority of off-the-shelf/turn-key paid and free software (from simple free/paid video editing software all the way up to million-dollar scientific compute licenses, etc.), along with the vast majority of freely available open-source libraries that are supported by the vast majority of hardware already on the market. It seems AMD is finally waking up to these facts and has started to stop making "equivalent" hardware with inconsistent software, with their massive hiring of software engineers and apparent early steps at paying for prompt and consistent software support from their consumer cards to their enterprise cards/CPUs, without fail or exception. Some way to go yet, but progress is progress. ROCm, why does it suck vs CUDA? Because it's technically worse? Yes, to a small extent, but mainly because it's inconsistent and unreliable as to when support will be available and for how long, plus developers are often left on their own to figure out workarounds or problem-solving for their use case. CUDA software, assuming its fundamental x32/x64 capabilities are supported, will at least run on literally any CUDA card, whether it's a $300 5060, a tens-of-thousands-of-dollars Quadro, or a million-dollar rack of GPUs.

There's gonna be the usual "but exception" posts in response to this, but actually think - is that meaningful?
Exactly! There's nothing to oppose here.

The only "but" that could be added is that market changes have to start within the rival companies and their strategies. No amount of pressure from consumers outside the corporate core will make any impact on the state of the market. In this case it has to start inside AMD, with their (positive) pressure and a lasting impression on their image among customers. AMD has to make "tectonic shifts" in order to gain favourable and healthy feedback.
 