
Intel Core Ultra 9 285K

5800X3D still good enough for gaming. NEXT! :p
 
Hardly anyone cares about things like Thunderbolt as well.
A statement that could only be made by someone ignorant of the fact that Thunderbolt 4 is implicitly USB4 compatible. Or in other words, ARL natively supports two USB4/TB4 ports, each running at the maximum 40Gbps, without having to sacrifice any PCIe lanes at all. It's basically free for board manufacturers to add these ports without eating into other board functionality.

In contrast, Zen 5 only gets USB4 (no TB4) via the bolt-on ASM4124, which offers one 40Gbps USB4 port and one slower 20Gbps one. It also consumes four valuable PCIe 5.0 lanes from the CPU, on a platform that is already lane-starved. Not only does this force board manufacturers to pay for the ASM4124, it also forces them to cut four lanes' worth of functionality, and manufacturers hate being forced to compromise.

I stand by my statement that ARL is massively superior to Zen 4 and 5 in terms of platform connectivity. If there are any Z890 boards with dual x16 slots (something literally not possible on Zen 4/5 because of its garbage IO) then I will almost certainly be upgrading to ARL, performance be damned. Hell, even an x16 + x8 combo might convince me (this is theoretically possible on Zen 4/5 but nobody does it because of the way lanes are emitted by the chipset, again, another shit design choice).

AMD needs to wake the fuck up and stop stiffing consumers on IO. I paid a thousand quid for this half-decade-old Threadripper system to get decent IO because AMD's "latest and greatest" won't provide it (not can't, won't, because it was a deliberate decision), and if Intel can give me that IO in a modern platform for less than what I paid for this old one, they will have my money.

I thought the consensus was that E-cores aren’t “real cores” and Intel shouldn’t include them, but give “muh enthusiasts” 2 more P-cores. Now we are straight up making them equal to Ps and not treating them like the HT replacement they are?
So which is it?
Goal post moving, that's what.
 
Gaming performance is not good (not that I care).
A Chinese review tried disabling the E-cores, and gaming performance changed by -5% to +15%. E-cores have much higher (~+50%) memory latency than P-cores.
They also tried undervolting, and the power consumption of the 285K improved by ~100 W under a stress test.
Imo this is fine. It doesn't do super well vs AMD, but it has some good characteristics compared to Raptor Lake that I welcome.
Next gen should be better, and AMD shouldn't become complacent.
Imo next time they need to lower the E-core vs P-core latency and add more cache, because the cache didn't change compared to Raptor Lake.
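If anyone wants to sanity-check the latency claim without a trip to the BIOS, here's a rough Python sketch of a pointer-chase test pinned to a single core. This is not the linked review's methodology, the core indices are guesses you'd need to check against your own topology, and Python interpreter overhead inflates the absolute numbers, so only the P-core vs E-core gap is roughly meaningful:

```python
# Rough sketch, NOT the review's methodology: pointer-chase latency pinned
# to one core. Core 0 as a P-core and core 16 as an E-core are assumptions,
# check your own topology (lscpu -e / Task Manager) before trusting the labels.
# Python interpreter overhead inflates the absolute ns/hop figure, so only the
# relative gap between the two cores is roughly meaningful.
import os
import random
import time

def chase_latency_ns(core: int, n: int = 1 << 21, hops: int = 1_000_000) -> float:
    os.sched_setaffinity(0, {core})       # Linux; on Windows use psutil's cpu_affinity
    perm = list(range(n))
    random.shuffle(perm)                  # random order defeats the prefetcher
    nxt = [0] * n
    for i in range(n):
        nxt[perm[i]] = perm[(i + 1) % n]  # one big cycle through all n slots
    idx = 0
    t0 = time.perf_counter_ns()
    for _ in range(hops):
        idx = nxt[idx]
    return (time.perf_counter_ns() - t0) / hops

if __name__ == "__main__":
    print("assumed P-core (0):  %.1f ns/hop" % chase_latency_ns(0))
    print("assumed E-core (16): %.1f ns/hop" % chase_latency_ns(16))
```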

Can you link the page?
 
Other reviewers have heavily implied that 24H2 is not friendly with Intel's Ultra chips.
It also implies the product's drivers weren't finished.
Strange, the general consensus was that Intel's (insert preferred number) nm node was causing the high power consumption etc. and the 3nm node would be a drastic improvement... Wonder what happened
N3B is the broken node; Apple used it to make the M3 and rushed to replace it with the M4, which is on N3E.

It feels like TSMC's lead is only marginal, and this is part of the proof.
 
And as I said, for home users multi-core application performance quickly becomes just "fast enough" and doesn't translate into any deciding point when buying. That's why AMD sold tons of 5800X3Ds and 7800X3Ds, even though they were quite noticeably slower in productivity than the similarly priced 5900X and 7900X.

I have friends who do tons of photo editing and only game occasionally, and they decided to buy an X3D processor, because the difference for them is just a slightly longer "rendering" time when exporting photos; everything else is similar, and the extra cache in a "gaming" CPU might make it more responsive in tasks that are hard to benchmark.
Extremely important point. 99.99% of people don't care if it takes them 20 seconds or 19 seconds to decompress a 4GB file. All these processors are good enough in all these operations.

Obviously if you do a review you try to cover as many scenarios as possible and then take an average, but that is not the way a buyer thinks. You identify the two or three scenarios that matter most (for me: gaming, Stockfish chess, and Git / C++ compilation) and base your decision on that.
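To make that concrete, here's a toy sketch of the "buyer math" versus a site-wide average. The scores and weights are made-up placeholders, just to show how weighting only your own two or three scenarios can change the picture:

```python
# Toy example with made-up scores: a site-wide average weighs everything,
# a buyer can weight just the scenarios they actually run.
from math import prod

def weighted_geomean(scores: dict[str, float], weights: dict[str, float]) -> float:
    total = sum(weights.values())
    return prod(scores[k] ** (w / total) for k, w in weights.items())

# Relative performance vs. some baseline CPU = 1.00 (hypothetical numbers)
cpu_a = {"gaming": 1.15, "stockfish": 0.95, "compile": 1.00}
cpu_b = {"gaming": 1.00, "stockfish": 1.10, "compile": 1.12}
my_weights = {"gaming": 0.6, "stockfish": 0.2, "compile": 0.2}

print("CPU A:", round(weighted_geomean(cpu_a, my_weights), 3))  # ~1.07
print("CPU B:", round(weighted_geomean(cpu_b, my_weights), 3))  # ~1.04
```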
 
I wonder what OS version the coming AMD 9800X3D will be tested on? Since Windows 11 24H2 has certain optimizations for Zen 5, it would hardly be fair to benchmark it on 23H2 "for the sake of fairness", since the new Intel CPUs don't seem to operate well on the current Windows version. And I doubt it will be fixed by then; 24H2 was available in the Release Preview Channel for at least half a year, so another week and a half doesn't really seem like enough time.
 
I wonder what OS version the coming AMD 9800X3D will be tested on? Since Windows 11 24H2 has certain optimizations for Zen 5, it would hardly be fair to benchmark it on 23H2 "for the sake of fairness", since the new Intel CPUs don't seem to operate well on the current Windows version. And I doubt it will be fixed by then; 24H2 was available in the Release Preview Channel for at least half a year, so another week and a half doesn't really seem like enough time.
I think Microsoft and Intel kinda work together on optimizing the software for the hardware, and even the other way around, before the products are launched.
 
I think Microsoft and Intel kinda work together on optimizing the software for the hardware, and even the other way around, before the products are launched.

Well, I bet they do, but that didn't prevent Intel from releasing Alder Lake with problematic scheduling in late 2021, where Windows didn't really know, on certain occasions, which loads to put on which cores. Those were problems that reviewers mostly glossed over; they used tactics like Process Lasso or disabling the E-cores altogether to get the expected results, leaving users to scratch their heads over why their performance didn't match the reviews.

And now again, the problems with the new Windows version are just a footnote, but that's the version buyers will have to use these processors on.
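For reference, the Process Lasso style workaround mentioned above boils down to pinning the game to the P-core logical CPUs so the scheduler can't land it on E-cores. A minimal psutil sketch, where the process name and core indices are placeholders (the real layout depends on the CPU):

```python
# Sketch of the affinity workaround: restrict a running game to the P-core
# logical CPUs. GAME_EXE and P_CORES are placeholders; the real indices
# depend on the CPU (e.g. an 8P+HT part exposes its P-cores as CPUs 0-15).
import psutil

GAME_EXE = "cyberpunk2077.exe"        # hypothetical process name
P_CORES = list(range(0, 16))          # assumed P-core logical CPU indices

for proc in psutil.process_iter(["name"]):
    name = proc.info["name"]
    if name and name.lower() == GAME_EXE:
        proc.cpu_affinity(P_CORES)    # scheduler may now only use these CPUs
        print(f"pinned PID {proc.pid} to logical CPUs {P_CORES}")
```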
 
At least laptops will benefit the most from these new CPU architectures because of the focus on energy efficiency this time around; we will see once AMD & Intel release more high-performance CPUs from these architectures into those laptops. The Ryzen AI 9 HX 370 pulls 15 watts less and performs 10% better than a 7840HS (20 W vs 35 W) in a Geekbench 6 test.
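Taking those quoted figures at face value, the perf-per-watt gap works out to roughly 1.9x:

```python
# Back-of-envelope using the figures quoted above (taken at face value)
hx370_perf, hx370_watts = 1.10, 20    # ~10% faster at 20 W
hs7840_perf, hs7840_watts = 1.00, 35  # baseline at 35 W
ratio = (hx370_perf / hx370_watts) / (hs7840_perf / hs7840_watts)
print(f"perf/W advantage: {ratio:.2f}x")  # ~1.93x
```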
 
I think we should wait before saying anything about Arrow Lake. Remember, if W11 24H2 hadn't come along, these Arrow Lake gaming numbers would be tied with AMD's regular CPUs like the 9950X, and 5 percent below the 7800X3D. Clearly something is wrong with Arrow Lake; look at the Hardware Unboxed review. The 285K loses to the i5-12600K in some games, like Cyberpunk 2077. I've seen 3 to 4 reviews and there is strange behaviour: in some games there are gains of 10 to 20 percent, but in other games mostly regressions, up to a massive 30 percent. With bigger caches this shouldn't be the case; at least the games that do improve show us there is a problem with Arrow Lake elsewhere. We already know the Meteor Lake tile design made latency worse, but even on Meteor Lake the regression is not this big. If these numbers reflect how Arrow Lake was made, this is one of the worst products I have ever seen. I can't understand how Lunar Lake can be this successful with its core efficiency and latencies on a tile architecture, while Arrow Lake is this much of a flop. Intel, give us answers.
 
I wonder what OS version the coming AMD 9800X3D will be tested on? Since Windows 11 24H2 has certain optimizations for Zen 5, it would hardly be fair to benchmark it on 23H2 "for the sake of fairness", since the new Intel CPUs don't seem to operate well on the current Windows version. And I doubt it will be fixed by then; 24H2 was available in the Release Preview Channel for at least half a year, so another week and a half doesn't really seem like enough time.
To achieve the most meaningful set of results, test conditions should be as equal as possible. There are many posts in this thread about how unfair it was to test Intel's newest CPUs with the same old, slow 6000 MT/s RAM that Zen 4 and 5 use. Well, it showed the performance difference regardless of memory. If Intel were tested at 8000 MT/s, I'd expect Zen 5 to be tested the same way as well.

The data based on Zen 5 with 6000 MT/s memory vs. Intel Ultra 200K with 8000 MT/s would be inaccurate. It would not be a test, it would be a benchmark of two totally different setups.

The same logic applies to the OS. Since Windows 11 is a never-evolving PoS OS that was never supposed to exist in the first place (so Microsoft said), I'd suggest also starting to test hardware on Linux. It doesn't need to be as comprehensive as on Windows, just a few benchmarks and a few games. Doing the same on Windows 10 would be meaningful too, given how big a share this older OS still holds.
 
I wonder what OS version the coming AMD 9800X3D will be tested on? Since Windows 11 24H2 has certain optimizations for Zen 5, it would hardly be fair to benchmark it on 23H2 "for the sake of fairness", since the new Intel CPUs don't seem to operate well on the current Windows version. And I doubt it will be fixed by then; 24H2 was available in the Release Preview Channel for at least half a year, so another week and a half doesn't really seem like enough time.
Obviously 24H2, since it has been in the public channel for a while now. Testing 23H2 with Intel was a courtesy, since it is a version that is disappearing.

To achieve the most meaningful set of results, test conditions should be as equal as possible. There are many posts in this thread about how unfair it was to test Intel's newest CPUs with the same old, slow 6000 MT/s RAM that Zen 4 and 5 use. Well, it showed the performance difference regardless of memory. If Intel were tested at 8000 MT/s, I'd expect Zen 5 to be tested the same way as well.

The data based on Zen 5 with 6000 MT/s memory vs. Intel Ultra 200K with 8000 MT/s would be inaccurate. It would not be a test, it would be a benchmark of two totally different setups.

It makes no sense. Imagine if one platform had a 3-channel controller and another a 2-channel controller (which has happened in the past). Are you essentially saying to use 2 sticks even in the 3-channel platform to be fair? That doesn't make sense.

What does make sense is to test each platform under their ideal conditions and then point out that 8000 MT/s RAMs cost more, so you also have to consider their price in the cost/performance ratio.
 
Can you link the page?
BUT it seems they removed the E-core latency part from the review, IDK why. Only the P-core latency part remained.
 
Obviously 24H2, since it has been in the public channel for a while now. Testing 23H2 with Intel was a courtesy, since it is a version that is disappearing.



It makes no sense. Imagine if one platform had a 3-channel controller and another a 2-channel controller (which has happened in the past). Are you essentially saying to use 2 sticks even in the 3-channel platform to be fair? That doesn't make sense.

What does make sense is to test each platform under their ideal conditions and then point out that 8000 MT/s RAMs cost more, so you also have to consider their price in the cost/performance ratio.
You're right, but TPU's approach is the best of both worlds: a like-for-like test at launch, and a memory scaling article a little while later, as W1z mentioned. It's obvious to anyone who read the recent Zen memory scaling article that the AM5 platform doesn't effectively benefit from super fast RAM, since the gear ratios ruin latency, whereas it seems ARL does.
Now that DDR5 memory has matured we're looking into faster memory options for the next full retest of the 40+ CPUs in our test group. For Arrow Lake specifically, you have to consider that this platform has excellent memory support with speeds in excess of 8000 MT/s, without any additional dividers. On AMD, beyond 6400 MT/s you'll have to run at a lower memory controller ratio, which needs a significantly higher memory clock to make up for the increased latency. On Arrow Lake, a higher divider is only needed for 8800+ MT/s, which means memory scales much better. I am working on a full memory scaling article, but for now, I've included some quick testing in the review for you to get an idea what to expect. The numbers confirm that 6000 CL36 isn't monumentally slower than 6000 CL30, but that there are good gains to be had with memory clocked at the limit of what's possible today.
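For anyone wondering what the CL36 vs CL30 difference amounts to in absolute terms, here's the quick first-word-latency arithmetic. Note this ignores the controller/gear-ratio effects W1z describes, so it's only part of the picture, and the timings are typical retail kits rather than specific products:

```python
# First-word (CAS) latency in nanoseconds: CL cycles at a clock of MT/s / 2.
# Timings below are typical retail kits, not specific products.
def cas_ns(mts: int, cl: int) -> float:
    return cl * 2000 / mts

for mts, cl in [(6000, 30), (6000, 36), (8000, 38)]:
    print(f"DDR5-{mts} CL{cl}: {cas_ns(mts, cl):.1f} ns")
# DDR5-6000 CL30: 10.0 ns, DDR5-6000 CL36: 12.0 ns, DDR5-8000 CL38: 9.5 ns
```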
 
So it's an "AI" chip and gamers can **** off. Got it.
 
but that's the version buyers will have to use these processors on.
also @oZ65

24H2 = red bar:
 
I don't know if you tested the whole gaming suite (at all resolutions) under 24H2, @W1zzard, but if you did I think it would be interesting for the results to be consolidated in the Relative Performance and Average FPS graphs.
Says right at the top of the graph, average FPS.
 
Says right at the top of the graph, average FPS.
Says right on top of it, Baldur's Gate 3. To make myself clear, I meant the whole test suite's Average FPS graphs.
 
I will say we still need time with this new platform; however, the more I have sat on this and read the review and the specs, the less and less impressed I get. Now, I am not talking about the updates to the platform's features, and I still can't wait to see some of the individual core changes and such; I mean just the basics like power consumption and performance.

I mean, on average it seems like the chip shaves off ~45 watts depending on the test, which sounds cool until you realize the turbo is 200 MHz lower and the max turbo is 500 MHz lower. When you think about it like that, it seems like they are just not pushing the chip as hard as they were before. Now let's be frank, the i9-14900KS was a hot beast and hard to keep cool, so this is much nicer overall for the average user, but this feels like a generation that's going to be skipped by a lot of people.
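The size of the power drop from a fairly small clock cut isn't that surprising if you apply the textbook dynamic-power approximation (power scales roughly with frequency times voltage squared, and voltage climbs steeply at the very top of the V/F curve). The operating points here are hypothetical, just to show the shape of it:

```python
# Textbook approximation, not measured data: dynamic power ~ f * V^2.
# Operating points are hypothetical, just to show why trimming the last few
# hundred MHz (and the voltage that comes with them) saves so much power.
def dyn_power(freq_ghz: float, volts: float) -> float:
    return freq_ghz * volts ** 2      # arbitrary units

p_top = dyn_power(6.0, 1.40)          # KS-style max turbo point (assumed)
p_cut = dyn_power(5.5, 1.25)          # ~500 MHz lower, lower voltage (assumed)
print(f"relative dynamic power: {p_cut / p_top:.2f}")  # ~0.73
```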
 
Says right on top of it, Baldur's Gate 3 1920x1080.
Really man? The entire graph header has information.
Higher resolution graphs are the next few pages in the review.

Sry man, I ninja'ed you. "I meant to say the whole test suite's Average FPS graphs."
I understand better what you're asking now. Sorry bout that.
 
Reliable leakers say the Lion Cove team had to abandon HT because they couldn't get it done in time. The P-core team is executing badly.

There was a Tesla slide about how an 8-wide decoder is supposedly impossible on x86, yet Lion Cove did it. However, they only got about a 10% per-clock gain out of it. It couldn't have been easy. Why did they do it? Hubris? Falling short of expectations? A bit of both? David Huang's tests say that the branch predictor regressed compared to its predecessor, Golden Cove. The branch predictor is a very important part and could even be indicative of the team's skill.
It was first reported a long time ago that HT was being removed; I don't think it's ever coming back on Intel now. The advantages are very questionable, there are lots of disadvantages, and they have a replacement for it in the workloads where HT shined.
 