
Intel Core Ultra 9 285K

That's a good point. However, even other applications don't scale as well as they should. Take NAMD, which should scale pretty well because of the doubled vector throughput of the new E cores, yet the uplift is relatively meager.

Those types of simulations are always bandwidth limited and don't care about vector throughput. They only did when we were at 128-bit with 1 or 2 cores. It is the same when using GPUs.

I'd say NAMD's 14% uplift is especially impressive, because it is above what one usually gets from bandwidth-limited tests.

Now the question is whether, by the time this ramps, they can sell it at a profit and for less. If packaging drives up the cost too much, it is not worth the tradeoff.
 
Still surprised AMD didn't make a monolithic X3D CPU based off these APUs, instead of using the chiplet models with their inherent issues.
Because monolithic chips are expensive to make as Intel has found out.
 
So in other words, AMD is still king. Too bad Intel cannot make a pure 16-core monster like AMD.
That is where Intel is hurting. E cores are great for the average business user working at the office, but for gaming and power users, 16 full cores will always win.
 
Yep, the IF is showing signs of being unable to scale.
It's essentially the same with Zen 5 as it was with Zen 3. On Zen 3 you could get 1800 MHz IF on pretty much any sample, and 2000 MHz on good ones.

Zen 4/5 is higher, right? Since you're running 6000 MT memory "in sync", but the trickery is that it's actually a 1.5:1 "sync". The IF runs at 2000 MHz and the memory runs at 3000 MHz, and this is considered "in sync". Compared to Zen 3, where both IMC and memory would run at 1800 MHz or 2000 MHz.

Derbauer getting 8800 MT without going into gear 4 is promising for the ARL memory controller/platform. Plus the ARL memory controller allows for more granularity, with sub-channels on the memory. Whereas RPL had one MC channel going to 2x40 bit sub-channels on the memory stick, ARL allows one channel addressing sub-channel 1 on both sticks, and the other channel addressing sub-channel 2 on both sticks. I wonder if this means changing a value in each sub-channel could be done in one cycle, rather than taking two cycles with older MC.
 
Unless you're going full unified memory, X3D on APUs will probably not work as well as you'd hope!
Zen 4/5 is higher, right? Since you're running 6000 MT memory "in sync", but the trickery is that it's actually a 1.5:1 "sync". The IF runs at 2000 MHz and the memory runs at 3000 MHz, and this is considered "in sync". Compared to Zen 3, where both IMC and memory would run at 1800 MHz or 2000 MHz.
You mean how the CPU or other "buses" on the PC work?
 
Intel just wrote AMD a blank cheque to charge us whatever it wants for 7000x3D & 9000x3D series :mad:

 
You mean how the CPU or other "buses" on the PC work?
It's trickery because they call both Zen 3 2000/2000 MHz and Zen 5 2000/3000 MHz "in sync", implying it's still 1:1.

Even going up a "gear" to run 8000 MT memory means something like 2000/4000 MHz for Zen 5, changing that "sync" to 2:1 from 1.5:1, from 1:1 with Zen 3, but all three are still "in sync", which is technically true if you don't mind ignoring performance costs.
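The ratios being argued about here reduce to simple arithmetic. A minimal sketch (my own illustration, assuming the clocks quoted above; a DDR rate in MT/s corresponds to a memory clock of half that figure):

```python
def sync_ratio(mt_per_s, fclk_mhz):
    """Return (memory clock in MHz, memory clock : FCLK ratio)."""
    memclk = mt_per_s / 2          # DDR transfers twice per clock
    return memclk, memclk / fclk_mhz

# Zen 3: DDR4-4000 with 2000 MHz IF -> a true 1:1
print(sync_ratio(4000, 2000))      # (2000.0, 1.0)
# Zen 4/5: DDR5-6000 with 2000 MHz IF -> 1.5:1, still marketed "in sync"
print(sync_ratio(6000, 2000))      # (3000.0, 1.5)
# Zen 5 at DDR5-8000 -> 2:1
print(sync_ratio(8000, 2000))      # (4000.0, 2.0)
```

All three cases print a whole- or half-integer ratio, which is why AMD can call each of them "in sync" even though only the Zen 3 case is genuinely 1:1.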

None of this would be necessary if AMD had gone with TSMC's more advanced packaging, as we see with ARL, which uses a proper Foveros interposer that doesn't cap memory and inter-chiplet frequencies or carry a heavy power-draw cost.
 
Looking at the memory scaling results (and the improvements without a power limit), this replicates what Granite Rapids achieved; it seems to have a beast of a memory controller.

To me, it looks like the new Thread director is not playing well with the scheduler in the OS, just like with Alder Lake. We will have to see, there hasn't been a problem free CPU launch in quite some time.
There are two dimensions on which to evaluate a SOC's DRAM performance: bandwidth and latency. This new IMC improves bandwidth though the CUDIMM plays a fairly significant part in that achievement. However, latency seems to be a significant regression.
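The bandwidth side of that tradeoff is easy to put numbers on. A rough sketch (my own figures, assuming a standard dual-channel setup where each channel carries 64 data bits, i.e. 8 bytes per transfer; DDR5 splits each DIMM into 2x32-bit sub-channels, 40-bit with ECC):

```python
def peak_bandwidth_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
    """Peak theoretical DRAM bandwidth in GB/s for the given DDR rate."""
    return mt_per_s * channels * bytes_per_transfer / 1000

print(peak_bandwidth_gbs(6000))  # DDR5-6000 dual channel -> 96.0 GB/s
print(peak_bandwidth_gbs(8800))  # CUDIMM at 8800 MT/s    -> 140.8 GB/s
```

These are theoretical peaks; sustained bandwidth depends on the IMC, and none of it helps if the latency regression dominates the workload.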
 
So here we have the second beta CPU launch of 2024. I wonder if all the scheduling changes that Intel got into Windows 11 have come to bite them, now that they don't have HT anymore. But they have at least mostly solved the energy inefficiency problem, which really was their Achilles heel; seems that Skylake has finally, mercifully been laid to rest.

As for platform connectivity they have AMD beat hands down; dual Thunderbolt 4 40Gbps links without having to sacrifice any PCIe lanes is fantastic, although lack of built-in WiFi 7 is disappointing. Sure, Zen 4/5 has 4 more PCIe 5.0 lanes but I think we can all agree that nobody gives a fuck about PCIe 5.0 (and with AMD's 800-series chipsets you lose those extra lanes to USB4 anyway). Will be interesting to see how Z890 boards will be kitted out - hopefully some dual x16 slot models will make an appearance.
 
It's trickery because they call both Zen 3 2000/2000 MHz and Zen 5 2000/3000 MHz "in sync", implying it's still 1:1.

Even going up a "gear" to run 8000 MT memory means something like 2000/4000 MHz for Zen 5, changing that "sync" to 2:1 from 1.5:1, from 1:1 with Zen 3, but all three are still "in sync", which is technically true if you don't mind ignoring performance costs.
Okay, but I guess that's not really an option while they're using the same building blocks for EPYC; memory speeds there will always be much lower, although that could change with something like CUDIMM a few years down the line.
 
What he and multiple people on the forum told you is that the 7900X3D has inefficient topology. That didn't change. The 7950X3D doesn't have the same problem because it's 8+8, and the 9950X3D should finally do away with all major topology issues because it'll have 2 X3D CCDs. Zen 5 X3D will invalidate his point, especially if it's unlocked as rumored.
Lmao, you mean latency, right? How long is one nanosecond? Can you detect 1000 nanoseconds? What you are repeating is what has been constructed. This is not the place to have that discussion though, so for this thread: removing HT was a dumb move from Intel and it shows.
 
Those types of simulations are always bandwidth limited and don't care about vector throughput. They only did when we were at 128-bit with 1 or 2 cores. It is the same when using GPUs.

I'd say NAMD's 14% uplift is especially impressive, because it is above what one usually gets from bandwidth-limited tests.

Now the question is whether, by the time this ramps, they can sell it at a profit and for less. If packaging drives up the cost too much, it is not worth the tradeoff.
Judging by the die sizes and advanced lithography used for the various tiles, it's almost certainly more expensive to manufacture than both the competition's products and their own 14900K. Given Intel's volumes, they should still make money from it.
 
It's not the same. Zen 2 3950X was nearly 2x the rendering performance of the competition (9900K) at the same power.
285K is sometimes +10% and sometimes -10% (depending on the renderer) at the same power.

Remember what Zen 2 looked like

Here's what the 285K looks like

The 3950X was an impressive step forward in MT even if it didn't translate to gaming. 285K is a small step forward in MT and it also doesn't translate to gaming.

So for one, the correct comparison is the 10900K.

No sane person uses a CPU for rendering, and hasn't for about a decade. To wit, most people were not getting a 3900X or 3950X; the vast majority were getting a 3600/3600X, for gaming.

This was with a 2080Ti.




It got significantly worse when the 3080/3090 came out. So much so that most review sites actually stopped including Zen 2 chips in benchmarks soon after Zen 3 came out. They of course would not want anyone to notice that their 'best gaming' CPU of 2019/2020 was a dud 6-18 months later.

Zen 2 popularity with gamers is an interesting study in the power of propaganda and cognitive dissonance.
 
Love the Dutch saying about Intel... CUDIMM and other things are cope, cope, cope. Further out, DDR6 isn't far off; CUDIMM may be interesting.
 
Lmao, you mean latency, right? How long is one nanosecond? Can you detect 1000 nanoseconds? What you are repeating is what has been constructed. This is not the place to have that discussion though, so for this thread: removing HT was a dumb move from Intel and it shows.

No, I do not mean latency. I mean topology: where the CPU's execution resources are physically located. This is why the 7900X3D is amongst the slowest of all Zen 4 Ryzens, when it clearly has the goods. Also don't underestimate cycle penalties; they multiply across the millions, if not billions, of cycles in CPUs that operate in the 4-5 GHz range.
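A back-of-the-envelope sketch of why fixed cycle penalties matter at these clocks (my own illustrative numbers, not measurements from this thread; the 50-cycle cross-CCD penalty and event rate are hypothetical):

```python
def stall_fraction(extra_cycles, freq_ghz, events_per_sec):
    """Fraction of each second lost to a fixed per-event cycle penalty."""
    ns_per_cycle = 1.0 / freq_ghz                   # ~0.2 ns per cycle at 5 GHz
    stall_ns = extra_cycles * ns_per_cycle * events_per_sec
    return stall_ns / 1e9                           # 1 second = 1e9 ns

# A hypothetical 50-cycle cross-CCD penalty at 5 GHz, hit 10 million
# times per second, eats roughly 10% of the wall clock:
print(round(stall_fraction(50, 5.0, 10e6), 3))      # 0.1
```

Each individual hit is only about 10 ns, far below anything a human can perceive, but at millions of events per second the aggregate becomes a measurable chunk of runtime.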
 

PCIe Gen 5 SSD without compromising GPU bandwidth
Don't get it... why is that special?
There are many X670 and X870 mobos with the same feature; it is "basic" on AMD's top tier, and you may get it even with a simple B650...
 
So for one, the correct comparison is the 10900K.

No sane person uses a CPU for rendering, and hasn't for about a decade. To wit, most people were not getting a 3900X or 3950X; the vast majority were getting a 3600/3600X, for gaming.

This was with a 2080Ti.



It got significantly worse when the 3080/3090 came out. So much so that most review sites actually stopped including Zen 2 chips in benchmarks soon after Zen 3 came out. They of course would not want anyone to notice that their 'best gaming' CPU of 2019/2020 was a dud 6-18 months later.

Zen 2 popularity with gamers is an interesting study in the power of propaganda and cognitive dissonance.
You are the one who brought up rendering and efficiency. Zen 2 was way ahead in both. 285K is only marginally ahead in rendering. Neither are good gaming chips.

10900K launched Spring 2020. Zen 2 about 9 months earlier. First impressions matter. Zen 2 was significantly discounted throughout most of its lifespan. Zen 2 was still more efficient than the 10900K.

Crying "bwaaah, why aren't people consistent" doesn't make sense. ARL is 5% faster than the competition in MT at launch. The 3950X was about 90% faster than the competition in MT at launch. And in gaming, neither was impressive.

But if you were in a position to compromise a bit on gaming for much more MT then the 3950X made sense. It doesn't make sense for the 285K. Just buy a 14900K or 7950X3D or wait for a 9950X3D. There is no compelling use case for it.
 
The problem with some Zen 5 SKUs was that they were tuned for low consumption; the arch itself isn't bad at all.

I also wondered about all that hype for Arrow Lake without any reliable information, leaks, or anything. I guess improving on those self-destructive 13th/14th gen parts was the most important thing.
If power consumption is the key point, I think it's still way too high given that Intel is using a more advanced node, removed SMT, and made some big claims about efficiency.
Because the previous gen was cranked past the limit, the efficiency of this gen doesn't seem very impressive. Gamers Nexus also reported the 285K having instability and some bad frametiming. Somehow Intel managed to screw up this launch, and my point is that Intel isn't getting the same treatment AMD did for a buggy launch.
There are also other problems that make the whole platform seem disappointing: the socket bending still isn't fixed unless you go for a more expensive motherboard, Z890 might not get an Arrow Lake refresh either, and Intel claimed their CPUs were "on par" with the previous gen and Ryzen, but that doesn't seem to be the case at all.
 
PCIe Gen 5 SSD without compromising GPU bandwidth
Don't get it... why is that special?
There are many X670 and X870 mobos with the same feature; it is "basic" on AMD's top tier, and you may get it even with a simple B650...
Because it doesn't have anything to do with AMD.

It's an improvement over Alder/Raptor Lake where if you wanted PCIe Gen 5 for SSD, you had to eat into GPU lanes.
 