
Intel "Nova Lake-S" Core Ultra 3, Ultra 5, Ultra 7, and Ultra 9 Core Configurations Surface

Depending on the price, the Core 7 looks to be the sweet spot and the Core 5 the value-for-money pick spec-wise… can't wait to see how these perform.
 
All I hope for is for them to bring back AVX512 across their entire stack.
Normies won't have a use for that, except 7-Zipping all day long...

Normies won't have a use for that, except 7-Zipping all day long...
Emulation benefits too. RPCS3 is practically unplayable without it.

I really hope these deliver, a CPU war would drive the platform prices down, they've been creeping up for a while.
Agreed, the 265K going down to $230 was actually exciting. Enough with $400+ processors.
 
I feel Intel has no focus, it just switches direction constantly. I also feel getting rid of Gelsinger was a mistake... but time will tell.
I also think getting rid of Gelsinger was a huge mistake. That being said they needed to make cuts and I don't think he was a cutter.

Agreed, the 265K going down to $230 was actually exciting. Enough with $400+ processors.

I actually like the 265K - with the overclocks, and for the $199 I got it for, it's a nice chip. The power consumption is still meh, but performance-wise it's not terrible -- if they can fix the latency / add adamantium cache to it. As is, it's pretty good - the platform is nice, still a cut above my B850.
 
This could be good or it could be eh, depending on the latency between the chips and how slow the access to the L3 is. The nicest thing about this is that the i5 is at parity with what an i9 is now.
 
Emulation benefits too. RPCS3 is practically unplayable without it.
How huge is the difference? So you mean to say the ones who buy Ryzen are mostly emu guys and 7-Zipping guys? (Though I use my ROG Ally for that; the desktop is, as usual, another story.)
This could be good or it could be eh, depending on the latency between the chips and how slow the access to the L3 is. The nicest thing about this is that the i5 is at parity with what an i9 is now.
How are you even perceiving latency when you don't have a meter for it? I have the 285K and when gaming (COD) I never feel its 66ns latency lagging behind my RPL's 50ns, or my Zen5 X3D's 62ns (Level 2) latency.
 
How huge is the difference? So you mean to say the ones who buy Ryzen are mostly emu guys and 7-Zipping guys? (Though I use my ROG Ally for that; the desktop is, as usual, another story.)
It's not big (10-15%). If it's unplayable without AVX, it's unplayable with AVX too. I've been playing GT4 on my 12900K with AVX off - it plays just fine, I don't know why he thinks it's unplayable.
 
No, it's because of the diminishing returns of just adding more cores without additional bandwidth; especially with the marketing tactic of throwing more E-cores on, it seems like more of a stopgap than any real performance increase. Intel getting rid of Jim Keller and the Royal Core project was a terrible mistake.
Unless you are doing very bandwidth-sensitive stuff, I feel like that's a problem that's overblown for the desktop. 8 channels of DDR5 doesn't seem to bring a massive advantage in desktop apps. 16 "real" cores with 8 channels still lose to dual-channel 8+12 marketing cores. Even the 13900K with 24 cores saw a measly 4% perf increase when paired with DDR5-7200, which has much higher bandwidth than DDR4-3600.
Another thing: seeing all the hurdles with just making 4 DIMMs of DDR5 work on the consumer platform, I'm not confident that more channels wouldn't massively increase the platform cost for everyone. (HEDT is never coming back. Not at sane prices. Consumer, workstation, and nothing in between.) Zen 6 and Nova Lake will probably be the last architectures using DDR5 anyway; DDR6 will massively increase the bandwidth. And it sounds like the industry is looking to move away from slots and towards CAMM.
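If you want to sanity-check the bandwidth side of that on your own box, here's a rough STREAM-style "add" kernel sketch in NumPy (purely illustrative; the array size is arbitrary, and Python/NumPy overhead means it understates what a tuned C benchmark would report):

# Rough STREAM "add"-style memory bandwidth estimate using NumPy.
# Arrays are sized well past any L3 cache so the loop is DRAM-bound.
import time
import numpy as np

N = 64_000_000                        # ~256 MB per float32 array, 3 arrays total
a = np.zeros(N, dtype=np.float32)
b = np.random.rand(N).astype(np.float32)
c = np.random.rand(N).astype(np.float32)

best = float("inf")
for _ in range(5):                    # take the best of a few passes
    t0 = time.perf_counter()
    np.add(b, c, out=a)               # a = b + c : read b, read c, write a
    best = min(best, time.perf_counter() - t0)

bytes_moved = 3 * N * 4               # two streams read, one written, 4 B/element
print(f"~{bytes_moved / best / 1e9:.1f} GB/s effective bandwidth")

Run it at different DRAM speeds (or channel counts) and you'll see how little most desktop workloads resemble it; this is the pathological all-streaming case.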
 
Finally something interesting; hopefully the increased core count and cache size for the Ultra 5 K series will do some good for gaming. Also waiting for Zen 6 at the same time, to see what AMD can cook up. (Platform) pricing, performance and power consumption will be crucial.

With Zen 6, AM5 will also be a dead-end platform, same as this (most likely), so overall platform pricing, core count and gaming performance will be the deciding factors for me as to which direction I go from AM4.

It appears the Intel / Nova Lake platform will have more PCIe lanes than AM5, so that's a plus at least.
 
But we are also in an era where the highest core count SKU on the mainstream platform is also the fastest in ST. I don't understand why I'm still seeing so many people acting as if we are still in the X99/X299 era where clock speed tanked hard starting from six cores... But I also don't understand why the top SKU having more cores than most people need seems to be a bother. Just buy a Core i5 with 8 cores for cheaper. Or is there some kind of social status stigma about not having a Core i9?
The value proposition of the "i5"/"i7"/"i9" tiers has varied from generation to generation, but my recommendation is usually for demanding users to go for the highest tier of CPU core performance "within reason", as what makes the CPU "long lived" for normal use is actual core speed, whether it's gaming, productive applications or just heavy web browsing. When the CPU is 3-5+ years old, you'd rather have a slightly faster CPU than more cores for general use. If more cores could offset slower cores, we would all buy used 60+ core Xeons for "future proofing".

Now, if the Arrow Lake refresh has an Ultra 5 245K with more decent clocks, it should be a top seller. But at the moment, the recommendation would probably be the 265K.

No, it's because of the diminishing returns of just adding more cores without additional bandwidth; especially with the marketing tactic of throwing more E-cores on, it seems like more of a stopgap than any real performance increase.
Most of the focus is on either special benchmarks or synthetics in the media. Like how many have chosen a CPU based on a Cinebench score without having the faintest idea what it's actually for? (It's actually a very niche application)

As you're referring to, most loads which properly scale with many cores also need memory bandwidth, which is why I've often called the 12/16-core Ryzen chips "benchmark chips", as they make little sense in real-world use. Proper workstation chips also have lots of thermal headroom, and can sustain (mixed) load on many cores without throttling as much. Real workstation use is often a mix of loads; some threads with high, some medium and some low load, and such workloads are hard to benchmark fairly. Even most productivity benchmarks are just the batch load, not the interactive part you use 99% of the time.

Normies won't have a use for that, except 7-Zipping all day long...
At least Linux users get to enjoy it more and more, both encryption and now CRC are accelerated with AVX-512, which at the very least gives a bit of extra free performance for many loads.
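For anyone curious whether their chip (and kernel) actually exposes those extensions, a quick sketch like this on Linux, just parsing /proc/cpuinfo, lists whatever AVX-512 feature flags are present:

# List the AVX-512 feature flags reported by the kernel (Linux only).
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

avx512 = sorted(flag for flag in flags if flag.startswith("avx512"))
print(", ".join(avx512) if avx512 else "no AVX-512 flags found")

On Zen 4 / Zen 5 you'll typically see avx512f, avx512bw, avx512vl and friends; on current Arrow Lake desktop parts the list comes back empty.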
 
How huge is the difference? So you mean to say the ones who buy Ryzen are mostly emu guys and 7-Zipping guys? (Though I use my ROG Ally for that; the desktop is, as usual, another story.)

How are you even perceiving latency when you don't have a meter for it? I have the 285K and when gaming (COD) I never feel its 66ns latency lagging behind my RPL's 50ns, or my Zen5 X3D's 62ns (Level 2) latency.

Fair question. I have a 265K and initially latency was 90ns+, but after tweaks and XMP it's down to the mid 70s. I think it runs just great for the price I paid.

What I mean is that as Intel has increased cache size over the last few generations, L3 latency has really gotten worse, while AMD doesn't seem to see the same impact. In the latest generation the inter-die interconnects are really downclocked. If they see little latency impact from moving from 36MB of L3 to 144MB, that will be huge. If they can improve the interconnects as well, I think the chip is going to be a winner, but if not, those are going to be major bottlenecks.
 
Most of the focus is on either special benchmarks or synthetics in the media. Like how many have chosen a CPU based on a Cinebench score without having the faintest idea what it's actually for?
You don't need to know what Cinebench does. It uses all of the cores - anything multitasking will do the same thing. You don't need to be rendering scenes to use multiple cores; if you are doing 2-3-4 tasks at a time, even if those tasks individually would only use a couple of cores, since you are running a lot of them ---> more cores are used.
 
You don't need to know what Cinebench does. It uses all of the cores - anything multitasking will do the same thing.
Using a very niche benchmark to extrapolate "generic performance" is at best incredibly foolish, but it is what many are doing.
And no, not everything which saturates many cores scales the same way; that's as far from the truth as it can be. :facepalm:
 
Using a very niche benchmark to extrapolate "generic performance" is at best incredibly foolish, but it is what many are doing.
And no, not everything which saturates many cores scales the same way; that's as far from the truth as it can be. :facepalm:
That's the best way to extrapolate generic performance, since reviews don't usually test multitasking. But go ahead, give me a better method: how else would I extrapolate which CPU is better at, let's say, running a heavy unzip job in the background while playing a game?
 
That's the best way to extrapolate generic performance, since reviews don't usually test multitasking. But go ahead, give me a better method: how else would I extrapolate which CPU is better at, let's say, running a heavy unzip job in the background while playing a game?
Is this really a use case? Is this a practical thing that people do often?

For example, for fun I have redirected Steam to use my iGPU instead of the GPU running my games. I had done that for Discord as well, but it keeps changing the install directory. That said, how many cores are really being used for other things? Even with Plex running, a Docker container with a small, low-CPU-usage item, and a browser with YouTube, I am just not using 20 cores. Maybe sometimes another game client is installing while I'm playing, but that's still not a frequent use case.

I'm not against Cinebench as a whole, and it shows roughly how HandBrake will behave when I use it, but I'm not going to run 20 threads of HandBrake, or run it at all, while I'm gaming.
 
Is this really a use case? Is this a practical thing that people do often?
No, it's not something I do often, but when I want to do it - it's a computer - it should be able to handle it. I frequently run 2 games on the same PC, either with Apollo streaming to my TV so my GF can play whatever while I'm also playing my game on my PC, or while waiting for a queue (e.g. the POE2 launch). Again, not something I do frequently, but it was a bummer my brand new 9800X3D crapped the bed with this kind of stuff when my old 12900K handled it perfectly fine. More cores = more better.
 
With Zen 6, AM5 will also be a dead-end platform, same as this (most likely)
Good point!
Imagine if LGA1954 is not a one-off - what will the AMD camp say to that? If they're equally matched, would Nova Lake be more attractive due to the "upgrade path"? Ahahah, can't wait. :roll:
How huge is the difference?
It's not small:
It says there that AVX-512 is enabled where supported.
Using a very niche benchmark to extrapolate "generic performance" is at best incredibly foolish, but it is what many are doing.
And no, not everything which saturates many cores scales the same way; that's as far from the truth as it can be. :facepalm:
I would agree with this, at least in part, I'll explain:
Cinebench is one program that uses all cores, and for this workload we will have a corresponding CPU performance chart.
However, if we use multiple programs that together would also achieve ~100% utilization, and run scripts/batches for each so that they're all active at the same time, would the CPUs have similar placements in this new performance chart?
If yes, then Cinebench is a good stand-in for everything MT, but if not, then we would need some heavy-duty, complex, time-consuming tests to accurately and realistically differentiate between CPUs.
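A crude way to try that at home is something like the sketch below (Python, with a completely made-up mix of jobs; swap in whatever resembles your own usage) - launch several dissimilar CPU tasks at once and time the whole batch rather than any single one:

# Rough mixed-load timing sketch: run a few dissimilar CPU-bound tasks at once
# and time the whole batch, instead of one homogeneous render like Cinebench.
import hashlib, lzma, multiprocessing as mp, os, time

def hash_task(_):            # integer/crypto-ish load
    data = os.urandom(1 << 20)
    for _ in range(200):
        data = hashlib.sha256(data).digest() * 32768
    return len(data)

def compress_task(_):        # branchy, cache-heavy load
    return len(lzma.compress(os.urandom(8 << 20), preset=6))

def float_task(_):           # simple pure-Python FP loop
    x = 0.0
    for i in range(2_000_000):
        x += (i % 7) * 0.5
    return x

if __name__ == "__main__":
    jobs = [hash_task, compress_task, float_task] * 4   # 12 mixed jobs
    start = time.perf_counter()
    with mp.Pool() as pool:
        results = [pool.apply_async(fn, (None,)) for fn in jobs]
        for r in results:
            r.get()
    print(f"mixed batch finished in {time.perf_counter() - start:.1f}s")

If two CPUs rank the same way here as they do in Cinebench, the shortcut holds; if they rank differently, that's exactly the gap the "niche benchmark" complaint is about.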

/////

Oh and the product stack/segmentation for Nova Lake totally sucks:

Intel has this:
CU 9 - 16P + 32E + 4 LPE
CU 7 - 14P + 24E + 4 LPE
CU 5 (high) - 8P + 16E + 4 LPE
CU 5 (mid) - 8P + 12E + 4 LPE
CU 5 (low) - 6P + 8E + 4 LPE
CU 3 (mid) - 4P + 8E + 4 LPE
CU 3 (low) - 4P + 4E + 4 LPE

When this looks better:
CU 9 - 16P + 32E + 4 LPE
CU 8 - 14P + 28E + 4 LPE
CU 7 - 12P + 24E + 4 LPE
CU 6 - 10P + 20E + 4 LPE
CU 5 - 8P + 16E + 4 LPE
CU 4 - 6P + 12E + 4 LPE
CU 3 - 4P + 8E + 4 LPE

Also the naming scheme is whack; like on Arrow Lake, the second digit should reflect the SKU tier.
 
No, it's not something I do often, but when I want to do it - it's a computer - it should be able to handle it. I frequently run 2 games on the same PC, either with Apollo streaming to my TV so my GF can play whatever while I'm also playing my game on my PC, or while waiting for a queue (e.g. the POE2 launch). Again, not something I do frequently, but it was a bummer my brand new 9800X3D crapped the bed with this kind of stuff when my old 12900K handled it perfectly fine. More cores = more better.
Wow, I have never considered the 2-games idea; I guess I just figured it wouldn't work. I'd love to understand how you have that set up.
 
Normies won't have a use for that, except 7-Zipping all day long...
Maybe, but I don't care about that. I was talking for myself.
At least Linux users get to enjoy it more and more, both encryption and now CRC are accelerated with AVX-512, which at the very least gives a bit of extra free performance for many loads.
Anyone who works with NumPy and other data-related stacks will also see a really huge performance uplift.
My main motivators for upgrading from a 5950X+128GB to a 9950X+256GB were doubling the RAM amount, followed by the extra performance uplift provided by full-blown AVX-512.
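For a rough sense of what that looks like in practice, here's a minimal NumPy timing sketch (the matrix size is arbitrary, and how much AVX-512 actually helps depends on whether your NumPy/BLAS build dispatches AVX-512 kernels); dense matmuls and reductions are exactly the kind of work where the wide vectors pay off:

# Minimal NumPy timing sketch: a large matmul plus a reduction, the kind of
# dense vectorizable work where AVX-512-enabled BLAS/NumPy kernels tend to help.
import time
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4096, 4096), dtype=np.float32)
b = rng.standard_normal((4096, 4096), dtype=np.float32)

t0 = time.perf_counter()
c = a @ b                      # dense GEMM, handled by the BLAS backend
s = np.sum(np.abs(c))          # simple vectorized reduction on top
dt = time.perf_counter() - t0
print(f"4096x4096 float32 matmul + reduction: {dt:.2f}s (checksum {s:.3e})")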

I may upgrade to Zen 6 if it gives a core count bump and comes with proper CUDIMM support so I can drive my current sticks faster.
Whenever DDR6 comes with higher capacities (so I can double up my RAM amount again), I'd hopefully have more options to choose between AMD and Intel, since I'd need to change platforms anyway.
 
Wow, I have never considered the 2-games idea; I guess I just figured it wouldn't work. I'd love to understand how you have that set up.
Apollo is a Sunshine fork (https://github.com/ClassicOldSong/Apollo) that creates virtual displays to stream to devices. I'm using it on the Switch (so I stream PC games to my Switch), but you can do it with any device that has the Moonlight app (Moonlight is the client app you have to install), so basically TVs / laptops / tablets / mobile phones etc. Anything with Android / Linux basically works; for other OSes I've no idea. So you connect a gamepad / mouse / keyboard, whatever you prefer, to the client, you run the game, and that's it: your PC keeps working like normal while you are streaming games to someone else.

Well it works "normally" assuming you have the cores to do that.

EG1. It's not just for games of course; since it's a virtual display you can do whatever you want with it - it turns a screen into a computer, basically.
EG2. You can run many instances of Apollo if your PC can handle it, so you can stream to an unlimited number of devices.
 
After complaining about Intel not giving us more than 4 cores for over a decade, now people are complaining about getting too many cores, as if anyone is forcing them to buy the 7/9 chips.
 
After complaining about Intel not giving us more than 4 cores for over a decade, now people are complaining about getting too many cores, as if anyone is forcing them to buy the 7/9 chips.
The most interesting model to me is the Core 3. If the F version stays around ~$100 for 12 or 16 cores and the platform pricing is reasonable, it could be the best option for budget builds, since AMD isn't putting out any standard model below 6c/12t and $200.
 
After complaining about Intel not giving us more than 4 cores for over a decade, now people are complaining about getting too many cores, as if anyone is forcing them to buy the 7/9 chips.

Yeah, it is possible to be wrong in two different ways at two different times. Happens all the time. We'll see what happens with their market share; if AMD keeps on their path and releases very strong 8, 12 and 16 P-core desktop SKUs, the chance that Intel's 50+ core monstrosity is faster in gaming or most other apps is slim to none, unless there are some very interesting architecture improvements none of us are yet aware of.

One reason Intel might be pushing so many cores is that it helps them with yields: maybe only 10% of their dies can be Core Ultra 9s, but when you can disable so many cores with bad transistors without breaking the whole chip, suddenly your yields for mainstream Core 5s are probably close to 100%. Their bean counters might have figured out that even if they lose another 10% of their desktop market share, the lower costs will more than make up for it.
 
I don't buy the rumored specs.

But regardless, their previous 24 core products all ran >200W for all-core workloads. So a 52-core model would be >400W? Who wants that? And how would Intel expect motherboard makers to build economically for a platform that supports everything from tiny 4+4+4 Core 3 chips to the enormous Core 9?
 