
Intel "Bartlett Lake-S" Gaming CPU is Possible, More Hints Appear for a 12 P-Core SKU

I would have loved such a CPU a few years ago. But now that we've got news of Zen 6 potentially having a 12-core CCD, it's a bit too late.
 
The fact remains that in most real-world scenarios, especially desktop usage with applications (typically plus a browser in the background) and various background processes, one P-core easily outperforms one group of four E-cores. There are exceptions, but those are mostly either edge cases or purely synthetic.
In those real-world scenarios you don't just have 4 E-cores, you have 8 P-cores as well, so both the hybrid and the non-hybrid CPU perform the same.
 
We have an official Intel source (from the Linux kernel) confirming Bartlett Lake is Raptor Cove:

I have serious doubts about LGA 1851 compatibility. Meteor/Arrow Lake's platform differs significantly from LGA 1700.
I hope for Nova Lake = LGA 1851 and native VVC (a.k.a. H.266) support.
 
<snip>

So kind of the opposite approach to setting affinity; interesting, but arguably yet another "expert-level" tweak to make a system usable. (I would like it, of course, but I'd also like to be able to configure memory and IO cache too…)
If a program is problematic but not super sensitive to responsiveness, you can encapsulate it in a VM and still run it "seamlessly" on your desktop, BTW.
But if you're running into these kinds of issues, you might as well just build a massive workstation with a mighty CPU and loads of RAM…

<snip>


But explain this to me:
Even if we assume Intel carves out a consumer SKU from the 12-core Bartlett Lake, and assuming there are no architectural benefits here, in the best case it will perform slightly worse than Raptor Lake in gaming (as it's the absurdly high boost that gives Raptor Lake an edge in gaming), and somewhat better in various productivity workloads, of course. But a 12-core model will almost certainly have lower base clocks, lower effective clocks in mixed threaded workloads, and presumably slightly less aggressive boosting than the "problematic" Raptor Lake. So what are you really gaining here? And how is this actually better than a 285K?

If the cores are significantly lower clocked I won't buy it, but I don't think they will be.

On the affinity thing, yeah, we kind of have it already with affinity, but I suppose I mean it from an OS base-level perspective: some apps can override user-set affinity since they set it themselves when fired up, and affinity always has to be configured as an override, not as a default behaviour. So by default software only sees P-cores, but E-cores can be whitelisted for e.g. gcc, Hyper-V, svchost, OBS, Cinebench, and so on.
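For reference, this is roughly what today's "override" approach looks like in practice; a minimal sketch using Python and psutil, assuming the usual Alder/Raptor Lake enumeration where the P-core threads come first (logical CPUs 0-15 on an 8P+8E part), and with the executable name as a placeholder:

```python
# Hypothetical example: launch a program and immediately restrict it to P-cores.
# Assumes logical CPUs 0-15 are the P-core threads on an 8P+8E chip -- verify
# the numbering for your SKU. A program that sets its own affinity at startup
# can still override this, which is exactly the limitation described above.
import subprocess
import psutil

P_CORE_CPUS = list(range(16))                 # assumed P-core logical CPUs

proc = subprocess.Popen(["some_game.exe"])    # placeholder executable
psutil.Process(proc.pid).cpu_affinity(P_CORE_CPUS)
```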

I would have loved such a CPU a few years ago. But now that we've got news of Zen 6 potentially having a 12-core CCD, it's a bit too late.
It is late, but still better late than never. AMD released late CPUs for AM4 and people appreciated it; if I can slot in a new CPU without a board swap then I will do that.
Historically I didn't mind buying new motherboards, but nowadays they are a lot more expensive and are regressing in capability: every new generation has fewer PCIe slots and less SATA.
 
On the affinity thing, yeah, we kind of have it already with affinity, but I suppose I mean it from an OS base-level perspective: some apps can override user-set affinity since they set it themselves when fired up, and affinity always has to be configured as an override, not as a default behaviour. So by default software only sees P-cores, but E-cores can be whitelisted for e.g. gcc, Hyper-V, svchost, OBS, Cinebench, and so on.
I absolutely see benefits from such features, probably for web browsers more than anything, as before you know it they will gobble up all the cores and memory they see, and run hundreds if not thousands of threads until even a powerful workstation slows to a crawl. So for me, reserving some cores (e.g. 4-6 cores if I had 12+ of them) and ~32 GB of RAM for the browser, and only giving it medium scheduling priority, would probably lead to a better user experience; see the sketch after this post. (This is part of the reason why I use two computers when working on my projects.)

But such features would have to work differently on low-power laptops, as 2 P-cores may be too few to run all applications well.
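Something close to that reservation can already be hacked together per process; a rough sketch with Python and psutil (the browser process name and reserved core list are just illustrative, and capping RAM would need Job Objects or cgroups, which isn't shown):

```python
# Confine all running Firefox processes to a reserved set of cores and drop
# their scheduling priority. Core numbers and the process name are examples.
import psutil

RESERVED_CPUS = [12, 13, 14, 15]   # e.g. the last four logical CPUs

for p in psutil.process_iter(["name"]):
    name = (p.info["name"] or "").lower()
    if name.startswith("firefox"):
        try:
            p.cpu_affinity(RESERVED_CPUS)
            # Windows priority class; on Linux use a positive nice value instead
            p.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass
```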
 
On the affinity thing, yeah, we kind of have it already with affinity, but I suppose I mean it from an OS base-level perspective: some apps can override user-set affinity since they set it themselves when fired up, and affinity always has to be configured as an override, not as a default behaviour. So by default software only sees P-cores, but E-cores can be whitelisted for e.g. gcc, Hyper-V, svchost, OBS, Cinebench, and so on.
Are you sure this is specific to the aforementioned software, or is it a matter of the Windows scheduler plus Intel's hardware-assist (Thread Director)?
I'm saying that because on Linux, GCC and OBS (which uses FFmpeg underneath) do make use of both P and E cores.
On macOS with Apple Silicon, which also has a hybrid design, GCC, Clang and OBS are also able to make use of P and E cores without issues. The only kind of software that seems to be stuck on the P-cores is anything that has to do with virtualization, such as Docker, so any container I run, even if it's really CPU-intensive, will only be able to use the P-cores.
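If anyone wants to eyeball that themselves, a quick sketch that just watches per-logical-CPU load while a compile or encode runs (Python + psutil; which indices map to P- vs E-cores is an assumption you'd need to check for your chip):

```python
# Print per-logical-CPU utilization once per second while a heavy job runs,
# so you can see whether the E-core indices actually light up.
import psutil

while True:
    loads = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" ".join(f"{i}:{l:5.1f}%" for i, l in enumerate(loads)))
```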
 
Are you sure this is specific to the aforementioned software, or is it a matter of the Windows scheduler plus Intel's hardware-assist (Thread Director)?
I'm saying that because on Linux, GCC and OBS (which uses FFmpeg underneath) do make use of both P and E cores.
On macOS with Apple Silicon, which also has a hybrid design, GCC, Clang and OBS are also able to make use of P and E cores without issues. The only kind of software that seems to be stuck on the P-cores is anything that has to do with virtualization, such as Docker, so any container I run, even if it's really CPU-intensive, will only be able to use the P-cores.
I think you have misunderstood me. I am talking about hiding E-cores so that software where you want P-cores only is assured to only ever use P-cores, as a default behaviour; not about making compilers and OBS use E-cores, which they already do.
 
I think you have misunderstood me. I am talking about hiding E-cores so that software where you want P-cores only is assured to only ever use P-cores, as a default behaviour; not about making compilers and OBS use E-cores, which they already do.
Ohhh, I understood you said they already do so, sorry!

Going back to your original idea, isn't the major issue with games, or with some really specific software that someone mentioned before?
Most software should do fine just being allowed access to all cores (the current behavior): MT stuff will take advantage of the extra cores, and simpler stuff like browsers shouldn't really care which core it's running on. I only see the issue with specific stuff that can make use of some cores but isn't embarrassingly parallel, where being kept on P-cores only makes sense, and that often ends up being just games.

The opposite of your idea is what's in place for both Intel and AMD, where they try to shove things like games onto only the P/X3D cores, while all other software can run at will across all cores. I do believe this is the right approach for general usage, but I get that it's still not ideal for games.
 
It is late, but still better late than never. AMD released late CPUs for AM4 and people appreciated it; if I can slot in a new CPU without a board swap then I will do that.
Historically I didn't mind buying new motherboards, but nowadays they are a lot more expensive and are regressing in capability: every new generation has fewer PCIe slots and less SATA.
It is too late for me. If this thing had released along with, or shortly after, Alder Lake, then I would have got one to replace my Rocket Lake system instead of swapping to AMD with AM5. Now I'd need a new motherboard and RAM, which (already being on AM5) isn't worth it.
 
Ohhh, I understood you said they already do so, sorry!

Going back to your original idea, isn't the major issue with games, or with some really specific software that someone mentioned before?
Most software should do fine just being allowed access to all cores (the current behavior): MT stuff will take advantage of the extra cores, and simpler stuff like browsers shouldn't really care which core it's running on. I only see the issue with specific stuff that can make use of some cores but isn't embarrassingly parallel, where being kept on P-cores only makes sense, and that often ends up being just games.

The opposite of your idea is what's in place for both Intel and AMD, where they try to shove things like games onto only the P/X3D cores, while all other software can run at will across all cores. I do believe this is the right approach for general usage, but I get that it's still not ideal for games.
There are a couple of games out there that perform worse with E-cores on, but that's really the extreme exception. The only one I remember off the top of my head is that Warhammer online game, and that's only an issue on Alder Lake; it works better with E-cores on Raptor Lake (weird, I know). Other than that, the majority of games benefit from E-cores one way or another (averages or lows).
 
There are a couple of games out there that perform worse with E-cores on, but that's really the extreme exception. The only one I remember off the top of my head is that Warhammer online game, and that's only an issue on Alder Lake; it works better with E-cores on Raptor Lake (weird, I know). Other than that, the majority of games benefit from E-cores one way or another (averages or lows).
Is that with or without the Thread Director (or whatever it's called) software installed?
 
It is too late for me. If this thing had released along with, or shortly after, Alder Lake, then I would have got one to replace my Rocket Lake system instead of swapping to AMD with AM5. Now I'd need a new motherboard and RAM, which (already being on AM5) isn't worth it.
Don't think I would move from AM5 to this chip either.
 
Don't think I would move from AM5 to this chip either.

Nobody in their right mind would. But the fact remains that Raptor Cove is still a very capable core, and this would be a great drop-in replacement/upgrade for owners of Z690 and Z790 motherboards.
 
My big question here is whether they'll leave AVX-512 on such a CPU, if they make it available.

I wouldn't leave AM5 if I had it. But going from an i7-13700K might be interesting: a drop-in on my ROG STRIX Z690-G mATX board, and a one-generation life extension.
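On the AVX-512 question, a quick way to see what a given chip actually exposes; a minimal, Linux-only sketch that just looks for the avx512f flag in /proc/cpuinfo:

```python
# Check whether the running CPU advertises AVX-512 Foundation (Linux only).
# Current hybrid Alder/Raptor Lake parts report no avx512 flags since the
# feature is disabled there; the question is whether Bartlett Lake brings it back.
with open("/proc/cpuinfo") as f:
    has_avx512 = any("avx512f" in line for line in f if line.startswith("flags"))

print("AVX-512F:", "yes" if has_avx512 else "no")
```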
 
Intel should just drop all P-cores and increase E-core counts/speeds; that would be more power efficient, plus 6 GHz could be achievable with less wattage...
 
Intel should just drop all P-cores and increase E-core counts/speeds; that would be more power efficient, plus 6 GHz could be achievable with less wattage...
That wouldn't work very well for lightly threaded loads like gaming.
 
we have Ryzens, come on.. oh, yeah, again, the RAM issues, lmfao

What RAM issues?

Intel should try to market their CPUs differently. Make a CPU with 8 cores max and performance that matches the 9800X3D or above.

Then bring to market another with maybe only 32 efficient cores for work.

Make the price right, and see if the market will accept it. Maybe they can turn the shitshow around like that.
 
What RAM issues?

Intel should try to market their CPUs differently. Make a CPU with 8 cores max and performance that matches the 9800X3D or above.

Then bring to market another with maybe only 32 efficient cores for work.

Make the price right, and see if the market will accept it. Maybe they can turn the shitshow around like that.
My 7500F with an A620 board and 2×8 GB DDR5-6000 CL36 took about a minute to POST (on the latest BIOS at the time, blah blah blah...); currently I have a 14600K with DDR4 which boots in seconds... :rolleyes:
 
My 7500F with an A620 board and 2×8 GB DDR5-6000 CL36 took about a minute to POST (on the latest BIOS at the time, blah blah blah...)
I suppose you didn't have Memory Context Restore enabled in the BIOS.
 
I suppose you didn't have Memory Context Restore enabled in the BIOS.
don't remember if it was there.. shitabyte mobo. A620..:oops:
 
My 7500F with an A620 board and 2×8 GB DDR5-6000 CL36 took about a minute to POST (on the latest BIOS at the time, blah blah blah...); currently I have a 14600K with DDR4 which boots in seconds... :rolleyes:

Remember that being a problem at launch. Thought AMD fixed that. Anyway, I'll wait for AM6 before upgrading from AM4.
 
Remember that being a problem at launch. Thought AMD fixed that. Anyway, I'll wait for AM6 before upgrading from AM4.
Yes, they fixed it.
 