
Intel Planning P-core Only "Bartlett" LGA1700 Processor for 2025

If it fixes the degradation problems.... that might be the way.
Regarding the 12p core chip, if it can somehow match the 14700k in MT performance, I'll be damn interested. If it doesn't, it will be meh.
Well of course it won't. That's the whole point of E-cores: they give better multicore performance with less die space.
 
I mean I'm not going to disagree that you can have faster and slower cores, but irrespective of that consoles are still developing around 8 cores and 16 threads so meeting or exceeding that is ideal.

If it fixes the degradation problems.... that might be the way.

Well of course it won't. That's the whole point of E-cores: they give better multicore performance with less die space.

Yup, exactly. The 12P is more to bump up ST and offer a chip that's more affordable. It'll probably be fairly similar to a 14600K in MT, actually; it might be a bit quicker due to some of the changes beyond simply inserting more P cores. I suspect the 14600K will almost assuredly beat the 10P-core version even in spite of that, but I didn't really crunch the math. If I remember right, 4 E cores are similar to 2 P cores in MT, at least with HT; without it, that changes things. If you bump up cache and adjust for IPC and frequency, it probably balances out pretty close anyway in spite of removing HT. The big perk is the much higher ST, which is good for every workload. It's a pretty significant bump.
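The rule of thumb above can be sketched as a quick back-of-envelope calculation. Every ratio here is an illustrative assumption (one E-core at ~0.5x a hyperthreaded P-core in MT, an HT-less P-core at ~0.8x), not a measured figure:

```python
# Back-of-envelope MT throughput comparison. All ratios are assumptions
# for illustration only: a P-core with HT is the 1.0 baseline unit, a
# P-core without HT ~0.8 units, and one E-core ~0.5 units (the
# "4 E-cores ~ 2 P-cores" rule of thumb from the post).
P_HT = 1.0
P_NO_HT = 0.8
E = 0.5

mt_14600k = 6 * P_HT + 8 * E      # 6P+8E layout, HT on
mt_bartlett_12p = 12 * P_NO_HT    # rumored 12P, no HT

print(mt_14600k, mt_bartlett_12p)  # "fairly similar" in MT, as claimed
```

Under these assumed weights the two land within about 5% of each other, which is why cache, IPC, and clock tweaks could plausibly tip it either way.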

I really think Intel should consider doing the opposite of a low-power island: isolate a P core, or maybe all the P cores, with a bit of thermal buffer space to make it easier to boost and/or run at lower voltages, since heat makes it more difficult to run at lower voltages.

I don't know 100% how they would go about that. They could have a cache between the low-power and regular E cores, and another cache between the standard E cores and the P cores. They should probably consider using as many low-power E cores as P cores, so the regular E cores are the higher-density cores balanced in the middle, with access to a bit of shared cache with both the low-power E cores and the P cores. They could get priority access to the cache shared with the low-power cores, but wouldn't get priority access to the cache shared with the P cores.

Something like that might be possible and work reasonably, but it would probably be a bit of an engineering challenge to figure out how to arrange all of it. It would spread the heat out a bit, though, by having a shared cache in between clusters of core types.
 
It will be much better than a 5800X3D though, and up to 12 P cores...
I meant the significance of the 5800X3D launch which has disrupted the 12900K.

Intel plans to do the same, disrupting the 9800X3D by launching BTL in Q3 2025.

If they're launching it in Q3'25, they're aiming for the 9800X3D.

It's about quality. The 12400F (6C12T) performs faster than a 10900K (10C20T).

Having a bit of additional headroom for other multitasking is a nice benefit for a variety of reasons. I'd say in general a PC should strive to either meet console specs or exceed them by 2-4 cores to allow for a bit of additional MT leeway. I'm not about to tell people what's best for them outright, but I think it's pretty reasonable and sensible guidance to consider.
Sure.

If Intel bumps up cache on these a reasonable bit
They should not go lower than 48MB of L3 cache (12MB more than the 36MB we got with Raptor Cove).

past history
They are done with releasing 4C8T Core i7s and 4C4T Core i5s.

That's exactly WHY we need competition, to drive innovation and of course for better prices.

Now, if Intel does not rise again, AMD will become the new Intel (stagnation all over again) so we need both of them strong in order to have a thriving CPU market.

It's really a sign of healthy competition in the CPU market, actually, because if AMD were doing a poor job competing, we'd just be stuck on quad cores with a 50MHz bump on a new socket.
Oh, the good old days™ ;)

If it fixes the degradation problems.... that might be the way.
This^

meeting or exceeding that is ideal.
Sure.
 
I mean I'm not going to disagree that you can have faster and slower cores, but irrespective of that consoles are still developing around 8 cores and 16 threads so meeting or exceeding that is ideal.



Yup, exactly. The 12P is more to bump up ST and offer a chip that's more affordable. It'll probably be fairly similar to a 14600K in MT, actually; it might be a bit quicker due to some of the changes beyond simply inserting more P cores. I suspect the 14600K will almost assuredly beat the 10P-core version even in spite of that, but I didn't really crunch the math. If I remember right, 4 E cores are similar to 2 P cores in MT, at least with HT; without it, that changes things. If you bump up cache and adjust for IPC and frequency, it probably balances out pretty close anyway in spite of removing HT. The big perk is the much higher ST, which is good for every workload. It's a pretty significant bump.

I really think Intel should consider doing the opposite of a low-power island: isolate a P core, or maybe all the P cores, with a bit of thermal buffer space to make it easier to boost and/or run at lower voltages, since heat makes it more difficult to run at lower voltages.

I don't know 100% how they would go about that. They could have a cache between the low-power and regular E cores, and another cache between the standard E cores and the P cores. They should probably consider using as many low-power E cores as P cores, so the regular E cores are the higher-density cores balanced in the middle, with access to a bit of shared cache with both the low-power E cores and the P cores. They could get priority access to the cache shared with the low-power cores, but wouldn't get priority access to the cache shared with the P cores.

Something like that might be possible and work reasonably, but it would probably be a bit of an engineering challenge to figure out how to arrange all of it. It would spread the heat out a bit, though, by having a shared cache in between clusters of core types.
LOL man, it will be faster than the 14600K. Ballpark 13700K +10%, but it depends on the clocks; if pushed to 5.5 GHz like RPL, it can actually surpass the 14700K.
 
I don't know how this degradation matter will unravel; it might snowball out of control for all we know. The scope of it could change readily over time, and we're mostly just starting to see emerging problems, primarily with the 13900K and 14900K SKUs, but others like the 14700K and 13700K are going to be next on the chopping block. The 14700K in particular, more so than the 13700K, due to the additional E cores and binning. If it's due to the ring and the added stress from additional cores and higher SKU binning, they might just be degrading a bit slower, but give it a few more months and they could be in the same scenario.

That said, once you start alleviating some of that added pressure, the ring bus is likely very durable overall in terms of degradation concerns. There might be a general cutoff point between core count, frequency, and binning for what over-stresses it in the first place. I'm concerned it might snowball, and then it's a question of how much it does so! If it does snowball, will it go as far back as some Alder Lake chips that aren't viewed as a problem right now, in like 5 or 6 months?!
 
Exciting CPU for once.
This^

I despise E cores, or ZENc ones for that matter. I mean, we're not on a mobile platform, we're ON desktop.

Performance (or rather, PROPER) cores are all we need, TBH.

High time to set things straight regarding LGA1700 by launching 12C24T Core 9, 10C20T Core 7 and 8C16T Core 5 BTL (PROPER-cores only) processors.

I'm very interested.
To the mad moon!
 
Here's the Bartlett die shot, gals and guys. In case it triggers a déjà vu feeling in any one of you ... naaaah, you can't be anything but wrong.

[attached die shot]
 
Here's the Bartlett die shot, gals and guys. In case it triggers a déjà vu feeling in any one of you ... naaaah, you can't be anything but wrong.

Sapphire Rapids? Where did you get the idea that this is BTL, a CPU to be released more than a year from now?
 
The first interesting Intel CPUs since Coffee Lake
I'd rather disagree. As an i5-12400F owner, I don't see any other way to get this much performance for this little power consumption and this money, whether from other Intel CPUs or from AMD. The 12100F is literally THE no-brainer in the "I don't have much money" segment as well. The problem with Alder Lake is that E-cores weren't particularly great for a long time because of software limitations. Also worth noting the LGA1700 CPUs are geometrically invalid; they should've been square from the very beginning.
 
Sapphire Rapids? Where did you get the idea that this is BTL, a CPU to be released more than a year from now?
Unsold Sapphire Rapids dies. Everything checks out: 15 P cores, 2 DDR5 channels, 32 PCIe 5 lanes, no need for additional chiplets.
 
Unsold Sapphire Rapids dies. Everything checks out: 15 P cores, 2 DDR5 channels, 32 PCIe 5 lanes, no need for additional chiplets.
So you think Intel would disable 3 cores and several PCI-e lanes on an expensive die, somehow fit it in a much smaller package, and give the end product a different code name to sell it cheaper? I find it highly unlikely.
 
So you think Intel would disable 3 cores and several PCI-e lanes on an expensive die, somehow fit it in a much smaller package, and give the end product a different code name to sell it cheaper? I find it highly unlikely.
Maybe that's the least unlikely of all unlikely possibilities.

Even the existence of the Bartlett chip (or plans) is unlikely. A year and a half after the glorious peak, which is also the end, of the LGA1700 story, there comes another glorious peak? Why? And why not on LGA1851? Did Intel paint themselves full green overnight and decide that the platform should live for four years instead of two, to reduce e-waste? Will they have enough free Intel 7 fab capacity in Q3 2025, but not earlier, to afford making this chip? See, there's room for my speculation: are they predicting that they will have a big enough pile of SPR dies with 10-11-12-13 working cores around that time?

Technically a single SPR die (around 20 x 20 mm) can fit (barely) on the LGA1700 package. Yes I know it might just be impossible, the power delivery may not be compatible, SPR dies exist in two mirrored variants so they'd need two different substrates, etc.
 
Maybe that's the least unlikely of all unlikely possibilities.

Even the existence of the Bartlett chip (or plans) is unlikely. A year and a half after the glorious peak, which is also the end, of the LGA1700 story, there comes another glorious peak? Why? And why not on LGA1851? Did Intel paint themselves full green overnight and decide that the platform should live for four years instead of two, to reduce e-waste? Will they have enough free Intel 7 fab capacity in Q3 2025, but not earlier, to afford making this chip? See, there's room for my speculation: are they predicting that they will have a big enough pile of SPR dies with 10-11-12-13 working cores around that time?

Technically a single SPR die (around 20 x 20 mm) can fit (barely) on the LGA1700 package. Yes I know it might just be impossible, the power delivery may not be compatible, SPR dies exist in two mirrored variants so they'd need two different substrates, etc.
Here is a simpler theory:

[attached die mockup]

I don't think Bartlett Lake is meant for another "glorious peak". It's just something for those of us who don't want the E-cores. A side track, or stop gap, kind of, not a whole new generation on its own.

Edit: I used the Alder Lake die in the above example, but if you substitute that with Raptor Lake that has 4 e-core clusters instead of 2, and cut and paste the 4 extra p-cores there, the die size will remain identical.
 
why not on LGA1851?
So that BTL supports DDR4 (alongside DDR5), just like all the other LGA1700 SKUs, which is essential (DDR5 modules ain't that cheap yet, though prices have stabilized a lot).

Now if they release BTL with at least 48MB L3 Cache, they're aiming at the upcoming 9800X3D.

They might wanna increase that amount to 64MB if they'd rather be safe than sorry, although they've always acted skimpy when it comes to cache capacity. They needed AMD for the push!
 
Here is a simpler theory:

I don't think Bartlett Lake is meant for another "glorious peak". It's just something for those of us who don't want the E-cores. A side track, or stop gap, kind of, not a whole new generation on its own.

Edit: I used the Alder Lake die in the above example, but if you substitute that with Raptor Lake that has 4 e-core clusters instead of 2, and cut and paste the 4 extra p-cores there, the die size will remain identical.
Hey, you generously included a big iGPU. I think a reduced one, or none at all, is more likely, also to not overload the ring bus. The target public is either enthusiasts with powerful dGPUs or small workstation/server builders who either have powerful dGPUs too, or don't need more than an Aspeed onboard graphics chip.

So that BTL supports DDR4 (alongside DDR5), just like all the other LGA1700 SKUs, which is essential (DDR5 modules ain't that cheap yet, though prices have stabilized a lot).
I don't want to discuss BTL in the present tense just yet ... well, DDR4 is not at all absolutely necessary, and also far from optimal. People who buy this chip will probably have serious plans to make it work hard. All 12 cores, not just 8. Some highly parallel load, meaning that memory latency doesn't matter much, but bandwidth does, and capacity probably too. Now, if you're making a choice between DDR4-4400 (DIMMs up to 32GB) and DDR5-6400 (DIMMs up to 48GB), what do you do?
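The bandwidth side of that choice is simple arithmetic: each memory channel is 64 bits (8 bytes) wide, and desktop platforms run two channels. A minimal sketch of the peak theoretical numbers:

```python
# Peak theoretical memory bandwidth: transfers/s * bytes per transfer
# per channel * number of channels. Real sustained bandwidth is lower.
def peak_gb_s(mt_per_s: int, channels: int = 2, bytes_per_xfer: int = 8) -> float:
    """Convert a DDR speed grade in MT/s to peak GB/s."""
    return mt_per_s * bytes_per_xfer * channels / 1000

print(peak_gb_s(4400))  # DDR4-4400, dual channel
print(peak_gb_s(6400))  # DDR5-6400, dual channel
```

That works out to roughly 70 GB/s vs 102 GB/s, which is the gap a bandwidth-hungry 12-core workload would feel.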
 
It's a nice-to-have. The 5800X3D disrupted the 12900K's position on a DDR4-3200 based platform, so DDR4 support for the upcoming BTL processors is more than welcome.

I'll take BTL as a decent farewell to the LGA1700 platform.
 
Hey, you generously included a big iGPU. I think a reduced one, or none at all, is more likely, also to not overload the ring bus. The target public is either enthusiasts with powerful dGPUs or small workstation/server builders who either have powerful dGPUs too, or don't need more than an Aspeed onboard graphics chip.
It's the same iGPU as in Raptor Lake, UHD Graphics 770 with 32 EUs. 16 e-cores = 4 p-cores in terms of die area, so there's no change there.

I can do the same drawing using an actual Raptor Lake die shot to demonstrate if you'd like.
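That 4-to-1 area equivalence makes the swap easy to sanity-check with trivial arithmetic (the core counts are the known Raptor Lake layout; the area ratio itself is the post's assumption, not a measured figure):

```python
# Die-area bookkeeping under the assumed 4 E-cores ~ 1 P-core area ratio.
E_CORES_PER_P_AREA = 4
raptor_p, raptor_e = 8, 16          # Raptor Lake 8P + 16E layout

# Swap every E-core cluster for P-cores of equal area:
bartlett_p = raptor_p + raptor_e // E_CORES_PER_P_AREA
print(bartlett_p)  # P-cores in (roughly) the same die footprint
```

Which lands exactly on the rumored 12 P-cores, consistent with the "die size will remain identical" claim.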
 
Guess I will have to wait for reviews to be sure this doesn't suffer from the Raptor Lake issues. Other than that, my 12600K would be replaced by a 10 P-core.
 
2025 sure is gonna be packed with new, exciting HW launches.
 
So that BTL supports DDR4 (alongside DDR5), just like all the other LGA1700 SKUs, which is essential (DDR5 modules ain't that cheap yet, though prices have stabilized a lot).

Now if they release BTL with at least 48MB of L3 cache, they're aiming at the upcoming 9800X3D.

They might wanna increase that amount to 64MB if they'd rather be safe than sorry, although they've always acted skimpy when it comes to cache capacity. They needed AMD for the push!
If there is no IPC gain, then it won't be as fast as the 7000X3D, and definitely not the 9000X3D, for gaming. It will also lose some productivity performance, as 4 E-cores beat 1 P-core there.

The advantage of this chip is to appease those who just want one type of core and to maximise whatever performance is possible with a non-hybrid design.

Maybe I am wrong but thats my take.

It could still be possible to get the scheduling benefits of E-cores, just much harder than on hybrid chips, relying on affinity overrides alone to force everything that isn't the foreground application off 8 of the P-cores.
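As a rough illustration of what such an affinity override looks like, here is a minimal Linux-only sketch using the standard library's `os.sched_setaffinity` (on Windows the same idea is usually handled by tools like Process Lasso, or psutil's `Process.cpu_affinity`); the reserved core set below is hypothetical:

```python
import os

def banish_background(pid: int, reserved: set[int]) -> set[int]:
    """Keep a process OFF the reserved (foreground) cores, Linux only,
    by restricting it to every other CPU it may currently run on."""
    allowed = os.sched_getaffinity(pid)
    target = allowed - reserved
    if not target:               # never strand a process with zero CPUs
        raise ValueError("no CPUs left after reservation")
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

# Hypothetical split: reserve CPUs 0-7 for the foreground game, then call
# banish_background(p.pid, set(range(8))) for each background process p.
```

The pain point the post alludes to is that this has to be reapplied to every background process, every time one spawns, which is exactly what a hybrid chip's scheduler does automatically.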
 
There is room for different options with different performance trade-offs to exist. It does make plenty of sense for some people's needs, and now that manufacturing has improved, it allows for more flexibility than the older 10P-core Comet Lake design, which was really pushing the node heavy-handedly, to the point they dropped down to 8 P cores with Rocket Lake. We'll see similar from AMD eventually, once the right maturity point is reached, with a single CCD pushing more like 10 or 12 cores.
 
Sony could launch the PS6 with 12 ZEN7 cores on a single CCD.

That means AM6 will provide at least 32 cores with their Ryzen 9 series (16 cores on a single CCD).
 
Actually people who overclocked are probably better off.

Setting a manual vcore will likely be 1.4 V or lower. It's the auto boosting broken TVB/borked power settings combo taking chips above 1.5 V at high temperatures that caused degradation.

I had a terrible time manually overclocking or even underclocking any 13th or 14th Gen CPU, even with E-cores off.

It would pass all stress tests, including OCCT, Prime95, Y-Cruncher and Cinebench. Max temp like 85C and power like 220 watts; no WHEAs, no errors, no crashes.

Passed shader compilation in TLOU1 multiple times.

Then a week or 2 later, a WHEA was logged in HWiNFO doing shader compilation.

Meaning stability was random and not consistent, or it degraded easily, or who knows what?

Even had random Cinebench app errors with vcore on auto and Intel limits, PL1 and PL2 at 253W and current at 307A or something max. That was with a brand new 14700K and a 14900K with HT disabled and all E-cores on; then I threw in the towel on 13th and 14th Gen.

There is something fundamentally wrong with these chips, and the failures are coming to light and spreading all over the news.

It's very sad and shameful, but it's unfortunately the reality.

Maximizing performance per area is definitely not an overrated concept; it's the most important design goal of any company. E-cores do just that: offer maximum performance for the die space they need. Replacing them with full P-cores will just drop the MT performance of the chip at similar die sizes.

A 12 P-core chip will barely be faster than a 13700K, which is a 2-year-old chip. Now let's see how much that 12 P-core chip will cost, and then tell me how great it is.


Actually, the 13700K and the rest of the K-variant RPL chips will be 2.75 or even 3 years old if the Q3 2025 release is true. Remember, RPL came out in October 2022.

My 7800X3D didn't work with water, no matter what I did. There must be something with my AIO's cold plate, or I don't know. But as soon as I got an air cooler for it, I loved it.

My most recent positive examples are a 6500 XT which I adore for being a small and quiet GPU that sips power, perfect for older games, and an i7-11700 which is awesome for its configurability.
My most recent negative example is the Ryzen 5 3600 which I couldn't for the love of god keep from throttling in a low-profile SFF case. Like you said, it's not bad, just didn't do it for me.


To be honest, I've given up on Intel with Alder/Raptor Lake, but yet another P-core only monolithic design in the making got some long lost juices flowing in me. :rolleyes:


I so badly wanted more than 8 cores on a single ring bus/CCX-CCD/tile for a while, on a homogeneous design with a modern arch and IPC.

That would be the best set-and-forget CPU for gaming: no hybrid scheduling quirks, no Process Lasso, no APO. No severe cross-CCX/CCD latency hit.

The last CPU to have that was Comet Lake, with the 10850K and 10900K's all-P-core 10-core die. Though that is an outdated arch, 40 to 50% behind Golden Cove in IPC, and stuck on PCIe Gen 3 and DDR4.

So I am so excited for a 12+0 die Bartlett Lake, and I was going to be a buyer.

However, given the stability and degradation issues Raptor Lake has, I am skeptical. My thought and hope is that they will fix it; however, according to this, I'm not so sure:


So it's really a hard pass for me, despite my wanting such a chip and it being the only option coming soon. Even Zen 6, sadly, is going to be 8 cores max per CCX-CCD, as that is most cost-effective for AMD on chiplets. Intel has more flexibility with a monolithic design, but that doesn't mean crap when their CPUs are unstable and degrading on the current arch on 10nm wafers.

Unless Intel makes the Bartlett Lake 12+0 die Alder Lake-based, hard pass.

I will just stick with AMD's 8-core X3D chips. 8 cores is enough for gaming. Yes, some games are starting to see benefits from more, but only marginally in almost all cases, and I really do not want a hybrid nor a dual CCX-CCD chip.

So in that case I will hope Intel releases a 12 P-core Arrow Lake die, or just deal with no such option for a long while, if ever.
 