
Is Intel going to deliver a processor/chipset worth waiting for?

Is there any merit in having more than 8 cores for high-end gaming with an RTX 4090, and then a 5090?

It would certainly shut up the gaming sector if it meant not upgrading the processor or board for the next half decade. It would also allow them to institute energy-efficient designs that meet nearly all business uses and the vast majority of consumer ones.

Sure, plenty would still complain, but the best answer they are willing to produce, or the alternative of mirror generations (near-identical refreshes), is hard to argue against.
 
The quote below prodded me back into considering the always uncertain decision to await the next great thing.



Betting he can pass his massive bonus and still-inflating domestic foundry costs onto th.... The world isn't getting any cooler, and I doubt Intel processors and mobos with their chipsets are going to be anything short of stratospherically priced. Not much worth discussing to that end. ;)

The question once again is will Intel deliver a processor and chipset worth waiting for?
Pat Gelsinger is busy doing you know what by now
 
It would certainly shut up the gaming sector if it meant not upgrading the processor or board for the next half decade. It would also allow them to institute energy-efficient designs that meet nearly all business uses and the vast majority of consumer ones.

Sure, plenty would still complain, but the best answer they are willing to produce, or the alternative of mirror generations (near-identical refreshes), is hard to argue against.

Well yeah if games really start to use more than 8 cores. Though for now it seems games really do not.

Any games you can think of that do? I have seen many say Spiderman Remastered/Miles Morales and the like, Starfield, Cyberpunk and LOU Part 1 absolutely use more than 8 cores. I have tested the latter two and have not seen anywhere near 100% CPU usage (per the MSI Afterburner CPU usage OSD; a small per-core logging sketch is at the end of this post), or even close to it, with an RTX 4090 on a 7800X3D at 4K, even with DLSS and such on. Well, except for shader compilation in LOU Part 1, which does use 100%, but that is supposed to finish on the main menu before playing.

Never tested nor played the others.

Though I have seen benches and results all over the place. I have heard some say Starfield is even better with e-cores and HT off, which would contradict the claim that it is so heavily threaded; or maybe the e-cores just cause more scheduling problems, or it depends on the area of the game?

- Others mention it really does not use anywhere near that many threads, and that a game can loop or waste thread time without that usage actually doing anything??

Either way, I want a 10-12 P-core option on a single ring bus or CCD from Intel or AMD. No such option exists right now.
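
Not part of the original post, but for anyone who wants a second opinion on what the OSD reports, here is a minimal sketch (assuming Python with the psutil package installed; the sampling interval and file name are arbitrary choices) that logs per-core utilisation to a CSV while a game runs:

```python
# Minimal per-core CPU usage logger (assumes the `psutil` package is installed).
# Run it in the background while the game runs, then inspect the CSV to see
# whether more than 8 logical cores actually stay busy.
import csv
import time

import psutil

SAMPLE_SECONDS = 1.0          # arbitrary sampling interval
LOG_FILE = "core_usage.csv"   # hypothetical output file name

with open(LOG_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    n_cores = psutil.cpu_count(logical=True)
    writer.writerow(["timestamp"] + [f"core_{i}" for i in range(n_cores)])
    try:
        while True:
            # percpu=True returns one utilisation figure per logical core
            usage = psutil.cpu_percent(interval=SAMPLE_SECONDS, percpu=True)
            writer.writerow([time.time()] + usage)
    except KeyboardInterrupt:
        pass  # stop logging with Ctrl+C
```

Let it run for a few minutes of gameplay, then check the CSV to see how many cores were genuinely loaded rather than just briefly touched.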

Bartlett's supposed to be Raptor Cove-based. I do question the practical utility of the 12 P-core configuration for gaming over the current hybrid processor line, but priced right, sounds like a fun chip to me!


It would sure be a fun chip regardless, and I would pay top dollar for it even if it cost $1000, which is at least $250 more than the current top i9 SKUs, the 13900KS and the upcoming 14900KS, cost or will cost at release ($750).

I hate e-cores and hybrid arch headaches and want a P core only variant with a little more than 8.
 
It would sure be a fun chip regardless, and I would pay top dollar for it even if it cost $1000, which is at least $250 more than the current top i9 SKUs, the 13900KS and the upcoming 14900KS, cost or will cost at release ($750).

I hate e-cores and hybrid arch headaches and want a P core only variant with a little more than 8.

What's up with that? Can't say the E-cores ever bothered me with the i9-13900KS. It's been a quite transparent experience thus far.

How would you compare it to only 8 P-cores from either AMD or Intel (13700K, 12700K or 14700K with e-cores disabled, or 7700X, 7800X3D) with no e-cores for gaming? Is there any merit in having more than 8 cores for high-end gaming with an RTX 4090, and then a 5090?

I want a 12 P-core Raptor Cove, as I am not fond of e-cores and Win11. I am also not fond of AMD's dual CCDs. I want a 10900K-style setup with the current CPU archs, and to not be stuck at PCIe Gen 3.

As much as I would love to see a 12 P-core Bartlett Lake on a ring bus, I have my doubts it is going to happen despite the rumors. Some rumors even say it is a Xeon D, which probably means a fused-off Sapphire Rapids put into an LGA 1700 package, with a mesh interconnect and gimped IPC that Intel sells as such. But here is to hoping Intel wakes up and delivers such a fun CPU: 12 Raptor Cove P-cores on a ring bus with consumer-level IPC.

It would take less power than the 13900K and 14900K, as 4 e-cores take slightly more space than 1 P-core, and maybe a little more power too, unless they really ramp up all-core clocks. But as long as the all-core clock is 5 GHz, I am happy. It would beat the AMD Ryzen 7900X in productivity, smash it in gaming, be close to the 7800X3D all around, and be better in the few (and likely more upcoming) games that can scale to lots of threads.
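
As a back-of-envelope check of the area side of that claim (my own arithmetic; the 1.1 factor below is an illustrative assumption, not a measured figure): if a 4 E-core cluster takes slightly more space than one P-core, then an 8P+16E layout occupies roughly the area of twelve-and-a-bit P-cores, so a 12 P-core ring-bus die would not be a larger chip.

```python
# Back-of-envelope area comparison, using the rough ratio from the post above
# (a 4 E-core cluster takes slightly more space than 1 P-core).
# The 1.1 factor is an illustrative assumption, not a measured figure.
E_CLUSTER_VS_P_CORE = 1.1

raptor_lake_8p16e = 8 + (16 / 4) * E_CLUSTER_VS_P_CORE  # 13900K/14900K layout
bartlett_12p = 12                                        # rumoured 12 P-core part

print(f"8P+16E ~ {raptor_lake_8p16e:.1f} P-core equivalents of core area")
print(f"12P    ~ {bartlett_12p:.1f} P-core equivalents of core area")
```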

Best compared to a beefed-up 7800X that runs much faster than a 7800X ever could, I suppose. Or an i9-11900K that has vastly improved IPC, much more cache, and much more frequency, perhaps? As for having >8 cores for gaming... I'd say it's not entirely meritless, but it's also far from a dealbreaker; 8 is plenty. I run the i9 on Windows 10 just fine, the primary issue being that the OS doesn't support the power management correctly, and as such it runs the CPU clocks at high multipliers almost all the time.
 
What's up with that? Can't say the E-cores ever bothered me with the i9-13900KS. It's been a quite transparent experience thus far.



Best compared to a beefed-up 7800X that runs much faster than a 7800X ever could, I suppose. As for having >8 cores for gaming... I'd say it's not entirely meritless, but it's also far from a dealbreaker; 8 is plenty. I run the i9 on Windows 10 just fine, the primary issue being that the OS doesn't support the power management correctly, and as such it runs the CPU clocks at high multipliers almost all the time.


My experience has been otherwise. Command and Conquer Generals was stuttery with them on, as were some other games. Turn them off and it was as smooth sailing as any prior chip, with the much better performance of the current arch, of course.
 
Well yeah if games really start to use more than 8 cores. Though for now it seems games really do not.

The same was said about 6 cores being a waste. As badly optimized as many games are, they might just be.

Hyperthreading going away is to be celebrated, probably. As of today, Dro's 13th gen is still looking like it might be the chip to own from Intel a few generations from now.
 
My experience has been otherwise. Command and Conquer Generals was stuttery with them on, as were some other games. Turn them off and it was as smooth sailing as any prior chip, with the much better performance of the current arch, of course.

Mhm, can't say I've ever experienced something like this, but at the same time, I suppose most games I've been playing perform well on E-cores regardless. The last really demanding game I've played is Starfield, and there was no change in FPS running it on 8P or 8P+16E/32T.
 
The same was said about 6 cores being a waste. As badly optimized as many games are, they might just be.

Hyperthreading going away is to be celebrated, probably. As of today, Dro's 13th gen is still looking like it might be the chip to own from Intel a few generations from now.

What makes you say the Intel 13th Gen chip is the one to own a few generations from now, as opposed to Zen 4, when Zen 4 probably has an upgrade path on the AM5 socket, assuming AMD does not do what Intel did on LGA 1151 and require new chipsets when jumping from the Skylake/Kaby Lake gen to Coffee Lake?

And why is hyper threading going away to be celebrated?
 
What makes you say the Intel 13th Gen chip is the one to own a few generations from now, as opposed to Zen 4, when Zen 4 probably has an upgrade path on the AM5 socket, assuming AMD does not do what Intel did on LGA 1151 and require new chipsets when jumping from the Skylake/Kaby Lake gen to Coffee Lake?

And why is hyper threading going away to be celebrated?

Sheer maturity. It'll take years for Intel to refine their next nodes and architectures to reach the level of performance and frequency capabilities that Intel 7 (10ESF) exhibits, particularly on a binned KS-grade CPU. Regarding hyperthreading, it's a side channel (security risk), and given the sheer amount of densely packed cores, no longer necessary. Rather than particularly power efficient, the E-cores are area optimized, so they could easily increase the E-core count to 32 in a future chip and make up for any MT losses incurred from the removal of SMT on the P-cores.
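
A rough sanity check of that trade-off (the throughput factors below are illustrative assumptions made up for this sketch, not Intel figures): if hyperthreading adds roughly a quarter of extra multi-threaded throughput per P-core and one E-core delivers roughly half of a P-core, then doubling the E-core count to 32 more than covers the loss from dropping SMT.

```python
# Illustrative MT-throughput comparison in "P-core equivalents".
# Both factors are assumptions made up for this sketch, not Intel data.
SMT_UPLIFT = 1.25   # assumed multi-threaded gain from hyperthreading on a P-core
E_VS_P = 0.5        # assumed E-core throughput relative to one P-core

current_8p16e_with_smt = 8 * SMT_UPLIFT + 16 * E_VS_P   # 13900K-style layout
future_8p32e_no_smt = 8 * 1.0 + 32 * E_VS_P             # SMT removed, E-cores doubled

print(f"8P (with HT) + 16E : {current_8p16e_with_smt:.1f} P-core equivalents")
print(f"8P (no HT)   + 32E : {future_8p32e_no_smt:.1f} P-core equivalents")
```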
 
Sheer maturity. It'll take years for Intel to refine their next nodes and architectures to reach the level of performance and frequency capabilities that Intel 7 (10ESF) exhibits, particularly on a binned KS-grade CPU. Regarding hyperthreading, it's a side channel (security risk), and given the sheer amount of densely packed cores, no longer necessary. Rather than particularly power efficient, the E-cores are area optimized, so they could easily increase the E-core count to 32 in a future chip and make up for any MT losses incurred from the removal of SMT on the P-cores.


Is your statement about the maturity of the 10 nm Intel 7 node, which Alder Lake and Raptor Lake are built on, in regards to just the Intel side of things, or does it also include AMD in the equation?
 
Is your statement about the maturity of the 10 nm Intel 7 node, which Alder Lake and Raptor Lake are built on, in regards to just the Intel side of things, or does it also include AMD in the equation?

Intel only; AMD is a fabless company and uses TSMC's services to make its chips. Intel's current-generation processors are all made in-house, with the exception of some Meteor Lake parts and Arc graphics.
 
What makes you say the Intel 13th Gen chip is the one to own a few generations from now, as opposed to Zen 4, when Zen 4 probably has an upgrade path on the AM5 socket, assuming AMD does not do what Intel did on LGA 1151 and require new chipsets when jumping from the Skylake/Kaby Lake gen to Coffee Lake?

The qualifier being Intel, in a thread about future delivery by Intel in the consumer market. Platform longevity is, by this late point, a well-established business practice from both them and AMD.
And why is hyper threading going away to be celebrated?

For many uses it was detrimental. The announcement that all processors would have it didn't get much praise. What if the changes made to justify the existence of 14th gen had included removal of hyperthreading? A non-starter at the higher end, but possibly very welcome on non-K chips paired with less well-optioned boards.

The caveat possibly being that we have no idea what the effects will be, only where differences existed previously.
 
Well yeah if games really start to use more than 8 cores. Though for now it seems games really do not.
What? This is an in-game Cyberpunk screenshot of how my 24C/24T CPU is utilised.

14900k game util ht off.png

What I see is total 84% CPU utilisation, 8 cores loaded 100% and 16 cores loaded 65%.

How old is this game (when did development start, how long has it been on the market)?

Are you really sure that an 8-core CPU is enough? And that E-cores are useless?
 
Bartlett's supposed to be Raptor Cove-based. I do question the practical utility of the 12 P-core configuration for gaming over the current hybrid processor line, but priced right, sounds like a fun chip to me!
The practical utility to me is not having to install Windows 11 to use it properly. :D

I would love to see it come to fruition, though I'm not sure how much of it is just rumor.

What? This is an in-game Cyberpunk screenshot of how my 24C/24T CPU is utilised.

View attachment 338331

What I see is total 84% CPU utilisation, 8 cores loaded 100% and 16 cores loaded 65%.

How old is this game (when did development start, how long has it been on the market)?

Are you really sure that an 8-core CPU is enough? And that E-cores are useless?
I'd like to see performance figures compared to, let's say, an 8C/16T CPU. Loading the CPU with work is one thing. Actually getting measurable performance out of said work is another.
 
Too cynical. They have done it before, and have the research capabilities and money to do it again. They already have BPD and Foveros, which TSMC and AMD do not, so don't be too quick to write them off. They have the speed, just not the power efficiency, and having just switched from monolithic to tiles, they have got the means. I would put a bet on it.

Did I say that I wrote them off? No. Of course they have the money and the means, but like every new CPU it takes years to develop, so when I said 3-5 years I think that's pretty realistic, don't you think?
 
You could presume that the game uses the CPU only when it has a good reason to do so, right?
I don't know what process runs on those threads, and without such knowledge, I don't presume anything.
 
I waited 11 years going from i5-2400 to i9-13900k.
I see no real worthwhile upgrade in the coming years.

I don't think anyone knows the answer to the question, but generally speaking, I never recommend waiting for the next thing. If you need a PC now, you need it now. Otherwise, you could be waiting forever. Not to mention there are teething issues with any new platform. Many people don't have the patience for such things.
Waiting another year to go 13900k instead of 12900k is one of the best examples of why you do want to wait.

That said, this case was quite unique. I don't remember another like it.
 
I waited 11 years going from i5-2400 to i9-13900k.
I see no real worthwhile upgrade in the coming years.


Waiting another year to go 13900k instead of 12900k is one of the best examples of why you do want to wait.

That said, this case was quite unique. I don't remember another like it.
It all depends on what you have now and how badly you need a new PC.
 
You could presume that the game uses the CPU only when it has a good reason to do so, right?
You'd think so, but seeing as there is basically no performance difference between CPUs with various numbers of cores, even in the HEAVILY CPU-bound scenario of running at 720p, it doesn't actually bear out in practice.
1710060376191.png
 
I am truly boggled now. I got these results with the Cyberpunk built-in benchmark and 8/8, 6/6 and 4/4 cores active (no RT, upscaling or FG, 1080p mid settings):

Avg/min/max FPS:

8/8: 241/124/301
6/6: 243/132/303
4/4: 225/138/319

6 cores are better than 8 cores in everything.
4 cores dropped just the average, and improved the minimum and maximum numbers.

I do not understand this. And yes, I checked in HWiNFO that I really have the correct number of cores active.

Is the built-in benchmark even usable??? With a decreasing number of cores you improve minimum and maximum FPS? REALLY?!

EDIT: these numbers, 241/124/301, were the same for the 24/24, 8/16 and 8/8 configs.
 
@BoggledBeagle
It's all within 5%-ish, so just average run-to-run variance. And the 4-core config did drop the averages somewhat noticeably, so hey, more or less in line with the accepted wisdom of 4-core CPUs being somewhat outdated, but 6 and up are all perfectly fine for gaming provided the individual cores are fairly strong.
In short - don't be too bothered; as long as the performance is up to your frame target at your desired resolution and settings with good frame pacing, nothing else really matters in practice.
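
Not part of the original replies, but here is a quick sketch (plain Python, no extra packages) that puts numbers on that "within 5%-ish" call using the averages quoted above:

```python
# Percent difference of each run's average FPS against the 8-core run.
# The avg/min/max values are taken directly from the post above.
runs = {
    "8c/8t": (241, 124, 301),
    "6c/6t": (243, 132, 303),
    "4c/4t": (225, 138, 319),
}

base_avg = runs["8c/8t"][0]
for name, (avg, minimum, maximum) in runs.items():
    delta = 100 * (avg - base_avg) / base_avg
    print(f"{name}: avg {avg} fps ({delta:+.1f}% vs 8c/8t), min {minimum}, max {maximum}")
```

The 6-core run lands within about 1% of the 8-core run, while the 4-core run drops the average by roughly 6-7%, which matches the read above.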
 
I am truly boggled now. I got these results with the Cyberpunk built-in benchmark and 8/8, 6/6 and 4/4 cores active (no RT, upscaling or FG, 1080p mid settings):

Avg/min/max FPS:

8/8: 241/124/301
6/6: 243/132/303
4/4: 225/138/319

6 cores are better than 8 cores in everything.
4 cores dropped just the average, and improved the minimum and maximum numbers.

I do not understand this. And yes, I checked in HWiNFO that I really have the correct number of cores active.

Is the built-in benchmark even usable??? With a decreasing number of cores you improve minimum and maximum FPS? REALLY?!

EDIT: these numbers, 241/124/301, were the same for the 24/24, 8/16 and 8/8 configs.
I'd say these results are within margin of error.

Although, it proves my theory - just because X game uses your 12-16-core CPU, it doesn't mean that it can't run equally well on a 6- or 8-core one.

It's pretty much the same thing that has been said about VRAM usage. Just because it's high in X game with X GPU, it doesn't necessarily mean that said game won't run well on another GPU with less VRAM.

Edit: Modern games are very adaptable to your specific hardware configuration.
 
Edit: Modern games are very adaptable to your specific hardware configuration.
And it seems that the more adaptable they are, the less adaptable people are*.


*A general note, not directed at the OP
 
I am truly boggled now. I got these results with the Cyberpunk built-in benchmark and 8/8, 6/6 and 4/4 cores active (no RT, upscaling or FG, 1080p mid settings):

Avg/min/max FPS:

8/8: 241/124/301
6/6: 243/132/303
4/4: 225/138/319

6 cores are better than 8 cores in everything.
4 cores dropped just the average, and improved the minimum and maximum numbers.

I do not understand this. And yes, I checked in HWiNFO that I really have the correct number of cores active.

Is the built-in benchmark even usable??? With a decreasing number of cores you improve minimum and maximum FPS? REALLY?!

EDIT: these numbers, 241/124/301, were the same for the 24/24, 8/16 and 8/8 configs.
Lock your frequency. With fewer cores active, the remaining cores boost harder.
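
To illustrate why that matters (the boost table below is invented for the example, not an actual 14900K turbo spec): with per-active-core turbo bins, a benchmark run that only loads 4 or 6 cores can sit at higher clocks than an 8-core run, which alone can nudge the min/max FPS upward as cores are disabled. Locking the frequency removes that variable.

```python
# Invented per-active-core turbo table: max boost (GHz) allowed for a given
# number of loaded cores. These values are placeholders, not Intel specs.
BOOST_BY_ACTIVE_CORES = {2: 6.0, 4: 5.9, 6: 5.8, 8: 5.6}

def boost_for(active_cores: int) -> float:
    """Return the assumed boost clock when this many cores are loaded."""
    eligible = [c for c in BOOST_BY_ACTIVE_CORES if c >= active_cores]
    return BOOST_BY_ACTIVE_CORES[min(eligible)]

for cores in (4, 6, 8):
    print(f"{cores} active cores -> ~{boost_for(cores)} GHz per core")
```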
 