
Is Intel going to deliver a processor/chipset worth waiting for?

Cute.

But yes, if Intel does come up with a new killer product (namely one that ditches the e-waste cores and focuses on gaming performance), then I'm all for going Intel again - it was always a good experience.
But a 3-gen-old Intel WITH e-waste cores matches your 3d. What the hell, dude :roll:
 
Cute.

But yes, if Intel does come up with a new killer product (namely one that ditches the e-waste cores and focuses on gaming performance), then I'm all for going Intel again - it was always a good experience.

If you saw me, you wouldn't think I was cute.

Every Intel rig I have ever built has POSTed first time and been rock stable.

They probably could build a killer gaming-centric CPU; dunno why they don't.
 
You are wasting your time arguing with him. Accept it. Even if you did show him irrefutable proof, he still would not accept it. He might have used Intel in the past, but now he is a resolute AMD user. Until Intel pull another C2D (and they will), he is lost.
Nah, having fun ruining hopes and dreams. The "fastest gaming CPU" gets schooled by a 3-gen-old stock Intel. And let's not even touch high crowd density or OCing. Won't be pretty :D
 
If you saw me, you wouldn't think I was cute.

Every Intel rig I have ever built has POSTed first time and been rock stable.

They probably could build a killer gaming-centric CPU; dunno why they don't.

Aww, don't be harsh on yourself ;)

But yeah, that's what I've been saying - why don't they make a CPU like the 7800x3d, with 8 fast P cores and a big fat L3 cache? It honestly seems like they just don't care all that much about gaming (despite it being a massive target group), and just give us whatever generic product they use for enterprise customers as well.

10% slower? You are getting 135 fps and I'm getting 132! Dude.....

I hope you realize that if I decide to OC, even my 3-gen-old 12900k will just fly past your 3d, right? Sure, power draw will hit 150w, but it's a 3-gen-old CPU. And slightly faster? My 14900k was beating my 12900k by 15-20%, both running stock.

"Uhhh you just watch out when i oc my cpu!!!"

Uhm yeah, all CPUs can be OC'd - oh wait, most Intel CPUs can't...
 
"Uhhh you just watch out when i oc my cpu!!!"

Uhm yeah, all CPUs can be OC'd - oh wait, most Intel CPUs can't...
Most CPUs that cost anywhere near that 3d thing can. But there's no need to; if a 3-gen-old Intel is matching yours at stock, I'm pretty sure all 13th and 14th gen K CPUs will be faster. Even the 13600k probably beats my 12900k, so...
 
Most CPUs that cost anywhere near that 3d thing can. But there's no need to; if a 3-gen-old Intel is matching yours at stock, I'm pretty sure all 13th and 14th gen K CPUs will be faster. Even the 13600k probably beats my 12900k, so...

If you pay the premium for a K SKU, sure - but nearly all prebuilts come with the non-K variants of even the 14900. And Cyberpunk is one of the few games that can use more cores than the 13600k has, so no, it won't be faster than your 12900k in that particular game.
 
If you pay the premium for a K SKU, sure - but nearly all prebuilts come with the non-K variants of even the 14900. And Cyberpunk is one of the few games that can use more cores than the 13600k has, so no, it won't be faster than your 12900k in that particular game.
Well, if someone has a 13600k they can enlighten us, but even if it can't, a 13700k definitely poops on my 12900k. And by extension....

The more I see actual users' results from the 3d, the more I'm convinced it's a scam. I mean, power draw is nice and all, but that only applies to 720p. At 1440p-4k all CPUs consume about the same, so it's kinda irrelevant. Basically we have similar gaming performance, but I can have a game unpacking in the background (in fact, I just did that yesterday) with zero impact on the gaming performance.

Now I'll try to post some results on my next goal: looping Cinebench R23 in the background while matching your x3d performance. Let's see :D
 
Well, if someone has a 13600k they can enlighten us, but even if it can't, a 13700k definitely poops on my 12900k. And by extension....

The more I see actual users' results from the 3d, the more I'm convinced it's a scam. I mean, power draw is nice and all, but that only applies to 720p. At 1440p-4k all CPUs consume about the same, so it's kinda irrelevant. Basically we have similar gaming performance, but I can have a game unpacking in the background (in fact, I just did that yesterday) with zero impact on the gaming performance.

Now I'll try to post some results on my next goal: looping Cinebench R23 in the background while matching your x3d performance. Let's see :D
X3D chips are ideal for single-player games and SFF builds.

Intel is ideal for multiplayer games - the ones you can't really bench, because the test scenes always change. But there's a reason basically zero streamers or professional esports players use AMD, for either CPU or GPU.

This isn't to say both CPUs can't play both types of games; it's just a matter of what is best.
 
And after 15 seconds in the BIOS, the 7800X3D is down. Next target: the 8800X3D. Bring it on.

[Attachment: image-2024-03-07-222112902.png]
 
Well, if someone has a 13600k they can enlighten us, but even if it can't, a 13700k definitely poops on my 12900k. And by extension....

The more I see actual users' results from the 3d, the more I'm convinced it's a scam. I mean, power draw is nice and all, but that only applies to 720p. At 1440p-4k all CPUs consume about the same, so it's kinda irrelevant. Basically we have similar gaming performance, but I can have a game unpacking in the background (in fact, I just did that yesterday) with zero impact on the gaming performance.

Now I'll try to post some results on my next goal: looping Cinebench R23 in the background while matching your x3d performance. Let's see :D

Yeah sure, total scam - you totally weren't handpicking titles that can actually use all the cores.

Shall we try Microsoft Flight Simulator, Shadow of the Tomb Raider, or Horizon Zero Dawn... see how your 12900k fares...
 
X3D chips are ideal for single-player games and SFF builds.

Intel is ideal for multiplayer games - the ones you can't really bench, because the test scenes always change. But there's a reason basically zero streamers or professional esports players use AMD, for either CPU or GPU.
Single-player games aren't required to run at 240 fps or whatever. Honestly, even a 12400 or 12600k would be fine for single-player games.
 
Single-player games aren't required to run at 240 fps or whatever. Honestly, even a 12400 or 12600k would be fine for single-player games.
You don't get to decide what FPS is required for other people.
 
Yeah sure, total scam - you totally weren't handpicking titles that can actually use all the cores.

Shall we try Microsoft Flight Simulator, Shadow of the Tomb Raider, or Horizon Zero Dawn... see how your 12900k fares...
Cyberpunk is a handpicked title that uses e-cores?

Man, wasn't it you two pages ago trying to convince me that the game worked better WITHOUT e-cores??? Are you for real?

Is this a different Cyberpunk TPU is testing? Your 7800X3D is easily at the top of the chart, 70 fps ahead of my 12900k. Man, stop it; if anything, this is a cherry-picked title for AMD. According to TPU's review, that is.

 
Cyberpunk is a handpicked title that uses e-cores?

Man, wasn't it you two pages ago trying to convince me that the game worked better WITHOUT e-cores??? Are you for real?

Is this a different Cyberpunk TPU is testing? Your 7800X3D is easily at the top of the chart, 70 fps ahead of my 12900k. Man, stop it.


At the time of testing, it evidently did. However, reading through patch notes, Intel has been working with CDPR to improve performance in Cyberpunk with P/E cores. Cyberpunk was always able to use more than 8 cores. But out of curiosity, try disabling your e-cores and see what it does (see the sketch below).

But you ignored the last line - don't you wanna try comparing Microsoft Flight Simulator, Shadow of the Tomb Raider, or Horizon Zero Dawn benchmark results?
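For anyone who wants to try that without a trip to the BIOS: pinning the game process to the P-core logical CPUs only is a rough approximation of "e-cores off". A minimal sketch using psutil - the 0-15 / 16-23 split is an assumption based on the usual 12900k enumeration (8 hyper-threaded P-cores first, then the 8 E-cores), and the process name is just an example, so check your own topology in Task Manager first:

```python
# Sketch: pin a running game to P-cores only, roughly approximating "e-cores off".
# Assumes a 12900k-style layout where logical CPUs 0-15 back the eight
# hyper-threaded P-cores and 16-23 are the E-cores -- verify your own
# topology before trusting these numbers. May need admin rights.
import psutil

GAME_EXE = "Cyberpunk2077.exe"  # example process name, adjust as needed
P_CORE_CPUS = list(range(16))   # logical CPUs assumed to back the P-cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(P_CORE_CPUS)              # restrict to P-cores only
        print(proc.pid, "->", proc.cpu_affinity())  # confirm the new mask
```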
 
At the time of testing, it evidently did. However, reading through patch notes, Intel has been working with CDPR to improve performance in Cyberpunk with P/E cores. Cyberpunk was always able to use more than 8 cores. But out of curiosity, try disabling your e-cores and see what it does.
I was trying for 3-4 pages to convince you that e-cores helped with this game, and with pretty much most games in general, and you were basically calling me ignorant. Now you are telling me that e-cores do actually help....

And no, CDPR didn't try anything like that; the patches were specifically targeting P cores. You are talking about the Prioritize P cores option, which in the last version did nothing but cause stuttering, and in the current version, well, it doesn't do anything. Literally nothing. You get the exact same performance with it on or off. I'm willing to find an old version of Cyberpunk and test it again; nothing will change. Wanna find out what version TPU used and test there? I guarantee the results will be identical.

But you ignored the last line - don't you wanna try comparing Microsoft Flight Simulator, Shadow of the Tomb Raider, or Horizon Zero Dawn benchmark results?
I haven't ignored anything; I'm willing to test whatever anyone asks me. I get around 360 fps in SOTR and you probably get around 400-410. I know because I've been testing CPUs as a hobby. In Zero Dawn it's not that your 3d is fast, it's that every Intel CPU is slow. My 12900k is losing (massively) to a 5950x in that game, so....

MSFS, no idea - haven't tested it.
 
I was trying for 3-4 pages to convince you that e-cores helped with this game, and with pretty much most games in general, and you were basically calling me ignorant. Now you are telling me that e-cores do actually help....

And no, CDPR didn't try anything like that; the patches were specifically targeting P cores. You are talking about the Prioritize P cores option, which in the last version did nothing but cause stuttering, and in the current version, well, it doesn't do anything. Literally nothing. You get the exact same performance with it on or off. I'm willing to find an old version of Cyberpunk and test it again; nothing will change. Wanna find out what version TPU used and test there? I guarantee the results will be identical.

I haven't ignored anything; I'm willing to test whatever anyone asks me. I get around 360 fps in SOTR and you probably get around 400-410. I know because I've been testing CPUs as a hobby. In Zero Dawn it's not that your 3d is fast, it's that every Intel CPU is slow. My 12900k is losing (massively) to a 5950x in that game, so....

MSFS, no idea - haven't tested it.

Come on then, show results with e-waste cores on/off - prove they aren't entirely a waste.

Oh right, and yet you're calling the 7800x3d a scam, despite it being considerably cheaper, as well as faster in plenty of examples.
 
Apparently some people have got nothing else to do on a Thursday... :D
 
Come on then, show results with e-waste cores on/off - prove they aren't entirely a waste.

Oh right, and yet you're calling the 7800x3d a scam, despite it being considerably cheaper, as well as faster in plenty of examples.
You realize that if e-cores harm performance, then the results are even more embarrassing? A 3-gen-old Intel, with e-waste cores hurting its actual performance, still matches or even beats (OC'd) the 7800X3D.

The x3d is considerably cheaper than what? For most of its lifetime it was price-matching the 13700k and 14700k. Lately, sure, it is 20 euros cheaper than a 13700k.
 
You realize that if e-cores harm performance, then the results are even more embarrassing? A 3-gen-old Intel, with e-waste cores hurting its actual performance, still matches or even beats (OC'd) the 7800X3D.

The x3d is considerably cheaper than what? For most of its lifetime it was price-matching the 13700k and 14700k. Lately, sure, it is 20 euros cheaper than a 13700k.

You were eager to prove me wrong previously - now you won't even provide any proof...

When it was about performance you were comparing it with the i9, but now that it's about money, it's against the i7 - cute. And as I have already told you multiple times, where I am the difference is a lot bigger than 20 euro. The 7800x3d is 2900 DKK, the 14700k 3400 DKK, and the 14900k 4800 DKK. That's a difference of roughly 67 and 255 euro respectively. And the mobo and RAM needed for the Intel CPUs are a lot more expensive as well.
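(For reference, a quick sanity check on those gaps - the ~7.45 DKK/EUR rate below is an assumption based on the krone's usual peg; the DKK prices are the ones quoted above:)

```python
# Sanity-checking the quoted Danish list prices against an assumed
# ~7.45 DKK/EUR peg. The rate is an assumption; the prices are as quoted.
DKK_PER_EUR = 7.45

prices_dkk = {"7800X3D": 2900, "14700K": 3400, "14900K": 4800}
x3d_dkk = prices_dkk["7800X3D"]

for cpu, dkk in prices_dkk.items():
    eur = dkk / DKK_PER_EUR                    # absolute price in euro
    delta = (dkk - x3d_dkk) / DKK_PER_EUR      # gap versus the 7800X3D
    print(f"{cpu}: {eur:.0f} EUR (+{delta:.0f} EUR vs the 7800X3D)")
# -> roughly +67 EUR for the 14700K and +255 EUR for the 14900K
```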
 
You were eager to prove me wrong previously - now you won't even provide any proof...
That's 'cause I'm on the couch on my 6900hs laptop; can't really turn those e-waste cores off, 'cause it doesn't have any. Tomorrow I can prove you wrong, no problem, but what is the point anyway? If you are right and e-cores harm performance, then the 3d stinks even more.
When it was about performance you were comparing it with the i9, but now that it's about money, it's against the i7 - cute. And as I have already told you multiple times, where I am the difference is a lot bigger than 20 euro. The 7800x3d is 2900 DKK, the 14700k 3400 DKK, and the 14900k 4800 DKK. That's a difference of roughly 67 and 255 euro respectively. And the mobo and RAM needed for the Intel CPUs are a lot more expensive as well.
By i9 you mean the 12900k? You realize the 13700k is better - a lot better - than the 12900k, right? So what are you even on about?

I'm pretty confident you've had your x3d for more than 6 months; back then it wasn't any cheaper than the 13700kf. Even right now, at Proshop the difference is 11 euros.
 
That's 'cause I'm on the couch on my 6900hs laptop; can't really turn those e-waste cores off, 'cause it doesn't have any. Tomorrow I can prove you wrong, no problem, but what is the point anyway? If you are right and e-cores harm performance, then the 3d stinks even more.

By i9 you mean the 12900k? You realize the 13700k is better - a lot better - than the 12900k, right? So what are you even on about?

I'm pretty confident you've had your x3d for more than 6 months; back then it wasn't any cheaper than the 13700kf. Even right now, at Proshop the difference is 11 euros.

Provide the proof, and we shall see what to make of it.

Proshop has a discount on all Intel CPUs this week (wonder why - probably can't get the stock moved... lol) - none of the other shops do. Next week, when the discount is over, the difference is back to what I said.

X3D chips are ideal for single-player games and SFF builds.

Intel is ideal for multiplayer games - the ones you can't really bench, because the test scenes always change. But there's a reason basically zero streamers or professional esports players use AMD, for either CPU or GPU.

This isn't to say both CPUs can't play both types of games; it's just a matter of what is best.

Battlefield isn't exactly esports, but from what I've seen/tested, the 7800X3D pretty much hits it out of the park in BF2042.
 
So is X3D not a "fundamental" change because it's just "incremental", adding more cache, and the architecture hasn't changed? You're muddying the waters here with "minor"/"major" changes etc. The only thing that matters is end performance per clock. I don't care if they only changed a "wheel locking nut"; if the car is 20% faster because of it, it's enough of an architectural change that you can't argue it's the "same".
I'm not saying that X3D is not a big change. Of course it is; otherwise I wouldn't have one. All I'm saying is, it's not a new architecture. If AMD can glue some extra cache on top of the CPU die, or if Intel can glue a few extra cores (with the cache they naturally come with) onto theirs, of course it brings performance gains, and of course it's awesome. It's just not a fundamentally new design from the ground up. You can say that this doesn't matter for the end user as long as we get something extra, and you'd be absolutely right.
 
I'm not saying that X3D is not a big change. Of course it is; otherwise I wouldn't have one. All I'm saying is, it's not a new architecture. If AMD can glue some extra cache on top of the CPU die, or if Intel can glue a few extra cores (with the cache they naturally come with) onto theirs, of course it brings performance gains, and of course it's awesome. It's just not a fundamentally new design from the ground up. You can say that this doesn't matter for the end user as long as we get something extra, and you'd be absolutely right.

Adding more cores & cache to Skylake is another analogy.

That line had big performance gains; it was only when you ran a very small bench that fit inside L1 cache, with clock speed locked/normalized, that you could see it was the same uArch. But normal users just saw big jumps, and Intel made money hand over fist.

I mean, does it really matter *how* you got that +30% in two years? This, supposedly, in Intel's dark days of rehashing Skylake.

[Attachment: 1709913777102.png]
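(The normalization being described is simple enough to sketch: divide each score by the clock it ran at, and the per-clock numbers land on top of each other. A toy example in Python - the scores and clocks below are invented purely to illustrate the arithmetic, not real benchmark results:)

```python
# Toy illustration of clock-normalizing benchmark scores to compare
# per-clock (IPC-like) performance. All numbers are made up for the
# sake of the arithmetic -- they are NOT real benchmark results.
def per_ghz(score: float, clock_ghz: float) -> float:
    """Score per GHz: near-identical values hint at the same uArch."""
    return score / clock_ghz

# Hypothetical single-thread scores at each chip's boost clock.
chips = {
    "6700K (Skylake)": (850.0, 4.2),
    "8700K (Coffee Lake)": (950.0, 4.7),
}

for name, (score, clock) in chips.items():
    print(f"{name}: {score:.0f} pts at {clock} GHz -> {per_ghz(score, clock):.1f} pts/GHz")
# Both land around ~202 pts/GHz: the raw score gap comes from clocks
# (and cores), not from a different core architecture.
```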
 
Adding more cores & cache to Skylake is another analogy.

That line had big performance gains; it was only when you ran a very small bench that fit inside L1 cache, with clock speed locked/normalized, that you could see it was the same uArch. But normal users just saw big jumps, and Intel made money hand over fist.

I mean, does it really matter *how* you got that +30% in two years? This, supposedly, in Intel's dark days of rehashing Skylake.

View attachment 338125
It doesn't matter how you get the performance increase. Efficiency also increased during this time, due to process and architecture tweaks, plus revisions/improvements on the iGPU side of things.
 
Adding more cores & cache to Skylake is another analogy.

That line had big performance gains; it was only when you ran a very small bench that fit inside L1 cache, with clock speed locked/normalized, that you could see it was the same uArch. But normal users just saw big jumps, and Intel made money hand over fist.

I mean, does it really matter *how* you got that +30% in two years? This, supposedly, in Intel's dark days of rehashing Skylake.

View attachment 338125
It doesn't matter how you get the performance increase. Efficiency also increased during this time, due to process and architecture tweaks, plus revisions/improvements on the iGPU side of things.
I never doubted the significance of the performance gain, did I?

All I said was that the above things are architecturally pretty much the same thing. This statement is not supposed to have any kind of negative or positive connotation attached to it.

Edit: If you manage to slap some magical bullshit back wing onto your car that produces enough downforce to make it 0.5 seconds faster in 0-60, that's bloody awesome, really, but I'm not gonna pretend that it suddenly became an entirely different car. Again, there is no negativity, or any other emotion or value judgement about it. It's just fact.
 