
Some Intel Nova Lake CPUs Rumored to Challenge AMD's 3D V-Cache in Desktop Gaming

Good. Intel is joining the X3D game. Let's hope it's priced well and has the potential to rival AMD :].
 
Good. Intel is joining the X3D game. Let's hope it's priced well and has the potential to rival AMD :].
It would be interesting if Intel re-entered HEDT with their version of 3D V-Cache. Threadripper needs some competition to balance pricing. And this makes me wonder: where the heck is X3D for Threadripper?
 
Too little, too late, maybe, since Zen 6 will probably be much faster than Zen 5. Evolving a weak gaming CPU architecture usually isn't enough when the competition is this strong and keeps progressing in that segment.
 
You think it's good that Intel is going to segment its CPUs with V-Cache because you think segmenting CPUs with V-Cache is terrible? WUT?

It may help them catch up significantly. IDK if they will dethrone AMD on efficiency or in outright gaming, though.

I don't know why people think AMD has a lead in efficiency. That's just not true. Especially in the midrange, Intel has had a huge lead for the last four years. AMD's CPUs aren't efficient, they're just slow; there's a huge difference between the two. Even a three-year-old 13700K is a LOT faster than the 9700X when both are limited to the same power. AMD's insistence on sticking to 8 cores in the ~$350-400 price range has left it completely uncompetitive for performance-oriented consumers at that budget.

Some genuine questions, because I don't have any recent big.LITTLE Intel CPU:
Is core parking not working? Is the stutter from crossing over CCDs? Is Intel's Thread Director just light-years better at preventing games from hitting E-cores?
Thread Director is actually nuts: you can have the heaviest multi-core workload running in the background while playing your games normally, without doing anything and without having to go to Task Manager or Process Lasso to set it up. It does all of that automatically. That said, there are a couple of games (2 or 3) that don't like E-cores; performance is worse with them on. But it's really been a handful in the four years I've been using E-cores, so no biggie for me.
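For anyone curious, here's a rough sketch of the manual babysitting that Thread Director saves you from, i.e. pinning a background workload to the E-cores yourself. The core layout (logical CPUs 0-15 as P-cores, 16-23 as E-cores) and the process name are made-up placeholders, not taken from any particular chip:

```python
# Manual version of what Thread Director does automatically: pin a
# background encode/render process to the E-cores so a game keeps the
# P-cores to itself. Requires: pip install psutil
import psutil

P_CORES = list(range(0, 16))   # hypothetical: logical CPUs 0-15 = P-cores
E_CORES = list(range(16, 24))  # hypothetical: logical CPUs 16-23 = E-cores

def pin_background_to_ecores(name_fragment: str) -> None:
    """Restrict every process whose name contains name_fragment to the E-cores."""
    for proc in psutil.process_iter(["pid", "name"]):
        pname = (proc.info["name"] or "").lower()
        if name_fragment.lower() in pname:
            try:
                proc.cpu_affinity(E_CORES)  # set CPU affinity to the E-core set
                print(f"Pinned {pname} (PID {proc.info['pid']}) to E-cores")
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass  # skip processes we aren't allowed to touch

if __name__ == "__main__":
    pin_background_to_ecores("handbrake")  # placeholder background workload
```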

If Intel can compete in gaming loads again, then it's good for everyone. I'm just looking at the dumpster fire that is the Arrow Lake gaming benchmarks, where it's often slower than 13th gen, and in a few popular titles its average FPS is worse than AMD's minimum FPS. Not a good look for Intel and gaming.
Surely you realize that the dumpster-fire 285K is as fast as the 9950X in games, right? So, surely, you can't actually be suggesting that the 9950X doesn't offer a great gaming experience, are you?
 
Surely you realize that the dumpster-fire 285K is as fast as the 9950X in games, right? So, surely, you can't actually be suggesting that the 9950X doesn't offer a great gaming experience, are you?
It's a dumpster-fire in terms of performance/$ for gaming.

285K is more expensive than, and slower than a 9800X3D. I wouldn't buy a 9950X for gaming either, because it's also more expensive than, and slower than a 9800X3D.

If you move the goalposts, any argument can be countered.
 
It's a dumpster-fire in terms of performance/$ for gaming.

285K is more expensive than, and slower than a 9800X3D. I wouldn't buy a 9950X for gaming either, because it's also more expensive than, and slower than a 9800X3D.

If you move the goalposts, any argument can be countered.
The 9800X3D is also a dumpster fire in terms of performance per $ for gaming. Don't believe me? Here you go.

[Attached chart: cost-per-frame-2560x1440.png]

It's also a dumpster fire in performance/$ for non-gaming workloads, but whatever, that doesn't matter here.
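Just so we're arguing about the same metric: a cost-per-frame chart is nothing more than price divided by average FPS, which is why cheap midrange chips always float to the top of it and flagships sink to the bottom. A rough sketch with made-up placeholder prices and FPS numbers, not figures from the chart above:

```python
# Cost per frame = CPU price / average FPS; lower is "better" by this metric.
# All prices and FPS values below are illustrative placeholders only.
cpus = {
    "Flagship with extra cache": {"price_usd": 480.0, "avg_fps": 165.0},
    "Flagship without extra cache": {"price_usd": 590.0, "avg_fps": 150.0},
    "Midrange": {"price_usd": 250.0, "avg_fps": 140.0},
}

# Sort from cheapest to most expensive per frame of average performance.
for name, d in sorted(cpus.items(), key=lambda kv: kv[1]["price_usd"] / kv[1]["avg_fps"]):
    cost_per_frame = d["price_usd"] / d["avg_fps"]
    print(f"{name}: ${cost_per_frame:.2f} per average frame")
```

By construction, the fastest chip only tops this metric if it is also cheap, which is the whole point being argued here.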
 
285K is more expensive than, and slower than a 9800X3D.
From where I am, the 285K is cheaper, especially the 265K and KF. The 9800X3D is a rip-off here at around 700-800 USD, and the 9950X3D is at around 900-1,000 USD.
 
Intel needs a hard reset: a completely new, ground-up design that can actually scale linearly again and stand the test of time, on a reliable socket that nobody can deny will last half a decade or more. Until then? They're dead in the water. I don't even care what they reinvent. It's all been done, and it makes no sense to do it again or re-release it as yet another Lake. They need a chiplet-based design for their basic CPU core complex. Even with their recent releases they still haven't really achieved what AMD has done to make itself successful. Yes, they fused chiplets together. But it's far from a scalable, continuously developed floorplan.
This.
Intel really needs a new architecture. Separating the memory controller from the compute die with Arrow Lake resulted in very high RAM latency, only mitigated with expensive high-speed RAM and lots of tuning just to match 14th gen. Maybe they'll fix the RAM latency with Nova Lake, but Intel needs to start with a new architecture optimized for a split-die design.
 
From where I am, the 285K is cheaper, especially the 265K and KF. The 9800X3D is a rip-off here at around 700-800 USD, and the 9950X3D is at around 900-1,000 USD.
Well that sucks. The MSRPs are $589 for the 285K and $479 for the 9800X3D.
Here in the UK it's about £525 for the 285K and £420 for the 9800X3D.

The 9800X3D is also a dumpster fire in terms of performance per $ for gaming. Don't believe me? Here you go.
Oh look, you've moved the goalposts again. Now you're comparing to lesser CPUs than the 285K because the graph you linked has the 9800X3D absolutely whooping the 285K at $2.9/frame instead of $3.9/frame.

You should really learn not to keep moving goalposts when trying to make a point. It makes you look like a flustered fanboy struggling to bring reasoning skills or logic to the table. I know you can use reasoning and logic - I've seen you do it before in plenty of other threads. Why are you not doing that here? Ignore your emotions, look at the facts, and ask yourself if the 285K is really a better gaming CPU than the 9800X3D, and then ask yourself why every single review on the web shows the exact opposite: the 9800X3D out-gaming the 285K at a lower price.

What are you trying to prove and why?
 
Oh look, you've moved the goalposts again. Now you're comparing to lesser CPUs than the 285K because the graph you linked has the 9800X3D absolutely whooping the 285K at $2.9/frame instead of $3.9/frame.

You should really learn not to keep moving goalposts when trying to make a point. It makes you look like a flustered fanboy struggling to bring reasoning skills or logic to the table.
How am I moving the goalposts? You literally said the 285K is a gaming dumpster fire. I said it matches the 9950X, therefore the 9950X is also a dumpster fire. Then you moved the goalposts by saying that the 9800X3D has better performance/$ in gaming. I noted that the 9800X3D is also terrible in performance/$ for gaming, therefore, according to your OWN logic, it's a dumpster fire for gaming, yet you are moving the goalposts again to avoid admitting it.

If bad performance/$ in gaming makes a CPU a gaming dumpster fire, then the 9800X3D is a dumpster fire in gaming. Period. Stop moving the goalposts.

Ignore your emotions, look at the facts, and ask yourself if the 285K is really a better gaming CPU than the 9800X3D, and then ask yourself why every single review on the web shows the exact opposite: the 9800X3D out-gaming the 285K at a lower price.
You're the one with emotions here. I'm using straight-up logic. Yes, obviously the 9800X3D is a better gaming CPU than the 285K, but that doesn't mean every other CPU is a dumpster fire. Especially with the metric you used, performance/$: that metric completely negates your argument, since the 9800X3D sucks at it. It's horribly expensive, resulting in very poor gaming performance/$.
 
How am I moving the goalposts? You literally said the 285K is a gaming dumpster fire. I said it matches the 9950X, therefore the 9950X is also a dumpster fire. Then you moved the goalposts by saying that the 9800X3D has better performance/$ in gaming. I noted that the 9800X3D is also terrible in performance/$ for gaming, therefore, according to your OWN logic, it's a dumpster fire for gaming, yet you are moving the goalposts again to avoid admitting it.

If bad performance/$ in gaming makes a CPU a gaming dumpster fire, then the 9800X3D is a dumpster fire in gaming. Period. Stop moving the goalposts.
This is a thread about Intel's answer to the X3D models' 3D V-Cache.

I said "It's a dumpster-fire in terms of performance/$ for gaming" in response to the 285K and 9950X that you brought to the discussion, not me.

You can argue until you're blue in the face about the 285K and 9950X but they're not the CPUs I was discussing, they're not relevant to this thread, and even if I entertain your daft suggestion that the 285K or 9950X is a better gaming CPU than the 9800X3D, you're not making any sense because the 'evidence' you're posting shows the 9800X3D being vastly superior.

This is a thread about gaming performance on upcoming CPUs with more cache. Can you please focus on that instead of digging yourself into a hole?
 
If they do this, it'll be because they either absolutely destroy AMD or best it by 3%.
Apart from the 285K, the 265K and 245K are amazing buys for 4K.

Regarding stuttering on X3D: you may be in the minority, or at the very least you don't notice it. Simply type 'stuttering CPU reddit' into Google and the threads you get are about X3D CPUs. Hell, I typed in 'Intel CPU stuttering' and the first result was '9800X3D stutters'. It's there. The major annoyance I have with it is that people defend it for no clear reason and say it's because you didn't enable C-states or some setting that you would never even need to know about on an Intel platform.

I vowed never to listen to the internet again. Intel's current platform, while not dishing out the bigger numbers, felt overall more sophisticated to me; it even supports CUDIMMs. It just worked with everything, such as streaming and video recording (which AMD has issues with ATM, IDK why), overclocking was actually fun, and it ran cooler, believe it or not, since it's not trying to boost itself to maximum thermals all the time. Absolutely zero stuttering, and let's be honest, there's no difference in games at 4K, so my switch over to AMD got me a hotter CPU and lower benchmark scores.
If Intel bests AMD's fastest gaming CPU by 3%, then it won't be worth going with an Intel X3D part unless you absolutely want that 3% bragging right, IMO. Having socket longevity and price/performance is much more valuable to me.
And I have a difficult time believing Reddit posts from Intel fans saying AMD CPUs are bad because of a latency issue; if it were true to the extent some people insist, then every reviewer would be complaining about games stuttering. Arrow Lake needs expensive CUDIMMs and overclocking to match 13th & 14th gen, while having higher power consumption than a 9800X3D. I personally couldn't care less about overclocking anymore, so having a CPU that boosts on its own, making OCs pointless, is a good thing.
It's at the bottom regardless but keep on moving the goalposts.
The comparison was with the 9800X3D, which is also a faster and cheaper CPU than the 285K; you're the one moving goalposts, lol.
 
This is a thread about Intel's answer to the X3D models' 3D V-Cache.

I said "It's a dumpster-fire in terms of performance/$ for gaming" in response to the 285K and 9950X that you brought to the discussion, not me.

You can argue until you're blue in the face about the 285K and 9950X but they're not the CPUs I was discussing, they're not relevant to this thread, and even if I entertain your daft suggestion that the 285K or 9950X is a better gaming CPU than the 9800X3D, you're not making any sense because the 'evidence' you're posting shows the 9800X3D being vastly superior.

This is a thread about gaming performance on upcoming CPUs with more cache. Can you please focus on that instead of digging yourself into a hole?
Exactly, and I'm telling you the metric you're using to determine what counts as a dumpster fire (FPS/$) is a very bad metric, since it clearly shows the 9800X3D at the bottom of the chart, losing to 95% of the CPUs released in the last 7 years. You don't think the 9800X3D is a gaming dumpster fire (I guess, not sure), and yet the metric you're using shows that it is.

The comparison was with the 9800X3D, which is also a faster and cheaper CPU than the 285K; you're the one moving goalposts, lol.
My comparison is to the 245K. Clearly the 9800X3D is a gaming dumpster fire, since it loses by a lot to the 245K in FPS/$. Don't move the goalposts; admit that it's a gaming dumpster fire.
 
I don't know why people think AMD has a lead in efficiency.
I thought it was already argued on TPU ad nauseam that AMD leads in core efficiency, in terms of energy used for work done over time. The ironic downside to Ryzen is that in the high-yield chiplet paradigm, the I/O die is a dumpster-fire power hog, so if you really want that sweet total power efficiency you need the mobile chips or the G-series variants, but those tend to be weaker compute offerings compared to the standard lineup.
 
I thought it was already argued on TPU ad nauseam that AMD leads in core efficiency, in terms of energy used for work done over time. The ironic downside to Ryzen is that in the high-yield chiplet paradigm, the I/O die is a dumpster-fire power hog, so if you really want that sweet total power efficiency you need the mobile chips or the G-series variants, but those tend to be weaker compute offerings compared to the standard lineup.
It doesn't really need to be argued. Take a 9700X and a 265K, limit them to the same power, and run Cinebench or anything multithreaded (Corona, V-Ray, etc.). The 265K will be stupidly faster, run considerably cooler, and end up using a lot less energy overall since it finishes the task faster.
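Rough back-of-the-envelope version of that point, with made-up runtimes just to show the arithmetic (energy = power x time at a fixed cap):

```python
# Energy to complete a fixed workload at a fixed power limit:
# energy (Wh) = power (W) x time (h). At the same power cap, whichever
# chip finishes sooner burns less total energy for the same job.
# The runtimes below are illustrative placeholders, not measurements.
POWER_LIMIT_W = 125  # both chips capped to the same package power

minutes_to_finish = {
    "CPU A": 10.0,  # hypothetical render time at the cap
    "CPU B": 14.0,
}

for name, minutes in minutes_to_finish.items():
    energy_wh = POWER_LIMIT_W * (minutes / 60.0)
    print(f"{name}: {minutes:.0f} min -> {energy_wh:.1f} Wh at {POWER_LIMIT_W} W")
```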
 
My comparison is to the 245K. Clearly the 9800X3D is a gaming dumpster fire, since it loses by a lot to the 245K in FPS/$. Don't move the goalposts; admit that it's a gaming dumpster fire.
LOL, this is the first time you've mentioned 245K. Goalposts moved yet again.
Christ dude, give it up already. It's not even on topic. At least talk about CPUs with extra cache or something.

Take a 9700X and a 265K, limit them to the same power, and run Cinebench or anything multithreaded (Corona, V-Ray, etc.). The 265K will be stupidly faster, run considerably cooler, and end up using a lot less energy overall since it finishes the task faster.
What does that have to do with this thread?
Gaming? No.
Extra L3 cache? No.

....aaaaand now you're adding two more completely unrelated CPUs to the discussion, an 8C vs 20C processor where the 20C processor wins in multithreaded workloads? Wow, what an unexpected result that never needed proving in the first place.... :kookoo:
 
LOL, this is the first time you've mentioned 245K. Goalposts moved yet again.
Christ dude, give it up already. It's not even on topic. At least talk about CPUs with extra cache or something.
You didn't get the point. The point is that either your metric is flawed or the 9800X3D is a dumpster fire for gaming. In fact, I take that back; I think you got the point, you're just pretending you didn't to avoid admitting the obvious. So I'll ask you clearly: does bad FPS/$ make a CPU a dumpster fire for gaming? Answer the question without moving the goalposts yet again.
 
You didn't get the point. The point is that either your metric is flawed or the 9800X3D is a dumpster fire for gaming. In fact, I take that back; I think you got the point, you're just pretending you didn't to avoid admitting the obvious. So I'll ask you clearly: does bad FPS/$ make a CPU a dumpster fire for gaming? Answer the question without moving the goalposts yet again.
If you're paying top dollar for the best gaming CPU, you want the best gaming CPU, period - and that's the one with extra cache.

The fact the 9800X3D beats the 285K for less money makes the 285K a dumpster fire for gaming. It's worse in absolute performance, relative performance, performance/Watt, performance/$.
 
It doesn't really need to be argued. Take a 9700X and a 265K, limit them to the same power, and run Cinebench or anything multithreaded (Corona, V-Ray, etc.). The 265K will be stupidly faster, run considerably cooler, and end up using a lot less energy overall since it finishes the task faster.
I'm having a hard time agreeing with your conclusion, but it's getting off topic, so I will just stop here and say I thought you couldn't compare efficiency by holding the chip wattage constant, because the two chips have different power/frequency curves. By fixing the wattage, the cores you're comparing won't run at the same frequency, so while one might appear to win at one point on its curve, it may lose at a different point where the other chip has the advantage.
 
If you're paying top dollar for the best gaming CPU, you want the best gaming CPU, period - and that's the one with extra cache.
Right, so performance/$ is useless. That's been my point this whole page: you shouldn't even have brought it up.
 
Having socket longevity and price/performance is much more valuable to me.
Yeah, AMD supports sockets longer, but they release too many boards every year with 'refreshed' chipset names. In my history of owning products from both Intel and AMD, I always end up buying more AMD boards in the long run than Intel ones. Why is that? Because even Intel's old RPL is still competitive with AMD's, what, 7900X, 7950X, 7800X3D, 7950X3D, 7900X3D, 9950X, 9900X, 9800X3D, 9900X3D and 9950X3D. It's just 1 Intel chip for what, 8 AMD chips.
 
I'm having a hard time agreeing with your conclusion, but it's getting off topic, so I will just stop here and say I thought you couldn't compare efficiency by holding the chip wattage constant, because the two chips have different power/frequency curves. By fixing the wattage, the cores you're comparing won't run at the same frequency, so while one might appear to win at one point on its curve, it may lose at a different point where the other chip has the advantage.
That's like saying you can't compare the efficiency of cars by running them at the same speed. Quite the contrary, that's exactly how you SHOULD compare it; these are called iso-power comparisons. Running at different power limits doesn't tell you which CPU is actually more efficient, it just tells you which CPU has a lower power limit out of the box. That's why if you, e.g., compare a 265K to a 265 non-K, the latter will look a lot more efficient because it has a lower power limit, but in reality it isn't actually more efficient. If anything, it might even be worse due to worse binning.
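A toy illustration of why stock power limits mislead when comparing efficiency; the scores below are placeholders rather than real benchmark numbers:

```python
# The same silicon capped lower scores better on points-per-watt simply
# because it sits lower on its voltage/frequency curve, not because the
# chip got more efficient. All scores are made-up placeholders.
samples = [
    ("Chip X @ 250 W stock cap", 250, 42000),  # hypothetical nT score
    ("Chip X @ 125 W capped",    125, 30000),  # same chip, lower cap
    ("Chip Y @ 125 W stock cap", 125, 26000),  # different chip
]

for name, watts, score in samples:
    print(f"{name}: {score / watts:.0f} points per watt")

# An iso-power comparison holds watts constant (the two 125 W rows),
# so any score difference reflects the chips, not their default caps.
```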
 