
Intel Core i9-13900KS 6 GHz Processor MSRP 22% Higher Than i9-13900K: Retailer

Intel just seems to be doubling down on "we're faster, power budget be damned" with its GHz chasing. Isn't this the attitude that culminated in several years of terrible NetBurst processors?

Intel can't keep pushing clock speed without exponential power consumption - they're already bumping up against the physical limits of what top-end watercooling can (barely) handle.

Adding more L2 cache seems to have been the reason Raptor Lake is faster than Alder Lake. We need performance/Watt, not GHz, because we've reached the point where there are no more Watts available.
The architecture allows you to push the processor much further than AMD allows (AMD also gave the 7950X a 100 W higher power limit compared to the 5950X). I think the obstacle is the voltage, which is lower on Intel at the same frequency. High voltage can destroy your processor even without high power draw.
If you are worried about the consumption, manually set PL2 in the BIOS or XTU.
 
I don't recall the barrier to 1, 2, 3, 4 or 5 GHz being broken at reasonable perf/W. These milestones are always pushed.
Except that those CPUs barely ate any power, were made on bigger, less dense nodes than these modern monsters, and there was a lot of headroom for inventing better cooling. Nowadays, we have to cool billions of transistors packed into an area the size of a fingernail, eating hundreds of watts, but our thermal conductors are still the same: aluminium, copper and a water-equivalent liquid. Our top-end PCs are already full of radiators, and there isn't much room left to grow in size.
 
The architecture allows you to push the processor much further than AMD allows (AMD also gave the 7950X a 100 W higher power limit compared to the 5950X). I think the obstacle is the voltage, which is lower on Intel at the same frequency. High voltage can destroy your processor even without high power draw.
If you are worried about the consumption, manually set PL2 in the BIOS or XTU.
PPT rose 88 W, not 100 W, and TDP only 65 W - but yes - the point still stands that pushing power consumption up (which is increasingly a matter of voltage, since voltage is the only squared term in the dynamic power equation) is futile for technological progress.

You have to remember that desktops are a shrinking market and the future is mobile/laptop. You CANNOT put a 500 W CPU in a laptop or phone; the laws of physics say that progress in this direction is an eventual dead end. Enthusiasts with massive cooling solutions will always present a distorted picture of the high end, but realistically, 90% of desktops use basic cooling with a single 92 or 120 mm fan, adequate for ~125 W at best - and even that is beyond what most OEMs can budget for in a laptop, where the CPU and GPU fight over the same fin stack and heatpipes.
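To put some illustrative numbers on the voltage point: dynamic CPU power scales roughly as P ≈ C·V²·f, so a clock bump that also requires a voltage bump costs disproportionately more power. A quick sketch - all figures here are hypothetical for illustration, not vendor specs:

```python
# Dynamic switching power: P ≈ C_eff * V^2 * f.
# Voltage is the only squared term, so it dominates the scaling.
def dynamic_power(c_eff, volts, freq_hz):
    """Approximate dynamic power (relative units here)."""
    return c_eff * volts ** 2 * freq_hz

# Hypothetical baseline bin: 5.5 GHz stable at 1.30 V.
base = dynamic_power(1.0, 1.30, 5.5e9)

# Hypothetical 6.0 GHz bin that also needs 1.45 V to be stable.
pushed = dynamic_power(1.0, 1.45, 6.0e9)

print(f"clock gain: {6.0 / 5.5 - 1:+.1%}")
print(f"power gain: {pushed / base - 1:+.1%}")
```

Under these assumed numbers, a roughly 9% clock gain costs about 36% more dynamic power - which is why the last few hundred MHz are so expensive to cool.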
 
In the end you’re paying more, but obtaining little to no overall benefit in performance per watt. The 7950X has an unlocked multiplier across all 16 cores, and I just purchased one brand new for $549.

You’re now only paying for the bin, and the bin doesn’t even guarantee the boost performance.

The worst part is purchasing the 13900KS and being stuck on a dead platform with a processor that will consume as much or more power than Meteor Lake, but with less performance.

I paid nearly $100 less than a 13900K, and they’re expecting me to pay $100 over 13900K pricing to get a better bin on a dead platform - versus AM5, where you’ll potentially get up to two new launches, with mini launches in between, and no motherboard upgrade costs until at minimum 2025. With that in mind, why would I care if I’m averaging 5 fps less than Intel? If power costs were a problem, I could run the processor down to 5950X speeds and consume less than the 5950X because of the headroom on 5 nm. But why? I’m already 30-40 W lower on average performing the same tasks vs the 13900K…

Intel is in a tough position right now, especially as AM5 platform pricing continues to erode and 3D cache is on the horizon.
Personally, I never give much credit to platform upgradability across generations. Selling your expensive CPU for half of its price just to buy another, equally expensive CPU that's 10% faster or more efficient is a bad deal, imo.
 
Agreed. The performance headroom for overclocking has been significantly reduced in the most recent processor generations, but there does seem to be a lot of optimization room available to get within a few percent of stock performance while reducing temps at the same time. I'm looking forward to the 3D cache variants and hoping those are fully unlocked as well.
Unlikely, as the reason they were locked for Zen 3 was the cache itself being easily damaged by the voltages needed for overclocking.

AMD are running the X3D variants with a max voltage limit, a fixed value based on TSMC's process node. The 7800X3D will likely run faster than the 5800X3D because of the jump from 7 nm to 5 nm, but the limiting factor will once again be that the 3D V-Cache limits the peak voltage that can be supplied to any of the cores.

Given what I've been saying in this thread about diminishing returns for added voltage/power - that's no bad thing, the result will likely be far more efficient at stock than anything else in the Zen4 lineup.

Even if you just set a manual PPT level or temperature limit, there's much efficiency to be gained with only a few percent of performance lost at most.
Indeed. I don't think a single one of my personal machines has used the stock power limits in the last 15 years. I've even found tools to extract efficiency from locked-down laptops.
Even without voltage tuning, just about every CPU/GPU/APU will have perf/Watt gains by simply reducing the power budget. If you can spend that power budget more wisely by picking better voltage curves, that's just icing on the cake.
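As a toy model of why reducing the power budget works (the curve below is assumed for illustration, not measured from any chip): if throughput grows sublinearly with power - say with its square root, a common shorthand for diminishing returns near the top of the V/f curve - then perf/Watt improves monotonically as the power limit drops:

```python
# Assumed diminishing-returns curve: performance ~ sqrt(power).
# Real V/f curves differ per chip, but the shape is similar at the limit.
def perf(watts):
    return watts ** 0.5  # arbitrary performance units

for limit in (253, 150, 90):
    kept = perf(limit) / perf(253)  # fraction of stock performance
    eff = perf(limit) / limit       # performance per watt
    print(f"{limit:>3} W: {kept:5.1%} of stock perf, {eff:.4f} perf/W")
```

Under this assumed curve, a 150 W limit keeps about 77% of stock performance for about 59% of the power; real chips often do better, since the very top of the V/f curve is even steeper than a square root.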
 
I don’t understand the rationale behind KS chips anymore… You’re getting binned chips, but you still may not reach the advertised boosts, and the peak boost is limited to only a couple of cores.

I liked the KS before, when it enabled me to boost/OC across all cores. What about those E-cores?

I’m fairly certain they matter:


Back when KS was awesome (14+^4):

I don't get your point: the E-cores will most probably be better binned in the KS too.

Diminishing returns? For sure, on a KS CPU, but this product is not targeted at people looking for a good deal.

In the end you’re paying more, but obtaining little to no overall benefit in performance per watt. The 7950X has an unlocked multiplier across all 16 cores, and I just purchased one brand new for $549.

You’re now only paying for the bin, and the bin doesn’t even guarantee the boost performance.

The worst part is purchasing the 13900KS and being stuck on a dead platform with a processor that will consume as much or more power than Meteor Lake, but with less performance.

I paid nearly $100 less than a 13900K, and they’re expecting me to pay $100 over 13900K pricing to get a better bin on a dead platform - versus AM5, where you’ll potentially get up to two new launches, with mini launches in between, and no motherboard upgrade costs until at minimum 2025. With that in mind, why would I care if I’m averaging 5 fps less than Intel? If power costs were a problem, I could run the processor down to 5950X speeds and consume less than the 5950X because of the headroom on 5 nm. But why? I’m already 30-40 W lower on average performing the same tasks vs the 13900K…

Intel is in a tough position right now, especially as AM5 platform pricing continues to erode and 3D cache is on the horizon.
Your post makes very little sense to me.

AMD's price for the 7950X is not $549. The official price is still $699 if I'm not wrong, even after AMD admitted their launch fiasco and lowered it quite a bit. So your point could also apply to a 13900K on a deal (I found one €90 below the official price in a local store).

The 13900K also has all of its cores unlocked, and there are many more than 16 of them (24, actually).
You're just trying to justify your purchase here... no need to. Everyone can buy whatever fits their needs.

AM5 platform prices are falling because the parts aren't selling: it's very hard to call that a "victory" over Intel. And the launch of non-K SKUs in January will make the situation even worse for AMD.
 
The 22% price increase is a bit steep, but I do think the 13900KS will retain its resale value very well so the extra cost is mostly returned if you resell it.

My experience has been that the fastest processor in a socket doesn't depreciate nearly as fast as other processors. There are always people who want to upgrade their system to faster CPUs even years after the socket is discontinued, and these keep up the resale value of the fastest processor for the socket.

I think the biggest problem might be actually obtaining one. When I bought my 9900KS I had to buy in a combo bundle because no place would sell it alone, and that was before these required combo bundles became common in 2020.
It may seem like the price is holding better, but you need to remember that you paid more as well. The market for such niche chips is very small, and people in this niche will quickly upgrade to the latest chip when it's available.
In any case, it will not be possible for Intel to achieve higher clock speeds without raising the power limit, even on top-tier bins. A couple of hundred MHz doesn't seem like much until you realise that the chip is probably running at the limit of what Intel's 10 nm process can deliver.
 
Indeed. I don't think a single one of my personal machines has used the stock power limits in the last 15 years. I've even found tools to extract efficiency from locked-down laptops.
Even without voltage tuning, just about every CPU/GPU/APU will have perf/Watt gains by simply reducing the power budget. If you can spend that power budget more wisely by picking better voltage curves, that's just icing on the cake.
And that's why I never buy or recommend Intel CPUs with a K. Non-K variants have adequate performance even at stock, and if you upgrade your cooling, you can fine-tune them to deliver close to K-SKU level performance for less money.
 
New overpriced, pre-binned power consumption king on the way to customers. Enjoy the new gaming king and its fantastic TCO.
 
I don't get your point: the E-cores will most probably be better binned in the KS too.

Diminishing returns? For sure, on a KS CPU, but this product is not targeted at people looking for a good deal.


Your post makes very little sense to me.

AMD's price for the 7950X is not $549. The official price is still $699 if I'm not wrong, even after AMD admitted their launch fiasco and lowered it quite a bit. So your point could also apply to a 13900K on a deal (I found one €90 below the official price in a local store).

The 13900K also has all of its cores unlocked, and there are many more than 16 of them (24, actually).
You're just trying to justify your purchase here... no need to. Everyone can buy whatever fits their needs.

AM5 platform prices are falling because the parts aren't selling: it's very hard to call that a "victory" over Intel. And the launch of non-K SKUs in January will make the situation even worse for AMD.

The 7950X sold for $549 all weekend long over Black Friday and Cyber Monday, and the deal is still current. I’m not trying to justify my purchase to anyone; I’m giving my opinion on the 13900KS.

You’re right everyone can buy what fits their needs.

AM5 is selling and prices are also simultaneously falling.

The #4 best-selling processor on Newegg is the 7950X; #1 on Amazon is the 7900X. The sales and the new pricing speak for themselves. AMD priced high for early adopters at launch.
 
Wanna play……..gotta pay…..
 
The 7950X sold for $549 all weekend long over Black Friday and Cyber Monday, and the deal is still current. I’m not trying to justify my purchase to anyone; I’m giving my opinion on the 13900KS.

You’re right everyone can buy what fits their needs.

AM5 is selling and prices are also simultaneously falling.

The #4 best-selling processor on Newegg is the 7950X; #1 on Amazon is the 7900X. The sales and the new pricing speak for themselves. AMD priced high for early adopters at launch.
You're not wrong... one just needs to look at the current poll on the TPU main page. Zen 4 is already outnumbering Intel 9-10-11th gen platforms, as well as Raptor Lake, which doesn't even need a motherboard upgrade from Alder Lake. I know, we're enthusiasts, not "common people", but at least it shows that Zen 4 does have a market, despite common belief.
 
You're not wrong... one just needs to look at the current poll on the TPU main page. Zen 4 is already outnumbering Intel 9-10-11th gen platforms, as well as Raptor Lake, which doesn't even need a motherboard upgrade from Alder Lake. I know, we're enthusiasts, not "common people", but at least it shows that Zen 4 does have a market, despite common belief.

All I'm saying is that the 13900K is a good chip, but there is no case where a binned version of it is worth $150 over the 13900K, especially at current pricing.

It would be different if it were a binned 7950X, at which point mainstream AIOs could still control the heat. 15-20 °C higher temperatures on average than the 7950X, 20 W higher power draw on average than the 7950X - and at the end of the day, it equates to 1% better frames on average across 4K gaming, at higher pricing than the 7950X. What kind of reward is that?

The whole value proposition spreading around the internet is the ability to pair the 13900K (or other 13th-gen SKUs) with Z690. That's totally dismissed now, with the 7950X (and other SKUs) receiving massive price drops across the board. The 7950X for $549? The ASRock X670E Taichi for $469? B650 pricing? This is also considering that the LGA1700 platform is dead. As a hardware enthusiast, you would get at least two more upgrade cycles, plus mini updates in between, out of the AM5 platform (AT LEAST - AMD states support for AM5 until at least 2025, but their track record says longer).

Save the money, and invest in DDR5 or a new GPU, which will equate to a meaningful upgrade in average frames. Wait for Meteor Lake or jump to AM5 if you're ready for a future investment upgrade. That's my opinion. :toast:
 
All I'm saying is that the 13900K is a good chip, but there is no case where a binned version of it is worth $150 over the 13900K, especially at current pricing.

It would be different if it were a binned 7950X, at which point mainstream AIOs could still control the heat. 15-20 °C higher temperatures on average than the 7950X, 20 W higher power draw on average than the 7950X - and at the end of the day, it equates to 1% better frames on average across 4K gaming, at higher pricing than the 7950X. What kind of reward is that?

The whole value proposition spreading around the internet is the ability to pair the 13900K (or other 13th-gen SKUs) with Z690. That's totally dismissed now, with the 7950X (and other SKUs) receiving massive price drops across the board. The 7950X for $549? The ASRock X670E Taichi for $469? B650 pricing? This is also considering that the LGA1700 platform is dead. As a hardware enthusiast, you would get at least two more upgrade cycles, plus mini updates in between, out of the AM5 platform (AT LEAST - AMD states support for AM5 until at least 2025, but their track record says longer).

Save the money, and invest in DDR5 or a new GPU, which will equate to a meaningful upgrade in average frames. Wait for Meteor Lake or jump to AM5 if you're ready for a future investment upgrade. That's my opinion. :toast:
And all I'm saying is that future upgrades should not be the main consideration when building a PC, especially not over present needs and value.

I still agree that the 13900KS, along with all KS chips in history, is a waste of money.
 
All I'm saying is that the 13900K is a good chip, but there is no case where a binned version of it is worth $150 over the 13900K, especially at current pricing.

It would be different if it were a binned 7950X, at which point mainstream AIOs could still control the heat. 15-20 °C higher temperatures on average than the 7950X, 20 W higher power draw on average than the 7950X - and at the end of the day, it equates to 1% better frames on average across 4K gaming, at higher pricing than the 7950X. What kind of reward is that?

The whole value proposition spreading around the internet is the ability to pair the 13900K (or other 13th-gen SKUs) with Z690. That's totally dismissed now, with the 7950X (and other SKUs) receiving massive price drops across the board. The 7950X for $549? The ASRock X670E Taichi for $469? B650 pricing? This is also considering that the LGA1700 platform is dead. As a hardware enthusiast, you would get at least two more upgrade cycles, plus mini updates in between, out of the AM5 platform (AT LEAST - AMD states support for AM5 until at least 2025, but their track record says longer).

Save the money, and invest in DDR5 or a new GPU, which will equate to a meaningful upgrade in average frames. Wait for Meteor Lake or jump to AM5 if you're ready for a future investment upgrade. That's my opinion. :toast:
Top-tier products are not cost-effective; why would this one be?

It's a halo product (plus Intel's chance to go down in history as the company that broke 6GHz in the consumer space). Treat it as such.
 
Top-tier products are not cost-effective; why would this one be?

It's a halo product (plus Intel's chance to go down in history as the company that broke 6GHz in the consumer space). Treat it as such.

Just because it’s a top-tier product doesn’t mean you throw out all logic and lay your money down without thought.

I would agree with you if the SKU offered something beyond binning - more cores, more cache, more anything - rather than a batch of chips Intel withheld because they can hit higher turbo boosts.
 
PPT rose 88W, not 100W, and TDP only 65W - but yes - the point is still that pushing power consumption up (which is increasingly a factor of voltage, since voltage is the only exponent in the power equation) is futile for technological progress.

You have to remember that desktops are a shrinking market and the future is mobile/laptop. You CANNOT put a 500W CPU in a laptop or phone, laws of physics say that progress in this direction is an eventual dead-end. Enthusiasts with massive cooling solutions will always present a distorted picture of the high-end but realistically, 90% of deesktops are using basic cooling with a single 92 or 120mm fan, adequate for ~125W at best, and this is beyond the upper echelons of what most OEMs can designate cooling systems for in a laptop where the CPU and GPU will be fighting over the same fin stack and heatpipes.
I compared the real consumption, and there really is a jump of almost 100 W under stress between the 5950X and the 7950X - about 140 W to 240 W according to the reviews.
The x900KS is not worth the price, but overclockers are willing to pay it. They get a guarantee of the best silicon.
 
Just because it’s a top-tier product doesn’t mean you throw out all logic and lay your money down without thought.
You kind of do. That goes all the way back to the Pentium EE days, or even before, when you'd cough up an additional $200 for an Intel 486 over AMD's counterpart.

And there's no reason to get offended either; it's not like anything bad has happened to the rest of the stack just because of this new SKU.
 
Like other i9s in the past, I would not be surprised if Intel cuts into its good 13900K silicon to bin these. I think the time to buy an i9 is early in the cycle, before the KS releases, or late in the cycle, when the KS is close to the same price as the K.
 
The 7950X sold for $549 all weekend long over Black Friday and Cyber Monday, and the deal is still current. I’m not trying to justify my purchase to anyone; I’m giving my opinion on the 13900KS.

You’re right everyone can buy what fits their needs.

AM5 is selling and prices are also simultaneously falling.

The #4 best-selling processor on Newegg is the 7950X; #1 on Amazon is the 7900X. The sales and the new pricing speak for themselves. AMD priced high for early adopters at launch.
In which parallel dimension?

[attached screenshot: retailer best-seller rankings]

That is the US. In Europe it is the same:

[attached screenshot: retailer best-seller rankings]

Even heavily discounted, the AM5 platform is a fiasco so far.
 
Unlikely, as the reason they were locked for Zen 3 was the cache itself being easily damaged by the voltages needed for overclocking.

AMD are running the X3D variants with a max voltage limit, a fixed value based on TSMC's process node. The 7800X3D will likely run faster than the 5800X3D because of the jump from 7 nm to 5 nm, but the limiting factor will once again be that the 3D V-Cache limits the peak voltage that can be supplied to any of the cores.

Given what I've been saying in this thread about diminishing returns for added voltage/power - that's no bad thing, the result will likely be far more efficient at stock than anything else in the Zen4 lineup.


Indeed. I don't think a single one of my personal machines has used the stock power limits in the last 15 years. I've even found tools to extract efficiency from locked-down laptops.
Even without voltage tuning, just about every CPU/GPU/APU will have perf/Watt gains by simply reducing the power budget. If you can spend that power budget more wisely by picking better voltage curves, that's just icing on the cake.
I understand why they locked it, especially for the first gen of the V-Cache; I still hope they unlock the new ones, just because it would be fun to tune, overclock, undervolt, etc.

The first gen was really cool - it proved they could produce them in volume and that there is a large enough market to make the investment worthwhile. The new version is reported to have a denser TSV implementation that should allow for better power delivery and throughput. It's a really interesting time to watch the dominant x86 processor manufacturers take different approaches to the consumer market.
 
It looks like Intel's second-gen 10 nm (known as the Intel 7 node) is already becoming less efficient and delivering smaller performance gains.
The 12th-gen CPUs were impressive, but 13th gen is meh... nothing special.
AMD clearly has an advantage on the 5 nm node.
I wonder how long Intel will stay on 10 nm+.
I hope they don't make a home of it like they did with 14 nm.
Ah shush! xD Seriously though... they may run hotter and use more power, but Intel also manages to cram twice the transistors into the same area as anyone else, so there is that advantage.
And 13th gen is meh? Not really, that's a pretty big gain. Nobody even noticed it with 11th gen, which was a bit of a flop, but only because people who love AMD ironically lack logic and went "omg, fewer cores" - even Intel fanatics did - when the IPC lift was actually surprising. And while 14 nm got long in the tooth, poor old Intel had to catch up and almost died, even though AMD's marketing is the most powerful thing they do. Heck, even I used to mock the plus plus plus... plus, but they were still trading blows. All AMD stuff is meh to me; maybe the new Radeon will be the first product that impresses me, though even AMD's GPUs are now pulling 380 watts (7900 XTX)!

It used to be the case that AMD's stuff always used more power and ran hotter for less performance - kinda funny, really.

Imagine if AMD still had to make their own chips... then they'd be screwed. I have to laugh when someone calls me an Intel fangirl, though... or an NVIDIA one. Not really - I use whatever performs best, which usually ends up being Intel and NVIDIA! Except when I used an EPYC CPU for a GPU server, literally because at the time it had many PCIe 4.0 lanes, not because it was faster than Intel. It was mostly for GPU rendering, so I wanted full-speed access to all the GPUs, with enough cores for each Windows VM running on top of Linux, plus quite a lot of RAM bandwidth to share around. Frankly, I wish I'd waited for an Intel equivalent so the CPU cores were more useful in terms of speed, but it worked. And now AMD love charging more than Intel do - that chip was DAMN expensive. Since when was the underdog allowed to creep prices up beyond the usual leader?!
 
Ah shush! xD Seriously though... they may run hotter and use more power, but Intel also manages to cram twice the transistors into the same area as anyone else, so there is that advantage.
And 13th gen is meh? Not really, that's a pretty big gain. Nobody even noticed it with 11th gen, which was a bit of a flop, but only because people who love AMD ironically lack logic and went "omg, fewer cores" - even Intel fanatics did - when the IPC lift was actually surprising. And while 14 nm got long in the tooth, poor old Intel had to catch up and almost died, even though AMD's marketing is the most powerful thing they do. Heck, even I used to mock the plus plus plus... plus, but they were still trading blows. All AMD stuff is meh to me; maybe the new Radeon will be the first product that impresses me, though even AMD's GPUs are now pulling 380 watts (7900 XTX)!

It used to be the case that AMD's stuff always used more power and ran hotter for less performance - kinda funny, really.

Imagine if AMD still had to make their own chips... then they'd be screwed. I have to laugh when someone calls me an Intel fangirl, though... or an NVIDIA one. Not really - I use whatever performs best, which usually ends up being Intel and NVIDIA! Except when I used an EPYC CPU for a GPU server, literally because at the time it had many PCIe 4.0 lanes, not because it was faster than Intel. It was mostly for GPU rendering, so I wanted full-speed access to all the GPUs, with enough cores for each Windows VM running on top of Linux, plus quite a lot of RAM bandwidth to share around. Frankly, I wish I'd waited for an Intel equivalent so the CPU cores were more useful in terms of speed, but it worked. And now AMD love charging more than Intel do - that chip was DAMN expensive. Since when was the underdog allowed to creep prices up beyond the usual leader?!
AMD making their own chips was the problem: it tied them to a single foundry and diverted too much of their money and brains into manufacturing instead of processor design. Unlike the wealthy Intel, who could afford to finance their way through several years on an uncompetitive process node, AMD sold their foundry to the UAE, where it was renamed GlobalFoundries.

Once that foundry was no longer competitive at the bleeding edge, it became a burden. Just as Intel are now using TSMC for some products, AMD unshackled themselves from their foundry and are all but done with the mandatory contracts with GlobalFoundries now.

I understand why they locked it, especially for the 1st gen of the vcache, I still hope they unlock the new ones, just because it would be fun to tune, overclock, undervolt, etc.

The first gen was a really cool proved they could produce them in volume and that there is a large enough market to make the investment worthwhile. The new version is reported to have a denser TSV implementation that should allow for power and throughput. It's a really interesting time to watch the dominant x86 processor manufacturers take different approaches to the consumer market.
We can hope, but I have enough understanding of silicon design to know that the jump to a new process node and a denser TSV is unlikely to help much, if at all.
I'm guessing it will still be locked, and that clock speeds will still be slightly lower than on the non-V-Cache variants.

If I'm right, I'm right.
If I'm wrong, I'm happy.
To me, that's better than unfounded optimism, which most often results in disappointment.
 
AMD making their own chips was the problem: it tied them to a single foundry and diverted too much of their money and brains into manufacturing instead of processor design. Unlike the wealthy Intel, who could afford to finance their way through several years on an uncompetitive process node, AMD sold their foundry to the UAE, where it was renamed GlobalFoundries.

Once that foundry was no longer competitive at the bleeding edge, it became a burden. Just as Intel are now using TSMC for some products, AMD unshackled themselves from their foundry and are all but done with the mandatory contracts with GlobalFoundries now.
Getting out of manufacturing wasn't so much "unshackling" as throwing the dead weight overboard. Being able to build your own stuff is an advantage. Sure, now AMD can go to TSMC, Samsung or whoever, but they also have to get in line with everybody else and compete with Apple, Qualcomm and others for fab capacity.
Again, it was the right decision at the time, but if they had the cash to afford it now, I'm pretty sure they'd prefer to have the ability to build at least some of their chips on their own. But like you pointed out, Intel is one of the few players that can afford that; most of the industry does what AMD did.
 