
Intel Core i9-10900K

How on earth is this "competitive on price if you consider the competition"??

$500 is the tray price and it will not retail at this price. $530 if you're lucky. That makes it 25-30% more expensive than a 3900X!

And that's just the CPU! Add on top the mobo differences plus the cost of extra watts... I mean, come on.

This CPU is for 1) Intel fanbois and 2) gamers with absolutely no concern for money. That's it.

Or in my case: 1) not an Intel fangirl, 2) a gamer who has had abysmal luck with AMD in recent months and had to switch to Team Blue out of necessity. I didn't get the 10900K, though; too expensive. I got the 10700K for near MSRP thanks to the wonderful @oxrufiioxo, and the rather snazzy Asus Strix Z490-G Gaming (WiFi) board for a rather reasonable $250. Should be a great combo.

AMD marketing is pushing reviewers hard for that, because it's the only way their cores don't sit idle in games. Do that many people really game and stream at the same time?

While I avoid doing any kind of "multitasking" while gaming, gaming + streaming has become increasingly popular for whatever reason. I find watching somebody record themselves playing a game about as exciting as watching paint dry, but I guess I'm just way crustier than my 33 years would suggest, lol.
 
The purpose of the 10-core's existence is to let them turn most dies, the ones with 2 defective cores, into 8-cores and drive prices down, not to compete with the 3900X.

The 10700K is the best choice, well, the $300 10700F to be precise; not the 10600K, a soon-to-be-bottlenecking CPU, and not the 10-core.

A 10-core sounds as absurd as a 5-core or something.
 
Yes to both.
Thanks for that, W1z.
My mate is thinking about building a rig based on the 10600K but is worried about the temps in an enclosed case. It shouldn't be any worse than similar-tier chips, though.
 
310W peak. Something about comets and craters, funny twist, obligatory lol

Gaming is 395 watts... the 3900X is 385 watts... what are we all missing? There must be 15 posts commenting on the "heaters"... reading comprehension issues?

3900X = 385 watts / 79°C
10900K = 295 watts / 54°C... this "heater" is 25°C cooler

The 3900X vs 9900KF was interesting. Looking at the review pages, the two traded wins, the difference being that the 3900X excelled in brain simulation and things most folks never do. While this tweaks the interests of the nerd inside of me, it does nothing for the enthusiast and engineering business owner... and this is **my build**. This time around, it's almost a complete sweep. But the categories the 10900K picked up over the 9900KF, we never cared about in the first place.

In January the 3900X and 9900KF were the same price... In February it was $415. That had nothing to do with the 10900K or Ryzen 4000, but with the perceived value of the 9900KF vs the 3900X.


The World's Fastest Gaming CPU.
4K gamers: Really? Why should I upgrade my PC? Going by price/performance against a Ryzen 5 1600, I haven't seen a noticeable difference at 4K (the Core i9-10900K is only about 3% faster than the Ryzen 5 1600).

You shouldn't. But when you are gaming at 144 Hz with ULMB, you will know the answer. I'll think about 4K when there's a graphics card powerful enough to drive it at 144 Hz.

One thing that kinda took me off guard ...

"Using a 240 mm AIO I could get 5.2 GHz stable, but with even more voltage, which causes CPU temperatures to reach over 95°C, right at the throttling point—despite watercooling. Definitely not worth it. " That sounds like one went to extraordinary lengths for cooling .... The Scythe Fuma does as well or better than most 240mm coolers. Having $42 worth of cooling isn't exactly a big deal. Not that I'm of the opinion that there significant value to push the CPU that far ... impressive that ya can of course, but I'd prolly stop at 5,0

Here's how I'm looking at this CPU, from my own PERSONAL perspective... So what changed since the 9900K?

Synthetic Benchmarks - AMD/Intel still split wins here; we don't run benchmarks even on an infrequent basis, so we don't care. Irrelevant.

Rendering - Intel has moved up quite a bit, but as we don't do rendering even on an infrequent basis, irrelevant.

Software / Game Development - Intel has again moved up quite a bit, taking leads; but again, as we don't do development even on an infrequent basis, irrelevant.

Web Stuff - Intel has again moved up quite a bit, taking leads; but the differences are so small, still irrelevant.

Machine Learning, Brain Simulation, Physics - This is where "more cores" mattered, and if I ever dip my toes into this kind of thing, maybe I'll look at AMD. Until then, still irrelevant.

Office Suites - The silliest of tests, scripting a bunch of tasks together, each of which would require user input (1-2 seconds each) in between, and these benchmarks finish 1 or 2 tenths of a second apart. The equivalent of racing through Manhattan during rush hour and hitting a light at every corner.

Photo Editing - OK, Intel continuing with most of the wins... but 0.1 seconds? Who cares.

Video Editing - Intel with a 25-second win; OK, now we're all paying attention.

Photogrammetry - We send that work out, but otherwise I'd be impressed with Intel's win here.

Text Recognition - Still something we do frequently, and yet another Intel win; color me interested.

Server / Workstation - Teeny wins for both sides here, but of no interest, as that stuff isn't done here.

Compression - I made my first RAR files of the year two days ago... not something I'm gonna base a CPU selection on, even though Intel finished in 3/4 of the time.

Encryption - We don't do that here, though congrats to Intel for a big win, 60% faster.

Encoding - Intel came within a hair of a sweep here, with a huge win on the audio side... but who cares? Not me; we don't do that stuff.

Gaming (1080p) - OK, an 8% win for Intel is significant... I expected much less. I did expect that to drop by half at 1440p, though, due to GPU bottlenecking having more of an impact at higher resolutions.

Power Consumption - A 10-watt difference at stock when gaming (385 / 395). Would be interesting to compare AMD vs Intel overclocked, if the 3900X could actually overclock.

Temperature - 54°C for Intel vs 79°C for the 3900X at stock... Wow.

Overclocking - Would like to see something like ROG RealBench used here; much more useful and something we can compare against our own builds at home. Would love to see how it stacks up against the 420 + 280 in push/pull I'm using here.

So, with the 3900X out either way, it's between the 9900KF and the 10900K?

1. To handle that "potential" power, mobo makers have to pay attention... that's gonna cost money... cooling is gonna cost money.
2. Too few mobos have been reviewed.
3. We're still on the 1st stepping.
4. What's the F version like?

If a box went down to a catastrophic failure, I'd do a 9900KF build... In 6 months it might be different. But if I'm choosing now, I'm waiting for late fall, when there will be way more options.
 
Do you want to buy my 9900KF? 5.3 GHz on an Apex XI, 1.24 V, LLC8.
 
Hi,
Nice review.
Yeah, I got a 10900K from my local Micro Center; you could have knocked me over with a feather when I saw it for $529.99.
Returned a borked X299 Mark 2 board for $260.00 off the chip, so all in all $313.00 for the chip.
Paired it with the Asus XII Formula, since the Apex was going to take way too long to show up.
Have another set of 3600C16 for now; tearing down my X99 rig this weekend, or as soon as the board shows up.
Should be fun to see what the Optimus blocks can do on it, since they did so well on HEDT :cool:
 
Great review as always, W1zzard. TechPowerUp GPU reviews are the best and have the most useful information of all the reviews on the web; this CPU review, however, isn't quite there!

The only issue I have with this CPU review is that it should have included the overclocking details and test results for the 3900X, as it is the most direct competitor to the 10900K. The specifications for my AMD computer and the single-core and multi-core results for Cinebench R20 are shown below. My 3900X is not a greatly binned chip at all, but it still got some good results!


Test System "Zen 2"
Processor:
AMD Ryzen 3900X Manual OC to 4.475 GHz all cores
With Noctua NH-D15 air cooler​
Motherboard:
Asus X470 Prime Pro with 5406 BIOS​
Memory:
2x 8 GB G.SKILL Trident Z DDR4
DDR4-3000 15-15-15-35 (clocked to DDR4-3200)​
Graphics:
MSI 5700XT Gaming X​
Storage:
2 TB Samsung M.2 SSD​
Power Supply:
Corsair AX-860 860Watt​
Software:
Windows 10 Professional 64-bit
Version 1903 (May 2019 Update)​
Drivers:
AMD 20.4.2 Graphics Driver​
 

Attachments
  • R20 Multi.jpg
  • R20 Single.jpg
AMD marketing is pushing reviewers hard for that, because it's the only way their cores don't sit idle in games. Do that many people really game and stream at the same time?

Twitch Streamers?
 
The total system gaming power consumption ranges from the Core i5-9600K @ 356 W to the Core i9-10900K @ 395 W; that's a spread of only 39 watts across all 14 CPUs.
Methinks this is mostly GPU power, not CPU; the Core i5-9600K (not overclocked), for example, does not draw anything like 356 watts on its own!
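A back-of-the-envelope sketch of that point, using the review's whole-system totals and a purely hypothetical shared baseline for the GPU plus the rest of the system (Python):

    # Sketch: why whole-system gaming power compresses CPU differences.
    # The totals are from the review; the shared baseline is a
    # hypothetical assumption for illustration only.
    totals = {"i5-9600K": 356, "i9-10900K": 395}   # whole-system gaming watts
    baseline = 300   # hypothetical GPU + rest-of-system draw, same for both

    for cpu, watts in totals.items():
        print(f"{cpu}: ~{watts - baseline} W left over for the CPU "
              f"if the shared baseline really is {baseline} W")

    spread = totals["i9-10900K"] - totals["i5-9600K"]
    print(f"spread across CPUs: {spread} W")

Whatever the true baseline is, the CPU-to-CPU delta is only ever that 39 W spread; the rest of the bar is shared system draw.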
 
When talking about performance, most (if not all) reviewers simply refuse to put the cost-benefit analysis in the same sentence. That is to say, if you did put the cost-benefit analysis in the same sentence, the 10900K would look a lot less attractive.
 
That's the thing; it's not possible going by the usage stat.

Usage figures are worthless anyway; they reflect almost nothing. They are calculated based on deltas of the total process time according to a fixed sampling rate, so they don't actually measure anything happening in hardware; in fact, "CPU usage" is a totally misleading term. For instance, there is no way to know whether a core is running mostly integer/float/double/SIMD code or a combination of all, so you could have two workloads generating "100% usage" where one causes the core to consume 5 W and the other 15 W.
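For illustration, here's a minimal sketch of how that delta-based "CPU usage" figure is typically derived on Linux from /proc/stat; note that nothing in it knows what kind of instructions the cores were actually running:

    # Minimal sketch: derive "CPU usage %" from /proc/stat deltas (Linux).
    # It only compares busy vs. idle jiffies between two samples; it cannot
    # tell integer from SIMD work, so two "100% usage" workloads can draw
    # very different power.
    import time

    def read_cpu_times():
        with open("/proc/stat") as f:
            # First line: "cpu  user nice system idle iowait irq softirq ..."
            fields = f.readline().split()[1:]
        values = list(map(int, fields))
        idle = values[3] + values[4]      # idle + iowait jiffies
        total = sum(values)
        return idle, total

    idle1, total1 = read_cpu_times()
    time.sleep(1.0)                       # the fixed sampling interval
    idle2, total2 = read_cpu_times()

    busy_delta = (total2 - total1) - (idle2 - idle1)
    usage = 100.0 * busy_delta / (total2 - total1)
    print(f"CPU usage over the interval: {usage:.1f}%")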
 
@W1zzard, thumbs all the way up for still running SuperPi front and center. Keeping everyone honest. Makes me smile every time I see it. I'm sure "they" are loving it, that you are still using something like that.
 
But on page 20 I don't see it hitting 5.3 GHz even on a 1-thread generic FP load. Isn't it advertised to hit 5.3 GHz on 1-2 thread loads, or am I missing something?
Correct, it will not hit TVB (Thermal Velocity Boost) with that load. It seems you need very lightweight activity for it to activate, like a human clicking around, not calculating something (which puts a single core at 100% for several seconds).
 
I guess you haven't done any benchmarking yourself?
The problem is that as soon as you introduce more variables, there's a huge risk of the testing environment throwing off the normally reproducible numbers. Then we end up in a situation where a lot of people start questioning the benchmark results every time they fall outside the normal 1-2% (or less) variance. Benchmarking for review purposes has to deliver reproducible results across several platforms, so you need to minimise any variables that might affect performance, regardless of what you're testing.
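As a rough illustration of that reproducibility requirement, a small sketch that computes run-to-run variation for repeated runs of a benchmark; the FPS numbers here are made up:

    # Sketch: check run-to-run variance of repeated benchmark runs.
    # The FPS values are hypothetical; a reviewer would want the
    # coefficient of variation well under ~1-2% before publishing.
    from statistics import mean, stdev

    runs = [144.2, 143.8, 144.5, 143.9, 144.1]   # hypothetical repeated runs
    avg = mean(runs)
    cv = 100.0 * stdev(runs) / avg               # coefficient of variation, %

    print(f"mean {avg:.1f} FPS, variation {cv:.2f}%")
    if cv > 1.0:
        print("variance too high -- the environment is throwing off the numbers")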

Yes, a lot of people run at least something like Discord alongside their games, but live streaming? Not sure how many people really do that. That said, most of us probably don't turn off all the background services and whatnot when we game either, so you lose a few percent of performance there too.

I doubt PCIe 3.0 vs PCIe 4.0 makes any difference to idle power, at least not if no PCIe 4.0 devices are being used. It's more likely that Intel is simply better than AMD at the whole idle-power end of things, and has been for quite some time. AMD seems to be starting to catch up on the mobile side, though, so maybe that'll translate over to the next set of desktop CPUs as well.
Intel is better than AMD at idle because Intel's base clocks are generally much lower than AMD's, not to mention fewer cores as well. In this case, 10 cores with a base clock of 3.7 GHz vs 12 cores with a base clock of 3.8 GHz.
 
Thanks for the review. From my point of view, AMD Ryzen is still the winner. :rockout:
 
Intel is better than AMD at idle because Intel's base clocks are generally much lower than AMD's, not to mention fewer cores as well. In this case, 10 cores with a base clock of 3.7 GHz vs 12 cores with a base clock of 3.8 GHz.

These CPUs idle at much lower clocks than the base frequencies.
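Easy to check at home. A quick sketch using psutil (assuming it's installed and reports frequency on your platform) that samples the reported clock at idle and compares it against the 10900K's 3.7 GHz base as an example:

    # Sketch: sample the reported CPU frequency at idle vs. the base clock.
    # Requires: pip install psutil. BASE_MHZ below is the i9-10900K's spec;
    # substitute your own CPU's base frequency.
    import time
    import psutil

    BASE_MHZ = 3700                      # i9-10900K base clock (spec sheet)

    samples = []
    for _ in range(10):
        freq = psutil.cpu_freq()         # reported current/min/max in MHz
        samples.append(freq.current)
        time.sleep(0.5)

    idle_avg = sum(samples) / len(samples)
    print(f"average reported clock at idle: {idle_avg:.0f} MHz "
          f"({idle_avg / BASE_MHZ:.0%} of base)")

On a lightly loaded desktop this typically lands far below base, which is why base clocks say little about idle power.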
 
Comet Lake is a huge NO for every PC builder
Yeah, right :laugh: Typical AMD PR talk: NO Intel CPU is even remotely good. If you have more such ridiculous statements, please keep them to yourself.
The 10900K is not a good value option, but there are many SKUs in the 10th gen that make a very good proposition for the majority of users. And I'm saying that as someone who has absolutely no interest in 10th gen at this point.
The 10400F is a very good budget chip; considering the CPU alone, better than the 3600 for a home/gaming rig. So is the 10600KF for an enthusiast gamer, and the 10700F for, well, frankly everything; it's a 9900 non-K at $350 with boost clocks just shy of the K SKU.
I wouldn't buy them with B550 coming, but still, your point is just laughable.

Attachments: combas.jpg, temp.jpg, watt.jpg, idle1thread.jpg
 
It is the fastest gaming CPU, no comment; I call it the "ego CPU", as it's all they can do for now. The Eskimos will be happy to have it, btw.
 
Temperature - 54°C for Intel vs 79°C for the 3900X at stock... Wow.

Those temps might be a tad misleading. 54°C on this CPU at stock is a fairy tale; you can safely forget about that. It sure as hell never happened on my 8700K, and this one has additional cores and a higher TDP. And definitely not on a Scythe Fuma, lol. Dual-stack air on this sort of CPU easily produces load temps of 70°C and above. But it's not relevant: both are very much in the safe zone, and no throttling happens at stock. That is, they aren't clocking back. Whether they reach advertised boost all the time is another story... and that goes for both camps too.

Performance-wise, my opinion here is that it's all the same anyway. For gaming... anything 8700K and up will do fine; it was like that 2 years ago and hasn't changed. Not even for high-refresh gaming, really. The differences we see here are in single-digit percentages most of the time, and the base level of performance is great. If you do some content production and/or heavy multitasking as a home user, some 8-core version will suit you better.

Beyond that... I think every other metric for a CPU choice comes into play in a big way, more so than performance. The biggest one is obviously price, as the platforms are rather similar now. But Intel's aggressive boost and power limiting is not really something to get excited about; in that sense Ryzen still has the definite edge.

That's 7.5%... the difference between high and ultra, or 2xAA and 4xAA... or pegged at 144 FPS versus not. It's something... and in some cases, a bump in the class of card!

Correction... that 7.5% is for averages, not minimums, which are what it's all about. Nobody cares (read: should care) if games can peak at 240 FPS when the average is 150, but peaks still push averages up. Even with those monster scores, all you really need is a CPU that will hold at least 120, and even then you will drop below it from time to time. It's a CPU. The load will vary and is still limited by the GPU, and most people don't push 2080 Tis. Come next gen they might, but even then, you're in the safe zone with all of these CPUs. None of them will be holding anything back in any notable way.
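One way to picture that "safe zone" argument is a crude bottleneck model: delivered frame rate is roughly the lower of what the CPU and the GPU can each push. All the caps below are hypothetical, just to show why a CPU lead at 1080p shrinks or vanishes at higher resolutions:

    # Crude bottleneck model: delivered FPS ~= min(CPU cap, GPU cap).
    # All numbers are hypothetical, chosen to mirror the thread's point
    # that an 8% CPU lead at 1080p can disappear once the GPU cap drops.
    def delivered_fps(cpu_cap, gpu_cap):
        return min(cpu_cap, gpu_cap)

    cpu_caps = {"10900K": 200, "3900X": 185}            # hypothetical CPU-bound FPS
    gpu_caps = {"1080p": 220, "1440p": 150, "4K": 80}   # hypothetical GPU-bound FPS

    for res, gpu_cap in gpu_caps.items():
        a = delivered_fps(cpu_caps["10900K"], gpu_cap)
        b = delivered_fps(cpu_caps["3900X"], gpu_cap)
        print(f"{res}: 10900K {a} FPS vs 3900X {b} FPS")

With these made-up caps the 10900K leads at 1080p and the two tie at 1440p and 4K, where the GPU is the limit for both.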
 
54°C on this CPU at stock is a fairy tale; you can safely forget about that.
That's what I measured, and yes, my 8700K runs warmer than that too. Both are sitting at PL1 = 125 W in that test, which is the definition of "stock".
 
I'm unimpressed by the numbers between the last two generations. IMHO it's an unnecessary product for the market, only necessary for "the press", to say "hello, we're still here."
Given the good performance of the i5-9600, the trick will be to position the i7/i5 price points to simulate "improved performance" over the previous generation.

This is my last day with an X5690. What is the point of this? Overclocking, for me, isn't worth it anymore: trying to achieve a 5-10% improvement for a 70% power increase. OC is for the fun of OCing, not for a real, palpable benefit anymore, because going from 60 to 64 fps at 4K in a bench, or from 150 to 170 fps at 1080p, is pointless, as is rendering something in 48 seconds instead of 56. And if you're a mad renderer, content creator, or simulator, you already have an sTR4 or 2066 platform.

To compete with AMD they should offer more PCIe lanes, or PCIe 4.0, Thunderbolt on all mobos, 2-3 generations per SB, etc., you know what I mean.

It would be interesting to have an 8-core-limited comparison between the last 4 generations of top-of-the-line desktop CPUs.
 
'No cooler included in the box.'

More needs to be made of this omission; in this case, you'll need a beefy, expensive cooler (a 240 mm AIO is optimal). The 3900X and the upcoming 3900XT include coolers that you don't need to upgrade, so that adds at least $100 to the total price. Then there's the added expense of pricier mobos, only high-end ones for optimal performance.

So it's about $150-200 extra for 3.6% more fps at 1440p (a realistic resolution), if you happen to have a 2080 Ti. No thanks.

EDIT: Actually, the listed price is the tray price. Online, the prices of these are already at least $50 higher. So you're paying $200-250 more for a 10-core instead of a 12-core, and I'm not exaggerating. Terrible deal.
 
The 3900X and the upcoming 3900XT include coolers that you don't need to upgrade. So that adds at least $100 to the total price.
But frankly, they're coolers that you will want to upgrade. They get decimated by $30 coolers; let's not make a big deal out of the Wraith Prism. It looks super dope and is a decent stopgap solution.
Attachments: temp_max.png, halas_load.png
 
AMD marketing is pushing reviewers hard for that, because it's the only way their cores don't sit idle in games. Do that many people really game and stream at the same time?

The problem is that as soon as you introduce more variables, there's a huge risk of the testing environment throwing off the normally reproducible numbers. Then we end up in a situation where a lot of people start questioning the benchmark results every time they fall outside the normal 1-2% (or less) variance. Benchmarking for review purposes has to deliver reproducible results across several platforms, so you need to minimise any variables that might affect performance, regardless of what you're testing.
^ This. The other problem with benchmarking game streaming as a CPU test is that in the real world, the vast, overwhelming majority don't "brute force" software-encode on one CPU on one PC. Serious streamers usually have a second PC dedicated to capture/encoding, for reasons beyond mere CPU offload (e.g., some games just don't play well with video broadcast software and experience weird, non-performance-related stutter; you can ALT-TAB out without fear of breaking something vs. broadcast software capturing Exclusive Fullscreen; a crash on the gaming PC doesn't bring down the stream; better ergonomics having separate broadcast controls on another PC; etc.). It's just more professional all round. Likewise, budget streamers don't sit there saying "I only have a quad-core. Let's inflict stutter on my audience by trying to brute-force CPU encoding." Instead they just enable NVENC / Shadowplay in OBS (fixed-function encoders on the dGPU) and enjoy "good enough" quality for most games (especially on Turing) with <5% FPS impact and hardly any extra CPU load. Others might use an external capture box (a common option for console game streaming too). Real-world streaming CPU usage is clearly going to be vastly different between these methods vs. CPU encoding on the same PC, and the latter is not actually the most popular way of doing it.
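For anyone curious what that fixed-function offload looks like outside OBS, here's a hedged example driving ffmpeg's NVENC encoder from Python. It assumes an NVIDIA GPU, an ffmpeg build with NVENC support, and a hypothetical capture file; OBS's NVENC option does essentially the same thing internally:

    # Sketch: offload an encode to the GPU's fixed-function encoder via
    # ffmpeg's h264_nvenc path, with the CPU-heavy libx264 run shown for
    # comparison. "gameplay.mkv" is a hypothetical capture file.
    import subprocess

    src = "gameplay.mkv"

    # Hardware encode: near-zero CPU load, small FPS impact while gaming.
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "h264_nvenc",
                    "-b:v", "6M", "-c:a", "copy", "out_nvenc.mp4"], check=True)

    # Software encode at the same bitrate: burns CPU cores instead.
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "libx264",
                    "-b:v", "6M", "-c:a", "copy", "out_x264.mp4"], check=True)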

Same goes for "well let's try and benchmark this game with a web browser running with 200x tabs open to try and find something to fill up those cores". Any half decent modern browser should be intelligent enough to both "lazy load" tabs plus force suspend idle background tabs (a highly desirable feature anyway vs hostile background mining scripts plus reduced memory usage). Many browsers now block tracking scripts natively and anyone with half a brain will have been running uBlock Origin since 2014. There are options for telling W10 to not download updates during certain time periods / game sessions. All of a sudden, supposedly FPS crippling levels of background CPU usage drops to barely a couple of percent on a quad-core and becomes a non-issue for gaming with a bit of common sense.

So rather than waste time on inconsistent, non-repeatable "background benchmarking" just to find something to fill up unused cores for the 1% of gamers who stream, I'd personally rather see games benchmarked normally, and here's another vote for including 1% / 0.1% lows.
 