
Intel Core i9-10900

Joined
Feb 26, 2016
Messages
548 (0.18/day)
Location
Texas
System Name O-Clock
Processor Intel Core i9-9900K @ 52x/49x 8c8t
Motherboard ASUS Maximus XI Gene
Cooling EK Quantum Velocity C+A, EK Quantum Vector C+A, CE 280, Monsta 280, GTS 280 all w/ A14 IP67
Memory 2x16GB G.Skill TridentZ @3900 MHz CL16
Video Card(s) EVGA RTX 2080 Ti XC Black
Storage Samsung 983 ZET 960GB, 2x WD SN850X 4TB
Display(s) Asus VG259QM
Case Corsair 900D
Audio Device(s) beyerdynamic DT 990 600Ω, Asus SupremeFX Hi-Fi 5.25", Elgato Wave 3
Power Supply EVGA 1600 T2 w/ A14 IP67
Mouse Logitech G403 Wireless (PMW3366)
Keyboard Monsgeek M5W w/ Cherry MX Silent Black RGBs
Software Windows 10 Pro 64 bit
Benchmark Scores https://hwbot.org/search/submissions/permalink?userId=92615&cpuId=5773
What application are you referring to? Some applications just don't use all cores.
In many cases, the unleashed, overclocked-to-the-max 10900 (the red bar) has a higher clock speed than the K variants chilling at default settings. With Intel's sketchy boost behavior, you will get a lot of clock-speed variation between runs, and that may cause variation in the results.
I don't see anything wrong with the chart.
[Attached: seven benchmark chart images]
Image 1 - Not sure how the 9900K loses to an i5-10500, but okay.
Image 2 - Not really sure how a stock 10900 beats a 10900 with faster RAM or max turbo, but okay.
Image 3 - Not really sure how an i5-10500 beats an i7-8700K, but okay.
Image 4 - Not sure how an i5-10600K loses to an i5-10500 but is faster than an i5-10400F, but okay.
Image 5 - Not sure how an i5-10400F beats an i7-8700K, but okay.
Image 6 - Not sure how an i7-10700 beats an i9-10900K, but okay.
Image 7 - Please explain how an i7-10700/K beats an i9-9900KS.

There are many more inconsistencies; I don't really want to flood the chat.
 
Joined
Feb 21, 2008
Messages
6,862 (1.16/day)
Location
S.E. Virginia
System Name Barb's Domain
Processor i9 10850k 5.1GHz all cores
Motherboard MSI MPG Z490 GAMING EDGE WIFI
Cooling Deep Cool Assassin III
Memory 2*16gig Corsair LPX DDR4 3200
Video Card(s) RTX 4080 FE
Storage 500gb Samsung 980 Pro M2 SSD, 500GB WD Blue SATA SSD, 2TB Seagate Hybrid SSHD
Display(s) Dell - S3222DGM 32" 2k Curved/ASUS VP28UQG 28" 4K (ran at 2k), Sanyo 75" 4k TV
Case SilverStone Fortress FT04
Audio Device(s) Bose Companion II speakers, Corsair - HS70 PRO headphones
Power Supply Corsair RM850x (2021)
Mouse Logitech G502
Keyboard Logitech Orion Spectrum G910
VR HMD Oculus Quest 2
Software Windows 10 Pro 64 bit
Benchmark Scores https://www.3dmark.com/spy/34962882
The heat added to your room isn't based on the temperature it is running at. If the CPU is using 200W, it is adding almost 200W heat to your room, regardless of the temp being 40 or 80 degrees. That's a very common misconception.

Tell that to my 3930K, which heats my room to what I consider uncomfortable temperatures whenever it runs at over 75°C.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,068 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
Please explain
margin of error, especially the games are hard to run 100% the same, almost impossible for some, like AC
different power limits, different turbo profiles, cache sizes, apps not benefiting from more cores
i also remember one of our games runs better with fewer cores, i think it was metro, there's a discussion in the comments of a previous CML review

any idea why people are asking on twitter which is obviously the wrong platform considering they can just ask here?
 
Last edited:
Joined
May 1, 2020
Messages
108 (0.07/day)
[Attached: seven benchmark chart images]
Image 1 - Not sure how the 9900K loses to an i5-10500, but okay.
Image 2 - Not really sure how a stock 10900 beats a 10900 with faster RAM or max turbo, but okay.
Image 3 - Not really sure how an i5-10500 beats an i7-8700K, but okay.
Image 4 - Not sure how an i5-10600K loses to an i5-10500 but is faster than an i5-10400F, but okay.
Image 5 - Not sure how an i5-10400F beats an i7-8700K, but okay.
Image 6 - Not sure how an i7-10700 beats an i9-10900K, but okay.
Image 7 - Please explain how an i7-10700/K beats an i9-9900KS.

There are many more inconsistencies; I don't really want to flood the chat.

Yeah, I appreciate your effort in not flooding the chat. Here are my 2 cents:
  1. Margin of error. In this case, the scores are almost the same. Google Octane 2.0 is not the best application for testing CPU core scaling...
  2. Same as Google Octane 2.0. The difference here is in the tens of milliseconds... Almost identical.
  3. In TensorFlow, I saw in the chart that the 10500 is 2.2% faster than the 8700K. I'm not sure what test @W1zzard is using. There are so many parameters that may affect the result.
  4. Margin of error. The results should be read as identical.
  5. The DigiCortex compute plugin uses the SSE, AVX / AVX2, and AVX-512 instruction sets. Depending on cooling, boost frequency, and RAM setup (the x86 Compute Plugin is NUMA aware), you will see a 10th-gen i5 with a better memory setup going ever so slightly faster than the older 8700K.
  6. Not all games can eat up all the cores. Very few fully utilize all the cores of the 10900K, hence the results. Games are NOT representative of CPU multi-core performance (in my own point of view); they are there for your entertainment. Rendering videos or 3D scenes, or simulating physics / neural networks, scales properly only with the right core configuration (intra_/inter_op_parallelism_threads, launching simultaneous processes on multiple NUMA nodes, binding OpenMP threads to physical processing units, setting the maximum number of threads for OpenMP parallel regions, ...); see the config sketch just below.
  7. Margin of error. AC: Odyssey results will vary a lot. Take it with a grain of salt. It is a relative result, not an absolute one.
When you look at a benchmark, think of it this way: does CPU X run this application better than CPU Y? A benchmark gives you a representation of how CPUs perform within the confines of that application. The rules are set by the developers, and the CPUs play by those rules, which sometimes favor the in-theory-slower CPU. I'm running Photoshop, and it just loves high-frequency cores, so a server Xeon CPU that costs thousands of dollars will bite the dust against a cheap 9900KS.
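Point 6 above mentions thread-pool and NUMA settings. Purely as an illustration (not something from the review, and assuming TensorFlow 2.x with an OpenMP-backed math library is installed), here is a minimal sketch of what that kind of core configuration looks like; the thread counts are placeholders:

```python
# Minimal sketch of the core/thread configuration point 6 refers to.
# Assumes TensorFlow 2.x and an OpenMP-backed math library (e.g. MKL).
# The "8" and "2" below are placeholders, not tuned values.
import os

# Cap and pin the OpenMP threads before any compute library is imported
os.environ["OMP_NUM_THREADS"] = "8"    # max threads for OpenMP parallel regions
os.environ["OMP_PROC_BIND"] = "close"  # bind threads instead of letting them migrate
os.environ["OMP_PLACES"] = "cores"     # one thread per physical core

import tensorflow as tf  # must come after the environment variables are set

# Threads used inside a single op (matmul, conv, ...) vs. across independent ops
tf.config.threading.set_intra_op_parallelism_threads(8)
tf.config.threading.set_inter_op_parallelism_threads(2)

# On a multi-socket machine you would additionally launch one process per NUMA
# node, e.g.:  numactl --cpunodebind=0 --membind=0 python train.py
```

Whether those exact knobs apply depends on the application, but they are the kind of settings that separate a properly scaled multi-core workload from a game that merely happens to touch ten cores.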

margin of error, especially the games are hard to run 100% the same, almost impossible for some, like AC
different power limits, different turbo profiles, cache sizes, apps not benefiting from more cores
i also remember one of our games runs better with fewer cores, i think it was metro, there's a discussion in the comments of a previous CML review

any idea why people are asking on twitter which is obviously the wrong platform considering they can just ask here?
Metro Exodus loves 6-8 high-speed cores and scales poorly with 12-16 cores...
And someone is trying to promote their Twitter using this forum, I guess.
 
Last edited:
Joined
Feb 26, 2016
Messages
548 (0.18/day)
Location
Texas
System Name O-Clock
Processor Intel Core i9-9900K @ 52x/49x 8c8t
Motherboard ASUS Maximus XI Gene
Cooling EK Quantum Velocity C+A, EK Quantum Vector C+A, CE 280, Monsta 280, GTS 280 all w/ A14 IP67
Memory 2x16GB G.Skill TridentZ @3900 MHz CL16
Video Card(s) EVGA RTX 2080 Ti XC Black
Storage Samsung 983 ZET 960GB, 2x WD SN850X 4TB
Display(s) Asus VG259QM
Case Corsair 900D
Audio Device(s) beyerdynamic DT 990 600Ω, Asus SupremeFX Hi-Fi 5.25", Elgato Wave 3
Power Supply EVGA 1600 T2 w/ A14 IP67
Mouse Logitech G403 Wireless (PMW3366)
Keyboard Monsgeek M5W w/ Cherry MX Silent Black RGBs
Software Windows 10 Pro 64 bit
Benchmark Scores https://hwbot.org/search/submissions/permalink?userId=92615&cpuId=5773
margin of error, especially the games are hard to run 100% the same, almost impossible for some, like AC
different power limits, different turbo profiles, cache sizes, apps not benefiting from more cores
i also remember one of our games runs better with fewer cores, i think it was metro, there's a discussion in the comments of a previous CML review


any idea why people are asking on twitter which is obviously the wrong platform considering they can just ask here?
I was replying to thoughts he put on Twitter; he quoted your article, and I initially replied to his tweet with my thoughts.

The heat added to your room isn't based on the temperature it is running at. If the CPU is using 200W, it is adding almost 200W heat to your room, regardless of the temp being 40 or 80 degrees. That's a very common misconception.
Not entirely true. At higher temperatures, the processor leaks more power, so yes, the temperature of the processor has a small (but noticeable) impact on power consumption. And I am not just referring to CPUs; this also applies to GPUs, VRMs, really anything with logic.

Yeah, I appreciate your effort in not flooding the chat. Here are my 2 cents:
  1. Margin of error. In this case, the scores are almost the same. Google Octane 2.0 is not the best application for testing CPU core scaling...
  2. Same as Google Octane 2.0. The difference here is in the tens of milliseconds... Almost identical.
  3. In TensorFlow, I saw in the chart that the 10500 is 2.2% faster than the 8700K. I'm not sure what test @W1zzard is using. There are so many parameters that may affect the result.
  4. Margin of error. The results should be read as identical.
  5. The DigiCortex compute plugin uses the SSE, AVX / AVX2, and AVX-512 instruction sets. Depending on cooling, boost frequency, and RAM setup (the x86 Compute Plugin is NUMA aware), you will see a 10th-gen i5 with a better memory setup going ever so slightly faster than the older 8700K.
  6. Not all games can eat up all the cores. Very few fully utilize all the cores of the 10900K, hence the results. Games are NOT representative of CPU multi-core performance (in my own point of view); they are there for your entertainment. Rendering videos or 3D scenes, or simulating physics / neural networks, scales properly only with the right core configuration (intra_/inter_op_parallelism_threads, launching simultaneous processes on multiple NUMA nodes, binding OpenMP threads to physical processing units, setting the maximum number of threads for OpenMP parallel regions, ...).
  7. Margin of error. AC: Odyssey results will vary a lot. Take it with a grain of salt. It is a relative result, not an absolute one.
When you look at a benchmark, think of it this way: does CPU X run this application better than CPU Y? A benchmark gives you a representation of how CPUs perform within the confines of that application. The rules are set by the developers, and the CPUs play by those rules, which sometimes favor the in-theory-slower CPU. I'm running Photoshop, and it just loves high-frequency cores, so a server Xeon CPU that costs thousands of dollars will bite the dust against a cheap 9900KS.


Metro Exodus loves 6-8 high-speed cores and scales poorly with 12-16 cores...
And someone is trying to promote their Twitter using this forum, I guess.
In that case, I think their testing setup is flawed. I highly doubt they locked the GPU to a static frequency, because GPU boost can alter results and is not always consistent between runs. That's number 1. Number 2, they probably did not lock the fan speeds to a set value and had them running on auto. The more variables you introduce, the more variance there can be in the results. That's why you need to lock the frequency of the graphics card so that it doesn't fluctuate, and locking the fan speeds means there is one less variable that can creep in. I understand it is very difficult to completely control the temperature of CPUs (it would be extremely difficult to keep the temperatures static), but keeping as many things static instead of adaptive as possible helps.
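To make the "lock the GPU before testing CPUs" suggestion concrete, here is a hedged sketch assuming an NVIDIA card with a driver recent enough that nvidia-smi supports clock locking (Volta/Turing or newer, admin rights required); the 1800 MHz value is a placeholder, and fan control would still be done in the vendor tool:

```python
# Hedged sketch of locking the GPU core clock before a CPU benchmark run,
# assuming an NVIDIA card and a driver where nvidia-smi supports clock
# locking (needs admin/root). Fan speed is left to the vendor tool.
import subprocess

def lock_gpu_clocks(core_mhz: int = 1800) -> None:
    """Pin the GPU core clock so it cannot fluctuate between runs."""
    subprocess.run(["nvidia-smi", "-pm", "1"], check=True)  # persistence mode (Linux)
    subprocess.run(["nvidia-smi", f"--lock-gpu-clocks={core_mhz},{core_mhz}"],
                   check=True)

def unlock_gpu_clocks() -> None:
    """Restore normal boost behaviour after the run."""
    subprocess.run(["nvidia-smi", "--reset-gpu-clocks"], check=True)

if __name__ == "__main__":
    lock_gpu_clocks(1800)  # placeholder: pick a clock the card holds at any temperature
    try:
        pass               # ... run the game / benchmark here ...
    finally:
        unlock_gpu_clocks()
```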

In regards to AVX-512, that isn't available on Comet Lake, only on the HEDT, server, and Ice Lake platforms at the moment. Skylake through Comet Lake use literally the same architecture, hence the zero IPC improvement. If you keep the RAM speed the same and ignore the security fixes, then a 6700K clocked at 4.2 GHz all-core will perform exactly the same as an i3-10300 at 4.2 GHz all-core.

If games don't always eat up all the cores, then a 10900K SHOULDN'T be fully loaded up, which means it should run a tiny bit faster, because it runs higher clocks when it's not utilized all the way.

Turbo ratios of the processors I mentioned (multiplier for 1 through 10 active cores; ? = not published/unknown, - = core not present)
---------------------------------------------
i7-8700K - 47/46/45/44/44/43/-/-/-/-
i9-9900K - 50/50/50/48/48/47/47/47/-/-
i9-9900KS - 50/50/50/50/50/50/50/50/-/-
i5-10400F - 43/?/?/?/?/40/-/-/-/-
i5-10500 - 45/?/?/?/?/42/-/-/-/-
i5-10600K - 48/48/48/47/45/45/-/-/-/-
i7-10700 - 48/?/?/?/?/?/?/46/-/- (with Turbo Boost Max 3.0)
i7-10700K - 51/51/51/48/48/47/47/47/-/- (with Turbo Boost Max 3.0)
i9-10900 - 52/?/?/?/?/?/?/?/?/46 (with Turbo Boost Max 3.0 and Thermal Velocity Boost)
i9-10900K - 53/53/51/?/50/?/?/?/?/49 (with Turbo Boost Max 3.0 and Thermal Velocity Boost)

With those turbo ratios, you can now see why I was skeptical of the results: the older i7s and i9s have higher turbo ratios than even today's 10th-gen i5s (aside from the 10600/K), which means even the older 8700K should still beat the i5-10500. I left out the i7-9700K because it doesn't have HT, and that alone changes things, so I didn't talk about it.
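To make the comparison explicit, a small sketch that turns the table above into clock speeds (ratios copied from the post, '?' entries kept as unknown):

```python
# Turning the turbo table above into clock speeds, to show why the poster
# expected the 8700K to stay ahead of the i5-10500. Ratios are copied from
# the post; '?' entries are kept as None (not published / not known).
TURBO_RATIOS = {
    "i7-8700K":  [47, 46, 45, 44, 44, 43],
    "i5-10500":  [45, None, None, None, None, 42],
    "i5-10600K": [48, 48, 48, 47, 45, 45],
}

BCLK_MHZ = 100  # all of these parts use a 100 MHz base clock

def max_turbo_ghz(cpu, active_cores):
    """Expected max turbo clock with `active_cores` loaded, if published."""
    ratio = TURBO_RATIOS[cpu][active_cores - 1]
    return None if ratio is None else ratio * BCLK_MHZ / 1000

# All six cores loaded: 4.3 GHz on the 8700K vs 4.2 GHz on the 10500, so on
# paper the older i7 should not lose a purely clock-bound, equally-threaded test.
print(max_turbo_ghz("i7-8700K", 6), max_turbo_ghz("i5-10500", 6))
```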
 
Last edited:
Joined
May 1, 2020
Messages
108 (0.07/day)
If you keep RAM speed all the same and ignore the security fixes, then a 6700K clocked at 4.2 GHz all core will perform exactly the same as an i3-10300 at 4.2 GHz all core.
There is a big IF. If we ignore the patches and the RAM speed / timings are the same, then...
This statement is true on its own, but once you introduce it to the real world, it becomes irrelevant.

If games don't always eat up all the cores, then that means a 10900K SHOULDN'T be fully loaded up. Which means it should run a tiny bit faster because it runs higher clocks when it's not utilized all the way.
Not that simple. What you mentioned is perfectly fine in theory. In real life, when the conditions for boosting aren't met, the CPU simply won't boost as high. The ambient, CPU, and case temperatures are not disclosed or recorded during the test, so we have little information as to whether the CPUs always boost to their max.

The 8700K will beat the crap out of the i5-10500 in some applications, not all of them. I had an 8700K myself, and the results are consistent with what W1zzard posted here.
Keep in mind that the results you saw from older CPU benchmarks come from an older set of applications + drivers + Windows updates. Therefore, it's hard to get 100% comparable results.

I think @W1zzard has done the best he could with the benchmark considering the time / money / effort. I like the variables, because they represent what consumers will get when they purchase the xyz components and slap them all together. Do you fix your GPU core clock in real life? I guess not. It's good to isolate the CPU from all the variables and test its capability, but that renders the test irrelevant to real-life situations. That's what manufacturers do in their marketing: in a very well-controlled situation the CPU can boost up to xx GHz, but in real life you rarely see the advertised number. That's what happens when you isolate too many variables.
 
Joined
Nov 4, 2019
Messages
234 (0.14/day)
Tell that to my 3930K, which heats my room to what I consider uncomfortable temperatures whenever it runs at over 75°C.

Sigh. Your CPU doesn't draw the same amount of power at all times. My 10900 usually draws around 35 W doing light work; when it is running hotter, that is usually when it is drawing closer to 200 W. The temperature has almost zero effect on how much heating is happening in your room. I could attach a giant heatsink to it, and when it is running at 40 degrees, if it is drawing 200 W, that is how much it is heating the room.

Almost ALL CPU power is converted to heat; there are no moving parts like in a car engine. Sure, it changes with temperature, but the effect is almost zero, so the other guy is also just being pedantic without being right.
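As a rough illustration of the watts-not-temperature point, here is a back-of-the-envelope sketch; the room size and the no-heat-loss assumption are mine, not from this thread:

```python
# Back-of-the-envelope: how much a given power draw warms a room, independent
# of the die temperature. Assumptions (mine): a sealed 4 x 4 x 2.5 m room,
# no heat loss, all package power ends up in the air.
AIR_DENSITY = 1.2           # kg/m^3 at roughly 20 degC
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)

def room_temp_rise(cpu_watts, hours, room_volume_m3=4 * 4 * 2.5):
    """Ideal temperature rise (kelvin) of the room air after `hours` of load."""
    air_mass = room_volume_m3 * AIR_DENSITY         # kg of air being heated
    energy = cpu_watts * hours * 3600.0             # joules dissipated
    return energy / (air_mass * AIR_SPECIFIC_HEAT)  # dT = Q / (m * c)

# A 200 W load warms the air the same amount whether the die sits at 40 or
# 80 degC; only the wattage matters.
print(f"{room_temp_rise(200, 1):.1f} K after 1 h at 200 W")  # ~14.9 K (ideal case)
print(f"{room_temp_rise(35, 1):.1f} K after 1 h at 35 W")    # ~2.6 K (light load)
```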
 

navjack27

New Member
Joined
Jan 25, 2020
Messages
7 (0.00/day)
So, we have little information as to whether the CPUs always boost to their max.

So much this. Yeah. With the 10900 you have a 28-second window for boosting. If the CPU decides that the idle time before you started testing doesn't count toward the moving average of the boost window, then you'll get X amount of boosting. Your test can be less than 28 seconds long and run on one core, but that load jumps from core to core, which adds a penalty for cache invalidation and for moving the work around, skewing the boosting yet again. If you attempt to control for everything, you basically won't have a review published. It is impossible.

I went down that rabbit hole during my testing and it didn't really get me anywhere sane. I tested at a 125 W PL1. You can end up with a review that is 50 pages of just asinine iteration.
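A toy model of that 28-second window helps show why the idle period before a run matters. This is an illustrative approximation of the power-limit behaviour (rolling average of package power against PL1, full PL2 only while the average is below PL1), not Intel's exact algorithm, and the PL1/PL2/Tau numbers are typical stock 10900 values rather than measured ones:

```python
# Toy model of the boost-window behaviour described above: the package may
# draw up to PL2 only while a rolling average of its power stays under PL1,
# with Tau around 28 s on the 10900. Illustrative only, not Intel's exact
# algorithm; PL values are typical stock numbers, not measurements.
PL1, PL2, TAU = 65.0, 224.0, 28.0   # watts, watts, seconds

def simulate(load_watts, dt=1.0, idle_seconds=60, idle_watts=10.0):
    """Yield (time, power) for a constant load that follows an idle period."""
    avg, t = PL1, -idle_seconds                    # start with the budget "used up"
    while t < 0:                                   # the idle period drags the average
        avg += (idle_watts - avg) * (dt / TAU)     # down, which is why prior idle skews boost
        t += dt
    for _ in range(120):
        power = min(load_watts, PL2) if avg < PL1 else min(load_watts, PL1)
        avg += (power - avg) * (dt / TAU)
        yield t, power
        t += dt

# With these toy numbers the chip holds ~200 W for roughly ten seconds after a
# 60 s idle, then the average crosses PL1 and power falls back to 65 W.
for t, p in simulate(load_watts=200.0):
    if t % 20 == 0:
        print(f"t={t:4.0f}s  power={p:5.1f} W")
```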
[Attached: RealBench power testing chart]
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,068 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
fix your GPU core clock?
You can't. Well, unless you are willing to live with base clock, which is A LOT lower on Turing.

For GPUs, the biggest negative effect on repeatability is the "cold card" at the start of benchmarks (very convenient for GPU makers, too, if noob reviewers are involved). Within 30 seconds the card will heat up and clocks/performance will drop significantly on many models. That's why all my game tests include some time to heat up the card before measuring FPS.
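In code form, the warm-up idea is simply to discard frames until the card has reached steady-state clocks; this is a generic sketch, not TechPowerUp's actual harness, and `render_frame` plus the 40-second warm-up are placeholders:

```python
# Generic sketch of "heat the card up before counting frames". `render_frame`
# stands in for whatever drives the benchmark; the 40 s warm-up is a guess at
# how long clocks take to settle, not a measured figure.
import time

WARMUP_SECONDS = 40
MEASURE_SECONDS = 60

def benchmark(render_frame):
    start = time.perf_counter()
    while time.perf_counter() - start < WARMUP_SECONDS:
        render_frame()                    # frames rendered here are thrown away
    frames, t0 = 0, time.perf_counter()
    while time.perf_counter() - t0 < MEASURE_SECONDS:
        render_frame()
        frames += 1
    return frames / MEASURE_SECONDS       # average FPS of the hot card only
```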
 
Joined
Mar 7, 2010
Messages
956 (0.18/day)
Location
Michigan
System Name Daves
Processor AMD Ryzen 3900x
Motherboard AsRock X570 Taichi
Cooling Enermax LIQMAX III 360
Memory 32 GiG Team Group B Die 3600
Video Card(s) Powercolor 5700 xt Red Devil
Storage Crucial MX 500 SSD and Intel P660 NVME 2TB for games
Display(s) Acer 144htz 27in. 2560x1440
Case Phanteks P600S
Audio Device(s) N/A
Power Supply Corsair RM 750
Mouse EVGA
Keyboard Corsair Strafe
Software Windows 10 Pro
That has been a problem, from a consumer enjoyment point of view. My new 10th-gen CPU performs great, it just feels old right after I bought it :) Emotionally not exciting :)
I know if I had forced myself back to Intel for those extra frames in games, I just know I would not be happy. I do it with cars, trucks, and anything tech, and it has cost me dearly in the past for not having the kahunas to say NO and be happy with what I've got. But with Ryzen, I don't have that problem... yet :)
 
Joined
Feb 26, 2016
Messages
548 (0.18/day)
Location
Texas
System Name O-Clock
Processor Intel Core i9-9900K @ 52x/49x 8c8t
Motherboard ASUS Maximus XI Gene
Cooling EK Quantum Velocity C+A, EK Quantum Vector C+A, CE 280, Monsta 280, GTS 280 all w/ A14 IP67
Memory 2x16GB G.Skill TridentZ @3900 MHz CL16
Video Card(s) EVGA RTX 2080 Ti XC Black
Storage Samsung 983 ZET 960GB, 2x WD SN850X 4TB
Display(s) Asus VG259QM
Case Corsair 900D
Audio Device(s) beyerdynamic DT 990 600Ω, Asus SupremeFX Hi-Fi 5.25", Elgato Wave 3
Power Supply EVGA 1600 T2 w/ A14 IP67
Mouse Logitech G403 Wireless (PMW3366)
Keyboard Monsgeek M5W w/ Cherry MX Silent Black RGBs
Software Windows 10 Pro 64 bit
Benchmark Scores https://hwbot.org/search/submissions/permalink?userId=92615&cpuId=5773
There is a big IF. If we ignore the patches and the RAM speed / timing is the same then....
This statement is true on its own but when you introduce it to the real world, it becomes irrelevant.


Not that simple. What you mentioned is perfectly fine in theory. In real life, when the conditions for boosting aren't met, the CPU simply won't boost just as high. The ambient temperature, CPU, case temperatures are not disclosed or recorded during the test. So, we have little information to whether the CPU always boost to their max.

The 8700K will beat the crap out of the i5 10500 in some applications. Not all of them. I had a 8700K myself and the results are consistent with what W1zard post here.
Keep in mind that the results you saw from older CPUs benchmark are from older set of applications + drivers + windows updates. Therefore, it's hard to get a 100% accurate results.

I think @W1zzard has done the best he could with the benchmark consider the time / money / effort. I like the variables because that represents what consumers will get when they purchase the xyz components and slap them all together. Do you in real life fix your GPU core clock? I guess not. It's good to isolate the CPU from all the variables and test their capability but that renders the test irrelevant in real life situation. That's what manufacturers did in their marketing: In a very well controlled situation, the CPU can boost up to xx ghz, but in real life, you rarely see the advertised number. That's what happen when you isolate too many variables.
Yes, in real life I fix my GPU clock, using Boost Lock in Precision X1. It locks the frequency to the boost clock and allows for easier and simpler overclocking.

As for drivers, I can understand that, but at the same time, if the 8700K was tested on older drivers and Windows versions, shouldn't it actually perform better than on current drivers? The reason I think that is that the security updates are supposed to fix the shortcuts, which in turn costs performance, so the older chips SHOULD perform slightly faster, but idk. Given that it's literally the same architecture under the hood, the 8700K should beat the 10500.


Check the link above to see the differences; not many aside from clock speeds and TDP.

You can't. Well, unless you are willing to live with base clock, which is A LOT lower on Turing.

For GPUs the biggest negative effect to repeatability is "cold card" at the start of benchmarks (very convenient for GPU makers, too, if noob reviewers are involved). Within 30 seconds the card will heat up and clocks/perf will drop significantly on many models. That's why all my game tests include some time to heat up the card before measuring FPS
Have you used EVGA Precision X1? All the Turing cards I have helped people with have locked to boost frequencies in excess of 1800 MHz on the core.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,068 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
As for drivers and Windows versions
Same drivers and Windows version in all tests. Also same application versions

Precision X1
Ah yeah, I remember now. I even looked into how they are doing it, using NVIDIA's "test clocks for stability" API.
Great idea, actually. Maybe for future CPU reviews I could lock the GPU frequency? Nobody will miss a few % in performance, but better repeatability will help with the quality of the results?
 
Joined
Feb 26, 2016
Messages
548 (0.18/day)
Location
Texas
System Name O-Clock
Processor Intel Core i9-9900K @ 52x/49x 8c8t
Motherboard ASUS Maximus XI Gene
Cooling EK Quantum Velocity C+A, EK Quantum Vector C+A, CE 280, Monsta 280, GTS 280 all w/ A14 IP67
Memory 2x16GB G.Skill TridentZ @3900 MHz CL16
Video Card(s) EVGA RTX 2080 Ti XC Black
Storage Samsung 983 ZET 960GB, 2x WD SN850X 4TB
Display(s) Asus VG259QM
Case Corsair 900D
Audio Device(s) beyerdynamic DT 990 600Ω, Asus SupremeFX Hi-Fi 5.25", Elgato Wave 3
Power Supply EVGA 1600 T2 w/ A14 IP67
Mouse Logitech G403 Wireless (PMW3366)
Keyboard Monsgeek M5W w/ Cherry MX Silent Black RGBs
Software Windows 10 Pro 64 bit
Benchmark Scores https://hwbot.org/search/submissions/permalink?userId=92615&cpuId=5773
Sure it changes with temperature, but the effect is almost zero, so the other guy is also just being pedantic without being right.
Define "almost zero".

Just tested with my GPU fans at 0% and ran 5x GPU-Z "PCIe tests" to load up the GPU completely (around 94%). GPU Power in HWiNFO refers to Board Power Draw in GPU-Z, and GPU Core (NVVDD) Input Power (sum) [the first one] refers to GPU Chip Power Draw in GPU-Z, as the numbers were exactly the same (within rounding, of course). At around 80°C, the BPD was ~110 W average while the CPD was ~73 W average. I then ran the GPU fans at 100% and paused all 5 tests to quickly cool down to 40°C, then set the fans back down to 0%, resumed all 5 tests, and let it heat up to 50°C to read again. At around 50°C, the BPD was ~100 W average while the CPD was ~62 W average. Yes, leakage is present, and it isn't zero. The change from 50 to 80°C is a ~10% increase in BPD and a ~17.7% increase in CPD. That is nowhere near zero. While it may have a *small* impact (as I previously mentioned), temperature does impact the actual numbers, and it is noticeable. I could then proceed to prove how this correlates with the importance of cooling quality, but that is not the point of this test; the point was to show how leakage affects power consumption, which in turn can and will affect the actual wattage of heat dissipated.
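Redoing that arithmetic explicitly (numbers taken from the averages reported above):

```python
# Redoing the arithmetic from the test above: board power draw (BPD) and chip
# power draw (CPD) at ~50 degC vs ~80 degC, same locked clocks and load.
readings = {
    "BPD": {"50C": 100.0, "80C": 110.0},  # watts, averages reported above
    "CPD": {"50C": 62.0,  "80C": 73.0},
}

for name, r in readings.items():
    increase = (r["80C"] - r["50C"]) / r["50C"] * 100
    print(f"{name}: {r['50C']:.0f} W -> {r['80C']:.0f} W  (+{increase:.1f}%)")
# Prints +10.0% for BPD and +17.7% for CPD: the extra watts are leakage, and
# they do end up as extra heat in the room.
```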

The graphics card I just tested is a GTX 980 SC ACX 2.0 by EVGA, and it is in an eGPU box (connected to my laptop via Thunderbolt 3, at PCIe 3.0 x4). I believe I repasted this GPU with IC Diamond a year ago, and I used Boost Lock for this. This is actually [nearly] ideal for testing leakage, as there are not as many variables compared to a desktop, such as a radiator in the front cooling the CPU, so there are minimal variable factors involved. I already know the stable points for the overclock: 1425 MHz (from 1367) on the core and 8000 MHz (from 7000) on the memory. While under load, the GPU maintained 1425 MHz core and 8 GHz memory throughout, up until around 75-77°C, where the core clock dropped to 1412 MHz and stayed that way until temperatures were brought back down. There are two 120 mm fans connected: one is the standard internal one, and the second is a fan I added to boost cooling back when I had a Tesla, but I leave it in anyway since it does help lower temperatures by about 3-7°C. Both 120 mm fans inside the eGPU box run at a static speed, no variance. The internal PSU has its fan facing the graphics card, so it does help exhaust some heat, but not a lot. That is the only unconstrained variable I can think of; however, the PSU fan never ramped up during the test, so I consider it effectively constant for this test.

EDIT - I moved the sentences into the correct order; I was typing a lot of different things and they got out of order, my apologies.
 
Last edited:

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,068 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
GPU Z "PCIe tests" to load up the GPU completely
The GPU-Z PCIe test is not a good choice for that; its load profile is relatively low. Better to use any random game with no FPS cap, in windowed mode. I like to use Unigine Heaven for that because it loads reasonably fast nowadays, and I can pause the movement, so the load is fixed. Furmark is a bad choice because GPUs/drivers will detect it and clock down way too much.

NVIDIA drops 13 MHz at fixed temperature intervals, depending on the architecture
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,068 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
Joined
May 1, 2020
Messages
108 (0.07/day)
As for drivers, I can understand that, but at the same time, if the 8700K was tested on older drivers and Windows versions, shouldn't it actually perform better than on current drivers? The reason I think that is that the security updates are supposed to fix the shortcuts, which in turn costs performance, so the older chips SHOULD perform slightly faster, but idk. Given that it's literally the same architecture under the hood, the 8700K should beat the 10500.
New GPU drivers / Windows updates may affect performance and give the 10th gen some advantages compared to the 8700K in gaming. The net gain from GPU drivers and other factors may contribute to higher performance on 10th gen. (I'm just pointing out some of the many possibilities.)

EVGA Precision X1
Nice. I will give it a try. I think @W1zzard should lock the frequency to a stable boost clock.
 
Joined
Nov 19, 2019
Messages
103 (0.06/day)
[Attached: seven benchmark chart images]
Image 1 - Not sure how the 9900K loses to an i5-10500, but okay.
Image 2 - Not really sure how a stock 10900 beats a 10900 with faster RAM or max turbo, but okay.
Image 3 - Not really sure how an i5-10500 beats an i7-8700K, but okay.
Image 4 - Not sure how an i5-10600K loses to an i5-10500 but is faster than an i5-10400F, but okay.
Image 5 - Not sure how an i5-10400F beats an i7-8700K, but okay.
Image 6 - Not sure how an i7-10700 beats an i9-10900K, but okay.
Image 7 - Please explain how an i7-10700/K beats an i9-9900KS.

There are many more inconsistencies; I don't really want to flood the chat.

Most of these look like margin of error to me. The results are essentially the same, implying that the CPU is not the bottleneck.

One or two look to have a slightly bigger difference. It's possible that those tests are simply less repeatable, or maybe helped by newer boosting algorithms?
 
Joined
Feb 26, 2016
Messages
548 (0.18/day)
Location
Texas
System Name O-Clock
Processor Intel Core i9-9900K @ 52x/49x 8c8t
Motherboard ASUS Maximus XI Gene
Cooling EK Quantum Velocity C+A, EK Quantum Vector C+A, CE 280, Monsta 280, GTS 280 all w/ A14 IP67
Memory 2x16GB G.Skill TridentZ @3900 MHz CL16
Video Card(s) EVGA RTX 2080 Ti XC Black
Storage Samsung 983 ZET 960GB, 2x WD SN850X 4TB
Display(s) Asus VG259QM
Case Corsair 900D
Audio Device(s) beyerdynamic DT 990 600Ω, Asus SupremeFX Hi-Fi 5.25", Elgato Wave 3
Power Supply EVGA 1600 T2 w/ A14 IP67
Mouse Logitech G403 Wireless (PMW3366)
Keyboard Monsgeek M5W w/ Cherry MX Silent Black RGBs
Software Windows 10 Pro 64 bit
Benchmark Scores https://hwbot.org/search/submissions/permalink?userId=92615&cpuId=5773
The GPU-Z PCIe test is not a good choice for that; its load profile is relatively low. Better to use any random game with no FPS cap, in windowed mode. I like to use Unigine Heaven for that because it loads reasonably fast nowadays, and I can pause the movement, so the load is fixed. Furmark is a bad choice because GPUs/drivers will detect it and clock down way too much.
For stability testing it is not a good choice; I just wanted to show how leakage works at a certain temperature, and honestly any test that provides a consistent load is fine for that. If anything, it would be more noticeable at higher wattages.

Most of these look like margin of error to me. The results are essentially the same, implying that the CPU is not the bottleneck.

One or two look to have a slightly bigger difference. It's possible that those tests are simply less repeatable, or maybe helped by newer boosting algorithms?
Take the i7-8700K vs the i5-10500, for example: it's literally the same Turbo Boost 2.0, but the 8700K has higher turbo ratios. Sure, TDP might have something to do with it, but if that's the case, why not test the 10900K with a 65 W TDP setting?

The reason I am going after this is that it basically implies the i9-10900 is FASTER than the i9-10900K, which logically would not be true. I can see those numbers and I can see the percentage differences; however, someone who just wants to see whether CPU X is better than CPU Y doesn't really care how much faster it is, they only care whether it's faster. Yes, there are people like that, and one way to prevent that scenario is keeping everything else static (and the GPU clocks weren't static). The reason that is important is that if you run your GPU at a fixed frequency, the variations are much smaller. That's why I push for static GPU frequencies *WHEN COMPARING CPUs*, because that's how you should test CPUs in games: with everything else exactly the same. If you keep all other things equal and then allow the CPUs to boost on their own, that's fine. I have done some testing myself (not with the same titles), and I can guarantee you those small variations that give a certain CPU an edge come from the GPU boosting ever so slightly higher with one CPU than with the other. Yes, you can call it margin of error, but you can't really call it margin of error when the GPU frequency fluctuates. As W1zzard mentioned, if you have fewer variables, the tests are more repeatable.

TL;DR: just have fewer variables, that's all.

Also, please don't misunderstand: I greatly appreciate the time these reviewers put in, as I know this stuff takes weeks to do, and I am not trying to discredit them at all. I am just trying to help them out, that's all.
 
Last edited:
Joined
Feb 21, 2008
Messages
6,862 (1.16/day)
Location
S.E. Virginia
System Name Barb's Domain
Processor i9 10850k 5.1GHz all cores
Motherboard MSI MPG Z490 GAMING EDGE WIFI
Cooling Deep Cool Assassin III
Memory 2*16gig Corsair LPX DDR4 3200
Video Card(s) RTX 4080 FE
Storage 500gb Samsung 980 Pro M2 SSD, 500GB WD Blue SATA SSD, 2TB Seagate Hybrid SSHD
Display(s) Dell - S3222DGM 32" 2k Curved/ASUS VP28UQG 28" 4K (ran at 2k), Sanyo 75" 4k TV
Case SilverStone Fortress FT04
Audio Device(s) Bose Companion II speakers, Corsair - HS70 PRO headphones
Power Supply Corsair RM850x (2021)
Mouse Logitech G502
Keyboard Logitech Orion Spectrum G910
VR HMD Oculus Quest 2
Software Windows 10 Pro 64 bit
Benchmark Scores https://www.3dmark.com/spy/34962882
Sigh. Your CPU doesn't draw the same amount of power at all times. My 10900 usually draws around 35 W doing light work; when it is running hotter, that is usually when it is drawing closer to 200 W. The temperature has almost zero effect on how much heating is happening in your room. I could attach a giant heatsink to it, and when it is running at 40 degrees, if it is drawing 200 W, that is how much it is heating the room.

Almost ALL CPU power is converted to heat; there are no moving parts like in a car engine. Sure, it changes with temperature, but the effect is almost zero, so the other guy is also just being pedantic without being right.

Actually, considering it runs at 100% full load running WCG, it pretty much does use the same amount of power at all times when it's booted up, which is exactly why I asked about a cooling solution that can keep the 10900 cool (<75°C) running WCG 24/7. Whether it's because of the temperature it's running at or how much power it is using, I know that if my 3930K is running at over 75°C, it noticeably increases the temperature in my room. Higher temperatures (>75°C), in my case, have two causes: one is when I increase my OC, which is to be expected, and two is when I need to clean the air filters in my case to maintain airflow. Both cause increased temperatures in my room, but only one involves increased power being drawn.
 
Joined
Nov 19, 2019
Messages
103 (0.06/day)
For stability testing it is not a good choice; I just wanted to show how leakage works at a certain temperature, and honestly any test that provides a consistent load is fine for that. If anything, it would be more noticeable at higher wattages.


Take the i7-8700K vs the i5-10500, for example: it's literally the same Turbo Boost 2.0, but the 8700K has higher turbo ratios. Sure, TDP might have something to do with it, but if that's the case, why not test the 10900K with a 65 W TDP setting?

The reason I am going after this is that it basically implies the i9-10900 is FASTER than the i9-10900K, which logically would not be true. I can see those numbers and I can see the percentage differences; however, someone who just wants to see whether CPU X is better than CPU Y doesn't really care how much faster it is, they only care whether it's faster. Yes, there are people like that, and one way to prevent that scenario is keeping everything else static (and the GPU clocks weren't static). The reason that is important is that if you run your GPU at a fixed frequency, the variations are much smaller. That's why I push for static GPU frequencies *WHEN COMPARING CPUs*, because that's how you should test CPUs in games: with everything else exactly the same. If you keep all other things equal and then allow the CPUs to boost on their own, that's fine. I have done some testing myself (not with the same titles), and I can guarantee you those small variations that give a certain CPU an edge come from the GPU boosting ever so slightly higher with one CPU than with the other. Yes, you can call it margin of error, but you can't really call it margin of error when the GPU frequency fluctuates. As W1zzard mentioned, if you have fewer variables, the tests are more repeatable.

TL;DR: just have fewer variables, that's all.

Also, please don't misunderstand: I greatly appreciate the time these reviewers put in, as I know this stuff takes weeks to do, and I am not trying to discredit them at all. I am just trying to help them out, that's all.

OK, obviously anything that can be done to reduce the error and improve the repeatability of the results will be helpful. At some point, though, people should realize that a difference of 1 or 2% simply won't be noticeable in the real world, and for practical purposes you can consider any two results within that margin as equal (either because the CPUs really are that close in performance, or because the bottleneck is elsewhere in the system).

Actually, considering it runs at 100% full load running WCG, it pretty much does use the same amount of power at all times when it's booted up, which is exactly why I asked about a cooling solution that can keep the 10900 cool (<75°C) running WCG 24/7. Whether it's because of the temperature it's running at or how much power it is using, I know that if my 3930K is running at over 75°C, it noticeably increases the temperature in my room. Higher temperatures (>75°C), in my case, have two causes: one is when I increase my OC, which is to be expected, and two is when I need to clean the air filters in my case to maintain airflow. Both cause increased temperatures in my room, but only one involves increased power being drawn.

The point here is that if you want to know how much a 10900 will heat up your room compared to your 3930K, you need to compare how much power each CPU uses.
Let's say your 3930K has a water cooler that keeps it at 75 degrees while drawing 300 W, and a 10900 system has an air cooler that keeps it at 80 degrees while drawing 65 W.
In both cases, the power drawn leaves the back of the case as heat into the room. So, in this example, the 3930K will heat up the room much faster despite being at a slightly lower temperature.

Now, if you have the same system and cooler, then the CPU temperature will be roughly proportional to the amount of power it is using. So, when your 3930K is at 75 degrees it might be using 300 W, but when it is at 50 degrees it will be using much less than that (let's say 75 W). This is why your room heats up more when your CPU is at 75 degrees than at 50 degrees.
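Putting rough numbers on that example (the 300 W / 65 W figures are the hypothetical ones from the post above):

```python
# Rough numbers for the example above: heat added to the room tracks watts
# drawn, not the die temperature the cooler happens to hold.
systems = [
    # (name, die temp degC, package power W); values from the example above
    ("3930K under water", 75, 300),
    ("10900 on air",      80,  65),
]

for name, temp, watts in systems:
    kwh_per_day = watts * 24 / 1000  # energy dumped into the room per day
    print(f"{name}: {temp} degC die, {watts} W -> {kwh_per_day:.1f} kWh/day of room heat")
# The hotter-running 80 degC chip adds ~1.6 kWh/day; the 75 degC chip adds ~7.2 kWh/day.
```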
 
Joined
Jan 6, 2013
Messages
349 (0.08/day)
Running 4133 CL16-16-16-31 on an i5-10500, no problem.


Yes, it also throttles like crazy.
Max turbo is what matters here, with PLs removed.
I don't think anyone buying an i9 or R9 should really consider power efficiency on 10/12 cores. What matters here is performance, and it's really good. The 10900 beats the 3900XT in CPU tests, not to mention gaming.


A basic Z490 mobo, except for a couple of ASRock ones.
Same price as X570, probably.
If you want performance, you get the 10900K, not the 10900. So the 10900 is a very good mix of performance and efficiency.
I still stand by what I said: the 10900 shows that without those massive OCs that the K parts need in order to match the Zen 2 parts with their higher IPC, the 14 nm process is very efficient at frequencies similar to the AMD parts (the 10900 runs at ~4 GHz on average on all cores).
 
Last edited:
Joined
Aug 6, 2017
Messages
7,412 (3.00/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
If you want performance, you get the 10900K, not the 10900. So the 10900 is a very good mix of performance and efficiency.
I still stand by what I said: the 10900 shows that without those massive OCs that the K parts need in order to match the Zen 2 parts with their higher IPC, the 14 nm process is very efficient at frequencies similar to the AMD parts (the 10900 runs at ~4 GHz on average on all cores).
No, just no.
The 10900 is PL-restricted; once you remove the limits, it's right up there with the 10900K. It also beats not only the 3900X, but the 3900XT too.

And please tell me you're joking about "massive overclocks": the 10900K can achieve a 200 MHz OC over the stock 4.9 GHz.
 

tecky

New Member
Joined
Aug 27, 2020
Messages
1 (0.00/day)
That is awesome work - thanks for that! You are the only one so far more or less testing the non-K version! Keep up the good work!

I need a recommendation: would you go for the 3900X or the 10900 non-K version?
At the moment I have a Hackintosh with an i5-10400 and a 3700X in my Windows rig. With both rigs I do video editing, like Premiere and AE, and Final Cut.
With the Win rig I do gaming as well - AAA titles, on a 2060S.
I am considering either going up to a Ryzen 3900X and keeping the i5 Hackintosh on the side, or selling the majority of my current hardware
and building just one rig with the i9-10900. Price-wise, after selling and buying, it's equal to what I currently have. Of course, if I bought the 3900X instead and kept the Hackintosh, it would be approx. 120 dollars more.

So what would be your recommendation?

New Build would be:
Asus Z490-A, with an OC / TDP unlock to 200-250 W planned.
32GB Crucial Ballistix 2*16
i9 10900 (non k)
Dark Power Pro 11 650W Platin
1xNVME SanDisk 500GB MacOS
1XNVME EVO PLUS 250GB WIN
Be Quiet Dark Rock Pro 4
NXZT 510 Elite
1. Win GPU 2060s @ x8 PciE3 for Win
2. Hackintosh GPU RX580 @ x8Pcie3 for MacOS

Win current build
3700X Stock, maybe 3900x = maybe future option if not the i9
B450 Tomahawk Max
250GB Evo Plus
2TB Crucial MX500
32GB Crucial Ballistix 2*16 @ 3733 MHz
RTX 2060S
Straight Power 11 Gold 550

Hacki Current @OpenCore
i5 10400
NVME SanDiskExtremePro 500GB
RX580
MSI Gaming Plus Z490
32GB Crucial Ballistix 2*16 @ 3200 MHz
BeQuiet 600W Gold

Actually I want to save some bucks and not spend any more. So I guess with the new build I would be good to go until at least 2022, until DDR5 releases and establishes itself.
The Hacki is more of a fun project and not yet too relevant for production purposes, but it runs 100% stable with full support.
My thoughts were just that the i9 is the best I can get for gaming and is also decent for editing, whereas in comparison the 3900X isn't a bad choice either.
And I don't know how bad the efficiency will get with the i9?! But leaving it at stock wouldn't be a great choice either, I guess?!
I can't decide..... please help :D
 
Joined
Dec 24, 2012
Messages
129 (0.03/day)
The 10900F is around $349 @ Amazon. I presume this is identical to the 10900 minus the IGP in terms of performance, power, and general behavior? Thanks.
 