
10700 vs 5600X

Joined
Jan 20, 2019
Messages
1,751 (0.75/day)
Location
London, UK
System Name ❶ Oooh (2024) ❷ Aaaah (2021) ❸ Ahemm (2017)
Processor ❶ 5800X3D ❷ i7-9700K ❸ i7-7700K
Motherboard ❶ X570-F ❷ Z390-E ❸ Z270-E
Cooling ❶ ALFIII 360 ❷ X62 + X72 (GPU mod) ❸ X62
Memory ❶ 32-3600/16 ❷ 32-3200/16 ❸ 16-3200/16
Video Card(s) ❶ 3080 X Trio ❷ 2080TI (AIOmod) ❸ 1080TI
Storage ❶ NVME/SSD/HDD ❷ <SAME ❸ SSD/HDD
Display(s) ❶ 1440/165/IPS ❷ 1440/144/IPS ❸ 1080/144/IPS
Case ❶ BQ Silent 601 ❷ Cors 465X ❸ Frac Mesh C
Audio Device(s) ❶ HyperX C2 ❷ HyperX C2 ❸ Logi G432
Power Supply ❶ HX1200 Plat ❷ RM750X ❸ EVGA 650W G2
Mouse ❶ Logi G Pro ❷ Razer Bas V3 ❸ Logi G502
Keyboard ❶ Logi G915 TKL ❷ Anne P2 ❸ Logi G610
Software ❶ Win 11 ❷ 10 ❸ 10
Benchmark Scores I have wrestled bandwidths, Tussled with voltages, Handcuffed Overclocks, Thrown Gigahertz in Jail

According to the above 5600X review, the non-K series 10700 comes out ahead @ 1080p (in most games). Also beats the 10700K and 10900K (stock).

I'm just trying to wrap my head around how this was made possible. I've always been led to believe the K-series CPUs are "marginally faster" out of the box + the added benefit of overclocking. My Kaby Lake 7700K sees pretty wide single-threaded perf gains over the non-K 7700... "so vat da hell iz goin on with 10th Gen?"

Is the 10700 reference in these reviews based on a premium Z-series board, more expensive lower-latency memory and overclocking (as I understand some OC is possible with non-K series)? Or was it based on stock configurations with a B/H series board and not-so-expensive RAM? Also, what are your thoughts on how the non-K is able to beat Intel's flagship i9 SKU?
 
1604679816167.png


That's probably well within margin of error: 0.6%.
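To put "margin of error" in perspective: run-to-run variance in game benchmarks is often larger than a gap that small. A quick sketch with hypothetical FPS samples (the numbers are made up for illustration):

```python
import statistics

# Hypothetical FPS results from three runs of the same benchmark.
# Run-to-run variance of 1-2% easily swallows a 0.6% gap between CPUs.
runs = [143.1, 141.7, 144.0]

mean = statistics.mean(runs)
spread = (max(runs) - min(runs)) / mean  # relative run-to-run spread

print(f"mean {mean:.1f} FPS, run-to-run spread {spread:.1%}")
```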
 
It's some form of 0.1% difference... you can see that these chips were indeed getting decent results in the tests, so the better one was either capped by something else, or the extra headroom simply didn't help at FHD.

As for overclocking locked SKUs, you're going to have to work on your BCLK, which could lead to anything from USB device unreliability to outright failure. It's not something I'd suggest.
 
The $324 10700 is 3.6% faster in gaming than the 5600X, but is $24 more than the MSRP of the 5600X.

Keep in mind that this will change somewhat at higher resolutions. If you see yourself upgrading to 1440p / 2160p, take a look at those.

The surprise to me was at 1440p where, in gaming, the $174 10400F outperformed every one of the new Ryzen CPUs, including the $550 5900X.

relative-performance-games-2560-1440.png
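The price/performance trade-off described above can be sketched numerically. The $324 price, $24 delta and 3.6% gaming lead come from this post; the performance index of 100 for the 5600X is an arbitrary baseline:

```python
# Perf-per-dollar sketch using figures from the post above:
# 10700 at $324 with a 3.6% gaming lead, 5600X at $300 ($24 less).
# The 5600X's performance index of 100 is an arbitrary baseline.
cpus = {
    "5600X": {"price": 300, "perf": 100.0},
    "10700": {"price": 324, "perf": 103.6},
}

for name, c in cpus.items():
    print(f"{name}: {c['perf'] / c['price']:.4f} perf points per dollar")
```

Despite the small gaming lead, the cheaper chip still comes out ahead on raw perf-per-dollar.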
 
It's some form of 0.1% difference... you can see that these chips were indeed getting decent results in the tests, so the better one was either capped by something else, or the extra headroom simply didn't help at FHD.

As for overclocking locked SKUs, you're going to have to work on your BCLK, which could lead to anything from USB device unreliability to outright failure. It's not something I'd suggest.

Does the 10700 reference in the charts rope in overclocking performance gains? E.g. raised BCLK + power limits? The 10700 for me is only a desirable option if it's running at stock with auto-boost enabled, without having to invest in a premium Z-series board + CL14 3200MHz kits. Alternatively I am more than happy to follow through with the more power-efficient 5600X.
 

According to the above 5600X review, the non-K series 10700 comes out ahead @ 1080p (in most games). Also beats the 10700K and 10900K (stock).

I'm just trying to wrap my head around how this was made possible. I've always been led to believe the K-series CPUs are "marginally faster" out of the box + the added benefit of overclocking. My Kaby Lake 7700K sees pretty wide single-threaded perf gains over the non-K 7700... "so vat da hell iz goin on with 10th Gen?"

Is the 10700 reference in these reviews based on a premium Z-series board, more expensive lower-latency memory and overclocking (as I understand some OC is possible with non-K series)? Or was it based on stock configurations with a B/H series board and not-so-expensive RAM? Also, what are your thoughts on how the non-K is able to beat Intel's flagship i9 SKU?
Both platforms are at the end of their lifetime. The 5600X and 10700 are really close in perf; 0.5~3% is identical in real life.
On gaming performance the 5600X does it with a little less power than both the 10700/10700K, and on multi it stands between them on perf, but with a little lower power consumption than the non-K and a lot lower than the K.
With the 5600X you get PCI-E 4.0, and something tells me that with RX 6000 it is going to be a lot more relevant than fast NVMe(s), if you are interested in a new GPU as well.
(see SAM feature)
 
Get the 5600X, you could use it with the stock cooler for your needs.
 
Keep in mind that the tests performed here were with an RTX 2080 Ti; the recently launched and announced GPUs will shift things around somewhat.
 
Get the 5600X, you could use it with the stock cooler for your needs.
The Wraith Stealth? Garbage for gaming...
I would buy a 30/40€ cooler like the Arctic Freezer 34 at least
 
The Wraith Stealth? Garbage for gaming...
I would buy a 30/40€ cooler like the Arctic Freezer 34 at least

Gaming does not load the CPU to its limits; if you had used an example like rendering, where the whole chip is used, that might have made more sense.

I have a couple of those and they work just fine when you don't push the CPU, which is how it sounds like the OP is going to use it. Especially now that winter is coming.
 
The $324 10700 is 3.6% faster in gaming than the 5600X, but is $24 more than the MSRP of the 5600X.

Keep in mind that this will change somewhat at higher resolutions. If you see yourself upgrading to 1440p / 2160p, take a look at those.

The surprise to me was at 1440p where, in gaming, the $174 10400F outperformed every one of the new Ryzen CPUs, including the $550 5900X.

relative-performance-games-2560-1440.png

My thoughts too. I've been looking at all three (10400/10600K/10700) to compare with Zen 3 (5600X). For me the 10400F only demands £150, which is a steal!! For gaming @ 1440p I wouldn't have looked past a 10400/10400F/3600 - problem being I am looking for an 8-core chip with growing thread utilisation demands (multiple DT applications, occasional rendering + video transcoding, I guess games may also benefit to some extent in the long run, etc).

I'm in no rush though - my 7700K is holding up to anything and everything listed in the review charts, and multi-threaded inferiority isn't a big deal with my "current" requirements... which brings me to: what are your thoughts on waiting for Rocket Lake for a more meaningful ST advantage over the 7700K, or for slashed Zen 3 prices? Money isn't a problem and I don't mind paying (short of excruciating over-spends), but I'm 50/50 on whether it's worth upgrading now. My next upgrade will be put to use for 4yrs+, or earlier if DDR5 platforms see larger-than-life performance uplifts. So many options... and I already know I'm over-thinking... (SOMEBODY HOLD MY HAND!! hehe)
 
In that case, if I didn't have any serious gaming issues I would keep it for another year at least. After Alder Lake, or even after Zen 4.
 
In terms of gaming the differences are practically zero; however, despite having two fewer cores, the 5600X outperforms the 10700 in just about every other task. I'd get the 5600X.
 
Both platforms are at the end of their lifetime. The 5600X and 10700 are really close in perf; 0.5~3% is identical in real life.
On gaming performance the 5600X does it with a little less power than both the 10700/10700K, and on multi it stands between them on perf, but with a little lower power consumption than the non-K and a lot lower than the K.
With the 5600X you get PCI-E 4.0, and something tells me that with RX 6000 it is going to be a lot more relevant than fast NVMe(s), if you are interested in a new GPU as well.
(see SAM feature)

Excellent point!! I forgot about the announcement notes on smart cache and 4-11% perf gains (I think) when pairing Zen 3 with an AMD GPU. Yes, I also have a GPU upgrade in mind. I'm definitely open to RDNA 2 but don't see myself pulling the trigger at launch. I've got a 1080 Ti, hence I don't mind waiting a lot longer for Big Navi to hit the shelves, Nvidia to respond with Ti/Super variants, and who knows, RDNA 2 counterattacks too (not needing the absolute best, but hopefully some price-slash action). I'll wait until the dust settles. I definitely want to see how well the smart cache feature performs in a select number of games I play.

Get the 5600X, you could use it with the stock cooler for your needs.

Honestly I don't like stock coolers. Even if it came with a Wraith Prism I'd get something better. Anyway cooling isn't a problem... I've got a couple of monster-like air coolers already.
 
Excellent point!! I forgot about the announcement notes on smart cache and 4-11% perf gains (I think) when pairing Zen 3 with an AMD GPU. Yes, I also have a GPU upgrade in mind. I'm definitely open to RDNA 2 but don't see myself pulling the trigger at launch. I've got a 1080 Ti, hence I don't mind waiting a lot longer for Big Navi to hit the shelves, Nvidia to respond with Ti/Super variants, and who knows, RDNA 2 counterattacks too (not needing the absolute best, but hopefully some price-slash action). I'll wait until the dust settles. I definitely want to see how well the smart cache feature performs in a select number of games I play.
Just a clarification: it's Smart Access Memory, not smart cache. The large Infinity Cache all RX 6000 cards have is something different and unrelated to SAM.
IC is just an enhancement of GPU<->VRAM communication; SAM is CPU access to the full VRAM.
 
In that case, if I didn't have any serious gaming issues I would keep it for another year at least. After Alder Lake, or even after Zen 4.

I agree. Even my multi-threaded workloads don't urgently require a 6-core+ option, although it would be nice. My only worry is running multiple applications whilst day trading, with a whole bunch of side-charts/market feeds/multiple browser windows and a couple of other tasks... I'm a beginner at DT, hence not an immediate requirement from the get-go. Bottom line, I've hit a point with hardware enthusiasm where an "upgrade" is an itch I can't scratch lol. I can't shake it off, but I'm willing to wait a little longer until 11th Gen hits the shelves.

Just a clarification: it's Smart Access Memory, not smart cache. The large Infinity Cache all RX 6000 cards have is something different and unrelated to SAM.
IC is just an enhancement of GPU<->VRAM communication; SAM is CPU access to the full VRAM.

Thank you for the correction. I swear, I knew I should have reverted back to the Zen 3 slides before making an a$$ of myself hehe.

Actually I was meaning to ask... are there any current independent reviews of SAM in action (other than AMD slides)? Or is this something expected once the RDNA 2 NDAs lift at launch?

Also, in your words, can you simplify the SAM process? For example, my not-so-researched understanding is that the CPU uses both system memory (RAM) and VRAM for the added performance uplift. If games don't utilise more than, let's say, 10GB RAM, and I've already got 16 gigs... does that mean I won't benefit from SAM, or have I got it all wrong?
 
GPUs (6000) must be released first...
Around the 18th

We don’t know the specifics yet, but from my understanding, CPU will be using empty VRAM for its own workload during gaming.
 
GPUs (6000) must be released first...
Around the 18th

We don’t know the specifics yet, but from my understanding, CPU will be using empty VRAM for its own workload during gaming.

I am curious to see how this works. Could be massive, could be marketing nonsense.
 
Gaming does not load the CPU to its limits; if you had used an example like rendering, where the whole chip is used, that might have made more sense.

I have a couple of those and they work just fine when you don't push the CPU, which is how it sounds like the OP is going to use it. Especially now that winter is coming.
I'm speaking from direct experience.
I bought a 3600X for my son, in a "cheap" gaming rig (3600X + MSI B450M Mortar Max + GTX 1660 Super), and the stock cooler was terrible: while gaming it struggled to keep the temperature under control with the fan at maximum speed.
I swapped it with the brand-new Wraith Prism from my 3900X (I'm using a Dark Rock 4 with it) and the situation dramatically improved in both temperatures and noise.
The Wraith Prism isn't a great cooler, but it is more than adequate for a 3600X/3700X, while the stock Wraith Stealth is not.

And if you don't believe my words, you can check for yourself: stealth vs prism
Even in gaming the difference is huge.

Even the cheap Spire is better than the garbage Stealth, and I don't know why AMD was providing the Spire with the 3600X and is now giving the Stealth to the more expensive 5600X.
 
GPUs (6000) must be released first...
Around the 18th

We don’t know the specifics yet, but from my understanding, CPU will be using empty VRAM for its own workload during gaming.

Ah, that makes sense. I guess we'll have to see how much of an impact this feature will have :)
 
Also, in your words, can you simplify the SAM process? For example, my not-so-researched understanding is that the CPU uses both system memory (RAM) and VRAM for the added performance uplift. If games don't utilise more than, let's say, 10GB RAM, and I've already got 16 gigs... does that mean I won't benefit from SAM, or have I got it all wrong?
AMD has not revealed details of what SAM does. Basically, there is nothing new on the hardware side: there is the GPU, and it can access system RAM over the PCIe bus. This has been the case since the very beginning of PCIe GPUs (and AGP before that). The best educated guess is that SAM is some sort of software/firmware/API solution that does traffic management or QoS to prioritize certain traffic.
 
AMD has not revealed details of what SAM does. Basically, there is nothing new on the hardware side: there is the GPU, and it can access system RAM over the PCIe bus. This has been the case since the very beginning of PCIe GPUs (and AGP before that). The best educated guess is that SAM is some sort of software/firmware/API solution that does traffic management or QoS to prioritize certain traffic.

I was under the impression SAM is an older, inferior feature being revamped for superior gains. I guess we're all in the same boat on what it's becoming (not in your boat lol, yours and Zach's technical know-how is definitely more advanced), but SAM does seem interesting, or, like phanbuey put it, "marketing nonsense".
 
I'm speaking from direct experience.
I bought a 3600X for my son, in a "cheap" gaming rig (3600X + MSI B450M Mortar Max + GTX 1660 Super), and the stock cooler was terrible: while gaming it struggled to keep the temperature under control with the fan at maximum speed.
I swapped it with the brand-new Wraith Prism from my 3900X (I'm using a Dark Rock 4 with it) and the situation dramatically improved in both temperatures and noise.
The Wraith Prism isn't a great cooler, but it is more than adequate for a 3600X/3700X, while the stock Wraith Stealth is not.

And if you don't believe my words, you can check for yourself: stealth vs prism
Even in gaming the difference is huge.

Even the cheap Spire is better than the garbage Stealth, and I don't know why AMD was providing the Spire with the 3600X and is now giving the Stealth to the more expensive 5600X.
I agree that the Stealth is the bare minimum and just about works, but it's not about CPU price. It's about power draw. The 3600, 3700X and 5600X are all 65W TDP parts; the 3600X is a 95W TDP part (88W PPT vs 125W PPT).
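The TDP-to-PPT relationship quoted here tracks a commonly cited AMD rule of thumb, PPT ≈ 1.35 × TDP. This is an approximation, not an official formula; it matches the official 88 W figure for 65 W parts and lands close to the figures quoted for the 95 W parts:

```python
# PPT (socket power limit) is roughly 1.35x the rated TDP on AMD's
# AM4 parts. 65 W -> ~88 W matches the official figure; 95 W lands
# near the ~125-128 W quoted for those parts.
for tdp_watts in (65, 95, 105):
    ppt_watts = round(tdp_watts * 1.35)
    print(f"{tdp_watts} W TDP -> ~{ppt_watts} W PPT")
```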
AMD has not revealed details of what SAM does. Basically, there is nothing new on hardware side, there is GPU and it can access system RAM over PCIe bus. This has been the case since the very beginning of PCIe GPUs (and AGP before that). Best educated guess is that SAM is some sort of software/firmware/API solution that does traffic management or QoS to prioritize certain traffic.
Yes, CPU access to VRAM is old, but the difference with SAM is that up until now the CPU could only access a 256MB part of the VRAM. Now AMD is giving full access to the entire VRAM. And I suspect that PCI-E 4.0 is a requirement for such an operation, which can involve large amounts of data going back and forth for the CPU alone...
They said it: "full CPU access to VRAM", and there is a video that explains it on AMD's channel.
Whether it's marketing nonsense, we will see. It's not up to us to decide, especially when we don't know the specifics...
 
I'm speaking from direct experience.
I bought a 3600X for my son, in a "cheap" gaming rig (3600X + MSI B450M Mortar Max + GTX 1660 Super), and the stock cooler was terrible: while gaming it struggled to keep the temperature under control with the fan at maximum speed.
I swapped it with the brand-new Wraith Prism from my 3900X (I'm using a Dark Rock 4 with it) and the situation dramatically improved in both temperatures and noise.
The Wraith Prism isn't a great cooler, but it is more than adequate for a 3600X/3700X, while the stock Wraith Stealth is not.

And if you don't believe my words, you can check for yourself: stealth vs prism
Even in gaming the difference is huge.

Even the cheap Spire is better than the garbage Stealth, and I don't know why AMD was providing the Spire with the 3600X and is now giving the Stealth to the more expensive 5600X.

If you don't know what you're doing, you can overheat under any conditions, such as with auto voltages. A properly tuned 5600X, at STOCK settings, will not impact your gaming experience anywhere near as badly as you're making it out to be.

Of course the cooler with the heatpipes will perform better; I have MULTIPLE of them and I use them effectively in numerous rigs. The main problem with the big Wraith is noise under heavier loads. Your link is also with Pinnacle Ridge counterparts...

The cooler comes with TIM pre-applied; it's a slap-on solution and it DOES work, even at 100% load:

 