
Single rank vs dual rank memory

Joined
May 8, 2021
Messages
1,978 (1.83/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
I do this on my benching systems; you can get great consistency if you kill everything. But most games have launchers which run in the background, I often play while on a voice call over Discord (wish it was as light as TeamSpeak was...), and closing all browsers and such is a pain (most people want to be able to switch quickly from one thing to another, and in some cases it's useful to have a browser window open to refer to something while you are playing). Windows services are a thing, and the drivers required to play the game normally spawn their own threads which the CPU needs to switch to. These are not some RGB or antivirus junkware that hogs a third of your CPU and should never have been installed to start with; these are tasks that are useful to the user or critical to the normal operation of the device in the given workload (think managing the network packets of an online game). I can cull down an OS very well to run Cinebench and take a screenshot for an HWBOT sub, but that is very distant from any practical use case.
That doesn't seem realistic. My daily system doesn't have much variation either, and I barely did anything special to Windows 10. A rogue task would at best reduce fps slightly; it wouldn't cause massive fps drops or awful spikes. Obviously you can load your PC with junk, but that takes effort and makes no sense, so any normal Windows installation should be reasonably tame. This isn't the single-core era, where everything mattered.

I just checked my system's RAM latency variance. With a web browser and torrent client open, I saw variation of only 1 ns, and my RAM latency is 60.8-61.8 ns. And that's mostly just lame Windows 10, and I use Windows Defender. That latency variation could simply be due to the CPU boosting 100 MHz higher for a moment. With Notepad, File Explorer, a torrent client and AIDA64 open, I ran Cinebench R23 three times to see the variance of results. It was just 55 points, and the average score was 8160 points. That's basically meaningless variation, and it wouldn't cause any stuttering or noticeable reduction of fps. During benchmarking there isn't a bunch of stuff open and only Windows itself might interfere, so this variation would be even lower. It's so tiny that it's almost irrelevant. I highly doubt that Windows cruft is really having much effect on those i3s. I would suspect code that isn't meant for low thread counts is ruining their performance and making gaming stuttery.
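Put in relative terms, the spreads quoted above are tiny; a quick sketch (Python, using only the numbers from this post) expresses them as percentages:

```python
# Express run-to-run spread as a percentage of the average result.
def spread_percent(spread, average):
    return spread / average * 100

# Cinebench R23: 55-point spread on an 8160-point average score
cb_variation = spread_percent(55, 8160)

# RAM latency: 1 ns spread at the 60.8 ns floor
lat_variation = spread_percent(1.0, 60.8)

print(f"Cinebench R23 variation: {cb_variation:.2f}%")  # ~0.67%
print(f"Latency variation: {lat_variation:.2f}%")       # ~1.64%
```

Both are well under 2%, which is why a run-to-run difference that small won't show up as stutter.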


I dealt with many threads of people popping VRMs on average-tier AM3(+) boards when they overclocked 8320s... Overclocking a powerful CPU takes a good motherboard, and it's the same now with AM4 vs LGA1200: you need much more motherboard to get good OC mileage out of Comet/Rocket Lake than you need for a 5800X or even a 5950X. The doubling of the CPU-only price was offset by savings you could make elsewhere on the rig; considering overall costs, getting a used Sandy/Ivy i7 ($200 or so in 2015) and a passable motherboard was only $50-100 or so more than going with an 8350.
One line, Gigabyte 970A-UD3P.


You also have to be careful with modern benchmarks using the spectre/meltdown/MDS security patches on those intel CPUs, those patches were only made around 2018-2019 and are not representative of the performance at the time. Also, consider the games being benchmarked and the relevance to different groups of users.
Patches came later, sure, but you wouldn't use those chips without them. RA Tech actually benchmarked the i3 without patches and it barely gained any performance. The framerate was still not smooth.


You are not testing things which benefit substantially from memory overclocking. I play a number of games that get good returns from memory, so it is relevant to my use case. This is something I always try to consider when making recommendations about memory overclocking, CPU overclocking, and even CPU choices in builds. If you are planning on playing games in a very GPU-limited fashion and/or are not particularly sensitive to worse performance, don't bother with OCing the parts that aren't the bottleneck; in fact, skip the 5600X or 5800X and go with a 3600/3700X or 10400F... But there are many people like me who do play games where performance is strongly tied to memory, games which are naturally CPU-bound, and games where performance matters for competitive reasons.
In that case, you gain more from a CPU overclock or a power limit increase, or even from setting the power plan to Ultimate.


This, not to mention AM4 boards are substantially cheaper than comparable LGA1200 ones, so you recoup a lot of the cost gap there as well. If the prices don't make sense, don't buy; but the global market is in a very dynamic situation at the moment. Many low-end Intel chips, for example, were not in stock in many places earlier this year, and Intel also had its supply shortages with 14nm...

I don't see any reason to be emotional about these products and write them off because you are irritated at a change in price tiers. They are tools for a task, and because of these kinds of fluctuations, at certain times some are going to be better than others.
That was never the case where I live.


He is the last person on this planet you should be taking serious computer advice from... In one of his videos he has a pre-production AM4 board with a beta (never publicly released) BIOS on it, and when it doesn't run properly he makes a video blaming AMD for it, without bothering to do any troubleshooting or cross-checking with the board vendor or AMD about why the hardware was behaving as it was.
While he may fail at being objective, I'm pretty sure he got some experimental Intel boards too. He also mentions that some problems were never fixed. His arguments are quite realistic too; come on, first-gen Ryzen certainly had issues with memory compatibility and stability, and Ryzens definitely had issues with USB. It's also true that OEMs had to do tons of BIOS patchwork to make it as stable and reliable as it is today. I remember his video about an APU, where he bought a brand new cheap board, an APU and the rest, and the graphics drivers were glitchy. So far Ryzen could best be described as a fat (those latencies) bleeding patient, who is patched up but still occasionally bleeds. All in all: Ryzen 1 was poor; Ryzen 2 was improved but still had issues often; Ryzen 3 became rather decent; Ryzen 5 can still have USB issues, but is good, unfortunately overpriced and not available. That might not be catastrophic, but since Nehalem, Intel hasn't had nearly as many problems with any of their consumer platforms, and certainly no disaster like Ryzen 1st gen. Their biggest fails so far were melting motherboards in the Pentium D era, Northwoods dying due to electromigration, and PL shenanigans that murder cheap boards. That's it. I hear of some SATA issues with Sandy Bridge, but I think that was the HEDT platform only.


Personally at least, the only place I put real value in Intel CPUs at the moment is for competitive gamers who want to run a very well-tuned (heavily overclocked) system, in which case they should get a 10900K(F), a good 2-DIMM motherboard (an Apex, for example), and a kit of dual-rank B-die. In any other case, Ryzen 5000 provides better value than Intel at every price point, except the 10400F, which is a questionable product compared to the 3600 due to the dead-end platform.
Why not a 10600KF? Games still benefit poorly from more cores. And AM4 is a dead end too: there won't be any more AMD CPUs made for AM4, as AMD is moving to AM5. LGA1200 at least got one lame refresh for the Celerons through i3s. And anyway, if you are in that market segment, one generation of updates doesn't matter to you. You won't upgrade a Ryzen 3600 soon, and when it becomes inadequate, you will need an entirely new platform anyway. Honestly, a Ryzen 3600 should last 8 years from launch; there's no reason why it shouldn't. Right now it has plenty of power and runs games way above 60 fps. It will take a long time until it can't maintain 60 fps or acceptable framerates. By then there will be much faster, much more power-efficient and overall better chips. Motherboards will also be massively upgraded, and there will be at least DDR5, if not DDR6. A minor upgrade to, say, a 5800X just won't make sense, even if it is cheap. Not to mention that after all those years your hardware is to a certain extent worn out; who knows, maybe the motherboard won't last much longer. So it only makes sense to replace the CPU, board, RAM and likely the SSD.


I personally run Rocket Lake, and the only reason is shits and giggles, which is a perfectly valid reason to use any hardware, but I'm definitely not going to go out and tell people it's good...
But why not? It's a rather decent platform, and your i9 certainly doesn't lag; it performs well. The only disadvantages are heat and questionable value, but if you get the i7 instead, the value is much better and there is PL tuning to contain that CPU. It's a bit worse than Ryzen, but it's also so similar that they are basically equivalents, and it doesn't really matter much which you have. And then there is the i5 11400F, which is just plain good value that murders Ryzens.
 
Joined
Mar 31, 2014
Messages
1,533 (0.42/day)
Location
Grunn
System Name Indis the Fair (cursed edition)
Processor 11900k 5.1/4.9 undervolted.
Motherboard MSI Z590 Unify-X
Cooling Heatkiller VI Pro, VPP755 V.3, XSPC TX360 slim radiator, 3xA12x25, 4x Arctic P14 case fans
Memory G.Skill Ripjaws V 2x16GB 4000 16-19-19 (b-die@3600 14-14-14 1.45v)
Video Card(s) EVGA 2080 Super Hybrid (T30-120 fan)
Storage 970EVO 1TB, 660p 1TB, WD Blue 3D 1TB, Sandisk Ultra 3D 2TB
Display(s) BenQ XL2546K, Dell P2417H
Case FD Define 7
Audio Device(s) DT770 Pro, Topping A50, Focusrite Scarlett 2i2, Røde VXLR+, Modmic 5
Power Supply Seasonic 860w Platinum
Mouse Razer Viper Mini, Odin Infinity mousepad
Keyboard GMMK Fullsize v2 (Boba U4Ts)
Software Win10 x64/Win7 x64/Ubuntu
My daily system doesn't have much variation either and I barely did anything special to Windows 10.
It's a 6c/12t part; it won't suffer performance loss from context (thread) switching with any currently reasonable amount of background tasks. Go below 12 threads, though, and it quickly starts to get worse.
saw variation of only 1 ns and my RAM latency is 60.8-61.8 ns.
This looks about normal. Even on a benching OS most memory latency tests have around this variance.
And it was just 55 points and average score was 8160 points.
This is much higher variation than on a benching system... I don't run R23 much, but on R15 and R20 you should get around a quarter of a percent variation on a good benching OS.
I would suspect code that isn't meant for low thread count, that is ruining their performance and making gaming stuttery.
There isn't really a way to code for high thread counts without actually saturating many cores. If you run lots of threads which each do a little work, then as long as your context-switching overhead is not becoming dominant, it is actually beneficial to have them on the same cores: they usually have shared data which will end up just staying in the private caches (L1/L2) of the CPU cores, rather than having to be retrieved from L3.
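The switching-overhead side of that claim is easy to poke at from userspace. Here's a rough sketch (Python threads, so this shows only the scheduling cost of splitting a fixed amount of work across more threads, not the cache behaviour):

```python
import threading
import time

TOTAL_WORK = 1_000_000  # fixed total amount of work, split across N threads

def spin(n):
    # tiny busy loop standing in for a lightly loaded background thread
    s = 0
    for i in range(n):
        s += i
    return s

def timed(num_threads):
    # same total work, divided among num_threads threads
    per_thread = TOTAL_WORK // num_threads
    threads = [threading.Thread(target=spin, args=(per_thread,))
               for _ in range(num_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

for n in (1, 8, 64):
    print(f"{n:>2} threads: {timed(n) * 1000:.1f} ms")
```

On a desktop OS the times typically stay close until the thread count gets extreme, which matches the point above: a handful of light threads costs almost nothing in switching overhead.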
Patches came later, sure, but you wouldn't use those chips without them
I wouldn't now, but I was talking about FX vs Intel in the pre-Zen era, so the patches should not be considered in the comparison: even if you had wanted to run them back then, you couldn't have.
Why not 10600KF?
Loses to AMD. Maybe not on an Apex with dual-rank B-die, but if you are running that setup, a) money isn't a problem so you can just go for the 10900K(F), and b) value is thrown out the window.

And AM4 is dead end too. There won't be any AMD CPU made for AM4 anymore
AMD has said they will be making consumer products with 3D-cache Zen 3; these will almost certainly be drop-in replacements for existing Zen 3 designs.
You won't upgrade Ryzen 3600 soon and when it will be inadequate, you will need entirely new platform anyway.
I think this argument only applies to drop-in replacements and where the performance uplift on the platform is small. A 5800X in 2-3 years' time (or a 3D-cache Zen 3) will be quite a lot cheaper than it is now.

But why not
Zen 3 does everything RKL does, but better... CML does performance and tuning better...

And then there is i5 11400F, which is just plain good value that murders Ryzens.
Until you account for the higher motherboard and cooling costs. The 10400F runs away with the value game compared to the 11400F.
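The platform-cost point is simple arithmetic; a sketch with made-up placeholder prices (none of these numbers come from the thread) shows how a cheaper CPU can still lose once the board and cooler are included:

```python
# Compare total platform cost (CPU + motherboard + cooler).
# All prices are hypothetical placeholders, purely for illustration.
builds = {
    "11400F + B560 board + tower cooler": {"cpu": 170, "board": 110, "cooler": 30},
    "10400F + B460 board + stock cooler": {"cpu": 150, "board": 90, "cooler": 0},
}

for name, parts in builds.items():
    print(f"{name}: ${sum(parts.values())}")
```

The comparison only holds with whatever prices apply in your market at the time, which is the whole point being argued here.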
 
Joined
May 8, 2021
Messages
1,978 (1.83/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
It's a 6c12t part, it won't suffer from performance loss due to context (thread) switching with any at this time reasonable amount of background tasks. Go below 12t and it will quickly start to become worse though.
While those parts are more sensitive, my quick testing still shows variation so tiny that it's almost not variation at all. And most of it can be due to a different boost step, if the particular tested core was in a C-state; that's enough to sway the score that much. On the other hand, my other system has near-zero variation: in the latency test, variation is no more than 0.2 ns, Cinebench R15 score variation is 2 points max, and CPU-Z variation is only 4 points. One factor is that it doesn't use any boost; another is that the CPU is always at the same speed (the "FSB" still varies a bit). Other than that, it's just a cleaned-up Windows 7 installation on spinning rust with Bitdefender Free.

This looks about normal. Even on a benching OS most memory latency tests have around this variance.
But that's with background tasks, or in other words a needlessly bloated OS (kinda). And to me that doesn't look good. My memory normally has 5 ns lower latency, and the variation is also smaller, up to 0.5 ns.

This is much higher variation than on a benching system... I don't run r23 much but on r15 and r20 you should get around a quarter of a percent variation on a good benching OS.
Sorry, but that's unrealistic. If you test CPUs with Turbo Boost and C-states on, there will be variation, and certainly more than a quarter of a percent. CPUs now have far more power-saving features (EIST, ring-to-core offset, race to halt, energy-efficient turbo, and real-time memory timing, just to list some; higher-end BIOSes may expose more options, and Zen has other features of its own). I will mention that C-states are particularly bad for benchmarking with the balanced power plan: they reduce CPU cache performance, ruin SSD IOPS and access times, and put random cores to sleep. Depending on the power limit selected (the Intel spec for the i5 10400F is PL1 65 watts, PL2 134 watts), CPU performance will also vary. You also can't turn off C-states, as those are mandatory if you want to let the CPU reach the higher non-all-core boost states. You also need Speed Shift on, because boost is somewhat broken without it.

Let's not forget that this is with junk in the background. For that, it's not bad, and it certainly wouldn't cause stuttering or massive frame drops. It could only reduce framerate by 1-2 fps. Unless you try to run something on an Intel Atom or an AMD E-something; then, yeah, maybe it will matter.

There isn't really a way to code for high thread count without actually saturating many cores, if you run lots of threads which do a little work, as long as your context switching overhead is not becoming dominant, it is actually beneficial to have them on the same cores because usually they have shared data which will end up just staying in the private caches (L1/L2) of the CPU cores, rather than having to go to L3 to retrieve it.
In other words, you just said what I said.

I wouldn't now, but I was talking about FX Vs intel in pre zen era so they should not be considered in the comparison because even if you wanted to run them back then you couldn't have.
Oh well:

The situation wasn't much different even back then.

Even in Skyrim, at worst the FX got 85 fps and the Ivy i7 got 114 fps; the performance difference is 34%. Perhaps the Spectre and Meltdown patches did affect some Intel results, but that was a 25-second benchmark, so it's dubious at best. In many other benchmarks the difference is non-existent, or at worst 35%. In gaming benchmarks the difference was quite small overall. The fun thing is that the FX often beat the Ivy i5, which was considered a higher-end chip than the FX. In some benches the FX also managed to beat the i7, and it often beat the i5.
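For reference, the 34% figure is the i7's lead relative to the FX result; checking the arithmetic:

```python
def percent_faster(fast, slow):
    # relative lead of the faster result over the slower one, in percent
    return (fast - slow) / slow * 100

# Skyrim numbers from above: Ivy i7 at 114 fps vs FX at 85 fps
lead = percent_faster(114, 85)
print(f"{lead:.0f}%")  # ≈ 34%
```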

I found one bench where the FX sucked hard: the Far Cry 2 benchmark. It was bottlenecking the GPU really badly there; the i7 was nearly two times faster at low resolution. That's it; other benches showed the FX still decently competitive, and in the SHA1 hashing bench the FX was actually faster than the i7. For that matter, it was the fastest CPU they had; it even beat the i7-3960X, a Sandy Bridge behemoth with tons of cache, a 130-watt TDP, quad-channel memory, 6 cores and 12 threads.

So, once games were updated to utilize more cores, the FX started to shine. But before that, the FX gave good enough performance anyway. Is it really any wonder that the FX aged well? Back in 2012 people said that the FX would age better; pretty much everyone knew that, and it became reality. The FX doesn't beat the i7 in gaming, but it didn't have to: it was priced much lower and did quite well. The FX was doing quite well in productivity benches too.

Also, that benchmark video was with both CPUs at 4.5 GHz, which is quite a handicap for the FX and a big boost for Intel.


Loses to AMD, maybe not on an apex with dual rank b die, but if you are running that setup a) money isn't a problem so you can just go the 10900k(f) and b) value is thrown out the window.
Does it really lose?

The difference is small, and the i5 is actually overclockable. And no, if you buy the i5, money is still a consideration: you still save a ton with the i5. The argument about board and cooling doesn't work either. You need beefy VRMs and a good cooler for both, both run hot (Ryzen runs hotter), and if you don't want your board throttling or kicking the bucket fast, you need to spend more on it anyway; that applies to both Intel and AMD. You can also put some of that gamer RAM on the i5 and you will see AMD being beaten. Otherwise, spending a premium on RAM is stupid; get a better cooler and clock the CPU higher for far better gains.

AMD has said they will be making consumer products with 3d cache zen3, these will almost certainly be drop in replacements for existing zen 3 designs.
So why would they make that new 1700-pin socket then? Anyway, it's still the same old Zen 3; it's just one refresh left on AM4 at most. It's a dead end. AMD may also pull the same "you need another board since the BIOS chip is too small" move, so you'd buy into dead-end AM4 again.

I think this argument only applies to drop in replacements and where the performance uplift on the platform is small. A 5800x in 2-3 years time (or a 3d cache zen3) will be quite a lot cheaper than they are now.
An upgrade from a 3600 to a 5600X would only make sense if it was truly cheap, no more than 50 USD. Otherwise it's just not worth it. And there will be faster and better CPUs by then; the 5600X will not look so hot anymore.

Zen 3 does everything RKL does, but better... CML does performance and tuning better...
Comet Lake isn't meaningfully better at tuning and certainly not at performance. At the 6-core tier, Rocket Lake is better. AM4 is even better, but it costs too much to make any sense.

Until you account for the higher motherboard and cooling costs. The 10400F runs away with the value game compared to the 11400F.
They are virtually identical. The 11400F is the same as the 10400F, but with better memory support and AVX-512; that's it, there aren't any other changes. So you realistically buy almost the same board (just updated to B560) and the same cooler. The new i5 isn't hotter; it actually runs colder than the old one:

By as much as 9°C.
 