
Ryzen 3000 memory speed vs latency in gaming


This is a really good read for someone who is still undecided between 3200 CL14 and 3600 CL16.

3600 CL16 is always faster, by 1-2% in some games and by 5-7% in others.

All tests were done on single-rank B-die memory.
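The 3200 CL14 vs 3600 CL16 question largely comes down to first-word CAS latency in nanoseconds. A quick sketch of the arithmetic (an illustrative helper, not something from the review itself):

```python
# First-word CAS latency in nanoseconds for a DDR4 kit.
# The real memory clock is half the DDR transfer rate, so:
#   latency_ns = CL / (rate_MT_s / 2) * 1000
def first_word_latency_ns(rate_mts: int, cl: int) -> float:
    """Absolute CAS latency in ns for a given DDR rate (MT/s) and CL."""
    return cl / (rate_mts / 2) * 1000

for rate, cl in [(3200, 14), (3600, 16), (3000, 14), (3200, 16)]:
    print(f"DDR4-{rate} CL{cl}: {first_word_latency_ns(rate, cl):.2f} ns")
# DDR4-3200 CL14 (8.75 ns) and DDR4-3600 CL16 (8.89 ns) land within ~0.15 ns
# of each other, so the 3600 kit wins on bandwidth at near-equal latency.
```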
 
And no tests at important speeds like 3733 and 3800...
Whereas pointless speeds like 4000, 4400 and 4600 are included...
Another mostly useless Ryzen 3000 memory test.
Also, that's a $560 memory kit the reviewer is testing with.
 
lol, you're useless.
Who buys 3800 memory for a chip that can't handle it without crippling IF speed?
3600 is the sweet spot.
 
Yeah, right... :banghead:

Well, yours did.
It does not matter for the test.
And they would have to test 3800 in non-sync mode since it's not guaranteed; some users even struggle to hit 1800.
Useless complaining.
 
that 3600 cl14 tho o_O

what a time to be alive.
 
Well, yours did.
It does not matter for the test.

And seemingly 75% or more of people manage to do the same. 3733 isn't even hard.
But no, doesn't matter, 'cause you say so... o_O
 
And seemingly 75% or more of people manage to do the same. 3733 isn't even hard.
But no, doesn't matter, 'cause you say so... o_O
Show me the data chart.
75% means one in four can't.


Needless nitpicking. Go comment somewhere else. It's not for you.
 
No matter what, for me: I am going after 3600 CL14 memory and then adjusting clock and timings according to what IF clock the Ryzen 9 3950X I want can reach. If it can do 1900 MHz on the IF, I will try timings like CL14-16-16-36 at 3800 MHz; if it can only do around 1800 MHz IF and 3600 MHz on memory, I will try 14-14-14-32 or around those numbers. Maybe 3733 MHz would be optimal for the IF clock, with timings like 14-15-15-35. Those are the numbers I will try to hit when I get my G.Skill Trident Z Neo 3600 MHz CL14 kit rated for 1.4 volts.
 
For the majority, the question is 3200 C16 vs 3000 C14, or 3600 C16 vs 3200 C14.
 
Actually, every single X570 motherboard and Ryzen 3000 CPU that I tested could run at 1900 MHz IF, and I tested multiple ASUS, ASRock and Gigabyte boards with several CPUs.
Also, most new ICs, including Hynix, Samsung, Micron, Nanya and Spectek, can make 3600-3800 CL16, where CL14 vs CL16 makes virtually no difference in daily usage. In most of my tests, the difference in games and most benchmarks is about 1% between various memory kits at 3600-3800. Sure, you can buy a highly overpriced 3600 CL14 kit to get 1 FPS more in games, or 1 GB/s more and 1 ns less in the AIDA64 test.

I also wouldn't say that a 4400+ clock is pointless. In my tests, results at ~4600 CL18-20 are about the same as at ~3733 CL16-17, and among current new ICs, Hynix, Micron and Samsung can run at 4600+ CL18. Some kits pass 4600 CL18 more easily than 3733 CL16: higher bandwidth, slightly lower latency, and in games or benchmarks the results are the same.

Clearly, some people just like to repost other users' conclusions and assume they're a rule because someone made a pretty average review. Not many are actually testing anything nowadays.
 
No matter what, for me: I am going after 3600 CL14 memory and then adjusting clock and timings according to what IF clock the Ryzen 9 3950X I want can reach. [...]
I got my 3600 CL15 G.Skill Tridents to do 3600 CL14 with no trouble whatsoever on my 3700X.
 
I got my 3600 CL15 G.Skill Tridents to do 3600 CL14 with no trouble whatsoever on my 3700X.

It's the same Samsung IC. The 3600 CL15 kits were actually built on the best Samsung IC available. I wouldn't be surprised if they were better than these 3600 CL14 kits, as the tight-timing Neo kits are rated at 1.4-1.5 V.
 
Actually, every single X570 motherboard and Ryzen 3000 CPU that I tested could run at 1900 MHz IF [...]
How many was that?
 
Actually, every single X570 motherboard and Ryzen 3000 CPU that I tested could run at 1900 MHz IF [...]

Well, mine can't do shit. I can't even run auto.
 
Well, mine can't do shit. I can't even run auto.
What CPU and memory?

The guys need to read the title and understand that this is advice for those who are undecided between e.g. 3200 C16 and 3000 C14, instead of bickering about 3800 results that almost no one cares about.
 
Your RAM needs to run at double your FCLK for optimal performance, so if your FCLK is 1800, your RAM should be 3600. That is, if you want to hit the lowest latency with the fastest read times; go above or below double FCLK and you get an instant increase of 8-10 ns and a drop in read speed.

I tested my 4400 MHz RAM at many speeds, and my fastest stable FCLK is 1866, so running my RAM at 3733 gives me the lowest latency and highest read speed compared with every other speed I tested.

In other words, if your RAM isn't running at double your fastest stable FCLK, your system will underperform. Some people get a stable 1900 FCLK, which would mean 3800 RAM speed for optimal performance. Don't piss and moan about 3000 and 3200 when those speeds kill your performance if they're not exactly double your FCLK.

To prove my point, I OC'd to 1900 FCLK and boosted the SoC voltage a bit so it would run stable; it had higher write speed but also suffered that instant 10 ns penalty because it wasn't 2:1.
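The rule in this post can be sketched as a small helper. The 2:1 coupling is as described above; the flat ~10 ns penalty is the poster's figure, used here purely as an illustrative constant, not something the code measures:

```python
# Sketch of the coupling rule described above: memory and Infinity Fabric run
# in sync only when the DDR rate is 2 x FCLK.
DECOUPLE_PENALTY_NS = 10  # poster's number from the post above, not a measurement

def optimal_ddr_rate(fclk_mhz: int) -> int:
    """Nominal DDR transfer rate (MT/s) that keeps memory 1:1 with the fabric."""
    return 2 * fclk_mhz

def is_synchronous(rate_mts: int, fclk_mhz: int, tol: int = 2) -> bool:
    # DDR4-3733 actually clocks ~1866.7 MHz, so allow a small rounding tolerance.
    return abs(rate_mts - 2 * fclk_mhz) <= tol

def estimated_penalty_ns(rate_mts: int, fclk_mhz: int) -> int:
    """0 ns when synchronous, the quoted flat penalty otherwise."""
    return 0 if is_synchronous(rate_mts, fclk_mhz) else DECOUPLE_PENALTY_NS

print(optimal_ddr_rate(1866))            # 3732, sold as DDR4-3733
print(estimated_penalty_ns(3733, 1866))  # 0  (in sync, within tolerance)
print(estimated_penalty_ns(3733, 1900))  # 10 (decoupled)
```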
 

Attachments
  • aida res.png (AIDA64 results, 225.4 KB)
Get a 3200 C16 kit and OC it to 3600 C18, done...
 
What CPU and memory?

The guys need to read the title and understand that this is advice for those who are undecided between e.g. 3200 C16 and 3000 C14, instead of bickering about 3800 results that almost no one cares about.
Just saying that the OP has replied in the thread that I cut and pasted below, just giving a different view in this thread.

Well, what about me and my testing? Hell, 2133 MHz is just as good as 3600 MHz. It's all about the context of the testing. Too much bull floating around about memory. People see tests like the one the OP posted, actually believe they're gospel, and are willing to insult and fight people over them.
So I also say: who cares, buy what you want for your comfort level. I can say not many people are buying an RTX 2080 Ti and expensive RAM to play at 1080p.

Ryzen 3600X tested with tight RAM timings: 2133 MHz, 2400 MHz, 2933 MHz, 3200 MHz, 3733 MHz, 4000 MHz, 4200 MHz, at 2560x1440, Ultra settings.

There are plenty of 1080p and lower tests; these tests are aimed at people who blew their cash on $500+ graphics cards and are not quite sure if they should upgrade their RAM. I say no if you game at higher resolutions.

In Battlefield V I noticed that above DDR4-4000 the average FPS started to climb, which is weird, because at that point the memory clock / fabric clock / memory controller ratio switches out of 1:1:1 and the memory controller clock gets cut in half.

A Plague Tale: Innocence had two areas where the FPS lows dropped to 16 FPS and 7 FPS, in the same spot on every test.

All games gave the exact same gameplay on every test.

Video of game areas tested

RAM TIMINGS USED (FULL TIMINGS, NO AUTO)
♦ 2133 MHz (16GB) CL10-10-10-10-21
♦ 2400 MHz (16GB) CL10-11-11-11-21
♦ 2933 MHz (16GB) CL12-14-13-13-26
♦ 3200 MHz (16GB) CL14-14-14-14-28
♦ 3733 MHz (16GB) CL16-17-16-16-34
♦ 4000 MHz (16GB) CL16-18-17-17-36 (memory clock 2000x2 / fabric clock 1800x2 / memory controller 1000x2)
♦ 4200 MHz (16GB) CL16-18-17-17-36 (memory clock 2100x2 / fabric clock 1800x2 / memory controller 1050x2)
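The clock-domain notes on the last two rows follow a simple pattern. A rough model of it (the 1800 MHz fabric ceiling is taken from those rows and varies per CPU and board):

```python
def zen2_clocks(rate_mts: int, max_fclk_mhz: int = 1800) -> tuple:
    """Rough model of (memory clock, fabric clock, memory controller clock)
    in MHz for Zen 2, assuming the fabric tops out at max_fclk_mhz.
    Once the memory clock exceeds that ceiling, the memory controller drops
    to 2:1 (its clock is halved), as in the 4000/4200 rows above."""
    mclk = rate_mts // 2
    if mclk <= max_fclk_mhz:
        return mclk, mclk, mclk            # fully synchronous 1:1:1
    return mclk, max_fclk_mhz, mclk // 2   # decoupled: UCLK = MCLK / 2

print(zen2_clocks(4000))  # (2000, 1800, 1000) -- matches the 4000 MHz row
print(zen2_clocks(4200))  # (2100, 1800, 1050) -- matches the 4200 MHz row
```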

COMPUTER USED
♦ CPU - AMD 3600X With MasterLiquid Lite ML240L RGB AIO (Fans 55%)
♦ GPU - Nvidia RTX 2080
♦ RAM - G.Skill Trident Z 16GB DDR4 (F4-4000C18D-16GTZ) (2x8GB)
♦ Mobo - MSI X470 - Gaming Plus
♦ SSD - M.2 2280 WD Blue 3D NAND 500GB
♦ DSP - LG 27" 4K UHD FreeSync Gaming Monitor (27UD59P-B.AUS)
♦ PSU - Antec High Current Pro 1200W

VIDEO INFORMATION
► FPS Monitoring : MSI Afterburner/RTSS
► Gameplay Recorder : Nvidia Shadowplay
► Edit Videos : VSDC Free Video Editor http://www.videosoftdev.com/
 
I appreciate the effort, and your post of the results is very well made to help anyone understand them. What I got from those tests is that 3733 MHz is the optimum, as AMD already showed in their marketing slides about Zen 2 memory performance. The 0.1% and 1% low FPS clearly show that in almost all games (BF V isn't good for comparing minimum FPS). Thanks for the info!
 
Just saying that the OP has replied in the thread that I cut and pasted below, just giving a different view in this thread. [...]
The thing is, you used really low timings. Most cheap RAM, for instance 3000 CL16, has no chance whatsoever of running 2933 at CL12; Samsung B-die, however, does that. I really find your testing interesting, but for many it's not possible to go so low. To give you an example, I have G.Skill Aegis 3000 CL16: the best I have achieved at 2666 is CL13-16-16-32 with 350 tRFC. B-die can probably do CL11-12-12-24 with 270 tRFC without problems.
 
Fair points, thanks for the reply. It is just another view on the RAM, and every bit of information can help people out.
 

Revolutionary testing, showing how GPU-bound workloads remain GPU-bound...

Appreciate the effort but you really need to realise that both the CPU/RAM and GPU/VRAM subsystems will each have their own FPS limits and in basically all cases significantly bottlenecking on either one of them will make any changes to the other completely irrelevant. The objective of CPU/RAM testing in most games should be to determine a maximum attainable (smooth) framerate without GPU limitations, which the reader can then use as a guideline to determine a good FPS target for their GPU and settings choices. Looking at "realistic scenarios" has little use if those scenarios are extremely heavily GPU bottlenecked.
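The bottleneck argument above amounts to taking the minimum over the two subsystems' frame-rate caps. A toy illustration, with all numbers invented:

```python
# Toy model of the point above: the delivered frame rate is capped by
# whichever subsystem (CPU/RAM or GPU/VRAM) is slower, so raising the
# CPU-side cap via RAM tuning is invisible while the GPU cap is lower.
def delivered_fps(cpu_ram_cap: float, gpu_cap: float) -> float:
    return min(cpu_ram_cap, gpu_cap)

# RAM tuning raises the CPU-side cap from 140 to 155 FPS, but at 1440p Ultra
# the GPU caps this (made-up) scene at 90 FPS either way:
print(delivered_fps(140, 90), delivered_fps(155, 90))  # 90 90
```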
 

This is a really good read for someone who is still undecided between 3200 CL14 and 3600 CL16.

3600 CL16 is always faster, by 1-2% in some games and by 5-7% in others.

All tests were done on single-rank B-die memory.
I actually tested this and you are spot on. Here are my results:
 