
Shadow Of The Tomb Raider - CPU Performance and general game benchmark discussions

Joined
Aug 9, 2019
Messages
1,521 (0.88/day)
Processor Ryzen 5600X@4.85 CO
Motherboard Gigabyte B550m S2H
Cooling BeQuiet Dark Rock Slim
Memory Patriot Viper 4400cl19 2x8@4000cl16 tight subs
Video Card(s) Asus 3060ti TUF OC
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair 4000D Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores CB20 4710@4.7GHz Aida64 50.4ns 4.8GHz+4000cl15 tuned ram SOTTR 1080p low 263fps avg CPU game
@Taraquin I got around to taking your advice and switched to 2T instead of GDM 1T. I wasn't convinced at all at first, but I was proven wrong. It seems it took an extremely minor hit to bandwidth, but my latency is consistently around 52.4-52.6ns now. Even with that small change, it somehow translated to this benchmark and I got my best runs consistently. To make sure it wasn't a fluke I switched back to GDM 1T and got lower scores. This is at 4000/2000 CL14-15-14-21.

I also experimented with turning off C-states and messed around with the P-states to see if I could get my latency lower. All it really did was stabilize the latency jitter between Aida tests, at the cost of bandwidth, and the idle "sleep" voltage on my CPU was increased. So my CPU and FCLK wake up a little faster from idle, pretty much. But it doesn't make a difference at all during gaming, since the CPU is already awake at that point. Even then, 2 or 3 quick latency or read tests before running a full Aida benchmark are enough to wake the CPU up completely and get your true results. It might help in certain scenarios, but I couldn't tell you which lol.
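For anyone curious what these latency tests are actually doing under the hood, below is a minimal pointer-chase sketch in C. To be clear, this is toy code and not AIDA's actual method; the 64 MiB buffer and step count are arbitrary choices, and it assumes Linux/glibc. Run back to back, the first pass tends to read slower while the cores and FCLK ramp out of idle, which is exactly the warm-up effect described above.

Code:
/* chase.c - toy DRAM latency test via random pointer chasing.
   Build: gcc -O2 chase.c -o chase */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024 / sizeof(void *))  /* 64 MiB, well past any L3 */
#define STEPS (1L << 24)

static double now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e9 + ts.tv_nsec;
}

int main(void) {
    void **buf = malloc(N * sizeof(void *));
    size_t *perm = malloc(N * sizeof(size_t));
    if (!buf || !perm) return 1;

    /* Random cyclic permutation so the prefetcher can't follow the chain */
    for (size_t i = 0; i < N; i++) perm[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);   /* fine with glibc's RAND_MAX */
        size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    for (size_t i = 0; i < N; i++)
        buf[perm[i]] = &buf[perm[(i + 1) % N]];

    /* Several runs: the first often reads slower while clocks wake up */
    for (int run = 0; run < 4; run++) {
        void **p = &buf[perm[0]];
        double t0 = now_ns();
        for (long s = 0; s < STEPS; s++) p = (void **)*p;
        double t1 = now_ns();
        printf("run %d: %.1f ns/load (end=%p)\n", run,
               (t1 - t0) / STEPS, (void *)p);   /* printing p defeats DCE */
    }
    free(buf); free(perm);
    return 0;
}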
2T on Ryzen 5000 seems to be generally faster, BUT stabilizing it is much harder than 1T GDM. Currently I'm running flat 16-32-48 with 288 tRFC on 1T and can do that at 1.44V. On 2T I get some errors in TM5 after a while if I try that. If you get 2T stable, go for it, but GDM is a nice fallback with almost the same performance and better stability :)
 
Joined
Jan 26, 2020
Messages
416 (0.27/day)
Location
Minbar
System Name Da Bisst
Processor Ryzen 5800X
Motherboard Gigabyte B550 AORUS PRO
Cooling 2x280mm + 1x120mm radiators, 4x Arctic P14 PWM, 2x Noctua P12, TechN CPU, Alphacool Eisblock Aurora GPU
Memory Corsair Vengeance RGB 32 GB DDR4 3800 MHz C16 tuned
Video Card(s) AMD PowerColor 6800XT
Storage Samsung 970 Evo Plus 512GB
Display(s) BenQ EX3501R
Case SilentiumPC Signum SG7V EVO TG ARGB
Audio Device(s) Onboard
Power Supply ChiefTec Proton Series 1000W (BDF-1000C)
Mouse Mionix Castor
I don't know, but it seems like he still believes those cores have dedicated L3 per core (like L1/L2), and isn't getting that L3 cache is actually a pool of fast memory that is shared between cores, and that even if it's split in half you still have 2x32MB, which will certainly add some worst-case latency, but you will have more cache....
P.S. In that video I posted above you can clearly and without a doubt see the advantage of more cache.... Sure, it was an Intel CPU, but I doubt the difference is that different with those Ryzens. If I had to guess, I'd say the gains are not as big as with Intel, but nevertheless you will still see the advantage of more L3 cache....
I can't say for sure what makes a dual-CCD Ryzen faster in certain situations; I don't have the in-depth knowledge to make 100% statements. I can only look at differences in results with similar hardware and draw some (possibly flawed) conclusions. :)

The thing is, in this specific test we run in this thread, there are noticeable differences between single and dual-CCD Ryzens. Whether that's down to moar cores, or cache speed, or how the cache is used, that's just a guess :)
 
Joined
Aug 11, 2021
Messages
35 (0.04/day)
2T on Ryzen 5000 seems to be generally faster, BUT stabilizing it is much harder than 1T GDM. Currently I'm running flat 16-32-48 with 288 tRFC on 1T and can do that at 1.44V. On 2T I get some errors in TM5 after a while if I try that. If you get 2T stable, go for it, but GDM is a nice fallback with almost the same performance and better stability :)
Yeah, I noticed. I'm getting failures about half an hour into HCI memtest. I added .2 mV to it and am running a test now. I'm not willing to go any higher.

@Taraquin yeah, 2T is not happening lol. I updated my BIOS this morning thinking this might be "the one" and that's a big fat NOPE. Now I'm testing my daily overclocks again for stability.
 

tabascosauz

Moderator
Supporter
Staff member
Joined
Jun 24, 2015
Messages
7,573 (2.35/day)
Location
Western Canada
System Name ab┃ob
Processor 7800X3D┃5800X3D
Motherboard B650E PG-ITX┃X570 Impact
Cooling NH-U12A + T30┃AXP120-x67
Memory 64GB 6000CL30┃32GB 3600CL14
Video Card(s) RTX 4070 Ti Eagle┃RTX A2000
Storage 8TB of SSDs┃1TB SN550
Display(s) 43" QN90B / 32" M32Q / 27" S2721DGF
Case Caselabs S3┃Lazer3D HT5
Yet there are some measurable differences in cache between the 5600X, 5800X, 5900X and 5950X :)
See below screenshots of the Aida Cache and Memory bench, taken from this thread, but repeatable across other threads' findings:

Now, I am no expert on CPU architecture, but those doubled cache-speed numbers jump out, and in certain situations, like this test, there is a noticeable gain between them. So my humble opinion, which could be wrong, is that there is a difference in cache behavior between single and dual-CCD Ryzen 5000 CPUs, which in certain scenarios can give an uplift.

It's not new, the difference has been there since Ryzen 3000, and L3 results are pretty much mirrored (though slightly faster usually). I'm 90% sure that the "cache differences" in AIDA are horseshit. AIDA is pretty meaningless on both the cache and memory front (especially DRAM latency, which is wildly unpredictable compared to membench's latency counter); it's not hard to dupe AIDA with memory settings that are flat out unstable or that tank performance in other benchmarks (LinpackXtreme, DRAM Calc).

As to L3 in AIDA, the infamous example was the L3 cache read "bug" with Renoir APUs. And no, before you ask, it wasn't an issue of boost or C-states; all-core and C-states off made zero difference on the 4650G. AMD "fixed" it with an AGESA patch that literally didn't do anything to performance in any other test in existence. I suspect AMD simply tweaked Precision Boost to prevent cores from parking during AIDA, to make users feel better about themselves. Then Cezanne came around and now we're back to crappy 300-400GB/s L3 in AIDA... see the pattern here? If AIDA were authoritative, we'd be claiming that Zen 3 has demonstrably slower L3 than Zen 2 Renoir, of all things. AIDA is the single greatest pat-oneself-on-the-back machine; it's popular because it's easy, but that doesn't mean it indicates anything at all. When different people's stock 5900Xs have hundreds of GB/s' difference in L3 AIDA readings...
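To illustrate the mechanism (toy code, nothing to do with AIDA's internals): a cache-sized read loop like the sketch below computes GB/s from elapsed wall time, so whatever the core's clock happens to be doing during the run goes straight into the result. A parked or idle-clocked core tanks the "L3 bandwidth" number without the cache itself changing at all. The buffer size and pass count are arbitrary.

Code:
/* l3bw.c - toy single-threaded read-bandwidth loop over an L3-sized buffer.
   Build: gcc -O2 l3bw.c -o l3bw */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define BUF_BYTES (16 * 1024 * 1024)   /* sized to fit a 32 MB L3 */
#define PASSES 200

int main(void) {
    uint64_t *buf = malloc(BUF_BYTES);
    if (!buf) return 1;
    size_t n = BUF_BYTES / sizeof(uint64_t);
    for (size_t i = 0; i < n; i++) buf[i] = i;   /* touch + warm the cache */

    uint64_t sink = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < PASSES; p++)
        for (size_t i = 0; i < n; i++)
            sink += buf[i];                      /* sequential cached reads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* GB/s = bytes moved / wall time: anything that slows the core (idle
       clocks, parking) lowers this figure with the cache untouched */
    printf("%.1f GB/s (sink=%llu)\n",
           (double)BUF_BYTES * PASSES / sec / 1e9, (unsigned long long)sink);
    free(buf);
    return 0;
}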

Again, don't get me wrong, I'm not trying to discredit you or cast doubt on your choice of settings for the benchmark. But if it's supposed to be a CPU-heavy game, it should perform the part, and nothing that I can see so far shows that. Please provide more HWInfo if you can though, more is better.

@tabascosauz
Your settings are just plain bad, that's why you're scoring low..

But do note that my latency numbers are not normal for a dual-CCD 5900X/5950X (the average is 55-58ns latency in Aida, I would say)

[Screenshot: 3800cl14 final timings tight stable.png]

Okay, make up your mind?

First you say that my settings are bad for 3800 14-15-15 (feel free to offer actionable feedback), then you say that my 54.8ns/101s membench is better than average for 2 CCDs and that yours is significantly better than expected for some reason (are you implying board firmware or PEBCAK?). I never claimed to be running the tightest 3800CL14 setup in the world, but neither are most of the other results in here. So which one is it?

I'm well aware of polling rate. That doesn't significantly change the test behaviour at all. Upping the polling rate may cause a little more of the "high" boost clocks to translate into effective clock, but your own HWInfo screenshot indicates that usage and load are still nowhere near what's expected even from a mildly CPU-bound game (I have a LOT of those). Look at the disparity between your "clocks" and effective clocks; it's the classic symptom of mostly-idle cores and has little to do with polling rate. If anything, needing to increase the polling rate to portray increased CPU usage just confirms how low average usage is...

Plus, while per-core clocks and power vary a lot and a loose polling rate may miss occasional peaks, polling rate can't fool temps. I've done a shitload of logging in a few other games on the 5900X trying to figure out the 10-15C temp spikes that Zen 3 chiplets seem to experience sometimes, particularly in MW19 where clocks/per-core power/temps jump around like a roller coaster. Insurgency: Sandstorm is an example of a game that works the CPU moderately but effective clock doesn't show it; only per-core power and temps do. You can turn the polling rate up or down all you like: if a game is actually CPU-intensive, it makes no difference and will naturally show in the data.

And one thing that polling rate *certainly* won't fool is the fact that the GPU is running full tilt during this benchmark for more than just part of it. It takes a real long time at 100% load to get to a 72.5C edge temp, and 180W is literally the max possible load. So from what I can tell, it's quite a bit more GPU-bound than the vague "29%" number seems to imply. Are you insinuating that "bad settings" are solely to blame for most of the test running on the GPU?

Or are you implying that the GPU is bad (it's certainly no 3080; I never made any claims regarding GPU perf)? That in itself would be an admission that the bench isn't nearly as CPU-bound as it should be.
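If you want to convince yourself of the averages-vs-peaks point, here's a crude simulation with made-up numbers (a core parked at 200 MHz with rare 1 ms bursts to 4950 MHz): the sampled average barely moves with the polling interval; only the odds of catching a short peak do.

Code:
/* pollsim.c - toy model of sampling a bursty per-core clock.
   Build: gcc -O2 pollsim.c -o pollsim */
#include <stdio.h>
#include <stdlib.h>

#define TICKS 1000000   /* ~1000 s of simulated time at 1 ms resolution */

/* Hypothetical mostly-idle core: ~0.2% of polls land in a 4950 MHz burst,
   the rest see it loafing at 200 MHz */
static double poll_once(void) {
    return (rand() % 1000 < 2) ? 4950.0 : 200.0;
}

static void sample(int interval_ms) {
    double sum = 0, peak = 0;
    int n = 0;
    for (int t = 0; t < TICKS; t += interval_ms) {
        double v = poll_once();
        sum += v;
        n++;
        if (v > peak) peak = v;
    }
    printf("poll every %4d ms: avg %6.0f MHz, highest sample %4.0f MHz\n",
           interval_ms, sum / n, peak);
}

int main(void) {
    srand(1);
    sample(2000);   /* coarse polling */
    sample(100);    /* aggressive polling */
    return 0;       /* averages agree; only the observed peaks differ */
}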

Wait isn't the 5900X 8+4 ... ?

That's been rumored for a long time, but it's never made any sense. Ryzen has always functioned on symmetric CCDs. CCD1 and CCD2 cores are clearly demarcated by differences in per-core power during all-core loads, for example, and it does not paint a picture of 8+4 or anything that isn't 6+6.

Some games like MW19 run a heavy "all-core" AVX workload sometimes... but whereas on a 3700X it runs truly 8-core loads, on the 5900X it automatically limits itself to the 6 cores of CCD1. And the Windows scheduler seems to pick its favoured background-processing core not based on core quality (mine is literally the worst core), but on the fact that it's not on the same CCD1 that the two preferred performance cores inevitably are.
 
Joined
Aug 4, 2020
Messages
1,572 (1.16/day)
Location
::1
Makes sense. I don't know who made up the 8+4 claim/rumor first, but it seemed quite outlandish to me from the get-go. There's a reason why the 3600, 3900X, 5600X and 5900X have been the price/performance champs while the 3700X and especially the 5800X never quite made it (nor the 3950X/5950X): you could simply jam the slightly flawed CCDs into the former, while the latter require flawless ones.
 
Joined
Jul 11, 2015
Messages
628 (0.20/day)
System Name Harm's Rig's
Processor 5950X /2700x / AMD 8370e 4500
Motherboard ASUS DARK HERO / ASRock B550 Phantom Gaming 4
Cooling Enermax LIQMAX III ARGB 360 AIO/ Zalman cooler fan 110mm
Memory Patriot Viper Steel DDR4 16GB (4x 8GB) 4000M TRIDENT Z F-43600V15D-16GTZ /G.SKILL DDR4
Video Card(s) ZOTAC AMP EXTREME AIRO 4090 / 1080 Ti /290X CFX
Storage SAMSUNG 980 PRO SSD 1TB / WD DARK 770 2TB, Sabrent NVMe 512GB / 1 SSD 250GB / 1 HDD 3TB
Display(s) Thermal Grizzly WireView / TCL 646 55 TV / 50 Xfinity Hisense A6 XUMO TV
Case TT 37 VIEW 200MM'S/ NZXT Tempest custom
Audio Device(s) Sharp Aquos
Power Supply FSP Hydro PTM PRO 1200W ATX 3.0 PCI-E GEN-5 80 Plus Platinum - EVGA 1300G2/Corsair w750
Mouse G502
Keyboard G413

Attachments: CaptureHERE.PNG
Joined
Aug 11, 2021
Messages
35 (0.04/day)
Yet there are some measurable differences in cache between the 5600X, 5800X, 5900X and 5950X :) [...]

My best example is also found in this thread: two very close systems, both with a 6800XT as the GPU, 3800MHz tuned RAM for the 5800X and stock 3200MHz RAM for the 5950X. Even though the 5950X is paired with slower RAM and worse latency, it has a gain of ~20FPS over the 5800X in a more CPU-intensive game setting.
My L3 cache is actually faster than that, around 600+ GB/s, but I capped my EDC to 105A, as I've gotten better performance from keeping my TDC/EDC values about 10-15A above stock when using PBO. The result is lowered L3 cache speeds per Aida. I think it's a bug or something.

Now that I think of it, that's probably something I'll want to test now. Cinebench is not the one-and-done CPU test. Capping my EDC might give me a better CB20 score but may also be hurting gaming performance.

EDIT: ignore the memory side, the timings are different. The one on the left has an EDC cap of 105A (stock is 90A); the one on the right has an EDC of 300A, so it's basically uncapped. Notice the difference in L3 cache speeds. However, with the EDC uncapped I lose about 50-60 points in Cinebench, and performance falls within margin of error in an actual gaming benchmark... pretty weird.
[Screenshots: 2021-08-15 (1).png, 2021-08-21 (1).png]
 
Joined
Jan 26, 2020
Messages
416 (0.27/day)
Location
Minbar
System Name Da Bisst
Processor Ryzen 5800X
Motherboard Gigabyte B550 AORUS PRO
Cooling 2x280mm + 1x120mm radiators, 4x Arctic P14 PWM, 2x Noctua P12, TechN CPU, Alphacool Eisblock Aurora GPU
Memory Corsair Vengeance RGB 32 GB DDR4 3800 MHz C16 tuned
Video Card(s) AMD PowerColor 6800XT
Storage Samsung 970 Evo Plus 512GB
Display(s) BenQ EX3501R
Case SilentiumPC Signum SG7V EVO TG ARGB
Audio Device(s) Onboard
Power Supply ChiefTec Proton Series 1000W (BDF-1000C)
Mouse Mionix Castor
It's not new, the difference has been there since Ryzen 3000, and L3 results are pretty much mirrored (though slightly faster usually). [...]
It's fine, I love a healthy polite disagreement :laugh:

Regarding the Aida memory and cache benchmarks, I don't think any of us can make an educated judgement on the validity of the scores, if we are honest. I used them as a point to highlight differences between single and multi-CCD parts that also affect this benchmark, and by that I mean the CCD layout, not the Aida numbers.

Regarding whether the settings for this benchmark are the best for showing CPU differences, that's a very long and veeery subjective discussion. The fact is, the settings are good at highlighting differences between core counts, very good at highlighting differences between memory subsystems, and generally speaking not really GPU-bound. As far as what has been posted in this thread, there are clear differences between different CPUs, and also differences between the same CPUs with faster memory. If that's not enough, I don't know what could be :)

Now, one can always argue that for certain hardware combos there is a different set of settings that could show even greater differences, like older GPUs weak enough to be the bottleneck even at 1080p lowest graphical settings, but that would lead to an infinity of results and make it impossible to find a common denominator, actually compare results, and draw a conclusion.

I also argue there is no universal CPU test. And not because there aren't any tests, or games that could be used for such a thing; there is no universal one because people have different ideas about what a CPU bottleneck is and how it manifests.

I sometimes do not understand people who are hellbent on changing a thread so it suits their feelings about a certain situation. Everyone is free to start a new thread anytime and set the framework for another type of test if they cannot live with a thread they do not agree with. I did that myself with this thread, and mainly not for me, but for the people who wanted to use this game as a CPU bench, basically :)
 
Joined
Aug 11, 2021
Messages
35 (0.04/day)
So does anyone know what the determining factor is for having GDM on or off? I've only ever owned one set of RAM; I built my first PC in December of 2020. Anyway, I noticed some kits run GDM on as standard and are extremely hard to get stable with it off, while others come standard with GDM off 1T. Why is this?
 
Joined
Jan 26, 2020
Messages
416 (0.27/day)
Location
Minbar
System Name Da Bisst
Processor Ryzen 5800X
Motherboard Gigabyte B550 AORUS PRO
Cooling 2x280mm + 1x120mm radiators, 4x Arctic P14 PWM, 2x Noctua P12, TechN CPU, Alphacool Eisblock Aurora GPU
Memory Corsair Vengeance RGB 32 GB DDR4 3800 MHz C16 tuned
Video Card(s) AMD PowerColor 6800XT
Storage Samsung 970 Evo Plus 512GB
Display(s) BenQ EX3501R
Case SilentiumPC Signum SG7V EVO TG ARGB
Audio Device(s) Onboard
Power Supply ChiefTec Proton Series 1000W (BDF-1000C)
Mouse Mionix Castor
So does anyone know what the determining factor is for having GDM on or off? I've only ever owned one set of RAM; I built my first PC in December of 2020. Anyway, I noticed some kits run GDM on as standard and are extremely hard to get stable with it off, while others come standard with GDM off 1T. Why is this?
I can only tell you what I experienced with GDM on vs off. For me GDM on 1T gives a slightly higher calculated bandwidth, while GDM off and 2T gives slightly better latency, but the overall performance is basically the same. My experience is limited to my current CPU, as I could only run GDM off 2T with the latest AGESA; prior to that, GDM off was a no-go, being very unstable.
 
Joined
Aug 9, 2019
Messages
1,521 (0.88/day)
Processor Ryzen 5600X@4.85 CO
Motherboard Gigabyte B550m S2H
Cooling BeQuiet Dark Rock Slim
Memory Patriot Viper 4400cl19 2x8@4000cl16 tight subs
Video Card(s) Asus 3060ti TUF OC
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair 4000D Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores CB20 4710@4.7GHz Aida64 50.4ns 4.8GHz+4000cl15 tuned ram SOTTR 1080p low 263fps avg CPU game
2T is more flexible when tuning, but a bit harder to stabilize; in most scenarios on Ryzen 5000 it seems like 2T is a bit faster. 1T and GDM have a limitation with the CL, CWL, WR and RP timings, which must be set to even numbers or else they will be rounded up. The best RP timing is impossible with GDM since it's 5. The commonly used CL15 is also impossible with GDM due to this.
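For a sense of scale, the round-up cost can be put in nanoseconds: one DDR4 clock is 2000/(MT/s) ns, so GDM forcing CL15 up to CL16 at 3800MT/s costs about half a nanosecond of CAS latency. A quick back-of-envelope in C (the numbers are just this example, nothing board-specific):

Code:
/* gdm_cost.c - what GDM's round-up-to-even costs in absolute time.
   Build: gcc gdm_cost.c -o gdm_cost */
#include <stdio.h>

int main(void) {
    double mts = 3800.0;                 /* DDR4-3800 */
    double cycle_ns = 2000.0 / mts;      /* one memory clock in ns */
    int desired_cl = 15;                 /* odd, so GDM rounds it up */
    int gdm_cl = desired_cl + (desired_cl % 2);

    printf("cycle time: %.3f ns\n", cycle_ns);
    printf("CL%d = %.2f ns, GDM forces CL%d = %.2f ns (+%.2f ns)\n",
           desired_cl, desired_cl * cycle_ns,
           gdm_cl, gdm_cl * cycle_ns,
           (gdm_cl - desired_cl) * cycle_ns);
    return 0;
}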
 
Joined
Aug 11, 2021
Messages
35 (0.04/day)
I can only tell you what I experienced with GDM on vs off. For me GDM on 1T gives a slightly higher calculated bandwidth, while GDM off and 2T gives slightly better latency, but the overall performance is basically the same. My experience is limited to my current CPU, as I could only run GDM off 2T with the latest AGESA; prior to that, GDM off was a no-go, being very unstable.
@Taraquin I screwed around last night trying to get GDM off 1T stable. I pumped a bit more voltage into the RAM, but it was setting my MemClkDrvStr to 60 ohms and my ProcODT to 40 ohms that got me significantly more stability than usual. Ultimately I still crashed after about 20 membench runs, but I would imagine 2T with the correct ProcODT should be achievable. I am stable where I'm at with GDM on and the stock resistances though, so it might be something I'll try later on. I like the idea of being stable without resistances being the only thing keeping me stable for a minor perf increase lol
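For reference, the core loop of a TM5/HCI-style tester is nothing magical; conceptually it's write-a-pattern-then-verify over a big buffer, repeated for hours. A stripped-down sketch (illustrative only: real testers cycle many more patterns, run multiple threads, and cover most of your free RAM):

Code:
/* memstress.c - crude pattern write/verify loop, the skeleton of a RAM test.
   Build: gcc -O2 memstress.c -o memstress */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define BUF_BYTES (512ULL * 1024 * 1024)   /* 512 MiB test region */

int main(void) {
    uint64_t *buf = malloc(BUF_BYTES);
    if (!buf) return 1;
    size_t n = BUF_BYTES / sizeof(uint64_t);

    const uint64_t patterns[] = { 0x0000000000000000ULL,
                                  0xFFFFFFFFFFFFFFFFULL,
                                  0x5555555555555555ULL,   /* 0101... */
                                  0xAAAAAAAAAAAAAAAAULL }; /* 1010... */
    unsigned long long errors = 0;
    for (int pass = 0; pass < 4; pass++) {
        for (int pt = 0; pt < 4; pt++) {
            uint64_t pat = patterns[pt];
            for (size_t i = 0; i < n; i++) buf[i] = pat;   /* write phase */
            for (size_t i = 0; i < n; i++)                 /* verify phase */
                if (buf[i] != pat) errors++;
        }
        printf("pass %d done, %llu errors so far\n", pass, errors);
    }
    free(buf);
    return errors ? 1 : 0;   /* any mismatch = unstable memory */
}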
 
Joined
Jul 11, 2015
Messages
628 (0.20/day)
System Name Harm's Rig's
Processor 5950X /2700x / AMD 8370e 4500
Motherboard ASUS DARK HERO / ASRock B550 Phantom Gaming 4
Cooling Enermax LIQMAX III ARGB 360 AIO/ Zalman cooler fan 110mm
Memory Patriot Viper Steel DDR4 16GB (4x 8GB) 4000M TRIDENT Z F-43600V15D-16GTZ /G.SKILL DDR4
Video Card(s) ZOTAC AMP EXTREME AIRO 4090 / 1080 Ti /290X CFX
Storage SAMSUNG 980 PRO SSD 1TB / WD DARK 770 2TB, Sabrent NVMe 512GB / 1 SSD 250GB / 1 HDD 3TB
Display(s) Thermal Grizzly WireView / TCL 646 55 TV / 50 Xfinity Hisense A6 XUMO TV
Case TT 37 VIEW 200MM'S/ NZXT Tempest custom
Audio Device(s) Sharp Aquos
Power Supply FSP Hydro PTM PRO 1200W ATX 3.0 PCI-E GEN-5 80 Plus Platinum - EVGA 1300G2/Corsair w750
Mouse G502
Keyboard G413
Joined
Aug 9, 2019
Messages
1,521 (0.88/day)
Processor Ryzen 5600X@4.85 CO
Motherboard Gigabyte B550m S2H
Cooling BeQuiet Dark Rock Slim
Memory Patriot Viper 4400cl19 2x8@4000cl16 tight subs
Video Card(s) Asus 3060ti TUF OC
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair 4000D Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores CB20 4710@4.7GHz Aida64 50.4ns 4.8GHz+4000cl15 tuned ram SOTTR 1080p low 263fps avg CPU game
@Taraquin I screwed around last night trying to get GDM off 1T stable. [...]
Yes, GDM is awesome stability-wise. 2T, or especially 1T without GDM, is for the patient people :) You might gain a bit of performance, but it takes a bit of tinkering.

Even though I have a working 2T setup, I prefer 1T GDM since it can run the RAM 0.02V lower without errors in TM5. Fiddling with ProcODT etc. would probably fix it, but I don't have that much patience.
 
Joined
Oct 26, 2016
Messages
1,740 (0.64/day)
Location
BGD
Processor Intel I9 7940X
Motherboard Asus Strix Rog Gaming E X299
Cooling Xigmatek LOKI SD963 double-Fan
Memory 64GB DDR4 2666MHz
Video Card(s) 1) RX 6900XT BIOSTAR 16GB *** 2) MATROX M9120LP
Storage 2x SSD Kingston 240GB A400 in RAID 0 + HDD 500GB + Samsung 128GB SSD + SSD Kingston 480GB
Display(s) BenQ 28"EL2870U(4K-HDR) / Acer 24"(1080P) / Eizo 2336W(1080p) / 2x Eizo 19"(1280x1024)
Case Lian Li
Audio Device(s) Realtek/Creative T20 Speakers
Power Supply F S P Hyper S 700W
Mouse Asus TUF-GAMING M3
Keyboard Func FUNC-KB-460/Mechanical Keyboard
VR HMD Oculus Rift DK2
Software Win 11
Benchmark Scores Fire Strike=23905,Cinebench R15=3189,Cinebench R20=3791.Passmark=30689,Geekbench4=32885
Regarding the core/cache scaling on Intel... here's more testing and a clear advantage from a larger amount of L3 cache....

 
Joined
Jul 11, 2015
Messages
628 (0.20/day)
System Name Harm's Rig's
Processor 5950X /2700x / AMD 8370e 4500
Motherboard ASUS DARK HERO / ASRock B550 Phantom Gaming 4
Cooling Enermax LIQMAX III ARGB 360 AIO/ Zalman cooler fan 110mm
Memory Patriot Viper Steel DDR4 16GB (4x 8GB) 4000M TRIDENT Z F-43600V15D-16GTZ /G.SKILL DDR4
Video Card(s) ZOTAC AMP EXTREME AIRO 4090 / 1080 Ti /290X CFX
Storage SAMSUNG 980 PRO SSD 1TB / WD DARK 770 2TB, Sabrent NVMe 512GB / 1 SSD 250GB / 1 HDD 3TB
Display(s) Thermal Grizzly WireView / TCL 646 55 TV / 50 Xfinity Hisense A6 XUMO TV
Case TT 37 VIEW 200MM'S/ NZXT Tempest custom
Audio Device(s) Sharp Aquos
Power Supply FSP Hydro PTM PRO 1200W ATX 3.0 PCI-E GEN-5 80 Plus Platinum - EVGA 1300G2/Corsair w750
Mouse G502
Keyboard G413
Best run.
 

Attachments: Capture240.PNG, Capture239.PNG
Joined
Jan 26, 2020
Messages
416 (0.27/day)
Location
Minbar
System Name Da Bisst
Processor Ryzen 5800X
Motherboard Gigabyte B550 AORUS PRO
Cooling 2x280mm + 1x120mm radiators, 4x Arctic P14 PWM, 2x Noctua P12, TechN CPU, Alphacool Eisblock Aurora GPU
Memory Corsair Vengeance RGB 32 GB DDR4 3800 MHz C16 tuned
Video Card(s) AMD PowerColor 6800XT
Storage Samsung 970 Evo Plus 512GB
Display(s) BenQ EX3501R
Case SilentiumPC Signum SG7V EVO TG ARGB
Audio Device(s) Onboard
Power Supply ChiefTec Proton Series 1000W (BDF-1000C)
Mouse Mionix Castor
Best run.
Seeing your result makes me wonder what my 6800XT could do with the upcoming Alder Lake CPUs; the rumor is those will be gaming monsters :)
 
Joined
Sep 21, 2020
Messages
1,495 (1.14/day)
Processor 5800X3D -30 CO
Motherboard MSI B550 Tomahawk
Cooling DeepCool Assassin III
Memory 32GB G.SKILL Ripjaws V @ 3800 CL14
Video Card(s) ASRock MBA 7900XTX
Storage 1TB WD SN850X + 1TB ADATA SX8200 Pro
Display(s) Dell S2721QS 4K60
Case Cooler Master CM690 II Advanced USB 3.0
Audio Device(s) Audiotrak Prodigy Cube Black (JRC MUSES 8820D) + CAL (recabled)
Power Supply Seasonic Prime TX-750
Mouse Logitech Cordless Desktop Wave
Keyboard Logitech Cordless Desktop Wave
Software Windows 10 Pro
Regarding the core/cache scaling on Intel... here's more testing and a clear advantage from a larger amount of L3 cache....

What I'd really like to see is how the fastest quads today compare for gaming.
Intel's Core i3-10325 has a 4.7 GHz single-thread boost and 8 MB of L3 cache. AMD's Ryzen 3 3300X boosts to 4.35 GHz but has double the amount of cache. I don't know what the all-core boost on the i3 is, but the 3300X does 4.2 GHz at stock. Would the higher clock speed of the Intel chip make more of a difference in games than the Ryzen's generous L3, or vice versa?
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
12,945 (2.60/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | Asus 24" IPS (portrait mode)
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Razer Huntsman Tournament Edition
Software Windows 11 Pro 64-Bit
I don't think anybody knows exactly if it's 8+4 or 6+6; initial reports said 6+6. In theory 8+4 would be better, but who knows; it's probably selected based on yields and CCD characteristics. Though I am not sure how such a random CCD selection process could work, but it's not like AMD didn't deliver 5800Xs with two CCDs, one of them disabled :)
I am pretty certain it's 6+6. It's better this way too, as the heat is spread out more, rather than the majority of the heat being concentrated on one CCD due to the extra cores it would have.

What I'd really like to see is how the fastest quads today compare for gaming.
Intel's Core i3-10325 has a 4.7 GHz single-thread boost and 8 MB of L3 cache. AMD's Ryzen 3 3300X boosts to 4.35 GHz but has double the amount of cache. I don't know what the all-core boost on the i3 is, but the 3300X does 4.2 GHz at stock. Would the higher clock speed of the Intel chip make more of a difference in games than the Ryzen's generous L3, or vice versa?
It would depend on more than just core speed and the amount of cache the cores have access to, and it's largely dependent on the specific games.
 
Joined
Sep 13, 2020
Messages
142 (0.11/day)
System Name Desktop
Processor AMD Ryzen 7 5800X3D (H²O)
Motherboard ASRock X470 Taichi
Cooling NZXT X61 + 2x PH-F140XP | NZXT X41 + PH-F140XP
Memory G.Skill 32 GiB DDR4@3600 CL16
Video Card(s) MSI RX 5700 XT (H²O)
Storage Crucial MX200 250GB, SEC 970 Evo 1TB
Display(s) LG 34GL750-B
Case Phanteks Enthoo Luxe
Audio Device(s) ALC1220, DT 770 Pro, UR44
Power Supply BeQuiet! SP E10 700W
Mouse EndgameGear XM1
Keyboard Cherry RS 6000M
Software EndeavourOS | Pop!_OS | Win10
Hello!
those cores have dedicated L3 per core (like L1/L2) ... L3 cache is actually a pool of fast memory that is shared between cores, and even if it's split in half you still have 2x32MB, which will certainly add some worst-case latency, but you will have more cache.
They have dedicated L3 per CCX (core complex), but there's also an interconnect (Infinity Fabric) between the two CCXs of a 5900X or 5950X.
A core from CCX0 can also read L3 data from CCX1, but it incurs an additional latency penalty.

When different people's stock 5900Xs have hundreds of GB/s' difference in L3 AIDA readings.
Yes, AIDA can show inconsistent results... and the RAM OC could be unstable too, so varying amounts of data get transferred again by error correction. The result is a lower value.

I am pretty certain it's 6+6. It's better this way too, as the heat is spread out more, rather than the majority of the heat being concentrated on one CCD due to the extra cores it would have.
And I think that all combinations of 6+6, 7+5 and 8+4 cores are possible, because that makes more sense from a financial point of view. Companies love to save money.
So why would AMD waste the CCXs with 7 healthy cores or 5 healthy cores? Maybe the CCXs with 4 healthy cores could be reused in Athlon processors.
But that's just my guess ...

... Oh yes, Igor confirms the existence of Ryzen 5600X/5800X chips with two CCXs, which can come from a faulty Ryzen 5900X: https://www.igorslab.de/en/ryzen-5-...t-laesst-sich-die-cpu-zum-ryzen-9-unlocken-2/


Btw, this is what I found on Anandtech:
The Ryzen 9 5900X: 12 Cores at $549

Squaring off against Intel’s best consumer grade processor is the Ryzen 9 5900X, with 12 cores and 24 threads, offering a base frequency of 3700 MHz and a turbo frequency of 4800 MHz (4950 MHz was observed).
This processor is enabled through two six-core chiplets, but all the cache is still enabled at 32 MB per chiplet (64 MB total). The 5900X also has the same TDP as the 3900X/3900XT it replaces at 105 W.
So it should be two 6-core CCXs.

Best regards
 

Attachments: Zen3_arch_22.jpg

tabascosauz

Moderator
Supporter
Staff member
Joined
Jun 24, 2015
Messages
7,573 (2.35/day)
Location
Western Canada
System Name ab┃ob
Processor 7800X3D┃5800X3D
Motherboard B650E PG-ITX┃X570 Impact
Cooling NH-U12A + T30┃AXP120-x67
Memory 64GB 6000CL30┃32GB 3600CL14
Video Card(s) RTX 4070 Ti Eagle┃RTX A2000
Storage 8TB of SSDs┃1TB SN550
Display(s) 43" QN90B / 32" M32Q / 27" S2721DGF
Case Caselabs S3┃Lazer3D HT5
They have dedicated L3 per CCX (core complex), but there's also an interconnect (Infinity Fabric) between the two CCXs of a 5900X or 5950X.

Do you have a source for this? Anandtech's core-to-core latency testing showed zero difference between Ryzen 3000 and 5000 when venturing outside of their respective CCXs. Ryzen 3000 had no direct IF link between CCDs, so if there were suddenly a new avenue for inter-CCD communication in Ryzen 5000, wouldn't you expect even slightly better results?

[Core-to-core latency charts: CC5950X.png, CC3950X.png; chiplet layout: Zen_3_chiplet_layout.jpg]


A direct IF link between CCD1 and CCD2 would be a massive change from Ryzen 3000, a "new feature" that I would've expected AMD to constantly brag about, or other news/review sites to have reported on. Outwardly it doesn't appear as if they've drastically redesigned the substrate for that (honestly, I'm not sure the substrate is any different at all outside of accommodating the new CCD).

If the design hasn't actually changed and there is no direct link, then even if the two CCDs can indirectly talk to each other, I'm pretty sure the sheer latency penalty associated with travelling across the substrate not once but twice would basically invalidate any potential performance boost from theoretically having "more L3".
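For reference, the kind of core-to-core test Anandtech runs boils down to two pinned threads bouncing a flag through a shared cache line and timing the round trip; cross-CCD pairs pay the trip over the IO die. A rough sketch (Linux-only, and the core IDs are just examples; check your own topology before reading anything into the numbers):

Code:
/* pingpong.c - toy core-to-core latency test. Not Anandtech's tool.
   Build: gcc -O2 -pthread pingpong.c -o pingpong */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 1000000

static _Atomic int flag = 0;           /* the shared cache line */

static void pin(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *pong(void *arg) {
    pin(*(int *)arg);
    for (int i = 0; i < ROUNDS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1) ;
        atomic_store_explicit(&flag, 0, memory_order_release);
    }
    return NULL;
}

int main(void) {
    int core_a = 0, core_b = 8;        /* e.g. one core per CCD on a 5900X */
    pthread_t t;
    pthread_create(&t, NULL, pong, &core_b);
    pin(core_a);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0) ;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* one round trip = two cache-line handoffs, so halve it for one-way */
    printf("core %d <-> core %d: %.1f ns one-way\n",
           core_a, core_b, ns / ROUNDS / 2.0);
    return 0;
}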
 
Joined
Sep 13, 2020
Messages
142 (0.11/day)
System Name Desktop
Processor AMD Ryzen 7 5800X3D (H²O)
Motherboard ASRock X470 Taichi
Cooling NZXT X61 + 2x PH-F140XP | NZXT X41 + PH-F140XP
Memory G.Skill 32 GiB DDR4@3600 CL16
Video Card(s) MSI RX 5700 XT (H²O)
Storage Crucial MX200 250GB, SEC 970 Evo 1TB
Display(s) LG 34GL750-B
Case Phanteks Enthoo Luxe
Audio Device(s) ALC1220, DT 770 Pro, UR44
Power Supply BeQuiet! SP E10 700W
Mouse EndgameGear XM1
Keyboard Cherry RS 6000M
Software EndeavourOS | Pop!_OS | Win10
Anandtech's core-to-core latency testing showed zero difference between Ryzen 3000 and 5000 when venturing outside of their respective CCXs.
Inter-core latencies within the L3 lie in at 15-19ns, depending on the core pair. One aspect affecting the figures here are also the boost frequencies that the core pairs can reach, as we're not fixing the chip to a set frequency. This is a large improvement in terms of latency over the 3950X, but given that in some firmware combinations, as well as on AMD's Renoir mobile chip, this is the expected normal latency behaviour, it doesn't look like the new Zen3 part improves much in that regard, other than obviously enabling this latency over a greater pool of 8 cores within the CCD.

Inter-core latencies between cores in different CCDs still incurs a larger latency penalty of 79-80ns, which is somewhat to be expected as the new Ryzen 5000 parts don’t change the IOD design compared to the predecessor, and traffic would still have to go through the infinity fabric on it.
The technical specs don't change much, so you still get an extra latency penalty on Ryzen 5000 "between cores in different CCDs/CCXs".

Ryzen 3000 had no direct IF link between CCDs
I didn't write anything about a "direct IF interconnection"; that's what you may have wanted to read. ;) Maybe that's our misunderstanding.
I meant the path over the IF interconnect. And yes, now I see clearly that it is a doubled (serial, not parallel) connection through the IO die. Thanks for making that clear to me!

If the design hasn't actually changed and there is no direct link, even if the 2 CCDs can indirectly talk to each other, I'm pretty sure the sheer latency penalty associated with having to travelling across the substrate not once but twice, would basically invalidate any potential performance boost from theoretically having "more L3".
If you look at the two pictures of core-to-core latencies that you posted, you'll also see a big improvement.
The green fields of very low latencies are twice as large with Ryzen 5000 as with Ryzen 3000: half of the picture is marked green rather than only a quarter.

In addition, the highest latencies of the green-marked fields (6.6ns-19.8ns) are almost half of the ones from Ryzen 3000 (6.7ns-33.1ns).
And even the orange-marked fields show slightly better latencies on Ryzen 5000 in the end: 79.2ns-84.6ns instead of 81.7ns-92.5ns for the Ryzen 3000 series.

So nobody has to worry about bad performance, including L3 cache latencies, with the Ryzen 5000 series processors. They're just fine. :peace:
 
Joined
May 2, 2019
Messages
27 (0.01/day)
System Name RTi
Processor Intel Core i9-13900K
Motherboard ASUS ROG STRIX Z790-E GAMING WiFi
Cooling Corsair iCUE H170i ELITE LCD XT Display
Memory G.SKILL Trident Z5 RGB Series 96GB (2 x 48GB) DDR5 CL34 6800
Video Card(s) EVGA RTX 3090 Ti FTW3 ULTRA
Storage Samsung 990 Pro NVMe 2TB x2, Samsung 980 Pro NVMe 2TB x1
Display(s) LG 27GN850 (IPS/1440p/144Hz/1ms/G Sync) and LG 27GP850 (IPS/1440/180Hz/1ms/G Sync)
Case Cooler Master HAF 700 EVO
Power Supply CORSAIR - HXi Series HX1500i
Mouse Jelly Comb LED Bluetooth Mouse
Keyboard TECURS 2.4G Wireless Keyboard
Software Windows 10 Pro
Benchmark Scores https://www.passmark.com/baselines/V11/display.php?id=186004438447 https://www.3dmark.com/spy/400622
Have you tweaked the RAM or OCed the CPU? You have a lot of potential :)
Figured I'd update you: I went out and grabbed 3600MHz of Vengeance goodness! Gonna retest this benchy later!
 
Joined
Aug 11, 2021
Messages
35 (0.04/day)
Just ordered a 2x8GB set of Ripjaws V 3600 CL16 B-die to add to my Neos for 100 bucks. Hoping I can pop them in, load my BIOS/custom timing profile, and be set. Not too optimistic about hitting 4000 on 4 DIMMs though; hoping for at least 3800. Populating all 4 slots should net an increase even if I take a hit of 200MHz and 100 FCLK on the Zen 3 architecture, according to GN. Really hoping I hit the lottery on these; even though they were cheaper, B-die is B-die. I'll post a SOTTR run in about a week with the results.
 