
9900X3D - Will AMD solve the split CCD issue?

I’d like to see a 10-12 core single CCD/CCX 3D variant at some point, if 3D V-Cache is still a thing in the future.
But unless AMD needs this core-count CCX on its server SKUs, I don’t see it happening.
Because, whether we like it or not, the Ryzen design is a server-type one that happens to work well on the PC platform, and works even better for games with the 3D V-Cache addition. It enables AMD to have a universal design, keep costs low and profit margins high.
I think you're right, but things can change. APUs will get 12 cores (including C cores) this summer, and that's of course a single chip, which is not a given; they could have stayed at 8 cores like it's been for 4 years now.
It's not like that iGPU needs 12 cores anyway; AMD is aiming for higher mobile CPU performance with this model, while still having an upcoming APU with even better graphics AND a mobile 16-core CPU.

But then again, the majority of computers sold are laptops, though I wonder if that's true for AMD; lots of laptops are still Intel.

AMD already moved away from a single-chiplet design a couple of years ago, even if not for desktop yet. I think AMD is more capable of doing something like you describe now, but I don't expect it to happen this year. Let's see if they can improve V-Cache somehow.

I’m a bit puzzled about what to get for my next upgrade. Not that I plan to switch soon. I guess I will decide when the need for an upgrade gets strong.
No wonder, you'll have to wait! :D
 
Yes, a 4090, but if you use DLSS and run high refresh, the difference can be noticeable. Going from 60 to 70 FPS on 1% lows is noticeable for many :)
I wonder how much of that is placebo. If it isn't, fair enough. :)

For me, any difference above 40-50 FPS is basically unnoticeable. 1% and 0.1% lows usually mean a small hitch while the game loads some new assets, which I also don't find particularly disturbing.

Well, isn't that counterproductive? You pay more for the 3D variant for the gaming performance, you sacrifice performance in other workloads and OCing to get that extra cache to help you in games, and then you realize you don't really get that extra gaming oomph because it's effectively just a 6-core chip for gaming.

I'd wager the normal 7700X with some slightly tuned RAM and/or OCing would end up faster than the 7900X3D, which is rather absurd.
You're paying more if you need the extra cores for something. For gaming, you don't. If you only game, then any AMD CPU above the 7800X3D is a waste of money.
 
Yours is 40-50 FPS. I naturally see the difference around 80-90 FPS. In some competitive titles, where fast motion in detailed environments and motion clarity are key, the difference can easily extend to as much as ~120 FPS. Beyond that I'd have to consciously look at set-piece details in fast motion for the difference to surface, which (for me) renders it irrelevant. I haven't tried side-by-side comparisons of 240 Hz/360 Hz vs 90/120 FPS... although I suspect it won't make much of a difference, and if it does, I'm sure I'm not missing out on much... 90-120 FPS for me is the sweet spot. The rest boils down to the lows, frame times and frame time variance keeping up at a good pace without too much discrepancy, which unfortunately in some titles is noticeably rampant - something I'm having to resolve by tweaking each game to maintain the preferred level of smoothness.

I have recently formed the opinion that anyone who can't see the difference beyond 60 FPS and yet enjoys smooth gaming visuals is simply lucky. For you guys it doesn't take more strain on the eyes to visually process the information, nor do you end up succumbing to the higher premiums for hardware that makes smoother gameplay at higher FPS possible. You've been efficiently fine-tuned, and we've been left to empty our wallets to the greedy corporations who prey on our (desired) weaknesses.
 
Well, that's the thing - I don't play competitively. I enjoy walking simulators more than anything. A good story and atmosphere make a game a winner in my books. Gaming is my wind-down after a busy day, not the other way around. :)
 
Yep, unfortunately I put myself through intense madness with competitive-shooter-type huge multiplayer maps... just can't get enough of 'em. As much as I enjoy them, they equally rile me up too. Occasionally I fancy some racing sims. After work, I'm working overtime without pay :p
 
The 7950X3D is still broken in some games for me compared to the 7800X3D with the latest chipset driver. Still, in most games I've tried so far the 7950X3D is either tied or ahead, and if I disable the non-cache CCD it's faster. It's just a minor annoyance I guess.
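
For anyone curious, the per-game workaround can also be scripted instead of toggling the CCD in BIOS. A minimal sketch in Python, assuming psutil is installed, assuming the V-cache die is CCD0 mapping to logical CPUs 0-15 with SMT on a 7950X3D (verify your own topology first), and using a hypothetical process name:

import psutil

# Logical CPUs 0-15 = CCD0 (the V-cache die) on a 7950X3D with SMT on.
# This mapping is an assumption; confirm it on your own system first.
VCACHE_CPUS = list(range(16))

for proc in psutil.process_iter(['name']):
    if proc.info['name'] == 'game.exe':  # hypothetical game process name
        try:
            proc.cpu_affinity(VCACHE_CPUS)  # pin the game to the cache CCD only
        except psutil.AccessDenied:
            pass  # some processes need admin rights to re-pin

Same idea as disabling the second CCD, just per-process instead of system-wide.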

Server needs mostly drive the development of these chips.

Personally, I do not need productivity performance. Other than normal everyday tasks, I game, but I do want more than 8 cores for some VMs occasionally.
An x950X3D is the most appealing SKU to me, but the cost is high. At least initially.

I’m a bit puzzled about what to get for my next upgrade. Not that I plan to switch soon. I guess I will decide when the need for an upgrade gets strong.

150%, this is what I am hoping for before they ditch the AM5 socket. I will admit I like the 7800X3D a lot less than I thought I would, and the 7950X3D is about 90% there, with just some headaches... I need to use it a bit longer, though, to decide how much I like/dislike its quirks.
 
It's just your standards that are too low; plus, you've been mentally embellishing your experience for a very long time now, and you've grown attached to it.

Well, the subject of the thread is whether AMD will fix the topology issue that the X3D Ryzen 9s have. It's naturally gonna generate a debate on whether X3D is worth it or not.

It will fix itself in the end, in this generation or future ones, as system memory true latency narrows the gap. I mean, they can and will try other things in addition to that, but just narrowing the latency divide between CPU cache and system memory is absolutely going to help a good bit, so expect DDR6, for example, to be better, or just that DDR5 kits at the tail end of the life cycle will be more ideal than the early ones. I'm sure eventually they'll have an optical connection path between CCX dies.
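
A quick back-of-the-envelope on "true latency", using the usual first-word formula, latency (ns) = CL x 2000 / MT/s; the kits below are just illustrative examples, not specific products:

def true_latency_ns(cl, mts):
    # DDR transfers twice per clock, so the real clock in MHz is MT/s / 2
    return cl * 2000 / mts

print(true_latency_ns(40, 4800))  # early DDR5-4800 CL40 -> ~16.7 ns
print(true_latency_ns(30, 6000))  # DDR5-6000 CL30      -> 10.0 ns
print(true_latency_ns(28, 8000))  # late-cycle bin (hypothetical) -> 7.0 ns

CAS is only part of the full round trip to DRAM, but the trend is the point: each step down narrows the penalty for missing the cache.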

The test seems thorough, but it's still only one game.

Having the 7700X and 7800X3D at the same min FPS with non-tuned EXPO isn't really a common thing. As the 7800X3D gains less from better RAM settings, it's obvious that the 7700X would pull ahead when starting at the same value.


It's simply not running into as many cache misses, thus better RAM has a negligible impact on an X3D chip. It'll also vary greatly from one program or test scenario to another. If it's getting enough cache misses, it should help.
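
This is just the textbook average-memory-access-time relationship; a small sketch with made-up numbers to show why a bigger L3 makes RAM tuning matter less:

def amat_ns(hit_ns, miss_rate, miss_penalty_ns):
    # Average Memory Access Time = hit time + miss rate * miss penalty
    return hit_ns + miss_rate * miss_penalty_ns

# Big stacked L3, few misses: tuning RAM from 70 ns to 60 ns barely moves AMAT.
print(amat_ns(10, 0.02, 70), amat_ns(10, 0.02, 60))  # 11.4 vs 11.2 ns
# Smaller cache, more misses: the exact same RAM tune is worth five times as much.
print(amat_ns(10, 0.10, 70), amat_ns(10, 0.10, 60))  # 17.0 vs 16.0 ns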
 
Still, the X3D gains around the same as the 7700X from RAM tuning if you BCLK OC; without BCLK, it gains less.

Without BCLK, the difference is smaller, though they should have run 6200+ at 1:1 or 7800+ at 1:2 for better results.

Still, 4% better average and 7% better 1% lows is a free upgrade :) Just raising tREFI to 65535 and lowering tRFC from the ~900 stock to around 500 for M-die and 400 for A-die accounts for 90% of the gains in RAM tuning :)
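
A rough sketch of why those two timings dominate: the share of time a rank spends refreshing instead of serving requests is roughly tRFC / tREFI (both in memory clocks). The stock tREFI here is an assumption of ~11,700 cycles (3.9 µs at a 3000 MHz memory clock); check your own board's default:

def refresh_overhead(trfc, trefi):
    # Fraction of time the DRAM rank is busy refreshing (both values in clocks)
    return trfc / trefi

print(refresh_overhead(900, 11700))  # ~7.7% at stock-ish values (assumed default tREFI)
print(refresh_overhead(500, 65535))  # ~0.8% tuned M-die
print(refresh_overhead(400, 65535))  # ~0.6% tuned A-die

Usual caveat: a maxed tREFI means fewer refreshes, which is also why it's the first thing to destabilize when the sticks run hot.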
 
My timings, I'm sure, are a bit of a not-so-optimal mess honestly :laugh: :rolleyes:, but it is what it is. I know this could use better tuning in places. The tRCD, tRP, tRAS and CR are intentionally set a bit high because I was just trying to get CL30 stable enough with the voltage. I've dropped the tREFI as well to help with stability and probably still need to drop it more. There are probably some areas I could try to tune, but I haven't asked around. Stability is way easier at CL32. I haven't tried to mess with it much in a while. The tREFI is probably still a bit high, but it's been pretty good where it is, so I'll adjust it downward further if I really need to for some reason, like BG3 stability, when I get around to playing it more.

I'm really not worried about my 14700K; it'll be fine for a long while. I have an alright memory kit, but it's nothing exceptional at this point; there is already a heap of better-binned kit options trickling out with MT/s creeping up further. There seems to be a good bit of life left in DDR5 performance kit binning, which is good news. I don't think DDR6 is arriving too imminently, and we might see a die shrink for DDR5 ICs in the meanwhile, I'm hoping.

I think DerMeowzerOC, or whatever his name is (I refuse to Google it this time to spell it correctly), did a BCLK video for Intel as well; it showed it helped a good bit with 0.1% lows, which makes sense since you're reducing latency. People have kind of made the CCX thing into a larger issue than it needs to be, but it'll be fixed in newer hardware generations regardless. It's the same with frame interpolation and latency: it will get resolved in due time, but the first generation is a bit subpar. I'm hoping we'll see post-process interpolation at some point, but that's not exactly what Nvidia did with it. In the case of lighting, I think it's perfect for faking it better, but it's a matter of how well you can calculate and blend numerous post-process configurations together. It works pretty well with ReShade to try to hodgepodge the idea, but I'm not a math and science expert or a programmer. You can likewise interpolate other post-process effects. I think in the case of carefully programmed, layered, lightweight post-process there is a real field for it to grow and be baked into hardware design. It's a lot like training AI images, but in real time, in between actual frame renders. Layering it provides a filtering effect and allows for a lot of possibilities.
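
To be clear about what interpolation means in its dumbest form, it's just a weighted blend of two frames. Real frame generation warps pixels along motion vectors rather than crossfading, but as a toy sketch (assuming numpy and placeholder frames):

import numpy as np

def interpolate(frame_a, frame_b, t):
    # Naive linear blend at time t in [0, 1]; actual frame generation uses
    # motion vectors / optical flow instead of a plain crossfade like this.
    return (1.0 - t) * frame_a + t * frame_b

a = np.zeros((1080, 1920, 3), dtype=np.float32)        # previous frame (placeholder)
b = np.full((1080, 1920, 3), 255.0, dtype=np.float32)  # next frame (placeholder)
mid = interpolate(a, b, 0.5).astype(np.uint8)          # synthesized in-between frame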
 
CL30 is just not worth it at 7000, I know, I've tried. Make it 32 and just tune everything else. But if you have a 14700K, you should be running higher frequencies as well; is your board hitting a limit at 7000?
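
For perspective, the first-word latency gap between CL30 and CL32 at 7000 MT/s is about half a nanosecond (same CL x 2000 / MT/s arithmetic as earlier in the thread):

print(30 * 2000 / 7000)  # CL30 @ 7000 MT/s -> ~8.57 ns
print(32 * 2000 / 7000)  # CL32 @ 7000 MT/s -> ~9.14 ns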
 
If it's of any interest as a reference, mine are Hynix A-die Trident Z5 6800s (model F5-6800J3445G16GX2-TZ5RK), the highest supported on the MSI Z690 QVL. @ir_cow reviewed them here at TPU. They used to be about the best of the best among the "earlier generation" DDR5 memory kits.


I spent quite some time reaching these values and testing them thoroughly; unfortunately, I could never get DDR5-6800 to operate stably on this motherboard. It just won't do it, regardless of the timings or voltage involved. I'm satisfied with my final timing set and 6400 MT/s, though. No point asking for anything more from a Z690 E-ATX creator-focused motherboard, I suppose.

 
Hah, you are limited by your mobo; I'm limited by this junk of a 12900K I have, the IMC is stuck at 7000.
 
I'm running 16GBx4, so it's probably going to differ a bit from that in what's needed, but it doesn't hurt to look into trying some of those settings and just seeing how they behave. It'll either work alright or work like **** with some of those settings. I guess my voltages on VDD/VDDQ aren't too far off the mark. I just kind of stopped at the point where the MB starts to color-code in yellow. I figured that was a good cut-off point without asking around a bit about how sketchy it is to go much further. :laugh: On a scale from 1-10, how screwed is my IMC degradation if I push this voltage higher? :rolleyes: OCing, ain't it fun! The dark science of "you go first, then we'll think about it."
 
Well, the settings I have set to 4 do not apply (since the second slot is vacant), and I reckon most of my other tertiaries would be either too tight or completely invalid for 2DPC, but some of the other "equations" I've got here you may be able to apply, such as tRAS = tCL + tRCD(RD) + 2, and the tCKE adjustment, amongst others. Trial and error, though; see the sketch below.
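
That tRAS rule of thumb as a one-liner, for anyone who wants to plug in their own numbers (the +2 is the fudge from my post above, not a JEDEC requirement; the example values are illustrative):

def tras_floor(tcl, trcd_rd):
    # tRAS = tCL + tRCD(RD) + 2, per the rule of thumb above
    return tcl + trcd_rd + 2

print(tras_floor(30, 38))  # e.g. CL30 with tRCD(RD) 38 -> tRAS 70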
 
That's memory in a nutshell: trial and error... I'll trial this, oops, error. I'll trial that, same. I'll trial this, OK, a bit better, I seem to be on the right path at least...
 