
9900X3D - Will AMD solve the split CCD issue

What thread is this? What was the topic again? I have no idea after the last couple of pages.

My CPU is better than your CPU... a "dual CCD CPUs suck" thread... or at least, that's what it has turned into.
 
@AusWolf
Except how relevant any difference is, is up to the user. I just showed there is one, and it's not that small, given it's at the same IF/RAM speed.
And when did I claim a night and day difference?
I never said you claimed anything. What I said was that RAM tuning doesn't give you a perceptible difference outside of benchmarking. How many points you get in 3DMark is a different topic altogether.

Without people tweaking RAM, how would we know about memory controllers/IF being able to go above 1800 MHz on AM4?
Does it matter, though? I guess it kind of does if you're chasing benchmark numbers which I don't care about at all.

My CPU is better than your CPU... a "dual CCD CPUs suck" thread... or at least, that's what it has turned into.
Wasn't it a "dual CCD sucks" thread to begin with? I mean, even the title mentions fixing some issues, whatever they may be.
 
I never said you claimed anything. What I said was that RAM tuning doesn't give you a perceptible difference outside of benchmarking. How many points you get in 3DMark is a different topic altogether.


Does it matter, though? I guess it kind of does if you're chasing benchmark numbers which I don't care about at all.

Depends on the CPU... with X3D parts, not so much, but on Ryzen 5000/7000 it can be above 10%. Now, a lot of that has to do with some boards just making timings up, but a tuned 3200 B-die setup on 5000 is faster than a DOCP 3800 setup with every RAM kit I have tried... It also probably helps more with an Nvidia setup, as the CPU driver overhead is a bit higher when CPU limited. With a midrange GPU it doesn't matter. I'm sure with a 7900 XTX at 1440p or below it could be worth it, but with anything slower than that, probably not, unless medium settings are your jam.
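Since the IF/MCLK 1:1 thing keeps coming up, here's a rough sketch of that arithmetic in Python (the ~1900 MHz FCLK ceiling is just a ballpark assumption on my part, not a spec; actual IF headroom varies per sample):

```python
# Rough helper for the DDR4 MCLK/FCLK relationship on AM4.
# The 1900 MHz "typical ceiling" is an assumed ballpark, not a spec.

def am4_ram_summary(ddr_rating: int, fclk_ceiling_mhz: int = 1900) -> dict:
    """DDR is double data rate, so MCLK is half the DDR rating.
    On AM4 the latency sweet spot is FCLK = UCLK = MCLK (1:1)."""
    mclk = ddr_rating // 2
    return {
        "ddr_rating": ddr_rating,
        "mclk_mhz": mclk,
        "fclk_for_1_to_1_mhz": mclk,
        "above_typical_fclk": mclk > fclk_ceiling_mhz,
    }

# DDR4-3200 tuned: FCLK 1600, easy 1:1 on basically any chip.
print(am4_ram_summary(3200))
# DDR4-3800: needs FCLK 1900 for 1:1, which not every memory controller/IF will do.
print(am4_ram_summary(3800))
```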

I used to be one of those who wanted that last 5-10% at any cost, but now idgaf; for those who do, more power to them. Just because I feel something is or isn't worth it doesn't make it so, though. For people who strive to get every ounce out of their machine, more power to them; I just throw money at it these days, too lazy to validate all that stuff, lol.

A 14900K with tuned 8000 MT/s memory would murder my 7800X3D, but that doesn't really matter to me. Still, 8000 CL38 kits really aren't that expensive for those who want to go that route.

I never said you claimed anything. What I said was that RAM tuning doesn't give you a perceptible difference outside of benchmarking. How many points you get in 3DMark is a different topic altogether.


Does it matter, though? I guess it kind of does if you're chasing benchmark numbers which I don't care about at all.


Wasn't it a "dual CCD sucks" thread to begin with? I mean, even the title mentions fixing some issues, whatever they may be.

Well, people act like it needs to be fixed, but the truth is, it's just a tradeoff for having basically different cores on two separate CCDs. The better question would have been whether dual CCD X3D chips will finally always outperform single CCD ones... I am actually hoping to see that; if the 7950X3D were even 5% faster in gaming, I would own one.

To me, given what we know, they behave as expected, except around launch, when Windows was the primary cause of a lot of the issues. Every benchmark I have seen has it slightly behind the 7800X3D/7950X3D in aggregate, with some outliers that have it either way behind or even ahead, and given that only 6 of the cores have the extra cache, that is what I would expect.
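For what it's worth, if someone would rather not trust the scheduler at all, here's a minimal sketch of pinning a game to the cache CCD manually with Python and psutil. It assumes logical CPUs 0-11 (6 cores plus SMT) are the V-cache CCD on a 7900X3D, which you'd want to verify on your own system first, and "game.exe" is just a placeholder name:

```python
# Minimal sketch: restrict a running game to the V-cache CCD instead of
# relying on Game Bar / driver-based CCD preference.
# Assumption: logical CPUs 0-11 = cache CCD on a 7900X3D (verify this!).

import psutil

VCACHE_CCD_CPUS = list(range(12))  # assumed mapping: CCD0 with V-cache

def pin_to_vcache_ccd(process_name: str) -> None:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name.lower():
            proc.cpu_affinity(VCACHE_CCD_CPUS)  # limit scheduling to those CPUs
            print(f"Pinned PID {proc.pid} to CPUs {VCACHE_CCD_CPUS}")

pin_to_vcache_ccd("game.exe")  # placeholder process name
```

Tools like Process Lasso do the same thing with a GUI, of course.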
 
That is unfortunate that you had that experience; I can't say it's been the same for me. When I went from the 5950X to the 5800X3D, my frame rates did not improve, but the 1% lows did, using a 6800 XT. When I went to the 7900X3D, I noticed 3-5 GB/s more utilization with the 7900 XT in most games, and 4K became a non-issue. Today, every game I play performs just fine; even the console ports that people whined about last year were generally a non-issue for me. I don't know if I got a unicorn chip, but mine sips power and gives me that smile compelling hardware does when you can feel it.

It's just your standards that are too low. Plus, you've been mentally embellishing your experience for a very long time now; you've grown attached to it.

What thread is this? What was the topic again? I have no idea after the last couple of pages.

Well, the subject of the question is whether AMD will fix the topology issue that the X3D Ryzen 9s have. It's naturally going to generate a debate on whether X3D is worth it or not.
 
It's just your standards that are too low. Plus, you've been mentally embellishing your experience for a very long time now; you've grown attached to it.

Nothing wrong with loving your own hardware so much it makes you blind to the truth. Also, when gaming at 4K with a midrange GPU, it would be hard to even notice anything faster on the CPU side... People make his hardware out to be worse than it actually is, and even though I mostly agree that a split 6+6 core CPU is dumb, it performs just fine and is usually really cheap because nobody else wants it either... I mean, I have seen it in the $360 range, and if you need the 12 cores, that isn't so bad, even if that need is fabricated in your own head... I don't need a 16-core chip, but if the 7950X3D were consistently faster than the 7800X3D, I'd own one. If I purchased one, I would disable the second CCD, and I didn't think an extra $200 was worth doing that, but I still almost did it anyway, lol.
 
It's just your standards that are too low. Plus, you've been mentally embellishing your experience for a very long time now; you've grown attached to it.
My standards are too low, when I'm one of the only people here who actually owns the chip that everyone is bashing? I will also say that if you think 4K 144 Hz is easy to do, then why am I using a display that is faster than yours? I swear, sometimes you make it too easy, when I see that you are still on an HDD. If you look and see that all these games are at 4K, what am I embellishing exactly? That the 7900X3D is faster than a 5900X?
 
My standards are too low, when I'm one of the only people here who actually owns the chip that everyone is bashing? I will also say that if you think 4K 144 Hz is easy to do, then why am I using a display that is faster than yours? I swear, sometimes you make it too easy, when I see that you are still on an HDD. If you look and see that all these games are at 4K, what am I embellishing exactly? That the 7900X3D is faster than a 5900X?
lol. HYPR-RX

:kookoo: :shadedshu: :banghead:
 
If this is with the sliders maxed, that's pretty insane.
I can show you the settings used, everything to the max except 720p :) This is perhaps getting a bit off-topic.
 
Depends on the CPU... with X3D parts, not so much, but on Ryzen 5000/7000 it can be above 10%. Now, a lot of that has to do with some boards just making timings up, but a tuned 3200 B-die setup on 5000 is faster than a DOCP 3800 setup with every RAM kit I have tried... It also probably helps more with an Nvidia setup, as the CPU driver overhead is a bit higher when CPU limited. With a midrange GPU it doesn't matter. I'm sure with a 7900 XTX at 1440p or below it could be worth it, but with anything slower than that, probably not, unless medium settings are your jam.
I doubt you could get 10% improvement in your FPS by RAM tuning, but even if you do, it's still nothing by my standards. If you can't get at least 30-50% better (which is only possible with a GPU upgrade in most cases), then it's not worth bothering with, imo.

A 14900K with tuned 8000 MT/s memory would murder my 7800X3D, but that doesn't really matter to me. Still, 8000 CL38 kits really aren't that expensive for those who want to go that route.
Nothing murders anything. Both the 7800X3D and 14900K are excellent CPUs for different reasons, and I don't think anyone buying either of them is unhappy with their choice. :)

Well, people act like it needs to be fixed, but the truth is, it's just a tradeoff for having basically different cores on two separate CCDs. The better question would have been whether dual CCD X3D chips will finally always outperform single CCD ones... I am actually hoping to see that; if the 7950X3D were even 5% faster in gaming, I would own one.

To me, given what we know, they behave as expected, except around launch, when Windows was the primary cause of a lot of the issues. Every benchmark I have seen has it slightly behind the 7800X3D/7950X3D in aggregate, with some outliers that have it either way behind or even ahead, and given that only 6 of the cores have the extra cache, that is what I would expect.
If someone thinks that the tradeoffs of having 2 CCDs are greater than the benefits, then they don't need more than 8 cores to begin with. Simple as.

It's just your standards that are too low. Plus, you've been mentally embellishing your experience for a very long time now; you've grown attached to it.
I couldn't notice any difference between the 7700X and 7800X3D at any RAM speed even when I still had my 7800 XT. I guess my standards are too low as well. :wtf:
 
I couldn't notice any difference between the 7700X and 7800X3D at any RAM speed even when I still had my 7800 XT. I guess my standards are too low as well. :wtf:

Don't misunderstand; time and time again, we have clarified this argument: sure, you can have a good experience, but you won't be getting the best out of that system, which is why it's not popular to begin with. The shortcomings exist and are undeniable, but he still insists on the same argumentation with fierce determination. The folks who engaged in the discussion, such as dgian and myself, have brought the benchmarks, explained the technical details, the reasons why, the mechanics, the aspects, everything that could be done in good faith to have a technical debate on this subject, and the conversation always circles back to "you don't own this hardware!", "something something my glorious 7900 XT!", babble about how Intel and Nvidia are the embodiment of evil, and other arguments that can be reduced to "you're being mean :(((". Now it's gotten to bragging about high frame rates in old games. I'm sorry, but if it can't get 168 FPS in Orcs Must Die, what's that latest-generation "Ryzen 9" good for?

It never ends. Here, I broke an old game, and I didn't even need AFMF as a crutch. Hear me roar. (kitty kitty kitty...)



Anyway, thread's officially gone off the rails. I'm done contributing, at least until it gets back on-topic.
 
Don't misunderstand; time and time again, we have clarified this argument: sure, you can have a good experience, but you won't be getting the best out of that system, which is why it's not popular to begin with. The shortcomings exist and are undeniable, but he still insists on the same argumentation with fierce determination. The folks who engaged in the discussion, such as dgian and myself, have brought the benchmarks, explained the technical details, the reasons why, the mechanics, the aspects, everything that could be done in good faith to have a technical debate on this subject, and the conversation always circles back to "you don't own this hardware!", "something something my glorious 7900 XT!", babble about how Intel and Nvidia are the embodiment of evil, and other arguments that can be reduced to "you're being mean :(((". Now it's gotten to bragging about high frame rates in old games. I'm sorry, but if it can't get 168 FPS in Orcs Must Die, what's that latest-generation "Ryzen 9" good for?

It never ends. Here, I broke an old game, and I didn't even need AFMF as a crutch. Hear me roar. (kitty kitty kitty...)
I fail to see how any of this rant is relevant to the conversation.

My point is, as it always has been, that if I get 80 FPS with 4800 MHz, and 81 FPS with 6000 MHz RAM in X game (because I play at 1440 with a mid-range GPU), then "getting the best out of my system" achieves literally absolutely nothing. Why run something at maximum power and maximum heat if it doesn't result in anything? You don't run your car's engine in 2nd gear at redline all the time, either.
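Just to put frame times on that hypothetical 80 vs 81 FPS example (illustration only; the numbers are the made-up ones above, not measurements):

```python
# Frame-time view of a 1 FPS gain at ~80 FPS.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

baseline, tuned = 80.0, 81.0
delta_ms = frame_time_ms(baseline) - frame_time_ms(tuned)
gain_pct = (tuned - baseline) / baseline * 100

print(f"{frame_time_ms(baseline):.2f} ms vs {frame_time_ms(tuned):.2f} ms per frame "
      f"(difference {delta_ms:.2f} ms, +{gain_pct:.2f}%)")
# 12.50 ms vs 12.35 ms per frame (difference 0.15 ms, +1.25%)
```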
 
I fail to see how any of this rant is relevant to the conversation.

My point is, as it always has been, that if I get 80 FPS with 4800 MHz, and 81 FPS with 6000 MHz RAM in X game (because I play at 1440 with a mid-range GPU), then "getting the best out of my system" achieves literally absolutely nothing. Why run something at maximum power and maximum heat if it doesn't result in anything? You don't run your car's engine in 2nd gear at redline all the time, either.
Well, RAM tuning is just akin to "does a faster CPU benefit you in games?". Well, no, if you are GPU bound, it doesn't, obviously. I'd still argue that you'll see a difference in 1% lows even when GPU bound, but that's beside the point; the main point is that having that RAM tuned can actually save you a CPU upgrade. Just to stay on topic, a 7700X with tuned RAM will probably be as fast as a 9700X with stock XMP, assuming the latter is around 15% faster in games stock vs. stock.
 
I fail to see how any of this rant is relevant to the conversation.

My point is, as it always has been, that if I get 80 FPS with 4800 MHz, and 81 FPS with 6000 MHz RAM in X game (because I play at 1440 with a mid-range GPU), then "getting the best out of my system" achieves literally absolutely nothing. Why run something at maximum power and maximum heat if it doesn't result in anything? You don't run your car's engine in 2nd gear at redline all the time, either.

I agree in principle (to a certain extent, of course; otherwise, I would probably have purchased a power-sipping 7800X3D and undervolted it), but I think you didn't follow the entire train of thought here. The premise posed is that the 7900X3D is a flawless product that people dog on just because they are prejudiced against it, and not because there are mountains of evidence, including multiple instances brought forward in this thread by multiple forum members, showing that this is not the case. People just make mean threads because they have hatred in their hearts. Or something.

I specifically called out the "7900X3D is not and can never be bad because of non sequitur/appeal to emotion/affirming a disjunct/false equivalence/kettle logic/moralistic fallacy/argumentum ad infinitum/no true Scotsman" line of reasoning, rather than anything actually meaningful. Besides, if power efficiency were a concern, a Ryzen 9 shouldn't be anywhere near the list of products you'd actually consider. Anyway, I'm out before we end up getting some choice words from the mods, lol.
 
I agree in principle (to a certain extent, of course; otherwise, I would probably have purchased a power-sipping 7800X3D and undervolted it), but I think you didn't follow the entire train of thought here. The premise posed is that the 7900X3D is a flawless product that people dog on just because they are prejudiced against it, and not because there are mountains of evidence, including multiple instances brought forward in this thread by multiple forum members, showing that this is not the case. People just make mean threads because they have hatred in their hearts. Or something.

I specifically called out the "7900X3D is not and can never be bad because of non sequitur/appeal to emotion/affirming a disjunct/false equivalence/kettle logic/moralistic fallacy/argumentum ad infinitum/no true Scotsman" line of reasoning, rather than anything actually meaningful. Besides, if power efficiency were a concern, a Ryzen 9 shouldn't be anywhere near the list of products you'd actually consider. Anyway, I'm out before we end up getting some choice words from the mods, lol.
With that, I agree. The 7900X3D is not a perfect CPU; nothing is. But my point is that if one can't live with the inter-core and inter-cache latencies, then they shouldn't be looking at a 6+6-core CPU anyway.
 

@AusWolf

Here you can see some difference :)
7700x / 7800x3D / 13900KF with tuned RAM and 4090 in Star Citizen:
 

@AusWolf

Here you can see some difference :)
7700x / 7800x3D / 13900KF with tuned RAM and 4090 in Star Citizen:
Very interesting video. It's really eye-opening that the X3D gets a high average FPS in reviews due to hitting very high max framerates (which are useless). It's apparently very fast in non-demanding scenes, and not that fast in more demanding ones.
 

@AusWolf

Here you can see some difference :)
7700x / 7800x3D / 13900KF with tuned RAM and 4090 in Star Citizen:
Yeah, the difference between 6000 MHz EXPO and 6400 MHz tuned is 10 FPS with a 7700X... and a 4090. That's what I said earlier, you need a 4090 and an FPS counter on screen to see these differences.
 
Yeah, the difference between 6000 MHz EXPO and 6400 MHz tuned is 10 FPS with a 7700X... and a 4090. That's what I said earlier, you need a 4090 and an FPS counter on screen to see these differences.
In this situation, those 10 FPS are 15%, which is not a small gain.
But there are also better 1% low FPS figures, which will result in flawless gameplay and faster work in Windows overall :)
 
Yeah, the difference between 6000 MHz EXPO and 6400 MHz tuned is 10 FPS with a 7700X... and a 4090. That's what I said earlier, you need a 4090 and an FPS counter on screen to see these differences.
His 4090 is dropping to as low as 70% though, with max settings. So even a 4070 with high instead of ultra settings would probably net you similar results. Also, the FPS itself is irrelevant; the % difference is what matters. You get 15%, which is basically a generational upgrade. Totally not bad.
 
The test seems thorough, but it's still only one game.

Having the 7700X and the 7800X3D at the same minimum FPS with non-tuned EXPO isn't really a common thing. As the 7800X3D gains less from better RAM settings, it's obvious that the 7700X would pull ahead when starting at the same value.

 
In this situation, those 10 FPS are 15%, which is not a small gain.
But there are also better 1% low FPS figures, which will result in flawless gameplay and faster work in Windows overall :)
His 4090 is dropping to as low as 70% though, with max settings. So even a 4070 with high instead of ultra settings would probably net you similar results. Also, the FPS itself is irrelevant; the % difference is what matters. You get 15%, which is basically a generational upgrade. Totally not bad.
You can look at it that way. Personally, I don't play at a percentage of x - I play at x FPS.
 
This conversation so far makes me feel pretty good about pairing my future 9950X / 9900X3D with ECC RAM and an RX 6700 XT. I won't have to worry about any of these high-performance issues.
 
This conversation so far makes me feel pretty good about pairing my future 9950X / 9900X3D with ECC RAM and an RX 6700 XT. I won't have to worry about any of these high-performance issues.
Well you will, at some point. Having a CPU that's 15-20% faster means you will get to keep it for a prolonged amount of time. 15% faster is 15% faster at the end of the day, and we are somehow arguing whether that's a good thing...?
 
Well you will, at some point. Having a CPU that's 15-20% faster means you will get to keep it for a prolonged amount of time. 15% faster is 15% faster at the end of the day, and we are somehow arguing whether that's a good thing...?
One thing is for sure: we're not discussing the topic, the 9900X3D, because we don't know anything about it, lol. The OP, in turn, is about the 6 3D cores, while the title is about having two CCDs...

Does the 9900X3D have the same proportionally lower clock speed? Is the V-cache a carbon copy, or is it improved? L1 and L2 speed is improved, though, but that's not 3D-specific.
 