
9900X3D - Will AMD solve the split CCD issue

A perfect example is the 5800X and 5950X. Both have the same 105 W TDP and 142 W PPT. Just look at the scores between those two.


100%. My 5800X is always slower than my 5950X down to 65 W (never tested below that). In gaming they are similar regardless, though.
 
If they improve the scheduler, some of us would like our gaming cake and our productivity sandwiches without the trade-offs.
 
100%. My 5800X is always slower than my 5950X down to 65 W (never tested below that). In gaming they are similar regardless, though.
Same for me.
 
100%. My 5800X is always slower than my 5950X down to 65 W (never tested below that). In gaming they are similar regardless, though.
I remember writing a post on what AM4 meant; basically, the 5950X is the snappiest of the bunch and the 5900X is just butter-smooth goodness. I used a 5800X3D and was merely whelmed, with the exception of the 1% lows in daily use vs those chips.
 
I remember writing a post on what AM4 meant; basically, the 5950X is the snappiest of the bunch and the 5900X is just butter-smooth goodness. I used a 5800X3D and was merely whelmed, with the exception of the 1% lows in daily use vs those chips.
I can only tell them apart in benchmarks; using them blindly, they feel identical.

Same ram speed and timings
Same storage
Different motherboards
If anything the 5800X system is slightly snappier, but that's probably placebo.

The only major difference is compression and decompression with the 5950X.
 
I can only tell them apart in benchmarks; using them blindly, they feel identical.

Same ram speed and timings
Same storage
Different motherboards
If anything the 5800X system is slightly snappier, but that's probably placebo.

The only major difference is compression and decompression with the 5950X.
I guess I spend more time than I should using my PC. I will admit that my 8-core experience was the 5800X3D, and that is a limited chip to begin with, so you might be right: 5 GHz is 5 GHz. The 5900X does feel different to me than the 5950X, though.
 
That's just a misconception people have. The more limited your cooling is, the bigger the gap between smaller and bigger CPUs. Bigger CPUs can push more watts with your limited cooling than the smaller CPUs, and the gap in MT just widens.

Achieving higher efficiency per core with high-core-count CPUs is beneficial for multi-threaded performance, but intentionally limiting a part under 'stricter' cooling limits (~125 W) at the cost of significant performance is unproductive. You just don't buy a pricier 7-seater car to do the job of a 5-seater. A more effective suggestion would be buying a 14900K and limiting cooling/undervolting just enough to preserve the CPU's MT execution, or even outstrip stock MT performance. Restricting to 125 W is great for forming efficiency/performance comparisons, but in real-world use, to get your money's worth, you're just crippling the 14900K.
 
True, being restricted by the tower is different, but is there really any tower that is restricted to 100 watts? That completely takes away any notion of using a GPU at all.

Yes, the 14900K is definitely faster than the 14600K at the same power. You need to drop a 14900K to 65 W to make it slower than a fully unlocked 14600K. Obviously, when you limit the 14600K to 65 W as well, it will still be slower.
I wasn't thinking about a tower. I was thinking small form factor / low profile. Other than that, fair enough.
 
I have a 7900X. I am thinking of getting the 9950X. Make me change my mind.
I'd at least wait for some 3rd-party reviews, like from TPU, GN, or HU, first, so you'll have an idea whether you're going to pass the magical 30% mark in uplift. Right now $499 is super appealing for the 7950X3D, but I've got to save the cash for the 9950X or EPYC (AM5); I haven't decided yet which to go for. It would be great if TPU, GN, or HU would review these EPYC chips for AM5. I'd really like that performance with certified ECC support and not settle for the guessing game, but it also depends on whether it's supported on consumer motherboards that have better I/O options, and not just entry-level server boards.
 
I guess I spend more time than I should using my PC. I will admit that my 8-core experience was the 5800X3D, and that is a limited chip to begin with, so you might be right: 5 GHz is 5 GHz. The 5900X does feel different to me than the 5950X, though.

A 4090, an OLED screen, or even a nice chair makes a much larger difference on a gaming PC than which CPU is inside... at least among CPUs from the same generation. I used a 5800X3D for a couple of weeks; even with a 4070, it's a night-and-day better experience than a 5950X if you're mostly gaming.

It's what prompted me to jump to a 7000-series X3D; the 7950X3D left a bad taste in my mouth after just one week of use.

Core-count differences are usually placebo, except on Intel, where I can tell the difference between their 6-core and 8-core CPUs because the wider CPUs have more consistent 1% lows. It's the opposite with Ryzen, with the single-CCD chips being a lot smoother in some games above 200 fps.

Likely why the dual CCD chips lose to the 7800X3D.
 
It's what prompted me to jump to a 7000-series X3D; the 7950X3D left a bad taste in my mouth after just one week of use.
Can you elaborate more on the bad experience with the 7950X3D? What was the major issue, or issues?
 
Can you elaborate more on the bad experience with the 7950X3D? What was the major issue, or issues?

It was the second week after launch, so Windows didn't want to cooperate in some games and you had to disable the 2nd CCD. You also had to jump through hoops or use a third-party program like Process Lasso, which honestly shouldn't be a thing. It's basically the same reason I'm not a fan of E-cores, although the 12th and 13th gen systems I've worked on had fewer issues with those... Never had to do that once on my 5950X, btw, so it was mostly scheduler issues, with the two CCDs basically having different cores.

It's been a little better more recently. Honestly, to me it just felt like a beta product, and it actually almost made me skip the 7000 series for my main rig. It wasn't till I worked on a bunch of 5800X3D-based systems and had one for a couple of weeks that I changed my mind... Getting a free board didn't hurt either, though, lol. Had I gotten a similar free Z790 board, I would have gotten a 13900K instead. Why not a 14900K? The price: it's not hard to get the 13th-gen flagship for $380 vs the $500s for the 14th-gen part... Part of me still wanted to grab the 7950X3D regardless; it's mostly a good product now at its current price, with Windows just getting confused sometimes.
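For anyone curious, the CCD-pinning workaround described above can also be done without third-party tools via cmd.exe's `start /affinity`, which takes a hex bitmask of logical processors. Here's a rough sketch of computing that mask, assuming SMT is enabled and the V-Cache CCD maps to logical CPUs 0-15 (the usual default layout on a 7950X3D; check yours in Task Manager first):

```python
# Affinity mask covering the first `cores` physical cores (CCD0).
# Assumes SMT is enabled (2 logical processors per core) and that
# CCD0's logical processors are numbered 0..(2*cores - 1).
def ccd_mask(cores: int = 8, smt: bool = True) -> int:
    logical = cores * (2 if smt else 1)
    return (1 << logical) - 1  # one bit set per logical processor

# cmd.exe wants the mask in hex, e.g.: start /affinity FFFF game.exe
print(f"{ccd_mask():X}")  # FFFF -> logical CPUs 0-15
```

So `start /affinity FFFF game.exe` would keep the game on the cache CCD; Process Lasso does essentially the same thing, just persistently.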
 
A 4090, an OLED screen, or even a nice chair makes a much larger difference on a gaming PC than which CPU is inside... at least among CPUs from the same generation. I used a 5800X3D for a couple of weeks; even with a 4070, it's a night-and-day better experience than a 5950X if you're mostly gaming.

It's what prompted me to jump to a 7000-series X3D; the 7950X3D left a bad taste in my mouth after just one week of use.

Core-count differences are usually placebo, except on Intel, where I can tell the difference between their 6-core and 8-core CPUs because the wider CPUs have more consistent 1% lows. It's the opposite with Ryzen, with the single-CCD chips being a lot smoother in some games above 200 fps.

Likely why the dual CCD chips lose to the 7800X3D.
It's unfortunate that you had that experience; I can't say it's been the same for me. When I went from the 5950X to the 5800X3D my frame rates did not improve, but the 1% lows improved using a 6800XT. When I went to the 7900X3D I noticed 3-5 GB/s more usage with the 7900XT in most games, and 4K became a non-issue. Today every game I play performs just fine; even the console ports that people whined about last year were generally a non-issue for me. I don't know if I got a unicorn chip, but my chip sips power and gives me that smile compelling hardware gives when you can feel it.
 
It's unfortunate that you had that experience; I can't say it's been the same for me. When I went from the 5950X to the 5800X3D my frame rates did not improve, but the 1% lows improved using a 6800XT. When I went to the 7900X3D I noticed 3-5 GB/s more usage with the 7900XT in most games, and 4K became a non-issue. Today every game I play performs just fine; even the console ports that people whined about last year were generally a non-issue for me. I don't know if I got a unicorn chip, but my chip sips power and gives me that smile compelling hardware gives when you can feel it.

I'd have a hard time with a 7900XT at 4K; at 1440p UW, a 4090 is barely adequate for my use case.
 
I'd have a hard time with a 7900XT at 4K; at 1440p UW, a 4090 is barely adequate for my use case.
Interesting; I have been using the Gigabyte FV43U and it has been great.
 
Interesting; I have been using the Gigabyte FV43U and it has been great.

I hate FSR, and it's what, 40% slower than what's in my current main rig? That's a massive drop, so I'd have to turn down way too many settings and basically make it look like a console game to get a similar framerate. Also, in some games I enjoy (Cyberpunk with path tracing, Witcher 3 next-gen) the 7900XT is less than half as fast...

Like I said for my use case.

Yeah, just checked: there's a 40% difference in minimum framerates (which matter the most to me) between the cards in the latest GPU review, and that's just general gaming; in RT the difference is massive.

Not going down to a 60-ish fps experience, lol, or even less in the games I enjoy.
 
I'd have a hard time with a 7900XT at 4k at 1440p UW a 4090 is barely adequate for my use case.

Damn, I have a 4080 Super and have no issues at all at 4K. What problems are you having at 1440p with a 4090? Let me guess, Cyberpunk?
 
Damn, I have a 4080 Super and have no issues at all at 4K. What problems are you having at 1440p with a 4090? Let me guess, Cyberpunk?

I like to hit 175 fps or so without frame generation; that's the max refresh rate of my OLED.

And even though I don't mind DLSS, I much prefer DLAA; I only use DLSS when I have no other choice, and I would not go below Quality mode even at 4K.

A lot of more recent games aren't anywhere near that (AW2, UE5 games), and I expect that to be more and more the case.

I also want to be able to use path tracing in games that support it; very much looking forward to Half-Life 2.

I have a 3080 Ti in my secondary PC for standard 1440p; it's mostly fine, but I don't use RT with it at all and it needs DLSS.

Again, as I stated, this is for my use case. Some people are happy at 1080p low settings at 30 fps; more power to them.

My only point in bringing it up is that a GPU makes a much larger difference in gaming than different CPUs of the same generation, not that a 4090 is all anyone needs. I'd much rather have a 7600 or 13400 and a 4080 than a 7950X3D/14900K and a 4070 if given the choice, but everyone has to decide what minimum level of performance is OK for them with their own hard-earned money.

The 7900XT/7900X3D combination the person I was responding to has is great, just not for me, and that really has zero bearing on how much a person should enjoy their hardware or how good or bad it is.
 
There are a few design differences, like the increased latency added by moving from monolithic to multi-die, but the added cache outweighs this quite well. A recent HWUB video showed that adding a lot of cache to Raptor Lake would not yield the same benefits as adding it to Zen 3 and 4. Intel's approach of brute-forcing insane power consumption to "win" has not been to their benefit lately, with the recent issues with new BIOSes, unstable CPUs, board partners not happy, etc.
HUB's video does nothing of the sort, to be fair. They aren't testing higher-cache chips because those don't exist. But still, in their comparison there are games that show a ~10% increase going from 24 MB to 36 MB of cache. Now imagine tripling that cache like AMD has with the X3D vs non-X3D variants. He should have tested games where the 3D chips do better than their non-3D variants; instead he just decided to compare games at random.

You don't need clock-frequency brute-forcing with Intel either; you need to tune the RAM to negate the lack of a huge cache. I've shown that tuning the RAM while dropping clock speeds and power draw to half will still give you a 20% performance bump over running stock + XMP. I've tested multiple games, as you've seen on my channel, and it's just a reality that even an old 12900K at stock with tuned RAM is on par with or faster than the 7800X3D. Granted, most people don't know how or don't want to mess with their RAM; in that case the X3D chips are a match made in heaven.
 
HUB's video does nothing of the sort, to be fair. They aren't testing higher-cache chips because those don't exist. But still, in their comparison there are games that show a ~10% increase going from 24 MB to 36 MB of cache. Now imagine tripling that cache like AMD has with the X3D vs non-X3D variants. He should have tested games where the 3D chips do better than their non-3D variants; instead he just decided to compare games at random.

You don't need clock-frequency brute-forcing with Intel either; you need to tune the RAM to negate the lack of a huge cache. I've shown that tuning the RAM while dropping clock speeds and power draw to half will still give you a 20% performance bump over running stock + XMP. I've tested multiple games, as you've seen on my channel, and it's just a reality that even an old 12900K at stock with tuned RAM is on par with or faster than the 7800X3D. Granted, most people don't know how or don't want to mess with their RAM; in that case the X3D chips are a match made in heaven.
HU tests at stock, so they can't show anything beyond that. In some games they test custom scenes, so we cannot repeat their tests; we can only "trust" them.

RAM tuning helps on all CPUs, on 7800x3D too.
 
HU tests at stock, so they can't show anything beyond that. In some games they test custom scenes, so we cannot repeat their tests; we can only "trust" them.

RAM tuning helps on all CPUs, on 7800x3D too.
I know RAM helps on all CPUs, but you only get single-digit gains on the 3D chips; they have much less need for it. An i5 with tuned RAM will smack some sense into an XMP 13900K. Now, I don't know how Zen 4 scales with RAM, but Zen 3 got some huge 30% uplifts.
 
If you have a 4090. Otherwise, it's a waste of effort.
Well, debatable. People usually have an fps target, so regardless of their GPU they will try to hit that target by dropping settings, using DLSS, etc. Also, a 4070 at 1440p is about as fast as a 4090 at 4K. And of course there are games that are completely CPU-bound; check Kingdom Come: Deliverance, where even in the starting village you'll drop to 50 fps on a 5800X3D.
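The 4070-at-1440p vs 4090-at-4K comparison is roughly a pixel-count argument; a quick back-of-the-envelope check (pure pixel arithmetic, ignoring that real GPU scaling varies by game and settings):

```python
# Pixels per frame at each resolution (illustrative arithmetic only;
# actual performance scaling depends on the game and settings)
res = {"1440p": 2560 * 1440, "4K": 3840 * 2160}
ratio = res["4K"] / res["1440p"]
print(f"4K renders {ratio:.2f}x the pixels of 1440p")  # 2.25x
```

So 4K pushes 2.25x the pixels of 1440p, which is in the same ballpark as the raw performance gap between those two cards.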
 