AMD Shows First Ryzen 7 7800X3D Game Benchmarks, Up To 24% Faster Than Core i9-13900K

Precisely why I think arguing about which CPU is better for gaming is pointless, in my honest opinion. In real-world usage, it'll be difficult to notice the differences between the top-performing gaming CPUs with a 4090, and the differences will be even less noticeable with the GPUs that people are more likely to use (3080s and 6800/6900 XTs).
There are people pairing i7-4790Ks with 3080s. Platform longevity is something to look out for.
 
I think you are underestimating the efficiency of the 3D cache. Even without the power binning a top-end laptop SKU goes through, or the power optimizations, you have the 7950X3D, which could feasibly match a laptop chip in power efficiency. I can think of many regions in the world where either electricity prices or the climate heavily incentivize a 7950X3D or 7800X3D purchase, aside from its class-leading performance.
The importance of efficiency is coming back like never before, and it is here to stay in the wake of everything that is happening with electricity prices, economic crises, local wars and fights for resources, as well as climate change globally. 'Intel 4' is predicted to be 50% more efficient, and that's the way it should be. Intel is under increasing pressure to deliver a more efficient line of products as laws in many countries are tightening up the energy consumption agenda.
 
That's a completely different issue that's debatable. What's not debatable is that disabling E-cores on the current CPUs loses you performance in plenty of games. There is a game here and there (I haven't had any myself) that may work better with E-cores off, but if you find such a game, you can disable the E-cores with the press of a button from within Windows, on the fly.

"With its E-cores disabled, the i9-13900K is a negligible 0.1% slower than the stock i9-13900K (E-cores enabled)"


At the end of the day, whether you disable or enable E-cores, your performance on average ends up being about the same. You seem to imply it's a rarity here, which isn't true according to TPU's data.

The other poster is likely correct: if E-cores only provide an average advantage of 0.1% in games, it would be very easy for a hypothetical CPU with more P-cores to outperform them. 0.1% is well within the margin of error; it's possible that another benchmark could show the 13900K with E-cores disabled beating a fully enabled 13900K simply through random benchmark variance.
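
To put a number on that margin-of-error point, here's a minimal simulation in Python. The 0.1% gap is TPU's figure, but the 200 fps baseline and the ~1% run-to-run noise are assumptions of mine, not measured values:

[CODE]
import random

# Assumed figures: true 0.1% performance gap, ~1% run-to-run noise.
TRUE_FPS_ON = 200.0                  # hypothetical average, E-cores enabled
TRUE_FPS_OFF = TRUE_FPS_ON * 0.999   # 0.1% slower with E-cores disabled
NOISE_SD = 0.01                      # 1% standard deviation per run

def bench(true_fps: float) -> float:
    """One benchmark run: true performance plus random variance."""
    return true_fps * random.gauss(1.0, NOISE_SD)

random.seed(42)
RUNS = 10_000
wins = sum(bench(TRUE_FPS_OFF) > bench(TRUE_FPS_ON) for _ in range(RUNS))
print(f"E-cores-off 'wins' {wins / RUNS:.0%} of single-run comparisons")
[/CODE]

Under those assumptions the nominally slower config wins roughly 47% of individual comparisons, i.e. any single benchmark run really could show E-cores-off ahead purely through variance.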
 
"With its E-cores disabled, the i9-13900K is a negligible 0.1% slower than the stock i9-13900K (E-cores enabled)"


At the end of the day, whether you disable or enable E-cores, your performance on average ends up being about the same. You seem to imply it's a rarity here, which isn't true according to TPU's data.

The other poster is likely correct: if E-cores only provide an average advantage of 0.1% in games, it would be very easy for a hypothetical CPU with more P-cores to outperform them. 0.1% is well within the margin of error; it's possible that another benchmark could show the 13900K with E-cores disabled beating a fully enabled 13900K simply through random benchmark variance.
Let's make a 13900P that lacks E-cores completely. And on top of that, a 13950KPS that adds 4 P-cores in the space of the missing E-cores. Still, for app use, I think that 16 E-cores will do better than 4 P-cores, considering you already have 8 P-cores anyway.
:pimp:
 
The problem is that whether or not you benefit from the E-cores, you end up paying for them: they are reflected in the price of the CPU.
PS
Even if you turn them off after building your computer and never, ever use them, Intel will never give you back the money you paid for them.
 
"With its E-cores disabled, the i9-13900K is a negligible 0.1% slower than the stock i9-13900K (E-cores enabled)"


At the end of the day, whether you disable or enable E-cores, your performance on average ends up being about the same. You seem to imply it's a rarity here, which isn't true according to TPU's data.

The other poster is likely correct: if E-cores only provide an average advantage of 0.1% in games, it would be very easy for a hypothetical CPU with more P-cores to outperform them. 0.1% is well within the margin of error; it's possible that another benchmark could show the 13900K with E-cores disabled beating a fully enabled 13900K simply through random benchmark variance.

On 12th gen you achieve a way higher ring frequency without E-cores, which already outperforms the "0.1%". :) I just can't be bothered to post too deeply about different scenarios, and at the same time reviewers don't do it either.

A 12700K @ 1.20V, all-core 4900 MHz, without E-cores and a pumped-up ring frequency + DDR4-4000 C16 flat, gives almost 22k in CB R23 at around 175W. But wattage varies a little bit, depending on the unit's leakage current. The sticks I used were 2x16GB from a G.Skill 128GB kit, spec'd 3200 C15 flat.

If I remember correctly, pumping up the ring frequency gave something around 500 points in CB R23.
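
For context, a rough percentage check on those numbers (both figures are from the post above, not re-measured):

[CODE]
# Figures quoted in the post, not re-measured here.
score_with_ring_oc = 22_000   # approx. CB R23 multi, tuned 12700K, E-cores off
ring_oc_gain = 500            # approx. points attributed to the higher ring clock

uplift = ring_oc_gain / (score_with_ring_oc - ring_oc_gain)
print(f"Ring OC uplift: ~{uplift:.1%}")   # ~2.3%, far above the 0.1% average
[/CODE]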
 
My question is: what benefit does liquid cooling offer on this? What RAM tuning does it offer to make it a long-lived, high-performance system? What about a lapped die or exposed die?

If you're talking about a long-lived, high-performance system, then the CPU is much less important than the socket. A new CPU will go into an AM5 socket four years from now. For this reason, although I am unimpressed with the 7000-series CPUs, I'm buying an AMD mobo. If Intel supported its socket types for as long as AMD supported AM4, I'd stay with Intel without any hesitation.
 
If you're talking about a long-lived, high-performance system, then the CPU is much less important than the socket. A new CPU will go into an AM5 socket four years from now. For this reason, although I am unimpressed with the 7000-series CPUs, I'm buying an AMD mobo. If Intel supported its socket types for as long as AMD supported AM4, I'd stay with Intel without any hesitation.

I don't mind buying new mobos and getting new features. I would not pair a 5800X3D with X370/B350 just because it is possible. :D In theory it is a nice idea, but in practice it's a bit of a different story.

Also, those 500 USD/EUR mobos don't give you much for home usage. I am not making a welding device, so a 14x 55A VRM is enough. No need for 24x 105A, lol, just a marketing trick after all.
 
up to 999% <> reality -1-5%

Just disable the CCD without the 3DNow! cache on the 7950X3D and bench it out. :sleep:
Woohoo and amen again to these AMD-style "up to" jokes. April Fools' Day came a bit early this year. :lovetpu:


It would be great if Intel dropped these E-core gimmicks. Just plain P-cores. I have found zero use for those notepad-tier CPU cores; they're disabled in BIOS already.
Also, an interesting note: if you disable the E-cores, you automatically get a higher ring clock and more cache to use.
 
It is obviously impossible to dispute any claim of "up to," but W1zzard's testing has the 7950X3D (with or without the second CCD disabled) barely edging out the 13900K on average. There's no reason to expect that the 7800X3D will do substantially better.

This CPU will perhaps be the best option for what you might call the zero-compromise gamer; it's definitely priced better than the 13900K, but that's not saying much. IMO any "gaming" CPU over about $250 is silly. This was a problem for the 5800X3D too, which enjoyed a lot of good press at launch, but the love fest really only went into turbo mode later on, when A) the price went way down, and B) AM5's high platform costs made any drop-in upgrade for AM4 owners look that much better by comparison. The 5800X3D is still sitting pretty, in fact, as are various lower-end Intel options.
Well put
The only thing going for the 7800X3D is that it has to push fewer cores, so it might just have better power/frequency curves. Even 200 MHz, I think, is already optimistic though.

Still, the 'up to' is believable, given that the X3Ds excel in exactly those situations where the cache alleviates BAD fps. Nobody cares about 24% on top of 144 fps, but 24% on top of 40-50, or sub-30, is a game changer. Simulators, 4X, etc. all benefit massively where it matters most. This is also where all that extra cache for E-cores doesn't extract the same benefits; the special X3D sauce is the perfect catch-all for heavy single-threaded gaming scenarios.
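
A quick frame-time calculation shows why the same percentage matters more at low fps (the fps values are the illustrative ones from the paragraph above, with 45 standing in for the 40-50 range):

[CODE]
# Same +24%, very different absolute frame-time savings.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (144.0, 45.0):
    boosted = fps * 1.24
    saved = frame_time_ms(fps) - frame_time_ms(boosted)
    print(f"{fps:5.0f} fps -> {boosted:5.1f} fps, saves {saved:.1f} ms per frame")

# 144 -> 178.6 fps saves ~1.3 ms/frame; 45 -> 55.8 fps saves ~4.3 ms/frame,
# which is the difference between choppy and playable.
[/CODE]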

For me it's really only a question of when, not if, I'll get some model of X3D. And the stellar power consumption while gaming is really just a bonus on top: half the TDP compared to Intel's 13th gen is immense.

Precisely why I think arguing about which CPU is better for gaming is pointless, in my honest opinion. In real-world usage, it'll be difficult to notice the differences between the top-performing gaming CPUs with a 4090, and the differences will be even less noticeable with the GPUs that people are more likely to use (3080s and 6800/6900 XTs).
Well... it was exactly a top-end recent CPU that gave the 4090 its wings compared to the initial W1zzard review with a somewhat slower, non-X3D Ryzen.

Especially with lots of GPU power, the impact of a fast CPU is worthwhile. It was, is, and always will be about having a balanced system between CPU and GPU. That said, the baseline of CPUs is certainly more than sufficient for lower-tier GPUs, and also becomes more than sufficient once you push 4K resolution. But if you run 1440p on a fast GPU... damn right you'll see the benefits of more CPU.
 
My aging 9900 might get retired depending on this one's numbers in WoW; this looks pretty good.
 
My aging 9900 might get retired depending on this one's numbers in WoW; this looks pretty good.

Well, the 5800X3D was a 35% increase over the 5800X in WoW, so it's likely the numbers in that game will be very good for the 7800X3D. MMOs in general seem to really benefit.
 
The E-cores aren't terrible; they are equivalent to Skylake cores, maybe a little faster than Skylake.

E-cores lack native 256-bit AVX2 hardware, i.e. they have three-port 128-bit SIMD hardware that supports AVX2 by splitting 256-bit operations.


Core i9-13900K: 16 extra E-cores (3 × 128-bit AVX × 16 cores = 6,144 bits)

vs

Ryzen 9 7950X/7950X3D: 8 extra "fat" Zen 4 cores (4 × 256-bit AVX × 8 cores = 8,192 bits)


----
Intel Golden Cove P-core block diagram (similar to the Raptor Cove P-cores).
Three-port 256-bit AVX hardware: two ports of 256-bit FMA/ALU + one port of 256-bit FADD/ALU.


AMD Zen 4 block diagram.
Four-port 256-bit AVX hardware: two ports of 256-bit FMA/ALU + two ports of 256-bit FADD/ALU. Zen 4 executes AVX-512 by splitting it into 256-bit operations (when enabled), ending up similar to Golden Cove's 256-bit setup.

Most current-gen games are designed around the game consoles' Zen 2 AVX2 assumptions.
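
Written out, the aggregate SIMD-width arithmetic from the comparison above looks like this (theoretical bits per clock only; it ignores clock speeds, instruction mix, and ports shared with other operations):

[CODE]
# Theoretical aggregate SIMD width per clock, as in the post's arithmetic.
def simd_bits_per_clock(ports: int, width_bits: int, cores: int) -> int:
    return ports * width_bits * cores

ecores = simd_bits_per_clock(ports=3, width_bits=128, cores=16)  # 16 Gracemont E-cores
zen4 = simd_bits_per_clock(ports=4, width_bits=256, cores=8)     # 8 "fat" Zen 4 cores

print(f"16 E-cores:    {ecores} bits/clock")  # 6,144
print(f"8 Zen 4 cores: {zen4} bits/clock")    # 8,192
[/CODE]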
 
Well, the 5800X3D was a 35% increase over the 5800X in WoW, so it's likely the numbers in that game will be very good for the 7800X3D. MMOs in general seem to really benefit.
That was an out-of-the-box install, I imagine. The problem with WoW is that you also "need" to run addons, at least if you want to be competitive or have an easier time seeing what's happening. The new expansion has already incorporated quite a lot of QoL addons into the core, but you still need something to track numbers, dungeon timers, and custom alerts, and those addons are heavy.

Still, I get like 60 fps in Valdrakken at 4K without doing anything; that has been pretty much the heaviest part of the game for me. In raids I get 90 fps.
 
Until I see REAL data that has been tested, and a CPU that has been reviewed, I will not listen to any of the marketing speak and talking heads from AMD or any other company in any sector of the economy.

For the past several years we have been getting nothing but "technical truths" and "blatant lies" just to sell product.

But unless the overall component prices come down (YES, Ngreedia; yes, AMD, with your AMD tax on motherboards), this gets a hard pass.
 
That was an out-of-the-box install, I imagine. The problem with WoW is that you also "need" to run addons, at least if you want to be competitive or have an easier time seeing what's happening. The new expansion has already incorporated quite a lot of QoL addons into the core, but you still need something to track numbers, dungeon timers, and custom alerts, and those addons are heavy.

Still, I get like 60 fps in Valdrakken at 4K without doing anything; that has been pretty much the heaviest part of the game for me. In raids I get 90 fps.

Correct, it was at stock without addons. It would be interesting to test how addons impact the baseline performance of X3D chips vs non-X3D chips from Intel and AMD.
 
Hmm, 24% faster? I'm not buying that. We'll see with actual test results.
I think TPU's own benchmark already confirmed that 20+% faster is possible. But again, this is an "up to" scenario, and I don't think people should get too hung up on the high percentage, since these are mostly edge cases. It is possible for Intel to have an "up to" double-digit-percent-faster scenario too. At the end of the day, buy what suits your budget and needs.
 
If that's the best Zen 4 3D can do, it is 'nice', but nothing special to talk about. The great power efficiency is offset by the high platform cost. In the high end for apps you have real competition; everything else is in Intel's hands. The total cost is still too high with AMD for most, and the lower you go in tier, the more pronounced it is.
We can just hope that Zen 4 is ironing out all the problems of the new platform so Zen 5 will see a smooth launch.
It's 25-50% faster than the same CPU without the 3D cache and 25% faster than the power-hungry Intel equivalent. How is anything you said close to reality here?

The Intel price savings aren't as large as you claim when the AMD side uses the same RAM (and honestly, it supports lower maximums, so it's usually cheaper) and requires far less cooling; the 3D chips use a lot less power and produce a lot less heat while gaming.

The slower Intel setup consumes double the power and outputs double the heat at every step, going up to 3.5x the consumption vs the emulated 7800X3D. That's not something to be overlooked in your 'total cost' analysis.

I'm not seeing any major differences in prices here, when the 7800X3D is going to offer the same gaming performance as the 7950X3D, as well as work on budget motherboards (unlike on the Intel side, which requires a Z-series motherboard that must sustain those wattages).

If I limit myself to Z790 boards (because anything less reduces the performance of the Intel K CPUs), the AMD boards are actually cheaper, starting at AU$300 vs AU$350 on the Intel side.
 
Well, I had a 5800X3D and I do not miss it (just sold it on Saturday). I love how people are ragging on the 7900X3D. At CA$799 vs CA$1,100 for the 7950X3D, saving $400 is worth it. In every way the 7900X3D is better and faster than the 5800X3D. One thing that is not important but relevant: once again I am happy to run benchmarks in 3DMark, as my 12 cores make a difference at 87 watts.

The only reason I feel the chip is maligned is that no samples were sent out for review. The truth is that (for now) PC parts are expensive. These X3D parts are exemplary in the tangible bump in gaming performance you get. For anyone who is on AMD and on the fence: you will pay for it, but the jump in clock speed and V-cache is noticeable.

I expect the 7800X3D will be the best-selling chip for AMD in 2023, as DDR5 is only about 3% more expensive than DDR4 in 32GB sets. The motherboards are expensive if you want flexibility, but if you just want to build an X670 gaming PC, the new Tomahawk board from MSI is looking pretty good. I also expect board prices to come down from where they are. As an example, there was a time when the cheapest X399 board cost less than the most expensive X470 boards, despite costing more to build given the amount of I/O.

Now we have the reverse of TRX40, where the boards were (relatively) affordable but the CPUs were eye-watering. Now the MSI X670E Ace Max is double the price of the X570S Ace Max, and the Carbon is actually the "Ace" of this generation in terms of I/O. My only gripe is that the CPU is happy to sit at its 86°C limit when working hard.
 
What high platform cost? A motherboard is $140. The best 5600 C28 RAM is $130, for 32GB of it!
Yeah, I think a lot of naysayers are just repeating the same "cons" from launch without ever having checked whether prices have changed.

If that's the best Zen 4 3D can do, it is 'nice', but nothing special to talk about. The great power efficiency is offset by the high platform cost. In the high end for apps you have real competition; everything else is in Intel's hands. The total cost is still too high with AMD for most, and the lower you go in tier, the more pronounced it is.
We can just hope that Zen 4 is ironing out all the problems of the new platform so Zen 5 will see a smooth launch.

You're kidding me, right? AMD, a company A FRACTION of the size of Intel, is not only matching Intel but beating them with a cheaper chip that's way more efficient, and you're not impressed? Might I remind you that Intel's R&D budget is over 3x larger than AMD's, and that's only because AMD just bumped up their R&D budget in 2022; prior to that, 2021 and before, Intel's R&D budget was 7.5x larger!

Anyone who says they're not impressed by what AMD is able to do with a literal fraction of the resources is either not being honest or wrongly assumes that Intel and AMD are playing on a level field and are equally "matched" (and even if that were the case, AMD would still be winning and still be impressive). Seriously, can anyone point to another example, from any other industry, where a competitor is completely financially outmatched and yet not only competes but wins against their competition?

Considering that we're witnessing a trend in capitalism of increased consolidation of industries, with constant mergers and far less competition and choice, the fact that AMD was able to resurrect itself, shatter the Intel monopoly, match and then beat the competition, and do it all in a period of approximately five years is astonishing (not to mention that they're doing this while competing with Nvidia, who also outmatches AMD by a wide margin in financials and resource access, yet AMD still matches them in raster and is chipping away at their lead in ray tracing). Again, when you lay out the facts, I simply cannot believe anyone who says they're not impressed by what AMD has accomplished while giving consumers the undeniably best x86 market they've seen in a decade. Even an Intel fanboy should be thanking AMD; lest we forget, prior to Ryzen we were stuck in 4-core stagnation with 4% generational "uplifts", all at a premium price thanks to a de facto Intel monopoly. For example, Intel's i7-6900K, an 8-core CPU, was released just a year before Ryzen with an MSRP of $1,100, and just a year later AMD launched Ryzen and offered the 8-core 1800X at $500 on a mainstream platform. Can anyone cite any other time when, in a year's time, we saw a doubling of [full-power] cores on a mainstream platform and a price reduction of more than half? The closest thing is when AMD doubled cores again with the 3950X.

I'll end the rant now, but I think it's impossible to call what AMD has done for the x86 market (and graphics) and for consumers unimpressive (helping consumers is NOT the goal of any corporation, it's profit, and in no way am I naive enough to think AMD is the "good guy"), and all in an incredibly small amount of time, AND they're basically leading innovation in the x86 market to such an extent that now Intel is copying AMD, whether it be with chiplets or the China-only black-box Raptor Lake CPUs with more L3 cache...
 
I was bored today and decided to set up a gaming profile with CCD1 disabled, just to see benchmark results vs the full 16 cores. It turned out like a Ryzen 7 7700X on steroids.

I haven't had time to actually play any games yet with the 8-core profile, as I spent all day setting up the profile and testing, but I ran 4 CP2077 game benchmarks with nice results.
I'm testing the CP2077 Reworked Project Ultra Quality Mod released yesterday @ 2560x1440: https://www.nexusmods.com/cyberpunk2077/mods/7652?tab=files


No ECO going on here... all MLB & PBO Max.


I set per-core boost limits, with two cores at 5850 and six cores at 5750.


After work today I plan on playing CP2077 with the 8-core profile and seeing what shakes out.

Then most likely I'll save the profile and go back to the full 16 cores... :laugh:
 
It's 25-50% faster than the same CPU without the 3D cache and 25% faster than the power-hungry Intel equivalent. How is anything you said close to reality here?

The Intel price savings aren't as large as you claim when the AMD side uses the same RAM (and honestly, it supports lower maximums, so it's usually cheaper) and requires far less cooling; the 3D chips use a lot less power and produce a lot less heat while gaming.

The slower Intel setup consumes double the power and outputs double the heat at every step, going up to 3.5x the consumption vs the emulated 7800X3D. That's not something to be overlooked in your 'total cost' analysis.

I'm not seeing any major differences in prices here, when the 7800X3D is going to offer the same gaming performance as the 7950X3D, as well as work on budget motherboards (unlike on the Intel side, which requires a Z-series motherboard that must sustain those wattages).

If I limit myself to Z790 boards (because anything less reduces the performance of the Intel K CPUs), the AMD boards are actually cheaper, starting at AU$300 vs AU$350 on the Intel side.
Every single paragraph is a heavy exaggeration...
 
It's 25-50% faster than the same CPU without the 3D cache and 25% faster than the power-hungry Intel equivalent. How is anything you said close to reality here?

The Intel price savings aren't as large as you claim when the AMD side uses the same RAM (and honestly, it supports lower maximums, so it's usually cheaper) and requires far less cooling; the 3D chips use a lot less power and produce a lot less heat while gaming.

The slower Intel setup consumes double the power and outputs double the heat at every step, going up to 3.5x the consumption vs the emulated 7800X3D. That's not something to be overlooked in your 'total cost' analysis.

I'm not seeing any major differences in prices here, when the 7800X3D is going to offer the same gaming performance as the 7950X3D, as well as work on budget motherboards (unlike on the Intel side, which requires a Z-series motherboard that must sustain those wattages).

If I limit myself to Z790 boards (because anything less reduces the performance of the Intel K CPUs), the AMD boards are actually cheaper, starting at AU$300 vs AU$350 on the Intel side.

A 100W difference between two products
4h of gaming every day, without exceptions
~120h of gaming per month
20 cents/kWh (including all extra costs here: taxes and transfer)

12 kWh × €0.20 = €2.40 saved monthly. Imagine the difference being less than 100W.

If this is an issue, I would say "it's time to prioritize". :roll: Consider going to work or something. Normal desktop usage is around 20W with Intel chips; I didn't bother to check what it would be when using only E-cores. P-cores are my favorite.


Anyway, with Intel products there is the possibility to greatly reduce wattage, from the stock 1.35-1.4V to roughly 1.2V, and even then overclock the all-core frequency. I understand if this is not an option for lots of users, but going green is nice.

In my case the heating season is about 9 months per year, so a little extra heat would not do any harm either. :toast:
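
For anyone who wants to plug in their own numbers, the same math as a small Python snippet (the 100W delta, 4h/day, and €0.20/kWh are the figures from this post):

[CODE]
# Monthly cost of a power-draw difference, using the figures above.
def monthly_cost_eur(watts_delta: float, hours_per_day: float,
                     eur_per_kwh: float, days: int = 30) -> float:
    kwh = watts_delta / 1000.0 * hours_per_day * days
    return kwh * eur_per_kwh

print(f"{monthly_cost_eur(100, 4, 0.20):.2f} EUR/month")  # 2.40 EUR
[/CODE]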
 