Saturday, March 11th 2023

AMD Shows First Ryzen 7 7800X3D Game Benchmarks, Up To 24% Faster Than Core i9-13900K

AMD has finally released some official gaming benchmarks for its 8-core Ryzen 7 7800X3D processor, due in April, and they show it outperforming the Intel Core i9-13900K by up to 24 percent. Officially, AMD is positioning the Ryzen 7 7800X3D against the Intel Core i7-13700K, leaving the Core i9-13900K and the Core i9-13900KS to its 16-core and 12-core Ryzen 7000X3D SKUs.

Although some of its Ryzen 7000X3D series chips have been available since February 28th, namely the Ryzen 9 7950X3D and the Ryzen 9 7900X3D, AMD has pushed back the launch of its 8-core/16-thread Ryzen 7 7800X3D. This was quite a surprise and a big letdown, especially given its tempting $449 price tag. One reason might be that the Ryzen 7 7800X3D is simply too good and could put a lot of pressure on AMD's own SKUs, let alone Intel's lineup.
The AMD Ryzen 7 7800X3D has yet another significant advantage over the rest of the Ryzen 7000X3D series: while the 12-core and 16-core SKUs are multi-chip modules with two CCDs in an asymmetric chiplet design, the Ryzen 7 7800X3D has a rather standard design, with a single 8-core CCD with 3D V-Cache.

The Ryzen 9 7950X3D and the Ryzen 9 7900X3D have two CCDs, only one of which carries 3D Vertical Cache, which means they rely on software control, the 3D Vertical Cache Optimizer Driver, to ensure that game workloads are directed to the CCD with the 3D Vertical Cache, using dynamic "preferred cores" flagging for the Windows OS scheduler. You can find more details in our Ryzen 9 7950X3D review.
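The "preferred cores" idea can be illustrated with a small sketch. This is not AMD's actual driver logic; the core numbering, the game-detection rule, and the function name are invented for the example, which only shows the concept of steering game threads toward the cache CCD and everything else toward the frequency CCD.

```python
# Hypothetical illustration of "preferred cores" steering on an
# asymmetric dual-CCD part. Core IDs and the detection rule are
# invented; the real driver flags cores for the Windows scheduler.

CACHE_CCD_CORES = set(range(0, 8))   # CCD0: cores under the 3D V-Cache
FREQ_CCD_CORES = set(range(8, 16))   # CCD1: higher-clocked cores

def preferred_cores(process_name: str, known_games: set) -> set:
    """Return the core set a scheduler should prefer for this process."""
    if process_name in known_games:
        return CACHE_CCD_CORES   # games tend to benefit from the larger L3
    return FREQ_CCD_CORES        # other workloads prefer raw clock speed

games = {"game.exe"}
print(preferred_cores("game.exe", games))    # cache CCD
print(preferred_cores("render.exe", games))  # frequency CCD
```

On the single-CCD 7800X3D this whole layer is unnecessary, which is part of why its design is considered simpler.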

AMD has released two new slides, putting the AMD Ryzen 7 7800X3D against the Intel Core i9-13900K in four games: Rainbow Six Siege, Total War: Three Kingdoms, Red Dead Redemption 2, and Horizon Zero Dawn. In all four, the Ryzen 7 7800X3D is ahead of the Core i9-13900K by anywhere between 13 and 24 percent. The second slide puts the Ryzen 7 7800X3D against the previous-generation AMD Ryzen 7 5800X3D in Rainbow Six Siege, Warhammer: Dawn of War III, CS:GO, and DOTA 2, where the new generation is anywhere between 21 and 30 percent faster.
If these benchmarks turn out to be even close to a realistic picture (these are just four games handpicked by AMD), the Ryzen 7 7800X3D, priced at $449, might be a big winner for AMD and become one of its best sellers, as it manages to outperform Intel's $579 Core i9-13900K while being $130 less expensive. The Core i7-13700K, which is what AMD is actually putting the Ryzen 7 7800X3D against, is priced at $405.

Of course, these are handpicked benchmarks provided by AMD, so take them with a grain of salt; we would rather wait and check these performance figures ourselves when the chip officially launches on April 6th. In the meantime, you can check out our Ryzen 7 7800X3D preview, which simulates its performance with a single CCD enabled.
Source: Tom's Hardware

85 Comments on AMD Shows First Ryzen 7 7800X3D Game Benchmarks, Up To 24% Faster Than Core i9-13900K

#51
Tek-Check
evernessinceI think you are underestimating the efficiency of the 3D cache; even without the power binning a top-end laptop SKU goes through or the power optimizations, you have the 7950X3D, which could feasibly match a laptop chip in power efficiency. I can think of many regions in the world where either electricity prices or climate heavily incentivize a 7950X3D or 7800X3D purchase, aside from its class-leading performance.
The importance of efficiency is coming back like never before, and it is here to stay in the wake of everything that is happening with electricity prices, economic crises, local wars and fights for resources, as well as climate change globally. 'Intel 4' is predicted to be 50% more efficient, and that's the way it should be. Intel is under increasing pressure to deliver a more efficient line of products as laws in many countries are tightening up the energy consumption agenda.
Posted on Reply
#52
evernessince
fevgatosThat's a completely different issue that's debatable. What's not debatable is that disabling E-cores on the current CPUs loses you performance in plenty of games. There is a game here and there (I haven't had any myself) that may work better with E-cores off, but if you find such a game, you can disable the E-cores with the press of a button from within Windows, on the fly.
"With its E-cores disabled, the i9-13900K is a negligible 0.1% slower than the stock i9-13900K (E-cores enabled)"

www.techpowerup.com/review/rtx-4090-53-games-core-i9-13900k-e-cores-enabled-vs-disabled/3.html

At the end of the day, whether you disable or enable E-cores, your performance on average ends up being about the same. You seem to imply it's a rarity here, which isn't true according to TPU's data.

The other poster is likely correct: if E-cores only provide an average advantage of 0.1% in games, it would be very easy for a hypothetical CPU with more P-cores to outperform them. 0.1% is well within the margin of error; it's possible that another benchmark could show the 13900K with E-cores disabled beating a fully enabled 13900K simply through random benchmark variance.
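The margin-of-error point can be made concrete with a toy calculation. The numbers below are invented for illustration, not TPU's data; the idea is simply that when the average gap between two configurations is far smaller than the run-to-run spread of either one, a single benchmark pass can go either way.

```python
# Toy illustration (invented FPS numbers) of a 0.1%-scale gap sitting
# inside run-to-run benchmark variance.
from statistics import mean, stdev

ecores_on = [142.1, 141.5, 142.8, 141.9, 142.4]   # hypothetical runs
ecores_off = [142.0, 142.6, 141.4, 142.2, 141.8]

gap_pct = (mean(ecores_on) - mean(ecores_off)) / mean(ecores_off) * 100
noise_pct = stdev(ecores_on) / mean(ecores_on) * 100

print(f"gap: {gap_pct:+.2f}%  run-to-run noise: {noise_pct:.2f}%")
# When |gap| is far below the noise, either configuration can "win"
# any individual benchmark pass purely by chance.
```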
Posted on Reply
#53
SkySong
Hmm 24% faster, I'm not buying that. We'll see with actual test results.
Posted on Reply
#54
Dirt Chip
evernessince"With its E-cores disabled, the i9-13900K is a negligible 0.1% slower than the stock i9-13900K (E-cores enabled)"

www.techpowerup.com/review/rtx-4090-53-games-core-i9-13900k-e-cores-enabled-vs-disabled/3.html

At the end of the day, whether you disable or enable E-cores, your performance on average ends up being about the same. You seem to imply it's a rarity here, which isn't true according to TPU's data.

The other poster is likely correct: if E-cores only provide an average advantage of 0.1% in games, it would be very easy for a hypothetical CPU with more P-cores to outperform them. 0.1% is well within the margin of error; it's possible that another benchmark could show the 13900K with E-cores disabled beating a fully enabled 13900K simply through random benchmark variance.
Let's make a 13900P that lacks E-cores completely. And on top of that, a 13950KPS that adds 4 P-cores in the space of the missing E-cores. Still, for app use, I think that 16 E-cores will do better than 4 P-cores, considering you already have 8 P-cores anyway.
:pimp:
Posted on Reply
#55
TumbleGeorge
The problem is that, whether or not you benefit from the E-cores, you end up paying for them, and they are reflected in the price of the CPU.
Ps
Even if you turn them off after building your computer and never, ever use them, Intel will never give you back the money you paid for them.
Posted on Reply
#56
mäggäri
evernessince"With its E-cores disabled, the i9-13900K is a negligible 0.1% slower than the stock i9-13900K (E-cores enabled)"

www.techpowerup.com/review/rtx-4090-53-games-core-i9-13900k-e-cores-enabled-vs-disabled/3.html

At the end of the day, whether you disable or enable E-cores, your performance on average ends up being about the same. You seem to imply it's a rarity here, which isn't true according to TPU's data.

The other poster is likely correct: if E-cores only provide an average advantage of 0.1% in games, it would be very easy for a hypothetical CPU with more P-cores to outperform them. 0.1% is well within the margin of error; it's possible that another benchmark could show the 13900K with E-cores disabled beating a fully enabled 13900K simply through random benchmark variance.
In 12th gen you achieve a much higher ring frequency without E-cores, which already outperforms the "0.1%". :) I just don't bother posting too deeply about different scenarios, and at the same time reviewers don't do it either.

A 12700K at 1.20 V all-core 4900 MHz, without E-cores and with a pumped-up ring frequency plus DDR4-4000 C16 flat, gives almost 22k in CB R23, at around 175 W. But wattage varies a little, depending on the unit's "leakage current". The sticks I used were 2x16 GB from a G.Skill 128 GB kit, spec'd at 3200 C15 flat.

If I remember correctly, pumping up the ring frequency gave something around 500 points in CB R23.
Posted on Reply
#57
Fierce Guppy
SteevoMy question is: what benefit does liquid cooling offer on this? What RAM tuning does it offer to make it a long-lived, high-performance system? What about a lapped die or exposed die?
If you're talking about a long lived high performance system then the CPU is much less important than the socket. A new CPU will go into an AM5 socket four years from now. For this reason, although I am unimpressed with the 7000 series CPUs, I'm buying an AMD mobo. If Intel supported its socket types for as long as AMD supported AM4, I'd stay with Intel without any hesitation.
Posted on Reply
#58
mäggäri
Fierce GuppyIf you're talking about a long lived high performance system then the CPU is much less important than the socket. A new CPU will go into an AM5 socket four years from now. For this reason, although I am unimpressed with the 7000 series CPUs, I'm buying an AMD mobo. If Intel supported its socket types for as long as AMD supported AM4, I'd stay with Intel without any hesitation.
I don't mind buying new mobos and getting new features. I would not pair a 5800X3D with X370/B350 just because it is possible. :D In theory it is a nice idea, but in practice it's a bit of a different story.

Also, those 500 USD/EUR mobos don't give you much for home usage. I am not making a welding device, so a 14x 55 A VRM is enough. No need for 24x 105 A, lol; just a marketing trick after all.
Posted on Reply
#59
SkySong
mäggäriup to 999% <> reality -1-5%

Just disable the CCD without 3DNow! cache on 7950X3D and bench it out. :sleep:
Woohoo and amen again to these AMD-style "up to" jokes. April Fools' Day came a bit early this year. :lovetpu:


It would be great if Intel dropped these E-core gimmicks. Just plain P-cores. I have found zero use for those notepad-tier CPU cores; they're disabled in BIOS already.
Another interesting note: if you disable E-cores, you automatically get a higher ring clock and more cache to use.
Posted on Reply
#60
Vayra86
WastelandIt is obviously impossible to dispute any claim of "up to," but W1zzard's testing has the 7950x3d (with or without the second CCD disabled) barely edging out the 13900k on average. There's no reason to expect that the 7800x3d will do substantially better.

This CPU will be perhaps the best option for what you might call the zero-compromise gamer; it's definitely priced better than the 13900k, but that's not saying much.. IMO any "gaming" CPU over about $250 is silly. This was a problem for the 5800x3d too, which enjoyed a lot of good press at launch, but the love fest really only went into turbo mode later on, when A) the price went way down, and B) AM5's high platform costs made any drop-in upgrade for AM4 owners look that much better by comparison. The 5800x3d is still sitting pretty, in fact, as are various lower-end Intel options.
Well put
The only edge the 7800X3D has is that it pushes fewer cores, so it might just have a better power/frequency curve. 200 MHz more, I think, is already optimistic though.

Still, the 'up to' is believable, given that the X3Ds excel in exactly those situations where the cache alleviates bad FPS. Nobody cares about 24% on top of 144 FPS, but 24% on top of 40-50, or sub-30, is a game changer. Simulators, 4X, etc. all benefit massively where it matters most. This is also where the E-cores' extra threads don't extract the same benefits; the special X3D sauce is the perfect catch-all for heavy single-threaded gaming scenarios.

For me it's really only a question of when, not if, I'll get some model of X3D. And the stellar power consumption while gaming is really just a bonus on top; half the TDP compared to Intel 13th gen is immense.
Ayhamb99Precisely why I think arguing about which CPU is better for gaming is pointless, in my honest opinion: in real-world usage it'll be difficult to notice the differences between the top-performing gaming CPUs with a 4090, and the differences will be even less noticeable with the GPUs people are more likely to use (3080s and 6800/6900 XTs).
Well... it was exactly a top-end recent CPU that gave the 4090 its wings compared to the initial W1zzard review with a somewhat slower non-X3D Ryzen.

Especially with lots of GPU power, the impact of a fast CPU is worthwhile. It was, is, and will always be about having a balanced system between CPU and GPU. That said, the baseline of CPUs is certainly more than sufficient for lower-tier GPUs, and also becomes more than sufficient once you push a 4K resolution. But if you run 1440p on a fast GPU... damn right you'll see the benefits of more CPU.
Posted on Reply
#61
Tartaros
My aging 9900 might get retired depending on this one's numbers in WoW; this looks pretty good.
Posted on Reply
#62
evernessince
TartarosMy aging 9900 might get retired depending on this one's numbers on WoW, this looks pretty good.
Well, the 5800X3D was a 35% increase over the 5800X in WoW, so numbers in that game will likely be very good for the 7800X3D. MMOs in general seem to really benefit.
Posted on Reply
#63
ValenOne
PsychoholicThe E-cores aren't terrible; they are equivalent to Skylake cores, maybe a little faster than Skylake.
E-cores lack native 256-bit AVX2 hardware; they support AVX2 through three ports of 128-bit hardware.

en.wikipedia.org/wiki/Gracemont_%28microarchitecture%29#/media/File:GracemontRevised.png

vs

en.wikipedia.org/wiki/Skylake_%28microarchitecture%29#/media/File:Skylake_architecture_diagram.png
The Core i9-13900K has 16 extra E-cores (three 128-bit AVX ports x 16 cores = 6,144 bits).

The Ryzen 9 7950X/7950X3D has 8 extra "fat" Zen 4 cores (four 256-bit AVX ports x 8 cores = 8,192 bits).
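The per-cycle SIMD-width arithmetic above can be checked in a few lines, taking the poster's port counts as given assumptions rather than verified microarchitectural facts:

```python
# Sanity check of the post's aggregate SIMD-width figures.
# Port counts and widths are the poster's claims, treated as inputs.

ecore_bits = 3 * 128 * 16   # 3 ports x 128-bit x 16 E-cores
zen4_bits = 4 * 256 * 8     # 4 ports x 256-bit x 8 Zen 4 cores

print(ecore_bits, zen4_bits)  # aggregate bits per cycle for each group
```

Both products match the figures quoted in the comment (6,144 vs 8,192 bits), so the arithmetic itself is internally consistent.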


----
The Intel Golden Cove P-core block diagram, which is similar to Raptor Cove P-cores:
i0.wp.com/chipsandcheese.com/wp-content/uploads/2021/11/goldencove.drawio3.png?ssl=1
Three-port 256-bit AVX hardware, consisting of two ports of 256-bit FMA/256-bit ALU plus one port of 256-bit FADD/256-bit ALU.


The AMD Zen 4 block diagram:
chipsandcheese.com/2022/11/05/amds-zen-4-part-1-frontend-and-execution-engine/
Four-port 256-bit AVX hardware, consisting of two ports of 256-bit FMA/256-bit ALU plus two ports of 256-bit FADD/256-bit ALU. Zen 4 executes AVX-512 as split 256-bit operations, similar to Golden Cove (where enabled).

Most current-gen games are designed around the Zen 2 AVX2 capabilities of the game consoles.
Posted on Reply
#64
SL2
SkySongHmm 24% faster, I'm not buying that. We'll see with actual test results.
Yeah, no.

No one said 24% faster across the board; maybe you should read again.
Posted on Reply
#65
Tartaros
evernessinceWell the 5800X3D was a 35% increase over the 5800X in WoW so it's likely numbers in that game will be very good for the 7800X. MMOs in general seem to really benefit.
That was an out-of-the-box install, I imagine. The problem with WoW is that you also "need" to run addons, at least if you want to be competitive or have an easier time seeing what's happening. The new expansion has already incorporated quite a lot of QoL addons into the core, but you still need something to track numbers, dungeon timers, and custom alerts, and those addons are heavy.

Still, I get around 60 FPS in Valdrakken at 4K without doing anything; that has been pretty much the heaviest part of the game for me. In raids I get 90 FPS.
Posted on Reply
#66
Icon Charlie
Until I see REAL data that has been tested and a CPU that has been reviewed, I will not listen to any of the marketing speak and talking heads from AMD or any other company in any sector of any industry.

For the past several years we have been getting nothing but "technical truths" and "blatant lies" just to sell product.

But unless the overall components come down in price (YES, Ngreedia; yes, AMD, with your AMD tax on motherboards), this gets a hard pass.
Posted on Reply
#67
evernessince
TartarosThat was an out-of-the-box install, I imagine. The problem with WoW is that you also "need" to run addons, at least if you want to be competitive or have an easier time seeing what's happening. The new expansion has already incorporated quite a lot of QoL addons into the core, but you still need something to track numbers, dungeon timers, and custom alerts, and those addons are heavy.

Still, I get around 60 FPS in Valdrakken at 4K without doing anything; that has been pretty much the heaviest part of the game for me. In raids I get 90 FPS.
Correct, it was at stock without addons. It would be interesting to test how addons impact the baseline performance of X3D chips vs non-X3D chips from Intel and AMD.
Posted on Reply
#68
watzupken
SkySongHmm 24% faster, I'm not buying that. We'll see with actual test results.
I think TPU's own benchmarks already confirmed that 20+% faster is possible. But again, this is an "up to" scenario, and I don't think people should get too hung up on the high percentage, since these are mostly edge cases. It is possible for Intel to have "up to" double-digit-percent-faster scenarios too. At the end of the day, buy what suits your budget and needs.
Posted on Reply
#69
Mussels
Freshwater Moderator
Dirt ChipIf that's the best Zen 4 3D can do, it is 'nice', but nothing special to talk about. The great power efficiency is offset by the high platform cost. In the high end for apps you have real competition; everything else is in Intel's hands. The total cost is still too high with AMD for most, and the lower you go in tier, the more pronounced it is.
We can just hope that Zen 4 irons out all the problems of the new platform so Zen 5 will see a smooth launch.
It's 25-50% faster than the same CPU without the 3D cache and 25% faster than the power-hungry Intel equivalent. How is anything you said close to reality here?

The Intel price savings aren't as large as you claim when the AMD side uses the same RAM (and honestly, supports lower maximums, so it's usually cheaper) and requires far less cooling; the 3D chips use a lot less power and produce a lot less heat while gaming.

The slower Intel setup consumes double the power and outputs double the heat at every step, going up to 3.5x the consumption vs the emulated 7800X3D; that's not something to be overlooked in your 'total cost' analysis.




I'm not seeing any major differences in prices here, when the 7800X3D is going to offer the same gaming performance as the 7950X3D, as well as work on budget motherboards (unlike the Intel side, which requires a Z-series motherboard that can sustain those wattages).




If I limit myself to Z790 boards (because anything less reduces the performance of the Intel K CPUs), the AMD boards are actually cheaper, starting at AU$300 vs AU$350 on the Intel side.
Posted on Reply
#70
kapone32
Well, I had a 5800X3D and I do not miss it (I just sold it on Saturday). I love how people are ragging on the 7900X3D. At $799 Canadian vs $1,100 for the 7950X3D, saving $400 is worth it. In every way the 7900X3D is better and faster than the 5800X3D. One thing that is not important but relevant: once again I am happy to run benchmarks in 3DMark, as my 12 cores make a difference at 87 watts.

The only reason I feel the chip is maligned is that there were no samples sent for review. The truth is that (for now) PC parts are expensive. These X3D parts are exemplary in the tangible bump in gaming performance you get. For anyone who is on AMD and on the fence: you will pay for it, but the jump in clock speed and V-Cache is noticeable.

I expect the 7800X3D will be the best-selling chip for AMD in 2023, as DDR5 is only about 3% more expensive than DDR4 in 32 GB sets. The motherboards are expensive if you want flexibility, but if you just want to build an X670 gaming PC, the new Tomahawk board from MSI is looking pretty good. I also expect board prices to come down from where they are. As an example, there was a time when the cheapest X399 board cost less than the most expensive X470 boards, given the amount of I/O, but would cost more to build with.

Now we have the reverse of TRX40, where the boards were (relatively) affordable but the CPUs were eye-watering. Now the MSI X670E Ace Max is double the price of the X570S Ace Max, but the Carbon is actually the Ace of this generation in terms of I/O. My only gripe is that the CPU will happily hit its 86 °C limit when working hard.
Posted on Reply
#71
AnarchoPrimitiv
GarrusWhat high platform cost? A motherboard is $140. The best 5600 C28 RAM is $130, for 32 GB of it!
Yeah, I think a lot of naysayers are just repeating the same "cons" from launch without ever having checked whether prices have changed.
Dirt ChipIf that's the best Zen 4 3D can do, it is 'nice', but nothing special to talk about. The great power efficiency is offset by the high platform cost. In the high end for apps you have real competition; everything else is in Intel's hands. The total cost is still too high with AMD for most, and the lower you go in tier, the more pronounced it is.
We can just hope that Zen 4 irons out all the problems of the new platform so Zen 5 will see a smooth launch.
You're kidding me, right? AMD, a company A FRACTION of the size of Intel, is not only matching Intel but beating them with a cheaper chip that's way more efficient, and you're not impressed? Might I remind you that Intel's R&D budget is over 3x larger than AMD's, and that's only because AMD just bumped up its R&D budget in 2022; prior to that, in 2021 and before, Intel's R&D budget was 7.5x larger!

Anyone who says they're not impressed by what AMD is able to do with a literal fraction of the resources is either not being honest or wrongly assumes that Intel and AMD are playing on a level field and are equally "matched" (and even if that were the case, AMD would still be winning and still be impressive). Seriously, can anyone point to another example, from any other industry, where a competitor is completely financially outmatched and yet not only competes, but wins against their competition?

Considering that we're witnessing a trend in capitalism of increasing consolidation of industries, with constant mergers and far less competition and choice, the fact that AMD was able to resurrect itself, shatter the Intel monopoly, match and then beat the competition, and do it all in a period of approximately five years is astonishing (not to mention that they're doing this while competing with Nvidia, which also outmatches AMD by a wide margin in financials and resource access, yet AMD still matches them in raster and is chipping away at their lead in ray tracing). Again, when you lay out the facts, I simply cannot believe anyone who says they're not impressed by what AMD has accomplished while giving consumers the undeniably best x86 market they've seen in a decade. Even an Intel fanboy should be thanking AMD; lest we forget, prior to Ryzen we were stuck at 4-core stagnation with 4% generational "uplifts", all at a premium price thanks to a de facto Intel monopoly. For example, Intel's i7-6900K, an 8-core CPU, was released just a year before Ryzen with an MSRP of $1,100, and just a year later AMD launched Ryzen and offered the 8-core 1800X at $500 on a mainstream platform. Can anyone cite any other time when, in a year's time, we saw a doubling of [full-power] cores on a mainstream platform and a price reduction of more than half? The closest thing is when AMD doubled cores again with the 3950X.

I'll end the rant now, but I think it's impossible to call what AMD has done for the x86 market (and graphics) and for consumers unimpressive (helping consumers is NOT the goal of any corporation, it's profit, and in no way am I naive enough to think AMD is the "good guy"), all in an incredibly small amount of time, AND it is basically leading innovation in the x86 market to such an extent that now Intel is copying AMD, whether with chiplets or the China-only black-box Raptor Lake CPUs with more L3 cache...
Posted on Reply
#72
Braegnok
I was bored today and decided to set up a gaming profile with CCD1 disabled, just to see benchmark results vs the full 16 cores. It turned out like a Ryzen 7 7700X on steroids.

I haven't had time to actually play any games yet with the 8-core profile, as I spent all day setting up the profile and testing, but I ran four CP2077 game benchmarks with nice results.
I'm testing the CP2077 Reworked Project Ultra Quality mod released yesterday at 2560x1440: www.nexusmods.com/cyberpunk2077/mods/7652?tab=files



No ECO going on here,.. all MLB & PBO Max.



I set per core boost limit with two cores at 5850 and six cores at 7550.



After work today I plan on playing CP2077 with the 8-core profile,.. see what shakes out.

Then most likely I'll save the profile,.. and go back to full 16-cores,.. :laugh:
Posted on Reply
#73
fevgatos
MusselsIt's 25-50% faster than the same CPU without the 3D cache and is 25% faster than the power hungry intel equivalent. How is anything you said close to reality here?

The intel price savings aren't as large as you claim when the AMD side uses the same RAM (and honestly, supports lower maximums so usually cheaper) and requires far less cooling - the 3D chips use a lot less power and produce a lot less heat while gaming.

The slower intel setup consumes double the power and outputs double the heat at every step - going to 3.5x the consumption vs the emulated 7800x3D - that's not something to be overlooked in your 'total cost' analysis




I'm not seeing any major differences in prices here, when the 7800x3D is going to offer the same gaming performance as the 7950x3D, as well as work on budget motherboards (unlike on the intel side, which requires a Z series motherboard that must sustain those wattages)




If i limit myself to Z790 boards (because anything less reduces the performance of the intel K CPUs) the AMD boards are actually cheaper - starting at $300Au vs $350 on the intel side
Every single paragraph is a heavy exaggeration...
Posted on Reply
#74
mäggäri
MusselsIt's 25-50% faster than the same CPU without the 3D cache and is 25% faster than the power hungry intel equivalent. How is anything you said close to reality here?

The intel price savings aren't as large as you claim when the AMD side uses the same RAM (and honestly, supports lower maximums so usually cheaper) and requires far less cooling - the 3D chips use a lot less power and produce a lot less heat while gaming.

The slower intel setup consumes double the power and outputs double the heat at every step - going to 3.5x the consumption vs the emulated 7800x3D - that's not something to be overlooked in your 'total cost' analysis




I'm not seeing any major differences in prices here, when the 7800x3D is going to offer the same gaming performance as the 7950x3D, as well as work on budget motherboards (unlike on the intel side, which requires a Z series motherboard that must sustain those wattages)




If i limit myself to Z790 boards (because anything less reduces the performance of the intel K CPUs) the AMD boards are actually cheaper - starting at $300Au vs $350 on the intel side
A 100 W difference between two products
4 h of gaming every day, without exceptions
~120 h of gaming / month
20 cents/kWh (including all extra costs here: taxes and transfer)

12 kWh x 20 cents = 2.40 EUR saved monthly. Imagine having a difference of less than 100 W.
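The back-of-the-envelope math above can be reproduced in a few lines, using the poster's own assumptions (a 100 W delta, 4 hours of gaming per day, 20 cents/kWh):

```python
# Reproducing the poster's electricity-cost estimate. All inputs are
# their stated assumptions, not measured figures.

watts_delta = 100                 # W difference between the two CPUs
hours_per_month = 4 * 30          # ~120 h of gaming per month
price_eur_per_kwh = 0.20          # 20 cents/kWh, taxes included

kwh = watts_delta * hours_per_month / 1000
cost_eur = kwh * price_eur_per_kwh

print(f"{kwh:.0f} kWh -> {cost_eur:.2f} EUR per month")
```

The result matches the 2.40 EUR/month figure quoted in the comment; the savings scale linearly with the wattage delta and hours played.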

If this is an issue, I would say "it's time to prioritize". :roll: Consider going to work or something. Normal desktop usage is around 20 W with Intel chips; I didn't bother to check how it would be when using only E-cores. P-cores are my favorite.


Anyway, with Intel products there is the possibility to greatly reduce wattage, from the stock 1.35-1.4 V down to roughly 1.2 V, and even then to OC the all-core frequency. I understand if this is not an option for lots of users, but going green is nice.

In my case the heating season is about 9 months per year, so a little extra heat would not do any harm either. :toast:
Posted on Reply
#75
Wasteland
GarrusTPU's review had the 7950X3D at 4 percent faster in gaming over the 12 games he picked. With a slightly different group of games (some games swing wildly with the cache's benefits, some barely change) and the 7800X3D, 10 percent is believable. AMD's slide:

Rainbow Six Siege.
Total War.
Horizon.
Red Dead 2.

Which of these 4 games did TPU include in his review? None.

He selected Cyberpunk instead of Red Dead 2, God of War instead of Horizon Zero Dawn, and Age of Empires instead of Total War, all games that heavily favor Intel. I'm not accusing him of doing anything wrong; I'm just pointing out that there are four games that are massively ahead on Ryzen, and they were not in his 12-game selection. He had many games that Intel is usually ahead in.

That's why it was 4 percent faster on average instead of 10 percent.

Personally, I love Red Dead 2 and detest Cyberpunk. Horizon Zero Dawn is a much better game than God of War. And the new Age of Empires is not good, so Ryzen is great for me. On Rainbow Six Siege, no comment, but it seems a lot more important than CS:GO or other 1000-FPS games ;)
It goes without saying that different test suites will yield different results. My point wasn't, "AMD is LYING," or, "I MUST DEFEND INTEL's SACRED HONOR!" I don't care to enlist in the eternal (and eternally tedious) corporate-fanboy war. My point was that we don't need AMD's PR campaign to give us a glimpse at the 7800x3d's performance profile. We already know that in the general case, it will not perform anywhere near "24% faster than the 13900k." But sure, there are always outliers. If you happen to adore a particular game that massively favors one architecture over another, then buy accordingly, averages be damned.

Benchmarks aside, the main thrust of my comment is that no CPU priced at $450 is particularly appealing for a gaming use case. Sure, if money is no object, or if you're a highly competitive twitch gamer who craves stratospheric frame rates in CPU-bound situations, then the 7800x3d might be for you, but most gamers will be vastly better off buying a $200-300 CPU instead, and putting the extra money towards a beefier GPU (or really towards anything else; pretend that Lisa Su bought you two weeks worth of groceries or w/e, lol). Future proofing doesn't really work as a justification here either, because if you buy into AM5 with a cheaper CPU now, then later you can grab a relatively inexpensive (say, Zen6) drop-in upgrade that will likely spank the 7800x3d--or you can just grab a 5800x3d or an i5 now and skip AM5 entirely. It isn't as if current mid-range CPUs will become obsolete for gaming any time soon.

A Ryzen 5 7600 will get you ~80% of the 7800x3d's gaming performance, for roughly half the money, today--and probably more performance in practice, once we invest the difference in other components. So yeah, I'm impressed by the tech, but until the price comes down on these Zen 4 x3d chips, I don't think they really move the needle in the gaming market. As noted earlier, even the 5800x3d didn't exactly rocket to the top of recommended lists until it dropped from its initial $450 MSRP, and that chip had (still has) a huge positional advantage, given the high number of existing AM4 owners looking for a final upgrade to their aging platform. There is, in other words, a built-in $200+ discount for large swathes of the 5800x3d's target audience. Its Zen4 successor has a much tougher row to hoe, value-wise.
Posted on Reply