
First Signs of AMD Zen 3 "Vermeer" CPUs Surface, Ryzen 7 5800X Tested

If they can match Intel's single-threaded performance, they'll suddenly change the perception of their CPUs.

I'm so tired of the "AMD is for the poors" takes I see on Facebook. They need to stomp Intel and make things better for consumers.

I'm okay with Intel being top of the gaming framerates; that gives us cheaper and yet better multithreaded Ryzen CPUs. I think that's the reason AMD isn't charging an arm and a leg for them at the moment: high-end users are still opting for Intel CPUs for gaming, and I'd like to keep it that way. If AMD can keep increasing Ryzen's multithreaded performance even more, that's a buy for me, as multithreaded performance is much more important to me.
 
Many games are designed on Intel systems because, duh, the majority of their customers use Intel. Things are changing now that Ryzen is sprinting arguably equal to Intel, depending on how you test it, and I dunno if it's the pain medication, the caffeine, or the shiny benchmarks, but I'm very excited about where the tech world is going this year (the REST of the year is fucking garbage lol).

Pain meds? Anything good? Hahaha
 
No doubt they're the most powerful at the moment, but that site chose to feature it with the opposition's product. I'm sure AMD employees are embarrassed about that. RDNA2 had better bring the hammer down ASAP!

I'm sure the press would make even more of a song and dance about it if AMD had used their own RX 5700 XT or similar graphics card for the benchmark. Everyone would be up in arms that Radeon graphics cards aren't powerful enough and are limiting the performance of the processor, it's an unfair benchmark, blah buh blah. They're allegedly going to be committing war crimes against the PC community either way, so what else can they do in that situation? Most likely they're not too worried about it.
 
It doesn't mean it isn't either. Intel's compiler and benchmark shenanigans are legendary.
That's nonsense.
Nearly all software is compiled with MSVC, GCC, or LLVM, and none of those do any "shenanigans".
 
monolithic die

Lower latency

None of it seems to do anything for Ryzen.
I'm repeating myself here (I already commented on it in another thread), but the 4650G has 1/4 the L3 cache of the 3600 (8 MB vs 32 MB). No matter how low your RAM latency is, the cache is still roughly an order of magnitude faster than DRAM, and Renoir's chopped-down L3 will significantly bottleneck its gaming performance.
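The cache-size point can be sketched with a toy pointer-chase in Python. This is an illustrative assumption, not a real memory benchmark: sizes, step counts, and the chase loop are made up for the sketch, interpreter overhead dilutes the effect, and absolute numbers depend entirely on the machine.

```python
import random
import time

def chase(n_elems: int, steps: int = 500_000) -> float:
    """Follow a shuffled index chain for `steps` hops; return elapsed seconds.

    A small chain stays resident in the CPU caches; a big one keeps
    spilling out, so each hop tends to pay main-memory latency instead
    of cache latency.
    """
    chain = list(range(n_elems))
    random.shuffle(chain)
    idx = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        idx = chain[idx]  # each hop is a data-dependent random access
    return time.perf_counter() - t0

small = chase(32 * 1024)       # ~256 KB of list slots: cache-resident
large = chase(4 * 1024 * 1024) # ~32 MB of list slots: mostly DRAM
print(f"small working set: {small:.3f}s, large working set: {large:.3f}s")
```

On a typical desktop the large chase comes out slower per hop, which is the same kind of latency penalty a quartered L3 imposes on cache-sensitive game code.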
 
Given that this isn't a legit review, who gives a crap?




Always interesting to read, but it's so sad how people get so whipped up over AMD vs Intel. Give it a rest. Reviews will be out soon....then you guys can all be like:

[reaction GIF]
 
That's nonsense.
Nearly all software is compiled with MSVC, GCC, or LLVM, and none of those do any "shenanigans".
It doesn't require shenanigans if the team writing it is owned by Intel; they optimise for their platform.
Not even slightly dodgy, fair play.
 
Bots, scalpers and bounce alerts:

“New CPU? This is free real estate!”

After the RTX 3000 launch, it's clear the sneaker crowd has its eyes on PC components now.
 
The potential of 8 cores on a single chiplet is what has me excited. Ryzen's Achilles heel was increased latency across chiplets; if we can get 8 cores without those issues, that will be one heck of a gaming chip. Hopefully they don't go the route of 4+4 or 6+2 and actually give us a full single-chiplet CPU.
 
The potential of 8 cores on a single chiplet is what has me excited. Ryzen's Achilles heel was increased latency across chiplets; if we can get 8 cores without those issues, that will be one heck of a gaming chip. Hopefully they don't go the route of 4+4 or 6+2 and actually give us a full single-chiplet CPU.

That is why AMD is still behind Intel in gaming, but AMD can get close to Intel in gaming while keeping the chiplet design. Chiplets are easier to manage, and AMD likes that. Intel wants to move to a chiplet design too, but they know that if they do, they will lose the gaming advantage they have over AMD.
 
Too bad we don't know what the frequency of the Ryzen 5800X is.
 
A big key with the 3000 series launch was the comparison of it being more efficient than Intel (3700X was the demo chip). I suspect 4000 series will be even more of the same against 14++++, so getting within striking distance in ST performance with way more efficiency would still be a nice feat. I guess we’ll find out soon enough.
 
The potential of 8 cores on a single chiplet is what has me excited. Ryzen's Achilles heel was increased latency across chiplets; if we can get 8 cores without those issues, that will be one heck of a gaming chip. Hopefully they don't go the route of 4+4 or 6+2 and actually give us a full single-chiplet CPU.

Unless I'm horribly mistaken, every Zen 2 CPU from the 3800X on down (except for the APUs) is a single chiplet (CCD) + IO die processor.

Do you mean 8 cores in a single CCX, instead?
 
The fact that the benchmark was run at 4K rather than 1440p or 1080p is a little suspicious. And while it had much higher CPU frames, it was still marginally behind in actual framerate in 2 out of 3 games.


So should they have run the test at 480p because it's such a common resolution?

Plus: engineering sample, unknown RAM (except 32 GB vs 16 GB, so we know the AMD timings are worse), unknown storage, unknown cooling. 8 cores vs 10 cores.
 
So RKL will beat this Ryzen 5000 Zen 3 then ? If that's the case then I'm set on Z590 Dark or ASUS Maximus XIII Apex Z590.
 
Does anyone have a 4K screen, a 2080 or better GPU, and a 3800X? It would be fun to get the score and see the increase.

It seems very promising for sure, but I'm still keeping my expectations in check so as not to be disappointed.
 
So should they have run the test at 480p because it's such a common resolution?

Plus: engineering sample, unknown RAM (except 32 GB vs 16 GB, so we know the AMD timings are worse), unknown storage, unknown cooling. 8 cores vs 10 cores.
Did I say they should’ve run the test at 480p? I mentioned 1080p and 1440p. 4K is not the resolution to test a CPU’s gaming performance. Some sites, like TPU, even test at 720p.
 
What? They skipped right to 5000? What the fuck?!
 
That's nonsense.
Nearly all software are compiled with either MSVC, GCC or LLVM, none of those do any "shenanigans".

I know that MKL is rarely used in games, but this is a good example:
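To make the MKL complaint concrete: the criticism was that the library picked its fast code path based on the CPU vendor string rather than the feature flags the CPU actually reports. Here is a minimal sketch of that dispatch pattern in Python; the function and kernel names are purely illustrative assumptions, not MKL's actual internals.

```python
def pick_kernel(vendor_id: str, has_avx2: bool) -> str:
    """Vendor-gated dispatch: the AVX2 path is reserved for 'GenuineIntel',
    even if another vendor's CPU reports AVX2 support."""
    if vendor_id == "GenuineIntel" and has_avx2:
        return "avx2_kernel"
    return "generic_sse2_kernel"  # AMD lands here despite supporting AVX2

def pick_kernel_fair(vendor_id: str, has_avx2: bool) -> str:
    """Feature-based dispatch: use AVX2 wherever the CPU supports it."""
    return "avx2_kernel" if has_avx2 else "generic_sse2_kernel"

# An AMD CPU that supports AVX2:
print(pick_kernel("AuthenticAMD", True))       # vendor-gated: slow path
print(pick_kernel_fair("AuthenticAMD", True))  # feature-based: fast path
```

Older MKL builds reportedly behaved like the first function, and the undocumented MKL_DEBUG_CPU_TYPE=5 environment variable was the well-known workaround to force the faster path on AMD until it was removed.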
 
What? They skipped right to 5000? What the fuck?!
APUs lagged a generation behind, so they brought them to parity by making desktop skip to 5000. Is this really that shocking?
 
Just got my 3950X. Not upgrading any time soon. But still, if AMD manages to pull high clocks from these CPUs, they'll be on top in gaming this time.
 
Did I say they should’ve run the test at 480p? I mentioned 1080p and 1440p. 4K is not the resolution to test a CPU’s gaming performance. Some sites, like TPU, even test at 720p.


They used to test at 720p, but it's so useless that it's been upped to 1080p for the relevant tests.

I remember the debate; no one uses 720p unless we're comparing low-power CPUs in tablets.

So feel free to use an unused resolution for a comparison if it makes you feel better. But it's like comparing which jet fighter is better at being a submarine, or which sports car does best off-road, or which network switch makes the best cricket bat.

Again, you addressed one point, and it's meaningless. Any thoughts on core counts, memory latency, power consumption? Typically AMD gets you more overall performance for the dollar.
 
They are, but it's irrelevant, as both Intel and AMD CPUs share the same ISA and have very similar performance characteristics. Software developers have no access to the underlying native microarchitecture, so there is nothing to optimize for at this level. Just because a piece of software runs better on one CPU doesn't mean it's optimized for it; it could be that the hardware just handles the workload better due to resource balancing and advantages of that architecture, advantages which are usually hard or impossible to exploit directly from software.

Pretty much no games contain low-level assembly code these days anyway, and many studios use off-the-shelf game engines and do no low-level coding at all.
Games do, on occasion, optimize for one piece of hardware or another. They will usually mention it or have a splash screen in game to tell you.
 
And I sold my 3300X for good money. Currently running the system with an Athlon 200GE :D and waiting for a Ryzen 3 5300X.
 