
AMD Ryzen 7 5700G and Ryzen 5 5600G "Zen 3" Cezanne Desktop Processors Benched

btarunr

Editor & Senior Moderator
Several benchmark numbers of the upcoming AMD Ryzen 7 5700G and Ryzen 5 5600G desktop processors were fished out by Thai PC enthusiast TUM_APISAK. The 5700G and 5600G are based on the 7 nm "Cezanne" silicon, which combines up to eight "Zen 3" CPU cores in a single CCX sharing a 16 MB L3 cache, along with an iGPU based on the "Vega" graphics architecture. Both chips were put through the CPU-Z Bench, where they posted spectacular results.

Both chips post higher single-thread scores than the Core i9-10900K "Comet Lake," riding on the back of the high IPC of the "Zen 3" cores and the low latencies of the monolithic "Cezanne" silicon. In the multi-threaded test, the 8-core/16-thread 5700G scored above the Core i9-9900KS (5.00 GHz all-core). An HP OMEN 25L pre-built was also put through Geekbench 5, where it was found performing within 90% of the Core i5-11600K. UserBenchmark remarks that the 5600G performs within the league of its contemporaries, but falls behind on memory latency. Find the validation pages in the source links below.



View at TechPowerUp Main Site
 
I'm looking forward to seeing a Cezanne vs Vermeer comparison.

Any other comparisons are pretty irrelevant since there's nothing new here; it's just a reduced-cache version of Vermeer with some dated 2017 Vega cores glued onto the side.

The interesting ones will be the 15 W parts, as the Vega IGP is a 100% waste of silicon for those "desktop socket" laptops that always have a dGPU.

OEMs might try to pass these off as alternatives to buying a system with a hard-to-find GPU, but they're not cheap, and something that's leagues better than the IGP can be had cheaply because miners are only driving up the price of 4 GB+ GPUs. You can pick up an RX 550 2GB for £/€/$90 new, or 60 used, with ease, and that's not even the best option.
 
Haha, on the graph the 4500-point bar is three times longer than the 4100-point bar :D
Yes, that's how scaling works with a close min/max. You'll have a test on it next week.
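For anyone curious, here's a minimal sketch (with a purely hypothetical axis minimum, not taken from the article's chart) of how a close min/max range stretches a roughly 10% score gap into a bar that's three times longer:

```python
# Illustrative only: how a truncated axis exaggerates a small difference.
# The axis minimum below is a hypothetical value, not the chart's actual one.
axis_min = 3900
scores = {"CPU A": 4100, "CPU B": 4500}

for name, score in scores.items():
    bar_length = score - axis_min   # length actually drawn on the chart
    print(f"{name}: score={score}, drawn bar length={bar_length}")

# CPU B's bar (600) is three times CPU A's bar (200),
# even though its score is only ~10% higher.
```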
 
PSA : Please stop using Userbenchmark for Benchmarks/Comparison, it's notoriously biased : https://ownsnap.com/userbenchmark-i...ite-has-a-zero-credibility-in-tech-community/

I'm surprised to see such an uplift over the 4750G! I just bought that chip for an SFX system and it's blowing me away how good it is, so if the 5700G is even better... I may need to upgrade in the future, if the silicon shortage eventually goes away and prices become sane again. It will be more than enough for me until DDR5 systems become mainstream in a few years.
 
PSA : Please stop using Userbenchmark for Benchmarks/Comparison, it's notoriously biased : https://ownsnap.com/userbenchmark-i...ite-has-a-zero-credibility-in-tech-community/

I'm surprised to see such an uplift over the 4750G! I just bought that chip for an SFX system and it's blowing me away how good it is, so if the 5700G is even better... I may need to upgrade in the future, if the silicon shortage eventually goes away and prices become sane again. It will be more than enough for me until DDR5 systems become mainstream in a few years.
Yes, it's heavily biased and complete garbage to use for comparisons, especially with some of the weird and wild stuff I've seen from it before, and even gotten from it myself when I tried it.

I'll wait for the real, reliable benchies to be run with these chips before making a decision about them.
 
5700G is a great CPU imho. If priced close to $350, it will be the best for any use with great vfm.
 
5700G is a great CPU imho. If priced close to $350, it will be the best for any use with great vfm.
Why on earth do people come up with these crazy ideas?
What makes you think it will be $350? The 5600G might be $350 because that's only $50 more than the 5600X.

AMD are not going to cannibalise their own profits by selling an 8-core part with graphics for less than any other 8-core part of the same generation. The 5800X sells for around $440 and that doesn't have integrated graphics. Expect the 5700G to cost around $25-75 more than that. History shows that AMD typically charges a ~$20 premium, and real-world street pricing on these rarer APUs has driven that premium much higher, with the lack of retail boxes leaving pricing at the mercy of boutique builders who happened to sell the CPUs separately. Renoir APUs were never sold as retail-boxed units in the retail channel.

This is a large, monolithic die. It is significantly more expensive to make than the 8C/16T chiplet of the 5800X. If the 5700G doesn't cost at least $450 I'll be amazed.
 
PSA : Please stop using Userbenchmark for Benchmarks/Comparison, it's notoriously biased : https://ownsnap.com/userbenchmark-i...ite-has-a-zero-credibility-in-tech-community/
Why does Userbenchmark always trigger people?
The controversy is about their "effective speed" number and the weights they apply to benchmark results at different core counts to calculate it. So their rankings and effective-speed figures are crap.
The benchmark results themselves are fine; they give a pretty good indication of the different performance aspects of a CPU, in this case. In a tech-minded community like TPU, looking at those should be OK.

The 5700G gets about the same single-core performance as the 5800X, better results in the 4-core (+4%) and 8-core (+12%) tests, but is around 5% slower in the 64-core test. The last part is probably the TDP and power limit showing up.
For some reason the 5700G seems to lag behind the 5800X in memory latency, though. Interesting.
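To illustrate the point (this is a purely hypothetical weighting scheme with made-up scores, not UserBenchmark's actual formula or data), here's a sketch of how the choice of per-core-count weights, rather than the raw results, can decide a composite "effective speed" ranking:

```python
# Hypothetical sketch: NOT UserBenchmark's real weights or data.
def effective_speed(scores, weights):
    """Weighted average of 1-, 4-, 8- and 64-core results."""
    return sum(scores[k] * weights[k] for k in weights) / sum(weights.values())

# Made-up raw results for two CPUs
cpu_a = {"1c": 100, "4c": 400, "8c": 780, "64c": 900}
cpu_b = {"1c": 104, "4c": 410, "8c": 760, "64c": 860}

light_threaded = {"1c": 0.6, "4c": 0.3, "8c": 0.1, "64c": 0.0}
heavy_threaded = {"1c": 0.1, "4c": 0.2, "8c": 0.3, "64c": 0.4}

for label, w in [("light-threaded weights", light_threaded),
                 ("heavy-threaded weights", heavy_threaded)]:
    print(f"{label}: A={effective_speed(cpu_a, w):.1f}, "
          f"B={effective_speed(cpu_b, w):.1f}")

# The same raw numbers rank either CPU first depending on the weights chosen.
```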
 
Why on earth do people come up with these crazy ideas?
What makes you think it will be $350? The 5600G might be $350 because that's only $50 more than the 5600X.

AMD are not going to cannibalise their own profits by selling an 8-core part with graphics for less than any other 8-core part of the same generation. The 5800X sells for around $440 and that doesn't have integrated graphics. Expect the 5700G to cost around $25-75 more than that. History shows that AMD typically charges a ~$20 premium, and real-world street pricing on these rarer APUs has driven that premium much higher, with the lack of retail boxes leaving pricing at the mercy of boutique builders who happened to sell the CPUs separately. Renoir APUs were never sold as retail-boxed units in the retail channel.

This is a large, monolithic die. It is significantly more expensive to make than the 8C/16T chiplet of the 5800X. If the 5700G doesn't cost at least $450 I'll be amazed.
In reality, those APUs may well be easier and thus cheaper to make than the CPUs, as they don't need a separate IO die that has to be packaged together with the CPU chiplet, and APUs also carry less cache. They are meant to be cheaper so they can be sold mainly to OEMs, and they have lower power draw and temperatures.
 
Neat. If the price is right I may very well bag one to update my brother's FX-6300/HD 7950 calculator;
the 6-core will go nicely with the less-than-stellar B550 motherboard/RAM I have kicking around.
Always assuming one can be found.

Am I living in a parallel reality here, where the CPU part of an APU is more important than the iGPU?
Amazing how opinions change.
Lots of people here constantly criticise AMD CPUs for their lack of an IGP, particularly with the current shortage of even half-decent GPUs,
and here is AMD providing a CPU with a much stronger IGP than anything Intel offers, and there's still criticism.
 
In reality, those APUs may well be easier and thus cheaper to make than the CPUs, as they don't need a separate IO die that has to be packaged together with the CPU chiplet, and APUs also carry less cache. They are meant to be cheaper so they can be sold mainly to OEMs, and they have lower power draw and temperatures.
Why would they be easier or cheaper to make? They are effectively the same CPU, minus the chiplet downsides, plus the APU bits.
Renoir is 156 mm²; Cezanne is bigger, with the best guess for now (based on die shots/diagrams) being ~175 mm². It's a monolithic 7 nm chip, more than twice the size of a Zen 3 CCD.
 
Why does Userbenchmark always trigger people?
The controversy is about their "effective speed" number and the weights they apply to benchmark results at different core counts to calculate it. So their rankings and effective-speed figures are crap.
The benchmark results themselves are fine; they give a pretty good indication of the different performance aspects of a CPU, in this case. In a tech-minded community like TPU, looking at those should be OK.

The 5700G gets about the same single-core performance as the 5800X, better results in the 4-core (+4%) and 8-core (+12%) tests, but is around 5% slower in the 64-core test. The last part is probably the TDP and power limit showing up.
For some reason the 5700G seems to lag behind the 5800X in memory latency, though. Interesting.
They are banned even from the Intel subreddit. Just try reading some of their reviews where they compare Intel with AMD.
 
Meh

Show me real world game benchmarks against an R5 3400G and the price.
 
In reality, those APUs may well be easier and thus cheaper to make than the CPUs, as they don't need a separate IO die that has to be packaged together with the CPU chiplet, and APUs also carry less cache. They are meant to be cheaper so they can be sold mainly to OEMs, and they have lower power draw and temperatures.
You don't seem to understand how modern semiconductor fabs work.

The larger the die area, the more wafer it uses, so the fewer chips you get per wafer, which pushes up the price by a squared relationship (because area increases as a square relationship, not a linear relationship).

Ignoring that first exponential cost increase, there's still the issue of yields: the chance of a die being defective increases exponentially with area, because area increases as a square whilst defects per unit area are a linear constant.

Ignoring that second exponential cost increase, there's the third issue of edge wastage. The larger a die, the larger the "steps" around the edge of the circular wafer that don't fit a whole die. The only way to fit a square peg into a round hole is with gaps, and those gaps are unsellable, wasted (but expensive) silicon wafer.

So, as the transistor count of a CPU increases linearly, you have two separate exponential increases to multiply by, and then another linear cost increase on top of that. Big chips are super expensive.

Using any of the available semiconductor yield calculators, you'll be able to see that AMD can get either ~300 8-core Zen 3 chiplets out of a wafer, or ~130 8-core APUs out of the exact same wafer. Whatever they choose to charge for it, the APU silicon is (300/130) about 2.3x as expensive for them to make. We're lucky AMD only charge a small premium for their APUs, because 2.3x the cost of a Ryzen 7 5800 is a thousand dollars.
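As a rough sketch of that arithmetic (the die areas, defect density and the simple Poisson yield model below are illustrative assumptions, not AMD or TSMC figures), the common yield calculators boil down to something like this:

```python
import math

# Illustrative assumptions, not real fab data.
WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.09

def dies_per_wafer(die_area_mm2):
    """Common approximation: gross dies minus losses around the wafer edge."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2):
    """Fraction of dies with zero defects under a simple Poisson defect model."""
    return math.exp(-DEFECTS_PER_CM2 * die_area_mm2 / 100)

for name, area in [("Zen 3 CCD, ~81 mm^2", 81), ("Cezanne APU, ~180 mm^2", 180)]:
    candidates = dies_per_wafer(area)
    good = int(candidates * poisson_yield(area))
    print(f"{name}: {candidates} candidates/wafer, ~{good} good dies/wafer")

# The bigger monolithic die loses twice: fewer candidates fit on the wafer,
# and a larger share of them catch a defect.
```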
 
Well, it smashes the 3700X, which is impressive, and these things OC very well. If they had released this several months earlier and not just for OEMs, I would have jumped on the 5700G for my new build. Now I'll wait for Zen 4-based APUs with RDNA graphics.
 
You don't seem to understand how modern semiconductor fabs work.

The larger the die area, the more wafer it uses, so the fewer chips you get per wafer, which pushes up the price by a squared relationship (because area increases as a square relationship, not a linear relationship).

Ignoring that first exponential cost increase, there's still the issue of yields: the chance of a die being defective increases exponentially with area, because area increases as a square whilst defects per unit area are a linear constant.

Ignoring that second exponential cost increase, there's the third issue of edge wastage. The larger a die, the larger the "steps" around the edge of the circular wafer that don't fit a whole die. The only way to fit a square peg into a round hole is with gaps, and those gaps are unsellable, wasted (but expensive) silicon wafer.

So, as the transistor count of a CPU increases linearly, you have two separate exponential increases to multiply by, and then another linear cost increase on top of that. Big chips are super expensive.

Using any of the available semiconductor yield calculators, you'll be able to see that AMD can get either ~300 8-core Zen 3 chiplets out of a wafer, or ~130 8-core APUs out of the exact same wafer. Whatever they choose to charge for it, the APU silicon is (300/130) about 2.3x as expensive for them to make. We're lucky AMD only charge a small premium for their APUs, because 2.3x the cost of a Ryzen 7 5800 is a thousand dollars.
Tons of theoretical bullshit for a chip with an area below 200 mm². Maybe you'd be right if we were talking about a big, complex structure of 500 mm²+, because there the difficulty really does increase exponentially. Cezanne is not the smallest chip, but it is a small chip. As for how modern fabs work: with modern software it's mostly automatic, done by computers; nobody manually places every transistor on the die. Hand layout stayed somewhere back in the '60s or '70s of the 20th century.
 
If the R7 5700GE with its 35 W TDP isn't far behind (and it's coming to DIY desktops as promised), then I guess my R3 3100 will have a worthy replacement.
 
You don't seem to understand how modern semiconductor fabs work.

The larger the die area, the more wafer it uses, so the fewer chips you get per wafer, which pushes up the price by a squared relationship (because area increases as a square relationship, not a linear relationship).

Ignoring that first exponential cost increase, there's still the issue of yields: the chance of a die being defective increases exponentially with area, because area increases as a square whilst defects per unit area are a linear constant.

Ignoring that second exponential cost increase, there's the third issue of edge wastage. The larger a die, the larger the "steps" around the edge of the circular wafer that don't fit a whole die. The only way to fit a square peg into a round hole is with gaps, and those gaps are unsellable, wasted (but expensive) silicon wafer.

So, as the transistor count of a CPU increases linearly, you have two separate exponential increases to multiply by, and then another linear cost increase on top of that. Big chips are super expensive.

Using any of the available semiconductor yield calculators, you'll be able to see that AMD can get either ~300 8-core Zen 3 chiplets out of a wafer, or ~130 8-core APUs out of the exact same wafer. Whatever they choose to charge for it, the APU silicon is (300/130) about 2.3x as expensive for them to make. We're lucky AMD only charge a small premium for their APUs, because 2.3x the cost of a Ryzen 7 5800 is a thousand dollars.
You seem to ignore important financial and manufacturing aspects of Ryzen CPUs. Since the IO die is made at GF, they need to ship those dies to another facility, where they have to be packaged together with the Zen chiplet. If that procedure fails, both dies are a waste. That also costs time and money on top of making the dies themselves, which also have <100% yields. So, when we account for the whole procedure, and with TSMC's 7 nm yields having been better than great for months now, the APU could be about as cheap as, or even cheaper than, the CPUs to make.
 
You seem to ignore important financial and manufacturing aspects of Ryzen CPUs. Since the IO die is made at GF, they need to ship those dies to another facility, where they have to be packaged together with the Zen chiplet. If that procedure fails, both dies are a waste. That also costs time and money on top of making the dies themselves, which also have <100% yields. So, when we account for the whole procedure, and with TSMC's 7 nm yields having been better than great for months now, the APU could be about as cheap as, or even cheaper than, the CPUs to make.
How is the failure rate of that packaging any different from the failure rate of packaging a larger monolithic die anyway? You're introducing a constant that applies to both big monolithic and small MCM products alike; it's not relevant.
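As a rough way to see why (all numbers below are hypothetical assumptions, not real AMD/GF/TSMC yield data), here's a sketch comparing the good-part rate of a two-die package against a single larger monolithic die when the same assembly yield is applied to both:

```python
import math

DEFECTS_PER_CM2 = 0.09        # assumed defect density, same for both processes
PACKAGING_YIELD = 0.995       # assumed assembly success rate, applied to both

def die_yield(area_mm2):
    """Zero-defect probability under a simple Poisson model."""
    return math.exp(-DEFECTS_PER_CM2 * area_mm2 / 100)

# Chiplet part: one ~81 mm^2 CCD plus one ~125 mm^2 IO die (hypothetical areas)
chiplet_part = die_yield(81) * die_yield(125) * PACKAGING_YIELD
# Monolithic APU: one ~180 mm^2 die (hypothetical area)
monolithic_part = die_yield(180) * PACKAGING_YIELD

print(f"chiplet part yield:    {chiplet_part:.3f}")
print(f"monolithic part yield: {monolithic_part:.3f}")

# With the same assembly yield on both sides, packaging failures are a wash;
# the cost difference comes mainly from how much 7 nm wafer each part consumes.
```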

Tons of theoretical bullshit for a chip with an area below 200 mm². Maybe you'd be right if we were talking about a big, complex structure of 500 mm²+, because there the difficulty really does increase exponentially. Cezanne is not the smallest chip, but it is a small chip. As for how modern fabs work: with modern software it's mostly automatic, done by computers; nobody manually places every transistor on the die. Hand layout stayed somewhere back in the '60s or '70s of the 20th century.
Not theoretical bullshit, calculated using one of many similar, industry-standard tools. More of the wafer on the right is white, orange or magenta; less of it is green.

[Attached screenshots: wafer/yield calculator results for the Zen 2 chiplet and the Renoir APU]


Above is the Zen 2 8C/16T chiplet vs the Renoir 8C/16T APU. AMD actually get more than twice as many Zen 2 chiplets for the same money. Even ignoring the exponents, the sheer die-area increase alone is enough to cause a significant cost increase in the end product. AMD pay TSMC the same per wafer, and the same to run that wafer through the fab, no matter what product they etch into it. Put yourself in AMD's shoes and ask yourself whether you want to sell 780 products per wafer at $450 each or 322 products per wafer at $470 each. It's okay, I'll wait...
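Working through the post's own per-wafer figures (the counts and prices are the poster's illustrative numbers, not confirmed AMD data):

```python
# Revenue per 7 nm wafer using the figures quoted above (illustrative only).
chiplet_parts, chiplet_price = 780, 450   # Zen 2 style chiplets per wafer
apu_parts, apu_price = 322, 470           # Renoir-sized APUs per wafer

print(f"chiplet route: {chiplet_parts} x ${chiplet_price} = ${chiplet_parts * chiplet_price:,}")
print(f"APU route:     {apu_parts} x ${apu_price} = ${apu_parts * apu_price:,}")

# Roughly $351,000 vs $151,340 of product per wafer of 7 nm silicon
# (ignoring the separate 12 nm IO die the chiplet route also needs).
```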
 