Monday, January 22nd 2018

AMD Reveals Specs of Ryzen 2000G "Raven Ridge" APUs

AMD today revealed specifications of its first desktop socket AM4 APUs based on the "Zen" CPU micro-architecture, the Ryzen 2000G "Raven Ridge" series. The chips combine a quad-core "Zen" CPU with an integrated graphics core based on the "Vega" graphics architecture, with up to 11 NGCUs, amounting to 704 stream processors. The company is initially launching two SKUs, the Ryzen 3 2200G and the Ryzen 5 2400G. Besides clock speeds, the two are differentiated by the Ryzen 5 featuring CPU SMT and more iGPU stream processors. The Ryzen 5 2400G is priced at USD $169, while the Ryzen 3 2200G goes for $99. Both parts will be available on the 12th of February, 2018.

The Ryzen 5 2400G features a 4-core/8-thread CPU clocked at 3.60 GHz, with a boost frequency of 3.90 GHz; 2 MB of L2 cache (512 KB per core) and 4 MB of shared L3 cache; and Radeon Vega 11 graphics (the 11 denoting NGCU count), featuring 704 stream processors. The iGPU engine clock is set at 1250 MHz. The integrated memory controller supports up to 64 GB of dual-channel DDR4-2933 memory. The Ryzen 3 2200G is a slightly cut-down part. Lacking SMT, its 4-core/4-thread CPU ticks at 3.50 GHz, with a 3.70 GHz boost. Its CPU cache hierarchy is unchanged; the iGPU features only 8 of the 11 NGCUs, which translates to 512 stream processors. The iGPU engine clock is set at 1100 MHz. Both parts feature unlocked CPU clock multipliers, have their TDP rated at 65 W, and include AMD Wraith Stealth cooling solutions.
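For reference, the stream-processor counts above follow directly from the GCN/Vega layout of 64 shaders per compute unit; a minimal sketch of that arithmetic (the helper name is just for illustration):

```python
# Each GCN/Vega compute unit (NGCU) contains 64 stream processors.
SP_PER_CU = 64

def stream_processors(ngcus: int) -> int:
    """Total stream processors for a given NGCU count."""
    return ngcus * SP_PER_CU

print(stream_processors(11))  # Ryzen 5 2400G (Vega 11): 704
print(stream_processors(8))   # Ryzen 3 2200G (Vega 8): 512
```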

97 Comments on AMD Reveals Specs of Ryzen 2000G "Raven Ridge" APUs

#76
eidairaman1
The Exiled Airman
"TheinsanegamerN said:
That is an OPINION, good sir, unless you can scientifically prove that Raven Ridge is underwhelming. Please, conduct those experiments and submit them to peer-reviewed sources; I'll wait.

You go on and on about how underwhelming the iGPU is, but can't state why. The new G series is much more powerful than the outgoing Kaveri-based models on desktop, and shows massive performance jumps on laptops. The only way this could be underwhelming is if you were expecting way more. As someone who has used APUs in the past, these Raven Ridge parts look fantastic, and represent a large jump in performance for integrated graphics, setting the bar much higher and making cards like the 1030 and 1050 increasingly irrelevant. How you can call something like this, at this low a price, underwhelming is a mystery to me.

Please, do explain what about this is underwhelming. The iGPU is underwhelming? Why? What about it is not living up to expectations?
Just a troll, don't feed him. Whine baby is all he is.
#77
trparky
OK, so if that's the only way to benchmark the XBOX One then why are we arguing here?
#78
Patriot
"trparky said:
OK, so if that's the only way to benchmark the XBOX One then why are we arguing here?
Beats me.
#79
slehmann
"Patriot said:
Historically only supported on the pro variants...but it really is up to the motherboards.
Well, the normal non-Pro Ryzen desktop CPUs all support ECC.
So yes, it might be up to the motherboards, but the Pro line is not an argument.
The Bulldozers supported it too, and I'm not aware of any Bulldozer Pro line.
#80
GoldenX
The old normal Phenoms (and Athlon and Sempron variants) also supported ECC.
#81
bug
"GoldenX said:
Same old GCN of previous APUs plus Vega's improvements and additional DDR4 bandwidth, do the math.
That only gives us an estimate, not a solid reason to say "The IGP performance looks pretty good".
#82
GoldenX
"bug said:
That only gives us an estimate, not a solid reason to say "The IGP performance looks pretty good".
Previous APUs had the best IGPs; this is a Zen CPU with a newer iteration of the arch (Vega), a bigger CU count, better clocks, and less bandwidth starvation than before.
It's going to be the best IGP save for the Intel+Vega combo, which isn't sold as a standalone product for DIY clients, or the Nintendo Switch, which is a console.
Anything more you expect from it is on you. You want 1080p ultra gaming? Go for a dedicated video card; don't ask a small CPU+GPU combo limited to a 65 W envelope to do that.
#83
Valantar
"GoldenX said:
Previous APUs had the best IGPs; this is a Zen CPU with a newer iteration of the arch (Vega), a bigger CU count, better clocks, and less bandwidth starvation than before.
It's going to be the best IGP save for the Intel+Vega combo, which isn't sold as a standalone product for DIY clients, or the Nintendo Switch, which is a console.
Anything more you expect from it is on you. You want 1080p ultra gaming? Go for a dedicated video card; don't ask a small CPU+GPU combo limited to a 65 W envelope to do that.
The Switch? The Switch runs a variant of the Nvidia Tegra X1, which has 256 Maxwell cores. At best, it runs slightly over 1 GHz when docked. In other words: these APUs will stomp all over the Switch. Given that the Switch is based on mobile hardware, this is not exactly surprising. Heck, I wouldn't be too surprised if these chips could run emulated Switch games (when the emulators arrive) at full resolution and speed.

Other than that, you're pretty much right, even if you could express it slightly less aggressively :p
#84
GoldenX
"Valantar said:
The Switch? The Switch runs a variant of the Nvidia Tegra X1, which has 256 Maxwell cores. At best, it runs slightly over 1 GHz when docked. In other words: these APUs will stomp all over the Switch. Given that the Switch is based on mobile hardware, this is not exactly surprising. Heck, I wouldn't be too surprised if these chips could run emulated Switch games (when the emulators arrive) at full resolution and speed.

Other than that, you're pretty much right, even if you could express it slightly less aggressively :p
It's the best Nvidia IGP.
Now I want a Threadripper with a full Vega IGP with HBM.
#85
Mussels
Moderprator
You can't use a framerate comparison between Xbox and PS4 based on recording the outputs, for the simple reason that they use different graphical settings.

You can compare which is smoother or better looking from a user-experience point of view, but it IS NOT, and CAN NOT be, used as a performance benchmark unless the graphical details are 100% the same on both platforms (complicate that even further for games that use different settings for the various hardware models of each console).
#86
Xzibit
Almost as entertaining as

https://www.youtube.com/watch?v=7tTfL-DtpXk
#87
IceShroom
"cadaveca said:
You need to pay more attention to our reviews. I covered a kit some time ago that is 2666 JEDEC-spec, right here:

https://www.techpowerup.com/reviews/AData/XPG_SPECTRIX_D40/

it boots @ 2666 MHz without any user intervention. The XMP profile offered then tightens the timings. There are also kits with a 2666 MHz JEDEC profile in SO-DIMMs, and brands like G.Skill will have kits soon (I have spoken to all brands I deal with regularly and have asked for such; G.Skill said they would release a new line-up with a 2666 MHz default profile).
I had exams at that time. Besides that kit and some laptop kits, how many kits support 2666 MHz JEDEC speed [Ryzen launched March 2017 and Skylake-X with 2666 launched June 2017]? Even the G.Skill Flare X 3200 has a 2400 MHz JEDEC profile instead of 2666 MHz [Ryzen supports DDR4-2133/2400/2666 JEDEC].
#88
GoldenX
Why is JEDEC so slow to implement higher frequencies?
#89
bug
"GoldenX said:
Previous APUs had the best IGPs; this is a Zen CPU with a newer iteration of the arch (Vega), a bigger CU count, better clocks, and less bandwidth starvation than before.
It's going to be the best IGP save for the Intel+Vega combo, which isn't sold as a standalone product for DIY clients, or the Nintendo Switch, which is a console.
Anything more you expect from it is on you. You want 1080p ultra gaming? Go for a dedicated video card; don't ask a small CPU+GPU combo limited to a 65 W envelope to do that.
Right. So you couldn't play intensive games on previous IGPs, you still can't play them on this one. But this is going to benchmark better. Great.
#90
Valantar
"bug said:
Right. So you couldn't play intensive games on previous IGPs, you still can't play them on this one. But this is going to benchmark better. Great.
If "intensive games" are AAA titles and the like, then probably not, at least not at 1080p high/ultra. That's to be expected for a <100W APU, given that you can't do that on low-end dGPUs in the 50-75W class either. In other words: you need a far larger iGPU for that to be an option. The question here isn't whether it'll benchmark better, but if it'll offer playable frame rates for popular games at reasonable resolutions. AAA at 720p high, esports at 1080p medium/high, and so on. I want to be able to play Rocket League on my HTPC, which my current A8-7600 is incapable of (even with DDR3-1866 and a ~150MHz GPU OC).
"GoldenX said:
It's the best Nvidia IGP.
Now I want a Threadripper with a full Vega IGP with HBM.
Yes, it is. But you said
"GoldenX said:
It's going to be the best IGP save for the Intel+Vega combo [...] or the Nintendo Switch[..].
In other words, you were saying that the Switch is more powerful. If that wasn't what you meant, you need to work on your wording :)

Threadripper isn't necessary though, it's pretty useless for gaming. Full Ryzen 8c16t die with on-package 24CU Vega Mobile GPU and HBM, though? Yes please.
#91
bug
"Valantar said:
If "intensive games" are AAA titles and the like, then probably not, at least not at 1080p high/ultra. That's to be expected for a <100W APU, given that you can't do that on low-end dGPUs in the 50-75W class either. In other words: you need a far larger iGPU for that to be an option. The question here isn't whether it'll benchmark better, but if it'll offer playable frame rates for popular games at reasonable resolutions. AAA at 720p high, esports at 1080p medium/high, and so on. I want to be able to play Rocket League on my HTPC, which my current A8-7600 is incapable of (even with DDR3-1866 and a ~150MHz GPU OC).
@GoldenX has been sly enough to not tell us what he was referring to when he deemed this "good". I'm not gonna try to guess what he meant.
For me, there's a clear cut: the IGP is for desktop work; for games you need at least a mid-range dGPU. Anything in between is just money down the drain: it will not play games well, and it will not make Office or your browser of choice perform any better.

Let's just wait for benchmarks and see where this stands, instead of passing on definitive judgment based on pictures and impressions, ok?
#92
Valantar
"bug said:
@GoldenX has been sly enough to not tell us what he was referring to when he deemed this "good". I'm not gonna try to guess what he meant.
For me, there's a clear cut: the IGP is for desktop work; for games you need at least a mid-range dGPU. Anything in between is just money down the drain: it will not play games well, and it will not make Office or your browser of choice perform any better.

Let's just wait for benchmarks and see where this stands, instead of passing on definitive judgment based on pictures and impressions, ok?
I'm with you on that last part, but I disagree a bit on the "value of an iGPU" part. Of course, I'm in the rather privileged position of having a gaming desktop, an HTPC, a desktop workstation and three laptops in a two-person household. As such, I don't exactly need the iGPU to be the be-all, end-all in gaming performance.

Still, given the good CPU performance we know Zen cores deliver, with a "good enough for esports" iGPU, these chips would make for amazing budget-level gaming boxes with the option of adding a dGPU down the line. This argument was made for the previous A-series also, but those fell flat: weak CPU performance made them a poor match for anything but a bargain-bin dGPU. I'm hoping this one manages to live up to the expectation (i.e. runs DOTA, CS:GO, RL and Overwatch at 1080p medium or so), as that could be a great market-share win for AMD. Which, again, brings me to why I find it worth my time to speculate on and discuss. Of course, we can't know until we see benchmarks, so the discussion is ultimately inconclusive. That doesn't mean it's a waste of time, though :P

"Mussels said:
You can't use a framerate comparison between Xbox and PS4 based on recording the outputs, for the simple reason that they use different graphical settings.

You can compare which is smoother or better looking from a user-experience point of view, but it IS NOT, and CAN NOT be, used as a performance benchmark unless the graphical details are 100% the same on both platforms (complicate that even further for games that use different settings for the various hardware models of each console).
This is just plain misunderstood. Of course you can compare them even if they have different graphical settings. It's even quite simple, as they both have fixed (non-user-adjustable) graphical settings. It simply adds another variable, meaning that you have to stop asking "which performs better at X settings?" and start asking "which can deliver the best combination of performance and graphical fidelity for a given piece of software?". This does of course mean that the answers might also become more complex, with possibilities such as better graphical fidelity but worse performance. But how is that really a problem? This ought to be far easier to grasp for the average user than the complexities of frame timing and 99th percentile smoothness, after all.

You seem to say that performance benchmarks are only useful if they can provide absolute performance rankings. This is quite oversimplified: the point of a game performance benchmark is not ultimately to rank absolute performance, but to show which components/systems can run [software X] at the most ideal settings for the end user. Ultimately, absolute performance doesn't matter to the end user - perceived performance does. If a less powerful system lowers graphical fidelity imperceptibly and manages to match a more powerful system, they are for all intents and purposes identical to the end user. To put it this way: if a $100 GPU lets you play your favourite games at ~40fps, and a $150 GPU runs them at ~60fps, that's perceivable for most users. On the other hand, if a $120 GPU runs them at ~45fps, is there any reason to buy the more expensive GPU? Arguably not.

Absolute performance doesn't matter unless it's perceivable. Of course, this also brings into question the end user's visual acuity and sensitivity to various graphical adjustments, which is highly subjective (and at least partially placebo-ish, not entirely unlike the Hi-Fi scene), but that doesn't change if you instead try measuring absolute performance. The end user still has to know their own sensitivity to these things, and ultimately decide what matters to them. That's not the job of the reviewer. The job of the reviewer is to inform the end user in the best way possible, which multi-variable testing arguably does better than absolute performance rankings (as those have a placebo-reinforcing effect, and ignore crucial factors of gameplay).
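The trade-off described above can be reduced to a toy model: pick the cheapest card whose frame rate is within some subjective perceptibility margin of the fastest option. The 25% margin and the price/fps tuples below are illustrative assumptions taken from the example, not measurements:

```python
# Toy model of the value argument above: choose the cheapest GPU whose
# frame rate is within a perceptibility margin of the fastest option.
def best_value(options, margin=0.25):
    """options: list of (price_usd, avg_fps) tuples.
    margin: fractional fps deficit assumed imperceptible (an assumption)."""
    best_fps = max(fps for _, fps in options)
    acceptable = [o for o in options if o[1] >= best_fps * (1 - margin)]
    return min(acceptable)  # cheapest of the "close enough" options

# Hypothetical cards from the example above.
cards = [(100, 40), (120, 45), (150, 60)]
print(best_value(cards))  # -> (120, 45): the $150 card arguably isn't worth it
```

The whole argument is of course packed into the `margin` parameter, which stands in for the end user's subjective sensitivity.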
#93
lexluthermiester
"GoldenX said:
Why is JEDEC so slow to implement higher frequencies?
They establish standards, based on research and development, that are guaranteed to work reliably. Companies are free to set their own performance baseline parameters. So think of JEDEC as a set of standardized guidelines. Those guidelines change as technologies progress and are tested to be stable. Does that make sense?
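To illustrate what those standardized grades mean in practice, here's a rough sketch converting JEDEC DDR4 speed grades into theoretical peak dual-channel bandwidth (simple arithmetic in decimal GB/s, not measured figures):

```python
# Standard JEDEC DDR4 speed grades, in MT/s.
JEDEC_DDR4 = [1600, 1866, 2133, 2400, 2666, 2933, 3200]

def peak_bandwidth_gbs(mt_s: int, channels: int = 2) -> float:
    """Theoretical peak bandwidth: transfers/s x 8 bytes per 64-bit
    channel, times the channel count. Returns decimal GB/s."""
    return mt_s * 8 * channels / 1000

for grade in JEDEC_DDR4:
    print(f"DDR4-{grade}: {peak_bandwidth_gbs(grade):.1f} GB/s dual-channel")
```

At DDR4-2933 (the top speed the 2400G officially supports), the dual-channel peak works out to about 46.9 GB/s, which the iGPU has to share with the CPU cores.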
#94
solitude
At least put a very low-end iGPU in the next Ryzen, enough for web browsing, editing documents, etc., and I'll go for it.
#95
Valantar
"solitude said:
At least put a very low-end iGPU in the next Ryzen, enough for web browsing, editing documents, etc., and I'll go for it.
If they continue the strategy of having one die across the desktop, workstation and enterprise/server segments, I doubt that will happen. Even with a tiny iGPU, it's a significant area increase (needs not only the GPU cores, but encode/decode and other GPU uncore parts) which gives them nothing of any real value outside of the desktop segment. Not to mention that it'll probably occupy some IF lanes, which are plentiful in the desktop space, but invaluable and not expendable in enterprise.
#96
Sammyfed1
I just built a system using the Ryzen 3 2200G:
-Gigabyte A320M-S2H-CF motherboard
-8 GB Patriot DDR4-2400
-ADATA M.2 SSD 128 GB (fast Win10 boot)
-RECADATA Industrial MLC Series SATA3 SSD 128 GB

A $220 system.

I'm amazed at the all-around performance for this price.
The Ryzen 3 2200G's single-thread and multi-thread performance is impressive,
especially when compared to the previous-gen FX chips, which were priced around the same at $99.
The upgradability on AM4 is great, IMO.

The Vega graphics does a decent job at gaming for the price of this APU.
Adding a discrete video card will boost the value even more.

So far so good for the AMD Ryzen series.

*Please note the above is just my opinion, based on one month of use in a Windows 10 gaming/multimedia build.
#97
GoldenX
Same specs, but with a 1 TB HDD, for a client. Excellent performance and great graphics power for his 1600x900 display. Didn't have time to try pushing the 2400 MHz RAM over 3200.

Had time to try Ark (the requested game) and both Persona 5 and Breath of the Wild (I love to benchmark with console emulators). The PS3 emulator ran great save for needing more threads (the PS3 is an eight-core monster, and you have to emulate that); it even had better performance than my 1200 + 270X combo or my brother's G4560 + 1030 one, thanks to the iGPU sharing the RAM, meaning less latency internally when emulating the PS3's components. Different story with Breath of the Wild: the tri-core emulation, full GPU usage, and the bad AMD OpenGL drivers managed to hit the CPU's power throttle, so performance was inferior to the G4560 + 1030 in this case, thanks to the reduced CPU frequency. The same happens with the old FX APUs, but you get 30 FPS with Zen instead of 11.

The weird thing is, it was an A320 motherboard, but it did have overclocking and voltage options.