Friday, January 29th 2021

Samsung Exynos SoC with AMD RDNA GPU Destroys Competition, Apple A14 Bionic SoC Kneels

Some time ago, Samsung and AMD announced that they would be building a mobile processor that uses the AMD RDNA architecture for graphics processing. Samsung is readying its Exynos 2100 SoC, and today we get to see its performance in the first leaked benchmark results. The new SoC design has been put through a series of GPU-only benchmarks that stress just the AMD RDNA GPU. First, in the Manhattan 3 benchmark, the Exynos SoC scored 181.8 FPS. The GPU then scored 138.25 FPS in Aztec Normal and 58 FPS in Aztec High. If we compare those results to the Apple A14 Bionic chip, which scored 146.4 FPS in Manhattan 3, 79.8 FPS in Aztec Normal, and 30.5 FPS in Aztec High, the Exynos design is anywhere from roughly 24% to 90% faster. Of course, given that this is only a leak, all information should be taken with a grain of salt.
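Taking the leaked figures at face value, the per-test advantage is easy to check with a quick back-of-the-envelope script (a minimal sketch; the FPS values below are simply the leaked numbers quoted above):

```python
# Leaked GFXBench scores in FPS: (Exynos with RDNA GPU, Apple A14 Bionic)
scores = {
    "Manhattan 3":  (181.8, 146.4),
    "Aztec Normal": (138.25, 79.8),
    "Aztec High":   (58.0, 30.5),
}

for test, (exynos, a14) in scores.items():
    speedup = (exynos / a14 - 1) * 100  # percentage advantage over the A14
    print(f"{test}: +{speedup:.0f}%")
```

That works out to roughly +24%, +73%, and +90%, respectively.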
Source: Tom's Hardware

43 Comments on Samsung Exynos SoC with AMD RDNA GPU Destroys Competition, Apple A14 Bionic SoC Kneels

#1
watzupken
I am actually excited to see RDNA2 in the mobile space, but I'm skeptical about it being paired with Exynos. I hope we won't see any more custom cores from Samsung, i.e. Mongoose. I'd rather they take the reference design from Arm and pair it with RDNA2.
Posted on Reply
#2
Luminescent
It's not like Apple reinvented the wheel with the A14 Bionic; they blatantly borrowed/stole everything they had access to, but they don't have access to the latest technology.
As I read somewhere, Qualcomm's GPU is based on IP bought from AMD many, many years ago, so everything in the mobile phone space is based on old AMD IP that got refreshed with whatever money Qualcomm, and probably Apple, put into R&D.
Samsung made a smart move updating its GPU to the best architecture available, minus ray tracing :D
#3
Kyuta
It was expected; desktop architectures are more powerful than their mobile counterparts. For example, the Tegra X1 with its 256 cores is still the most powerful mobile GPU even after nearly six years of duty.
#4
ViperXTR
Kyuta
It was expected; desktop architectures are more powerful than their mobile counterparts. For example, the Tegra X1 with its 256 cores is still the most powerful mobile GPU even after nearly six years of duty.
The one in the Nintendo Switch?
#5
Logoffon
watzupken
I am actually excited to see RDNA2 in the mobile space, but I'm skeptical about it being paired with Exynos. I hope we won't see any more custom cores from Samsung, i.e. Mongoose. I'd rather they take the reference design from Arm and pair it with RDNA2.
Given that they shut down their custom design group a year ago, and the 2100 uses X1/A78/A55 cores, the next chip will definitely use Arm's reference cores.
#6
davideneco
Just saying:

This score leaked in May 2020.
#7
BorisDG
I remember the custom Mongoose chip was faster than the competition in a few instances because of its big cache. And the chip itself was big.
#8
medi01
As far as I understood AnandTech's article, the 5000-series mobile Ryzen is essentially the 4000 series with just the CPU cores going from Zen 2 to Zen 3, while the GPU side of things is exactly the same apart from a 15% bump in max clocks (still Vega).

It was claimed that they could leave most of the chip intact and just swap the cores.

RDNA2 cores going into mobile chips will be disruptive.
I hope someone rolls out an OLED notebook with a Ryzen 6000 series (doesn't need to be 4K) for under 2,000 euro.
#9
R0H1T
More cache as well; the 5xxx mobile chips were aimed at "disrupting the market" & taking even more market share, especially given that TGL-H with 8c/16t is nowhere to be seen. The next iteration is rumored to be Zen 3(+) with an RDNA2 IGP on 6 nm, which will be even worse for Intel. At this point the only thing Intel has going for them is the superior IGP & something like :shadedshu:
Amd/comments/l01p4t
#10
TumbleGeorge
medi01
As far as I understood AnandTech's article, the 5000-series mobile Ryzen is essentially the 4000 series with just the CPU cores going from Zen 2 to Zen 3
+ 2x the cache size... a little more than "essentially".
#11
Flanker
This could be a dev board with a huge heatsink. I'll wait for benchmarks of a phone or tablet.
#12
Chrispy_
medi01
As far as I understood AnandTech's article, the 5000-series mobile Ryzen is essentially the 4000 series with just the CPU cores going from Zen 2 to Zen 3, while the GPU side of things is exactly the same apart from a 15% bump in max clocks (still Vega).

It was claimed that they could leave most of the chip intact and just swap the cores.

RDNA2 cores going into mobile chips will be disruptive.
This.

AMD haven't prioritised IGP performance at all for the entirety of Zen mobile. My 2700U is still close enough in graphics performance to the latest Cezanne 5000 series, according to early reviews. No matter how much you dress it up with faster RAM and higher clocks, Vega 8 was a downgrade from the Vega 10 in the first-gen mobile Ryzens - and that's assuming you could even find a Vega 8. There were so few 4800Us that many of the popular reviewers gave up waiting and reviewed the 4700U instead! With CL14 DDR4-2400 and a (temporarily) unrestricted power budget, my three-year-old 2700U was pretty comparable to the 4700U, just because it has 10 CUs to the 4700U's 7. Also, at 1080p and under, bandwidth is less of an issue than absolute latency, and most of the DDR4-3200 Renoir models use cheap, high-latency RAM :(

The one and only improvement AMD have made to their IGPs in the last 3.5 years is moving to 7 nm. All that brings to the table is higher clocks within a given cooling envelope, and I have to limit my 14 nm Raven Ridge laptop to 22.5 W for any long-term use. The 35 W needed to reach 1100 MHz+ on the graphics will overwhelm the cooling in under 10 minutes. The performance isn't exactly rocket-science maths though:

    Raven Ridge Vega 10: 10 CU × 1100 MHz = 11000
    Cezanne Vega 7: 7 CU × 1800 MHz = 12600

That's an incredible (/s) 14% improvement over three years and three generations of x700U SKU.
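That figure falls straight out of the numbers above; if you want to check the sum, here it is as a trivial snippet (assuming, crudely, that IGP throughput scales with CU count × clock, ignoring memory and architectural effects):

```python
# Crude IGP throughput proxy: CU count x sustained clock (MHz).
raven_ridge_vega10 = 10 * 1100   # 10 CUs at ~1100 MHz
cezanne_vega7 = 7 * 1800         # 7 CUs at ~1800 MHz

gain = (cezanne_vega7 / raven_ridge_vega10 - 1) * 100
print(f"{gain:.1f}%")  # ~14.5% over three generations
```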

RDNA2 is sorely needed in AMD's APUs. They also need to focus on less die-area for CPU cores and more die area for graphics. Nobody using a 15W laptop is going to care about having 16 threads running - especially since the 15W limit is likely to dial back the clocks of those threads hard, anyway. What would make more sense would be to drop Rembrandt (6000-series) down to 6-cores and potentially free up enough die area to boost the graphics from 8 Vega CUs to 12-15 Navi CUs. Each CPU core is easily as big as three Vega CUs on the existing dies.
#13
GeorgeMan
Chrispy_
This.

AMD haven't prioritised IGP performance at all for the entirety of Zen Mobile. My 2700U is still close enough in graphics performance to the latest Cezanne 5000-series, according to early reviews. No matter how much you dress it up with faster RAM and higher clocks, Vega8 was a downgrade from Vega10 in the first-gen Ryzen mobiles - and that's assuming you could even find a Vega8. There were so few of the 4800U that even many of the popular reviewers gave up waiting and reviewed the 4700U instead! With CL14 DDR4-2400 and (temporarily) unrestricted power budget, my three year old 2700U was pretty comparable to the 4700U, just because it had 10CUs to the 4700U's 7. Also, at 1080p and under, bandwidth is less of an issue than absolute latency, and most of the DDR4-3200 Renoir models are using cheap, high-latency RAM :(

The one and only improvement AMD have made to their IGPs in the last 3.5 years is moving to 7nm. All that brings to the table is higher clocks within any given cooling envelope, and I have to limit my 14nm Raven Ridge laptop to 22.5W for any long-term usage. 35W needed to reach 1100MHz+ on the graphics will overwhelm the cooling in under 10 minutes. The performance isn't exactly rocket-science maths though:

    Raven Ridge Vega 10: 10 CU × 1100 MHz = 11000
    Cezanne Vega 7: 7 CU × 1800 MHz = 12600

That's an incredible (/s) 14% improvement over three years and three generations of x700U SKU.

RDNA2 is sorely needed in AMD's APUs. They also need to focus on less die-area for CPU cores and more die area for graphics. Nobody using a 15W laptop is going to care about having 16 threads running - especially since the 15W limit is likely to dial back the clocks of those threads hard, anyway. What would make more sense would be to drop Rembrandt (6000-series) down to 6-cores and potentially free up enough die area to boost the graphics from 8 Vega CUs to 12-15 Navi CUs. Each CPU core is easily as big as three Vega CUs on the existing dies.
They don't focus on the IGP simply because there isn't enough bandwidth available (DDR4 speed limits) for a huge performance uplift.
This will change with DDR5, but low-end GPUs will already be better by then.
#14
Chrispy_
GeorgeMan
They don't focus on the IGP simply because there isn't enough bandwidth available (DDR4 speed limits) for a huge performance uplift.
This will change with DDR5, but low-end GPUs will already be better by then.
This has been debunked so many times I'm tired of re-linking the various articles. My DDR4-2400 IGP is more than capable of matching DDR4-3200-equipped laptops, too.

Sure, at higher resolutions bandwidth is a real problem, but laptop IGPs don't have the graphics horsepower to run at higher resolutions in the first place. We're aiming for 720p60 or 1080p30, where bandwidth doesn't make a huge amount of difference so long as you're running reasonably low absolute RAM latency and have enough bandwidth to shift the bottleneck over to the graphics CUs. Dual-channel DDR4-3200 is close enough in performance to both my 2400 CL14 and LPDDR4X-4266 that clearly a near-doubling of bandwidth doesn't get anywhere close to a doubling of performance: 15-25% at best, CU-for-CU and clock-for-clock.
#15
Fabio
Luminescent
It's not like Apple reinvented the wheel with the A14 Bionic; they blatantly borrowed/stole everything they had access to, but they don't have access to the latest technology.
As I read somewhere, Qualcomm's GPU is based on IP bought from AMD many, many years ago, so everything in the mobile phone space is based on old AMD IP that got refreshed with whatever money Qualcomm, and probably Apple, put into R&D.
Samsung made a smart move updating its GPU to the best architecture available, minus ray tracing :D
Snapdragon SoCs use Adreno GPUs, and Adreno = Radeon.
They probably bought ATI/AMD IP many years ago, then built their tech up on it. Current AMD IP is a "little" different from the IP Qualcomm bought, so I can bet a Qualcomm/Samsung SoC with a new AMD GPU will totally destroy any Apple SoC with an Imagination Technologies GPU.
#16
xantippe666
Luminescent
It's not like Apple reinvented the wheel with the A14 Bionic; they blatantly borrowed/stole everything they had access to, but they don't have access to the latest technology.
As I read somewhere, Qualcomm's GPU is based on IP bought from AMD many, many years ago, so everything in the mobile phone space is based on old AMD IP that got refreshed with whatever money Qualcomm, and probably Apple, put into R&D.
Samsung made a smart move updating its GPU to the best architecture available, minus ray tracing :D
Qualcomm Adreno = Radeon (previously ATI Imageon). In hindsight, AMD sold off the most lucrative business ever...
#17
renz496
xantippe666
Qualcomm Adreno = Radeon (previously ATI Imageon). In hindsight, AMD sold off the most lucrative business ever...
AMD only sold the mobile GPU IP, and even in the mobile space, having just the GPU IP isn't going to bring in big money. Just look at Imagination.
Luminescent
It's not like Apple reinvented the wheel with the A14 Bionic; they blatantly borrowed/stole everything they had access to, but they don't have access to the latest technology.
As I read somewhere, Qualcomm's GPU is based on IP bought from AMD many, many years ago, so everything in the mobile phone space is based on old AMD IP that got refreshed with whatever money Qualcomm, and probably Apple, put into R&D.
Samsung made a smart move updating its GPU to the best architecture available, minus ray tracing :D
But despite having the best tech, they still can't equip all of their smartphones with their own SoC.
#18
R0H1T
xantippe666
AMD sold off, in hindsight, the most lucrative business ever
Hardly; look at PowerVR & where they ended up. Arm is the biggest & frankly the only "IP" owner making big bucks in that space; everywhere else it's just a bloodbath, unless of course you're Qualcomm :D
#19
tfdsaf
As far as I'm aware they are using a Vega GPU, not RDNA or RDNA2; it's a custom Vega GPU with some RDNA features in it. But even existing mobile GPUs are using old AMD patents, since AMD decided to sell a ton of its patents about a decade ago when it was struggling for money.
#20
remunramu
I'm really excited for this, since my region gets the crappy Exynos.
#22
TheinsanegamerN
Chrispy_
This has been debunked so many times I'm tired of re-linking the various articles. My DDR4-2400 IGP is more than capable of matching DDR4-3200-equipped laptops, too.
LOL, calm down.
Sure, at higher resolutions bandwidth is a real problem, but laptop IGPs don't have the graphics horsepower to run at higher resolutions in the first place. We're aiming for 720p60 or 1080p30, where bandwidth doesn't make a huge amount of difference so long as you're running reasonably low absolute RAM latency and have enough bandwidth to shift the bottleneck over to the graphics CUs. Dual-channel DDR4-3200 is close enough in performance to both my 2400 CL14 and LPDDR4X-4266 that clearly a near-doubling of bandwidth doesn't get anywhere close to a doubling of performance: 15-25% at best, CU-for-CU and clock-for-clock.
And if the GPUs got bigger, as you desire, then they would be capable of higher-resolution gaming and would become bottlenecked by memory bandwidth, which is why AMD hasn't done it. You defanged your OWN argument with this little tidbit.
#23
R0H1T
It'd be great if Samsung allowed other competitors access to their chips; frankly, QC's near-monopoly on high-end phones is just unbearable, & I'm not even talking about their baseband (royalty?) shenanigans!
#24
Totally
Kyuta
It was expected; desktop architectures are more powerful than their mobile counterparts. For example, the Tegra X1 with its 256 cores is still the most powerful mobile GPU even after nearly six years of duty.
But it's still useless, because it sucked up battery life like nobody's business.
#25
Chrispy_
TheinsanegamerN
LOL, calm down.


And if the GPUs got bigger, as you desire, then they would be capable of higher-resolution gaming and would become bottlenecked by memory bandwidth, which is why AMD hasn't done it. You defanged your OWN argument with this little tidbit.
Apologist arguments like that are why AMD's IGPs are so lame at the moment.

We're not talking about enough of a performance jump to move to much higher resolutions; we're just trying to get barely playable games that run at 20-30 FPS to run at maybe 30-50 FPS. Even a 50% performance jump is well within the limits of current DDR4.