
Samsung Exynos SoC with AMD RDNA GPU Destroys Competition, Apple A14 Bionic SoC Kneels

AleksandarK

News Editor
Some time ago, Samsung and AMD announced that they would be building a mobile processor that utilizes the AMD RDNA architecture for graphics processing. Samsung is readying its Exynos 2100 SoC, and today we get to see its performance results in the first leaked benchmark. The new SoC design has been put through a series of GPU-only benchmarks that stress just the AMD RDNA GPU. First up is the Manhattan 3 benchmark, where the Exynos SoC scored 181.8 FPS. The GPU then scored 138.25 FPS in Aztec Normal and 58 FPS in Aztec High. If we compare those results to the Apple A14 Bionic chip, which scored 146.4 FPS in Manhattan 3, 79.8 FPS in Aztec Normal, and 30.5 FPS in Aztec High, the Exynos design is faster by anywhere from roughly 25% to 90%. Of course, given that this is only a leak, all information should be taken with a grain of salt.
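
For those keeping score, here is how the leaked numbers work out (a quick Python sketch; the only inputs are the leaked FPS figures quoted above):

# Leaked GFXBench scores (FPS): Exynos w/ RDNA GPU vs. Apple A14 Bionic
leaked = {
    "Manhattan 3":  (181.8, 146.4),
    "Aztec Normal": (138.25, 79.8),
    "Aztec High":   (58.0, 30.5),
}
for test, (exynos, a14) in leaked.items():
    # relative advantage of the Exynos GPU over the A14
    print(f"{test}: +{exynos / a14 - 1:.0%}")
# Output: Manhattan 3: +24%, Aztec Normal: +73%, Aztec High: +90%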


View at TechPowerUp Main Site
 
I am actually excited to see RDNA2 in the mobile space, but I am skeptical when it is paired with Exynos. I hope we won't see any more custom cores from Samsung, e.g. Mongoose. I'd rather they take the reference designs from Arm and pair them with RDNA2.
 
It's not like Apple reinvented the wheel with the A14 Bionic; they blatantly borrowed/stole everything they had access to, but they don't have access to the latest technology.
As I read somewhere, Qualcomm's GPU is based on IP bought from AMD many, many years ago, so everything in the mobile phone space is based on old AMD IP that got refreshed with whatever money Qualcomm (and probably Apple) put into R&D.
Samsung made a smart move updating its GPU to the best architecture available, except for ray-tracing :D
 
It was expected; desktop architectures are more powerful than their mobile counterparts. For example, the Tegra X1 is still the most powerful mobile GPU with its 256 cores, even after nearly 6 years of duty.
 
It was expected; desktop architectures are more powerful than their mobile counterparts. For example, the Tegra X1 is still the most powerful mobile GPU with its 256 cores, even after nearly 6 years of duty.
The one in the Nintendo Switch?
 
I am actually excited to see RDNA2 in the mobile space, but I am skeptical when it is paired with Exynos. I hope we won't see any more custom cores from Samsung, e.g. Mongoose. I'd rather they take the reference designs from Arm and pair them with RDNA2.
Given that they shut down their custom design group a year ago, and the 2100 uses X1/A78/A55 cores, the next chip will definitely use Arm's reference cores.
 
I remember the custom Mongoose chip was faster than the competitors in a few instances because of its big cache. And the chip itself was big.
 
As far as I understood AnandTech's article, the 5000-series mobile Ryzen is essentially the 4000 series with just the CPU part going from Zen 2 to Zen 3, while the GPU side of things is exactly the same, with a 15% bump in max clocks (still Vega).

It was claimed that they could leave most of the chip intact and just swap the cores.

RDNA2 cores going into mobile chips will be disruptive.
I hope someone rolls out an OLED notebook with a Ryzen 6000-series chip (it doesn't need to be 4K) for under 2,000 euros.
 
More cache as well. The 5xxx mobile chips were aimed to "disrupt the market" and take even more market share, especially given that TGL-H with 8c/16t is nowhere to be seen. The next iteration is rumored to be Zen 3(+) with an RDNA2 IGP on 6nm; that'll be even worse for Intel. At this point in time, the only thing Intel's got going for them is the superior IGP and something like :shadedshu:
 
This could be a dev board with a huge heatsink. I'll wait for benchmarks of a phone or tablet.
 
As far as I understood AnandTech's article, the 5000-series mobile Ryzen is essentially the 4000 series with just the CPU part going from Zen 2 to Zen 3, while the GPU side of things is exactly the same, with a 15% bump in max clocks (still Vega).

It was claimed that they could leave most of the chip intact and just swap the cores.

RDNA2 cores going into mobile chips will be disruptive.
This.

AMD haven't prioritised IGP performance at all for the entirety of Zen Mobile. My 2700U is still close enough in graphics performance to the latest Cezanne 5000-series, according to early reviews. No matter how much you dress it up with faster RAM and higher clocks, Vega 8 was a downgrade from the Vega 10 in the first-gen Ryzen mobiles - and that's assuming you could even find a Vega 8. There were so few 4800Us that even many of the popular reviewers gave up waiting and reviewed the 4700U instead! With CL14 DDR4-2400 and a (temporarily) unrestricted power budget, my three-year-old 2700U was pretty comparable to the 4700U, just because it had 10 CUs to the 4700U's 7. Also, at 1080p and under, bandwidth is less of an issue than absolute latency, and most of the DDR4-3200 Renoir models are using cheap, high-latency RAM :(

The one and only improvement AMD have made to their IGPs in the last 3.5 years is moving to 7nm. All that brings to the table is higher clocks within any given cooling envelope, and I have to limit my 14nm Raven Ridge laptop to 22.5W for any long-term usage. The 35W needed to reach 1100MHz+ on the graphics will overwhelm the cooling in under 10 minutes. The performance isn't exactly rocket-science maths, though:

Raven Ridge Vega 10: 10 CU × 1100 MHz = 11000
Cezanne Vega 7: 7 CU × 1800 MHz = 12600

That's an incredible (/s) 14% improvement over three years and three generations of x700U SKU.
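
In code form, for anyone who wants to plug in other SKUs (a minimal Python sketch; raw CUs × clock is only a crude proxy for shader throughput and ignores memory and power limits):

# Crude IGP throughput proxy: compute units x peak graphics clock (MHz)
raven_ridge_vega10 = 10 * 1100  # 2700U: 11000
cezanne_vega7 = 7 * 1800        # x700U-class Cezanne: 12600
uplift = cezanne_vega7 / raven_ridge_vega10 - 1
print(f"Three generations of IGP 'progress': {uplift:.1%}")  # ~14.5%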

RDNA2 is sorely needed in AMD's APUs. They also need to spend less die area on CPU cores and more on graphics. Nobody using a 15W laptop is going to care about having 16 threads running - especially since the 15W limit is likely to dial back the clocks of those threads hard anyway. What would make more sense would be to drop Rembrandt (6000-series) down to 6 cores and potentially free up enough die area to boost the graphics from 8 Vega CUs to 12-15 Navi CUs. Each CPU core is easily as big as three Vega CUs on the existing dies.
 
This.

AMD haven't prioritised IGP performance at all for the entirety of Zen Mobile. My 2700U is still close enough in graphics performance to the latest Cezanne 5000-series, according to early reviews. No matter how much you dress it up with faster RAM and higher clocks, Vega 8 was a downgrade from the Vega 10 in the first-gen Ryzen mobiles - and that's assuming you could even find a Vega 8. There were so few 4800Us that even many of the popular reviewers gave up waiting and reviewed the 4700U instead! With CL14 DDR4-2400 and a (temporarily) unrestricted power budget, my three-year-old 2700U was pretty comparable to the 4700U, just because it had 10 CUs to the 4700U's 7. Also, at 1080p and under, bandwidth is less of an issue than absolute latency, and most of the DDR4-3200 Renoir models are using cheap, high-latency RAM :(

The one and only improvement AMD have made to their IGPs in the last 3.5 years is moving to 7nm. All that brings to the table is higher clocks within any given cooling envelope, and I have to limit my 14nm Raven Ridge laptop to 22.5W for any long-term usage. The 35W needed to reach 1100MHz+ on the graphics will overwhelm the cooling in under 10 minutes. The performance isn't exactly rocket-science maths, though:

Raven Ridge Vega 10: 10 CU × 1100 MHz = 11000
Cezanne Vega 7: 7 CU × 1800 MHz = 12600

That's an incredible (/s) 14% improvement over three years and three generations of x700U SKU.

RDNA2 is sorely needed in AMD's APUs. They also need to spend less die area on CPU cores and more on graphics. Nobody using a 15W laptop is going to care about having 16 threads running - especially since the 15W limit is likely to dial back the clocks of those threads hard anyway. What would make more sense would be to drop Rembrandt (6000-series) down to 6 cores and potentially free up enough die area to boost the graphics from 8 Vega CUs to 12-15 Navi CUs. Each CPU core is easily as big as three Vega CUs on the existing dies.

They don't focus on the IGP simply because there is not enough bandwidth available (DDR4 speed limits) for a huge performance uplift.
This will change with DDR5, but low-end GPUs will already be better by then.
 
They don't focus on the IGP simply because there is not enough bandwidth available (DDR4 speed limits) for a huge performance uplift.
This will change with DDR5, but low-end GPUs will already be better by then.
This has been debunked so many times I'm tired of re-linking the various articles. My DDR4-2400 IGP is more than capable of matching DDR4-3200-equipped laptops, too.

Sure, at higher resolutions bandwidth is a real problem, but laptop IGPs don't have the graphics horsepower to run at higher resolutions in the first place. We're aiming for 720p60 or 1080p30, where bandwidth doesn't make a huge amount of difference so long as you're running reasonably low absolute RAM latency and have enough bandwidth to shift the bottleneck over to the graphics CUs. Dual-channel DDR4-3200 is close enough in performance to both my 2400 CL14 and LPDDR4X-4266 that clearly a near-doubling of bandwidth doesn't get anywhere close to a doubling of performance. 15-25% at best, CU-for-CU and clock-for-clock.
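
The raw numbers behind that claim, for reference (a back-of-the-envelope Python sketch assuming a 128-bit total memory bus, i.e. dual-channel DDR4 or quad-channel x32 LPDDR4X):

# Peak theoretical bandwidth: transfer rate (MT/s) x 16 bytes per transfer (128-bit bus)
for name, mt_s in [("DDR4-2400", 2400), ("DDR4-3200", 3200), ("LPDDR4X-4266", 4266)]:
    print(f"{name}: {mt_s * 16 / 1000:.1f} GB/s")
# DDR4-2400: 38.4 GB/s | DDR4-3200: 51.2 GB/s | LPDDR4X-4266: 68.3 GB/s
# ~78% more raw bandwidth from 2400 to 4266, yet only ~15-25% more IGP performance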
 
It's not like Apple reinvented the wheel with the A14 Bionic; they blatantly borrowed/stole everything they had access to, but they don't have access to the latest technology.
As I read somewhere, Qualcomm's GPU is based on IP bought from AMD many, many years ago, so everything in the mobile phone space is based on old AMD IP that got refreshed with whatever money Qualcomm (and probably Apple) put into R&D.
Samsung made a smart move updating its GPU to the best architecture available, except for ray-tracing :D

Snapdragon CPUs use Adreno GPUs; Adreno = Radeon (it's an anagram).
They probably bought ATI/AMD IP many years ago, then built their tech up on it. Current AMD IP is a "little" different from the IP Qualcomm bought, so I can bet a Qualcomm/Samsung CPU with a new AMD GPU will totally destroy any Apple SoC with an Imagination Technologies GPU.
 
It's not like Apple reinvented the wheel with the A14 Bionic; they blatantly borrowed/stole everything they had access to, but they don't have access to the latest technology.
As I read somewhere, Qualcomm's GPU is based on IP bought from AMD many, many years ago, so everything in the mobile phone space is based on old AMD IP that got refreshed with whatever money Qualcomm (and probably Apple) put into R&D.
Samsung made a smart move updating its GPU to the best architecture available, except for ray-tracing :D
Qualcomm's ADRENO = RADEON (previously ATI Imageon). AMD sold off what was, in hindsight, the most lucrative business ever...
 
Qualcomm's ADRENO = RADEON (previously ATI Imageon). AMD sold off what was, in hindsight, the most lucrative business ever...
AMD only sold the mobile GPU IP, and even in the mobile space, having just the GPU IP is not going to bring in big money; just look at Imagination.

It's not like Apple reinvented the wheel with the A14 Bionic; they blatantly borrowed/stole everything they had access to, but they don't have access to the latest technology.
As I read somewhere, Qualcomm's GPU is based on IP bought from AMD many, many years ago, so everything in the mobile phone space is based on old AMD IP that got refreshed with whatever money Qualcomm (and probably Apple) put into R&D.
Samsung made a smart move updating its GPU to the best architecture available, except for ray-tracing :D
But despite having the best tech, they still cannot equip all of their smartphones with their own SoCs.
 
As far as I'm aware, they are using a Vega GPU, not RDNA or RDNA2. It's a custom Vega GPU with some RDNA features in it. But even existing mobile GPUs are using old AMD patents, since AMD decided to sell a ton of its patents about a decade ago when it was struggling for money.
 
I'm really excited for this, since my region got the crappy Exynos.
 
Another one, just saying.

Why make the same article 10 months later?
 
This has been debunked so many times I'm tired of re-linking the various articles. My DDR4-2400 IGP is more than capable of matching DDR4-3200-equipped laptops, too.
LOL, calm down.

Sure, at higher resolutions bandwidth is a real problem, but laptop IGPs don't have the graphics horsepower to run at higher resolutions in the first place. We're aiming for 720p60 or 1080p30, where bandwidth doesn't make a huge amount of difference so long as you're running reasonably low absolute RAM latency and have enough bandwidth to shift the bottleneck over to the graphics CUs. Dual-channel DDR4-3200 is close enough in performance to both my 2400 CL14 and LPDDR4X-4266 that clearly a near-doubling of bandwidth doesn't get anywhere close to a doubling of performance. 15-25% at best, CU-for-CU and clock-for-clock.
And if the GPUs got bigger, as you desire, then they would be capable of higher-resolution gaming and would become bottlenecked by memory bandwidth, which is why AMD hasn't done it. You defanged your OWN argument with this little tidbit.
 
It'd be great if Samsung allowed other competitors access to their chips; frankly, the near-monopoly of QC on high-end phones is just unbearable, and I'm not even talking about their baseband (royalty?) shenanigans!
 
It was expected; desktop architectures are more powerful than their mobile counterparts. For example, the Tegra X1 is still the most powerful mobile GPU with its 256 cores, even after nearly 6 years of duty.

But it was still useless because it sucked up battery life like nobody's business.
 