Tuesday, February 15th 2022

Samsung RDNA2-based Exynos 2200 GPU Performance Significantly Worse than Snapdragon 8 Gen1, Both Power Galaxy S22 Ultra

The Exynos 2200 SoC powering the Samsung Galaxy S22 Ultra in some regions, such as the EU, posts less-than-stellar graphics performance numbers for all the hype around its AMD-sourced RDNA2 graphics solution, according to an investigative report by Erdi Özüağ, aka "FX57." Samsung brands this RDNA2-based GPU the Xclipse 920. Özüağ's testing, conducted with access to both variants of the phone, also found the Exynos 2200 considerably slower than the Qualcomm Snapdragon 8 Gen 1 powering the S22 Ultra in certain other regions, including the US and India.

In UL's 3DMark Wild Life test, the Exynos 2200 posted a score of 6684 points, compared to 9548 points from the Snapdragon 8 Gen 1, a difference of nearly 43 percent. What's even more interesting is that the Exynos 2200 is barely 7 percent faster than the previous-generation Exynos 2100 (Arm Mali GPU) powering the S21 Ultra, which scored 6256 points. The story repeats with the GFXBench "Manhattan" off-screen render benchmark, where the Snapdragon 8 Gen 1 is 30 percent faster than the Exynos 2200, which in turn performs on par with the Exynos 2100. Find a plethora of other results in the complete review comparing the two flavors of the S22 Ultra.
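For readers who want to sanity-check the quoted gaps, the percentages fall straight out of the raw Wild Life scores. Below is a quick illustrative calculation using only the numbers reported above:

```python
# Quick sanity check of the percentage gaps quoted above, using the
# 3DMark Wild Life scores reported in the source video.
scores = {
    "Snapdragon 8 Gen 1 (Adreno 730)": 9548,
    "Exynos 2200 (Xclipse 920)": 6684,
    "Exynos 2100 (Arm Mali)": 6256,
}

baseline = scores["Exynos 2200 (Xclipse 920)"]
for name, score in scores.items():
    delta = (score / baseline - 1) * 100
    print(f"{name}: {score} points ({delta:+.1f}% vs. Exynos 2200)")

# The Snapdragon 8 Gen 1 lands ~42.8% ahead of the Exynos 2200, while the
# older Exynos 2100 trails it by only ~6.4% (i.e. the new chip is roughly
# 7% faster than its predecessor).
```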
Özüağ predicts that Samsung could be working on a major software update to improve or normalize performance between the two phone variants, and that the Exynos 2200 is in need of significant software-level optimization. He also offers valuable insight into a possible cause of the RDNA2-based Xclipse 920's underwhelming showing: the iGPU could be starved for engine clocks, or, we think, possibly even memory bandwidth. Engine clocks play a decisive role in the performance of RDNA2-based discrete GPUs, and AMD also spent significant engineering capital on lubricating the memory sub-system with the on-die Infinity Cache, which operates at bandwidths typically 3-4 times that of the GDDR6 memory. The extremely tight power budget and the Samsung 4 nm node could be impairing the iGPU's ability to sustain high engine clocks. We'll keep track of this story, as it marks AMD's second rodeo with smartphone graphics since the ATI Imageon days (over 14 years ago).
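To put the bandwidth-starvation theory in perspective, here is a back-of-the-envelope comparison. The LPDDR5 figures are our own assumption of a typical 64-bit, 6400 Mbps interface for a flagship SoC of this class (not confirmed Exynos 2200 specifications), set against the 256-bit, 16 Gbps GDDR6 of a desktop Navi 21 card:

```python
# Back-of-the-envelope memory bandwidth comparison. Assumptions (not confirmed
# Exynos 2200 specs): a 64-bit LPDDR5-6400 interface for the phone SoC, vs. the
# 256-bit 16 Gbps GDDR6 of a desktop Navi 21 (RX 6800/6900 XT class) card.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak theoretical bandwidth in GB/s: bus width x per-pin data rate / 8."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

soc_lpddr5 = peak_bandwidth_gb_s(64, 6.4)       # ~51.2 GB/s, shared with CPU, ISP, NPU...
desktop_gddr6 = peak_bandwidth_gb_s(256, 16.0)  # ~512 GB/s, exclusive to the GPU

print(f"Phone SoC (LPDDR5):    {soc_lpddr5:.1f} GB/s")
print(f"Desktop RDNA2 (GDDR6): {desktop_gddr6:.1f} GB/s")
print(f"Ratio: ~{desktop_gddr6 / soc_lpddr5:.0f}x, before Infinity Cache is even considered")
```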
Source: Erdi Özüağ (YouTube)

84 Comments on Samsung RDNA2-based Exynos 2200 GPU Performance Significantly Worse than Snapdragon 8 Gen1, Both Power Galaxy S22 Ultra

#1
BorisDG
Funny, but A14/A15 still rocks. :D
Posted on Reply
#2
john_
That's bad even for a first try.
Posted on Reply
#3
Daven
BorisDG: Funny, but A14/A15 still rocks. :D
Nothing will touch the performance of Apple A series chips in the handheld space for the foreseeable future. There was some hope that the RDNA-2 based Xclipse would start to catch up. Sadly this doesn’t look like the case. Android users will have to wait longer to get the kinds of performance numbers iOS users have been enjoying for a few years now.
Posted on Reply
#4
DeathtoGnomes
Will be interesting to see what AMD says on this; doubtful anything will be said.
Posted on Reply
#5
watzupken
Daven: Nothing will touch the performance of Apple A series chips in the handheld space for the foreseeable future. There was some hope that the RDNA-2 based Xclipse would start to catch up. Sadly this doesn't look like the case. Android users will have to wait longer to get the kinds of performance numbers iOS users have been enjoying for a few years now.
I feel Apple has a significant advantage in that they always use cutting-edge TSMC nodes and also have the benefit of top-notch software support. Not to say that the software is perfect (it is nowhere near perfect), but when you have both hardware and software under the same company, it is easier to optimize both to ensure that the devices work optimally. On the other hand, Qualcomm and Samsung, for example, don't have a lot of control over the software, since it is solely developed and maintained by Google. In addition, starting from the SN 888 and Exynos 2100, both are on a less optimal Samsung node, which puts both chips at a significant disadvantage. The last SN 865 using TSMC did not perform this badly, since it does not throttle that much. With the new flagship chips from Qualcomm and Samsung on a marginally better Samsung node, I feel that is the reason why both are performing poorly, even against the likes of MediaTek, which has stuck with TSMC.

Having said that, I feel performance improvements with Apple SoCs are slowing down. Between the A14 and A15, the performance jump isn't great, mostly coming from the GPU, with barely any improvement on the CPU side of things. In any case, I don't find mobile phones that slow to begin with. Even with, say, an SN 865, I don't feel like the system is crawling. So I feel we may have hit a point where CPU gains are less important, and more of a nice-to-have, in my opinion.
Posted on Reply
#6
champsilva
watzupken: I feel Apple has a significant advantage in that they always use cutting-edge TSMC nodes and also have the benefit of top-notch software support. Not to say that the software is perfect (it is nowhere near perfect), but when you have both hardware and software under the same company, it is easier to optimize both to ensure that the devices work optimally. On the other hand, Qualcomm and Samsung, for example, don't have a lot of control over the software, since it is solely developed and maintained by Google. In addition, starting from the SN 888 and Exynos 2100, both are on a less optimal Samsung node, which puts both chips at a significant disadvantage. The last SN 865 using TSMC did not perform this badly, since it does not throttle that much. With the new flagship chips from Qualcomm and Samsung on a marginally better Samsung node, I feel that is the reason why both are performing poorly, even against the likes of MediaTek, which has stuck with TSMC.

Having said that, I feel performance improvements with Apple SoCs are slowing down. Between the A14 and A15, the performance jump isn't great, mostly coming from the GPU, with barely any improvement on the CPU side of things. In any case, I don't find mobile phones that slow to begin with. Even with, say, an SN 865, I don't feel like the system is crawling. So I feel we may have hit a point where CPU gains are less important, and more of a nice-to-have, in my opinion.
Well, both Qualcomm and Samsung work closely with Google to better optimize Android.
Posted on Reply
#7
z1n0x
"Playtime is over":laugh: Oh Samsung.

No wonder Nvidia are throwing billions at TSMC for access to 5nm.
Posted on Reply
#8
Sybaris_Caesar
The shit started stinking when Samsung abruptly postponed the reveal a few weeks back. I just didn't think it would be the GPU, or this bad. How could Samsung even let the SoC go this far? Why not let it bake in the oven for another year or so?

Fortunately only the EU region is saddled with it this time, I guess. AFAIK regions that previously got Exynos are also getting Snapdragon this time around.
Posted on Reply
#9
Punkenjoy
Well, maybe RDNA2 is not a low-power GPU architecture. It can do a lot of things, but maybe not that.
Posted on Reply
#10
nguyen
Khonjel: The shit started stinking when Samsung abruptly postponed the reveal a few weeks back. I just didn't think it would be the GPU, or this bad. How could Samsung even let the SoC go this far? Why not let it bake in the oven for another year or so?

Fortunately only the EU region is saddled with it this time, I guess. AFAIK regions that previously got Exynos are also getting Snapdragon this time around.
Yeah, my country is getting the SD 8 Gen 1 S22, maybe I should buy one :D
Posted on Reply
#11
geon2k2
Wait for official reviews boyz.

From the first picture I can see a different scaling governor.
That alone could significantly influence performance.
Posted on Reply
#12
Sybaris_Caesar
geon2k2: Wait for official reviews boyz.

From the first picture I can see a different scaling governor.
That alone could significantly influence performance.
Or look at what Samsung's home market is getting. When the Exynos is comparatively worse than the SD, the Korean market gets the SD version.
Posted on Reply
#13
watzupken
champsilva: Well, both Qualcomm and Samsung work closely with Google to better optimize Android.
Yes they can, but that sort of partnership is not the same as having everything in-house. Google can work closely with Qualcomm and Samsung, but each company has its own agendas and goals, which means their partnership may be limited. Moreover, Google has thrown itself into the hardware "ring", which means there is even less incentive to optimize its OS for competitors on day one. Qualcomm or Samsung may get some of the features down the road, but Google will certainly keep some aces up its sleeve.
Posted on Reply
#14
bug
No worries. Thanks to FineWine™, you'll be enjoying stellar performance in 3-4 years.

Joking aside, this lends some credence to existing reports that RDNA2 cannot be clocked as needed within the given power budget. Proper testing needed, of course, but all leaks seem to be going the same way.
Posted on Reply
#15
watzupken
Punkenjoy: Well, maybe RDNA2 is not a low-power GPU architecture. It can do a lot of things, but maybe not that.
My guess is that RDNA2 thrives on high clock speeds; none of the desktop RDNA2 GPUs run below 2 GHz. But high clock speeds come with a power penalty, which also results in an exponential increase in heat output. Combine that with Samsung's node, which seems to suffer from high power consumption and thermal throttling, and the result may be poor performance. There are rumors that Samsung had to lower the clock speed significantly to keep the GPU running stably, and if reducing the clock speed still results in significant throttling, based on some feedback I have seen, I would expect performance to tank (see the rough power-scaling sketch after this post). At least based on the previous generation of SN and Samsung SoCs, Samsung's node seems to be the root cause of significant thermal throttling. Not that there was no throttling when Qualcomm was using TSMC, but not to such an extent.
Posted on Reply
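A rough sketch of the dynamic-power scaling described above: CMOS switching power goes roughly as P ≈ C·V²·f, and since higher clocks usually also demand higher voltage, power (and heat) grows much faster than linearly with frequency. The voltage/frequency points below are illustrative guesses, not measured Xclipse 920 values:

```python
# Illustrative dynamic-power scaling, P ~ C * V^2 * f. The voltage/frequency
# pairs below are made-up example points, not measured Xclipse 920 values.

def relative_power(freq_ghz: float, volts: float,
                   base_freq: float = 1.0, base_volts: float = 0.70) -> float:
    """Dynamic power relative to the 1.0 GHz / 0.70 V baseline operating point."""
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

operating_points = [
    (1.0, 0.70),  # hypothetical conservative mobile clock
    (1.3, 0.80),  # +30% clock with a modest voltage bump
    (2.0, 1.00),  # desktop-RDNA2-like clocks
]

for freq, volts in operating_points:
    print(f"{freq:.1f} GHz @ {volts:.2f} V -> {relative_power(freq, volts):.1f}x baseline power")

# ~1.0x, ~1.7x and ~4.1x: doubling the clock can cost roughly 4x the power,
# which a passively cooled phone cannot sustain without throttling.
```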
#16
Daven
watzupken: I feel Apple has a significant advantage in that they always use cutting-edge TSMC nodes and also have the benefit of top-notch software support. Not to say that the software is perfect (it is nowhere near perfect), but when you have both hardware and software under the same company, it is easier to optimize both to ensure that the devices work optimally. On the other hand, Qualcomm and Samsung, for example, don't have a lot of control over the software, since it is solely developed and maintained by Google. In addition, starting from the SN 888 and Exynos 2100, both are on a less optimal Samsung node, which puts both chips at a significant disadvantage. The last SN 865 using TSMC did not perform this badly, since it does not throttle that much. With the new flagship chips from Qualcomm and Samsung on a marginally better Samsung node, I feel that is the reason why both are performing poorly, even against the likes of MediaTek, which has stuck with TSMC.

Having said that, I feel performance improvements with Apple SoCs are slowing down. Between the A14 and A15, the performance jump isn't great, mostly coming from the GPU, with barely any improvement on the CPU side of things. In any case, I don't find mobile phones that slow to begin with. Even with, say, an SN 865, I don't feel like the system is crawling. So I feel we may have hit a point where CPU gains are less important, and more of a nice-to-have, in my opinion.
It's more than just the CPU. As some have pointed out, having everything under one house provides a strong advantage. To further emphasize that, I point to this AnandTech article.

www.anandtech.com/show/17004/apples-iphone-13-series-screen-power-battery-life-report-long-lasting-devices

“Today’s investigation into the battery life results of the new iPhone 13 series confirms what many others have already mentioned already – it’s a significant upgrade over the iPhone 12 generation, with vast increases across the board. Apple’s new more efficient displays, larger batteries, as well as notably more efficient A15 chip represent a holy trifecta of hardware characteristic improvements that is extremely positive to the longevity of the new phones. There’s little more left to be said.”

Bold emphasis mine. The best smartphone is the elegant combination of performance and battery life, which requires fine control over the manufacturing of every piece of hardware and software.

With the new Exynos you have three players (AMD, Samsung and Google) trying to work together to make this happen. While not 100% certain, separate collaborators might not be able to deliver that level of fine control.
Posted on Reply
#17
tfdsaf
I don't trust some nobody's Twitter version of events. Always wait for official reviews, and look at several sources at that. I have a feeling some of these mobile benches are just too dumb to properly utilize RDNA2. After all, the mobile space is so different, and it's probably going to be several years down the line before mobile catches up to all of the desktop benefits like DX12.1 and all of that.
Posted on Reply
#18
R0H1T
Daven: With the new Exynos you have three players
Pretty sure AMD already works closely with Google; RDNA2, though, isn't the answer to anything mobile right now. Maybe next gen will change that.
Posted on Reply
#19
Space Lynx
Eh, processing power for your average user like me, who uses Netflix and that's about it, is, I mean, well, pointless.

Personally I find iOS finicky to use. Whenever I would try to hit "next episode" on Netflix in the bottom right corner on my iPad mini, it would sometimes mess up and make me go into the episode list all over again; it was really annoying. I never had a single issue with "next episode" on Android. So, I mean, Android it is for me, because I'm lazy and don't care and just want Netflix in the background at work.
Posted on Reply
#20
silentbogo
bug: Proper testing needed, of course, but all leaks seem to be going the same way.
Well, earlier leaks suggested the opposite (20-30% faster than Adreno 730), which means only a few reputable in-depth reviews can settle that.
john_: That's bad even for a first try.
Dude, the SD888 is barely 6 months old, and this thing beats the Adreno 660 by a large margin. I'd say it's a huge achievement for a first try [...at least at cross-breeding ARM and RDNA].
Posted on Reply
#21
Parn
This is bad news for me. My phone is up for contract renewal and upgrade this summer. I've been sticking to the Galaxy S series ever since the original S1. I don't care about a single-digit performance difference between the SD and Exynos, but 30-40% is unacceptable considering they're sold for the same price. Unless the European Exynos S22 is offered at a discount (highly doubtful), I'll probably have to look into other brands.
Posted on Reply
#22
defaultluser
Maybe it's just a case of early drivers needing more optimization? But yeah, pretty embarrassing to lose that badly to ATI's former product.
Posted on Reply
#23
SL2
Bad news, but I care more about this (8G1 only):

www.pcmag.com/news/exclusive-samsungs-galaxy-s22-is-a-low-signal-beast
defaultluser: But yeah, pretty embarrassing to lose that badly to ATI's former product.
Not really, it's hard to expect too much from a new implementation like this, compared to Qualcomm, who have improved and optimized theirs for years. Nobody expects it to be equal from day one, except those who believe everything videocardz/wccftech posts, and all the sites that copy every rumor from them (cough). (Not aiming at you personally)

"Whoah! It's gots the RDNA2s, it's must be the fastester!!!"
Posted on Reply
#24
yeeeeman
Not only does it not compete with the SD 8 Gen 1, it is barely faster than the last-gen Exynos 2100's Mali GPU. Nice effort AMD, nice.

Ok, I understand this is AMD's first effort in the low, low, low power space, BUT what is the point of launching a GPU which is not even faster than the last gen and touting it so much? Oh, AMD, RDNA, blablabla, ray tracing. We've come to get used to driver issues on AMD GPUs on desktop, and it seems the same is happening on mobile: a few games have graphical errors. So not only do you get a barely good GPU, you also get shitty drivers, the never-ending story of AMD.
Posted on Reply
#25
Chrispy_
Architectures designed for scaling up, like RDNA2, which are built with expectations of massive chips like the 5120-shader RX 6900 XT, don't typically scale down well at all.

Scalability and efficiency are almost mutually exclusive, because if you're NOT scaling up, then the compromises you have to make to enable scalability prevent you from min-maxing for a compact, zero-waste design that works well at the lowest end of the spectrum.
Posted on Reply