Tuesday, March 12th 2024

Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H

The Qualcomm Snapdragon X Elite is about to make landfall in the ultraportable notebook segment, powering a new wave of Arm-based Windows 11 devices capable of running even legacy Windows applications. The Snapdragon X Elite SoC in particular is designed to rival the Apple M3 chip powering the 2024 MacBook Air and some "entry-level" variants of the 2023 MacBook Pros. These chips threaten the 15 W U-segment and even the 28 W P-segment of x86-64 processors from Intel and AMD, such as the Core Ultra "Meteor Lake" and the Ryzen 8040 "Hawk Point." Erdi Özüağ, a prominent tech journalist from Türkiye, has access to a Qualcomm reference notebook powered by the Snapdragon X Elite X1E80100 28 W SoC. He compared its performance to an off-the-shelf notebook powered by a 28 W Intel Core Ultra 7 155H "Meteor Lake" processor.

There are three tests that highlight the performance of the key components of the SoCs—CPU, iGPU, and NPU. A Microsoft Visual Studio code-compile test sees the Snapdragon X Elite, with its 12-core Oryon CPU, finish in 37 seconds, compared to 54 seconds for the Core Ultra 7 155H with its 6P+8E+2LP CPU. In the 3DMark test, the Adreno 750 iGPU posts performance numbers identical to those of the Arc Graphics Xe-LPG in the 155H. Where the Snapdragon X Elite dominates the Intel chip is AI inferencing: the UL Procyon test sees the 45 TOPS NPU of the Snapdragon X Elite score 1720 points against 476 points for the 10 TOPS AI Boost NPU of the Core Ultra. The Intel machine uses OpenVINO for the test, while the Snapdragon uses the Qualcomm SNPE SDK. Don't forget to check out the video review by Erdi Özüağ in the source link below.
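
The headline numbers at a glance:

Test                        | Snapdragon X Elite (X1E80100, 28 W)  | Core Ultra 7 155H (28 W)
Visual Studio code compile  | 37 seconds                           | 54 seconds
3DMark (iGPU)               | Adreno 750: identical score          | Arc Graphics Xe-LPG: identical score
UL Procyon AI inference     | 1720 points (45 TOPS NPU, SNPE)      | 476 points (10 TOPS AI Boost NPU, OpenVINO)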
Source: Erdi Özüağ (YouTube)

55 Comments on Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H

#1
bug
Obviously more benchmarks are needed, but not a bad showing so far.
Even if Qualcomm has a winning design on its hands, there's still the matter of securing fab capacity to produce it in significant numbers.
Posted on Reply
#2
Nanochip
Arm beating x86. Apple did it, now Qualcomm?

Meteor Lake is a joke. Perhaps Lunar Lake will be better. AMD's Zen 5 looks to be much better as well.
Posted on Reply
#3
atomek
When Apple showed off their M1, I wrote on Reddit that this was the beginning of the end of x86. I was downvoted to hell by r/hardware experts. To this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture. If Microsoft jumps on the ARM wagon and the game studios follow, that will be the end of the x86 road. It has already started in the server market. I just can't understand why Intel hasn't realised this; they turned Apple away when Apple came to them proposing a joint venture to develop the CPU for the first iPhone. AMD and NVIDIA had more common sense and at least started developing their own ARM processors.
Posted on Reply
#4
Shou Miko
That said, this chip is too overpriced. Here, the Lenovo ThinkPad X13s G1 is about $2,275 for a base spec with 16 GB memory and a 256 GB NVMe SSD; it's too overpriced to make sense.

Even if it performs like an Apple M2 chip and gets better battery life than an AMD- or Intel-based laptop, this is just too much. It would have to be half the price to begin with to get a better foothold in the market.

I fail to see this being a good chip because of the price, sadly; at about $1,000 it would make much better sense.
Posted on Reply
#5
bug
atomekWhen Apple showed off their M1, I wrote on Reddit that this was the beginning of the end of x86. I was downvoted to hell by r/hardware experts. To this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture. If Microsoft jumps on the ARM wagon and the game studios follow, that will be the end of the x86 road. It has already started in the server market. I just can't understand why Intel hasn't realised this; they turned Apple away when Apple came to them proposing a joint venture to develop the CPU for the first iPhone. AMD and NVIDIA had more common sense and at least started developing their own ARM processors.
It's not so clear-cut. Arm grows increasingly complex, while x86 has become more RISC-like over the years. What I think drags x86 down is its legacy compatibility. If someone figured out how to provide that via a software layer, the differences between x86 and Arm would be wiped out.
Posted on Reply
#6
kondamin
Get most software running properly before demanding MacBook prices.
Posted on Reply
#7
atomek
bugIt's not so clear-cut. Arm grows increasingly complex, while x86 has become more RISC-like over the years. What I think drags x86 down is its legacy compatibility. If someone figured out how to provide that via a software layer, the differences between x86 and Arm would be wiped out.
What drags them down is CISC; it really doesn't matter if it is RISC internally, they will never be able to get the benefits of a fixed-width instruction set and all the joys that come with how caches and branching can be optimised thanks to it. x86 is dying; it will never be able to catch up with RISC in terms of efficiency (which directly translates to performance nowadays). CISC was the wrong horse to bet on. And they could have realised it 20 years ago, when compilers were already very sophisticated and promised much better optimisation capability than creating a sophisticated, specialised instruction set. It is not possible to fix the x86 architecture, even if you drop legacy instructions.
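
Roughly, the decode problem looks like this (a minimal Python sketch; x86_length_of is a hypothetical stand-in for a real x86 length decoder):

    def aarch64_starts(code: bytes) -> list[int]:
        # Fixed 4-byte encoding: every instruction boundary is known up
        # front, so a wide decoder can work on many instructions at once.
        return list(range(0, len(code), 4))

    def x86_starts(code: bytes, x86_length_of) -> list[int]:
        # Variable 1-15 byte encoding: the start of instruction N+1 is
        # only known after instruction N has been length-decoded, which
        # serializes (or greatly complicates) wide parallel decode.
        starts, pc = [], 0
        while pc < len(code):
            starts.append(pc)
            pc += x86_length_of(code, pc)  # hypothetical length decoder
        return starts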
Posted on Reply
#8
Nanochip
atomekTo this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture.
You can’t say that for certain. Arrow Lake and Lunar Lake will be on 3 nm (allegedly), and Zen 5 will be on at least 4 nm. They will be more power efficient than their predecessors. As Intel and AMD move to more advanced nodes (as we know, Intel was stuck on 14 nm and 10/7 nm for years), we will have to see how power efficient (or not) the new architectures are.
For example, Intel’s upcoming Lion Cove + Skymont, and then Panther Cove + Darkmont. We will have to wait to evaluate those architectures to see how power efficient (or not) they are. They will be produced on advanced nodes. And as we know, AMD’s 7800X3D is very power efficient for the gaming performance it delivers, relative to the competition.

So you can’t write x86 off just yet.
Posted on Reply
#9
atomek
NanochipYou can’t say that for certain. Arrow Lake and Lunar Lake will be on 3 nm (allegedly), and Zen 5 will be on at least 4 nm. They will be more power efficient than their predecessors. As Intel and AMD move to more advanced nodes (as we know, Intel was stuck on 14 nm and 10/7 nm for years), we will have to see how power efficient (or not) the new architectures are.
For example, Intel’s upcoming Lion Cove + Skymont, and then Panther Cove + Darkmont. We will have to wait to evaluate those architectures to see how power efficient (or not) they are. They will be produced on advanced nodes. And as we know, AMD’s 7800X3D is very power efficient for the gaming performance it delivers, relative to the competition.

So you can’t write x86 off just yet.
It is fairly easy to estimate that the efficiency gap is around 5-6 node shrinks; that is what it would take to catch up with ARM (at least with Apple silicon, which has the best implementation of the ARM ISA so far). 5-6 generations, so it will never happen. Maybe it will be a few years before we can write off x86, but I wouldn't hold Intel stock either.
Posted on Reply
#10
bug
atomekWhat drags them down is CISC; it really doesn't matter if it is RISC internally, they will never be able to get the benefits of a fixed-width instruction set and all the joys that come with how caches and branching can be optimised thanks to it. x86 is dying; it will never be able to catch up with RISC in terms of efficiency (which directly translates to performance nowadays). CISC was the wrong horse to bet on. And they could have realised it 20 years ago, when compilers were already very sophisticated and promised much better optimisation capability than creating a sophisticated, specialised instruction set. It is not possible to fix the x86 architecture, even if you drop legacy instructions.
I hinted at this in my previous post. Ever since x86 became pipelined, it has mimicked RISC's fixed width rather well.
At the same time, Arm deals with 32-bit, 64-bit, Thumb, Neon, whatever, so it's going in the opposite direction.
Posted on Reply
#11
atomek
bugI hinted at this in my previous post. Ever since x86 became pipelined, it has mimicked RISC's fixed width rather well.
At the same time, Arm deals with 32-bit, 64-bit, Thumb, Neon, whatever, so it's going in the opposite direction.
It doesn't matter that they are pipelined; the bottleneck is the CISC frontend, and x86 is not and will not be able to avoid it. Intel thought they could be smarter than compilers by doing their work in hardware. You just can't optimise the hardware pipeline at runtime; it is what it is. You can do this with a compiler (and this is also why Rosetta works so well for translating x86 software to ARM). Intel made a stupid decision a very long time ago, and an even more stupid one when they turned Apple away when Apple came to them to develop the CPU for the iPhone.
Posted on Reply
#12
Denver
atomekWhat drags them down is CISC; it really doesn't matter if it is RISC internally, they will never be able to get the benefits of a fixed-width instruction set and all the joys that come with how caches and branching can be optimised thanks to it. x86 is dying; it will never be able to catch up with RISC in terms of efficiency (which directly translates to performance nowadays). CISC was the wrong horse to bet on. And they could have realised it 20 years ago, when compilers were already very sophisticated and promised much better optimisation capability than creating a sophisticated, specialised instruction set. It is not possible to fix the x86 architecture, even if you drop legacy instructions.
I don't understand why there are so many ARM preachers out there. ARM is the one trying to gain performance by following in the footsteps of AMD and Intel from half a decade ago.

There's no reason to "fix" anything. x86 is better, more efficient, and it just works. It's the dominant ISA, and it's going to be here for a long time.
Posted on Reply
#13
AleXXX666
"entry-level" variants of the 2023 MacBook Pros"
WTF:roll:

I'm waiting for some company to finally beat Crapple; this is nonsense already.
Posted on Reply
#14
atomek
DenverI don't understand why there are so many ARM preachers out there. ARM is the one trying to gain performance by following in the footsteps of AMD and Intel from half a decade ago.
You are wrong for two reasons. First, there are way more x86 preachers out there (you are one of them). Second, ARM spent decades focused on the mobile market, where efficiency was most important. Today, as we reach the limits of the physical process, x86 is approaching the heat wall, which lets ARM shine, as it offers way better efficiency thanks to its architecture. And today, efficiency becomes performance. Show me any x86 computer today that can be passively cooled and offers at least half the performance of a 3-year-old M1.

I'm buying a 7800X3D for a gaming PC, but I know it is probably the last x86 PC I'll ever build. I'm just not delusional.
Posted on Reply
#15
R0H1T
bugObviously more benchmarks are needed, but not a bad showing so far.
Even if Qualcomm has a winning design on its hands, there's still the matter of securing fab capacity to produce it in significant numbers.
Hardly an issue for the world's biggest/baddest modem maker!
Posted on Reply
#16
Denver
atomekYou are wrong for two reasons. First, there are way more x86 preachers out there (you are one of them). Second, ARM spent decades focused on the mobile market, where efficiency was most important. Today, as we reach the limits of the physical process, x86 is approaching the heat wall, which lets ARM shine, as it offers way better efficiency thanks to its architecture. And today, efficiency becomes performance. Show me any x86 computer today that can be passively cooled and offers at least half the performance of a 3-year-old M1.

I'm buying a 7800X3D for a gaming PC, but I know it is probably the last x86 PC I'll ever build. I'm just not delusional.
Do you mean the M1, manufactured on the same 5 nm process as modern CPUs? Any recent AMD chip with a similar TDP would perform similarly. However, I find it impractical and dumb to run a chip that exceeds 30 W and reaches 100°C under high load with passive cooling. For basic tasks like browsing or spreadsheets, any APU from the 7 nm era or newer would easily handle the workload while consuming 2-5 W; in that scenario, the laptop's fans don't spin at all.

All chipmakers are facing limitations due to the laws of physics, including ARM. That's why recent ARM SoCs can reach around 20 W for a short period but struggle to sustain that performance, often experiencing thermal throttling and instability. The push to expand ARM into other markets stems from the fact that they've exhausted their options in mobile and lack an x86 license.

Delusional suits you very well. :)
Posted on Reply
#17
Mawkzin
NanochipYou can’t say that for certain. Arrow Lake and Lunar Lake will be on 3 nm (allegedly), and Zen 5 will be on at least 4 nm. They will be more power efficient than their predecessors.
Intel 3 is not 3 nm.
Posted on Reply
#18
R0H1T
atomekwhich lets ARM shine, as it offers way better efficiency thanks to its architecture.
Say what? Just clock-limit any AMD/Intel processor & they'll easily be way more efficient. Now let's see Apple or any other ARM chip do (unlimited) turbos & see their efficiency then :slap:
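
The back-of-the-envelope physics: dynamic CMOS power scales roughly as P ≈ C·V²·f, and lower clocks allow lower voltage, so power falls much faster than performance when you clock-limit a chip. A quick Python sketch, with made-up numbers purely for illustration:

    def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
        # Textbook CMOS dynamic-power approximation: P ~ C * V^2 * f.
        return c_eff * volts ** 2 * freq_ghz

    stock   = dynamic_power(1.0, 1.30, 5.0)  # made-up boost operating point
    limited = dynamic_power(1.0, 0.90, 3.5)  # made-up clock-limited point
    print(f"perf ratio ~{3.5 / 5.0:.2f}, power ratio ~{limited / stock:.2f}")
    # -> roughly 70% of the clocks for ~34% of the dynamic power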
Posted on Reply
#19
bug
DenverI don't understand why there are so many ARM preachers out there. ARM is the one trying to gain performance by following in the footsteps of AMD and Intel from half a decade ago.

There's no reason to "fix" anything. x86 is better, more efficient, and it just works. It's the dominant ISA, and it's going to be here for a long time.
This is rooted in academia, all the way back in the 90s, when Mr. Tanenbaum described x86 as a dinosaur that needed to make room for more nimble things.

Between the three decades that have passed since and people not realizing the real world is not just a detail you can forget about, some still think Arm/RISC "must" happen.
Posted on Reply
#20
Fourstaff
I am not sure why people are still so dismissive of ARM. x86 became niche before COVID. There are far more devices on ARM than on x86, and we collectively spend more time on ARM devices than on x86 devices. Phones, TVs, and routers all use ARM instead of x86. The only holdout for x86 is legacy software, and that is slowly moving to the cloud (and becoming architecture-agnostic, as long as we can access the web).
Posted on Reply
#21
R0H1T
It's not being dismissive, but the claim that ARM is "inherently" more efficient holds no water! It depends on the node and the application, as well as the chip size, believe it or not.
Posted on Reply
#22
atomek
R0H1TIt's not being dismissive, but the claim that ARM is "inherently" more efficient holds no water! It depends on the node and the application, as well as the chip size, believe it or not.
This is why we see ARM in every mobile application, and zero x86 in any application where battery life is critical? Is x86 some kind of religion or what? I know most people (including me) own x86 hardware, but I really don't get why people feel they have to defend x86 like it's their independence.
Posted on Reply
#23
Gucky
atomekWhen Apple showed off their M1, I wrote on Reddit that this was the beginning of the end of x86. I was downvoted to hell by r/hardware experts. To this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture. If Microsoft jumps on the ARM wagon and the game studios follow, that will be the end of the x86 road. It has already started in the server market. I just can't understand why Intel hasn't realised this; they turned Apple away when Apple came to them proposing a joint venture to develop the CPU for the first iPhone. AMD and NVIDIA had more common sense and at least started developing their own ARM processors.
It would be nice.
I am a gamer, nothing more really, so unless EVERY game is converted or emulated very well, I don't see any reason to switch.
I mean Desktop CPUs of course...
Posted on Reply
#24
bug
FourstaffI am not sure why people are still so dismissive of ARM. x86 became niche before COVID. There are far more devices on ARM than on x86, and we collectively spend more time on ARM devices than on x86 devices. Phones, TVs, and routers all use ARM instead of x86. The only holdout for x86 is legacy software, and that is slowly moving to the cloud (and becoming architecture-agnostic, as long as we can access the web).
Saying Arm is not a silver bullet means we're being dismissive?

There are markets where Arm does better. And there are markets where x86 has the upper hand. It's as simple as that.

Plus, there's a built-in fallacy in your statement: this isn't about Arm vs x86, it's about implementations of both. x86 can be anything from NetBurst to Zen 4. Arm can also be anything from a cheap Unisoc to Apple's M3...
Posted on Reply
#25
Noyand
DenverAll chipmakers are facing limitations due to the laws of physics, including ARM. That's why recent ARM SoCs can reach around 20 W for a short period but struggle to sustain that performance, often experiencing thermal throttling and instability. The push to expand ARM into other markets stems from the fact that they've exhausted their options in mobile and lack an x86 license.
Apple is also a real freak when it comes to silence. Their fan curves are tuned for the lowest RPM possible (and that includes the M2 Ultra): the fans of the ARM Mac Pro spin at 500-600 RPM under load. The MacBook Air is also gimped thermally, to push people to buy the Pro.

I never had the impression that ARM has an intrinsic thermal issue compared to x86, just that some computer makers are stingy when it comes to cooling (i.e. no vapor chamber, or jet-engine noise levels).
Posted on Reply