
First Reviews are Live and Snapdragon X Elite Doesn't Quite Deliver on Promised Performance

The price point is all wrong for the use case though...


Which is why I specifically wrote that the Asus model doesn't deliver, as I don't want to draw any hasty conclusions.
Yeah, if I want to do all that kind of "content consumption" and what I call "passive computing", then I'd buy a MacBook Air. Since you're not interacting much with the device, the OS type is kind of moot, plus you know they have extremely good battery life.
 
I don't think back then "Pro" was such a marketing gimmick like it is now
No, and in fact, I misspoke. There was the iBook and PowerBook for laptops. Both topped out with the G4 (PowerPC 7447a). The G5 (970fx) was limited to the Power Mac, and a slower version landed in the iMac. It was a hotter chip, and it failed to reach the clock speeds that Apple expected from IBM. That was a big deal, since clock for clock, the G5 wasn't really any faster than the G4. The G5 was never able to scale down to mobile, which put Apple in a tough spot. At that time, Intel had moved on from NetBurst and had really good options with Core. I don't think IBM was that interested in continuing to make CPUs anyway, so Intel was a really good landing spot for Apple, and served its purpose until Apple took matters into their own hands.
 
Tuxedo announced they were developing an ARM laptop with one of these Snapdragon X Elite chips, so they'll come eventually. If Tuxedo is doing one, some big white-label OEM is doing one, and so there will be at least a couple on offer from the usual suspects (Schenker, Tuxedo, System76, etc.).

I would be excited for this -- especially if you can run docker/postgres etc. on it natively like you can on cloud linux distros... Would be awesome to have a functional linux devbox that can last 15 hours on battery...
 
Intel didn't make efficient chips at all. They were fast CPUs just through the brute force of having by far the biggest budget.
Hard disagree. The Core 2 lineup was an efficiency marvel when it released. Nothing IBM had could come close. Until then, you could have efficiency or performance. Intel gave you both.
 
Native ARM application performance seems alright to me. Would have liked it to be more around $1000 but I don't think the doomposting is all that warranted. Many applications run on ARM natively now, and many more are to come.

It's always good to have options.

I would be excited for this -- especially if you can run docker/postgres etc. on it natively like you can on cloud linux distros... Would be awesome to have a functional linux devbox that can last 15 hours on battery...
For now you can run WSL2 with aarch64 Ubuntu and get native ARM pgsql/docker/etc. A proper Linux ARM laptop would definitely get me to buy one, but I have some faith in WoA, and application support has been expanding nicely.
 
Overhyped and under delivered.... I wonder if they copied Radeon Technologies Group's homework.

 
The irony of your post is that "Reduced Instruction Set Computer" these days includes the FJCVTZS instruction, aka Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero.
The irony of your post is that I was responding to a specific comment, which is correct, since that was exactly what Intel did when they jumped from the 486 to the Pentium, yet you ignored that.
RISC vs CISC has been dead for decades. Ever since ARM and RISC-V adopted AES Instructions, Javascript instructions, and SIMD, the world has gone 100% CISC.
See the response above, and no, I wasn't talking about that war, simply speaking about the original chip.
I dunno, I think Apple gave up on PowerPC when it became obvious that the G5 was under-delivering. They never made a G5 MacBook Pro, I believe because it wouldn’t hit performance and thermal results, even on desktop it required way too much cooling. The writing was on the wall for PPC.
Exactly, perhaps i wasnt clear but the main reason for the stagnation was the lack of volume to justify the development expenses.
No, and in fact, I misspoke. There was the iBook and PowerBook for laptops. Both topped out with the G4 (PowerPC 7447a). The G5 (970fx) was limited to the Power Mac, and a slower version landed in the iMac. It was a hotter chip, and it failed to reach the clock speeds that Apple expected from IBM. That was a big deal, since clock for clock, the G5 wasn't really any faster than the G4. The G5 was never able to scale down to mobile, which put Apple in a tough spot. At that time, Intel had moved on from NetBurst and had really good options with Core. I don't think IBM was that interested in continuing to make CPUs anyway, so Intel was a really good landing spot for Apple, and served its purpose until Apple took matters into their own hands.
Again correct. Motorola bailed first, IBM took over what Motorola was supposed to deliver, but ended up quitting themselves due to the lack of volume.

Technically, the M1 MacBook Air is a banger for what it costs now (~US$720 around here). If only this price weren't for that measly 8G/256G configuration...
Indeed. Same for the M1 Mac Mini.

They just need to add some more RAM at the entry level prices.
It will only get better. x86 days are numbered.
This is a good read about it.

 
Overhyped and under delivered.... I wonder if they copied Radeon Technologies Group's homework.


After test driving this laptop for the past couple of hours, I feel like this is near an M1 experience. The emulation layer works well, albeit with extra power usage. However, I did buy this laptop with the intention of finding a laptop with decent performance and exceptional battery life to replace my M2 Mac. So far, this ticks my boxes. I am using Office within Edge, and all but two processes are running native ARM64 code.

I can confirm the battery life is far superior to my x86 notebooks. I am test driving this to see if it's a viable competitor to the 13th Gen Intels I have been buying for work. Overall, my impressions are that the chipset is very impressive, comparable to 13th-14th Gen Intel while sipping power; however, the software needs another 6-12 months. Would I buy this for my workforce moving forward? I need another couple of weeks to decide, but from a purely general business usage perspective, I am genuinely impressed.
 
Shitty smartphone CPU performs shittily in real (i.e. desktop) workloads. Next on today's news: water is still wet and the sky remains blue.

My distaste for these toy CPUs being sold at desktop CPU prices aside, I'm sure this will part fools from their money and thus cut into Apple's marketshare. And anything that hurts Apple is good for consumers.
 
Good for browsing, email, online banking and online shopping, netflix and prime video...
That's about it then.

So you are saying it would be a good CPU in a phone? Hmm... makes sense.
 
After test driving this laptop for the past couple of hours, I feel like this is near an M1 experience. The emulation layer works well, albeit with extra power usage. However, I did buy this laptop with the intention of finding a laptop with decent performance and exceptional battery life to replace my M2 Mac. So far, this ticks my boxes. I am using Office within Edge, and all but two processes are running native ARM64 code.

I can confirm the battery life is far superior to my x86 notebooks. I am test driving this to see if it's a viable competitor to the 13th Gen Intels I have been buying for work. Overall, my impressions are that the chipset is very impressive, comparable to 13th-14th Gen Intel while sipping power; however, the software needs another 6-12 months. Would I buy this for my workforce moving forward? I need another couple of weeks to decide, but from a purely general business usage perspective, I am genuinely impressed.
That doesn't mean anything. There are plenty of laptops with the same hardware and battery capacity but with huge disparities in battery life, due to different implementations of the power/performance curve. I'm absolutely certain that there are x86 laptops with better battery life (Framework 13.5 or ThinkPad T14s, both with the 7840U), along with all the performance, stability and compatibility inherent in the dominant ISA.

I don't see any advantage in buying Qualcomm's buggy product.
 
Not the slam dunk we are looking for, but no slouch either. I wonder if they will be able to price this competitively to drive adoption. Will we be able to pair this with external graphics?
 
Not the slam dunk we are looking for, but no slouch either. I wonder if they will be able to price this competitively to drive adoption. Will we be able to pair this with external graphics?
Doesn't seem to be the idea right now. Not even sure if the X Plus devices will be much cheaper either.

I have one question though: will the development of Prism help those with older 8cx devices?
 
The irony of your post is that I was responding to a specific comment, which is correct, since that was exactly what Intel did when they jumped from the 486 to the Pentium, yet you ignored that.

See the response above, and no, I wasn't talking about that war, simply speaking about the original chip.

What part of Intel's 16-bit original FPU modified x87, 32-bit extended MMX, SSE, SSE2, SSE3, SSE4.1, SSE4.2, AVX, AES-NI, BMI, BMI2, AVX512, AVX10 extended instruction set is RISC?

Oh, and the Pentium / i686 was dual-issue / dual-pipelined. Intel didn't experiment with micro-ops until the Pentium 4, IIRC, which is when people started talking bullshit about a "RISC core", even though some micro-ops amount to "perform an entire AES encryption step", which is hardly "RISC". By the time we're talking about the 486 and Pentium, btw, we're only at "16-bit original FPU, modified x87, 32-bit extensions, MMX, SSE". The later stuff hadn't happened yet, but it seems very difficult to call this a "Reduced Instruction Set Computer", especially as the Pentium still supports the "loop" assembly instruction, push, pop, divide, and other "complex" instructions that perform multiple tasks in one opcode.

Unless you're talking about load/store architectures, btw, which is all the micro-op system converts Intel assembly into. Load/store is just one component of RISC; nothing else Intel / AMD do with x86_64 is anything close to the RISC they talked about in the 80s or 90s.

-----------

Oh, btw, both ARM and RISC-V are microcoded engines today, because their instructions are so complex that they need multiple micro-ops to implement.

Everyone's divide / modulo instruction is microcoded. That's what you do: you abandon the RISC mindset because divide is so common that it makes sense to accelerate it at the machine level. But it also doesn't make sense to implement divide in its entirety in hardware, because division is a very complex set of operations. So you compromise with microcode.

---------

My point is that CISC vs RISC has been stupid for decades. The entire debate is just a bunch of people misunderstanding microprocessor implementations and circlejerking over it.
 
My point is that CISC vs RISC has been stupid for decades. The entire debate is just a bunch of people misunderstanding microprocessor implementations and circlejerking over it.
Mostly Apple fanboys, back when that was the Apple kool-aid. Nowadays it's the RISC-V fanboys because... IDK... RISC-V is open-source so that makes it "better" somehow? Or something equally facetious.
 
Mostly Apple fanboys, back when that was the Apple kool-aid. Nowadays it's the RISC-V fanboys because... IDK... RISC-V is open-source so that makes it "better" somehow? Or something equally facetious.

It's more than just Apple.

PowerPC, ARM, and the legion of lost 80s/90s processors (SPARC, Alpha, MIPS, PA-RISC) all claimed to be RISC. Furthermore, not a single company claims their ISA to be CISC. CISC is what "other" companies call Intel's design, almost a derogatory term that's pushed upon the competitor to make these other designs feel better about themselves.

If I were to pick one company responsible for the RISC mania, it'd be IBM and its POWER architecture. IBM did many CPU designs and pushed RISC as a marketing term the hardest (even as RISC was used all over the place, IBM was probably the most powerful and widespread mouthpiece for the pseudo-concept).

That being said: deep discussions about pipelines, processor efficiency, throughput, etc. allowed these companies to leapfrog each other and advance CPUs. Alas, the "CISC" x86 instruction set from Intel (and later AMD) turned out to absorb all those innovations anyway and produced the fastest processors of the 2000s and 2010s.

-----------

RISC-V continues the tradition of claiming advanced ISA design, much like its SPARC / Alpha / MIPS brethren before it. After all, every new CPU design needs to say why it's better than the "CISC" machine over there, without necessarily using the trademarked term (Intel) in their marketing.

-------

I think with all the processor / ISA wars of the last decades, I've come to the conclusion that it doesn't matter. AMD came out with the dual ARM+x86 "Zen" design back in 2016, proving that all of these ISAs can convert between each other on a modern core anyway. And that's when I began to realize how similar ARM vs x86 was at the instruction level.

ARM stands for "Advanced RISC Machine", by the way, and was one of the other major marketers of the RISC term. (ARM wasn't very big in the 90s, but is a big deal now.) With AMD making an ARM + x86-compatible processor (even if it was just for its own internal tests), it shows how bullshit this whole discussion was. Decoders are not the critical element of modern processors, and they can be swapped out without much hassle. (At least, not a hassle for these multi-billion-dollar megacorps.)
 
I’m curious if we’ll eventually see a desktop version of this chip, and a passively cooled option as well. I think they need to expand across multiple areas if they want us to take them seriously. How about something with expansion slots and external GPU support? That’s when I think we could more likely say that WOA has actually arrived.
 
It will only get better. x86 days are numbered.
That much is clear. I mean, x86 can't have more than 1 million days or so left, can it?
 
Yup. Give 'em OLED screens all they want, but 1300 bucks is preposterous for the performance they deliver.
Sounds like the Surface all over again.
 
When an article begins by comparing the battery life of an OLED to a non-OLED, you know the article is gonna be a lil' biased. Just saying.
 
When an article begins by comparing the battery life of an OLED to a non-OLED, you know the article is gonna be a lil' biased. Just saying.
It's not biased, it's just written on a sample size of one. And it acknowledges that.
 
It's not biased, it's just written on a sample size of one. And it acknowledges that.
Comparing models that drain significantly more battery is not the same as comparing a random benchmark that may or may not be top of the line. It's quite literally not comparable; it's a different product.
 
Comparing models that drain significantly more battery is not the same as comparing a random benchmark that may or may not be top of the line. It's quite literally not comparable; it's a different product.
Again, it's the only data we have atm. We'll get more relevant data shortly.
 
Native ARM application performance seems alright to me. Would have liked it to be more around $1000 but I don't think the doomposting is all that warranted. Many applications run on ARM natively now, and many more are to come.

It's always good to have options.
Define "many".
For now you can run WSL2 with aarch64 Ubuntu and get native ARM pgsql/docker/etc. A proper Linux ARM laptop would definitely get me to buy one, but I have some faith in WoA, and application support has been expanding nicely.
I wouldn't trust ANYTHING Microsoft makes these days.

Sounds like the Surface all over again.
If Microsoft had any sense about them, this would have been the CPU in a Surface Go 4 or 5. It's the perfect form factor for ARM's tradeoffs.
 