
Arm Takes Aim at Laptops: Cortex-A78C Processor for PCs Introduced

AleksandarK

News Editor
Staff member
Joined
Aug 19, 2017
Messages
2,996 (1.06/day)
In the past few years, the Windows-on-Arm (WoA) market has gained some momentum, with companies developing their own WoA PCs, primarily laptops. One example is Microsoft. The company has launched a Surface notebook powered by a custom Qualcomm chip, the Surface Pro X, which runs on the Snapdragon SQ2 processor. However, what seemed to be missing for a while was support from Arm itself for this type of market. That is what the company has decided to change with a brand-new processor IP dedicated to "on-the-go" devices, as Arm calls them, which translates into laptops.

The Cortex-A78C is a new design direction that Arm believes suits laptops meant for more serious work. Unlike the regular Cortex-A78, which is typically deployed in a heterogeneous design of big and little cores (Arm's famous big.LITTLE architecture), the new Cortex-A78C revision aims to change that. The "C" version is a homogeneous structure made up entirely of big cores. Where a design would typically use four big and four little cores, Arm has decided to go all-big. For multithreaded workloads, this is the right decision to boost performance. The level-three (L3) cache has been enlarged as well, with up to 8 MB of L3$ on the die. We are eager to see the first designs based on this configuration and how they perform.
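One quick way to tell a big.LITTLE part from an all-big design like the A78C is core topology: on Linux, each core's cpufreq entry (`/sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq`) reports a max frequency, and big and little clusters show up as distinct values. A minimal sketch, using made-up core counts and frequencies for illustration rather than real A78C figures:

```python
from collections import defaultdict

def clusters_by_max_freq(core_freqs: dict[int, int]) -> dict[int, list[int]]:
    """Group core IDs into clusters sharing the same reported max frequency (kHz)."""
    clusters = defaultdict(list)
    for core, freq in sorted(core_freqs.items()):
        clusters[freq].append(core)
    return dict(clusters)

# Hypothetical 4+4 big.LITTLE phone: two distinct clusters show up.
phone = {0: 1800000, 1: 1800000, 2: 1800000, 3: 1800000,
         4: 2840000, 5: 2840000, 6: 2840000, 7: 2840000}
# Hypothetical all-big A78C laptop: a single cluster.
laptop = {c: 2600000 for c in range(8)}

print(len(clusters_by_max_freq(phone)))   # 2 clusters (big + little)
print(len(clusters_by_max_freq(laptop)))  # 1 cluster (all big)
```

On a real system one would fill `core_freqs` by reading the sysfs files above; the grouping logic is the same either way.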

View at TechPowerUp Main Site
 
This could be the basis for the SQ3, much like the Kryo 495 CPU is based on the Cortex-A77 and Cortex-A55.
 
Could be interesting if there are lots of native Windows ARM apps; x86 or x64 emulation will crush performance for them!
 
And here it is, RISC for mainstream computing. I predict we will see more performance-oriented laptop and desktop ARM CPUs.
 
Why not the X1? Why not a different ARM core altogether, meant for PCs?

And here it is, RISC for mainstream computing.

Aren't mobile phones already mainstream computing?
 
That's peasant talk.
But we are all peasants ...

Patiently waiting for ARM processors in ATX format (either embedded with mobo or socketed).
 
That's peasant talk.

Ironically, phones are used for some pretty hardcore computing, in photography for instance, unlike most PCs out there.
 
They can shove that into Apple land, where people will buy no matter what is inside. BGA trash, restricted software, lack of hardware choice, OS customization lockdown since drivers are not common, and worst of all, garbage performance; of course the new PWAs and all the shiny bloated touch-centric apps will run just fine.

Although all of this is for shiny, portable, use-and-throw computers like a phone, this time at more than $2,000, to be replaced every year or so by the dumb consumer. Just like phones, we already have laptops whose batteries cannot be removed or self-replaced, unlike the old days of latched batteries.

Apple has mastered the above flaws by building its own massive ecosystem, with first-party-funded software and in-store repairs (high cost and highway robbery). I will see which company tries to emulate that. So far, in phones, none of them have been able to do so.
 
Aren't mobile phones already mainstream computing?
Yes, phones are now the dominant form of computing on the planet, with Android the most common OS, but being common and being mainstream are two different things. Phones are not what people do serious work on for many reasons.
Ironically, phones are used for some pretty hardcore computing, in photography for instance, unlike most PCs out there.
WTH? Why?... I know of no one that edits photos or videos on a phone. Seriously, why would anyone want to?
 
WTH? Why?... I know of no one that edits photos or videos on a phone. Seriously, why would anyone want to?

Not editing, just simply taking photos, even the crappiest SoCs are now capable of some heavy computational photography. Just taking a single shot probably requires more FLOPS than most tasks people do with their PCs.

Of course, all of those details are hidden from users, so people assume their phones are just a bunch of underpowered POS.
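For a rough sense of scale, the claim above can be put into a back-of-the-envelope count for a multi-frame merge. All numbers here are assumptions for illustration: a 10-frame burst at 12 MP with roughly 50 floating-point ops per pixel for alignment and merging:

```python
def burst_merge_flops(frames: int, width: int, height: int, ops_per_pixel: int) -> int:
    """Rough FLOP count: every pixel in every frame is touched ops_per_pixel times."""
    return frames * width * height * ops_per_pixel

# Assumed: 10-frame burst, 4000x3000 (12 MP) sensor, ~50 ops/pixel align+merge.
flops = burst_merge_flops(10, 4000, 3000, 50)
print(f"{flops / 1e9:.0f} GFLOP")  # prints "6 GFLOP" for a single shot
```

Six billion floating-point operations for one photo is indeed more arithmetic than most everyday desktop tasks, which is the point being made.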
 
Not editing, just simply taking photos, even the crappiest SoCs are now capable of some heavy computational photography. Just taking a single shot probably requires more FLOPS than most tasks people do with their PCs.

Of course, all of those details are hidden from users, so people assume their phones are just a bunch of underpowered POS.
Ah fair enough, and good points. The interesting thing most people don't know about ARM is that FPU performance is actually quite good.
 
Ah fair enough, and good points. The interesting thing most people don't know about ARM is that FPU performance is actually quite good.

Most of those photography shots are actually from the dedicated DSP cores that are custom-built into modern cell phones. In Apple's case, it's probably the Neural Engine. In Qualcomm's case, it's the Hexagon.

Neither Apple Neural Engine nor the Qualcomm Hexagon are ARM. They're dedicated FPU coprocessors for all those photography effects. They exist inside of the SoC, but they have their own instruction sets.

---------

ARM has decent FPU performance for its size. But nothing like a dedicated DSP, GPU, or even desktop-class CPU-SIMD unit like AVX on Zen3. Those dedicated DSPs are no joke though: and they have incredibly tight power requirements.

While Apple's neural engine seems closed, the Qualcomm Hexagon is decently open (or at least, has more open documentation). https://developer.qualcomm.com/software/hexagon-dsp-sdk


EDIT: VLIW, wow. I forgot there were VLIW systems still being made, lol. Well, there we go, VLIW is super-mainstream, in every single Qualcomm Snapdragon product for the purposes of camera sensor processing. I knew about Hexagon, but I didn't read any of the details till now...
 
But nothing like a dedicated DSP, GPU, or even desktop-class CPU-SIMD unit like AVX on Zen3.

X1 now has the same combined FPU vector width as something like Zen 3, 4*128bit.
 
X1 now has the same combined FPU vector width as something like Zen 3, 4*128bit.

Zen 3 is 4*256 bit, with a 36 micro-op queue and 160x256-bit FPU register files. At best, X1 is comparable to Zen 1 or Zen+. But I doubt it has the out-of-order capabilities of even Zen1 / Zen+.

Furthermore: X1 doesn't really exist yet. This A78c is a much smaller chip than the X1 (and also doesn't exist yet). If we're comparing something like a current Apple A14 or a Snapdragon 875, we still need to go back a few generations to Zen1 (or Sandy Bridge on Intel) for comparable FPU performance.
 
Zen 3 is 4*256 bit, with a 36 micro-op queue and 160x256-bit FPU register files. At best, X1 is comparable to Zen 1 or Zen+. But I doubt it has the out-of-order capabilities of even Zen1 / Zen+.

As far as I know Zen 3 is still just 2x256bit like Zen 2, Zen 1 was technically 2*128bit. Of course something like X1 would still be slower but not as slow as some people think ARM cores are.

Furthermore: X1 doesn't really exist yet.

It's pretty much confirmed to be in the Snapdragon 875 and the next flagship Samsung SoC; it doesn't exist in the sense that there are no products yet, but it's well on its way.
 
Ironically phones are used for some pretty hardcore computing in photography for instance unlike most PCs out there.
Yeah, it looks like the era is shifting from desktop to mobile. Mobile hardware today has better power efficiency and is good at heavy tasks like video editing and photography. It will be interesting to see how far ARM can push the other platform (notebooks); if they are pretty good, or at least offer a better power/performance ratio than today's processors, why not?
 
As far as I know Zen 3 is still just 2x256bit like Zen 2, Zen 1 was technically 2*128bit. Of course something like X1 would still be slower but not as slow as some people think ARM cores are.

Zen has always been four pipelines. Zen 1 is 128-bit pipelines, Zen2/3 are 256-bit pipelines.

Those four pipelines can do four simultaneous things. For example, FP Multiply can be done on P0 and P1, while FP Add can be done on P2 and P3. See Agner Fog's manual, section 20.10 ("Floating point execution pipes") for Zen1 / Zen+ details. Citation: https://www.agner.org/optimize/microarchitecture.pdf (Currently page 217, but I know that manual's page number changes as it gets updated).
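The "pipes × width" arithmetic in this exchange can be made concrete. Peak single-precision throughput is roughly FMA pipes × 32-bit lanes per vector × 2 ops per FMA × clock. A minimal sketch; the pipe counts and clocks below are illustrative assumptions in the spirit of the discussion, not vendor specifications:

```python
def peak_sp_gflops(fma_pipes: int, width_bits: int, clock_ghz: float) -> float:
    """Peak single-precision GFLOPS: pipes x lanes x 2 ops/FMA x clock."""
    lanes = width_bits // 32          # 32-bit floats per vector register
    return fma_pipes * lanes * 2 * clock_ghz

# Illustrative configurations (assumed, not official spec sheets):
print(peak_sp_gflops(2, 128, 3.0))   # "Zen 1"-like: 2x128-bit FMA @ 3 GHz -> 48.0
print(peak_sp_gflops(2, 256, 4.0))   # "Zen 2/3"-like: 2x256-bit FMA @ 4 GHz -> 128.0
print(peak_sp_gflops(4, 128, 3.0))   # "X1"-like: 4x128-bit NEON @ 3 GHz -> 96.0
```

This is why total vector width alone doesn't settle the comparison: which pipes can actually issue FMAs, and at what clock, matters as much as the nominal register width.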
 
So ARM is basically saying big.LITTLE makes no sense from laptops up, whereas Intel is currently planning to bring big.LITTLE to the desktop. Interesting.
 
So ARM is basically saying big.LITTLE makes no sense from laptops up, whereas Intel is currently planning to bring big.LITTLE to the desktop. Interesting.

The biggest Alder Lake device I know of is a laptop. Even then, Intel has an embedded market (for 5G carriers or something) where energy efficiency and throughput are king. In those cases, small cores (energy efficiency) might be superior, but there might be enough latency-sensitive workloads where a big core would help.

I think Intel is just experimenting with the platform, and they want to "capture" the technology. Whether or not Alder Lake is a successful product is of no relevance: Intel just needs to make something for the sake of the engineering.
 
The biggest Alder Lake device I know of is a laptop.
Are you sure about that?
Alder Lake-S is called a desktop processor in all the sources I know, slotting into the new LGA1700 platform. Lakefield is the one for laptops which is already available.
 
Are you sure about that?
Alder Lake-S is called a desktop processor in all the sources I know, slotting into the new LGA1700 platform. Lakefield is the one for laptops which is already available.

I stand corrected. Alder Lake-S does appear to be a Desktop processor.

That's very odd to me. I can't imagine why a desktop-class processor would want Atom cores. EDIT: Maybe there's a segment of power-efficient desktops that Intel is focusing on? Like the NUC or something? I'd still find it strange to call NUC-class processors "desktop," however.
 