
Intel Xeon 6 Slashes Power Consumption for Nokia Core Network Customers

btarunr

Editor & Senior Moderator
Intel and Nokia are expanding their long-standing and strategic collaboration to advance core network infrastructures with the deployment of Intel Xeon 6 processors with Efficient-cores (E-cores) in the Nokia NFVI v5.0 and Nokia Core Networks Applications. With breakthrough energy efficiency, capacity and scalability for 5G core workloads, this joint infrastructure initiative will provide Nokia customers with up to a 60% reduction in power consumption, a 60% smaller server footprint and a 150% performance boost compared to widely deployed previous-generation servers.

"The combination of Intel Xeon 6 E-core processors - engineered for power-efficient, high-density compute - and Intel Infrastructure Power Manager, which delivers stable run-time power savings, provides a robust foundation for the most energy-efficient 5G core networks. We're proud to see Nokia Core Networks continue to lead with Intel Xeon processors, helping communications service providers reduce both power use and infrastructure footprint at scale," said Alexander Quach, Intel vice president and general manager of the Wireline and Core Network Division.



This adoption highlights the telecom industry's need for power-optimized, high-density infrastructure to meet modern 5G performance and sustainability demands. For decades, Intel and Nokia have worked together to produce tangible innovation. By combining Intel Xeon 6 (specifically the Intel Xeon 6780E) with Intel Infrastructure Power Manager (IPM), Nokia Core Networks will enable:
  • Load-aware and power-aware compute.
  • Lower operational costs.
  • Reduced carbon footprint.
  • No compromise on network stability.
Nokia is already running successful cloud service provider trials with IPM, validating its applicability across multiple server generations and showcasing measurable energy savings. The Intel Xeon 6 processor platform and IPM support for the forthcoming Nokia Packet Core Application v25.7 are on track for availability later this year.

"The results from our joint IPM trials using Intel Xeon 6 with E-cores underscore how industry partnerships - like our long-standing collaboration with Intel - are essential to driving shared innovation and delivering scalable, energy-efficient solutions that enable more sustainable networks. The important energy efficiency gains demonstrated in these trials align closely with what operators need as they modernize their networks," said Kal De, SVP of Products and Engineering, Cloud and Network Services at Nokia.

View at TechPowerUp Main Site
 
[...] with up to a 60% reduction in power consumption, a 60% smaller server footprint and a 150% performance boost compared to widely deployed previous-generation servers.
But the Xeon 6338N is not a previous-generation CPU; it's a 3rd-gen Xeon Scalable (Ice Lake) from 2021 on 10 nm, three generations behind the Xeon 6 6780E from 2024 on the Intel 3 process used in this comparison. Oh, and it loses AVX-512 present in 6338N due to E-cores.
If anything this suggests Nokia isn't really good at upgrading their server designs? ;)
 
Oh, and it loses AVX-512 present in 6338N due to E-cores.
Is AVX-512 relevant for whatever Nokia is doing?
 
Is AVX-512 relevant for whatever Nokia is doing?
Since we have just one slide with no insight into what exact load they were testing, it's hard to tell. I mentioned it because it's not often that a feature is lost in subsequent generations of similar solutions.
 

Honestly, playing both teams is probably good for them, and I guess they want to underline that 5G edge networking is highly threaded and thus those 144-core Xeon 6 parts are good at that?
 
But the Xeon 6338N is not a previous-generation CPU; it's a 3rd-gen Xeon Scalable (Ice Lake) from 2021 on 10 nm, three generations behind the Xeon 6 6780E from 2024 on the Intel 3 process used in this comparison. Oh, and it loses AVX-512 present in 6338N due to E-cores.
If anything this suggests Nokia isn't really good at upgrading their server designs? ;)

Well, since Golden, Raptor, and Redwood Coves are all the same lineage of core architecture, and Crestmont is a minor iteration of Gracemont, perhaps they still consider Sunny Cove to be the official last-gen standalone architecture for Xeon. They're pretty clear that neither Lion Cove nor Skymont is going to be scaled up, so we're still waiting for Panther Cove and 'Next-Mont' to bring us a proper new generation.

As for AVX-512: Gracemont/Crestmont still supports a lot of the subset instruction levels of AVX-512 (now detailed within AVX10) that are more like AVX2 recompiles. If Nokia was not using any true 512-bit vector workloads, they will be fine.
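To make the 512-bit point concrete, here is a minimal sketch (purely illustrative; nothing from Nokia's or Intel's actual software, and the function names and data are made up) of the usual runtime-dispatch pattern in C: take the AVX-512F path only when the CPU reports it, otherwise fall back to a 256-bit AVX2 path that Crestmont-class E-cores handle fine.

```c
// Hypothetical illustration only: runtime dispatch between an AVX-512F path
// and an AVX2 fallback. Build with: gcc -O2 dispatch.c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

// 512-bit path: needs AVX-512F, which E-core Xeons (Sierra Forest) lack.
__attribute__((target("avx512f")))
static int64_t sum_avx512(const int32_t *v, size_t n) {
    __m512i acc = _mm512_setzero_si512();
    size_t i = 0;
    for (; i + 16 <= n; i += 16)
        acc = _mm512_add_epi32(acc, _mm512_loadu_si512((const void *)(v + i)));
    int64_t s = _mm512_reduce_add_epi32(acc);
    for (; i < n; ++i) s += v[i];
    return s;
}

// 256-bit fallback: plain AVX2, runs on both P-core and E-core Xeons.
__attribute__((target("avx2")))
static int64_t sum_avx2(const int32_t *v, size_t n) {
    __m256i acc = _mm256_setzero_si256();
    size_t i = 0;
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_epi32(acc, _mm256_loadu_si256((const __m256i *)(v + i)));
    int32_t tmp[8];
    _mm256_storeu_si256((__m256i *)tmp, acc);
    int64_t s = 0;
    for (int k = 0; k < 8; ++k) s += tmp[k];
    for (; i < n; ++i) s += v[i];
    return s;
}

int main(void) {
    int32_t data[1000];
    for (int i = 0; i < 1000; ++i) data[i] = i;

    // Pick the widest path the CPU actually supports.
    int64_t s = __builtin_cpu_supports("avx512f") ? sum_avx512(data, 1000)
                                                  : sum_avx2(data, 1000);
    printf("sum = %lld\n", (long long)s);
    return 0;
}
```

Code written that way simply never notices the missing 512-bit lanes on an E-core part; code built with a hard AVX-512 baseline would not load or would fault.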
 
Well, since Golden, Raptor, and Redwood Coves are all the same lineage of core architecture, and Crestmont is a minor iteration of Gracemont, perhaps they still consider Sunny Cove to be the official last-gen standalone architecture for Xeon. They're pretty clear that neither Lion Cove nor Skymont is going to be scaled up, so we're still waiting for Panther Cove and 'Next-Mont' to bring us a proper new generation.
I read "previous-generation servers" as previous-generation Nokia servers, as in Nokia didn't use anything between Ice Lake and now Xeon 6 for this particular workload.
As for AVX-512: Gracemont/Crestmont still supports a lot of the subset instruction levels of AVX-512 (now detailed within AVX10) that are more like AVX2 recompiles. If Nokia was not using any true 512-bit vector workloads, they will be fine.
Intel revised the whitepaper of AVX10 in March and made 512-bit vector length not optional in AVX10.2 (figure 1-2). The only implementation of AVX10.1 - Granite Rapids - already supports 512-bit length. AVX10.2 is meant to target both P- and E-core designs, so there won't be Xeon E-core designs in the future without both AVX10 and 512-bit vectors. By extension, it will make future E-core Xeons also AVX-512 binary compatible.
I wonder if AMD making AVX-512 "popular enough" weighed in on this late change to the AVX10 specs.
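For anyone who wants to poke at this on real hardware, the converged AVX10 version is meant to be enumerated through its own CPUID leaf. Here's a rough C sketch; the leaf and bit positions come from my reading of Intel's AVX10 spec and should be treated as assumptions, since they can shift between revisions (and as noted above, Granite Rapids is the only implementation enumerating AVX10.1 so far).

```c
// Rough sketch: query the AVX10 enumeration leaf (0x24) described in Intel's
// AVX10 architecture specification. Bit positions per my reading of the spec;
// verify against the current revision before relying on them.
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;

    // CPUID.(EAX=07H,ECX=1):EDX[19] is the AVX10 support flag
    // (assumption based on the spec).
    if (!__get_cpuid_count(0x07, 0x01, &eax, &ebx, &ecx, &edx) ||
        !(edx & (1u << 19))) {
        puts("AVX10 not enumerated on this CPU");
        return 0;
    }

    // CPUID leaf 0x24: EBX[7:0] = converged AVX10 version,
    // EBX[16]/[17]/[18] = 128-/256-/512-bit vector length supported.
    if (!__get_cpuid_count(0x24, 0x00, &eax, &ebx, &ecx, &edx)) {
        puts("AVX10 enumeration leaf not available");
        return 0;
    }
    printf("AVX10 version: %u\n", ebx & 0xff);
    printf("128-bit: %s, 256-bit: %s, 512-bit: %s\n",
           (ebx & (1u << 16)) ? "yes" : "no",
           (ebx & (1u << 17)) ? "yes" : "no",
           (ebx & (1u << 18)) ? "yes" : "no");
    return 0;
}
```

If the revised spec works the way described above, anything that reports version 2 or higher should also report the 512-bit length, which is exactly the binary-compatibility point being made.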
 
Intel revised the whitepaper of AVX10 in March and made 512-bit vector length not optional in AVX10.2 (figure 1-2).

Oh?

All new subsequent vector instructions will be enumerated only as part of Intel AVX10. Apart from a few special cases, those instructions will be supported at all vector lengths.

Intel AVX10 Version 2 will include a suite of new Intel AVX10 instructions covering new AI data types and conversions, data movement optimizations, and standards support. All new instructions will be supported at 128-, 256-, and 512-bit vector lengths with limited variances.
All Intel AVX10 versions will implement the new versioning enumeration scheme.

Perhaps I misunderstand what you mean by "not optional" because this appears to be optional implementation. You can optionally choose 128, 256, or 512-bit length when employing any of the new instructions. What you can't choose is 32 vector registers. Crestmont implements 16x128-bit double-pumped, but I believe it can skirt by on the previously mentioned AVX2 compiles (also mentioned in the whitepaper, section 1.4).
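For reference, the "supported at all vector lengths" wording in the quoted passage is the same shape AVX-512VL already has today: one operation, expressible at 128-, 256- or 512-bit width, with masking at every width. A quick sketch using current AVX-512F/VL intrinsics as a stand-in for the AVX10 encodings (run it on AVX-512-capable hardware or under Intel SDE; it does no runtime feature check):

```c
// Illustration of "one operation, multiple vector lengths" using today's
// AVX-512F/VL intrinsics as a stand-in for the AVX10 encodings.
// Build: gcc -O2 -mavx512f -mavx512vl lengths.c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __mmask8  m8  = 0x0F;   // enable the low 4 lanes
    __mmask16 m16 = 0x000F; // enable the low 4 of 16 lanes

    __m128 a128 = _mm_set1_ps(1.0f),    b128 = _mm_set1_ps(2.0f);
    __m256 a256 = _mm256_set1_ps(1.0f), b256 = _mm256_set1_ps(2.0f);
    __m512 a512 = _mm512_set1_ps(1.0f), b512 = _mm512_set1_ps(2.0f);

    // The same masked add, expressed at three vector lengths.
    __m128 r128 = _mm_mask_add_ps(a128, m8, a128, b128);     // 128-bit (VL)
    __m256 r256 = _mm256_mask_add_ps(a256, m8, a256, b256);  // 256-bit (VL)
    __m512 r512 = _mm512_mask_add_ps(a512, m16, a512, b512); // 512-bit (F)

    float o128[4], o256[8], o512[16];
    _mm_storeu_ps(o128, r128);
    _mm256_storeu_ps(o256, r256);
    _mm512_storeu_ps(o512, r512);

    // Enabled lanes get 1 + 2 = 3; disabled lanes keep the src value (1.0).
    printf("128-bit lane0=%.1f  256-bit lane7=%.1f  512-bit lane15=%.1f\n",
           o128[0], o256[7], o512[15]);
    return 0;
}
```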
 
I do not understand your post. You're quoting something that is not related to what I wrote.
Compare the figure on the last page of revisions 3.0 and 2.0, which you can find here. "Optional 512-bit FP/int" was removed from both AVX10.1 and 10.2.

Those are quotes from the link you posted. Here's a screenshot.

 
Those are quotes from the link you posted. Here's a screenshot.

I know where it's from. I do not know what relevance it has to what I wrote, since it's about instructions and not implementations.
In revision 2.0, future E-core designs were not guaranteed to support 512-bit instructions since it was an optional feature. In May this changed with rev. 3.0, and now everything will support 512-bit vectors, thus supporting backward binary compatibility with AVX-512.
 
I know where it's from. I do not know what relevance it has to what I wrote, since it's about instructions and not implementations.
In revision 2.0, future E-core designs were not guaranteed to support 512-bit instructions since it was an optional feature. In May this changed with rev. 3.0, and now everything will support 512-bit vectors, thus supporting backward binary compatibility with AVX-512.

AVX10.2 implements instructions that are supported at 128-, 256-, and 512-bit. That's where it applies to implementation. It says right there in the whitepaper: ALL vector lengths are supported by this versioning of AVX10.2. So if you want to implement a new instruction in AVX10.2 for, say, 32x256-bit, you can do so.

The wording changed from 512-bit being optional (i.e. all new instructions HAD to be 128 or 256, optionally also 512) to ALL instructions now having to be 128, 256, AND 512.

In this way AVX10 has solved none of the problems of transparency that AVX-512 had, since AVX-512 already contains a plethora of instructions that aren't 512-bit length exclusive.

Perhaps I misunderstand what you mean by "not optional" because this appears to be optional implementation.

I did misunderstand; that's my bad. Remember kids, it's one thing to read something and another to understand it. I had to reread it a few times for it to actually stick.

Okay, so yes, AVX10.2 removes the optional implementation at the logic level. Future E-cores will very likely be 32x256-bit double-pumped to adhere to the AVX10.2 superset, giving them full 512-bit support. I saw that Intel didn't specify "Future" in the diagram, saw that they kept the 128- and 256-bit subsets, and assumed that AVX10.2 formalized non-512 hardware implementations to the effect of allowing 32x128 by using the subset vector lengths to support it. Which they did before the change, but now I get it.

I'll be getting another cup of coffee before reading tech documents.
 
AVX10.2 implements instructions that are supported at 128-, 256-, and 512-bit. That's where it applies to implementation. It says right there in the whitepaper: ALL vector lengths are supported by this versioning of AVX10.2. So if you want to implement a new instruction in AVX10.2 for, say, 32x256-bit, you can do so.

The wording changed from 512-bit being optional (i.e. all new instructions HAD to be 128 or 256, optionally also 512) to ALL instructions now having to be 128, 256, AND 512.
What are you even arguing here? You're agreeing with what I wrote in the first place. This is the case now, and it's quite a substantial change from the previous whitepaper, made late.

Before this change you had separate AVX10/256 and AVX10/512 targets. They were binary incompatible, as in an AVX10/512 program wouldn't run on an AVX10/256 CPU (E-core). Intel decided to remove AVX10/256-only cores from the AVX10 spec and thus solved one of the gripes I had with it. I wonder if they are going to keep the VMX mechanism to create AVX10/256-only virtual machines on AVX10/512 CPUs.
In this way AVX10 has solved none of the problems of transparency that AVX-512 had, since AVX-512 already contains a plethora of instructions that aren't 512-bit length exclusive.
It solved another problem of AVX-512, which is fragmentation of implementations. Unlike previous x86 extensions like SSE, there are gaps in instruction coverage between subsequent implementations of AVX-512. For example, the latest Intel designs have no VP2INTERSECT support despite it having been supported earlier in Tiger Lake.
AVX10.2 supports everything from AVX10.1, and future AVX10.3 will support everything in AVX10.2, and so on.
What other transparency issues remain?
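The fragmentation point is easiest to see in feature-detection code: with AVX-512 an application has to AND together a pile of separate subset flags (and real products shipped with different combinations of them), while AVX10 is supposed to collapse that into a single inclusive version number. A hedged sketch in C; the CPUID leaf 0x24 layout is my reading of the spec and may differ between revisions.

```c
// Sketch of the fragmentation point: AVX-512 needs a pile of subset flags,
// AVX10 is meant to reduce that to one monotonically increasing version
// number (leaf/bit positions are my reading of Intel's AVX10 spec).
#include <cpuid.h>
#include <stdio.h>

// Returns the converged AVX10 version, or 0 if AVX10 isn't enumerated.
static unsigned avx10_version(void) {
    unsigned a, b, c, d;
    if (!__get_cpuid_count(0x07, 0x01, &a, &b, &c, &d) || !(d & (1u << 19)))
        return 0;                      // no AVX10 support flag
    if (!__get_cpuid_count(0x24, 0x00, &a, &b, &c, &d))
        return 0;                      // enumeration leaf absent
    return b & 0xff;                   // EBX[7:0] = AVX10 version
}

int main(void) {
    // The AVX-512 way: every subset is its own flag, and this list is far
    // from complete (VNNI, BF16, VP2INTERSECT, ... each add another check).
    int avx512_baseline =
        __builtin_cpu_supports("avx512f")  &&
        __builtin_cpu_supports("avx512vl") &&
        __builtin_cpu_supports("avx512bw") &&
        __builtin_cpu_supports("avx512dq") &&
        __builtin_cpu_supports("avx512cd");

    // The AVX10 way: one inclusive version number; 10.2 implies all of 10.1.
    int avx10_2_or_later = avx10_version() >= 2;

    printf("AVX-512 'common' baseline: %s\n", avx512_baseline ? "yes" : "no");
    printf("AVX10.2 or later:          %s\n", avx10_2_or_later ? "yes" : "no");
    return 0;
}
```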
 