Tuesday, April 2nd 2019

Intel Unleashes 56-core Xeon "Cascade Lake" Processor to Preempt 64-core EPYC

Intel late Tuesday made a boatload of enterprise-relevant product announcements, including the all-important update to its Xeon Scalable enterprise processor product-stack, with the addition of the new 56-core Xeon Scalable "Cascade Lake" processor. This chip is believed to be Intel's first response to the upcoming AMD 7 nm EPYC "Rome" processor with 64 cores and a monolithic memory interface. The 56-core "Cascade Lake" is a multi-chip module (MCM) of two 28-core dies, each with a 6-channel DDR4 memory interface, totaling 12 channels for the package. Each of the two 28-core dies is built on the existing 14 nm++ silicon fabrication process, and the IPC of the 56 cores is largely unchanged since "Skylake." Intel, however, has added several HPC- and AI-relevant instruction-sets.

To begin with, Intel introduced DL Boost, which is a fixed-function hardware matrix multiplier that accelerates building and training of AI deep-learning neural networks. Next up are hardware mitigations against several speculative-execution CPU security vulnerabilities that have haunted the computing world since early 2018, including certain variants of "Spectre" and "Meltdown." A hardware fix carries a smaller performance impact than a software fix in the form of a firmware patch. Intel has added support for Optane Persistent Memory, which is the company's grand vision for what succeeds volatile primary memory such as DRAM. Currently slower than DRAM but faster than SSDs, Optane Persistent Memory is non-volatile, and its contents can be made to survive power outages. This allows sysadmins to power down entire servers to scale with workloads, without worrying about long wait times to restore uptime when waking those servers back up. Supported CPU instruction-sets include AVX-512 and AES-NI.
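As a rough illustration of what DL Boost provides at the instruction level, below is a minimal, illustrative sketch (not Intel code, nor anything from this announcement) of the INT8 dot-product-accumulate operation that the AVX-512 VNNI instruction VPDPBUSD performs; it assumes a GCC/Clang toolchain exposing the _mm512_dpbusd_epi32 intrinsic and a CPU that supports AVX-512 VNNI.

/* Hypothetical sketch: INT8 dot product via AVX-512 VNNI (DL Boost).
 * One VPDPBUSD multiplies 64 unsigned 8-bit activations by 64 signed
 * 8-bit weights and accumulates into sixteen 32-bit sums.
 * Assumed build: gcc -O2 -mavx512f -mavx512vnni vnni_sketch.c */
#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

int32_t dot_u8s8(const uint8_t *activations, const int8_t *weights, size_t n)
{
    __m512i acc = _mm512_setzero_si512();                /* 16 x int32 partial sums */
    for (size_t i = 0; i + 64 <= n; i += 64) {           /* tail elements ignored for brevity */
        __m512i a = _mm512_loadu_si512(activations + i); /* 64 unsigned 8-bit values */
        __m512i w = _mm512_loadu_si512(weights + i);     /* 64 signed 8-bit values   */
        acc = _mm512_dpbusd_epi32(acc, a, w);            /* the VNNI instruction     */
    }
    int32_t lanes[16], sum = 0;
    _mm512_storeu_si512(lanes, acc);                     /* spill the 16 partial sums */
    for (int k = 0; k < 16; k++) sum += lanes[k];
    return sum;
}

int main(void)
{
    uint8_t a[64]; int8_t w[64];
    for (int i = 0; i < 64; i++) { a[i] = 2; w[i] = 3; }
    printf("%d\n", dot_u8s8(a, w, 64));                  /* expect 64 * 2 * 3 = 384 */
    return 0;
}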
Intel Speed Select is a fresh spin on a neglected feature most processors have had for decades: it lets administrators select specific multipliers for CPU cores on the fly, remotely. Not too different from this is Resource Director Technology, which gives you finer-grained QoS (quality of service) controls for specific cores, PIDs, virtual machines, and so on.
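For a sense of what that fine-grained QoS looks like from the operating-system side, below is a minimal sketch (an illustrative assumption, not something from Intel's announcement) using Linux's resctrl filesystem, the kernel's interface to Resource Director Technology; the group name "lowprio", the L3 mask, and the PID are made up for illustration.

/* Hypothetical sketch: cap a process's L3 cache share through Resource
 * Director Technology via Linux resctrl.
 * Assumes resctrl is already mounted: mount -t resctrl resctrl /sys/fs/resctrl */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    /* Each resource group is a directory under /sys/fs/resctrl. */
    mkdir("/sys/fs/resctrl/lowprio", 0755);

    /* Restrict the group to a few ways of L3 on cache domain 0.
     * The mask and domain are illustrative; valid values are machine-dependent. */
    FILE *f = fopen("/sys/fs/resctrl/lowprio/schemata", "w");
    if (!f) return 1;
    fprintf(f, "L3:0=f\n");
    fclose(f);

    /* Bind a (hypothetical) PID to the group; its cache fills are then
     * limited to the ways allowed above. */
    f = fopen("/sys/fs/resctrl/lowprio/tasks", "w");
    if (!f) return 1;
    fprintf(f, "%d\n", 12345);
    fclose(f);
    return 0;
}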

Unlike previous Xeon Scalable models, the first Xeon Scalable "Cascade Lake" processor, the Xeon Platinum 9200, comes in an FC-BGA package and is not socketed. This 5,903-pin BGA package uses a common integrated heatspreader covering the two 28-core dies underneath. The two dies talk to each other over an on-package UPI x20 interconnect link, while each die's second UPI x20 link is routed out as one of the package's two external x20 links, allowing up to two packages on a single board (112 cores).
Source: HotHardware

88 Comments on Intel Unleashes 56-core Xeon "Cascade Lake" Processor to Preempt 64-core EPYC

#76
aQi
whats the point ???? another 14nm ?? more cores ? more watt ?

BGA hmm not so interesting, at least they should work on TDP for sure.
#78
aQi
remixedcat
hahahhhahahahahahahahahhahahahahahahahahaahhaha

true that but Intel makes more cores to compete yet with same nm size ??? more power more heat....be like sales down we need more watts :P
#79
efikkan
Aqeel Shahzad: whats the point ???? another 14nm ?? more cores ? more watt ?
Cascade Lake-SP is about three quarters late, which is why the launch feels a little misplaced.
Cascade Lake-SP is not a huge improvement over Skylake-SP, but according to Intel it features "cache improvements", clock-speed improvements, hardware security mitigations (added late), a few new instructions for "deep learning" stuff, and AVX improvements. Cascade Lake-SP is a fine refinement, but I think everyone is eyeing Ice Lake-SP now, which will be the next large generational jump.
notb: But why is it covering all those topics? I mean: why are people here so interested in managers moving between companies? Is this also an important part of being "an enthusiast"?
I don't know. And why are less serious sites like Wccftech all of a sudden covering the marriage status of tech CEOs? My best guess is that it's low effort and attracts attention.
notb: But since Ryzen came out TPU decided to cover CPUs more. And, since Ryzen relatively sucked at gaming, they included that "productivity" part, which is really bad - clearly out of reviewers' comfort zone.

Again: why do it? Does AMD expect this (they provide the CPUs)? I'd be fine with that, but maybe they should also give some guideline.
Why is this a bad thing? Very few buyers of high-end hardware do it only for gaming; there are a lot of "power users" among the audience here.

I do agree that the quality of the "productivity" part is questionable though.
notb: Wouldn't it be nice if we got a "how stuff works" article once in a while? Or how to do something useful using a computer?
Absolutely.

I also wish that tech writers/journalists in general actually knew the tech they were writing about, rather than just being people who are interested in tech. The gap is often obvious whenever deeper technological topics come up. I see the same problem whenever even fairly respected outlets post interviews with engineers/developers; it's glaringly obvious that they don't understand the response from the other party, they don't ask the right questions, and they are not able to see through the BS in the PR talking points that most representatives are instructed to use.
#80
phill
Solaris17: yup, in almost all cases the software is DB-related, be it custom or commercial like Microsoft/Oracle SQL. Support contracts are big "gotchas" for these things. Licensing for a multi-server/core environment might be a couple hundred K. The "support" contracts tied to those are almost always separate line items at a locked-in rate for a locked-in number of years. That is 10s if not 100s of thousands.

The other software is generally virtualization; licensing for virtual machines can be split per core/node and is expensive in itself, and of course that doesn't include the cost of licensing the operating systems you install in this environment.

Additionally, you may also have virtualized appliances like Sophos, Palo Alto, etc. for routing and switching. Big-boy routers have many different types of licensing depending on the manufacturer, and they can be split up all kinds of ways. Some are done by port/features (VPN etc.) or speed (allowed to negotiate past 1 Gb/10 Gb/40/100 Gb?), and often a collection of all of them. Lest we forget, like everything else, if you want to license your network equipment so it functions, that requires a separate support contract. Big-boy iron switches run tens of thousands of dollars for the equipment before licensing and support.

In many cases a very large, or small but complex (think science lab), build-out costs hundreds of thousands of dollars up to several million, so $20k or so for a server is nothing. Not to mention we are talking enterprise-grade HDDs or SSD/NVMe drives; think $700+ per drive.

There will always be variations. Not all enterprise switches are 20k, not all routers are 8 grand, and not all devices need you to renew a support license, but the idea is that in this field the numbers are very big all the time. Think of your wallet having 1s, 5s and 10s in it. That's pretty normal and you are used to seeing it. In enterprise network/systems design the wallets hold 10,000 / 500,000 / 1,000,000; it's just the norm. The cost of playing in the field.

As for the convo itself (not singling you out @phill, just for the record), let's make sure we are keeping the convo cool. I don't want people arguing and getting pissy. I encourage open discussion of these types of systems, but everyone needs to play nice from both sides of the fence.
Would you mind if I took this to PM or somewhere else for us just to chat about? As I've not been in the field very long I can understand the differences a little but it seems there's another world out there as well! It seems your experience is far and wide (I'm not trying to create a pun here!! :) )

Back to the topic in hand....
In the grand scheme of things, with the support and so on, the server cost might be the last thing on someone's mind. If you want something to do a job and it's special hardware you need, it's special hardware you buy. You don't buy a single-thread-focused system for multi-threaded work and expect it to do well.
These CPUs are massively expensive, but if there's a niche thread of work out there (please excuse the pun) then customers will have to buy... It wouldn't matter if the competition offered faster performance in everything else, or more cores, or whatever; if the other CPU is able to do something twice as fast but costs 4 times the price, people will buy it because of the time saved over the other hardware...

The company I work for is going to be changing to SAP at some point.. I hear that's a bit of an interesting product and program to use.... I wonder what the hardware requirements for that will be...
#81
notb
Aqeel Shahzad: whats the point ???? another 14nm ?? more cores ? more watt ?
The point is to pack more cores on a single socket. It saves money.
phillThe company I work for is going to be changing to SAP at some point.. I hear that's a bit of an interesting product and program to use.... I wonder what the hardware requirements for that will be...
Don't get overly excited. I assume you mean SAP ERP.
ERP means enterprise resource planning, which is basically a model of your company. The idea is that one system contains all the data you need: sales, assets, liabilities, inventory, employees, costs, open issues etc.
Let's say your company doesn't have an ERP. You'll have separate systems for different things:
- for sales (who sold what),
- for employees (who came when, who's on medical leave)
- for inventory (what is to be sold)
- for aftersales issues (because the same people who sell are also covering returns or basic repairs)
Keep in mind "a system" could also mean an Excel sheet or hand-written notes...
Imagine the huge cost of checking the efficiency of a single employee.

John sold just 3 apples today - he usually sells 17.
Maybe he got sick and went home after lunch? Maybe he was dealing with someone unhappy with bad milk sold yesterday? Maybe there were just 3 apples in the shop? Maybe the power was down for most of the day?
On the other hand, if you have a centralized database (ERP) you can define metrics that can automatically refresh daily (or even live).

Underneath is a normal database, usually Oracle or SQL Server. On top is an interface (either SAP-made or custom) you actually see - it can be good or bad. Those made by SAP seem dated, but I've seen worse custom ones.

The technologically interesting product is SAP HANA, which is a columnar, in-memory database designed for analytics. It is hugely fast. It's not a standard yet, but more and more companies buy it.
"In-memory" means the system sucks everything it needs into RAM; in a traditional database, disk operations are the biggest cost of queries.
In-memory databases are getting traction now and are the main reason Intel is pushing these Cascade Lake Xeons.
Optane is the game changer, as it was meant to be from the beginning. :)
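To make the row-store vs. column-store point concrete, here's a minimal sketch in C (my illustration, not HANA code; the table layout and field names are made up) of why an analytic aggregate over one field is much cheaper when each field lives in its own contiguous in-memory array.

/* Hypothetical sketch: the same "sales" table stored row-wise and column-wise.
 * Summing one field from the columnar layout streams through a contiguous
 * array, while the row layout drags every other field through the cache. */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

/* Row-oriented: one struct per sale, all fields interleaved in memory. */
struct sale_row { int store_id; int product_id; double amount; char note[48]; };

/* Column-oriented: one contiguous array per field. */
struct sales_columns { int *store_id; int *product_id; double *amount; };

double total_amount_rows(const struct sale_row *rows, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += rows[i].amount;   /* skips ~56 unused bytes of every 64-byte row */
    return sum;
}

double total_amount_columns(const struct sales_columns *c, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += c->amount[i];     /* contiguous 8-byte reads, cache- and SIMD-friendly */
    return sum;
}

int main(void)
{
    struct sale_row *rows = calloc(N, sizeof *rows);
    struct sales_columns cols = { calloc(N, sizeof(int)), calloc(N, sizeof(int)),
                                  calloc(N, sizeof(double)) };
    for (size_t i = 0; i < N; i++) { rows[i].amount = 1.0; cols.amount[i] = 1.0; }
    printf("%.0f %.0f\n", total_amount_rows(rows, N), total_amount_columns(&cols, N));
    return 0;
}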
#82
aQi
Exactly, more cores and more power. Since when did core count manage to be more admirable than being power efficient?

Ok, being a bit off topic, but will these manage to secure a better place in the market? I think Intel is again playing dirty. Where 5 nm fabrication is already reported by TSMC, Intel holding onto 14 nm fabrication is ridiculous and just not making sense.
#83
notb
Aqeel Shahzad: Exactly, more cores and more power. Since when did core count manage to be more admirable than being power efficient?
In datacenters? Since the beginning.
Think about how you're using a desktop. You run programs: browsers, games, maybe even something work-oriented (CAD, a video editor or something). You're interested in the total performance, and whether there's a single hugely fast core or 20 slower cores doesn't really make a difference as long as performance is on par.

But it does in servers, because here you're dividing a CPU between tasks (systems, VMs). So it might just be that you *need* ~100 cores for whatever reason.
So looking at the Skylake lineup, you'd need 4 CPUs to get there. 4S racks are very expensive and naturally limited (there's not much space left for drives or accelerators).
With Cascade Lake you can do that with 2 CPUs. 2S racks are cheaper and more flexible.

Also, the potentially high price tag shouldn't be shocking, nor should the 400 W power draw.
The 9242 and 9282 are high-frequency CPUs (2.3/3.8 GHz and 2.6/3.8 GHz respectively). They're succeeding (doubling :-)) the top-of-the-range 8180, which costs $10k (and whose TDP is 205 W).

But you're saving money on rack, you're saving space and - assuming it's OK for the particular system - you can save A LOT on RAM.
The MSRP of a 128 GB Optane DIMM is slightly under $600. A few stores already offer these and the price is much higher, but still under $1000 (so roughly a fifth of the price of DRAM).
If they launch a 64 GB module for $300, I will consider getting one for my desktop. Amazing.
Aqeel Shahzad: Ok, being a bit off topic, but will these manage to secure a better place in the market? I think Intel is again playing dirty. Where 5 nm fabrication is already reported by TSMC, Intel holding onto 14 nm fabrication is ridiculous and just not making sense.
Enterprise customers don't care about fabrication process. They need a tool to get the job done. Intel is giving them these tools.
#84
medi01
notb: This is a very important market for Intel, so I'm pretty sure they designed Cascade Lake to be good at it.
Soo... the proof that Intel's tech is better is the random thought that "it's important for Intel". Yay.
notb: What...?
24+ cores on a single MS SQL instance is a very rare case that is accordingly very rarely used.
You probably don't realize how batshit crazy that setup is.
#85
londiste
Aqeel Shahzad: Exactly, more cores and more power. Since when did core count manage to be more admirable than being power efficient?
Twice the cores, twice the power. Power efficiency remains the same.
#86
phill
notb: The point is to pack more cores on a single socket. It saves money. [...] Don't get overly excited. I assume you mean SAP ERP. [...] Optane is the game changer, as it was meant to be from the beginning. :)
Yeah the system we've got at the moment is something similar but made in the dark ages I think.. Very clunky bit of software and I used to hate having to use it.. Never a very nice and friendly bit of software.. Not sure what will be worse to be honest lol

I think if anything I'm more interested in the hardware that runs this software; software for me is dull and rather boring (unless it's finding the cure for cancer etc :D), but hardware I can get my teeth into and want to learn...
Some of the servers we have over in the US for running SAP are complete monsters.. Over 100 cores, 2 TB of RAM, masses of drives.. Unreal bits of kit, certainly a bit of overkill for home usage lol

I take it you've worked with these sorts of software a long time @Solaris17 ? :)
#87
Solaris17
Super Dainty Moderator
phill: Not sure what will be worse to be honest lol
The medical IT field is a great place to go and lose faith in an entire industry.
phill: I take it you've worked with these sorts of software a long time @Solaris17 ?
The software and the hardware. Mostly R&D or engineering roles. Not super fond of the software side though. I'm C-level now. Pretty sure that's called the "Peter principle". Fun fun stuff; I've had the pleasure of being lucky enough to touch and work with some pretty neat stuff.
#88
phill
Solaris17: The medical IT field is a great place to go and lose faith in an entire industry.

The software and the hardware. Mostly R&D or engineering roles. Not super fond of the software side though. I'm C-level now. Pretty sure that's called the "Peter principle". Fun fun stuff; I've had the pleasure of being lucky enough to touch and work with some pretty neat stuff.
Is it really that bad? :(

C level sir??