News Posts matching #Cooper Lake


Intel Ice Lake-SP and Cooper Lake-SP Details Leaked

Brainbox, a Korean media outlet, has gathered information on Intel's upcoming Ice Lake and Cooper Lake server processors from a presentation ASUS held for its server lineup. Cooper Lake-SP will be the first server CPU released on the new "Whitley" platform and is expected to launch in Q2 2020. Cooper Lake-SP comes with a 300 W TDP and will be available in configurations of up to 48 cores, but there should also be a 56-core model, similar to the Xeon Platinum 9282, with a TDP of 400 W. Cooper Lake-SP supports up to 64 PCIe 3.0 lanes, eight-channel memory (16 DIMMs in total) at speeds of up to 3200 MHz, and four Ultra Path Interconnect (UPI) links.

Ice Lake-SP, built on the new 10 nm+ manufacturing process, arrives soon after the Cooper Lake-SP release, with a launch window in Q3 2020. That is just a few months between launches, so it will be somewhat difficult to position two rather distinct products side by side. As far as specifications go, Ice Lake-SP will offer up to 38 cores in its top-end model, within a 270 W TDP. It supports 64 PCIe 4.0 lanes and three UPI links. Eight-channel memory is supported here as well, this time with the option to use 2nd-generation Optane DC Persistent Memory. Both CPU microarchitectures will run on the new LGA 4189 socket (Socket P+).
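For context, the eight-channel memory interface translates into a large jump in theoretical peak bandwidth. A back-of-the-envelope sketch (assuming 64-bit channels at 3200 MT/s with no protocol overhead; a theoretical ceiling, not an Intel-published figure):

```python
# Theoretical peak memory bandwidth for an 8-channel DDR4-3200 configuration.
# Assumptions: 64-bit (8-byte) channels, 3200 MT/s, zero overhead.
channels = 8
bytes_per_transfer = 8      # 64-bit channel width
transfers_per_sec = 3200e6  # 3200 MT/s

peak_gb_s = channels * bytes_per_transfer * transfers_per_sec / 1e9
print(f"Theoretical peak: {peak_gb_s:.1f} GB/s")  # -> Theoretical peak: 204.8 GB/s
```

Real-world sustained bandwidth is lower once refresh cycles, command overhead, and access patterns are accounted for, but the figure shows why eight channels matter for 38- to 56-core parts.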

Intel Adds More L3 Cache to Its Tiger Lake CPUs

InstLatX64 has posted a CPU dump of Intel's next-generation 10 nm CPUs codenamed Tiger Lake. With a CPUID of 806C0, this Tiger Lake chip runs at a 1000 MHz base and 3400 MHz boost clock, lower than current Ice Lake models. That is to be expected, as this is likely an engineering sample, meaning the production/consumer revision should reach higher frequencies.

Perhaps the most interesting finding in this dump is the new L3 cache configuration. Until now, Intel has usually provisioned 2 MB of L3 cache per core; with Tiger Lake, the plan appears to be to boost the amount of available cache. We are getting 50% more L3 cache, resulting in 3 MB per core, or 12 MB in total for this four-core chip. A larger cache can add latency, because data has to travel a greater distance in and out of the cache, but Intel's engineers have presumably addressed this. Additionally, full AVX-512 support is present, with the exception of AVX512_BF16, the extension for the bfloat16 floating-point format found in Cooper Lake Xeons.
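The bfloat16 format mentioned above is simply the top 16 bits of an IEEE-754 float32: it keeps the full 8-bit exponent (so the full float32 range) but only 7 mantissa bits. A minimal sketch of the conversion, using simple truncation (AVX512_BF16 hardware uses round-to-nearest-even, omitted here for brevity):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16 by keeping only the top 16 bits."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

v = 3.14159
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(v))
print(approx)  # -> 3.140625: float32 range, but only ~3 decimal digits of precision
```

The reduced precision is acceptable for neural-network training, which is why the format shows up in Cooper Lake's Deep Learning Boost feature set.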

Next-generation Intel Xeon Scalable Processors to Deliver Breakthrough Platform Performance with up to 56 Processor Cores

Intel today announced its future Intel Xeon Scalable processor family (codename Cooper Lake) will offer customers up to 56 processor cores per socket and built-in AI training acceleration in a standard, socketed CPU as part of its mainline Intel Xeon Scalable platforms, with availability in the first half of 2020. The breakthrough platform performance delivered within the high-core-count Cooper Lake processors will leverage the capabilities built into the Intel Xeon Platinum 9200 series, which today is gaining momentum among the world's most demanding HPC customers, including HLRN, Advania, 4Paradigm, and others.

"The Intel Xeon Platinum 9200 series that we introduced as part of our 2nd Generation Intel Xeon Scalable processor family generated a lot of excitement among our customers who are deploying the technology to run their high-performance computing (HPC), advanced analytics, artificial intelligence and high-density infrastructure. Extended 56-core processor offerings into our mainline Intel Xeon Scalable platforms enables us to serve a much broader range of customers who hunger for more processor performance and memory bandwidth."
-Lisa Spelman, vice president and general manager of Data Center Marketing, Intel Corporation

Intel "Sapphire Rapids" Brings PCIe Gen 5 and DDR5 to the Data-Center

In what reads like the mother of all ironies, prior to the effective death sentence dealt to it by the U.S. Department of Commerce, Huawei's server business developed an ambitious product roadmap for its FusionServer family, aligned with Intel's enterprise processor roadmap. The roadmap describes in great detail the key features of these processors, such as core counts, platform, and I/O. The "Sapphire Rapids" processor will introduce the biggest I/O advancements in close to a decade when it releases sometime in 2021.

With an as-yet-unannounced CPU core count, the "Sapphire Rapids-SP" processor will introduce DDR5 memory support to the data center, which aims to double bandwidth and memory capacity over the DDR4 generation. The processor features an 8-channel (512-bit wide) DDR5 memory interface. The second major I/O introduction is PCI-Express gen 5.0, which not only doubles per-lane bandwidth over gen 4.0 to 32 GT/s, but also comes with a constellation of data-center-relevant features that Intel is pushing out in advance as part of the CXL interconnect, which is built on top of the PCIe gen 5.0 physical layer.
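The per-lane doubling above can be sanity-checked with a little arithmetic. A minimal sketch (gen 3 through 5 all use 128b/130b line encoding; protocol overhead beyond the encoding is ignored):

```python
# Per-lane raw signaling rates (GT/s) for recent PCIe generations.
gens = {"3.0": 8, "4.0": 16, "5.0": 32}

for gen, gts in gens.items():
    # Apply 128b/130b encoding, then convert bits to bytes.
    lane_gb_s = gts * 128 / 130 / 8
    print(f"PCIe {gen}: {lane_gb_s:.2f} GB/s per lane, x16 = {lane_gb_s * 16:.1f} GB/s")
    # PCIe 5.0 works out to roughly 3.94 GB/s per lane, ~63 GB/s for an x16 slot.
```

Each generation doubles the previous one, so a gen 5.0 x16 slot carries as much as a gen 3.0 x64 link would, which is a large part of why the data center is the first target for the technology.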

Intel "Cooper Lake" Latest 14nm Stopgap Between "Cascade Lake" and "Ice Lake"

With no end to its 10 nm transition woes in sight (at least not until late 2019), Intel is left with refining its existing CPU microarchitectures on the 14 nanometer node. The client-desktop segment sees the introduction of "Whiskey Lake" (aka Coffee Lake Refresh) later this year, while the enterprise segment gets the 14 nm "Cascade Lake." To its credit, Cascade Lake introduces a few major platform innovations, such as support for Optane Persistent Memory, silicon-level hardening against recent security vulnerabilities, and Deep Learning Boost, hardware acceleration for neural-net building/training that introduces VNNI (Vector Neural Network Instructions). "Cascade Lake" makes its debut towards the end of 2018. It will be succeeded in 2019 by the new "Cooper Lake" architecture.

"Cooper Lake" is a refresh of "Cascade Lake," and a stopgap in Intel's saga of getting 10 nm right, so it could build "Ice Lake" on it. It will be built on the final (hopefully) iteration of the 14 nm node. It will share its platform with "Cascade Lake," and so Optane Persistent Memory support carriers over. What's changed is the Deep Learning Boost feature-set, which will be augmented with a few new instructions, including BFLOAT16 (a possible half-precision floating point instruction). Intel could also be presented with the opportunity to crank up clock speeds across the board.