Wednesday, May 17th 2017

Intel Shows 1.59x Performance Improvement in Upcoming Intel Xeon Scalable Family

Intel today unveiled significant performance advances in its upcoming Intel Xeon Processor Scalable family. At the SAP Sapphire NOW conference, Intel showed up to 1.59x higher Intel Xeon processor performance running in-memory SAP HANA workloads over the generation it replaces, demonstrating what the new products will deliver to help meet the increasingly complex demands of big-data, in-memory workloads in the modern data center.

Diane Bryant, group president of the Data Center Group at Intel, outlined how the Intel Xeon Processor Scalable family - available in mid-2017 - will provide enhanced performance to in-memory applications like SAP HANA. This will provide customers faster time-to-insight and allow organizations to rapidly respond to change.
Bryant also announced that SAP has certified HANA to support up to 6x greater system memory on the new Intel platform for 4- or 8-socket configurations over the representative installed base of systems available four years ago. More information about the immediate-term benefits of running SAP HANA workloads on the Intel Xeon Processor Scalable family is available at Intel's IT Peer Network site.

Additionally, demonstrating its commitment to re-architecting the data center for a data-intensive world driven by the growth of artificial intelligence, 5G, autonomous driving and virtual reality, Intel showed its future persistent memory solution in a DIMM form factor live for the first time. Based on 3D XPoint media, Intel persistent memory is a transformational technology that will bring higher-capacity, affordable, persistent memory to the mainstream.

With Intel persistent memory, Intel is revolutionizing the storage hierarchy to bring large amounts of data closer to the Intel Xeon processor, which will enable many new usage models and make in-memory applications like SAP HANA even more powerful. Intel persistent memory will be available in 2018 as part of an Intel Xeon Processor Scalable family refresh (codename: Cascade Lake).

During a live demonstration of Intel persistent memory at SAP Sapphire, Lisa Davis, vice president of IT Transformation for Enterprise and Government in the Data Center Group at Intel, noted that with Intel persistent memory, in-memory databases like SAP HANA will be able to deliver even faster transactions, perform real-time analytics and accelerate business decision-making.

In preparation for next year's availability, software developers can accelerate their readiness for Intel persistent memory today with the libraries and tools at www.pmem.io. Further product details will be unveiled at a later date.
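For software developers curious what that preparation looks like in practice, below is a minimal sketch using the libpmem interface from the pmem.io libraries mentioned above; the mount point, file name, and region size are hypothetical examples rather than Intel guidance.

/* Minimal persistent-memory sketch using libpmem (pmem.io).
 * The path /mnt/pmem/hello and the 4 MiB size are hypothetical.
 * Build: cc pmem_hello.c -o pmem_hello -lpmem */
#include <stdio.h>
#include <string.h>
#include <libpmem.h>

#define POOL_SIZE (4 * 1024 * 1024)

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file; on a DAX-capable persistent-memory filesystem this is a
     * direct load/store mapping of the media, otherwise an ordinary mmap. */
    char *addr = pmem_map_file("/mnt/pmem/hello", POOL_SIZE,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    strcpy(addr, "hello, persistent memory");

    /* Flush the stores so the data survives power loss. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);  /* CPU cache-flush path */
    else
        pmem_msync(addr, mapped_len);    /* fall back to msync() */

    pmem_unmap(addr, mapped_len);
    return 0;
}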

9 Comments on Intel Shows 1.59x Performance Improvement in Upcoming Intel Xeon Scalable Family

#2
cdawall
where the hell are my stars
TheGuruStud said: So... 5% real gain? Lol
This is probably targeted at specific loads. This market is huge for Intel; they won't be goofing around with it.

I expect large gains depending on workload. None in others thanks to them being maxed out.
#3
Loosenut
I'm thinking AMD's EPYC Naples chip is spooking Intel...
#4
TheGuruStud
cdawall said: This is probably targeted at specific loads. This market is huge for Intel; they won't be goofing around with it.

I expect large gains depending on workload. None in others thanks to them being maxed out.
If I'm reading this right, they stuck some really fast flash on a DIMM, so the CPU has low-latency access to large storage? Sounds ultra pricey... PCI-E wasn't cutting it?
#5
notb
TheGuruStud said: If I'm reading this right, they stuck some really fast flash on a DIMM, so the CPU has low-latency access to large storage? Sounds ultra pricey... PCI-E wasn't cutting it?
Different type of memory.
The idea here is that since Optane has a RAM-like interface (addressing model), it can be made in a DIMM form factor - the CPU will treat it like normal RAM (just slower).
Optane is not going to replace RAM in general, but you can mix them. So one channel would use RAM (working as a normal cache for the CPU) and another would house Optane modules for large in-memory applications (like the aforementioned SAP HANA), rendering data or whatever.

And it's actually a huge improvement in server cost. A 64GB RAM module (DDR4, ECC) costs around $1000 and is still fairly rare. A 375GB Optane drive (Intel P4800X) is "just" $1500.
If Intel manages to fit 128GB on a DIMM and price it around $500, it'll quickly become a standard issue in many servers and workstations.
And of course it is only supported by Intel CPUs, so - even if the Naples CPU turns out to be a slightly better value (CPU vs CPU) - Intel's platform should still be more attractive and cheaper.
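
Taking the list prices above at face value, a quick back-of-the-envelope per-GB comparison (the 128GB / $500 Optane DIMM is a hypothetical figure, not an announced product) works out as follows:

/* Back-of-the-envelope per-GB cost comparison using the prices quoted above.
 * The 128GB / $500 Optane DIMM line is a hypothetical, not a real product. */
#include <stdio.h>

int main(void)
{
    struct { const char *name; double price_usd; double capacity_gb; } parts[] = {
        { "64GB DDR4 ECC DIMM",         1000.0,  64.0 },
        { "375GB Optane SSD (P4800X)",  1500.0, 375.0 },
        { "128GB Optane DIMM (guess)",   500.0, 128.0 },
    };

    for (int i = 0; i < 3; i++)
        printf("%-28s ~$%.2f per GB\n",
               parts[i].name, parts[i].price_usd / parts[i].capacity_gb);
    return 0;
}

That puts the DDR4 module at roughly $15-16 per GB versus about $4 per GB for either Optane figure.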
#6
close
<<"Intel Shows [up to] 1.59x Performance Improvement" for very specific workloads>>.

There, fixed it for you TPU. :)
#7
Frick
Fishfaced Nincompoop
I have no idea what SAP HANA workloads are.
#8
notb
Frick said: I have no idea what SAP HANA workloads are.
SAP AG suggests a compression ratio of roughly 7:1 for estimating RAM requirements, and you have to double the result (for dynamic memory).

Even assuming that you'll pull just a part of the database into HANA (into RAM), because not everything is necessary, the RAM needs are huge. 1TB of raw data translates into ~300GB of RAM. And 1TB is nothing special...
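
A quick sketch of that sizing rule of thumb (the function name and the example sizes are only illustrations):

/* Rough SAP HANA memory sizing per the rule of thumb above: raw data divided
 * by a ~7:1 compression ratio, then doubled for dynamic/working memory. */
#include <stdio.h>

static double hana_ram_gb(double raw_data_gb)
{
    const double compression_ratio = 7.0;  /* SAP's suggested estimate */
    const double dynamic_factor = 2.0;     /* double for dynamic memory */
    return raw_data_gb / compression_ratio * dynamic_factor;
}

int main(void)
{
    printf("1 TB raw data -> ~%.0f GB of RAM\n", hana_ram_gb(1024.0)); /* ~293 GB */
    printf("4 TB raw data -> ~%.0f GB of RAM\n", hana_ram_gb(4096.0)); /* ~1170 GB */
    return 0;
}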

At my previous workplace we used Oracle DB for department tasks. The company was selling well under a million policies a year and we used only around 15 years of historical data. Nothing special.
Yet, the database was getting close to the 4TB of space we had (compressed!). The cost of adding 1TB? Around $50k (I don't know what hardware was used - possibly Cisco B200 blades). That was a few years ago, so I guess it was using at most 32GB DIMMs (possibly 16GB). With 16GB per DIMM and 24 slots per node, 1TB translated to 3 server nodes. Costs quickly add up... We didn't get the budget and had to live with that 4TB limit for another year (maybe they bought something new since then).

Think about how Optane could change this. At the cost of slightly lower performance, Intel should be able to produce DIMMs with 4-8x the capacity of typical RAM DIMMs (and at a much lower per-GB cost). So my department would need just a single node, and we would most likely get it... :p

Now forget my insurance example. Insurance companies have tiny data. Moving to banks, we arrive at 10x larger databases. Moving to telecoms, it's another 10x up (or maybe 100x with all the smartphone tracking that takes place)...
#9
Fx
Not good enough. I'm waiting until they release the Xeon Palladium line.