Thursday, October 13th 2016

Industry Leaders Join Forces to Promote New High-Performance Interconnect

A group of leading technology companies today announced the Gen-Z Consortium, an industry alliance working to create and commercialize a new scalable computing interconnect and protocol. This flexible, high-performance memory semantic fabric provides a peer-to-peer interconnect that easily accesses large volumes of data while lowering costs and avoiding today's bottlenecks. The alliance members include AMD, ARM, Cavium Inc., Cray, Dell EMC, Hewlett Packard Enterprise (HPE), Huawei, IBM, IDT, Lenovo, Mellanox Technologies, Micron, Microsemi, Red Hat, Samsung, Seagate, SK hynix, Western Digital Corporation, and Xilinx.

Modern computer systems have been built around the assumption that storage is slow, persistent and reliable, while data in memory is fast but volatile. As new storage class memory technologies emerge that drive the convergence of storage and memory attributes, the programmatic and architectural assumptions that have worked in the past are no longer optimal. The challenges associated with explosive data growth, real-time application demands, the emergence of low latency storage class memory, and demand for rack scale resource pools require a new approach to data access.
Gen-Z provides the following benefits:
  • High Bandwidth, Low Latency: Simplified interface based on memory semantics, scalable from tens to several hundred GB/s of bandwidth, with sub-100 ns load-to-use memory latency.
  • Advanced Workloads and Technologies: Enables data centric computing with scalable memory pools and resources for real-time analytics and in-memory applications. Accelerates new memory and storage innovation.
  • Compatible and Economical: Highly software compatible with no required changes to the operating system. Scales from simple, low cost connectivity to highly capable, rack scale interconnect.
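The "memory semantic" phrasing above is the key idea: software touches data with plain CPU load and store instructions instead of issuing block-I/O commands. A minimal sketch of that programming model, using an ordinary temp file mapped via `mmap` as a stand-in for a storage-class-memory device (this illustrates the model only, not Gen-Z's actual interface):

```python
import mmap, os, tempfile

# A plain temp file stands in for a storage-class-memory device;
# this sketches the load/store programming model, not Gen-Z itself.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)
with mmap.mmap(fd, 4096) as mem:
    mem[0:5] = b"hello"        # a plain store acts as a storage write
    mem.flush()                # push the bytes to the backing medium
    data = mem[0:5].decode()   # a plain load acts as a storage read
os.close(fd)
os.unlink(path)
print(data)  # -> hello
```

Because the mapped range behaves like ordinary memory, no operating-system block layer sits in the data path, which is why the consortium can claim OS compatibility with no required changes.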
The Gen-Z Consortium, established by current board members AMD, ARM, Cray, Dell EMC, Hewlett Packard Enterprise (HPE), Huawei, IDT, Micron, Samsung, SK hynix, and Xilinx, is an open, non-proprietary, transparent industry standards body. The consortium reflects a broader industry trend that recognizes the importance of open standards and their role in providing a level playing field to promote adoption, innovation and choice. The Gen-Z Consortium is accepting new members. The core specification, covering the architecture and protocol, will be finalized in late 2016.

For more information, visit this page.

13 Comments on Industry Leaders Join Forces to Promote New High-Performance Interconnect

#1
RejZoR
So, this GenZ is a step closer to what I've predicted we're going to use one day. My prediction was that HDD+RAM will at one point get unified as a single non-volatile storage medium. Imagine having working memory and storage memory as one huge chunk, that isn't volatile, but has the speed of RAM and capacity of HDD's or at least large SSD's. I think GenZ is a step closer to that. Storage memory will also be working memory.
#2
hellrazor
Please tell me that this is just an interim name and not the final one.
#3
DeathtoGnomes
RejZoR said:
So, this GenZ is a step closer to what I've predicted we're going to use one day. My prediction was that HDD+RAM will at one point get unified as a single non-volatile storage medium. Imagine having working memory and storage memory as one huge chunk, that isn't volatile, but has the speed of RAM and capacity of HDD's or at least large SSD's. I think GenZ is a step closer to that. Storage memory will also be working memory.
And here I thought HDD's already had this as disk cache, but lacking a direct connection.
#4
TheLostSwede
hellrazor said:
Please tell me that this is just an interim name and not the final one.
It's a name for the consortium, not the tech.
#5
laszlo
anyone noticed Intel isn't in this new group?
#6
$ReaPeR$
interesting.. and yeah Intel seems to be missing..
#7
RejZoR
"open, non-proprietary"

This basically automatically means no Intel or NVIDIA lolz XD
#8
Patriot
laszlo said:
anyone noticed Intel isn't in this new group?
That is because this is to fight Omni-Path...
Intel is trying to take over networking and interconnects... to control every subsystem.
#9
$ReaPeR$
RejZoR said:
"open, non-proprietary"

This basically automatically means no Intel or NVIDIA lolz XD
Isn't it sad though?
#10
slozomby
DeathtoGnomes said:
And here I thought HDD's already had this as disk cache, but lacking a direct connection.
disk cache is a tiny % of storage capacity.

the top end xeons have 102 GB/s memory bandwidth, a mid/high end server connected to a san might have 4 16Gb fiber ports for roughly 9GB/s storage and that is not cheap.

in local storage terms. to achieve memory like speeds you'd need ~95 sas12 ssds, or ~40 m2 drives, and enough pcie lanes to support this.
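The post's arithmetic can be sanity-checked with quick division; note the 102 GB/s memory bandwidth and the drive counts are the commenter's own rough estimates, not official figures:

```python
# Rough check of the figures above (all numbers are the post's own
# estimates): how much sustained bandwidth each drive would need to
# contribute to match top-end Xeon memory bandwidth.
mem_bw_gbps = 102                  # GB/s, quoted Xeon memory bandwidth
per_sas12 = mem_bw_gbps / 95       # GB/s per drive across ~95 SAS-12 SSDs
per_m2 = mem_bw_gbps / 40          # GB/s per drive across ~40 M.2 NVMe drives
print(round(per_sas12, 2), round(per_m2, 2))  # -> 1.07 2.55
```

Those per-drive figures (~1.07 GB/s for SAS-12, ~2.55 GB/s for M.2 NVMe) sit near each interface's practical ceiling, which is consistent with the post's drive counts.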
#11
DeathtoGnomes
slozomby said:
disk cache is a tiny % of storage capacity.

the top end xeons have 102 GB/s memory bandwidth, a mid/high end server connected to a san might have 4 16Gb fiber ports for roughly 9GB/s storage and that is not cheap.

in local storage terms. to achieve memory like speeds you'd need ~95 sas12 ssds, or ~40 m2 drives, and enough pcie lanes to support this.
I was referring to @RejZoR's post only, nothing else.
#12
Prima.Vera
Will we finally see full speed enabled External Video Cards using this connector?
#13
RejZoR
slozomby said:
disk cache is a tiny % of storage capacity.

the top end xeons have 102 GB/s memory bandwidth, a mid/high end server connected to a san might have 4 16Gb fiber ports for roughly 9GB/s storage and that is not cheap.

in local storage terms. to achieve memory like speeds you'd need ~95 sas12 ssds, or ~40 m2 drives, and enough pcie lanes to support this.
Why on Earth would you design a new high-speed bus and then hook prehistoric drives to it? That's the equivalent of designing PCIe 5.0 and then sticking a Voodoo 2 into it. With an added PLX bridge!