
Kioxia, AIO Core and Kyocera Announce Development of PCIe 5.0-Compatible Broadband Optical SSD for Next-Generation Green Data Centers

btarunr

Editor & Senior Moderator
Kioxia Corporation, AIO Core Co., Ltd. and Kyocera Corporation today announced the development of a prototype of a PCIe 5.0-compatible broadband SSD with an optical interface (broadband optical SSD). The three companies will develop technologies for broadband optical SSDs to enhance their suitability for advanced applications that require high-speed transfer of large volumes of data, such as generative AI, and will also apply them in proof-of-concept (PoC) tests ahead of future real-world deployment.

The new prototype achieved functional operation with the high-speed PCIe 5.0 interface, which offers twice the bandwidth of the previous PCIe 4.0 generation, through the combination of AIO Core's IOCore optical transceiver and Kyocera's OPTINITY optoelectronic integration module technologies. In next-generation green data centers, replacing the electrical wiring interface with an optical one and utilizing broadband optical SSD technology significantly increases the physical distance possible between compute and storage devices, while maintaining energy efficiency and high signal quality. It also contributes to the flexibility and efficiency of data center system design, where digital diversification and the evolution of generative AI demand complex, high-volume, high-speed data processing.
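For scale: PCIe 5.0 runs at 32 GT/s per lane versus 16 GT/s for PCIe 4.0, which is where the "twice the bandwidth" figure comes from. A quick back-of-the-envelope sketch (the transfer rates and line encodings are the published per-generation figures; everything else is arithmetic):

```python
# Per-lane PCIe throughput by generation, illustrating why
# PCIe 5.0 doubles PCIe 4.0. Transfer rates and encodings are the
# published per-generation figures.

GENERATIONS = {
    # gen: (transfer rate in GT/s, payload bits, bits on the wire)
    "3.0": (8.0, 128, 130),   # 128b/130b encoding
    "4.0": (16.0, 128, 130),  # 128b/130b encoding
    "5.0": (32.0, 128, 130),  # 128b/130b encoding
}

def lane_gbps(gen: str) -> float:
    """Usable per-lane, per-direction throughput in Gbit/s."""
    rate, payload, total = GENERATIONS[gen]
    return rate * payload / total

for gen in GENERATIONS:
    print(f"PCIe {gen}: ~{lane_gbps(gen):.1f} Gbit/s per lane per direction")
# A x4 PCIe 5.0 SSD therefore gets ~126 Gbit/s (~15.8 GB/s) each way.
```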



This achievement is the result of the Japanese "Next Generation Green Data Center Technology Development" project (JPNP21029), subsidized by the New Energy and Industrial Technology Development Organization (NEDO) under the "Green Innovation Fund Project: Construction of Next Generation Digital Infrastructure." In this project, participating companies are developing next-generation technologies with the goal of achieving more than 40% energy savings compared to current data centers. As part of the project, Kioxia is developing broadband optical SSDs, AIO Core is developing optoelectronic fusion devices, and Kyocera is developing optoelectronic device packages.

View at TechPowerUp Main Site
 
replacing the electrical wiring interface with an optical one and utilizing broadband optical SSD technology significantly increases the physical distance possible between compute and storage devices, while maintaining energy efficiency and high signal quality.
Yup. The only reason I see for this to exist is PCIe 5.0 and greater over fibre.
I believe PCIe 4.0 was about the cutoff for economical in-rack PCIe SANs, with Gen3 being the most economical, requiring the fewest retimers/redrivers or repeater-switches.

Routing changes

  • Routing for Gen6 signalling will be a major challenge. FEC will help with the recovery of smaller errors, but the move to PAM4 will significantly reduce the SI (signal integrity) margins in the system. This will make data more liable to errors from loss and crosstalk
    • Loss
      The total insertion loss budget for Gen6 is 32 dB, down from 36 dB in the Gen5 spec. This is a small but significant change and will limit the length of traces and the number of transitions (connectors and similar); see the rough reach estimate after this list
    • Cross-talk
      This is interference from one lane to another. With PAM4, the probability of interference changing a bit of data increases significantly. This makes cross-talk a much higher risk for Gen6 systems and will require more sophisticated design to mitigate.
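To put the 36 dB → 32 dB change in perspective, a rough reach estimate, assuming illustrative per-inch, per-connector and package loss figures (these numbers are assumptions for the sake of the example, not values from the PCIe spec):

```python
# Rough PCB reach estimate from an end-to-end insertion loss budget.
# The package, connector and per-inch losses below are illustrative
# assumptions, not figures from the PCIe specification.

def max_trace_inches(budget_db: float,
                     connectors: int,
                     loss_per_connector_db: float = 1.5,  # assumed
                     loss_per_inch_db: float = 1.0,       # assumed
                     tx_rx_package_db: float = 9.0) -> float:  # assumed
    """Trace length that fits in the remaining loss budget."""
    remaining = budget_db - tx_rx_package_db - connectors * loss_per_connector_db
    return max(remaining, 0.0) / loss_per_inch_db

for gen, budget in (("Gen5", 36.0), ("Gen6", 32.0)):
    print(f"{gen}: ~{max_trace_inches(budget, connectors=2):.0f} in of trace "
          f"with 2 connectors")
# Under these assumptions the 4 dB budget cut alone costs ~4 in of
# reach, before PAM4's reduced eye height is even considered.
```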
 
This is a whole lot of words for "changed the PCIe physical transport from electrical to optical".
 
A four-lane electrical PCIe interface actually has four signal pairs in each direction, so eight differential pairs in total. What about the optical counterpart? 4 fibres total? 4 fibres in each direction?
 
You can use a single fiber strand to transmit multiple lanes simultaneously using wavelength-division multiplexing:
[Image: WDM operating principle]

Obviously it complicates the endpoints, but that's how most FTTH deployments are done - with two wavelengths on a single fiber, one for each direction.
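A minimal sketch of the idea, assuming a one-wavelength-per-signal mapping onto the standard CWDM grid (the wavelengths are ITU-T G.694.2 channel centers; the lane-to-wavelength assignment is purely illustrative, not how any shipping optical PCIe product works):

```python
# Wavelength-division multiplexing illustration: each lane/direction
# gets its own wavelength on one shared fiber. Wavelengths are the
# first eight standard CWDM channel centers (ITU-T G.694.2); the
# lane-to-wavelength mapping itself is purely illustrative.

CWDM_NM = [1271, 1291, 1311, 1331, 1351, 1371, 1391, 1411]

def assign_wavelengths(lanes: int) -> dict:
    """Map the TX/RX of each lane to a distinct CWDM wavelength."""
    channels = {}
    for lane in range(lanes):
        channels[f"lane{lane}-tx"] = CWDM_NM[2 * lane]
        channels[f"lane{lane}-rx"] = CWDM_NM[2 * lane + 1]
    return channels

# A x4 link: 8 signals, 8 wavelengths, 1 strand of single-mode fiber.
for signal, nm in assign_wavelengths(4).items():
    print(f"{signal}: {nm} nm")
```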
 
The cost of WDM is not necessarily acceptable for short-range communication. For example, the Wikipedia page for Terabit Ethernet lists a few 200G and 400G standards that use it, and all of them have a reach of 2 km or more. Many other standards use multiple fibres without WDM for the same speeds at shorter distances.

Here is a PCIe 4.0 card with optical PCIe interfaces; the specs say that it needs 12-strand or 24-strand optical cables. (And that's strange - why not 4/8/16?)
 
Yes, you are correct. There's a split between single-mode and multi-mode fibers as well, with the latter usually being used for short runs inside datacenters. WDM pretty much requires single-mode fibers, but it's way more suited to long runs such as the FTTH deployments I mentioned before. Personally I'm running SM WDM between buildings on my property (a use case generally fit for MM) despite the increased equipment cost, partly because finding a qualified SM installation company was way easier in my area thanks to the abundance of FTTH. At work we're using both SM and MM.

The MPO connector (and its proprietary MTP extension) supports 8, 12, 16 and 24 strands, and there are compatibility issues between the different versions. The overall physical shape of this connector was most likely defined by the QSFP module's physical design. As for the strand counts, see the sketch below.
In general, fiber compatibility is a very complex topic.
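To make the strand arithmetic concrete, here's a quick sketch, assuming one fiber per lane per direction (no WDM) and rounding up to the trunk sizes that particular card lists; the rounding explanation for the 12/24-strand requirement is my reading, not a vendor statement:

```python
# Strand counting for a parallel-fiber (no WDM) PCIe link: one fiber
# per lane per direction, rounded up to an available MPO trunk size.
# That this explains the card's 12/24-strand requirement is an
# assumption, not something the vendor states.

def strands_needed(lanes: int, available=(12, 24)) -> int:
    """Smallest available trunk that fits 2 fibers per lane."""
    raw = 2 * lanes  # one TX fiber + one RX fiber per lane
    fits = [size for size in available if size >= raw]
    return min(fits) if fits else raw

for lanes in (4, 8):
    print(f"x{lanes}: {2 * lanes} fibers -> {strands_needed(lanes)}-strand MPO")
# x4 needs 8 fibers -> 12-strand trunk; x8 needs 16 -> 24-strand trunk,
# matching the 12/24 options in the card's specs.
```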
 