
Intel Packs 3D X-Point and QLC NAND Flash Into a Single SSD: Optane H10

btarunr

Editor & Senior Moderator
Intel today revealed details about Intel Optane memory H10 with solid-state storage - an innovative device that combines the superior responsiveness of Intel Optane technology with the storage capacity of Intel Quad Level Cell (QLC) 3D NAND technology in a single space-saver M.2 form factor. "Intel Optane memory H10 with solid-state storage features the unique combination of Intel Optane technology and Intel QLC 3D NAND - exemplifying our disruptive approach to memory and storage that unleashes the full power of Intel-connected platforms in a way no one else can provide," said Rob Crooke, Intel senior vice president and general manager of the Non-Volatile Memory Solutions Group.

Combining Intel Optane technology with Intel QLC 3D NAND technology on a single M.2 module enables Intel Optane memory expansion into thin-and-light notebooks and certain space-constrained desktop form factors - such as all-in-one PCs and mini PCs. The new product also offers a level of performance not met by traditional Triple Level Cell (TLC) 3D NAND SSDs today, and eliminates the need for a secondary storage device.



Intel's leadership in computing infrastructure and design allows the company to utilize the value of the platform in its entirety (software, chipset, processor, memory and storage) and deliver that value to the customer. The combination of high-speed acceleration and large SSD storage capacity on a single drive will benefit everyday computer users, whether they use their systems to create, game or work. Compared to a standalone TLC 3D NAND SSD system, Intel Optane memory H10 with solid-state storage enables both faster access to frequently used applications and files and better responsiveness with background activity.

8th Generation Intel Core U-series mobile platforms featuring Intel Optane memory H10 with solid-state storage will be arriving through major OEMs starting this quarter. With these platforms, everyday users will be able to:
  • Launch documents up to 2 times faster while multitasking.
  • Launch games 60% faster while multitasking.
  • Open media files up to 90% faster while multitasking.
Compared to NAND SSDs, SSDs with Intel Optane memory are the fastest in the majority of common client use cases. Intel-based platforms with Intel Optane memory adapt to everyday computing activities to optimize performance for the user's most common tasks and frequently used applications. With offerings of up to 1TB of total storage, Intel Optane memory H10 with solid-state storage will have the capacity users need for their apps and files today - and well into the future.

The Intel Optane memory H10 with solid-state storage will come in the following capacities:
  • 16GB (Intel Optane memory) + 256GB (storage)
  • 32GB (Intel Optane memory) + 512GB (storage)
  • 32GB (Intel Optane memory) + 1TB (storage)

For more information, visit this page.

 
From the looks of it this is two SSDs - a PCIe x2 NVMe Optane Memory module and a PCIe x2 NAND SSD (with DRAM cache, judging by the render!) - on a single board. That's a shame, really, as a solution like this with everything integrated into a single controller with integrated caching algorithms and no DRAM is pretty much the killer app for Optane. The capacities seem just right, so all that's missing is a better controller, and this could be amazing.
 
Makes a lot more sense than the initial 16-64GB Optane launch. It took a full slot, and even the 64GB was barely enough for system + programs. This time you've got a 32GB Optane stick and an SSD in one slot, which is very nice for the OS. The QLC keeps the cost down, and I bet a 32GB Optane + 512GB QLC configuration is gonna be faster than a 970 Evo in regular use.
 
Enough with the QLC anti-progress already. It sucks in longevity, write performance, etc., and these new products will still cost more and be slower than, say, the ADATA XPG SX8200 Pro, no matter what controller wizardry they try with this.
 
From the looks of it this is two SSDs - a PCIe x2 NVMe Optane Memory module and a PCIe x2 NAND SSD (with DRAM cache, judging by the render!) - on a single board. That's a shame, really, as a solution like this with everything integrated into a single controller with integrated caching algorithms and no DRAM is pretty much the killer app for Optane. The capacities seem just right, so all that's missing is a better controller, and this could be amazing.

You're indeed correct.

[Image: h10-layout.png - Optane H10 board layout]

Source: https://www.anandtech.com/show/14196/intel-releases-optane-memory-h10-specifications
 
Disappointed... I thought this was an Optane-enhanced SSD. It's just two "SSD"s on one PCB, tied to proprietary Intel software to actually make use of it. Thanks, but "no, thanks".
Is it that hard to replace the SSD controller's RAM buffer with 32GB of X-Point and have a cheap, ultra-fast 1TB SSD?
 
Is it that hard to replace the SSD controller's RAM buffer with 32GB of X-Point and have a cheap, ultra-fast 1TB SSD?
Well, in a word, yes. You'd need to replace the (small, simple, scalable, low-power, industry-standard, ubiquitous) DDR3/4 controller in the SSD controller with a (large-ish, complex, proprietary, relatively new and untested) 3D XPoint controller. The picture posted by @TheLostSwede above shows the size of such a controller (in a package, but it's unlikely the controller is much smaller than the package). The DRAM controller portion of an SSD controller is not that large, so you can pretty much just add the XPoint controller's area to the SSD controller's area for a ballpark estimate of the resulting chip size - and even that tells us nothing of the complexity of making such a thing, especially integrating the tiering/caching logic, which would require either a lot of processing power (faster, so hotter or bigger cores) or bespoke hardware (more die area).

So, while this is what we would all want (using RST means you're platform-locked and stuck with half the interface speed for your SSD), it'll be a while until we see it - and it won't be cheap when we do.
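As a back-of-envelope version of that area argument (every number below is a placeholder - no real die sizes are public for these parts):

Code:
# Placeholder arithmetic only; none of these die areas are published.
ssd_ctrl_mm2 = 60     # hypothetical NAND SSD controller die area
dram_if_mm2 = 5       # hypothetical DRAM-interface share of that die
xpoint_ctrl_mm2 = 40  # hypothetical standalone 3D XPoint controller

combined_mm2 = ssd_ctrl_mm2 - dram_if_mm2 + xpoint_ctrl_mm2
print(f"ballpark combined controller: ~{combined_mm2} mm^2 "
      f"vs ~{ssd_ctrl_mm2} mm^2 today, before any tiering logic")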
 
Well, in a word, yes. You'd need to replace the (small, simple, scalable, low-power, industry-standard, ubiquitous) DDR3/4 controller in the SSD controller with a (large-ish, complex, proprietary, relatively new and untested) 3D XPoint controller. The picture posted by @TheLostSwede above shows the size of such a controller (in a package, but it's unlikely the controller is much smaller than the package). The DRAM controller portion of an SSD controller is not that large, so you can pretty much just add the XPoint controller's area to the SSD controller's area for a ballpark estimate of the resulting chip size - and even that tells us nothing of the complexity of making such a thing, especially integrating the tiering/caching logic, which would require either a lot of processing power (faster, so hotter or bigger cores) or bespoke hardware (more die area).

So, while this is what we would all want (using RST means you're platform-locked and stuck with half the interface speed for your SSD), it'll be a while until we see it - and it won't be cheap when we do.
Seems like a pure Optane SSD is the more viable option, then (high price, but at least easier to build). Until those are available at reasonable prices (for a hardware enthusiast or professional), classic high-end SSDs will do.
 
So it is a fast but small SSD replacing the DRAM cache to help a large but handicapped SSD, all in one package?
 
So it is a fast but small SSD replacing the DRAM cache to help a large but handicapped SSD, all in one package?

Not quite - the SSD part still has its own DRAM cache. The Optane part ends up as a write cache, which means up to 32GB of data can be written really fast and then flushed to the slow QLC SSD at whatever pace the NAND can accept it. It might also work as a read cache, but it's not clear how much of the Optane memory would be taken up by reads, so the write buffer might end up being less than 32GB.
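In rough Python, the write path would work something like this - a minimal sketch, with made-up names and sizes, not Intel's actual RST caching algorithm:

Code:
# Minimal sketch of a fast write buffer in front of a slow QLC tier.
# Names and sizes are illustrative only, not Intel's RST implementation.

OPTANE_BYTES = 32 * 10**9  # 32GB fast tier acting as the write buffer

class TieredDrive:
    def __init__(self):
        self.optane = {}   # LBA -> data held in the fast tier
        self.qlc = {}      # LBA -> data on the slow QLC NAND
        self.buffered = 0  # bytes currently sitting in the fast tier

    def write(self, lba, data):
        # Writes land in the fast tier first; a full buffer forces a
        # flush, i.e. you drop to raw QLC write speed.
        if self.buffered + len(data) > OPTANE_BYTES:
            self.flush()
        self.optane[lba] = data
        self.buffered += len(data)

    def read(self, lba):
        # Reads check the fast tier first, then fall back to QLC.
        return self.optane.get(lba, self.qlc.get(lba))

    def flush(self):
        # Drain dirty data to QLC at whatever pace the NAND accepts.
        self.qlc.update(self.optane)
        self.optane.clear()
        self.buffered = 0

The interesting question is how the real drive splits that 32GB between write buffering and read caching, which Intel hasn't detailed.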
 
Seems like a pure Optane SSD is the more viable option, then (high price, but at least easier to build). Until those are available at reasonable prices (for a hardware enthusiast or professional), classic high-end SSDs will do.
I would say physical size is a serious concern here. High-performance SSDs are moving to M.2, primarily 2280, and so far Optane SSDs can only fit 128GB on that. Also, power and cooling.
 
Not quite - the SSD part still has its own DRAM cache. The Optane part ends up as a write cache, which means up to 32GB of data can be written really fast and then flushed to the slow QLC SSD at whatever pace the NAND can accept it. It might also work as a read cache, but it's not clear how much of the Optane memory would be taken up by reads, so the write buffer might end up being less than 32GB.

Lol, that's complicated - they put L1 and L2 caches in an SSD. :roll:
Looks like nobody wants QLC, so they had to come up with something to push some sales.
 
I would say physical size is a serious concern here. High-performance SSDs are moving to M.2, primarily 2280, and so far Optane SSDs can only fit 128GB on that. Also, power and cooling.

It's 118GB (well, yes, technically 128GB) - I know, it sounds like an arbitrary odd number. And you are spot on: density is the major reason for not getting bigger drives in that form factor.

What are the densities of IMFT's 3D XPoint dies anyway (or Micron's, nowadays)? I presume the Optane 800p uses the highest-density chips; it being single-sided, I presume those two are 64GB chips. Am I right, or is there some PoP chip packaging going on?
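For what it's worth, the odd number is just spare-area arithmetic - a quick sketch in Python, where the ~8% reserve is inferred from the two figures rather than anything Intel has published:

Code:
# Why a "128GB" Optane drive is sold as 118GB. The spare fraction is
# inferred from the two figures, not an official Intel spec.
raw_gb = 2 * 64            # two presumed 64GB 3D XPoint dies
usable_gb = 118
spare = 1 - usable_gb / raw_gb
print(f"{raw_gb} GB raw, {usable_gb} GB usable, ~{spare:.1%} reserved")
# -> 128 GB raw, 118 GB usable, ~7.8% reserved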
 
It's 118GB (well, yes, technically 128GB) - I know, it sounds like an arbitrary odd number. And you are spot on: density is the major reason for not getting bigger drives in that form factor.

What are the densities of IMFT's 3D XPoint dies anyway (or Micron's, nowadays)? I presume the Optane 800p uses the highest-density chips; it being single-sided, I presume those two are 64GB chips. Am I right, or is there some PoP chip packaging going on?
When it comes to M.2, Intel does have the 905P with 380GB (and its data center counterpart, the DC P4801X):
https://ark.intel.com/content/www/u...s-380gb-m-2-110mm-pcie-x4-20nm-3d-xpoint.html
This one has 7 XPoint chips on it, so it looks like they are only (mass-)producing 64GB dies at this time.
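The die math roughly checks out, assuming those 64GB dies (the ark page doesn't state the per-die density outright):

Code:
# Sanity check on the 905P 380GB M.2, assuming 64GB XPoint dies.
dies = 7
die_gb = 64                          # assumed density per die
raw_gb = dies * die_gb               # 448 GB raw
usable_gb = 380
print(f"{dies} x {die_gb} GB = {raw_gb} GB raw, "
      f"{usable_gb} GB usable (~{1 - usable_gb / raw_gb:.0%} spare)")
# -> 7 x 64 GB = 448 GB raw, 380 GB usable (~15% spare)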
 
The "funny" thing about this is that it clearly demonstrates how Intel can implement PCIe bifurcation support at will on consumer platforms, but that they're only interested in doing so when it helps them sell proprietary hardware.
 
They can't enable it on motherboards that don't have the lanes or lack such support - say, AMD-based or budget boards. It might be that this device is splitting the controllers INTERNALLY, transparent to the slot, rather than using the motherboard to split the lanes. But yeah, other solutions are available for Intel proprietary features, such as BIOS/EFI modding to enable VROC or bifurcation where applicable.
 
Looks like nobody wants QLC, so they had to come up with something to push some sales.

lol, do you really think designing something like this only takes months? lol
 
They can't enable it on motherboards that don't have the lanes or lack such support - say, AMD-based or budget boards. It might be that this device is splitting the controllers INTERNALLY, transparent to the slot, rather than using the motherboard to split the lanes. But yeah, other solutions are available for Intel proprietary features, such as BIOS/EFI modding to enable VROC or bifurcation where applicable.
So you're suggesting this has an onboard PLX chip for PCIe switching? Those are about $100 apiece, so that's not quite likely. "Transparent to the slot" is meaningless - the slot is just a receptacle with wires. What matters is how the chipset or CPU allocates the lanes, and the only way this product works is if each controller on the drive is given two lanes out of the four provided by the interface. Which, again, means that the CPU or chipset is bifurcating what would otherwise be a monolithic x4 interface, which no other device is allowed to do on a consumer Intel platform.

As for enabling it on budget boards - all Intel chipsets in the same generation are the same silicon, with parts disabled as you go down the range. Which means that if one part has this capability, they all do.

As for you for some reason mentioning AMD chipsets: while it's embarrassingly obvious that Intel has no power to enable or disable features in those, AMD already supports PCIe bifurcation on their consumer platforms (though from the CPU as they don't provide much PCIe through their chipsets). Go figure.
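To make "bifurcation" concrete, here's a toy Python model of what the root port has to do - the mode names and structure are hypothetical, and real settings live in platform firmware, not in software like this:

Code:
# Toy model of PCIe lane bifurcation on an M.2 socket (illustrative only).
BIFURCATION_MODES = {
    "x4":   [4],      # one monolithic x4 link: a normal NVMe SSD
    "x2x2": [2, 2],   # what the H10 needs: two independent x2 links
}

def enumerate_links(mode):
    # Each lane group trains as its own PCIe link, i.e. its own device.
    widths = BIFURCATION_MODES[mode]
    assert sum(widths) == 4, "an M.2 socket only carries four lanes"
    return [f"link {i}: x{w}" for i, w in enumerate(widths)]

print(enumerate_links("x4"))    # ['link 0: x4']
print(enumerate_links("x2x2"))  # ['link 0: x2', 'link 1: x2']

Without the x2x2 split from the chipset or CPU, the second controller on the board simply never enumerates.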
 
lol, do you really think designing something like this only takes months? lol
Emm... yes?
It is Intel.
Remember when Ryzen came out, how quickly they launched the 8000-series CPUs?
 
So you're suggesting this has an onboard PLX chip for PCIe switching? Those are about $100 apiece, so that's not quite likely. "Transparent to the slot" is meaningless - the slot is just a receptacle with wires. What matters is how the chipset or CPU allocates the lanes, and the only way this product works is if each controller on the drive is given two lanes out of the four provided by the interface. Which, again, means that the CPU or chipset is bifurcating what would otherwise be a monolithic x4 interface, which no other device is allowed to do on a consumer Intel platform.
Why? Even simple things like M.2 expansion boards do x16 > 4x4 quite easily.
PLX chips are switches; this thing doesn't need one.
 
lol, do you really think designing something like this only takes months? lol
Yes, it's not technical - there's no IF or magic, just two chips sharing the four lanes on one board.
Magic will happen when they combine the two on one chip and make it x8.
 
Why? Even simple things like M.2 expansion boards do x16 > 4x4 quite easily.
PLX chips are switches; this thing doesn't need one.
... have you read my previous posts? I know. It does need a motherboard/chipset/CPU with PCIe bifurcation support, though, just like those M.2 expansion boards you mention. Which was my entire point.
 
Which, again, means that the CPU or chipset is bifurcating what would otherwise be a monolithic x4 interface, which no other device is allowed to do on a consumer Intel platform.
As for you for some reason mentioning AMD chipsets: while it's embarrassingly obvious that Intel has no power to enable or disable features in those, AMD already supports PCIe bifurcation on their consumer platforms (though from the CPU as they don't provide much PCIe through their chipsets). Go figure.
Aren't M.2 slots on Intel motherboards almost exclusively from the chipset? Those definitely support bifurcation.
AMD also does not support bifurcation on consumer platforms - B350/B450 come to mind.
PLX chips are switches; this is doing 4 > 2x2 and does not need one. A clock buffer it might need, depending on how both these drives and the motherboards are built.

Interesting, though: how would these drives show up in a non-Intel board, or are they functional enough without special software like RST?
 