Wednesday, April 10th 2019

Intel Packs 3D X-Point and QLC NAND Flash Into a Single SSD: Optane H10

Intel today revealed details about Intel Optane memory H10 with solid-state storage - an innovative device that combines the superior responsiveness of Intel Optane technology with the storage capacity of Intel Quad Level Cell (QLC) 3D NAND technology in a single, space-saving M.2 form factor. "Intel Optane memory H10 with solid-state storage features the unique combination of Intel Optane technology and Intel QLC 3D NAND - exemplifying our disruptive approach to memory and storage that unleashes the full power of Intel-connected platforms in a way no one else can provide," said Rob Crooke, Intel senior vice president and general manager of the Non-Volatile Memory Solutions Group.

Combining Intel Optane technology with Intel QLC 3D NAND technology on a single M.2 module enables Intel Optane memory expansion into thin and light notebooks and certain space-constrained desktop form factors - such as all-in-one PCs and mini PCs. The new product also offers a level of performance that traditional Triple Level Cell (TLC) 3D NAND SSDs cannot match today, and eliminates the need for a secondary storage device.
Intel's leadership in computing infrastructure and design allows the company to utilize the value of the platform in its entirety (software, chipset, processor, memory and storage) and deliver that value to the customer. The combination of high-speed acceleration and large SSD storage capacity on a single drive will benefit everyday computer users, whether they use their systems to create, game or work. Compared to a standalone TLC 3D NAND SSD system, Intel Optane memory H10 with solid-state storage enables both faster access to frequently used applications and files and better responsiveness with background activity.

8th Generation Intel Core U-series mobile platforms featuring Intel Optane memory H10 with solid-state storage will arrive through major OEMs starting this quarter. With these platforms, everyday users will be able to:
  • Launch documents up to 2 times faster while multitasking.
  • Launch games 60% faster while multitasking.
  • Open media files up to 90% faster while multitasking.
SSDs with Intel Optane memory are faster than NAND SSDs in the majority of common client use cases. Intel-based platforms with Intel Optane memory adapt to everyday computing activities, optimizing performance for the user's most common tasks and frequently used applications. With offerings of up to 1TB of total storage, Intel Optane memory H10 with solid-state storage will have the capacity users need for their apps and files today - and well into the future.

The Intel Optane memory H10 with solid-state storage will come in the following capacities: 16GB (Intel Optane memory) + 256GB (storage), 32GB (Intel Optane memory) + 512GB (storage), and 32GB (Intel Optane memory) + 1TB (storage).
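For a sense of proportion, the Optane cache in each configuration is a small slice of the total capacity. A quick arithmetic sketch, using the figures from the capacity list above:

```python
# Cache-to-storage ratio for each announced H10 configuration.
configs = [(16, 256), (32, 512), (32, 1024)]  # (Optane GB, QLC NAND GB)

for cache_gb, nand_gb in configs:
    pct = 100 * cache_gb / nand_gb
    print(f"{cache_gb}GB Optane + {nand_gb}GB QLC: cache is {pct:.2f}% of storage")
# The two smaller models carry a 6.25% cache ratio; the 1TB model halves it to 3.125%.
```

Note that the Optane portion stays fixed at 32GB on the two larger models, so the bigger the QLC side, the smaller the fraction of data the cache can hold.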

For more information, visit this page.

31 Comments on Intel Packs 3D X-Point and QLC NAND Flash Into a Single SSD: Optane H10

#1
Valantar
From the looks of it this is two SSDs - a PCIe x2 NVMe Optane Memory module and a PCIe x2 NAND SSD (with DRAM cache, judging by the render!) - on a single board. That's a shame, really, as a solution like this with everything integrated into a single controller with integrated caching algorithms and no DRAM is pretty much the killer app for Optane. The capacities seem just right, so all that's missing is a better controller, and this could be amazing.
#2
cucker tarlson
Makes a lot more sense than the initial 16-64GB Optane launch. It took a full slot, and even the 64GB was barely enough for system + programs. This time you've got a 32GB Optane stick and an SSD in one slot, which is very nice for the OS. The QLC keeps the cost down, and I bet a 32GB Optane + 512GB QLC configuration is going to be faster than a 970 Evo in regular use.
#3
er557
Enough with the QLC anti-progress already. It sucks in longevity, write performance, etc., and these new products will still cost more and be slower than, say, the ADATA XPG SX8200 Pro, no matter what controller wizardry they try with this.
#4
TheLostSwede
Valantar said:
From the looks of it this is two SSDs - a PCIe x2 NVMe Optane Memory module and a PCIe x2 NAND SSD (with DRAM cache, judging by the render!) - on a single board. That's a shame, really, as a solution like this with everything integrated into a single controller with integrated caching algorithms and no DRAM is pretty much the killer app for Optane. The capacities seem just right, so all that's missing is a better controller, and this could be amazing.
You're indeed correct.


Source: https://www.anandtech.com/show/14196/intel-releases-optane-memory-h10-specifications
#5
holyprof
Disappointed... I thought this was an Optane-enhanced SSD. It's just two "SSD"s on one PCB, tied to proprietary Intel software to actually make use of it. Thanks, but "no, thanks".
Is it that hard to replace SSD controller RAM buffer with 32GB x-point and have cheap ultra-fast 1TB SSD?
#6
Valantar
holyprof said:
Is it that hard to replace SSD controller RAM buffer with 32GB x-point and have cheap ultra-fast 1TB SSD?
Well, in a word, yes. You'd need to replace the (small, simple, scalable, low-power, industry-standard, ubiquitous) DDR3/4 controller in the SSD controller with a (large-ish, complex, proprietary, relatively new and untested) 3D XPoint controller. The picture posted by @TheLostSwede above shows the size of such a controller (in a package, but it's unlikely the controller is much smaller than the package). The DRAM controller portion of the SSD controller is not that large, so you can pretty much add the XPoint controller's area to the SSD controller's area for a ballpark estimate of the resulting chip size - even if that tells us nothing of the complexity of making such a thing, especially integrating the tiering/caching logic, which would require either a lot of processing power (= faster, so hotter or bigger cores) or bespoke hardware (= more die area).

So, while this is what we would all want (as using RST means you're platform-locked and stuck with half the interface speed for your SSD), it'll be a while until we see it - and it won't be cheap when we do.
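The "half the interface speed" point is easy to put in numbers. A back-of-the-envelope sketch, assuming PCIe 3.0 links (8 GT/s per lane, 128b/130b encoding) and the x2 + x2 split discussed in this thread:

```python
# Theoretical one-direction PCIe 3.0 bandwidth for a given link width.
GT_PER_S = 8.0          # PCIe 3.0 transfer rate per lane
ENCODING = 128 / 130    # usable fraction after 128b/130b line encoding

def pcie3_bandwidth_gbps(lanes: int) -> float:
    """GB/s for a PCIe 3.0 link of the given width (divide by 8: bits -> bytes)."""
    return lanes * GT_PER_S * ENCODING / 8

print(f"x4 link: {pcie3_bandwidth_gbps(4):.2f} GB/s")  # ~3.94 GB/s (a normal NVMe SSD)
print(f"x2 link: {pcie3_bandwidth_gbps(2):.2f} GB/s")  # ~1.97 GB/s (each H10 controller)
```

In other words, each controller on the H10 tops out at roughly half the theoretical ceiling a monolithic x4 NVMe drive would get, before protocol overhead.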
#7
holyprof
Valantar said:
Well, in a word, yes. You'd need to replace the (small, simple, scalable, low-power, industry-standard, ubiquitous) DDR3/4 controller in the SSD controller with a (large-ish, complex, proprietary, relatively new and untested) 3D XPoint controller. The picture posted by @TheLostSwede above shows the size of such a controller (in a package, but it's unlikely the controller is much smaller than the package). The DRAM controller portion of the SSD controller is not that large, so you can pretty much add the XPoint controller's area to the SSD controller's area for a ballpark estimate of the resulting chip size - even if that tells us nothing of the complexity of making such a thing, especially integrating the tiering/caching logic, which would require either a lot of processing power (= faster, so hotter or bigger cores) or bespoke hardware (= more die area).

So, while this is what we would all want (as using RST means you're platform-locked and stuck with half the interface speed for your SSD), it'll be a while until we see it - and it won't be cheap when we do.
Seems like a pure Optane SSD is a more viable option then (high price, but at least easier to build). Until they are available at reasonable (for a hardware enthusiast or professional) prices, classic high-end SSDs will do.
#8
CheapMeat
Damn, this would have been perfect if it acted like a single drive with smart caching rather than two drives on one PCB. It would be much more useful.
#9
Crackong
So it is a fast but small SSD replacing the DRAM cache to help a large but handicapped SSD, all in one package ?
#10
TheLostSwede
Crackong said:
So it is a fast but small SSD replacing the DRAM cache to help a large but handicapped SSD, all in one package ?
Not quite, the SSD part still has a DRAM cache. The Optane part ends up as a write cache, which means that up to 32GB of data can be written really fast and that is then flushed to the slow QLC SSD at whatever pace the SSD can accept the data. It might also work as a read cache, but it's not clear how much of the Optane memory would be taken up as a read cache, so it might end up being less than 32GB as a write buffer. The QLC SSD still has DRAM though.
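That write-path behaviour can be sketched as a toy write-back buffer. This is purely illustrative - the real policy lives inside Intel's RST software and is not public, and the class, method names, and sizes below are invented for the example:

```python
from collections import deque

class BufferedDrive:
    """Toy model: a fast buffer absorbs writes; a background drain moves them to slow media."""

    def __init__(self, buffer_gb: int = 32):
        self.capacity = buffer_gb   # size of the fast (Optane-like) write buffer
        self.pending = deque()      # writes waiting to be flushed
        self.used = 0
        self.backing = []           # the slow (QLC-like) store

    def write(self, size_gb: int) -> str:
        """Fast path if the buffer has room; otherwise fall back to slow-media speed."""
        if self.used + size_gb <= self.capacity:
            self.pending.append(size_gb)
            self.used += size_gb
            return "fast"
        return "slow"

    def flush_one(self) -> None:
        """Background drain of one buffered write into the backing store."""
        if self.pending:
            size = self.pending.popleft()
            self.used -= size
            self.backing.append(size)

drive = BufferedDrive(buffer_gb=32)
print(drive.write(20))   # "fast": fits in the 32GB buffer
print(drive.write(20))   # "slow": buffer would overflow, the write lands at QLC speed
drive.flush_one()        # drain frees 20GB of buffer
print(drive.write(20))   # "fast" again
```

The sketch captures the burst/drain dynamic: writes feel fast until the buffer fills, after which sustained writes degrade to the speed of the QLC media.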
#11
londiste
holyprof said:
Seems like a pure Optane SSD is a more viable option then (high price, but at least easier to build). Until they are available at reasonable (for a hardware enthusiast or professional) prices, classic high-end SSDs will do.
I would say physical size is a serious concern here. High-performance SSDs are moving to M.2, primarily 2280, and so far Optane SSDs can only fit 128GB in that form factor. Also, power and cooling.
#12
Crackong
TheLostSwede said:
Not quite, the SSD part still has a DRAM cache. The Optane part ends up as a write cache, which means that up to 32GB of data can be written really fast and that is then flushed to the slow QLC SSD at whatever pace the SSD can accept the data. It might also work as a read cache, but it's not clear how much of the Optane memory would be taken up as a read cache, so it might end up being less than 32GB as a write buffer. The QLC SSD still has DRAM though.
Lol, that's complicated - they put L1 and L2 caches in an SSD. :roll:
Looks like nobody wants QLC so they had to come up with something to push some sales.
#13
jabbadap
londiste said:
I would say physical size is a serious concern here. High-performance SSDs are moving to M.2, primarily 2280, and so far Optane SSDs can only fit 128GB in that form factor. Also, power and cooling.
It's 118GB (well, technically 128GB) - I know, it sounds like an arbitrary odd number. And you are spot on, density is the major reason for not getting bigger drives in that form factor.

What are the densities of IMFT's 3D XPoint dies anyway (or Micron's nowadays)? I presume the Optane 800p uses the highest-density chips; it being single-sided, I presume those two are 64GB chips. Am I right, or is there some PoP chip packaging going on?
#14
londiste
jabbadap said:
It's 118GB (well, technically 128GB) - I know, it sounds like an arbitrary odd number. And you are spot on, density is the major reason for not getting bigger drives in that form factor.

What are the densities of IMFT's 3D XPoint dies anyway (or Micron's nowadays)? I presume the Optane 800p uses the highest-density chips; it being single-sided, I presume those two are 64GB chips. Am I right, or is there some PoP chip packaging going on?
When it comes to M.2, Intel does have the 905P with 380GB (and its data center counterpart, the DC P4801X):
https://ark.intel.com/content/www/us/en/ark/products/148607/intel-optane-ssd-905p-series-380gb-m-2-110mm-pcie-x4-20nm-3d-xpoint.html
That one has seven XPoint chips on it, so it looks like they are only (mass) producing 64GB dies at this time.
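Taking the assumed 64GB-per-die figure at face value, the implied spare area on that 380GB drive works out as follows (quick sanity-check arithmetic, not an official spec):

```python
dies = 7            # XPoint packages visible on the 905P M.2 board
die_gb = 64         # assumed per-die capacity, per the discussion above
raw_gb = dies * die_gb        # 448 GB raw
usable_gb = 380               # advertised capacity
spare = 1 - usable_gb / raw_gb

print(f"raw {raw_gb} GB, usable {usable_gb} GB, spare area {spare:.1%}")  # ~15.2% spare
```

A roughly 15% gap between raw and advertised capacity is plausible overprovisioning, which is consistent with the seven-die/64GB reading rather than contradicting it.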
#16
Valantar
The "funny" thing about this is that it clearly demonstrates how Intel can implement PCIe bifurcation support at will on consumer platforms, but that they're only interested in doing so when it helps them sell proprietary hardware.
#17
er557
They can't enable it on motherboards that don't have the lanes or lack such support, say AMD-based or budget boards. It might be that this device splits the controllers internally, transparent to the slot, rather than using the motherboard to split the lanes. But yeah, other solutions are available for Intel proprietary features, such as BIOS modding or EFI modding, to enable VROC or bifurcation where applicable.
#18
Scrizz
Crackong said:
Looks like nobody wants QLC so they had to come up with something to push some sales.
lol, do you really think designing something like this only takes months? lol
#19
Valantar
er557 said:
They can't enable it on motherboards that don't have the lanes or lack such support, say AMD-based or budget boards. It might be that this device splits the controllers internally, transparent to the slot, rather than using the motherboard to split the lanes. But yeah, other solutions are available for Intel proprietary features, such as BIOS modding or EFI modding, to enable VROC or bifurcation where applicable.
So you're suggesting this has an onboard PLX chip for PCIe switching? Those are about $100 apiece, so that's not very likely. "Transparent to the slot" is meaningless - the slot is just a receptacle with wires. What matters is how the chipset or CPU allocates the lanes, and the only way this product works is if each controller on the drive is given two lanes out of the four provided by the interface. Which, again, means that the CPU or chipset is bifurcating what would otherwise be a monolithic x4 interface - something no other device is allowed to do on a consumer Intel platform.

As for enabling it on budget boards - all Intel chipsets in the same generation are the same silicon, with parts disabled as you go down the range. Which means that if one part has this capability, they all do.

As for you for some reason mentioning AMD chipsets: while it's embarrassingly obvious that Intel has no power to enable or disable features in those, AMD already supports PCIe bifurcation on their consumer platforms (though from the CPU as they don't provide much PCIe through their chipsets). Go figure.
#20
Crackong
Scrizz said:
lol, do you really think designing something like this only takes months? lol
Emm... yes?
It is Intel.
Remember when Ryzen came out, how quickly they launched the 8000-series CPUs?
#21
londiste
Valantar said:
So you're suggesting this has an onboard PLX chip for PCIe switching? Those are about $100 apiece, so that's not very likely. "Transparent to the slot" is meaningless - the slot is just a receptacle with wires. What matters is how the chipset or CPU allocates the lanes, and the only way this product works is if each controller on the drive is given two lanes out of the four provided by the interface. Which, again, means that the CPU or chipset is bifurcating what would otherwise be a monolithic x4 interface - something no other device is allowed to do on a consumer Intel platform.
Why? Even simple things like M.2 expansion boards do x16 > 4x x4 quite easily.
PLX chips are switches; this thing doesn't need one.
#22
Caring1
Scrizz said:
lol, do you really think designing something like this only takes months? lol
Yes, it's not that technical - there's no IF or magic, just two chips sharing the four lanes on one board.
Magic will happen when they combine the two on one chip and make it x8.
#23
Valantar
londiste said:
Why? Even simple things like M.2 expansion boards do x16 > 4x x4 quite easily.
PLX chips are switches; this thing doesn't need one.
... have you read my previous posts? I know. It does need a motherboard/chipset/CPU with PCIe bifurcation support, though, just like those M.2 expansion boards you mention. Which was my entire point.
#24
londiste
Valantar said:
Which, again, means that the CPU or chipset is bifurcating what would otherwise be a monolithic x4 interface, which no other device is allowed to do on a consumer Intel platform.
As for you for some reason mentioning AMD chipsets: while it's embarrassingly obvious that Intel has no power to enable or disable features in those, AMD already supports PCIe bifurcation on their consumer platforms (though from the CPU as they don't provide much PCIe through their chipsets). Go figure.
Aren't M.2 slots on Intel motherboards almost exclusively wired to the chipset? Those definitely support bifurcation.
AMD also does not support bifurcation on consumer platforms - B350/B450 come to mind.
PLX chips are switches; this is doing 4 > 2x2 and does not need one. A clock buffer it might need, depending on how both these drives and motherboards are built.

Interesting though: how would these drives show up in a non-Intel board, or are they functional enough without special software like RST?
#25
R0H1T
Scrizz said:
lol, do you really think designing something like this only takes months? lol
This is pretty trivial; you don't need anything special here. Intel already has Optane, the controller, QLC and the caching software. So pray tell, what more did they need to get this working? A few months is about right - how many months, take a guess.