Friday, May 19th 2017

Intel to Introduce 3D XPoint DIMM Tech to the Market in 2018

Early in Intel's 3D XPoint teasers and announcements, the company planned to have this memory integrated not only as a system cache solution or SSD replacement, but also as a potential substitute for DRAM. The objective: to revolutionize the amount of memory a given system can carry, at a much lower price per GB, with a somewhat acceptable performance penalty. Intel describes the current DRAM implementation as too small, too expensive, and too volatile (read: data loss on power loss) to remain at the top of the memory food chain. This is where the 3D XPoint DIMM implementation can bear fruit, offering significantly higher capacities at much lower pricing while keeping attractive bandwidth and latency. DRAM will still be used for system-critical operations and booting, albeit in lower capacities, sitting side by side with these 3D XPoint DIMMs, which will take on the bulk of the work.
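To make the capacity-per-dollar argument concrete, here is a back-of-the-envelope sketch. The per-GB prices and capacities below are purely illustrative assumptions for the sake of the arithmetic, not figures from Intel:

```python
# Sketch of the tiered-memory tradeoff described above: a small DRAM tier
# for latency-critical work plus a large 3D XPoint tier for bulk data.
# All prices and capacities are illustrative assumptions, not Intel figures.

def system_memory_cost(dram_gb, xpoint_gb,
                       dram_price_per_gb=8.0,     # assumed $/GB for DDR4
                       xpoint_price_per_gb=2.0):  # assumed $/GB for XPoint
    """Total memory cost for a DRAM + XPoint configuration."""
    return dram_gb * dram_price_per_gb + xpoint_gb * xpoint_price_per_gb

# Conventional build: all DRAM.
dram_only = system_memory_cost(dram_gb=512, xpoint_gb=0)
# Tiered build: a sliver of DRAM, the bulk as XPoint DIMMs.
tiered = system_memory_cost(dram_gb=64, xpoint_gb=448)

print(f"512 GB DRAM-only:           ${dram_only:,.0f}")
print(f"64 GB DRAM + 448 GB XPoint: ${tiered:,.0f}")
```

Under these assumed prices, the tiered configuration delivers the same 512 GB total for roughly a third of the cost, which is the pitch Intel is making to server buyers.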

This kind of usage for Intel's 3D XPoint also delivers an interesting side effect: since this memory is persistent (meaning data isn't lost when the power is turned off), an interruption or loss of power won't erase the work in memory. At the same time, this means this kind of DRAM-substitute memory requires security precautions DRAM doesn't, since anyone with direct physical access could simply remove a stick and walk away with all the data inside. Even though a 2018 time to market seems a little too optimistic, considering all the changes this implementation would require from adopters, the technology is definitely promising enough to tempt users to make the jump.
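The persistence property can be mimicked today with a memory-mapped file, which is roughly the programming model persistent DIMMs expose to software. This is a sketch only; the scratch file (a hypothetical name) merely stands in for an XPoint module:

```python
import mmap
import os

# Sketch only: a memory-mapped file stands in for a persistent XPoint DIMM.
# Writes look like ordinary memory stores, yet the data survives the process.
PATH = "xpoint_sim.bin"  # hypothetical scratch file

def open_pmem(path, size=4096):
    """Map a file into memory, creating and sizing it first."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size)
    mem = mmap.mmap(fd, size)
    os.close(fd)  # the mapping keeps its own reference
    return mem

mem = open_pmem(PATH)
mem[0:5] = b"hello"  # an ordinary in-memory write...
mem.flush()          # ...made durable, akin to a persistent-memory commit
mem.close()

# A fresh mapping (it could be a new process, or a reboot with real
# persistent memory) still sees the data.
mem2 = open_pmem(PATH)
recovered = bytes(mem2[0:5])
mem2.close()
os.remove(PATH)
```

It also illustrates the security point above: anything written this way is trivially readable by whoever gets hold of the medium later, hence Intel's talk of encrypting the modules.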
Source: EXPreview

33 Comments on Intel to Introduce 3D XPoint DIMM Tech to the Market in 2018

#1
bug
So this is like positioning the first round of xpoint as a cache for HDDs. Now we get a cache for RAM. I have to say, I'm not a big fan of adding more stuff into my case.
Posted on Reply
#2
Ed_1
No, this is not an HDD cache system IMO, it's more like L1, L2 etc.
So instead of trying to have 16-64 GB of DDR4, you could have like 4-8 GB of DDR4 and 32/64 GB of XPoint. It might not be as fast as DDR4, but it has good low latency.

My guess is you'd need at least a workstation-type system, with matching loads; not so much for mainstream, though that would depend on cost moving forward for DDRx vs. XPoint.
Posted on Reply
#3
natr0n
Proprietary, Intel-only tech.
Posted on Reply
#4
Solaris17
Super Dainty Moderator
Got it, so back to the DDR2/DDR3 split boards (#unstable), only with proprietary Intel memory. This sounds like a winning recipe.
Posted on Reply
#5
erixx
Windmills are not going to power all of this! Hopefully the return is higher than the cost of all these trillizions and brazillions of watts needed to feed all this tech! I foresee a cable-link to the Sun, sucking in energy directly from its core...

And yet, the police want me to fit an ultra-dampening exhaust to my sportsbike!

I swear I am not smoking anything, it's just... the heat... all this heat that's making me mad...
Posted on Reply
#6
DRDNA
Nice tech for lab work. :toast: Imma bet ya AMD has some counter tech too... anyone know what it is? Anything that helps cut the waiting game in the lab will be utilized in the lab, and of that I have no doubt.
Posted on Reply
#7
largon
XPoint RAM?
Nice!
All things DRAM have been awfully boring for 10 years already.
Posted on Reply
#8
ERazer
Watch Rambus sue Intel :laugh:
Posted on Reply
#9
Camm
I'm going to come out with this as a big 'maybe'.

I can't see many DB admins wanting to go backwards in their memory performance. I would much rather see this as something joined to the fabric as storage and cache rather than trying to replace DRAM.
Posted on Reply
#10
toilet pepper
Raevenlord: The objective: to revolutionize the amount of DRAM memory a given system can carry, at a much lower price per GB, with a somewhat acceptable performance penalty.

lol

Raevenlord: This is where the 3D Xpoint DIMM implementation can bear fruits, by offering significantly higher amounts of storage at much lower pricing, while keeping attractive bandwidth and latency performance

Source: EXPreview
I guess this must be some sort of an SSD on a DIMM slot. Seeing the words "Intel" and "lower pricing" made me reread the article several times to check if my eyes were fooling me.
Posted on Reply
#11
notb
bug: So this is like positioning the first round of xpoint as a cache for HDDs. Now we get a cache for RAM. I have to say, I'm not a big fan of adding more stuff into my case.
The opposite. A cache is always used to speed up memory access.
So if you insist on using that word, then an XPoint DIMM is still a cache for the HDD - just in a more obvious place (as RAM is basically the same). And of course RAM will cache the XPoint module if needed.
But the main reason they make this is simply to offer a big jump in "RAM" capacity (at the cost of some performance). RAM size is a major bottleneck in enterprise solutions today.

The whole Optane thing was designed with DIMMs in mind from the start.
The other products we've seen - M.2 caches and more traditional SSDs - are most likely just side products benefiting from Optane's excellent performance (though it would have stood out a lot more if released 2-3 years back).
Intel has very good tech in their hands - they're using it wherever they can. :)
Posted on Reply
#12
bug
notb: The opposite. Cache is always used to speed up memory access.
So if you insist to use this word, then XPoint DIMM is still a cache for HDD - just in a more obvious place (as RAM is basically the same). And of course RAM will cache the XPoint module if needed.
But the main reason they make this is simply to offer a big jump in "RAM" capacity (at a cost of some performance). RAM size is a major bottleneck in enterprise solutions today.

The whole Optane thing was designed with DIMMs in mind from the start.
The other products we've seen: M.2 caches and more traditional SSDs are most likely just side products benefiting from excellent performance of Optane (albeit it would stand out a lot more if released 2-3 years back).
Intel has a very good tech in their hands - they're using it wherever they can. :)
When I said "cache" I was thinking about something that goes in between, not about actual performance.
Posted on Reply
#13
notb
bug: When I said "cache" I was thinking about something that goes in between, not about actual performance.
Same here. But then what did you mean by "cache for RAM"? Because XPoint in a DIMM is still a cache for HDD (it's between the HDD and CPU).
XPoint DIMMs are not designed to cache RAM or backup RAM or whatever. This is another type of memory. You could use just XPoint and the PC would still work (just slower).
If a PC houses both DDR and XPoint, it simply can choose how to use them based on properties. This is a move in the right direction.

Let's not forget RAM was invented as a memory for holding currently needed data. Today it has 2 functions: traditional RAM and fast access storage. DDR is not optimal for the latter.
Posted on Reply
#14
Blueberries
I think the goal here is to allow for affordable FSB drive caching. XPoint isn't volatile, so you could theoretically have your entire operating system and then some (professional apps, CAD software, Photoshop, etc.) stored in ultra-fast XPoint DIMMs twice as fast as an NVMe drive, assuming XPoint drivers support such a configuration.
Posted on Reply
#15
FordGT90Concept
"I go fast!1!11!1!"
That diagram is mighty interesting.

XPoint DIMM:
~6 GB/s
~250 ns latency

PCIe 3.0 x4 NVMe SSD:
~3.2 GB/s
<10 µs latency

The only things they don't answer are cost, density, and endurance.

I could see this as the go-to memory standard for RAID cards (no need for write-back anymore because the data is safe on the card should there be an interruption). A shame it's over half a year away yet.

Putting 250ns latency into perspective:
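For what it's worth, the scaling is easy to reconstruct. The device latencies below are rough, commonly cited ballpark figures (assumptions, not benchmarks); normalizing a DRAM access to one "human" second makes the gaps tangible:

```python
# Rough ballpark latencies (assumptions, not measurements).
latencies_ns = {
    "DRAM access":   100,
    "XPoint DIMM":   250,
    "NVMe SSD read": 100_000,     # ~100 us
    "HDD seek":      10_000_000,  # ~10 ms
}

# Rescale so that one DRAM access takes one second.
base = latencies_ns["DRAM access"]
for name, ns in latencies_ns.items():
    print(f"{name:14} {ns:>12,} ns  ->  {ns / base:>9,.1f} s (scaled)")
```

On that scale, an XPoint DIMM access is 2.5 seconds, an NVMe read is about 17 minutes, and an HDD seek is over a day: XPoint sits far closer to DRAM than to any drive.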
Posted on Reply
#16
sergionography
Unless it replaces both the hard disk and RAM, it's kinda pointless in my opinion. As it is, the function of RAM is already somewhat of a cache between the CPU and the data, so adding yet another layer of cache is just way too many bus stops, in my not very educated opinion lol
Posted on Reply
#17
Captain_Tom
There was a time when I was completely blown away by the prospects of 3D XPoint. A storage upgrade bigger than the one between HDDs and SSDs. I was honestly prepared to throw $500 away if it meant I could have 500GB of something 1000x faster than my current SSD-RAID setup.

But then every presentation Intel cut the expected speed in half, and now it's out - and SLOWER than a good PCIe SSD. One of the biggest tech disappointments I have ever seen.

Considering they can't even beat the fastest conventional SSDs, I don't expect the performance penalty of using it as DRAM to be anything close to acceptable...
Posted on Reply
#18
FordGT90Concept
"I go fast!1!11!1!"
Uh, how many SSDs can move at 3.2 GB/s? Even those I've seen that do reach high read speeds have write speeds that are sorely lacking.

Until someone actually benchmarks these things, we won't really know. But from what Intel has said so far, it looks faster in every way than current SSD tech.
Posted on Reply
#19
Captain_Tom
FordGT90Concept: Uh, how many SSDs can move at 3.2 GB/s? Even those I've seen that do reach high into the read speeds, write speeds are sorely lacking.

Until someone actually benchmarks these things, we won't really know. But from what Intel has said so far, it looks faster in every way than current SSD tech.
Uhhh here:

www.newegg.com/Product/Product.aspx?Item=9SIA12K54C9773&cm_re=samsung_SSD_PCIE-_-20-147-596-_-Product


Took 5 seconds to find. This isn't 2012 anymore, these things are becoming quite common.


And btw, there shouldn't even be a competition. Intel advertised 1000x the speed. Instead it's around the same, for much more money, and much less capacity options. What a revolutionary technology.
Posted on Reply
#20
FordGT90Concept
"I go fast!1!11!1!"
Well yeah, reality struck Intel, but don't count the chickens before the eggs hatch.

Intel could beat its 1.47 GB/$, its endurance, or its latency. Considering they are talking about packaging it in DIMMs, I'd guess the endurance is off the charts compared to SSDs.
Posted on Reply
#21
notb
sergionography: Unless it replaces both the hard disk and ram then its kinda pointless in my opinion. As it is the function of ram is already somewhat of a cache between the cpu and the data, so to add yet another layer of cache is just way too many bus stops in my not very educated opinion lol
This is not a cache similar to RAM, which constantly overwrites data to hold what's needed by the CPU.
XPoint (even in a DIMM) is normal storage. It holds everything you need and is non-volatile. So in principle it can replace drives, as they wouldn't be needed at all (one DIMM could replace one M.2 - think how that revolutionizes consumer mobo designs).

However, in the enterprise solution presented by Intel (SAP HANA) it doesn't replace HDD/SSD storage, but it has 2 advantages over the RAM that SAP normally uses:
1) it's a few times more dense, with a lower price per GB (cost-wise, fewer DIMMs also translate into fewer server nodes - big money here!).
2) it's persistent, so it drastically reduces system setup time.

Just to give you an example of why setup time matters: when a DB crashes or needs a reset (maintenance etc.), you have to repopulate RAM before it'll work again.
A typical enterprise-grade HDD can be read at around 100 MB/s, and at that speed a 1 TB database will take >2.5 h to set up.
Of course you can increase the read speed with RAID or SSDs, but the database will most likely also be way bigger than 1 TB...
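That arithmetic is easy to check; a quick sketch with a few illustrative round-number read speeds (assumed figures, not benchmarks):

```python
def warmup_hours(db_tb, read_mb_per_s):
    """Hours needed to stream db_tb terabytes of data back into RAM."""
    seconds = db_tb * 1_000_000 / read_mb_per_s  # 1 TB = 1,000,000 MB
    return seconds / 3600

# 1 TB database; 100 MB/s is the enterprise-HDD case (the >2.5 h figure),
# the other speeds are assumed SATA-SSD and NVMe ballparks.
for speed in (100, 500, 3200):
    print(f"{speed:>5} MB/s -> {warmup_hours(1, speed):.2f} h")
```

Even at NVMe speeds the warm-up is minutes, not instant, while persistent DIMMs skip the repopulation step entirely.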

This is the reason why database maintenance usually means the server is down for hours... and a server crash during the day means a long lunch break for analysts. :D

A persistent yet fast memory is the holy grail. I don't know whether XPoint is the technology that will dominate in coming years, but it's almost obvious that the basic idea is what we've been waiting for.
Captain_Tom: But then every presentation Intel cut the expected speed in half, and now it's out - and SLOWER than a good PCIE SSD. One of the biggest tech disappointments I have ever seen.
Considering they can't even beat the fastest conventional SSD's, I don't expect the performance penalty of using it as DRAM as anything close to acceptable...
You're looking at the wrong figures. Here are latency and 4K random (including preconditioning), respectively:
www.tomshardware.co.uk/intel-optane-3d-xpoint-p4800x,review-33867-4.html
www.tomshardware.co.uk/intel-optane-3d-xpoint-p4800x,review-33867-5.html
XPoint is years ahead of the SSD development curve.
NAND will never even get close to these results, so the simple fact is: other SSD companies still have to develop/launch their future SSD tech. Intel and Micron already have it.
Posted on Reply
#22
bug
notb: Same here. But then what did you mean by "cache for RAM"? Because XPoint in a DIMM is still a cache for HDD (it's between the HDD and CPU).
XPoint DIMMs are not designed to cache RAM or backup RAM or whatever. This is another type of memory. You could use just XPoint and the PC would still work (just slower).
If a PC houses both DDR and XPoint, it simply can choose how to use them based on properties. This is a move in the right direction.

Let's not forget RAM was invented as a memory for holding currently needed data. Today it has 2 functions: traditional RAM and fast access storage. DDR is not optimal for the latter.
Something that does not replace RAM, but instead inserts itself between RAM and the SSD/HDD.
The way I see it, in this form it's more geared towards alleviating RAM usage, hence my calling it a "cache for RAM". I suppose you can look at it the other way and say it's a cache for permanent storage, too, and you wouldn't be wrong. But it's still one more component, and as always when adding complexity, there's something to be lost.
Posted on Reply
#23
FordGT90Concept
"I go fast!1!11!1!"
RAM is cache, but its greatest weakness is its volatility. XPoint's greatest strength is that it is non-volatile. XPoint is ideal as a write cache on storage solutions. If there's a loss of power or catastrophic failure, the data that was meant to be written by a RAID would not be lost, so the write can still complete successfully when the issue is resolved.

I can't see it being used as an intermediary between RAM and long-term storage because, unless the price per GB is far less than RAM's, RAM is the better option: performance matters more than non-volatility there. That said, if you can get 1 TB of XPoint for the price of 128 GB of DDR4, suddenly XPoint looks really attractive.
Posted on Reply
#24
bug
FordGT90Concept: RAM is cache but it's greatest weakness is its volatility. XPoint's greatest strength is that it is non-volatile. XPoint is ideal for a write cache on storage solutions. If there's a loss of power or catastrophic failure, the data that was meant to be written by a RAID would not be lost so the write can still successfully complete when the issue is resolved.

I can't see it being used as an intermediary between RAM and long-term storage because, unless the price per GB is far less than RAM, RAM is the better option because the performance is more important than non-volatility. That said, if you can get 1 TB of XPoint for the price of 128 GB DDR4, suddenly XPoint looks really attractive.
Yeah, well, that's the nice part of the story, and I agree with it. Cost effective or not, that's how Intel will market it.

But as I said, I'm worried about other things. It's one more component added to the system, and it adds complexity. Will it eat PCIe lanes? Will it need a modification of the current RAM channels? Because it will definitely need additional sockets. As Intel has already stated, it will need Opal encryption; otherwise someone could lift your XPoint stick and read all sorts of things from it. Is Opal encryption viable in conjunction with RAM (i.e. does it not completely kill an already underwhelming performance)?

I'm not against XPoint, I'm willing to forgive things in a first iteration. But at the same time I can't not notice the first iteration is a far cry from what was initially promised. Thus I'll take any further marketing slides with a pinch of salt and tend to look at things not said a little more.
Posted on Reply
#25
notb
bug: But as I said, I'm worried about other things. It's one more component added to the system and it adds complexity.
I don't understand this argument, to be honest. What's so terrifying about a DIMM module? :D
bug: Will it eat PCIe lanes?
No. It's treated just like any other DIMM module (usually RAM). DIMM sockets are connected directly to the CPU's memory controller and don't affect PCIe lanes at all.
bug: Will it need a modification of the current RAM channels? Cause it will definitely need additional sockets.
Again, why? This is just a module that you'd stick where RAM usually goes. Nothing has to be added.
Posted on Reply