Wednesday, September 24th 2014

Samsung Starts Producing 3.2-Terabyte NVMe SSD Based on 3D V-NAND

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today announced that it has started mass producing 3.2-terabyte (TB) NVMe PCIe solid state drives (SSDs) based on its 3D V-NAND (Vertical NAND) flash memory technology, for use in high-end enterprise server systems. The new NVMe PCIe SSD, SM1715, utilizes Samsung's proprietary 3D V-NAND in an HHHL (half-height, half-length) card-type form factor, to offer 3.2TB of storage capacity -- doubling Samsung's previous highest NVMe SSD density of 1.6TB.

"Beginning with mass production of this new V-NAND-based NVMe SSD, which delivers the highest level of performance and density available today, we expect to greatly expand the high-density SSD market," said Jeeho Baek, Vice President, Memory Marketing, Samsung Electronics. "Samsung plans to actively introduce V-NAND-based SSDs with even higher performance, density and reliability in the future, to keep its global customers ahead of their competition."

The SM1715 is an upgraded version of Samsung's XS1715 in terms of drive performance and reliability. The 2.5-inch XS1715 received a 2014 Flash Memory Summit Best of Show Award earlier this year as one of the most innovative flash memory technologies.

The newly introduced 3.2TB NVMe SSD provides a sequential read speed of 3,000 megabytes per second (MB/s) and writes sequentially at up to 2,200MB/s. It also randomly reads at up to 750,000 IOPS (input output operations per second) and writes randomly at up to 130,000 IOPS.

In addition, the 3.2TB SM1715 features outstanding reliability with 10 DWPDs (drive writes per day) for five years. This provides a level of reliability that enterprise server manufacturers have been requesting for their high-end storage solutions.
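For context, the quoted figures can be sanity-checked with some back-of-envelope arithmetic. The 4KB I/O size below is an assumption (a common unit for IOPS ratings), not something the press release states:

```python
# Back-of-envelope figures derived from the numbers quoted above.

random_read_iops = 750_000
io_size_kb = 4  # assumed transfer size per operation, not stated by Samsung
random_read_mb_s = random_read_iops * io_size_kb / 1000
print(f"Random read throughput at 4KB: ~{random_read_mb_s:.0f} MB/s")

capacity_tb = 3.2
dwpd = 10      # drive writes per day, as rated
years = 5
endurance_tb = capacity_tb * dwpd * 365 * years
print(f"Rated endurance: ~{endurance_tb:,.0f} TB written ({endurance_tb / 1000:.1f} PB)")
```

Under that 4KB assumption, the random-read rating works out to roughly the same throughput as the 3,000MB/s sequential figure, and the 10 DWPD rating amounts to about 58 petabytes written over the five-year warranty.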

The SM1715 comes in 1.6TB and 3.2TB versions, adding more NVMe options to a 2.5-inch NVMe XS1715 lineup that includes 800GB and 1.6TB versions.

Since 2013, Samsung has introduced a range of industry-first 3D V-NAND-based SATA SSDs for PCs and data centers. Now, it is rolling out the SM1715 to accelerate the transition to the NVMe interface in the premium server sector, while expanding its 3D V-NAND SSD business to offer drives with more than 3TB of storage.

18 Comments on Samsung Starts Producing 3.2-Terabyte NVMe SSD Based on 3D V-NAND

#2
RejZoR
Just give me a reasonably priced 2TB SSD for SATA and I'll be happy...
Posted on Reply
#3
jmcslob
If I had the money to throw I would....I so would....
Posted on Reply
#4
shhnedo
RejZoR said:
SSD for SATA
Well, I wouldn't. But then again, not everyone can sell a kidney. :D
Posted on Reply
#5
yapchagi
well at least it has A BACKPLATE!!!
Posted on Reply
#6
Ferrum Master
I really like this thing... give us moar... :rockout:

although I'm quite annoyed by the Samsung 840 Evo slow-read bug, i.e. charge dissipation from the TLC cells... I also suffer from that... and still no firmware fix available...
Posted on Reply
#7
Prima.Vera
Bye bye, crappy and uber-buggy RevoDrive. You won't be missed. I hope these PCIe drives become mainstream, because I would really love a 512 GB one with 3GB/s reads :)
Posted on Reply
#8
Tallencor
Well I guess it may be time to upgrade the mobo for one that is 3 feet long and has 15 pci slots.
Edit: :laugh:
Posted on Reply
#9
shhnedo
Tallencor said:
Well I guess it may be time to upgrade the mobo for one that is 3 feet long and has 15 pci slots.
I disagree with you.
Posted on Reply
#11
Aquinus
Resident Wat-man
Prima.Vera said:
Bye bye, crappy and uber-buggy RevoDrive. You won't be missed. I hope these PCIe drives become mainstream, because I would really love a 512 GB one with 3GB/s reads :)
You know, there comes a point of diminishing returns. Doubling my bandwidth by putting my two Force GTs in RAID-0 definitely gave me 1GB/s for a while, but it honestly doesn't feel any more responsive, and the only place you notice it is in benchmarks or when copying files. IOPS is really a better gauge of responsiveness, but unless you're copying many gigabytes of data on a regular basis, PCI-E flash makes little sense.

Capturing high-resolution video is one great example. Many database and I/O-heavy servers are also starting to use PCI-E flash, and one of the best things about PCI-E flash is that it is its own controller, so companies like Rackspace can offer "On-Metal" VMs, which are really just very beefy VMs with PCI-E flash passed through using VT-d (releasing the device from the host and giving the VM hardware-level access to it), so your performance is basically the same as on a dedicated box.

There are a lot of reasons why this is awesome for servers but not awesome for consumers. Also note how they don't give an estimated price. It's probably more than most of us can afford or would be willing to invest in a single part.
Posted on Reply
#12
Prima.Vera
Thanks.
How about startup of programs, the OS, game level loading, etc.? I dream of those happening almost instantaneously sometime in the near future...
With 3GB/s this might just come true. It's almost as fast as the slowest DDR2 RAM out there...
Posted on Reply
#13
Aquinus
Resident Wat-man
Prima.Vera said:
Thanks.
How about startup of programs, the OS, game level loading, etc.? I dream of those happening almost instantaneously sometime in the near future...
With 3GB/s this might just come true. It's almost as fast as the slowest DDR2 RAM out there...
I've used this oversimplified example to explain this before, and it doesn't factor in latency, but it gives a general idea.

If you have a game that takes 10 seconds to load on a spinny disk that can do 125MB/s, and 6 seconds of that time is spent doing disk I/O, then doubling the disk speed (SATA 3Gb/s) cuts the I/O time in half, leaving you with 4 seconds of loading unrelated to the disk and 3 instead of 6 seconds of disk I/O, for a 7-second load time instead of 10. Double the I/O speed again (SATA 6Gb/s) and you halve the disk I/O again, so instead of 3 seconds it's now 1.5 seconds, but the 4 seconds of loading unrelated to disk I/O hasn't changed, so now you're loading in 5.5 seconds instead of 7; do it again and you get 4.75, then 4.38, and so on. So even as you increase disk access speed, your bottleneck shifts from disk I/O to everything else.

This is what I mean by diminishing returns: loading isn't 100% disk I/O, and less and less of what's being done (time-wise) is disk I/O as you make your drives faster.

Loading anything is always as slow as the weakest link in the chain, and all you do by making drives faster and faster is make a different component your bottleneck, if it's not the code itself at fault for being slow.
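Roughly, in code (same made-up numbers from the example: 4 seconds of fixed CPU/GPU work, 6 seconds of disk I/O at baseline speed):

```python
# Toy model of load time as disk speed doubles: a 10-second load
# split into 4 s of fixed (CPU/GPU) work and 6 s of disk I/O.
fixed_s = 4.0   # seconds unrelated to disk I/O
io_s = 6.0      # seconds of disk I/O at the baseline speed

speedup = 1
for _ in range(5):
    total = fixed_s + io_s / speedup
    print(f"{speedup:>2}x disk speed -> {total:.2f} s load time")
    speedup *= 2
```

Each doubling shaves off half of whatever disk time remains, but the fixed 4 seconds never shrinks, so the totals converge toward 4 seconds no matter how fast the drive gets.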
Posted on Reply
#14
The Von Matrices
Aquinus said:
I've used this oversimplified example to explain this before, and it doesn't factor in latency, but it gives a general idea.

If you have a game that takes 10 seconds to load on a spinny disk that can do 125MB/s, and 6 seconds of that time is spent doing disk I/O, then doubling the disk speed (SATA 3Gb/s) cuts the I/O time in half, leaving you with 4 seconds of loading unrelated to the disk and 3 instead of 6 seconds of disk I/O, for a 7-second load time instead of 10. Double the I/O speed again (SATA 6Gb/s) and you halve the disk I/O again, so instead of 3 seconds it's now 1.5 seconds, but the 4 seconds of loading unrelated to disk I/O hasn't changed, so now you're loading in 5.5 seconds instead of 7; do it again and you get 4.75, then 4.38, and so on. So even as you increase disk access speed, your bottleneck shifts from disk I/O to everything else.

This is what I mean by diminishing returns: loading isn't 100% disk I/O, and less and less of what's being done (time-wise) is disk I/O as you make your drives faster.

Loading anything is always as slow as the weakest link in the chain, and all you do by making drives faster and faster is make a different component your bottleneck, if it's not the code itself at fault for being slow.
You have a fair point regarding diminishing returns in the present time, but if you consider the disk a long term investment (in computer lifespan, say 5 years) the faster I/O can make sense. Since programs inevitably get larger and more resource intensive as time goes on, the amount of data to load increases over time and therefore the performance gap between the faster disk and the slower disk increases over time. It's also worth noting that the rest of the components in the system are not stagnant, and as the other components of the system get faster, you need faster disks in order to keep the bottleneck from shifting back to the disk.

That said, I still think it's a better idea to upgrade mid-priced components frequently compared to upgrading high priced parts less often.
Posted on Reply
#15
Aquinus
Resident Wat-man
The Von Matrices said:
You have a fair point regarding diminishing returns in the present time, but if you consider the disk a long term investment (in computer lifespan, say 5 years) the faster I/O can make sense. Since programs inevitably get larger and more resource intensive as time goes on, the amount of data to load increases over time and therefore the performance gap between the faster disk and the slower disk increases over time. It's also worth noting that the rest of the components in the system are not stagnant, and as the other components of the system get faster, you need faster disks in order to keep the bottleneck from shifting back to the disk.

That said, I still think it's a better idea to upgrade mid-priced components frequently compared to upgrading high priced parts less often.
I think it's important to re-emphasize that it's only going to really impact loading, so consider that you would be spending at least twice as much to gain at most a few seconds of load time over a regular SATA 6Gb/s SSD. My point is that the size of applications might be getting bigger, but so is the amount of pre-processing that the CPU has to do anyway. I don't think application complexity implies that the disk will naturally become the bottleneck; across the board, performance has hinged more on CPU IPC and GPU performance than anything else.

...but keep in mind, the disk only helps you when you're loading resources that aren't already cached or in memory. Once everything is loaded, the disk means next to nothing. I would rather invest more money in something that will actually improve my performance considerably across the board. I'm patient enough that I don't mind waiting a few extra seconds for something to load.
Posted on Reply
#16
Prima.Vera
Let's just be clear, the SSD is STILL the slowest component of the whole system.

I'm curious what the second one is...
Posted on Reply
#17
Aquinus
Resident Wat-man
Prima.Vera said:
Let's just be clear, the SSD is STILL the slowest component of the whole system.

I'm curious what is the second one...
Let's be perfectly clear: it's also the least-used component in the system compared to the CPU, memory, and GPU, so you'll hit diminishing returns even faster than by improving something else. Remember we're talking mass storage, so it only actually impacts performance during the fraction of time the application spends doing disk I/O. The disk usually isn't the limiting factor in most normal applications, and once you have an SSD, it's latency that gives you responsiveness, not bandwidth. I know that when I load a number of games, they won't top out my SSD RAID, because more time is spent doing computations on the CPU, GPU, and memory than reading data off the disk.

All I'm saying is that this hardware is probably designed for business applications and that it won't make your gaming experience any better than it already is. A database, on the other hand, would fly on something like this versus a normal SSD, but that's because databases mostly do disk I/O and can benefit from it.
Posted on Reply
#18
D007
TRWOV said:
*drooling*
My sentiments exactly Mrs Esterhoouuussseee.
Posted on Reply