Monday, July 23rd 2018

Toshiba Intros XG6 Series M.2 NVMe SSDs

Toshiba today introduced the XG6 series SSDs. Built in the M.2-2280 form factor with a PCI-Express 3.0 x4 interface, the drives take advantage of the NVMe 1.3a protocol and succeed 2017's XG5 series. They implement Toshiba's new 96-layer 3D TLC NAND flash, dubbed BiCS Flash, which entered mass production in Q1 2018. Available in 256 GB, 512 GB, and 1 TB capacities, the drives offer sequential transfer rates of up to 3,180 MB/s reads and up to 2,960 MB/s writes, along with 4K random access rates of up to 355,000 IOPS reads and up to 365,000 IOPS writes. The drives are expected to be backed by 5-year warranties when they go on sale, at prices competitive with the likes of Samsung's 970 EVO series.

7 Comments on Toshiba Intros XG6 Series M.2 NVMe SSDs

#1
bonehead123
Yet another Sammy wannabe.....

Maybe by this time next year they will have a drive that is as fast as the EVOs, but of course by then Sammy will have drives even faster than their current ones, and so the race continues :D
#2
Prima.Vera
It's funny how quickly we went from being interface-limited at SATA3's 550 MB/s to NVMe's 3,500 MB/s.:laugh::laugh:
I guess we need (AGAIN) a new type of interface/protocol that is no longer bandwidth limited?? o_Oo_O
Or just extend the number of lanes to 8 instead of 4?
I have another question: when PCIe 4.0 and 5.0 arrive, does that mean an automatic doubling of maximum bandwidth for NVMe as well?
#3
hat
Enthusiast
Prima.Vera said:
It's funny how quickly we went from being interface-limited at SATA3's 550 MB/s to NVMe's 3,500 MB/s.:laugh::laugh:
I guess we need (AGAIN) a new type of interface/protocol that is no longer bandwidth limited?? o_Oo_O
Or just extend the number of lanes to 8 instead of 4?
I have another question: when PCIe 4.0 and 5.0 arrive, does that mean an automatic doubling of maximum bandwidth for NVMe as well?
If you really wanted to, you could probably put two NVMe drives in RAID-0... besides that, yes, when PCIe 4.0 hits, bandwidth will double. So a PCIe 4.0 x4 drive would theoretically be able to deliver a maximum of 8 GB/s when seated in a PCIe 4.0 slot. PCIe 5.0 would further increase this to 16 GB/s. Or, a hypothetical PCIe 5.0 x1 SSD could operate at the same theoretical maximum speed as currently existing 3.0 x4 drives, using only one lane instead of four.
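The lane/generation math works out neatly. A quick sketch: the per-lane transfer rates and the 128b/130b line encoding are PCIe spec values, but the helper function itself is just my illustration, not anything from this thread:

```python
# Rough theoretical PCIe bandwidth per generation (gen 3 and newer
# all use 128b/130b encoding; rates are gigatransfers/s per lane).
GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Approximate raw link bandwidth in GB/s, before protocol overhead."""
    encoding = 128 / 130          # 128b/130b line encoding efficiency
    return GT_PER_LANE[gen] * encoding / 8 * lanes  # bits -> bytes

print(f"PCIe 3.0 x4: {pcie_bandwidth_gbps(3, 4):.2f} GB/s")  # ~3.94
print(f"PCIe 4.0 x4: {pcie_bandwidth_gbps(4, 4):.2f} GB/s")  # ~7.88
print(f"PCIe 5.0 x1: {pcie_bandwidth_gbps(5, 1):.2f} GB/s")  # ~3.94
```

So 4.0 x4 is exactly double 3.0 x4, and 5.0 x1 lands on the same ~3.94 GB/s ceiling as today's 3.0 x4 drives.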
#4
Caring1
Prima.Vera said:
It's funny how quickly we went from being interface-limited at SATA3's 550 MB/s to NVMe's 3,500 MB/s.:laugh::laugh:
I guess we need (AGAIN) a new type of interface/protocol that is no longer bandwidth limited?? o_Oo_O
Or just extend the number of lanes to 8 instead of 4?
I recall reading something about drives that use x8 instead of the usual x4.
Apparently they are out there:
Samsung PM1725a 6.4TB AIC HHHL PCIe 3.0 x8 NVMe Enterprise Internal SSD
#5
Prima.Vera
hat said:
If you really wanted to, you could probably put two NVMe drives in RAID-0... besides that, yes, when PCIe 4.0 hits, bandwidth will double. So a PCIe 4.0 x4 drive would theoretically be able to deliver a maximum of 8 GB/s when seated in a PCIe 4.0 slot. PCIe 5.0 would further increase this to 16 GB/s. Or, a hypothetical PCIe 5.0 x1 SSD could operate at the same theoretical maximum speed as currently existing 3.0 x4 drives, using only one lane instead of four.
Actually, RAID-0 doesn't make much of a difference anyway due to the interface limitation, which still caps out at ~3,500 MB/s. See some proof below:

https://www.vortez.net/articles_pages/samsung_960_pro_raid_review,5.html
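One likely explanation for that cap (my own back-of-envelope, not something from the linked review): on mainstream Intel boards both M.2 slots usually hang off the chipset, and the chipset's DMI 3.0 uplink to the CPU is electrically equivalent to a PCIe 3.0 x4 link, so the whole array shares one x4 pipe:

```python
# Back-of-envelope: DMI 3.0 is equivalent to PCIe 3.0 x4
# (8 GT/s per lane, 128b/130b encoding, 4 lanes).
dmi3_gbs = 8.0 * (128 / 130) / 8 * 4   # ~3.94 GB/s ceiling
single_960_pro_gbs = 3.5               # rated sequential read of one drive

print(f"DMI 3.0 ceiling: {dmi3_gbs:.2f} GB/s")
# Two 3.5 GB/s drives in RAID-0 behind the chipset can never exceed
# ~3.94 GB/s combined, so the array barely beats a single drive.
```

Which matches the numbers in the review: sequential results hover just above single-drive speed instead of doubling.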

Caring1 said:
I recall reading something about drives that use x8 instead of the usual x4.
Apparently they are out there:
Samsung PM1725a 6.4TB AIC HHHL PCIe 3.0 x8 NVMe Enterprise Internal SSD
Yeah, but that's on a dedicated PCIe add-in card, not in the M.2 form factor...
#6
hat
Enthusiast
I thought that by putting two PCIe x4 drives in RAID 0, you'd effectively end up with one large drive with a PCIe x8 link... guess not...
#7
Woomack
hat said:
I thought that by putting two PCIe x4 drives in RAID 0, you'd effectively end up with one large drive with a PCIe x8 link... guess not...
Yes and no... it all depends on the motherboard and/or controller. NVMe SSDs scale well up to 2-3 drives; above that, the bandwidth increase is not linear. Two SSDs will hit a max of ~7 GB/s, three ~9-10 GB/s, four about 10-11 GB/s... and five on my setup caused a performance drop to ~8 GB/s (M.2 sockets + PCIe card). I was able to reach 11 GB/s with 4x 970 EVO on an X399M Taichi and a PCIe x16 card. The same setup with additional drives in the M.2 sockets plus an additional PCIe card limited bandwidth to ~6 GB/s.
Max bandwidth on AMD and Intel chipsets is about the same, even though AMD TR has more PCIe lanes. On the other hand, who cares about max sequential bandwidth when we need performance in random operations, where at least at low queue depths (typical home/office work) a single NVMe SSD is faster than anything in RAID.
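Plugging in those figures, and assuming ~3.5 GB/s as the single-drive baseline (the 970 EVO's rated sequential read), the scaling efficiency drops off roughly like this:

```python
# RAID-0 scaling efficiency from the rough figures quoted above.
# SINGLE is an assumed per-drive baseline; observed maps drive count
# to the approximate array throughput in GB/s.
SINGLE = 3.5
observed = {1: 3.5, 2: 7.0, 3: 9.5, 4: 11.0}

for n, gbs in observed.items():
    efficiency = gbs / (n * SINGLE)   # fraction of ideal linear scaling
    print(f"{n} drive(s): {gbs:4.1f} GB/s, {efficiency:.0%} of linear")
```

Roughly 100% at two drives, ~90% at three, and under 80% at four, before it falls off a cliff at five.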

Prima.Vera said:
Actually, RAID-0 doesn't make much of a difference anyway due to the interface limitation, which still caps out at ~3,500 MB/s. See some proof below:

https://www.vortez.net/articles_pages/samsung_960_pro_raid_review,5.html
Old article on an old chipset; X299 and X399 scale up to ~12 GB/s with multiple NVMe SSDs. In most cases a PCIe x16 card is still required, as there are no boards with more than three M.2 sockets and no Intel boards that support RAID across multiple VMDs. Not to mention that Intel requires hardware VROC keys to make it work.