
AMD X399 Platform Lacks NVMe RAID Booting Support

Oh dear, no NVMe boot RAID support. End of the world as we know it! </sarcasm>

Speaking from a workstation point of view, RAID 0 on NVMe is a waste of time. I would be interested in RAID 10, since it adds crucial redundancy (that's the R in RAID, for the "RAID 0 generation"), but plain RAID 0 on devices that can handle something like 300,000 IOPS read/write? Nuts! RAID is ancient technology at this point anyway - it was never designed to work with NVMe devices. RAID adds a lot of overhead on top of the much superior NVMe protocol.

What's the point?

Only for benchmark nerds and a bigger e-peen. Nothing else.

In advanced servers, yes, you can utilize this (to a point), but in the SOHO segment... even video editing with multiple live 8K streams won't benefit much, if at all, from RAID 0 on NVMe.
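
Here's the back-of-envelope reason, as a quick Python sketch (the latency figure is an assumption for illustration, not a measurement):

# Why striping doesn't help the small random I/O desktop workloads are made of:
# a single 4K read still lands on one drive and still costs one drive-latency.
latency_s = 100e-6            # assume ~100 us per 4K random read, NVMe-class
qd1_iops = 1 / latency_s      # one outstanding request at a time (queue depth 1)
print(f"QD1 random-read IOPS, single drive: {qd1_iops:,.0f}")
print(f"QD1 random-read IOPS, 2-drive RAID 0: ~{qd1_iops:,.0f} (unchanged)")

Striping only pays off on big sequential transfers or deep queues, which is exactly what typical SOHO workloads aren't.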

What I would love to see are PCIe cards with M.2 slots for up to 4 drives (no RAID needed, just NVMe connectivity). Have you tried drive pooling with NVMe? No? It's quite something to behold, without RAID's quirks and moods.
 
We are talking about a boot device here, aren't we? It's not like you can't do this after the machine has started. My point is that for how edge-case this is, the workarounds aren't unreasonable.
"Aren't unreasonable" like switching from Windows to Linux and booting kernel from a flash drive? :-)
 
There is no way I am going to put all my eggs into one basket. When I changed platforms and migrated the OS to the new NVMe drive, the system was not able to boot afterwards, and I had to do a clean install of the OS. Luckily, all my important files were stored on 2 other drives. I lost no files, just time. :)
 

OMG, that's unforgivable! Look at all those people with a bunch of NVMe SSDs crying out loud: "You let us down, AMD! Instead of booting into shitty Win 10 in 10 seconds, I now need 13 seconds, oh the horror..." You get the point, don't you?
 
No big deal here... it is a Threadripper, not a RAID ripper or boot ripper. :)

HEDT = High-End Desktop

If it's OK, buy it; if not, leave it.
 
Oh the horror! But seriously, does anyone here actually boot off multiple NVMe SSDs? Seems a bit ridiculous given the speed those already provide.
Actually, yes. Sort of. I boot from dual M.2s in RAID 0. They're not NVMe, but whatever. It's fast as hell, so why not?
 
Actually, yes. Sort of. I boot from dual M.2s in RAID 0. They're not NVMe, but whatever. It's fast as hell, so why not?
Because there are NVMe devices (particularly those made by Samsung) that can do with a single device what your SATA-based M.2 RAID-0 setup does. Write speeds are going north of 2 GB/s and read speeds north of 3 GB/s. Compare that to the 1 GB/s I get with two devices in SATA3 RAID-0 and you quickly see why people like me think that a single NVMe device is enough for a boot drive.
 
Because there are NVMe devices (particularly those made by Samsung) that can do with a single device what your SATA-based M.2 RAID-0 setup does. Write speeds are going north of 2 GB/s and read speeds north of 3 GB/s. Compare that to the 1 GB/s I get with two devices in SATA3 RAID-0 and you quickly see why people like me think that a single NVMe device is enough for a boot drive.
And now imagine two of them in tandem. So instead of 20-second bootups, you get 12 to 13 seconds. If someone has the money and wants to, why not? I say have at it. Oh, and just FYI, my setup gets a nominal 960 MB/s average. There are very few NVMe drives that can even approach that number consistently.
 
"Aren't unreasonable" like switching from Windows to Linux and booting kernel from a flash drive? :-)
You realize you can make a (maybe) 100 MB partition on the SSD for /boot so that GRUB can load the kernel, while everything else loads off the software RAID, right?
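
Roughly, the layout looks like this. A minimal sketch driving the commands from Python; the device names and sizes are assumptions and will differ on your machine:

# Assumed layout (hypothetical device names):
#   /dev/nvme0n1p1                  -> ~100 MB /boot partition GRUB reads directly
#   /dev/nvme0n1p2 + /dev/nvme1n1p1 -> mdadm RAID 0 holding everything else
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Build the stripe from the two large partitions, then put root on it:
run(["mdadm", "--create", "/dev/md0", "--level=0",
     "--raid-devices=2", "/dev/nvme0n1p2", "/dev/nvme1n1p1"])
run(["mkfs.ext4", "/dev/md0"])          # root filesystem lives on the stripe
run(["mkfs.ext4", "/dev/nvme0n1p1"])    # the small /boot stays outside the array
# After mounting root and /boot, point GRUB's boot directory at the plain partition:
run(["grub-install", "--boot-directory=/mnt/boot", "/dev/nvme0n1"])

GRUB only has to read the kernel and initramfs from the plain partition; once the initramfs assembles /dev/md0, everything else comes off the stripe.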
 
You realize you can make a (maybe) 100 MB partition on the SSD for /boot so that GRUB can load the kernel, while everything else loads off the software RAID, right?
Ouch.
So now it's switching from Windows to Linux, using software RAID and putting /boot on one of the drives. I think I preferred the previous variant. :-D

Is it even possible to do that on a mainstream distribution (Ubuntu, Debian, Mint, etc.) without editing dozens of config files?
What about the stuff that I want to run before mdadm?
 
Ouch.
So now it's switching from Windows to Linux, using software RAID and putting /boot on one of the drives. I think I preferred the previous variant. :-D

Is it even possible to do that on a mainstream distribution (Ubuntu, Debian, Mint, etc.) without editing dozens of config files?
What about the stuff that I want to run before mdadm?
You can do all of that before you install, using a live CD.
 
You can do all of that before you install, using a live CD.

So is there a box to check for it in the install GUI, or is it like two or three button clicks, tops?
 
And now imagine two of them in tandem. So instead of 20-second bootups, you get 12 to 13 seconds. If someone has the money and wants to, why not? I say have at it. Oh, and just FYI, my setup gets a nominal 960 MB/s average. There are very few NVMe drives that can even approach that number consistently.
You're right, 3 GB/s usually isn't sustained, but 2 GB/s is for reads. Same thing for writes: 2 GB/s might not be sustained, but >1 GB/s is. 2 GB/s is double the speed of your RAID 0 with a single device (Samsung 960 Pro), and you're forgetting that booting leans on random read performance, which almost never scales in RAID 0. My mid-2015 MacBook Pro, which is 2 years old, can do practically 800 MB/s with the NVMe card that came with the laptop.
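
To put numbers on the boot-time claim above, here's the arithmetic (all figures assumed for illustration, not measured):

# Boot time only scales with disk speed on the part of boot that waits on the disk.
total_boot_s = 20.0
disk_bound_s = 15.0                    # assume 15 of the 20 s is spent waiting on I/O
other_s = total_boot_s - disk_bound_s  # firmware, CPU work, service startup, etc.
for speedup in (1.5, 2.0):             # what RAID 0 might buy on the disk-bound part
    print(f"{speedup}x disk speed -> {other_s + disk_bound_s / speedup:.1f} s boot")

A perfect 2x only gets you to 12.5 s even with that generous split, and since booting is mostly random reads, RAID 0 rarely delivers anywhere near 2x in the first place.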
 
This thread is about as relevant as that thread where people whined and complained about the torque of the Torx tool from AMD.
 
But I can still RAID non-boot drives. I'm fine with that... if I were to buy a Threadripper... which I'm not.

Yeah, I mean, in an age where a 960 EVO gets 3,200 MB/s of bandwidth, do we really need NVMe bootable RAID?
 
You're right, 3 GB/s usually isn't sustained, but 2 GB/s is for reads. Same thing for writes: 2 GB/s might not be sustained, but >1 GB/s is. 2 GB/s is double the speed of your RAID 0 with a single device (Samsung 960 Pro), and you're forgetting that booting leans on random read performance, which almost never scales in RAID 0. My mid-2015 MacBook Pro, which is 2 years old, can do practically 800 MB/s with the NVMe card that came with the laptop.
And that's a good point. My burst speeds range up to 1.6 GB/s. My point is that I'm using two 480 GB MLC drives on a bootable RAID card for $260. You show me a single 960 GB NVMe drive that can get the performance you state for the same price or less, and I'll go buy it.
 
Burst uses cache for that value. Otherwise, you aren't breaking 1.1 GB/s or so, as that is how fast the drives are.

For $350, around 33% more, you can get one with ~70% faster reads (Intel 600p)... and it smokes your setup in IOPS. Also, your RAID card wasn't free; that cost should be included. ;)

Or spend $200 more, around 80% more, for a 300% performance increase... a 1 TB 960 EVO.

I'm not saying it's the right move, but there are certainly benefits to having a single, much faster M.2 drive versus 2 SATA drives in RAID 0 on a RAID card. The cost/GB isn't there, but the performance, the shorter boot times from not having to POST the RAID ROM, and the lower chance of an array crapping out are real.
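
Running the thread's own numbers for $/GB and read throughput per dollar (prices and speeds as quoted in this thread, so ballpark at best):

# (price $, capacity GB, sequential reads MB/s) as cited above
options = {
    "2x 480 GB SATA M.2 RAID 0 + used card": (260, 960, 1100),
    "1 TB Intel 600p NVMe (~70% faster reads)": (350, 1024, 1870),
    "1 TB Samsung 960 EVO NVMe": (460, 1024, 3200),
}
for name, (price, gb, mbps) in options.items():
    print(f"{name}: ${price / gb:.2f}/GB, {mbps / price:.1f} MB/s per $")

By that yardstick the single NVMe drives actually win on throughput per dollar; it's only $/GB where the SATA pair stays ahead, which is exactly the trade-off being argued here.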
 
Burst uses cache for that value. Otherwise, you aren't breaking 1.1 GB/s or so, as that is how fast the drives are.

For $350, around 33% more, you can get one with ~70% faster reads (Intel 600p)... and it smokes your setup in IOPS. Also, your RAID card wasn't free; that cost should be included. ;)

Or spend $200 more, around 80% more, for a 300% performance increase... a 1 TB 960 EVO.

I'm not saying it's the right move, but there are certainly benefits to having a single, much faster M.2 drive versus 2 SATA drives in RAID 0 on a RAID card. The cost/GB isn't there, but the performance, the shorter boot times from not having to POST the RAID ROM, and the lower chance of an array crapping out are real.
That cost included the RAID card, which was used [but in perfect condition]. I got the drives new, on sale. You seem to have missed the part where I said they were M.2 drives. But they're not NVMe, which is what Aquinus and I were talking about.
 
That cost included the RAID card, which was used [but in perfect condition]. I got the drives new, on sale. You seem to have missed the part where I said they were M.2 drives. But they're not NVMe, which is what Aquinus and I were talking about.
I know they aren't NVMe... SATA-based M.2, 550 MB/s read drives. The 1.1 GB/s value I'm talking about is in RAID 0. I didn't specify NVMe when I said M.2, and then I referred to SATA as a protocol. I can see why you thought that. :)

OK, so a drive on sale and a used RAID card were $260... many may not have that opportunity. I'm just saying there are use cases for a single NVMe drive at a higher cost. It's up to the buyer to determine whether those costs are worth it. Not a huge amount of real-world performance increase, but it's there.
 
Oh the horror! But seriously, does anyone here actually boot off multiple NVMe SSDs? Seems a bit ridiculous given the speed those already provide.

I'm sorry, this is a HEDT platform. Who NEEDS 16 cores 32 threads? Who NEEDS 64 GB of RAM? Who NEEDS a Ferrari? HEDT is for people who WANT the BIGGEST e-peen. I WANT to run my 2x NVMe drives in RAID 0.

It makes no sense that this can't be done.
 
For someone needing that kind of drive throughput (not sure what for), how expensive is a RAID controller?
Considering how much is being spent already, granted, it would be nice to have the option.

Board-level "hardware" RAID is barely hardware RAID; it's CPU-bound, which shares the downsides of Windows "software" RAID.
In some cases OS-level RAID can actually be better than basic onboard RAID, thanks to the recovery options when hardware fails.

Lastly, on boot times: RAID 0 often boots slower than non-RAID because of the RAID detection time; everything is faster after the fact.
 