
Anyone use their PCI slots for m.2 drives?

$1500 is pocket change lol


Thanks! I've never heard of PCIe bifurcation before, I'll have to look into that. There seems to be a lot of "generic" adapters as well. Any thoughts on those?

Bifurcation cards are generally all the same; they split 16 lanes into 4/4/4/4, for example. It does, however, depend on your motherboard whether it supports bifurcation or not.
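For a rough sense of what each slice gets after a 4/4/4/4 split, here's a quick back-of-the-envelope calc (the 8 GT/s rate and 128b/130b encoding below are assumptions for PCIe 3.0; Gen4 doubles the raw rate):

```python
# Rough usable bandwidth per link width after bifurcating an x16 slot.
# Assumes PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
RAW_GT_S = 8.0                         # gigatransfers per second, per lane
ENCODING = 128 / 130                   # 128b/130b line-code efficiency
GB_PER_LANE = RAW_GT_S * ENCODING / 8  # ~0.985 GB/s usable per lane

for lanes in (16, 8, 4):
    print(f"x{lanes}: ~{lanes * GB_PER_LANE:.1f} GB/s")
# x16: ~15.8 GB/s, x8: ~7.9 GB/s, x4: ~3.9 GB/s -- each drive in a
# 4/4/4/4 split still gets everything a Gen3 NVMe SSD can actually use.
```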
 
B key means the second drive is SATA only.
Something like this would do two PCIe drives, but comes at a price, as it has a PCIe switch for boards that don't support bifurcation.
A much cheaper option if your board does support bifurcation.

Used a simple one slot adapter in my previous system when I upgraded my SSD, as it allowed me to clone my drive easily.
They're simply mechanical converters, so there's nothing really to them.
I did find some dual pcie adapters and they're like $200 lol. Forget that. I'll probably be snagging another 1 TB m.2 and $10 adapter soon. Thanks for the info.

Man, if there was one thing I appreciated about the Asrock Taichi board I had a while back, it was the three m.2 sockets lol. And it looked really cool as well.
 
To fully use it, you need an option in bios to split x8 link into x4+x4 link.

You'll need a board that supports PCIe bifurcation though. If not, you can only use one device per PCIe slot.


Hi all,

Sorry to drag this thread up but I'm a little confused over how the bifurcation works. Until I saw this thread,
I thought I would be able to support two M.2 NVME drives on a simple double adapter placed in my 2nd PCIe x16 slot.

My GA Z390 Gaming SLI board has an x8 + x8 bifurcation switch, so I'm taking what Flaky said literally, and I would like
a little clarification here before I spend money: could you please help me understand why I need to split the x8 link into an x4+x4 link?

kayjay's comment has me wondering as well. Does this mean that without a bifurcation option, I can still use a single device
on the second PCIe x16 slot?

Won't a simple double adapter (mechanical with no controller) just share the PCIe x8 bandwidth on the second x16 slot
if I enable x8 bifurcation?

And who first penned the x16 x8 x4 description for PCIe lanes, I'm calling them out for dyslexia, (not that there is anything
wrong with that), shouldn't it be 16x 8x 4x etc?
 

First, the x is in the right spot; it's x16, not 16x. 16x denotes a multiplier, whereas x16 denotes a width.

As for running two NVMe drives on a single PCIe slot, you need a PCIe switch. However, that adds latency, which is unavoidable since we are adding a switch in the signal path. It also adds cost, as the switches are not cheap.

Within the last few generations, bifurcation has become an alternative to PCIe switches. Essentially, the motherboard partitions, say, an x16-width slot into four x4 partitions, thus allowing us to run four x4 NVMe drives off one x16 slot. If we were to do this with PCIe switches, you end up with a card like the one posted above, at around 200 bucks or more.
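To put rough numbers on that (the per-drive rating below is an assumption for a typical Gen3 x4 NVMe SSD), here's what a bifurcated x16 card can aggregate in the ideal case:

```python
# Hypothetical aggregate throughput of striped Gen3 x4 NVMe drives on one
# bifurcated x16 slot; the per-drive figure is an assumed vendor rating.
LINK_X4_GBPS = 3.9   # ~PCIe 3.0 x4 usable bandwidth per partition
DRIVE_GBPS   = 3.4   # assumed per-drive sequential read rating

for n in (1, 2, 4):
    best_case = n * min(DRIVE_GBPS, LINK_X4_GBPS)
    print(f"{n} drive(s): up to ~{best_case:.1f} GB/s sequential")
# ~3.4, ~6.8, ~13.6 GB/s -- real arrays land below this once software RAID
# overhead and non-sequential I/O enter the picture.
```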

I did a quick Google on your board and some reviews are using the wrong terminology, like the Tom's review lol. They use the term bifurcation incorrectly, which is no surprise smh.

CPU PCIe Bifurcation (ie, sharing the CPU’s sixteen PCIe 3.0 lanes across two or three PCIe slots).

See the use of bifurcation there? CPU lanes are split using PCIe switches, not bifurcation. Bifurcation uses no PCIe switches.
 
Most cheap dual M.2 adapters provide one slot for M.2 PCIe, and one for M.2 SATA.
True dual M.2 PCIe cards have an x8 connector, for example the AOC-SLG3-2M2.
To fully use it, you need an option in the BIOS to split the x8 link into an x4+x4 link. If such an option exists, then such a card would have to be placed in the second full-width PCIe slot on your board.

There are also active cards (example: AOC-SHG3-4M2P) with a controller on them, but those are extra pricey.

If you only need an extra one, grab a simple x4->M.2 adapter.

The Asus M.2 adapter is usually less than $100 and you can populate it with 4 NVMe drives. Not only that, but you can have 2 RAID 0 arrays on 1 card.
 
First, the x is in the right spot; it's x16, not 16x. 16x denotes a multiplier, whereas x16 denotes a width.

Only a Dyslexic person could know that :)

But seriously, thanks for the explanation on bifurcation without switches, and the terminology errors,
but does this mean I can or cannot use a PCIe NVMe double adapter in my second x16 PCIe slot when
my BIOS has that x8/x8 bifurcation option? Or are you saying that it isn't really bifurcation at all that my BIOS
is controlling, and that my board actually uses switches instead?
 

You keep mentioning x8/x8; again, that is not bifurcation. CPU lanes are split in hardware via PCIe switches, which your BIOS controls. Bifurcation uses NO SWITCHES. Assuming your lanes go from x16 to x8/x8, that portion is done by the BIOS, again via the PCIe switches. You can literally see the switches between the PCIe slots. Any NVMe adapter, whether it's dual or quad, will either need switches or use bifurcation.
 

Yes, I keep mentioning x8/x8 because that's my only BIOS option other than AUTO under a setting that says PCIe Bifurcation Support.
So, you are saying that my motherboard that supposedly supports bifurcation actually doesn't.
This makes me wonder then how SLI graphics support is viable on the two PCIe slots if there is added latency via the switches.

So, presumably, from your explanation, even if I buy a PCIe double M.2 NVMe adapter for that second x16 PCIe slot on my motherboard, I'm
still not going to be able to use those SSDs for a RAID0 array with both SSDs on that card, and I am literally limited to using just one SSD
on any adapter in that PCIe slot?
 
Bifurcation and switching are two different things.
Bifurcation refers to the PCIe controller's capability of being reconfigured - so that instead of a single x16 link, it can be x8+x8, or x8+x4+x4 - that's what mainstream Intel CPUs have supported since Ivy Bridge.
Switching refers to the motherboard's capability of routing the PCIe signal into different slots depending on configuration - software, hardware, or both.

kayjay's comment has me wondering as well. Does this mean that without a bifurcation option, I can still use a single device
on the second PCIe x16 slot?
Yes. Simple adapters with one M.2 PCIe slot generally work everywhere.
Worst case scenario for dual/quad adapters is providing only one functioning M.2.
Under the hood, a typical SLI motherboard like yours detects a card in the second x16 slot, and enables both switching and bifurcation.

Won't a simple double adapter (mechanical with no controller) just share the PCIe x8 bandwidth on the second x16 slot
if I enable x8 bifurcation?
To use an x8 dual M.2 card in the second x16 slot, you'd need to have a 4+4 bifurcation option (that would enable the x8+x4+x4 capability mentioned above).
Without that, such a card will have only one M.2 slot working, as the PCIe controller expects only one device on the up-to-8-lane link.
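If you end up testing one of these cards under Linux, a quick way to see what each M.2 slot actually negotiated is to read the link attributes out of sysfs. A minimal sketch, assuming the usual /sys/bus/pci layout (0x010802 is the PCI class code for NVMe controllers):

```python
# List every NVMe controller and the PCIe link width/speed it negotiated.
# Sketch only: assumes Linux and a standard sysfs layout.
import glob, os

def read_attr(dev, name):
    with open(os.path.join(dev, name)) as f:
        return f.read().strip()

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    if read_attr(dev, "class") != "0x010802":   # NVMe controller class code
        continue
    print(os.path.basename(dev),
          f"x{read_attr(dev, 'current_link_width')}"
          f" of x{read_attr(dev, 'max_link_width')},",
          read_attr(dev, "current_link_speed"))
```

If a dual adapter's second slot never shows up here (or in Device Manager on Windows), that's the bifurcation option not doing what you hoped.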
 
Thanks Flaky,

I think that clears up a few things.

I tried using the two onboard M.2 slots for a RAID0, only to discover that it doesn't actually work properly as those slots are bottlenecked
through the DMI, so I was trying to find a workaround for the new PC; looks like it's not gonna happen.

I don't think my bios has any options to support raid through any double adapter on a PCIe card anyway, probably because that doesn't actually work.

I bought two Silicon Power NVMe 1TB drives and they turned out to have different controllers on each of them. This new PC is an absolute failure;
I got sucked in by the hype and misleading advertising, very unlike me...
 
If you put one drive in a native M.2 slot, and the second one in the x8 slot via any M.2 PCIe adapter, then if you RAID these drives you won't be bottlenecked by the DMI (as only one drive will communicate through it).

Afaik Intel's RST does not support RAIDing CPU-attached NVMes on the mainstream platform, so you'd have to rely on software means.
If you're using windows, there are two options:
1) Dynamic disks. Not recommended, as TRIM isn't supported there
2) Storage spaces. Has some space overhead compared to true/fake raid or dynamic disks. I'm not sure if this supports TRIM, but as it's much newer than DD, I expect it to :P

What was the original point of RAIDing NVMes at all? Very few use cases really benefit from such speeds...
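To illustrate why drive placement matters here, a rough comparison - the numbers are assumptions (DMI 3.0 behaving like a shared PCIe 3.0 x4 uplink, drives rated around 3.4 GB/s sequential):

```python
# Back-of-the-envelope: two-drive RAID0 reads with both drives behind the
# chipset vs. split between a chipset M.2 slot and a CPU-attached adapter.
DMI_GBPS   = 3.9   # assumed DMI 3.0 ceiling (~PCIe 3.0 x4, shared)
DRIVE_GBPS = 3.4   # assumed per-drive sequential read rating

both_behind_dmi = min(2 * DRIVE_GBPS, DMI_GBPS)
split_placement = min(DRIVE_GBPS, DMI_GBPS) + DRIVE_GBPS

print(f"both drives on chipset M.2 slots: ~{both_behind_dmi:.1f} GB/s")
print(f"one chipset + one CPU-attached:   ~{split_placement:.1f} GB/s")
# ~3.9 GB/s vs ~6.8 GB/s -- the shared DMI link is the cap in the first case.
```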
 
If you put one drive in a native M.2 slot, and the second one in the x8 slot via any M.2 PCIe adapter, then if you RAID these drives you won't be bottlenecked by the DMI (as only one drive will communicate through it).

I did try this, but so far I've not been able to RAID them in the BIOS, or in the Ctrl+I RAID BIOS.
The options disappear, or in certain configurations the drive in the PCIe slot doesn't show up in the BIOS at all.

The SATA options are only AHCI and Intel Optane blah blah blah, which is the RAID option according to the manual.
There is no simple RAID config under the SATA options. I always thought AHCI was a subset of RAID, so I was surprised
by this BIOS, in several ways to be honest.

My Gigabyte Z87 board has an impressive BIOS, so I was left a bit stunned by the BIOS on this new board: a very confusing
layout, and though it has a ton of overclocking options, this new BIOS is less than ideal to say the least. The user manual
describes a completely different BIOS, so I reverted back three BIOS versions so I could learn it from the manual and get my head
around it. Then I loaded the latest BIOS and things seemed to fall into place and work properly, whereas I seemed to be missing
options on the BIOS it shipped with, and Qfan wasn't working either, but it all seems good now.

The GA Z390 Gaming SLI is my first Z390 board; I bought it because it had SLI slots and 12-phase power delivery.
I didn't think about anything other than using the other x16 slot for another GTX1060 one day,
when or if I need to upgrade my graphics. I'm a flight simmer / casual gamer / PC tech with no particular speciality,
I mainly just do support for other gamers / clan members, for the last 20 years or so.

Anyway, I was doing a build for a clan member, and he found some 9600KF CPUs cheap, so he bought me one for doing his
build. Not even wanting a new PC, I started buying socket 1151 / Z390 compatible stuff as it was going EOL, and all this
at the height of COVID price fever on PC parts; not my best bargain PC purchase :p

Why RAID-0 ? The long answer...

So, the only real reason I wanted a new PC was to get some NVMe SSDs happening in a RAID0 array, and NVMe drives have been
so expensive here in Australia; I just paid $500 for 2 x 1TB el cheapo Silicon Power 3400/3000 speed drives, so when you get back
off the floor, read on......

The reason I wanted RAID-0 is that I am obviously completely insane, because every "expert" out there reckons it's of no benefit.
However, that is not my experience. I've been running RAID-0 OS drives for about 20 years and they have been a joy to work with; I move
a lot of data and do a lot of backups, keep client backups and do some game mod development that requires a fair few copies and
backups, etc.
So, yeah, that's why I want RAID0. It made such a difference to my PC experience over the years on legacy drives and SATA SSDs
that I just figured it would scale to NVMe after seeing some benchmark speeds posted by the deceptors.

However, I just spent some time in a reputable RAID forum and learned that I was a fool for not looking into this more deeply, and for scoffing
when other power users were suddenly buying AMD chipsets, not knowing the implications for NVMe RAID on the latest crop of
Intel consumer-grade boards. ID10T error.

It's now pretty obvious to me that I just need to get myself a decent 2TB NVMe SSD and be done with it.

I am returning my Silicon Power drives to the vendor. We have powerful consumer protection laws here in Australia. If a product does
not suit its intended purpose, it can be returned for a full refund, no questions asked. I bought these SSDs to build a RAID set. I bought them
as they were advertised as the exact same product, same model, a confirmed matching pair by spec sheet and forum research etc; all
the pre-purchase research points to the fact that they all use Phison controllers and are identical. They are not. One of them uses
a Phison controller, and the other has a Silicon Motion controller. They have various architectural differences, but the gist of it is that the
Phison-controlled one has more raw grunt moving large data, while the Silicon Motion one uses more cache to process smaller amounts
of data quickly. I wanted two identical SSDs so I could use the same firmware on them both, and this is impossible. Instead, the lowest
common denominators will be used for each stick in RAID, so they cannot and do not meet the spec for this build.

After MUCH discussion, the vendor has agreed to accept the returned SSD's for analysis, and they admitted they were not aware of the
differences in these SSD's (after first blaming me for not buying compatible products!).

So, if I do get my money back, I'll just get a 2TB SSD and be done with it, but if the vendor finds any valid reason to refuse the RMA, then
I only have one option left.... that friend of mine bought the same stick as mine, the one with the Silicon Motion controller, and he will swap it for
my Phison-controlled SSD if I get into a bind and need it from him. He'll get a better SSD, and I'll have, hopefully, a RAID solution, even if it is one
bottlenecked on the DMI. This is looking more probable as I still cannot see a way to get a full-speed RAID-0 working on this board without spending
hundreds on a RAID card... if it's $200 USD, it'll be around $400 AUD here, and money doesn't come easy here either.

You see, I need 2TB of fast access; my main flight simulator currently has 1.3TB of HD scenery files and it's slowly growing, so that's a BIG chunk
of a 2TB drive.

It's all a big deal to me because I'm an invalid with a very limited budget and PC parts are so expensive here in OZ. I've never been caught with
a red face like this before when buying PC hardware; I've obviously lost touch with current tech, but the traps are there, pictured right on the cover
of the boxed hardware and on the product pages themselves, with disclaimers in the fine print. Buyer beware.

I am so over this PC build; I'd sell it tomorrow and take a 20% loss. Every other PC I ever bought, I was excited about, but this one has been a nightmare.
I ordered and paid for 3 different motherboards and the vendors kept cancelling as "out of stock". This board was my last choice as stocks dried up.

Bifurcation refers to the PCIe controller's capability of being reconfigured - so that instead of a single x16 link, it can be x8+x8, or x8+x4+x4 - that's what mainstream Intel CPUs have supported since Ivy Bridge.

So, why in the world doesn't my BIOS support bifurcation properly?
Presumably the PCIe controller is on the CPU, and it should be a simple thing to switch it in the BIOS?
You would think a $300 motherboard would not be skimping on this, but if it's as I suspect,
and the manufacturers simply limit the feature in order to sell more premium chipsets, then I
think I'm gonna get a bit vommity......
Wouldn't it actually cost more to put the switches on this board, which thesmokingman says he can see
(I can't see them, but I don't know what I'm looking for exactly)?
 
So, why in the world doesn't my BIOS support bifurcation properly?
Presumably the PCIe controller is on the CPU, and it should be a simple thing to switch it in the BIOS?
On Intel's 1xxx-socket CPUs, PCIe bifurcation is configured by physically setting high/low states on the CPU's CFG pins. For this to be controllable by software (that includes the BIOS), the motherboard has to be designed to support that.
You may ask Gigabyte support, but don't expect much - from my experience, their support is the worst out of the big 4.

(I can't see them, but I don't know what I'm looking for exactly)?
Four rectangular chips in a row under the first x16 slot. They are responsible for routing 8 lanes between the first and second x16 slots.
 
Ok, I see the switches now, hiding under the graphics card.
Thanks so much for being so specific and accurate in your answers Flaky, it really helped.

I really don't understand how the manufacturers have the hide to blatantly advertise bifurcation support when it doesn't
even exist on these boards. It's just openly lying to and disappointing their customers, leaving us feeling ripped off. What kind of
business model would do that and expect to survive?

The marketing boys really need to rein in their crack habits.
Oh, but this board does have support for pretty blinking lights ........ uutf?
 

Attachments

  • wowzy2.jpg
I really don't understand how the manufacturers have the hide to blatantly advertise bifurcation support when it doesn't
even exist on these boards.
The only thing related to this board that has been advertised to you without mentioning important details is the chipset's NVMe RAID capability and its possibility of being bottlenecked by the DMI.

I don't see where the motherboard manufacturer advertised bifurcation to you. Is it the product page? Is it the manual? Ctrl+F on both of these finds nothing.
Things like chipset HSIO or the CPU's bifurcation are building blocks for motherboard manufacturers, and it's up to them whether to even use these, or to expose them to end users and market them as "features".
 
I have tested this bifurcation function on the already old AMD X370 chipset and an ASRock mobo, via an ASRock 4x M.2 expansion card, which in this scenario supported only 2 M.2 drives in a single slot. I RAIDed them in a RAID0 configuration just for a test, and guess what: reading was exceptional, but the rest wasn't so fast; it was even slower than a single drive. That's why RAID0 with NVMe drives is not so good. Another thing is RAID with standard SSDs; then you will be limited only by chipset bandwidth and, of course, the number of SSDs used in it. Just like RAID0 with HDDs: I have achieved a result with hard drives near 1500MB/s sequential r/w, but that was with 8 of them. So SATA RAID is more scalable than an NVMe one, but there is the catch: a single M.2 drive always beats any SATA RAID where speed matters!
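That HDD figure lines up with simple striping math. A rough sketch (per-drive rates and the DMI ceiling below are assumed values) of how a SATA RAID0 scales until the shared chipset uplink gets in the way:

```python
# Ideal RAID0 sequential scaling, capped by an assumed chipset uplink.
DMI_MBPS = 3900   # assumed DMI 3.0 ceiling, roughly PCIe 3.0 x4

configs = [("HDD", 190, 8), ("SATA SSD", 550, 4), ("SATA SSD", 550, 8)]
for name, per_drive, count in configs:
    raw = per_drive * count
    print(f"{count}x {name}: ~{min(raw, DMI_MBPS)} MB/s (raw stripe {raw} MB/s)")
# 8x HDD lands near the ~1500 MB/s reported above; bigger SATA SSD stripes
# run into the shared uplink, which is about what one Gen3 x4 NVMe gets alone.
```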
 
I have tested this bifurcation function on the already old AMD X370 chipset and an ASRock mobo, via an ASRock 4x M.2 expansion card, which in this scenario supported only 2 M.2 drives in a single slot. I RAIDed them in a RAID0 configuration just for a test, and guess what: reading was exceptional, but the rest wasn't so fast; it was even slower than a single drive. That's why RAID0 with NVMe drives is not so good. Another thing is RAID with standard SSDs; then you will be limited only by chipset bandwidth and, of course, the number of SSDs used in it. Just like RAID0 with HDDs: I have achieved a result with hard drives near 1500MB/s sequential r/w, but that was with 8 of them. So SATA RAID is more scalable than an NVMe one, but there is the catch: a single M.2 drive always beats any SATA RAID where speed matters!

That's not true. Bifurcation setups are really for HEDT systems and they are ridiculously fast when done right, preferably Threadrippers. My 3970X production machine achieves 15GB/s reads and writes using an 8TB array, i.e. four x4 NVMe PCIe 4.0 drives.
 
15GB/s transfers? Where? In CrystalDiskMark? Or are you able to achieve those speeds copying onto the same drive? Because there is no single drive with that transfer speed to copy from....
 
I don't see where the motherboard manufacturer advertised bifurcation to you. Is it the product page?

No sir, nothing at all on the product page, it was just modified though.
Nothing in the manual on the BIOS bifurcation support option either, though it's there in the BIOS.

Perhaps I made some assumptions based on web reviews and forum articles then, my apologies. (red faced)
 

Attachments

  • gawebmod.jpg
15GB/s transfers? Where? In CrystalDiskMark? Or are you able to achieve those speeds copying onto the same drive? Because there is no single drive with that transfer speed to copy from....

??

You are missing the whole point of bifurcation with that statement.

 
I'm using three 1TB PCIe 3.0 drives in RAID ATM, using a relatively cheap Asus Hyper M.2 x16. I can only use two bifurcated x4 links on the second slot of an X470 board, plus one native M.2 slot, but it works well (9GB/s max transfer, or 11GB/s with a RAM cache).
It's doable; the adaptor cost about £40.
 