
Transcend Introduces 8 TB Industrial SSD with Power Loss Protection

Nomad76

News Editor
Staff member
Transcend Information, Inc. (Transcend), a global leader in storage and multimedia solutions, proudly announces the launch of its new SSD475P 2.5" solid-state drive, purpose-built for industrial applications and high-performance environments. Featuring Power Loss Protection (PLP) technology, the SSD475P uses built-in capacitors to continue supplying power during unexpected power outages, ensuring data is properly written and significantly reducing the risk of data loss.

The SSD475P is equipped with a SATA III 6 Gb/s interface and 112-layer 3D NAND flash, offering storage capacities of up to 8 TB, combining high-speed access with exceptional capacity. With sustained read/write speeds of up to 560/530 MB/s and a built-in DRAM cache, the drive enhances random access performance and overall endurance. Thanks to its Direct Write firmware, the SSD475P provides outstanding write stability for prolonged, intensive data workloads. It ensures consistent write speeds without throttling, making it ideal for industrial use cases involving frequent access and high write volumes, such as industrial PCs, embedded systems, and data logging devices.
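For context, those sequential figures sit right at the interface ceiling. A quick back-of-the-envelope check (a sketch; the only inputs are the standard 6 Gb/s SATA III line rate and its 8b/10b encoding):

    # SATA III signals at 6 Gb/s, but 8b/10b encoding means only 80% is payload.
    line_rate_bps = 6e9
    payload_mb_s = line_rate_bps * (8 / 10) / 8 / 1e6
    print(f"SATA III payload ceiling: {payload_mb_s:.0f} MB/s")  # -> 600 MB/s
    # 560 MB/s sequential read is ~93% of that ceiling, i.e. the drive is bus-limited.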



Designed to withstand demanding operating environments, the SSD475P undergoes Transcend's rigorous 100% testing and maintains stable performance across a wide temperature range of -40°C to 85°C. With an Uncorrectable Bit Error Rate (UBER) of 10⁻¹⁷, the SSD475P ensures exceptional data integrity. It supports key reliability and security features, including Dynamic Thermal Throttling, S.M.A.R.T. health monitoring, 4K LDPC ECC (Error Correction Code), AES encryption, and TCG Opal to enhance data protection.
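To put that UBER figure in perspective, a small worked example (a sketch; the 100 TB workload is an arbitrary assumption):

    # UBER of 1e-17 means one uncorrectable bit per 1e17 bits read, on average.
    uber = 1e-17
    workload_tb = 100                      # hypothetical amount of data read
    bits_read = workload_tb * 1e12 * 8
    print(f"Expected uncorrectable errors over {workload_tb} TB: {bits_read * uber:.0e}")
    # -> 8e-03, i.e. on average one unreadable bit per ~12.5 PB read.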

With its large capacity, robust PLP technology, and high durability, the SSD475P is an ideal solution for applications that require long-term operation and prioritize data integrity, such as industrial automation, embedded systems, surveillance recording, aerospace & defense, and edge server computing. It ensures reliable, high-speed storage performance even under extreme conditions.

Manufactured in Taiwan, the SSD475P is backed by a three-year limited warranty and has undergone stringent quality testing. It delivers the trusted performance and long-term stability that define Transcend's industrial-grade storage solutions.



View at TechPowerUp Main Site | Source
 
Doesn't sound very cheap unfortunately.

More large SATA SSDs are needed on the market.
 
More large SATA SSDs are needed on the market.

With the laughable IOPS and bandwidth that looked mediocre 15 years ago, SATA needs to die. I have absolutely zero love for this form factor. What we need is M.2 format PCIe SSDs that have their full capabilities realized. State-of-the-art controllers, high capacities, and decent prices. But the same companies that sell SSDs don't want to essentially invalidate their investment in their traditional HDD businesses, so we've stalled.

Everything this "industrial" drive has, the Intel 320 series SSD had back in 2011. And the X25-M even earlier still. With higher-endurance MLC flash. The storage cartel is one of the things I can't stand in the tech industry.
 
At the right price 20-30 of these in a RAID6 array would be very sweet.
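For a rough sense of what that array would look like (a sketch; the drive counts are just the 20-30 suggested above):

    # RAID 6 reserves two drives' worth of capacity for parity and
    # tolerates any two simultaneous drive failures.
    drive_tb = 8
    for n in (20, 30):
        print(f"{n} x {drive_tb} TB in RAID 6 -> {(n - 2) * drive_tb} TB usable")
    # 20 drives -> 144 TB usable; 30 drives -> 224 TB usable.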
 
the SSD475P uses built-in capacitors to continue supplying power during unexpected power outages
This is so cutting edge. I mean, capacitors? I'm sure the tech didn't even exist before today. :P
 
What we need is M.2 format PCIe SSDs that have their full capabilities realized.
Unfortunately the M.2 interface does not support hot swap natively.

What they need to do is put E1.S in consumer products.
 
With the laughable IOPS and bandwidth that looked mediocre 15 years ago, SATA needs to die. I have absolutely zero love for this form factor. What we need is M.2 format PCIe SSDs that have their full capabilities realized. State-of-the-art controllers, high capacities, and decent prices. But the same companies that sell SSDs don't want to essentially invalidate their investment in their traditional HDD businesses, so we've stalled.

Everything this "industrial" drive has, the Intel 320 series SSD had back in 2011. And the X25-M even earlier still. With higher-endurance MLC flash. The storage cartel is one of the things I can't stand in the tech industry.
SATA is still far superior to M.2 on board footprint and temperatures/power, and the drives can be stuck in drive bays or even left loose in the case if need be.
For 90% of usage they are also fast enough.

However, the problem SATA has is that the drives are no cheaper to manufacture than NVMe drives, so their value is a hard sell. Plus, SATA cable data integrity issues are a weakness.

As an example, in my NUC I could run a SATA drive passively cooled with no issue, while an M.2 in there is pegged at its throttle temperature unless active cooling blows down on the case.
 
SATA is still far superior to M.2 on board footprint
Definitely not. SATA is a waste of space, both in terms of the actual PCB housed in the 2.5" enclosure and the space it takes up in a case along with its two cables.
M.2 does away with cables and significantly reduces the SSD footprint.
and temperatures/power,
That's mostly down to the controller. SATA SSDs use old controllers made on larger nodes, and these have a bigger surface area to dissipate heat. Not to mention the bigger housing.
and the drives can be stuck in drive bays or even left loose in the case if need be.
Not exactly a plus in my eyes if I have to manage two extra cables.
However, the problem SATA has is that the drives are no cheaper to manufacture than NVMe drives, so their value is a hard sell. Plus, SATA cable data integrity issues are a weakness.
Exactly. If the price is the same or very similar, then they have no advantage even against old PCIe 3.0 M.2 SSDs.

The only real advantage I can see for SATA SSDs is completely replacing HDDs in terms of capacity, and even cost per GB and overall cost.
The reason being that M.2 only goes to 8 TB (well, maybe the 22110 standard can somehow fit 16 TB using both sides, but almost no one uses 22110).
2.5" SATA can easily go to 16 TB and probably even 32 TB in that form factor.

Unfortunately that does not seem to be happening due to total stagnation. QLC never went anywhere, was not significantly cheaper than TLC, and PLC is nonexistent.
Layer count increases do not seem to benefit end-user prices either.
 
SATA is still far superior to M.2 on board footprint and temperatures/power, and the drives can be stuck in drive bays or even left loose in the case if need be.
For 90% of usage they are also fast enough.

However, the problem SATA has is that the drives are no cheaper to manufacture than NVMe drives, so their value is a hard sell. Plus, SATA cable data integrity issues are a weakness.

As an example, in my NUC I could run a SATA drive passively cooled with no issue, while an M.2 in there is pegged at its throttle temperature unless active cooling blows down on the case.

Board footprint, perhaps. Wouldn't need much of that if they hadn't decided that 8 TB was to be a hard limit for SSDs so they can preserve the high capacity HDD business. Temps/power? Directly consequential to them being more than 10 times slower than NVMe drives IMHO. No need for fast, advanced (and hot) host controllers if you're limited by the bus speed that badly.
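The "more than 10 times" figure checks out against the raw link rates (a sketch; PCIe 4.0 x4 is taken as the NVMe baseline, an assumption since no generation was named):

    # Usable link bandwidth: PCIe 4.0 x4 (128b/130b) vs SATA III (8b/10b).
    pcie4_x4_gb_s = 4 * 16e9 * (128 / 130) / 8 / 1e9   # ~7.88 GB/s
    sata3_gb_s = 6e9 * (8 / 10) / 8 / 1e9              # 0.60 GB/s
    print(f"PCIe 4.0 x4: {pcie4_x4_gb_s:.2f} GB/s, SATA III: {sata3_gb_s:.2f} GB/s, "
          f"ratio: {pcie4_x4_gb_s / sata3_gb_s:.0f}x")
    # -> roughly a 13x gap, before NVMe's protocol advantages even enter the picture.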
 
Definitely not. SATA is a waste of space, both in terms of the actual PCB housed in the 2.5" enclosure and the space it takes up in a case along with its two cables.
M.2 does away with cables and significantly reduces the SSD footprint.
Case space isn't board footprint. M.2 has so much board footprint it's caused major issues with board design, removing PCIe slots, etc. Please read what I said again.
Board footprint is far more premium than case footprint.
I also posted a rational pros and cons list, instead of just a "SATA is older so it sucks in all areas" take. I do recognise the benefits of M.2, but I also recognise it has flaws.

Board footprint, perhaps. Wouldn't need much of that if they hadn't decided that 8 TB was to be a hard limit for SSDs so they can preserve the high capacity HDD business. Temps/power? Directly consequential to them being more than 10 times slower than NVMe drives IMHO. No need for fast, advanced (and hot) host controllers if you're limited by the bus speed that badly.
What do you think the limit for capacity would be if there were no segmentation going on? They can put NAND on both sides of the PCB, right?
 
Case space isn't board footprint. M.2 has so much board footprint it's caused major issues with board design, removing PCIe slots, etc.
M.2 mostly sits between PCIe slots. It doesn't have to outright replace them. There's also the option to mount them via PCIe cards.
Board space limits are defined by the ATX standard. Not the M.2 size.
Board footprint is far more premium than case footprint.
Always has been. SATA connectors were also turned to a 90-degree angle to save on board space.
And sure, M.2 takes more space on the board than the standard two to six 90-degree-angled SATA ports, but I also don't see much missing because of M.2.
Mainstream sockets don't give enough PCIe lanes anyway to properly utilize 2x16 or more slots.
What do you think the limit for capacity would be if there were no segmentation going on? They can put NAND on both sides of the PCB, right?
16 TB for M.2 most likely: 22110, double-sided. Perfectly doable if the controller supports 16 TB. Above that would be 2.5" at 32 TB most likely. Perhaps 64 TB if they really pushed it, but then cooling would become an issue in such a densely packed enclosure.
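A rough way to bound those numbers (a sketch; the 4 TB-per-package figure assumes 16-high stacks of 2 Tb dies, near the top of what has shipped, and the package counts per layout are likewise assumptions):

    # Capacity ceiling ~= NAND packages that fit on the PCB * capacity per package.
    tb_per_package = 4                     # assumed: 16-die stack of 2 Tb dies
    layouts = {"M.2 2280 double-sided": 8,
               "M.2 22110 double-sided": 12,
               "2.5-inch (plenty of PCB)": 16}
    for name, packages in layouts.items():
        print(f"{name}: ~{packages * tb_per_package} TB ceiling")
    # Even with conservative package counts, the physical ceiling lands far
    # above the 8 TB that is actually sold at retail.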

Other sizes and interfaces could support orders of magnitude bigger capacities, like 3.5", or they could bring back the 5.25".
Speed could be resolved too with U.3 and OCuLink-style cables.
 
With the laughable IOPS and bandwidth that looked mediocre 15 years ago, SATA needs to die. I have absolutely zero love for this form factor. What we need is M.2 format PCIe SSDs that have their full capabilities realized. State-of-the-art controllers, high capacities, and decent prices. But the same companies that sell SSDs don't want to essentially invalidate their investment in their traditional HDD businesses, so we've stalled.

Everything this "industrial" drive has, the Intel 320 series SSD had back in 2011. And the X25-M even earlier still. With higher-endurance MLC flash. The storage cartel is one of the things I can't stand in the tech industry.
SATA is perfect for data backup and such; no need to pay double for an NVMe drive with the same capacity.
And yeah, for server usage NVMe drives are useless since the standard does not support hot swap, which is mandatory in datacenters.
 
SATA is perfect for data backup and such; no need to pay double for an NVMe drive with the same capacity.
And yeah, for server usage NVMe drives are useless since the standard does not support hot swap, which is mandatory in datacenters.

There is pretty much no reason for an NVMe drive to cost twice what a SATA drive does, and this doesn't tend to hold true in most segments anyway. If we were not dealing with a storage cartel, SATA would have gone bye-bye a long time ago, and SSDs wouldn't be stalled at the current capacity level. We simply don't have SSDs above 8 TB in the client market, and above 2 TB prices get high fast, regardless of form factor.

Case space isn't board footprint. M.2 has so much board footprint it's caused major issues with board design, removing PCIe slots, etc. Please read what I said again.
Board footprint is far more premium than case footprint.
I also posted a rational pros and cons list, instead of just a "SATA is older so it sucks in all areas" take. I do recognise the benefits of M.2, but I also recognise it has flaws.

What do you think the limit for capacity would be if there were no segmentation going on? They can put NAND on both sides of the PCB, right?

Considering we already have single-sided 8 TB M.2-2280 drives, it's probably 16 to 24 TB for now. 32-48 if they really cram it or use the 22110 format. Generally, I don't see any benefit whatsoever to sticking with old SATA at this point in time. I've been an advocate of outright removing SATA ports from motherboards for some time; add an extra M.2 slot instead. High-performing USB docks are cheap and plentiful; you can just get one of those if you need hot swap, etc.

It's time motherboard vendors also stop with the silly nonsense of adding 2 or 3 x16-size slots on motherboards; most configurations won't use them, and CrossFire/SLI have been dead for over a decade at this point. They need to stop advertising support for that and instead add ONE current-gen (5.0, 6.0, whatever) PCIe slot for graphics, optimize the rest of the board's layout to fit several M.2 drives, and leave an x4 slot at the bottom or above the GPU slot for the eventual AIC like a high-end network card, with a lane increase CPU-side.
 
It's time motherboard vendors also stop with the silly nonsense of adding 2 or 3 x16-size slots on motherboards,
I don't agree with that.
I have a PCIe sound card (X-Fi Titanium), a 10 Gbps network card for my 10 Gbps internet line, and another PCIe slot with a Thunderbolt 4/USB 4.0 card. Including the GPU, which takes another PCIe slot, I need four PCIe slots in total. So if you don't use PCIe for anything except your GPU, please don't assume that everyone else also has no need for PCIe slots.
 
I don't agree with that.
I have a PCIe sound card (X-Fi Titanium), a 10 Gbps network card for my 10 Gbps internet line, and another PCIe slot with a Thunderbolt 4/USB 4.0 card. Including the GPU, which takes another PCIe slot, I need four PCIe slots in total. So if you don't use PCIe for anything except your GPU, please don't assume that everyone else also has no need for PCIe slots.
The things you mentioned should be integrated into the motherboard as controllers instead of extra PCIe slots and add-on cards.

The X-Fi Titanium is very old. Unless you have a baller audio setup, most people would not even notice the difference versus good onboard audio.
Most boards lack 10 Gbps, but almost all have 2.5 Gbps now and 5 Gbps is becoming more common. Whether you need more than one 10 Gbps port, or some other advanced feature that integrated 10 Gbps on a motherboard does not offer, is a separate topic.
Newer boards have USB4 and some have TB4, though I consider TB4 very niche.

So instead of looking at a board with four PCIe slots, you should be looking at a board that already integrates all these functions without half the board being taken up by PCIe slots.
 
The things you mentioned should be integrated into the motherboard as controllers instead of extra PCIe slots and add-on cards.

The X-Fi Titanium is very old. Unless you have a baller audio setup, most people would not even notice the difference versus good onboard audio.
Most boards lack 10 Gbps, but almost all have 2.5 Gbps now and 5 Gbps is becoming more common. Whether you need more than one 10 Gbps port, or some other advanced feature that integrated 10 Gbps on a motherboard does not offer, is a separate topic.
Newer boards have USB4 and some have TB4, though I consider TB4 very niche.

So instead of looking at a board with four PCIe slots, you should be looking at a board that already integrates all these functions without half the board being taken up by PCIe slots.

Not to mention, a Z790 chipset for example doesn't have enough PCIe lanes to keep all of that running at full bandwidth. If I add the 2 NVMe drives I have plus the RTX 5090, I'm maxed out. Anything that I subsequently add drops the GPU to x8, even my NU Audio at only x1. I had to take it out of my rig; there's no workaround. The X-Fi Titanium would IMHO be the worst loss in such a scenario; my sound card is really what I miss the most, and that can be safely replaced by much higher fidelity USB DACs - and on a budget, even the Apple USB-C adapter is surprisingly good for what it is. For the niche TB4 card... I did mention keeping an x4 slot at a convenient position :p

But really, if you need that much stuff, you're already looking squarely at HEDT IMHO.
 
M.2 should be abandoned. There is no way forward at the speed media and games are growing; 100 GB+ games are not uncommon these days.
M.2 maxes out at 8 TB. EDSFF maxes out at 122 TB. Let's jump to U.2 or U.3.
 
The things you mentioned should be integrated into the motherboard as controllers instead of extra PCIe slots and add-on cards.

The X-Fi Titanium is very old. Unless you have a baller audio setup, most people would not even notice the difference versus good onboard audio.
Most boards lack 10 Gbps, but almost all have 2.5 Gbps now and 5 Gbps is becoming more common. Whether you need more than one 10 Gbps port, or some other advanced feature that integrated 10 Gbps on a motherboard does not offer, is a separate topic.
Newer boards have USB4 and some have TB4, though I consider TB4 very niche.

So instead of looking at a board with four PCIe slots, you should be looking at a board that already integrates all these functions without half the board being taken up by PCIe slots.
Nice. You just gave four reasons why you actually NEED extra PCIe slots. And the X-Fi Titanium is NOT "very old". It is 1000x better than any onboard junk out there, and there is not even a competition. I have used 10 Gbps internet for 5 years now, and I might wait another 5 years until 10 Gbps network ports are integrated into the mobo. Same for USB4.
So?
 
M.2 should be abandoned. There is no way forward at the speed media and games are growing; 100 GB+ games are not uncommon these days.
M.2 maxes out at 8 TB. EDSFF maxes out at 122 TB. Let's jump to U.2 or U.3.
By EDSFF I assume you mean the E1.S form factor? That's a nice idea, but it's meant for servers, as is U.3.

Its power consumption reflects that: 5 W idle and 12 W load on the low end; on the higher end it can go to 8 W idle and 25 W load.
Consumer cases cannot cool that passively. Look at what happened with PCIe 4.0 and 5.0 M.2 models when they came out, and those were sub-10 W under load.

Now imagine that it's pulling near 10 W constantly and peaks at more than twice that. Servers can handle this with their cooling solutions, but EDSFF would never be viable for consumer devices. U.3 has the same physical size issues as SATA.

You also say that media and games are growing, but hardly anyone owns 8 TB M.2 SSDs. I consider myself an enthusiast and a bit of a storage geek, and even I only own a 4 TB M.2 (Gen 4). M.2 does not max out at 8 TB; there is no such limit. It can go higher, but because prices are too high there's no incentive right now.
4 TB can be had for 200+. So logically 8 TB should be 400+, but it's not. It's 600+, triple the price.
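Put another way, the per-TB numbers from those street prices (a sketch using only the figures quoted above, in whatever currency was meant):

    # Price per TB from the quoted street prices.
    prices = {4: 200, 8: 600}              # capacity in TB -> approximate price
    for tb, price in prices.items():
        print(f"{tb} TB at ~{price}: {price / tb:.0f} per TB")
    # -> 50/TB vs 75/TB: the 8 TB model carries a ~50% per-TB premium
    #    instead of the volume discount you'd expect.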
Nice. You just gave four reasons why you actually NEED extra PCIe slots. And the X-Fi Titanium is NOT "very old". It is 1000x better than any onboard junk out there, and there is not even a competition. I have used 10 Gbps internet for 5 years now, and I might wait another 5 years until 10 Gbps network ports are integrated into the mobo. Same for USB4.
So?
No, I didn't. I explained that everything you ask for can be handled by onboard controllers these days. I'm not sure if you've been living under a rock or what, but 10G has been integrated on motherboards as far back as the original Zen launch in 2017. Possibly even before that.

Here, I looked up a mainstream socket board: https://geizhals.eu/gigabyte-z890-ai-top-a3328438.html?hloc=at&hloc=de&hloc=pl&hloc=uk&hloc=eu
2x 10G, 2x TB5/USB4, a Sabre ESS ES9260 DAC, 1x 16 Gen 5 electrical/mechanical, 1x 8 Gen 5 electrical (x16 mechanical) and 1x 4 Gen 4 electrical (x16 mechanical). So not only have they managed to fit dual 10G, dual TB5/USB4 and top-of-the-line audio on the board - they have also managed to preserve 3 PCIe slots.
And surprisingly the board does not even cost four figures like I guessed at first based on the specs. Too bad it's on the LGA 1851 socket, which perhaps receives only a small refresh before becoming a dead end.

As for the X-Fi: yes, it's very old. Released in 2008; that's 17 years ago, which is an eternity in PC space.
I owned the Fatal1ty version of the same card and I know full well what its audio capabilities were. My current integrated audio is just as good.
So unless you play old EAX games, I fail to see the reason to keep that relic installed. Keeping it around, sure; installed 24/7? Pointless.
The Sabre ESS ES9260 DAC has higher SNR, lower THD and higher resolution. The X-Fi can't even do 32-bit/384 kHz and is limited to 24-bit/96 kHz.

You seem to be under the illusion that motherboard audio and networking still suck. Things have evolved a "bit" in the last 17 years. Get with the times.
 
I still have my Z5500 5.1 (analog) system and a Xonar Essence STX II PCIe card run in Windows 7.1-channel mode (side speakers disabled in software) with a daughter card, both occupying an x1 slot each. The GPU is in another, x16 one; then there is one M.2, one U.2 and one SATA SSD.

I use the SATA SSD only for backups and some other personal files. The U.2 is mostly for gaming files. Both the SSD and the U.2 have been rock stable and cool despite no active cooling (the U.2 comes with an extensive heatsink from the manufacturer anyway), and both sit far away from the motherboard.

In my next configuration, when the next AMD CPUs come up, I want as many PCIe slots as possible - possibly four - as I am keeping my PCIe x1 audio card with its full analog 7.1 daughter board, which together require two slots. I will need another one for my GPU, and due to CPU and socket limitations, I am planning to get a 2 or 4 TB M.2 for the OS and program files. I couldn't care less about speed, as in everyday operations 10 or 15 seconds more or less don't matter whatsoever. What I care about is capacity, and current M.2 capacities are absolutely miserable, and TBW ratings are even more miserable. My M.2 OS drive is still a Samsung 960 Pro with 21 TB written while reporting itself at 99% health, and I am sure that is accurate, as it is MLC and not some cheap QLC and/or a drive with 20 TBW specs.
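Those two SMART numbers imply a healthy endurance budget. A quick back-of-the-envelope (a sketch; SMART life counters tick in coarse 1% steps, so treat the result as a rough lower-bound estimate, not the drive's official rating):

    # If 21 TB written consumed 1% of rated life, implied endurance is ~2100 TBW.
    tb_written = 21
    health_pct = 99                        # remaining life reported by SMART
    consumed = (100 - health_pct) / 100
    print(f"Implied endurance: ~{tb_written / consumed:.0f} TBW")
    # The official rating will differ; this only shows the wear rate is tiny.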

Also, cases from 7-8 years ago provided so many more options for storage and slots. The best modern cases only offer two 2.5-inch slots and that's it. Combine this with extremely limited capacity and, in the best-case scenario for my particular needs, you only get a 2 TB M.2 (4 TB with good TBW is so hard to find) and another two 2 or 4 TB SSDs. But this is nothing even for gaming, not to mention LLMs, video/audio (even amateur) production, pictures, videos, action camera footage, etc. Is a PC for gaming only?? How many U.2/U.3 motherboards are there on the market at all? Almost zero. I paid quite a lot extra for my U.2-capable one and I am glad I did. Passively cooled, away from the GPU, CPU and heat, easy connection, no lanes required/shared in the very same way as with M.2 drives, etc.

Finally, no USB card/amp can provide me with the audio fidelity, customization, virtual settings, speaker/space repositioning, and analog sound I get via my PCIe x1 card, period. New motherboards have even started getting rid of analog audio inputs/outputs, as if audio is no longer needed whatsoever. Apparently where we are going is "get another overpriced and unupgradable gaming laptop/MacBook" to be thrown away after 2 years for a new one at twice the price of the previous one, or handhelds, or everything in the cloud. Sorry, none of the above for me. We need 1. the same number of PCIe slots as before; 2. as many SATA and U.2/U.3 ports as possible - no one cares about speed when only 4-5 games will fill a 1 TB drive, and then what? Will I run my GPU at x8 speed because I need more M.2s? How about anything that is not games? 3. M.2 as it stands has so many limitations - space, temperatures, throttling, lanes, limited capacity - and yet no one talks about it and everyone just feels this is right... I may not be the average Joe, but this is why the PC master race is dying and we all go handheld, mobile and into the cloud :(((
 
@Tomorrow,
It seems like you keep going with personal attacks and silly "facts" that you cherry-pick just to prove your useless points, so I won't bother replying to you. It seems you are living in your own world, which is fine. Your reality is not other people's. And besides, you totally missed the point. It's not about whether everything can be stuck on a motherboard; it's all about the freedom of choice.
Stop bothering me with your utter nonsense, trying to look smart.
 
@Tomorrow,
It seems like you keep going with personal attacks and silly "facts" that you cherry-pick just to prove your useless points, so I won't bother replying to you. It seems you are living in your own world, which is fine. Your reality is not other people's. And besides, you totally missed the point. It's not about whether everything can be stuck on a motherboard; it's all about the freedom of choice.
Stop bothering me with your utter nonsense, trying to look smart.
I stated facts, and you took them as personal attacks.

But it's useless to argue with people like you, who keep defending their small stick in the mud and ignoring facts.
Keep believing what you want. I could not care less.

Also, I'm putting you on my ignore list, so I'm not going to waste my time in the future arguing with a person who takes every correction, fact or difference in opinion as a personal attack.
 
By EDSFF I assume you mean the E1.S form factor? That's a nice idea, but it's meant for servers, as is U.3.

Its power consumption reflects that: 5 W idle and 12 W load on the low end; on the higher end it can go to 8 W idle and 25 W load.
Consumer cases cannot cool that passively. Look at what happened with PCIe 4.0 and 5.0 M.2 models when they came out, and those were sub-10 W under load.

Now imagine that it's pulling near 10 W constantly and peaks at more than twice that. Servers can handle this with their cooling solutions, but EDSFF would never be viable for consumer devices. U.3 has the same physical size issues as SATA.

You also say that media and games are growing, but hardly anyone owns 8 TB M.2 SSDs. I consider myself an enthusiast and a bit of a storage geek, and even I only own a 4 TB M.2 (Gen 4). M.2 does not max out at 8 TB; there is no such limit. It can go higher, but because prices are too high there's no incentive right now.
4 TB can be had for 200+. So logically 8 TB should be 400+, but it's not. It's 600+, triple the price.
When I say EDSFF, I mean any of the form factors under that umbrella: E1.S, E3.S, and E3.L. They may be made for servers, but there is NO reason they could not be put on a consumer platform.

Cooling 25 W under load is not really an issue. We manage to cool 200-watt CPUs; you don't think we can cool a 25-watt storage device? Why would you think they need to be cooled passively? We don't do that with current M.2, so why would we do it with U.3 or any of the flavors of E form factors?

As for size: for 99% of people and cases, size is not really an issue. Most cases sold still have a spot for at least one 3.5" drive; you could easily mount it there.

I never said that M.2 is limited to 8 TB. I said that it maxes out at 8 TB (currently) compared to 122 TB for U.2/U.3. Why such a big difference? Enterprise drives are growing at a MUCH quicker rate than consumer drives.

The prices are high because the manufacturers are not increasing the sizes, not the other way around. They could absolutely make larger drives (on the consumer side) that would lower the price of the smaller drives, but they make more money this way.
 
When I say EDSFF, I mean any of the form factors under that umbrella: E1.S, E3.S, and E3.L. They may be made for servers, but there is NO reason they could not be put on a consumer platform.
Only E1.S is compact enough for consumer devices. No one will start putting E3.L in any consumer device, not even full ATX cases.
Cooling 25 W under load is not really an issue. We manage to cool 200-watt CPUs; you don't think we can cool a 25-watt storage device? Why would you think they need to be cooled passively? We don't do that with current M.2, so why would we do it with U.3 or any of the flavors of E form factors?
Yeah, let's put active cooling on all of our storage devices. What a wonderful idea. Passive cooling is exactly what we do with 99% of M.2 devices.
Remember the backlash when the X570 chipset came out actively cooled? And that was a literal chipset (one chip) in an ATX case.

This idea is flawed on so many levels. It will increase cost, introduce another failure point in the form of a spinning fan that could die, and obviously there's a noise issue with a small 5,000-10,000 RPM fan (or several) on every SSD. Higher power consumption will also eat through laptop batteries, even if the aforementioned issues are hypothetically solved.
As for size: for 99% of people and cases, size is not really an issue. Most cases sold still have a spot for at least one 3.5" drive; you could easily mount it there.
Desktop cases, sure. But most of the consumer market is not that: laptops, mini-PCs, etc. Good luck inserting a literal thick ruler in there with active cooling on top.
ATX motherboards don't even have the appropriate spacing or slots to accept EDSFF.
I never said that M.2 is limited to 8 TB. I said that it maxes out at 8 TB (currently) compared to 122 TB for U.2/U.3. Why such a big difference? Enterprise drives are growing at a MUCH quicker rate than consumer drives.
That's because there is little demand at current prices for 50 TB M.2s when even 8 TB is overpriced. Of course it's growing in the server market; cost is less of an issue there, and cooling is handled by extremely noisy fans blasting across passive heatsinks, or even water cooling.
Consumer devices don't have that luxury. They have to be small, compact, silent and in some cases portable.

I think a far better solution is to develop a new consumer standard, let's say "M.3". It could run passively with capacities above 8 TB at comparable power consumption. I would even be OK with breaking backward compatibility in this case, with new keying for the slots and the SSDs themselves.
 
There is pretty much no reason for an NVMe drive to cost twice what a SATA drive does, and this doesn't tend to hold true in most segments anyway. If we were not dealing with a storage cartel, SATA would have gone bye-bye a long time ago, and SSDs wouldn't be stalled at the current capacity level. We simply don't have SSDs above 8 TB in the client market, and above 2 TB prices get high fast, regardless of form factor.

Considering we already have single-sided 8 TB M.2-2280 drives, it's probably 16 to 24 TB for now. 32-48 if they really cram it or use the 22110 format. Generally, I don't see any benefit whatsoever to sticking with old SATA at this point in time. I've been an advocate of outright removing SATA ports from motherboards for some time; add an extra M.2 slot instead. High-performing USB docks are cheap and plentiful; you can just get one of those if you need hot swap, etc.

It's time motherboard vendors also stop with the silly nonsense of adding 2 or 3 x16-size slots on motherboards; most configurations won't use them, and CrossFire/SLI have been dead for over a decade at this point. They need to stop advertising support for that and instead add ONE current-gen (5.0, 6.0, whatever) PCIe slot for graphics, optimize the rest of the board's layout to fit several M.2 drives, and leave an x4 slot at the bottom or above the GPU slot for the eventual AIC like a high-end network card, with a lane increase CPU-side.
This discussion also shows how SATA isn't allowed to fully flex either. SATA's advantage is that it can have more PCB space for NAND; I think we could easily have 64 TB SATA drives now if they wanted.

But the price segmentation thing is strong. QLC is still no cheaper than TLC, as an example. I also have a very hard time believing that storage with no moving parts is not cheaper to manufacture than complex, large, heavy mechanical drives.

The thing to realise here is that it's fine to decide not to use SATA anymore in your personal systems, but there is no market benefit to dropping product availability, as for 90% of use cases SATA is as fast as NVMe will be, since 4K I/O is dominant. It won't make NVMe cheaper, and it isn't holding back NVMe availability; those issues are business, not technological. We are also clearly on opposite sides on board design. I see M.2 as an evil; PCIe slots are about customisation and flexibility, and you can run NVMe drives in those slots as well with no cables. M.2 originated for portable devices like laptops, and it is a bit weird that it got used for desktops as well.

If you want heavy NVMe capacity, as you are absolutely locked into that being the only form of storage for you, a customer-friendly way of doing it is to keep the PCIe slots on boards and use add-on cards with multiple M.2 slots on them. This also makes it far more user-friendly to swap them in and out. The problem isn't SATA; it's the addiction to onboard M.2. (On these 500+ USD boards, such cards should be included by default as well.)

But remember NVMe is not mass storage; if you remove SATA, how do consumers add heavy capacity to their systems?

Imagine this:

64 TB SATA
16 TB NVMe
4 NVMe possible per PCIe slot.
8 SATA ports on every board.

Now that's impressive.

They won't do that though, as enterprise would then buy these parts and profits would go down the drain.
 