
Transcend Introduces 8 TB Industrial SSD with Power Loss Protection

This discussion also shows how SATA isn't allowed to fully flex either. SATA's advantage is that the drive has far more PCB space for NAND; I think we could easily have 64 TB SATA drives now if the vendors wanted to.

But the price segmentation is strong. QLC is still no cheaper than TLC, as an example. I also have a very hard time believing that storage with no moving parts is not cheaper to manufacture than complex, large, heavy mechanical drives.

The thing to realise here is that it's fine to decide not to use SATA anymore for your personal systems, but there is no market benefit to dropping the product availability, as for 90% of use cases SATA is as fast as NVMe will be, because 4K I/O dominates. It won't make NVMe cheaper, and it isn't holding back NVMe availability; those issues are business, not technological. We are also clearly on opposite sides regarding board design. I see M.2 as an evil; PCIe slots are about customisation and flexibility, and you can run NVMe drives in those slots as well with no cables. M.2 originated for portable devices like laptops, and it's a bit weird that it got used for desktops as well.

If you want heavy NVMe capacity, because you are absolutely locked into that being your only form of storage, a customer-friendly way of doing it is to keep the PCIe slots on boards and use add-on cards with multiple M.2 slots on them. This also makes it far more user-friendly to swap them in and out. The problem isn't SATA; it's the addiction to using M.2 on board. (On these 500+ USD boards, these cards should be included by default as well.)

But remember, NVMe is not mass storage. If you remove SATA, how do consumers add heavy capacity to their systems?

Imagine this.

64 TB SATA
16 TB NVMe
4 NVMe drives possible per PCIe slot.
8 SATA ports on every board.

Now that's impressive.
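
Just to put rough numbers on that wish list (my own back-of-the-envelope math, assuming two spare x16 slots besides the GPU; none of these parts exist):

Code:
# Hypothetical totals for the wish-list board above; every figure here
# is an illustrative assumption, not a real product.
sata_drive_tb = 64        # imaginary 64 TB SATA SSD
nvme_drive_tb = 16        # imaginary 16 TB M.2 NVMe SSD
sata_ports = 8            # SATA ports on the board
nvme_per_slot = 4         # NVMe drives per bifurcated x16 slot
spare_x16_slots = 2       # assumed slots left over after the GPU

sata_total = sata_drive_tb * sata_ports                       # 512 TB
nvme_total = nvme_drive_tb * nvme_per_slot * spare_x16_slots  # 128 TB
print(sata_total, nvme_total, sata_total + nvme_total)        # 512 128 640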

They won't do that though, as enterprise would then buy these parts and the profits would go down the drain.

I agree, it's really a bloody cartel holding the entire industry back. I'm sure even a better solution than M.2 would have been found by now.

Nice. You just gave 4 reasons why you actually NEED extra PCIe slots. And the X-Fi Titanium is NOT "very old". It's 1000x better than any onboard junk out there, and there isn't even a competition. I've been using 10 Gbps Internet for 5 years now, and I might wait for another 5 years until you get 10 Gbps network ports integrated into the mobo. Same for USB4.
So?

Agree on it being better, but... it is very old indeed. It was released in September of 2008 from what I could gather, which means that it's just a smidge under 17 years old. This is a geological age when it comes to computer hardware. High-end motherboards with 10 Gbps NICs and USB 4 are available (not every model, but they are available), and if you're lucky enough to have such a monster link, then I suppose you are probably also lucky enough to get a board like that. I guess what I'm trying to say is, you're part of a niche, and one that we can somewhat work around flexibly by switching a few things to USB and whatnot. ;)
 
Only E1.S is compact enough for consumer devices. No one will start putting E3.L in any consumer device, not even in full ATX cases.
You do know that E3.L is only 143mm long, right? That is shorter than a 3.5" hard drive. Even E1.L is only 318mm; there are plenty of GPUs that are as long as that.
Yeah, let's put active cooling on all of our storage devices. What a wonderful idea. Passive cooling is exactly what we do with 99% of M.2 devices.
Remember the backlash when the X570 chipset came out and was actively cooled? And that was a literal chipset (one chip) in an ATX case.
It IS a wonderful idea. Thanks.
That's because there is little demand at current prices for 50 TB M.2 drives when even 8 TB is overpriced. Of course it's growing in the server market; cost is less of an issue there, and cooling is handled by extremely noisy fans blasting across passive heatsinks, or even by water cooling.
Consumer devices don't have that luxury. They have to be small, compact, silent and in some cases portable.
The whole issue about price... Who do you think sets the price? The companies making the drives artificially limit supplies to keep prices high. If they made more large drives, prices on smaller drives would fall. Don't you think people would buy larger drives if the prices were better? Why do you accept that 8 TB needs to be the price it is? It's 2025. We can buy 20 TB hard drives for peanuts, why not SSDs?
Desktop cases, sure. But most of the consumer market is not that: laptops, mini-PCs, etc. Good luck inserting a literal thick ruler in there with active cooling on top.
ATX motherboards don't even have appropriate spacing or slots to accept EDSFF.
It's amazing how cables can allow someone to place things inside a case where they want, isn't it? The motherboard doesn't need a "slot"; it just needs a header for an appropriate cable. Servers do it, why not desktops? Why are people so scared of having cables in a PC?
I think a far better solution is to develop a new consumer standard, let's say M.3. It could run passively with higher capacities than 8 TB at comparable power consumption. I would even be OK with breaking backward compatibility in this case, with new keying for the slots and the SSDs themselves.
Just what we need. Another standard. Why not use the ones that we have? Why reinvent the wheel?
 
You do know that E3.L is only 143mm long, right? That is shorter than a 3.5" hard drive. Even E1.L is only 318mm; there are plenty of GPUs that are as long as that.
Yes, I meant E1.L. I didn't look at E3.L.
It IS a wonderful idea. Thanks.
For you maybe. For 99% of people it's not. They would hate it with a passion.
The whole issue about price... Who do you think sets the price? The companies making the drives artificially limit supplies to keep prices high. If they made more large drives, prices on smaller drives would fall. Don't you think people would buy larger drives if the prices were better? Why do you accept that 8 TB needs to be the price it is? It's 2025. We can buy 20 TB hard drives for peanuts, why not SSDs?
Of course people would buy larger drives if the prices were lower. No doubt about that. As would I.
I would not really compare HDD capacity and prices to SSD capacity and prices. Yes, it's possible to get a lot of capacity by buying a large HDD, but it's good only for storage, because it's slow. I think Seagate even tried to partially solve that with dual-actuator drives that raised speeds close to the ~500 MB/s SATA 6 Gbps limit, but I don't think these really took off. They were limited to two 16 TB and 18 TB models, and they were as expensive as 26 TB models are today.
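
For reference, that ~500 MB/s figure falls straight out of the link itself: SATA III runs at 6 Gb/s with 8b/10b encoding, so a quick sanity check looks like this (real drives land around 530-560 MB/s once protocol overhead is taken off):

Code:
# SATA III effective bandwidth, back-of-the-envelope
line_rate_bps = 6e9     # 6 Gb/s raw line rate
encoding = 8 / 10       # 8b/10b: every 8 data bits cost 10 line bits
max_MBps = line_rate_bps * encoding / 8 / 1e6
print(max_MBps)         # 600.0 MB/s ceiling before protocol overhead
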
It's amazing how cables can allow someone to place things inside a case where they want, isn't it? The motherboard doesn't need a "slot"; it just needs a header for an appropriate cable. Servers do it, why not desktops? Why are people so scared of having cables in a PC?
Because consumers, myself included, do not want to deal with cables. I prefer direct slots. I only tolerate cables where they're unavoidable, like the PSU and monitor.
I try to minimize other cables because I don't want my living space looking like a server room, either inside or outside the case. The more cables there are, the more cable management I have to do.
Just what we need. Another standard. Why not use the ones that we have? Why reinvent the wheel?
Standards also have consumer and server paths, and they rarely cross because of radically different requirements.
 
Because consumers, myself included, do not want to deal with cables. I prefer direct slots. I only tolerate cables where they're unavoidable, like the PSU and monitor.
I have to disagree with you there. SATA allows you to have 8+ devices connected directly to the motherboard. M.2 gives you 3, tops. Cables are always more flexible. With some M.2 slots you have to take out the motherboard shield or the video card to install/replace them. Granted, that's not a flaw of M.2, it's stupid design. But it happens. A lot.
Of course, if you want something super-compact, it's M.2 all the way. All I'm saying is it's not as cut and dried as you make it out to be; cables still have advantages.
 
I have to disagree with you there. SATA allows you to have 8+ devices connected directly to the motherboard. M.2 gives you 3, tops. Cables are always more flexible. With some M.2 slots you have to take out the motherboard shield or the video card to install/replace them. Granted, that's not a flaw of M.2, it's stupid design. But it happens. A lot.
Of course, if you want something super-compact, it's M.2 all the way. All I'm saying is it's not as cut and dried as you make it out to be; cables still have advantages.
Respectfully, I have to disagree on that.
As I understand it, your argument is that SATA has an advantage because more devices can be connected to the motherboard at once?
In my eyes that's a pretty weak advantage for several reasons:

1. Most people don't even approach or exceed the limit of SATA ports offered by their boards. I suspect that as time goes on there are more people who have nothing connected to those ports at all. Even I have only one HDD there and 3 M.2 devices on the board itself. Motherboard makers have also been reducing the number of SATA ports from 8 to 6, or even 4.
2. No, 3 is not the top limit for directly attached M.2. The current limit is 6 devices for Z890 and 5 devices for AM5 chipsets.
3. The number of M.2 devices can be greatly increased by using an add-on card that houses several M.2 devices, such as the ASUS Hyper M.2 x16 Gen5 that can house up to 4 SSDs. Of course, then the limitation becomes the number of PCIe slots and the overall PCIe lanes available, as running a Gen5 x16 card in the first slot would force the dGPU to use the second x16 slot, which only has x8 bandwidth on most boards.

So technically a crazy person can populate all 6 slots on the board and add at least one x16 add-on card to add 4 more, bringing the total up to 10 (rough lane math sketched below). All without using cables, but that is as unlikely as a person using all 8 SATA ports (if their board even has that many) at once.
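
Rough lane math for that no-cables maximum, using the slot counts mentioned above (illustrative only; slot counts and bifurcation support vary by board):

Code:
# Illustrative numbers only; slot counts and bifurcation differ per board.
onboard_m2 = 6          # directly attached M.2 slots (the Z890 example above)
card_drives = 4         # Hyper M.2-style x16 card, one x4 link per drive

total_drives = onboard_m2 + card_drives   # 10 M.2 SSDs, zero cables
card_lanes = card_drives * 4              # the card wants 16 CPU lanes,
print(total_drives, card_lanes)           # which is why the dGPU drops to x8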

As for the mechanical design, yes, the dGPU-adjacent slots are easier to reach once the GPU is removed (it would be the same if the GPU blocked the SATA ports, like it does on my board), but the motherboard shield is usually a slab of metal, not an entire front. The main problem with the M.2 mechanical design is the stupid little screw. Thankfully motherboard makers have introduced tool-less designs that negate the need for it.
 
@Tomorrow That applies to storage in general: most people in the mainstream market likely have only 1 or 2 storage devices, regardless of whether they're SATA or NVMe. Those of us discussing in this thread are enthusiasts, the type of people who try to load up our systems with as much storage as we can. But we are a tiny portion of the market.

So if we go down that path, that we should only cater to the lowest common denominator, then we would also only have one or two M.2 slots on a board as well: just 1 PCIe slot, 1 or 2 M.2 and maybe 2 SATA.

Look at communities like TrueNAS and you will see SATA is still the dominant storage interface in use, due to the state of the NVMe market in consumer land. NVMe is great in terms of performance capability, but its poor implementation and deliberate gimping on capacity are restricting it.

As an example, I expect the majority on here would rather the vendors pushed out 8 TB Gen 4 drives than the 2 TB Gen 5 drives that are being pushed out right now.
 
High-end motherboards with 10 Gbps NICs and USB 4 are available (not every model, but they are available), and if you're lucky enough to have such a monster link, then I suppose you are probably also lucky enough to get a board like that. I guess what I'm trying to say is, you're part of a niche, and one that we can somewhat work around flexibly by switching a few things to USB and whatnot. ;)
Mobos with 10 Gbps NICs and USB4 are very recent releases, and extremely expensive. I don't plan to upgrade my PC every year. Besides, I am quite pleased with my add-on cards, which work flawlessly. Why change the mobo if they just... work?
Same for the sound card. It might be 17 years old, but currently there is no other dedicated sound card that sounds significantly better than this one and is worth the investment in a new card. Forget about the specs. Nobody can tell the difference between 24-bit/96 kHz and 32-bit/192 kHz or more, let's be real. 99.99% of all music and game sounds are 16-bit/44.1 kHz and it just sounds good. I'm not a picky audiophile, so I don't really care. However, there IS a huge difference between the crappy integrated sound and that card. That difference is just too big to renounce my dedicated sound card.
And btw, here in Tokyo it's not a niche thing to have a 10 Gbps Internet pipe. It's been basically mainstream for the past 5 years, and it's just a little more expensive than a 1 Gbps line. Basically I pay ~$30/month, so I'd call that mainstream.
 
2. No, 3 is not the top limit for directly attached M.2. The current limit is 6 devices for Z890 and 5 devices for AM5 chipsets.
3. The number of M.2 devices can be greatly increased by using an add-on card...
And where are you going to put those add-on cards when M.2 slots, with their huge PCB footprint for such a small connector, hog so much of the motherboard's real estate?

Out of what's left after the graphics card consumes four expansion slots' worth of space:
The topmost slot position, if a slot is even there, isn't good because of the heat-generating graphics card below it and the CPU and its cooler above.
And the next three slots' worth of space goes to the graphics card.

Under the graphics card they get warmed up by its heat, and accessing them requires removing the graphics card first.
And the backside of the motherboard would be an absolutely craptacular place for accessing them.
Developed for the limited needs of laptops/mobile devices, the M.2 form factor is just a bad fit and not well thought out for desktop PCs.
 
And where are you going to put those add-on cards when M.2 slots, with their huge PCB footprint for such a small connector, hog so much of the motherboard's real estate?

Out of what's left after the graphics card consumes four expansion slots' worth of space:
The topmost slot position, if a slot is even there, isn't good because of the heat-generating graphics card below it and the CPU and its cooler above.
And the next three slots' worth of space goes to the graphics card.

Under the graphics card they get warmed up by its heat, and accessing them requires removing the graphics card first.
And the backside of the motherboard would be an absolutely craptacular place for accessing them.
Developed for the limited needs of laptops/mobile devices, the M.2 form factor is just a bad fit and not well thought out for desktop PCs.
The one under the GPU has never been an issue for me; sure, it gets a little warmer than, say, an inch or so lower down. But the one that tends to get hotter is the one under the CPU, and that's despite its heatsink being 2-3 times bigger than the one under the GPU.


[Attached screenshot: tmpsd.png]
 
And where are you going to put those add-on cards when M.2 slots, with their huge PCB footprint for such a small connector, hog so much of the motherboard's real estate?
In a PCIe slot, obviously. I already explained that for 16 Gen5 lanes it would have to be the top PCIe slot, and the GPU would be forced to run at x8 Gen5 speed. That is after the user has already populated up to 7 different M.2 slots on the board and still needs 4 more. Though at such numbers it would make sense to just use an NVMe-based NAS for storage.
Out of what's left after the graphics card consumes four expansion slots' worth of space:
Most motherboards now leave ample space below the primary slot by not placing PCIe slots directly below the primary.
The topmost slot position, if a slot is even there, isn't good because of the heat-generating graphics card below it and the CPU and its cooler above.
Only with bad cooling. With proper cooling from the front the heat there is not an issue.
Under the graphics card they get warmed up by its heat, and accessing them requires removing the graphics card first.
I already have to remove the GPU if I want to plug or unplug SATA devices, because the GPU essentially blocks easy access to the SATA ports.
And the backside of the motherboard would be an absolutely craptacular place for accessing them.
This applies to microATX or similar boards.
Developed for the limited needs of laptops/mobile devices, the M.2 form factor is just a bad fit and not well thought out for desktop PCs.
SATA is no better. Two flat cables that are difficult to cable manage properly or to access at their ports. Accompanied by wasted space in a 2.5" enclosure that does not have good mounting spots in a case. Limited by an ancient standard that offers less speed than a decent USB thumb drive.

M.2 is not perfect, but there's also nothing better in terms of consumer standards.
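
For context on the speed point, these are the rough sequential ceilings the interfaces themselves allow (approximate spec-level numbers; actual drives and thumb sticks vary a lot):

Code:
# Approximate interface ceilings in MB/s, before controller/NAND limits.
ceilings = {
    "SATA III (6 Gb/s, 8b/10b)":   600,   # real SSDs ~530-560
    "USB 3.2 Gen 2 (10 Gb/s)":    1200,   # good sticks/enclosures ~1000
    "NVMe PCIe 4.0 x4":           7880,   # retail drives ~7000
    "NVMe PCIe 5.0 x4":          15750,   # retail drives ~14000
}
for name, mbps in ceilings.items():
    print(f"{name}: ~{mbps} MB/s")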
 
SATA is no better. Two flat cables that are difficult to cable manage properly or to access at their ports. Accompanied by wasted space in a 2.5" enclosure that does not have good mounting spots in a case. Limited by an ancient standard that offers less speed than a decent USB thumb drive.

M.2 is not perfect, but there's also nothing better in terms of consumer standards.
The SATA space on a board PCB is tiny; it's a little connector. The case space required for the drive matters less, as case space is at far less of a premium than board PCB space. Cable management isn't really a big deal, to be honest. That is far more consumer-friendly than having to go into a built system to deal with little tiny M.2 screws, especially when you have to remove other components to reach them.
The ancient standard, as you call it, is still a popular choice, and a SATA SSD is still fast in real-world usage.
M.2 is a portable form factor shoehorned into desktop usage. NVMe itself is great, but M.2 has its problems, and M.2 is not the only way to use NVMe. Luckily we didn't see mSATA on motherboards; SATA has its own M.2 equivalent, but I've only ever seen it inside NUCs.
I struggle to see how you feel there is no difference in usability and scalability.

So if you're having to remove your GPU to connect SATA cables, you didn't plan properly by plugging in the cables before installing the GPU. (Yes, even if they're not attached to drives, plug them in and have them fed round the back panel, out of sight, out of mind. If the day comes that you need them, they're already on the board.)

What is the popular way of using NVMe in the enterprise space? U.2. I wonder why they went in that direction.

Now the problem is we are committed to M.2 at this point, as we have hundreds of SKUs using that form factor, so the proper way forward is M.2 add-on cards for PCIe slots, with up to 4 drives per PCIe slot. Leave SATA as it is; it's not causing you any problems, and if you don't like it, don't use it.
 