
TerraMaster F8 SSD Plus

TheLostSwede

News Editor
A book-sized NAS that not only fits eight M.2 SSDs, but also offers 10 Gbps Ethernet to make the most of the drives. On top of that, it's extremely quiet during operation and boasts a compelling feature set.

 
Very neat. If you pair this with some 4TB drives at ~$250 apiece, you can have a cool 8TB of storage in a RAID 5 config for only $800 + 3 × $250 = $1,550. Plus tax. /s
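(For anyone sanity-checking that math, here's a quick back-of-the-envelope sketch in Python; the enclosure and drive prices are the figures from this post, not current street prices.)

```python
# Back-of-the-envelope check of the cost/capacity claim above.
# Prices are the post's assumptions, not current street prices.

def raid5_usable_tb(drive_tb: float, n_drives: int) -> float:
    """RAID 5 sacrifices one drive's worth of capacity for parity."""
    return drive_tb * (n_drives - 1)

enclosure_usd = 800
drive_usd, drive_tb, n_drives = 250, 4, 3

print(f"Usable capacity: {raid5_usable_tb(drive_tb, n_drives):.0f} TB")  # 8 TB
print(f"Total cost:      ${enclosure_usd + n_drives * drive_usd}")       # $1550
```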
 
Certainly can see it being used for video editing on the go; it's not particularly big, plus flash drives will make it quite rugged.
 
Certainly can see it being used for video editing on the go; it's not particularly big, plus flash drives will make it quite rugged.
I'm not even sure those sequential speeds make this a good choice for video editing. Maybe if you fill all 8 slots in RAID0...
 
I'm not even sure those sequential speeds make this a good choice for video editing. Maybe if you fill all 8 slots in RAID0...
Even when using RAID 5 it should be fine, as the NAS is limited by its single 10 Gbps NIC. You get the peace of mind of redundancy, and with "cheap" 5-port 10 Gbps switches out there, you can easily have 2 or 3 editors on the go accessing files from a single NAS.
 
Very neat. If you pair this with some 4TB drives at ~$250 apiece, you can have a cool 8TB of storage in a RAID 5 config for only $800 + 3 × $250 = $1,550. Plus tax. /s
I believe this is pretty much what I said at the end of the review, no? It's a good piece of kit for those that need something like this, but it's not cost competitive at all.
 
Even when using RAID 5 it should be fine, as the NAS is limited by its single 10 Gbps NIC. You get the peace of mind of redundancy, and with "cheap" 5-port 10 Gbps switches out there, you can easily have 2 or 3 editors on the go accessing files from a single NAS.
Aw snap, I didn't realize a single PCIe 3.0 x1 port can almost saturate 10 Gbps Ethernet on its own.
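For reference, a rough line-rate comparison (a sketch that ignores protocol overhead on both sides, so treat these as ceilings, not real-world numbers):

```python
# Rough ceiling comparison: one PCIe 3.0 lane vs. 10 Gbps Ethernet.
# Protocol overhead is ignored on both sides, so these are upper bounds.

PCIE3_GT_S = 8.0                    # transfer rate per PCIe 3.0 lane (GT/s)
ENCODING = 128 / 130                # PCIe 3.0 uses 128b/130b encoding

pcie3_x1_gbps = PCIE3_GT_S * ENCODING       # ~7.88 Gbps of usable bandwidth
print(f"PCIe 3.0 x1: {pcie3_x1_gbps:.2f} Gbps (~{pcie3_x1_gbps / 8:.3f} GB/s)")
print("10GbE:       10.00 Gbps (1.25 GB/s)")
# So a single x1 drive covers roughly 80% of a 10 Gbps link by itself.
```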

I believe this is pretty much what I said at the end of the review, no? It's a good piece of kit for those that need something like this, but it's not cost competitive at all.
I was kind of building a solution around this in my head when I posted that.

For probably 10 years now, I keep coming back to these, look at the costs and then decide it's not worth it. Maybe next year... (I'm talking personal, home usage.)
 
Cool concept, but I can't imagine trusting $5,600 of M.2 storage to two 50 mm fans with a 2-year warranty. The SSDs aren't going to get hot, since 10 GbE networking won't even keep up with a single PCIe Gen3 x2 drive, so the cooling is somewhat academic when your drives are only running at 1/8th of their speed. That networking bottleneck also really hurts, though, when you have to spend $4,800 on drives and then choke them.

I know there are plenty of workloads where spinning rust is unacceptable, but for a consumer NAS sitting behind a 10 GbE network port, 2-3 bays of spinning rust with a couple of much smaller, cheaper SSDs acting as a dedicated read/write cache gets you enough performance to saturate 10 GbE in plenty of typical workloads.
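A rough model of that idea; all the throughput figures are illustrative assumptions, not measurements of any particular drive:

```python
# Rough model of the HDD-array-plus-SSD-cache idea. All throughput figures
# are illustrative assumptions, not measurements.

HDD_SEQ_MB_S = 250       # one 3.5" HDD, sequential
SSD_SEQ_MB_S = 3000      # a mid-range NVMe cache drive
LINK_MB_S = 1250         # 10 GbE ceiling

def effective_mb_s(n_hdds: int, cache_hit: float) -> float:
    """Time-weighted blend: cached bytes come from the SSD, the rest
    from the striped HDDs (harmonic mean of the two rates)."""
    hdd_array = n_hdds * HDD_SEQ_MB_S
    blended = 1 / (cache_hit / SSD_SEQ_MB_S + (1 - cache_hit) / hdd_array)
    return min(LINK_MB_S, blended)

for hit in (0.0, 0.5, 0.9):
    print(f"3 HDDs, {hit:.0%} cache hits: ~{effective_mb_s(3, hit):.0f} MB/s")
# With a decent cache hit rate, three HDDs plus one SSD saturate 10 GbE.
```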

Even at PCIe 3.0 x1, it feels like the lack of 20 Gbps or 40 Gbps USB is a bit of an oversight.
 
I was kind of building a solution around this in my head when I posted that.

For probably 10 years now, I keep coming back to these, look at the costs and then decide it's not worth it. Maybe next year... (I'm talking personal, home usage.)
The issue is twofold. The device itself is at least $200 too expensive for a consumer, although they have that cheaper model with a leaner SoC, which solves that problem to a degree.
The other is that SSDs are no longer dirt cheap, as they were a couple of years ago, which makes the final NAS too costly for most that don't need it for professional use.

Even at PCIe 3.0 x1, it feels like the lack of 20 Gbps or 40 Gbps USB is a bit of an oversight.
It's not a DAS though. The only company so far that seems to have worked out how to use something like Thunderbolt in a NAS is QNAP, and I believe that might still be limited to 10 Gbps as a virtual networking interface. Someone needs to write a Linux driver that turns the USB/Thunderbolt host controllers into a network interface when plugged into a PC/Mac, so that the extra performance on offer can be taken advantage of.
 
Is that a mashup of Fractal Terra and Cooler Master?
Terramaster predates the Fractal Terra by a decade and a half.

It's CoolerMaster's storage division, and for the first decade of their existence, I hear the software side of things was pretty ropey. As of TOS v5.0 (2022) it's been pretty decent, though I've only used a couple of their NASes briefly.

Ugreen appear to be taking the place of earlier Terramaster - great hardware and design but immature software that's not really ready for any kind of risk-averse deployment.
 
Now, if only we could get some cheap large-capacity (M.2) SSDs...
 
Now, if only we could get some cheap large-capacity (M.2) SSDs...
Prices have certainly come down, but consumer M.2 drives have been stuck at 8TB for five whole years already. That's unheard of when you look at the history of increasing media capacities, going all the way back to analogue tape. Capacity used to increase year on year, reliably, relentlessly.

The only time capacities have stood still for long periods before was when standardised removable media were involved where the reader and the media were (loosely) locked for compatibility reasons.
 
The funny thing is, everything we store on our PCs has really exploded in size. Even games have gone way up in install size; 100 GB is now normal for AAA games, and we have seen much, much larger ones - Digital Combat Simulator's base game is "only" 200 GB, but add some planes, terrains and missions and you're above 500 GB in no time!

And if you do any sort of audio visual productivity - 50 megapixel RAW images, 4K video, RAW video, high bitrate audio files have all gone up in size compared to what was standard half a decade ago.

But we're still hearing "gone are the days of hoarders who downloaded movies and music and stored large libraries on a home computer; who needs large drives now?"...
 
Nice review

I have a question on this one: I'm guessing the 10 Gbps is the LAN chipset speed. Is the other number also Gbps?
[throughput graph attachment]
 
A couple of things that we are missing though, is S.M.A.R.T. support for M.2 SSDs, strange, especially in a device such as the F8 SSD Plus that only supports M.2 drives... This might be a nitpick, but it's an odd feature to be missing.
This is not a nitpick, it's an instant deal-breaker. Really? Who the hell builds a drive array box and doesn't include S.M.A.R.T. support?

@TerraMaster
Seriously with this? Where are your brains, in your backside?
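Until TerraMaster adds it, you can at least pull NVMe health data yourself from any Linux box with smartmontools installed; a minimal sketch, assuming smartctl 7+ (for JSON output) and that your drive shows up as /dev/nvme0:

```python
# Query NVMe S.M.A.R.T./health data via smartmontools' smartctl.
# "/dev/nvme0" is an example device path; adjust for your system.
import json
import subprocess

result = subprocess.run(
    ["smartctl", "-j", "-a", "/dev/nvme0"],  # -j = JSON output (smartmontools 7+)
    capture_output=True, text=True,
)
data = json.loads(result.stdout)

# NVMe health log as reported in smartctl's JSON schema
health = data.get("nvme_smart_health_information_log", {})
print("Percentage used: ", health.get("percentage_used"), "%")
print("Media errors:    ", health.get("media_errors"))
print("Unsafe shutdowns:", health.get("unsafe_shutdowns"))
```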
 
Nice review

I have a question on this one: I'm guessing the 10 Gbps is the LAN chipset speed. Is the other number also Gbps?
[throughput graph attachment]
I realised I messed up the graph a bit, updated it.

[updated throughput graph attachment]
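For anyone comparing the chart numbers against the link speed: network links are quoted in gigabits per second, while file transfers are usually shown in megabytes per second, so a quick conversion helps (assuming decimal units, as NIC vendors use):

```python
# Link speeds are quoted in gigabits per second (Gbps); file-transfer
# charts are usually megabytes per second (MB/s). Same quantity, a
# factor of 8 apart (plus the SI giga/mega step).

def gbps_to_mb_s(gbps: float) -> float:
    return gbps * 1000 / 8          # 1 Gbps = 125 MB/s in decimal units

print(gbps_to_mb_s(10))             # 1250.0 MB/s ceiling for 10 GbE
print(gbps_to_mb_s(2.5))            # 312.5 MB/s ceiling for 2.5 GbE
```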


Another fileserver without ECC memory.
Well, since the CPU doesn't support ECC, what would be the point of using ECC RAM?
Most Intel CPUs that aren't Xeons don't support ECC RAM. I thought that was common knowledge by now.
 
Well, since the CPU doesn't support ECC, what would be the point of using ECC RAM?
Most Intel CPUs that aren't Xeons don't support ECC RAM. I thought that was common knowledge by now.

That's not true. Most of the better 12th-14th gen Intel CPUs support ECC. The trick is that they force you to buy other motherboard chipsets to actually use it. AMD has ECC support in all chipsets and many of the better CPUs.

Anyway, running an 8 x 8 TB fileserver on 10 Gbit/s without ECC can lead to a bad wakeup some day. Especially since this thing doesn't support ZFS either (which has checksums).
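For what it's worth, the idea behind ZFS-style checksumming is simple to illustrate. A toy sketch of the principle (store a checksum with every block, verify on read), not ZFS's actual on-disk machinery:

```python
# Toy sketch of the principle behind ZFS-style checksumming: store a
# checksum alongside every block and verify it on read, so silent
# corruption is detected instead of being served back as good data.
import hashlib

def write_block(data: bytes) -> tuple[bytes, bytes]:
    return data, hashlib.sha256(data).digest()

def read_block(data: bytes, checksum: bytes) -> bytes:
    if hashlib.sha256(data).digest() != checksum:
        raise IOError("checksum mismatch: block is corrupt")
    return data

block, csum = write_block(b"important file data")
read_block(block, csum)                        # reads back fine
try:
    read_block(b"important file dbta", csum)   # one corrupted byte
except IOError as e:
    print(e)                                   # corruption is detected
```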
 
That's not true. Most of the better 12th-14th gen Intel CPUs support ECC. The trick is that they force you to buy other motherboard chipsets to actually use it. AMD has ECC support in all chipsets and many of the better CPUs.
Well, then they've changed that again. Even so, the N-series SoCs don't support ECC, so it's not relevant to any NAS based around them.
Anyway, running an 8 x 8 TB fileserver on 10 Gbit/s without ECC can lead to a bad wakeup some day. Especially since this thing doesn't support ZFS either (which has checksums).
This is really some bizarre paranoia thing, and I don't really understand where it started, but it appears to come from the TrueNAS ZFS zealots who claim everything else is garbage.
I've had my NAS running OMV for seven years now and I've lost exactly nothing. Prior to that I had a basic QNAP NAS and, once again, lost nothing.
I really believe the whole ZFS and ECC thing is a myth, as no-one has managed to prove that it makes a lick of difference for your average user. If you're a big corporation, that's a different matter.
Then again, no-one is forcing anyone to buy something they don't want to buy, which is why there's a lot of choice out there. However, no consumer or even small business NAS out there comes with ECC memory. A few models from QNAP support ZFS though.
 
Well, then they've changed that again. Even so, the N-series SoCs don't support ECC, so it's not relevant to any NAS based around them.

This is really some bizarre paranoia thing, and I don't really understand where it started, but it appears to come from the TrueNAS ZFS zealots who claim everything else is garbage.
I've had my NAS running OMV for seven years now and I've lost exactly nothing. Prior to that I had a basic QNAP NAS and, once again, lost nothing.
I really believe the whole ZFS and ECC thing is a myth, as no-one has managed to prove that it makes a lick of difference for your average user. If you're a big corporation, that's a different matter.
Then again, no-one is forcing anyone to buy something they don't want to buy, which is why there's a lot of choice out there. However, no consumer or even small business NAS out there comes with ECC memory. A few models from QNAP support ZFS though.

What do you think sets our use case apart from these big corporations that do use ECC? Why do all non-consumer platform servers have ECC?

Also, if you don't have ECC, you can't tell whether you need ECC. I have an AM4 Linux machine reporting errors right now. ECC isn't so much about random protons; it guards against hardware going bad, or overheating, or a DIMM coming loose.
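That's how you see it, by the way: on Linux, ECC platforms expose per-memory-controller error counters through the EDAC sysfs interface. A minimal sketch, assuming an ECC-capable machine with an EDAC driver loaded:

```python
# Read ECC error counters from the Linux EDAC sysfs interface.
# Corrected errors creeping up is the early warning that a DIMM or
# memory controller is going bad. Paths exist only on ECC hardware.
from pathlib import Path

edac = Path("/sys/devices/system/edac/mc")
for mc in sorted(edac.glob("mc*")):
    ce = (mc / "ce_count").read_text().strip()  # corrected errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```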
 
What do you think sets our use case apart from these big corporations that do use ECC? Why do all non-consumer platform servers have ECC?

Also, if you don't have ECC, you can't tell whether you need ECC. I have an AM4 Linux machine reporting errors right now. ECC isn't so much about random protons; it guards against hardware going bad, or overheating, or a DIMM coming loose.
Sorry, but ECC memory won't save your data if a DIMM comes loose.
If data transfers to and from RAM really were that unreliable, wouldn't every piece of electronics use ECC memory?
Fortunately, that isn't the case.
But to each their own; if you see value in spending money on a use case because you have an unstable computer, good on you. I would fix the computer instead.
 
Sorry, but ECC memory won't save your data if a DIMM comes loose.
Honestly, how often does that happen? That's a bit weak...
If data transfers to and from RAM really were that unreliable, wouldn't every piece of electronics use ECC memory?
Fortunately, that isn't the case.
But to each their own; if you see value in spending money on a use case because you have an unstable computer, good on you. I would fix the computer instead.
This kind of tells us that you don't really understand the principles of ECC and why it exists.

Software error checking is built into every modern OS and into many programs, so if an error happens (and they are not infrequent), the data is simply re-read from the source. Non-ECC RAM errors happen but are corrected during runtime execution. The cost of that, and one of the many reasons why ECC is important, is that correcting errors in software takes a lot more system time than correcting those same errors in hardware. ECC can correct those errors in a few cycles, usually fewer than 100. Software error correction can take tens of thousands, or even hundreds of thousands, of cycles, depending on the severity of the errors.
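To illustrate the hardware side, here's a toy Hamming(7,4) code, the textbook ancestor of the SECDED codes real ECC DIMMs use: one flipped bit in a codeword is located by the parity syndrome and corrected. A sketch of the principle, not how an actual memory controller is implemented:

```python
# Toy Hamming(7,4) demo of what ECC hardware does in a few cycles:
# a single flipped bit is located by the parity syndrome and corrected.
# Real ECC DIMMs use wider SECDED codes, but the principle is the same.

def encode(d):                        # d = [d1, d2, d3, d4] data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):                       # c = 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3        # syndrome = 1-based position of the error
    if pos:
        c[pos - 1] ^= 1               # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]   # recovered data bits

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a single-bit RAM error
assert correct(word) == [1, 0, 1, 1]  # corrected transparently
print("single-bit error corrected")
```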

ECC is superior in every way to non-ECC. For a data storage solution to leave out such an important feature, along with the S.M.A.R.T. features, is a bit dubious to say the least.
 