
A-DATA Presents the Killer Speed of 1TB SSD with XPG 2.5-inch to 3.5-inch SSD Converter

btarunr

Editor & Senior Moderator
A-DATA Technology Co., Ltd., a worldwide leading manufacturer of high-performance DRAM modules and Flash application products, is presenting the killer speed of eight A-DATA S592 SSDs configured in RAID 0 with the XPG 3.5" converter at Computex Taipei 2009. Using the XPG 3.5" converter, the array reaches a capacity of up to 1 TB and delivers transfer rates of 825 MB/s read and 1,115 MB/s write.

The eight A-DATA S592 SSDs use the latest XPG EX93 3.5" SSD converter, which is equipped with a safety-lock mechanism that easily secures an SSD or hard drive in a 3.5" drive bay without the purchase of any accessories, keeping valuable data operating in a safe environment. This functional and worthwhile XPG EX93 3.5" SSD converter is an excellent choice for PC users and enthusiasts.



View at TechPowerUp Main Site
 
Guess you have to give it to A-Data for trying. They are breaking new ground with this; those transfer speeds are killer!
 
What?! They've got a four-way 4TB RAID0 array? That thing will fail in minutes..
 
What?! They've got a four-way 4TB RAID0 array? That thing will fail in minutes..

SSD, though... an SSD doesn't fail as often as a standard HD (or at least isn't supposed to).
 
Regardless of SSD or mechanical HD, why would it fail in minutes?
 
Ya, I don't quite understand your reasoning, Weer; the hard drives are just writing data. Ya know, what they were designed to do. It doesn't put any more stress on a hard drive to run it in an array.
 
What?! They've got a four-way 4TB RAID0 array? That thing will fail in minutes..

I think it's 8-way with up to 128GB per SSD - not familiar with the part #'s and don't feel like looking it up.
 
I just bought something very similar used from a guy in Canada: a 2.5"-to-3.5" converter that can hold two drives in one 3.5-inch slot. It works off one SATA cable as well. Should be interesting to see how it works. I wonder, do the drives need to be the same size when used in one of these caddies? I wasn't looking to RAID, but if they're going off the same SATA cable, that's probably what it is. Would I just lose the excess GB on the bigger drive?
 
Ya, I don't quite understand your reasoning, Weer; the hard drives are just writing data. Ya know, what they were designed to do. It doesn't put any more stress on a hard drive to run it in an array.

You should read up on RAID 0.

A two-way RAID 0 array is far more likely to fail, crash, die and never be heard from again than a single drive. It's extremely unsafe if you're worried about your data.

As it ramps up to four- and eight-way arrays, the chances of failure magnify severely. I myself have five 1TB drives but have never been risky enough to try a four-way.

Or, at least that's how I see it.
 
I think it's 8-way with up to 128GB per SSD - not familiar with the part #'s and don't feel like looking it up.

That thing is like a ticking clock! Watch out! It could explode at any second!

I'm surprised it held up for the whole show.

But there's really nothing impressive about this, if you think about it. It's just an eight-way RAID 0 array. You'd get the same speeds using HDDs as well... maybe higher, since SSDs used to be considered much slower than HDDs in bandwidth.
 
I've used the same array for over a year. RAID 0 being "unreliable" to the extent of keeping you away from it is just a myth. There's only a mathematically higher probability of data loss in comparison to a single drive, because >1 drives are required for the volume to exist and survive. Then again, if a single (non-RAID) drive is damaged, the data is screwed anyway. It's not that the physical drives wear and tear more as part of a RAID. In fact, smaller chunks of interleaved data are accessed from each disk, so the wear and tear on the RW heads is less. With SSDs, the "unreliability" BS goes down the drain completely. There's nothing mechanical inside an SSD.
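
To put a number on that "mathematically higher probability" (the 5% annual failure rate below is purely an illustrative assumption, not a spec for any real drive), a quick sketch in Python:

Code:
    # Chance that an n-drive RAID 0 volume loses data within a year,
    # assuming independent drives with identical annual failure rates.
    def raid0_loss_probability(n_drives, annual_failure_rate=0.05):
        # The volume survives only if every member drive survives.
        return 1.0 - (1.0 - annual_failure_rate) ** n_drives

    for n in (1, 2, 4, 8):
        print(f"{n} drive(s): {raid0_loss_probability(n):.1%} chance of loss per year")

That prints 5.0%, 9.8%, 18.5% and 33.7% - the risk grows with member count, but gradually. Nothing remotely like "fails in minutes".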
 
Won't this bottleneck SATA II? SATA II is 300 MB/s max, so this speed is far beyond that spec. Looks like storage technology has finally caught up with the interface! SATA III is going to need to hit the mainstream soon, or else... but even then, SATA III will only be a 600 MB/s max transfer rate.

Maybe PCI-E for these drives?
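
A rough check, taking the quoted 825 MB/s aggregate read at face value (a back-of-the-envelope sketch, not anything from A-DATA):

Code:
    # Does the demo fit within SATA II? Per-link vs. aggregate bandwidth.
    SATA2_LINK = 300.0   # MB/s usable per SATA II link (3 Gb/s after 8b/10b coding)
    ARRAY_READ = 825.0   # MB/s aggregate read figure A-DATA quotes
    DRIVES = 8

    per_drive = ARRAY_READ / DRIVES          # ~103 MB/s per member SSD
    links_needed = ARRAY_READ / SATA2_LINK   # ~2.75 links to carry the aggregate
    print(f"per drive: {per_drive:.0f} MB/s, links needed: {links_needed:.2f}")

So each member SSD sits well under its own link's limit; only a single shared cable would bottleneck the aggregate.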
 
Won't this bottleneck SATA II? SATA II is 300 MB/s max, so this speed is far beyond that spec. Looks like storage technology has finally caught up with the interface! SATA III is going to need to hit the mainstream soon, or else... but even then, SATA III will only be a 600 MB/s max transfer rate.

Maybe PCI-E for these drives?

That's a very good point, but I believe this converter they're using outputs more than a single SATA cable. Otherwise they wouldn't be able to get the transfer speeds they are showing.
 
This puts my CrystalMark score to shame, lol.
 
That's a very good point, but I believe this converter they're using outputs more than a single SATA cable. Otherwise they wouldn't be able to get the transfer speeds they are showing.


The information I read elsewhere made it sound like it'd be just one SATA cable going to the converter drive. Mind you, maybe they meant only one SATA power cable is required? How much power does a 2.5" drive draw compared to a 3.5"? If that's the case, it may still require two SATA cables.
 
I've used the same array for over a year. RAID 0 being "unreliable" to the extent of keeping you away from it is just a myth. There's only a mathematically higher probability of data loss in comparison to a single drive, because >1 drives are required for the volume to exist and survive. Then again, if a single (non-RAID) drive is damaged, the data is screwed anyway. It's not that the physical drives wear and tear more as part of a RAID. In fact, smaller chunks of interleaved data are accessed from each disk, so the wear and tear on the RW heads is less. With SSDs, the "unreliability" BS goes down the drain completely. There's nothing mechanical inside an SSD.

I won't pretend to be an expert, but MLC cells are known to deteriorate over time. They can only handle a certain number of writes - and because entire blocks of cells have to be overwritten for certain operations, the write count is much higher than the actual number of write operations. Of course, one block going bad isn't necessarily a problem, and there are wear-leveling algorithms to compensate, but the lack of moving parts is no guarantee of reliability.
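
For a sense of scale, a crude lifetime estimate (every number below is an assumption for illustration, not a spec for these drives):

Code:
    # Rough MLC lifetime: total rated NAND writes divided by the daily
    # write volume, inflated by write amplification.
    CAPACITY_GB = 128     # assumed per-SSD capacity
    PE_CYCLES = 10_000    # assumed MLC program/erase endurance per cell
    HOST_GB_PER_DAY = 20  # assumed host writes per day
    WRITE_AMP = 3.0       # assumed write amplification factor

    nand_gb_per_day = HOST_GB_PER_DAY * WRITE_AMP
    lifetime_years = CAPACITY_GB * PE_CYCLES / nand_gb_per_day / 365
    print(f"~{lifetime_years:.0f} years")  # ~58 years with these numbers

Make the write load heavier or the amplification worse and that figure shrinks fast, which is exactly the concern.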

When you say wear and tear on the RW heads, what do you mean? The heads never touch the platter, and the voice coil will last virtually forever.
 
You should read up on RAID 0.

A two-way RAID 0 array is far more likely to fail, crash, die and never be heard from again than a single drive. It's extremely unsafe if you're worried about your data.

As it ramps up to four- and eight-way arrays, the chances of failure magnify severely. I myself have five 1TB drives but have never been risky enough to try a four-way.

Or, at least that's how I see it.

That was with mechanical drives, which had mechanical failures that rendered the whole drive useless.

With SSDs, it's many times rarer for an entire drive to fail. Rather, small portions become unusable through standard use, but the controller chip takes care of that for you, and you keep your data.

And considering that the primary downside to SSDs is drive wear, spreading your use evenly over eight drives will actually increase your drive life.
 
I won't pretend to be an expert, but MLC cells are known to deteriorate over time. They can only handle a certain number of writes - and because entire blocks of cells have to be overwritten for certain operations, the write count is much higher than the actual number of write operations. Of course, one block going bad isn't necessarily a problem, and there are wear-leveling algorithms to compensate, but the lack of moving parts is no guarantee of reliability.

Each drive in RAID 0 ends up moving a smaller amount of data (reads/writes) compared to a single drive handling the whole volume. MLC would "deteriorate" even more slowly in RAID 0, in that case. In fact, it would be more reliable.

When you say wear and tear on the RW heads, what do you mean? The heads never touch the platter, and the voice coil will last virtually forever.

I mean the RW head performing a smaller number of read/write operations. If a single drive can provide, say, 90 MB/s read speed (in a real-world scenario), a RAID 0 of two such drives, again in a real-world scenario, won't offer 180 MB/s (although in theory it's supposed to). Several factors keep the performance from scaling perfectly; in either case, each member disk is moving <90 MB/s.
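
In numbers (the 90% scaling efficiency below is just an assumed figure to make the point):

Code:
    # Per-member throughput in RAID 0 when scaling isn't perfect.
    SINGLE_DRIVE = 90.0       # MB/s, real-world single-drive read speed
    N = 2
    SCALING_EFFICIENCY = 0.9  # assumed: the array hits 90% of the theoretical 2x

    array_speed = SINGLE_DRIVE * N * SCALING_EFFICIENCY  # 162 MB/s, not 180
    per_member = array_speed / N                         # 81 MB/s, below 90 MB/s
    print(f"array: {array_speed} MB/s, per member: {per_member} MB/s")

Each member disk is doing less work than it would serving the same volume alone.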
 
Each drive in RAID 0 ends up moving a smaller amount of data (reads/writes) compared to a single drive handling the whole volume. MLC would "deteriorate" even more slowly in RAID 0, in that case. In fact, it would be more reliable.

I understand that. In fact, the chips inside the SSD are also set up in a RAID-like configuration. I'm just saying that a lack of moving parts doesn't equal reliability. I have heard of some very bad experiences with SSDs. I recently got one myself to try out as the boot drive on one machine, but I'll be sure to make a regular image backup - good practice all the time, but I'm especially wary of SSDs. Hopefully I'll have a good experience with it, though.

I mean the RW head performing a smaller number of read/write operations. If a single drive can provide, say, 90 MB/s read speed (in a real-world scenario), a RAID 0 of two such drives, again in a real-world scenario, won't offer 180 MB/s (although in theory it's supposed to). Several factors keep the performance from scaling perfectly; in either case, each member disk is moving <90 MB/s.
OK. I'm not sure how that translates to 'wear and tear', but I see your point.
 
Awesome. I wonder what price those are going for right now. Also curious whether they are using the new JMicron controller to lower the price, or their own.
 
In my experience, people who have drives fail are usually the cause.

A new drive being defective is not unheard of, and neither is a drive failing in service, but to have such terrible luck with that many drives leaves one common denominator: you.

I ran RAID 0 for four years, still have those drives running, and not a single dead drive yet. Soon I might have one die, after five years of service... so if that drive were in a single-drive machine, it would still be dead.
 
Yes, but prices will blow a hole in old Bill Gates' pocket.
 
Weer, you're making RAID sound much less reliable than it is. I had a RAID 0 array from '04 to '06 and it never crashed, except once when I had my PCI-E bus overclocked. Four drives is statistically worse, but you know what? If they're selling it, it's not as volatile as you make it out to be. When they release it, it will be stable. Stop scaring everyone who thought it looked like a good idea. I would LOVE one if I had the funds. :rolleyes:
 
You should read up on RAID 0.

A two-way RAID 0 array is far more likely to fail, crash, die and never be heard from again than a single drive. It's extremely unsafe if you're worried about your data.

As it ramps up to four- and eight-way arrays, the chances of failure magnify severely. I myself have five 1TB drives but have never been risky enough to try a four-way.

Or, at least that's how I see it.

Wow, that's completely wrong.

If you buy two hard drives, their lifespan is the same, RAID or not. RAID 0 merely runs the risk that ONE dead drive takes out ALL the data, instead of one dead drive taking out half the data. It doesn't increase the chance of either drive failing.
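
Put in numbers (the per-drive failure probability is an assumption for illustration): the per-drive odds don't change, only the blast radius does.

Code:
    # RAID 0 doesn't make a drive likelier to die; it makes one death cost more.
    P_DRIVE = 0.05  # assumed chance a given drive dies this year

    # Two drives as separate volumes: each failure costs half the data.
    # Two drives in RAID 0: either failure costs ALL the data.
    p_lose_everything = 1 - (1 - P_DRIVE) ** 2
    print(f"chance each drive fails: {P_DRIVE:.0%} (identical either way)")
    print(f"chance a two-way RAID 0 loses everything: {p_lose_everything:.1%}")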
 