I see what you're saying, but essentially I'm the guy who's already loaded and waiting for your slow PC to catch up, until the year 2014 or something, lol. Hurry up and sort it, I get bored waiting :shadedshu
120GB OCZ RevoDrive X2 for OS and my best games (no WoW here), 1TB RAID + 2TB for everything else.
I use a Warp drive (coolest name of them all), 30GB, v2. I still have a 60GB Agility in the package; I got it before my son was born, and I haven't had time to get the systems where I want them. I think I've turned on my main rig (see system specs) three or four times since he was born, mostly using my phone and HTPC. Oh yeah, almost forgot, 16GB in the HTPC!
I started with an 80GB Intel X25-M first generation SSD, then popped for the G2 version (still 80GB). I recently upgraded to a 120GB Vertex 2 for the extra space, plus I got a decent deal on it. Enjoying the silence even more than the speed though!
There has to be something wrong with your SSD then, like a misaligned partition or similar.
Because I had a stripe set of 150GB WD VelociRaptors, and after using the Vertex I, the Vertex became my main storage medium; after getting the Vertex II, I moved the Raptors to my server, where they will spend the rest of their lives as download disks for my torrent client.
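The partition-alignment point above can be sanity-checked with a quick sketch. This is my own illustration (the helper name is mine, not from a real tool), assuming 512-byte logical sectors and a 4KiB NAND page size: the classic Windows XP layout started partitions at sector 63, which is exactly the kind of misalignment that cripples early SSDs.

```python
# Sketch: check whether a partition's starting offset is aligned to the
# SSD's internal page size. A misaligned partition forces the drive to
# touch two physical pages for many filesystem clusters, hurting speed.

SECTOR_SIZE = 512   # bytes per logical sector
ALIGNMENT = 4096    # common NAND page size; 1 MiB is the safe modern choice

def is_aligned(start_sector: int, alignment: int = ALIGNMENT) -> bool:
    """True if the partition's byte offset is a multiple of `alignment`."""
    return (start_sector * SECTOR_SIZE) % alignment == 0

print(is_aligned(63))    # XP-era start at sector 63 = 32,256 bytes -> False
print(is_aligned(2048))  # Vista/7-era start at sector 2048 = 1 MiB -> True
```

If the first result comes back misaligned on your drive, re-partitioning (or re-imaging with a modern OS installer) is usually the fix.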
And there is also another very annoying side effect of RAID: average latency goes up the more disks you use.
1 disk: latency 100% (no extra latency)
2 disks: latency 150%
3 disks: latency 166%
4 disks: latency 175%
And so on.
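The trend above can be illustrated with a toy Monte Carlo model of my own (not from the original article): treat each disk's rotational wait as a uniformly random fraction of one revolution, and note that a striped read has to wait for the slowest disk. The simulated percentages won't exactly match the figures quoted above, since those follow a different rule of thumb, but both show average latency climbing toward a full revolution as disks are added.

```python
import random

def avg_stripe_latency(disks: int, trials: int = 100_000) -> float:
    """Average rotational wait (in revolutions) for a striped read that
    must wait for the slowest of `disks` independent platters."""
    random.seed(42)  # reproducible runs
    total = 0.0
    for _ in range(trials):
        # Each disk's head arrives at a uniformly random point in the rotation;
        # the stripe is ready only when the last disk delivers its chunk.
        total += max(random.random() for _ in range(disks))
    return total / trials

for n in (1, 2, 3, 4):
    print(f"{n} disk(s): {avg_stripe_latency(n):.2f} revolutions on average")
```

A single disk averages about half a revolution of wait; each extra striped disk pushes the average toward a full revolution, which is the latency penalty being described.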
This is because, even though the data is spread evenly over the disks, that does not mean every disk can read its data at the same time; because of rotational latency, random reads suffer greatly before the first data can be read from each disk.
A couple of years ago I wrote an article for overclockers.com on the downsides of HDD RAID; you can find a copy of it below under the spoiler tag, as I did not want to post such a long post in this thread, and it's a bit OT.
I am a long time RAID user – I used RAID when 4200 rpm disks were still in fashion.
My first RAID array consisted of two Quantum (later Maxtor) 4.5GB Atlas 7200rpm SCSI drives on an Adaptec 2940UW; this set me back a whopping $1,500 for 9GB, back in 1993! Over the years I have used various and multiple RAID arrays, and they cost me a small wheelbarrow of money!
But it was well worth it then.
One thing I learned: although RAID 0 (striping) definitely has its benefits, and in the “old days” was even really needed, it is not the holy grail anymore!
When the WD Raptor 74GB SATA came out, it was the fastest disk for home use, and the time came to retire the SCSI drives, because modern desktop disks use a different access algorithm than SCSI server disks and are therefore faster for desktop use.
I built a RAID 0 system with two of these babies and was very happy with it until I got a system crash ;-)
At the time of my HD crash I was also doing some reading around.
There were two sides battling over the pros and cons of what was better – RAID 0 or a single disk.
The argument was that RAID 0 just moves files faster but does nothing for access time, and if one disk is too late with data, the other has to wait.
So is it worth the extra risk and does it give real world benefits?
The truth is somewhere in the middle, I think, but yes – in “some” cases it helps.
It’s like buying a real nice expensive sports car – yes it’s fast, but if you’re driving in town all day (what we all normally do) it doesn’t help much, there is just too much traffic around us; but sometimes you can get on the German autobahn where there are no restrictions – then it’s all worth it!
And so I started thinking “Maybe it would be better to spread the files over two disks and have the best of both worlds.”
I started testing some different setups with a stopwatch and found out that in most cases, two single disks are faster than one RAID 0 array, if set up right, and the fastest setup for me now is:
Disk 1: Windows and the swapfile
Disk 2: Games
The underlying thinking of my setup is the same as with dual-core CPUs: it stops the task at hand from being interrupted by other data requests.
And even though it doesn’t show much benefit in most benchmarks – because they don’t really look at load times – it does make your system feel a lot faster.
The reason I think this setup works faster is that Windows is always doing something in the background and also, if the swap file is getting used, it doesn’t stop your game from being loaded, so it can do two things at the same time.
So instead of going back and forth on the same disk,
your main program has full control over the second disk and doesn’t get interrupted all the time by other (Windows) things.
I also tried adding a third disk, using one of the older 10K SCSI disks as the swap file. Even though I could measure some difference, especially when I had a lot of programs open and started using the swapfile a lot, for me the overall benefit was not worth the extra noise and heat.
But if you are one of the lucky guys with a setup of three or more disks, I would definitely go this way.
If speed is most important and money is no problem, then look at one of these SATA-II cards from Areca; they are also sold under the Tekram brand name. They are really the fastest SATA controllers on the market for the moment; they have 256MB of cache on board, which also helps a lot, especially with the many small files some games have.
So if you have plenty of money to burn and just want the best of the best, go for it, but note that these cards have either a PCIe 8X or a 64-bit PCI-X connector.
The PCI-X version will also work fine in a standard PCI slot, and the PCIe version will work fine in a SLI 16X slot¹ or 1X / 4X slot² if it fits.
¹ If the BIOS supports it – check Areca’s FAQ on their website or send an email.
² If the back end of the slot isn’t open, as most aren’t, you can use a Dremel to cut it away (if you are real handy and don’t mind losing your warranty).
Both cards will be limited by the maximum speed of the bus you are using, but if you really feel the need for speed, try one of the new ASUS “PCI-X workstation AM2/S775 boards” – the PCI-X version will do a real fine job for you;
you can have Windows on a Raptor and the game/program files you need fast access to on the RAID channel.
One thing I learned over the years is that RAID is not an easy subject; what works really well for one doesn’t help another that much. I would advise anyone who wants to spend money on it to do a lot of reading first, and there is plenty of information on the net about it.
And one last but important tip:
The fastest hard drive setup, RAID or non-RAID, won’t help much if you don’t have enough main memory on your mobo – this should be the first thing to upgrade -
2 GB is a minimum I would say for smooth running and some multitasking.
My 2 cents -
All this is not valid for SSD-based RAID systems: as SSDs don't have moving parts, the latency is the same for all SSDs, and you will see great benefits from striping or RAID.
But if you still prefer HDDs, go ahead; imo the speed crown sits firmly in the hands of SSDs, for whatever use you can think of, though not always needed.
RAID's impact on latency varies by controller, drives, level, load, and access pattern. For example, RAID 5 vs RAID 10; or database storage vs video archive; or ICH10R vs SB950. That said, I've never seen any combination of these factors add 50% measurable latency. I have measured reduced read latency in RAID 0 configurations over their single-drive counterparts (benchmarked back to back as drives were added to the array—these are repeatable results). The worst latency from RAID 0 I've measured was 3% higher latency across a five-drive array.
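Back-to-back latency measurements like the ones described can be approximated with a short script that times small random reads against a file or block device. This is a generic sketch of my own (the function name and the device path are placeholders, not the tools behind the numbers above); note that on a real run you'd want to bypass the page cache, e.g. with O_DIRECT on Linux or a purpose-built tool like fio:

```python
import os
import random
import time

def random_read_latency(path: str, block: int = 4096, samples: int = 200) -> float:
    """Average latency in milliseconds of `samples` random block-sized reads."""
    size = os.path.getsize(path)
    random.seed(0)  # same offsets each run, for repeatable comparisons
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(samples):
            # Seek to a random block-aligned offset and read one block.
            offset = random.randrange(max(size // block, 1)) * block
            os.lseek(fd, min(offset, max(size - block, 0)), os.SEEK_SET)
            os.read(fd, block)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return elapsed / samples * 1000.0

# e.g. random_read_latency("/dev/md0")  # hypothetical RAID device (needs root)
```

Running the same script before and after adding a drive to the array is the kind of repeatable back-to-back comparison described above.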
Selecting/setting up a RAID configuration poorly can seriously hamper performance. E.g. using RAID 5 for a high-traffic database. SSDs are better than HDDs at everything, but for the price, not currently the best choice for most practical applications. Look at the poll results—over 30% of TPUers using SSDs, whereas SSD penetration outside of tech forums (where many people have them more for novelty and community reputation than for practical reasons) is, well, look here.
Bear in mind that you can also short stroke drives and put the rest of each drive to other use (e.g. archival) while using RAID with solutions like Intel Matrix RAID (or whatever they call it now).
I am not using an SSD yet, because I don't have enough $$. And I think I'll wait a little longer, until the new generation with 500MB/s read/write speeds gets cheaper. My second choice would be to wait and get "old generation" (SATA II) SSDs almost for free.
I am still on the budget train for SSD but I'm sure in a few months I'll be able to afford a bigger one when they get cheaper. I just can't live without them now
Nope, not yet; they're too expensive for the capacity at this stage for me. Have they fixed the issue where they leave fragmented files on the drive and slowly degrade in speed?
Main box has 2x 128GB Kingston SNV425s – one for OS and apps, the other for Steam. The HTPC also has a 40GB Intel, my server uses a 30GB Kingston, and the carputer has a 40GB Corsair. SSDs are the way to go.