Well, I found this via GOOGLE:
http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html
BETTER (or rather, more direct than the AnandTech one I posted initially):
=======================
"The second important parameter is the stripe size of the array, sometimes also referred to by terms such as block size, chunk size, stripe length or granularity. This term refers to the size of the stripes written to each disk. RAID arrays that stripe in blocks typically allow the selection of block sizes in kiB ranging from 2 kiB to 512 kiB (or even higher) in powers of two (meaning 2 kiB, 4 kiB, 8 kiB and so on.) Byte-level striping (as in RAID 3) uses a stripe size of one byte or perhaps a small number like 512, usually not selectable by the user.
Warning: Watch out for sloppy tech writers and marketing droids who use the term "stripe width" when they really mean "stripe size". Since stripe size is a user-defined parameter that can be changed easily--and about which there is lots of argument :^)--it is far more often discussed than stripe width (which, once an array has been set up, is really a static value unless you add hardware.) Also, watch out for people who refer to stripe size as being the combined size of all the blocks in a single stripe. Normally, an 8 kiB stripe size means that each block of each stripe on each disk is 8 kiB. Some people, however, will refer to a four-drive array as having a stripe size of 8 kiB, and mean that each drive has a 2 kiB block, with the total making up 8 kiB. This latter meaning is not commonly used.
The impact of stripe size upon performance is more difficult to quantify than the effect of stripe width:
Decreasing Stripe Size: As stripe size is decreased, files are broken into smaller and smaller pieces. This increases the number of drives that an average file will use to hold all the blocks containing the data of that file, theoretically increasing transfer performance, but decreasing positioning performance.
Increasing Stripe Size: Increasing the stripe size of the array does the opposite of decreasing it, of course. Fewer drives are required to store files of a given size, so transfer performance decreases. However, if the controller is optimized to allow it, the requirement for fewer drives allows the drives not needed for a particular access to be used for another one, improving positioning performance.
Tip: For a graphical illustration showing how different stripe sizes work, see the discussion of RAID 0.
Obviously, there is no "optimal stripe size" for everyone; it depends on your performance needs, the types of applications you run, and in fact, even the characteristics of your drives to some extent. (That's why controller manufacturers reserve it as a user-definable value!) There are many "rules of thumb" that are thrown around to tell people how they should choose stripe size, but unfortunately they are all, at best, oversimplified. For example, some say to match the stripe size to the cluster size of FAT file system logical volumes. The theory is that by doing this you can fit an entire cluster in one stripe. Nice theory, but there's no practical way to ensure that each stripe contains exactly one cluster. Even if you could, this optimization only makes sense if you value positioning performance over transfer performance; many people do striping specifically for transfer performance."
=======================
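To make the excerpt's terminology concrete, here's a quick sketch of MY OWN (NOT from pcguide; plain RAID 0, no parity, all names & numbers made up) showing how a logical offset lands on a member disk, & the stripe size vs. stripe width distinction the warning above is griping about:
=======================
# Python: map a logical byte offset to (member disk, offset on that disk)
# for a RAID 0 set. "stripe_size" below is the per-disk block size, i.e.
# the article's "normal" meaning of stripe size.

def locate(offset, stripe_size, num_disks):
    block = offset // stripe_size        # which block of the whole array
    row = block // num_disks             # which stripe row across the disks
    disk = block % num_disks             # which member disk holds that block
    return disk, row * stripe_size + (offset % stripe_size)

STRIPE_SIZE = 8 * 1024                   # 8 kiB per-disk blocks
NUM_DISKS = 4                            # stripe *width* = 4 data disks

# The combined row (stripe size x stripe width) is the thing the sloppy
# usage the article warns about also calls "stripe size":
print("full stripe row:", STRIPE_SIZE * NUM_DISKS, "bytes")  # 32768
print(locate(20 * 1024, STRIPE_SIZE, NUM_DISKS))             # (2, 4096)
=======================
Point being: an "8 kiB stripe size" on 4 drives normally means 8 kiB per disk per stripe, 32 kiB per full row.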
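And the Decreasing/Increasing tradeoff above, reduced to arithmetic (again MY sketch, assuming the best case where a file starts on a block boundary):
=======================
# Python: how many member disks one contiguous file spans at various
# stripe sizes -- smaller stripes spread a file across more spindles
# (better transfer, worse positioning), larger stripes across fewer,
# leaving the rest free for other requests if the controller allows it.
import math

def drives_touched(file_size, stripe_size, num_disks):
    return min(math.ceil(file_size / stripe_size), num_disks)

for kib in (4, 16, 64, 256):
    n = drives_touched(64 * 1024, kib * 1024, 4)
    print(f"{kib:>3} kiB stripe -> {n} disk(s) for a 64 kiB file")
# prints 4, 4, 1, 1 disk(s) respectively
=======================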
* BUT, I'd hold off until we get some more feedback... once you commit it, afaik there's NO WAY to non-destructively reset the stripe size in your RAID controller OR in the mobo's firmware (if it's not a separate card).
APK
P.S.=> Gotta fly, I haven't even read it myself yet, but figured I'd put it out as "food for thought" & a reference for yourself, myself, & others... it's ballgame & beer time with pals (back to the "REAL WORLD" outside the Matrix here)...
Well, while waiting on my friends to come pick me up (who will doubtless be late for their own funerals, lol), I trimmed the AnandTech material & found the other reference above; the crucial excerpt is quoted above... apk