One thing to consider with RAID is overhead. The larger the individual drives, the more capacity each drive's worth of redundancy costs you, and how much you lose depends on the array configuration. RAID1 and RAID10 cost you half of your total capacity. RAID5 costs you the capacity of one drive, and RAID6 costs you the capacity of two. That said, there is absolutely no reason to use RAID1 or 10 for media storage. Your library is basically going to be write-once read-many, and the interface speed is certainly not going to be a bottleneck even for multiple streams of the highest-bitrate rips. Your lowest parity overhead on a 4-drive array would be with RAID5: you'd have 24TB of storage, but I'll explain later why it's not a good idea with your drives. RAID6 would give you less storage space (16TB) but greater fault tolerance. 4 drives is the minimum for RAID6, but I would go with at least 5 or 6 to make it worthwhile.
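If it helps, here's the capacity math in a few lines of Python (just a sketch; the function name is mine, and the 4x 8TB figures are the example from above):

```python
# Rough usable-capacity math for equal-sized drives.
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Usable capacity for common RAID levels, assuming identical drives."""
    if level in ("1", "10"):
        return drives * size_tb / 2      # half lost to mirroring
    if level == "5":
        return (drives - 1) * size_tb    # one drive's worth of parity
    if level == "6":
        return (drives - 2) * size_tb    # two drives' worth of parity
    raise ValueError(f"unsupported level: {level}")

for level in ("10", "5", "6"):
    print(f"RAID{level}: {usable_tb(level, 4, 8):.0f} TB usable from 4x 8TB")
# RAID10: 16 TB, RAID5: 24 TB, RAID6: 16 TB
```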
Yes, everyone says "RAID is not a backup solution," and they're right - in mission-critical enterprise situations. But for a media library it buys you some real security: lose a drive and you aren't straight up losing a sizeable chunk of your library. That matters, because once your library starts spanning terabytes, actual 1:1 backups become completely impractical.

Herein lies the gamble, though. If you have a RAID5 array and you lose a drive, the array keeps working (albeit at degraded speed), and once you replace the faulty drive, it rebuilds itself. Crisis averted. However, if a second drive fails before the rebuild is complete, then game over. You lose the ENTIRE array's worth of data.

RAID6 can tolerate two drive failures, and a third failure before a successful rebuild will crater the array. The thing with RAID6 is, you would need three drives failed at the same time to bork the array. So say you lose a drive. You replace it and start a rebuild. A second drive fails during the rebuild; you replace that and begin its rebuild. Once the first rebuild completes, you're back down to only one failure. So if a third drive fails while the second is still rebuilding but the first has already finished, you're still okay.

RAID10 has a special quirk in its fault tolerance that makes it vulnerable: it can technically tolerate two failed drives, but only one failure per mirror pair. If both drives in the same mirror pair go down at the same time, you lose the entire array. Say you have 4 drives: A1, A2, B1, and B2. A and B are the stripes (the 0), and 1 and 2 are the mirrors (the 1). Drive A1 fails; you replace it and begin rebuilding. If B1 -OR- B2 fails before A1 is back online, the array remains intact. However, if A2 fails, you lose everything.
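To make those failure scenarios concrete, here's a toy survival check (the drive names and the 4-drive RAID10 pairing are just my example above, not anything standard):

```python
# Toy survival check for the scenarios above. RAID10 dies only when BOTH
# drives in the same mirror pair are gone.
def raid5_survives(failed: set) -> bool:
    return len(failed) <= 1                  # one failure of slack

def raid6_survives(failed: set) -> bool:
    return len(failed) <= 2                  # two failures tolerated

def raid10_survives(failed: set, pairs=(("A1", "A2"), ("B1", "B2"))) -> bool:
    return not any(set(pair) <= failed for pair in pairs)

print(raid10_survives({"A1", "B1"}))  # True  - one drive from each pair
print(raid10_survives({"A1", "A2"}))  # False - a whole mirror pair is gone
print(raid5_survives({"A1", "B1"}))   # False - second failure kills RAID5
print(raid6_survives({"A1", "B1"}))   # True  - RAID6 absorbs two failures
```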
Now, I told you that to tell you this: the larger the individual drives in the array, the longer it takes to rebuild a failed one. A rebuild has to read the surviving drives end to end, and doing that while also serving data from a degraded array puts a lot of stress on the remaining drives for the duration. For very large drives like your 8TBs, that can take literally days depending on how full the array is and what kind of controller you use. And the longer the rebuild drags on (especially if people are accessing the array while it's degraded), the higher the likelihood of another drive failing.
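For a rough sense of scale (the 150 MB/s sustained rate is an assumption for an idle 7200RPM drive; a real rebuild with parity math and users hitting the array will run slower):

```python
# Back-of-the-envelope rebuild time: best case, every sector of the
# replacement drive has to be written once.
def rebuild_hours(drive_tb: float, mb_per_s: float = 150.0) -> float:
    return drive_tb * 1e6 / mb_per_s / 3600  # TB -> MB, seconds -> hours

print(f"2TB drive: ~{rebuild_hours(2):.0f}h best case")   # ~4h
print(f"8TB drive: ~{rebuild_hours(8):.0f}h best case")   # ~15h, days under load
```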
The ideal individual drive size for a RAID5 or 6 array is 2TB: they rebuild quickly and have the lowest $/GB overhead cost. The downside is that you need more drives, plus a case (or other solution) to contain them. 16x 2TB drives and 4x 8TB drives will cost you roughly the same money, but a RAID6 array of 16x 2TB drives nets you 28TB of usable capacity (total capacity minus two drives), while a RAID6 array of 4x 8TB drives nets you only 16TB.
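Here's that comparison as cost per usable TB. The per-drive prices are placeholders I made up to keep the two builds at "roughly the same money" - plug in whatever the drives actually go for:

```python
# Cost per usable TB in RAID6 for the two builds above. Prices are
# placeholder assumptions, not quotes.
def raid6_usable_tb(drives: int, size_tb: float) -> float:
    return (drives - 2) * size_tb

for drives, size_tb, price_each in ((16, 2, 50), (4, 8, 200)):
    usable = raid6_usable_tb(drives, size_tb)
    total = drives * price_each
    print(f"{drives}x {size_tb}TB: {usable:.0f}TB usable, "
          f"${total} total, ${total / usable:.2f}/usable TB")
# Same spend either way, but 28TB usable vs. 16TB
```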
As far as the controller goes, look into used Dell server cards. They're cheap and plentiful on eBay, and they use LSI silicon. I have a PERC H700 in mine and it works great. Onboard RAID on anything less than a server-class board is going to suck. It's not real RAID, because there's no dedicated RPU (RAID Processing Unit); it relies on your CPU and RAM for RAID operations, which is FAR slower than a dedicated RPU. RAID5/6 performance and rebuild operations crawl because general-purpose CPUs suck at bulk XOR work compared to dedicated hardware.
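If you're curious why XOR matters here: RAID5/6 parity is basically XOR across the stripe, and a dedicated RPU does it in hardware. Here's the math in miniature (a toy sketch, not how a real controller lays out stripes):

```python
# Parity is the XOR of the data blocks; any one lost block is recovered
# by XORing everything that's left.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d1, d2, d3 = b"media", b"files", b"here!"   # three "data drives"
parity = xor_blocks([d1, d2, d3])           # what the parity drive stores

# "Drive 2" dies: rebuild its contents from the survivors plus parity.
rebuilt = xor_blocks([d1, d3, parity])
assert rebuilt == d2
print(rebuilt)  # b'files'
```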
My setup: 12x 2TB HGST Ultrastar 7200RPM enterprise drives. Four of them are on an old LSI 9650 (my original RAID5 array from when I first built this 6 years ago), and I added the other 8 along with the PERC H700 (RAID6) a few months ago.