November 13, 2012, 05:15 AM
oliver_ssa (Rookie, Join Date: Nov 2012, Posts: 9)

Quote:
Originally Posted by JD
I suspect it'll take at least a week, if not longer, to fully initialize. Once that does finish, your write speeds should be at least double what you are getting now. My 2TB RAID10 array took about 2 days to initialize on my LSI 9211 (which is basically the same card you have).

So right now you have 19TB. If 1 of your 8 disks fails, your performance is going to drop back to what you're seeing right now (maybe even worse) until you replace that drive and the whole array re-initializes itself. This is largely due to the RAID card you have not having a dedicated CPU and RAM to calculate the parity bits. It's relying on your CPU which is far slower at performing those calculations.

With RAID10, you'd have around 11TB, but you could lose up to 4 drives without any major performance losses. Likewise, read/write speeds should be much higher since there are no parity bits to calculate; it just writes data to pairs of drives.

And unless you have a decent UPS attached to your PC, I really wouldn't recommend enabling caching. Should your PC lose power unexpectedly, you'll risk corrupting 19TB of data.
Thx, JD!

I have a Thermaltake TRX-1000M and a UPS plugged into the server only.

And about the initialization: does it run only in the RAID BIOS console, or does it also keep processing while I'm in Windows?
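
For reference, the "parity bits" JD mentions above are just an XOR across the data blocks in each stripe; on a card without its own processor and cache, that XOR runs on the host CPU for every write and for every rebuild. Here's a minimal Python sketch of the idea, not the card's actual firmware; the block size and random data are made up purely for illustration:

Code:

import os
from functools import reduce

BLOCK_SIZE = 64 * 1024  # hypothetical 64 KiB stripe unit (size is an assumption)

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe across an 8-disk RAID 5: 7 data blocks + 1 parity block.
data_blocks = [os.urandom(BLOCK_SIZE) for _ in range(7)]

# Parity is the XOR of every data block in the stripe; a host-based card
# leaves this calculation to the main CPU on every write.
parity = reduce(xor_blocks, data_blocks)

# If one drive fails, its block is rebuilt by XOR-ing the survivors with parity.
lost = 3
survivors = [b for i, b in enumerate(data_blocks) if i != lost]
rebuilt = reduce(xor_blocks, survivors + [parity])

assert rebuilt == data_blocks[lost]
print("lost block reconstructed from parity")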
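
And the capacity figures JD quotes line up if the array is 8 x 3 TB drives in RAID 5 (the drive size is my assumption, it isn't stated in the thread): one drive's worth of space goes to parity, RAID 10 mirrors everything, and the OS reports binary TiB rather than decimal TB, which is where ~19 and ~11 come from. A quick sketch:

Code:

# Usable capacity for an 8-drive array, assuming 3 TB (decimal) drives;
# the drive size is an assumption that happens to match the quoted figures.
DRIVES = 8
DRIVE_TB = 3.0
TB_TO_TIB = 1e12 / 2**40   # the OS reports binary TiB, not decimal TB

raid5_tib  = (DRIVES - 1) * DRIVE_TB * TB_TO_TIB   # one drive's worth of parity
raid10_tib = (DRIVES // 2) * DRIVE_TB * TB_TO_TIB  # half the drives hold mirrors

print(f"RAID 5 : {raid5_tib:.1f} TiB usable, survives exactly 1 drive failure")
print(f"RAID 10: {raid10_tib:.1f} TiB usable, survives up to 4 failures "
      f"(at most one per mirrored pair)")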