Hardware Canucks > HARDWARE > Storage

    
#11 - November 12, 2012, 07:53 PM
Perineum, Hall Of Fame (Join Date: Mar 2009; Location: Surrey, B.C.; Posts: 4,026)

That's still a very nice (overkill) read speed though...

My Adaptec 31605 floats around 350 MB/s read and 250 MB/s write sequentially.
#12 - November 13, 2012, 05:01 AM
JD, Moderator (Join Date: Jul 2007; Location: Toronto, ON; Posts: 6,817)
Quote:
Originally Posted by oliver_ssa View Post
1- It's saying that the Virtual Drive is in Background Initialization Progress. I don't understand why; it's been one day since I set it up and it's still at 0%. So something is not right.
I suspect it'll take at least a week, if not longer, to fully initialize. Once that does finish, your write speeds should be at least double what you are getting now. My 2TB RAID10 array took about 2 days to initialize on my LSI 9211 (which is basically the same card you have).

So right now you have 19TB. If 1 of your 8 disks fails, your performance is going to drop back to what you're seeing right now (maybe even worse) until you replace that drive and the whole array re-initializes itself. This is largely due to the RAID card you have not having a dedicated CPU and RAM to calculate the parity bits. It's relying on your CPU which is far slower at performing those calculations.

With RAID10, you'd have around 11TB, but you could lose up to 4 drives without any major performance losses. Likewise, read/write speeds should be much higher as there are no parity bits to calculate, it just writes data to pairs of drives.

And unless you have a decent UPS attached to your PC, I really wouldn't recommend enabling caching. Should your PC lose power unexpectedly, you'll risk corrupting 19TB of data.
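To make the capacity trade-off concrete, here's a rough sketch. The 2.73 TiB per-drive figure is an assumption (8 × 3 TB consumer drives) chosen to match the ~19 TB and ~11 TB totals mentioned in this thread:

```python
# Rough usable-capacity comparison for an 8-drive array.
# Per-drive size is an illustrative assumption, not read from the controller.

drives = 8
per_drive_tib = 2.73  # roughly a 3 TB drive expressed in TiB

raid5_tib = (drives - 1) * per_drive_tib    # one drive's worth of space goes to parity
raid10_tib = (drives // 2) * per_drive_tib  # half the drives mirror the other half

print(f"RAID5 : {raid5_tib:.1f} TiB usable, survives exactly 1 drive failure")
print(f"RAID10: {raid10_tib:.1f} TiB usable, survives up to 4 failures (one per mirror pair)")
```

Note that RAID10's "up to 4" is the best case: losing both drives of the same mirror pair still kills the array.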
#13 - November 13, 2012, 05:15 AM
Rookie (Join Date: Nov 2012; Posts: 9)

Quote:
Originally Posted by JD View Post
I suspect it'll take at least a week, if not longer, to fully initialize. [...]
Thanks, JD!

I have a Thermaltake TRX-1000M and a UPS plugged into the server only.

And about the initialization: does it only run in the RAID BIOS console, or does it also keep processing while I'm in Windows?
#14 - November 13, 2012, 05:20 AM
Perineum, Hall Of Fame (Join Date: Mar 2009; Location: Surrey, B.C.; Posts: 4,026)

Looks like in the first picture your Write Through should be Write Back. Write Through will cripple write speeds.

However, you have no onboard battery, and it's unknown if you have a good UPS. Without them, Write Back can corrupt your array, as JD mentioned.
#15 - November 13, 2012, 05:40 AM
Rookie (Join Date: Nov 2012; Posts: 9)

Quote:
Originally Posted by Perineum View Post
Looks like in the first picture your write through should be write back. [...]
Yeah, I couldn't change it during the array setup, don't know why. The firmware was the latest. I'm worried about the background initialization of the array; it's stuck at 0%.
#16 - November 13, 2012, 06:34 AM
BlueByte, Allstar (Join Date: Feb 2011; Location: Maynooth; Posts: 515)

Damn that's slow, I didn't even think an HBA could initialize that slowly. I have an HBA in my desktop for RAID 0, but use ROCs (RAID-on-chip controllers) for all my servers. It does take a while to initialize, and I break my arrays up into 4-disk RAID 5s when I can, out of preference. A week to initialize is another reason in my mind to keep arrays a more manageable size.
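For a rough sense of why initialization takes days: background init has to touch every sector of the array, and controllers throttle it so the array stays usable in the meantime. A back-of-envelope sketch (the 30 MB/s throttled rate is an assumed figure, not something measured from this card):

```python
# Back-of-envelope estimate of background initialization time.
# The throttled rate is a guess; real controllers and workloads vary widely.

capacity_tb = 19      # usable array size, as in this thread
throttled_mb_s = 30   # assumed background-init throughput while the array is in use

total_mb = capacity_tb * 1_000_000          # TB -> MB (decimal units)
hours = total_mb / throttled_mb_s / 3600    # seconds -> hours
print(f"~{hours / 24:.1f} days at {throttled_mb_s} MB/s")
```

At that assumed rate the estimate lands around a week, which lines up with what the controller is reporting here.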
#17 - November 13, 2012, 07:14 AM
Rookie (Join Date: Nov 2012; Posts: 9)

Quote:
Originally Posted by BlueByte View Post
Damn thats slow, I didn't even think of an HBA initializing that slowly. [...]
I don't get it, why does it take so much time to initialize a RAID5 array? And if I just stay idle in Windows 7, does the initialization process keep running automatically?
#18 - November 13, 2012, 01:31 PM
Rookie (Join Date: Nov 2012; Posts: 9)

Ok guys, I guess this is why it's so slow!

[screenshot of the controller's initialization progress]

Almost a whole week to get it done!!!

Can I make it go faster?
#19 - November 13, 2012, 04:06 PM
JD, Moderator (Join Date: Jul 2007; Location: Toronto, ON; Posts: 6,817)

Nope, nothing you can do but wait it out. It'll go "faster" if you stop trying to access it.

As BlueByte and I have been saying, though, you should really reconsider RAID5 at such a size in a non-enterprise environment. You're using consumer hard drives that are prone to failure, and one drive loss means you're going to have to suffer through this whole initialization process all over again.
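To illustrate why the consumer-drive concern compounds with array size: even if each individual drive is fairly reliable, the chance that *some* drive in an 8-drive array fails within a year is much higher. The 5% annual failure rate below is an assumed, illustrative figure, not a spec for any particular drive:

```python
# Probability that at least one of N drives fails within a year,
# assuming independent failures and an assumed 5% annual failure rate.

afr = 0.05   # assumed annual failure rate per drive (illustrative only)
drives = 8

p_any_failure = 1 - (1 - afr) ** drives
print(f"P(at least one drive failure per year) = {p_any_failure:.1%}")
```

With those assumptions the array sees roughly a one-in-three chance of a drive failure per year, and each failure means a degraded array plus a multi-day rebuild.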
#20 - November 13, 2012, 04:43 PM
Perineum, Hall Of Fame (Join Date: Mar 2009; Location: Surrey, B.C.; Posts: 4,026)

Every time I add a drive to my setup it takes 4 days, and I suspect it'll take longer the more drives I add.

It's a RAID 6.

I suspect you couldn't set write back because of your lack of battery....