LSI Internal SATA/SAS 9211-4i 6Gb/s PCI-Express 2.0 RAID Controller Card - Review



Price: $179.99 @ Newegg

Also required: 3WARE Multi Lane Internal SFF-8087 to 4X Serial ATA Breakout Cable, 0.5M ($17.19 @ NCIX), or a similar SFF-8087 breakout cable.

Overview:

The LSI SAS 9211-4i host bus adapter provides the greatest available throughput to internal server storage arrays through four internal 6Gb/s ports, driving up to 256 SAS and SATA physical devices. This HBA offers dynamic SAS functionality including dual-port drive redundancy and SATA compatibility. Utilizing one internal x4 SFF8087 Mini-SAS connector, the low-profile SAS 9211-4i is an excellent fit for 1U and 2U servers.

The SAS 9211-4i utilizes the embedded CPU in the LSISAS2008 controller to perform Integrated RAID 0, 1, 1E and 10 operations for reliable data protection with high-availability. The result is an ultra-thin, low-overhead device driver communicating over the x4 PCI Express 2.0 host interface. By reducing RAID code overhead, this HBA delivers high-speed READ/WRITE performance, making it perfect for medium to high-capacity internal storage applications.
  • SAS 6Gb/s Compliant: The LSI 9211-4i features hot-pluggable, SATA-compatible SAS (Serial Attached SCSI) ports providing 6Gb/s data transfer rates for optimum performance, efficiency, convenience and flexibility.
  • RAID Support: The LSI 9211-4i supports multi-level RAID configurations including RAID 0, 1, 1E and 10 for better performance, enhanced data security and flexible capacity upgrades.
  • PCI Express 2.0 x4 Interface: The LSI 9211-4i uses a PCI Express 2.0 x4 interface, which provides sufficient throughput and full-duplex operation for enhanced performance (a quick back-of-the-envelope check follows this list).
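
For a rough sense of whether the x4 link really is "sufficient" (my own back-of-the-envelope math, not LSI's numbers): PCIe 2.0 gives about 500MB/s of usable bandwidth per lane after 8b/10b encoding, and each 6Gb/s port tops out around 600MB/s after the same encoding overhead. A quick sketch:

[CODE]
# Back-of-the-envelope check (not from LSI's spec sheet) of whether a
# PCIe 2.0 x4 link can keep up with four 6Gb/s SAS/SATA ports. Both links
# use 8b/10b encoding, so usable throughput is ~80% of the raw line rate.

PCIE2_RAW_GTPS_PER_LANE = 5.0   # PCIe 2.0: 5 GT/s per lane
SAS2_RAW_GBPS_PER_PORT = 6.0    # SAS 6Gb/s per port
ENCODING_EFFICIENCY = 0.8       # 8b/10b encoding overhead

LANES = 4
PORTS = 4

pcie_mb_s = PCIE2_RAW_GTPS_PER_LANE * ENCODING_EFFICIENCY * 1000 / 8 * LANES
sas_mb_s = SAS2_RAW_GBPS_PER_PORT * ENCODING_EFFICIENCY * 1000 / 8 * PORTS

print(f"PCIe 2.0 x4 usable bandwidth: ~{pcie_mb_s:.0f} MB/s")  # ~2000 MB/s
print(f"4x SAS 6Gb/s aggregate:       ~{sas_mb_s:.0f} MB/s")   # ~2400 MB/s
[/CODE]

So the slot could in theory be slightly oversubscribed if all four ports ran flat out, but with spinning drives on the other end that's never going to happen.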
Initial Thoughts:
Well, so far it's installed in my server (Windows Server 2008 R2) on a Maximus Formula (X38). Initially it axed the Intel RAID, and the motherboard was unable to boot from any of the drives attached to the ICH9R. In my scenario I did not want to boot from the RAID card, so I had no need for the motherboard BIOS to detect it. Luckily, when you go into the card's configuration utility just after POST, there's an option that lets you disable the card entirely, or enable it only in the OS, only in the BIOS, or in both. I chose the OS-only option, which allowed me to regain my ICH9R boot support.

Honestly, I have no idea how much offloading this card does. It looks like it means business with the heatsink on it, but who knows. At $180 I'm not expecting too much, and the fact that it lacks RAID5 leads me to think it probably doesn't do much of its own logic. Mainly I just needed the extra ports and didn't want some piece of garbage Silicon Image controller. No harm in having SATA 6Gb/s support either.

Software/Firmware:
Firmware and BIOS updating was effortless; you can do it from within Windows or DOS, and it's pretty much the same as flashing a motherboard BIOS. The only difference here is that the BIOS and firmware are separate, and the firmware comes in two editions: IR and IT. IR stands for Integrated RAID and is probably the option you'd typically want. IT stands for Initiator Target; I'm not entirely sure what that mode is for.

On the software side, it comes with the MegaRAID Storage Manager application, which lets you manage pretty much anything you could want: create arrays, check their status, and so on. A little more advanced than Intel Rapid Storage Technology of course, but still fairly simple to use.

Windows was able to find and install drivers for it automatically, which is always good; it means installing the OS onto it should be effortless. LSI offers driver downloads as well, which I decided to install.

Performance:
I'm running 2x Seagate 7200.12 1TB (CC46 firmware) and 2x Samsung F3 1TB in RAID10 on this card. Maybe not the most ideal setup, but I opted to mix the drives to lessen the chance of them all failing at once. One of each drive was bought a few months ago, and the other two were bought last week.

Running 4x 1TB in RAID10 results in approximately 1.8TB of usable space. In theory it offers the best of both worlds, as you basically have a RAID0 stripe across two RAID1 mirrors. It also shouldn't suffer from major slowness in a degraded state should one drive fail, compared to RAID5. And with the price of HDDs dropping rapidly, why not get 4 drives instead of 3?
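
To make that layout concrete, here's a tiny sketch (my own illustration, not anything the card or MegaRAID Storage Manager actually exposes) of how four 1TB drives end up as roughly 1.8TiB of usable space: pairs are mirrored first, then the mirrors are striped.

[CODE]
# Illustration only: how RAID10 capacity works out for this setup.
# RAID10 = a RAID0 stripe across RAID1 mirrored pairs, so usable space
# is half the raw total. Drive sizes here just mirror the review.

drives_tb = [1.0, 1.0, 1.0, 1.0]     # 2x Seagate 7200.12 + 2x Samsung F3, 1TB each

raw_tb = sum(drives_tb)              # 4.0 TB raw (decimal, as marketed)
mirror_pairs = len(drives_tb) // 2   # 2 RAID1 pairs
usable_tb = raw_tb / 2               # striping the mirrors doesn't add capacity back

# Windows reports binary TiB, which is where the ~1.8 figure comes from
usable_tib = usable_tb * 1e12 / 2**40

print(f"{mirror_pairs} mirrored pairs striped together")
print(f"Usable: {usable_tb:.1f} TB decimal, ~{usable_tib:.2f} TiB as Windows sees it")
[/CODE]

Fault tolerance works out to one drive per mirror: the array survives any single failure, and even two failures as long as they land in different pairs.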

After creating the RAID10 array, it then needs to initialize. This process seems to be rather time consuming and is still in progress. I started around 7PM, and it's now 12PM and I'm at 23%. During this time, the array is not visible to Windows, so you cannot start using it.
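
If you want to guess how long the initialization will take, simple proportional math gets you in the ballpark, assuming it progresses at a roughly constant rate (the elapsed-hours value below is a placeholder, not a measured number):

[CODE]
# Rough ETA estimate for the array initialization, assuming constant progress.
# The percent figure matches what MegaRAID Storage Manager was reporting;
# the elapsed time is hypothetical and should be filled in from your own clock.

percent_done = 23.0    # progress reported so far
elapsed_hours = 5.0    # hypothetical: hours since the init started

total_hours = elapsed_hours / (percent_done / 100.0)
remaining_hours = total_hours - elapsed_hours

print(f"Estimated total init time: ~{total_hours:.1f} h")
print(f"Estimated time remaining:  ~{remaining_hours:.1f} h")
[/CODE]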

Here are some CrystalDiskMark results:

[CrystalDiskMark screenshots]

Real-world usage copying files (mainly 700MB+ files) results in around 100MB/s writes across my Gigabit network from my PC's RAID5 array.
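
That 100MB/s figure is basically the network's ceiling rather than the card's: Gigabit Ethernet tops out at 125MB/s raw before protocol overhead. A quick sanity check (my own numbers, and the overhead percentage is an assumption, not a measurement):

[CODE]
# Sanity check: is ~100 MB/s over Gigabit Ethernet limited by the network
# or by the array? The overhead figure below is an assumption, not measured.

GIGABIT_BPS = 1_000_000_000   # raw Gigabit Ethernet line rate
PROTOCOL_OVERHEAD = 0.10      # assumed TCP/IP + SMB overhead (~10%)

raw_mb_s = GIGABIT_BPS / 8 / 1e6                      # 125 MB/s theoretical
practical_mb_s = raw_mb_s * (1 - PROTOCOL_OVERHEAD)   # ~112 MB/s realistic ceiling

print(f"Theoretical Gigabit throughput: {raw_mb_s:.0f} MB/s")
print(f"Practical ceiling (assumed):    ~{practical_mb_s:.0f} MB/s")
# ~100 MB/s of observed writes sits close to that ceiling, so the network,
# not the RAID10 array, is likely the bottleneck here.
[/CODE]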
