
Kingston SSDNow E50 240GB SSD Review

AkG

Well-known member
Joined
Oct 24, 2007
Messages
5,270
As the Enterprise solid state storage industry matures, it should come as little surprise to see manufacturers expanding their lineups at a breakneck pace. Instead of one-size-fits-all solutions, several are offering highly specialized models targeted with laser precision at specific consumer groups and scenarios. The latest additions to many manufacturers’ Enterprise lines are SSDs aimed at heavy read scenarios where HET/e-MLC NAND’s durability is not needed and its higher price certainly is not wanted. In layman’s terms, this has led to a renewed push towards value in a segment that’s not normally known for it.

Instead of focusing solely on write resiliency, this budding market niche is concerned with providing a value orientated SSD, tailor-made for read-centric situations, so administrators can retire their aging 10K and 15K RPM hard drives without breaking their annual budget in the process. Kingston may not be the first to recognize and cater to this growing group of consumers, but their SSDNow E50 240GB intends to make up for lost time.

chart.jpg
As you can see, the E50 takes an entirely different approach to satisfying the needs of Enterprise consumers when compared against Intel’s DC S3500. Instead of using ONFi 20nm MLC NAND and an Intel-branded controller like the DC S3500 series, the SSDNow E50 utilizes a SandForce second generation controller. However, unlike its mass market orientated HyperX siblings and their standard grade SF2281, the E50 houses LSI SandForce’s top-of-the-line SF2581.

The SF2581 is very similar to the SF2281 but differs in three noticeable ways. The first is its ability to make use of secondary capacitors for Flush In Flight, which ensures any data in the process of being written is kept intact if there is a sudden, unexpected loss of power.

In this instance the E50 doesn’t rely upon the large capacitors seen in some Intel models but instead uses multiple tantalum super-capacitors. The implementation of multiple small super-caps is a more robust option since, even if some fail, there should still be more than enough charge left in the remaining capacitors to ensure Flush In Flight can complete.

The downside to this design boils down to cost; redundancy is expensive. However, with an online price of $190 for the 240GB version, Kingston obviously had no problems finding ways to fit such a superior setup into the E50’s budget.

board2_sm.jpg
LSI’s SF2581 also supports enhanced SMART diagnostics, which give administrators the ability to log, debug and diagnose potential issues before they become catastrophic failures. This is extremely beneficial as it grants enhanced control over the SSD or SSD array on a day-to-day basis and can be used to head off data loss before it happens.
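As a rough illustration of the kind of day-to-day logging this enables, below is a minimal sketch that polls a drive's SMART attributes and appends them to a log. It assumes a Linux host with smartmontools installed; the device path, poll interval and attribute parsing are hypothetical and not specific to Kingston's or LSI's own tooling.

[CODE]
# smart_log.py -- a minimal SMART logging sketch, assuming smartmontools is
# installed; device path, interval and log file name are all hypothetical.
import csv
import os
import subprocess
import time

DEVICE = "/dev/sda"          # assumed device path of the SSD being monitored
LOG_FILE = "smart_log.csv"
POLL_SECONDS = 3600          # log once per hour

def read_attributes(device):
    """Parse `smartctl -A` output into an {attribute_name: raw_value} dict."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    attrs = {}
    for line in out.splitlines():
        parts = line.split()
        # Attribute rows start with a numeric ID and end with the raw value.
        if parts and parts[0].isdigit():
            attrs[parts[1]] = parts[-1]
    return attrs

if __name__ == "__main__":
    while True:
        row = {"timestamp": time.strftime("%Y-%m-%d %H:%M:%S")}
        row.update(read_attributes(DEVICE))
        new_log = not os.path.exists(LOG_FILE) or os.path.getsize(LOG_FILE) == 0
        with open(LOG_FILE, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(row))
            if new_log:
                writer.writeheader()
            writer.writerow(row)
        time.sleep(POLL_SECONDS)
[/CODE]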

The last way the SF2581 differs from the SF2281 is arguably the most important. Since it is meant for the highly demanding server market, it undergoes more intensive factory testing than standard retail channel models. Because of this enhanced factory testing, each SF2581 is rated for an impressive 10 million hour MTBF instead of the 2 million hour rating of the SF2281.
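To put those MTBF figures in perspective, here is the usual conversion from an MTBF rating to an annualized failure rate; the constant failure rate (exponential) model is a common rule of thumb rather than anything Kingston or LSI publish.

[CODE]
# Convert an MTBF rating into an expected annualized failure rate (AFR),
# assuming the standard constant-failure-rate (exponential) model.
import math

HOURS_PER_YEAR = 24 * 365

for controller, mtbf_hours in (("SF2281", 2_000_000), ("SF2581", 10_000_000)):
    afr = 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)
    print(f"{controller}: {mtbf_hours:,} hour MTBF -> {afr:.2%} AFR")
[/CODE]

Either figure works out to a tiny chance of a single drive failing in a given year; the higher rating matters most once dozens or hundreds of drives are deployed in the same racks.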

While the SF2581 does share most of the SF2281’s other high level features, it only supports 128-bit AES encryption rather than the 256-bit encryption found on Intel’s DC S3500 and DC S3700. Encryption strength is rarely a priority in the server market, but the weaker implementation could curtail the E50’s usefulness in some scenarios.

board1_sm.jpg
Besides using a different controller and capacitor layout than its competitors, Kingston has also helped distinguish the E50 through its choice of NAND. This new breed of enterprise drive usually uses 20nm ONFi MLC NAND, but Kingston’s decision to go with sixteen of the more powerful – yet still very reasonably priced – 19nm Toshiba Toggle Mode ICs could pay dividends. These modules are the same type used in a Corsair Neutron GTX or Kingston SSDNow V300, but tend to be more highly binned than what you would find inside a typical consumer grade model. So while they do have a greatly reduced lifespan of 3,000 P/E cycles compared to Intel HET NAND, they offer greater small file performance potential than either ONFi or standard HET NAND.

These three main features have allowed Kingston to create an Enterprise drive that is unlike most value oriented Enterprise SSDs. This unique approach is what Kingston is counting on to help differentiate the E50 from the rapidly expanding competition.

 

Testing Methodology



Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read / write speed and data burst speed to take into account. There is also the SATA controller on your motherboard and how well it works with SSDs and HDDs to think about. For the best results you really need a dedicated hardware RAID controller with dedicated RAM for drives to shine. Unfortunately, most people do not have the time, inclination or funds to do this. For this reason our testbed is a more standard motherboard with no mods or high end gear added to it. This helps replicate what the end user’s experience will be like.

Even when the hardware issues are taken care of, the software itself will have a negative or positive impact on the results. As with the hardware side of things, to obtain the absolute best results you do need to tweak your OS setup; however, just like with the hardware, most people are not going to do this. For this reason our standard OS setup is used. Except for the Vista load time test, we have done our best to eliminate this issue by testing the drive as a secondary drive, with the main drive being a Phoenix Pro 120GB solid state drive.

For synthetic tests we used a combination of ATTO Disk Benchmark, HDTach, HD Tune, Crystal DiskMark, IOMeter, AS-SSD and PCMark Vantage.

For real world benchmarks we timed how long a single 10GB rar file took to copy to and then from the devices. We also used 10GB of small files (from 100KB to 200MB in size), with a total of 12,000 files in 400 subfolders.

For all testing an Asus P8P67 Deluxe motherboard was used, running Windows 7 Ultimate 64-bit (or Vista for the boot time test). All drives were tested in AHCI mode using Intel RST 10 drivers.

All tests were run four times and the averaged results are presented.

In between each test suite run (with the exception of IOMeter, where this was done after every run), the drives were cleaned with either HDDerase, SaniErase, OCZ SSD Toolbox or Intel SSD Toolbox and then quick formatted to make sure they were in optimum condition for the next test suite.


Steady-State Testing

While optimum condition performance is important, knowing exactly how a given device will perform after days, weeks and even months of usage is actually more important for most consumers. For home users and workstation consumers our non-TRIM performance test is more than good enough. Sadly, it is not up to par for Enterprise solid state storage devices and these most demanding of consumers.

Enterprise administrators are more concerned with the realistic long term performance of any device rather than its brand new performance, as downtime is simply not an option. Even though an Enterprise device will have many techniques for obfuscating and alleviating a degraded state (e.g. idle time garbage collection, multiple controllers, etc.), there does come a point where these techniques fail to counteract the negative results of long term usage in an obviously non-TRIM environment. The point at which performance falls and then plateaus at a lower level is known as the “steady state” performance, or the “degraded state” in the consumer arena.

To help all consumers gain a better understanding of how much performance degradation there is between “optimal” and “steady state”, we have included not only optimal results but have also rerun tests after first degrading a drive until it plateaus and reaches its steady state performance level. These results are labelled as “Steady State” and can be considered as such.

While the standard for steady state testing is actually 8 hours, we feel this is not quite pessimistic enough and have extended the pre-test run to a full ten hours before testing actually commences. The pre-test or “torture test” consists of our standard non-TRIM performance test: to quickly induce a steady state we ran ten hours of IOMeter set to 100% random, 100% write, 4K chunks of data at a queue depth of 64 across the entire array’s capacity. At the end of this test, the IOMeter file is deleted and the device is then tested using a given test section’s unique configuration.
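For readers who want a feel for what that torture run looks like in practice, below is a simplified single-threaded sketch of the same access pattern (100% random, 100% write, 4K blocks across a large test area). It is only an analogue of what IOMeter does: the real pre-test runs at a queue depth of 64 against the device, and the file path, test size and duration here are assumptions.

[CODE]
# precondition.py -- a simplified, hypothetical analogue of the ten hour
# IOMeter torture run described above. Single-threaded, so it does not
# reproduce the real test's queue depth of 64.
import os
import random
import time

TEST_FILE = r"E:\precondition.bin"   # assumed path on the drive under test
TEST_SIZE = 200 * 1024**3            # cover most of a 240GB drive
BLOCK = 4096                         # 4K transfer size
DURATION_HOURS = 10                  # the extended pre-test length

def precondition():
    blocks = TEST_SIZE // BLOCK
    payload = os.urandom(BLOCK)      # incompressible data, worst case for SandForce
    deadline = time.time() + DURATION_HOURS * 3600
    with open(TEST_FILE, "wb") as f:
        f.truncate(TEST_SIZE)        # pre-allocate the full test area
        while time.time() < deadline:
            f.seek(random.randrange(blocks) * BLOCK)   # 100% random offsets
            f.write(payload)                           # 100% writes
            f.flush()
            os.fsync(f.fileno())     # keep writes from pooling in system RAM

if __name__ == "__main__":
    precondition()
[/CODE]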


Processor: Core i5 2500
Motherboard: Asus P8P67 Deluxe
Memory: 16GB G.Skill TridentX 2133
Graphics card: Asus 5550 passive
Primary Hard Drive: Intel 520 240GB
Power Supply: XFX 850

Special thanks to G.Skill for their support and for supplying the TridentX RAM.

Below is a description of each SSD configuration we tested for this review:

Intel 910 800GB (Single Drive) HP mode: A single LUN of the Intel 910 800GB in its High Performance Mode

Intel 910 800GB (RAID 0 x2) std mode: Two of the Intel 910 800GB SSD LUNs in Standard Mode Configured in RAID 0

Intel 910 800GB (RAID 0 x2) HP mode: Two of the Intel 910 800GB SSD LUNs in High Performance Mode Configured in RAID 0

Intel 910 800GB (RAID 0 x4) std mode: All four of the Intel 910 800GB SSD LUNs in Standard Mode Configured in RAID 0

Intel 910 800GB (RAID 0 x4) HP mode: All four of the Intel 910 800GB SSD LUNs in High Performance Mode Configured in RAID 0

Intel DC S3700 200GB: A single DC S3700 200GB drive

Intel DC S3700 800GB: A single DC S3700 800GB drive

Intel DC S3700 200GB (RAID 0): Two DC S3700 200GB drives Configured in RAID 0

Intel DC S3700 800GB (RAID 0): Two DC S3700 800GB drives Configured in RAID 0

Intel 710 200GB (RAID 0): Two 710 200GB drives Configured in RAID 0

Kingston E50 240GB: A single E50 240GB drive

Kingston E50 240GB (RAID 0): Two E50 240GB drives Configured in RAID 0
 

ATTO Disk Benchmark



The ATTO disk benchmark tests the drive’s read and write speeds using gradually larger file sizes. For these tests, ATTO was set to run from its smallest to largest value (0.5KB to 8192KB) and the total length was set to 256MB. The test program then spits out an extrapolated performance figure in megabytes per second.

atto_w.jpg

atto_r.jpg

Due to the unique way in which SandForce drives read and write data, they tend to gain an artificial performance boost in ATTO. With that being said, there is excellent scaling of both read and write performance in the RAID configurations.

In write scenarios an Intel DC S3700 is able to overcome that handicap and nearly match the E50’s abilities. Obviously the E50’s performance has been tuned via firmware for reads at the expense of writes, which is completely acceptable given the niche it is meant to fill.
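That artificial boost comes from ATTO’s highly compressible test data, which SandForce controllers compress before it ever reaches the NAND. The quick sketch below contrasts a repeating, ATTO-style pattern with random data to show how large the gap can be; the exact pattern ATTO uses internally is an assumption here.

[CODE]
# Compare how well an ATTO-style repeating pattern compresses versus the
# incompressible random data used by Crystal DiskMark and AS-SSD.
import os
import zlib

BLOCK = 1024 * 1024                 # 1MB sample block
patterned = bytes(BLOCK)            # all zeroes: trivially compressible
random_data = os.urandom(BLOCK)     # effectively incompressible

for name, data in (("patterned", patterned), ("random", random_data)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compresses to {ratio:.1%} of its original size")
[/CODE]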
 

Crystal DiskMark / AS-SSD

Crystal DiskMark


Crystal DiskMark is designed to quickly test the performance of your drives. Currently, the program allows you to measure sequential and random read/write speeds, and lets you set the number of test iterations to run. We left the number of tests at 5 and the size at 100MB.

cdm_w.jpg

cdm_r.jpg


AS-SSD


AS-SSD is designed to quickly test the performance of your drives. Currently, the program allows you to measure sequential and small 4K read/write speeds, as well as 4K speeds at a queue depth of 64. While its primary goal is to accurately test solid state drives, it does equally well on all storage mediums; it just takes longer to run since each test reads or writes 1GB of data.

asd_w.jpg

asd_r.jpg

Crystal DiskMark and AS-SSD both use data that is not as easily compressed as ATTO’s, so the SandForce controller cannot boost performance beyond its true capabilities. With that being said, the inexpensive E50 does post some very impressive read numbers that do scale nicely from single to double to quadruple RAID 0 configurations. For the read heavy scenarios the E50 is designed for, it will give enterprise consumers the luxury of saving a noticeable portion of their budget over what a DC S3500 or DC S3700 setup would cost.
 

IOMETER: Our Standard Test



IOMeter is heavily weighted towards the server end of things, and since we here at HWC are more end user centric, we set up and judge the results of IOMeter a little bit differently than most. To test each drive we ran 5 test runs per device (1, 4, 16, 64 and 128 queue depth), each test having 8 parts, each part lasting 10 minutes with an additional 20 second ramp up. The 8 subparts were set to run 100% random, 80% read 20% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished IOMeter spits out a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted towards single user and workstation environments.
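In other words, the score for each queue depth is simply the mean of the eight per-block-size IOPS figures IOMeter reports. A minimal sketch of that calculation is below; the IOPS numbers are hypothetical placeholders, not measured results from this review.

[CODE]
def queue_depth_score(subtest_iops):
    """Average the per-block-size IOPS figures from one IOMeter report:
    sum the subtest scores, then divide by their count."""
    return sum(subtest_iops) / len(subtest_iops)

# Hypothetical IOPS figures for the eight block sizes (512B through 64K):
example = [41250, 40890, 39760, 38540, 33210, 26480, 18930, 11870]
print(f"Score for this queue depth: {queue_depth_score(example):,.0f} IOPS")
[/CODE]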

iom_std.jpg

Even with a workload that mixes in a significant proportion of writes, the Kingston SSDNow E50 does an admirable job of holding its own against much more expensive models. Naturally, as the queue depths get deeper the lack of onboard cache does start to hinder throughput, and as the number of drives in the array increases, the gap between the E50 and the Intel competition begins to widen noticeably.

At low queue depths this drive can easily hold its own in single, dual and quite possibly four drive RAID 0 configurations. It is also worth noting that the E50 scales nicely.

With that being said, the E50 is not really intended for workstation orientated scenarios, as the relatively short lifespan of its MLC NAND does come into play. Simply put, with so many write requests the lifespan of an E50 could be shorter than any e-MLC, HET MLC or SLC based model, at least on paper. In practice there likely won’t be any noticeable difference unless the drive is used continually for over a decade.
 

IOMETER: File, Web & Email Server Testing

IOMETER: File Server Test


To test each drive we ran 6 test runs per device (1, 4, 16, 64, 128 and 256 queue depth), each test having 6 parts, each part lasting 10 minutes with an additional 20 second ramp up. The 6 subparts were set to run 100% random, 75% read 25% write, testing 512B, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished IOMeter spits out a report in which each of the 6 subtests is given a score in I/Os per second. We then take these 6 numbers, add them together and divide by 6. This gives us an average score for that particular queue depth that is heavily weighted towards file server usage.

iom_file.jpg

Thanks to its obviously tuned firmware and a controller which can boost read performance, the SSDNow E50 can easily outperform nearly every other Enterprise drive we have tested to date. However, as the queue depths get deeper, the differences get smaller and smaller. In all likelihood a four drive RAID array of DC S3700s would be able to match this level of performance at a queue depth of 64. This is still rather impressive, as the Intel drives are much more expensive and Intel’s cheaper DC S3500 is not able to match this drive at any queue depth. An enterprise consumer could add additional E50 drives to an array and still stay within their budget, rather than turning to DC S3500 drives.


IOMETER: Web Server Test


The goal of our IOMeter Web Server configuration is to help reproduce a typical heavily accessed web server. The majority of the typical web server’s workload consists of dealing with random small file size read requests.

To replicate such an environment we ran 6 test runs per device (1, 4, 16, 64, 128 and 256 queue depth), each test having 8 parts, each part lasting 10 minutes with an additional 20 second ramp up. The 8 subparts were set to run 100% random, 95% read 5% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished IOMeter spits out a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted towards web server environments.


iom_web.jpg

Since this test consists almost exclusively of read operations, the excellent results of the E50 are not all that unexpected. Obviously, these drives would make excellent web server storage devices. At this point there is very little reason to opt for expensive 15K RPM hard drives in such scenarios, as the E50’s performance and durability will be more than enough to satisfy them.


IOMETER: Email Server Test


The goal of our IOMeter Email Server configuration is to help reproduce a typical corporate email server. Unlike most servers, the typical email server’s workload is split evenly between random small file size read and write requests.

To replicate such an environment we ran 5 test runs per drive (1, 4, 16, 64 and 128 queue depth), each test having 3 parts, each part lasting 10 minutes with an additional 20 second ramp up. The 3 subparts were set to run 100% random, 50% read 50% write, testing 2K, 4K and 8K chunks of data. When each test is finished IOMeter spits out a report in which each of the subtests is given a score in I/Os per second. We then take these numbers, add them together and divide by 3. This gives us an average score for that particular queue depth that is heavily weighted towards email server environments.


iom_email.jpg

As with our standard ‘workstation’-centric IOMeter results, the email results are decent, but due to the 50/50 read/write split we would be hesitant to recommend anything other than HET / e-MLC based drives as the absolute minimum here. In such scenarios the E50’s NAND would quickly use up its 3,000 P/E cycles. This potential limitation can be mitigated to a great extent by putting more drives into the array, but unless your budget is very tight and you need every additional gigabyte of capacity that can be bought, the risk versus reward equation does not favor the E50 series.

By the same token, we said similar things about Intel’s DC S3500 series, and as far as MLC drives go, the combination of good performance and a great price does make the E50 the slightly more optimal choice. More importantly, only you can decide whether its limitations rule the E50 out as a viable candidate for these scenarios. After all, a good argument could be made that an inexpensive price combined with a good backup regime and online RAID redundancy could make the E50 a worthwhile option. Not every server is expected to last ten years before upgrades occur, and at this price point the E50s are a lot easier to justify decommissioning in favor of newer, faster models than the more expensive alternatives are.
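To put the endurance concern into rough numbers, here is a back-of-the-envelope lifespan estimate based on the 3,000 P/E cycle rating; the write amplification factor and daily write volume are assumptions chosen purely for illustration, not measurements from this review.

[CODE]
# Rough NAND endurance estimate for a 240GB drive rated at 3,000 P/E cycles
# under a sustained, write-heavy workload. Both the write amplification
# factor and the daily host write volume are assumed values.
CAPACITY_GB = 240
PE_CYCLES = 3000
WRITE_AMPLIFICATION = 2.0        # assumed; SandForce compression can push this lower
HOST_WRITES_GB_PER_DAY = 500     # assumed write volume for a busy email server

total_host_writes_tb = CAPACITY_GB * PE_CYCLES / WRITE_AMPLIFICATION / 1000
lifespan_years = total_host_writes_tb * 1000 / HOST_WRITES_GB_PER_DAY / 365

print(f"Approximate rated host writes: {total_host_writes_tb:.0f} TB")
print(f"Estimated lifespan at {HOST_WRITES_GB_PER_DAY} GB/day: {lifespan_years:.1f} years")
[/CODE]

Swap in your own daily write figure; the point is simply that a 50/50 mix at server queue depths can chew through MLC rated for 3,000 cycles far faster than HET or e-MLC NAND rated for many times as many.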
 

Steady State Tests: Standard, File, Web & Email Server

Steady State Testing


While optimum condition performance is important, Enterprise administrators are more concerned with the realistic long term performance of any device, as downtime is simply not an option. Even though an Enterprise device will have many techniques for obfuscating and alleviating a degraded state (e.g. idle time garbage collection, multiple controllers, etc.), there does come a point where these techniques fail to counteract the negative results of long term usage in an obviously non-TRIM environment. The point at which performance falls and then plateaus at a lower level is known as the “steady state” performance, or the “degraded state” in the consumer arena.

While the standard for steady state testing is actually 8 hours, we feel this is not quite pessimistic enough and have extended the pre-test run to a full ten hours before testing actually commences. The pre-test or “torture test” consists of our standard non-TRIM performance test: to quickly induce a steady state we ran ten hours of IOMeter set to 100% random, 100% write, 4K chunks of data at a queue depth of 64 across the entire array’s capacity. At the end of this test, the IOMeter file is deleted and the device is then tested using a given test section’s unique configuration.


IOMETER: Our Standard Steady State Test


iom_s_std.jpg


IOMETER: File Server Steady State Test


iom_s_file.jpg


IOMETER: Web Server Steady State Test


iom_s_web.jpg


IOMETER: Email Server Steady State Test


iom_s_email.jpg


As you can see, once the E50 is placed into these more realistic scenarios its performance deflates faster than a helium balloon with a pin stuck in it.

We must admit that the results up to now seemed almost too good to be true, and we get no pleasure in showing how much of a performance drop you can expect from the SSDNow E50 series after using them for a few months. The SF2581 controller is getting rather long in the tooth, and even with Enterprise orientated firmware it simply cannot keep the drive from entering a degraded state. Once in such a state, the lack of extra over-provisioning – another critical omission – means it will take long intervals of idle time to return to higher performance levels.

This radical variance in performance underscores the difference in philosophies between Intel and SandForce. Intel places more emphasis on consistency while SandForce puts more stock in read performance. Both approaches have their strengths and weaknesses. Even in a degraded state the E50’s read performance isn’t noticeably impacted and, while the controller will use every spare cycle it can steal to work its way out of the degraded state, the E50 is still a very good option for read-only scenarios.

Considering this drive is designed, advertised and intended for read heavy scenarios, this degradation of write performance is forgivable. Though to be honest, it is only when the reduced asking price is taken into consideration that it becomes easier to overlook. Simply put, we would never expect these drives to be used inside a workstation or email server, but they would do very well in web and possibly even file servers. Equally important, even with this major degradation of performance the E50 leaves enterprise hard drives in its dust. Unless you absolutely need the capacity HDDs offer, there is now even less reason to opt for them over value orientated Enterprise SSDs like the E50.
 

ADOBE CS5 / Firefox / Real World Data Transfers

Adobe CS5 Steady State Load Time


Photoshop is a notoriously slow loading program under the best of circumstances, and while the latest version is actually pretty decent, when you add in a bunch of extra brushes and such you get a really great torture test which can bring even the best of the best to their knees. To make things even more difficult, we have first placed the devices into a steady state so as to recreate the absolute worst case scenario possible.

adobe.jpg

Since these tests are heavily weighted towards read performance over writes, the degraded performance of the SandForce SF2581 controller is not of critical importance. The Kingston SSDNow E50 is still a very solid performer, and the more drives you use in a RAID array the better it becomes.


Firefox Portable Offline Steady State Performance


Firefox is notorious for being slow to load tabs in offline mode once the number of pages to be opened grows larger than a dozen or so. We can think of few worse scenarios than having 100 tabs set to reload in offline mode upon Firefox startup, but this is exactly what we have done here.

By having 100 pages open in Firefox Portable, setting Firefox to reload the last session upon next start and then setting it to offline mode, we are able to easily recreate a worst case scenario. Since we are using Firefox Portable, all files are conveniently located in one place, making it simple to repeat the test as necessary. To ensure repeatability, before touching the Firefox Portable files we backed them up into a .rar file and only extracted a copy of it to the test device.

As with the Adobe test, we have first placed the devices into a steady state.


ff.jpg


Real World Data Transfers


No matter how good a synthetic benchmark like IOMeter or PCMark is, it cannot really tell you how the device will perform in “real world” situations. All of us here at Hardware Canucks strive to give you the best, most complete picture of a review item’s true capabilities, and to this end we run timed data transfers to give you a general idea of how its performance relates to real life use. To help replicate worst case scenarios we transfer a 10.00GB contiguous file and a folder containing 400 subfolders with a total of 12,000 files varying in size from 100KB to 200MB (10.00GB total).

Testing includes transferring to and from the devices, using MS RichCopy and logging the performance of the drive.
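For readers who want to replicate something similar without RichCopy, the sketch below times a recursive folder copy in the same spirit; the source and destination paths are hypothetical, and a dedicated copy tool will report far more detailed statistics than this does.

[CODE]
# timed_copy.py -- a simplified stand-in for the timed RichCopy transfers:
# copy the small-file test set to the drive under test and log the result.
import shutil
import time
from pathlib import Path

SOURCE = Path(r"D:\testdata\small_files")   # hypothetical 12,000 file test set
DEST = Path(r"E:\copy_test\small_files")    # hypothetical folder on the test drive

start = time.perf_counter()
shutil.copytree(SOURCE, DEST)
elapsed = time.perf_counter() - start

copied_bytes = sum(p.stat().st_size for p in DEST.rglob("*") if p.is_file())
print(f"Copied {copied_bytes / 1e9:.2f} GB in {elapsed:.1f} s "
      f"({copied_bytes / 1e6 / elapsed:.0f} MB/s)")
[/CODE]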

Here is what we found.


copy_lg.jpg

copy_sm.jpg

The SSDNow E50 once again posts some excellent read numbers, but performance does noticeably drop off in write orientated tasks. However, considering this drive breaks the critical one dollar per gigabyte mark by a noticeable margin, it actually becomes harder and harder to recommend Kingston’s consumer grade HyperX line. After all, those onboard super-capacitors backstopped by simply superior NAND make a very compelling argument in the E50’s favor.
 

Conclusion



The new Kingston E50 240GB drive has a lot going in its favor. It makes use of the best, most heavily tested version of SandForce’s second generation controller, uses custom high performance firmware and has an intelligent super-capacitor design with built-in redundancy to ensure Flush In Flight always completes. These are impressive additions for a drive that aims to compete with the industry’s established players.

On the actual storage front, the 19nm Toshiba Toggle Mode MLC NAND is some of the best around from a bang for buck standpoint which allowed Kingston to hit a reasonable asking price. Do all of these points make the E50 a perfect drive for everyone? No, because it wisely targets a narrow yet quickly expanding segment: the read-centric scenarios of file and web servers.

When compared against some higher-priced enterprise drives, the E50 features a drastically reduced write lifespan. However, for a model which is meant to replace hard drives in those aforementioned read-heavy environments, we can easily overlook this.

In pristine condition the E50 is a force to be reckoned with, particularly when its excellent RAID scaling comes into full effect. So much so that it can compete directly against heavyweights like Intel’s DC S3700 800GB drive. Unfortunately, our Steady State tests revealed that Kingston didn’t provide enough over-provisioning for the SandForce controller and, as a result, the drive slows down quite drastically as time goes on. 16GB worth of NAND set aside may sound like a lot, but it is the same amount a consumer grade SandForce drive reserves for far less demanding home usage scenarios. Under the harsher operating conditions most enterprise storage devices will find themselves in, it is obviously not enough.
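For context, the arithmetic behind that complaint is simple, using the figures implied above (240GB exposed to the user out of roughly 256GB of NAND); write-focused enterprise drives commonly hold back a considerably larger share.

[CODE]
# Over-provisioning implied by the figures above: 16GB of NAND held back
# on a 240GB user-visible drive. (The GiB vs GB distinction adds a little
# extra hidden spare area, but the ratio stays modest either way.)
raw_gb, user_gb = 256, 240
spare_gb = raw_gb - user_gb
print(f"{spare_gb} GB spare area = {spare_gb / user_gb:.1%} over-provisioning")
[/CODE]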

The E50 is a great value for its intended market. It isn’t meant to compete directly against the DC S3700s of this world, but rather to offer a fitting alternative for system admins looking for targeted performance in specific usage scenarios. While we would be extremely hesitant to recommend the SSDNow E50 for workstation or email server storage duties, for a web server, virtual machine host or even a typical file server it is a great budget-oriented choice.

DGV.gif
 
