Introduction

"All good things come in pairs": we have two eyes, two ears, two CPU cores, dual GPU configs. So why not link up two storage devices? When RAID was first conceived, it had a clear business-minded goal: increase redundancy without impacting performance too much. But with RAID chips becoming more affordable, we have been playing around with RAID on desktop systems for many years now.
RAID 0 is what it's all about on desktop systems when you want the highest performance; of course, you always run a big risk of data loss should one of the members in the RAID 0 array decide to stop working. With ye 'ol HDDs, which have moving parts and spinning platters, it's only a matter of time before they stop working. When SSDs were introduced they boasted impressive speeds, but also a very high MTBF (mean time between failures):
SSD: 2 million hours (source), which roughly translates into 228 years, whereas an HDD's rated MTBF works out to about 34 years. Most of us know that 34 years for an HDD is a bit too optimistic: when your HDD is more than 5 years old you can start expecting it to fail; not saying it will, but keep a backup copy in mind. If we apply that same 34:5 derating ratio to the SSD's 228-year figure, it comes to about 34 years. A realistic MTBF of more than 30 years is quite sufficient; you'll most likely run out of rewrite cycles on the NAND flash chips inside first.
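To make that back-of-the-envelope math explicit, here's a quick sketch in Python; note the 300,000-hour HDD MTBF is our assumption, back-calculated from the ~34 years quoted above:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

ssd_mtbf_hours = 2_000_000   # spec-sheet MTBF for the SSD
hdd_mtbf_hours = 300_000     # assumed spec-sheet MTBF for a desktop HDD (~34 years)

ssd_years = ssd_mtbf_hours / HOURS_PER_YEAR   # ~228 years
hdd_years = hdd_mtbf_hours / HOURS_PER_YEAR   # ~34 years

# Real-world derating: if a "34-year" HDD realistically lasts ~5 years,
# apply the same 5/34 ratio to the SSD's paper figure.
realistic_ssd_years = ssd_years * (5 / hdd_years)

print(f"SSD paper MTBF: {ssd_years:.0f} years, derated: {realistic_ssd_years:.1f} years")
# -> SSD paper MTBF: 228 years, derated: 33.3 years
```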
Why is this all important?
Because setting up a RAID 0 array of SSDs carries less risk than one based on HDDs.

Stripe Size - Does size matter?

When we talk about Stripe Size regarding RAID configurations, we're referring to the size of the chunks in which your data is divided between the RAID drive members. If you have a 256KB file and a Stripe Size of 128KB, in a RAID 0 config with 2 members, each member will get one 128KB piece of the 256KB file. The stripe size settings and options depend on the kind of RAID controller you will be using.
Most RAID controllers will allow you to go from a 4KB stripe size up to 64 or 128KB. Our test setup, based around an Intel X58 motherboard with integrated Intel RAID controller, goes up to 128KB.
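To illustrate how a controller deals data across a RAID 0 array, here's a minimal sketch in plain Python (not controller firmware) of round-robin striping, using the 256KB file and 128KB stripe example from above:

```python
def stripe(data: bytes, stripe_size: int, members: int) -> list[list[bytes]]:
    """Split data into stripe_size chunks and deal them round-robin
    across the RAID 0 members, the way a controller distributes writes."""
    chunks = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    drives = [[] for _ in range(members)]
    for idx, chunk in enumerate(chunks):
        drives[idx % members].append(chunk)
    return drives

# A 256KB file with a 128KB stripe size on a 2-drive array:
file_data = bytes(256 * 1024)
drives = stripe(file_data, 128 * 1024, 2)
for n, d in enumerate(drives):
    print(f"drive {n}: {len(d)} chunk(s) of {len(d[0]) // 1024}KB")
# -> each drive holds one 128KB chunk, so both can be read in parallel
```

A smaller stripe size spreads even small files across both drives, while a larger one keeps small files on a single member; that trade-off is exactly what the benchmarks on the next pages measure.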
We won't go through the motions of setting up RAID on your system; if you intend to use it, set aside a bit of spare time to experiment with the different settings. What we've done for you in this article is configure RAID 0 with two SSDs using different Stripe Sizes to see how this impacts performance.
Enabling Hard Disk Write-Back Cache

When setting up a RAID array on an Intel based controller you should install their Matrix Storage Manager. In your Windows OS this tool will allow you to enable write-back cache for your RAID array. In a non-RAID setup you can enable this through Windows' device manager, but once you've defined your RAID array you'll have to use the Matrix Storage Console.
We'll run some tests on the next pages to see if and where there are differences when enabling this software option.
Test Setup

After our real world SSD tests we asked for a second sample of OCZ's Vertex SSD. Armed with two 30GB SSDs, we installed them into a Dell T5500 workstation equipped with an X58 motherboard, a 3GHz Core i7 CPU and 4GB of RAM.
We installed Windows 7 x64 edition and started our tests. The OCZ Vertex drives were flashed with firmware version 1.41, which includes OCZ's garbage collection.
Note: for all RAID 0 tests, Write-Back Cache is enabled unless mentioned otherwise.
Partitions were created using Windows 7's disk manager with the default NTFS file format. This is an important side note, as you can align your partition to match the stripe size you're using, and set the NTFS allocation unit size to match the number of bytes you chose as stripe size. We did a quick test to see how big the impact on performance would be: negligible. But when you are setting up your final config, it's recommended to follow these steps nonetheless.
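As a rough illustration of what "aligned" means here: a partition whose starting offset is an exact multiple of the stripe size keeps file system clusters from straddling two stripes, which would turn one I/O into two. A minimal sketch, assuming Windows 7's default 1MB partition offset:

```python
STRIPE_SIZE = 128 * 1024  # stripe size chosen in the RAID option ROM

def is_aligned(partition_offset_bytes: int, stripe_size: int = STRIPE_SIZE) -> bool:
    """A partition is stripe-aligned when its starting offset is an
    exact multiple of the stripe size."""
    return partition_offset_bytes % stripe_size == 0

# Windows 7's disk manager starts partitions at a 1MB offset by default,
# which is a multiple of any power-of-two stripe size up to 1MB:
print(is_aligned(1024 * 1024))  # True
```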