We have an HP SAN with two fiber switches and an MSA2012FC storage array with 12 146GB drives. I am trying to decide how to configure the storage for our VI3 environment and am looking for recommendations.
At this point I am considering two RAID 10 arrays (six 146GB drives each) with either one LUN (220GB) or two LUNs (440GB total) defined per RAID array. The VMs are set up in an HA configuration, so I think it would be best to have two separate RAID arrays rather than one; that way I can split the HA pairs between the RAID arrays. If I create only one RAID 10 array and more than one drive fails in it, we could lose everything. In the future we will be adding another MSA2000 enclosure, which will give us another 12 146GB drives. At that point we can expand each RAID 10 array to 12 drives for more storage space and performance.
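For reference, a quick sketch of the usable-capacity math behind this layout (raw figures only; actual usable space will be a bit lower after VMFS formatting overhead):

```python
# Rough usable-capacity math for the proposed RAID 10 layout.
DRIVE_GB = 146

def raid10_usable(drives, drive_gb=DRIVE_GB):
    """RAID 10 mirrors every drive, so usable space is half the raw total."""
    assert drives % 2 == 0, "RAID 10 needs an even number of drives"
    return (drives // 2) * drive_gb

per_array = raid10_usable(6)             # 438 GB per 6-drive array (~two 220 GB LUNs)
total_now = 2 * per_array                # 876 GB across both arrays today
after_expansion = 2 * raid10_usable(12)  # 1752 GB once each array grows to 12 drives
```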
Does this sound like a good plan? Any advice would be appreciated.
Hello,
RAID 5 is recommended because the array can stay together if one drive fails, and with hot spares this is a good setup. With RAID 10, when one drive fails you are running on the surviving mirror, so that also works.
I like RAID 5, and I set up my drive space based on running 10-12 VMs per LUN, which is about the average number per LUN. It is all about redundancy.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll
Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links
Basically you have to weigh what is more important: space and ease of management, or redundancy. With one large RAID 10 group, if more than one drive fails you could lose everything, but you get more space out of the array and it is easier to manage. With two smaller RAID 10 groups you have more redundancy but more RAID groups/datastores to manage.
If you are concerned about drive failures and have no spares in your RAID 10s to cover them, I would create two RAID 10 groups. Later, if you get more storage, you can always Storage VMotion your VMs and files around.
Also, I'm not sure how your fiber is hooked up, but when possible have your fiber connections going to two different fiber switches to remove any single point of failure at the switch level. ESX only does failover pathing, so if an entire fiber switch fails and all your fiber connections are hooked up to that one switch, your environment will go down. Just a thought.
With 12 drives available at 146GB apiece, I would opt for RAID 5 and increase the amount of storage available instead. I'd create 2 x 5-drive RAID 5 sets and leave two drives as hot spares. Unless you have high I/O requirements, RAID 5 should give you more than adequate performance and won't cost you half of your usable space.
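To make the tradeoff concrete, here is a back-of-the-envelope comparison of the two layouts under discussion (raw capacity only; exact usable space depends on the array):

```python
# Usable capacity: 2 x 5-drive RAID 5 sets (+2 hot spares)
# versus 2 x 6-drive RAID 10 sets (no spares).
DRIVE_GB = 146

def raid5_usable(drives, drive_gb=DRIVE_GB):
    """RAID 5 spends one drive's worth of capacity on parity."""
    return (drives - 1) * drive_gb

def raid10_usable(drives, drive_gb=DRIVE_GB):
    """RAID 10 spends half the drives on mirroring."""
    return (drives // 2) * drive_gb

raid5_plan = 2 * raid5_usable(5)     # 1168 GB usable, 2 drives left as spares
raid10_plan = 2 * raid10_usable(6)   # 876 GB usable, no spares
```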
-KjB
Some observations on this configuration are:
I don't see a hot spare allocated for these arrays. I would recommend a hot spare per disk tray, or per controller if spares can span trays in the same loop.
Consider the rebuild times when deciding whether one or two RAID 10 arrays are warranted, especially since you don't have a hot spare allocated and are in a critical state as soon as you lose one drive in either array.
I am not familiar with the MSA, but consider manually load balancing ownership of the LUNs across the two controllers to increase throughput.
Make some rough calculations on VMs per datastore/LUN when deciding how many datastores or LUNs to have.
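A rough way to turn the VMs-per-LUN rule of thumb into a LUN size. The per-VM size and headroom figures below are assumptions for illustration only, not numbers from this thread; plug in your own averages:

```python
# Back-of-the-envelope LUN sizing from a VMs-per-LUN target.
vms_per_lun = 11    # middle of the 10-12 VMs per LUN rule of thumb
avg_vm_gb = 20      # assumed average space per VM (virtual disks + config)
headroom = 1.25     # assumed ~25% extra for snapshots, VM swap, and growth

lun_gb = vms_per_lun * avg_vm_gb * headroom   # 275 GB per LUN
```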
Since most of you are recommending RAID 5 over RAID 10, have you noticed any VM performance problems using RAID 5?
When we first started experimenting with VMware Server we seemed to achieve significantly better performance with RAID10, but this could have been related to the hardware.
VMware Server and ESX are very different in how they control hardware. I have not noticed performance problems using RAID 5 or RAID 6. But then again, my VM I/O requirements do not include sub-millisecond response times either.
-KjB
The only time I have issues with RAID 5 is when the active write data footprint is larger than the cache limits of the storage array; then you will see performance degrade. This primarily occurs with DB reorg activities.
It's not really a problem, but you need to be aware of it.
> consider manually load balancing ownership of the LUNs
Good point. On the MSA2000 you would create two vdisks and assign each one to a controller. Then create a single volume per vdisk and present it to the server(s).