VMware Cloud Community
stilmon
Contributor

Best RAID Practice

Hello,

I'm somewhat new to the VMware world. Here is a breakdown of our system.

We have 3 Dell R610 servers on 1 MD3200 with 15K RPM drives, 24 drives total.

We currently have 40 VMs. Is it best practice to arrange the drives in sets of RAID 0 for best performance?

For example, 12 sets of RAID 0 = 24 drives, then we layer our VMs on the RAID 0 sets, 3 on each for example? Or is it better to make them all RAID 5? Our applications require high disk I/O.

8 Replies
JarryG
Expert

Putting everything on RAID 0 is, IMHO, not best practice, because it gives you no redundancy whatsoever. If any single drive in a stripe is lost, all data on that RAID 0 stripe is lost too. So I'd recommend using at least RAID 5, or RAID 10 (if you can spare some disk space). If all your drives are identical, you could set one or two aside as global hot-spares and use the rest for a few RAID 5 spans.

If your apps are I/O-sensitive, you should use some kind of serious caching (~1 GB of cache on the RAID controller is not enough). I think Dell uses re-branded LSI controllers, so something like LSI CacheCade (caching on SSD) might come in handy. Another nice feature is "hybrid RAID 1", a RAID 1 mirror of SSD + HDD supported by some controllers: the SSD serves I/O, and the HDD is slowly synchronised in the background. Of course, it depends on your budget...

_____________________________________________ If you found my answer useful please do *not* mark it as "correct" or "helpful". It is hard to pretend being noob with all those points! 😉
stilmon
Contributor

I'm sorry, I meant to say RAID 1.

Josh26
Virtuoso

stilmon wrote:

I'm sorry, I meant to say RAID 1.

Dear God, I'm glad you said that. I shudder every time someone here recommends RAID 0 "for performance".

Honestly, best practice comes down to your requirements. Personally, I favour RAID 5 because on modern hardware the performance penalty over RAID 1 is negligible, while the storage space RAID 1 wastes is significant.

There's plenty of legacy advice about RAID 5 being too slow that someone is bound to bring up. I can only suggest trying it for yourself. Everyone says "my application requires high I/O", but there's a huge amount of variation in just how high they need it. If your I/O really were that high, the entire array would be SSD, and no "best practice RAID level" would work around that.

JarryG
Expert

Well, then the answer is simple: if you can accept "losing" half of your total disk capacity, use RAID 10 (I'd use 6 or 8 drives per RAID 10 span). If you want to use your drives' capacity more efficiently, use RAID 5 (or RAID 6).

No matter how many disks you use, RAID 10 gives you better performance than RAID 5. Of course, you pay for it in "wasted" disk space...
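To make the capacity side of that tradeoff concrete, here is a minimal sketch. The drive count comes from the thread (24 drives); the 600 GB drive size is my assumption, not something the poster stated, and the formulas are the standard single-array overheads (mirroring halves capacity, RAID 5/6 lose one/two drives' worth of parity).

```python
# Rough usable-capacity comparison for the RAID 10 vs RAID 5/6 tradeoff above.
# Assumed: 24 identical drives of 600 GB each (drive size is hypothetical).

def usable_capacity(drives: int, size_gb: int, level: str) -> int:
    """Usable capacity in GB for a single array at the given RAID level."""
    if level == "raid10":
        return drives // 2 * size_gb      # half the drives hold mirror copies
    if level == "raid5":
        return (drives - 1) * size_gb     # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_gb     # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid10", "raid5", "raid6"):
    print(level, usable_capacity(24, 600, level), "GB")
# raid10 7200 GB, raid5 13800 GB, raid6 13200 GB
```

With these numbers, RAID 10 gives up roughly 6.6 TB of usable space compared with a single RAID 5 span, which is the "wasted" disk space the post refers to.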

vmnomad
Enthusiast

Do you have an approximate figure for that high disk I/O? Read or write I/Os?

RAID 5 is fine for read I/O, but it carries a performance penalty on writes.

Generally, you need to estimate the IOPS for your 40 VMs and then compare that against the IOPS you would get from different RAID 5 and RAID 1 combinations. My personal preference is to consolidate all VMs onto as few datastores as possible, as long as your storage supports VAAI and you therefore won't face SCSI reservation conflicts.
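The estimate suggested above can be sketched as a back-of-envelope calculation. The per-drive figure (~180 IOPS for a 15K RPM disk) and the 70 % read workload are my assumptions for illustration; the write-penalty factors (2 for RAID 1/10, 4 for RAID 5) are the standard ones.

```python
# Back-of-envelope host IOPS for an array, accounting for RAID write penalty.
# Assumed: 15K RPM SAS drive ~ 180 IOPS; 22 data drives (2 kept as hot spares);
# 70% read workload. All workload numbers here are hypothetical.

def array_iops(drives: int, per_drive_iops: int,
               write_penalty: int, read_fraction: float) -> float:
    """Host-visible IOPS: reads cost 1 back-end IO, writes cost `write_penalty`."""
    raw = drives * per_drive_iops
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

raid10 = array_iops(22, 180, 2, 0.70)   # ~3046 host IOPS
raid5  = array_iops(22, 180, 4, 0.70)   # ~2084 host IOPS
print(f"RAID 10: {raid10:.0f} IOPS, RAID 5: {raid5:.0f} IOPS")
```

Divided across 40 VMs, that is roughly 50-75 IOPS per VM on average, which is why measuring the actual demand first matters: a write-heavy workload widens the RAID 10 advantage, a read-heavy one narrows it.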

VCP-4/5, VCAP4-DCA ID-553, vExpert, CCNP, CCSP, MCSE http://vmnomad.blogspot.com
stilmon
Contributor

Thank you for all the suggestions so far!

JarryG,

So you are saying to do approx. 6-8 drives per RAID 10 array, which gives me about 4 RAID sets.

Would that be faster for the storage array than, say, 12 sets of RAID 1? Is it less work for the storage array?

I ask because normally I would layer VMs on top of each RAID 1 to spread out the load.

JarryG
Expert

Why do I recommend fewer, faster arrays (RAID 10) over a larger number of slower arrays (RAID 1)? Because you might win some performance again. Let me put it this way:

If you have 2 VMs, each with its own RAID 1, each VM gets read/write speed roughly equal to that of a single disk (simplified).

If you have 2 VMs sharing one common RAID 10, each gets about half the array's r/w speed (the same as above) when they access the RAID 10 concurrently. But if only one VM is accessing the array at a particular moment, it gets the full r/w speed (which, for a 4-disk RAID 10, is roughly twice the speed of a single disk).

It is good to spread load. But if you over-partition your resources, you lose flexibility. You can never add them back together and redistribute them according to actual conditions...
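The sharing argument above can be put into numbers. The ~150 MB/s single-disk figure is my assumption, and the model is deliberately simplified in the same way as the post: a RAID 1 streams like one disk, a 4-disk RAID 10 like two, and concurrent VMs split the array evenly.

```python
# Sketch of the dedicated-RAID 1 vs shared-RAID 10 argument.
# Assumed (hypothetical): one disk streams ~150 MB/s; a RAID 1 reads like
# 1 disk, a 4-disk RAID 10 like 2 disks; concurrent VMs split evenly.

DISK_MBPS = 150

def per_vm_mbps(stripe_disks: int, active_vms: int) -> float:
    """Bandwidth each active VM sees on a shared array (even split)."""
    return stripe_disks * DISK_MBPS / active_vms

# Two VMs, each on a dedicated RAID 1: each always gets one disk's speed.
dedicated = per_vm_mbps(1, 1)        # 150 MB/s, regardless of the other VM

# Two VMs on one shared 4-disk RAID 10 (effective stripe of 2 disks):
shared_busy = per_vm_mbps(2, 2)      # 150 MB/s each when both are active
shared_idle = per_vm_mbps(2, 1)      # 300 MB/s when only one is active
```

The worst case of sharing equals the dedicated layout, while the idle-neighbour case doubles a VM's throughput, which is the flexibility the post says over-partitioning gives up.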

stilmon
Contributor

Jarry,

Thanks for the input, and sorry for the long delay in getting back to you.

So I'm going to go with RAID 10. From a performance point of view, do you think it is best to do, say, 20 drives in one RAID 10 array? Or is it better to split them up into more arrays, like 2 arrays of 10 drives each?
