Does anyone have separate LUNs for separate logical partitions/disks for all of their VMs? For example:
VM1
C: drive would be put on OS_LUN1
D: drive would be put on DATA_D_Lun1
E: drive would be put on DATA_E_Lun1
(repeated for say 8 VMs)
VM9
C: drive would be put on OS_LUN2
D: drive would be put on DATA_D_Lun2
E: drive would be put on DATA_E_Lun2
How does this plan sound?
I have seen this; however, the norm is one VMDK per VM.
Does anyone see the positives to this? Perhaps a restore of individual drives becomes possible, given the separate partitions, but that normally needs to be done from within the VM at the software level anyway.
VMFS and VMDKs are designed specifically to avoid this kind of requirement.
If the C:, D: and E: drives were all busy concurrently, then perhaps this would spread the load; however, on a fibre-based SAN this would not usually be an issue.
Yes, I've seen that type of setup and it works well. There used to be an issue with VCB when disks were on different LUNs, but that has since been resolved.
Mostly, though, we see multiple LUNs with the entire VM contained on each.
It will work, and you could get good throughput. However, it may be hard to plan server capacity in the future if you spread the data over different LUNs: a host can only see a maximum of 255 LUNs. Is this raw device access, or are the drives in VMDK format? As long as it's in VMDK format I don't see an issue; you can always migrate the files if you need more space.
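To put the 255-LUN ceiling in perspective, here is a rough back-of-the-envelope sketch. The 3-LUNs-per-8-VMs figure comes from the layout proposed in the original post, and the 10 VMs per LUN comes from the consolidated approach mentioned later in the thread; everything else is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope: how many VMs fit under the 255-LUN-per-host limit
# for the two layouts discussed in this thread.

MAX_LUNS = 255

# Proposed split layout: 3 LUNs (OS, D:, E:) shared by each group of 8 VMs.
luns_per_group = 3
vms_per_group = 8
max_vms_split = (MAX_LUNS // luns_per_group) * vms_per_group

# Consolidated layout: whole VMs on shared LUNs, roughly 10 VMs per LUN.
vms_per_lun = 10
max_vms_consolidated = MAX_LUNS * vms_per_lun

print(f"split layout:        up to {max_vms_split} VMs")   # 680
print(f"consolidated layout: up to {max_vms_consolidated} VMs")  # 2550
```

So the split layout still scales to hundreds of VMs before the LUN limit bites, but it burns through LUNs several times faster than consolidating whole VMs onto shared LUNs.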
The only issue I see is that you might encounter the situation where the C: LUN is doing nothing while your data LUNs are getting hit pretty hard, because that's where all your data is.
I would not run my setup like that. It would mean you would have to create a new LUN every time you want to do a simple operation like adding a disk to a VM.
I would rather have a complete VM on just one LUN, and in practice 10-12 VMs sharing the same LUN. The only time I would change this is if I know I have some VMs with high I/O requirements.