VMware Cloud Community
wsaxon
Contributor

LUN size documentation/guidelines

I see another LUN size thread is active but my question is a little more general, so I started a new thread.

I was surprised to learn today that LUN size can make a large difference in VM performance. I thought sizing was mostly about management overhead, but I see some discussions mention specific VMs locking an entire VMFS during specific operations, and large LUNs with lots of VMs generating enough of these locks to cause SCSI reservation issues.

Assuming a need to maximize storage vs. pure speed, we have set up multiple 16 drive RAID-5EE arrays with 450GB 15k SAS drives. LUNs are created on this array and exported via iSCSI. The current configuration uses maximum-sized LUNs as extents to create a single ESX datastore - 3 LUNs per array. We seem to get about 110 VMs per array, ~35-37 per LUN. This allows us maximum flexibility for snapshot growth, vmdk resizing, etc.
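For a rough sanity check on those numbers, here is a back-of-envelope sketch. The two-drive RAID-5EE capacity overhead and the ~2TB VMFS LUN ceiling are my assumptions about this setup, not figures stated in the thread:

```python
# Back-of-envelope check of the array layout described above.
# Assumptions: RAID-5EE loses two drives' worth of capacity (one parity,
# one distributed spare), and "maximum-sized" means the ~2TB VMFS LUN
# ceiling of classic ESX.
drives = 16
drive_gb = 450
usable_gb = (drives - 2) * drive_gb   # 14 data drives -> 6300 GB usable
max_lun_gb = 2048                     # assumed ~2 TB per-LUN limit
full_luns = usable_gb // max_lun_gb   # -> 3 full-size LUNs per array
print(usable_gb, full_luns)
```

Which lines up with the 3 LUNs per array described above.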

We get between 1500 and 2500 SCSI reservation conflict messages per day on each ESX host accessing these arrays.

Am I correct in thinking that instead of our current setup we should move to more, smaller LUNs with no extents? Would that actually reduce reservation conflicts and improve performance?

5 Replies
Ryan2
Enthusiast

As a rule of thumb (and best practice), aim for 500GB LUNs with an average of 10 VMs per LUN/datastore. Extents are great for getting out of a quick bind, but ultimately you should size your datastores and the underlying LUNs according to each VM's requirements while keeping to the 500/10 rule whenever possible. Remember that extents fill sequentially - data is not striped across all members of the datastore. Also keep in mind that this guideline doesn't account for each system's I/O requirements. In my experience every situation is different and calls for a different approach to the SAN design on the backend, but following this guideline keeps 90% of the systems I've worked with in good shape.

wsaxon
Contributor

So is it more important to size by number of VMs?

And this scheme would reduce reservation issues, even when multiple LUNs are hosted on the same physical array?

Ryan2
Enthusiast

It may help, but the bigger question may be how the disk groups / RAID configurations on the SAN are carved up. For larger environments I stay away from iSCSI due to bandwidth bottlenecks (roughly 60 MB/s per target with EMC arrays). But in your case I would go with the best-practice recommendations to reduce file locking and disk contention. If you find you're still hitting a wall on disk performance, start looking at the RAID groups and the I/O performance numbers. The magic numbers there are about 180 IOPS per 15k drive, 130 IOPS per 10k drive, and 80 IOPS for SATA II. Remember too that parity drives still contribute IOPS, but not usable storage, and that read/write penalties apply depending on the RAID level used. If this is a general-purpose environment, I'd start with 4+1 RAID 5 disk groups (presuming 146GB 15k drives) and create LUNs from each group. That would allow for roughly 550GB datastores with 900 IOPS each. At 10-15 VMs, you'd net 90-60 IOPS per VM - more than capable for "most" systems. From that config you can scale additional datastores as needed by VM application requirements.
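The sizing arithmetic in that reply can be checked quickly. These are the rule-of-thumb figures quoted above, not measured values:

```python
# Quick check of the 4+1 RAID 5 sizing math from the reply above,
# using the rule-of-thumb figures it quotes.
iops_per_15k_drive = 180
drives_in_group = 5     # 4 data + 1 parity; the parity drive still serves I/O
data_drives = 4         # parity costs one drive of usable capacity
drive_gb = 146

group_iops = drives_in_group * iops_per_15k_drive   # 900 IOPS per group
usable_gb = data_drives * drive_gb                  # 584 GB raw, ~550 GB datastore
for vms in (10, 15):
    print(f"{vms} VMs -> {group_iops // vms} IOPS each")  # 90 and 60 IOPS
```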

Just food for thought.

Good luck.

wsaxon
Contributor

Unfortunately we've already got 48 of the 450GB disks - seems like what we really need is 3x as many smaller disks. Sizing to get 60-90 IOPS per VM would either mean enormous (for us) VMs or a lot of wasted space.

At the end of the day, though, it sounds like smaller LUNs, even when they all share the same physical array, should reduce contention at the VMFS level. That will increase my management overhead, but if it improves performance at all we'll go for it.

AndreTheGiant
Immortal

More info also in this similar thread:

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro