pcat
Contributor

VM virtual disks best practice

Hi All,

I'm new to this forum and I'm hoping you can help me answer this question.

Here is my environment:

ESXi 6.0

Local VMFS 5.4TB; RAID 10

(10@ 1.2TB 10K RPM SAS 12Gbps 2.5in Hot-plug Hard Drive)

PERC H730 Integrated RAID Controller,

2@ Intel Xeon E5-2667 v3, 3.2 GHz, 20M Cache, 9.60 GT/s QPI, Turbo, HT, 8C/16T

128 GB RAM

vSphere 6 Essentials kit

vCenter Server Appliance

I need to create a Windows 2012 VM, but I don't know if I should create one virtual disk that is partitioned into a C: (system) and D: (data) drive, or create the VM with two virtual disks.

I've read different schools of thought on this.

One says it is better to create separate disks, for expandability and flexibility.

As far as performance is concerned, ESXi does not see a difference between a VM with three vmdk's and a single one that is partitioned.

Since the storage IO requests are mapped to a vmdk file, it makes no difference whether there are multiple files or just one.


I've also read that if you are going to use multiple vmdk's, you should distribute them over multiple datastores because multiple vmdk's mean multiple targets for your IO load.

Since we are operating with a single datastore, should I stick to a single vmdk instead of using multiple?


Looking forward to your feedback.

6 Replies
hussainbte
Expert

You should go with two disks, simply because otherwise you will not have an easy option to extend the C: drive if required in the future.

Also, there is a performance difference if you keep the disks on separate LUNs, but unless you have a specific requirement it should be OK with both disks on the same datastore.

If you found my answers useful, please consider marking them as Correct or Helpful. Regards, Hussain (https://virtualcubes.wordpress.com/)
grasshopper
Virtuoso

Hi pcat,

Welcome to the communities!  The most popular high-performance config for modern Windows operating systems on vSphere is to keep the boot drive on the LSI controller (the default), and configure the data drive as an additional disk attached to a VMware Paravirtual (pvSCSI) adapter.  You should practice this config to understand the usage (i.e. it requires VMware Tools for the driver, new disks need to be brought online in Windows Disk Management, etc.).  The pvSCSI adapter will decrease CPU consumption by ~30% and increase performance by ~12%.  Results may vary by workload, but these things are solid.
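For anyone who prefers to script this, here is a minimal PowerCLI sketch of adding the data vmdk on a pvSCSI controller to an existing, already-built VM.  The vCenter address, VM name, and 100 GB size are placeholders, and on older vSphere releases adding the new controller may require the VM to be powered off first.

# Sketch only: add a data vmdk and move it onto a new VMware Paravirtual (pvSCSI) controller.
# 'vcenter.lab.local', 'win2012-app01' and 100 GB are placeholder values.
Connect-VIServer -Server 'vcenter.lab.local'

$vm = Get-VM -Name 'win2012-app01'

# Create the data disk (thin-provisioned here; use thick if that is your standard)
$dataDisk = New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat Thin

# Move the new disk onto its own ParaVirtual controller (it typically ends up as SCSI 1:0)
New-ScsiController -HardDisk $dataDisk -Type ParaVirtual -Confirm:$false

# Inside the guest, the disk still needs to be brought online and formatted
# (diskmgmt.msc, or Get-Disk / Initialize-Disk / New-Partition / Format-Volume)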

In general, one should never place the data drive on the same disk as the OS.  This of course affects your ability to easily expand the volume live.

Also, there is a performance benefit to having multiple disks and multiple controllers from the Guest OS perspective.  This allows the GOS to make more intelligent decisions in traffic-copping its own IO.

Multiple datastore VM configs have the potential to increase performance, but only if the underlying disk provides that value.  In general, using multiple datastores is an old school approach and is not really done nowadays.

So my vote is for LSI boot and pvSCSI data.  You can of course use LSI for everything (default) and there is nothing wrong with that.  Just make sure you have a dedicated vmdk for data (i.e. not on the boot drive).

Once you get into high performance configs for SQL, you will definitely want TempDB, Logs and Data on pvSCSI adapters.

Example SQL layout (a PowerCLI sketch for building it follows the list):

SCSI 0:0 Boot (LSI)

SCSI 1:0 Data (pvSCSI)

SCSI 2:0 Logs (pvSCSI)

SCSI 3:0 TempDB (pvSCSI)
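Here is that rough PowerCLI sketch.  The VM name and sizes are placeholders, and the SCSI bus numbering shown is what vSphere typically assigns rather than something the cmdlets guarantee.

# Sketch only: the boot disk stays on the default LSI controller at SCSI 0:0.
$vm = Get-VM -Name 'sql2012-01'    # placeholder VM name

foreach ($spec in @(
        @{ Label = 'Data';   SizeGB = 200 },
        @{ Label = 'Logs';   SizeGB = 100 },
        @{ Label = 'TempDB'; SizeGB = 50 })) {

    # Each disk is moved to its own new ParaVirtual controller, so the data, log and
    # TempDB disks typically land on SCSI 1:0, 2:0 and 3:0 respectively.
    $disk = New-HardDisk -VM $vm -CapacityGB $spec.SizeGB -StorageFormat Thin
    New-ScsiController -HardDisk $disk -Type ParaVirtual -Confirm:$false | Out-Null
    Write-Host ("Added {0} disk ({1} GB) on its own pvSCSI controller" -f $spec.Label, $spec.SizeGB)
}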

Note:  It should be stated that fresh builds with pvSCSI are straightforward.  However, converting an existing server to use pvSCSI requires some attention, as the disks will need to be brought online in Windows Disk Management at first power-on following the change.  As such, before converting an existing LSI data drive to pvSCSI, one should disable services (i.e. SQL) that may try to write to that disk at startup.  Once you online the disks, set the services back to auto and reboot.

Checklist - Converting an Existing VM Data Drive from LSI to pvSCSI (a scripted sketch follows the steps)
1.  Set auto services to manual (for any in-scope apps that write to this disk)

2.  Shut down the VM gracefully

3.  Edit Settings and add/change SCSI controllers (i.e. LSI to pvSCSI)

4.  Power on VM and online the disks from diskmgmt.msc in the GOS

5.  Set services back to auto

6.  Reboot

7.  Confirm disks come online healthy
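A scripted version of that checklist might look roughly like the sketch below.  The service name ('MSSQLSERVER'), VM name, and bus number are placeholders; the guest lines run inside Windows and the PowerCLI lines run from a management workstation.

# Step 1 -- inside the Windows guest: stop the app service and set it to Manual
Stop-Service -Name 'MSSQLSERVER'
Set-Service  -Name 'MSSQLSERVER' -StartupType Manual

# Steps 2-3 -- from PowerCLI: shut the VM down, then flip the data controller to ParaVirtual
$vm = Get-VM -Name 'win2012-app01'            # placeholder VM name
Shutdown-VMGuest -VM $vm -Confirm:$false      # wait for the VM to reach PoweredOff before continuing
Get-ScsiController -VM $vm |
    Where-Object { $_.ExtensionData.BusNumber -eq 1 } |   # the data controller, not SCSI 0 (boot)
    Set-ScsiController -Type ParaVirtual -Confirm:$false
Start-VM -VM $vm

# Step 4 -- inside the guest again: bring the offline data disk(s) back online
Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false

# Steps 5-7 -- set the service back to Automatic, reboot, then confirm the disks stay online
Set-Service -Name 'MSSQLSERVER' -StartupType Automatic
Restart-Computer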

Note:  I find the Get-VMDiskMap PowerCLI script very helpful for information gathering when performing tasks such as this.  It shows the mapping of logical disk as we know it in Windows to the actual vmdk.
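Get-VMDiskMap is a community script rather than a built-in cmdlet, but the idea behind it is to correlate the SCSI bus/target numbers Windows reports for each physical disk with the controller bus number and unit number PowerCLI reports for each vmdk.  A hand-rolled sketch of that correlation (the VM name is a placeholder) looks something like this:

# Inside the guest: Win32_DiskDrive exposes the SCSI bus and target of each disk
Get-CimInstance Win32_DiskDrive | Select-Object Index, SCSIBus, SCSITargetId, Size

# From PowerCLI: list each vmdk with its controller bus number and unit number
$vm = Get-VM -Name 'win2012-app01'            # placeholder VM name
$controllers = Get-ScsiController -VM $vm
Get-HardDisk -VM $vm | ForEach-Object {
    $disk = $_
    $ctrl = $controllers | Where-Object { $_.ExtensionData.Key -eq $disk.ExtensionData.ControllerKey }
    [pscustomobject]@{
        VmdkPath   = $disk.Filename
        BusNumber  = $ctrl.ExtensionData.BusNumber
        UnitNumber = $disk.ExtensionData.UnitNumber
        CapacityGB = $disk.CapacityGB
    }
}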

Feel free to ask if there are any questions.

pcat
Contributor

Thank you very much for your feedback.

I've been leaning towards doing just that (creating two vmdk's), but I was not sure about the SCSI controllers.

Your advice is greatly appreciated.

I do have a question regarding the pvSCSI.

Since the pvSCSI drivers are included with VMware Tools, should I add the second drive and pvSCSI controller after initial creation, once both the OS and VMware Tools are installed?

pcat
Contributor

Thank you for your quick response!!

grasshopper
Virtuoso

Hi pcat,

Adding some info based on your question.  Thanks for asking (and sorry for the delay, just saw this).

should I add the second drive and pvSCSI controller after initial creation and both the OS and VMware tools are installed?

Yes indeed.  One should add the data drive (pvSCSI) after the initial build, as it depends on VMware Tools.  Please note that most folks don't add a data drive to the template image.  Although you can, most add it as a post-build step.

Another trick worth mentioning is that you can add the paravirtual driver to the Guest OS by adding a dummy drive (i.e. 1GB in size) while it's online.  After rebooting the Windows guest, it will then have the driver and the dummy disk can be removed (i.e. offline it in diskmgmt.msc, then remove it from Edit Settings to delete it from disk).  This trick is rarely used, but it is worth testing to understand when it is beneficial (i.e. when an upcoming paravirtual change is imminent and you need maximum uptime and minimal reboots).
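A quick PowerCLI sketch of that dummy-disk trick (the VM name is a placeholder, and on older vSphere releases adding the new controller itself may still require a power-off):

$vm = Get-VM -Name 'win2012-app01'            # placeholder VM name

# Add a small throwaway disk and move it onto a new ParaVirtual controller so the
# guest picks up the pvscsi driver from VMware Tools after its next reboot.
$dummy = New-HardDisk -VM $vm -CapacityGB 1 -StorageFormat Thin
New-ScsiController -HardDisk $dummy -Type ParaVirtual -Confirm:$false

# Later, after rebooting the guest and offlining the dummy disk in diskmgmt.msc,
# remove the throwaway vmdk and delete it from the datastore.
Get-HardDisk -VM $vm | Where-Object { $_.CapacityGB -eq 1 } |
    Remove-HardDisk -DeletePermanently -Confirm:$false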

Using pvSCSI comes down to experience.  There is nothing overly tricky about it, but it does have personality, so it's worth learning the behavior.  It should be stated that there is a degree of risk in using it, as from time to time data drives on pvSCSI have the potential to come up in an 'offline' state in diskmgmt.msc.  This can happen after certain updates but is very rare.  If it is going to happen, it is usually observed during the first reboot following creation.

pcat
Contributor

Hi Grasshopper,

Once again thanks for your help on this!
