VMware Cloud Community
dsohayda
Enthusiast

DRS Rules Question

We have a cluster with five ESXi 4.1 hosts. Our web team has a collection of 10 VMs, and they'd like to have at most two of those VMs running on the same host at any one time. Is this possible using DRS rules?

I put together two groups of hosts and two groups of VMs: VMs 1-5 in one group and VMs 6-10 in another, then hosts 1-2 and hosts 3-4 (leaving the fifth host out of the groups, though I suppose I could just add it to the second group to keep them together...). I then set it up so that VMs 1-5 could only run on hosts 1-2 and VMs 6-10 could only run on hosts 3-4. This doesn't satisfy the whole requirement of having only two of the VMs on any one host at a time.

Any suggestions?

Thanks

8 Replies
TomHowarth
Leadership

There are two types of DRS rules, 'must' and 'should', and two styles of rule: affinity and anti-affinity.

So, to have no more than two of these guests running on each host, you will need to set up some affinity rules stating that certain guests always stay together, and some anti-affinity rules stating that those paired guests always stay apart.

Have a read of this article for a step-by-step overview of the process:

http://www.petri.co.il/host-drs-affinity-rules-vsphere-4-1.htm
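
If you'd rather inspect this from a script, here is a minimal pyVmomi sketch that lists a cluster's existing DRS rules and reports each one's style (affinity, anti-affinity, or VM-to-host) and whether it is a 'must' or a 'should' rule. The use of pyVmomi, the vCenter address, credentials, and the cluster name 'Cluster01' are my assumptions, not anything from this thread, and certificate handling is omitted.

# Minimal pyVmomi sketch: list a cluster's DRS rules and report their style
# (affinity / anti-affinity / VM-to-host) and whether each is must or should.
# vCenter address, credentials and cluster name are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password')  # certificate handling omitted for brevity
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Cluster01')
view.Destroy()

for rule in cluster.configurationEx.rule:
    if isinstance(rule, vim.cluster.AffinityRuleSpec):
        style = 'affinity (keep together)'
    elif isinstance(rule, vim.cluster.AntiAffinityRuleSpec):
        style = 'anti-affinity (keep apart)'
    elif isinstance(rule, vim.cluster.VmHostRuleInfo):
        style = 'VM-to-host group rule'
    else:
        style = type(rule).__name__
    kind = 'must' if rule.mandatory else 'should'
    print('%s: %s, %s rule' % (rule.name, style, kind))

Disconnect(si)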

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
vmfcraig
Enthusiast

You need to think about HA in case you have a host failure. This is how I would do it using 'should' rules (a scripted sketch of the same steps follows each list below):

Edit Cluster Settings > vSphere DRS > DRS Groups Manager

VM DRS Groups

Add DRS Group, call it WEB0102; add VMs WEB01 and WEB02
Add DRS Group, call it WEB0304; add VMs WEB03 and WEB04
Add DRS Group, call it WEB0506; add VMs WEB05 and WEB06
Add DRS Group, call it WEB0708; add VMs WEB07 and WEB08
Add DRS Group, call it WEB0910; add VMs WEB09 and WEB10

Host DRS Groups

Add DRS Group, call it ESXi01; add host ESXi01
Add DRS Group, call it ESXi02; add host ESXi02
Add DRS Group, call it ESXi03; add host ESXi03
Add DRS Group, call it ESXi04; add host ESXi04
Add DRS Group, call it ESXi05; add host ESXi05
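
For reference, the same group setup can be scripted. Below is a minimal pyVmomi sketch that creates the first VM group and the first host group; the remaining pairs follow the same pattern. The group and VM names come from the steps above, but the use of pyVmomi, the vCenter address, credentials, cluster name 'Cluster01' and the host's DNS name are placeholders I've assumed.

# Minimal pyVmomi sketch: create VM DRS group WEB0102 and host DRS group ESXi01.
# Connection details, cluster name and the host's DNS name are placeholders,
# and the named objects are assumed to exist in the inventory.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password')  # certificate handling omitted for brevity
content = si.RetrieveContent()

def find_obj(vimtype, name):
    # Return the first inventory object of the given type with the given name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((o for o in view.view if o.name == name), None)
    finally:
        view.Destroy()

cluster = find_obj(vim.ClusterComputeResource, 'Cluster01')

vm_group = vim.cluster.VmGroup(name='WEB0102',
                               vm=[find_obj(vim.VirtualMachine, 'WEB01'),
                                   find_obj(vim.VirtualMachine, 'WEB02')])
host_group = vim.cluster.HostGroup(name='ESXi01',
                                   host=[find_obj(vim.HostSystem, 'esxi01.example.com')])

spec = vim.cluster.ConfigSpecEx(groupSpec=[
    vim.cluster.GroupSpec(operation='add', info=vm_group),
    vim.cluster.GroupSpec(operation='add', info=host_group)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)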

Rules

Go to the Rules tab and add a Virtual Machines to Hosts rule for each pairing:

Cluster VM Group WEB0102 - Should Run On Hosts in Group - Cluster Host Group ESXi01
Cluster VM Group WEB0304 - Should Run On Hosts in Group - Cluster Host Group ESXi02
Cluster VM Group WEB0506 - Should Run On Hosts in Group - Cluster Host Group ESXi03
Cluster VM Group WEB0708 - Should Run On Hosts in Group - Cluster Host Group ESXi04
Cluster VM Group WEB0910 - Should Run On Hosts in Group - Cluster Host Group ESXi05
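
And the matching rule for the first pairing, again as a pyVmomi sketch with the same placeholder connection details; mandatory=False is what makes it a 'should' rather than a 'must' rule.

# Minimal pyVmomi sketch: a "should run on" rule tying VM group WEB0102
# to host group ESXi01 (both groups are assumed to exist already).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password')  # certificate handling omitted for brevity
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Cluster01')
view.Destroy()

rule = vim.cluster.VmHostRuleInfo(name='WEB0102-should-run-ESXi01',
                                  enabled=True,
                                  mandatory=False,  # 'should', not 'must'
                                  vmGroupName='WEB0102',
                                  affineHostGroupName='ESXi01')

spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation='add', info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)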

Blog - vmfocus.com Twitter - @Craig_Kilborn
dsohayda
Enthusiast

Thank you for the thorough suggestion. What risk does this configuration pose if/when a single host goes down? Would HA run into a problem bringing VMs up on another host if this were to occur, or would they just come up on the surviving second member of the host group?

vmfcraig
Enthusiast
Accepted solution

Assuming you have admission control enabled with a 20% threshold, then none.

The DRS rules are 'should' rather than 'must', which means that if host ESXi01 fails, WEB01 and WEB02 will run on any other host; when that host is repaired or comes back online, DRS will migrate them back to it.

Blog - vmfocus.com Twitter - @Craig_Kilborn
dsohayda
Enthusiast

I set your scenario up in our lab and it did just as you said. This should work nicely. Thank you for your help.

vmfcraig
Enthusiast

No problem, pleased I could help.

Blog - vmfocus.com Twitter - @Craig_Kilborn
EdWilts
Expert

There are several ways to solve this.  Another way, without using host affinity rules, is to use a combination of affinity and anti-affinity rules.

Create rule1 with web01 and web02 and say they should run together

Create rule2 with web03 and web04 and say they should run together

Create rule3 with web01 and web03 and say they should NOT run together.

DRS will now separate the web servers following those rules and it won't matter if you later add another host to the cluster.
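
For anyone scripting this, a minimal pyVmomi sketch of those three rules (same caveats as the sketches further up the thread: pyVmomi itself, the vCenter address, credentials and the cluster name are my assumptions); the pattern would simply be repeated for the remaining web servers.

# Minimal pyVmomi sketch of rule1/rule2/rule3 as described above.
# Connection details, cluster name and VM names are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password')  # certificate handling omitted for brevity
content = si.RetrieveContent()

def find_obj(vimtype, name):
    # Return the first inventory object of the given type with the given name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((o for o in view.view if o.name == name), None)
    finally:
        view.Destroy()

cluster = find_obj(vim.ClusterComputeResource, 'Cluster01')
web01, web02, web03, web04 = (find_obj(vim.VirtualMachine, n)
                              for n in ('web01', 'web02', 'web03', 'web04'))

rules = [
    # rule1: web01 and web02 should run together.
    vim.cluster.AffinityRuleSpec(name='rule1', enabled=True, vm=[web01, web02]),
    # rule2: web03 and web04 should run together.
    vim.cluster.AffinityRuleSpec(name='rule2', enabled=True, vm=[web03, web04]),
    # rule3: web01 and web03 should NOT run together.
    vim.cluster.AntiAffinityRuleSpec(name='rule3', enabled=True, vm=[web01, web03])]

spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation='add', info=r) for r in rules])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)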

You do have to be careful about maintenance/patching. It's possible that you won't be able to put a host into maintenance mode because DRS won't be able to move the guests without violating your constraints. HA isn't a problem - it will do the right thing and bring the guests up on another host before DRS kicks in and tries to enforce the rules.

.../Ed (VCP4, VCP5)
dsohayda
Enthusiast

I'm not sure if I configured it exactly as you explained, but here is what I did. The VMs migrated to hosts correctly, keeping only two per host, but when I reset one of the four hosts, HA successfully restarted the two VMs that were on that host while also raising an HA error. The error said it was unable to migrate from one host to another and reported errors in its attempts to provide vSphere HA support.

When the host went down, VMs 3 and 4 were restarted on hosts 17 and 16 respectively. Since VMs 1 and 2 were already on host 16 and VMs 5 and 6 were already on host 17, that posed a problem based on my rules. I assume you had different rules in mind for the "keep VMs separate" portion, though.
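
If it helps while testing, here is a small pyVmomi sketch (my own choice of tooling, with placeholder connection details and an assumed 'web' prefix on the VM names) that counts how many of the web VMs are on each host, so a violation of the two-per-host requirement after a failover is easy to spot.

# Minimal pyVmomi sketch: count how many web VMs are running on each host.
# Connection details and the 'web' name prefix are placeholders.
from collections import Counter
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password')  # certificate handling omitted for brevity
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
web_vms = [v for v in view.view if v.name.lower().startswith('web')]
view.Destroy()

per_host = Counter(v.runtime.host.name for v in web_vms if v.runtime.host)
for host, count in sorted(per_host.items()):
    flag = '  <-- more than two!' if count > 2 else ''
    print('%s: %d web VM(s)%s' % (host, count, flag))

Disconnect(si)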
