VMware Cloud Community
snmdla
Enthusiast

vPC and dual homed ESXi (5.1) servers

Recently, we upgraded our core network with Cisco Nexus switches, including a vPC (virtual port channel) setup and 10GE.

The consultant in charge of the migration did not recommend attaching our ESXi servers via vPC (or said that this is only possible with a vDS).

Question: since any port in such a vPC configuration that does not participate in the vPC is an orphan port, this can have drawbacks (I found a good explanation here).

Is it really not possible, or just not advisable, to include dual-homed ESXi servers in the vPC?

Thanks in advance, Thomas

2 Replies
snmdla
Enthusiast

Seems as if this scenario is somewhat complicated.

VMware KB 1001938 (Host requirements for link aggregation for ESXi and ESX) states:

   "Link aggregation is never supported on disparate trunked switches."

So we could give it up in the first place.

KB 2006129 (Understanding IP Hash load balancing) is more accommodating:

   "Only a single physical switch can be used for a NIC team because most switches do not support EtherChannel bonds across multiple physical switches. This prevents physical hardware redundancy.

   Note: There are some exceptions, as some "stacked" or modular switches can do this across multiple physical switches or modules. Cisco's VPC (virtual port channel) technology can also address this on supported switches. Contact your hardware vendor for more information."

This leaves the situation somewhat unclear.

If the peer-link is lost, the secondary switch will disable all of its vPC ports and SVI interfaces. Without an LACP/port-channel configuration, this will certainly lead to instability for ESXi: both ports, being orphan ports, will stay up, but only one will have normal access to the network.

Thanks for sharing your thoughts on this,

Thomas

VirtuallyMikeB

Hello,

So this is a great question, mainly because I can answer it.  The short answer is, feel free to vPC your ESXi hosts to your Nexus switches.  Or not.  It will work both ways with or without a vDS.  There are implications, though, for whichever design you choose.

Let's vPC all the things

----------------------------------

If you vPC your ESXi hosts, then from the ESXi host's perspective you're just changing the port group load balancing algorithm.  The key here is that it doesn't have to be an LACP bundle in a vSwitch or northbound on the Nexus: it can be a static EtherChannel on the Nexus side and still be configured as a vPC.  Because LACP is not required for a vPC, a vDS is not required on the ESXi host, just IP Hash load balancing.  This means a vPC will work just fine with a Standard vSwitch.
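To make that concrete, here's a minimal sketch of what the Nexus side might look like for a static (non-LACP) EtherChannel inside a vPC. All interface, domain, and address numbers below are hypothetical, so adapt them to your environment:

```
! On each vPC peer switch (hypothetical domain/interface numbers)
feature vpc
vpc domain 10
  peer-keepalive destination 192.168.1.2

! Port-channel facing one ESXi host; "vpc 20" ties it to the peer's matching channel
interface port-channel20
  switchport mode trunk
  vpc 20

! "mode on" = static EtherChannel, no LACP negotiation
interface Ethernet1/1
  switchport mode trunk
  channel-group 20 mode on
```

On the ESXi side, the matching piece is simply setting the vSwitch/port group load balancing to "Route based on IP hash" (for example via `esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash`, assuming vSwitch0 is the uplinked Standard vSwitch).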

Let's KISS

----------------

I don't really mean kiss.  I mean Keep It Simple, Stupid.  Let's first better define what an orphan port is, because this affects what a failure scenario may look like.  An orphan port is a port that carries a vPC VLAN but is not itself a member of a vPC.  What's a vPC VLAN? I'm glad you asked. A vPC VLAN is a VLAN that traverses the vPC peer-link.  Pretty easy, huh?  Why does this matter? Because if you decide to dual-home an ESXi host to a pair of vPC-enabled Nexus switches without a vPC, there will be additional ESXi failover scenarios to be aware of.  Let me say that this is the simpler of the two scenarios listed here, and the one I recommend most often because of that.
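If you want to check which of your host-facing ports fall into the orphan category, NX-OS can list them directly (commands only, no made-up output):

```
! Overall vPC health: peer-link status, vPC status, keepalive
show vpc

! Ports carrying vPC VLANs that are not vPC members (i.e. orphan ports)
show vpc orphan-ports
```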

So you have two sub-scenarios here.  One in which the ESXi host is passing vPC VLANs on its trunk ports, and one in which it is not.

Scenario 1. ESXi is passing vPC VLANs on its trunk ports and its ports are not configured in a vPC

This scenario is the one you've spoken about.  The ESXi ports will show up on the Nexus as orphan ports because they're not in a vPC, yet they're passing vPC VLANs.  The risk here is that if the peer-link fails, the secondary Nexus switch will, by default, *shut down* all SVIs for vPC VLANs.  This can black-hole traffic going to the secondary Nexus switch's SVI, since that SVI will then only exist on the primary Nexus.  There is a command to keep the SVIs of vPC VLANs alive, though: dual-active exclude interface-vlan
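As a sketch (the VLAN range here is made up), that exclusion is configured under the vPC domain and keeps the listed SVIs up on the secondary switch even when the peer-link is down:

```
vpc domain 10
  ! Keep SVIs for VLANs 100-110 alive on the secondary switch
  ! if the peer-link fails, instead of shutting them down
  dual-active exclude interface-vlan 100-110
```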

Scenario 2. ESXi is *not* passing vPC VLANs on its trunk ports and its ports are still not configured in a vPC

This scenario is perfectly acceptable, as well.  No IP Hash required, no bonding required, LACP or otherwise, and life is good.  At this point, because ESXi's trunk links *do not* carry any vPC VLANs, its ports are not considered orphan ports on the Nexus.  What happens to non-orphan ports (i.e., normal ports) on a Nexus when the peer-link fails? Drum roll please... nothing.  Nothing happens.  Traffic continues to flow as usual.  So this option is valid as well, with both Standard vSwitches and vDSs.
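One way to guarantee Scenario 2 is to make sure the VLANs on the ESXi-facing trunks never ride the peer-link. A sketch with hypothetical VLAN and interface numbers:

```
! Peer-link trunks only the vPC VLANs (10-20 here)
interface port-channel1
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  vpc peer-link

! ESXi-facing trunk carries only non-vPC VLANs (30-40),
! so these ports are not orphan ports and a peer-link
! failure leaves them untouched
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 30-40
```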

I hope this clears things up a bit.

-----------------------------------------

Please consider marking this answer "correct" or "helpful" if you found it useful

Mike Brown

VMware, Cisco Data Center, and NetApp dude

Consulting Engineer

michael.b.brown3@gmail.com

Twitter: @VirtuallyMikeB

Blog: http://VirtuallyMikeBrown.com

LinkedIn: http://LinkedIn.com/in/michaelbbrown
