VMware Cloud Community
sansaran
Contributor

vMotion traffic isolation, VLAN trunking

We have two full-length Dell M910 blade servers sitting in a Dell blade enclosure. ESXi 5.0 is installed on both blades, and they are joined to a cluster.

Each full-length blade server has 8 NICs: 2 dual-port onboard NICs and 2 dual-port Ethernet mezzanine cards. All are connected to internal Cisco 3130 switches installed in I/O modules A1, A2, B1 and B2. All internal switches are stacked together by the network team, and there is an uplink connection from the internal switches to an external switch, on ports that are on VLAN 137.

All ports connected to the ESXi hosts are configured as trunks on the internal Cisco blade switches by the network team. In our case, a total of 16 ports (8 NICs x 2 servers) are set to trunk on the internal Cisco switches, and there is an uplink from the internal Cisco switches to our external switch (which is on VLAN 137).

On ESXi 5.0 we have configured one big flat switch, assigning all physical NICs to vSwitch0. Please refer to the screenshot for the configured port groups.
 
To isolate vMotion traffic, we have configured a different VLAN tag (150) for vMotion, but vMotion is not working; the vMotion IPs cannot ping each other. However, if I change the VLAN tag to 137, vmkping works in both directions and vMotion works.

If I change the VLAN tag of any port group (such as Management or Virtual Machine) to anything other than 137, I lose connectivity to the corresponding port group.


I believe I am missing some configuration on the internal Cisco blade switches (3130). Please advise on what needs to be configured. I roughly know why trunking is required, but if you could explain the exact purpose of trunking for ESX, that would be great.

What is the best practice for configuring virtual switches: one big flat switch, or multiple switches with a port group assigned to each? What is the recommended configuration to achieve increased inbound and outbound load balancing and failover? A detailed explanation would be really helpful for non-networking admins.

8 Replies
a_p_
Leadership

Welcome to the Community,

Did you verify with the network team that VLAN 150 is configured on the switches and is allowed on the uplink ports? Without VLAN 150 being configured/allowed, the switches will not accept traffic tagged with this VLAN ID.
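
If VLAN 150 turns out to be missing, the fix on the internal 3130 switches would look something like this (a minimal Cisco IOS sketch; GigabitEthernet1/0/1 is just a placeholder for one of the ESXi-facing trunk ports):

    ! define the vMotion VLAN (sketch; name and port are placeholders)
    vlan 150
     name vMotion
    ! allow it on the existing trunk towards the ESXi host
    interface GigabitEthernet1/0/1
     switchport trunk allowed vlan add 150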

André

rickardnobel
Champion

sansaran wrote:

if you could explain the exact purpose of trunking for ESX, that would be great.

"Trunk" is the Cisco word for a link which frames should have additional 4 bytes inserted into them, this is called the 802.1Q VLAN tag, which identifies for the destination side which VLAN the frame is intended for.

When you specify a VLAN number on a vSwitch port group, all outgoing frames will be "tagged" with this VLAN ID, and the receiving switch must accept those tags.
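
On the Cisco side, accepting those tags means the ESXi-facing port runs as an 802.1Q trunk. A minimal sketch (the interface name and VLAN list are placeholders):

    interface GigabitEthernet1/0/1
     ! port facing an ESXi NIC
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 137,150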

My VMware blog: www.rickardnobel.se
rcporto
Leadership

Hi sansaran,

You wrote that the connection between the blade switches and the external switch is on VLAN 137, right? If so, this is the problem 😞 ... the connection between the blade switches and the external switch must be a TRUNK if you want to pass traffic from different VLANs. If you put this connection on VLAN 137 (as an access port), the uplinks will carry only traffic from VLAN 137. This is why vmkping works when you put the VMkernel port on VLAN 137.
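
Converting such an uplink from an access port on VLAN 137 to a trunk could look roughly like this (a sketch only; the interface name is a placeholder, and the VLAN list must match your environment):

    interface TenGigabitEthernet1/0/25
     ! uplink to the external switch (placeholder name)
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 137,150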

Let us know if this solution works 🙂

---

Richardson Porto
Senior Infrastructure Specialist
LinkedIn: http://linkedin.com/in/richardsonporto
sansaran
Contributor

Thanks for all your valuable replies. I will check with the network team to make sure VLAN 150 is configured and allowed on the uplink ports.

What would be good practice in configuring virtual switches?

(i) Having one virtual switch (vSwitch0) and assigning all physical NICs to it

(ii) Individual vSwitches for each port group (no VLANs required if we set it up this way, right?):

       Management traffic - vSwitch0 (assigning 2 NICs)

       Virtual machine traffic - vSwitch1 (assigning 4 NICs)

       vMotion traffic - vSwitch2 (assigning 2 NICs)

Which switch configuration provides better throughput and load balancing (inbound and outbound)?

What would be best practice in configuring VLANs?

(i) Having Management traffic (port group) and Virtual Machine traffic (port group) on one VLAN, and vMotion on a different VLAN:

               Management Network - VLAN 50

               VM Network         - VLAN 50

               VMkernel (vMotion) - VLAN 150

(ii) Having all networks (Management, VM and vMotion) on different VLANs.

Different VLANs cannot talk to each other, right? Do the Management and VM networks have to be on the same VLAN in order to share information between them?

In my case, if I want to have different VLANs on vSwitch0, the VLANs need to be configured on the internal Cisco blade switches and allowed on the uplinks, correct? Nothing needs to be configured on the vSwitch other than creating the port groups and specifying the VLAN IDs that are configured on the internal blade switches, right?

In my setup:

    Blade server (ESXi) ------------> Internal Cisco blade switch ----------> External switch (uplink connection)

Each full-length blade has 8 NICs connected to redundant internal switches (A1, A2, B1, B2). All of these ports need to be set to trunk, right?

VLANs and trunks should be configured on the internal blade switches, right? Nothing needs to be configured on the external switch, right? Please confirm.

I know this is a lengthy question. It would be great if all of it could be answered.

a_p_
Leadership

Let me try to describe one possible configuration.

First, some facts/assumptions:

  • 2 ESXi hosts
  • 4 blade switches
  • 1 external switch
  • 8 NICs in each blade server (2 NICs to each of the switches)
  • vmnic0 and vmnic4 are connected to two different switches
  • different subnets/VLANs for vMotion (100), Management (101) and VM Networks (102, ...)
  • all VLANs represent different IP subnets

Virtual Network Configuration:

  • 2 vSwitches: 1 for vMotion and Management, 1 for the VM Networks
  • vSwitch0 for Management and vMotion (vmnic0 + vmnic4)
    -> Management Port Group (VLAN 101): vmnic0 (active), vmnic4 (standby)
    -> vMotion Port Group (VLAN 100): vmnic4 (active), vmnic0 (standby)
  • vSwitch1: VM Networks (vmnic1..3 + vmnic5..7)
    -> VM Port Group 1 (VLAN 101)
    -> VM Port Group 2 (VLAN 102)
    -> ...

Blade Switches:

  • all of the VLANs configured in the Virtual Network are present
  • all downlink ports to the ESXi hosts are configured for mode trunk, all VLANs allowed
  • at least 2 uplinks to the external switch configured as an EtherChannel trunk (LACP)
  • the uplink and downlink ports (on each of the switches) are in a Link State Tracking group

External Switch:

  • all of the VLANs configured in the Virtual Network are present
  • four Port Channels/EtherChannels (LACP), one to each blade switch

You may configure the VLANs on the switches separately or by using VTP. Either way, all VLANs need to be present on all of the switches. If you need to route traffic between certain VLANs, you either have to implement a router in your network or - if the switches support it and are properly licensed - configure IP routing (inter-VLAN routing).
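
To make this concrete, the blade switch side described above might look roughly like the following Cisco IOS sketch (interface numbers are placeholders, and VLANs 100-102 come from the assumptions above):

    ! downlink towards an ESXi NIC - trunk carrying all required VLANs
    interface GigabitEthernet1/0/1
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 100,101,102
     link state group 1 downstream

    ! two uplinks to the external switch, bundled via LACP
    interface range TenGigabitEthernet1/0/25 - 26
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 100,101,102
     channel-group 1 mode active
     link state group 1 upstream

    ! enable the link state tracking group
    link state track 1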

André

beckham007fifa

Check the switch configuration, especially on the blade side. One thing to try: remove all the configuration from the blade switch, ask the network people to configure the blade switch again, and immediately set up the VLAN trunk from the network switch end (the access/primary switch); sometimes a sync problem between the blade and network switches doesn't allow trunking to work properly. Also, if you have any other blade working in your environment, ask the network people to take the configuration of the working blade switch and apply the same to this one. Also check whether the transparent option is enabled on the blade side, because I have faced an issue due to this. Thanks.
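
A few standard IOS show commands can help compare a working blade switch against the problem one (a sketch; the interface name is a placeholder):

    show vlan brief            ! is VLAN 150 defined?
    show interfaces trunk      ! which ports trunk, and which VLANs are allowed?
    show running-config interface GigabitEthernet1/0/1
    show vtp status            ! VTP mode (server/client/transparent)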

Regards, ABFS
cjscol
Expert

Your original configuration will work if VLAN 150 is defined on the internal switches and allowed on ALL 16 trunk ports the blades are connected to. As long as you do not need to vMotion between these blades and servers outside of the Dell blade chassis, there is no need to change the external switches or the uplinks from the internal switches to the external switches.

When you say that the ports the blades are connected to are configured as trunk ports, you do mean trunk ports in the sense that they can carry multiple VLANs, don't you? A lot of people get confused between trunking and link aggregation, and think that when they have combined multiple ports together into a link aggregation / EtherChannel, that is a trunk. The ports on the internal switches that the blades are connected to should be trunk ports, as in ports that can carry multiple VLANs.

I wouldn't have all NICs connected to a single vSwitch. I would go with the setup André suggested: two vSwitches, with management also separated off onto its own VLAN. So vSwitch0 has Management and vMotion on it using different VLANs, with two NICs connected to this vSwitch, Management using one of the NICs as active and the other as standby, and vMotion using the NICs the other way around. vSwitch1 has all of the other NICs connected to it and your VM Network on it using VLAN 137.

The ports on the internal Cisco switches used by vSwitch0 should be trunk ports configured for the two VLANs used by Management and vMotion; these ports do not need to include VLAN 137 (as long as you have moved your management interface onto a different VLAN).

The ports on the internal Cisco switches used by vSwitch1 do not need to be trunked; they could just be access ports on VLAN 137. But to make it easier to add additional VM Network VLANs in the future, I would configure these as trunk ports allowing only VLAN 137 for now.
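
As a rough IOS sketch of those two port types (placeholder interface names; VLANs 100 and 101 stand in for whichever vMotion/Management VLANs you choose):

    ! vSwitch0 downlinks: Management + vMotion trunk
    interface GigabitEthernet1/0/1
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 100,101

    ! vSwitch1 downlinks: trunked, but only VLAN 137 allowed for now
    interface GigabitEthernet1/0/2
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 137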

If you have other servers outside of the Dell blade chassis using the management VLAN, then you will need to ensure the management VLAN is also configured on the external switch, and that the uplinks from the internal switches to the external switch are trunk ports that carry the management VLAN as well as VLAN 137 (at both ends).

You should be able to configure routing between the VLANs on the switches, so that if you have a vSphere Client running on VLAN 137 it can access the management VLAN. I would also install your vCenter Server on the management VLAN.
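
If the switches are licensed for IP routing, inter-VLAN routing via SVIs might look like this (a sketch only; VLAN numbers and addresses are placeholders):

    ip routing
    interface Vlan101
     ! Management SVI (placeholder address)
     ip address 10.0.101.1 255.255.255.0
    interface Vlan137
     ip address 10.0.137.1 255.255.255.0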

Calvin Scoltock
VCP 2.5, 3.5, 4, 5 & 6 | VCAP5-DCD | VCAP5-DCA
http://pelicanohintsandtips.wordpress.com/blog
LinkedIn: https://www.linkedin.com/in/cscoltock
sansaran
Contributor

Thank you all for the wonderful explanations. I will contact the network team to configure the VLANs and get everything working. I have a good understanding now, and I hope I can explain to the network team what needs to be done.
