VMware Cloud Community
Skumar704
Contributor

Network for NFS

Hi,

My infrastructure is as follows:

Two hosts, each with six 1 Gb NICs.

One NAS storage device with four NICs.

Two managed HP Layer 2 switches.

I am planning to set this up per best-practice recommendations, so that there is no single point of failure (SPOF) at any level.

With this in mind, we thought of using the ports on each server as follows:

2 for NFS storage, 2 for Production/Management, and the remaining 2 for vMotion.

From each pair of ports configured for a role, one cable goes to uplink switch 1 and the other to switch 2, so that if one switch goes down, the other takes over.

Separate VLANs are configured per port on each switch for the different traffic types.

My questions are below:

Should I team both ports on each vSS? If yes, what should the NIC teaming settings be for the Production, Storage, and vMotion networks (keeping in mind that the cables go to separate uplink switches)?

Should I keep the adapters in active/active mode or active/standby?

I do not think any specific settings are needed on the uplink switches, since only one cable per port goes to each, and I do not have the option of EtherChannel or LACP.

Moreover, our VMware license is Essentials Plus, so we have no option to use distributed switches.

We plan to use ESXi 5.5.0.

Also, do you suggest using jumbo frames in this setup?

Regards,

Sushil

6 Replies
Texiwill
Leadership

Hello,

I would make one change.... well maybe a few:

2 for NFS (you can bond them, but only if your NAS supports it; bonding gives you aggregate throughput, which is not a bad way to go)

2 for Workloads

2 for vMotion/Management (one for each, with the other as the failover path if necessary)

I would NOT mix your management and workloads on the same set of pNICs; instead, you can mix on the vMotion pair. This is recommended because both management and vMotion are supposed to be 100% segregated from everything else. One way to do this is to assign both NICs to a vSwitch, then use one in a vMotion port group (specifying the second pNIC for failover) and reverse that for the management port group. In effect, you would only be on the same pNIC in a degraded networking situation. You can use VLANs as well if you want.

This also helps with workload performance: management and vMotion are not used all that often, and when vMotion is in use, not much management is usually getting done. But workloads always run, and this way they take no impact when vMotion or management is in use.
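For example, a minimal ESXi shell sketch of that reversed active/standby layout (the vmnic numbers and port group names are assumptions; adjust to your environment):

# Hypothetical sketch: vSwitch0 carries Management and vMotion on vmnic0/vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Management
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
# Management active on vmnic0, failing over to vmnic1...
esxcli network vswitch standard portgroup policy failover set --portgroup-name=Management --active-uplinks=vmnic0 --standby-uplinks=vmnic1
# ...and vMotion reversed, so each service keeps its own pNIC until a failure
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic1 --standby-uplinks=vmnic0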

Keep Management/vMotion segregated from all other networks, which you can do using VLANs, physical segregation, or some other segregation technology.

Just my 2 cents...

Best regards,
Edward L. Haletky
VMware Communities User Moderator, VMware vExpert 2009, 2010, 2011, 2012, 2013, 2014

Author of the books 'VMware ESX and ESXi in the Enterprise: Planning Deployment of Virtualization Servers', Copyright 2011 Pearson Education, and 'VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment', Copyright 2009 Pearson Education.

Virtualization and Cloud Security Analyst: The Virtualization Practice, LLC -- vSphere Upgrade Saga -- Virtualization Security Round Table Podcast

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
Skumar704
Contributor

Hello Edward,

Thanks for your reply.

So I should be good placing the Management/Service Console and vMotion pair together and using active/standby failover.

Keeping the above in mind (please correct me wherever I am wrong):

1) Do I need to put both Management and vMotion in the same subnet?

2) Or keep them on separate VLANs but allow inter-VLAN communication between them? (I am considering this because, if one pNIC fails, management should remain reachable via the vMotion network, or vice versa.)

3) If point 2 is correct, or best practice, then I need a Layer 3 device at the uplink for inter-VLAN communication, which I do not have in my setup.

4) Is it possible to keep Production and this Management/vMotion setup in the same subnet (but on separate pNICs, per your recommendation)? The reason is that we want vCenter in the same subnet as our Domain Controller. If we instead choose a Management/vMotion subnet different from Production, that subnet must still be reachable from Production, which again means inter-VLAN communication and an L3 device. Or is the suggestion to keep management totally separate from the production subnet?

5) There is a possible workaround for the separate VLANs. We are connecting the virtual setup to two non-stackable HP Layer 2 switches, which in turn connect to the core at a Layer 3 device. I am not sure whether using that device for inter-VLAN routing between separate VLANs is a good way to answer question 4.

A few more suggestions needed:

1. Any specific recommendations for teaming on the VMware side for the NFS storage setup? Of course, I will keep it in a separate VLAN from everything else.

2. Does aggregation/load sharing work for NFS, or does it do failover only?

3. We are going to trunk the two switches together with two ports each, so that we get redundancy. Also, one cable from each pNIC pair goes to one switch and the other cable to the second. I hope that covers my redundancy and SPOF concerns. I am also worried about throughput to the NFS storage, since it is connected at only 1 Gb.

Regards,

Sushil

Texiwill
Leadership

Hello,

1) No, you never put vMotion and management on the same subnet; you actually HAVE to use a different subnet. In this mode the pNIC is nothing but an uplink device, so it does not matter that the IPs for both are set within the vmkernel. There is a virtual wire from the port to the vSwitch, from the vSwitch to the pNIC, and then you have a physical wire. Just think of how you would normally connect up a network; logically, that is how it works.
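As a minimal sketch (the vmk number, port group name, and addresses are assumptions for illustration), the vMotion vmkernel interface would sit on its own subnet like this:

# Hypothetical: a vMotion vmkernel port on a subnet separate from management
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.0.100 --netmask=255.255.0.0 --type=static
# vmk0 (Management) stays on its own subnet, e.g. 192.168.10.100/24;
# enable vMotion on vmk1 via the vSphere Client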

2) No need for inter-VLAN communication; actually, that would be a very, very bad idea from a security perspective for these two networks. Nothing should be on the vMotion network but ESXi hosts, as it sends memory over the wire, and memory holds all sorts of goodies such as credentials, encryption keys, etc.

3) Not necessary.

4) Use multiple port groups on your management vSwitch for management VMs, or add a new vSwitch that is in the same broadcast domain. I do this: I have a DVS for admin VMs and a VSS for vCenter, the VMkernel, etc. You can also use a vCNS Edge gateway or other inline firewall to route non-management traffic, such as AD, into your management network.
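For instance, a hypothetical sketch of adding a VM port group for management VMs alongside the vmkernel port groups (the names and VLAN ID are assumptions):

# Hypothetical: a VM port group for management VMs (e.g. vCenter) on the management vSwitch
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Mgmt-VMs
# Same VLAN as the Management vmkernel port group, so both share one broadcast domain
esxcli network vswitch standard portgroup set --portgroup-name=Mgmt-VMs --vlan-id=100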

5) You are overthinking this.

Check out the following for some assistance: http://www.virtualizationpractice.com/?file_id=380 . It covers a full hybrid cloud architecture, but the important parts for you are what happens in the virtual environment with respect to networking (the center of the diagrams). I build up from nothing to a fully secure virtual/cloud environment. Let me know if there are any questions.

Re: NFS

NFS is pretty network agnostic; what matters is the NAS/NFS server and how it handles the various forms of teaming. Remember, you are teaming from the device to a physical switch, but not necessarily past that. So if your device supports teaming, you would team from the NAS to the ProCurve, then let your hosts talk to it as normal. You would aggregate 2 Gb of traffic from X number of hosts all running at 1 Gb... that may be the better way to go; I am not sure on that, actually, but it is something to think about.

Remember, it is many hosts talking to a single NAS head, so where you put the team could be important for overall performance across the cluster of ESXi hosts.
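On the ESXi side, mounting the NFS export looks the same regardless of how the NAS teams its ports; a hypothetical example (the address and share path are assumptions):

# Hypothetical: mount an NFS datastore over the storage network
esxcli storage nfs add --host=192.168.30.10 --share=/vol/datastore1 --volume-name=nfs-ds1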

Best regards,
Edward L. Haletky
Skumar704
Contributor

Thanks, Edward. I do not have the luxury of running a DVS, as we have purchased the Essentials Plus kit.

I may indeed be overthinking.

It all looks good to me now. I will keep Management in the same subnet as Production (hope that is fine, with separate pNICs) and vMotion on a separate subnet. Everything will be on a vSS.

Just curious how failover between Management and vMotion is going to work.

Let's say the Management vmkernel is on 192.168.10.100/24 (as is Production) and vMotion is on 172.16.0.100/16, with active/standby uplinks on pNIC0 and pNIC1 on host 1. (Can they be kept active/active under the NIC team?)

On host 2, Management is on 192.168.10.101/24 and vMotion on 172.16.0.101/16.

The uplink for pNIC0 is connected to port 3 on the physical switch; port 3 carries the VLAN for the 192.168.10.0/24 subnet.

The uplink for pNIC1 is connected to port 4 on the physical switch; port 4 carries the VLAN for the 172.16.0.0/16 subnet.

Now say pNIC0 fails; then pNIC1 takes over, and I assume the traffic will take the path through pNIC1 via port 4 on the physical switch.

Is that how it works in the background? pNIC1 is hard-coded to the VLAN of the 172.16.0.0/16 subnet, yet I must not be thrown out of managing the host on IP 192.168.10.100.

If I compare this with physical switches, we rely on trunking for communication among the pSwitches and allow all the VLANs through a single trunked uplink port.

In the virtual environment we are connecting with two NICs, and the switch ports they connect to are on separate subnets and do not communicate with each other.

In a nutshell: if I consider the pNIC an uplink device, does it allow all VLANs to propagate through it, on all uplinks behind the virtual vmkernel?

Regards,

Sushil

Texiwill
Leadership

Hello,

I still suggest you put management and vMotion on the same set of pNICs, not management and workloads. It makes no difference where they are from a subnet perspective. I also suggest reading the following:

Those should get you going.

pNICs have no IP address within a vSphere environment; they act as an uplink between a physical switch and a virtual switch. Depending on how you trunk your VLANs, the trunk ends either at the pSwitch (external switch tagging) or at the virtual switch (virtual switch tagging). Most people trunk their VLANs to the virtual switch.

You want something like the following:

pSwitch <-> pNIC0 <-> [vSwitch0 <-> Portgroup] <-> management (subnet1)

pSwitch <-> pNIC1 <-> [vSwitch0 <-> Portgroup] <-> vMotion (subnet2)

On failover between pNIC0 and pNIC1, vMotion and Management end up on the same pNIC, but in normal operation they stay separate. This is the recommended method. In this case you would trunk the VLANs to the vSwitch. I know of some who do not use VLANs at all, just separate subnets, and that works as well.

pSwitch <-> pNIC2/pNIC3 <-> vSwitch1 <-> Portgroup(s) <-> workloads (subnet1)

If you use VLANs, you are trunking all of them (except the one for vMotion) to vSwitch1 (virtual switch tagging). If subnet1 is on the same vSwitch and the trunk is correct through the pSwitch ports, then it can talk to management on vSwitch0 without any extra effort. Switches know how to route VLAN traffic.

pSwitch <-> pNIC4/pNIC5 <-> vSwitch2 <-> Portgroup <-> NFS (subnet3)

Here we can bind pNIC4 and pNIC5 together or use them as a failover pair, just for NFS on its own subnet/VLAN. This VLAN can terminate at the pSwitch if you desire, or once more at the vSwitch.
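Since your uplink switches have no EtherChannel/LACP option, the failover-pair form is the safer binding; a hypothetical sketch (the vSwitch/vmnic names and the jumbo-frame MTU are assumptions):

# Hypothetical: dedicated NFS vSwitch with a failover pair of uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic5
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch2 --active-uplinks=vmnic4 --standby-uplinks=vmnic5
# If you opt for jumbo frames, the MTU must match end to end (vSwitch, vmkernel port, pSwitch, NAS)
esxcli network vswitch standard set --vswitch-name=vSwitch2 --mtu=9000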

In this setup you have 3 VLANs and 3 subnets (one subnet per VLAN is also recommended)... as an example:

VLAN100 -> subnet1 -> Management/Workloads

VLAN200 -> subnet2 -> vMotion

VLAN300 -> subnet3 -> NFS
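Tagging those VLANs at the virtual switch (VST) is then just a per-port-group setting; a hypothetical sketch using the IDs above:

# Hypothetical: virtual switch tagging for the three port groups
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=100
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=200
esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=300
# The pSwitch ports facing these uplinks must carry the same VLANs as 802.1Q trunks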

Let the pSwitches do any 'routing' of traffic within each VLAN. You only need a routing appliance if you WANT to cross VLAN boundaries, and there is absolutely no need to do so in this setup.

Best regards,
Edward L. Haletky
Skumar704
Contributor

Thanks, Edward, for sharing the articles and your posts.

I understand it well now.

Thanks a ton.

Regards,

Sushil
