Hi,
I want to isolate NFS from management traffic on a vSphere 5 cluster. The ESX servers have two vmkernel interfaces each: one is configured for management (vmk0) and the other has no attributes enabled (vMotion, FT, etc.). The interfaces are created on different vSwitches, so they are on different NICs.
The NFS box providing the storage is on the same subnet as the management interface, so looking at the routing table of the ESXi host there is only one entry for the subnet:
#esxcfg-route -l
VMkernel Routes:
Network Netmask Gateway Interface
10.10.X.0 255.255.255.0 Local Subnet vmk0
default 0.0.0.0 10.10.X.Y vmk0
Since there is no port binding for NFS like there is with iSCSI, all traffic to the NFS box will be sent out through vmk0. I am thinking of different options to force the traffic through the second vmkernel interface (vmk1):
1. Add a static route specifying that the NFS box is reached through vmk1.
2. Use VLAN tagging on vmk1 and configure the tag on the switch and on the NFS box.
3. Reconfigure the network and use a different subnet for NFS traffic (not really my preferred option).
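For reference, options 1 and 2 could be sketched from the ESXi shell roughly like this. The addresses, port group name, and VLAN ID below are placeholders of my own, and the exact syntax should be checked against your 5.x build:

```shell
# Option 1 (sketch): add a static route toward the NFS box.
# Placeholder addresses; esxcfg-route takes network/prefix then gateway.
esxcfg-route -a 10.10.50.5/32 10.10.50.1

# Option 2 (sketch): tag vmk1's port group with a dedicated storage VLAN.
# "NFS-PG" and VLAN 100 are made-up example values.
esxcli network vswitch standard portgroup set --portgroup-name=NFS-PG --vlan-id=100
```

Note that a static route still needs a gateway reachable from vmk1; with both vmkernels on the same subnet it may not steer the traffic the way you intend.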
Any comment is appreciated.
Thanks,
Juan.
If you asked me, I would choose option 3, and otherwise option 2.
The storage network should be segregated and available only to the hosts that are accessing it.
Maish
Thanks Maish. I ended up going for option 3.
I have practically the exact same setup (I'm doing link aggregation).
I wondered which vmk# it was using to pass NFS traffic... how do you tell?
Check the ESXi routing table; if you have SSH enabled on the host, use esxcfg-route -l.
At the end of each entry you will see which vmk interface it is using to reach each network.
I see now:
~ # esxcfg-route -l
VMkernel Routes:
Network Netmask Gateway Interface
192.168.1.0 255.255.255.0 Local Subnet vmk0
default 0.0.0.0 192.168.1.1 vmk0
~ #
What's wrong with doing VLAN tagging? It seems the easiest to implement.
It looks like you are using one vmk interface for everything.
Yeah, I don't think I know what I'm doing.
I'm going to try reworking this to set it up more like the way you have it, with option #3.
Right. If the two vmkernels are on the same subnet, the ESX host will use the first adapter you added (vmk0), so having different subnets for storage and management is what segregates the traffic, and you will see it in the routing table.
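The routing decision described here is essentially a longest-prefix match over that table. Here is a small Python sketch of my own (an illustration, not ESXi code; the addresses mirror the routing table later in this thread) showing why a dedicated storage subnet sends NFS traffic out vmk1 while everything else stays on vmk0:

```python
# Illustration only: how an IP stack picks an interface via
# longest-prefix match over a routing table like the one in this thread.
import ipaddress

# Hypothetical routing table mirroring the two-subnet setup.
routes = [
    (ipaddress.ip_network("192.168.1.0/24"), "vmk0"),  # management
    (ipaddress.ip_network("169.254.0.0/16"), "vmk1"),  # NFS / storage
    (ipaddress.ip_network("0.0.0.0/0"), "vmk0"),       # default route
]

def pick_vmk(dest: str) -> str:
    """Return the interface of the most specific route matching dest."""
    ip = ipaddress.ip_address(dest)
    matches = [(net, vmk) for net, vmk in routes if ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_vmk("169.254.1.125"))  # storage target -> vmk1
print(pick_vmk("8.8.8.8"))        # anything else -> vmk0 (default)
```

With both vmkernels on one subnet there is only one local route, so the match cannot distinguish them; a second subnet adds a more specific entry for the storage network, which is exactly what the routing table output above and below shows.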
This is what I have now:
~ # esxcfg-route -l
VMkernel Routes:
Network Netmask Gateway Interface
192.168.1.0 255.255.255.0 Local Subnet vmk0
169.254.0.0 255.255.0.0 Local Subnet vmk1
default 0.0.0.0 192.168.1.1 vmk0
How do I confirm that NFS traffic is using vmk1?
Also, I was wondering about this: do I need to make sure my datastore connects via the second subnet, like:
The only way you can connect to the storage on 169.254.1.125 is with a vmkernel interface on the same subnet (hopefully you are not routing the storage traffic), so you can be sure the storage traffic is going through vmk1.
Another way to test it is from your storage: from the NFS box's management interface you should be able to see the active session from 169.254.1.25.
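From the host side, a quick way to confirm which address a datastore is mounted on (a sketch; this esxcli namespace is available on vSphere 5) is:

```shell
# List NFS mounts: the Host column shows which storage address
# (and therefore which subnet/vmk) each datastore is reached on.
esxcli storage nfs list
```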
jaristizabal wrote:
The only way you can connect to the storage on 169.254.1.125 is with a vmkernel interface on the same subnet (hopefully you are not routing the storage traffic), so you can be sure the storage traffic is going through vmk1.
Another way to test it is from your storage: from the NFS box's management interface you should be able to see the active session from 169.254.1.25.
I think it is working, because on the NFS server (Synology NAS) I see a lot of network traffic on the 169.254-bound NIC.
This is how I set it up: http://imgur.com/a/KVIIK#0 . I directly connected a Cat6 cable from the host to the Synology's second NIC. This killed the ability to talk to the 192.168.1.x network, which killed my ability to talk to my router/internet/etc. So I physically connected another Cat6 cable from a different NIC on the host to my switch and assigned it to the VM port group (vmnic1).
Hopefully I'm doing this right... it seems to be working.
I guess this is the same thing, only cleaner: http://i.imgur.com/aDFth.png
Yes, it looks much better when you dedicate a physical NIC each to management, storage traffic, and VM traffic.
In my infra I used a different VLAN for NFS traffic for isolation, but the 3rd option is also simple and correct, since it uses a different subnet. You can also check from esxtop: press N for the network view, copy a file from one datastore to another, and you will see an immediate increase in input/output on the vmk1 interface.
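The esxtop check above can also be captured non-interactively in batch mode (standard esxtop flags; the output path is just an example):

```shell
# Interactive: run esxtop, press 'n' for the network view, and watch
# the MbTX/s / MbRX/s columns on the vmk1 row during a file copy.
esxtop
# Batch mode (sketch): record two samples five seconds apart to CSV.
esxtop -b -d 5 -n 2 > /tmp/esxtop-net.csv
```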