VMware Cloud Community
vapor
Contributor

iSCSI - vmkernel gateway

I cannot get my host to connect to my iSCSI datastores unless a router or gateway is present.

Screen shot 2011-02-22 at 2.17.46 PM.png

If I connect these switches to the network switch that has a route to the gateway, then the iSCSI datastores will connect.

If I unplug the switches from the rest of the network, the host will not connect to my datastores. Might this have anything to do with dynamic discovery?

I read that the vmkernel gets its routing table from the gateway router, but I wouldn't think that mattered, since the vmkernels I created were on separate subnets.

Here is another view

Screen shot 2011-02-22 at 2.28.34 PM.png

Do I need the port group?

5 Replies
Dave_Mishchenko
Immortal

What sort of storage system are you connecting to? You might want to look through this document - http://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf - starting at about page 36. If the SAN presents a single IP, then you can create a vSwitch with multiple vmkernel ports and bind each to the software iSCSI adapter. If not, then you might go with 2 vmkernel ports on different subnets. Which IP do you connect to for management purposes?
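For the binding Dave describes, a rough sketch of the ESX 4.x console commands looks like the following. The adapter name vmhba33 and the vmkernel ports vmk1/vmk2 are examples only - check your own names first with the list commands.

```
# List vmkernel interfaces and storage adapters to find the real names
esxcfg-vmknic -l
esxcfg-scsidevs -a

# Bind each vmkernel port to the software iSCSI adapter (example names)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the binding
esxcli swiscsi nic list -d vmhba33
```

A rescan of the software iSCSI adapter afterwards should show the paths through both vmkernel ports.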

vapor
Contributor

172.16.10.20, but that is just for now to get it working. Before, I had another vmkernel on the virtual machine network I was using for management. Since I could not get the storage connected on a separate subnet, I reverted to using a shared subnet with my virtual machines, with the 10.1.1.0 network as another connection.

My storage has two IP addresses, a 172.16.10 address and a 10.1.1 address, so I can multipath and set my preferred path.

When I put my iSCSI vmkernels and storage on another network, they do not connect. My vmkernel gateway is set to 172.16.10.1.

DCjay
Enthusiast

I have had similar issues in the past. Mine was fixed by ensuring that the vmkernel default gateway is accessible/pingable from the ESX host.
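You can check this from the service console with something like the following. The addresses here are examples taken from earlier in the thread - substitute your own gateway and target portal IPs.

```
# Test that the vmkernel TCP/IP stack can reach the gateway and a target
vmkping 172.16.10.1      # vmkernel default gateway (example)
vmkping 10.1.1.50        # iSCSI target portal on the second subnet (placeholder)

# Show the vmkernel routing table and default gateway
esxcfg-route -l
```

If vmkping fails but a ping from a VM on the same subnet works, the problem is in the vmkernel networking (subnet mask, VLAN tag, or gateway), not the physical network.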

depping
Leadership

Best practice would be to create different VLANs for the different types of traffic, for example:

Management Network - VLAN10

vMotion VMkernel - VLAN20

Storage Network - VLAN30

VM Network - VLAN40

Within the storage network you can then, depending on the type of array you are using, set up multiple VMkernels as Dave mentioned, and either bind these to a single NIC or create a channel to the network, again depending on the type of network switches you have.
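As a sketch, creating a VLAN-tagged storage port group and vmkernel interface from the console looks roughly like this. vSwitch1, the port group name, VLAN 30, and the IP are all examples; the physical switch ports must be trunked for the tags to pass.

```
# Add a port group for storage traffic and tag it with VLAN 30 (examples)
esxcfg-vswitch -A "Storage Network" vSwitch1
esxcfg-vswitch -v 30 -p "Storage Network" vSwitch1

# Create a vmkernel interface on that port group (example address)
esxcfg-vmknic -a -i 10.1.1.21 -n 255.255.255.0 "Storage Network"
```

Repeat with different port groups and VLAN IDs for management, vMotion, and VM traffic so each type of traffic stays isolated.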

It might be a good idea to get a local consultancy partner in for a day to help ensure you are following best practices. Although there is a cost associated with it, there would also be a cost associated with possible downtime if the environment is incorrectly configured.

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

vapor
Contributor

Flashed a router with DD-WRT and set up VLANs, and all is good; thanks for the binding suggestion. I heard somewhere else that NICs can change names, and names can be given to different NICs when booting. Is this the importance of binding the vmkernels to the NICs?
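As I understand it, the main point of binding is that the software iSCSI initiator then uses specific vmkernel ports for its sessions, which is what makes multipathing across the two subnets work, rather than protection against uplink renaming as such. You can confirm what is bound from the console; vmhba33 is an example adapter name.

```
# Show which vmkernel ports are bound to the software iSCSI adapter
esxcli swiscsi nic list -d vmhba33

# Show vSwitch layout, port groups, and their current physical uplinks
esxcfg-vswitch -l
```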
