VMware Cloud Community
manofbronze
Enthusiast

Converting iSCSI vNICs from (4) 1Gbps Broadcom to (2) 10Gbps Intel X540 NICs

Our company has recently procured (2) Force10 S4820T network switches with the intent of upgrading our switch fabric to 10Gbps across the board. Additionally, we have purchased (2) Intel X540 DP (dual-port) 10GBase-T NICs for each of our (3) VMware host servers.

Currently we are running (3) ESXi 5.5 U1 host servers in an HA cluster. There is (1) Dell R710 with (4) Broadcom 1Gbps NICs dedicated to iSCSI traffic destined for (2) SANs - a Dell MD3220i and an MD3800i. The other (2) hosts are Dell R720 servers with (6) Broadcom NICs dedicated to iSCSI traffic destined for the same target SANs. The R710 has a single vSwitch defined for iSCSI traffic to both SANs. Each of the R720 host servers has (2) vSwitches dedicated to iSCSI traffic: one with (4) vmk ports matched to (4) 1Gbps NICs dedicated to the MD3220i, and one with (2) vmk ports matched to (2) 1Gbps NICs dedicated to the MD3800i. All vSwitches have jumbo frames enabled, and all datastores are configured to use Round Robin multipath I/O. This design gives us (4) active and (4) inactive paths to the MD3220i on each host, and (2) active and (2) inactive paths to the MD3800i on each host.
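
For reference, we verify the vmk-to-NIC layout and the Round Robin policy on each host with commands along these lines (nothing host-specific needs to be filled in):

    esxcli network vswitch standard list    # each vSwitch with its uplinks, MTU and portgroups
    esxcli storage nmp device list          # path selection policy and path count per device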

Our intent is to use (2) of the (4) 10Gbps NICs for all iSCSI traffic and (2) for general network/management traffic. We would "retire" the 1Gbps NICs entirely.

My questions/concerns are as follows:

  1. Even though each NIC dedicated to iSCSI traffic will now be 10Gbps instead of 1Gbps, will reducing the physical NIC count to (2) defeat the purpose of the NIC upgrade?
  2. What ramifications are there to no longer having the iSCSI traffic destined for the MD3800i separate from that destined for the MD3220i?
  3. What are the ramifications of defining (4) paths on (2) vNICs for traffic destined to the MD3220i (one path for each subnet defined on the MD3220i, as best practices dictate)?
  4. Since the MD3220i only has 1Gbps ports - (4) on each of (2) controllers - would it make sense to reconfigure the iSCSI ports on the MD3220i? Currently there are (4) subnets, pairing the corresponding ports from each controller (i.e., 192.168.230.x = ports 0/0+1/0, 192.168.231.x = 0/1+1/1, 192.168.232.x = 0/2+1/2, etc.); would consolidating those into (2) subnets make sense?
  5. The S4820T switch is auto-sensing, so we are being assured we can attach the (8) ports from the MD3220i, but what are the ramifications of a 10Gbps NIC attempting to negotiate with one or more 1Gbps ports? I have read stories of the 10Gbps NIC falling back to 1Gbps entirely until communication with the 1Gbps target is complete. This would be highly undesirable. Has anyone encountered such an issue?
  6. Finally, has anyone made a similar move with similar hardware?

Thanks in advance to anyone who might like to offer up their opinions and/or advice.

Regards,

Don

vfk
Expert

Hi

Here are my thoughts:

Even though each NIC dedicated to iSCSI traffic will now be 10Gbps instead of 1Gbps, will reducing the physical NIC count to (2) defeat the purpose of the NIC upgrade?


No, it will actually be better: you will have more bandwidth, less cabling and a simpler configuration.

What ramifications are there to no longer having the iSCSI traffic destined for the MD3800i separate from that destined for the MD3220i?

It depends on your workload, but this should not really matter at 10GbE; the MD3220i cannot do more than 1GbE per port anyway, so its traffic plus the MD3800i traffic fits comfortably within (2) 10GbE uplinks.

What are the ramifications of defining (4) paths on (2) vNICs for traffic destined to the MD3220i (one path for each subnet defined on the MD3220i, as best practices dictate)?

Here is the best practice guide for running iSCSI storage with VMware: http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf - Also, you generally want to avoid multiple subnets for iSCSI; here is a good read: http://wahlnetwork.com/2015/03/09/when-to-use-multiple-subnet-iscsi-network-design/
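
If you do consolidate onto a single subnet, the usual pattern is one vmk port per 10GbE uplink with iSCSI port binding. A minimal sketch - the vmhba33 software iSCSI adapter and the portgroup/vmk/vmnic names here are placeholders for whatever your environment uses:

    # override each iSCSI portgroup so it uses a single active uplink
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic4
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic5
    # bind both vmk ports to the software iSCSI adapter
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2

With both vmk ports bound and the targets reachable on one subnet, Round Robin still sees multiple paths per device.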

Since the MD3220i only has 1Gbps ports - (4) on each of (2) controllers - would it make sense to reconfigure the iSCSI ports on the MD3220i? Currently there are (4) subnets, pairing the corresponding ports from each controller (i.e., 192.168.230.x = ports 0/0+1/0, 192.168.231.x = 0/1+1/1, 192.168.232.x = 0/2+1/2, etc.); would consolidating those into (2) subnets make sense?

As above, be careful with multiple subnets for iSCSI - please read the Chris Wahl article. Whether reconfiguring the iSCSI ports makes sense depends on what other protocols are available on this storage (NFS, FC and so on).

The S4820T switch is auto-sensing, so we are being assured we can attach the (8) ports from the MD3220i, but what are the ramifications of a 10Gbps NIC attempting to negotiate with one or more 1Gbps ports? I have read stories of the 10Gbps NIC falling back to 1Gbps entirely until communication with the 1Gbps target is complete. This would be highly undesirable. Has anyone encountered such an issue?

It will work - 10GBase-T is backward compatible - but the ports facing the MD3220i will only negotiate at 1Gbps, so you will not get more than 1Gbps per path to that array. Each switch port negotiates its speed independently, so the host-facing 10Gbps links are not pulled down by the 1Gbps array ports.
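
You can confirm what each uplink actually negotiated straight from the host (vmnic names will differ per host):

    esxcli network nic list    # Speed column should show 10000 for the X540 ports

If an X540 port reports 1000 there, it has fallen back and the cabling or switch port configuration is worth a look.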

Finally, has anyone made a similar move with similar hardware?

It is not complicated, you just need to plan it carefully and do it in a maintenance window. If you have free ports on the storage it will help, as you can use them to test that the initial configuration works against your new switches before you cut over.
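
As a rough per-host sketch of the new iSCSI vSwitch build - all vSwitch/portgroup/vmk/vmnic names and IP addresses below are placeholders for whatever your environment uses:

    esxcli network vswitch standard add -v vSwitch2                 # new vSwitch for the 10GbE uplinks
    esxcli network vswitch standard set -v vSwitch2 -m 9000         # jumbo frames on the vSwitch
    esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic4
    esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic5
    esxcli network vswitch standard portgroup add -v vSwitch2 -p iSCSI-1
    esxcli network ip interface add -i vmk1 -p iSCSI-1 -m 9000      # iSCSI vmk port with MTU 9000
    esxcli network ip interface ipv4 set -i vmk1 --ipv4=192.168.230.50 --netmask=255.255.255.0 --type=static
    vmkping -I vmk1 -d -s 8972 192.168.230.101                      # jumbo-frame test to a SAN port (placeholder IP)
    esxcli storage core adapter rescan --all                        # pick up paths on the new vmk ports

The second iSCSI portgroup and vmk are created the same way before the port binding step in the earlier sketch. The vmkping with -d (don't fragment) and -s 8972 proves jumbo frames work end to end before any datastore traffic moves, and the rescan brings in the new paths so you can check the path count before retiring the 1Gbps uplinks.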

--- If you found this or any other answer helpful, please consider the use of the Helpful or Correct buttons to award points. vfk Systems Manager / Technical Architect VCP5-DCV, VCAP5-DCA, vExpert, ITILv3, CCNA, MCP