VMware Cloud Community
Vengant
Contributor

ESXi 6.7 iSCSI MPIO and link aggregation

Hi all,

Please help me understand the best way to achieve maximum performance in the following scenario.

I have a standalone ESXi host and a HPE MSA 1050 storage shelf.

The MSA has four 1 GbE iSCSI ports. The ESXi host has a 4-port 1 GbE NIC dedicated to SAN traffic. Both the MSA and the ESXi host are connected to a Cisco switch stack.

The ports of MSA Controller A have IPs 192.168.1.1 and 192.168.1.2; Controller B has 192.168.2.1 and 192.168.2.2. Subnet A and subnet B are in different VLANs.

[Attached diagram: esx-san.jpg]

What would be the best way to get maximum performance here? I see the following options:

  1. Create four VMkernel NICs, one per physical server NIC, assign IPs (two in subnet A and two in subnet B), and use MPIO Round-Robin (but I am not sure whether this works on a standalone host and whether all four server NICs would really be utilized)
  2. Combine the four physical server NICs into two port channels, one for subnet A and one for subnet B, create two VMkernel NICs, and use MPIO Round-Robin (still not sure whether ESXi would utilize all MSA controller ports...)
  3. Place all MSA controller ports in the same subnet and use iSCSI VMkernel port binding? Would that work on a standalone ESXi host?
  4. ???

The goal is to get something like link aggregation (2x1 Gbit) for the connection between the server and each MSA controller.
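For reference, option 1 on a standalone host could be sketched with esxcli roughly as below. All identifiers here are placeholder assumptions, not values from your setup: the vSwitch name, port group names, vmk numbers, IP addresses, the software iSCSI adapter name (vmhba64), and the naa device ID.

```shell
# Sketch for option 1: one VMkernel port per physical NIC,
# Round-Robin path selection on the MSA LUN.
# vSwitch1, iSCSI-A1, vmk1, vmhba64 and the naa ID are placeholders.

# Port group + VMkernel interface for the first NIC in subnet A
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-A1
esxcli network ip interface add -i vmk1 -p iSCSI-A1
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.1.11 -N 255.255.255.0 -t static
# ...repeat for vmk2 (subnet A) and vmk3/vmk4 (subnet B)

# Optional: bind a VMkernel port to the software iSCSI adapter (port binding)
esxcli iscsi networkportal add -A vmhba64 -n vmk1

# Set Round-Robin on the MSA LUN and switch paths frequently
esxcli storage nmp device set -d naa.600c0ffXXXXXXXXXXXXXXXXXXXXXXXXX -P VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set \
    -d naa.600c0ffXXXXXXXXXXXXXXXXXXXXXXXXX -t iops -I 1
```

Two caveats, both hedged: with port binding, each iSCSI port group is normally configured with exactly one active uplink and the rest set to unused; and port binding is traditionally used when initiator and targets share one subnet (your option 3), so with the two-subnet layout of options 1 and 2 the more common approach is separate vmk ports without binding.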

Thank you.
