To provide redundancy for both the service console and vmkernel, is it practical to combine them on the same vSwitch with teamed NICs?
Each NIC of course would be connected to a different network switch. I just don't like the idea of the console and vmkernel being single points of failure.
All opinions are welcome. Thanks.
Well, the vSwitch is really just a port aggregate. Sure, having a backup would give you better peace of mind. We have about 30 ESX servers now; we don't use more than one NIC for the SC or VMkernel, and they are on the same vSwitch.
When was the last time a NIC actually failed? Also keep in mind that if you use separate NICs for your VMs, they wouldn't be part of that scheme anyway, so if the SC port and VMkernel go down, what's the worst that can happen? Your VMs will still run. And in the three years since we set this up, we haven't had a single problem with a NIC.
So that 'failure' is only a contingency. Even on physical servers with two NICs it's often a false sense of security, because people rarely configure them to fail over properly; they are just there as an alternate.
JDoll,
I've been working with VMware for about 5 years now (yeah, started on 2.x) as a consultant. Since the VI3 release it's been common practice for the SC and VM port groups to be configured on vSwitch0. vSwitch0 is configured with two pNICs, typically the first port on separate cards (one onboard and one on the first NIC card in the server).
For the sake of argument, let's say pnic0 and pnic2 are connected to vSwitch0. Load balancing is set to route based on originating virtual port ID (also no EtherChannel). The SC and VM port groups could be either on the same VLAN or on different VLANs.
The SC port group would be configured to use pnic0 as the active NIC and pnic2 as the standby NIC.
The VM port group would be configured to use pnic2 as the active NIC and pnic0 as the standby NIC.
This gives you redundancy without "burning" too many NIC ports. It can also support trunking to keep VMotion data off the production VLAN, and it adapts well to redundant active/active core networks.
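On an ESX 3.x host, the uplink side of that layout can be sketched from the service console roughly like this (the vmnic names and VLAN IDs are assumptions for illustration; the per-port-group active/standby failover order itself isn't exposed by `esxcfg-vswitch` and is set in the VI Client under each port group's NIC Teaming tab):

```shell
# Link the two pNICs (assumed here to show up as vmnic0 and vmnic2) to vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0

# Create the Service Console and VM port groups on vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VM Network" vSwitch0

# Optionally tag each port group with its own VLAN (example IDs)
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
esxcfg-vswitch -v 20 -p "VM Network" vSwitch0

# Verify the resulting layout
esxcfg-vswitch -l
```

Then, in the VI Client, override the vSwitch teaming policy per port group: pnic0 active / pnic2 standby for the Service Console, and the reverse for the VM port group.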
-Mr. Andersen.
I create logical segregation points between management traffic, VM traffic, and storage traffic. I consider the service console and the VMkernel (for VMotion) management traffic, and keep these on one vSwitch with two pNICs for redundancy. I would create a separate vSwitch for VM traffic, for security. Since vSwitches are objects in memory, separate vSwitches are separate objects in memory, which completely eliminates any "sharing" between the two traffic types. Storage is the third type, and warrants its own vSwitch and pNICs to segregate that traffic as well. If you can manage it, use separate VLANs for everything too, for both logical and physical separation.
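A rough sketch of that three-vSwitch layout from the ESX service console (the vSwitch numbers, vmnic names, and port group names are assumptions for illustration; each vSwitch would ideally get a second pNIC for redundancy as described above):

```shell
# vSwitch0: management traffic (Service Console + VMkernel for VMotion)
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0

# vSwitch1: VM traffic only, its own object in memory and its own pNIC
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

# vSwitch2: IP storage traffic (e.g. an iSCSI/NFS VMkernel port)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "Storage" vSwitch2
```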
-KjB
VMware vExpert