VMware Cloud Community
pauska
Contributor

vSphere 4.1 networking - what are the best practices?

Hi,

I'm wondering if there is an updated "Best Practices" document when it comes to vSphere 4.1 networking. It used to be separate (and redundant) networks for SC, VMK, FT, iSCSI and guest networks. Has this changed in vSphere 4.0 or 4.1?

The reason I'm asking is that I see more and more companies using vSphere with Active Directory integration and running the vSphere Client on regular computers (that is, not on an admin VLAN), in the same manner as you would use RSAT against Windows servers.

Is it considered safe to run SC on the same network as your guests?

I'm also wondering if there are any performance (or security) problems with using just two Ethernet ports in a GigE environment if the load never reaches anything near 30% utilization.

chadwickking
Expert

Hey pauska,

I would read up on Kendrick's blog on networking setup when working with ESX. It was very helpful to me.

He also references a lot of other material as well.

In regards to the management network, it's usually best practice to have it on a separate VLAN, not necessarily a different physical network. You can set up inter-VLAN routing on a layer 3 switch so you can get to it from your production network. As far as NICs go, the more the better. Give the blog a good read, because you have some other things to plan for, like redundancy and so forth, so you can avoid downtime and things like "host isolation". I recall that when going through my VCP class they mentioned the top three things to avoid are:

1. Having too little memory

2. Having too few NICs

3. Not having enough storage

Ultimately the implementation is up to you, depending on whether this is a "lab"-type setup and so on.
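To make the VLAN point a bit more concrete, here is a minimal sketch in Python of the kind of layout I mean; the VLAN IDs and subnets are made-up examples, and it is the layer 3 switch that routes between the production VLAN and the management VLAN:

# Hypothetical VLAN plan: management lives on its own VLAN but is still reachable
# from production via inter-VLAN routing on the layer 3 switch.
import ipaddress

vlans = {
    10:  ("Management", ipaddress.ip_network("10.0.10.0/24")),
    20:  ("vMotion",    ipaddress.ip_network("10.0.20.0/24")),
    100: ("Production", ipaddress.ip_network("10.0.100.0/23")),
}

for vlan_id, (name, net) in vlans.items():
    print(f"VLAN {vlan_id:<4} {name:<11} {net}  gateway {net.network_address + 1}")

# Sanity check: no two VLANs share address space.
nets = [net for _, net in vlans.values()]
assert not any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])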

Network I/O Best Practices:

Another very similar post:

http://communities.vmware.com/thread/74204

I hope this helps.

Regards,

Chad King

VCP

"If you find this post helpful in anyway please award points as necessary"

Cheers, Chad King VCP4 Twitter: http://twitter.com/cwjking | virtualnoob.wordpress.com If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
pauska
Contributor

Edit: Funny, my vacation message from Exchange appeared here..

sketchy00
Hot Shot

Great questions... I was wondering the same thing. I had designed and deployed my cluster under 3.5, on blades that had 6 NIC ports in them (3 cards x 2 ports per card). My design followed best practices at the time (2 ports for iSCSI, 2 ports for the production LAN, and 2 ports for the SC and vMotion, which ride over different VLANs), with each item never riding over two NIC ports that come from the same card. The transition to 4.0 was fine, but now with the migration to the ESXi architecture in 4.1, I didn't know if there was a way I could/should tweak my setup.

I've wanted to employ FT, but with 6 NICs and iSCSI I was under the impression that I wasn't able to. I'll have to print out and digest that link provided by the other responder and see if that would work for me.

Will be interested to see what others are doing.


chadwickking
Expert

Hey sketchy,

Your implementation is pretty standard, but implementing FT brings many things with it. One of the biggest is that it puts a huge amount of load on the network, and I would recommend dedicating it to its very own set of NICs, apart from any other traffic. When I was attending my class, I believe the instructor mentioned that you could possibly get anywhere from 5-7 VMs running FT on a 1 Gb NIC. His recommendation was, if you plan on implementing it, to try to wait until you can utilize a 10Gb FCoE card to maximize performance. FT is awesome but isn't always the best fit for some things. Working in a large enterprise (over 5000 servers and nearly 1000 VM hosts) I am still surprised that we haven't seen it implemented, but then again we are an MS Cluster shop, so that is where they are going even though it doesn't always make sense :).
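As a rough back-of-the-envelope check on that 5-7 VM figure, here is a quick sketch; the per-VM FT logging rate is just an assumed illustrative number, not a measured one:

# Rough estimate of how many FT-protected VMs fit on a 1 Gb logging NIC.
nic_mbps = 1000                  # dedicated FT logging NIC
usable_fraction = 0.8            # leave some headroom on the link
per_vm_logging_mbps = 120        # assumed average FT logging rate per protected VM

max_ft_vms = int(nic_mbps * usable_fraction // per_vm_logging_mbps)
print(f"Roughly {max_ft_vms} FT-protected VMs per 1 Gb logging NIC")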

Regards,

Chad King

VCP

"If you find this post helpful in anyway please award points as necessary"

Cheers, Chad King VCP4 Twitter: http://twitter.com/cwjking | virtualnoob.wordpress.com If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
sketchy00
Hot Shot

Thanks Chad for the great reply. Yeah, I had put any FT desires on permanent hold until changes in my hardware occur (the Dell M6xx blades can only have 6 NICs in them, but there are some neat new switches that fit in the blade enclosure that can increase that capacity; 10GbE might come for me before that, though...). My questions were out of curiosity from the original post, where he asked about the Service Console network.

Again, great reply. Thanks.

pauska
Contributor

Thank you for your answers.

I'm aware that the recommended minimum number of NICs is 6, due to redundancy and failover, even when you're not an FT shop. I do have to admit that I didn't know FT could cause such insane amounts of traffic - luckily we have 8 ESX servers in total, so this shouldn't be a problem.

My original question still remains: do you consider it safe to put the SC and regular client traffic (your production network) on the same VLAN? Do you consider it safe to have ONLY two 10GigE NICs for -all- traffic?

chadwickking
Expert

You could use the 10Gb FCoE card if you want, but I usually do separate the SC traffic from VM traffic, and it's a best practice to separate traffic onto different switches. You want to use different vmnics on different switches for redundancy purposes. The only implementations of 10Gb FCoE cards I have seen are for vMotion and FT traffic. As for whether it's safe? I would probably say no, due to troubleshooting and performance problems with running all the traffic over 2 NICs, though you would have plenty of bandwidth.

Cheers, Chad King VCP4 Twitter: http://twitter.com/cwjking | virtualnoob.wordpress.com If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
pauska
Contributor

Hi again..

These are not 10GigE cards planned for FCoE. I'm talking about trunking all traffic (SC, VMK, FT, iSCSI, VM) over two separate cards (connected to separate switches) instead of using 6 (or more) 1Gb NICs - as I'm seeing deployed in many new blade installations.

Why does this involve any performance loss? I also can't see any problems with troubleshooting, as long as every vSwitch uses one card for normal traffic and the other one for failover.
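Roughly what I have in mind, as a sketch (the port group names and vmnic numbering are just examples), with each port group active on one 10GigE uplink and failing over to the other:

# Two 10GigE uplinks (vmnic0/vmnic1), each cabled to a different physical switch.
teaming = {
    "Management": {"active": "vmnic0", "standby": "vmnic1"},
    "vMotion":    {"active": "vmnic1", "standby": "vmnic0"},
    "FT":         {"active": "vmnic1", "standby": "vmnic0"},
    "iSCSI":      {"active": "vmnic0", "standby": "vmnic1"},
    "VM Network": {"active": "vmnic0", "standby": "vmnic1"},
}

# Every port group should still have a path if either uplink (or switch) fails.
for pg, order in teaming.items():
    assert {order["active"], order["standby"]} == {"vmnic0", "vmnic1"}, pg
    print(f"{pg:<11} active={order['active']} standby={order['standby']}")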

chadwickking
Expert

I thought you were talking about 10Gb FCoE cards. The performance concern would be vMotion in vSphere 4.1 utilizing up to 8 Gbps when performing multiple vMotions... I must not have seen your specifics on that. Yes, if you are using multiple vSwitches, it would help with troubleshooting. It seemed like you might be using just two 10Gb FCoE cards as an option, which would imply one switch... sorry, and thanks for clarifying. :)

Cheers, Chad King VCP4 Twitter: http://twitter.com/cwjking | virtualnoob.wordpress.com If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Texiwill
Leadership

Hello,

It is NEVER suggested that your 'Virtualization Management Network' and 'Virtual Machine Networks' share the same VLAN. I would go as far as to say they should never share the same wire even in a 10Gb network setup. Why? Purely for security reasons.

The Virtualization Management Network contains: Service Console, ESXi Management Appliance, vCenter, vSphere Clients, and any other virtualization management tool. In effect, the KEYS to your Kingdom. This is a single trust zone.

VM Network contains: as many trust zones as you think are necessary. Each trust zone is most likely segregated by VLANs.

Then you have the Storage Network, which is its own trust zone, as well as FT/vMotion, which can be their own trust zone.

So in a 10G environment I would assign at least two of the onboard gigabit NICs to the Virtualization Management Network and use 10G for other things.

This depends on how secure you need to be. I know some locations that have to maintain PHYSICAL network separation. Others that rely on VLANs and TRUST their physical switches. Either works, but in each case, the Virtualization Management Network is always segregated from any other network/Trust Zone. Yes you can combine on the same wire, but you are 'trusting' VLANs to be safe in this case.

Remember if you work with VLANs your Trust moves to the physical switch... Ensure you can verify their configuration as often as necessary to maintain this security stance.
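As a quick sanity check of that separation, here is a small sketch; the port group names, VLAN IDs and trust zone labels are made-up examples:

# Flag any VM port group that lands on a VLAN used by the management trust zone.
port_groups = {
    "Management Network": {"vlan": 10,  "zone": "mgmt"},
    "vMotion-FT":         {"vlan": 20,  "zone": "vmotion-ft"},
    "iSCSI-A":            {"vlan": 30,  "zone": "storage"},
    "Prod-VMs":           {"vlan": 100, "zone": "vm"},
    "DMZ-VMs":            {"vlan": 200, "zone": "vm"},
}

mgmt_vlans = {pg["vlan"] for pg in port_groups.values() if pg["zone"] == "mgmt"}
violations = [name for name, pg in port_groups.items()
              if pg["zone"] == "vm" and pg["vlan"] in mgmt_vlans]
print("VM port groups sharing a management VLAN:", violations or "none")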


Best regards,
Edward L. Haletky VMware Communities User Moderator, VMware vExpert 2009, 2010

Now Available: 'VMware vSphere(TM) and Virtual Infrastructure Security'

Also available: 'VMware ESX Server in the Enterprise'

Blogging: The Virtualization Practice | Blue Gears | TechTarget | Network World

Podcast: Virtualization Security Round Table Podcast | Twitter: Texiwill

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
pauska
Contributor

Texiwill, thank you for the very precise and interesting answer.

I agree with you when it comes to separating all traffic onto private segments so that nothing and no one can interfere (or hack its way in). I do however have problems seeing the valid point in having to trust your switches; why wouldn't you do that? I might think that way if other people managed my switches (and suddenly merged the DMZ and FT VLANs or something stupid like that), but when I'm the guy that runs and admins everything, this shouldn't be an issue.

Good point about onboard gigabit NICs, which all the servers have. I was told once to never rely on integrated NICs alone (as they often share the same controller/bus etc.), so in my head I've always designed ESX to use vmnic0 (the first integrated gigabit NIC) as a primary NIC and vmnic2 (the first extra NIC) as failover, and so on.

Perhaps I should look at dedicating each 10Gb port to its own usage and just have failover spread out "just in case". I can't possibly believe that I will need load-balanced 10GigE in the near future.

Thanks for the good advice!

Texiwill
Leadership

Hello,

I agree with you when it comes to separating all traffic onto private segments so that nothing and no one can interfere (or hack its way in). I do however have problems seeing the valid point in having to trust your switches; why wouldn't you do that? I might think that way if other people managed my switches (and suddenly merged the DMZ and FT VLANs or something stupid like that), but when I'm the guy that runs and admins everything, this shouldn't be an issue.

The trust for VLANs is in your switch configuration and its failure mode. As a one-man shop, so to speak, you have to trust that they are configured properly, but I have heard of both high-end and low-end switches that failed 'open' and broadcast VLAN traffic across ALL VLANs, not just the designated one. This was a physical switch failure that required switch replacement. So your TRUST is still in your physical switch layer.

Good point about onboard gigabit NICs, which all the servers have. I was told once to never rely on integrated NICs alone (as they often share the same controller/bus etc.), so in my head I've always designed ESX to use vmnic0 (the first integrated gigabit NIC) as a primary NIC and vmnic2 (the first extra NIC) as failover, and so on.

I use my onboard NICs mostly for management, but like you I also have them teamed with a pNIC on another card just to be sure a single PCI failure does not take everything out. This depends entirely on the PCI controller setup. Not all onboard pNICs share the same controller, but if you are going down the PCI controller path, it is best to understand your hardware very well. For example, eth0 and ILO/DRAC cards often share the same IRQ and may not be gigabit, so you get weird results just using eth0. Once more, a case of knowing your hardware. I always look for that combination, as it bit me pretty badly once.
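A small sketch of the kind of check I mean; the PCI addresses below are invented examples, but the idea is that a primary/failover pair should never sit on the same physical card:

# Map each vmnic to its PCI address (bus:slot.function); same bus/slot means the same card.
pnic_pci = {
    "vmnic0": "0000:02:00.0",   # onboard port 0
    "vmnic1": "0000:02:00.1",   # onboard port 1, same controller as vmnic0
    "vmnic2": "0000:0b:00.0",   # add-in card port 0
    "vmnic3": "0000:0b:00.1",
}

def same_card(a, b):
    return pnic_pci[a].rsplit(".", 1)[0] == pnic_pci[b].rsplit(".", 1)[0]

team = ("vmnic0", "vmnic2")      # primary on the onboard NIC, failover on the add-in card
print("team spans two cards:", not same_card(*team))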

Perhaps I should look at dedicating each 10Gb port to its own usage and just have failover spread out "just in case". I can't possibly believe that I will need load-balanced 10GigE in the near future.

I would still use two 10G ports and two 1G ports. I really like redundancy; if you have one 10G port and it fails, where would that traffic go?


Best regards,
Edward L. Haletky VMware Communities User Moderator, VMware vExpert 2009, 2010

Now Available: 'VMware vSphere(TM) and Virtual Infrastructure Security'

Also available: 'VMware ESX Server in the Enterprise'

Blogging: The Virtualization Practice | Blue Gears | TechTarget | Network World

Podcast: Virtualization Security Round Table Podcast | Twitter: Texiwill

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill