Hi there,
We are about to introduce new Dell blades with a single dual-port CNA and a 10Gb NIC in our environment.
The CNA will be used for both IP and FCoE traffic. There is now some internal discussion about redundancy, since it's just one CNA per blade, which I see as a single point of failure. I think we should at least use the 10Gb NIC to separate some traffic (for example management or vMotion).
I’m curious, how do you see this or how did you design this for your environment?
Thanks
We run two dual-port CNAs, for a total of four converged 10Gb connections. Each CNA has one connection to each upstream fabric switch.
Each link is used for FCoE and also carries all VLANs (MGMT, vMotion, NFS, Guests). We run multi-NIC vMotion, so each link carries vMotion traffic.
We tag the traffic with CoS values on the N1KV and let the UCS fabric do ingress queuing to guarantee each class of traffic a portion of the link, without limiting it to only that portion when the other traffic types aren't consuming their full allocation.
With only a single CNA you won't have storage redundancy for FCoE, even if you can use your other 10Gb NIC to create network redundancy.
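That "guarantee a share, but don't cap it" behavior from the ingress queuing can be sketched in a few lines. The class names and percentages below are just illustrative, not our actual UCS policy:

```python
def allocate(link_gbps, reservations, demands):
    """Share a link so each traffic class gets at least its reserved
    fraction, but can borrow bandwidth other classes aren't using."""
    # Start each class at the smaller of its demand and its guarantee.
    alloc = {c: min(demands[c], reservations[c] * link_gbps)
             for c in demands}
    spare = link_gbps - sum(alloc.values())
    # Hand spare capacity to classes that still want more,
    # proportionally to their reservation weights.
    while spare > 1e-9:
        hungry = {c: demands[c] - alloc[c]
                  for c in demands if demands[c] - alloc[c] > 1e-9}
        if not hungry:
            break
        total_w = sum(reservations[c] for c in hungry)
        given = 0.0
        for c, want in hungry.items():
            extra = min(want, spare * reservations[c] / total_w)
            alloc[c] += extra
            given += extra
        if given < 1e-9:
            break
        spare -= given
    return alloc

# Uncongested: vMotion is only guaranteed 15% but borrows idle capacity.
print(allocate(10, {"fcoe": 0.4, "vmotion": 0.15, "data": 0.45},
                   {"fcoe": 2, "vmotion": 8, "data": 1}))

# Congested: everyone falls back to their guaranteed share.
print(allocate(10, {"fcoe": 0.4, "vmotion": 0.15, "data": 0.45},
                   {"fcoe": 10, "vmotion": 10, "data": 10}))
```

In the congested case vMotion drops to its 1.5Gb guarantee on that link; when the link is quiet it takes whatever is left over.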
Good point about the storage redundancy. Things can get really ugly when suddenly VMs are missing their storage...
You say each link is used for vMotion, but how much bandwidth do you allocate for vMotion in total?
By the way, great presentation on your blog; it gave me useful insights on vMotion and large VMs.
Thanks!
We reserve 15% of each of the four links for vMotion, for a total of 6Gb. During congestion only the 1.5Gb on each link is guaranteed, but when the other classes of traffic aren't using their full reservation, vMotion can use more.
In our prod environment, with four vMotion VMK interfaces, we see between 12Gb and 16Gb of throughput during a vMotion. We don't run jumbo frames; they could push that higher, but we're happy with where it is. It takes about 8 minutes, I think, to put a host with 1TB of RAM and 160 virtual machines into maintenance mode.
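A quick back-of-the-envelope check on those numbers (ignoring memory dirtying and per-VM overhead, so this is only a rough sanity check):

```python
# Reservation math: 15% of each 10Gb link, across four links.
link_gbps = 10
links = 4
vmotion_share = 0.15

per_link = vmotion_share * link_gbps   # 1.5 Gb/s guaranteed per link
total = per_link * links               # 6 Gb/s guaranteed per host

# Evacuating ~1 TB of RAM at the observed ~16 Gb/s peak throughput:
ram_gbit = 1024 * 8                    # 1 TB expressed in gigabits
seconds = ram_gbit / 16                # ~512 s, roughly 8.5 minutes

print(per_link, total, seconds)
```

That lands right around the ~8 minutes quoted above for putting the host into maintenance mode.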