We have a fairly substantial ESX 3.5 estate (40+ hosts, 700+ VMs) spread across three datacentres, with each datacentre having its own VTP domain.
We currently have a total of more than 560 portgroups on multiple standard vSwitches deployed across those three datacentres (we're a hosting provider and we have one portgroup/VLAN per customer). I'm going to audit the portgroups currently deployed and see whether we can remove some unused ones, but over time the number is likely to grow.
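The audit step above boils down to set arithmetic: take the full list of portgroups, subtract the ones that any VM NIC is attached to, and what remains is a candidate list for removal. Here's a minimal sketch of that logic in Python; the inventory data is hypothetical and would in practice come from an export of your environment (e.g. via the vSphere API or PowerCLI), not be hard-coded like this.

```python
# Hypothetical inventory data for illustration -- in a real audit this
# would be exported from vCenter rather than typed in.
all_portgroups = {"cust-101", "cust-102", "cust-103", "cust-104"}

# (vm name, portgroup) pairs for every VM NIC in the estate.
vm_nic_assignments = [
    ("web01", "cust-101"),
    ("db01", "cust-101"),
    ("app01", "cust-103"),
]

# A portgroup is "in use" if at least one VM NIC references it.
in_use = {pg for _, pg in vm_nic_assignments}

# Everything else is a candidate for removal.
unused = sorted(all_portgroups - in_use)
print(unused)  # -> ['cust-102', 'cust-104']
```

One caveat worth building into a real version: a portgroup with zero VMs today might still be reserved for a customer who hasn't deployed yet, so treat the output as a review list rather than a delete list.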
We're finally getting around to looking at the vSphere upgrade and considering the use of distributed vSwitches, but the limit of 512 portgroups per vCenter stated in the Maximums document means we would have to split our estate across more than one vCenter. That means extra cost and inconvenience, though we could use Linked Mode to help with the inconvenience.
Anyone have any idea why the limit is per vCenter rather than per Distributed vSwitch, or even per Datacenter?
Are all the portgroups on the same vDS or are they spread across multiple vDSes?
I believe it's because each vCenter controls the portgroups and dvSwitches themselves. You also can't move a host into another vCenter and have those portgroups and the dvSwitch follow it and be seen by the new vCenter.
At the moment they're not on DvSes at all; they're on standard vSwitches, as we're still running 3.5.
The portgroups are split roughly equally per datacentre, a couple of hundred or so per DC.
We could move some portgroups to a DvS and leave some on standard switches, but that's messy and would reduce resilience, as we'd have to split teamed uplinks.
Seems strange that the PG limit for an entire vCenter with multiple DvSes in multiple datacenters is the same as for a single standard vSwitch on a single host.
I appreciate that we're an unusual case in using so many portgroups, but it's a bit of a restriction and is likely to mean we can't readily use DvSes.