Hi,
we're running a larger VMware environment with NFS-based storage. Unfortunately we're also running out of IP addresses in our /24 storage subnet.
So we'll have to mount the same datastores via a new subnet (or even IPv6) on upcoming hosts.
I assume they will then be recognized as "new" datastores, correct?
Is there any way to live-migrate VMs from hosts with the "old" datastores to hosts with the "new" datastores other than doing a storage migration (essentially onto the same datastore)?
Regards
With NFS datastores I don't think it matters where you connect from. All hosts that connect to the same NFS export can access the files on it. You could even access NFS storage over a layer-3 routed connection: https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-5BE0AD02-2622-467C...
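As a sketch of that point (server IP, export path, and datastore name below are placeholders, not values from this thread), each host mounts the same export with esxcli, and every host that mounts the same server/export path sees the same files:

```shell
# Hypothetical values: replace server IP, export path, and datastore name
# with your own. Run on each ESXi host that should see the datastore.
esxcli storage nfs add --host 10.0.0.10 --share /export/nfs01 --volume-name NFS-01

# Verify the mount afterwards:
esxcli storage nfs list
```

Note that vSphere identifies an NFS datastore by the server address plus export path, which is exactly why connecting via a different subnet or address family comes up as a separate datastore.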
Let me clarify what the problem is:
Host A mounts the NFS export via IPv4; the datastore is named e.g. "NFS-01".
Host B has to mount via IPv6; its datastore cannot be named "NFS-01", as that name is already in use.
So we end up with "NFS-01" and e.g. "NFS-02", which vSphere treats as different datastores. Migrations then become Storage vMotions 😕
Regards
Ah, yes. That would be an issue. If multiple hosts are to see the same datastore, they must all mount it the same way. Maybe it is possible to switch the IPv4 hosts to IPv6?
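A rough sketch of switching one host over (addresses, export path, and datastore name are hypothetical; evacuate the host first, since removing the mount makes its VMs inaccessible on that host):

```shell
# Hypothetical values throughout. Put the host into maintenance mode /
# vMotion its VMs away before touching the mount.

# Remove the IPv4-mounted datastore on this host:
esxcli storage nfs remove --volume-name NFS-01

# Re-mount the same export via its IPv6 address, reusing the same name:
esxcli storage nfs add --host fd00:10::10 --share /export/nfs01 --volume-name NFS-01
```

Check your NFS version and vSphere release for IPv6 support before rolling this out; the per-host remount itself is quick, but the name alone doesn't make vSphere treat it as the same datastore as on the IPv4 hosts.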
Basically that's where we wanted to go. But sooner or later you have to move workloads from an IPv4 host to an IPv6 host, and that's where it becomes a Storage vMotion.
If it were only a couple of hosts or VMs, it wouldn't be a problem. But we're facing ~100 hosts, ~2000 VMs and xxx TB of data to move...