Ok, I have searched and read through all docs I can find, but cannot find the answer to the following.
I have a new 6-node cluster running ESXi 5, and two other clusters running ESX 4.1.x.
I have EVC enabled so I can hot migrate between clusters, but when I do, the VMs show as unprotected and will not participate in an HA failover. A reboot of the VM does not help; you have to power the VM down and back up, after which it comes up as protected and will participate in an HA failover.
Is there a way to hot migrate and stay protected without having to power off/on?
I have tried various EVC modes and reconfiguring the hosts for HA; nothing other than a power off and on seems to work.
I have several hundred VMs to migrate onto the new ESXi 5 cluster, and it is not practical to power each one off and back on to get them working correctly. Am I missing a setting somewhere, or is this the only way?
I appreciate any responses prior to opening a case with VMware.
Thanks!
Have you tried this option?
Turn OFF HA on the 6-node ESXi 5 cluster, migrate a VM from the legacy cluster to the ESXi 5 cluster, then turn HA back ON and see if the error still occurs.
I suspect the reason is the HA architecture change between ESXi 5 and the legacy versions.
That seemed to work: disabling HA on the new cluster, migrating a VM to it, and then re-enabling HA leaves the VM showing as protected.
But with several VMs to migrate, it is unrealistic to have to do this at various intervals during the migration just to maintain HA functions should a piece of hardware fail.
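If the HA off/on toggle turns out to be the only option, it could at least be scripted instead of clicked through for each batch. A minimal pyVmomi sketch, under the assumption that pyVmomi is installed and that `vms`, `dest_host`, and `dest_cluster` are hypothetical placeholders for objects already looked up from your vCenter inventory:

```python
"""Bulk vMotion with an HA toggle on the destination cluster.

Sketch only: `vms`, `dest_host`, and `dest_cluster` are placeholders;
fetch them from your own vCenter inventory first.
"""

def set_ha(cluster, enabled):
    """Turn HA (das) on or off for a vim.ClusterComputeResource."""
    from pyVmomi import vim  # deferred so this file imports without pyVmomi
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(enabled=enabled))
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)

def migrate_batch(vms, dest_host, dest_cluster):
    """Disable HA on the destination, vMotion every VM, re-enable HA."""
    set_ha(dest_cluster, False)  # the workaround described in this thread
    tasks = [vm.MigrateVM_Task(host=dest_host, priority="defaultPriority")
             for vm in vms]
    # In real code, wait for every migration task to complete here
    # before turning HA back on.
    set_ha(dest_cluster, True)
    return tasks
```

Note that this only automates the toggle already described above; it does not fix the underlying issue, and the destination cluster has no HA protection while a batch is running.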
Any other ideas, workarounds? Can anyone from VMware comment?
I have a similar problem, but all my clusters are based on ESXi 5 hosts. Powering the VM off and on doesn't help.
After migrating between clusters, the VM stays unprotected.
I have the same issue when migrating from an ESXi 4.1 cluster to an ESXi 5 cluster.
The same thing happens when trying to migrate the VM back to the 4.1 cluster; it will not protect the VM. The only workaround for me so far has been to unconfigure/reconfigure HA for the cluster. Not a pretty workaround, but at least it works without affecting the VMs.
I have also found that disabling HA on the 4.1 cluster before migrating VMs resolves the issue and protects the VMs after migration.
I'm starting to think the issue has something to do with multiple HA clusters using the same datastores.
mcamp001 wrote:
Any other ideas, workarounds? Can anyone from VMware comment?
If you like, the thread can be moved to the HA section of the communities, where Duncan Epping from VMware often replies.
Did any of you, by any chance, open a support ticket? If so, please list the numbers.
Also, any particular errors in the FDM log files or in the VM log files?
** moved the thread as many of the HA developers follow this sub-community **
Okay, just did a quick search. It is a known issue:
Thanks Duncan, I completely missed that KB when I searched, but I had come to the same conclusion as the KB.
In my customer's case it's not a big issue; toggling HA off and on every now and then while migrating is a good enough workaround.
If I hear when it will be resolved I will let you know in this thread.
I've got a short update, since this happened at a customer site after an HA failover (which only partly worked, but that's a different story). I had to re-add all hosts by IP address and had not disabled HA beforehand, because time was short at that moment. This process also triggers the bug, and in this case you cannot eliminate it by simply disabling and re-enabling HA. The solution that worked for me is:
1. Disable HA
2. Disconnect, but do not remove, all ESX hosts.
3. Re-enable HA on the cluster.
4. Reconnect all hosts, one by one.
5. The VMs will show as protected after a minute or two.
Regards,
Pesché
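The sequence above can also be scripted. A pyVmomi sketch, assuming pyVmomi is installed and that `cluster` is a hypothetical placeholder for a `vim.ClusterComputeResource` fetched from an authenticated session:

```python
"""Sketch of the disconnect/reconnect recovery sequence described above.

`cluster` is a placeholder for a vim.ClusterComputeResource obtained
from an authenticated pyVmomi session; this is an assumption, not a
verbatim reproduction of anyone's script.
"""

def recover_ha_protection(cluster):
    from pyVmomi import vim  # deferred so this file imports without pyVmomi

    def das_spec(on):
        """Cluster reconfigure spec that enables or disables HA (das)."""
        return vim.cluster.ConfigSpecEx(
            dasConfig=vim.cluster.DasConfigInfo(enabled=on))

    # 1. Disable HA.
    cluster.ReconfigureComputeResource_Task(das_spec(False), modify=True)
    # 2. Disconnect, but do not remove, every host.
    for host in cluster.host:
        host.DisconnectHost_Task()
    # 3. Re-enable HA on the cluster.
    cluster.ReconfigureComputeResource_Task(das_spec(True), modify=True)
    # 4. Reconnect the hosts one by one.
    for host in cluster.host:
        host.ReconnectHost_Task()
    # 5. The VMs should then show as protected after a minute or two.
```

In real use you would wait for each task to complete (and check for errors) before moving to the next step, rather than firing them off back to back.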