VMware Cloud Community
jackjack2
Contributor

How to wipe and do a new install with attached iscsi luns?

I am planning to migrate two ESX 4.1 hosts to ESXi 5.1 by wiping them and doing a clean install. Each host is connected to two iSCSI SANs, with 4 LUNs from each SAN attached to each host. Each LUN is spanned over 3 extents. I am planning to vMotion (not Storage vMotion) the guests from one host to the other to do the upgrade. I am stuck on determining what will happen to the iSCSI LUNs after I upgrade the hosts to ESXi 5.1. It seems like I should just be able to reattach them, but I am not sure. Can I simply re-attach the LUNs to the newly installed 5.1 hosts and see all of the VMDKs intact, or is there something I am overlooking?

And will the VMs migrated to the other host continue to run on the iSCSI SAN while I upgrade the host they were vacated from? It seems the storage should be unaffected by the process, since the LUNs will no longer be attached to the wiped host once its IQN no longer exists.


Am I on the right track here?  What am I missing?

1 Solution

Accepted Solutions
huckfinn
Contributor

jackjack2, did you figure this out? I went ahead and migrated to ESXi 5.0 U2 from ESX 3.5 U5. I had to perform a clean install, and I have an iSCSI SAN. Here is what I did:

1. Took a screenshot of every configuration screen of my ESX host. Went to Configuration -> Storage Adapters -> select the iSCSI adapter -> Properties -> copy the IQN and alias -> click the Dynamic Discovery tab and note any IPs. You may need these IPs when rediscovering the datastores later on.

2. After documenting the configs, put the host in maintenance mode.

3. Once in maintenance mode, remove the host from the vCenter Server. Log in to the host directly and shut it down.

4. Remove all network cables. Labeling the cables before you detach them helps.

5. Clean-install ESXi 5.0.

6. After the install, before rebooting, reattach the network cables.

7. Configure and test the management network. If the management network test fails, check whether you entered anything for the VLAN ID. Chances are your port isn't trunked on the physical switch; in that case, removing the VLAN ID (when configuring the management network) fixes it.

8. Add the host back to the vCenter Server.

9. Configure networking, time, DNS, etc.

10. Add a software iSCSI adapter. This adapter will have a new IQN. Either provide this new IQN to your SAN administrator so he can update it on the SAN side, or re-use the old IQN.

11. In my case, I made a copy of the new IQN and then deleted it, then entered the old IQN recorded in step 1. I also entered the alias (since one was present before). After this, rescanning and refreshing still didn't populate the datastores, so I went to the iSCSI adapter's properties, opened the Dynamic Discovery tab, and added the IPs that were present before (otherwise, you will need to enter the virtual IP of the cluster(s) that your datastore(s) belong to). Once the IPs were added to the Dynamic Discovery tab, it asked if I wanted to rescan the HBAs. I said yes and voila! My datastores were populated.

12. Add the host to the cluster.

Note that you can add the iSCSI adapter after you join the host to the cluster. However, in that case HA will not be happy, since no datastore heartbeats will be detected. Also, when the datastores are discovered they may initially show up as unmounted. Don't panic; after a minute or so they will be mounted.
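For anyone who prefers the command line, steps 10-11 can also be sketched with esxcli from the ESXi 5.x shell. This is an untested sketch: the adapter name (vmhba33), IQN, alias, and target IP below are placeholders for the values recorded in step 1, and the commands are printed in dry-run form because they only make sense on a live host.

```shell
#!/bin/sh
# Dry-run sketch of steps 10-11 via esxcli (ESXi 5.x).
# vmhba33, the IQN, alias, and IP are placeholders -- substitute the
# values recorded in step 1. Remove the "run" wrapper to execute for real.
run() { echo "+ $*"; }

restore_iscsi() {
  adapter="vmhba33"                          # software iSCSI vmhba (placeholder)
  old_iqn="iqn.1998-01.com.vmware:esx01"     # IQN saved in step 1 (placeholder)
  old_alias="esx01"                          # alias saved in step 1 (placeholder)
  target="10.0.0.10:3260"                    # dynamic discovery IP (placeholder)

  # Enable the software iSCSI initiator (step 10).
  run esxcli iscsi software set --enabled=true
  # Re-apply the old IQN and alias so the SAN-side access lists still match (step 11).
  run esxcli iscsi adapter set --adapter="$adapter" --name="$old_iqn"
  run esxcli iscsi adapter set --adapter="$adapter" --alias="$old_alias"
  # Re-add the dynamic discovery address, then rescan to pick up the LUNs.
  run esxcli iscsi adapter discovery sendtarget add --adapter="$adapter" --address="$target"
  run esxcli storage core adapter rescan --adapter="$adapter"
}

restore_iscsi
```

This mirrors the GUI flow above; re-using the old IQN avoids touching the SAN-side initiator records at all.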

Hope this helps!

14 Replies
vmroyale
Immortal

You will need to upgrade vCenter Server first. Then you can vMotion all guests to one of the hosts that vCenter is managing and do the clean install of the other host. Once the install is complete, configure ESXi (storage, networking, security, etc) and join it back to the vCenter Server. Then you can migrate the VMs to the 5.1 host and repeat the process on the remaining host. Your VMs should stay up for the entire process.

Of course, you will want to make sure your hardware is on the HCL and you might also want to consult the VMware Product Interoperability Matrixes.

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
huckfinn
Contributor

Did you figure this out?


I am in the same boat. I have a cluster of three ESX 3.5 hosts and am trying to upgrade to ESXi 5.0. I recently upgraded vCenter Server to version 5.0 (and needed to upgrade ESX 3.5 U5 to build 317866 in order to enable HA in the cluster). VMware recommends performing a clean install of ESXi 5.0 instead of trying to upgrade to ESX 4 first. My hardware is listed on the HCL, and vCenter 5.0 supports a cluster of mixed hosts.
VMware support suggested putting the host in maintenance mode before performing the clean install, and said that everything would be taken care of after bringing the host out of maintenance mode. However, the support engineer had not done this himself, and I am not comfortable with his suggestions.

This is what I was thinking of doing:

1. Migrate VMs from the host to other two hosts.

2. Disconnect and remove the host from the cluster.

3. Shut down the host. Detach the iSCSI network cables from the host before the clean install to avoid any accidental writes to the LUNs during installation (I read this somewhere).

4. Perform the clean install. (The part I am not sure about is what to choose when the installer asks: "Migrate ESX, preserve VMFS datastore", "Install ESXi, preserve VMFS datastore", or "Install ESXi, overwrite VMFS datastore".)

5. Configure network on the host and then connect it to the cluster containing the other ESX 3.5 hosts.

6. Rescan storage

But someone who has done it needs to verify the steps before I proceed with the upgrade.

If anyone can shed light on what to choose in step 4, and point out the steps so that the freshly installed host sees the storage the ESX 3.5 hosts are attached to (VMFS 3.x), it would be much appreciated.
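For step 6, a hedged sketch of how the rescan and verification could look from the local shell of the rebuilt host (ESXi 5.x only; esxcli is not present on ESX 3.5). The commands are echoed in dry-run form since they require an ESXi shell.

```shell
#!/bin/sh
# Dry-run sketch for step 6 on the rebuilt ESXi 5.x host: rescan and
# confirm the existing VMFS-3 datastores mounted instead of showing up empty.
run() { echo "+ $*"; }

verify_storage() {
  # Rescan all adapters so newly presented iSCSI LUNs are detected.
  run esxcli storage core adapter rescan --all
  # The old datastores should be listed here as mounted VMFS volumes.
  run esxcli storage filesystem list
  # Shows each VMFS volume with its backing extents (useful for spanned LUNs).
  run esxcli storage vmfs extent list
}

verify_storage
```

If the datastores appear in the filesystem list as VMFS, the installer did not touch them and no format is needed.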

jackjack2
Contributor

Huckfinn,

No, I haven't figured it out yet. I am attempting to locate some spare equipment so I can set up a test environment and try the procedure. What I am afraid of is that the newly installed host will see the LUN but, instead of mounting it as-is, will want to format it. I need to preserve the existing virtual machines intact on the existing LUN and attach it to the new host. Skimming other forums on similar topics, some posts say that as long as the new host has been added to a cluster, the iSCSI LUNs will be marked as sharable and there will be no issues; other posts say this should be possible without formatting the existing LUN on either paid-product cluster hosts or standalone free-hypervisor hosts; and still others say the only way to attach the existing LUN to the new host is to reformat it and start over. So it looks like the only way to answer my question is to set up a test environment and try it before I wipe the production ESX 4.1 hosts and install ESXi 5.1.

Regarding your question about the VMFS datastore options for the local datastore (i.e. the hard drives installed in the server, not the iSCSI LUNs): if all virtual machines and other data have been moved off of it, I would overwrite the VMFS datastore. If I understand correctly, it is now possible to do an in-place upgrade from classic ("fat") ESX to ESXi, which I believe the "Migrate ESX, preserve VMFS datastore" option would do, so that would not be a clean install. Preserving the existing local VMFS datastore would also not be a clean install, so I am planning to overwrite the local VMFS datastore. (I will be detaching the iSCSI network cables prior to starting the install, just as you are, so there won't be any confusion about which disk to install on.) As I am building a new vCenter, I will be removing the host from the old cluster and adding it to the new cluster in the new vCenter after the installation.

If anyone in this forum has previously done a wipe and reinstall of hosts attached to an iSCSI SAN, could you please post what you did to attach the existing iSCSI LUNs to the newly reinstalled hosts intact, preserving the production virtual machines on the volume?

EddieJ300
Contributor

Please remove

jackjack2
Contributor

why?

weinstein5
Immortal

With ESX 3.5 a local datastore is created, so that is what is being seen. I would access the controller card for the internal disks and use it to remove and recreate the drive, which will remove any DAS VMFS datastore.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
weinstein5
Immortal

What you describe is correct. If you are doing an in-place upgrade, the host will recognize the storage when you reattach the iSCSI network connections.

huckfinn
Contributor

Thanks, weinstein5. Please help me outline the procedure correctly. I plan to reuse the hostname and the IP. There are three ESX 3.5 hosts sharing VMFS 3.x datastores. For a clean install on one of the hosts, do I enter it into maintenance mode, shut it down once in maintenance mode, and then clean-install ESXi 5.0? Or do I remove the host from the cluster first and then shut it down?

Also, during the clean install, would I choose "overwrite VMFS datastores", since I don't have any local VMs? Once the host is up and the network is configured, I would add it to the cluster that has the ESX 3.5 hosts and then rescan the storage. (I am assuming that even if it is in maintenance mode, taking it out of maintenance mode would require reconnecting the host first, similar to adding the host to the cluster?)

weinstein5
Immortal

The first step is to upgrade vCenter to vSphere 5.

Please help me outline the procedure correctly. I plan to reuse the hostname and the IP. There are three ESX 3.5 hosts sharing VMFS 3.x datastores. For a clean install on one of the hosts, do I enter it into maintenance mode, shut it down once in maintenance mode, and then clean-install ESXi 5.0? Or do I remove the host from the cluster first and then shut it down?

Yes, you will want to enter maintenance mode, which will force you to move or shut down all the VMs. At that point you should remove the host from the cluster and then shut it down.

Also, during the clean install, would I choose "overwrite VMFS datastores", since I don't have any local VMs? Once the host is up and the network is configured, I would add it to the cluster that has the ESX 3.5 hosts and then rescan the storage. (I am assuming that even if it is in maintenance mode, taking it out of maintenance mode would require reconnecting the host first, similar to adding the host to the cluster?)

You will want to select "overwrite VMFS datastores", and you will want to make sure the host is not connected to the iSCSI network during the install. Once you have ESXi 5 installed, reattach the iSCSI network before restarting, then reboot the host. When it is up, re-add it to the cluster.

Also, I am sure you have reviewed this, but here is a link to the upgrade guide - http://pubs.vmware.com/vsphere-51/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-511-upgrad...

huckfinn
Contributor

vCenter Server and Update Manager have both been upgraded to vSphere 5. Now the only task is to move to ESXi 5.0 U2, since that is the furthest version supported per the HCL.
Thanks for the link. So my final steps would be:

1. Enter host 1 into maintenance mode.

2. Take host 1 out of the cluster and shut it down.

3. Detach the iSCSI network cables from the host.

4. Boot the host from the installer CD, follow the installation instructions, and choose the "overwrite VMFS datastore" option.

5. Reboot once the install completes.

6. Connect the host to vCenter Server and join it to the cluster (network drivers may need to be installed before configuring the network?).

7. Scan for storage.

8. Use Update Manager to apply all the patches.

Repeat the same for all the hosts. Once every host is on ESXi 5.0 U2, upgrade VMware Tools first and then the virtual hardware version.
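After step 8, a quick sanity check from the host's local shell could look like the following (ESXi 5.x esxcli/vim-cmd, echoed in dry-run form; an untested sketch):

```shell
#!/bin/sh
# Dry-run sketch: post-patch checks after step 8 on an ESXi 5.x host.
run() { echo "+ $*"; }

post_patch_check() {
  # Confirm the host reports the expected ESXi 5.0 U2 build after patching.
  run esxcli system version get
  # List installed VIBs to verify the patch bundles were applied.
  run esxcli software vib list
  # Leave maintenance mode once storage and patch levels look right.
  run vim-cmd hostsvc/maintenance_mode_exit
}

post_patch_check
```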

weinstein5
Immortal

A couple of comments:

Before step 5, reconnect the iSCSI network.

Since you are doing an upgrade, you really should not have to do anything to the host other than add it to the vCenter environment.

huckfinn
Contributor

Thanks a ton weinstein5!

jackjack2
Contributor

Huckfinn,

Thank you for posting your upgrade sequence list. My upgrade is scheduled for 7/20 and I will be using it as a guide. Thanks again!
