Trying to install ESXi400-201002401-BG and ESXi400-201002402-BG on some systems using the vSphere Host Update Utility. Some work fine, others fail.
One system fails with:
"System.IO.IOException: I/O error occurred"
This one is an installation that boots from a USB stick. I imagine that the partition table on the stick does not leave enough free space for the update. On one of the failing systems, df -h returns the following output:
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.4M 35.8M 84% /
vfat 285.9M 264.5M 21.4M 93% /vmfs/volumes/c2a427e4-2d317086-fef9-b5750d88536c
vfat 249.7M 60.5M 189.2M 24% /vmfs/volumes/7c7b15ea-54f861ef-a388-473f43007b89
vfat 249.7M 59.3M 190.4M 24% /vmfs/volumes/4b72c5e3-3a61e05b-ed4c-b9b6da83be5c
The stick is a 2 GB stick, but the installation apparently only partitions around 1 GB of it for storage.
On the system where the upgrade works, I ran df -h a number of times during the upgrade:
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 183.1M 35.2M 84% /
vfat 285.9M 242.7M 43.2M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 24.3M 4.0G 1% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.2M 36.1M 83% /
vfat 285.9M 242.7M 43.2M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 46.3M 4.0G 1% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.2M 36.1M 83% /
vfat 285.9M 242.7M 43.2M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 74.5M 3.9G 2% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.3M 36.0M 84% /
vfat 285.9M 242.7M 43.2M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 98.3M 3.9G 2% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.3M 36.0M 84% /
vfat 285.9M 242.7M 43.2M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 108.3M 3.9G 3% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.3M 36.0M 84% /
vfat 285.9M 242.7M 43.2M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 115.6M 3.9G 3% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.4M 35.9M 84% /
vfat 285.9M 242.7M 43.2M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 142.0M 3.9G 3% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 218.8M -572.0k 100% /
vfat 285.9M 242.7M 43.2M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 176.6M 3.8G 4% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 249.9M -31.6M 114% /
vfat 285.9M 242.7M 43.2M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 176.6M 3.8G 4% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
/vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557 # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 249.9M -31.6M 114% /
vfat 285.9M 223.1M 62.8M 78% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 4.0G 176.6M 3.8G 4% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
After the upgrade:
~ # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 181.1M 37.2M 83% /
vfat 285.9M 243.3M 42.6M 85% /vmfs/volumes/e00f98e1-2bcc0c91-e7a2-3487611c1557
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ab49947e-ec35530f-62b1-a6e88c946a1e
vfat 4.0G 3.3M 4.0G 0% /vmfs/volumes/4a44e3e9-aaf48127-a0c8-0025b3249484
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/ebdacc79-0269ce43-bee5-12c36af12735
vmfs3 131.8G 25.6G 106.2G 19% /vmfs/volumes/4a44e3e9-bfd59a8a-c493-0025b3249484
This tells me that the upgrade apparently uses the visorfs root and one of the vfat partitions.
If my conclusion is correct that the system has too little space for the upgrade to complete, then how do I complete the upgrade successfully on these systems?
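For reference, here is the rough pre-flight check I run in the console before retrying; the 60 MB threshold is just my guess at the patch payload size, and I am assuming a df that supports -P (the busybox df on ESXi seems to):

```shell
# Pre-flight check (a sketch): is there enough free space at TARGET
# before attempting the patch? PATCH_MB is a guessed payload size.
PATCH_MB=60
TARGET="${TARGET:-/}"
# -P gives POSIX one-line-per-filesystem output; column 4 is free space in KB
avail_kb=$(df -kP "$TARGET" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge $((PATCH_MB * 1024)) ]; then
    echo "OK: ${avail_kb} KB free on $TARGET"
else
    echo "TOO SMALL: only ${avail_kb} KB free on $TARGET"
fi
```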
Regards, Lars.
There isn't enough space for the upgrade.
Check the required space in http://www.vmware.com/pdf/vsphere4/r40_u1/vsp_40_u1_upgrade_guide.pdf.
But I suggest reinstalling cleanly (to use all of your 2 GB of space).
Andre
Yeah, but I am pretty sure that a clean "i" install partitions the disk like that.
Regards, Lars.
Lars, it looks like the issue is the space constraint on the following partition; it has only 21 MB of free space. That happens if /var/core contains old core dumps. Please try the upgrade again after cleaning up that folder.
> vfat 285.9M 264.5M 21.4M 93% /vmfs/volumes/c2a427e4-2d317086-fef9-b5750d88536c
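For example, something along these lines from the console (just a sketch; double-check that the dumps are really disposable before deleting anything):

```shell
# Sketch: inspect and clean a core-dump directory. CORE_DIR defaults to
# /var/core on the host; the guard keeps this safe on systems without it.
CORE_DIR="${CORE_DIR:-/var/core}"
if [ -d "$CORE_DIR" ]; then
    # show the largest entries first, so you can see what is eating space
    du -ak "$CORE_DIR" | sort -nr | head
    # remove old dumps once you are sure they are no longer needed
    rm -f "$CORE_DIR"/* 2>/dev/null || true   # ignores subdirectories
    echo "cleaned $CORE_DIR"
else
    echo "no $CORE_DIR on this system"
fi
```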
:+: VCP3, VCP4, RHCE, EMCPA.
Nothing to delete in /var/core - it is empty.
Can't I just mount the free space and make the upgrade use it?
If so, what should I mount it as?
Regards, Lars.
I have replaced the USB stick that I boot ESXi from with a larger one (4 GB).
The data was copied over with dd, and now I don't know how to make use of the extra space.
What I want to do is mount an extra partition as, say, "/temp" and then configure it in the vSphere Client advanced settings as the new scratch location. I think this will solve my problem with the failing upgrade (?)
fdisk -l gives me:
/dev/disks/mpx.vmhba32:C0:T0:L0p1 5 1886 1927168 5 Extended
/dev/disks/mpx.vmhba32:C0:T0:L0p4 * 1 4 4080 4 FAT16 <32M
/dev/disks/mpx.vmhba32:C0:T0:L0p5 5 254 255984 6 FAT16
/dev/disks/mpx.vmhba32:C0:T0:L0p6 255 504 255984 6 FAT16
/dev/disks/mpx.vmhba32:C0:T0:L0p7 505 614 112624 fc VMKcore
/dev/disks/mpx.vmhba32:C0:T0:L0p8 615 1352 755696 6 FAT16
/dev/disks/mpx.vmhba32:C0:T0:L0p9 1353 1886 546800 fb VMFS
where /dev/disks/mpx.vmhba32:C0:T0:L0p9 is the new partition that I would like to mount as temp
but I am a bit stuck on how to continue from here to achieve this. Can someone comment on this?
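From what I have read so far, I suspect the steps are roughly the following, but I have not verified them, so please correct me (the volume label "usb-scratch" and the exact option name are my assumptions from the documentation):

```shell
# 1. Put a VMFS filesystem on the new partition (partition 9 of the
#    USB device; the ":9" suffix selects the partition):
vmkfstools -C vmfs3 -S usb-scratch /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:9

# 2. Point the host's scratch location at the new volume, either in the
#    vSphere Client (Advanced Settings > ScratchConfig) or from the console:
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/usb-scratch

# 3. Reboot so the new scratch location takes effect.
```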
Thanks in advance!
Regards, Lars.