This script performs backups of virtual machines residing on ESX(i) 3.5/4.x/5.x/6.x/7.x servers using methodology similar to VMware's VCB tool. The script takes snapshots of live running virtual machines, backs up the master VMDK(s) and then upon completion, deletes the snapshot until the next backup. The only caveat is that it utilizes resources available to the Service Console of the ESX server or Busybox Console (Tech Support Mode) of the ESXi server running the backups as opposed to following the traditional method of offloading virtual machine backups through a VCB proxy.
This script has been tested on ESX 3.5/4.x/5.x and ESXi 3.5/4.x/5.x/6.x/7.x and supports the following backup mediums: LOCAL STORAGE, SAN and NFS. The script is non-interactive and can be set up to run via cron. Currently, this script accepts a text file that lists the display names of the virtual machine(s) to be backed up. Additionally, one can specify a folder containing configuration files on a per-VM basis for granular control over backup policies.
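For example, an unattended nightly run could be scheduled with a crontab entry similar to the following (the paths and schedule are illustrative; on ESXi, cron entries live under /var/spool/cron/crontabs/root and must be re-added at boot unless persisted):

```
0 1 * * * /vmfs/volumes/datastore1/ghettoVCB-master/ghettoVCB.sh -f /vmfs/volumes/datastore1/vms_to_backup > /dev/null 2>&1
```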
Additionally, for ESX(i) environments that don't have persistent NFS datastores designated for backups, the script offers the ability to automatically connect the ESX(i) server to an NFS exported folder and then, upon backup completion, disconnect it from the ESX(i) server. The connection is established by creating an NFS datastore link, which enables monolithic (or thick) VMDK backups, as opposed to using the usual *nix mount command, which necessitates breaking VMDK files into the 2gbsparse format for backup. Enabling this mode is self-explanatory and will be evident when editing the script (Note: the VM_BACKUP_VOLUME variable is ignored if ENABLE_NON_PERSISTENT_NFS=1).
In its current configuration, the script will allow up to 3 unique backups of the Virtual Machine before it will overwrite the previous backups; this, however, can be modified to fit your procedures if need be. Please be diligent in running the script in a test or staging environment before using it on live production Virtual Machines; this script functions well within our environment, but there is a chance that it may not fit well into other environments.
If you have any questions, you may post in the dedicated ghettoVCB VMTN community group.
If you have found this script to be useful and would like to contribute back, please click here to donate.
Please read ALL documentation + FAQs before posting a question about an issue or problem. Thank you
1) Download ghettoVCB from GitHub by clicking on the ZIP button at the top and upload it to either your ESX or ESXi system (use scp or WinSCP to transfer the file)
2) Extract the contents of the zip file (filename will vary):
# unzip ghettoVCB-master.zip
Archive: ghettoVCB-master.zip
creating: ghettoVCB-master/
inflating: ghettoVCB-master/README
inflating: ghettoVCB-master/ghettoVCB-restore.sh
inflating: ghettoVCB-master/ghettoVCB-restore_vm_restore_configuration_template
inflating: ghettoVCB-master/ghettoVCB-vm_backup_configuration_template
inflating: ghettoVCB-master/ghettoVCB.conf
inflating: ghettoVCB-master/ghettoVCB.sh
3) The script is now ready to be used and is located in a directory named ghettoVCB-master
# ls -l
-rw-r--r-- 1 root root 281 Jan 6 03:58 README
-rw-r--r-- 1 root root 16024 Jan 6 03:58 ghettoVCB-restore.sh
-rw-r--r-- 1 root root 309 Jan 6 03:58 ghettoVCB-restore_vm_restore_configuration_template
-rw-r--r-- 1 root root 356 Jan 6 03:58 ghettoVCB-vm_backup_configuration_template
-rw-r--r-- 1 root root 631 Jan 6 03:58 ghettoVCB.conf
-rw-r--r-- 1 root root 49375 Jan 6 03:58 ghettoVCB.sh
4) Before using the scripts, you will need to enable the execute permission on both ghettoVCB.sh and ghettoVCB-restore.sh by running the following:
chmod +x ghettoVCB.sh
chmod +x ghettoVCB-restore.sh
The following variables need to be defined within the script or in a VM backup policy prior to execution.
Defining the backup datastore and folder in which the backups are stored (if the folder does not exist, it will automatically be created):
VM_BACKUP_VOLUME=/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
Defining the backup disk format (zeroedthick, eagerzeroedthick, thin, and 2gbsparse are available):
DISK_BACKUP_FORMAT=thin
Note: If you are using the 2gbsparse on an ESXi 5.1 host, backups may fail. Please download the latest version of the ghettoVCB script which automatically resolves this or take a look at this article for the details.
Defining the backup rotation per VM:
VM_BACKUP_ROTATION_COUNT=3
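With a rotation count of 3, each VM's folder under the backup volume ends up holding up to three timestamped directories (e.g. scofield/scofield-2011-03-13_15-27-59, as seen in the logs later in this document). The following is a minimal sketch of such rotation logic, not ghettoVCB's actual code:

```shell
# Illustrative sketch (NOT ghettoVCB's actual code): keep only the newest
# $keep backup directories under $dir. The YYYY-MM-DD_HH-MM-SS suffix means
# an alphabetical sort is also a chronological sort.
rotate_backups() {
    dir=$1
    keep=$2
    total=$(ls -1 "$dir" | wc -l)
    excess=$((total - keep))
    # nothing to prune yet
    [ "$excess" -le 0 ] && return 0
    # oldest directories sort first; delete the excess
    for old in $(ls -1 "$dir" | sort | head -n "$excess"); do
        rm -rf "$dir/$old"
    done
}
```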
Defining whether the VM is powered down or not prior to backup (1 = enable, 0 = disable):
Note: VM(s) that are powered off will not require snapshotting
POWER_VM_DOWN_BEFORE_BACKUP=0
Defining whether the VM can be hard powered off when "POWER_VM_DOWN_BEFORE_BACKUP" is enabled and the VM does not have VMware Tools installed:
ENABLE_HARD_POWER_OFF=0
If "ENABLE_HARD_POWER_OFF" is enabled, then this defines the number of (60sec) iterations the script will wait before executing a hard power off:
ITER_TO_WAIT_SHUTDOWN=3
The number of (60sec) iterations the script will wait when powering off the VM before giving up and ignoring the particular VM for backup:
POWER_DOWN_TIMEOUT=5
The number of (60sec) iterations the script will wait when taking a snapshot of a VM before giving up and ignoring the particular VM for backup:
Note: Default value should suffice
SNAPSHOT_TIMEOUT=15
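The timeout settings above all follow the same pattern: poll once per 60-second iteration, up to N iterations, then give up. A minimal sketch of how such a wait loop behaves, assuming a hypothetical is_powered_off check (the real script polls the host's vim-cmd interface for the power state):

```shell
# Illustrative sketch (NOT the script's actual code) of an "N x 60s" timeout
# such as ITER_TO_WAIT_SHUTDOWN or POWER_DOWN_TIMEOUT.
# is_powered_off is a hypothetical check supplied by the caller.
wait_for_power_off() {
    iterations=$1   # e.g. POWER_DOWN_TIMEOUT=5
    interval=$2     # 60 seconds in the real script
    i=0
    while [ "$i" -lt "$iterations" ]; do
        if is_powered_off; then
            return 0    # VM is down; proceed with the backup
        fi
        sleep "$interval"
        i=$((i + 1))
    done
    return 1            # gave up; the VM is skipped (or hard powered off)
}
```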
Defining whether or not to enable compression (1 = enable, 0 = disable):
ENABLE_COMPRESSION=0
NOTE: With ESXi 3.x/4.x/5.x, there is a limit on the maximum size of a VM that can be compressed within the unsupported Busybox Console; this does not affect backups running on classic ESX 3.x, 4.x or 5.x. On ESXi 3.x the largest supported VM is 4GB for compression and on ESXi 4.x the largest supported VM is 8GB. If you try to compress a larger VM, you may run into issues when trying to extract it upon restore. PLEASE TEST THE RESTORE PROCESS BEFORE MOVING TO PRODUCTION SYSTEMS!
Defining the adapter type for the backed-up VMDK (DEPRECATED - NO LONGER NEEDED):
ADAPTER_FORMAT=buslogic
Defining whether virtual machine memory is snapped and if quiescing is enabled (1 = enable, 0 = disable):
Note: By default both are disabled
VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0
NOTE: VM_SNAPSHOT_MEMORY is only used to ensure that when the snapshot is taken, its memory contents are also captured. This is only relevant to the actual snapshot and is not used in any way for the backup itself. All backups, whether the VM is running or offline, will result in an offline VM backup when you restore. This option was originally added for debugging purposes and in general should be left disabled
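For reference, with the defaults above (memory and quiesce both 0), the snapshot the script takes is roughly equivalent to running the following by hand, where VMID stands for the ID shown by vim-cmd vmsvc/getallvms (the snapshot name and description here are illustrative):

```
vim-cmd vmsvc/snapshot.create VMID ghettoVCB-snapshot "snapshot for backup" 0 0
```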
Defining VMDK(s) to back up from a particular VM, either a comma-separated list of VMDKs or "all":
VMDK_FILES_TO_BACKUP="myvmdk.vmdk"
Defining whether or not VM(s) with existing snapshots can be backed up. This flag means it will CONSOLIDATE ALL EXISTING SNAPSHOTS for a VM prior to starting the backup (1 = yes, 0 = no):
ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP=0
Defining the order in which VM(s) should be shut down first, especially if there is a dependency between multiple VM(s). This should be a comma-separated list of VM(s):
VM_SHUTDOWN_ORDER=vm1,vm2,vm3
Defining the order of VM(s) that should be started up first after backups have completed, especially if there is a dependency between multiple VM(s). This should be a comma-separated list of VM(s):
VM_STARTUP_ORDER=vm3,vm2,vm1
Defining NON-PERSISTENT NFS Backup Volume (1 = yes, 0 = no):
ENABLE_NON_PERSISTENT_NFS=0
NOTE: This is meant for environments that do not want a persisted connection to their NFS backup volume and allows the NFS volume to only be mounted during backups. The script expects the following 5 variables to be defined if this is to be used: UNMOUNT_NFS, NFS_SERVER, NFS_MOUNT, NFS_LOCAL_NAME and NFS_VM_BACKUP_DIR
Defining whether or not to unmount the NFS backup volume (1 = yes, 0 = no):
UNMOUNT_NFS=0
Defining the NFS server address (IP/hostname):
NFS_SERVER=172.51.0.192
Defining the NFS export path:
NFS_MOUNT=/upload
Defining the NFS datastore name:
NFS_LOCAL_NAME=backup
Defining the NFS backup directory for VMs:
NFS_VM_BACKUP_DIR=mybackups
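With the sample values above, the mount and unmount that the script performs are roughly equivalent to the following host commands (illustrative; the script drives this through its own logic):

```
esxcfg-nas -a -o 172.51.0.192 -s /upload backup     # mount the NFS export as datastore "backup"
esxcfg-nas -d backup                                # unmount after the backup completes
```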
NOTE: Only supported if you are running vSphere 4.1 or greater, and this feature is experimental. If you are having issues with sending mail, please take a look at the Email Backup Log section
Defining whether or not to email backup logs (1 = yes, 0 = no):
EMAIL_LOG=1
Defining whether or not the email message is deleted off the host regardless of whether it was sent successfully; this is used for debugging purposes (1 = yes, 0 = no):
EMAIL_DEBUG=1
Defining email server:
EMAIL_SERVER=auroa.primp-industries.com
Defining email server port:
EMAIL_SERVER_PORT=25
Defining the email delay interval (useful if you have a slow SMTP server and would like to include a delay in netcat using the -i param; default is 1 second):
EMAIL_DELAY_INTERVAL=1
Defining recipient of the email:
EMAIL_TO=auroa@primp-industries.com
Defining the from user, which may require a specific domain entry depending on email server configuration:
EMAIL_FROM=root@ghettoVCB
Defining whether to support RSYNC symbolic link creation (1 = yes, 0 = no):
RSYNC_LINK=0
Note: This enables the automatic creation of a generic symbolic link (both a relative & absolute path) that users can refer to when running replication backups using rsync from a remote host. This does not actually add rsync backup support to ghettoVCB. Please take a look at the Rsync section of the documentation for more details.
# cat ghettoVCB.conf
VM_BACKUP_VOLUME=/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=3
POWER_VM_DOWN_BEFORE_BACKUP=0
ENABLE_HARD_POWER_OFF=0
ITER_TO_WAIT_SHUTDOWN=3
POWER_DOWN_TIMEOUT=5
ENABLE_COMPRESSION=0
VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0
ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP=0
ENABLE_NON_PERSISTENT_NFS=0
UNMOUNT_NFS=0
NFS_SERVER=172.30.0.195
NFS_MOUNT=/nfsshare
NFS_LOCAL_NAME=nfs_storage_backup
NFS_VM_BACKUP_DIR=mybackups
SNAPSHOT_TIMEOUT=15
EMAIL_LOG=0
EMAIL_SERVER=auroa.primp-industries.com
EMAIL_SERVER_PORT=25
EMAIL_DELAY_INTERVAL=1
EMAIL_TO=auroa@primp-industries.com
EMAIL_FROM=root@ghettoVCB
WORKDIR_DEBUG=0
VM_SHUTDOWN_ORDER=
VM_STARTUP_ORDER=
To override any existing configurations within the ghettoVCB.sh script and use a global configuration file, the user just needs to specify the -g flag and the path to the global configuration file (for an example, please refer to the sample execution section of the documentation)
Running multiple instances of ghettoVCB is now supported with the latest release by specifying the working directory (-w) flag.
By default, the working directory of a ghettoVCB instance is /tmp/ghettoVCB.work and you can run another instance by providing an alternate working directory. You should try to minimize the number of ghettoVCB instances running on your ESXi host, as each consumes some amount of resources when running in the ESXi Shell. This is considered an experimental feature, so please test in a development environment to ensure everything is working prior to moving to production systems.
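A second, concurrent instance can then be pointed at its own working directory, e.g. (the VM list name here is illustrative):

```
./ghettoVCB.sh -f more_vms_to_backup -w /tmp/ghettoVCB.work2
```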
Ensure that you do not edit past this section:
########################## DO NOT MODIFY PAST THIS LINE ##########################
# ./ghettoVCB.sh
###############################################################################
#
# ghettoVCB for ESX/ESXi 3.5, 4.x+ and 5.x
# Author: William Lam
# http://www.virtuallyghetto.com/
# Documentation: http://communities.vmware.com/docs/DOC-8760
# Created: 11/17/2008
# Last modified: 2012_12_17 Version 0
#
###############################################################################
Usage: ghettoVCB.sh [options]
OPTIONS:
-a Backup all VMs on host
-f List of VMs to backup
-m Name of VM to backup (overrides -f)
-c VM configuration directory for VM backups
-g Path to global ghettoVCB configuration file
-l File to output logging
-w ghettoVCB work directory (default: )
-d Debug level [info|debug|dryrun] (default: info)
(e.g.)
Backup VMs stored in a list
./ghettoVCB.sh -f vms_to_backup
Backup a single VM
./ghettoVCB.sh -m vm_to_backup
Backup all VMs residing on this host
./ghettoVCB.sh -a
Backup all VMs residing on this host except for the VMs in the exclusion list
./ghettoVCB.sh -a -e vm_exclusion_list
Backup VMs based on specific configuration located in directory
./ghettoVCB.sh -f vms_to_backup -c vm_backup_configs
Backup VMs using global ghettoVCB configuration file
./ghettoVCB.sh -f vms_to_backup -g /global/ghettoVCB.conf
Output will log to /tmp/ghettoVCB.log (consider logging to local or remote datastore to persist logs)
./ghettoVCB.sh -f vms_to_backup -l /vmfs/volume/local-storage/ghettoVCB.log
Dry run (no backup will take place)
./ghettoVCB.sh -f vms_to_backup -d dryrun
The input to this script is a file that contains the display names of the virtual machine(s) separated by newlines. When creating this file on a non-Linux/UNIX system, you may introduce ^M characters which can cause the script to misbehave. To ensure this does not occur, please create the file on the ESX/ESXi host.
Here is a sample of what the file would look like:
[root@himalaya ~]# cat vms_to_backup
vCOPS
vMA
vCloudConnector
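If the list was created on Windows anyway, the stray carriage returns can be stripped on the host itself. A small demo (the file name is illustrative; busybox sed on ESXi supports this form, though sed -i may not be available on older builds):

```shell
# Demo: a Windows-created list carries CRLF line endings; strip the \r so the
# script matches VM display names correctly.
printf 'vCOPS\r\nvMA\r\nvCloudConnector\r\n' > vms_to_backup
sed 's/\r$//' vms_to_backup > vms_to_backup.clean && mv vms_to_backup.clean vms_to_backup
```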
Dry Run Mode
Note: The dry run mode (-d dryrun) provides a quick summary of whether a given set of VM(s)/VMDK(s) will be backed up. It provides additional information such as VMs that may have snapshots, VMDK(s) that are configured as independent disks, or other issues that may cause a VM or VMDK not to be backed up.
[root@himalaya ghettoVCB]# ./ghettoVCB.sh -f vms_to_backup -d dryrun
Logging output to "/tmp/ghettoVCB-2011-03-13_15-19-57.log" ...
2011-03-13 15:19:57 -- info: ============================== ghettoVCB LOG START ==============================
2011-03-13 15:19:57 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:19:57 -- info: CONFIG - GHETTOVCB_PID = 30157
2011-03-13 15:19:57 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:19:57 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:19:57 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-19-57
2011-03-13 15:19:57 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:19:57 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:19:57 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:19:57 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2011-03-13 15:19:57 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:19:57 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:19:57 -- info: CONFIG - LOG_LEVEL = dryrun
2011-03-13 15:19:57 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2011-03-13_15-19-57.log
2011-03-13 15:19:57 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:19:57 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:19:57 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2011-03-13 15:19:57 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:19:57 -- info:
2011-03-13 15:19:57 -- dryrun: ###############################################
2011-03-13 15:19:57 -- dryrun: Virtual Machine: scofield
2011-03-13 15:19:57 -- dryrun: VM_ID: 704
2011-03-13 15:19:57 -- dryrun: VMX_PATH: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield.vmx
2011-03-13 15:19:57 -- dryrun: VMX_DIR: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield
2011-03-13 15:19:57 -- dryrun: VMX_CONF: scofield/scofield.vmx
2011-03-13 15:19:57 -- dryrun: VMFS_VOLUME: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:19:57 -- dryrun: VMDK(s):
2011-03-13 15:19:58 -- dryrun: scofield_3.vmdk 3 GB
2011-03-13 15:19:58 -- dryrun: scofield_2.vmdk 2 GB
2011-03-13 15:19:58 -- dryrun: scofield_1.vmdk 1 GB
2011-03-13 15:19:58 -- dryrun: scofield.vmdk 5 GB
2011-03-13 15:19:58 -- dryrun: INDEPENDENT VMDK(s):
2011-03-13 15:19:58 -- dryrun: TOTAL_VM_SIZE_TO_BACKUP: 11 GB
2011-03-13 15:19:58 -- dryrun: ###############################################
2011-03-13 15:19:58 -- dryrun: ###############################################
2011-03-13 15:19:58 -- dryrun: Virtual Machine: vMA
2011-03-13 15:19:58 -- dryrun: VM_ID: 1440
2011-03-13 15:19:58 -- dryrun: VMX_PATH: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vMA/vMA.vmx
2011-03-13 15:19:58 -- dryrun: VMX_DIR: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vMA
2011-03-13 15:19:58 -- dryrun: VMX_CONF: vMA/vMA.vmx
2011-03-13 15:19:58 -- dryrun: VMFS_VOLUME: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:19:58 -- dryrun: VMDK(s):
2011-03-13 15:19:58 -- dryrun: vMA-000002.vmdk 5 GB
2011-03-13 15:19:58 -- dryrun: INDEPENDENT VMDK(s):
2011-03-13 15:19:58 -- dryrun: TOTAL_VM_SIZE_TO_BACKUP: 5 GB
2011-03-13 15:19:58 -- dryrun: Snapshots found for this VM, please commit all snapshots before continuing!
2011-03-13 15:19:58 -- dryrun: THIS VIRTUAL MACHINE WILL NOT BE BACKED UP DUE TO EXISTING SNAPSHOTS!
2011-03-13 15:19:58 -- dryrun: ###############################################
2011-03-13 15:19:58 -- dryrun: ###############################################
2011-03-13 15:19:58 -- dryrun: Virtual Machine: vCloudConnector
2011-03-13 15:19:58 -- dryrun: VM_ID: 2064
2011-03-13 15:19:58 -- dryrun: VMX_PATH: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector/vCloudConnector.vmx
2011-03-13 15:19:58 -- dryrun: VMX_DIR: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector
2011-03-13 15:19:58 -- dryrun: VMX_CONF: vCloudConnector/vCloudConnector.vmx
2011-03-13 15:19:58 -- dryrun: VMFS_VOLUME: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:19:58 -- dryrun: VMDK(s):
2011-03-13 15:19:59 -- dryrun: vCloudConnector.vmdk 3 GB
2011-03-13 15:19:59 -- dryrun: INDEPENDENT VMDK(s):
2011-03-13 15:19:59 -- dryrun: vCloudConnector_1.vmdk 40 GB
2011-03-13 15:19:59 -- dryrun: TOTAL_VM_SIZE_TO_BACKUP: 3 GB
2011-03-13 15:19:59 -- dryrun: Snapshots can not be taken for indepdenent disks!
2011-03-13 15:19:59 -- dryrun: THIS VIRTUAL MACHINE WILL NOT HAVE ALL ITS VMDKS BACKED UP!
2011-03-13 15:19:59 -- dryrun: ###############################################
2011-03-13 15:19:59 -- info: ###### Final status: OK, only a dryrun. ######
2011-03-13 15:19:59 -- info: ============================== ghettoVCB LOG END ================================
In the example above, we have 3 VMs to be backed up: scofield will be backed up in full, vMA will be skipped due to an existing snapshot, and vCloudConnector will be backed up without its independent VMDK.
Note: The debug mode (-d debug) provides more in-depth information about the environment/backup process, including additional storage debugging information about both the source and destination datastores before and after backup. This can be very useful in troubleshooting backups
[root@himalaya ghettoVCB]# ./ghettoVCB.sh -f vms_to_backup -d debug
Logging output to "/tmp/ghettoVCB-2011-03-13_15-27-59.log" ...
2011-03-13 15:27:59 -- info: ============================== ghettoVCB LOG START ==============================
2011-03-13 15:27:59 -- debug: Succesfully acquired lock directory - /tmp/ghettoVCB.lock
2011-03-13 15:27:59 -- debug: HOST VERSION: VMware ESX 4.1.0 build-260247
2011-03-13 15:27:59 -- debug: HOST LEVEL: VMware ESX 4.1.0 GA
2011-03-13 15:27:59 -- debug: HOSTNAME: himalaya.primp-industries.com
2011-03-13 15:27:59 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:27:59 -- info: CONFIG - GHETTOVCB_PID = 31074
2011-03-13 15:27:59 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:27:59 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:27:59 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-27-59
2011-03-13 15:27:59 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:27:59 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:27:59 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:27:59 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2011-03-13 15:27:59 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:27:59 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:27:59 -- info: CONFIG - LOG_LEVEL = debug
2011-03-13 15:27:59 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2011-03-13_15-27-59.log
2011-03-13 15:27:59 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:27:59 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:27:59 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2011-03-13 15:27:59 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:27:59 -- info:
2011-03-13 15:28:01 -- debug: Storage Information before backup:
2011-03-13 15:28:01 -- debug: SRC_DATASTORE: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:28:01 -- debug: SRC_DATASTORE_CAPACITY: 1830.5 GB
2011-03-13 15:28:01 -- debug: SRC_DATASTORE_FREE: 539.4 GB
2011-03-13 15:28:01 -- debug: SRC_DATASTORE_BLOCKSIZE: 4
2011-03-13 15:28:01 -- debug: SRC_DATASTORE_MAX_FILE_SIZE: 1024 GB
2011-03-13 15:28:01 -- debug:
2011-03-13 15:28:01 -- debug: DST_DATASTORE: dlgCore-NFS-bigboi.VM-Backups
2011-03-13 15:28:01 -- debug: DST_DATASTORE_CAPACITY: 1348.4 GB
2011-03-13 15:28:01 -- debug: DST_DATASTORE_FREE: 296.8 GB
2011-03-13 15:28:01 -- debug: DST_DATASTORE_BLOCKSIZE: NA
2011-03-13 15:28:01 -- debug: DST_DATASTORE_MAX_FILE_SIZE: NA
2011-03-13 15:28:01 -- debug:
2011-03-13 15:28:02 -- info: Initiate backup for scofield
2011-03-13 15:28:02 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_3.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/scofield/scofield-2011-03-13_15-27-59/scofield_3.vmdk"
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_3.vmdk'...
Clone: 37% done.
2011-03-13 15:28:04 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_2.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/scofield/scofield-2011-03-13_15-27-59/scofield_2.vmdk"
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_2.vmdk'...
Clone: 85% done.
2011-03-13 15:28:05 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_1.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/scofield/scofield-2011-03-13_15-27-59/scofield_1.vmdk"
2011-03-13 15:28:06 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/scofield/scofield-2011-03-13_15-27-59/scofield.vmdk"
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield.vmdk'...
Clone: 78% done.
2011-03-13 15:29:52 -- info: Backup Duration: 1.83 Minutes
2011-03-13 15:29:52 -- info: Successfully completed backup for scofield!
2011-03-13 15:29:54 -- debug: Storage Information after backup:
2011-03-13 15:29:54 -- debug: SRC_DATASTORE: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:29:54 -- debug: SRC_DATASTORE_CAPACITY: 1830.5 GB
2011-03-13 15:29:54 -- debug: SRC_DATASTORE_FREE: 539.4 GB
2011-03-13 15:29:54 -- debug: SRC_DATASTORE_BLOCKSIZE: 4
2011-03-13 15:29:54 -- debug: SRC_DATASTORE_MAX_FILE_SIZE: 1024 GB
2011-03-13 15:29:54 -- debug:
2011-03-13 15:29:54 -- debug: DST_DATASTORE: dlgCore-NFS-bigboi.VM-Backups
2011-03-13 15:29:54 -- debug: DST_DATASTORE_CAPACITY: 1348.4 GB
2011-03-13 15:29:54 -- debug: DST_DATASTORE_FREE: 296.8 GB
2011-03-13 15:29:54 -- debug: DST_DATASTORE_BLOCKSIZE: NA
2011-03-13 15:29:54 -- debug: DST_DATASTORE_MAX_FILE_SIZE: NA
2011-03-13 15:29:54 -- debug:
2011-03-13 15:29:55 -- debug: Storage Information before backup:
2011-03-13 15:29:55 -- debug: SRC_DATASTORE: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:29:55 -- debug: SRC_DATASTORE_CAPACITY: 1830.5 GB
2011-03-13 15:29:55 -- debug: SRC_DATASTORE_FREE: 539.4 GB
2011-03-13 15:29:55 -- debug: SRC_DATASTORE_BLOCKSIZE: 4
2011-03-13 15:29:55 -- debug: SRC_DATASTORE_MAX_FILE_SIZE: 1024 GB
2011-03-13 15:29:55 -- debug:
2011-03-13 15:29:55 -- debug: DST_DATASTORE: dlgCore-NFS-bigboi.VM-Backups
2011-03-13 15:29:55 -- debug: DST_DATASTORE_CAPACITY: 1348.4 GB
2011-03-13 15:29:55 -- debug: DST_DATASTORE_FREE: 296.8 GB
2011-03-13 15:29:55 -- debug: DST_DATASTORE_BLOCKSIZE: NA
2011-03-13 15:29:55 -- debug: DST_DATASTORE_MAX_FILE_SIZE: NA
2011-03-13 15:29:55 -- debug:
2011-03-13 15:29:55 -- info: Snapshot found for vMA, backup will not take place
2011-03-13 15:29:57 -- debug: Storage Information before backup:
2011-03-13 15:29:57 -- debug: SRC_DATASTORE: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:29:57 -- debug: SRC_DATASTORE_CAPACITY: 1830.5 GB
2011-03-13 15:29:57 -- debug: SRC_DATASTORE_FREE: 539.4 GB
2011-03-13 15:29:57 -- debug: SRC_DATASTORE_BLOCKSIZE: 4
2011-03-13 15:29:57 -- debug: SRC_DATASTORE_MAX_FILE_SIZE: 1024 GB
2011-03-13 15:29:57 -- debug:
2011-03-13 15:29:57 -- debug: DST_DATASTORE: dlgCore-NFS-bigboi.VM-Backups
2011-03-13 15:29:57 -- debug: DST_DATASTORE_CAPACITY: 1348.4 GB
2011-03-13 15:29:57 -- debug: DST_DATASTORE_FREE: 296.8 GB
2011-03-13 15:29:57 -- debug: DST_DATASTORE_BLOCKSIZE: NA
2011-03-13 15:29:57 -- debug: DST_DATASTORE_MAX_FILE_SIZE: NA
2011-03-13 15:29:57 -- debug:
2011-03-13 15:29:58 -- info: Initiate backup for vCloudConnector
2011-03-13 15:29:58 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector/vCloudConnector.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/vCloudConnector/vCloudConnector-2011-03-13_15-27-59/vCloudConnector.vmdk"
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector/vCloudConnector.vmdk'...
Clone: 97% done.
2011-03-13 15:30:45 -- info: Backup Duration: 47 Seconds
2011-03-13 15:30:45 -- info: WARN: vCloudConnector has some Independent VMDKs that can not be backed up!
2011-03-13 15:30:45 -- info: ###### Final status: ERROR: Only some of the VMs backed up, and some disk(s) failed! ######
2011-03-13 15:30:45 -- debug: Succesfully removed lock directory - /tmp/ghettoVCB.lock
2011-03-13 15:30:45 -- info: ============================== ghettoVCB LOG END ================================
[root@himalaya ~]# ./ghettoVCB.sh -f vms_to_backup
# ./ghettoVCB.sh -m MyVM
/ghettoVCB # ./ghettoVCB.sh -a
/ghettoVCB # ./ghettoVCB.sh -a -e vm_exclusion_list
1. Create folder to hold individual VM backup policies (can be named anything):
[root@himalaya ~]# mkdir backup_config
2. Create an individual VM backup policy for each VM, ensuring each file is named exactly as the display name of the VM being backed up (use the provided template to create duplicates):
[root@himalaya backup_config]# cp ghettoVCB-vm_backup_configuration_template scofield
[root@himalaya backup_config]# cp ghettoVCB-vm_backup_configuration_template vCloudConnector
Listing of VM backup policies within the backup configuration directory
[root@himalaya backup_config]# ls
scofield vCloudConnector
ghettoVCB-vm_backup_configuration_template
Backup policy for "scofield" (backup only 2 specific VMDKs)
[root@himalaya backup_config]# cat scofield
VM_BACKUP_VOLUME=/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=3
POWER_VM_DOWN_BEFORE_BACKUP=0
ENABLE_HARD_POWER_OFF=0
ITER_TO_WAIT_SHUTDOWN=4
POWER_DOWN_TIMEOUT=5
SNAPSHOT_TIMEOUT=15
ENABLE_COMPRESSION=0
VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0
VMDK_FILES_TO_BACKUP="scofield_2.vmdk,scofield_1.vmdk"
Backup policy for VM "vCloudConnector" (backup all VMDKs found)
[root@himalaya backup_config]# cat vCloudConnector
VM_BACKUP_VOLUME=/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=3
POWER_VM_DOWN_BEFORE_BACKUP=0
ENABLE_HARD_POWER_OFF=0
ITER_TO_WAIT_SHUTDOWN=4
POWER_DOWN_TIMEOUT=5
SNAPSHOT_TIMEOUT=15
ENABLE_COMPRESSION=0
VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0
VMDK_FILES_TO_BACKUP="vCloudConnector.vmdk"
Note: When specifying the -c option (individual VM backup policy mode), if a VM is listed in the backup list but DOES NOT have a corresponding backup policy, the VM will be backed up using the default configuration found within the ghettoVCB.sh script.
Execution of backup
[root@himalaya ~]# ./ghettoVCB.sh -f vms_to_backup -c backup_config -l /tmp/ghettoVCB.log
2011-03-13 15:40:50 -- info: ============================== ghettoVCB LOG START ==============================
2011-03-13 15:40:51 -- info: CONFIG - USING CONFIGURATION FILE = backup_config//scofield
2011-03-13 15:40:51 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:40:51 -- info: CONFIG - GHETTOVCB_PID = 2967
2011-03-13 15:40:51 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:40:51 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:40:51 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-40-50
2011-03-13 15:40:51 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:40:51 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:40:51 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:40:51 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
2011-03-13 15:40:51 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:40:51 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:40:51 -- info: CONFIG - LOG_LEVEL = info
2011-03-13 15:40:51 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
2011-03-13 15:40:51 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:40:51 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:40:51 -- info: CONFIG - VMDK_FILES_TO_BACKUP = scofield_2.vmdk,scofield_1.vmdk
2011-03-13 15:40:51 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:40:51 -- info:
2011-03-13 15:40:53 -- info: Initiate backup for scofield
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_2.vmdk'...
Clone: 100% done.
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_1.vmdk'...
Clone: 100% done.
2011-03-13 15:40:55 -- info: Backup Duration: 2 Seconds
2011-03-13 15:40:55 -- info: Successfully completed backup for scofield!
2011-03-13 15:40:57 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:40:57 -- info: CONFIG - GHETTOVCB_PID = 2967
2011-03-13 15:40:57 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:40:57 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:40:57 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-40-50
2011-03-13 15:40:57 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:40:57 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:40:57 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:40:57 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2011-03-13 15:40:57 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:40:57 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:40:57 -- info: CONFIG - LOG_LEVEL = info
2011-03-13 15:40:57 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
2011-03-13 15:40:57 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:40:57 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:40:57 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2011-03-13 15:40:57 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:40:57 -- info:
2011-03-13 15:40:59 -- info: Snapshot found for vMA, backup will not take place
2011-03-13 15:40:59 -- info: CONFIG - USING CONFIGURATION FILE = backup_config//vCloudConnector
2011-03-13 15:40:59 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:40:59 -- info: CONFIG - GHETTOVCB_PID = 2967
2011-03-13 15:40:59 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:40:59 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:40:59 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-40-50
2011-03-13 15:40:59 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:40:59 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:40:59 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:40:59 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
2011-03-13 15:40:59 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:40:59 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:40:59 -- info: CONFIG - LOG_LEVEL = info
2011-03-13 15:40:59 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
2011-03-13 15:40:59 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:40:59 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:40:59 -- info: CONFIG - VMDK_FILES_TO_BACKUP = vCloudConnector.vmdk
2011-03-13 15:40:59 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:40:59 -- info:
2011-03-13 15:41:01 -- info: Initiate backup for vCloudConnector
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector/vCloudConnector.vmdk'...
Clone: 100% done.
2011-03-13 15:41:51 -- info: Backup Duration: 50 Seconds
2011-03-13 15:41:51 -- info: WARN: vCloudConnector has some Independent VMDKs that can not be backed up!
2011-03-13 15:41:51 -- info: ###### Final status: ERROR: Only some of the VMs backed up, and some disk(s) failed! ######
2011-03-13 15:41:51 -- info: ============================== ghettoVCB LOG END ================================
Please take a look at FAQ #25 for more details before continuing
To make use of this feature, change the ENABLE_COMPRESSION variable from 0 to 1. Please note: do not mix uncompressed backups with compressed backups. Before enabling and implementing the compressed backup feature, ensure that the directories selected for backups do not contain any backups created with previous versions of ghettoVCB.
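For reference, a minimal sketch of the change described above (the variable name comes from this document; everything else in the script is left at its defaults):

```shell
# In ghettoVCB.sh (or the global config file): enable compressed backups
ENABLE_COMPRESSION=1
```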
The nc (netcat) utility must be present for email support to function. It ships by default with vSphere 4.1 and later; earlier releases of VI 3.5 and vSphere 4.0 do not include it. This feature is listed as experimental because the script uses nc to communicate with the email server, which may not be compatible with all mail servers; it is provided as-is with no guarantees. If you enable this feature, a separate log will be generated alongside the normal logging and used to build the email to the recipient. If for whatever reason the email fails to send, an entry will appear via the normal logging mechanism.
Also note that, due to netcat's limited functionality, the script relies on SMTP pipelining, which is not the ideal way to communicate with an SMTP server. Email from ghettoVCB may not work if your mail server does not support this feature.
You can define an email recipient in the following two ways:
EMAIL_TO=william@virtuallyghetto.com
OR
EMAIL_TO=william@virtuallyghetto.com,tuan@virtuallyghetto.com
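Putting this together, a hedged sketch of the email-related settings in the script's configuration; EMAIL_LOG and EMAIL_TO appear in this document, while the server and port variable names are assumptions about the script's config:

```shell
# Hedged sketch of email settings; the server/port variable names are assumptions
EMAIL_LOG=1                                # enable the experimental email feature
EMAIL_SERVER=172.30.0.107                  # assumption: your SMTP server address
EMAIL_SERVER_PORT=25                       # assumption: SMTP port reachable from the host
EMAIL_TO=william@virtuallyghetto.com,tuan@virtuallyghetto.com
```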
If you are running ESXi 5.1, you will need to create a custom firewall rule to allow your email traffic out (assumed here to be the default port 25). Here are the steps for creating a custom email rule.
Step 1 - Create a file called /etc/vmware/firewall/email.xml which contains the following:
<ConfigRoot>
  <service>
    <id>email</id>
    <rule id="0000">
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>25</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>
Step 2 - Reload the ESXi firewall by running the following ESXCLI command:
~ # esxcli network firewall refresh
Step 3 - Confirm that your email rule has been loaded by running the following ESXCLI command:
~ # esxcli network firewall ruleset list | grep email
email true
Step 4 - Connect to your email server using nc (netcat) by running the following command, specifying the IP address/port of your email server:
~ # nc 172.30.0.107 25
220 mail.primp-industries.com ESMTP Postfix
You should receive a response from your email server; enter Ctrl+C to exit. This custom ESXi firewall rule will not persist after a reboot, so you should create a custom VIB to ensure it persists across a system reboot. Please take a look at this article for the details.
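Since the rule is lost on reboot, one hedged stopgap (besides the custom VIB approach the linked article describes) is to re-create it from a startup script. This fragment assumes you keep a persistent copy of email.xml at a path of your choosing; /store/email.xml here is hypothetical:

```shell
# Fragment for /etc/rc.local.d/local.sh: restore the custom firewall rule at boot
# /store/email.xml is a hypothetical persistent copy of the file from Step 1
cp /store/email.xml /etc/vmware/firewall/email.xml
esxcli network firewall refresh
```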
To make use of this feature, change the RSYNC_LINK variable from 0 to 1. Please note, this is an experimental feature requested by users who rely on rsync to replicate changes from one datastore volume to another. The premise is to have a standardized path that rsync can monitor for changes to replicate to a backup datastore. When this feature is enabled, a symbolic link named "<VMNAME>-symlink" is generated, referencing the latest successful VM backup. You can then watch this symbolic link for changes and replicate them to your backup datastore.
Here is an example of what this would look like:
[root@himalaya ghettoVCB]# ls -la /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/vcma/
total 0
drwxr-xr-x 1 nobody nobody 110 Sep 27 08:08 .
drwxr-xr-x 1 nobody nobody 17 Sep 16 14:01 ..
lrwxrwxrwx 1 nobody nobody 89 Sep 27 08:08 vcma-symlink -> /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/vcma/vcma-2010-09-27_08-07-37
drwxr-xr-x 1 nobody nobody 58 Sep 27 08:04 vcma-2010-09-27_08-04-26
drwxr-xr-x 1 nobody nobody 58 Sep 27 08:06 vcma-2010-09-27_08-05-55
drwxr-xr-x 1 nobody nobody 58 Sep 27 08:08 vcma-2010-09-27_08-07-37
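The symlink maintenance shown above can be sketched as follows; this is a hedged approximation, not the script's actual code, and the paths are illustrative:

```shell
# Simulate refreshing "<VMNAME>-symlink" to point at the newest backup directory
BACKUP_ROOT=$(mktemp -d)                       # stand-in for the backup volume
VM=vcma
LATEST="$BACKUP_ROOT/$VM/$VM-2010-09-27_08-07-37"
mkdir -p "$LATEST"
# ln -sfn replaces any previous link so a watcher (e.g. rsync) always sees the latest
ln -sfn "$LATEST" "$BACKUP_ROOT/$VM/$VM-symlink"
readlink "$BACKUP_ROOT/$VM/$VM-symlink"
```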
FYI - This feature has not been tested, please provide feedback if this does not work as expected.
To recover a VM that has been processed by ghettoVCB, please take a look at this document: Ghetto Tech Preview - ghettoVCB-restore.sh - Restoring VM's backed up from ghettoVCB to ESX(i) 3.5, ...
There may be a situation where you need to stop the ghettoVCB process, and pressing Ctrl+C will only kill the main ghettoVCB process; other spawned processes may still be running and need to be identified and stopped. Below are two scenarios you may encounter and the steps to completely stop all processes related to ghettoVCB.
Step 1 - Press Ctrl+C which will kill off the main ghettoVCB instance
Step 2 - Search for any existing ghettoVCB process by running the following:
# ps -c | grep ghettoVCB | grep -v grep
3360136 3360136 tail tail -f /tmp/ghettoVCB.work/ghettovcb.Cs1M1x
Step 3 - Here we can see a tail command that was spawned by the script. We need to stop this process with the kill command, which accepts the PID (process ID) shown in the first column of the output. In this example, it is 3360136.
# kill -9 3360136
Note: Make sure you identify the correct PID, or you could accidentally impact a running VM or, worse, your ESXi host.
Step 4 - Depending on where you stopped the ghettoVCB process, you may need to consolidate or remove any existing snapshots that may exist on the VM that was being backed up. You can easily do so by using the vSphere Client.
Step 1 - Search for the ghettoVCB process (you can also validate the PID from the logs)
~ # ps -c | grep ghettoVCB | grep -v grep
3360393 3360393 busybox ash ./ghettoVCB.sh -f list -d debug
3360790 3360790 tail tail -f /tmp/ghettoVCB.work/ghettovcb.deGeB7
Step 2 - Stop both the main ghettoVCB instance and the tail command by using the kill command with their respective PIDs:
kill -9 3360393
kill -9 3360790
Step 3 - If a VM was in the process of being backed up, there is an additional process for the actual vmkfstools copy. You will need to identify and kill that process as well. We will again use the ps -c command and search for any running vmkfstools:
# ps -c | grep vmkfstools | grep -v grep
3360796 3360796 vmkfstools /sbin/vmkfstools -i /vmfs/volumes/himalaya-temporary/VC-Windows/VC-Windows.vmdk -a lsilogic -d thin /vmfs/volumes/test-dont-use-this-volume/backups/VC-Windows/VC-Windows-2013-01-26_16-45-35/VC-Windows.vmdk
Step 4 - In case someone is manually running vmkfstools, inspect the command itself and confirm it maps back to the VM that was being backed up before killing the process. Once you have identified the proper PID, go ahead and use the kill command:
# kill -9 3360796
Step 5 - Depending on where you stopped the ghettoVCB process, you may need to consolidate or remove any existing snapshots that may exist on the VM that was being backed up. You can easily do so by using the vSphere Client.
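In both scenarios, the value you pass to kill is the first column of the ps -c output. A small sketch of pulling those PIDs out with awk, using the sample output from this section:

```shell
# Extract the first column (the PID) from each matching ps -c line
ps_output='3360393 3360393 busybox ash ./ghettoVCB.sh -f list -d debug
3360790 3360790 tail tail -f /tmp/ghettoVCB.work/ghettovcb.deGeB7'
pids=$(echo "$ps_output" | awk '{print $1}')
echo $pids
```

On a live host you would feed `ps -c | grep ghettoVCB | grep -v grep` into awk instead of the canned sample, and review each PID before killing it.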
Please take a moment to read over what a cronjob is and how to set one up before continuing.
The task of configuring cronjobs on classic ESX servers (with a Service Console) is no different from traditional cronjobs on *nix operating systems (this procedure is outlined in the link above). With ESXi, on the other hand, additional factors need to be taken into account when setting up cronjobs in the limited Busybox shell, because changes made there do not persist through a system reboot. The following section outlines the steps to ensure that cronjob configurations are saved and present after a reboot.
Important note: Always redirect the ghettoVCB output to /dev/null and/or to a log file when automating via cron. This is important because one user identified a limited buffer capacity which, once filled, may cause ghettoVCB to stop in the middle of a backup. This primarily affects users on ESXi, but it is good practice to always redirect the output. Also ensure you specify the FULL PATH when referencing the ghettoVCB script, input files, or log files.
e.g.
0 0 * * 1-5 /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB.sh -f /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/backuplist > /dev/null
or
0 0 * * 1-5 /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB.sh -f /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/backuplist > /tmp/ghettoVCB.log
Task: Configure ghettoVCB.sh to execute a backup five days a week (M-F) at 12AM (midnight) and send output to a unique log file
Configure on ESX:
1. As root, you'll install your cronjob by issuing:
[root@himalaya ~]# crontab -e
2. Append the following entry:
0 0 * * 1-5 /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB.sh -f /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/backuplist > /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB-backup-$(date +\%s).log
3. Save and exit
[root@himalaya dlgCore-NFS-bigboi.VM-Backups]# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab
4. List out and verify the cronjob that was just created:
[root@himalaya dlgCore-NFS-bigboi.VM-Backups]# crontab -l
0 0 * * 1-5 /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB.sh -f /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/backuplist > /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB-backup-$(date +\%s).log
You're ready to go!
Configure on ESXi:
1. Setup the cronjob by appending the following line to /var/spool/cron/crontabs/root:
0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-$(date +\%s).log
If you are unable to edit/modify /var/spool/cron/crontabs/root, please make a copy and then edit the copy with the changes
cp /var/spool/cron/crontabs/root /var/spool/cron/crontabs/root.backup
Once your changes have been made, "mv" the edited copy over the original file. This may be necessary on ESXi 4.x or 5.x hosts:
mv /var/spool/cron/crontabs/root.backup /var/spool/cron/crontabs/root
You can now verify the crontab entry has been updated by using the "cat" utility.
2. Kill the current crond (cron daemon) and then restart crond for the changes to take effect:
On ESXi < 3.5u3
kill $(ps | grep crond | cut -f 1 -d ' ')
On ESXi 3.5u3+
~ # kill $(pidof crond)
~ # crond
On ESXi 4.x/5.0
~ # kill $(cat /var/run/crond.pid)
~ # busybox crond
On ESXi 5.1 to 6.x
~ # kill $(cat /var/run/crond.pid)
~ # crond
On ESXi 7.x
~ # kill $(cat /var/run/crond.pid)
~ # /usr/lib/vmware/busybox/bin/busybox crond
3. Now that the cronjob is ready, you need to ensure it persists through a reboot by adding the following lines to /etc/rc.local (make sure the cron entry matches what was defined above). On ESXi 5.1 and later, edit /etc/rc.local.d/local.sh instead, as /etc/rc.local is no longer used.
On ESXi 3.5
/bin/kill $(pidof crond)
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
crond
On ESXi 4.x/5.0
/bin/kill $(cat /var/run/crond.pid)
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
/bin/busybox crond
On ESXi 5.1 to 6.x
/bin/kill $(cat /var/run/crond.pid)
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
crond
On ESXi 7.x
/bin/kill $(cat /var/run/crond.pid) > /dev/null 2>&1
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
/usr/lib/vmware/busybox/bin/busybox crond
Afterwards the file should look like the following:
~ # cat /etc/rc.local
#! /bin/ash
export PATH=/sbin:/bin
log() {
echo "$1"
logger init "$1"
}
#execute all service retgistered in /etc/rc.local.d
if [ -d /etc/rc.local.d ]; then
for filename in `find /etc/rc.local.d/ | sort`
do
if [ -f $filename ] && [ -x $filename ]; then
log "running $filename"
$filename
fi
done
fi
/bin/kill $(cat /var/run/crond.pid)
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
/bin/busybox crond
This will ensure that the cronjob is re-created upon a reboot of the system through a startup script.
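The escaping in the echo lines above is easy to get wrong: inside double quotes, \$ stops the shell from expanding $(date ...) when the startup script runs, and \\% becomes \% in the crontab entry, which crond later unescapes to % (an unescaped % is treated by cron as a line separator). A quick way to check that the entry lands correctly is to write it to a temp file instead of the real crontab:

```shell
# Append the entry to a scratch file and inspect what actually got written
crontab_copy=$(mktemp)
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> "$crontab_copy"
cat "$crontab_copy"
```

The written line should end with ghettoVCB-backup-$(date +\%s).log; if the % were left unescaped, cron would silently truncate the command at that character.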
4. To ensure that this is saved in the ESXi configuration, manually initiate an ESXi configuration backup by running:
~ # /sbin/auto-backup.sh
config implicitly loaded
local.tgz
etc/vmware/vmkiscsid/vmkiscsid.db
etc/dropbear/dropbear_dss_host_key
etc/dropbear/dropbear_rsa_host_key
etc/opt/vmware/vpxa/vpxa.cfg
etc/opt/vmware/vpxa/dasConfig.xml
etc/sysconfig/network
etc/vmware/hostd/authorization.xml
etc/vmware/hostd/hostsvc.xml
etc/vmware/hostd/pools.xml
etc/vmware/hostd/vmAutoStart.xml
etc/vmware/hostd/vmInventory.xml
etc/vmware/hostd/proxy.xml
etc/vmware/ssl/rui.crt
etc/vmware/ssl/rui.key
etc/vmware/vmkiscsid/initiatorname.iscsi
etc/vmware/vmkiscsid/iscsid.conf
etc/vmware/vmware.lic
etc/vmware/config
etc/vmware/dvsdata.db
etc/vmware/esx.conf
etc/vmware/license.cfg
etc/vmware/locker.conf
etc/vmware/snmp.xml
etc/group
etc/hosts
etc/inetd.conf
etc/rc.local
etc/chkconfig.db
etc/ntp.conf
etc/passwd
etc/random-seed
etc/resolv.conf
etc/shadow
etc/sfcb/repository/root/interop/cim_indicationfilter.idx
etc/sfcb/repository/root/interop/cim_indicationhandlercimxml.idx
etc/sfcb/repository/root/interop/cim_listenerdestinationcimxml.idx
etc/sfcb/repository/root/interop/cim_indicationsubscription.idx
Binary files /etc/vmware/dvsdata.db and /tmp/auto-backup.31345.dir/etc/vmware/dvsdata.db differ
config implicitly loaded
Saving current state in /bootbank
Clock updated.
Time: 20:40:36 Date: 08/14/2009 UTC
Now you're really done!
If you're still having trouble getting the cronjob to work, ensure that you've specified the correct parameters and there aren’t any typos in any part of the syntax.
Ensure crond (cron daemon) is running:
ESX 3.x/4.0:
[root@himalaya dlgCore-NFS-bigboi.VM-Backups]# ps -ef | grep crond | grep -v grep
root 2625 1 0 Aug13 ? 00:00:00 crond
ESXi 3.x/4.x/5.x:
~ # ps | grep crond | grep -v grep
5196 5196 busybox crond
Ensure that the date/time on your ESX(i) host is setup correctly:
ESX(i):
[root@himalaya dlgCore-NFS-bigboi.VM-Backups]# date
Fri Aug 14 23:44:47 PDT 2009
Note: Careful attention must be noted if more than one backup is performed per day. Backup windows should be staggered to avoid contention or saturation of resources during these periods.
0Q: I'm getting error X when using the script, or I'm not getting any errors and the backup didn't even take place. What can I do?
0A: Before posting a comment/question, please thoroughly read through the ENTIRE documentation, including the FAQs, to see if your question has already been answered.
1Q: I've read through the entire documentation + FAQs and still have not found my answer to the problem I'm seeing. What can I do?
1A: Please join the ghettoVCB Group to post your question/comment.
2Q: I've sent you a private message or email but haven't received a response? What gives?
2A: I do not accept issues/bugs reported via PM or email; I will reply back directing you to post on the appropriate VMTN forum (that's what it's for). If the data/results you're providing are truly sensitive to your environment, I will hear you out, but 99.99% of the time they are not, so please do not message/email me directly. I monitor all forums that contain my script, including the normal VMTN forums, and will try to get back to your question as time permits. Please be patient, as you're not the only person using the script (600,000+ views). Thank you.
3Q: Can I schedule backups to take place hourly, daily, monthly, yearly?
3A: Yes, do a search online for crontab.
4Q: I would like to setup cronjob for ESX(i) 3.5 or 4.0?
4A: Take a look at the Cronjob FAQ section in this document.
5Q: I want to schedule my backup on Windows, how do I do this?
5A: Do a search for plink. Make sure you have paired SSH keys setup between your Windows system and ESX/ESXi host.
6Q: I only have a single ESXi host. I want to take backups and store them somewhere else. The problem is: I don't have NFS, iSCSI nor FC SAN. What can I do?
6A: You can use local storage to store your backups assuming that you have enough space on the destination datastore. Afterwards, you can use scp (WinSCP/FastSCP) to transfer the backups from the ESXi host to your local desktop.
7Q: I’m pissed; the backup is taking too long. My datastore is of type X?
7A: YMMV, take a look at your storage configuration and make sure it is optimized.
8Q: I noticed that the backup rotation is occurring after a backup. I don't have enough local storage space, can the process be changed?
8A: This is primarily done to ensure that you have at least one good backup in case the new backup fails. If you would like to modify the script, you're more than welcome to do so.
9Q: What is the best storage configuration for datastore type X?
9A: Search the VMTN forums; there are various configurations for the different type of storage/etc.
10Q: I want to setup an NFS server to run my backups. Which is the best and should it be virtual or physical?
10A: Please refer to answer 7A. From experience, we've seen physical NFS servers perform faster than their virtual counterparts. As always, YMMV.
11Q: I have VMs that have snapshots. I want to back these things up but the script doesn’t let me do it. How do I fix that?
11A: VM snapshots are not meant to be kept for long durations. When backing up a VM that contains a snapshot, you should ensure all snapshots have been committed prior to running a backup. No exceptions will be made…ever.
12Q: I would like to restore from backup, what is the best method?
12A: The restore process will be unique for each environment and should be determined by your backup/recovery plans. At a high level, you have the option of mounting the backup datastore and registering the VM in question, or copying the VM from the backup datastore to the ESX/ESXi host. The latter is recommended so that you're not running a VM that lives on the backup datastore or inadvertently modifying your backup VM(s). You can also take a look at ghettoVCB-restore, which is experimentally supported.
13Q: When I try to run the script I get: "-bash: ./ghettoVCB.sh: Permission denied", what is wrong?
13A: You need to change the permission on the script to be executable, chmod +x ghettoVCB.sh
14Q: Where can I download the latest version of the script?
14A: The latest version is available on github - https://github.com/lamw/ghettoVCB/downloads
15Q: I would like to suggest/recommend feature X, can I get it? When can I get it? Why isn't it here, what gives?
15A: The general purpose of this script is to provide a backup solution around VMware VMs. Any additional features outside of that process will be taken into consideration depending on the amount of time, number of requests and actual usefulness as a whole to the community rather than to an individual.
16Q: I have found this script to be very useful and would like to contribute back, what can I do?
16A: To continue to develop and share new scripts and resources with the community, we need your support. You can donate here. Thank you!
17Q: What are the different backup use cases supported by ghettoVCB?
17A: 1) Live backup of VM with the use of a snapshot and 2) Offline backup of a VM without a snapshot. These are the only two use cases supported by the script.
18Q: When I execute the script on ESX(i) I get some funky errors such as ": not found.sh" or "command not found". What is this?
18A: Most likely there are some ^M (carriage return) characters in the script, which may have come from editing the script with a Windows editor, uploading it via the datastore browser, OR using wget. The best option is either to upload the script with WinSCP on Windows and edit it using the vi editor on the ESX(i) host, OR to copy the script onto the host with Linux/UNIX scp. If you continue to have the issue, search online for the various methods of removing these Windows carriage returns from the script.
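One hedged way to strip those carriage returns directly on the host is tr, which is available in Busybox (dos2unix typically is not); the file names here are illustrative:

```shell
# Simulate a script saved with DOS (CRLF) line endings, then strip the \r bytes
crlf_script=$(mktemp)
printf 'echo hi\r\n' > "$crlf_script"          # 9 bytes, ends with \r\n
tr -d '\r' < "$crlf_script" > "${crlf_script}.clean"
wc -c < "${crlf_script}.clean"                 # 8 bytes once the \r is gone
```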
19Q: My backup works fine OR it works for a single backup but I get an error message "Input/output error" or "-ash: YYYY-MM-DD: not found" during the snapshot removal process. What is this?
19A: The issue has been identified by a few users as a problem with the user's NFS server, which reports an error when deleting large files that take longer than 10 seconds. VMware has released a KB article http://kb.vmware.com/kb/1035332 explaining the details, and starting with vSphere 4.1 Update 2 and vSphere 5.0, a new advanced ESX(i) parameter has been introduced to increase the timeout. This has resolved the problem for several users and may be something to consider if you run into this issue, specifically with NFS-based backups.
20Q: Will this script function with vCenter and DRS enabled?
20A: No. If the ESX(i) hosts are in a DRS-enabled cluster, VMs could potentially be backed up twice or never get backed up at all. The script is executed on a per-host basis, so one would need to come up with a way of tracking backups across all hosts, perhaps writing out to an external file, to ensure that all VMs are backed up. The main use case for this script is standalone ESX(i) hosts.
21Q: I'm trying to use WinSCP to manually copy VM files but it's very slow or never completes on huge files, why is that?
21A: WinSCP was not designed for copying VM files out of your ESX(i) host, take a look at Veeam's FastSCP which is designed for moving VM files and is a free utility.
22Q: Can I set up an NFS server using Windows Services for UNIX (WSFU), and will it work?
22A: I've only heard of a handful of users that have successfully implemented WSFU and gotten it working; YMMV. VMware has a KB article describing the setup process here: http://kb.vmware.com/kb/1004490 for those that are interested. Here is a thread on a user's experience comparing Windows vs. Linux NFS that may be helpful.
23Q: How do VMware Snapshots work?
23A: http://kb.vmware.com/kb/1015180
24Q: What files make up a Virtual Machine?
24A: http://virtualisedreality.wordpress.com/2009/09/16/quick-reminder-of-what-files-make-up-a-virtual-ma...
25Q: I'm having some issues restoring a compressed VM backup?
25A: There is a limitation on the size of VM that can be compressed under ESXi 3.x & 4.x; it lies in the unsupported Busybox console and does not affect classic ESX 3.x/4.x. On ESXi 3.x, the largest supported VM for compression is 4GB; on ESXi 4.x it is 8GB. If you try to compress a larger VM, you may run into issues when extracting it during a restore. PLEASE TEST THE RESTORE PROCESS BEFORE MOVING TO PRODUCTION SYSTEMS!
26Q: I'm backing up my VM as "thin" format but I'm still not noticing any size reduction in the backup? What gives?
26A: Please refer to this blog post, which explains what's going on: http://www.yellow-bricks.com/2009/07/31/storage-vmotion-and-moving-to-a-thin-provisioned-disk/
27Q: I've enabled VM_SNAPSHOT_MEMORY, and when I restore my VM it's still offline. I thought this would keep its memory state?
27A: VM_SNAPSHOT_MEMORY only ensures that the VM's memory contents are captured when the snapshot is taken. This is only relevant to the snapshot itself and is not used in any shape or form in the backup. Whether your VM is running or offline when backed up, you will get an offline VM when you restore. This option was originally added for debugging purposes and should generally be left disabled.
28Q: Can I rename the directories and the VMs after a VM has been backed up?
28A: The answer is yes, you can... but you may run into all sorts of issues which may break the backup process. The script expects a certain layout and a specific naming scheme to maintain the proper rotation count. If you need to move or rename a VM, please take it out of the directory and place it in another location.
29Q: Can ghettoVCB support CBT (Change Block Tracking)?
29A: No, that is functionality of the vSphere API + VDDK (vSphere Disk Development Kit). You will need to look at paid solutions such as VMware vDR, Veeam Backup & Replication, PHD Virtual Backup, etc. to leverage that functionality.
30Q: Does ghettoVCB support rsync backups?
30A: Currently ghettoVCB does not support rsync backups. You can obtain or compile your own static rsync binary and run it on ESXi, but this is an unsupported configuration. You may take a look at this blog post for some details.
31Q: How can I contribute back?
31A: You can provide feedback/comments on the ghettoVCB Group. If you have found this script to be useful and would like to contribute back, please click here to donate.
32Q: How can I select individual VMDKs to back up from a VM?
32A: Ideally you would use the "-c" option, which requires you to create an individual VM configuration file; this is where you would select specific VMDKs to back up. Note that you do not need to define all properties: anything not defined will inherit from the default global properties, whether you're editing the ghettoVCB.sh script or using the ghettoVCB global configuration file. It is not recommended to edit the ghettoVCB.sh script and modify the VMDK_FILES_TO_BACKUP variable directly; if you would like to keep everything in one script, you may add an extensive list of VMDKs to back up, but this can get error-prone, as the script may be edited frequently, and it loses some flexibility for supporting multiple environments.
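A hedged sketch of what such a per-VM configuration file might contain, mirroring the vCloudConnector example in the log output earlier (any property left out falls back to the global defaults):

```shell
# backup_config/vCloudConnector -- override only what differs from the globals
VMDK_FILES_TO_BACKUP="vCloudConnector.vmdk"
VM_BACKUP_ROTATION_COUNT=3
```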
33Q: Why is email not working when I'm using ESXi 5.x but it worked in ESXi 4.x?
33A: ESXi 5.x has implemented a new firewall which requires the email port that is being used to be opened. Please refer to the following articles on creating a custom firewall rule for email:
http://www.virtuallyghetto.com/2012/09/creating-custom-vibs-for-esxi-50-51.html
How to Create Custom Firewall Rules in ESXi 5.0
How to Persist Configuration Changes in ESXi 4.x/5.x Part 1
How to Persist Configuration Changes in ESXi 4.x/5.x Part 2
34Q: How do I stop the ghettoVCB process?
34A: Take a look at the Stopping ghettoVCB Process section of the documentation for more details.
Many have asked what the best configuration and recommendation is for setting up a cheap NFS server to run backups for VMs. This is a question we've tried to stay away from, because the possibilities and solutions are endless: physical vs. virtual, VSAs (Virtual Storage Appliances) such as OpenFiler or LeftHand Networks, Windows vs. Linux/UNIX. We've not personally tested and verified all of these solutions, and the answer generally comes down to "it depends". From our experience, though, we've had much better success with a physical server than a virtual one.
It is also well known that some users experience backup issues when running specifically against NFS, primarily around the rotation and purging of previous backups. The theory, from what we can tell by talking to various users, is that when the rotation occurs, the request to delete the file(s) may take a while and does not return within a certain time frame, causing the script to error out with unexpected messages. Although the backups were successful, this causes unexpected directory structures on the NFS target. We've not been able to isolate why this occurs; it may be due to the NFS configuration/exports, the hardware, or a connection unable to keep up with this process.
We'll continue to help where we can in diagnosing this issue, but we wanted to share our current NFS configuration; perhaps it may help users who are new or setting up their systems. (Disclaimer: these configurations are not recommendations nor an endorsement of any of the components used.)
UPDATE: Please also read FAQ #19 for details + resolution
Server Type: Physical
Model: HP DL320 G2
OS: Arch Linux 2.6.28
Disks: 2 x 1.5TB
RAID: Software RAID1
Source Host Backups: ESX 3.5u4 and ESX 4.0u1 (We don't run any ESXi hosts)
uname -a output
Linux XXXXX.XXXXX.ucsb.edu 2.6.28-ARCH #1 SMP PREEMPT Sun Jan 18 20:17:17 UTC 2009 i686 Intel(R) Pentium(R) 4 CPU 3.06GHz GenuineIntel GNU/Linux
NICs:
00:05.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5702X Gigabit Ethernet (rev 02)
00:06.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5702X Gigabit Ethernet (rev 02)
NFS Export Options:
/exports/vm-backups XXX.XXX.XXX.XXX/24(rw,async,all_squash,anonuid=99,anongid=99)
*One important thing is to verify that your NFS export options are set up correctly: "async" should be configured so that the server processes IO requests and replies to the client without waiting for the data to be written to storage.
*VMware recently released a KB article describing the various "Advanced NFS Options", their meanings, and recommendations: http://kb.vmware.com/kb/1007909 We've not personally had to touch any of these, but for vendors such as EMC and NetApp there are best practices around configuring some of these values depending on the number of NFS volumes or the number of ESX(i) hosts connecting to a volume. You may want to take a look to see whether any of these options help with the NFS issue some users are seeing.
*Users should also look at their ESX(i) host logs during the time interval when they notice these issues and see if they can find any correlation, along with monitoring the performance of their NFS server.
*Lastly, there are probably other things that can be done to improve NFS performance or optimize further; a simple search online will yield many resources.
Windows utility to email ghettoVCB Backup Logs - http://www.waldrondigital.com/2010/05/11/ghettovcb-e-mail-rotate-logs-batch-file-for-vmware/
Windows front-end utility to ghettoVCB - http://www.magikmon.com/mkbackup/ghettovcb.en.html
Note: Neither of these tools is supported; for questions or comments regarding these utilities, please refer to the authors' pages.
Big thanks to Alain Spineux for his contributions to the ghettoVCB script and for helping with debugging and testing.
Big thanks goes out to the community for the suggested features and to those who submitted snippets of their modifications.
Updated FAQ #20-24 for common issues/questions. Also included a new section about our "personal" NFS configuration and setup.
Fix the crontab section to reflect the correct syntax + updated FAQ #17,#18 and #19 for common issues.
The following enhancements and fixes have been implemented in this release of ghettoVCB. Special thanks goes out to all the ghettoVCB BETA testers for providing time and their environments to test features/fixes of the new script!
Hi
vmlist just has one server in it called londsbs01.
{/vmfs/volumes/4b4f58b6-4836d164-7575-f4ce46ae9e83/Backups # ls
ghettoVCB.sh vmlist
/vmfs/volumes/4b4f58b6-4836d164-7575-f4ce46ae9e83/Backups # ./ghettoVCB.sh -f vmlist -d
###############################################################################
#
ghettoVCB for ESX/ESXi 3.5 & 4.x+
Author: William Lam
Created: 11/17/2008
Last modified: 11/14/2009
#
###############################################################################
Usage: ./ghettoVCB.sh -f -c -l
OPTIONS:
-f List of VMs to backup
-c Configuration directory for VM backups
-l File to output logging
-d Debug level info (default: info)
(e.g.)
Backup VMs stored in a list
./ghettoVCB.sh -f vms_to_backup
Backup VMs based on specific configuration located in directory
./ghettoVCB.sh -f vms_to_backup -c vm_backup_configs
Output will log to /tmp/ghettoVCB.log
./ghettoVCB.sh -f vms_to_backup -l /tmp/ghettoVCB.log
Dry run (no backup will take place)
./ghettoVCB.sh -f vms_to_backup -d dryrun
/vmfs/volumes/4b4f58b6-4836d164-7575-f4ce46ae9e83/Backups #
}
Hi William, hi all,
I just found out, that I have the same problem as bonibaz.
The backup runs fine, all VMDKs are cloned, and then the script stops after removing the snapshot.
This is a big problem since that is the point when the old backup should be deleted.
So my NAS is filling up with old backups.
2010-01-21 06:40:23 -- info: Removing snapshot from stoaut01 ...
2010-01-22 00:00:01 -- info: ============================== ghettoVCB LOG START ==============================
2010-01-22 00:00:02 -- info: CONFIG - USING CONFIGURATION FILE = /vmfs/volumes/nas51_backup/gconf//svaut01
2010-01-22 00:00:02 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas51_backup/sicherung
2010-01-22 00:00:02 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
2010-01-22 00:00:02 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2010-01-22 00:00:02 -- info: CONFIG - ADAPTER_FORMAT = lsilogic
2010-01-22 00:00:02 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2010-01-22 00:00:02 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2010-01-22 00:00:02 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
2010-01-22 00:00:02 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2010-01-22 00:00:02 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2010-01-22 00:00:02 -- info: CONFIG - LOG_LEVEL = info
2010-01-22 00:00:02 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettolog.log
2010-01-22 00:00:02 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2010-01-22 00:00:02 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2010-01-22 00:00:02 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2010-01-22 00:00:04 -- info: Initiate backup for svaut01
2010-01-22 00:00:04 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-01-22" for svaut01
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/datastore1/svaut01/svaut01_1.vmdk'...
Clone: 100% done.
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/datastore1/svaut01/svaut01.vmdk'...
Clone: 100% done.
2010-01-22 02:15:48 -- info: Removing snapshot from svaut01 ...
2010-01-22 02:17:08 -- info: Backup Duration: 137.07 Minutes
2010-01-22 02:17:08 -- info: Successfully completed backup for svaut01!
2010-01-22 02:17:08 -- info: CONFIG - USING CONFIGURATION FILE = /vmfs/volumes/nas51_backup/gconf//stoaut01
2010-01-22 02:17:08 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas51_backup/sicherung
2010-01-22 02:17:08 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
2010-01-22 02:17:08 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2010-01-22 02:17:08 -- info: CONFIG - ADAPTER_FORMAT = buslogic
2010-01-22 02:17:09 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2010-01-22 02:17:09 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2010-01-22 02:17:09 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
2010-01-22 02:17:09 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2010-01-22 02:17:09 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2010-01-22 02:17:09 -- info: CONFIG - LOG_LEVEL = info
2010-01-22 02:17:09 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettolog.log
2010-01-22 02:17:09 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2010-01-22 02:17:09 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2010-01-22 02:17:09 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2010-01-22 02:17:10 -- info: Initiate backup for stoaut01
2010-01-22 02:17:10 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-01-22" for stoaut01
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/datastore1/stoaut01/stoaut01_1.vmdk'...
Clone: 100% done.
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/datastore1/stoaut01/stoaut01.vmdk'...
Clone: 100% done.
2010-01-22 05:25:46 -- info: Removing snapshot from stoaut01 ...
~ #
I have that problem on 2 out of 3 ESXi 4.0
This is the first I've heard of another case. Please provide a debug log and attach it as a text file to make it easier to read. Also, can you provide the build numbers of the 3 ESXi 4.0 hosts? Is there something different between the 2 that have issues and the one that does not? Does this occur if you back up to a local datastore that's NOT NFS?
=========================================================================
William Lam
VMware vExpert 2009
VMware ESX/ESXi scripts and resources at: http://engineering.ucsb.edu/~duonglt/vmware/
VMware Code Central - Scripts/Sample code for Developers and Administrators
If you find this information useful, please award points for "correct" or "helpful".
Hello
I assume from my reading above that this will work with an ESXi 3.5u4 host with a free license?
Will ghettoVCB.sh successfully back up and restore my 2 VMs (MS Server 2008 with SQL 2005, and MS Server 2003 with Altiris)? I know... small potatoes compared to many of you.
This VM newbie would truly appreciate any advice you can throw my way.
Thanks!
Yes to ESXi 3.5u4
Yes to backup
No to restore (there is another script you can utilize, if you so choose, covered in the documentation)
Again, all of this is documented above; please spend some time thoroughly going through the documentation, as it'll answer all your questions.
Was the Syntax Error on line 735 ever solved?
Could you jog my memory on what the issue was? I would say the easiest way is to download the latest version, see if you still run into this "syntax error", and let me know what you see.
Thanks
Sure,
-
Destination disk format: sparse with 2GB maximum extent size
Cloning disk '/vmfs/volumes/SATA1/server01/server01.vmdk'...
Clone: 100% done.
2010-01-26 12:55:42 -- info: Removing snapshot from server01 ...
sh: /vmfs/volumes/backup-nfs//server01/server01-2010-01-25: bad number
ghettoVCB.sh: line 735: syntax error: /vmfs/volumes/backup-nfs//server01/server01-2010-01-25+1
-
output of uname -a:
VMkernel wmware02 4.0.0 #1 SMP Release build-171294 Jun 11 2009 12:44:08 x86_64 unknown
I'm having the same "Bad Number" problem. Here's a snapshot of the script running:
-
Logging output to "/tmp/vmback.log" ...
Destination disk format: VMFS zeroedthick
Cloning disk '/vmfs/volumes/datastore1/Mail Server/Mail Server.vmdk'...
Clone: 100% done.
sh: /vmfs/volumes/Backups/Mail Server/Mail Server-2010-01-22: bad number
./ghettoVCB.sh: line 735: syntax error: /vmfs/volumes/Backups/Mail Server/Mail Server-2010-01-22+1
/vmfs/volumes/4b56de2d-e839ce49-7300-0024e86a3a66/scripts # uname -a
VMkernel diamond.nei.local 4.0.0 #1 SMP Release build-208167 Nov 8 2009 01:02:11 x86_64 unknown
-
Kevin L. Collins, MCSE
Systems Manager
Nesbitt Engineering, Inc.
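For what it's worth, "bad number" is the message busybox ash emits when a numeric test or arithmetic expansion receives a non-numeric string, which fits a backup path ending up where a rotation counter is expected. A hypothetical illustration (the variable names are invented for the example, not taken from the script):

```shell
backup_dir="/vmfs/volumes/backup-nfs/server01/server01-2010-01-25"

# Using the whole path where a number is expected fails
# ("bad number" in busybox ash, "integer expression expected" in bash):
[ "$backup_dir" -gt 0 ] 2>/dev/null || echo "numeric test on a whole path fails"

# Extracting the numeric field first keeps the arithmetic valid:
day=${backup_dir##*-}     # text after the last "-", i.e. "25"
next=$((day + 1))
echo "next rotation suffix: $next"
```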
Hello
Has anyone using these scripts to back up and restore applied Update 5 to ESXi 3.5? Or is this not an issue I need to be concerned about?
Thank you
Hi William,
I finally managed to enable debugging on wednesday and got my hands on the logs this morning.
As always, when you want to prove something, you can't replicate the error :smileygrin:
My backups ran flawlessly over the last two days.
I think the problem isn't ESXi itself, since both ESXi hosts have different patch levels but behave the same.
The problem is more likely the NAS.
I noticed timeouts when I deleted the leftover backups on Wednesday.
Something like
$ rm -f *
(no response for 2min because of files being deleted)
(some error messages about input/output errors)
And then the NFS connection broke and the directory wasn't mounted anymore.
It then takes 1-2 minutes until NFS works again.
Deleting works up to the point of the timeout, so you have to try 5 or 6 times until all files are deleted.
I never noticed that on Sundays when I back up to another NAS.
So I compared both NAS units and saw that the faulty one had support for jumbo frames switched on.
So I disabled it.
Maybe that was the problem.
bonibaz: Can you test that on your NAS, too?
Try deleting a big file and see if you run into timeouts, too.
Btw, I use Thecus N5200 NAS systems, each with 5 WD disks in RAID10 + spare.
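To reproduce the timeout described above, one approach is to drop a large file on the NFS-backed datastore and time its removal. This sketch defaults to a temp directory so it is safe to dry-run; point BACKUP_MOUNT at the real datastore (the path in the comment is only an example) and raise the size to a few GB for a realistic test:

```shell
# Point this at the NFS datastore, e.g. /vmfs/volumes/backup-nfs
BACKUP_MOUNT=${BACKUP_MOUNT:-$(mktemp -d)}

# Write a test file (64 MB here; use several GB to mimic a real backup):
dd if=/dev/zero of="$BACKUP_MOUNT/deltest.bin" bs=1M count=64 2>/dev/null

# Time the delete; multi-minute stalls or input/output errors point at the NAS:
start=$(date +%s)
rm -f "$BACKUP_MOUNT/deltest.bin"
echo "delete took $(( $(date +%s) - start )) seconds"
```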
Hi,
it looks like I have the resolution of my problem...
At the beginning I saved the ghettoVCB.sh script and the vmlist at /backup.
Yesterday I shut down my VMs and restarted the ESXi host, and I noticed the folder /backup had been deleted.
Is it possible that I cannot create folders in / ?
So I saved the script and backup list at /vmfs/volumes/datastore1.
I created the cron job, and so far there is no error deleting snapshots.
I will monitor the backups over the next few days... I hope it's running now.
Hi, could someone please shed some light?
I am still trying to figure out why this is happening:
{
vmfs/volumes/4b50aab6-e58168b3-4d08-f4ce46ae9e83/Backups # ./ghettoVCB.sh -f vmlist -d
: not found.sh: line 6:
: not found.sh: line 8:
: not found.sh: line 11:
: not found.sh: line 18:
: not found.sh: line 21:
: not found.sh: line 26:
: not found.sh: line 29:
: not found.sh: line 34:
: not found.sh: line 38:
: not found.sh: line 41:
: not found.sh: line 45:
: not found.sh: line 48:
: not found.sh: line 51:
: not found.sh: line 54:
: not found.sh: line 59:
: not found.sh: line 61:
: not found.sh: line 64:
: not found.sh: line 67:
: not found.sh: line 70:
: not found.sh: line 73:
: not found.sh: line 75:
: not found.sh: line 76:
: not found.sh: line 78:
: not found.sh: line 83:
: not found.sh: line 86:
: not found.sh: line 87:
###############################################################################
#
ghettoVCB for ESX/ESXi 3.5 & 4.x+
Author: William Lam
Created: 11/17/2008
Last modified: 11/14/2009
#
###############################################################################
: not found.sh: line 98: echo
Usage: ./ghettoVCB.sh -f -c -l
: not found.sh: line 100: echo
OPTIONS:
-f List of VMs to backup
-c Configuration directory for VM backups
-l File to output logging
-d Debug level info (default: info)
: not found.sh: line 106: echo
(e.g.)
Backup VMs stored in a list
./ghettoVCB.sh -f vms_to_backup
Backup VMs based on specific configuration located in directory
./ghettoVCB.sh -f vms_to_backup -c vm_backup_configs
Output will log to /tmp/ghettoVCB.log
./ghettoVCB.sh -f vms_to_backup -l /tmp/ghettoVCB.log
Dry run (no backup will take place)
./ghettoVCB.sh -f vms_to_backup -d dryrun
: not found.sh: line 116: echo
./ghettoVCB.sh: exit: line 117: Illegal number: 1
/vmfs/volumes/4b50aab6-e58168b3-4d08-f4ce46ae9e83/Backups # ls
ghettoVCB-vm_backup_configuration_template
ghettoVCB.sh
vmlist
/vmfs/volumes/4b50aab6-e58168b3-4d08-f4ce46ae9e83/Backups #
}
@wolfi:
You edited the script with a Windows tool, which upset the EOL characters.
Only edit with Linux tools like vi, or WinSCP's built-in editor, which will do the conversion.
HTH,
grubi.
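For anyone who has already copied a CRLF-damaged script onto the host, the carriage returns can also be stripped in the busybox console itself with `tr`. A self-contained demonstration on a throwaway file:

```shell
# Create a demo script the way Notepad would save it, with CRLF line endings:
printf 'echo hello\r\necho world\r\n' > demo.sh

# Strip the carriage returns and swap the cleaned copy into place:
tr -d '\r' < demo.sh > demo.unix && mv demo.unix demo.sh

# Verify no CR bytes remain:
grep -q "$(printf '\r')" demo.sh || echo "line endings are clean"
```

The same two `tr`/`mv` lines work on ghettoVCB.sh itself; remember to re-run `chmod +x` on the cleaned file.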
@bonibaz:
You can create the folder at the root of course, but it will disappear after a reboot as it is not part of the bootbank. If you want anything there to be persistent, the easiest way is to integrate it into oem.tgz.
@all:
I strongly encourage you not to use the Win2003R2 NFS server for backups. We had occasional I/O errors and speed was disappointing, to say the least. We have now switched to AllegroNFS: the I/O errors are gone and speed increased by about a factor of 8. We now get a backup performance of 4GB/min, which is OK for us.
Regards,
grubi.
Now I have another concern. Yesterday I configured my ESXi box to run ghettoVCB in a cron job. As far as I can tell that all worked fine. I have a copy of the virtual machine, I have logs that said what ghettoVCB did, etc.
But when I looked at the backup server's free disk space, I was amazed to find that the free space only dropped by 7GB instead of the 50GB that the virtual machine uses. Here are a couple of commands that I ran on the backup machine:
===========================================
root@beige:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 294G 33G 261G 12% /var/vm-back
===========================================
===========================================
root@beige:/var/vm-back/Mail Server/Mail Server-2010-01-29# ls -lah
total 7.0G
drwxr-xr-x 2 root root 79 2010-01-28 19:01 .
drwxr-xr-x 5 root root 93 2010-01-28 19:01 ..
-rw------- 1 root root 50G 2010-01-28 19:04 Mail Server-flat.vmdk
-rw------- 1 root root 500 2010-01-28 19:04 Mail Server.vmdk
-rwxr-xr-x 1 root root 3.0K 2010-01-28 19:01 Mail Server.vmx
===========================================
===========================================
root@beige:/var/vm-back/Mail Server/Mail Server-2010-01-29# du -h .
7.0G .
===========================================
The virtual machine's virtual hard disk is 50GB - that's the size I told ESXi to make it, and 'ls' shows that. But 'du' reports only 7GB in actual use, and 'df' shows only 33GB in use instead of the 76GB it should.
I've not tried to restore this virtual machine yet, but I'm concerned that there may be some file truncation happening and the restore simply will not be successful.
Has anyone else seen this type of behavior before?
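(For context: the numbers above are consistent with the NFS server storing the clone as a sparse file - `ls` reports the apparent size, while `du` and `df` count only the blocks actually allocated, and zero-filled regions are never written. This behavior is easy to demonstrate on any Linux box:)

```shell
# Create a 1 GB file without writing a single data block (seek past the end, write nothing):
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1G 2>/dev/null

apparent=$(wc -c < sparse.img)            # bytes the file claims to hold
allocated=$(du -k sparse.img | cut -f1)   # kilobytes actually allocated on disk

echo "apparent: $apparent bytes, allocated: $allocated KB"
rm -f sparse.img
```

A sparse backup restores fine as long as the copy tool preserves (or re-reads) the zeroed regions; the guest sees the full disk size either way.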
For what it's worth: The backup machine is running Ubuntu 6.06 and NFS which is exporting the /var/vm-back directory. I've attached this NFS share to the ESXi box as datastore "Backups". According to the logs, the backup went through without incident and took nearly three minutes to complete.
Here's the ghettoVCB log file:
===========================================
2010-01-29 00:00:01 -- info: ============================== ghettoVCB LOG START
2010-01-29 00:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/Backups
2010-01-29 00:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
2010-01-29 00:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
2010-01-29 00:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic
2010-01-29 00:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2010-01-29 00:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2010-01-29 00:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2010-01-29 00:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2010-01-29 00:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2010-01-29 00:00:01 -- info: CONFIG - LOG_LEVEL = info
2010-01-29 00:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/vmback.log
2010-01-29 00:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2010-01-29 00:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2010-01-29 00:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2010-01-29 00:00:03 -- info: Initiate backup for Mail Server
2010-01-29 00:00:03 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-01-29" for Mail Server
Destination disk format: VMFS zeroedthick
Cloning disk '/vmfs/volumes/datastore1/Mail Server/Mail Server.vmdk'...
Clone: 0% done. Clone: 1% done. Clone: 2% done. Clone: 3% done. Clone: 4% {SNIP...This goes all the way to 100%}
2010-01-29 00:02:49 -- info: Removing snapshot from Mail Server ...
===========================================
I'm going to try to restore the virtual machine later this morning, but I'd really like to hear why my backup machine thinks a 50GB virtual machine is only occupying 7GB of space.
@goppi
Thanks for the reply.
I did edit the file with Notepad.
So I connected to another machine,
downloaded the files to that machine (Win 2003 SP2),
then uploaded the files without editing them.
But I still get the same output as in my previous post.
Is there another method I need to use to download the file?
Maybe to a Linux machine and not a Windows one?
Thanks for your investigation and comments. In the back of my mind this was what I was thinking, but I could not confirm it. I know users with NFS issues may see a whole variety of problems pertaining to snapshots or rotation of the directories. If this is in fact the case, it looks like I need to create a new FAQ entry.
Thanks, and let me know if you run into the issue again.
Hi, I agree with Wolfi 3,
I have the same errors:
: command not founde 6:
: command not founde 8:
: command not founde 11:
: command not founde 18:
: command not founde 21:
: command not founde 26:
: command not founde 29:
: command not founde 34:
: command not founde 38:
: command not founde 41:
: command not founde 45:
: command not founde 48:
: command not founde 51:
: command not founde 54:
: command not founde 59:
: command not founde 61:
: command not founde 64:
: command not founde 67:
: command not founde 70:
: command not founde 73:
: command not founde 75:
: command not founde 76:
: command not founde 78:
: command not founde 83:
: command not founde 86:
: command not founde 87:
'/ghettoVCB.sh: line 88: syntax error near unexpected token `{
'/ghettoVCB.sh: line 88: `printUsage() {
I manually removed the blank lines with vi (I haven't edited anything with Windows tools) and then I end up with only the last two errors.
What's happenin'?
BK
lamw
Thank you for this great backup option. I gave it a first trial run this weekend and it appeared to work just as advertised.
Has this script and process been tested on ESXi 3.5 Update 5?
In response to FAQ 18,
I place my backups on a Linux-based NFS server, and I just tested deleting the oldest backup that ghettoVCB.sh can't delete from my ESXi 4... It deletes fine with rm -rf *, so I can't really see what the problem with the NFS server would be.
Anyone else use a Linux-based NFS-server for the backups?
I've not tested this on ESXi 3.5u5, but it should work without any problems. Nothing has drastically changed between u4 and u5, but please let me know if you run into any issues in your testing.
The majority of users, if not all, that run into this problem are backing up to an NFS server. Probably the easiest test, if you have spare capacity on a local VMFS volume, is to run a backup of a set of VMs there and see if you still hit this problem. That would hopefully rule out the NFS server.
I have the ghettoVCB script running fine on my ESXi 4 servers. It's running the backup and dumping the files to an NFS share on a Windows Server 2003 box. However, when the script writes the backup files to the NFS share, the folders/files become read-only. So when the script runs a second time it's fine, but on the third run, when it's supposed to delete the snapshot and folder on the NFS share, it can't, because the permissions on the files are read-only. I checked the NFS and NTFS permissions and they seem to be OK.
Any ideas?
Can you check whether the folders are set to read-only right after the initial backup completes? Is Windows changing the permissions? I don't believe ESX would do so, but verify after the first run; if that is the case, you may need to ensure the permissions are set properly... or, if Windows is automatically changing them, you'll need to investigate why, as this will be a problem. Don't you just love Windows.
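To see exactly which entries the rotation will choke on, one option is to list everything under the backup volume that is missing the owner-write bit. A sketch (the helper name and the example path are made up):

```shell
# find_readonly DIR: list entries lacking owner-write permission (mode bit 200);
# these are the files/dirs that "rm -rf" will fail on for a non-root NFS user.
find_readonly() {
    find "$1" ! -perm -200
}

# Example usage against the backup datastore (path is an assumption):
# find_readonly /vmfs/volumes/backup-nfs
```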
I have two ESXi 4 hosts managed by a vSphere Essentials server. Before adding these to vSphere, I was able to run ghettoVCB.sh; now when I run it, the script fails immediately:
./ghettoVCB.sh -f guests-to-backup.txt
2010-02-01 18:22:22 -- info: ============================== ghettoVCB LOG START ==============================
2010-02-01 18:22:22 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/BACKUP
2010-02-01 18:22:22 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2010-02-01 18:22:22 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
2010-02-01 18:22:22 -- info: CONFIG - ADAPTER_FORMAT = buslogic
2010-02-01 18:22:22 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2010-02-01 18:22:22 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2010-02-01 18:22:22 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2010-02-01 18:22:22 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2010-02-01 18:22:22 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2010-02-01 18:22:22 -- info: CONFIG - LOG_LEVEL = info
2010-02-01 18:22:22 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout
2010-02-01 18:22:22 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2010-02-01 18:22:22 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2010-02-01 18:22:22 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
Failed to login: vim.fault.NoPermission
2010-02-01 18:22:23 -- info: Error: failed to locate and extract VM_ID for server1!
If I attempt to run the vim-cmd command separately, it produces this:
/usr/bin/vim-cmd vmsvc/getallvms
Failed to login: vim.fault.NoPermission
I believe I turned off local administration by 'root' when adding these to vCenter. Does anyone know how to re-enable local root administration?
Looks like whatever you did is revoking the permissions. When you say these hosts are being managed by vSphere Essentials, I'm assuming you mean they're being managed by vCenter, whereas before they were standalone hosts?
Yes, sorry. The two ESXi servers used to be stand alone and are now managed by vCenter Server. I'm still digging through the documentation to find out if this setting can be changed, short of removing the ESXi host and adding it back.
Edit:
Ok, found it: "Lockdown Mode" - 'When enabled, lockdown mode prevents remote users from logging into this host using its administrative login name (e.g., "admin" or "root").'
This can be enabled/disabled by selecting the host -> Configuration -> Security Profile -> Lockdown Mode.
The wording is a bit misleading if you ask me, as it appears to prevent local interaction with the vim-cmd tool as well.
I disabled the feature and now the command runs successfully.
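For anyone scripting around this, a small pre-flight check before a cron'd run can catch the lockdown/permission failure early instead of producing the "failed to locate and extract VM_ID" error mid-backup. This wrapper is a sketch, not part of ghettoVCB (it assumes vim-cmd is in the PATH on the host, as it is on ESXi):

```shell
# preflight: verify vim-cmd can enumerate VMs before starting any backups;
# fails when lockdown mode is enabled or local root access has been revoked.
preflight() {
    if command -v vim-cmd >/dev/null 2>&1 && vim-cmd vmsvc/getallvms >/dev/null 2>&1; then
        echo "OK"
    else
        echo "ERROR: 'vim-cmd vmsvc/getallvms' failed - check lockdown mode / root access"
        return 1
    fi
}

preflight || echo "skipping backup run"
```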
A word of caution to anyone using vCenter: the vCenter server is not aware of the "Creating Snapshot" and "Removing Snapshot" steps (at least I didn't see those steps displayed in the vCenter "Recent Tasks" pane).
If I log directly into the ESXi host, the "Recent Tasks" does show the snapshot steps.
I wonder if there is any way from the ESXi host to pass the snapshot command to the vCenter server.
Correct, I had a feeling you were referring to vCenter and 'lockdown' mode, but wanted to be sure. You'll definitely want that off if you're going to be using the script.
The tasks being performed are not sent to vCenter; they're directed at an individual ESX(i) host. There is no way to get that information up to vCenter, and it's not part of the use case for this script.
If you're looking for something where vCenter is aware of the backups etc., you may want to take a look at ghettoVCBg2, but you'll still need to modify the script to go through vCenter rather than each individual host. The majority of users don't have vCenter or are using the free version of ESXi, so they require going through this unsupported setup.
Hopefully this clears up any confusion you may have.
Hello
I was reviewing the files from my first backup attempt. The VM I backed up was a Server 2008x64 with SQL installed. It has 3 drives configured on it.
1. 80 GIG Data
2. 40 GIG System
3. 8 GIG Swap
As I said in my previous post the backup said it completed successfully. I stored the files locally in my datastore and the process took around 73 minutes without compression.
My view from WinSCP shows the following files :
XXXXX.vmdk Size 436
XXXXX. vmx Size 2,767
XXXXX-flat.vmdk Size 85,899,345,920
XXXXX_1.vmdk Size 436
XXXXX_1-flat.vmdk Size 8,589,934,592
XXXXX_2.vmdk Size 436
XXXXX_2-flat.vmdk Size 42,949,672,960
My view from the VMware Infrastructure Client (browsing datastore) shows the following :
XXXXXXXX.vmx Size 2.70 KB
XXXXXXX.vmdk Size 19,189,760.00 KB
XXXXXXX_1.vmdk Size 237,568.00 KB
XXXXXXX_2.vmdk Size 4,936,704.00 KB
Why the difference in files and sizes between the VMware Infrastructure Client and WinSCP views? Also, is it normal for it to create bigger backups than the actual disk size? (80 GIG drive versus XXXXX-flat.vmdk Size 85,899,345,920)
Thanks
Not sure how familiar you are with the files that make up a VM, but for each disk in a VM there is a .vmdk (disk descriptor file) and a -flat.vmdk (the flat disk file). The descriptor file is pretty much metadata describing what type of VMDK each disk is, with a mapping to the actual disk contents, which live in the *-flat.vmdk files.
What you see in the datastore browser is ONLY the descriptor file; this was probably done intentionally so users don't accidentally browse to and delete the actual disk file. You can re-generate a descriptor file if you've accidentally deleted it, which is not the case when you delete the actual disk file. That is why you're seeing this discrepancy.
Regarding the size, if you do the math on the actual *-flat.vmdk you'll find it matches your disk allocation exactly: 1 GB here is 1024^3 bytes, and 85,899,345,920 / 1024^3 = 80, i.e. your 80 GIG data drive.
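The arithmetic can be sanity-checked in the shell using binary units (1 GB = 1024^3 bytes):

```shell
bytes=85899345920                     # size WinSCP reports for the -flat.vmdk
gib=$((bytes / 1024 / 1024 / 1024))   # divide down by 1024 three times
echo "$gib GB"                        # exactly the 80 GB data disk
```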
Hi there, great script!
Anyway, I encountered a problem using the script. It runs through a list of VMs specified in a file; however, a certain VM named "Office-01" causes problems. I configured the script to take 3 backups of each VM and rotate them. If there are fewer than 3 backups of "Office-01", the script works correctly: it takes the backup, renames it to fit into the rotation and continues with the next VM.
But if there are already 3 backups, the script misbehaves somehow. It just stops working, the last message being
"2010-01-23 00:59:25 -- info: Removing snapshot from Office-01 ..."
There is no error message. It takes the backup of the VM but no longer rotates: the oldest backup doesn't get deleted and the new backup isn't renamed. This fills my backup space with unwanted old backups. Additionally, none of the subsequent VMs in my list file get backed up, because the script just stops, without giving any error message.
Would be really nice if someone could give me a hint on how to fix this!
I've appended a log including the configuration parameters (I left out some "Clone: XX% done." messages). If additional information is needed, just let me know!
2010-01-23 00:05:01 -- info: ============================== ghettoVCB LOG START ==============================
2010-01-23 00:05:01 -- debug: HOST BUILD: VMware ESXi 4.0.0 build-171294
2010-01-23 00:05:01 -- debug: HOSTNAME: svr01.localhost
2010-01-23 00:05:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas/backups
2010-01-23 00:05:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2010-01-23 00:05:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
2010-01-23 00:05:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic
2010-01-23 00:05:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2010-01-23 00:05:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2010-01-23 00:05:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2010-01-23 00:05:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2010-01-23 00:05:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2010-01-23 00:05:01 -- info: CONFIG - LOG_LEVEL = debug
2010-01-23 00:05:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = /vmfs/volumes/nas/backups/logs/ghettoVCB-backup-1264205101.log
2010-01-23 00:05:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2010-01-23 00:05:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2010-01-23 00:05:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2010-01-23 00:05:06 -- info: Initiate backup for CA
Destination disk format: VMFS zeroedthick
Cloning disk '/vmfs/volumes/datastore2/CA/CA.vmdk'...
Clone: 0% done.
Clone: 1% done.
Clone: 2% done.
...
Clone: 98% done.
Clone: 99% done.
Clone: 100% done.
2010-01-23 00:11:35 -- info: Backup Duration: 6.48 Minutes
2010-01-23 00:11:35 -- info: Successfully completed backup for CA!
2010-01-23 00:11:37 -- info: Initiate backup for Dev-01
2010-01-23 00:11:37 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-01-23" for Dev-01
2010-01-23 00:11:38 -- debug: Waiting for snapshot "ghettoVCB-snapshot-2010-01-23" to be created
2010-01-23 00:11:38 -- debug: Snapshot timeout set to: 900 seconds
Destination disk format: VMFS zeroedthick
Cloning disk '/vmfs/volumes/datastore1/Dev-01/Dev-01.vmdk'...
Clone: 0% done.
Clone: 1% done.
Clone: 2% done.
...
Clone: 100% done.
2010-01-23 00:19:57 -- info: Removing snapshot from Dev-01 ...
2010-01-23 00:20:32 -- info: Backup Duration: 8.92 Minutes
2010-01-23 00:20:32 -- info: Successfully completed backup for Dev-01!
2010-01-23 00:20:33 -- info: Initiate backup for Monitoring
2010-01-23 00:20:33 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-01-23" for Monitoring
2010-01-23 00:20:34 -- debug: Waiting for snapshot "ghettoVCB-snapshot-2010-01-23" to be created
2010-01-23 00:20:34 -- debug: Snapshot timeout set to: 900 seconds
Destination disk format: VMFS zeroedthick
Cloning disk '/vmfs/volumes/datastore2/Monitoring/Monitoring.vmdk'...
Clone: 0% done.
...
Clone: 100% done.
2010-01-23 00:23:30 -- info: Removing snapshot from Monitoring ...
2010-01-23 00:23:49 -- info: Backup Duration: 3.27 Minutes
2010-01-23 00:23:49 -- info: Successfully completed backup for Monitoring!
2010-01-23 00:23:51 -- info: Initiate backup for CRM-01
2010-01-23 00:23:51 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-01-23" for CRM-01
2010-01-23 00:23:52 -- debug: Waiting for snapshot "ghettoVCB-snapshot-2010-01-23" to be created
2010-01-23 00:23:52 -- debug: Snapshot timeout set to: 900 seconds
Destination disk format: VMFS zeroedthick
Cloning disk '/vmfs/volumes/datastore2/CRM-01/CRM-01.vmdk'...
Clone: 0% done.
Clone: 1% done.
...
Clone: 100% done.
2010-01-23 00:28:16 -- info: Removing snapshot from CRM-01 ...
2010-01-23 00:28:36 -- info: Backup Duration: 4.75 Minutes
2010-01-23 00:28:36 -- info: Successfully completed backup for CRM-01!
2010-01-23 00:28:37 -- info: Initiate backup for Office-01
2010-01-23 00:28:37 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-01-23" for Office-01
2010-01-23 00:28:38 -- debug: Waiting for snapshot "ghettoVCB-snapshot-2010-01-23" to be created
2010-01-23 00:28:38 -- debug: Snapshot timeout set to: 900 seconds
Destination disk format: VMFS zeroedthick
Cloning disk '/vmfs/volumes/datastore1/Office-01/Office-01.vmdk'...
Clone: 0% done.
...
Clone: 99% done.
Clone: 100% done.
2010-01-23 00:59:25 -- info: Removing snapshot from Office-01 ...
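If the rotation step is failing silently, a defensive variant that checks each delete and rename would at least surface the problem in the log. This is only a sketch of the idea, not ghettoVCB's actual rotation code; the directory layout and variable names are illustrative:

```shell
#!/bin/sh
# Sketch only -- not ghettoVCB's real rotation code. Checks the exit status of
# each rm/mv so a failing delete (e.g. an NFS I/O error) is logged instead of
# silently halting the run. BACKUP_DIR and the "-N" suffix scheme are examples.
BACKUP_DIR="${BACKUP_DIR:-/tmp/ghettoVCB-rotate-demo/Office-01}"
ROTATION_COUNT=3

# create dummy backup directories so this sketch can be run standalone
mkdir -p "${BACKUP_DIR}-1" "${BACKUP_DIR}-2" "${BACKUP_DIR}-3"

# drop the oldest copy first
if ! rm -rf "${BACKUP_DIR}-${ROTATION_COUNT}"; then
    echo "WARNING: could not remove ${BACKUP_DIR}-${ROTATION_COUNT} (NFS I/O error?)"
fi

# shift the remaining copies up by one, newest last
i=$((ROTATION_COUNT - 1))
while [ "${i}" -ge 1 ]; do
    if [ -d "${BACKUP_DIR}-${i}" ]; then
        mv "${BACKUP_DIR}-${i}" "${BACKUP_DIR}-$((i + 1))" || \
            echo "WARNING: rename of ${BACKUP_DIR}-${i} failed"
    fi
    i=$((i - 1))
done
```

If the warnings fire only on the delete of the oldest folder, that points at the storage rather than the script logic.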
Hi ggehring,
that's exactly the same problem bonibaz and I have.
As I pointed out in my last post
http://communities.vmware.com/docs/DOC-8760#comments-14281
I think it's related to the NAS itself.
Do you get I/O errors when you rm -rf * the old backups?
Hi angelone, thanks for your answer.
It looks like I do indeed have the same problem. Running rm -rf * took nearly 2 minutes and produced some input/output errors, as you described. Here's the console log:
/vmfs/volumes/e863fe05-624ba7f8/backups/Office-01 # rm -rf *
rm: cannot remove 'Office-01-2010-02-01/Office-01-flat.vmdk': Input/output error
rm: cannot remove 'Office-01-2010-02-01': Input/output error
ash: getcwd: Input/output error
The content of the backup folder gets deleted, but not the folder itself.
I'm using a QNAP Turbo NAS TS-419U with 4 drives in RAID 5.
I mounted the NAS as an NFS datastore using the vSphere Client.
What would you suggest to solve the problem? I will check whether jumbo frames are enabled on the NAS and, if so, how to disable them.
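One way to narrow this down is to time the delete directly on the datastore and check its exit status: a multi-minute rm or an input/output error points at the NAS rather than at the script. A rough sketch, assuming a throwaway test directory (the path is a placeholder; on a real host you would point it at the backup volume under /vmfs/volumes/):

```shell
#!/bin/sh
# Sketch: measure how long the storage takes to delete a backup-sized directory
# and report any failure. TEST_DIR is a placeholder path.
TEST_DIR="${TEST_DIR:-/tmp/ghettoVCB-delete-test}"

mkdir -p "${TEST_DIR}/old-backup"
# dummy file standing in for a *-flat.vmdk (1 MB here; use a much larger size
# on the real datastore to reproduce slow-delete behaviour)
dd if=/dev/zero of="${TEST_DIR}/old-backup/dummy-flat.vmdk" bs=1024 count=1024 2>/dev/null

START=$(date +%s)
rm -rf "${TEST_DIR}/old-backup"
RC=$?
END=$(date +%s)

if [ "${RC}" -ne 0 ]; then
    echo "DELETE FAILED (exit ${RC}) -- check the NFS server logs"
else
    echo "delete ok in $((END - START)) seconds"
fi
```

If this reliably stalls or fails only for large files, jumbo-frame settings, NFS server tuning, or the NAS's own filesystem are the places to look.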
lamw
Thank you for the guidance above (again). I have searched this thread and apologize in advance if this has been covered, but what is the recommended DISK_BACKUP_FORMAT? I see the default in the script is "zeroedthick", but all the examples above show "thin". Is there a best choice?
I also tried to back up to my NFS share tonight and ran into the following issues. Any ideas? Are the results more reliable if a VM is powered down before starting the backup?
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # ./ghettoVCB.sh -f vmbackups
2010-02-04 01:50:26 -- info: ============================== ghettoVCB LOG START ==============================
2010-02-04 01:50:26 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/esxi-backup
2010-02-04 01:50:26 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2010-02-04 01:50:26 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2010-02-04 01:50:26 -- info: CONFIG - ADAPTER_FORMAT = buslogic
2010-02-04 01:50:26 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2010-02-04 01:50:26 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2010-02-04 01:50:26 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2010-02-04 01:50:26 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2010-02-04 01:50:26 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2010-02-04 01:50:26 -- info: CONFIG - LOG_LEVEL = info
2010-02-04 01:50:26 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout
2010-02-04 01:50:26 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2010-02-04 01:50:26 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2010-02-04 01:50:26 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2010-02-04 01:50:31 -- info: Initiate backup for MSServer2003X32
2010-02-04 01:50:31 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-04" for MSServer2003X32
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27/MSServer2003X32/MSServer2003X32_2.vmdk'...
Clone: 100% done.
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27/MSServer2003X32/MSServer2003X32_1.vmdk'...
Clone: 100% done.
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/datastore2/MSServer2003X32/MSServer2003X32.vmdk'...
Clone: 61% done.Failed to clone disk : Input/output error (327689).
2010-02-04 03:29:39 -- info: Removing snapshot from MSServer2003X32 ...
2010-02-04 03:31:31 -- info: Backup Duration: 101.00 Minutes
2010-02-04 03:31:31 -- info: Successfully completed backup for MSServer2003X32!
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # 2010-02-04 01:50:31 -- info: Initiate backup for MSServer2003X32
-ash: 2010-02-04: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # 2010-02-04 01:50:31 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-04" for MSServer2003X32
-ash: 2010-02-04: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # Destination disk format: VMFS thin-provisioned
-ash: Destination: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # Cloning disk '/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27/MSServer2003X32/MSServer2003X32_2.vmdk'...
-ash: Cloning: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # Clone: 100% done.
-ash: Clone:: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27/MSServer2003X32/MSServer2003X32_1.vmdk'...
-ash: Destination: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # Cloning disk '/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27/MSServer2003X32/MSServer2003X32_1.vmdk'...
-ash: Cloning: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # Clone: 100% done.
-ash: Clone:: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # Destination disk format: VMFS thin-provisioned
-ash: Destination: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # Cloning disk '/vmfs/volumes/datastore2/MSServer2003X32/MSServer2003X32.vmdk'...
-ash: Cloning: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # Clone: 61% done.Failed to clone disk : Input/output error (327689).
-ash: Syntax error: "(" unexpected
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # 2010-02-04 03:29:39 -- info: Removing snapshot from MSServer2003X32 ...
-ash: 2010-02-04: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # 2010-02-04 03:31:31 -- info: Backup Duration: 101.00 Minutes
-ash: 2010-02-04: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # 2010-02-04 03:31:31 -- info: Successfully completed backup for MSServer2003X32!
-ash: 2010-02-04: not found
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 #
/vmfs/volumes/4a8ea23d-22b66fc2-ab77-0025b382fb27 # 2010-02-04 03:31:31 -- info: ============================== ghettoVCB LOG END ================================
For backup purposes I would say thin makes the most sense unless you have a reason not to use it. There is a negligible performance difference; the main reason I would recommend zeroedthick over thin is to ensure that you properly manage your storage. You may find yourself over-allocating if you thin provision all VMs and don't keep an eye on it.
Regarding your errors, please take a look back at the last 10 or so threads. This is an issue with your NFS server, specifically with deleting large files, which causes a delay or intermittent disconnect where the ESX(i) host is unable to communicate with the NFS datastore. Quite a few individuals have experienced this. It has nothing to do with the script.
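For reference, the format choice is a one-line setting, either globally in ghettoVCB.sh or in a per-VM configuration file. An illustrative fragment (the values shown are examples, not a recommendation for every environment):

```shell
# Per-VM ghettoVCB configuration fragment (illustrative values)
DISK_BACKUP_FORMAT=thin          # space-efficient; monitor allocation on the target
# DISK_BACKUP_FORMAT=zeroedthick # full-size backups; nothing to over-allocate
# DISK_BACKUP_FORMAT=2gbsparse   # split sparse files; useful when thin is unsupported
VM_BACKUP_ROTATION_COUNT=3
```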
=========================================================================
William Lam
VMware vExpert 2009
VMware ESX/ESXi scripts and resources at: http://engineering.ucsb.edu/~duonglt/vmware/
VMware Code Central - Scripts/Sample code for Developers and Administrators
If you find this information useful, please award points for "correct" or "helpful".
I have been testing the script for nearly a week now without problems.
Everything works fine: create snapshot, copy snapshot to NFS, remove snapshot, rename and remove old backup folders.
I had my doubts, but now I'm really happy!
THANKS FOR THE HELP, WILLIAM!!!
@ws2000:
We use 2gbsparse as it produces the smallest backups.
In some cases we found it to consume significantly less space than thin, too.
Keep in mind that some NFS servers do not support thin provisioning (e.g. NFS on Windows), resulting in thick backups instead.
So my personal favorite is still 2gbsparse.
grubi.
goppi
I am using Windows Services for UNIX Version 3.5 on a Windows XP Dell Dimension (firewall disabled). The machine has a new SATA Western Digital Black drive and 2 GB of RAM. Pretty good for a simple NAS, I thought.
I have tried the backup twice now. Both times it succeeds on my smaller 6 GB and 25 GB VM disks and then fails with an error (Failed to clone disk : Input/output) on the 51 GB VM disk. The first time it failed at 61% and the second time at 10%. I purposely attempted this after hours so there would be minimal network congestion.
Do you think it could be format related? 2gbsparse versus thin?
On a side note, I tried to WinSCP some files off my ESXi host. WinSCP runs for a few minutes and then fails with an error about the host not communicating for the last 15 seconds. It seems to create the same sized file (488268 KB) every time and then stops. There has to be some setting that is getting me. I can drop a 6 GB .wim image onto PCs in around 15 to 17 minutes without ever hitting a network issue.
One other thought: I have only added the NFS share as storage on the ESXi host. I have not added an additional vmkernel-based vSwitch pointing at this machine. Is that recommended?
I have a quick question. When I run this script on one of my ESX servers, it backs up the virtual machines on that server fine, but I am unable to back up VMs that are not running on the same ESX host as the script. Is there a way around this, or do I need to turn DRS off and back up the VMs running on each server?
Hi soleblazer,
this script only backs up VMs running on the local ESX(i) host.
regards,
Martin
You'll need to disable DRS, otherwise the VMs will be moving around and the script will only work on VMs that are running locally on the ESX(i) host during the period of the script's execution. Some individuals have placed the script on shared storage (the recommended approach) with a global backup list of VMs; that way they're able to back up all VMs by having each host execute the script in a staggered period.
Again, this won't always guarantee that you'll back up every VM, or that you won't back up duplicates; you could still miss one after it moves from HostA to HostB via vMotion. You can easily prevent duplicate backups by writing out to another file that keeps track of the VMs that have been backed up during a period. This script is really meant to be used without vCenter, and the majority of use cases will be a standalone ESX or ESXi host.
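The duplicate-prevention idea described above (each host recording completed VMs in a file on the shared datastore) could be sketched like this; the tracking-file path and helper function are hypothetical, not part of ghettoVCB:

```shell
#!/bin/sh
# Sketch: skip VMs another host has already backed up in this period by
# consulting a shared tracking file. TRACK_FILE is a hypothetical path; on a
# real setup it would live on the shared datastore all hosts can see.
TRACK_FILE="${TRACK_FILE:-/tmp/ghettoVCB-completed-vms}"
: > "${TRACK_FILE}"   # start a fresh period; in practice this would rotate per run

backup_vm_once() {
    VM_NAME="$1"
    if grep -q "^${VM_NAME}\$" "${TRACK_FILE}"; then
        echo "skipping ${VM_NAME}: already backed up in this period"
        return 0
    fi
    # ... the real per-VM backup would be invoked here ...
    echo "${VM_NAME}" >> "${TRACK_FILE}"
    echo "backed up ${VM_NAME}"
}

backup_vm_once "Dev-01"
backup_vm_once "Dev-01"   # second call is skipped via the tracking file
```

Note this simple version is not locked against two hosts checking the file at the same instant; staggering the script's start times, as suggested above, keeps that window small.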
Glad to hear everything is working out.
I would, though, like to know what exactly is causing so many users to run into issues with their NFS servers. I'm not sure whether the pipe between the two is not fast enough, or whether the NFS server itself lacks the CPU/memory resources to keep up with the demand.
Any idea what is causing WinSCP to fail when copying a .vmdk file from the datastore? I have tried it 6 times now, and every time it stops at 488268 KB with a "host has not communicated for 15 seconds" error. I have plenty of disk space; it's only an 8 GB VM disk. Stopping at the same file size every time screams of a newbie user mistake, but I can't figure it out.
I will try your suggestions.
Thanks
Well, first off I would investigate why your NAS is having issues with the backup. If you take a look at the last 20 or so threads, a few of the posts have suggested this may be an issue with the NFS server (take a look at the server logs, etc.).
Secondly, WinSCP was not designed for transferring VMs and should not be used here; any issues you run into are probably expected. You may want to take a look at Veeam's FastSCP, which IS designed to transfer VMs off an ESX(i) host via the SCP protocol. The issue has nothing to do with free space but with how you're communicating with the host to transport the files.
The latter would be a short-term fix, but you probably want to figure out why your NFS server is not sufficient to support the backup. This may just be a known issue when using NFS from Windows, which would not be my first recommendation when setting up a NAS.
Good Luck
Another question. I backed up my first VM, and it seems to have worked. I restored the VM (and by restore I mean I just added it to the inventory and started it up, as a test) and noticed one odd thing: the /etc/sysconfig/network-scripts/ifcfg-eth0 file had been modified; it was changed to use DHCP. It's definitely different from the source VM. Could the backup cause something like that? I'm just curious whether anyone else has seen this; I would assume the backup is an exact replica, using the same IP. Everything else seemed the same.
Well, Veeam's FastSCP is working WONDERFULLY! I will continue hacking away at your other recommendations and post back.
THANKS FOR ALL THE HELP! It's very much appreciated.
The backup preserves everything you had in the source, so you may want to double-check that you were in fact using a static IP address. Also, when you restored, was the source still running? Perhaps there was a conflict and whatever OS you had backed up defaulted to DHCP.
This is more of a guest OS question than a script question, and I'd recommend you do some investigation on your end.
Thanks