ghettoVCB.sh - Free alternative for backing up VM's for ESX(i) 3.5, 4.x, 5.x, 6.x & 7.x


Table of Contents:

    • Description
    • Features
    • Requirements
    • Setup
    • Configurations
    • Usage
    • Sample Execution   
      • Dry run Mode
      • Debug backup Mode
      • Backup VMs stored in a list
      • Backup All VMs residing on specific ESX(i) host
      • Backup All VMs residing on specific ESX(i) host and exclude the VMs in the exclusion list
      • Backup VMs using individual backup policies
    • Enable compression for backups
    • Email Backup Logs
    • Restore backups (ghettoVCB-restore.sh)
    • Cronjob FAQ
    • Stopping ghettoVCB Process
    • FAQ
    • Our NFS Server Configuration
    • Useful Links
    • Change Log

 

Description:


This script performs backups of virtual machines residing on ESX(i) 3.5/4.x/5.x/6.x/7.x servers using a methodology similar to VMware's VCB tool. The script takes a snapshot of the live running virtual machine, backs up the master VMDK(s) and then, upon completion, removes the snapshot until the next backup. The only caveat is that it utilizes resources available to the Service Console of the ESX server or the Busybox Console (Tech Support Mode) of the ESXi server running the backups, as opposed to following the traditional method of offloading virtual machine backups through a VCB proxy.

This script has been tested on ESX 3.5/4.x/5.x and ESXi 3.5/4.x/5.x/6.x/7.x and supports the following backup media: LOCAL STORAGE, SAN and NFS. The script is non-interactive and can be set up to run via cron. Currently, the script accepts a text file that lists the display names of the virtual machine(s) to be backed up. Additionally, one can specify a folder containing configuration files on a per-VM basis for granular control over backup policies.

Additionally, for ESX(i) environments that don't have persistent NFS datastores designated for backups, the script offers the ability to automatically connect the ESX(i) server to an NFS exported folder and then, upon backup completion, disconnect it from the ESX(i) server. The connection is established by creating an NFS datastore link, which enables monolithic (or thick) VMDK backups, as opposed to using the usual *nix mount command, which requires breaking VMDK files into the 2gbsparse format for backup. Enabling this mode is self-explanatory when editing the script (Note: the VM_BACKUP_VOLUME variable is ignored if ENABLE_NON_PERSISTENT_NFS=1).

In its current configuration, the script will keep up to 3 unique backups of a virtual machine before it overwrites the previous backups; this, however, can be modified to fit your procedures if need be. Please be diligent in running the script in a test or staging environment before using it on production virtual machines; this script functions well within our environment, but there is a chance that it may not fit well into other environments.

 

If you have any questions, you may post in the dedicated ghettoVCB VMTN community group.

 

If you have found this script to be useful and would like to contribute back, please click here to donate.

 

Please read ALL documentation + FAQs before posting a question about an issue or problem. Thank you.

Features

  • Online back up of VM(s)
  • Support for multiple VMDK disk(s) backup per VM
  • Only valid VMDK(s) presented to the VM will be backed up
  • Ability to shutdown guestOS and initiate backup process and power on VM afterwards with the option of hard power timeout
  • Allow spaces in VM(s) backup list (not recommended and not a best practice)
  • Ensure that the snapshot removal process completes prior to continuing onto the next VM backup
  • VM(s) that initially contain snapshots will not be backed up and will be ignored
  • Ability to specify the number of backup rotations for VM
  • Output back up VMDK(s) in either ZEROEDTHICK (default behavior) or 2GB SPARSE or THIN or EAGERZEROEDTHICK format
  • Support for both SCSI and IDE disks
  • Non-persistent NFS backup
  • Fully support VMDK(s) stored across multiple datastores
  • Ability to compress backups (Experimental Support - Please refer to FAQ #25)
  • Ability to configure individual VM backup policies
  • Ability to include/exclude specific VMDK(s) per VM (requires individual VM backup policy setup)
  • Ability to configure logging output to file
  • Independent disk awareness (such VMDK(s) will be ignored)
  • New timeout variables for shutdown and snapshot creations
  • Ability to configure snapshots with both memory and/or quiesce options
  • Ability to configure disk adapter format
  • Additional debugging information including dry run execution
  • Support for VMs with both virtual/physical RDM (pRDM will be ignored and not backed up)
  • Support for global ghettoVCB configuration file
  • Support for VM exclusion list
  • Ability to backup all VMs residing on a specific host w/o specifying VM list
  • Implemented simple locking mechanism to ensure only 1 instance of ghettoVCB is running per host
  • Updated backup directory structure - rsync friendly
  • Additional logging and final status output
  • Logging of ghettoVCB PID (process id)
  • Email backup logs (Experimental Support)
  • Rsync "Link" Support (Experimental Support)
  • Enhanced "dryrun" details including configuration and/or VMDK(s) issues
  • New storage debugging details pre/post backup
  • Quick email status summary
  • Updated ghettoVCB documentation
  • ghettoVCB available via github
  • Support for ESXi 5.1 NEW!
  • Support for individual VM backup via command-line NEW!
  • Support VM(s) with existing snapshots NEW!
  • Support multiple running instances of ghettoVCB NEW!
    (Experimental Support)
  • Configure VM shutdown/startup order NEW!
  • Support changing custom VM name during restore NEW! 

 


 

Requirements:

  • VMs running on ESX(i) 3.5/4.x/5.x/6.x/7.x
  • SSH console access to ESX(i) host

 


 

Setup:


1) Download ghettoVCB from github by clicking on the ZIP button at the top and upload to either your ESX or ESXi system (use scp or WinSCP to transfer the file)



2) Extract the contents of the zip file (filename will vary):

# unzip ghettoVCB-master.zip

Archive:  ghettoVCB-master.zip
   creating: ghettoVCB-master/
  inflating: ghettoVCB-master/README
  inflating: ghettoVCB-master/ghettoVCB-restore.sh
  inflating: ghettoVCB-master/ghettoVCB-restore_vm_restore_configuration_template
  inflating: ghettoVCB-master/ghettoVCB-vm_backup_configuration_template
  inflating: ghettoVCB-master/ghettoVCB.conf
  inflating: ghettoVCB-master/ghettoVCB.sh



3) The script is now ready to be used and is located in a directory named ghettoVCB-master

# ls -l

-rw-r--r--    1 root     root           281 Jan  6 03:58 README
-rw-r--r--    1 root     root         16024 Jan  6 03:58 ghettoVCB-restore.sh
-rw-r--r--    1 root     root           309 Jan  6 03:58 ghettoVCB-restore_vm_restore_configuration_template
-rw-r--r--    1 root     root           356 Jan  6 03:58 ghettoVCB-vm_backup_configuration_template
-rw-r--r--    1 root     root           631 Jan  6 03:58 ghettoVCB.conf
-rw-r--r--    1 root     root         49375 Jan  6 03:58 ghettoVCB.sh

4) Before using the scripts, you will need to enable the execute permission on both ghettoVCB.sh and ghettoVCB-restore.sh by running the following:

chmod +x ghettoVCB.sh
chmod +x ghettoVCB-restore.sh

 


 

Configurations:


The following variables need to be defined within the script or in a VM backup policy prior to execution.

Defining the backup datastore and folder in which the backups are stored (if the folder does not exist, it will automatically be created):

VM_BACKUP_VOLUME=/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS



Defining the backup disk format (zeroedthick, eagerzeroedthick, thin, and 2gbsparse are available):

DISK_BACKUP_FORMAT=thin

Note: If you are using the 2gbsparse format on an ESXi 5.1 host, backups may fail. Please download the latest version of the ghettoVCB script, which automatically resolves this, or take a look at this article for the details.

Defining the backup rotation per VM:

VM_BACKUP_ROTATION_COUNT=3



Defining whether the VM is powered down or not prior to backup (1 = enable, 0 = disable):

Note: VM(s) that are powered off will not require snapshotting

POWER_VM_DOWN_BEFORE_BACKUP=0



Defining whether the VM can be hard powered off when "POWER_VM_DOWN_BEFORE_BACKUP" is enabled and the VM does not have VMware Tools installed:

ENABLE_HARD_POWER_OFF=0



If "ENABLE_HARD_POWER_OFF" is enabled, this defines the number of (60sec) iterations the script will wait before executing a hard power off:

ITER_TO_WAIT_SHUTDOWN=3



The number of (60sec) iterations the script will wait when powering off the VM before giving up and ignoring the particular VM for backup:

POWER_DOWN_TIMEOUT=5



The number of (60sec) iterations the script will wait when taking a snapshot of a VM before giving up and ignoring the particular VM for backup:

Note: Default value should suffice

SNAPSHOT_TIMEOUT=15



Defining whether or not to enable compression (1 = enable, 0 = disable):

ENABLE_COMPRESSION=0



NOTE: With ESXi 3.x/4.x/5.x, there is a limitation on the maximum size of a VM that can be compressed within the unsupported Busybox Console; this should not affect backups running on classic ESX 3.x, 4.x or 5.x. On ESXi 3.x the largest supported VM is 4GB for compression and on ESXi 4.x the largest supported VM is 8GB. If you try to compress a larger VM, you may run into issues when trying to extract it upon a restore. PLEASE TEST THE RESTORE PROCESS BEFORE MOVING TO PRODUCTION SYSTEMS!

Defining the adapter type for the backed up VMDK (DEPRECATED - NO LONGER NEEDED):

ADAPTER_FORMAT=buslogic



Defining whether virtual machine memory is included in the snapshot and whether quiescing is enabled (1 = enable, 0 = disable):

Note: By default both are disabled

VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0



NOTE: VM_SNAPSHOT_MEMORY is only used to ensure that when the snapshot is taken, its memory contents are also captured. This is only relevant to the actual snapshot and is not used in any shape or form with regards to the backup. All backups taken, whether your VM is running or offline, will result in an offline VM backup when you restore. This was originally added for debugging purposes and in general should be left disabled.

Defining which VMDK(s) to back up for a particular VM, either a comma-separated list of VMDKs or "all":

VMDK_FILES_TO_BACKUP="myvmdk.vmdk"

 

Defining whether or not VM(s) with existing snapshots can be backed up. This flag means it will CONSOLIDATE ALL EXISTING SNAPSHOTS for a VM prior to starting the backup (1 = yes, 0 = no):

ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP=0

 

Defining the order in which VM(s) should be shut down first, especially if there are dependencies between multiple VM(s). This should be a comma-separated list of VM(s):

VM_SHUTDOWN_ORDER=vm1,vm2,vm3

 

Defining the order in which VM(s) should be started up after backups have completed, especially if there are dependencies between multiple VM(s). This should be a comma-separated list of VM(s):

VM_STARTUP_ORDER=vm3,vm2,vm1

 

 

Defining NON-PERSISTENT NFS Backup Volume (1 = yes, 0 = no):

ENABLE_NON_PERSISTENT_NFS=0

NOTE: This is meant for environments that do not want a persistent connection to their NFS backup volume and would like the NFS volume to be mounted only during backups. The script expects the following 5 variables to be defined if this is to be used: UNMOUNT_NFS, NFS_SERVER, NFS_MOUNT, NFS_LOCAL_NAME and NFS_VM_BACKUP_DIR

 

Defining whether or not to unmount the NFS backup volume (1 = yes, 0 = no):

UNMOUNT_NFS=0

Defining the NFS server address (IP/hostname):

NFS_SERVER=172.51.0.192

Defining the NFS export path:

NFS_MOUNT=/upload

Defining the NFS datastore name:

NFS_LOCAL_NAME=backup

Defining the NFS backup directory for VMs:

NFS_VM_BACKUP_DIR=mybackups
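
Putting it together, a minimal sketch of a non-persistent NFS configuration might look like the following (the server address, export path and directory names below are placeholders, not values taken from this document):

# mount the NFS volume only for the duration of the backup and unmount it afterwards
ENABLE_NON_PERSISTENT_NFS=1
UNMOUNT_NFS=1
# NFS server, export, and the datastore name it will appear as on the ESX(i) host (placeholders)
NFS_SERVER=192.168.1.100
NFS_MOUNT=/exports/esxi-backups
NFS_LOCAL_NAME=backup
# sub-directory on the NFS volume where the VM backups are stored
NFS_VM_BACKUP_DIR=mybackups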

 

NOTE: Email support requires vSphere 4.1 or greater and this feature is experimental. If you are having issues with sending mail, please take a look at the Email Backup Logs section

Defining whether or not to email backup logs (1 = yes, 0 = no):

EMAIL_LOG=1



Defining whether or not the email message is kept on the host (rather than deleted) after an attempt to send it, regardless of whether the send was successful; this is used for debugging purposes (1 = yes, 0 = no):

EMAIL_DEBUG=1



Defining email server:

EMAIL_SERVER=auroa.primp-industries.com



Defining email server port:

EMAIL_SERVER_PORT=25

 

Defining the email delay interval (useful if you have a slow SMTP server and would like to include a delay in netcat using the -i param, default is 1 second):

EMAIL_DELAY_INTERVAL=1


Defining recipient of the email:

EMAIL_TO=auroa@primp-industries.com



Defining the from user, which may require a specific domain entry depending on email server configuration:

EMAIL_FROM=root@ghettoVCB

 

Defining whether to support RSYNC symbolic link creation (1 = yes, 0 = no):

RSYNC_LINK=0

 

Note: This enables the automatic creation of a generic symbolic link (with both a relative & absolute path) which users can reference to run replication backups using rsync from a remote host. This does not actually add rsync backup support to ghettoVCB. Please take a look at the Rsync section of the documentation for more details.

 

  • A sample global ghettoVCB configuration file called ghettoVCB.conf is included with the download. It contains the same variables as defined above and allows a user to customize and define multiple global configurations based on the user's environment.

 


# cat ghettoVCB.conf
VM_BACKUP_VOLUME=/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=3
POWER_VM_DOWN_BEFORE_BACKUP=0
ENABLE_HARD_POWER_OFF=0
ITER_TO_WAIT_SHUTDOWN=3
POWER_DOWN_TIMEOUT=5
ENABLE_COMPRESSION=0
VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0
ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP=0
ENABLE_NON_PERSISTENT_NFS=0
UNMOUNT_NFS=0
NFS_SERVER=172.30.0.195
NFS_MOUNT=/nfsshare
NFS_LOCAL_NAME=nfs_storage_backup
NFS_VM_BACKUP_DIR=mybackups
SNAPSHOT_TIMEOUT=15
EMAIL_LOG=0
EMAIL_SERVER=auroa.primp-industries.com
EMAIL_SERVER_PORT=25
EMAIL_DELAY_INTERVAL=1
EMAIL_TO=auroa@primp-industries.com
EMAIL_FROM=root@ghettoVCB
WORKDIR_DEBUG=0
VM_SHUTDOWN_ORDER=
VM_STARTUP_ORDER=


To override any existing configurations within the ghettoVCB.sh script and use a global configuration file, the user just needs to specify the new -g flag and the path to the global configuration file (for an example, please refer to the Sample Execution section of the documentation).

 

Running multiple instances of ghettoVCB is now supported with the latest release by specifying the working directory (-w) flag.

By default, the working directory of a ghettoVCB instance is /tmp/ghettoVCB.work and you can run another instance by providing an alternate working directory. You should try to minimize the number of ghettoVCB instances running on your ESXi host, as each instance consumes some amount of resources when running in the ESXi Shell. This is considered an experimental feature, so please test in a development environment to ensure everything is working prior to moving to production systems.
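
For illustration only (the VM list names and log paths below are made up), two instances could be launched from separate sessions, each with its own working directory:

# first instance uses the default working directory (/tmp/ghettoVCB.work)
./ghettoVCB.sh -f vms_to_backup_group1 -l /vmfs/volumes/local-storage/ghettoVCB-group1.log

# second instance is pointed at an alternate working directory via -w
./ghettoVCB.sh -f vms_to_backup_group2 -w /tmp/ghettoVCB.work2 -l /vmfs/volumes/local-storage/ghettoVCB-group2.log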

 

Ensure that you do not edit past this section:

########################## DO NOT MODIFY PAST THIS LINE ##########################



 


 

Usage:


# ./ghettoVCB.sh
###############################################################################
#
# ghettoVCB for ESX/ESXi 3.5, 4.x+ and 5.x
# Author: William Lam
# http://www.virtuallyghetto.com/
# Documentation: http://communities.vmware.com/docs/DOC-8760
# Created: 11/17/2008
# Last modified: 2012_12_17 Version 0
#
###############################################################################

Usage: ghettoVCB.sh [options]

OPTIONS:
   -a     Backup all VMs on host
   -f     List of VMs to backup
   -m     Name of VM to backup (overrides -f)
   -c     VM configuration directory for VM backups
   -g     Path to global ghettoVCB configuration file
   -l     File to output logging
   -w     ghettoVCB work directory (default: )
   -d     Debug level [info|debug|dryrun] (default: info)

(e.g.)

Backup VMs stored in a list
    ./ghettoVCB.sh -f vms_to_backup

Backup a single VM
    ./ghettoVCB.sh -m vm_to_backup

Backup all VMs residing on this host
    ./ghettoVCB.sh -a

Backup all VMs residing on this host except for the VMs in the exclusion list
    ./ghettoVCB.sh -a -e vm_exclusion_list

Backup VMs based on specific configuration located in directory
    ./ghettoVCB.sh -f vms_to_backup -c vm_backup_configs

Backup VMs using global ghettoVCB configuration file
    ./ghettoVCB.sh -f vms_to_backup -g /global/ghettoVCB.conf

Output will log to /tmp/ghettoVCB.log (consider logging to local or remote datastore to persist logs)
    ./ghettoVCB.sh -f vms_to_backup -l /vmfs/volume/local-storage/ghettoVCB.log

Dry run (no backup will take place)
    ./ghettoVCB.sh -f vms_to_backup -d dryrun



The input to this script is a file that contains the display names of the virtual machine(s), separated by newlines. When creating this file on a non-Linux/UNIX system, you may introduce ^M characters which can cause the script to misbehave. To ensure this does not occur, please create the file on the ESX/ESXi host.
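
If the list was created on a Windows system anyway, one way to strip any stray carriage return (^M) characters is with the busybox tr utility (the file name below is just an example):

# remove DOS carriage returns from the VM list, then replace the original file
tr -d '\r' < vms_to_backup > vms_to_backup.clean
mv vms_to_backup.clean vms_to_backup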

Here is a sample of what the file would look like:

[root@himalaya ~]# cat vms_to_backup
vCOPS
vMA
vCloudConnector



 


 

Sample Execution:

  • Dry run Mode
  • Debug Mode

  • Backup VMs stored in a list
  • Backup Single VM using command-line
  • Backup All VMs residing on specific ESX(i) host
  • Backup All VMs residing on specific ESX(i) host and exclude the VMs in the exclusion list
  • Backup VMs based on individual VM backup policies

 

Dry run Mode (no backup will take place)

Note: This execution mode provides a quick summary of whether a given set of VM(s)/VMDK(s) will be backed up. It provides additional information such as VMs that may have snapshots, VMDK(s) that are configured as independent disks, or other issues that may cause a VM or VMDK to not be backed up.

 

  • Log verbosity: dryrun
  • Log output: stdout & /tmp (default) 
    • Logs by default will be stored in /tmp; these log files may not persist through reboots, especially when dealing with ESXi. You should log to either a local or remote datastore to ensure that logs are kept upon a reboot.
[root@himalaya ghettoVCB]# ./ghettoVCB.sh -f vms_to_backup -d dryrun
Logging output to "/tmp/ghettoVCB-2011-03-13_15-19-57.log" ...
2011-03-13 15:19:57 -- info: ============================== ghettoVCB LOG START ==============================

2011-03-13 15:19:57 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:19:57 -- info: CONFIG - GHETTOVCB_PID = 30157
2011-03-13 15:19:57 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:19:57 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:19:57 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-19-57
2011-03-13 15:19:57 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:19:57 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:19:57 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:19:57 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2011-03-13 15:19:57 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:19:57 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:19:57 -- info: CONFIG - LOG_LEVEL = dryrun
2011-03-13 15:19:57 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2011-03-13_15-19-57.log
2011-03-13 15:19:57 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:19:57 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:19:57 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2011-03-13 15:19:57 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:19:57 -- info:
2011-03-13 15:19:57 -- dryrun: ###############################################
2011-03-13 15:19:57 -- dryrun: Virtual Machine: scofield
2011-03-13 15:19:57 -- dryrun: VM_ID: 704
2011-03-13 15:19:57 -- dryrun: VMX_PATH: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield.vmx
2011-03-13 15:19:57 -- dryrun: VMX_DIR: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield
2011-03-13 15:19:57 -- dryrun: VMX_CONF: scofield/scofield.vmx
2011-03-13 15:19:57 -- dryrun: VMFS_VOLUME: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:19:57 -- dryrun: VMDK(s):
2011-03-13 15:19:58 -- dryrun:  scofield_3.vmdk 3 GB
2011-03-13 15:19:58 -- dryrun:  scofield_2.vmdk 2 GB
2011-03-13 15:19:58 -- dryrun:  scofield_1.vmdk 1 GB
2011-03-13 15:19:58 -- dryrun:  scofield.vmdk   5 GB
2011-03-13 15:19:58 -- dryrun: INDEPENDENT VMDK(s):
2011-03-13 15:19:58 -- dryrun: TOTAL_VM_SIZE_TO_BACKUP: 11 GB
2011-03-13 15:19:58 -- dryrun: ###############################################

2011-03-13 15:19:58 -- dryrun: ###############################################
2011-03-13 15:19:58 -- dryrun: Virtual Machine: vMA
2011-03-13 15:19:58 -- dryrun: VM_ID: 1440
2011-03-13 15:19:58 -- dryrun: VMX_PATH: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vMA/vMA.vmx
2011-03-13 15:19:58 -- dryrun: VMX_DIR: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vMA
2011-03-13 15:19:58 -- dryrun: VMX_CONF: vMA/vMA.vmx
2011-03-13 15:19:58 -- dryrun: VMFS_VOLUME: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:19:58 -- dryrun: VMDK(s):
2011-03-13 15:19:58 -- dryrun:  vMA-000002.vmdk 5 GB
2011-03-13 15:19:58 -- dryrun: INDEPENDENT VMDK(s):
2011-03-13 15:19:58 -- dryrun: TOTAL_VM_SIZE_TO_BACKUP: 5 GB
2011-03-13 15:19:58 -- dryrun: Snapshots found for this VM, please commit all snapshots before continuing!
2011-03-13 15:19:58 -- dryrun: THIS VIRTUAL MACHINE WILL NOT BE BACKED UP DUE TO EXISTING SNAPSHOTS!
2011-03-13 15:19:58 -- dryrun: ###############################################

2011-03-13 15:19:58 -- dryrun: ###############################################
2011-03-13 15:19:58 -- dryrun: Virtual Machine: vCloudConnector
2011-03-13 15:19:58 -- dryrun: VM_ID: 2064
2011-03-13 15:19:58 -- dryrun: VMX_PATH: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector/vCloudConnector.vmx
2011-03-13 15:19:58 -- dryrun: VMX_DIR: /vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector
2011-03-13 15:19:58 -- dryrun: VMX_CONF: vCloudConnector/vCloudConnector.vmx
2011-03-13 15:19:58 -- dryrun: VMFS_VOLUME: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:19:58 -- dryrun: VMDK(s):
2011-03-13 15:19:59 -- dryrun:  vCloudConnector.vmdk    3 GB
2011-03-13 15:19:59 -- dryrun: INDEPENDENT VMDK(s):
2011-03-13 15:19:59 -- dryrun:  vCloudConnector_1.vmdk  40 GB
2011-03-13 15:19:59 -- dryrun: TOTAL_VM_SIZE_TO_BACKUP: 3 GB
2011-03-13 15:19:59 -- dryrun: Snapshots can not be taken for indepdenent disks!
2011-03-13 15:19:59 -- dryrun: THIS VIRTUAL MACHINE WILL NOT HAVE ALL ITS VMDKS BACKED UP!
2011-03-13 15:19:59 -- dryrun: ###############################################

2011-03-13 15:19:59 -- info: ###### Final status: OK, only a dryrun. ######

2011-03-13 15:19:59 -- info: ============================== ghettoVCB LOG END ================================

In the example above, we have 3 VMs to be backed up:

  • scofield has 4 VMDK(s) that total up to 11GB and does not contain any snapshots/independent disks and this VM should backup without any issues
  • vMA has 1 VMDK but it also contains a snapshot and clearly this VM will not be backed up until the snapshot has been committed
  • vCloudConnector has 2 VMDK(s), one which is 3GB and another which is 40GB and configured as an independent disk. Since snapshots do not affect independent disks, only the 3GB VMDK will be backed up for this VM, as denoted by "TOTAL_VM_SIZE_TO_BACKUP"

Debug backup mode

Note: This execution mode provides more in-depth information about the environment/backup process, including additional storage debugging information about both the source/destination datastores pre and post backup. This can be very useful in troubleshooting backups.

 

  • Log verbosity: debug
  • Log output: stdout & /tmp (default) 
    • Logs by default will be stored in /tmp; these log files may not persist through reboots, especially when dealing with ESXi. You should log to either a local or remote datastore to ensure that logs are kept upon a reboot.
[root@himalaya ghettoVCB]# ./ghettoVCB.sh -f vms_to_backup -d debug
Logging output to "/tmp/ghettoVCB-2011-03-13_15-27-59.log" ...
2011-03-13 15:27:59 -- info: ============================== ghettoVCB LOG START ==============================

2011-03-13 15:27:59 -- debug: Succesfully acquired lock directory - /tmp/ghettoVCB.lock

2011-03-13 15:27:59 -- debug: HOST VERSION: VMware ESX 4.1.0 build-260247
2011-03-13 15:27:59 -- debug: HOST LEVEL: VMware ESX 4.1.0 GA
2011-03-13 15:27:59 -- debug: HOSTNAME: himalaya.primp-industries.com

2011-03-13 15:27:59 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:27:59 -- info: CONFIG - GHETTOVCB_PID = 31074
2011-03-13 15:27:59 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:27:59 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:27:59 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-27-59
2011-03-13 15:27:59 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:27:59 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:27:59 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:27:59 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2011-03-13 15:27:59 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:27:59 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:27:59 -- info: CONFIG - LOG_LEVEL = debug
2011-03-13 15:27:59 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2011-03-13_15-27-59.log
2011-03-13 15:27:59 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:27:59 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:27:59 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2011-03-13 15:27:59 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:27:59 -- info:
2011-03-13 15:28:01 -- debug: Storage Information before backup:
2011-03-13 15:28:01 -- debug: SRC_DATASTORE: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:28:01 -- debug: SRC_DATASTORE_CAPACITY: 1830.5 GB
2011-03-13 15:28:01 -- debug: SRC_DATASTORE_FREE: 539.4 GB
2011-03-13 15:28:01 -- debug: SRC_DATASTORE_BLOCKSIZE: 4
2011-03-13 15:28:01 -- debug: SRC_DATASTORE_MAX_FILE_SIZE: 1024 GB
2011-03-13 15:28:01 -- debug:
2011-03-13 15:28:01 -- debug: DST_DATASTORE: dlgCore-NFS-bigboi.VM-Backups
2011-03-13 15:28:01 -- debug: DST_DATASTORE_CAPACITY: 1348.4 GB
2011-03-13 15:28:01 -- debug: DST_DATASTORE_FREE: 296.8 GB
2011-03-13 15:28:01 -- debug: DST_DATASTORE_BLOCKSIZE: NA
2011-03-13 15:28:01 -- debug: DST_DATASTORE_MAX_FILE_SIZE: NA
2011-03-13 15:28:01 -- debug:
2011-03-13 15:28:02 -- info: Initiate backup for scofield
2011-03-13 15:28:02 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_3.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/scofield/scofield-2011-03-13_15-27-59/scofield_3.vmdk"
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_3.vmdk'...
Clone: 37% done.
2011-03-13 15:28:04 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_2.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/scofield/scofield-2011-03-13_15-27-59/scofield_2.vmdk"
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_2.vmdk'...
Clone: 85% done.
2011-03-13 15:28:05 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_1.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/scofield/scofield-2011-03-13_15-27-59/scofield_1.vmdk"

2011-03-13 15:28:06 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/scofield/scofield-2011-03-13_15-27-59/scofield.vmdk"
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield.vmdk'...
Clone: 78% done.
2011-03-13 15:29:52 -- info: Backup Duration: 1.83 Minutes
2011-03-13 15:29:52 -- info: Successfully completed backup for scofield!

2011-03-13 15:29:54 -- debug: Storage Information after backup:
2011-03-13 15:29:54 -- debug: SRC_DATASTORE: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:29:54 -- debug: SRC_DATASTORE_CAPACITY: 1830.5 GB
2011-03-13 15:29:54 -- debug: SRC_DATASTORE_FREE: 539.4 GB
2011-03-13 15:29:54 -- debug: SRC_DATASTORE_BLOCKSIZE: 4
2011-03-13 15:29:54 -- debug: SRC_DATASTORE_MAX_FILE_SIZE: 1024 GB
2011-03-13 15:29:54 -- debug:
2011-03-13 15:29:54 -- debug: DST_DATASTORE: dlgCore-NFS-bigboi.VM-Backups
2011-03-13 15:29:54 -- debug: DST_DATASTORE_CAPACITY: 1348.4 GB
2011-03-13 15:29:54 -- debug: DST_DATASTORE_FREE: 296.8 GB
2011-03-13 15:29:54 -- debug: DST_DATASTORE_BLOCKSIZE: NA
2011-03-13 15:29:54 -- debug: DST_DATASTORE_MAX_FILE_SIZE: NA
2011-03-13 15:29:54 -- debug:
2011-03-13 15:29:55 -- debug: Storage Information before backup:
2011-03-13 15:29:55 -- debug: SRC_DATASTORE: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:29:55 -- debug: SRC_DATASTORE_CAPACITY: 1830.5 GB
2011-03-13 15:29:55 -- debug: SRC_DATASTORE_FREE: 539.4 GB
2011-03-13 15:29:55 -- debug: SRC_DATASTORE_BLOCKSIZE: 4
2011-03-13 15:29:55 -- debug: SRC_DATASTORE_MAX_FILE_SIZE: 1024 GB
2011-03-13 15:29:55 -- debug:
2011-03-13 15:29:55 -- debug: DST_DATASTORE: dlgCore-NFS-bigboi.VM-Backups
2011-03-13 15:29:55 -- debug: DST_DATASTORE_CAPACITY: 1348.4 GB
2011-03-13 15:29:55 -- debug: DST_DATASTORE_FREE: 296.8 GB
2011-03-13 15:29:55 -- debug: DST_DATASTORE_BLOCKSIZE: NA
2011-03-13 15:29:55 -- debug: DST_DATASTORE_MAX_FILE_SIZE: NA
2011-03-13 15:29:55 -- debug:
2011-03-13 15:29:55 -- info: Snapshot found for vMA, backup will not take place

2011-03-13 15:29:57 -- debug: Storage Information before backup:
2011-03-13 15:29:57 -- debug: SRC_DATASTORE: himalaya-local-SATA.RE4-GP:Storage
2011-03-13 15:29:57 -- debug: SRC_DATASTORE_CAPACITY: 1830.5 GB
2011-03-13 15:29:57 -- debug: SRC_DATASTORE_FREE: 539.4 GB
2011-03-13 15:29:57 -- debug: SRC_DATASTORE_BLOCKSIZE: 4
2011-03-13 15:29:57 -- debug: SRC_DATASTORE_MAX_FILE_SIZE: 1024 GB
2011-03-13 15:29:57 -- debug:
2011-03-13 15:29:57 -- debug: DST_DATASTORE: dlgCore-NFS-bigboi.VM-Backups
2011-03-13 15:29:57 -- debug: DST_DATASTORE_CAPACITY: 1348.4 GB
2011-03-13 15:29:57 -- debug: DST_DATASTORE_FREE: 296.8 GB
2011-03-13 15:29:57 -- debug: DST_DATASTORE_BLOCKSIZE: NA
2011-03-13 15:29:57 -- debug: DST_DATASTORE_MAX_FILE_SIZE: NA
2011-03-13 15:29:57 -- debug:
2011-03-13 15:29:58 -- info: Initiate backup for vCloudConnector
2011-03-13 15:29:58 -- debug: /usr/sbin/vmkfstools -i "/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector/vCloudConnector.vmdk" -a "buslogic" -d "thin" "/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/vCloudConnector/vCloudConnector-2011-03-13_15-27-59/vCloudConnector.vmdk"
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector/vCloudConnector.vmdk'...
Clone: 97% done.
2011-03-13 15:30:45 -- info: Backup Duration: 47 Seconds
2011-03-13 15:30:45 -- info: WARN: vCloudConnector has some Independent VMDKs that can not be backed up!

2011-03-13 15:30:45 -- info: ###### Final status: ERROR: Only some of the VMs backed up, and some disk(s) failed! ######

2011-03-13 15:30:45 -- debug: Succesfully removed lock directory - /tmp/ghettoVCB.lock

2011-03-13 15:30:45 -- info: ============================== ghettoVCB LOG END ================================

Backup VMs stored in a list

[root@himalaya ~]# ./ghettoVCB.sh -f vms_to_backup

Backup Single VM using command-line

# ./ghettoVCB.sh -m MyVM

Backup All VMs residing on specific ESX(i) host

/ghettoVCB # ./ghettoVCB.sh -a

Backup All VMs residing on specific ESX(i) host and exclude the VMs in the exclusion list

/ghettoVCB # ./ghettoVCB.sh -a -e vm_exclusion_list

 

Backup VMs based on individual VM backup policies and log output to /tmp/ghettoVCB.log

  • Log verbosity: info (default)
  • Log output: /tmp/ghettoVCB.log 
    • Logs by default will be stored in /tmp; these log files may not persist through reboots, especially when dealing with ESXi. You should log to either a local or remote datastore to ensure that logs are kept upon a reboot.


1. Create folder to hold individual VM backup policies (can be named anything):

[root@himalaya ~]# mkdir backup_config



2. Create individual VM backup policies for each VM, ensuring each file is named exactly as the display name of the VM being backed up (use the provided template to create duplicates):

[root@himalaya backup_config]# cp ghettoVCB-vm_backup_configuration_template scofield
[root@himalaya backup_config]# cp ghettoVCB-vm_backup_configuration_template vCloudConnector



Listing of VM backup policy within backup configuration directory

[root@himalaya backup_config]# ls
ghettoVCB-vm_backup_configuration_template 
scofield  vCloudConnector 



Backup policy for "scofield" (backup only 2 specific VMDKs)

[root@himalaya backup_config]# cat scofield
VM_BACKUP_VOLUME=/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=3
POWER_VM_DOWN_BEFORE_BACKUP=0
ENABLE_HARD_POWER_OFF=0
ITER_TO_WAIT_SHUTDOWN=4
POWER_DOWN_TIMEOUT=5
SNAPSHOT_TIMEOUT=15
ENABLE_COMPRESSION=0
VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0
VMDK_FILES_TO_BACKUP="scofield_2.vmdk,scofield_1.vmdk"



Backup policy for VM "vCloudConnector" (backup all VMDKs found)

[root@himalaya backup_config]# cat vCloudConnector
VM_BACKUP_VOLUME=/vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=3
POWER_VM_DOWN_BEFORE_BACKUP=0
ENABLE_HARD_POWER_OFF=0
ITER_TO_WAIT_SHUTDOWN=4
POWER_DOWN_TIMEOUT=5
SNAPSHOT_TIMEOUT=15
ENABLE_COMPRESSION=0
VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0
VMDK_FILES_TO_BACKUP="vCloudConnector.vmdk"



Note: When specifying the -c option (individual VM backup policy mode), if a VM is listed in the backup list but DOES NOT have a corresponding backup policy, the VM will be backed up using the default configuration found within the ghettoVCB.sh script.

Execution of backup

[root@himalaya ~]# ./ghettoVCB.sh -f vms_to_backup -c backup_config -l /tmp/ghettoVCB.log

2011-03-13 15:40:50 -- info: ============================== ghettoVCB LOG START ==============================

2011-03-13 15:40:51 -- info: CONFIG - USING CONFIGURATION FILE = backup_config//scofield
2011-03-13 15:40:51 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:40:51 -- info: CONFIG - GHETTOVCB_PID = 2967
2011-03-13 15:40:51 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:40:51 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:40:51 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-40-50
2011-03-13 15:40:51 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:40:51 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:40:51 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:40:51 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
2011-03-13 15:40:51 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:40:51 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:40:51 -- info: CONFIG - LOG_LEVEL = info
2011-03-13 15:40:51 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
2011-03-13 15:40:51 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:40:51 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:40:51 -- info: CONFIG - VMDK_FILES_TO_BACKUP = scofield_2.vmdk,scofield_1.vmdk
2011-03-13 15:40:51 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:40:51 -- info:
2011-03-13 15:40:53 -- info: Initiate backup for scofield
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_2.vmdk'...
Clone: 100% done.

Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/scofield/scofield_1.vmdk'...
Clone: 100% done.

2011-03-13 15:40:55 -- info: Backup Duration: 2 Seconds
2011-03-13 15:40:55 -- info: Successfully completed backup for scofield!

2011-03-13 15:40:57 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:40:57 -- info: CONFIG - GHETTOVCB_PID = 2967
2011-03-13 15:40:57 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:40:57 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:40:57 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-40-50
2011-03-13 15:40:57 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:40:57 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:40:57 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:40:57 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
2011-03-13 15:40:57 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:40:57 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:40:57 -- info: CONFIG - LOG_LEVEL = info
2011-03-13 15:40:57 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
2011-03-13 15:40:57 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:40:57 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:40:57 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
2011-03-13 15:40:57 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:40:57 -- info:
2011-03-13 15:40:59 -- info: Snapshot found for vMA, backup will not take place

2011-03-13 15:40:59 -- info: CONFIG - USING CONFIGURATION FILE = backup_config//vCloudConnector
2011-03-13 15:40:59 -- info: CONFIG - VERSION = 2011_03_13_1
2011-03-13 15:40:59 -- info: CONFIG - GHETTOVCB_PID = 2967
2011-03-13 15:40:59 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS
2011-03-13 15:40:59 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
2011-03-13 15:40:59 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2011-03-13_15-40-50
2011-03-13 15:40:59 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
2011-03-13 15:40:59 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
2011-03-13 15:40:59 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
2011-03-13 15:40:59 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
2011-03-13 15:40:59 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
2011-03-13 15:40:59 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
2011-03-13 15:40:59 -- info: CONFIG - LOG_LEVEL = info
2011-03-13 15:40:59 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
2011-03-13 15:40:59 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
2011-03-13 15:40:59 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
2011-03-13 15:40:59 -- info: CONFIG - VMDK_FILES_TO_BACKUP = vCloudConnector.vmdk
2011-03-13 15:40:59 -- info: CONFIG - EMAIL_LOG = 0
2011-03-13 15:40:59 -- info:
2011-03-13 15:41:01 -- info: Initiate backup for vCloudConnector
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/himalaya-local-SATA.RE4-GP:Storage/vCloudConnector/vCloudConnector.vmdk'...
Clone: 100% done.

2011-03-13 15:41:51 -- info: Backup Duration: 50 Seconds
2011-03-13 15:41:51 -- info: WARN: vCloudConnector has some Independent VMDKs that can not be backed up!

2011-03-13 15:41:51 -- info: ###### Final status: ERROR: Only some of the VMs backed up, and some disk(s) failed! ######

2011-03-13 15:41:51 -- info: ============================== ghettoVCB LOG END ================================

 

 


 

Enable compression for backups (EXPERIMENTAL SUPPORT)


Please take a look at FAQ #25 for more details before continuing

To make use of this feature, change the variable ENABLE_COMPRESSION from 0 to 1. Please note, do not mix uncompressed backups with compressed backups. Ensure that directories selected for backups do not contain any backups created with previous versions of ghettoVCB before enabling and implementing the compressed backups feature.

 


 

Email Backup Logs (EXPERIMENTAL SUPPORT)

The nc (netcat) utility must be present for email support to function. This utility is included by default with the release of vSphere 4.1 or greater; previous releases of VI 3.5 and/or vSphere 4.0 do not contain this utility. The reason this is listed as experimental is that it may not be compatible with all email servers, as the script utilizes the nc (netcat) utility to communicate with an email server. This feature is provided as-is with no guarantees. If you enable this feature, a separate log will be generated alongside any normal logging, which will be used to email the recipient. If for whatever reason the email fails to send, an entry will appear per the normal logging mechanism.

 

Users should also note that, due to the limited functionality of netcat, the script uses SMTP pipelining, which is not the most ideal method of communicating with an SMTP server. Email from ghettoVCB may not work if your email server does not support this feature.

 

You can define an email recipient in the following two ways:

 

EMAIL_TO=william@virtuallyghetto.com

OR

EMAIL_TO=william@virtuallyghetto.com,tuan@virtuallyghetto.com

 

If you are running ESXi 5.1, you will need to create a custom firewall rule to allow your email traffic to go out, which I will assume is on the default port 25. Here are the steps for creating a custom email rule.

 

Step 1 - Create a file called /etc/vmware/firewall/email.xml which contains the following:

<ConfigRoot>
  <service>
    <id>email</id>
    <rule id="0000">
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>25</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>

 

Step 2 - Reload the ESXi firewall by running the following ESXCLI command:

~ # esxcli network firewall refresh

Step 3 - Confirm that your email rule has been loaded by running the following ESXCLI command:

~ # esxcli network firewall ruleset list | grep email
email                  true

Step 4 - Connect to your email server using nc (netcat) by running the following command and specifying the IP Address/Port of your email server:

~ # nc 172.30.0.107 25
220 mail.primp-industries.com ESMTP Postfix

You should receive a response from your email server and you can enter Ctrl+C to exit. This custom ESXi firewall rule will not persist after a reboot, so you should create a custom VIB to ensure it persists after a system reboot. Please take a look at this article for the details.

 


 

Rsync Support  (EXPERIMENTAL SUPPORT)


To make use of this feature, change the variable RSYNC_LINK from 0 to 1. Please note, this is an experimental feature requested by users that rely on rsync to replicate changes from one datastore volume to another datastore volume. The premise of this feature is to have a standardized folder that rsync can monitor for changes to replicate to another backup datastore. When this feature is enabled, a symbolic link will be generated in the format "<VMNAME>-symlink" which will reference the latest successful VM backup. You can then rely on this symbolic link to watch for changes and replicate them to your backup datastore.

Here is an example of what this would look like:

[root@himalaya ghettoVCB]# ls -la /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/vcma/
total 0
drwxr-xr-x 1 nobody nobody 110 Sep 27 08:08 .
drwxr-xr-x 1 nobody nobody  17 Sep 16 14:01 ..
lrwxrwxrwx 1 nobody nobody  89 Sep 27 08:08 vcma-symlink -> /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/WILLIAM_BACKUPS/vcma/vcma-2010-09-27_08-07-37
drwxr-xr-x 1 nobody nobody  58 Sep 27 08:04 vcma-2010-09-27_08-04-26
drwxr-xr-x 1 nobody nobody  58 Sep 27 08:06 vcma-2010-09-27_08-05-55
drwxr-xr-x 1 nobody nobody  58 Sep 27 08:08 vcma-2010-09-27_08-07-37
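
As a rough sketch only (the mount point and destination path below are assumptions), a remote *nix host that has the backup NFS export mounted could replicate just the latest successful backup by following the symbolic link:

# -L dereferences the vcma-symlink so only the most recent backup directory is copied
rsync -avL /mnt/esxi-backups/WILLIAM_BACKUPS/vcma/vcma-symlink/ /replica/vcma-latest/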



FYI - This feature has not been tested, please provide feedback if this does not work as expected.


 

Restore backups (ghettoVCB-restore.sh):


To recover a VM that has been processed by ghettoVCB, please take a look at this document: Ghetto Tech Preview - ghettoVCB-restore.sh - Restoring VM's backed up from ghettoVCB to ESX(i) 3.5, ...

 


Stopping ghettoVCB Process:


There may be a situation where you need to stop the ghettoVCB process, and entering Ctrl+C will only kill off the main ghettoVCB process; there may still be other spawned processes that you need to identify and stop. Below are two scenarios you may encounter and the procedure to completely stop all processes related to ghettoVCB.

 

Interactively running ghettoVCB:

 

Step 1 - Press Ctrl+C which will kill off the main ghettoVCB instance

 

Step 2 - Search for any existing ghettoVCB process by running the following:

 

# ps -c | grep ghettoVCB | grep -v grep
3360136 3360136 tail                 tail -f /tmp/ghettoVCB.work/ghettovcb.Cs1M1x

 

Step 3 - Here we can see there is a tail command that was spawned by the script. We need to stop this process by using the kill command, which accepts the PID (Process ID) identified by the first value on the far left hand side of the output. In this example, it is 3360136.

# kill -9 3360136

 

Note: Make sure you identify the correct PID, else you could accidentally impact a running VM or, worse, your ESXi host.

 

Step 4 - Depending on where you stopped the ghettoVCB process, you may need to consolidate or remove any existing snapshots that may exist on the VM that was being backed up. You can easily do so by using the vSphere Client.
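
If the vSphere Client is not available, the leftover snapshot can also be consolidated from the ESXi Shell using vim-cmd; this is an alternative sketch, with <VMNAME> and <VMID> used as placeholders:

# find the VM's inventory ID
vim-cmd vmsvc/getallvms | grep <VMNAME>

# list any remaining snapshots and remove (consolidate) all of them
vim-cmd vmsvc/snapshot.get <VMID>
vim-cmd vmsvc/snapshot.removeall <VMID>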

 

Non-Interactively running ghettoVCB:

 

Step 1 - Search for the ghettoVCB process (you can also validate the PID from the logs)

 

~ # ps -c | grep ghettoVCB | grep -v grep
3360393 3360393 busybox              ash ./ghettoVCB.sh -f list -d debug
3360790 3360790 tail                 tail -f /tmp/ghettoVCB.work/ghettovcb.deGeB7

 

Step 2 - Stop both the main ghettoVCB instance & tail command by using the kill command and specifying their respective PID IDs:

 

kill -9 3360393
kill -9 3360790

 

Step 3 - If a VM was in the process of being backed up, there is an additional process for the actual vmkfstools copy. You will need to identify that process and kill it as well. We will again use the ps -c command and search for any vmkfstools that are running:

# ps -c | grep vmkfstools | grep -v grep
3360796 3360796 vmkfstools           /sbin/vmkfstools -i /vmfs/volumes/himalaya-temporary/VC-Windows/VC-Windows.vmdk -a lsilogic -d thin /vmfs/volumes/test-dont-use-this-volume/backups/VC-Windows/VC-Windows-2013-01-26_16-45-35/VC-Windows.vmdk

 

 

Step 4 - In case someone is manually running a vmkfstools command, make sure you take a look at the command itself and confirm that it maps back to the VM that was being backed up before killing the process. Once you have identified the proper PID, go ahead and use the kill command:

# kill -9 3360796

 

Step 5 - Depending on where you stopped the ghettoVCB process, you may need to consolidate or remove any existing snapshots that may exist on the VM that was being backed up. You can easily do so by using the vSphere Client.

 


 

Cronjob FAQ:


Please take a moment to read over what is a cronjob and how to set one up, before continuing

The task of configuring cronjobs on classic ESX servers (with Service Console) is no different than traditional cronjobs on *nix operating systems (this procedure is outlined in the link above). With ESXi, on the other hand, additional factors need to be taken into account when setting up cronjobs in the limited Busybox shell console, because changes made do not persist through a system reboot. The following section outlines the steps to ensure that cronjob configurations are saved and present upon a reboot.

 

Important Note: Always redirect the ghettoVCB output to /dev/null and/or to a log when automating via cron. This is very important, as one user has identified a limited amount of buffer capacity which, once filled, may cause ghettoVCB to stop in the middle of a backup. This primarily affects users on ESXi, but it is good practice to always redirect the output. Also ensure you are specifying the FULL PATH when referencing the ghettoVCB script, input or log files.

 

e.g.

0 0 * * 1-5 /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB.sh -f /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/backuplist > /dev/null

or

0 0 * * 1-5 /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB.sh -f /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/backuplist > /tmp/ghettoVCB.log

 

Task: Configure ghettoVCB.sh to execute a backup five days a week (M-F) at 12AM (midnight) and send output to a unique log file

Configure on ESX:

1. As root, you'll install your cronjob by issuing:

[root@himalaya ~]# crontab -e



2. Append the following entry:

0 0 * * 1-5 /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB.sh -f /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/backuplist > /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB-backup-$(date +\%s).log



3. Save and exit

[root@himalaya dlgCore-NFS-bigboi.VM-Backups]# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab



4. List out and verify the cronjob that was just created:

[root@himalaya dlgCore-NFS-bigboi.VM-Backups]# crontab -l
0 0 * * 1-5 /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB.sh -f /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/backuplist > /vmfs/volumes/dlgCore-NFS-bigboi.VM-Backups/ghettoVCB-backup-$(date +\%s).log



You're ready to go!

Configure on ESXi:

1. Set up the cronjob by appending the following line to /var/spool/cron/crontabs/root:

0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-$(date +\%s).log

 

If you are unable to edit/modify /var/spool/cron/crontabs/root directly, please make a copy and then edit the copy with the changes:

cp /var/spool/cron/crontabs/root /var/spool/cron/crontabs/root.backup

Once your changes have been made, "mv" the edited copy back over the original file. This may be necessary on ESXi 4.x or 5.x hosts:

mv /var/spool/cron/crontabs/root.backup /var/spool/cron/crontabs/root

You can now verify that the crontab entry has been updated by using the "cat" utility.
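
For example, to print the file and confirm the ghettoVCB entry is present:

cat /var/spool/cron/crontabs/root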


2. Kill the current crond (cron daemon) and then restart crond for the changes to take effect:

On ESXi < 3.5u3

kill $(ps | grep crond | cut -f 1 -d ' ')



On ESXi 3.5u3+

~ # kill $(pidof crond)
~ # crond



On ESXi 4.x/5.0

~ # kill $(cat /var/run/crond.pid)
~ # busybox crond

 

On ESXi 5.1 to 6.x

~ # kill $(cat /var/run/crond.pid)
~ # crond

 

On ESXi 7.x

~ # kill $(cat /var/run/crond.pid)
~ # /usr/lib/vmware/busybox/bin/busybox crond


3. Now that the cronjob is ready to go, you need to ensure that it will persist through a reboot. You'll need to add the following lines to /etc/rc.local (ensure that the cron entry matches what was defined above). In ESXi 5.1 and later, you will need to edit /etc/rc.local.d/local.sh instead of /etc/rc.local, as the latter is no longer valid.

On ESXi 3.5

/bin/kill $(pidof crond)
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
crond



On ESXi 4.x/5.0

/bin/kill $(cat /var/run/crond.pid)
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
/bin/busybox crond

 

On ESXi 5.1 to 6.x

/bin/kill $(cat /var/run/crond.pid)
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
crond

 

On ESXi 7.x

/bin/kill $(cat /var/run/crond.pid) > /dev/null 2>&1
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
/usr/lib/vmware/busybox/bin/busybox crond



Afterwards the file should look like the following:

~ # cat /etc/rc.local
#! /bin/ash
export PATH=/sbin:/bin

log() {
   echo "$1"
   logger init "$1"
}

#execute all service retgistered in /etc/rc.local.d
if [ -d /etc/rc.local.d ]; then
   for filename in `find /etc/rc.local.d/ | sort`
      do
         if [ -f $filename ] && [ -x $filename ]; then
            log "running $filename"
            $filename
         fi
      done
fi

/bin/kill $(cat /var/run/crond.pid)
/bin/echo "0 0 * * 1-5 /vmfs/volumes/simplejack-local-storage/ghettoVCB.sh -f /vmfs/volumes/simplejack-local-storage/backuplist > /vmfs/volumes/simplejack-local-storage/ghettoVCB-backup-\$(date +\\%s).log" >> /var/spool/cron/crontabs/root
/bin/busybox crond



This will ensure that the cronjob is re-created upon a reboot of the system through the startup script.

4. To ensure that this is saved in the ESXi configuration, we need to manually initiate an ESXi backup by running:

~ # /sbin/auto-backup.sh
config implicitly loaded
local.tgz
etc/vmware/vmkiscsid/vmkiscsid.db
etc/dropbear/dropbear_dss_host_key
etc/dropbear/dropbear_rsa_host_key
etc/opt/vmware/vpxa/vpxa.cfg
etc/opt/vmware/vpxa/dasConfig.xml
etc/sysconfig/network
etc/vmware/hostd/authorization.xml
etc/vmware/hostd/hostsvc.xml
etc/vmware/hostd/pools.xml
etc/vmware/hostd/vmAutoStart.xml
etc/vmware/hostd/vmInventory.xml
etc/vmware/hostd/proxy.xml
etc/vmware/ssl/rui.crt
etc/vmware/ssl/rui.key
etc/vmware/vmkiscsid/initiatorname.iscsi
etc/vmware/vmkiscsid/iscsid.conf
etc/vmware/vmware.lic
etc/vmware/config
etc/vmware/dvsdata.db
etc/vmware/esx.conf
etc/vmware/license.cfg
etc/vmware/locker.conf
etc/vmware/snmp.xml
etc/group
etc/hosts
etc/inetd.conf
etc/rc.local
etc/chkconfig.db
etc/ntp.conf
etc/passwd
etc/random-seed
etc/resolv.conf
etc/shadow
etc/sfcb/repository/root/interop/cim_indicationfilter.idx
etc/sfcb/repository/root/interop/cim_indicationhandlercimxml.idx
etc/sfcb/repository/root/interop/cim_listenerdestinationcimxml.idx
etc/sfcb/repository/root/interop/cim_indicationsubscription.idx
Binary files /etc/vmware/dvsdata.db and /tmp/auto-backup.31345.dir/etc/vmware/dvsdata.db differ
config implicitly loaded
Saving current state in /bootbank
Clock updated.
Time: 20:40:36   Date: 08/14/2009   UTC



Now you're really done!

If you're still having trouble getting the cronjob to work, ensure that you've specified the correct parameters and there aren't any typos in any part of the syntax.

Ensure crond (cron daemon) is running:

ESX 3.x/4.0:

[root@himalaya dlgCore-NFS-bigboi.VM-Backups]# ps -ef | grep crond | grep -v grep
root      2625     1  0 Aug13 ?        00:00:00 crond



ESXi 3.x/4.x/5.x:

~ # ps | grep crond | grep -v grep
5196 5196 busybox              crond

 

Ensure that the date/time on your ESX(i) host is set up correctly:

ESX(i):

[root@himalaya dlgCore-NFS-bigboi.VM-Backups]# date
Fri Aug 14 23:44:47 PDT 2009

 

Note: Careful attention must be paid if more than one backup is performed per day. Backup windows should be staggered to avoid contention or saturation of resources during these periods.

 


 

FAQ:


0Q: I'm getting error X when using the script, or I'm not getting any errors but the backup didn't even take place. What can I do?
0A: First off, before posting a comment/question, please thoroughly read through the ENTIRE documentation, including the FAQs, to see if your question has already been answered.

1Q: I've read through the entire documentation + FAQs and still have not found my answer to the problem I'm seeing. What can I do?
1A: Please join the ghettoVCB Group to post your question/comment.

 

2Q: I've sent you private message or email but I haven't received a response? What gives?
2A: I do not accept issues/bugs reported via PM or email; I will reply back, directing you to post on the appropriate VMTN forum (that's what it's for). If the data/results you're providing are truly sensitive to your environment I will hear you out, but 99.99% of the time they are not, so please do not message/email me directly. I do monitor all forums that contain my script, including the normal VMTN forums, and will try to get back to your question as soon as I can and as time permits. Please do be patient as you're not the only person using the script (600,000+ views), thank you.

3Q: Can I schedule backups to take place hourly, daily, monthly, yearly?
3A: Yes, do a search online for crontab.
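For reference, a single crontab entry along these lines (the paths and schedule are illustrative only) runs the script every Saturday at 2am and writes the output to a log file:

0 2 * * 6 /vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.sh -f /vmfs/volumes/datastore1/ghettoVCB/vms_to_backup > /vmfs/volumes/datastore1/ghettoVCB/backup.log 2>&1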

4Q: I would like to setup cronjob for ESX(i) 3.5 or 4.0?
4A: Take a look at the Cronjob FAQ section in this document.

5Q: I want to schedule my backup on Windows, how do I do this?
5A: Do a search for plink. Make sure you have paired SSH keys setup between your Windows system and ESX/ESXi host.
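As a rough illustration (the host name, paths and file names are examples only), a Windows scheduled task could invoke plink like this once key-based SSH authentication is in place:

C:\ghettoVCB\plink.exe -batch root@esxi-host "/vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.sh -f /vmfs/volumes/datastore1/ghettoVCB/vms_to_backup > /vmfs/volumes/datastore1/ghettoVCB/backup.log 2>&1"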

6Q: I only have a single ESXi host. I want to take backups and  store them somewhere else. The problem is: I don't have NFS, iSCSI nor  FC SAN. What can I do?
6A: You can use local storage to store your backups assuming that  you have enough space on the destination datastore.  Afterwards, you  can use scp (WinSCP/FastSCP) to transfer the backups from the ESXi host  to your local desktop.
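For example (the host name, backup directory and local path are illustrative), a completed backup directory can be pulled down with standard scp:

scp -r root@esxi-host:/vmfs/volumes/datastore1/backups/MYVM-2010-06-17_09-13-03 /path/to/local/backups/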

7Q: I’m pissed; the backup is taking too long. My datastore is of type X?
7A: YMMV, take a look at your storage configuration and make sure it is optimized. 

8Q: I noticed that the backup rotation is occurring after a  backup. I don't have enough local storage space, can the process be  changed?
8A: This is primarily done to ensure that you have at least one  good backup in case the new backup fails. If you would like to modify  the script, you're more than welcome to do so.

9Q: What is the best storage configuration for datastore type X?
9A: Search the VMTN forums; there are various configurations for the different type of storage/etc. 

10Q: I want to setup an NFS server to run my backups. Which is the best and should it be virtual or physical? 
10A: Please refer to answer 7A. From experience, we’ve seen  physical instances of NFS servers to be faster than their virtual  counterparts. As always, YMMV.

11Q: I have VMs that have snapshots. I want to back these things up but the script doesn’t let me do it. How do I fix that?
11A: VM snapshots are not meant to be kept for long durations.  When backing up a VM that contains a snapshot, you should ensure all snapshots have been committed prior to running a backup. No exceptions  will be made…ever.

12Q: I would like to restore from backup, what is the best method?
12A: The restore process will be unique for each environment and  should be determined by your backup/recovery plans. At a high level you have the option of mounting the backup datastore and registering the VM  in question or copy the VM from the backup datastore to the ESX/ESXi  host. The latter is recommended so that you're not running a VM living  on the backup datastore or inadvertently modifying your backup VM(s). You can also take a look at ghettoVCB-restore which is experimentally supported.

13Q: When I try to run the script I get: "-bash: ./ghettoVCB.sh: Permission denied", what is wrong?
13A: You need to change the permission on the script to be executable, chmod +x ghettoVCB.sh

14Q: Where can I download the latest version of the script?
14A: The latest version is available on github - https://github.com/lamw/ghettoVCB/downloads

15Q: I would like to suggest/recommend feature X, can I get it?  When can I get it? Why isn't it here, what gives? 
15A: The general purpose of this script is to provide a backup  solution around VMware VMs. Any additional features outside of that  process will be taken into consideration depending on the amount of  time, number of requests and actual usefulness as a whole to the  community rather than to an individual.

16Q: I have found this script to be very useful and would like to contribute back, what can I do?
16A: To continue to develop and share new scripts and resources with the community, we need your support. You can donate here Thank You!

17Q: What are the different types of backup use cases that are supported with ghettoVCB?
17A: 1) Live backup of VM with the use of a snapshot and 2)  Offline backup of a VM without a snapshot. These are the only two use  cases supported by the script.

18Q: When I execute the script on ESX(i) I get some funky errors such as ": not found.sh" or "command not found". What is this?
18A: Most likely you have some ^M (carriage return) characters within the script, which may have come from editing the script with a Windows editor, uploading it using the datastore browser, OR using wget. The best option is to either use WinSCP on Windows to upload the script and edit it with the vi editor on the ESX(i) host, OR use Linux/UNIX scp to copy the script onto the host. If you still have the issue, search online for the various methods of removing the Windows carriage returns from the script.
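One common way to strip the carriage returns directly on the host (assuming the file is named ghettoVCB.sh) is:

tr -d '\r' < ghettoVCB.sh > ghettoVCB-fixed.sh
chmod +x ghettoVCB-fixed.sh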

19Q: My backup works fine OR it works for a single backup but I get an error message  "Input/output error" or "-ash: YYYY-MM-DD: not found" during the snapshot removal process. What is this?
19A: The issue has been identified by a few users as a problem with the user's NFS server, which reports an error when deleting large files that take longer than 10 seconds. VMware has released a KB article http://kb.vmware.com/kb/1035332 explaining the details, and starting with vSphere 4.1 Update 2 and vSphere 5.0, a new advanced ESX(i) parameter has been introduced to increase the timeout. This has resolved the problem for several users and may be something to consider if you are running into this issue, specifically with NFS based backups.

20Q: Will this script function with vCenter and DRS enabled?
20A: No. If the ESX(i) hosts are in a DRS-enabled cluster, VMs that are to be backed up could potentially be backed up twice or never get backed up. The script is executed on a per-host basis, and one would need to come up with a way of tracking backups across all hosts, perhaps writing out to an external file, to ensure that all VMs are backed up. The main use case for this script is standalone ESX(i) hosts.

21Q: I'm trying to use WinSCP to manually copy VM files but it's very slow or never completes on huge files, why is that?
21A: WinSCP was not designed for copying VM files out of your  ESX(i) host, take a look at Veeam's FastSCP which is designed for moving  VM files and is a free utility.

22Q: Can I set up an NFS server using Windows Services for UNIX (WSFU) and will it work?
22A: I've only heard from a handful of users that have successfully implemented WSFU and gotten it working, YMMV. VMware also has a KB article describing the setup process here: http://kb.vmware.com/kb/1004490 for those that are interested. Here is a thread on a user's experience comparing Windows vs. Linux NFS that may be helpful.

23Q: How do VMware Snapshots work?
23A: http://kb.vmware.com/kb/1015180

24Q: What files make up a Virtual Machine?
24A: http://virtualisedreality.wordpress.com/2009/09/16/quick-reminder-of-what-files-make-up-a-virtual-ma...

25Q: I'm having some issues restoring a compressed VM backup?
25A: There is a limitation in the size of the VM for compression under ESXi 3.x & 4.x; this limitation is in the unsupported Busybox console and should not affect classic ESX 3.x/4.x. On ESXi 3.x, the largest supported VM for compression is 4GB, and on ESXi 4.x it is 8GB. If you try to compress a larger VM, you may run into issues when trying to extract it upon a restore. PLEASE TEST THE RESTORE PROCESS BEFORE MOVING TO PRODUCTION SYSTEMS!
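As a minimal first sanity check (run on a Linux machine; the path and file name below are examples only), you can at least verify that the compressed archive itself is intact before you ever need it:

gunzip -t /path/to/backups/MYVM/MYVM-2010-06-17_09-13-03.gz && echo "archive OK"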

26Q: I'm backing up my VM as "thin" format but I'm still not noticing any size reduction in the backup? What gives?
26A: Please refer to this blog post which explains what's going on: http://www.yellow-bricks.com/2009/07/31/storage-vmotion-and-moving-to-a-thin-provisioned-disk/

27Q: I've enabled VM_SNAPSHOT_MEMORY and when I restore my VM it's still offline; I thought this would keep its memory state?
27A: VM_SNAPSHOT_MEMORY is only used to ensure that when the snapshot is taken, its memory contents are also captured. This is only relevant to the actual snapshot itself and is not used in any shape/way/form in regards to the backup. All backups, whether your VM is running or offline, will result in an offline VM when you restore. This was originally added for debugging purposes and should generally be left disabled.

28Q: Can I rename the directories and the VMs after a VM has been backed up?
28A: The answer is yes, you can ... but you may run into all sorts of issues which may break the backup process. The script expects a certain layout and specific naming scheme for it to maintain the proper rotation count. If you need to move or rename a VM, please take it out of the directory and place it in another location.

29Q: Can ghettoVCB support CBT (Change Block Tracking)?
29A: No, that is functionality of the vSphere API + VDDK (Virtual Disk Development Kit). You will need to look at paid solutions such as VMware vDR, Veeam Backup & Replication, PHD Virtual Backups, etc. to leverage that functionality.

 

30Q: Does ghettoVCB support rsync backups?
30A: Currently ghettoVCB does not support rsync backups. You would either need to obtain or compile your own static rsync binary and run it on ESXi, but this is an unsupported configuration. You may take a look at this blog post for some details.

 

31Q: How can I contribute back?

31A: You can provide feedback/comments on the ghettoVCB Group. If you have found this script to be useful and would like to contribute back, please click here to donate.

 

32Q: How can I select individual VMDKs to backup from a VM?

32A: Ideally you would use the "-c" option, which requires you to create an individual VM configuration file; this is where you would select specific VMDKs to backup. Note that you do not need to define all properties; anything not defined will inherit the default global properties, whether you're editing the ghettoVCB.sh script or using a ghettoVCB global configuration file. It is not recommended that you edit the ghettoVCB.sh script and modify the VMDK_FILES_TO_BACKUP variable, but if you would like to keep everything in one script, you may add the list of VMDKs to backup there; do know this can get error prone, as the script may be edited frequently, and you lose some flexibility to support multiple environments. A rough example of a per-VM configuration file is shown below.
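As a sketch only (the VM name, path and values below are examples, not defaults), a per-VM configuration file used with -c might look like:

# backup policy for the VM "MYVM" (example values)
VM_BACKUP_VOLUME=/vmfs/volumes/backup-datastore/MYVM
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=3
VMDK_FILES_TO_BACKUP="MYVM_1.vmdk"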

 

33Q: Why is email not working when I'm using ESXi 5.x but it worked in ESXi 4.x?

33A: ESXi 5.x introduced a new firewall, which requires the email port being used to be opened. Please refer to the following articles on creating a custom firewall rule for email (a rough sketch follows the links):

http://www.virtuallyghetto.com/2012/09/creating-custom-vibs-for-esxi-50-51.html

How to Create Custom Firewall Rules in ESXi 5.0

How to Persist Configuration Changes in ESXi 4.x/5.x Part 1

How to Persist Configuration Changes in ESXi 4.x/5.x Part 2
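The linked articles are the authoritative reference; purely as an illustrative sketch (the rule name, file path and port are assumptions for a plain SMTP setup), a custom rule opening outbound TCP/25 could look something like this, followed by a firewall refresh:

cat > /etc/vmware/firewall/email.xml << 'EOF'
<ConfigRoot>
  <service>
    <id>email</id>
    <rule id='0000'>
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>25</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>
EOF
esxcli network firewall refresh

Since files under /etc/vmware/firewall/ are not persisted across reboots by default, the persistence articles linked above cover making such a rule stick.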

 

34Q: How do I stop the ghettoVCB process?

34A: Take a look at the Stopping ghettoVCB Process section of the documentation for more details.

 


 

Our NFS Server Configuration


Many have asked what the best configuration and recommendation is for setting up a cheap NFS server to run backups for VMs. This is a question we've tried to stay away from just because the possibilities and solutions are endless. One can go physical vs. virtual, use a VSA (Virtual Storage Appliance) such as OpenFiler or Lefthand Networks, Windows vs. Linux/UNIX. We've not personally tested and verified all these solutions, and it all comes down to an "it depends" type of answer. From our experience, though, we've had much better success with a physical server than a virtual one.

It is also well known that some users are experiencing backup issues when running specifically against NFS, primarily around the rotation and purging of previous backups. The theory, from what we can tell by talking to various users, is that when the rotation is occurring, the request to delete the file(s) may take a while and does not return within a certain time frame, which causes the script to error out with unexpected messages. Though the backups were successful, it will cause unexpected results with the directory structures on the NFS target. We've not been able to isolate why this is occurring; it may be due to the NFS configuration/exports, the hardware, or the connection not being able to support this process.

We'll continue to help where we can in diagnosing this issue, but we wanted to share our current NFS configuration; perhaps it may help some users who are new or trying to set up their system. (Disclaimer: these configurations are not recommendations nor an endorsement of any of the components being used)

UPDATE: Please also read FAQ #19 for details + resolution

Server Type: Physical
Model: HP DL320 G2
OS: Arch linux 2.6.28
Disks: 2 x 1.5TB
RAID: Software RAID1
Source Host Backups: ESX 3.5u4 and ESX 4.0u1 (We don't run any ESXi hosts)

uname -a output

Linux XXXXX.XXXXX.ucsb.edu 2.6.28-ARCH #1 SMP PREEMPT Sun Jan 18 20:17:17 UTC 2009 i686 Intel(R) Pentium(R) 4 CPU 3.06GHz GenuineIntel GNU/Linux



NICs:

00:05.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5702X Gigabit Ethernet (rev 02)
00:06.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5702X Gigabit Ethernet (rev 02)



NFS Export Options:

/exports/vm-backups XXX.XXX.XXX.XXX/24(rw,async,all_squash,anonuid=99,anongid=99)

 

*One important thing is to verify that your NFS export options are set up correctly; "async" should be configured so that the server replies to the client's IO requests before waiting for the data to be written to storage.
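On a typical Linux NFS server you can re-read /etc/exports and confirm the active options (including async) with:

exportfs -ra      # re-export everything after editing /etc/exports
exportfs -v       # list active exports and their effective options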

*Recently VMware released a KB article describing the various "Advanced NFS Options," their meanings and recommendations: http://kb.vmware.com/kb/1007909 We've not personally had to touch any of these, but for vendors such as EMC and NetApp there are some best practices around configuring some of these values depending on the number of NFS volumes or the number of ESX(i) hosts connecting to a volume. You may want to take a look to see if any of these options help with the NFS issue that some are seeing.
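On the ESX(i) side, the advanced NFS options discussed in the KB can be read (and set) with esxcfg-advcfg; for example (the option name and value are shown only as an illustration):

esxcfg-advcfg -g /NFS/MaxVolumes     # show the current value
esxcfg-advcfg -s 32 /NFS/MaxVolumes  # set a new value (example only)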

*Users should also try to look at their ESX(i) host logs during the time  interval when they're noticing these issues and see if they can find  any correlation along with monitoring the performance on their NFS  Server.

*Lastly, there are probably other things that can be done to improve NFS  performance or further optimization, a simple search online will also  yield many resources.


 

Useful Links:


Windows utility to email ghettoVCB Backup Logs - http://www.waldrondigital.com/2010/05/11/ghettovcb-e-mail-rotate-logs-batch-file-for-vmware/
Windows front-end utility to ghettoVCB -  http://www.magikmon.com/mkbackup/ghettovcb.en.html

Note: Neither of these tools are supported, for questions or comments regarding these utilities please refer to the author's pages.

 


 

Change log:

01/13/13 -

 

Enhancements:

  • ghettoVCB & ghettoVCB-restore supports ESXi 5.1
  • Support for individual VM backup via command-line and added new -m flag
  • Support VM(s) with existing snapshots and added new configuration variable called ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP
  • Support for multiple instances of ghettoVCB running simultaneously and added a new -w flag
  • Configure VM shutdown/startup order and added two new configuration variables called VM_SHUTDOWN_ORDER and VM_STARTUP_ORDER
  • Support changing custom VM name during restore
  • Documentation updates

Fixes:

  • Fixed tab/indentation for both ghettoVCB/ghettoVCB-restore
  • Temp email files and email headers
  • Fixed "whoami" command as it is no longer valid in ESXi 5.1 to check for proper user
  • Added 2gbsparse check in sanity method to auto-load VMkernel module
  • Various typos, for greater detail, you can refer to the "diff" in github repo

 

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

11/19/11 -

 

Enhancements:

  • ghettoVCB & ghettoVCB-restore are now packaged together and both scripts are versioned on github
  • ESXi 5 firewall check for the email port (check FAQ #33 for more details)
  • New EMAIL_DELAY_INTERVAL netcat variable to control slow SMTP servers
  • ADAPTER_TYPE (buslogic,lsilogic,ide) no longer needs to be manually specified; the script will auto-detect it based on the VMDK descriptor file
  • Using the symlink -f parameter for quicker unlink/re-link for the RSYNC use case
  • Updated documentation, including NFS issues (check FAQ #19 for more details, including the new VMware KB article)

Fixes:

  • vSphere 4.1 Update 2 introduced a new vim-cmd snapshot.remove parameter; the script has been updated to detect this change

 

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

06/28/11 -


Enhancements:

  • Support for vSphere 5.0 - ESXi 5.0

 

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

05/22/11 -


Enhancements:

 

  • Support for multiple email recipients
  • Support for individual VMDK backup within ghettoVCB.sh script - FAQ #33

 

Fixes:

  • Minor fix in additional validation prior to VM rotation

 


03/14/11 -

 

Enhancements:

  • Enhanced "dryrun" details including configuration and/or VMDK(s) issues 
    • Warning messages about physical RDM and Independent VMDK(s)
    • Warning messages about VMs with snapshots
  • New storage debugging details 
    • Datastore details both pre and post backups
    • Datastore blocksize mismatch warnings
  • Quick email status summary is now included in the title of the email, this allows a user to quickly verify whether a backup was successful or had complete/partial failure without having to go through the logs.
  • Updated ghettoVCB documentation
  • ghettoVCB going forward will now be version tracked via github and previous releases will not be available for download


Fixes:

  • Updated absolute sym link path for RSYNC_LINK variable to relative path
  • Enhanced logging and details on warning/error messages

 

Big thanks to Alain Spineux and his contributions to the ghettoVCB script and helping with debugging and testing.

 


09/28/10 -


Enhancements:

 

  • Additional email support for Microsoft IIS and email debugging functionality (Experimental Support)
  • ghettoVCB PID is now captured in the logs
  • Rsync support, please take a look at the above documentation for Rsync Support (Experimental Support)


Fixes:

 

  • Fixed a few typos in the script
  • Trapping SIG 13

 

 


 

07/27/10 -


Enhancements:

 

  • Support for emailing backup logs (Experimental Support)

 

 


 

07/20/10 -


Enhancements:

 

  • Support for vSphere 4.1 (ESX and ESXi)
  • Additional logging information for debugging purposes

 

 


 

05/12/10 -


Enhancements:

 

  • Thanks to user Rodder, who submitted a patch to work around the NFS I/O issue. The script now checks the return code of the "rm" operation for VMs that are to be rotated. If the operation does not return right away, we may be running into the NFS I/O issue; the script will then sleep and check periodically to see if the NFS volume is responsive before continuing to the next VM for backup.


Fixes:

 

  • Resolved the problem when trying to specify the ghettoVCB global configuration file with the full path

 

 


 

05/11/10 -

 

 

  • Updated useful links to 2 utilities that were written by users for ghettoVCB

 

 


 

05/05/10 -


Fixes:

 

  • Resolved an issue where VMs with spaces were not being properly rotated. Thanks to user chrb for finding the bug

 

 


 

04/24/10 -


Enhancements:

 

  • Added the ability to include an exclusion list of VMs to not backup


Fixes:

 

  • Resolved persistent NFS configuration bug due to the addition of the global ghettoVCB conf

 

 


 

04/23/10 -


Fixes:

 

  • Resolved a bug in the VM directory naming which could prevent old backups from being deleted properly

 

 


 

04/20/10 -

 

 

  • Support for global ghettoVCB configuration file. Users no longer  need to edit main script and can use multiple configuration files based  on certain environment configurations
  • Ability to backup all VMs residing on a specific host w/o specifying VM list
  • Implemented a simple locking mechanism to ensure only 1 instance of ghettoVCB is running per host
  • Updated backup directory structure - rsync friendly. All backup VM  directories will now have the format of "VMNAME-YYYY-MM-DD_HH_MM_SS"  which will not change once the backup has been completed. The script  will keep N-copies and purge older backups based on the configurations  set by the user.
  • Additional logging and final status output has been added to the script to provide more useful error/warning messages, and an additional status will be printed out at the end of each backup to provide an overall report


Big thanks goes out to the community for the suggested features and to those that submitted snippet of their modifications.


 

03/27/10 -

 

  • Updated FAQ #0-1 & #25-29 for common issues/questions.
  • For those experiencing NFS issue, please take a look at FAQ #29
  • Re-packaged the ghettoVCB.sh script within a tarball (ghettoVCB.tar.gz) to help those users hitting the "Windows effect" when trying to execute the script

 


 

02/13/10 -


Updated FAQ #20-24 for common issues/questions.      Also included a new section about our "personal" NFS configuration and setup.


 

01/31/10 -


Fixed the crontab section to reflect the correct syntax + updated FAQ #17, #18 and #19 for common issues.


 

11/17/09 -


The following enhancements and fixes have been implemented in this  release of ghettoVCB. Special thanks goes out to all the ghettoVCB BETA  testers for providing time and their environments to test features/fixes  of the new script!

Enhancements:

 

  • Individual VM backup policy
  • Include/exclude specific VMDK(s)
  • Logging to file
  • Timeout variables
  • Configurable snapshot memory/quiesce
  • Adapter format
  • Additional logging + dryrun mode
  • Support for both physical/virtual RDMs

Fixes:

  • Independent disk aware
Comments

Hi William,

I have made the last backup with the debug option. Here you can find the log-output: http://pastebin.com/8QM1CGfF

I tried to delete the oldest backup from the command line of the ESXi server (so it tried to delete on the NFS mount) and it failed just like the script; the error I got there was a similar "Input/Output error". A few days earlier I found a thread where somebody had the same problem and solved it by using XFS instead of ext3 for their NFS storage (today I can't find the thread - I forgot to bookmark it).

One more thing: do you have an answer to the question asked above? (http://communities.vmware.com/docs/DOC-8760#comments-15465).

So looking at your logs, it seems that your backup was successful, was it not? You're right, you did hit the NFS timeout issue as mentioned in the last reply, but there is a section of code that tries to provide a quick fix by sleeping, I believe for up to 5 minutes, to see if it can get access back from the NFS server. If it does, it'll move forward through the code; otherwise you'll get an error that it tried to apply the "NFS I/O" hack but failed. In your case, those failure logs are not present ... so as far as I can tell, it was successful? Can you confirm and let me know what you have in both the source and destination folders of the backup?

=========================================================================

William Lam


Hi William,

maybe it was a misunderstanding, I did not mean that the backup failed. I wanted to mention that the deletion, following the rotation count, fails with my biggest VM (TITAN - as you can see in the log).

Today I formatted the NFS store with XFS; maybe with this filesystem the problem is gone.

That's correct, generally the NFS issue is hit on large deletions as mentioned earlier. If the backup was successful and no orphaned snapshots were left behind, then the fix that was implemented in the most recent release of the script worked!

Moving off of ext3 which is what I'm assuming your NFS volume was configured originally to XFS is the other fix which is described in the documentation above.

Glad to hear you got it working

=========================================================================

William Lam


I noticed now that I have really bad performance with XFS. The backup of my first server takes 3 times longer than with ext3. Do you have any recommendations for parameters to tune XFS performance?

The other option would be to go back to ext3 and try the data=writeback mount option, is that correct?

What about other filesystems like ext4? Do they have the same problem?

In terms of performance tuning, I would recommend doing some research on the web; those will probably be your best resources. Those that reported back after switching from ext3 to XFS did not mention any performance issues; again, this is not an issue we hit, so it's hard for us to really say.

That's correct, you can try to change it to "writeback mode" and see if you're able to get around the NFS timeout issue.

Also other things that may affect your testing is the # of spindles behind your NFS volume, you may also want to research that with the various filesystems you'll be performing the test on and ensure you're using the most optimal configuration.

It'll be interesting to see what comes out of your testing; perhaps there is a better solution.

Thanks

=========================================================================

William Lam


While setting up the script to back up our servers I found two things:

1. Round brackets

VMs with brackets in the name (like "machine xy (Test)") do not work. The brackets have to be escaped for the grep command.

2. Exclusion List

The VM name should only match an entry in the exclusion list if it is exactly the same. For example, I wanted to include an otherwise excluded machine for one run, so I added a "#" in front of its name in the exclusion list. But the name still matched, because the grep command in the script also matches if the machine name is merely part of a line in the exclude file. So I changed the grep command a bit (line 475) from:

grep -E "$" "$" > /dev/null 2>&1 to: grep -E "^$\$" "$" > /dev/null 2>&1
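With hypothetical variable names (the script's actual names may differ), the change amounts to anchoring the pattern so only an exact, full-line match in the exclusion file counts:

# before: substring match against the exclusion file
grep -E "${VM_NAME}" "${VM_EXCLUSION_FILE}" > /dev/null 2>&1
# after: exact, full-line match only
grep -E "^${VM_NAME}$" "${VM_EXCLUSION_FILE}" > /dev/null 2>&1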

Urs

Hello,

We have a problem with our backup and we can't find the cause.

We use the ESXi 4.0.0 VMWare Server and ghettoVCB (last modified 05/12/2010)

We are trying to back up a VM with two disks. The first disk is located under "/vmfs/volumes/RAID-SAS/VM_1" (35GB). The backup works for this disk.

The second disk is located under "/vmfs/volumes/RAID-SATA/VM_1" (800GB), meaning the disks are on different datastores. If we try to back up this disk, the backup fails.

Below please find the logfile of the failed backup.

I already tried setting the "SNAPSHOT_TIMEOUT" to "115", which didn't solve the problem.

Any ideas what's going wrong? Or do you need more information about the configuration and environment?

Here is the log from the failed backup:

/ghettoVCB # ./ghettoVCB.sh -f ./vms_to_backup -g ./ghettoVCB.conf -d debug

2010-06-17 09:13:03 -- info: ============================== ghettoVCB LOG START ==============================

2010-06-17 09:13:03 -- debug: Succesfully acquired lock directory - /tmp/ghettoVCB.lock

2010-06-17 09:13:03 -- debug: HOST BUILD: VMware ESXi 4.0.0 build-244038

2010-06-17 09:13:03 -- debug: HOSTNAME: esx1

2010-06-17 09:13:03 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ./ghettoVCB.conf

2010-06-17 09:13:03 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/RAID-SATA/backup/

2010-06-17 09:13:03 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2

2010-06-17 09:13:03 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2010-06-17_09-13-03

2010-06-17 09:13:03 -- info: CONFIG - DISK_BACKUP_FORMAT = thin

2010-06-17 09:13:03 -- info: CONFIG - ADAPTER_FORMAT = buslogic

2010-06-17 09:13:03 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0

2010-06-17 09:13:03 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0

2010-06-17 09:13:03 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3

2010-06-17 09:13:03 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5

2010-06-17 09:13:03 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15

2010-06-17 09:13:03 -- info: CONFIG - LOG_LEVEL = debug

2010-06-17 09:13:03 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout

2010-06-17 09:13:03 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0

2010-06-17 09:13:03 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0

2010-06-17 09:13:03 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all

2010-06-17 09:13:06 -- info: Initiate backup for VM_1

2010-06-17 09:13:06 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-06-17" for VM_1

2010-06-17 09:13:07 -- debug: Waiting for snapshot "ghettoVCB-snapshot-2010-06-17" to be created

2010-06-17 09:13:07 -- debug: Snapshot timeout set to: 900 seconds

2010-06-17 09:13:08 -- debug: Waiting for snapshot creation to be completed - Iteration: 0 - sleeping for 60secs (Duration: 0 seconds)

2010-06-17 09:14:09 -- debug: Waiting for snapshot creation to be completed - Iteration: 1 - sleeping for 60secs (Duration: 30 seconds)

2010-06-17 09:15:10 -- debug: Waiting for snapshot creation to be completed - Iteration: 2 - sleeping for 60secs (Duration: 60 seconds)

2010-06-17 09:16:11 -- debug: Waiting for snapshot creation to be completed - Iteration: 3 - sleeping for 60secs (Duration: 90 seconds)

2010-06-17 09:17:12 -- debug: Waiting for snapshot creation to be completed - Iteration: 4 - sleeping for 60secs (Duration: 120 seconds)

2010-06-17 09:18:13 -- debug: Waiting for snapshot creation to be completed - Iteration: 5 - sleeping for 60secs (Duration: 150 seconds)

2010-06-17 09:19:13 -- debug: Waiting for snapshot creation to be completed - Iteration: 6 - sleeping for 60secs (Duration: 180 seconds)

2010-06-17 09:20:14 -- debug: Waiting for snapshot creation to be completed - Iteration: 7 - sleeping for 60secs (Duration: 210 seconds)

2010-06-17 09:21:15 -- debug: Waiting for snapshot creation to be completed - Iteration: 8 - sleeping for 60secs (Duration: 240 seconds)

2010-06-17 09:22:16 -- debug: Waiting for snapshot creation to be completed - Iteration: 9 - sleeping for 60secs (Duration: 270 seconds)

2010-06-17 09:23:17 -- debug: Waiting for snapshot creation to be completed - Iteration: 10 - sleeping for 60secs (Duration: 300 seconds)

2010-06-17 09:24:18 -- debug: Waiting for snapshot creation to be completed - Iteration: 11 - sleeping for 60secs (Duration: 330 seconds)

2010-06-17 09:25:19 -- debug: Waiting for snapshot creation to be completed - Iteration: 12 - sleeping for 60secs (Duration: 360 seconds)

2010-06-17 09:26:20 -- debug: Waiting for snapshot creation to be completed - Iteration: 13 - sleeping for 60secs (Duration: 390 seconds)

2010-06-17 09:27:21 -- debug: Waiting for snapshot creation to be completed - Iteration: 14 - sleeping for 60secs (Duration: 420 seconds)

2010-06-17 09:28:22 -- info: Snapshot timed out, failed to create snapshot: "ghettoVCB-snapshot-2010-06-17" for VM_1

2010-06-17 09:28:22 -- debug: Removing /vmfs/volumes/4bf393c5-edde2dfb-7cb5-001517d7e5d5/backup/VM_1/VM_1-2010-06-16_11-08-26

2010-06-17 09:28:22 -- info: Backup Duration: 15.27 Minutes

2010-06-17 09:28:22 -- info: Error: Unable to backup VM_1 due to snapshot creation!

2010-06-17 09:28:22 -- info: ###### Final status: ERROR: All VMs failed! ######

2010-06-17 09:28:22 -- debug: Succesfully removed lock directory - /tmp/ghettoVCB.lock

2010-06-17 09:28:22 -- info: ============================== ghettoVCB LOG END ================================

Hello,

Where are you getting the latest version of ghettoVCB? http://sourceforge.net/ is not producing a download but an html file.

Thanks

Hello,

I am getting an error when running the script with no compression.

  • Failed to clone disk : The file already exists (39).

  • but creates a gz file on the target. Any thoughts?

Also if i run the script with no compression i get the following errors:

  • Failed to clone disk : The file already exists (39).

  • mv: unable to rename `/vmfs/volumes/WinL/AABBES.COM-2010-06-19': Input/output error

any thoughts?

Thank you...

Sorry for the late reply, I've been studying for an exam these last few weeks and just been swamped with other things.

I think I mentioned this somewhere in the documentation, but I'm not a fan of using spaces or special characters in a naming convention for your VMs or hosts. Yes, Windows and VMware allow you to do this, but I prefer to use underscores or dashes to separate words.

Regarding #2, I'll go ahead and mark this down and make sure it's resolved in a future release of the script. Thanks for your findings.

=========================================================================

William Lam


Sorry for the late reply, I've been studying for an exam these last few weeks and just been swamped with other things.

I'm a little confused by your comments. You're able to back up this VM, which has 2 VMDKs, and one is successful and the other is not? Based on the logs, it's not able to create a snapshot, which means that if the VM was powered on, a backup would not have happened.

Could you provide some more details on your exact execution? Are you trying to back up each VMDK and finding that only the first one works but not the second?

FYI - Please refer FAQ #1 for posting logs.

Thanks

=========================================================================

William Lam


Sorry for the late reply, I've been studying for an exam these last few weeks and just been swamped with other things.

You may want to refer to FAQ #18

Thanks

=========================================================================

William Lam


Hi, great little tool. I have used this to successfully backup a number of machines.

I noticed in the VMA client GUI that when it is doing the copy (the slow bit) there is a little percentage bar at the bottom. I like this feature and would like to incorporate this into the script.

The command that actually does the copy is; /usr/bin/vmkfstools --server "SSS" --username "UUU" --password "funkytmppass" -i "ds file" -a buslogic -d thin "ds file"

Is there any way to get the progress of the command? I've tried looking at the file size but it seems to allocate it all upfront... maybe a separate VMA interface call?

Any ideas appreciated.

Hello,

In the first step we backed up only disk number one of the VM. The backup worked well. The second disk was independent (no snapshot).

Then we tried to back up the vmdk file with the "VMDK_FILES_TO_BACKUP=" option, which didn't work.

Here are the 3 versions we try(ghettoVCB.conf):

VMDK_FILES_TO_BACKUP="VM_1_1_1.vmdk"

VMDK_FILES_TO_BACKUP="/vmfs/volumes/RAID5-SATA/VM_1/VM_1_1_1.vmdk"

VMDK_FILES_TO_BACKUP="/vmfs/volumes/4bf393c5-edde2dfb-7cb5-001517d7e5d5/VM_1/VM_1_1_1.vmdk"

None of these worked.

After that, we configured the second disk to be "dependent" in order to be included in the snapshots.

Here the link to the logfile: http://pastebin.com/brtVtgLm

Summary:

1. When both disks are configured to be "dependent", the backup fails.

2. When only the first disk is configured to be "dependent", the backup succeeds!

If you run vmkfstools interactively, you'll get a % completed, but there is no progress bar as such. This tool is meant to run in the background; if you need to see the current progress, you can run it interactively OR tail the logs.

If you're referring to the progress bar found in the vSphere Client, that is only available when using the vSphere API, when you manually call vmkfstools from the Service Console or the unsupported Busybox console, this is not reflected on the UI. ghettoVCBg2 uses the vSphere API, hence when logging directly into an ESX or ESXi host while the backup is being performed, you'll notice the progress for a given VMDK copy.

=========================================================================

William Lam


So the format of the VMDK_FILES_TO_BACKUP variable is just the VMDK name; you don't need to specify the full path, and if you do, it probably will not pass the checks, as the script is looking for a specific name (e.g. mydisk.vmdk). This is clearly documented, so the 1st configuration is correct and should be used.

Secondly, make sure your disks can actually be snapshotted.

I'll need the following pieces of information:

1) Can you please provide screenshots of your VM configuration, specifically for each of the VMDKs: click on the VM's settings, highlight each VMDK and provide a screenshot of the configuration.

2) You mentioned this VM resides on 2 separate datastores? Are they both VMFS or NFS? If they're VMFS, can you provide the VMFS blocksize configured for each datastore that this VM resides on?

=========================================================================

William Lam


I ran the command interactively - it took about 5 minutes - and it completed successfully, but I did not see any percentage-completed output.

I also tried extracting the logs (hostd, message) from the ESX server using vilogger - but these logs also did not contain a % output. Which logs were you referring to?

If you ran the copy command manually with vmkfstools, you should have seen a counter displaying the progress, updating on your screen.

The logs I'm referring to are the ghettoVCB logs that are being generated; they also capture the status. Also note that timestamps are captured in the log along with the completion time, for your records.

=========================================================================

William Lam


ok - here's my problem.

I put ghettoVCB.sh and the control file that contains the names of the VMs to back up in the root directory of my ESXi 3.5 box, and after I reboot the server the files are GONE!

why is this?

thanks in advance.

Jeff

Changes in the ESXi unsupported Busybox console are not persisted after reboots, other than a few configuration files. You should store your script either on a local VMFS volume or on shared storage, or else it will be wiped each time the host reboots.

=========================================================================

William Lam


I must have screwed something up, because it didn't output anything. I did find, though, that if I put "--verbose 3" on the command line it spits out a bunch of XML, where one of the fields is a percentage. I will strip out that field.

Hi there,

My users need to retain multiple snapshots of their virtual machines. Is there any way to backup machines with snapshots? Would it work to:

a) create a snapshot

b) "vmkfstools -i 2nd_last_snap.vndk ...."

c) remove the last snapshot

Has anyone figured out the addressing scheme for snapshots using "vim-cmd vmsvc/snapshot.remove"? The docs specify the arguments snapshotLevel and snapshotIndex, but I haven't figured out how they work with branched trees yet.

Thanks!

Multiple snapshots are accommodated for right in the script file:

VM_BACKUP_ROTATION_COUNT=5

or whatever number you want to save and you can vary to backup schedule in the cron entry.

Thanks, but doesn't VM_BACKUP_ROTATION_COUNT just determine the number of backups to save for each virtual machine?

I need to be able backup virtual machines that have virtual machine snapshots. I don't really need to retain the snapshots in the backup, but I can't delete the snapshots on the virtual machines. From the "Features" section at the top of this page:

"VM(s) that intially contain snapshots will not be backed up and will be ignored"

@mkennetha you are correct, the VM_BACKUP_ROTATION_COUNT variable is used to determine the number of backups kept for each VM, each of which is a complete backup of a given VM w/o snapshots.

As mentioned in the documentation and in FAQ #11, VMs with prior snapshots are NOT supported. It's not a best practice to keep snapshots for long durations; more commonly they're misused and misunderstood. Here is another article on why snapshots are NOT meant to be used for backups - http://blogs.vmware.com/kb/2010/06/vmware-snapshots.html

=========================================================================

William Lam


Thanks for your reply.

I certainly agree that snapshot retention is neither resource-friendly nor suitable as a backup strategy.

The situation that I am confronted with is that the developers I support have a workflow in place which makes copious use of snapshots. Poorly conceived, perhaps, but nevertheless I do have to provide some kind of backup strategy.

Up to now, I've been more or less manually using the workflow described in my first post. I haven't been able to reliably script it yet, because I haven't been able to address snapshots after a snapshot tree has been forked. I suppose the "snapshotIndex" is relevant, but I haven't been able to find documentation and my own experimental poking has not yet yielded results.

Hi,

I'm using this script to back up my VMs in a vSphere environment (I send a batch file from Windows).

The script works fine with ESX 4, but I have a problem with ESXi 4: the backup is OK (and so is the restore), but the log file is empty.

Kindly, can you help me?

Thanks for all

Pretty nifty code.

Wondering...

1. Are you phasing out this script and telling users to switch to ghettoVCBg2? If so, skip #4.

2. To truly create a backup window, I don't see a way to terminate the backup process if we want the backups to stop at, let's say, 7am and then resume at 6pm. Is this possible?

3. Are any dedupe features coming, like VDR has? Or would that hose the ESXi host too much? Backups are fast; compressing the VMs to *.gz is the slowest part. Backing up and compressing every night will take a while for 30 or more VMs.

4. An email feature? We have a script that can email the log, but it would be nice if there were a built-in email feature.

We're beta testing VDR, but I need something for production until Azmir fixes the slow Integrity Checks. : ) waiting patiently.

  • VMware truly needs to focus efforts on its backup strategy for VMs for its Advanced or higher license customers; the core software and other features are fine, VDR just needs more attention.

Have you taken a look at some of the resources above in the documentation with regards to executing the script from a Windows environment? That should give you some tips on setting it up, if you're sending the correct command, it should log to the destination datastore

=========================================================================

William Lam


Hi William,

thanks for reply.

this is the batch file that i have scheduled in Windows:

C:\backup_vm\plink.exe root@10.1.0.13 -pw password "nohup /vmfs/volumes/4bbc4a19-fe5178cd-1a04-d8d3855fa43e/copia_vm/ghettoVCB.sh -f /vmfs/volumes/4bbc4a19-fe5178cd-1a04-d8d3855fa43e/copia_vm/serverweb > /vmfs/volumes/ce5226e0-34047eed/Backup/backuplogserverweb.txt &"

The backup is correct but the log is empty(ESX 4i).

The same script works correctly in ESX4

I have read some tips but I'm unable to identify the solution.

Thanks

Hello,

1. I'm not phasing out ghettoVCB; on the contrary, I have more users on this script than on ghettoVCBg2. The primary reason I can think of is that ghettoVCB supports free ESXi, which is used by many small shops, labs and even some mid-size shops. I continue to get feedback from the community and provide enhancements/fixes as appropriate and as time permits. Please refer to FAQ #15 for more on enhancement requests.

2. Not possible, and I don't plan on supporting this type of complicated backup process. Remember this solution is not doing any type of "incremental" backup; it uses vmkfstools, and I expect users of this script to understand how that works and how the backups actually take place. If you understand that, then you'll see that implementing a "pause" option would not work.

3. Please refer to answer #2 above and FAQ #25. If you're looking for de-dupe capabilities, this will have to be on the actual storage subsystem, whether that is an NFS/ZFS server or an array that has de-dupe capabilities. Things like changed block tracking are not possible using vmkfstools; that is provided via the vSphere API and is implemented in various commercial backup solutions such as VMware vDR, Veeam Backup/Replication, etc. If you're looking for those features, you need to look at a commercial product. As much as I would love to implement a ghettoCBT, it just is not possible via the methods this script uses.

4. Yes, it would be nice. There have been some posts on the internet about getting email sent from classic ESX; the problem arises when you're dealing with ESXi. If you need email capabilities, have a remote management system grab the logs after the backup, or after an n-period of time, and email them to you; this also allows for custom configuration if a user wants the email to come from a centralized system. No plans to support this in the future.

I've heard horror stories with vDR and from what I "hear", it's still a 1.0 product. If you're looking for a mature and reliable commercial product, take a look at Veeam Backup and Replication.

Hopefully this answered your questions

=========================================================================

William Lam


Giunat,

I personally use the vMA appliance (you could use any Linux flavor) to remotely trigger the ghettoVCB script on ESX/ESXi hosts via SSH. The backup repository is an NFS server attached to all hosts, which also centralizes the configuration/scripts. It has been working really well and I have found no problems. I wrote a small script to handle the logging and email notifications that include the backup log.

If you are interested on implementing in this -similar- way let me know and I can email you more details.

Hi William,

Thanks for this great tool - it really helps all of us 'part-time-admins' in small projects a lot!!

I would like to mention 3 things:

1. I ran into a problem when the NFS mount ran out of space during disk clone. This was reported by vmkfstools on the screen and in the debug log as follows:

--- snip ---
2010-06-23 11:19:08 -- debug: Waiting for snapshot "ghettoVCB-snapshot-2010-06-23" to be created
2010-06-23 11:19:08 -- debug: Snapshot timeout set to: 900 seconds
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/vmsOnC2edev13/vmc2eocs03/vmc2eocs03.vmdk'...

Clone: 0% done.
Clone: 1% done.
Clone: 2% done.
Clone: 3% done.
Clone: 4% done.
Clone: 5% done.
Clone: 6% done.
Clone: 7% done.
Clone: 8% done.
Clone: 9% done.
Clone: 10% done.
Clone: 11% done.
Clone: 12% done.
Clone: 13% done.
Clone: 14% done.
Clone: 15% done.
Clone: 16% done.
Clone: 17% done.
Clone: 18% done.
Clone: 19% done.
Clone: 20% done.
Clone: 21% done.
Clone: 22% done.
Clone: 23% done.
Clone: 24% done.
Clone: 25% done.
Clone: 26% done.
Clone: 27% done.
Clone: 28% done.
Clone: 29% done.
Clone: 30% done.
Clone: 31% done.
Clone: 32% done.
Clone: 33% done.
Clone: 34% done.
Clone: 35% done.
Clone: 36% done.
Clone: 37% done.
Clone: 38% done.
Clone: 39% done.
Clone: 40% done.
Clone: 41% done.
Clone: 42% done.
Clone: 43% done.
Clone: 44% done.
Clone: 45% done.
Clone: 46% done.
Clone: 47% done.
Clone: 48% done.
Clone: 49% done.
Clone: 50% done.
Clone: 51% done.
Clone: 52% done.
Clone: 53% done.
Clone: 54% done.
Clone: 55% done.
Clone: 56% done.
Clone: 57% done.
Clone: 58% done.
Clone: 59% done.
Clone: 60% done.
Clone: 61% done.
Clone: 62% done.
Clone: 63% done.
Clone: 64% done.
Clone: 65% done.
Clone: 66% done.
Clone: 67% done.
Clone: 68% done.
Clone: 69% done.
Clone: 70% done.
Clone: 71% done.
Clone: 72% done.
Clone: 73% done.
Clone: 74% done.
Clone: 75% done.
Clone: 76% done.Failed to clone disk : No space left on device (1835017).
2010-06-23 12:05:29 -- info: Removing snapshot from vmc2eocs03 ...
2010-06-23 12:06:04 -- info: Backup Duration: 46.95 Minutes
2010-06-23 12:06:04 -- info: Successfully completed backup for vmc2eocs03!

2010-06-23 12:06:05 -- info: Initiate backup for vSphere Management Assistant
--- snip ---

As you can see, ghettoVCB did not treat this situation as an error; it continued the backup and reported success. Is there any chance to change this? Perhaps by checking the vmkfstools call's return value?

2. I built a small customization for logwatch (see http://www.logwatch.org/) that parses a directory containing the ghettoVCB log files of several hosts (i.e. on the central NFS backup datastore) and outputs a report on the executions and statuses of the backup operations that were performed as cron jobs on several ESXi hosts. The logwatch tool runs as a cron job on the server that provides the NFS share to the ESXi hosts. The e-mail report looks as follows [due to testing the error detection there are more error messages than success messages in the report, but those errors were produced intentionally]:

 ################### Logwatch 7.3 (03/24/06) #################### 
        Processing Initiated: Thu Jun 24 08:17:00 2010
        Date Range Processed: all
      Detail Level of Output: 0
              Type of Output: unformatted
           Logfiles for Host: xxxxxxxxxxxxxxxxxx
  ################################################################## 
 
 --------------------- GhettoVCB VM backups Begin ------------------------ 

 !!!WARNING!!! You may have backup errors
 localhost.localdomain:vmc2eocs03 @ 2010-06-23 11:19:08:
 		Clone: 76% done.Failed to clone disk : No space left on device (1835017).: 1 time(s)
 localhost.localdomain:vmc2eocs03 @ 2010-06-23 13:03:42:
 		Clone: 76% done.Failed to clone disk : No space left on device (1835017).: 1 time(s)
 localhost.localdomain: @ 2010-06-22 11:07:25:
 		 -- info: ###### Final status: ERROR: All VMs failed! ######: 1 time(s)
 localhost.localdomain: @ 2010-06-22 11:07:01:
 		 -- info: ###### Final status: ERROR: All VMs failed! ######: 1 time(s)
 localhost.localdomain: @ 2010-06-23 07:02:02:
 		 -- info: ###### Final status: ERROR: All VMs failed! ######: 1 time(s)
 localhost.localdomain: @ 2010-06-23 12:11:28:
 		 -- info: ###### Final status: ERROR: Only some of the VMs backed up! ######: 1 time(s)
 
 
 ------------------------------------------------------------------
 ------------------------       Backup summary      ---------------
 ------------------------------------------------------------------
 Host:			Build:
 	VM:
 		Backup     			Success:   	duration:
 ------------------------------------------------------------------
 localhost.localdomain	VMware ESXi 4.0.0 build-208167
 	vSphere Management Assistant
 		2010-06-23 12:06:05	success	5.38 Minutes
 ------------------------------------------------------------------
 	vmc2eappsvct-esxi
 		2010-06-22 06:33:03	success	9.15 Minutes
 		2010-06-22 07:00:03	success	9.15 Minutes
 		2010-06-22 11:07:01	!!omitted!!	- unknown -  
 		2010-06-22 11:07:25	!!omitted!!	- unknown -  
 		2010-06-23 07:02:02	!!omitted!!	- unknown -  
 		2010-06-23 07:31:48	success	9.15 Minutes
 		2010-06-23 11:14:26	- unknown -	- unknown -  
 		2010-06-23 11:19:06	!!omitted!!	- unknown -  
 		2010-06-23 12:52:41	success	10.98 Minutes
 ------------------------------------------------------------------
 	vmc2eocs03
 		2010-06-23 11:19:07	!!Error!!	- unknown -  
 		2010-06-23 13:03:41	!!Error!!	- unknown -  
 ------------------------------------------------------------------
 ------------------------------------------------------------------
 esxihost2.x.com	VMware ESXi 4.0.0 build-208167
 	FW-Builder
 		2010-06-24 05:33:32	success	12.43 Minutes
 ------------------------------------------------------------------
 	VAS win2k
 		2010-06-24 05:56:59	success	17.02 Minutes
 ------------------------------------------------------------------
 	vasa xp
 		2010-06-24 05:46:00	success	10.97 Minutes
 ------------------------------------------------------------------
 	vmccccontrol
 		2010-06-24 06:14:02	- unknown -	- unknown -  
 ------------------------------------------------------------------
 ------------------------------------------------------------------
 
 ---------------------- GhettoVCB VM backups End ------------------------- 

 
 ###################### Logwatch End ######################### 

I assume this is a good starting point for a community tool, but I do not have the time to maintain the script for the community and provide support and comprehensive documentation. So I would leave a *.tar.gz somewhere with some short usage notes and provide the script as-is. Where and how would I leave such a script?

3. One piece of information I would like to have parseable in the log file (for my logwatch report) is the actual disk size of the backup performed (and perhaps the free space on the backup datastore before and after the backup operation). This would be useful information for my logwatch report, so if you are doing work on your script anyway, this would be a nice addition.

Again, thanks for your great work

Greetings

Thomas

1. Good catch! I presume all we need to do is add something like this to the script immediately after each vmkfstools clone operation:

http://pastebin.com/tY1Ypby3

(untested yet)

2. Up to you IMHO.

3. Interesting. Maybe you would propose a patch?
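Regarding point 1, in the spirit of the pastebin snippet above, here is a minimal sketch of the kind of check that could follow each clone (untested; the variable names VMKFSTOOLS_CMD, VMDK, ADAPTER_FORMAT, FORMAT_OPTION and DESTINATION, as well as the logger helper, are assumptions and may not match the actual script):

# run the clone and check its exit code instead of assuming success
${VMKFSTOOLS_CMD} -i "${VMDK}" -a "${ADAPTER_FORMAT}" -d "${FORMAT_OPTION}" "${DESTINATION}"
VMDK_CLONE_RC=$?
if [ ${VMDK_CLONE_RC} -ne 0 ]; then
    logger "info" "ERROR: vmkfstools failed to clone ${VMDK} (exit code ${VMDK_CLONE_RC})"
    # hypothetical flag so the final status can report the failure
    VM_CLONE_FAILED=1
fi

Regarding point 3, a rough idea of what such a patch could log after a clone completes, using only the busybox tools available on ESXi (again, the variable names are assumptions):

# hypothetical logging of the backup size and the remaining space on the backup volume
BACKUP_SIZE=$(du -sh "${VM_BACKUP_DIR}" | awk '{print $1}')
FREE_SPACE=$(df -h "${VM_BACKUP_VOLUME}" | awk 'NR==2 {print $4}')
logger "info" "Backup size for ${VM_NAME}: ${BACKUP_SIZE}"
logger "info" "Free space remaining on ${VM_BACKUP_VOLUME}: ${FREE_SPACE}"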

William,

After your updates in April-May it took a few weeks before I got around to trying it out on our servers (instead of my own home-cooked version). It appears to be a "drop-in replacement", and of course I'd prefer to use "upstream" instead of a separate version.

Nevertheless I still have a few questions. They might be a bit lengthy for a comment here, so they're here: http://pastebin.com/EkfWfG2K

Take a look and let me know.

Hi traugust,

Thanks for the comments. I'm aware of this scenario and it will be properly handled in a future release. I'll make a note of your comments, but depending on my free cycles, some of these may or may not make it into a future release.

Very cool report btw.

Thanks

=========================================================================

William Lam

VMware vExpert 2009,2010

VMware scripts and resources at:

Twitter: @lamw

vGhetto Script Repository

Getting Started with the vMA (tips/tricks)

Getting Started with the vSphere SDK for Perl

VMware Code Central - Scripts/Sample code for Developers and Administrators

VMware Developer Community

If you find this information useful, please award points for "correct" or "helpful".

gautelund,

Let me take a look and get back to you; I've got lots on my plate. Remember this is not the only script I manage and have to keep up to date.

Thanks for the feedback

=========================================================================

William Lam

Hi,

Here are the two screenshots.

http://yfrog.com/mtvmdisk1jx

Both datastores are VMFS. The first disk has a maximum disk size of 256GB; the second disk has a maximum size of 1024GB.

The first disk is stored on RAID5-SAS with a 1MB block size.

The second disk is stored on RAID5-SATA with a 4MB block size.

This can be an issue if your VM's main configuration file is stored on a VMFS volume with a smaller block size than the volume holding its other VMDK(s). We personally hit a snag with a similar configuration that caused our VM to actually crash, as documented here - http://www.virtuallyghetto.com/2010/05/vsphere-esx-40-crash-vm-bug.html

It's probably best to either keep the same block size everywhere or, if you're going to have a mix, ensure that the VM's configuration file is stored on the VMFS volume with the larger block size; otherwise you will be restricted by the VMFS volume with the smaller block size.
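If you want to double-check the block size of each VMFS volume before deciding where the VM's configuration file should live, vmkfstools can query the filesystem attributes. A quick example using the datastore names from above (the output includes the capacity, free space and file block size):

# query VMFS attributes in human-readable form
vmkfstools -Ph /vmfs/volumes/RAID5-SAS
vmkfstools -Ph /vmfs/volumes/RAID5-SATA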

=========================================================================

William Lam

If we are running this script locally on a shared LUN and configuring it to run from cron, would the cron job be configured on the vMA?

Do you have a vMA/ghettoVCBg2 for dummies? Sorry...

It does not have to be on vMA; any remote management host will do just fine. You can also schedule it via Windows using the Task Scheduler. You just need to ensure the scheduled task/cron runs outside of the ESX or ESXi host.

FYI - You're posting in ghettoVCB and not ghettoVCBg2

In either case, the documentation has information on setting up a cronjob
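As a rough illustration only (the Cronjob section above has the full procedure; the paths here are placeholders and the schedule is arbitrary), a crontab entry that invokes ghettoVCB.sh against a list of VMs could look something like this:

# hypothetical entry: back up the VMs listed in vms_to_backup every night at 00:00, Mon-Fri
0 0 * * 1-5 /vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.sh -f /vmfs/volumes/datastore1/ghettoVCB/vms_to_backup > /tmp/ghettoVCB-backup.log 2>&1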

=========================================================================

William Lam

Hello,

Thank you for your help. After I moved all data from the SAS store to the SATA store, I couldn't delete the SAS store in the GUI.

I solved the problem with vmkfstools:

vmkfstools --createfs vmfs3 --blocksize 4M -S RAID5-SAS.new /vmfs/devices/disks/mpx.vmhba2\:C0\:T0\:L0\:3

Now I have another problem:

If I try to back up the VM, I get this message:

2010-07-07 13:45:48 -- info: Snapshot found for VM_1, backup will not take place

I have already tried to delete the snapshot in the vSphere Client. I used the "Delete All" button in the Snapshot Manager. After I delete all snapshots, I get the same error message again.

Here is a list with existing files after deleting snapshots in the manager:

http://pastebin.com/Gr2dcZvq

After deleting the snapshots, the ghetto script still tells me that snapshots are found. How can I delete the snapshot so that the ghetto script works?

The script searches for two types of files: *-delta.vmdk (snapshot delta disks) and *.vmsn (snapshot state files). You will need to get rid of these, else you'll continue to hit the error. I would recommend manually creating a snapshot and then deleting it; hopefully this will cleanly remove any stale leftover files. If not, then you can manually delete these .vmsn files only.
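As a quick check for leftover files of either type, something like this from the console should do (the datastore and VM directory names are placeholders):

# list any leftover snapshot delta disks and snapshot state files for the VM
find "/vmfs/volumes/<datastore>/<VM directory>/" -name "*-delta.vmdk" -o -name "*.vmsn"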

=========================================================================

William Lam

Hi William -

Great script! You have definitely filled a need in the community (but you know that). I have read and (mostly) memorized this page and have learned much in the process. I have reached an impasse, however, and am looking for some help as to why the 2gbsparse format is not working for me. I can't use thick, as my destination is not large enough to hold the whole image, and Windows NFS does not support "thin".

I am trying to back up to a Windows NFS mount and am using the 2gbsparse format. However, the files are all filling up the whole 2GB, which ultimately fills up the NFS share and the copy fails. The strange thing is that when I did this manually last week, it worked! I saw the various files, all with different file sizes.

What am I missing? I thought that Windows NFS supported 2gbsparse. Is there a (WSFU) NFS configuration setting that I overlooked?

Thanks in advance!

-


2010-07-07 17:26:59 -- info: ============= ghettoVCB LOG START ==============================

2010-07-07 17:26:59 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/esxi_backups/vm-backups

2010-07-07 17:26:59 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3

2010-07-07 17:26:59 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2010-07-07_17-26-59

2010-07-07 17:26:59 -- info: CONFIG - DISK_BACKUP_FORMAT = 2gbsparse

2010-07-07 17:26:59 -- info: CONFIG - ADAPTER_FORMAT = buslogic

2010-07-07 17:26:59 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0

2010-07-07 17:26:59 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0

2010-07-07 17:26:59 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3

2010-07-07 17:26:59 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5

2010-07-07 17:26:59 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15

2010-07-07 17:26:59 -- info: CONFIG - LOG_LEVEL = info

2010-07-07 17:26:59 -- info: CONFIG - BACKUP_LOG_OUTPUT = /vmfs/volumes/VMDataStore1/ghettoVCB/ghettoVCB.sh.log

2010-07-07 17:26:59 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0

2010-07-07 17:26:59 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 1

2010-07-07 17:26:59 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all

2010-07-07 17:27:02 -- info: Initiate backup for Rainier

2010-07-07 17:27:02 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-07-07" for Rainier

Destination disk format: sparse with 2GB maximum extent size

Cloning disk '/vmfs/volumes/VMDataStore1/Rainier-2010-06-22_18-58-43/Rainier-0.vmdk'...

...^MClone: 84% done.Failed to clone disk : Insufficient permission to access file (38).

2010-07-07 19:35:16 -- info: Removing snapshot from Rainier ...

2010-07-07 19:35:21 -- info: Backup Duration: 128.32 Minutes

2010-07-07 19:35:21 -- info: Successfully completed backup for Rainier!

-


Additional Info:

ESXi 4.0U1 w/ most recent ghettoVCB script

3 Active VMs

2 DataStores internal RAID-5 storage

1 Persistent NFS DataStore running on Windows WSFU (No other problems with the NFS datastore)

 • The vMA VM backs up fine, but again, the first 2 VMDK-part files are the full 2GB.

 • I can see the creation of all the 2GB part files at the beginning of the process and then each one being written to 2GB.

 • I revise my previous statement: I only "think" I saw it work last week (that was so long ago, and it may have been a test run on a local datastore).

Hello,

I am using ESXi 4 and just started using the latest version of the backup script. I experienced the following problem each time I tried to back up a VM that had a .vmdk file on another datastore. This is probably a bug, but I'm not sure if other users have experienced it. When the script created the folder named after the UUID of the datastore that the .vmdk was on, and then created the backup .vmdk file in that folder, the names of both the folder and the file were incorrect: some characters were replaced with spaces. For example, the UUID of the datastore was "4bb1f74c-ee55f49c-5525-001b212f6af8", but the folder was created as "4bb1f74c-ee55f49c-55 5-001b 1 f6af8". The .vmdk placed in that folder was named "C EN 0 3.vmdk", which also has spaces replacing characters. To solve the problem I changed the following three consecutive lines of code.

old:

DS_UUID="$(echo ${VMDK#/vmfs/volumes/*})"

DS_UUID="$(echo ${DS_UUID%//})"

VMDK_DISK="$(echo ${VMDK##/*/})"

new:

DS_UUID="${VMDK#/vmfs/volumes/*}"

DS_UUID="${DS_UUID%//}"

VMDK_DISK="${VMDK##/*/}"

It appears that passing the values through the echo command was giving me adverse results; echo is not needed for these string manipulations anyway, since the parameter expansions can be assigned directly.

Hi William et al,

I really like the script and have started to use it in my production environment after running some tests this past week. Curiously, I ran into the problem described in question 8:

8Q: I noticed that the backup rotation is occurring after a backup. I don't have enough local storage space, can the process be changed?

8A: This is primarily done to ensure that you have at least one good backup in case the new backup fails. If you would like to modify the script, you're more than welcome to do so.

In our case, I have 2 NASes that I am using to back up our production server - one on Mon/Wed/Fri, one on Tue/Thu/Sat. We are running this way because it provides a little more redundancy in the event that we have a NAS failure. Due to size constraints, how can I modify the ghettoVCB script to delete the old backup first?

Thanks in advance,

Seth
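One possible way to approach what Seth is asking - a minimal sketch only, not part of the shipped script, and the paths and variable names (VM_BACKUP_VOLUME, VM_NAME) are assumptions - would be to remove the oldest backup directory for the VM just before the new clone starts, accepting that you lose one good backup if the new one then fails:

# hypothetical pre-backup cleanup: drop the oldest rotation for this VM before cloning
OLDEST_BACKUP=$(ls -1dt "${VM_BACKUP_VOLUME}/${VM_NAME}"/*/ 2>/dev/null | tail -1)
if [ -n "${OLDEST_BACKUP}" ]; then
    rm -rf "${OLDEST_BACKUP}"
fi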

I have found that if you add too many lines to /var/spool/cron/crontabs/root, the file becomes blank.

If you delete it first, then you can add in as many lines as needed.

Put the following in /etc/rc.local right after the line that kills crond and before you add any entries manually: delete the file, but don't forget to re-add the first two default lines.

# remove the existing (possibly corrupted) crontab first
/bin/rm /var/spool/cron/crontabs/root

# re-add the two default ESXi entries, then append any custom backup entries after these
/bin/echo "01 01 * * * /sbin/tmpwatch.sh" > /var/spool/cron/crontabs/root
/bin/echo "01 * * * * /sbin/auto-backup.sh" >> /var/spool/cron/crontabs/root

After the weekend, I have new and strange results from my continued attempts to copy off the snapshot vmdk file created with ghettoVCB using the 2gbsparse directive.

There were three attempts at this since Friday - the Sat and Sun backup dirs contained only the .vmx file, which is consistent with what I've been seeing. However, to my total surprise, this morning's backup performed correctly!!! It contains multiple variable-length .vmdk files, which is what I would expect from the 2gbsparse format. This happened in spite of nothing being touched all weekend.

Has anybody seen this kind of inconsistent behavior using the 2gbsparse directive?

I cannot begin to imagine why the backup operation suddenly succeeds where it has failed for the previous 10 attempts. Any assistance would be very much appreciated!!
