Hello,
I have a file server VM here and local disk C: is full (40 GB). I want to expand the volume size in vSphere, but the option is greyed out. After doing some research, I found that the two snapshots attached to this particular VM are the reason I can't extend the volume.
My questions are:
The datastore still has 4.49 TB remaining
Your friend doesn’t know what they are talking about.
You would lose the ability to revert the VM to the state it was in when the snapshot was created.
That is a better situation than the one you face right now with a snapshot of that size! (you have not suffered any data loss)
I shall have to leave your questions to more knowledgeable folks... but watch out for rogue snapshots in the future; delete them when they are no longer needed (usually after a backup or upgrade completes).
Hahaha, thank you Scott.
Can I delete that large snapshot right now without it affecting that server (performance etc.)?
Also, the datastore it's housed on only has 4.49TB storage remaining and the snapshot is 9.03 TB.
Does this matter?
Those were the questions I am leaving to “more knowledgeable folks”
However, I believe performance will definitely be impacted (there’s 9TB of data to be merged from your child disk to the parent disk), and whether you need free space or not depends on whether the virtual disks of the VM are thick provisioned or thin provisioned.
Whatever you do, make sure you have a backup first.
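As a quick illustration of the thick/thin distinction Scott mentions: on most Unix-like filesystems (VMFS included), ls -s shows how many blocks a file actually occupies, and for a thin-provisioned -flat.vmdk that block count is smaller than the apparent file size. A minimal sketch, using an ordinary sparse file as a stand-in:

```shell
# Create a sparse file as a stand-in for a thin-provisioned -flat.vmdk:
# the apparent size is 1 GB, but almost no blocks are actually allocated.
truncate -s 1G sparse-demo.img
ls -lsh sparse-demo.img   # first column (allocated blocks) << 1.0G apparent size
rm sparse-demo.img
```

The same pattern shows up later in this thread's ls -lisah output, where some sesparse files report far fewer allocated blocks than their apparent size.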
Thanks again!
We have Veeam running nightly and it backs up all our virtual machines, so I suspect it should be fine.
If anyone else can clarify what is best practice, I'd really appreciate it.
Deleting a snapshot can affect VM performance when you have slow storage and need to commit a lot of data back.
Running a snapshot for as long as you have is a risk. Don't do it again. I ask every customer whether they would want to go back in time on their company's main file server by half a year. Nobody wants to do that, so there is no reason to keep the snapshot.
You can press "Delete all" within the GUI.
Regards,
Joerg
Welcome to the Community,
Also, the datastore it's housed on only has 4.49TB storage remaining and the snapshot is 9.03 TB.
To find out whether there's enough disk space available to safely delete the snapshot, please post the output of ls -lisa, as asked for by IRIX201110141.
With a 15TB datastore size, a ~9TB snapshot, and ~4.5TB free disk space, the VM's virtual disk must have been thin provisioned, which means that deleting the snapshot will require temporary disk space.
André
Hi Team,
Thanks for your help so far!
Where do I run this ls -lisa command? Via the command line?
I have PowerCLI and I can also remote in via SSH, but not with the root account, as I was not given the password and the person who set it up is long gone.
Thanks again
cd FS03, please.
Can you share the output of "ls -lisah"? The 'h' is for human-readable and makes the numbers clearer.
Also a df -h, which shows the datastore usage. By the way, there is no need to create screenshots; just use copy & paste.
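For anyone following along: these commands are run from an SSH session on the ESXi host, inside the VM's folder on the datastore. A sketch (the VOL4/FS03 path is taken from this thread and will differ on other systems):

```shell
# Paths below are the ones from this thread; adjust to your own datastore/VM.
# cd /vmfs/volumes/VOL4/FS03   # uncomment on the ESXi host
ls -lisah    # inode, allocated blocks, and human-readable sizes of the VM files
df -h        # usage of all mounted datastores
```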
Regards,
Joerg
This is the output of ls -lisah
Thank you!
total 11282721024
3524 128 drwxr-xr-x 1 root root 92.0K Apr 30 13:11 .
4 1024 drwxr-xr-t 1 root root 72.0K Sep 13 2019 ..
25167044 3072 -rw------- 1 root root 2.5M Sep 13 2019 FS03-000001-ctk.vmdk
16778436 24576 -rw------- 1 root root 181.0M Sep 13 2019 FS03-000001-sesparse.vmdk
20972740 0 -rw------- 1 root root 463 Sep 14 2019 FS03-000001.vmdk
96470212 3072 -rw------- 1 root root 2.5M Apr 30 13:06 FS03-000002-ctk.vmdk
113247428 16177152 -rw------- 1 root root 15.5G May 1 05:33 FS03-000002-sesparse.vmdk
117441732 0 -rw------- 1 root root 443 Apr 30 13:06 FS03-000002.vmdk
1220 0 -rw-r--r-- 1 root root 1.7K Sep 13 2019 FS03-13d12057.hlog
62915780 16777216 -rw------- 1 root root 16.0G Sep 13 2019 FS03-Snapshot1237.vmem
67110084 2048 -rw------- 1 root root 1.3M Sep 13 2019 FS03-Snapshot1237.vmsn
109053124 64 -rw------- 1 root root 19.5K Sep 13 2019 FS03-Snapshot1238.vmsn
100664516 0 -rw------- 1 root root 13 Sep 13 2019 FS03-aux.xml
12584132 3072 -rw------- 1 root root 2.5M Sep 13 2019 FS03-ctk.vmdk
146801860 16777216 -rw------- 1 root root 16.0G Jan 18 04:49 FS03-f75025eb.vswp
4195524 37175296 -rw------- 1 root root 40.0G Sep 13 2019 FS03-flat.vmdk
54527172 64 -rw------- 1 root root 8.5K Apr 30 13:06 FS03.nvram
8389828 0 -rw------- 1 root root 641 Sep 13 2019 FS03.vmdk
58721476 0 -rw------- 1 root root 933 Apr 30 13:06 FS03.vmsd
104858820 0 -rwxr-xr-x 1 root root 3.1K Apr 30 13:06 FS03.vmx
138413252 0 -rw------- 1 root root 0 Jan 18 04:49 FS03.vmx.lck
142607556 0 -rwxr-xr-x 1 root root 3.1K Apr 30 13:06 FS03.vmx~
50332868 5120 -rw------- 1 root root 4.5M Sep 13 2019 FS03_2-000001-ctk.vmdk
41944260 1024 -rw------- 1 root root 36.3G Sep 13 2019 FS03_2-000001-sesparse.vmdk
46138564 0 -rw------- 1 root root 445 Sep 14 2019 FS03_2-000001.vmdk
121636036 5120 -rw------- 1 root root 4.5M Apr 30 13:06 FS03_2-000002-ctk.vmdk
125830340 1531897856 -rw------- 1 root root 1.5T May 1 05:32 FS03_2-000002-sesparse.vmdk
130024644 0 -rw------- 1 root root 452 Apr 30 13:11 FS03_2-000002.vmdk
37749956 5120 -rw------- 1 root root 4.5M Sep 13 2019 FS03_2-ctk.vmdk
29361348 9663676416 -rw------- 1 root root 9.0T Sep 13 2019 FS03_2-flat.vmdk
33555652 0 -rw------- 1 root root 571 Sep 13 2019 FS03_2.vmdk
88081604 3072 -rw------- 1 root root 2.3M Sep 13 2019 vmware-13.log
83887300 28672 -rw------- 1 root root 27.3M Sep 13 2019 vmware-14.log
79692996 1024 -rw------- 1 root root 304.0K Sep 13 2019 vmware-15.log
75498692 22528 -rw------- 1 root root 21.7M Sep 13 2019 vmware-16.log
71304388 3072 -rw------- 1 root root 2.6M Sep 13 2019 vmware-17.log
150996164 8192 -rw-r--r-- 1 root root 7.8M Jan 17 22:38 vmware-18.log
92275908 7168 -rw-r--r-- 1 root root 6.1M May 1 03:20 vmware.log
This is the output of df -h
FS03 is located on VOL4!
Filesystem Size Used Available Use% Mounted on
VMFS-5 8.0T 1.2T 6.8T 15% /vmfs/volumes/VOL1
VMFS-6 24.0T 20.2T 3.8T 84% /vmfs/volumes/VOL2
VMFS-6 15.0T 5.6T 9.4T 37% /vmfs/volumes/VOL3
VMFS-6 15.0T 10.5T 4.5T 70% /vmfs/volumes/VOL4
vfat 285.8M 205.8M 80.0M 72% /vmfs/volumes/55b70951-8359c070-ecb1d78a78d0
vfat 249.7M 173.1M 76.7M 69% /vmfs/volumes/361fd378-61a418af-b1bfb3d0242e
vfat 249.7M 172.8M 76.9M 69% /vmfs/volumes/9898347f-6c8107d8-11d9834ad2ee
vfat 4.0G 19.8M 4.0G 0% /vmfs/volumes/59cf0f06-19429230-f40343ac87b8
The snapshots are only around 1.5-1.6 TB in total, so the value shown in the Snapshot Manager is wrong or misleading.
When your disks are already thick provisioned, a "Delete All" doesn't need additional space. All changes will be written back into the original -flat file. See the VMware Knowledge Base.
You should start the snapshot deletion on a weekend. Don't worry about the progress bar when it hangs at 95 or 99% for ages... let the process run. You can estimate the time needed by using esxtop to check the read and write values for the given LUN/storage HBA. If you get 80 MB/s for writes, it should take up to 6 hours. The timestamps on the file names are also updated during the process.
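As a rough sketch of that calculation (the ~1.5 TB delta and the 80 MB/s write rate are the figures from this thread; any awk will do):

```shell
# Estimate how long committing the snapshot deltas will take at a
# sustained write rate (both numbers are assumptions from this thread).
DELTA_GB=1536    # ~1.5 TB of sesparse delta data to commit (from ls -lisah)
WRITE_MBPS=80    # sustained write rate observed via esxtop
awk -v gb="$DELTA_GB" -v rate="$WRITE_MBPS" \
    'BEGIN { printf "estimated time: %.1f hours\n", gb * 1024 / rate / 3600 }'
```

With these numbers it prints "estimated time: 5.5 hours", which lines up with the "up to 6 hours" estimate above.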
Regards,
Joerg
FS03_2-flat.vmdk can grow by 1.5 TB plus a bit.
FS03-flat.vmdk can grow by 16 GB plus a bit.
Both should fit, but to consolidate 1.5 TB of deltas while the VM is working you need some extra nerves - we are talking about half a year of data/work.
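That fit can be sanity-checked with quick arithmetic (a sketch; the free-space and delta figures are taken from the df -h and ls -lisah output above):

```shell
# With thin disks, "Delete All" grows the base -flat by roughly the amount
# of data held in the sesparse deltas, so the deltas must fit in free space.
FREE_TB=4.5     # free space on VOL4 (from df -h)
NEED_TB=1.6     # ~1.5 TB FS03_2 delta + ~16 GB FS03 delta, rounded up
awk -v free="$FREE_TB" -v need="$NEED_TB" \
    'BEGIN { if (free > need) print "should fit"; else print "will NOT fit" }'
```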
You will sleep better if you do that while the VM is powered off.
If it helps: expanding your system disk after consolidating ONLY FS03.vmdk would probably take less than an hour of downtime...
@Joerg
Sorry - I had not seen your post and don't want to interfere here if you are already looking into it...
Ulli
Thank you all for your help!
Is it possible to leave the VM powered on while I delete the snapshot? We're concerned about it not powering back on, as we've been having issues with this server.
If you think it's best to turn it off, then I will do so.
Leave it on. If possible, avoid any kind of disk utilization during the delete. I mean: if you use Windows Dedup, make sure that no optimization or garbage-collection job runs during the snapshot delete! The same goes for an antivirus full scan.
If you shut down the VM, you cannot start it again until the snapshot delete is done. If you have some kind of downtime window, like no users during the night, you can try restarting the VM and pressing F5/F8 to get the Windows boot menu. Then you can start the snapshot delete, but if there is a need, you can still boot the Windows OS.
Regards,
Joerg
Although the question has already been correctly answered by IRIX201110141, here's one more hint.
Make sure that backup doesn't kick in for that VM while "Delete All" snapshots is in progress, i.e. temporarily pause the backup job!
André
Thank you all for your help!
Everything went to plan and the server is functioning.