When I look at vimtop on the VCSA, vCenter Server is using 90% of the memory.
How do I find out what is using all this memory?
This is an upgrade from vCenter 6.0, with a Horizon 7.4 environment running on it. According to VMware's compatibility matrix, these are compatible.
We started with 32 GB of memory, and I bumped it up to 44 GB, but that made little difference.
I looked in the vsphere-client logs, and the memory looks good there.
The database looks good.
The only alert in the VCSA management UI is for memory, which shows a high paging alert.
What is using all this memory?
This is causing the web client to bog down after a few days, making it impossible to manage vCenter or Horizon.
Best Regards
Can you please post the output of:
cloudvm-ram-size -S
Also, since you are going toward non-standard memory sizes, your JVM heap sizes may not be adjusted properly for the vCenter Server services: vSphere Web Client, Inventory Service, and SPS (vSphere Profile-Driven Storage).
You can use the following commands to adjust the heap size:
cloudvm-ram-size -C XXX vsphere-client
cloudvm-ram-size.bat -C XXX vspherewebclientsvc (the .bat variant is for vCenter on Windows)
Follow the same pattern for the other two processes.
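For reference, the full adjust-and-restart sequence for one of these services on the appliance might look like the sketch below. The 2048 MB value is purely an illustration, not sizing advice, and I am assuming the service is bounced with service-control so the new heap size takes effect:

```shell
# Illustrative sketch only (run in the VCSA shell): resize one service's
# memory allocation and restart it so the new heap size takes effect.

# Show the current per-service allocations
cloudvm-ram-size -S

# Stop the service, change its allocation (2048 MB is an example value),
# then start it again
service-control --stop vsphere-client
cloudvm-ram-size -C 2048 vsphere-client
service-control --start vsphere-client

# Confirm the new allocation
cloudvm-ram-size -S | grep vsphere-client
```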
output of cloudvm-ram-size -S:
# cloudvm-ram-size -S
Service-Name AllocatedMB MaxMB CurrentMB Curr-RSS Cache MapFiles MemoryLimit
-.mount -1 -1 -1 -1 -1 -1 -1
LinuxKernel -1 43517 43426 0 14 2 8796093022207
applmgmt 358 264 46 43 3 1 8796093022207
auditd -1 5 0 0 0 0 8796093022207
boot.mount -1 -1 -1 -1 -1 -1 -1
cloud-config -1 0 0 0 0 0 8796093022207
cloud-final -1 0 0 0 0 0 8796093022207
cloud-init -1 0 0 0 0 0 8796093022207
cloud-init-local -1 0 0 0 0 0 8796093022207
cm -1 372 98 97 0 0 8796093022207
content-library -1 696 151 149 1 1 8796093022207
crond -1 845 1 0 1 0 8796093022207
dbus -1 1 0 0 0 0 8796093022207
dev-disk-by\x2did-dm\x2dname\x2dswap_vg\x2dswap1.swap -1 -1 -1 -1 -1 -1 -1
dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dUt2S1ANggfk1F4MU1dwKAEnE0eFeu8db3Ze5R6JlG41UsMsLZA0gWgZQ3TpHxDUf.swap -1 -1 -1 -1 -1 -1 -1
dev-disk-by\x2dpartuuid-cf62c174\x2dc051\x2d4432\x2d8e96\x2deea39e70c539.swap -1 -1 -1 -1 -1 -1 -1
dev-disk-by\x2dpath-pci\x2d0000:00:10.0\x2dscsi\x2d0:0:0:0\x2dpart2.swap -1 -1 -1 -1 -1 -1 -1
dev-disk-by\x2duuid-ad4ff4a3\x2df500\x2d444c\x2db128\x2d8da00f9b140f.swap -1 -1 -1 -1 -1 -1 -1
dev-disk-by\x2duuid-d5e3b2e1\x2d4502\x2d4247\x2d98ee\x2dd2b4db9dd184.swap -1 -1 -1 -1 -1 -1 -1
dev-dm\x2d6.swap -1 -1 -1 -1 -1 -1 -1
dev-hugepages.mount -1 -1 -1 -1 -1 -1 -1
dev-mapper-swap_vg\x2dswap1.swap -1 -1 -1 -1 -1 -1 -1
dev-mqueue.mount -1 -1 -1 -1 -1 -1 -1
dev-sda2.swap -1 -1 -1 -1 -1 -1 -1
dev-swap_vg-swap1.swap -1 -1 -1 -1 -1 -1 -1
eam -1 466 45 41 3 1 8796093022207
haveged -1 5 0 0 0 0 8796093022207
irqbalance -1 0 0 0 0 0 8796093022207
kmod-static-nodes -1 0 0 0 0 0 8796093022207
lsassd -1 0 0 0 0 0 8796093022207
lvm2-activate -1 0 0 0 0 0 8796093022207
lvm2-lvmetad -1 3 0 0 0 0 8796093022207
lvm2-monitor -1 0 0 0 0 0 8796093022207
lwsmd -1 52 9 7 2 2 8796093022207
ntpd -1 2 0 0 0 0 8796093022207
perfcharts -1 537 18 17 0 0 8796093022207
rbd -1 262 41 39 2 2 8796093022207
rhttpproxy -1 32 9 7 1 1 8796093022207
rpcbind -1 1 0 0 0 0 8796093022207
rsyslog -1 6 0 0 0 0 8796093022207
saslauthd -1 1 0 0 0 0 8796093022207
sca -1 260 94 93 1 0 8796093022207
sendmail -1 6 0 0 0 0 8796093022207
sps -1 629 65 64 0 0 8796093022207
sshd -1 142 30 25 4 3 8796093022207
statsmonitor -1 28 11 2 8 1 8796093022207
storage-autodeploy.mount -1 -1 -1 -1 -1 -1 -1
storage-core.mount -1 -1 -1 -1 -1 -1 -1
storage-db.mount -1 -1 -1 -1 -1 -1 -1
storage-dblog.mount -1 -1 -1 -1 -1 -1 -1
storage-imagebuilder.mount -1 -1 -1 -1 -1 -1 -1
storage-log.mount -1 -1 -1 -1 -1 -1 -1
storage-netdump.mount -1 -1 -1 -1 -1 -1 -1
storage-seat.mount -1 -1 -1 -1 -1 -1 -1
storage-updatemgr.mount -1 -1 -1 -1 -1 -1 -1
sys-kernel-debug.mount -1 -1 -1 -1 -1 -1 -1
sysstat -1 0 0 0 0 0 8796093022207
system-getty.slice -1 -1 -1 -1 -1 -1 -1
system-lvm2\x2dpvscan.slice -1 -1 -1 -1 -1 -1 -1
system-systemd\x2dfsck.slice -1 -1 -1 -1 -1 -1 -1
systemd-fsck-root -1 0 0 0 0 0 8796093022207
systemd-journal-flush -1 0 0 0 0 0 8796093022207
systemd-journald -1 83 15 0 14 10 8796093022207
systemd-logind -1 1 0 0 0 0 8796093022207
systemd-networkd -1 1 0 0 0 0 8796093022207
systemd-networkd-wait-online -1 0 0 0 0 0 8796093022207
systemd-random-seed -1 0 0 0 0 0 8796093022207
systemd-remount-fs -1 0 0 0 0 0 8796093022207
systemd-resolved -1 0 0 0 0 0 8796093022207
systemd-sysctl -1 0 0 0 0 0 8796093022207
systemd-tmpfiles-setup -1 0 0 0 0 0 8796093022207
systemd-tmpfiles-setup-dev -1 0 0 0 0 0 8796093022207
systemd-udev-trigger -1 0 0 0 0 0 8796093022207
systemd-udevd -1 112 0 0 0 0 8796093022207
systemd-update-utmp -1 0 0 0 0 0 8796093022207
systemd-user-sessions -1 0 0 0 0 0 8796093022207
systemd-vconsole-setup -1 0 0 0 0 0 8796093022207
tmp.mount -1 -1 -1 -1 -1 -1 -1
updatemgr -1 430 33 30 3 2 8796093022207
vami-lighttp -1 1185 1 1 0 0 8796093022207
vaos -1 0 0 0 0 0 8796093022207
vapi-endpoint -1 485 231 230 1 0 8796093022207
vgauthd -1 9 1 0 1 1 8796093022207
vmafdd 66 0 0 0 0 0 8796093022207
vmonapi 15 24 3 1 2 2 8796093022207
vmtoolsd -1 9 1 0 0 0 8796093022207
vmware-firewall -1 0 0 0 0 0 8796093022207
vmware-vmon 5 124 18 1 16 1 8796093022207
vmware-vpostgres 3956 9858 687 206 480 435 8796093022207
vpxd -1 42752 40897 40865 31 25 8796093022207
vpxd-svcs -1 839 135 134 1 0 8796093022207
vsan-health -1 446 14 12 1 1 8796093022207
vsm -1 334 33 33 0 0 8796093022207
vsphere-client 2338 1988 675 673 1 0 8796093022207
vsphere-ui 2338 1753 28 28 0 0 8796093022207
xinetd -1 0 0 0 0 0 8796093022207
TOTAL(RAM=44262MB) 9076 87032 86837 42770 577 465 8796093022207
Your recommendation?
Thanks!
Apologies for the delayed response.
From the above output I have a few interesting observations:
1) TOTAL(RAM=44262MB) 9076 87032 86837 42770 577 465 8796093022207
What is interesting here is that although you have 44262 MB (approx. 44 GB) of RAM, only 9076 MB of it is statically allocated, and the rest is not being redistributed dynamically. Generally, when you increase the RAM, the additional memory should be dynamically allocated among the various processes.
2) Your PostgreSQL is not behaving properly: its allocation is 3956 MB, but its peak usage is already touching approx. 10 GB, and it is holding a lot of cached and mapped memory:
vmware-vpostgres 3956 9858 687 206 480 435
3) Your Linux kernel and vpxd are also draining all the juice from your environment: neither has any memory dynamically allocated to it, yet vpxd's resident set alone is around 40 GB:
LinuxKernel -1 43517 43426 0 14 2 8796093022207
vpxd -1 42752 40897 40865 31 25 8796093022207
If I were you, I would take the following steps:
1) Follow the vCenter best-practices document for Postgres and the other services; there are some very good recommendations there, such as increasing the initial thread pool for vpxd and putting the log and some other disks on separate LUNs.
2) Create a separate vCenter VM per those recommendations and migrate to the new vCenter (basically, you need to do an advanced VCSA deployment).
3) Add another vCenter VM to the existing PSC to spread the load across vCenter VMs; or, if the PSC is also cramped, go to a load-balanced 2-vCenter, 2-PSC configuration. (I am not currently aware of the I/O profile or size of your environment.)
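The over-allocation pattern above can also be spotted mechanically. Here is a rough sketch, assuming the -S column layout shown in your paste and using an excerpt of that data as a stand-in for the live command:

```shell
# Excerpt of the `cloudvm-ram-size -S` output above, used as sample input.
# On a live VCSA you would pipe the command itself into awk instead.
cat > /tmp/ramsize.txt <<'EOF'
Service-Name AllocatedMB MaxMB CurrentMB Curr-RSS Cache MapFiles MemoryLimit
applmgmt 358 264 46 43 3 1 8796093022207
vmware-vpostgres 3956 9858 687 206 480 435 8796093022207
vpxd -1 42752 40897 40865 31 25 8796093022207
vsphere-client 2338 1988 675 673 1 0 8796093022207
EOF

# Flag statically sized services (AllocatedMB > 0) whose peak usage (MaxMB)
# has exceeded that allocation; an AllocatedMB of -1 means "not managed".
awk 'NR > 1 && $2 > 0 && $3 > $2 {
    printf "%s: allocated %d MB, peak %d MB\n", $1, $2, $3
}' /tmp/ramsize.txt
```

On this sample it flags only vmware-vpostgres, which matches observation 2 above.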
Thanks! I think this points me in the right direction.
Do you have a good link for configuring the memory for VCSA 6.5, other than the different deployment sizes?
Thanks again,
Tom
Did you ever get this resolved?