Hi,
When I connect to the VDI, the App Volumes agent sometimes shows me error code 502.
Also, if I browse to the manager, the same thing happens randomly: "Server Unavailable (502) The server did not respond. Bad Gateway" and "This request is taking too long. Check that your services are running and retry."
Has anyone fixed this problem?
Regards.
Hi,
Did you see this KB?
https://kb.vmware.com/s/article/2114670
Hello Riccardo,
I have checked the KB and the port is set to 443.
I attach a screenshot where this can be verified.
Additionally, from the desktop I have verified via browser that https://FQDN:443, which I have configured, shows me the App Volumes Manager's login page.
Regards
Did you verify the firewall rules between the VDI and the App Volumes Manager server?
Is there a load balancer (or a proxy) in front of the App Volumes Manager? If yes, verify the request timeout.
Hello Riccardo,
Yes, there is an LB/proxy between the VDI and the App Volumes Managers, but the error appears both when browsing the manager directly and, randomly, when the VDI connects through the balancer.
Could the problem come from the default timeouts configured in nginx?
Not seeing anything on the internet about changing this setting, I have not touched any of it... I will investigate.
Regards.
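For anyone checking the nginx angle: the relevant directives are the proxy timeouts, which all default to 60 seconds. A minimal sketch of a reverse-proxy block for the managers is below; the upstream names, hosts, and timeout values are hypothetical, not taken from this thread, and the TLS certificate directives are omitted for brevity:

```nginx
# Hypothetical reverse proxy in front of two App Volumes Managers.
upstream appvolumes_managers {
    server avm1.example.local:443 max_fails=3 fail_timeout=30s;
    server avm2.example.local:443 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location / {
        proxy_pass https://appvolumes_managers;
        # All three default to 60s; raise them only if the manager
        # is genuinely slow to answer, otherwise leave the defaults.
        proxy_connect_timeout 60s;
        proxy_send_timeout    120s;
        proxy_read_timeout    120s;
    }
}
```

Note that open-source nginx only does passive health checking (`max_fails`/`fail_timeout`); active periodic probes are a feature of commercial balancers or nginx Plus.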
Hi
I think that can be a problem related to the load balancer.
The error display a 502 - the server did not respond.. as if it couldn't find its way back... but it is my thought due to the information here
You'd must analyze app volume manager logs and Nginx logs and do some troubleshooting on the hop of packets... from source to destination and find where the hop is interrupt
Try to create a Test VDI with the app volume agent pointing on one app volume manager (without LB) and verify that is all ok with VDI...
take a proof verify on FW (if the VLAN is different) from source to destination and reverse...
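To isolate which hop fails, it can help to probe each manager directly and then the LB VIP, comparing the HTTP status codes. A small sketch (the hostnames are hypothetical placeholders; `000` means the connection itself failed):

```shell
# Probe each App Volumes Manager directly, then the balancer VIP.
# -k skips certificate validation for internal self-signed certs.
for host in avm1.example.local avm2.example.local lb-vip.example.local; do
  code=$(curl -k -s -o /dev/null --connect-timeout 2 --max-time 5 \
              -w '%{http_code}' "https://${host}/")
  echo "${host}: ${code}"
done
```

If the managers answer 200 directly but the VIP returns 502, the balancer (health check, timeout, or persistence profile) is the place to look.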
Hello Riccardo,
The problem was due to the LB health check, which sampled the servers every 5 seconds instead of every 30.
Regards, and thanks for your help!!
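For reference, on an F5 LTM (the balancer mentioned later in this thread) the probe interval lives on the health monitor. A hedged sketch of the change; the monitor name is hypothetical:

```
# Raise the probe interval from 5s to 30s. The F5 rule of thumb for
# timeout is (3 * interval) + 1, hence 91 here.
tmsh modify ltm monitor https /Common/appvolumes_https_monitor interval 30 timeout 91
```

On other balancers the equivalent setting is usually called "health check interval" or "probe interval" on the backend pool.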
We have the same issue: two managers balanced with an F5, but the problem occurs on server number 2 only. If we access each server directly via browser, only server two returns the 502 Bad Gateway error. We also noticed a strange ODBC error in the production log of server two; it appears after we restart the App Volumes service, which stays "idle" for about five minutes and then actually starts. This is the error:
[2022-06-28 09:11:21 UTC] P1028 ERROR Failed to query database during application server initialization: ODBC::Error: 40001 (1205) [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Transaction (Process ID 86) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.: ALTER DATABASE "XXXXXXXX(name hidden)" SET READ_COMMITTED_SNAPSHOT ON
The situation is similar to this one: https://communities.vmware.com/t5/Horizon-Desktops-and-Apps/App-Volumes-4-Managers-2-and-3-hung-inst... At the end of that post it is said that "this can be resolved by configuring the database with the “READ_COMMITTED_SNAPSHOT” parameter", but our DBA has some doubts about the safety of this operation. Any idea?
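One detail worth noting for the DBA: `ALTER DATABASE ... SET READ_COMMITTED_SNAPSHOT ON` needs near-exclusive access to the database, so it deadlocks when another session (e.g. the second manager starting at the same time) holds connections, which matches the log above. A common, hedged approach is to run it once, manually, in a maintenance window; the database name below is a hypothetical placeholder:

```sql
-- Hypothetical database name. Run during a maintenance window with both
-- App Volumes Manager services stopped, so no other session holds locks.
-- WITH ROLLBACK IMMEDIATE kills any remaining open transactions.
ALTER DATABASE [AppVolumesDB] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Confirm the setting took effect (1 = enabled):
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'AppVolumesDB';
```

Enabling it uses tempdb row versioning, so it slightly increases tempdb load, but it does not change data; still, test it in a non-production environment first.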