Part warning, part question....
My ESXi hosts were v5.5.0 2068190 (straight from the 5.5U2-RollupISO2 CD). I applied the following patches:
ID: ESXi550-201410101-SG Impact: Important Release date: 2014-10-15 Products: embeddedEsx 5.5.0 Updates esx-base
ID: ESXi550-201410401-BG Impact: Critical Release date: 2014-10-15 Products: embeddedEsx 5.5.0 Updates esx-base
ID: ESXi550-201410402-BG Impact: Important Release date: 2014-10-15 Products: embeddedEsx 5.5.0 Updates misc-drivers
ID: ESXi550-201410403-BG Impact: Important Release date: 2014-10-15 Products: embeddedEsx 5.5.0 Updates sata-ahci
ID: ESXi550-201410404-BG Impact: Important Release date: 2014-10-15 Products: embeddedEsx 5.5.0 Updates xhci-xhci
ID: ESXi550-201410405-BG Impact: Important Release date: 2014-10-15 Products: embeddedEsx 5.5.0 Updates tools-light
ID: ESXi550-201410406-BG Impact: Important Release date: 2014-10-15 Products: embeddedEsx 5.5.0 Updates net-vmxnet3
Afterwards they were at v5.5.0 2143827.
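For anyone patching the same way, the bundle can be applied from the ESXi shell with esxcli. This is a sketch; the datastore path and depot zip filename are assumptions, not the exact ones I used:

```shell
# Put the host in maintenance mode before patching
esxcli system maintenanceMode set --enable true

# Apply the October 2014 patch bundle from a local depot zip
# (path and filename are placeholders -- use your own download)
esxcli software vib update --depot=/vmfs/volumes/datastore1/ESXi550-201410001.zip

# Reboot, then confirm the new build number afterwards
reboot
```

After the reboot, `vmware -vl` should report the new build (2143827 in my case).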
Once upgraded, my 2 Citrix NetScaler VPX appliances (VMware hardware version 8, Citrix NetScaler OS v10.5.52.11nc (latest), FreeBSD 64-bit, E1000 NIC) stopped responding on the network. They work fine on the older 5.5U2 release, but start dropping packets intermittently and then ultimately drop off the network when migrated to a host that has the above-listed patches. I have 2 in High Availability mode; this occurs even if just one is on and the other off. It goes back to normal when vMotioned back to the pre-patched host.
I've tried switching networks, removing and re-adding NICs, upgrading to hardware version 10, and even deleting them altogether and deploying the latest version from Citrix's website. If it doesn't drop off entirely, it drops a lot of packets. You can watch the NetScaler console show NODE FAIL, NODE DOWN, and NODE UP events many times a minute.
FIX (or workaround?)
Hi all,
When working with VMware technical support they found a possible solution, which has been working for me for almost an hour now.
Maybe you guys can test this solution also?
Enter shell mode on the NetScaler, then:
1) Find where loader.conf is located on the NetScaler VM: find / -name loader.conf
For the uploaded NetScaler VM there are 2 loader.conf files, ./flash/boot/defaults/loader.conf and ./flash/boot/loader.conf; we only need to change the first one.
2) Add "hw.em.txd=512" to loader.conf; this changes the Tx ring size to 512. (Note: do not set the ring size to 256, as this will cause the NetScaler VM to core dump.)
3) Reboot the NetScaler VM.
4) Migrate it back to a host with the latest patches.
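The steps above can be sketched as the following shell session (run after dropping from the NetScaler CLI to the FreeBSD shell; paths are the ones reported in this thread):

```shell
# From the NetScaler CLI, enter the FreeBSD shell first:
#   > shell

# 1) Locate the loader.conf copies
find / -name loader.conf
# expected per this thread: /flash/boot/defaults/loader.conf
#                       and /flash/boot/loader.conf

# 2) Append the larger E1000 Tx ring size to the first (defaults) copy.
#    Do NOT use 256 -- per this thread that makes the VPX core dump.
echo 'hw.em.txd=512' >> /flash/boot/defaults/loader.conf

# 3) Reboot so the loader picks up the new tunable
reboot
```

After the reboot you can migrate the VM back to a patched host (step 4).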
Good luck.....
Edit: still working fine, also under user load in a production environment.
It's not recommended to upgrade the virtual hardware on virtual appliances. Did you also change something with the NIC?
// Linjo
Same issue in our esx environment since the updates from yesterday.
The NetScaler starts and the VMware console is responsive, but after about 5 minutes with a running Citrix Gateway connection there is no communication to the NetScaler website from inside or outside our network (DMZ, LAN). No more ping to the VIPs. Only a restart of the NetScaler VM brings the connections back up, until a user logs in or a XenDesktop connection starts.
Any Ideas?
Hi,
We have the same problem; the NetScaler works for a couple of minutes and then I need to do
dis int 0/1
reset int 0/1
en in 0/1
and then it works again for a couple of minutes with a low user count.
I see a lot of packet drops in the appliance; this has been happening since the latest VMware ESX patches from October 2014.
I also created a case with VMware (14543709210) for this issue.
Hey guys, thanks for adding to this thread. I too have created a case, SR # 14543312510. I'll let you know if we come to any conclusions.
Same exact issue here with the NetScaler virtual appliance on ESXi 5.5 Update 2 build 2143827. Will be opening a ticket with VMware.
For the moment we built a nested ESXi 5.1U2 host on top of the updated 5.5U2. On this nested host the NetScaler is working without problems.
I rolled back to ESXi 5.5 Update 2 (Build 2068190) and the NetScaler is working great, FYI.
Yeah, that's what I had to do on one of my hosts to get it to work.
Just to keep you guys updated: I have a WebEx scheduled with technical support for tomorrow morning. They want to collect data to investigate.
In the meantime I isolated a host and reinstalled ESXi 5.5 Update 2 (Build 2068190), on which the NetScaler VPX is working just fine again.
I'm glad I found this thread. Rolling back to build 2068190 fixed the networking issues with our Netscaler.
So do we need a patch from VMware or Citrix so that the Netscaler runs with this last batch of updates from VMware?
Thanks for creating the thread.
We had the same issue yesterday during our upgrade of VMware.
NetScaler 10.1 and 10.5: same issue.
Hopefully there is a fix soon.
Allowing promiscuous mode on the vSwitch helped for me, but this is a home lab, so I'm not sure if it works in production with more load.
vSwitch Properties - Security - Promiscuous mode - Accept
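The same setting can be applied from the ESXi shell; this is a sketch and the vSwitch name (vSwitch0) is an assumption, so substitute your own:

```shell
# Allow promiscuous mode on a standard vSwitch (name is a placeholder)
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=true

# Verify the effective security policy
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```

Note this opens the policy for every port on that vSwitch, which is why it's questionable for production.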
It seems the update enforces more security on the switch.
vmkernel.log ->
2014-10-21T15:48:49.606Z cpu2:89298)etherswitch: L2Sec_EnforcePortCompliance:155: client NSVPX-ESX requested promiscuous mode on port 0x200000e, disallowed by vswitch policy
2014-10-21T15:48:49.642Z cpu3:89289)NetPort: 1632: disabled port 0x200000e
2014-10-21T15:48:49.643Z cpu3:89289)Net: 3354: disconnected client from port 0x200000e
Thomas
We have the same issue after applying 2143827 patch. Fortunately we had one host left at 2068190 and migrated them there.
Anxiously awaiting a resolution.
We ran into this issue last night. A Citrix tech had us do a rollback on the ESX server (Shift+R during reboot) to remove the patch, and it fixed the problem.
The rollback resolved the issue; we also created tickets with VMware and Citrix.
I have the same problem at a customer site. We left one of the 8 ESXi hosts on ESXi 5.5 U2 without the latest patches.
Promiscuous mode is not an option inside the production environment.
Hopefully a fix will be available shortly.....
vSwitch Properties - Security - Promiscuous mode - Accept
is a workaround in a test environment, but not possible in our production environment. Still working with VMware support to locate and fix the issue.
I have the same problem with other VMs too. All have the E1000 vNIC configured. Very strange...
It's a little bit like Russian roulette.
Are your other VMs with the issue FreeBSD 64-bit?