VMware Cloud Community
santunezlenovo
Contributor

Error deploying vRSLCM

Hello Guys

I am implementing VCF and the vRSLCM deployment is failing. The management VLAN is configured with MTU 9000, and vmk0 was changed from 1500 to 9000. When I run vmkping -s 9000 10.0.246.254 (the X-Region gateway) from an ESXi host, I get no response, but with vmkping -s 1500 I do get a reply. The switch ports are also set to MTU 9000.
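
Note: a jumbo-frame test with vmkping needs to leave room for 28 bytes of IP/ICMP headers and should disable fragmentation, so -s 9000 will not fit in a 9000-byte MTU even when jumbo frames work end to end. A minimal check from the ESXi Shell, assuming vmk0 is the management VMkernel interface, would be:

    # list VMkernel interfaces and their configured MTU
    esxcli network ip interface list
    # send an unfragmented 8972-byte payload (8972 + 28 header bytes = 9000) to the X-Region gateway
    vmkping -I vmk0 -d -s 8972 10.0.246.254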

The error is the following:

Message: vRealize Suite Lifecycle Manager deployment failed.
Correction message: Check if the Jumbo frames between SDDC Manager network and the vRealize Suite Lifecycle Manager network are enabled and if the required ports listed at https://ports.esp.vmware.com are open.

Reference Token: QT04T5

Reason: vRSLCM deployer failed: Local command /iso_store/ovftool/ovftool --skipManifestCheck --powerOn --diskMode=thin --acceptAllEulas --allowExtraConfig --ipProtocol=IPv4 --ipAllocationPolicy=fixedPolicy --targetSSLThumbprint=4E:7C:61:9C:78:F7:0D:FB:6C:00:47:7E:68:8E:01:35:83:6D:77:B0 --datastore=sfo-m01-cl01-ds-vsan01 --network=X-Region --prop:vami.DNS.VMware_vRealize_Suite_Life_Cycle_Manager_Appliance=10.0.10.11,10.0.10.13 --prop:vami.netmask0.VMware_vRealize_Suite_Life_Cycle_Manager_Appliance=255.255.255.0 --prop:vami.gateway.VMware_vRealize_Suite_Life_Cycle_Manager_Appliance=10.0.246.254 --name=vcf-vrlcm --prop:vami.hostname=vcf-vrlcm.pe280.bally.local --prop:vami.ip0.VMware_vRealize_Suite_Life_Cycle_Manager_Appliance=10.0.246.2 --X:waitForIp --prop:va-fips-enabled=False --prop:varoot-password=******** /nfs/vmware/vcf/nfs-mount/bundle/3132b806-f891-11ed-b67e-0242ac120002/bundle-79587/vrslcm_install/VMware-vLCM-Appliance-8.10.0.6-21331275_OVF10.ova vi://administrator@vsphere.local:************@vcenter-benavides.pe280.bally.local/VCF-BENAVIDES/host/VCF-VSAN-BENAVIDES/Resources/sfo-m01-cl01-rp-sddc-mgmt executed successfully with exit value true

LocalProcess INFO: 2023-08-15 00:34:14 - Opening OVA source: /nfs/vmware/vcf/nfs-mount/bundle/3132b806-f891-11ed-b67e-0242ac120002/bundle-79587/vrslcm_install/VMware-vLCM-Appliance-8.10.0.6-21331275_OVF10.ova
LocalProcess INFO: 2023-08-15 00:34:14 - The manifest does not validate
LocalProcess INFO: 2023-08-15 00:34:14 - Opening VI target: vi://administrator%40vsphere.local@vcenter-benavides.pe280.bally.local:443/VCF-BENAVIDES/host/VCF-VSAN-BENAVIDES/Resources/sfo-m01-cl01-rp-sddc-mgmt
LocalProcess INFO: 2023-08-15 00:35:10 - Error:
LocalProcess INFO: 2023-08-15 00:35:10 - - An error occurred during host configuration: Failed to attach VIF: RPC call to NSX management plane timeout.
LocalProcess INFO: 2023-08-15 00:35:10 - Warning:
LocalProcess INFO: 2023-08-15 00:35:10 - - The manifest is present but user flag causing to skip it
LocalProcess INFO: 2023-08-15 00:35:10 - Completed with errors

4 Replies
CyberNils
Hot Shot
Accepted Solution

Change vmk0 back to 1500 bytes MTU. That is the recommended setting.
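
A minimal sketch of that change from the ESXi Shell, assuming vmk0 is the management VMkernel interface on each host:

    # set the management VMkernel interface back to the standard 1500-byte MTU
    esxcli network ip interface set -i vmk0 -m 1500
    # verify the new MTU value
    esxcli network ip interface list

The same change can also be made per host in the vSphere Client under VMkernel adapters > Edit Settings.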



Nils Kristiansen
https://cybernils.net/
LUCKY1011
Enthusiast

Recommendation

To test an MTU size of 9,000, the database ran an OLTP workload while the virtual machine was moved to another server with vMotion. We captured the time to complete the vMotion as well as New Orders per Minute (NOPM) and Transactions per Minute (TPM) for analysis. Increasing the MTU size showed significant gains in these metrics:

  • vMotion time to complete
  • NOPM
  • TPM

A larger MTU size of 9,000 reduced the time to complete moving the database with an active workload compared to the default MTU size of 1,500. This time saving led us to label this best practice as a Day 1, Highly Recommended procedure.

NOPM and TPM also increased, but not as substantially as the vMotion time savings. The overall benefit of this best practice is more consistent database performance during vMotion, along with a substantial reduction in the time needed to move the database.

Implementation Steps

Process to configure MTU size = 9000 for the vMotion network

  1. Configure MTU size = 9000 on the vMotion distributed switch
    1. Select the distributed switch used for vMotion in the Networking tab in vSphere and select Settings
    2. In Edit Settings, select Advanced and change the MTU to 9000
    3. Click OK to save the change
  2. Configure MTU size = 9000 on the VMkernel adapter (a CLI sketch of this step follows the note below)
    1. Log in to vSphere, select the ESXi host, and click the Configure tab
    2. Select VMkernel adapters
    3. Under VMkernel adapters, select the vMotion network and click Edit
    4. In Edit Settings, under VMkernel port settings, change the MTU from 1500 to 9000
    5. Click OK to save the change
    6. Repeat the same steps on the other hosts
  3. Configure MTU size = 9216 on the physical switch
    1. At the switch level, run show running configuration interface ethernet to find the port range
    2. configure terminal
    3. interface range ethernet <port range> (e.g. 1/1/25-1/1/2…)
    4. mtu 9216
    5. exit
    6. write memory (to save the configuration)

Note: To change it at the network switch level, log in to the switch console and update the MTU.
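
As referenced in step 2, a rough CLI equivalent for the per-host VMkernel adapter change, assuming vmk1 is the vMotion VMkernel interface (adjust the name for your hosts):

    # raise the vMotion VMkernel interface to a 9000-byte MTU
    esxcli network ip interface set -i vmk1 -m 9000
    # confirm the configured MTU
    esxcli network ip interface list
    # validate jumbo frames end to end (8972 payload + 28 header bytes = 9000)
    vmkping -I vmk1 -d -s 8972 <vMotion IP of another host>

Step 1 (the distributed switch MTU) still has to be changed in vCenter; esxcli only covers the per-host VMkernel interface.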

CyberNils
Hot Shot

This is not relevant. We are not talking about the vMotion network here.



Nils Kristiansen
https://cybernils.net/
cristiangh
Contributor

Hi! You have to set the MTU to 9216 on all VLANs on the switch.
