Hello,
The VCF 4.0 bringup is now at "Deploy and Configure NSX-T Data Center" step.
On the vCenter I can see the VMs nsx01a (10.58.6.4), nsx01b (10.58.6.5), and nsx01c (10.58.6.6) created (their management IPs are pingable), but the NSX-T Management Cluster VIP (10.58.6.3) never becomes reachable. After some time, the nsx01[abc].ova appliances get redeployed over and over.
From the builder node's UI (or vcf-bringup.log) I eventually see:
NSXT_MANAGER_NON_OPERATIONAL NSX-T Manager operation status is false on 10.58.6.4
How can I troubleshoot this further?
Thx,
A.
Hello,
I found the issue.
I opened the console of the freshly deployed NSX-T VM and noticed that the password I expected to use was not accepted.
So I logged in with the standard root/vmware credentials and, from the prompt, ran "passwd root" with the password I had used in the Excel configuration spreadsheet.
The error "it is too simplistic/systematic" made me think that the builder node is not using the same algorithm as NSX-T when checking the Excel password...
...so I chose an acceptable one, updated the Excel spreadsheet, and restarted the overall bringup. This time the step completed successfully and I can also SSH in:
***************************************************************************
NOTICE TO USERS
WARNING! Changes made to NSX Data Center while logged in as the root user
can cause system failure and potentially impact your network. Please be
advised that changes made to the system as the root user must only be made
under the guidance of VMware.
***************************************************************************
root@vcf-nsx01a:~#
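For anyone hitting the same mismatch, it can help to pre-check the candidate password before putting it in the spreadsheet. The sketch below approximates typical pam_cracklib-style rules (minimum length plus four character classes); the exact thresholds are my assumption, and NSX-T's real policy may be stricter (e.g. rejecting dictionary words or monotonic sequences), so treat a pass here as necessary, not sufficient:

```python
import re

def looks_acceptable(pw: str, min_len: int = 12) -> bool:
    """Rough pre-check of a candidate root password.

    Approximates common pam_cracklib settings: minimum length plus
    lowercase, uppercase, digit, and special-character classes.
    NSX-T's actual checks may reject more (dictionary words,
    palindromes, repeated or sequential characters).
    """
    if len(pw) < min_len:
        return False
    required_classes = [
        re.search(r"[a-z]", pw),          # at least one lowercase
        re.search(r"[A-Z]", pw),          # at least one uppercase
        re.search(r"[0-9]", pw),          # at least one digit
        re.search(r"[^a-zA-Z0-9]", pw),   # at least one special character
    ]
    return all(required_classes)

print(looks_acceptable("vmware"))           # → False (too short, one class)
print(looks_acceptable("VMware1!VMware1!"))  # → True
```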
Thx,
A.
I have been battling with this error for a month now. I have varied the password in the Excel spreadsheet many times and still end up with the NSX appliance getting redeployed over and over all day until the bringup process finally fails. Any suggestions?
My suggestion is to use a complex password generator or validate complexity prior to deployment.
Meanwhile, I'll be more than happy to open a bug for this issue.
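As a sketch of that suggestion, here is a generator that only emits passwords containing all four character classes. The class requirements and special-character set are assumptions about what NSX-T accepts, so verify the result against the official password policy before use:

```python
import secrets
import string

SPECIALS = "!@#$%^&*"  # assumed-safe special characters

def generate_password(length: int = 16) -> str:
    """Generate a random password with at least one lowercase letter,
    one uppercase letter, one digit, and one special character
    (assumed NSX-T-style complexity rules)."""
    alphabet = string.ascii_letters + string.digits + SPECIALS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in SPECIALS for c in pw)):
            return pw

print(generate_password())
```

Using `secrets` rather than `random` matters here, since this is a credential; the rejection-sampling loop simply retries until all four classes happen to be present.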
I got the same issue building a nested lab using VLC 4.1. When the nsx-1.wld.vcf.sddc.lab appliance is powered on, you can access it and it can communicate with other devices, but the deployment process is unable to communicate with the NSX appliance and keeps redeploying the OVF in a loop. Any support would be appreciated.
It's most likely storage latency; at the bottom of this link there are instructions to reduce the size of the appliance: https://www.lab2prod.com.au/2020/09/vcf-wld-usingapi-multinic.html
Also, have you looked at the domainmanager log file during the process? From the naming, I assume this is for a WLD and not the management domain?
You can pause or shut down SDDC Manager / Cloud Builder until the manager(s) are fully operational and then restart the task. I've gotten around this a couple of times in a nested lab using this method.
FYI, you have 15 minutes after the OVF is deployed for everything to become operational, after which it is torn down and retried. This is repeated several times and then the task fails if it never comes up.
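Given that window, it can help to watch for the manager's API port to come up rather than waiting on the VIP. A minimal reachability probe is sketched below; the host, port, and 15-minute budget are illustrative values for a nested lab, not anything Cloud Builder actually runs:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 900.0,
                  interval: float = 10.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, or return
    False once the timeout (default 900 s, matching the 15-minute
    bringup window) expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True  # port is accepting connections
        except OSError:
            time.sleep(interval)  # refused or unreachable; retry
    return False

# e.g. watch the first manager's HTTPS port during bringup:
# wait_for_port("10.58.6.4", 443)
```

A `True` result only proves TCP reachability; the manager service itself may still be initializing, so it is a coarse signal for deciding when to resume the paused task.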
If the issue is resolved, please mark the thread as resolved.
After changing the security port group and updating /opt/vmware/evorack-imaging/config/via.properties, it works fine.