VMware Cloud Community
anandgopinath
Enthusiast

Weird routing issue on vSAN Witness while using Witness Traffic Separation

Dear Team

We are facing a weird issue on our vSAN 7.0 U3 stretched cluster.

Our setup is below.

============

On both data nodes:

we have the management VMK, which now has two tags (management + vSAN witness) and a management VLAN X IP;

we have the vSAN VMK, with just the vsan tag and the default gateway (the gateway does not matter since the network is L2-stretched), and a vSAN data VLAN Y IP.
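For reference, WTS tagging of this kind is normally applied with esxcli; a minimal sketch, assuming the vmk naming described above (verify against your own vmk numbering before running):

```shell
# On each data node: add the "witness" traffic type to the management VMK,
# so witness traffic leaves via vmk0 (vmk names taken from the setup above).
esxcli vsan network ip add -i vmk0 -T=witness

# Confirm which VMK carries which vSAN traffic type
esxcli vsan network list
```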

On the witness:

we have the management vmk0, with the management tag and a management VLAN A IP (routable with VLAN X);

we have the vSAN vmk1, with the vsan tag and a vSAN witness VLAN Z IP (routable with VLAN X).

==========

Our issue is that the vSAN cluster goes into a partitioned state if we configure vmk1 on the vSAN witness with the "vsan" traffic tag.

Routing is configured on our network between VLAN Z and VLAN X, and we even added static routes on the vSAN witness so that vmk1 can reach VLAN X on the data nodes via the gateway on VLAN Z. The "override default gateway" option was also tried on vmk1, but to no effect.
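For context, the static-route step described above is typically done like this on the witness (the subnet and gateway addresses here are placeholders, not values from this setup):

```shell
# Static route on the witness: reach the data nodes' VLAN X subnet
# via the gateway on VLAN Z (addresses are placeholders)
esxcli network ip route ipv4 add -n 192.168.10.0/24 -g 192.168.30.1

# Check that the route was installed
esxcli network ip route ipv4 list
```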

The moment we untag vSAN traffic from vmk1 and move it to vmk0 on the vSAN witness, everything works fine.

I am lost and just want to understand whether this is expected behaviour or a bug (i.e. that with WTS we should use only one vmk on the witness?), as we cannot find any fault on our network side.
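One way to confirm the partition and tagging state described above is with the standard esxcli commands (exact output fields vary by build):

```shell
# Shows cluster membership / partition state on each node
esxcli vsan cluster get

# Shows which vmk interfaces are tagged for vsan / witness traffic
esxcli vsan network list
```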

 

Thanks in advance for any useful pointers.

 

3 Replies
TheBobkin
Champion

@anandgopinath, There is likely a misconfiguration or misunderstanding of the backing physical network topology (e.g. you were told it looks like 'A' but is actually 'X>Y>Z').

 

Is there a reason why you want to set this on its own vmk instead of just leaving it on vmk0, which clearly works?

anandgopinath
Enthusiast

@TheBobkin:

 

The reason we want to separate vSAN traffic on the witness onto vmk1 (and not vmk0) is to segregate management traffic and vSAN traffic onto different VLANs on the witness.

Is my understanding below of the traffic flow with our setup correct, or am I missing something?

VLAN X and VLAN Z are routable with each other via their respective gateways.

From data nodes to witness
===============

Source VLAN: VLAN X (management + witness traffic VLAN on vmk0 of both data nodes, via WTS)

Destination VLAN: VLAN Z (vSAN traffic on vmk1 of the witness)

From witness to data nodes
===============

Source VLAN: VLAN Z (vSAN traffic on vmk1 of the witness)

Destination VLAN: VLAN X (management + witness traffic VLAN on vmk0 of both data nodes)
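Assuming the flow above, each direction can be verified with vmkping, forcing traffic out of the expected interface (the IPs here are placeholders):

```shell
# From a data node: witness traffic should leave via vmk0 on VLAN X
vmkping -I vmk0 192.168.30.10   # witness vmk1 IP on VLAN Z (placeholder)

# From the witness: vSAN traffic should leave via vmk1 on VLAN Z
vmkping -I vmk1 192.168.10.11   # data node vmk0 IP on VLAN X (placeholder)
```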

 

Please help  

 

anandgopinath
Enthusiast

Dear @TheBobkin   / All

 

The issue was related to a backend network configuration problem on the ESXi cluster hosting the witness appliance: some ESXi hosts in the cluster did not have the witness vSAN VMK VLAN configured, and hence the witness was being isolated whenever it started on one of those hosts during the vSAN cluster stop/start.
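For anyone hitting the same thing: a quick way to spot this is to check, on every ESXi host that can run the witness VM, that the portgroup backing the witness's vSAN NIC carries the expected VLAN. A sketch for a standard vSwitch (for a vDS, check the distributed portgroup's VLAN in vCenter instead):

```shell
# Lists portgroups and their VLAN IDs on each standard vSwitch;
# the witness's vSAN portgroup should show the same VLAN on every host
esxcli network vswitch standard portgroup list
```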

 

Thanks for your guidance. 🙂
