Hello all,
I'm trying to improve the speed of my home lab. I have an all-in-one ESXi host:
1 physical host with 3 internal disks. One disk is where ESXi is installed and where I created the 2 nested ESXi VMs.
Now, until yesterday I was using a storage appliance deployed on the physical host, where I attached the remaining 2 local disks as RDMs, and these disks were presented via iSCSI to the 2 nested ESXi hosts. The performance was not that great, but it was working...
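For context, the iSCSI side of that setup was nothing special, just the software iSCSI adapter on each nested ESXi host pointed at the appliance. Roughly something like this (the adapter name vmhba64 and the appliance IP 192.168.10.50 are only placeholders, adjust to your environment):

esxcli iscsi software set --enabled=true              # enable the software iSCSI adapter
esxcli iscsi adapter list                             # note which vmhba it got
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.50:3260
esxcli storage core adapter rescan --adapter=vmhba64  # rescan so the LUNs show up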
Yesterday I tried something else: I gave the 2 disks directly to the 2 nested ESXi VMs as RDMs, using an LSI controller with Bus sharing set to Physical.
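In case someone wants to picture it, this is roughly what that looks like on the physical host: a physical-mode (pass-through) RDM pointer created with vmkfstools for each local disk, then the same pointer attached to both nested ESXi VMs on a dedicated LSI controller with Physical bus sharing. The device ID, datastore and folder names below are just placeholders, and the .vmx lines are the relevant entries as I understand them, not a complete config:

vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/SharedRDMs/shared-disk1.vmdk

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"          # or "lsisas1068", depending on which LSI controller you pick
scsi1.sharedBus = "physical"
scsi1:0.present = "TRUE"
scsi1:0.deviceType = "scsi-hardDisk"
scsi1:0.fileName = "/vmfs/volumes/datastore1/SharedRDMs/shared-disk1.vmdk"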
Both nested ESXi hosts have access to the disks; I can do vMotion, Storage vMotion, everything like "normal" shared storage...
I have to say that the performance has improved significantly, that is, when it is working. I get all kinds of SCSI locks and all kinds of errors/warnings, e.g. Long VMFS rsv time on 'Silver-Shared-01' (held for 289 msecs).
vMotion sometimes fails, same for clones... it is very unstable.
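If anyone wants to check the same thing: those "rsv time" messages presumably mean VMFS is falling back to SCSI-2 reservations for locking, since a local disk passed through as an RDM normally has no VAAI/ATS support. Something along these lines should show it (the device ID is just a placeholder):

esxcli storage vmfs lockmode list                          # shows ATS-only vs ATS+SCSI per datastore
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx   # ATS support on the underlying disk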
I know this is not even close to supported, but I wanted to see if there is something else I might try to fix this. If not, I'll go back to a storage appliance.
Thank you in advance.
This configuration is not going to work. You can't share the same physical storage among multiple ESXi VMs.
Hi,
Well, this is the answer I was expecting; I just wanted to give it a try. Actually, it *works*, but not as expected... Right now I even have a Datastore Cluster, can do Storage vMotion, vMotion, basically it "looks" & "feels" like normal storage. Until it goes haywire... and then all hell breaks loose.
Thank you for the confirmation.