I'm interested in setting up a virtual NAS device on my ESXi host.
As this will primarily be serving up storage to other VMs on the same ESXi host, what sort of speeds across the virtual network can I expect?
Thank you,
It depends on your disks, your CPUs, and how overcommitted your CPUs are. As a point of reference, the underlying vSwitch performance on my Opteron 1352 test box is about 1.3 Gbps (as measured with iperf running on Debian Linux on two VMs).
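A measurement like that can be reproduced with iperf between two VMs on the same vSwitch. A minimal sketch, assuming iperf is installed in both guests (the IP address below is a placeholder for the server VM's address):

```shell
# On the first VM (the server side) -- listens on the default port 5001:
iperf -s

# On the second VM (the client) -- replace 192.168.1.10 with the server
# VM's IP; -t 30 runs for 30 seconds, -i 5 prints a report every 5 s:
iperf -c 192.168.1.10 -t 30 -i 5
```

Running it a few times and averaging gives a more trustworthy number than a single pass, since other VMs on the host can skew an individual run.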
Please award points to any useful answer.
The intent is to purposely undercommit.
The mobo will be running an i7 950, the NAS server will be Openfiler, and the mobo has 6 SATA ports.
If I can use VMDirectPath with a PCI-X card, I can have proper hardware RAID5 across 8 drives.
1.3 Gbps may or may not be enough for 7 drives in RAID5.
Ideally, the internal virtual network wouldn't be restricted to realistic physical Ethernet speeds.
When traffic stays on the vSwitch, it goes as fast as the host can push it. You will have one challenge with your potential setup: ESXi brings up storage before it begins to start virtual machines, but since your virtual machines will depend on a VM for storage, they'll end up orphaned when you boot the host. You may have to add some post-boot scripts to the host to clean that up.
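A rough sketch of such a post-boot script, assuming the ESXi busybox shell and `vim-cmd` — the VM IDs, the sleep duration, and the adapter name are all placeholders you would have to adjust for your host (look IDs up with `vim-cmd vmsvc/getallvms`):

```shell
#!/bin/sh
# Example fragment for /etc/rc.local.d/local.sh on ESXi.
# Power on the NAS VM first, wait for it to export its storage,
# then start the VMs that live on that storage.

# Power on the NAS VM (assume its ID is 1):
vim-cmd vmsvc/power.on 1

# Give the NAS time to boot and bring up its NFS/iSCSI export:
sleep 120

# Rescan so ESXi notices the datastore served by the NAS VM
# (vmhba33 is a typical software-iSCSI adapter name; yours may differ):
esxcfg-rescan vmhba33

# Now power on the dependent VMs (IDs 2 and 3 here):
vim-cmd vmsvc/power.on 2
vim-cmd vmsvc/power.on 3
```

A fixed `sleep` is crude; a more robust version would poll for the datastore path before powering on the dependent VMs.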
Dave
VMware Communities User Moderator
Now available - vSphere Quick Start Guide
Do you have a system or PCI card working with VMDirectPath? Submit your specs to the Unofficial VMDirectPath HCL.
On the mobo there are 6 SATA ports with hardware RAID0/1 support.
If I can pass through the PCI-X RAID controller, that gives me another 8, for 14 SATA ports in total.
If I use two drives in RAID1 to host the VMDKs, that datastore will come up before the VMs, and since it's simply local storage (off the mobo), there is no dependency on the virtual NAS server, correct? The NAS server is there more as an experiment in 'will it work without being a resource hog?'.
An alternative idea to creating a NAS VM would be to use your local disk as a storage location for thin-provisioned data disks in your other VMs. I don't know all of your design goals, but it seems less complicated to add storage that way than to worry about virtual networking and VMDirectPath.
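For example, a thin-provisioned data disk can be created from the ESXi console with `vmkfstools` — the datastore and VM directory names below are placeholders:

```shell
# Create a 100 GB thin-provisioned data disk on the local datastore.
# A thin disk only consumes space on the datastore as blocks are
# actually written by the guest:
vmkfstools -c 100G -d thin /vmfs/volumes/local-raid1/fileserver/data.vmdk
```

The resulting VMDK can then be attached to any VM as an additional disk through the VI client.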
Mike P
MCSE, VCP3/4
VMDirectPath will still play a role for other VMs.
The RAID card approach was a consideration to keep strain off the CPU (and at $50 for an 8-port PCI-X RAID5 controller, it's a rather good deal).
There will also be PCI-e graphics cards being passed through to VMs (hopefully).
The NAS is intended as a centralized repository for the other devices in my house that consume hard drive space.
It's also fun to flesh out my home workstation to better emulate the lab environment I manage.
(Without the need to have multiple power-hungry, noisy devices sitting around my living room.)
RAID1 on the local drives still gives me 500 GB for all the virtual machines. 50 GB, or even 100 GB, is rather generous for a system partition.