dletkeman
Contributor

NFS vs iSCSI RDM

Hello,

We are planning changes to our storage for our file servers, and I need to create file storage disks larger than 2 TB. Until now we have been creating VMDK disks and adding them to the VM, but now that we need larger disks, I would rather not create multiple VMDKs and span the disk across them.

From what I have read, NFS is an option, but so is an iSCSI RDM. I like everything about NFS except that it is difficult to load balance the way MPIO does with iSCSI. Using RDMs with iSCSI looks easy for us, as our system already has multiple SANs using iSCSI.
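For reference, iSCSI multipathing on ESXi is set up by binding VMkernel ports to the software iSCSI adapter and applying a round-robin path policy. A minimal sketch, assuming vmhba33 is the software iSCSI adapter, vmk1/vmk2 are the storage VMkernel ports, and naa.6001xxxx is a placeholder LUN identifier:

    # bind two VMkernel ports to the software iSCSI adapter for MPIO
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

    # set the path selection policy for the LUN to round robin
    esxcli storage nmp device set --device=naa.6001xxxx --psp=VMW_PSP_RR

    # verify the paths and the active policy
    esxcli storage nmp device list --device=naa.6001xxxx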

Not sure if there is a right or wrong way, so I would like to ask: what do most people prefer to use?

Thanks,
Dan.

weinstein5
Immortal

First question: what version of vSphere are you using? Remember, with vSphere 4 there is still a limit of 2 TB - 512 B for SAN LUNs being presented, while with vSphere 5 the limit is 64 TB, and with either version the size of a VMDK is still limited to 2 TB - 512 B.

So the iSCSI RDM will only work with vSphere 5.
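As a sketch of what that looks like: on vSphere 5, a larger-than-2 TB LUN is attached as a physical compatibility (passthrough) RDM, whose pointer file lives on a VMFS datastore. The device ID and paths below are placeholders:

    # create a physical-mode RDM pointer file for the large LUN
    vmkfstools -z /vmfs/devices/disks/naa.6001xxxx /vmfs/volumes/datastore1/fileserver/bigdisk-rdm.vmdk

Virtual compatibility RDMs (created with -r instead of -z) remain subject to the 2 TB - 512 B limit on these versions.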

NFS will only work if you give the VM direct access to the NFS export, because you cannot have an RDM on NAS/NFS storage - only VMDKs on an NFS datastore.
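In other words, the guest mounts the export itself rather than getting a disk from ESXi. A sketch from inside a Linux guest, with the NAS hostname and paths as placeholders:

    # mount the NFS export directly inside the guest OS
    mount -t nfs nas01:/vol/filestore /mnt/filestore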

Another option is to load a software iSCSI initiator in the VM and allow it access to the iSCSI SAN.
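A sketch of that with open-iscsi on a Linux guest, with the target portal IP as a placeholder:

    # from inside the VM: discover targets on the SAN, then log in
    iscsiadm -m discovery -t sendtargets -p 192.168.10.20
    iscsiadm -m node --login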

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
dletkeman
Contributor

vSphere 5.1.

NFS directly to the VM was what I was thinking, not using an RDM on an NFS datastore. No VMDKs in this design.

Yes, iSCSI initiators directly on the VM would work as well. From a manageability standpoint, it would be easier to use an RDM.

Dan.

roh
Contributor

Hi,

Did you make any progress in the meantime, three months later? I just stumbled on this post and the subject relates closely to our environment.

We are running vSphere 5.1 and have a full Ethernet NAS/SAN setup, both NFS and iSCSI. Traditionally we use VMFS volumes on iSCSI (10 Gb/s backbone), and the VMs (Linux/Windows servers) in turn initiate their own NFS/iSCSI connections over their own 'NAS interface'.

The drawback to this is the lack of control over I/O from within VMware, and it adds to the maintenance load (provisioning servers requires additional storage management, and the VMs need a second network interface for storage).

We are considering moving to a full VMDK setup, with the VMDKs stored on NFS storage instead of iSCSI.

I am curious about your setup and your reasons for reconsidering. Are you willing to share?

dletkeman
Contributor

We have actually come to the conclusion that using our system as a NAS with CIFS shares is by far the best way to go, and that is what we will be migrating to. No VMs to set up and no roundabout way to access the files; clients will connect directly to the NAS storage.
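For completeness, 'directly' here just means a standard SMB/CIFS mapping from each client. A sketch from a Linux client, with server, share, and user names as placeholders:

    # mount the CIFS share straight from the NAS, no VM in the path
    mount -t cifs //nas01/files /mnt/files -o username=fileuser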
