Hi,
We're planning a VMware Infrastructure installation on an HP DL360 G5 with an HP MSA 1510i (iSCSI) SAN.
We want to use several guests (Linux) to host an application which will need a lot (5TB+) of storage. Each
application server will hold its own data, so for example server A doesn't need to get to the data of server B
and vice versa. We do however want one place to store the data, using a folder structure to separate the data
from the different application servers. Sounds like a fileserver. 😄
As VMFS has a 2TB disk limit we can't use VMFS, but I think what we need is a guest server which is set up
as a fileserver and provides the data storage for the application servers. The fileserver guest will then
need a raw device mapping to the 5TB+ LUN on the SAN.
Can anybody please confirm that this is the way to go, or are there perhaps different solutions for this problem?
Thanks and regards,
vmhost
vmhost -
That is definitely the way to go. Use an RDM for the file storage. If you plan on using VCB or want to take advantage of snapshots, you will need to create the RDM in virtual compatibility mode to enable snapshots of the file storage.
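As a sketch of how that RDM could be created from the ESX 3 service console (the device path, datastore, and VM directory names below are placeholders; the real device identifier comes from listing /vmfs/devices/disks on your host):

```shell
# List the SCSI devices the host can see and pick out the big LUN
ls -l /vmfs/devices/disks/

# Create a *virtual* compatibility mode RDM mapping file (-r) so that
# snapshots/VCB work; -z would create a physical mode RDM instead.
# Paths below are examples only.
vmkfstools -r /vmfs/devices/disks/vmhba32:0:1:0 \
    /vmfs/volumes/datastore1/fileserver/bigdata-rdm.vmdk

# Then attach bigdata-rdm.vmdk to the fileserver guest as an
# additional disk in the VI Client.
```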
Dave
Although I'm sure using an RDM is the correct answer here, I thought I'd ask the following for clarification...
I'm currently studying for the ol' VCP (Friday week), and I'm sure I read that...
The maximum VMFS volume is 2TB, the maximum extent size is also 2TB, and I think (gotta go look this up again) you can have up to 4 extents, so you can get a total of 10TB of storage in one volume.
So for a 5TB datastore you could, if you wanted to use VMFS for some reason, have a 2TB VMFS plus a 2TB extent plus another 1TB extent.
Is this correct, or did I misread something in my studies?
Terry.
Extents are evil. ;o) They should only be used as a temporary means of adding disk space. They are a software-based concatenated RAID 0.
Although as an academic exercise you could use extents, in reality you would just use an RDM. It's a similar analogy to using software RAID in Windows: you can, but why would you?
If you found this or any other post helpful please consider the use of the Helpful/Correct buttons to award points.
Kind Regards
Tom,
I just read that the max LUN size on the MSA 1510i is also 2TB, so setting up a guest as a fileserver with a 5TB+ RDM will not work either.
We will just have to set up multiple application servers with 2TB disks (VMFS).
Thanks!
Hi Terry
Spend some time with the minimums and maximums; here is a good document:
http://pubs.vmware.com/vi301/wwhelp/wwhimpl/js/html/wwhelp.htm
Basically the answer to your question is that you can have 32 extents of 2TB each, for a maximum size of 64TB.
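As a quick sanity check on those numbers (treating each extent as a flat 2TB, ignoring the small per-extent overhead):

```shell
# VMFS-3 on ESX 3.x: up to 32 extents of (just under) 2TB each
extents=32
tb_per_extent=2
echo $(( extents * tb_per_extent ))   # 64 -> 64TB maximum volume size
```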
Leonard...
I just read that the max LUN size on the MSA 1510i is also 2TB, so setting up a guest as a fileserver with a 5TB+ RDM will not work either.
Don't feel sad - MOST SCSI-based storage on this planet is limited to 2-terabyte LUNs (2^32 blocks, each block 512 bytes). VMware ESX servers cannot deal with SCSI LUNs > 2TB, and the MBR-style partition in which a VMFS lives cannot either.
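The arithmetic behind that 2TB limit is easy to check with shell arithmetic:

```shell
# 2^32 addressable blocks x 512 bytes per block = the SCSI 2TB ceiling
blocks=$(( 2 ** 32 ))
bytes=$(( blocks * 512 ))
echo "$bytes"                  # 2199023255552 bytes
echo $(( bytes / 1024 ** 4 ))  # 2 (TiB)
```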
Thinking outside the box... why couldn't you create multiple VMFS volumes, then create multiple large VMDKs (say 5 x 1TB), then use LVM to make them appear to the OS as a single volume?
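A minimal sketch of that inside the Linux guest, assuming the five 1TB VMDKs show up as /dev/sdb through /dev/sdf (device names are placeholders):

```shell
# Mark each 1TB virtual disk as an LVM physical volume
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Group them into one volume group...
vgcreate datavg /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# ...and carve a single ~5TB logical volume out of all the free space
lvcreate -l 100%FREE -n datalv datavg

# Put a filesystem on it and mount it
mkfs.ext3 /dev/datavg/datalv
mount /dev/datavg/datalv /data
```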
Dave
You could certainly do that, David. But you run into the same problem as extents. It's evil.
Joining them with LVM is just another type of software RAID, but now you're shifting the CPU overhead for managing the RAID inside the VM rather than outside it.
I agree that VMFS extents are evil. But is the CPU overhead really an issue if using RAID-0?
I've had a customer INSIST they needed a 12TB single volume, so we did this (Windows 2003) and it worked fine. Risk and recovery is another issue, but they decided to go ahead regardless of the risk, as it was simply too hard to modify the application to use multiple smaller volumes (long story).
Anyway, it's an OPTION.
Dave
It's certainly an option. But CPU overhead would be a significant issue.
As you know, with a physical disk the OS writes the data to a sector. In a RAID setup, you need to keep track of which sectors are on which disk (and where CRCs or replicas are). In hardware, we offload this to a dedicated hw processor.
With extents, it's the VMkernel which has to deal with it on the CPU. With LVM, the VM is dealing with it on the CPU. There's overhead in every disk read and write, including memory swapping. So the more disk I/O there is, the more overhead.
Unless it's file storage (which you could do with multiple disks, so it's unlikely), there's only a handful of apps which would need 5TB. Typically it's a huge busy e-mail server, a database, or a high-bandwidth pr0n site (sorry...). In those cases there's typically high disk I/O, and therefore high CPU overhead.
Fair point on asking what it's actually doing. My 12TB example was all file, mostly read-only archival.
Dave
In my experience the CPU overhead from RAID 0 is pretty much next to nothing, even under heavy I/O. There are plenty of filesystems that do this in some pretty big environments. Sun's ZFS and ADIC/Quantum's SNFS both provide a means to sling a few 2TB LUNs together into a larger logical volume. Since disk redundancy/parity is handled by the storage, there's no additional CPU overhead.
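For reference, the ZFS version of this really is a one-liner on Solaris (the disk names below are placeholders for your SAN LUNs); since the array already provides RAID protection, no raidz or mirror vdev is used:

```shell
# Build one large pool out of three 2TB LUNs - ZFS concatenates/stripes
# across them and presents a single filesystem namespace
zpool create datapool c2t0d0 c2t1d0 c2t2d0
zfs create datapool/files
```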