VMware Cloud Community
vmhost
Contributor

Data storage (5TB+)

Hi,

We're planning a VMware Infrastructure installation on an HP DL360 G5 with an HP MSA 1510i (iSCSI) SAN.

We want to use several guests (Linux) to host an application which will need a lot (5TB+) of storage. Each application server will hold its own data, so for example server A doesn't need access to the data of server B and vice versa. We do however want one place to store the data, using a folder structure to separate the data from the different application servers. Sounds like a fileserver. 😄

As VMFS has a 2TB disk limit we can't use VMFS, but I think what we need is a guest which is set up as a fileserver to provide the data storage for the application servers. The fileserver guest will then need a raw device mapping to the 5TB+ LUN on the SAN.

Can anybody please confirm that this is the way to go or perhaps are there different solutions for this problem?

Thanks and regards,

vmhost

13 Replies
dconvery
Champion

vmhost -

That is definitely the way to go. Use an RDM for the file storage. If you plan on using VCB or want to take advantage of snapshots, you will need to create the RDM in virtual compatibility mode to enable snapshots of the file storage.
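For reference, a sketch of how such an RDM could be created from the ESX service console with vmkfstools. The LUN path and datastore name below are made-up placeholders for this setup, so the command is echoed rather than executed:

```shell
# Placeholders -- substitute the real LUN path and datastore for this SAN.
LUN=/vmfs/devices/disks/vmhba40:0:1:0
MAPFILE=/vmfs/volumes/datastore1/fileserver/data_rdm.vmdk

# -r creates the mapping file in virtual compatibility mode (snapshots/VCB work);
# -z would create it in physical mode, which disables snapshots.
echo "vmkfstools -r $LUN $MAPFILE"    # echoed here; run it on the ESX host
```

The mapping file then gets attached to the fileserver guest like any other virtual disk.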

Dave

Dave Convery, VCDX-DCV #20 ** http://www.tech-tap.com ** http://twitter.com/dconvery ** "Careful. We don't want to learn from this." -Bill Watterson, "Calvin and Hobbes"
tmcd35
Enthusiast

Although I'm sure using an RDM is the correct answer here, I thought I'd ask the following for clarification...

I'm currently studying for the ol' VCP (Friday week), and I'm sure I read that...

The maximum VMFS volume is 2TB, the maximum extent size is also 2TB, and I think (gotta go look this up again) you can have up to 4 extents, so you can get a total of 10TB storage in one volume.

So for a 5TB data store you could, if you wanted to use VMFS for some reason, have a 2TB VMFS volume plus a 2TB extent plus another 1TB extent.

Is this correct, or did I misread something in my studies?

Terry.

dconvery
Champion

Extents are evil. ;o) They should only be used as a temporary means of adding disk space. They are a software-based concatenated RAID 0.

Dave Convery, VCDX-DCV #20 ** http://www.tech-tap.com ** http://twitter.com/dconvery ** "Careful. We don't want to learn from this." -Bill Watterson, "Calvin and Hobbes"
TomHowarth
Leadership

Although as an academic exercise you could use extents, in reality you would just use an RDM. It's analogous to using software RAID in Windows: you can, but why would you? 😉

If you found this or any other post helpful please consider the use of the Helpful/Correct buttons to award points

Kind Regards

Tom,

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
vmhost
Contributor

I just read that the max. LUN size on the MSA 1510i is also 2TB, so setting up a guest as a fileserver with a 5TB+ RDM will not work either... 😞

We will just have to set up multiple application servers with 2TB disks (VMFS).

Thanks!

lholling
Expert

Hi Terry

Spend some time with the configuration minimums and maximums; here is a good document:

http://pubs.vmware.com/vi301/wwhelp/wwhimpl/js/html/wwhelp.htm

Basically, the answer to your question is that you can have 32 extents of 2TB each, for a maximum volume size of 64TB.
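For anyone double-checking the arithmetic against the maximums document, a quick sketch:

```shell
# Rough arithmetic behind the VI3 VMFS-3 maximums: up to 32 extents,
# each up to ~2TB (one LUN apiece).
max_extents=32
extent_tb=2
echo $(( max_extents * extent_tb ))    # maximum volume size in TB
```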

Leonard...

---- Don't forget if the answers help, award points
BUGCHK
Commander

I just read that the max. lun size on the MSA 1510i is also 2TB so setting up a guest as a fileserver with a 5tb+ RDM will not work either..

Don't feel sad - MOST SCSI-based storage on this planet is limited to 2-terabyte LUNs (2^32 blocks, each block 512 bytes). VMware ESX servers cannot deal with SCSI LUNs > 2TB, and the MBR-style partition in which a VMFS lives cannot either.
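The 2TB figure falls straight out of those two numbers:

```shell
# Where the classic 2TB SCSI limit comes from: 32-bit block addresses
# (LBAs) combined with 512-byte blocks.
blocks=$(( 2 ** 32 ))          # largest block count addressable in 32 bits
block_size=512                 # bytes per block
limit=$(( blocks * block_size ))
echo "$limit"                  # limit in bytes
echo $(( limit / 2 ** 40 ))    # same limit expressed in TiB
```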

davidbarclay
Virtuoso

Thinking outside the box... why couldn't you create multiple VMFS volumes, then create multiple large VMDKs (say 5 x 1TB), then use LVM to present them to the OS as a single volume?

Dave
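For anyone wanting to try David's idea, a sketch of the LVM side inside the Linux guest. The device names are hypothetical (five ~1TB VMDKs presented as /dev/sdb..sdf), so the commands are echoed rather than executed:

```shell
# Hypothetical device names -- five ~1TB virtual disks seen by the guest.
DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf"

# Commands are echoed here since they need the real guest to run:
echo "pvcreate $DISKS"                          # mark each disk for LVM use
echo "vgcreate datavg $DISKS"                   # pool them into one volume group
echo "lvcreate -l 100%FREE -n datalv datavg"    # one big linear (concatenated) LV
echo "mkfs.ext3 /dev/datavg/datalv"             # format, then mount as one ~5TB fs
```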

GBromage
Expert

You could certainly do that, David. But you run into the same problem as extents. It's evil. 🙂

Joining them with LVM is just another type of software RAID, but now you're shifting the CPU overhead for managing the RAID into the VM rather than keeping it outside.

I hope this information helps you. If it does, please consider awarding points with the 'Helpful' or 'Correct' buttons. If it doesn't help you, please ask for clarification!
davidbarclay
Virtuoso

I agree that VMFS extents are evil. But is the CPU overhead really an issue if using RAID-0?

I've had a customer INSIST they needed a 12TB single volume, so we did this (Windows 2003) and it worked fine. Risk and recovery is another issue, but they decided to go ahead regardless, as it was simply too hard to modify the application to use multiple smaller volumes (long story).

Anyway, it's an OPTION.

Dave

GBromage
Expert

It's certainly an option. But CPU overhead would be a significant issue.

As you know, with a physical disk the OS writes the data to a sector. In a RAID setup, you need to keep track of which sectors are on which disk (and where CRCs or replicas are). In hardware, we offload this to a dedicated hardware processor.

With extents, it's the VM kernel which has to deal with it on the CPU. With LVM, the VM is dealing with it on the CPU. There's overhead in every disk read and write, including memory swapping. So, the more disk I/O there is, the more overhead.

Unless it's file storage (which you could do with multiple disks, so it's unlikely), there's only a handful of apps which would need 5TB. Typically, it's a huge busy e-mail server, a database or a high-bandwidth pr0n site (sorry...). In those cases there is typically high disk I/O, and therefore high CPU overhead.

I hope this information helps you. If it does, please consider awarding points with the 'Helpful' or 'Correct' buttons. If it doesn't help you, please ask for clarification!
davidbarclay
Virtuoso

Fair point on asking what it's actually doing. My 12TB example was all file, mostly read-only archival.

Dave

RussellCorey
Hot Shot

In my experience the CPU overhead from RAID 0 is pretty much next to nothing, even under heavy I/O. There are plenty of filesystems that do this in some pretty big environments. Sun's ZFS and ADIC/Quantum's SNFS both provide a means to sling a few 2TB extents together into a larger logical volume. Since disk redundancy/parity is handled by the storage, there's no additional CPU overhead.
