VMware Cloud Community
jplindemann
Contributor

ESX 3.x - IBM DS4200 Array Partitioning Advice?

Hey all, I have a setup consisting of an IBM BladeCenter 8852/DS4200/ESX Server 3.x. I have 7 blades in the BladeCenter and 8 drives in the DS4200 at the moment. I've been researching how to set up my arrays/logical drives on the DS4200, and haven't really been able to settle on the best-performing layout. I'm trying to be mindful of resource lockups. I'll need two types of VMs, some inert and some active (lots of reads/writes). My current plan:

Total size of the DS4200 array is about 2.8TB.

Array 1 (5 Drives of 8 Total)

- RAID 3

- VMs on this will have lots of read/writes

- Split this into 7 logical drives, one per blade

Array 2 (3 drives of 8 Total)

- RAID 5

- Almost all reads, very few writes

- Same thing, split it into 7 logical drives, one per blade

The VMware docs recommend that you have no more than 16 Virtual Disks per volume (which I'm assuming is each logical drive in the array). I know how big my images are going to be (Array 1 will have images about 41GB each, and Array 2 will have images about 20GB each, that factors in snapshots and the log space required), so that's not an unknown, but I'm kind of stuck on how to best handle the partitions. It seems like I should have 1 ESX Server host mapped to 1 volume on any given array, but I'm not sure.

Any ideas?

-j

rpartmann
Hot Shot

Hi,

We also use the DS4200; you'll probably also want to configure a hot-spare drive for your arrays.

The 16 VMs per volume (= LUN) figure is a best practice meant to keep the I/O load on any one LUN from getting too heavy; since you can define a different path on a per-LUN basis, spreading VMs across LUNs lets you share the load over different HBAs.
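
If it helps, the per-LUN path setup is done from the ESX service console. A rough sketch (flags from memory, so verify them with esxcfg-mpath -h; the vmhba and LUN names here are just examples):

    # list all paths and see which HBA each LUN currently uses
    esxcfg-mpath -l

    # with a fixed policy, pin a LUN to a preferred path on a given HBA
    esxcfg-mpath --lun=vmhba1:0:1 --policy=fixed
    esxcfg-mpath --lun=vmhba1:0:1 --path=vmhba2:0:1 --preferred

On an active/passive array like the DS4000 family, the MRU policy (--policy=mru) is the usual choice, and you balance instead by alternating LUN ownership between the A and B controllers.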

hth

ps: Award points if you find answers helpful. Thanks.

mike_laspina
Champion

Hello,

If you only have one controller, assign all the drives to one RAID 5 array; a hot spare would be advisable if you cannot afford any data loss. The more drives in an array, the more performance you will get out of it.

If you have two controllers, split the drives evenly between them and try to get more drives into those arrays even if you don't need the space (performance = more spindles and arms).

Hope this helps.

http://blog.laspina.ca/ vExpert 2009
jplindemann
Contributor

I have an A and a B controller. So you'd recommend a RAID 5 even though I have two different types of I/O needs?

Also, since I'll have 7 Hosts (Blades, which are ESX Servers), how should I map them to the volumes?

I thought I had this figured out earlier in the week, but another guy with experience setting up a similar environment mentioned that having a big RAID 3 and a big RAID 5 array with only two total volumes wasn't a good idea because of resource contention issues.

If they only recommend having 16 VMs per volume, what's a practical way to police this? Zoning?

mike_laspina
Champion

You have defined your RAID 3 array as an active read/write world. RAID 3 sucks for write performance since every write must funnel through a single dedicated parity drive. RAID 3 works well for a very specific type of application, usually streaming media reads or other read-intensive apps. Your system would be waiting on that one drive most of the time in this config.

While RAID 5 does not have excellent write performance, it offers a good compromise for the multipurpose environment, which is more conducive to a VM store. To improve the write (and read) performance of a RAID 5 array, the answer is to add more drives and distribute the writes across them: each drive can perform a unit of work, and the sum of that work = performance. RAID 5 has excellent read performance.
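
To put rough numbers on that: a 7200 RPM SATA drive does on the order of 75 random IOPS, and a small random write on RAID 5 costs about 4 disk I/Os (read data, read parity, write data, write parity). So a 5-drive array gives you roughly (5 x 75) / 4 ≈ 95 random write IOPS, while a 7-drive array gives (7 x 75) / 4 ≈ 130. Back-of-the-envelope figures, but they show why spindles matter.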

As far as the 16 VMs per volume goes, you are still within it. The 7 LUNs will become VMFS volumes at ~400GB each, which you can spread across the two paths. If you cannot get more drives, you would be better off with 1 larger array.

Hope that helps

Mike

http://blog.laspina.ca/ vExpert 2009
jplindemann
Contributor

To add to my post above, here's the type of stuff that confuses me. Looking at the VMware SAN Design Guide, the guidelines they list are contradictory:

From page 73:

"For VMware Infrastructure 3, it is recommended that you can have at most 16 VMFS partitions per volume. In other words, have no more than 16 virtual machines or virtual disks sharing the same volume. You can, however, decide to use one large volume or multiple small volumes."

From page 176:

"Carefully determine the number of virtual machines to place on a single volume in your environment. In a lightly I/O loaded environment (with 20 to 40 percent I/O bandwidth use), you might consider placing up to 80 virtual machines per volume. In a moderate I/O load environment (40 to 60 percent I/O bandwidth use), up to 40 virtual machines per volume is a conservative number. In a heavy I/O load environment (above 60 percent I/O bandwidth use), 20 virtual machines per volume is a good maximum number to consider."

So what's the real answer here? Is it 16, or the 20/40/80 guideline? Or are they even talking about the same thing?

mike_laspina
Champion

I have more than 40 on my LUNs. It really depends on the apps running on the VMs, how good the I/O performance is at the storage system, and the bandwidth.

http://blog.laspina.ca/ vExpert 2009
jplindemann
Contributor

How big are your LUNs, and what kind of software do you have on them? I'll be using this BladeCenter server in a QA environment, so high performance isn't a huge requirement. We aren't doing performance testing, just functionality testing. However, we'll have the following types of applications:

1) Client/Server Application that uses an installed SQL Server instance to process data (read/write)

2) Remote "agent" applications that gather data and send it up to SQL Server (mostly reading)

So for clarity, you're advocating a setup something like this:

- One big RAID 5 array

- Use one of the 8 drives as a Hot Spare

- Have several big volumes (maybe 400GB apiece)

- Of the 7 blades, have each blade dedicated to particular LUNs

Right?

mike_laspina
Champion
Champion

My hosts are running a very wide scope of I/O characteristics: DBs, web, SOA, AD, large files.

My VMFS LUNs are all 500GB, and I have some RDMs as well.

7 SATA drives in one array is functional. More would be better, but you may not have that option. It will work fine; you just should not expect performance to be remarkable on SATA.

One thing that is very important is a regular cycle of scrubbing the disks to prevent data loss in the event of a disk failure.

So many people find out too late that there is a bad block on a disk: a drive goes down, the bad block is found during the rebuild, and the data cannot be recovered.

Disks are the #1 component to fail during the burn-in time of a new system (outside of human error).
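
On the DS4000 family that scrubbing is the media scan feature. A rough SMcli sketch to turn it on (parameter names as best I recall from the CLI guide; the controller IPs and volume label are placeholders, so double-check before running):

    # run the background media scan over a 15-day cycle, subsystem-wide
    SMcli 192.168.1.10 192.168.1.11 -c 'set storageSubsystem mediaScanRate=15;'

    # enable the scan plus a redundancy check on a given logical drive
    SMcli 192.168.1.10 192.168.1.11 -c 'set logicalDrive ["ESX_LUN1"] mediaScanEnabled=true redundancyCheckEnabled=true;'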

http://blog.laspina.ca/ vExpert 2009
jplindemann
Contributor

If I had known the "more drives = higher performance" rule beforehand, I'd have filled out the entire array. Of course, hindsight is 20/20.

Any recommendations for VMFS vs. RDM? VMotion would be nice, and from what I read it looks like it can only be done via RDM.

But the 1 big RAID 5 array is the way to go, right?

mike_laspina
Champion

I think I missed something. You do not need to assign 1 LUN per ESX host; you will be sharing the LUNs across all hosts in the farm. This allows the use of HA, DRS, and VMotion if you wish to do so. The technical requirement for VMotion is shared disk across the ESX hosts. Each host will see the 7 LUNs that are defined on your 4200.
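
A quick way to confirm the sharing once the mapping is done: every host should report the same datastores from its service console (the volume label here is just an example):

    # run on each ESX host
    ls /vmfs/volumes
    vmkfstools -P /vmfs/volumes/ESX_LUN1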

Yes one array is the best for this configuration.

RDM is not required here and does not perform much better than VMFS.

http://blog.laspina.ca/ vExpert 2009
jplindemann
Contributor

OK, here's how I configured it.

- set drive 1 as a hot spare drive.

- created one big RAID 5 array (capacity: 2.8TB)

- divided the array into (4) 700GB volumes, assigning two volumes to Controller A, and two volumes to Controller B
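
For the record, I did it through the Storage Manager GUI, but the SMcli equivalent would look roughly like this (syntax from the DS4000 CLI guide as best I can tell; the IPs, labels, and drive position are placeholders):

    # make the drive in tray 0, slot 1 the hot spare
    SMcli 192.168.1.10 192.168.1.11 -c 'set drive [0,1] hotSpare=true;'

    # the first logical drive creates the 7-drive RAID 5 array at the same time
    SMcli 192.168.1.10 192.168.1.11 -c 'create logicalDrive driveCount=7 raidLevel=5 userLabel="ESX_LUN1" capacity=700 GB owner=a;'

    # carve the remaining volumes out of the same array, alternating owners
    SMcli 192.168.1.10 192.168.1.11 -c 'create logicalDrive array=1 userLabel="ESX_LUN2" capacity=700 GB owner=b;'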

Sound good?

mike_laspina
Champion

That sounds good.

You will get better performance if you align the partition I/O with the VMs' disks.

Here is the howto.

http://www.vmware.com/pdf/esx3_partition_align.pdf
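
The short version of the recipe in that PDF (follow the doc for the details; the device, label, and LUN names here are examples):

    # from the service console, partition the raw LUN with fdisk
    fdisk /dev/sdb
    #   n, p, 1, <defaults>   create the primary partition
    #   t, 1, fb              set the partition type to fb (VMware VMFS)
    #   x, b, 1, 128          expert mode: start partition 1 at sector 128 (64KB-aligned)
    #   w                     write the table and exit

    # then format it as VMFS once, from a single host
    vmkfstools -C vmfs3 -b 1m -S ESX_LUN1 vmhba1:0:1:1

Inside Windows guests the doc has you do the same thing with diskpart's align option.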

http://blog.laspina.ca/ vExpert 2009
jplindemann
Contributor

Thanks Mike, you've been incredibly helpful. I'll have a look at the document you linked to.

jplindemann
Contributor

One more for the group. So I have my LUNs set up (1 through 4), and I've added several of my ESX Servers (from the BladeCenter) to a cluster. When adding a datastore, I'm forced to choose only one of the 4 LUNs.

Noob question: My goal is to have all of my ESX blades share the 4 LUNs...how do I specify that?

I don't want to lock a blade to one particular LUN if I don't have to.

mike_laspina
Champion

Hello,

On a 4200 you would create a host group and map all the ESX WWN ports to it; then all the hosts will be part of the shared disk group. Make sure you set the host type to LNXCL.

Once you initialize a LUN as a VMFS volume on one host, you need to rescan for it on the other hosts.
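
A rough sketch of both steps (SMcli syntax from memory, so check it against the CLI guide; the IPs, WWN, and labels are placeholders):

    # one host group for the farm, one host entry per blade
    SMcli 192.168.1.10 192.168.1.11 -c 'create hostGroup userLabel="ESX_Farm";'
    SMcli 192.168.1.10 192.168.1.11 -c 'create host userLabel="blade1" hostGroup="ESX_Farm";'

    # register each blade's HBA WWN against its host with the LNXCL type
    SMcli 192.168.1.10 192.168.1.11 -c 'create hostPort host="blade1" userLabel="blade1_hba1" identifier="210000e08b123456" hostType="LNXCL";'

    # map each logical drive to the group so every blade sees the same LUN
    SMcli 192.168.1.10 192.168.1.11 -c 'set logicalDrive ["ESX_LUN1"] logicalUnitNumber=1 hostGroup="ESX_Farm";'

    # then, on each of the other ESX hosts, from the service console
    esxcfg-rescan vmhba1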

For a sleep aid, you can read this: http://www.redbooks.ibm.com/redbooks/pdfs/sg246363.pdf

http://blog.laspina.ca/ vExpert 2009