smsf
Contributor

Upgrading to vSphere with HA on an iSCSI SAN

Hi all,

I'm looking to revamp my server/storage infrastructure to achieve true HA for all or most of my servers. There are a lot of factors involved, but I'm hoping members of the VMware community can help point me in the right direction.

Here's what I'm looking at:

Goals: Our main goals for this upgrade are, in order of importance (highest first):

1) Redundancy through high availability

2) Flexibility to scale up to 20 VMs over the next 3-5 years.

3) Decent performance (moving from DAS to a SAN takes a noticeable hit in I/O performance, which I'm willing to accept up to a point).

My plan is to move everything into a VMware vSphere environment with HA and vMotion on a redundant iSCSI SAN.

If budget allows: add a third ESXi 4.x host (around $3-4k with minimal disks).

Current environment:

~120 users, 12 VMs on 2 hosts, 2-3TB total data (all DAS).

VM hosts are: 1 ESX 3.5 and 1 ESXi 4.0. No existing vSphere.

Physical hosts: 2 Exchange servers (1 front-end, 1 back-end), 1 SQL, 1 ERP


I estimate that the final config will look something like this:

3 x ESXi hosts (12 current + 3 new medium-load VMs)

2 x dedicated GigE switches for iSCSI (1 new + 1 existing Netgear GS724T)

SMB SAN solution with 2 units (for on-campus replication) - need help choosing one. Currently considering the EqualLogic PS4000E, Dell MD3000i, HP StorageWorks AiO1200, or a software SAN from StarWind Software.

Each SAN will have about 3-4TB total capacity, probably using RAID5 or 6. Still deciding between this and RAID10. RAID5/6 with a high spindle count should be much better than RAID10, right? (Rough math below.) I believe partitioning the RAID array on a 12-disk SAN will hurt, not help.
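A rough back-of-the-envelope sketch of the tradeoff, in Python; the disk size, per-disk IOPS, and read/write mix below are assumptions for illustration, not vendor figures:

```python
# Back-of-the-envelope RAID comparison for a 12-disk shelf.
# All per-disk numbers are assumptions for illustration - plug in real specs.
DISKS = 12
DISK_TB = 0.5        # assumed 500 GB drives
DISK_IOPS = 80       # assumed random IOPS per 7.2k SATA spindle
READ_RATIO = 0.7     # assumed 70% read / 30% write workload

# Classic write penalties: RAID10 = 2 disk I/Os per write, RAID5 = 4, RAID6 = 6
layouts = {
    "RAID10": (DISKS // 2 * DISK_TB, 2),
    "RAID5":  ((DISKS - 1) * DISK_TB, 4),
    "RAID6":  ((DISKS - 2) * DISK_TB, 6),
}

raw_iops = DISKS * DISK_IOPS
for name, (usable_tb, penalty) in layouts.items():
    # Effective host-visible IOPS after applying the write penalty to the write share
    host_iops = raw_iops / (READ_RATIO + (1 - READ_RATIO) * penalty)
    print(f"{name}: ~{usable_tb:.1f} TB usable, ~{host_iops:.0f} host IOPS")
```

With the same spindle count, RAID10 holds onto more of the raw IOPS once writes enter the mix; RAID5/6 only win on usable capacity, so "much better" really depends on how write-heavy the VMs are.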

vSphere Essentials Plus package (new)

Backup: using our existing Acronis image backup (to disk) solution plus our existing file/DB-level backup system, so no additional cost.

Budget: Our budget is small ($20-25K), so I'm probably going to have to scale back on some of the specs (or delay that third VM host purchase).

A hardware SAN looks good, but it will be hard to fit into this limited budget. However, as my main goal is high availability (and reliability), I don't feel a software SAN on distributed workstation-level hardware will do, especially when placing 15 VMs on it. A software SAN on enterprise-class servers would bring me to the same level (if not higher) as SMB hardware SANs anyway (though perhaps with lower performance), so it's a tough decision.

Any comment/input will be appreciated.

Josh26
Virtuoso

smsf wrote:

2 x dedicated GigE switches for iSCSI (1 new + 1 existing Netgear GS724T)

SMB SAN solution with 2 units (for on-campus replication) - Need help choosing a solution. Currently considering Equalogic PS4000E, Dell MD3000i, HP Storageworks AiO1200, or software SAN from Starwind software.

Each SAN will have about 3-4TB total capacity, probably using RAID5 or 6. Still deciding between this and RAID10. RAID5/6 with high spindle count should be much better than RAID10 right?

I believe you are focusing on the wrong things with regard to reliability and performance.

Netgear switches (any model, frankly), StarWind software, and possibly even the AiO1200 are below entry-level requirements for any business's production storage under a virtualized workload, IMO. I appreciate that budget and other constraints factor in, in ways we can't always control, but there is little to no value in concerning yourself with the various RAID levels when there are much bigger limitations here.

smsf
Contributor

Well, I have an almost impossible budget to work with, so I had to go with the Netgear. I thought they'd be OK if I use 2 dedicated switches with redundant paths. I could also route one path through my enterprise-level HP 4200 switch using a VLAN and leave the Netgear as the backup/secondary path, but I've read that's not a good idea. Another option is to go with HP ProCurve 1810s, which are in about the same price range as the GS724T.

BTW, I've been using 4 GS724Ts for 3+ years and have not had any problems thus far. In fact, all my Netgear switches have been great. I would love to go a little higher-end, but the budget is already low as it is.

OK, so you don't recommend low-end SANs. What about the HP MSA2000i G2? How about the HP P4000 Starter SATA SAN? I think that one is dual-node and under $20K. Keep in mind this is for an SMB (a bit larger than a typical SMB, but still not big enough to classify as enterprise).

What would you recommend with a budget of $25K for a SAN? How about $30K?

Osm3um
Enthusiast

HP switches have an excellent warranty.

Check out the EMC VNX series SANs. The price point is rather impressive (under $20K fully loaded with 600GB SAS drives and two SPs). Its integration with vSphere makes it quite user-friendly for a smaller business.

Bob

smsf
Contributor

Thanks Bob. I'll look into the VNX series.

However, the more I read about StarWind HA, the more I like it.

Here's a solution with StarWind HA, dual SAN nodes, dual dedicated GigE switches, and a vSphere license for around $25K:

(Attachment: Capture.PNG)

Here's a link discussing StarWind + off-the-shelf servers vs. EqualLogic. Here's another discussion about using virtual storage arrays like StarWind.

Also, I have concerns about using the Netgear unmanaged GigE switches, but these guys do have some good points. Still, if there's a good (under $1000) option for redundant switches for my iSCSI SAN, I'm more than happy to hear it.

Any thoughts?

KOOLER201110141
Contributor

Wow! This hurts! What makes you think StarWind & HP are both "below entry level requirements"? Any real-world experience running huge production environments on them?

Josh26
Virtuoso

KOOLER wrote:

Wow! This hurts! What makes you think StarWind & HP are both "below entry level requirements"? Any real-world experience running huge production environments on them?

It doesn't take much to understand the technology.

I certainly didn't say "HP" was low-level; I referred to a specific model. HP sales will tell you exactly what the AiO series is for, and the answer is not "huge production environments". And yes, I have run huge production environments on higher-end HP equipment quite successfully.

Since you joined, Kooler, every post you have made reads like a marketing pitch for StarWind, as is the case with nearly every pro-StarWind poster on this forum. Are you associated with them, by chance?

Josh26
Virtuoso

smsf wrote:

What about the HP MSA2000i G2? How about the HP P4000 Starter SATA SAN? I think that one has dual-node and is under $20k.

Both of these appear to fit your budget, and are quite solid performers.

I have implemented both. My personal preference is the MSA2000fc, or the P4000 route when iSCSI is used (which would be the case for your budget).

Anton_Kolomyeyt
Hot Shot

1) "below entry level requirements for any businesses production storage" and "not huge production environments" are quite different things. Don't you think so? I do. So I repeat my question. Except the fact (let's assume it's a fact) you did play with big boys toys with HP what make you think referenced hardware and software is bad? I mean something like "I've put 500 VMs on a 2TB and IOPS were unaccessable" or "I've configured MPIO and it failed and support people were useless" or so. Do you have any real negative experience to share with us?

2) My goodness! Since I joined (2 years before you did, BTW) I've made a lot of posts, and it shouldn't take more than a minute with Google to find out who I am and what I do. And probably why. :)

P.S. Sorry for posting under a different name - too many e-mail accounts...

Josh26
Virtuoso

The CEO of StarWind comes here under the account of an anonymous member of the public and attempts to disrupt the discussion.

I'm not going to continue dignifying this. I have no marketing interest; I'll stick to the technical discussions and stay out of an impending flame war.

Anton_Kolomyeyt
Hot Shot

This is old news, really, as this account name has been associated with my real name since 1998 or so. Google and forum search are your friends. :)

OK, this is my third attempt to get you to answer one entirely TECHNICAL question. I repeat:

What makes you feel the referenced HP & StarWind products are "below entry level requirements for any businesses production storage" (c) you?

Maybe StarWind is too touchy a subject for you, since you're taking this conversation personally. That's fine, let's focus on HP! I took the trouble to talk to local HP sales people, and they confirmed the obvious fact that HP does not have (and never has had) any storage products positioned for the test & development / non-production niche. You think otherwise. Good! But can you tell us why?

It's not a flame war or anything. I just want you to share some facts to back up what you've said. We call this a constructive dialogue: it helps the original poster avoid ending up with a product full of warts, and it helps OEMs like HP & StarWind make their products better. Everybody wins. :)

So?

smsf
Contributor

Guys, let's get back to the original discussion.

I'm still up in the air regarding which option to choose.

I'm still investigating StarWind, but if I go that route, the support must be extremely solid.

It's also clear that I need high reliability on the network backbone for the SAN, but something that still fits the budget. Perhaps $1000 for dual GigE switches for traffic from the VM hosts to the SAN nodes, and then a crossover cable for the SAN-node-to-SAN-node connection? What about managed vs. unmanaged switches? These guys think unmanaged switches would be faster.
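As a sanity check on whether a single GigE path can even carry this load, here's a rough estimate; the per-VM IOPS and I/O size are assumptions for illustration, not measurements:

```python
# Sanity check: can one GigE iSCSI path carry ~15 VMs?
# Per-VM load figures are assumptions for illustration, not measurements.
GIGE_MB_S = 1000 / 8 * 0.8   # ~125 MB/s wire speed, ~80% usable after protocol overhead (assumed)
VMS = 15
IOPS_PER_VM = 50             # assumed average per-VM IOPS
IO_SIZE_KB = 16              # assumed average I/O size

load_mb_s = VMS * IOPS_PER_VM * IO_SIZE_KB / 1024
print(f"Estimated steady-state load: {load_mb_s:.1f} MB/s vs ~{GIGE_MB_S:.0f} MB/s per GigE path")
# Roughly 12 MB/s vs ~100 MB/s: bandwidth isn't the bottleneck, so the second
# switch/path is really about redundancy; the array's random IOPS is the ceiling.
```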

Josh26
Virtuoso

The link you posted appears to 404 on me.

Unmanaged switches are unlikely to support flow control or jumbo frames, which hurts performance, and above all, they lack per-port error counters, which can be useful for debugging.
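To put a rough number on the jumbo frame point, here's a simplified per-frame overhead comparison; header sizes are the standard Ethernet/IPv4/TCP values, and the framing ignores options and iSCSI PDU headers, so treat it as a ballpark:

```python
# Simplified per-frame protocol overhead for iSCSI over standard vs. jumbo frames.
# Ethernet + FCS + IPv4 + TCP headers; no options or iSCSI PDU headers counted.
HEADERS = 14 + 4 + 20 + 20   # = 58 bytes per frame

for mtu in (1500, 9000):
    payload = mtu - 20 - 20          # TCP payload = MTU minus IP and TCP headers
    frame = payload + HEADERS        # bytes actually on the wire
    print(f"MTU {mtu}: {payload} B payload, ~{HEADERS / frame:.1%} header overhead")
# MTU 1500 -> ~3.8% overhead; MTU 9000 -> ~0.6%, plus roughly 6x fewer frames
# (and interrupts) to move the same amount of data.
```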

Many SAN vendors won't support crossover cables, even though it's technically acceptable.

A decent managed switch will do line rate on all ports - you can't go faster than that.

smsf
Contributor

Oops, here it is again: http://community.spiceworks.com/topic/96916-what-kind-of-switch-are-you-using-in-your-iscsi-san

What about the HP 1810-24? Here's what another VMware community member posted about it:

"Re: iSCSI Flow Control vs. Jumbo Frames

The new "HP ProCurve Switch 1810G-24, 24-Port (J9450A)" (successor of HP 1800G-24) supports Jumbo Frames as well as Flow Control at the same time. I have it running here in my lab and it works like a charm."

The price is right at around $300 per switch. The 2810-24s are pretty good (I have 2), but they are a bit pricey for this project's budget. It seems a large per-port buffer is good for both jumbo frames and flow control. The 1810 has 512Kb while the 2810 has 768Kb. Not a huge difference, but then again, what would a 16-VM SMB SAN network need?
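Rough math on those buffer figures, assuming they really are kilobytes per port (worth confirming against the datasheets):

```python
# How many full frames fit in the quoted per-port buffers?
# Assumes the figures are kilobytes per port - confirm against the datasheets.
for switch, buf_kb in (("ProCurve 1810G-24", 512), ("ProCurve 2810-24", 768)):
    for frame in (1518, 9018):       # standard vs. jumbo frame on the wire
        print(f"{switch}: {buf_kb * 1024 // frame} x {frame}-byte frames per port")
# Both hold dozens of jumbo frames per port, so the buffer difference alone
# probably isn't what makes or breaks a 16-VM iSCSI network.
```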

Anton_Kolomyeyt
Hot Shot

The less hardware involved, the less chance of something breaking. Any switch, even the fastest one, adds extra latency on packet forwarding, and even the most reliable one has silicon that can burn out and firmware that can fail. Direct copper and fiber links are faster, cheaper, and more reliable - an absolute win-win. Start with crossover cables where possible, configure the whole setup, and then start inserting switches only if the storage / hypervisor nodes are located too far apart for direct cabling.
Back to managed vs. unmanaged... There's no way somebody else's scenario is going to work for you AS IS; too many things are involved. So go build the backbone of a test bed setup and run your own experiments.
***

Guys, let's get back to the original discussion.

I'm still up in the air regarding which option to choose.

I'm still investigating Starwind but if I go that route, support must be extremely solid.

It's also clear that I need to get high reliability on the network backbone for the SAN, but something that still fits the budget. Perhaps $1000 for dual GigE switches for traffic from VM hosts to SAN node then cross over cable for SAN-node-to-SAN-node connection? What about managed vs un-managed switches? These guys think unmanaged switches would be faster.

Anton_Kolomyeyt
Hot Shot

Any idea why they do this?

***

Many SAN vendors won't support crossover cables, even though it's technically acceptable.
