VMware Cloud Community
beanieboy
Contributor

ESX 3.5 with SAN storage

Hi,

I'm about to buy some LeftHand Networks NSM2120 SAN/iQ boxes, which we'll store VM files on using iSCSI. The SAN/iQ boxes can do network RAID-style synchronization, a bit like what the EMC kit does with MirrorView. Has anyone done this before and had any issues? There will be two SAN/iQ boxes linked by a 1Gb link. The SAN/iQ software makes the units look like one unit, therefore giving us a form of HA/DR. The host servers will be fully loaded Dell PE2950s. We are basically going to use one site as a DR site. Any advice/tips would be welcome, as Dell are less than helpful since I returned their MD3000i's because they don't do replication as they promised.

Thanks

Dave_Mishchenko
Immortal

Your post has been moved to the Strategy and Planning forum

Dave Mishchenko

VMware Communities User Moderator

beanieboy
Contributor

Sorry Dave :)

Berniebgf
Enthusiast

Hi Beanieboy.

How would the two boxes communicate, e.g. sync or async?

If it is sync (commit to cache at the secondary box before acknowledging the I/O), then I would be cautious about performance at the primary site due to latency.
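
To illustrate the latency point, here is a minimal back-of-the-envelope model. The timings are assumptions for illustration only, not LeftHand or SANmelody figures:

# Rough model of per-write latency for sync vs async mirroring.
# All numbers are illustrative assumptions, not vendor specs.

local_write_ms = 0.5      # assumed time to commit a write to local cache
inter_site_rtt_ms = 2.0   # assumed round trip on a 1Gb p2p link

# Synchronous: the primary cannot ACK until the secondary has committed,
# so every single write pays the inter-site round trip.
sync_latency_ms = local_write_ms + inter_site_rtt_ms + local_write_ms

# Asynchronous: the primary ACKs after its local commit and ships the
# write later, so the application never sees the link latency.
async_latency_ms = local_write_ms

print(f"sync:  {sync_latency_ms:.1f} ms per write")
print(f"async: {async_latency_ms:.1f} ms per write")

The point is that with sync mirroring the link round trip is added to every write the application issues, which is why the quality of the inter-site link matters so much.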

I am not familiar with the LeftHand product; however, I can tell you about my experience with the "SANmelody" option in this type of configuration.

My experience with Ethernet (iSCSI mirror links) between two HA-synced boxes was not positive; latency caused us issues in the production environment.

I have since ONLY configured HA mirror links over 4Gb Fibre Channel (when at the same site) when setting up an HA sync relationship.

Obviously this will not be possible for your specific requirements, since your sites are separate.

I would recommend you do some performance testing (if possible) to validate the solution before you put it into production.

best regards

Bernie

http://sanmelody.blogspot.com

christianZ
Champion

Of course it works - they can do 1-, 2-, or 3-way replication (1-way meaning no mirroring). If you search here for "lefthand" you will find many guys running it in production.

Check this thread too: (test from cmanucy, page 21).

On the LeftHand home page you can find many webinars, including one on testing the campus cluster.

I would only be sceptical about the performance of the NSM2120 - maybe the HP would be the better choice here.

Berniebgf
Enthusiast

ChristianZ

Beanieboy's concern is not whether the product works; it is about using two of them across a 1Gb link between sites.

From what I read, cmanucy's configuration was at a single location through a 3COM 4500G switch; results could vary for a site-to-site configuration.

From what I have seen on the web, the product looks impressive, but I still think it would be wise to test the intended solution before committing $$.

As the author of the "Open unofficial storage performance thread", do I dare question your logic? ;) (attempted humor)

best regards

Bernie.

christianZ
Champion

There is only a small difference between the units being configured on one site or on two sites - they do the 2-way replication the same way.

But an uplink of at least 2 x 1Gb was suggested, to avoid any kind of bottleneck.

From what I have heard, you can get test boxes too (at least in the US).

And of course it is a pleasure that so many of you guys join the discussions here.

Regards

Christian

christianZ
Champion

...and don't forget the 10Gbit option!

kukacz
Enthusiast

ChristianZ: The NSM2120 actually is HP ;)

beanieboy: The 1Gbit link between sites is one half of the theoretical maximum throughput you can get out of those boxes. The more random, database-like operations your I/O traffic contains, the less throughput it will require. There is another reason for two links, however: redundancy. You have perhaps considered it already; I am just mentioning it here to be safe.
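
A rough worked example of the throughput point (the link efficiency, IOPS rate, and block size below are assumed figures for illustration, not NSM2120 specs):

# Back-of-the-envelope throughput check for the inter-site link.
# All figures are assumptions for illustration.

def link_mb_per_s(gbits, efficiency=0.9):
    """Usable MB/s of an Ethernet link after protocol overhead (assumed ~90%)."""
    return gbits * 1000 / 8 * efficiency

print(f"1 x 1Gb link:  {link_mb_per_s(1):.0f} MB/s")
print(f"2 x 1Gb links: {link_mb_per_s(2):.0f} MB/s")

# Random, database-like I/O needs far less raw bandwidth than sequential:
iops = 5000      # assumed random write rate
block_kb = 8     # assumed I/O size
print(f"{iops} IOPS x {block_kb} KB = {iops * block_kb / 1024:.0f} MB/s")

Under those assumptions a random-I/O workload fits comfortably inside a single 1Gb link; it is large sequential transfers (and resyncs) that would saturate it.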

And to make you comfortable, there are plenty of SAN/iQ installations running VMware; just ask LeftHand for references if you like. I work for a LeftHand reseller in Czechia and have deployed three SAN/iQ + VMware installations on Dell hardware with no trouble.

--

Lukas Kubin

stumpr
Virtuoso

I think there is some confusion about LeftHand's solution.

LeftHand does network RAID. Two or more NSMs should be deployed for redundancy; you need 3 units to do 3-way replication, etc. Adding more NSMs increases your storage as well as your I/O. A virtual management IP address is created (a quorum, in LHN terms), and this becomes your discovery target. Every NSM can respond to this IP address, so it's fully redundant.

The NSM cluster presents itself like a single unit. I don't know if I would be comfortable with a single-NSM solution, but that's a personal preference.

MirrorView is comparable to RemoteCopy from LeftHand. RemoteCopy is asynchronous only; it uses LUN snapshots to replicate delta changes from a LUN at one site's NSMs to another site's NSMs. It's really no different from most of the other iSCSI vendors' replication options.
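
Conceptually, snapshot-based delta replication works something like this sketch (purely illustrative; not LeftHand's actual implementation):

# Minimal sketch of snapshot-based async replication (RemoteCopy-style).
# Conceptual illustration only; the real product works at the block layer.

def changed_blocks(prev_snapshot, curr_snapshot):
    """Return only the blocks that differ between two point-in-time snapshots."""
    return {addr: data for addr, data in curr_snapshot.items()
            if prev_snapshot.get(addr) != data}

prev = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}               # last replicated snapshot
curr = {0: b"AAAA", 1: b"XXXX", 2: b"CCCC", 3: b"DDDD"}   # new snapshot

delta = changed_blocks(prev, curr)
print(f"shipping {len(delta)} of {len(curr)} blocks to the DR site")

Because only the delta crosses the wire, the link requirements depend on your change rate and schedule, not on the total LUN size.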

I think the network RAID design of LHN got confused with MirrorView/RemoteCopy. The 2-way, 3-way replication is a configuration option of the network RAID. Data blocks are written across NSM units in a cluster: with 2-way, data is written to two NSMs in the cluster; with 3-way, to 3 NSMs, etc. The higher the replication level, the more data protection you have. It also increases your storage consumption (1 block x 3 = 3 blocks of storage across 3 NSMs).
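
A toy model of the placement idea (the round-robin layout is an assumption for illustration; SAN/iQ's real placement logic may differ):

# Toy model of n-way network RAID placement across NSMs in a cluster.
# Purely illustrative; actual SAN/iQ placement logic is not public.

def place_block(block_addr, replication_level, nsm_count):
    """Pick the NSMs that hold copies of a block (round-robin assumption)."""
    assert replication_level <= nsm_count, "need at least one NSM per copy"
    first = block_addr % nsm_count
    return [(first + i) % nsm_count for i in range(replication_level)]

# 2-way replication in a 3-node cluster: every block lives on two NSMs,
# so each written block consumes 2 blocks of cluster storage.
for addr in range(4):
    print(f"block {addr} -> NSMs {place_block(addr, 2, 3)}")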

As for the 1Gig link between sites... I don't know. Is this DR site next door? :) If he means that the connections to the switches are 1Gig, then yes, that's almost a requirement in iSCSI deployments these days. You can of course EtherChannel the links for failover and higher throughput.

I heard from an LHN rep a few months ago that they are now up to 7-way replication in the network RAID (don't confuse their 2-way, 3-way, 4-way ... replication with other vendors' replication features) and now support this replication (including failover prioritization) across MAN links. I don't know what bandwidth and latency requirements are needed for this to be successful.

I've run LHN in production. The performance was on par with every other iSCSI solution I've used; your mileage of course varies based on your environment and how much $$ you can spend. I really liked their solution. The management interface seems sort of 'low-end', but it was simple and easy to use. One negative was that it required you to be on the iSCSI network or to have it routable from your data network, whereas I usually isolate my iSCSI networks from my data networks. The product scales as you expand the number of nodes; very cool product, but definitely different from the other major iSCSI vendors.

They have released a couple of different virtual appliances of their SAN/iQ software if you want to play with the solution more. I use the laptop VSA in my laptop ESX cluster.

Reuben Stump | http://www.virtuin.com | @ReubenStump
beanieboy
Contributor

Hi Folks,

Thanks for the advice so far; a couple of extra bits of info. The line is a 1Gb p2p link from one of the big UK ISPs, and we also have a fairly pointless 100Mb backup line which is supposed to be diversely routed. I've managed to get them to upgrade this to another 1Gb, so hopefully we should have 2Gb p2p over diversely routed lines. Also, not sure if this will help or hinder, but our Cisco 4510R switches can do jumbo frames.

We're going to have two 2120s at one site and two at the other; we haven't decided on 1 or 2 clusters yet. Unfortunately we can't get trial units here in the UK, as our supplier isn't the best, so we're just going to have to buy them. I'll download that virtual appliance and see what it does. I think some serious reading will be needed during the two-week delivery time. How much are the 2120s in the US? I'm going to get 2 x 2120 (12 SAS drives, 300 GB, 15,000 RPM) and 2 x 2120 (12 SATA drives, 750 GB, 7,500 RPM). This is around £47k or $92k.

Thanks

Beano

stumpr
Virtuoso

Beanie,

I can't help you with the pricing; it's been too long. I remember getting mine a bit cheaper than NetApp and EMC quoted us, but we were also among the first to take on the new DL380-based NSMs, which look like the 2120s you're buying. (I got them without LHN branding; I think we might have bought them direct from HP and just did the SAN/iQ install on site in our data center.) We got a lot of special treatment: a test done in their lab with a sample of our production database, training, and some good advice on ski slopes (they're in Denver). I had nothing but good experiences, and we were given a high-end support rep who was very knowledgeable.

From what you're telling me, I think you're in good shape to try the MAN replication. However, I'm not sure how that works with mixed spindle speeds like you have; the feature is fairly new, I believe, and it wasn't around when I did my last LHN install. Hopefully those 1Gb p2p links aren't burstable and have good latency. If not, you should still be able to get a pretty tight RemoteCopy schedule going.

Keep in mind that if you do 2-way replication, you halve your usable capacity. We were so busy with performance concerns that we almost forgot to check capacity requirements. I'm guessing you'll use the SAS 2120s as your primary NSM cluster and the SATA 2120s as your DR NSM cluster?
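
A quick capacity check against the order you listed (the on-node RAID5 overhead below is an assumption; adjust it to whatever you actually configure):

# Quick capacity check for the proposed order: 2 x 2120 with 12 x 300 GB SAS
# and 2 x 2120 with 12 x 750 GB SATA. On-node RAID level is an assumption.

def usable_gb(drives, drive_gb, node_count, network_raid_copies, raid5=True):
    per_node = (drives - 1 if raid5 else drives) * drive_gb  # RAID5 loses ~1 drive
    return per_node * node_count / network_raid_copies

sas = usable_gb(12, 300, node_count=2, network_raid_copies=2)
sata = usable_gb(12, 750, node_count=2, network_raid_copies=2)
print(f"SAS cluster:  ~{sas:.0f} GB usable with 2-way replication")
print(f"SATA cluster: ~{sata:.0f} GB usable with 2-way replication")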

Is anyone here still an LHN partner or reseller? I thought they had an option to purchase the SAN/iQ software and install it on supported platforms, but on their site I only see the packaged NSM units. Did they pull this option?

Reuben Stump | http://www.virtuin.com | @ReubenStump