VMware Cloud Community
HarisB
Contributor

Picking your brain about iSCSI solution

Hello everyone,

I'm planning to introduce an iSCSI solution into my setup, and I wanted to run the design plan by you to see if it makes sense and whether anyone has suggestions for improvement.

Current situation:

Dell SC1430 servers with 2x quad-core CPUs, 8 GB RAM, a single integrated Gigabit network adapter, an internal 2610SA SATA RAID controller (6 ports), and 4x 80 GB SATA disks in a RAID 10 configuration. There are currently 3 of these in use, running around 35-40 VMs altogether.

All VMs are configured with 2 vCPUs and 256 MB RAM, and they are heavily loaded (75-80% CPU utilization) for about 8 hours every day. For the other 16 hours they sit there powered on but don't do much.

For each VM, disk activity peaks at around 5 Mbit per second of writes; there is not much read activity (except at startup, and these VMs don't get rebooted for months). Network traffic is also negligible.

Because of the nature of the work they perform, and their different scheduling times, it is important for me to be able to dynamically balance the load across hosts. Currently I do this manually, and even though it's not the end of the world, I'm planning to expand the setup with at least 3 more servers of the same configuration in the short term (up to 10 more may follow within a year or so) and to grow the number of VMs to at least 75 (short term), so I figured centralized storage and dynamic load balancing would be in order.

Situation after expansion:

- 6 servers of the above configuration, each with a single SATA drive used for booting ESX and not used for any VM storage.

- Servers equipped with an additional 100 Mbps NIC for their regular network traffic

- A 1 Gbps NIC used exclusively for iSCSI traffic

- The main storage box and center of the universe, a storage server running as the iSCSI target:

- 20 hot-pluggable 80 GB SATA drives

- 3x 2610SA controllers, each in charge of 6 SATA disks in a RAID10 configuration (usable space roughly 3x 80 GB per array), for a total of 18 disks. VMs are spread equally across these 3 arrays (see the quick capacity sketch after this list).

- 2x 80 GB SATA disks in a mirror, used as the boot volume for the storage server

- Two 1 Gbps NICs, one used exclusively for the iSCSI network, the other for regular network traffic.

- The iSCSI network will run on a completely separate Gigabit network carrying no other traffic.
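
A quick sanity check on the usable space (just a sketch in Python; it assumes the 80 GB figure is the nominal drive size and that RAID10 leaves half the raw capacity usable):

# Rough usable-capacity math for the layout above.
# Assumes nominal 80 GB drives; formatted capacity will be slightly lower.
DISK_GB = 80
ARRAYS = 3            # one RAID10 array per 2610SA controller
DISKS_PER_ARRAY = 6   # RAID10 keeps half the raw space

usable_per_array_gb = DISKS_PER_ARRAY // 2 * DISK_GB   # 3 x 80 GB = 240 GB
total_usable_gb = ARRAYS * usable_per_array_gb         # 720 GB across the box

print(f"Usable per array: {usable_per_array_gb} GB")
print(f"Total usable:     {total_usable_gb} GB")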

I am undecided about what OS/iSCSI target software to run on the storage server. Bear in mind I'm solely responsible for this setup, and my non-Windows skills are at the "can follow step-by-step written instructions" level. My preference is Windows; however, if performance were double under some preconfigured, easily deployable, iSCSI-enabled flavor of Linux, I might consider that.

I am also debating what software to use for the iSCSI target if I go the Windows route. There are a few available; I haven't had a chance to do a decent performance test with any of them, and certainly not with the planned setup. Any input here is greatly appreciated.

Also, I have no idea what to expect in terms of CPU load on the iSCSI target storage server. As I'm building this from a chassis and available parts, I'm planning to cannibalize an AMD 64 3200+ single-core CPU with 1.5 GB RAM from another machine. Obviously, with this box being as important as it will be, I can beef it up as required. The chassis supports standard server boards, so even dual-socket is an option.

Notes:

- I am not concerned about the single disk in the ESX hosts; I can tolerate losing a host to a disk failure as long as my VMs get restarted on the other available hosts.

- With 75 VMs writing at 5 Mbit/sec each, I'm looking at a ~400 Mbit/sec (~50 MByte/sec) requirement on the iSCSI box. While I'm sure the 3 arrays will handle this easily, I'm not at all sure that a single Gb NIC can handle all that (SCSI gets encapsulated into TCP and sent over the network, so there is overhead here; I don't know how much).

Should I be looking at additional Gb NICs on the iSCSI box side to somehow create more capacity? Fat-piping, or direct crossover to hosts using multiport NICs? And what about the mainboard bus? Can it transfer all that data from the disks to the NICs?
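
To put rough numbers on that question (my own back-of-the-envelope sketch; the ~10% figure for iSCSI/TCP/IP header overhead is an assumption, not something I've measured):

# Back-of-the-envelope check: can a single GigE link carry the aggregate writes?
vms = 75
write_mbit_per_vm = 5.0                    # observed peak write rate per VM

payload_mbit = vms * write_mbit_per_vm     # ~375 Mbit/s of SCSI payload
overhead = 1.10                            # assumed ~10% iSCSI/TCP/IP/Ethernet overhead
wire_mbit = payload_mbit * overhead        # ~413 Mbit/s on the wire
gige_realistic_mbit = 900                  # realistic ceiling for one GigE link

print(f"Payload:  {payload_mbit:.0f} Mbit/s (~{payload_mbit / 8:.0f} MB/s)")
print(f"On wire:  {wire_mbit:.0f} Mbit/s including protocol overhead")
print(f"Headroom: {gige_realistic_mbit - wire_mbit:.0f} Mbit/s left on a single link")

On those numbers a single dedicated GigE link looks sufficient for the steady-state writes, but it leaves little room for read bursts or growth, which is where a second iSCSI NIC would come in.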

Many thanks to all who can share their knowledge and experience with these things.

Cheers

Paul_Lalonde
Commander

Hey.

An iSCSI target solution should work well in your environment. From experience, however, I'd suggest your iSCSI target platform meet the following minimum requirements:

1. The motherboard and CPU combo should be "server grade" and provide multiple high-speed PCI Express or PCI-X slots. If I recall, the 2610sa (or 21610sa) is a 64-bit, 66MHz PCI-X card. So, you're looking at a server board that should have at least two separate PCI-X buses. Most decent server boards provide two 133MHz PCI-X buses across 3 or 4 PCI slots. One of the slots is usually 133MHz, and the other three are 66/100MHz.

A single 64-bit, 66 MHz PCI slot provides 533 MB/s of throughput, so there's plenty of bandwidth available in this kind of setup. 100 MHz and 133 MHz slots provide even more...
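
For reference, those figures are just the bus width multiplied by the clock speed; a quick Python snippet to reproduce them:

# Theoretical peak throughput of a PCI / PCI-X slot = width (bits) x clock (MHz) / 8
def pci_peak_mb_s(width_bits, clock_mhz):
    return width_bits * clock_mhz / 8

# Nominal "66 MHz" PCI actually runs at 66.66 MHz, hence the quoted 533 MB/s figure.
for label, clock_mhz in [("66 MHz", 66.66), ("100 MHz", 100.0), ("133 MHz", 133.33)]:
    print(f"64-bit @ {label}: ~{pci_peak_mb_s(64, clock_mhz):.0f} MB/s")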

2. I'd suggest two GigE NICs for iSCSI and one for management. ESX doesn't load balance iSCSI across GigE ports (at least not seamlessly), but the second NIC will ensure communication to your VMFS volumes isn't lost if one NIC drops or bounces.

3. For best performance, favour spindle count over capacity. From what I've read, you're doing this already so you should be fine performance-wise.

4. For a Windows-based iSCSI target, your options are: Windows Unified Data Storage Server 2003 R2, FalconStor iSCSI Storage Server, DataCore SANmelody, and Rocket Division StarWind. If memory serves, only the FalconStor and DataCore offerings are currently "certified" in the HCL.

5. Windows environments need to be tweaked for decent Gigabit Ethernet performance. See the following article:
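
I can't vouch for exact values since they depend on your NICs and workload, but the usual suspects are enabling TCP window scaling and raising the receive window under the Tcpip\Parameters registry key. Here's a rough Python sketch of the kind of change involved (run as Administrator, reboot afterwards, and treat the specific numbers as illustrative only):

# Illustrative only: enable TCP window scaling/timestamps and raise the TCP
# receive window on Windows Server 2003. The values are examples, not tuned
# recommendations for any particular NIC.
import winreg

TCP_PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, TCP_PARAMS, 0, winreg.KEY_SET_VALUE) as key:
    # Tcp1323Opts = 3 turns on both window scaling and timestamps (RFC 1323)
    winreg.SetValueEx(key, "Tcp1323Opts", 0, winreg.REG_DWORD, 3)
    # A larger receive window (256 KB here) helps sustain Gigabit throughput
    winreg.SetValueEx(key, "TcpWindowSize", 0, winreg.REG_DWORD, 262144)
    winreg.SetValueEx(key, "GlobalMaxTcpWindowSize", 0, winreg.REG_DWORD, 262144)

print("TCP parameters updated; reboot for them to take effect.")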

This should pretty much get you covered. It looks like you're planning this out very well. Good luck!

Paul

JDLangdon
Expert

Dell SC1430 servers with 2x quad-core CPUs, 8 GB RAM, a single integrated Gigabit network adapter, an internal 2610SA SATA RAID controller (6 ports), and 4x 80 GB SATA disks in a RAID 10 configuration. There are currently 3 of these in use, running around 35-40 VMs altogether.

I am undecided about what OS/iSCSI target software to run on the storage server. Bear in mind I'm solely responsible for this setup, and my non-Windows skills are at the "can follow step-by-step written instructions" level. My preference is Windows; however, if performance were double under some preconfigured, easily deployable, iSCSI-enabled flavor of Linux, I might consider that.

Should I be looking at additional Gb NICs on the iSCSI box side to somehow create more capacity? Fat-piping, or direct crossover to hosts using multiport NICs? And what about the mainboard bus? Can it transfer all that data from the disks to the NICs?

Personally, I would carve the iSCSI SAN up into two sets of LUNs. One set of LUNs I would format as VMFS, and the other I would format as NTFS/ext3.

Then I would purchase an iSCSI hardware initiator for each host server and configure ESX to access the iSCSI SAN. On this set of LUNs I would create the system disks of all my VMs.

Next, I would purchase some additional 1 Gb NICs and configure each VM with two virtual NICs: one attached to the production network and the other attached to a separate iSCSI network. On each VM I would install and configure a software iSCSI initiator so that the VMs talk directly to the NTFS/ext3 iSCSI LUNs.

According to vendor reports, you should get better performance, and you would also have the option of attaching a physical server to the iSCSI SAN to access the data.

Jason

HarisB
Contributor

So well planned, in fact, that I forgot I need another Gb NIC for VMotion - so the plan now calls for 3 NICs: 1 for VMotion, 1 for iSCSI, and 1 for regular network traffic. All three will be connected to completely separate networks to avoid any contention.

That AMD board I planned to cannibalize doesn't have any PCI-X slots, so I'll be shopping for a new board.

Storage Server seems to be available to OEMs only, and I can't find a demo version that can be used for testing/comparing iSCSI solutions.

@JD - all VMs run from a single 4 GB VMDK and have no reason to access any other disks or LUNs. I see what you are suggesting, but for what I do I think it would be a complication without a clear benefit.

Cheers

HarisB
Contributor

Hi,

Further to the above, I'm thinking about the pros and cons of having 2 arrays of 6 disks in RAID10 vs. 3 arrays of 4 disks in RAID10. Capacity is not an issue for me; performance is critical. All ESX hosts will have access to all arrays, and VMs will be spread equally across the arrays.

I'm not sure about the disk queues. If I'm not mistaken, one queue is created per drive (an array would be seen as one disk in this case, so one queue), but I'm not sure how they are created, to what size, whether their size is dynamic in any way, whether they can be tweaked, etc. Taken to the extreme, I could see a 12-disk RAID10 with 6x a single disk's capacity in usable space; it could perform better than a smaller array because of the available spindles, but would performance suffer compared to the smaller arrays under heavy random reads/writes?
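
To get a feel for it, I put together a crude spindle-count model (my own assumptions: roughly 75 random IOPS per 7,200 rpm SATA disk, the usual RAID10 write penalty of 2, an 80/20 write-heavy mix, and no credit for controller cache or queueing):

# Crude random-I/O model for the RAID10 layouts under consideration.
IOPS_PER_DISK = 75   # assumed random IOPS for a 7.2k SATA spindle

def raid10_random_iops(disks, write_fraction):
    read_capacity = disks * IOPS_PER_DISK        # reads can be served by any spindle
    write_capacity = disks * IOPS_PER_DISK / 2   # each write hits both mirror halves
    # Blend the two capacities for the given read/write mix
    return 1 / (write_fraction / write_capacity + (1 - write_fraction) / read_capacity)

layouts = [("3 arrays x 4 disks", 3, 4),
           ("2 arrays x 6 disks", 2, 6),
           ("1 array x 12 disks", 1, 12)]

for name, arrays, disks in layouts:
    per_array = raid10_random_iops(disks, write_fraction=0.8)
    print(f"{name}: ~{per_array:.0f} IOPS per array, ~{arrays * per_array:.0f} IOPS total")

With the same 12 spindles the aggregate comes out identical in this model, so on paper the layouts are equivalent; any real difference would come from per-LUN queue depth, controller cache, and how evenly the VMs end up spread across the arrays, which is exactly the part I can't model.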

Thanks

JTowner
Contributor

One thing you will want to do is toss as much server RAM (ECC) as you can into that iSCSI host system and make sure your iSCSI target software can use it. DataCore likes to quote performance numbers from HP systems with 16 GB of RAM and a bunch of SAS disks when comparing against EMC/HP/Dell boxes.

Since performance is such a huge factor, and your overall disk size is small, maybe pick up some 10k/15k SCSI or SAS drives for your RAID10 VMFS volumes and use the SATA disks for NFS/SMB space, backups, and such.
