glidic
Contributor

Poor iSCSI performance

Hi all. I'm using an ESX 4 host connected to an OpenSolaris storage box. On the OpenSolaris side I share a directory over NFS and a LUN over iSCSI. When I test both shares from Windows I get about the same throughput (nearly 100 MB/s), but when I test from a VM on each datastore I get 67 MB/s on NFS (that's OK) but only 34 MB/s on iSCSI.

So, is it possible to increase the iSCSI performance or not?
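A simple way to reproduce this kind of sequential-throughput test inside a Linux guest would be a direct-I/O dd run (a sketch; the path and sizes are arbitrary, and GNU dd is assumed):

  # Sequential write, bypassing the guest page cache
  dd if=/dev/zero of=/tmp/ddtest bs=1M count=2048 oflag=direct
  # Sequential read of the same file
  dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct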

AndreTheGiant
Immortal

It could depend on your target configuration.

Look for some tuning tips for your OpenSolaris box.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
glidic
Contributor

But if I get good performance with the Windows iSCSI initiator, the problem is probably not my target but the ESX initiator, no?

AndreTheGiant
Immortal

"But if I get good performance with the Windows iSCSI initiator, the problem is probably not my target but the ESX initiator, no?"

Right :-)

Are you using a dedicated NIC for the iSCSI initiator?

Is it the same NIC model as the one used for NFS?
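You can check this from the ESX 4 service console (a sketch; the vSwitch, portgroup and NIC names will differ on your host):

  # List vSwitches with their uplink NICs and portgroups
  esxcfg-vswitch -l
  # List VMkernel interfaces with their IP settings and MTU
  esxcfg-vmknic -l
  # List physical NICs with their link speed and duplex
  esxcfg-nics -l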

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
glidic
Contributor

For the moment it's just a test, so I use the same NIC and the same VMkernel port for NFS and iSCSI. And normally ESX uses IET to connect to an iSCSI share, no?

The point of the test is to decide whether the real architecture will use NFS or iSCSI, so I want to try to increase performance for both modes. I don't use jumbo frames yet, just the default options; maybe it's possible to change some ESX configuration parameters or something like that. I'm a newbie with ESX, so I'm searching and testing.
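For a fairer comparison you could give iSCSI its own VMkernel port on a dedicated NIC. A sketch for the ESX 4 service console; the vSwitch name, NIC and addresses are hypothetical:

  # Create a vSwitch on a spare NIC and add a VMkernel port for iSCSI
  esxcfg-vswitch -a vSwitch2
  esxcfg-vswitch -L vmnic2 vSwitch2
  esxcfg-vswitch -A iSCSI vSwitch2
  esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 iSCSI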

AndreTheGiant
Immortal

"And normally ESX uses IET to connect to an iSCSI share, no?"

Normally ESX uses a VMkernel interface that is on the same network as the target (its own software iSCSI initiator, not IET).

"The point of the test is to decide whether the real architecture will use NFS or iSCSI."

IMHO it could be better to use NFS rather than a software iSCSI target (which can have more overhead).

"Maybe it's possible to change some ESX configuration parameters or something like that."

There aren't any other "magic" options to increase the performance.

See: http://www.vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf
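About the only host-side setup worth double-checking is the software initiator itself (a sketch for the ESX 4 service console; the vmhba number and target IP are examples):

  # Enable the software iSCSI initiator
  esxcfg-swiscsi -e
  # Add the target as a dynamic-discovery (SendTargets) address
  vmkiscsi-tool -D -a 192.168.10.20 vmhba33
  # Rescan the adapter so new LUNs show up
  esxcfg-rescan vmhba33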

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
glidic
Contributor

And have you tested ESX with iSCSI? If yes, what kind of performance did you get, and what kind of iSCSI target/server did you use?

AndreTheGiant
Immortal

Performance is similar between RDM iSCSI disks and iSCSI disks connected with a guest initiator.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
TimPhillips
Enthusiast

And what performance do you get with other targets?

glidic
Contributor

For me, I only tried iSCSI with OpenSolaris, because Linux iSCSI is bad compared to OpenSolaris iSCSI. So I think you are obliged to use a dedicated iSCSI solution like NetApp to get good performance.

To finish: I'm dropping iSCSI with ESX and will now use NFS.

TimPhillips
Enthusiast

Linux iSCSI solutions are very poor; after a lot of experiments with them I decided to use Windows-based solutions like StarWind or LeftHand.

glidic
Contributor

OK, I don't know StarWind's performance, but just for comparison: I get nearly 100 MB/s over a gigabit link between my server (3 × 10,000 rpm disks in RAIDZ, which is like RAID 5) and my laptop (2 disks in RAID 0). I think the gigabit link is what sets the 100 MB/s limit; I'll try with 2 aggregated links when I have the time.
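That ceiling matches simple arithmetic on the link itself:

  1 Gbit/s = 125 MB/s raw
  minus Ethernet/IP/TCP framing overhead (roughly 5-10%)
  ≈ 112-118 MB/s usable, so ~100 MB/s measured is close to line rate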

glidic
Contributor

If you test that one day, please post your results so we can compare.

TimPhillips
Enthusiast

I've tested it in my production environment, and I was very pleased with its functionality and performance.

glidic
Contributor

Is it possible to see some performance numbers (how many MB/s, for example)?

And I noticed another weird thing: NFS performance increased when the iSCSI storage adapter was enabled. I think a better synchronization was established, but I'm not sure.

In fact, I think VMware doesn't synchronize NFS and iSCSI at the same time.

ajgball
Contributor

Don't forget the switch in the middle.

You need to enable flow control on the relevant switch ports.

You should enable jumbo frames on the switch and on the SAN.

Have at least a 0.5 MB port buffer on your switch.
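End-to-end jumbo frames mean MTU 9000 on every hop. A sketch; the vSwitch, portgroup and NIC names are examples, and the OpenSolaris step depends on the driver:

  # ESX 4: raise the vSwitch MTU, then create the VMkernel port with MTU 9000
  esxcfg-vswitch -m 9000 vSwitch2
  esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 iSCSI
  # OpenSolaris: raise the NIC MTU (driver permitting)
  dladm set-linkprop -p mtu=9000 e1000g0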

glidic
Contributor

I'm OK with that; jumbo frames gained nearly 15%, but the problem is not there. The problem is why I get 100 MB/s when I run the test from my Windows laptop but only 40 MB/s from a VM on the ESX host. That's all.

And what is flow control? What effect does it have?

dilidolo
Enthusiast
(Accepted Solution)

I'm getting 100 MB/s without any problem. You'll have to share your COMSTAR configuration so we can see where the problem is. Do you have an SSD as a slog? I've heard that NFS and iSCSI both use sync writes in ESX.
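For reference, adding a separate log device to an existing pool is a one-liner (a sketch; the pool and device names are examples):

  # Attach an SSD as a dedicated ZIL (slog) device to pool 'data'
  zpool add data log c2t0d0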

glidic
Contributor

No, I don't have an SSD as slog (I know it's good, but it's too expensive). My configuration is:

LU Name: 600144F019388C0000004AB79F350013
    Operational Status : Online
    Provider Name      : sbd
    Alias              : /dev/zvol/rdsk/data/iscsi2
    View Entry Count   : 1
    Data File          : /dev/zvol/rdsk/data/iscsi2
    Meta File          : not set
    Size               : 214748364800
    Block Size         : 512
    Management URL     : not set
    Vendor ID          : SUN
    Product ID         : COMSTAR
    Serial Num         : not set
    Write Protect      : Disabled
    Writeback Cache    : Enabled

I use OpenSolaris snv_122, and the LUN was created in a RAIDZ zpool (3 × 10,000 rpm SAS disks).
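If you ever rebuild the LUN, the zvol's volblocksize (8K by default, separate from the 512-byte LU block size above) is worth testing with larger values for sequential I/O. A sketch reusing the names from the listing:

  # Recreate the backing zvol with a larger volume block size
  zfs create -V 200g -o volblocksize=64k data/iscsi2
  # Register it with COMSTAR as a logical unit
  sbdadm create-lu /dev/zvol/rdsk/data/iscsi2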
