VMware Cloud Community
dhanarajramesh

RAID 1 read and write IOPS is greater than RAID5 IOPS

Hi All, I've just started vSAN performance testing. Using the HD Tune Pro file benchmark tool, I found that a VM with a RAID 1 policy shows higher read and write IOPS than the same VM with a RAID 5 policy, whereas I thought RAID 5 read/write performance was traditionally supposed to be higher than RAID 1. Before I move further, I would like to understand this result.

RAID 1 output:

raid1.png

RAID 5 output:

RAID5.png

The setup is 4 x DL380 G9 servers with a 2 x 10 Gbps LACP vSAN network.

5 Replies
MKguy
Virtuoso

where traditionally the RAID 5 write performance supposed to be higher than RAID 1

No, it's actually the opposite. Traditionally, RAID5 writes are always slower than on RAID1 because the parity has to be computed and written as well. Reads, however, are faster on RAID5, because you can read from all drives at once.
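The write-penalty arithmetic behind this can be sketched as follows (a generic illustration with made-up numbers, not a measurement of this setup; it assumes the textbook penalty factors of 2 backend IOs per RAID1 write and 4 per RAID5 read-modify-write):

```python
# Sketch of the classic RAID write-penalty arithmetic (illustrative only).
# Penalty factors: RAID1 = 2 (two mirror writes per logical write),
# RAID5 = 4 (read old data, read old parity, write new data, write new parity).

def effective_write_iops(raw_backend_iops: int, penalty: int) -> int:
    """Logical write IOPS sustainable within a given backend IOPS budget."""
    return raw_backend_iops // penalty

RAW_BACKEND_IOPS = 10_000  # assumed aggregate backend IOPS budget (hypothetical)

print("RAID1:", effective_write_iops(RAW_BACKEND_IOPS, penalty=2))  # 5000
print("RAID5:", effective_write_iops(RAW_BACKEND_IOPS, penalty=4))  # 2500
```

With the same backend budget, RAID5's extra parity IOs halve the achievable write IOPS relative to RAID1, which matches the slower RAID5 writes in the screenshots.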

Your result screenshots show exactly this behavior: With RAID5 your reads are faster, while your writes are slower.

So these results are perfectly in line with what has been well known in the storage industry for decades.

-- http://alpacapowered.wordpress.com
dhanarajramesh

Yes, write-wise that's correct. How about random reads? Moreover, I noticed that the results change over multiple re-tests with the same parameters. Maybe I will use different tools and compare the data against each other.

MKguy
Virtuoso

how about random reads?

It doesn't matter how random the stream is: as long as the RAID stripe size is smaller than the requested IO size, the IO will be split across more than one disk, so a single IO can be read from multiple disks at once. Randomness generally affects RAID5 less, because splitting IOs across disks already produces a more or less random access pattern on each disk.
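A small sketch of that splitting (a generic striped-layout illustration, not vSAN internals; the stripe-unit size and disk count are assumptions):

```python
# Illustrative sketch: which data disks a single contiguous IO touches
# in a simple striped layout, given a stripe-unit size. Not vSAN internals.

def disks_touched(offset: int, size: int, stripe_unit: int, n_data_disks: int) -> set:
    """Return the set of data-disk indices a contiguous IO spans."""
    first_unit = offset // stripe_unit
    last_unit = (offset + size - 1) // stripe_unit
    return {unit % n_data_disks for unit in range(first_unit, last_unit + 1)}

# A 256 KB read against a 64 KB stripe unit on 3 data disks spans all of them:
print(sorted(disks_touched(0, 256 * 1024, 64 * 1024, 3)))  # [0, 1, 2]
# A 4 KB read smaller than the stripe unit lands on a single disk:
print(sorted(disks_touched(0, 4 * 1024, 64 * 1024, 3)))    # [0]
```

Any IO larger than the stripe unit fans out over several disks regardless of where it lands, which is why randomness matters less once requests span stripes.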


I wouldn't trust consumer-grade IO performance tools too much, and it never hurts to consult other tools and OS platforms and run multiple tests on each of them; only then can you get a halfway objective picture.

-- http://alpacapowered.wordpress.com
vtonev
VMware Employee

Please take into consideration that vSAN does not work like a normal RAID controller. There are two tiers, cache and capacity. If the test duration is not long, your test is probably hitting only the cache area (in the all-flash case, the write buffer).

Also take into consideration:

1. RAID5 calculates one parity block from three data blocks on different vSAN nodes, whereas RAID1 creates two replicas on two separate vSAN nodes. The parity calculation adds latency, but writing to 3 hosts in parallel reduces it. Of course, this depends on the number of stripes you defined in the storage policy for RAID1 and RAID5, but I assume here you have a stripe width of 1. For a simple VM that only writes sequentially, RAID5 writes to 3 nodes while RAID1 writes to only 2, so RAID5 may be faster. However, with random re-writes of small blocks, RAID5 needs to read the existing data and parity blocks before it can calculate the new parity for the small write, so RAID5 will be slower than RAID1. It depends on many factors.

2. The performance difference between RAID5 and RAID1 will depend on the type of performance test: random vs. sequential, block size, working set compared to cache size, etc.

3. The performance difference will depend on the network latency as well.

4. Write performance for small blocks will depend on the latency of your caching SSD device; big-block performance will depend on the throughput of your cache SSD. Read performance depends on the number of SSD capacity devices per disk group and their latency.
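The small-write read-modify-write cycle from point 1 can be sketched with XOR parity (a generic RAID5 illustration, not vSAN's actual code path; all block contents are made up):

```python
# Generic XOR-parity sketch of the RAID5 small-write read-modify-write
# cycle. Illustrative only; not vSAN's actual implementation.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together (how RAID5 parity is formed)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Full-stripe write: parity is computed from all data blocks at once.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0a\x0b"
parity = xor_blocks(d0, d1, d2)

# Small re-write of d1: read old data + old parity (the extra IOs), then
# new_parity = old_parity XOR old_data XOR new_data, then write both back.
new_d1 = b"\x33\x44"
new_parity = xor_blocks(parity, d1, new_d1)

# The shortcut matches recomputing parity from scratch:
assert new_parity == xor_blocks(d0, new_d1, d2)
```

Those two extra reads plus two writes per small update are exactly why random small-block re-writes favor RAID1's simple two-replica write.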

zdickinson
Expert

Good afternoon, is this with or without dedupe/all flash?  Here is a good article on how to test performance when using dedupe:  How to correctly test the performance of Virtual SAN 6.2 deduplication feature - VMware VROOM! Blog ...  Thank you, Zach.
