VMware Cloud Community
bellocarico
Enthusiast

ESXi 6.5 standalone: RAID5 degraded... or not?

I had a disk crash in my DATA RAID5, made of 3x 8TB SATA HDDs. The controller is an Adaptec 6805T, and ESXi was installed from a custom ISO with the vendor-specific .vib. It worked fine for years...

Due to a supplier issue the disk was replaced with a different brand (the old drive was a Toshiba, the new one is a Seagate). Still, I read up on the Internet and was also reassured by the vendor that you can mix and match HDD makes, so I installed it. I initialised the disk using the controller BIOS, added it to the RAID5, and the re-sync started immediately. At that point I restarted the box so ESXi could come back online. (For reference, the same steps can also be done from the ESXi shell with arcconf instead of the controller BIOS, as sketched below.)
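A rough sketch of the shell equivalent, assuming the controller is number 1 and the replacement disk sits at Connector 1, Device 0 as in the segment list further down; the arcconf path depends on which .vib you installed (mine lives under /opt/pmc):

   /opt/pmc/arcconf GETCONFIG 1 PD                        # confirm the new disk is visible to the controller
   /opt/pmc/arcconf TASK START 1 DEVICE 1 0 INITIALIZE    # initialise the replacement disk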

Now... I personally don't think the different HDD brand has anything to do with this, but my issue is: after about 24 hours the RAID5 is still reported as degraded:

[Screenshot: the ESXi UI reporting the DATA RAID5 logical device as degraded]

However... using arcconf I get a different message:

Logical Device number 2

   Logical Device name                      : DATA

   Block Size of member drives              : 512 Bytes

   RAID level                               : 5

   Unique Identifier                        : D63F93AD

   Status of Logical Device                 : Optimal

   Additional details                       : Initialized with Build/Clear

   Size                                     : 15257590 MB

   Parity space                             : 7628800 MB

   Stripe-unit size                         : 256 KB

   Interface Type                           : Serial ATA

   Device Type                              : HDD

   Read-cache setting                       : Enabled

   Read-cache status                        : On

   Write-cache setting                      : Disabled

   Write-cache status                       : Off

   Partitioned                              : Yes

   Protected by Hot-Spare                   : No

   Bootable                                 : No

   Failed stripes                           : No

   Power settings                           : Disabled

   --------------------------------------------------------

   Logical Device segment information

   --------------------------------------------------------

   Segment 0                                : Present (7630885MB, SATA, HDD, Connector:1, Device:0)             WKD068VK

   Segment 1                                : Present (7630885MB, SATA, HDD, Connector:1, Device:1)         27M2K0VBFP9E

   Segment 2                                : Present (7630885MB, SATA, HDD, Connector:1, Device:2)         27M2K0VLFP9E

I suppose rebooting the box would be worth a try, but I'm worried the re-sync would start over. Which one should I believe?
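In case it matters, one way to check whether a rebuild task is actually still running is arcconf's status command (again assuming controller 1 and the .vib's install path):

   /opt/pmc/arcconf GETSTATUS 1    # lists any running rebuild/verify task with its completion percentage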

3 Replies
IRIX201110141
Champion

Maybe the state gets corrected by resetting IPMI/CIM on the ESXi host.

Regards,
Joerg

bellocarico
Enthusiast

Hmm... just so I understand: why would IPMI play any part here?

This is a driver running on ESXi querying the PCIe RAID controller directly; if I log into IPMI, it doesn't give me any information about installed cards or anything similar...

I've had disks fail in the past: the second logical disk you see in the first image was degraded last week after I accidentally touched the power cable of one of the two disks. I spotted the mistake, plugged it back in, and yellow turned to green within 6 hours. This issue seems to be tied to the logical array itself.

bellocarico
Enthusiast

I think I got what you mean now... you were referring to the CIM service!

That fixed it, thank you 🙂
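In case anyone else lands here: what refreshed the reported state for me was restarting the CIM broker (sfcbd) from the ESXi shell, something along these lines on 6.5:

   /etc/init.d/sfcbd-watchdog restart    # restart the CIM/sfcbd service
   esxcli system wbem get                # check the WBEM/CIM service state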

[Screenshot: the DATA logical device now showing as healthy in the ESXi UI]
