VMware Cloud Community
cassio
Contributor

Error message "A general system error occurred: Failed to serialize result" after rescan storage adapters on ESX 3.0.3

Hello all,

I got the message "A general system error occurred: Failed to serialize result" after rescanning LUNs on my CLARiiON CX3-10 storage system.

This error occurred when I was using ESX 3.0.0.

I upgraded it to ESX 3.0.3 and the error still happens.

So far I have only tried restarting the naviagent and mgmt-vmware services.
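
For reference, I restarted them from the service console like this:

# Restart the Navisphere agent and the host management agent
service naviagent restart
service mgmt-vmware restart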

If I reboot the system, the error stops until I rescan the storage adapters again.
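
For what it's worth, the rescan can also be triggered from the service console instead of the VI Client (vmhba1 below is just an example adapter name):

# Rescan one storage adapter for new LUNs
esxcfg-rescan vmhba1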

Thanks in advance.

Cassio

Cassio
8 Replies
Texiwill
Leadership

Hello,

I would look at your SAN logs as well and try to correlate the error. Also, what other errors surround this one? Once there is one SCSI error, there are usually more.
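
If it helps, something along these lines on the service console should show whether other SCSI errors cluster around the same timestamp (assuming the default ESX 3.x log location):

# Pull recent SCSI-related entries out of the vmkernel log
grep -i scsi /var/log/vmkernel | tail -50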


Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
Blue Gears and SearchVMware Pro Blogs
Top Virtualization Security Links

cassio
Contributor

Hello Edward,

I'll attach my storage logs here in the next post.

I don't know if there are other errors, since I'm not able to do anything else in my VI Client. The error window doesn't disappear.

Thanks in advance,

Cassio

Texiwill
Leadership

Hello,

I would not post the logs, they are way too big for the forum....

However, look at the vmkernel logfile on the ESX host when the error occurs, and look at the logs on the physical SAN as well. The VIC cannot tell us much.
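
For example, leave this running in one session while you rescan in another (default ESX 3.x log locations):

# Watch the kernel and host agent logs live during the rescan
tail -f /var/log/vmkernel /var/log/vmware/hostd.log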


Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
Blue Gears and SearchVMware Pro Blogs
Top Virtualization Security Links

cassio
Contributor

Hello!

I was about to post the storage events... they're not that big.

I looked at the vmkernel log and nothing was registered there.

I took a look at hostd.log; the messages from the moment I rescanned the LUNs until the moment I got the error are attached.
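
For anyone curious, the relevant lines can be pulled out with a simple grep, since the dialog text contains the word "serialize":

# Extract hostd entries related to the serialization failure
grep -i serialize /var/log/vmware/hostd.log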

I'm also searching forums and Google for hints about these logs.

I'm also looking into my QLogic driver. There's a hotfix for it, and I'm verifying whether it applies.

Thanks in advance.

Cássio.

Cooldude09
Commander

What multipathing policy have you implemented? Is it MRU or Fixed (preferred path)?

Regards

Anil

Save the planet, Go Green

If you found my answer useful, feel free to mark it as Helpful or Correct.

Subscribe yourself at walkonblock.com
cassio
Contributor

Hi.

I'm not able to check what's being used on this host, since my VI Client is always displaying the dialog box with the error message from the subject of this thread.

Therefore, based on another ESX server, I believe it's using the MRU policy.

I'm searching for a way to discover it without using the VI Client.
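
Answering myself: from the service console, esxcfg-mpath can list this without the VI Client, which is where the output below comes from:

# List each LUN with its paths and the current multipathing policy
esxcfg-mpath -l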

Hi again! I'm completing my answer:

Disk vmhba1:0:1 /dev/sdb (10240MB) has 4 paths and policy of Most Recently Used
 FC 5:7.0 210000e08b8bb03a<->5006016041e07ede vmhba1:0:1 On active preferred
 FC 5:7.0 210000e08b8bb03a<->5006016841e07ede vmhba1:1:1 Standby
 FC 5:8.0 210000e08b9188c2<->5006016141e07ede vmhba2:0:1 On
 FC 5:8.0 210000e08b9188c2<->5006016941e07ede vmhba2:1:1 Standby

Disk vmhba1:0:2 /dev/sdc (102400MB) has 4 paths and policy of Most Recently Used
 FC 5:7.0 210000e08b8bb03a<->5006016041e07ede vmhba1:0:2 On active preferred
 FC 5:7.0 210000e08b8bb03a<->5006016841e07ede vmhba1:1:2 Standby
 FC 5:8.0 210000e08b9188c2<->5006016141e07ede vmhba2:0:2 On
 FC 5:8.0 210000e08b9188c2<->5006016941e07ede vmhba2:1:2 Standby

Disk vmhba1:0:4 /dev/sdd (102400MB) has 4 paths and policy of Most Recently Used
 FC 5:7.0 210000e08b8bb03a<->5006016041e07ede vmhba1:0:4 On active preferred
 FC 5:7.0 210000e08b8bb03a<->5006016841e07ede vmhba1:1:4 Standby
 FC 5:8.0 210000e08b9188c2<->5006016141e07ede vmhba2:0:4 On
 FC 5:8.0 210000e08b9188c2<->5006016941e07ede vmhba2:1:4 Standby

Disk vmhba1:0:5 /dev/sde (102400MB) has 4 paths and policy of Most Recently Used
 FC 5:7.0 210000e08b8bb03a<->5006016041e07ede vmhba1:0:5 On active preferred
 FC 5:7.0 210000e08b8bb03a<->5006016841e07ede vmhba1:1:5 Standby
 FC 5:8.0 210000e08b9188c2<->5006016141e07ede vmhba2:0:5 On
 FC 5:8.0 210000e08b9188c2<->5006016941e07ede vmhba2:1:5 Standby

Disk vmhba1:0:0 /dev/sda (51200MB) has 4 paths and policy of Most Recently Used
 FC 5:7.0 210000e08b8bb03a<->5006016041e07ede vmhba1:0:0 Standby preferred
 FC 5:7.0 210000e08b8bb03a<->5006016841e07ede vmhba1:1:0 On active
 FC 5:8.0 210000e08b9188c2<->5006016141e07ede vmhba2:0:0 Standby
 FC 5:8.0 210000e08b9188c2<->5006016941e07ede vmhba2:1:0 On

Thanks in advance.

Cassio

cassio
Contributor

Hello all!

Today I restarted my server and the problem disappeared, even when I repeated the procedure of rescanning LUNs.

Tomorrow I will try this procedure again to verify that the problem was really solved.

Thanks for all messages!

Cassio

Cooldude09
Commander

Cassio, that's great that the problem got knocked off. Just to be on the safe side, confirm with the storage vendor whether the policy should be MRU or Fixed (preferred).
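
For an active/passive array like the CLARiiON CX series, MRU is typically the recommendation. If the vendor does suggest a change, it can be applied from the service console as well. A sketch, assuming the LUN IDs from your listing (please verify the exact flags with esxcfg-mpath --help on your ESX 3.x host before running):

# Set the multipathing policy for a single LUN to Most Recently Used
esxcfg-mpath --policy=mru --lun=vmhba1:0:1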

Regards

Anil

Save the planet, Go Green

If you found my answer useful, feel free to mark it as Helpful or Correct.

Subscribe yourself at walkonblock.com