As a result of Hurricane Matthew, our business shut down all of its servers for two days.

One of those servers was an ESXi host with an attached HP StorageWorks MSA60.

As soon as we logged in to the vSphere client, we realized that none of our guest VMs were available (they were all listed as “inaccessible”). And when we look at the hardware status in vSphere, the array controller and all of the attached drives appear as “Normal”, but the drives all show up as “unconfigured disk”.
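
For reference, the same information should also be visible from the ESXi shell (assuming SSH or the local shell is enabled on the host); the usual commands are along these lines:

  # list registered VMs; inaccessible ones typically show up as invalid/blank entries
  vim-cmd vmsvc/getallvms

  # list the storage devices and mounted datastores the host can actually see
  esxcli storage core device list
  esxcli storage filesystem list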

We rebooted the host and tried going into the RAID config utility to see what things look like from there, but we received the following message:

“An invalid drive movement was reported during POST. Modifications to the array configuration after an invalid drive movement can result in loss of old configuration information and contents of the original logical drives.”

Needless to say, we are really confused by this, because nothing was “moved”; nothing changed at all. We simply powered up the MSA and the host, and we have been having this problem ever since.

We have two main questions/concerns:

  1. Since we did nothing more than power the devices off and back on, what could’ve caused this to happen? We of course have the option to rebuild the array and start over, but I’m leery about the possibility of this happening again (especially since I have no idea what caused it).

  2. Is there a snowball’s chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

On your first question: a variety of things. Do you schedule reboots on all of your gear? If not, you should, for just this reason. On the one host we have, XS decided the array wasn’t ready in time and didn’t mount the primary storage volume on boot. Always nice to learn these things in advance, right?
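
A scheduled reboot doesn’t have to be fancy, either; purely as a sketch (assuming a Linux-based host such as XenServer’s dom0, and a maintenance window that actually suits you), a cron entry along these lines is enough:

  # /etc/cron.d/scheduled-reboot -- illustrative only; pick your own window
  # reboot at 03:30 on the 1st of every month
  30 3 1 * * root /sbin/shutdown -r now "scheduled monthly reboot"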

As for recovering the array: maybe, but I’ve never seen that particular error, and we’re talking very limited experience here. Depending on which RAID controller the MSA is attached to, you might be able to read the array data from the drives on Linux using the md utilities, but at that point it’s faster just to restore from backups.
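
If you do try that route, the md side of it looks roughly like this (a sketch only, and only useful if the drives carry metadata mdadm understands rather than a proprietary Smart Array format; /dev/sdb and /dev/sdc are placeholder device names):

  # inspect whatever RAID metadata the member drives carry
  mdadm --examine /dev/sdb /dev/sdc

  # try to assemble the array read-only so nothing is written to the members
  mdadm --assemble --scan --readonly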

I actually rebooted this host multiple times about a month ago when I installed updates on it. The reboots went fine. We also completely powered that server down at around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array data were all intact.

Does your normal reboot routine for the host include a reboot of the MSA? Could it be that they were powered back on in the wrong order? MSAs are notoriously flaky, so that is likely where the issue is.

I would call HPE support. The MSA is a flaky unit, but HPE support is very good.

We unfortunately don’t have a “normal reboot routine” for any of our servers :-/.

I’m not sure what the correct order is :-S. I would assume that the MSA would get powered on first, then the ESXi host. If that’s correct, we have already tried doing that since we first discovered this problem today, and the problem remains :(.

We don’t have a support contract on this server or the attached MSA, and they’re most likely way out of warranty (ProLiant DL360 G8 and a StorageWorks MSA60), so I’m not sure how much we would have to spend to get HP to “help” us :-S.
