This comes from our SE Eddie Tang in Canada.
He created a simple script using private SSH keys, PuTTY, and plink.exe on their Windows host running the APC software.
The script just runs this:
plink.exe "ssh session name" halt --array
Where "ssh session name" is a PuTTY session configured with the Nimble array hostname and private SSH key information.
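To make that concrete, here is a minimal Windows batch sketch of such a hook. Everything in it is a placeholder or an assumption, not something from Eddie's actual script: the saved session name "nimble-array", the plink.exe install path, and the use of the -batch flag (which keeps plink from hanging on interactive prompts when run unattended).

```shell
@echo off
REM Hypothetical sketch of an APC shutdown hook; session name and path are placeholders.
REM "nimble-array" is a saved PuTTY session that holds the array hostname
REM and points at the private SSH key.

REM -batch disables interactive prompts so the script never blocks unattended.
"C:\Program Files\PuTTY\plink.exe" -batch -load "nimble-array" halt --array
```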
Great question, and thank you Adam and Eddie for a great response. I'd like to add one thing: since you have a few moments of hang time when the UPS kicks in, please consider triggering a snapshot of individual critical volumes or whole volume collections at that time, so that you have a known point in time to return to when you bring things back online. This can certainly be part of the same script Adam describes, running just prior to the halt array event, and only needs the name of the volume or collection(s) to snap. The two methods are detailed below.
volcoll --snap: Create a snapshot of the volumes associated with the specified volume collection. Created snapshots are consistent with each other, and a snapshot collection is created with those snapshots as members. If the volume collection is application synchronized (VMware, VSS, or Oracle), the volume snapshots are synchronized with the application too. However, if application synchronization is disabled for all the protection schedules in the collection, the snapshot will not be taken and an error is returned. The --snap option takes the following sub-options.
- Name of snapshot collection to create.
- Description of snapshot collection. If the description includes spaces, enclose the description in quotation marks.
- Start with snapshot set online. Default setting is offline.
- Allow applications to write to created snapshot(s). Default setting is to disallow.
vol --snap volname: Snapshot a volume. The --snap option takes the following sub-options.
- --snapname name: Name of snapshot to create.
- --description text: Description of snapshot. If the description includes spaces, enclose the description in quotation marks.
- --start_online: Set the snapshot online after creation. Default setting is offline.
- --allow_writes: Allow applications to write to snapshot. Default setting is to disallow.
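Putting the pieces together, the snapshot-then-halt sequence could be sketched as the commands below, run on the array CLI just before the halt. All names here are illustrative placeholders (the collection "critical-vols", the volume "sql-data", the snapshot names and description), and --snapcoll_name is my best-guess flag for the "name of snapshot collection" sub-option excerpted above; verify the exact flag against your array's volcoll help output before relying on it.

```shell
# Hypothetical sketch: take a known-good snapshot before halting. Names are placeholders.

# Snapshot a whole volume collection (member snapshots are consistent with each other):
volcoll --snap critical-vols --snapcoll_name ups-powerfail --description "UPS power-fail snapshot"

# ...or snapshot a single critical volume:
vol --snap sql-data --snapname ups-powerfail --description "UPS power-fail snapshot"

# Then halt the array (remember: power-on afterwards is a manual button press):
halt --array
```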
Kindest regards team,
A similar thread was opened a while back. Devin's response regarding the snapshots is a great idea. I would, however, hold off on shutting down the array via an automated script. If you shut down the array, you will have to power it on manually. As our VP of Support so eloquently pointed out:
1) If both controllers are powered on and power loss occurs: when power is restored, the controllers will power on and services will recover.
2) If the array halt (shutdown) command is given, the controllers will power off. In this scenario, the power button must be pressed to resume operation in all cases.
Basically, what I would do is take Devin Hamilton's suggestion of kicking off a snapshot at that point in time for data recovery, if necessary, but do not try to automate the shutdown of the array unless you have the capability to power it back on manually (a good remote-controlled robot could do this, though).
I hope that is of some assistance. Here is a link to the other thread too: Re: Any recommendations on an IP-based PDU that could remotely start the array after shutdown?
Thanks for the info. For our current environment, the rest of the infrastructure will most likely require manual intervention to power up. Physically powering on the Nimble will be a part of the process.
From what I understand, there is "no harm" in the Nimble array being powered off. If the connected systems are idle, the Nimble will come up clean and ready to go. The idea behind a proper shutdown is to ensure that when the system is powered up, there will not be a need for a RAID set rebuild, file system check, or any related background processes. A clean start-up, ready-to-go array state is what we want. If that is achievable with a power failure and no active data transfer, that should be fine.
The snapshot upon power failure sounds good, I will see what we can do with our vSphere/APC agent to make that happen.