Upgrade Tips and FAQs

If you used the `system upgrade` or `system upgrade local` command to upgrade a node that is joined to a NAS cluster, old versions of some daemons will be running, preventing new versions from starting and, in some cases, preventing config files from being properly updated.
- To prevent this situation, upgrade the node by using the `nascluster upgrade` or `nascluster upgrade local` command on the master node, as shown in the sketch after this list. See Appliance Controller Upgrades.
- To resolve this situation, reboot the node to repair the configuration. This removes the errant daemons and updates the config files for the new configuration.
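
For reference, here is a minimal sketch of the preventive path, run from the Appliance Controller prompt on the master node (the `>` prompt is illustrative):

```
# Upgrade every node in the NAS cluster from the master node:
> nascluster upgrade

# Or upgrade only the node you are logged in to:
> nascluster upgrade local
```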

During an appliance upgrade, the StorNext file systems are unmounted and re-mounted. If the appliance node is a NAS cluster node, the file system might be busy even if the node is otherwise idle, and therefore the appliance upgrade can fail. If you encounter this situation, remove the node from the cluster before performing the upgrade, and then re-join the node to the cluster after the upgrade. See Remove a Node from a NAS Cluster and Step 3: Join the Nodes to the NAS Cluster in the Configure Scale-Out NAS Clusters section of the NAS Cluster Configuration page of this doc center.
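
As a rough sketch of that workaround, the sequence looks like the following. The command syntax and IP addresses here are illustrative assumptions; follow the linked pages for the exact procedure:

```
# On the master node, remove the node you are about to upgrade
# (hypothetical node address; see Remove a Node from a NAS Cluster):
> nascluster remove 10.65.191.202

# Run the appliance upgrade on that node, then re-join it from that
# node's Controller prompt (hypothetical master address; see Step 3:
# Join the Nodes to the NAS Cluster):
> nascluster join 10.65.191.201
```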

If you used the `nascluster reset factory-defaults` command to address an Appliance Controller upgrade issue, subsequent NAS cluster creation might fail with an error similar to the following:

Validation failure: Share path /stornext/fs1/smb_share1 is already in use by NAS cluster ID 0 (E-2002)

You will see this error if you re-create a cluster that was originally created with an earlier version of the Appliance Controller. To resolve this issue, repair the cluster ID on the shares and then re-create the cluster:

- From the root prompt on the master node, issue the following command to remove the stale cluster ID attribute from each SMB share path:

      for i in $(su sysadmin -c "share show smb" | awk '$4=="smb"{print $6}'); do if [[ $i = /* ]]; then setfattr -x user.SNNAS_CLUSTERID $i; fi; done

- After executing the command for each share, re-create the NAS cluster. See Create the NAS Cluster.
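
Before re-creating the cluster, you can optionally confirm that the attribute is gone from a share path. This check uses the standard Linux `getfattr` utility against the example path from the error message above:

```
# Reports "No such attribute" once the repair command has run:
getfattr -n user.SNNAS_CLUSTERID /stornext/fs1/smb_share1
```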
If you delete the NAS cluster again later, you do not need to repeat these steps to re-create it.
