Upgrading NAS to 1.3.0
-You must upgrade to 1.2.5 before going to 1.3.0.
-With a single node (unclustered) you can use the YUM repo to upgrade from 1.2.3 to 1.2.5 with the NAS Shell command ‘system upgrade’. This steps the system to 1.2.5; once that finishes, run the same command again to upgrade to 1.3.0 (a rough sketch of the sequence is shown just below). If you do the upgrade manually, you can run into issues if you don’t step the code correctly. See the next bullet.
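For reference, a minimal sketch of the repo-based two-step path (this assumes the node can reach the YUM repo; run each command from the NAS Shell and let the first finish before starting the second):
system upgrade    <== steps 1.2.3 to 1.2.5
system upgrade    <== run again after the first pass completes; steps 1.2.5 to 1.3.0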
-The version of NAS you’re running dictates which node in the cluster you should upgrade first.
To upgrade from NAS 1.2.5 -> NAS 1.3.0, upgrade the MASTER first, since it will force all the slaves to leave the cluster.
To upgrade from NAS 1.X.X -> NAS 1.2.5, upgrade the SLAVES first and finish with the master, since the older code does not have the logic to force the slaves to leave the cluster.
Previously we were told there would be no easy access to the 1.3.0 rpms; that’s not true. You can obtain the 1.3.0 code from csweb. If you upgrade from 1.2.3 to 1.3.0 from local .rpms copied over to /var/upgrade, the upgrade will be incomplete. The upgrade code is smart enough to realize that quantum-snfs-nas 1.2.3 to 1.3.0 is an unsupported step, however the sernet-samba packages will still upgrade. This leaves the NAS code behind while upgrading the Samba portion of the NAS stack.
#These RPMs get upgraded
[root@downm440 ~]# rpm -qa | grep sernet
sernet-samba-common-4.1.20-11.el6.x86_64
sernet-samba-client-4.1.20-11.el6.x86_64
sernet-samba-winbind-4.1.20-11.el6.x86_64
sernet-samba-libsmbclient0-4.1.20-11.el6.x86_64
sernet-samba-4.1.20-11.el6.x86_64
sernet-samba-libs-4.1.20-11.el6.x86_64
#This one does not
[root@downm440 ~]# rpm -qa | grep nas
quantum-snfs-nas-1.2.3-5580.el6.x86_64
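One quick way to spot this partial-upgrade state (just standard rpm and egrep, not a NAS-specific command) is to list both package sets in one pass and confirm the quantum-snfs-nas and sernet-samba versions line up:
[root@downm440 ~]# rpm -qa | egrep 'quantum-snfs-nas|sernet-samba'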
You’ll notice that the NAS package from csweb, starting with the 1.2.5 code, isn’t a single RPM. There are additional .rpms at that level, so you’ll download a .tgz file, which needs to be extracted into the /var/upgrade directory before running ‘system upgrade local’. So do something like this: ‘tar xvf quantum-snfs-nas-1.2.5-5580.el7.centos.x86_64.tar.gz -C /var/upgrade’.
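Before running ‘system upgrade local’, a quick sanity check (optional, but it confirms the extraction landed where the upgrade code expects it):
# ls /var/upgrade    <== you should see the individual .rpms from the .tgz here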
-Be careful and patient when doing an upgrade on a cluster. It’s typical for the controller to be stopped and for commands to return a response indicating the controller is down.
You don’t want to upgrade the other node until you get a response showing that both nodes have rejoined the cluster.
-If you do get a status from the nodes other than ‘joined’, or the controller still shows down, you’ll want to review the /var/log/snnascontroller logging. Also, it’s typical to see a /UPGRADING file on the system if the upgrade is hung. You can also learn more about the status of the controller by running the following command (CentOS 7).
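The command in question is presumably the standard systemd status query against the snnas_controller service; its output (which is where the controller and webserver PIDs mentioned below came from) will differ per system:
# systemctl status snnas_controller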
It’s important to note whether the controller is running and for how long (PID 7256 in the example status output) and that the webserver is running (PID 7264). If you need to restart the controller, here is how on each OS we support.
# initctl restart snnas_controller <== CentOS 6
# systemctl restart snnas_controller <== CentOS 7
Also, the CentOS 6 command to query the snnas_controller service is much less verbose.
[root@cx-node2 ~]# initctl status snnas_controller
snnas_controller start/running, process 27102
A feature worth mentioning in the 1.3.0 code is the improved ‘system show smb’ command. We can now see the SMB protocol version that clients connect at.
When you upgrade to 5.4.0.1 you will be upgraded to NAS 1.3.0 unless you are already configured in a NAS cluster. In the case of a NAS cluster you will be updated to NAS 1.2.5, and then you will need to manually upgrade to NAS 1.3.0 using ‘system upgrade local’.