After an initial upgrade or install of StorNext 6 (or later), you may wish to switch from the default cluster to a named cluster.
File Systems (or more specifically File System Managers - FSMs) run in a specific cluster. Clients mount file systems, so a mount is also associated with a specific cluster.
In preparation for switching to a named cluster, all clients should unmount the file systems, FSMs should be stopped, and services should be stopped on the MDCs and coordinator nodes. It is common to run MDCs as coordinators, but this section uses external coordinator nodes as examples.
The following example illustrates how to switch to cluster "cluster1", leaving the administrative domain as the default setting. On the coordinator nodes, update the fsnameservers file and create the fsmcluster file in /usr/cvfs/config.
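As a sketch, the two files on a coordinator node might look like the following. The coordinator addresses (10.0.0.10 and 10.0.0.11) are hypothetical, and the default_cluster keyword shown for fsmcluster is an assumption; verify the exact syntax against the fsnameservers and fsmcluster man pages shipped with your StorNext release.

```
# /usr/cvfs/config/fsnameservers  (hypothetical coordinator addresses)
10.0.0.10
10.0.0.11

# /usr/cvfs/config/fsmcluster  (keyword syntax assumed; check the man page)
default_cluster cluster1
```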
Start the services on the coordinator nodes. On the MDC, with services still stopped, make the same changes to the fsnameservers and fsmcluster files. Do not change the fsmlist file. In /etc/fstab, change the entry from:

snfs1 /stornext/snfs1 cvfs rw 0 0

to:

snfs1@cluster1 /stornext/snfs1 cvfs rw 0 0
Start the services on the MDC.
On Linux client nodes, make the changes to the fsmcluster and fsnameservers files and to /etc/fstab, then start the services. Creating the fsmcluster file is optional on client nodes.
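Taken together, a Linux client's configuration for this example might look like the sketch below. The coordinator addresses are hypothetical, the optional fsmcluster file is shown for completeness, and its default_cluster keyword is an assumption to be checked against the man page.

```
# /usr/cvfs/config/fsnameservers  (same coordinators as the MDC; hypothetical addresses)
10.0.0.10
10.0.0.11

# /usr/cvfs/config/fsmcluster  (optional on clients; keyword syntax assumed)
default_cluster cluster1

# /etc/fstab entry using the cluster-qualified file system name
snfs1@cluster1 /stornext/snfs1 cvfs rw 0 0
```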
For Windows clients, changes to the fsnameservers file are made with the client configuration GUI. Then start the services; the file systems appear with cluster information and can be mounted. The fsmcluster file can also be created on Windows clients but, as on Linux clients, it is optional.