Replace an MDC in a non-HA environment (backup/restore method)
In the event that the metadata archive is damaged, an MDC can be migrated to another system. If you need further assistance, contact Quantum support.
1. While not required, it is recommended that all managed file systems be unmounted on all clients and quiesced, to help eliminate I/O errors on the clients due to the StorNext MDC being down.

2. Create a full backup using the snbackup command. Take note of the backup ID; it is needed in Step 3:

   # snbackup

   If necessary, the snbkpreport command can be used to determine the latest backup ID:

   # snbkpreport
3. Copy the backup files and manifests off the system onto a network or external file system. For the purposes of this procedure, assume that $DESTDIR is the location of an NFS share at /net/share/migration, and that $ID is the backup ID from the full backup in Step 2. Use the showsysparm command to identify the mount point of the StorNext backup file system. There is one meta.$FSNAME.$ID.tgz file for each managed file system, where $FSNAME is the name of the managed file system.
# showsysparm BACKUPFS
# mkdir $DESTDIR/snbackup
# cp $BACKUPFS/.ADIC_INTERNAL_BACKUP/conf.$ID.*.tgz $DESTDIR/snbackup
# cp $BACKUPFS/.ADIC_INTERNAL_BACKUP/db.$ID.*.tgz $DESTDIR/snbackup
# cp $BACKUPFS/.ADIC_INTERNAL_BACKUP/meta.*.$ID.*.tgz $DESTDIR/snbackup
# cp /usr/adic/TSM/internal/status_dir/snbackup_manifest $DESTDIR
# cp /usr/adic/TSM/internal/status_dir/device_manifest $DESTDIR
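The copy commands above can be sketched as a single loop. The sketch below is a dry run (each command is echoed rather than executed), and every value is a placeholder: a real run uses the mount point reported by showsysparm and the ID reported by snbackup.

```shell
# Dry-run sketch of Step 3; all paths and the ID are hypothetical placeholders.
BACKUPFS=/stornext/snfs1          # assumed backup file system (from showsysparm BACKUPFS)
DESTDIR=/net/share/migration      # assumed NFS share
ID=1234                           # assumed backup ID from snbackup

echo mkdir "$DESTDIR/snbackup"
for pattern in "conf.$ID.*.tgz" "db.$ID.*.tgz" "meta.*.$ID.*.tgz"; do
  echo cp "$BACKUPFS/.ADIC_INTERNAL_BACKUP/$pattern" "$DESTDIR/snbackup"
done
echo cp /usr/adic/TSM/internal/status_dir/snbackup_manifest "$DESTDIR"
echo cp /usr/adic/TSM/internal/status_dir/device_manifest "$DESTDIR"
```

Removing the echo in front of each command (and substituting real values) turns the dry run into the actual copy.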
4. Copy /etc/fstab to the external file system (for later reference):

   # cp /etc/fstab $DESTDIR
5. Remove StorNext with the -remove option to preserve log files:

   # install.stornext -remove
6. Create a tar archive of the preserved files for later reference:

   # tar -zcvhf $DESTDIR/preserved_logs.tar.gz /usr/adic

7. (Optional) At this point, StorNext has been removed from the original MDC and the necessary state has been copied to a safe location; the operating system can now safely be reinstalled if desired.
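Before reinstalling anything, it may be worth confirming that the preserved-logs archive is readable. A minimal sketch, using tar's -t (list) mode on a temporary stand-in archive, since /usr/adic exists only on an MDC:

```shell
# Hypothetical sanity check: list the archive contents to confirm it is readable.
# A temporary stand-in archive is built here in place of preserved_logs.tar.gz.
tmpdir=$(mktemp -d)
mkdir "$tmpdir/usr_adic"
echo "sample log" > "$tmpdir/usr_adic/snlog.txt"
tar -zcf "$tmpdir/preserved_logs.tar.gz" -C "$tmpdir" usr_adic
listing=$(tar -ztf "$tmpdir/preserved_logs.tar.gz")   # list without extracting
echo "$listing"
rm -rf "$tmpdir"
```

Against the real archive, the equivalent check is simply `tar -ztf $DESTDIR/preserved_logs.tar.gz`.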
8. Install StorNext on the new destination using the identical version that the original MDC was running:

   # install.stornext
9. Do a full restore of the snbackup. You will be prompted for the backup ID, which can be found in the snbackup_manifest file or derived from the db.$ID.0.tgz file name, where $ID is the backup ID.

   Note: The snrestore command will generate error messages that it is unable to read the manifest files. This is expected and can be safely ignored.

   # snrestore -r $DESTDIR/snbackup
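Deriving the ID from the tarball name can be done with shell parameter expansion; the filename below is a made-up example following the db.$ID.0.tgz pattern described above.

```shell
# Hypothetical example filename; real names follow the db.$ID.<seq>.tgz pattern.
fname="db.1234.0.tgz"
id="${fname#db.}"    # strip the leading "db."            -> "1234.0.tgz"
id="${id%%.*}"       # keep everything before the first dot -> "1234"
echo "$id"
```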
10. Modify local configuration files as necessary:

    - If the IP address of the MDC has changed, the fsnameservers file may need to be updated: /usr/cvfs/config/fsnameservers

    - If the motherboard or network card(s) have changed, a new license file may be required from Quantum: /usr/cvfs/config/license.dat

    - The mount points and fstab entries may need to be recreated. Refer to the /etc/fstab file copied to $DESTDIR in Step 4.
11. Restart StorNext with the following commands:

    # service cvfs stop
    # service cvfs start
    # service stornext_web start
12. Synchronize each managed file system with the database using the fspostrestore command, where $FS_MNT_PT is the mount point of the managed file system and <YYYY:MM:DD:hh:mm:ss> is a time from just before the backup was created. Use the snbkpreport command to determine when the backup was created:

    # fspostrestore -s <YYYY:MM:DD:hh:mm:ss> $FS_MNT_PT
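When more than one managed file system exists, the same command is run once per mount point. A dry-run sketch (commands are echoed, not executed), with assumed mount points and an assumed backup time:

```shell
# All values are placeholders; a real run uses the time reported by snbkpreport.
BACKUP_TIME="2024:01:15:03:00:00"                      # assumed time just before the backup
for FS_MNT_PT in /stornext/snfs1 /stornext/snfs2; do   # assumed managed file systems
  echo fspostrestore -s "$BACKUP_TIME" "$FS_MNT_PT"    # echo makes this a dry run
done
```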
13. Verify that backups are working correctly by running a full backup:

    # snbackup

14. Verify that StorNext is functioning by storing and retrieving files.
15. StorNext should now be up and running, and safe for clients to start accessing. However, a new mapping file or new event files may need to be generated for each file system. This happens as part of the rebuild policy, which by default runs once a week; it may also be scheduled to run at a later time, or run manually:

    # fspolicy -b -y /path/to/stornext/mount/point