Replication / Deduplication Removal Procedures
StorNext replication/deduplication provides a tightly coupled set of services for StorNext file systems. Data deduplication removes duplicated data by identifying duplicated data segments in files and storing only the unique data segments in a blockpool repository. Reference pointers to the unique data segments are stored in files for data retrieval. Replication copies source directories to one or more target directories, either through scheduled policy or on demand. The content of replicated files can be either raw file data (non-deduplicated) or deduplicated file data. Deduplication provides data reduction, while replication provides data protection. User-defined policy controls the replication/deduplication behavior.
Although the GUI provides support to set up replication/deduplication, it does not support the decommissioning or removal of replication/deduplication. This section aims to provide the necessary procedures to remove the replication/deduplication configuration and return the system to the original configuration without replication/deduplication.
This section is aimed at audiences who have advanced knowledge and expertise in StorNext replication/deduplication and are responsible for the configuration and administration of StorNext file systems, including replication/deduplication.
- It is assumed StorNext software 4.1.x or later is installed. The information only applies to those releases.
- It is assumed that full removal of replication/deduplication configuration will be performed. It is not intended for partial removal that leaves some file system’s replication/deduplication configuration intact.
This section describes the detailed procedures to remove a replication/deduplication configuration and restore StorNext file system to the original configuration. Examples are used to demonstrate how to perform each step.
The first step for the removal of replication/deduplication is to understand your current replication/deduplication configurations. The configuration information includes:
- Which file systems are snpolicy-managed file systems and where are they mounted?
- What replication/deduplication policies have been defined? Is a policy defined for deduplication only, replication only, or replication with deduplication?
- Does the snpolicy-managed file system work as a replication source site, a target site, or both? Where are the target sites defined in the replication policies on the source site?
- Which directories are source directories for the source site file systems, and which directories are realized namespaces for the target site file systems? Is there a TSM relation point associated with the source directory or target namespace?
Note: On a target, the realized namespace must land under a TSM relation point.
- Where is the blockpool repository located? Is the file system where the blockpool repository resides only used for the blockpool?
Such information can be acquired either through the GUI or command line. Start from the replication source side host; obtain the list of target site host(s); then collect the configuration information on all targets. For deduplication-only configuration, there are no target hosts involved.
- From the GUI “Configuration->File Systems”, the StorNext file systems are listed with their mount points if mounted. Select a file system and click “Edit”; if “Replication/Deduplication” is checked, then the file system is an snpolicy-managed file system.
- From the GUI “Configuration->Storage Policies->Replication/Deduplication”, policies for all snpolicy-managed file systems are displayed. Select a policy and click “View”. Determine whether deduplication is “on” or replication is “on”. If replication is “on” and “Outbound replication” is also “on”, this is a policy defined for source site replication. You can also find the associated directories (the source replication directories), as well as the target location and the directories it populates.
- From the GUI “Configuration->Destinations->Replication Targets”, you’ll find all defined replication targets (the host and the directory to be replicated into).
- From the GUI “Configuration->Storage Policies->Storage Manager”, you’ll find all TSM policy classes. Select a class and click “View”; you’ll find the directories associated with the class. If the directory is also an snpolicy-managed directory or is the parent of an snpolicy-managed directory, then the directory has both an snpolicy policy and a TSM relation point associated.
- From the GUI “Configuration->Destinations->Deduplication”, the blockpool file system is displayed. Normally the blockpool repository is in a sub-directory “blockpool” of the mount point of the blockpool file system.
- Obtain snpolicy-managed file systems:
# /usr/cvfs/bin/snpolicy -listfilesystems=localhost
fsname: snfs1 [replication dedup] up 110:49:07
mount: /stornext/snfs1
blockpool: Running up 110:49:07
- Obtain policy information:
# /usr/cvfs/bin/snpolicy -listpolicies=mnt_path
For example:
# /usr/cvfs/bin/snpolicy -listpolicies=/stornext/snfs1
NAME: default
NAME: global inherits from: default
NAME: target inherits from: global
NAME: rep_pol1 inherits from: global
DIR: /stornext/snfs1/test (key: 371660016)
active: dedup inherits from: rep_pol1
DIR: /stornext/snfs1/test1 (key: 371660016)
active: dedup rep inherits from: rep_pol1
The above output indicates that there are two snpolicy-managed directories: directory /stornext/snfs1/test has a deduplication-only policy associated, while directory /stornext/snfs1/test1 has a policy with both replication and deduplication configured.
- View the policy configuration:
# /usr/cvfs/bin/snpolicy -dumppolicy=mnt_path -name=policy_name
For example:
# /usr/cvfs/bin/snpolicy -dumppolicy=/stornext/snfs1 -name=rep_pol1
name=rep1
inherit=global
dedup=on
dedup_filter=off
max_seg_size=1G
max_seg_age=5m
dedup_age=1m
dedup_min_size=4K
dedup_seg_size=1G
dedup_min_round=8M
dedup_max_round=256M
dedup_bfst="localhost"
fencepost_gap=16M
trunc=off
trunc_age=365d
trunc_min_size=4K
trunc_low_water=0
trunc_high_water=0
rep_output=true
rep_dedup=true
rep_report=true
rep_target="target://stornext/tgt1@10.65.189.39:"
rep_inline_size=4K
From the output, it can be seen that this is a replication source policy. It has rep_dedup=true, so deduplication is enabled. It also has rep_output=true, so this is a replication source policy; the replication target is host 10.65.189.39 (rep_target), and the namespace will be realized under /stornext/tgt1 on the target. As a result, the associated directory (/stornext/snfs1/test1) is a source replication directory.
If rep_input=true, the policy is a target policy. Normally this is configured on policy “target”. A host that has a policy (typically policy “target”) configured with rep_input turned on is a target host.
- Check if a directory is associated with a TSM relation point:
# /usr/adic/TSM/bin/fsdirclass path
This will show the TSM policy class if the directory is associated with a TSM relation point.
The blockpool repository can be found in file /usr/cvfs/config/blockpool_root:
# cat /usr/cvfs/config/blockpool_root
BFST_ROOT=/stornext/snfs1/blockpool/
CURRENT_SETTINGS=_stornext1TB
BFST_ROOT points to the blockpool repository; in this case, the blockpool is located at /stornext/snfs1/blockpool.
From the previous section, Obtain Information from the Command Line, you obtained the replication/deduplication configuration on source host and target host(s). Now you start the removal on the target hosts. There are 10 steps described below. Follow these steps to remove replication/deduplication on target host(s).
Note: If there are only deduplication policies and no replication policy is configured, you should skip this section and jump to section Replication Removal on a Source Host.
STEP 1: Backup Replication/Deduplication Configurations
In case you need to reuse the current replication/deduplication configuration in the future, it is recommended to back it up first. Run:
# /usr/cvfs/bin/snpolicy_gather &>snpolicy_dump
The configuration information is saved to file snpolicy_dump.
STEP 2: Suspend Replication/Deduplication Activities
The next step is to suspend replication/deduplication activities so that the snpolicy daemon becomes idle. Run the following commands, where mnt_path is the mount path of the snpolicy-managed file system, to suspend potential ingest, replication, truncate, blockpool delete, and compact processing:
# /usr/cvfs/bin/snpolicy -runingest=mnt_path -suspend
# /usr/cvfs/bin/snpolicy -runreplicate=mnt_path -suspend
# /usr/cvfs/bin/snpolicy -runtruncate=mnt_path -suspend
# /usr/cvfs/bin/snpolicy -rundelete=mnt_path -suspend
# /usr/cvfs/bin/snpolicy -compact=mnt_path -suspend
In addition, disable inbound replication by turning off rep_input on policy “target”:
# /usr/cvfs/bin/snpolicy -updatepolicy=mnt_path -name=target -policy='rep_input=false'
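The suspend commands above repeat one pattern per processing phase, so if you have several snpolicy-managed file systems a small loop avoids retyping them for each mount point. This is a hedged sketch, not part of the product: the snpolicy path and single-dash option style follow the commands above, and the SNPOLICY override exists only so the loop can be rehearsed harmlessly.

```shell
# Sketch: suspend every snpolicy processing phase for one mount point.
# SNPOLICY may be overridden (e.g. SNPOLICY=echo) to rehearse the loop
# without touching a live system; the default matches the commands above.
SNPOLICY="${SNPOLICY:-/usr/cvfs/bin/snpolicy}"

suspend_snpolicy_activities() {
    mnt_path="$1"
    for phase in runingest runreplicate runtruncate rundelete compact; do
        "$SNPOLICY" "-${phase}=${mnt_path}" -suspend
    done
}

# Example: suspend activities on each snpolicy-managed file system.
# for m in /stornext/snfs1 /stornext/snfs2; do
#     suspend_snpolicy_activities "$m"
# done
```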
STEP 3: Namespace and Private Directory Removal
In this step, we’ll find the target keys of all realized and unrealized (due to some error) namespaces, identify whether a realized namespace lands under a TSM relation point, then remove all realized namespaces and clean up their content in the snpolicyd-managed private directory.
STEP 4: Stop Snpolicy Daemon and Blockpool Server
In this step, the snpolicy daemon and blockpool server need to be stopped. Run the following commands:
# /usr/cvfs/bin/cvadmin -e "stopd snpolicyd"
# /usr/cvfs/bin/bp_stop
Check whether the “snpolicyd” and “blockpool” processes have been stopped, for example by inspecting the process list.
STEP 5: Remove Snpolicy-managed Private Directories
For each snpolicy-managed file system, a private directory is created to store replication/deduplication related information. For a realized namespace directory that lands under a TSM relation point, a private directory is also created under the relation point. Run the following command to remove the private directory of each file system:
# /bin/rm -rf mnt_path/.rep_private
For each relation point that has a realized namespace under it, run:
# /bin/rm -rf relation_point_path/.rep_private
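Both removals follow the same shape, so they can be collected into one helper. This is a hedged sketch; the argument list is whatever mount points and relation points your configuration survey produced, and the paths in the example are placeholders:

```shell
# Sketch: remove the .rep_private directory under each given base path
# (mount points and, on a target, relation points with realized namespaces).
remove_private_dirs() {
    for base in "$@"; do
        rm -rf "${base}/.rep_private"
    done
}

# Example (placeholder paths):
# remove_private_dirs /stornext/snfs1 /stornext/snfs1/relation_point
```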
STEP 6: Remove Replication History Logs
Note: Skip this step if you want to retain the Replication History log files.
Run the following commands, where fsname is the name of the snpolicy-managed file system:
# /bin/rm -rf /usr/cvfs/data/fsname/rep_reports/*
# /bin/rm -rf /usr/cvfs/data/fsname/policy_history
STEP 7: Remove Related Event Files
The snpolicy-managed event files are located under /usr/adic/TSM/internal/event_dir. Run the following commands to remove any existing snpolicy-managed event files:
# /bin/rm -f /usr/adic/TSM/internal/event_dir/*.blocklet
# /bin/rm -f /usr/adic/TSM/internal/event_dir/*.blocklet_delete
# /bin/rm -f /usr/adic/TSM/internal/event_dir/*.blocklet_truncate
# /bin/rm -f /usr/adic/TSM/internal/event_dir/*.replicate
# /bin/rm -f /usr/adic/TSM/internal/event_dir/*.replicate_src
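The five removals differ only in the file suffix, so they can be expressed as one loop. A hedged sketch: the suffix list is exactly the one above, and the EVENT_DIR override exists only so the loop can be rehearsed outside /usr/adic:

```shell
# Sketch: remove snpolicy-managed event files by suffix.
# EVENT_DIR defaults to the real event directory from the step above.
EVENT_DIR="${EVENT_DIR:-/usr/adic/TSM/internal/event_dir}"

remove_event_files() {
    for suffix in blocklet blocklet_delete blocklet_truncate replicate replicate_src; do
        # rm -f ignores patterns that match nothing
        rm -f "${EVENT_DIR}"/*."${suffix}"
    done
}
```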
STEP 8: Remove Blockpool and its Configurations
This step removes the blockpool repository and its configuration files. As mentioned in section Obtain Information from the Command Line, the blockpool repository path can be found in file /usr/cvfs/config/blockpool_root. Run the following commands:
# /bin/rm -rf blockpool_repository_path
# /bin/rm -f /usr/cvfs/config/blockpool_root
# /bin/rm -f /usr/cvfs/config/blockpool_config.txt
If the file system where the blockpool repository resides was used only for the blockpool, you may now use it for other purposes, or unmount it and remove the file system to reuse its disks in other file systems.
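Rather than retyping the repository path, it can be parsed out of blockpool_root before deletion. A sketch under the assumption that the file keeps the BFST_ROOT=path format shown earlier (the deletion itself is destructive and belongs only after the preceding steps):

```shell
# Sketch: extract BFST_ROOT from a blockpool_root file so the repository
# path used for deletion comes from the configuration itself.
blockpool_repository_path() {
    sed -n 's/^BFST_ROOT=//p' "$1"
}

# Example (destructive; paths as in the step above):
# repo=$(blockpool_repository_path /usr/cvfs/config/blockpool_root)
# [ -n "$repo" ] && rm -rf "$repo"
# rm -f /usr/cvfs/config/blockpool_root /usr/cvfs/config/blockpool_config.txt
```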
STEP 9: Turn off the Snpolicy-managed Attribute in File System Configuration File
This step turns off the “snpolicy-managed” attribute of the currently snpolicy-managed file systems. Perform the following for each snpolicy-managed file system:
STEP 10: Restart the StorNext GUI
The StorNext GUI service needs to be restarted in order to view the changed configurations. Run:
# /sbin/service stornext_web restart
The replication removal on the source host is very similar to that on a target host. For simplicity, refer to the corresponding step in a previous section if a step is the same as mentioned before.
On a replication source host, snpolicy-managed directories have a policy assigned directly to them. The policy can be deduplication only, replication only, or replication with deduplication. For directories that have deduplication enabled, file content may have been truncated by the snpolicy daemon. The snpolicy removepolicy command will retrieve the truncated content before removing the policy from a file if no TSM relation point is associated with it.
The removal of replication/deduplication on the source host has 11 steps. It is assumed you have already collected the replication/deduplication configuration as described in section Collect and Understand Replication/Deduplication Configurations.
STEP 1: Backup Replication/Deduplication Configurations
This saves the replication/deduplication configurations on the source host. See Replication Removal on a Target Host on how to back up.
STEP 2: Suspend Replication/Deduplication Activities
This is similar to Replication Removal on a Target Host, except that there is no need to change policy “target”.
Note: If you have multiple snpolicy-managed file systems, you must stop replication/deduplication activities for each file system.
STEP 3: Remove Policy from Snpolicy-managed Directories
For each snpolicy-managed directory obtained from section Obtain Information from the Command Line (from the output of snpolicy command -listpolicies), run:
# /usr/cvfs/bin/snpolicy -removepolicy=dir_path
For example:
# /usr/cvfs/bin/snpolicy -removepolicy=/stornext/snfs1/test
I [0126 09:48:06.955781 28768] Removed policy from /stornext/snfs1/test
Note: If you have multiple snpolicy-managed file systems, run this command for every snpolicy-managed directory on each snpolicy-managed file system. After the associated policies have been removed from all snpolicy-managed directories, run snpolicy -listpolicies again to verify that only the policies, and no directories, are listed:
[root@ylu-rep-src1 ylu]$ snpolicy -listpolicies=/stornext/snfs1
NAME: default
NAME: global inherits from: default
NAME: target inherits from: global
NAME: rep_pol1 inherits from: global
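When many directories carry policies, the removepolicy calls can be looped. A hedged sketch: the directory list is whatever your -listpolicies survey reported (the example paths are placeholders), and the SNPOLICY override exists only for dry runs:

```shell
# Sketch: run removepolicy for each snpolicy-managed directory.
# SNPOLICY may be overridden (e.g. SNPOLICY=echo) for a dry run.
SNPOLICY="${SNPOLICY:-/usr/cvfs/bin/snpolicy}"

remove_policies() {
    for dir in "$@"; do
        "$SNPOLICY" "-removepolicy=${dir}"
    done
}

# Example (placeholder paths from the -listpolicies output):
# remove_policies /stornext/snfs1/test /stornext/snfs1/test1
```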
STEP 4: Remove Replication Targets from StorNext GUI
In the StorNext GUI, click Configuration > Storage Destinations > Replication Targets, and delete all replication targets defined there.
STEP 5: Stop Snpolicy Daemon and Blockpool Server
Stop the snpolicy daemon and blockpool server on the source host. See Replication Removal on a Target Host for details.
STEP 6: Remove Snpolicy-managed Private Directories
This step removes the snpolicy-managed private directories. See Remove Snpolicy-managed Private Directories for details.
Note: On the source host, no private directory is created under a TSM relation point, so it is not necessary to remove a private directory under a TSM relation point as shown in Remove Snpolicy-managed Private Directories. Also, the private directories for all snpolicy-managed file systems must be removed.
STEP 7: Remove Replication History Logs
See Remove Replication History Logs for details.
STEP 8: Remove Related Event Files
See Remove Related Event Files for details.
STEP 9: Remove Blockpool and its Configurations
See Remove Blockpool and its Configurations for details.
STEP 10: Turn off “Snpolicy-managed” Attribute in File System Configuration File
See Turn off the Snpolicy-managed Attribute in File System Configuration File for details.
STEP 11: Restart the StorNext GUI
See Restart the StorNext GUI for details.