Best Practices for File Replication

FlexSync is a feature that utilizes the advanced metadata capabilities within StorNext to generate replicas of an entire file system, or just a portion of a file system hierarchy. These metadata capabilities allow FlexSync to identify and copy only changed and new files without needing to scan the file system, dramatically shortening the time and resources needed to create a file replica.
All enterprises need to protect their digital assets against data loss. Accepted best practices typically recommend creating remote or local copies (replicas) of important unstructured data, so that it can be easily recovered if it has been damaged or deleted. There are many ways to design, implement, and optimize replication solutions. The correct approach for your use case must consider these factors:
- Importance and criticality of data.
- How quickly lost or damaged data needs to be restored.
- How often replication tasks can be performed, and the potential impact on the overall environment and production workloads.
This topic provides guidance for Quantum and partner solutions architects and professional services teams in the design and implementation of a StorNext-FlexSync solution to ensure end-user customers get the most out of their investment. By design, FlexSync software is enhanced and released separately from StorNext, so that new functionality and fixes can be delivered quickly. As improvements are made to FlexSync, this topic is updated with the latest insights for implementing a simple and efficient FlexSync solution.
Note: This topic assumes you have a solid understanding of StorNext data management software, as well as basic networking and SAN experience. The topic provides key recommendations and useful information for configuring a FlexSync solution plus recommendations and performance-tuning considerations.


Whether ACL and metadata information is preserved depends on how the application stores it, and is determined on a case-by-case basis (for example, StorNext metadata in the file system versus user-defined metadata within an application). If the information is saved as encoding within the file, then it is preserved. A file system ACL is not replicated because it differs from a StorNext file system ACL, but the files themselves are replicated without restriction.
Note: If StorNext is installed on both the source mount and the destination mount, the ACL is preserved. If one of the mounts does not have StorNext installed on it, then the ACL is not preserved.
Note: ACL data is not saved when you perform a FlexSync version 3.2.1 S3 object-based synchronization process.

If you enable named streams on the source system, then you must also enable named streams on the destination. Quantum highly recommends that the file system configurations of the source and destination match.
Caution: If you do not have the same named streams configuration on both the source system and the destination system, then the extended attributes are lost when FlexSync replicates the files.
Note: FlexSync 3.2.1 does not support named streams for S3 object-based synchronization.

You can use the StorNext Quality of Service (QoS) function to limit and enforce client I/O limits.
Note: If you use the QoS feature, it applies to all I/O operations, not to individual applications such as FlexSync. Bandwidth Throttling is available within FlexSync to limit the host I/O resources used by FlexSync tasks.

The maximum supported directory path length is 4,096 characters; limit your directory paths to 4,096 characters or less.

Caution: File names that are NOT UTF-8 compliant are NOT synchronized; you can scan your file names to determine if you have invalid file name characters. One reason a file name might not be UTF-8 compliant is that the file originated from a file system that allows non-compliant UTF-8 characters (for example, Latin-1 characters). To scan for invalid file name characters in your file system, use the snfsnamescanner -u command (see snfsnamescanner in the StorNext 6 Man Pages Reference Guide); to convert the invalid file name characters to UTF-8, use the script (utf8FileNames.sh) generated by the snfsnamescanner command.
Example
# /usr/cvfs/lib/snfsnamescanner -u /stornext/test
Wed Feb 26 08:15:11 2020
Starting search in: /stornext/test
Scanning for:
invalid UTF8 names
Files/directories scanned: 1
Elapsed time: 00:00:00
0: File names with invalid UTF8 results in ./utf8FileNames.sh
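After the scan completes, you can review and then run the generated script to rename the non-compliant files. A minimal sketch, assuming the script was written to the current working directory as shown in the output above:
# Review the rename commands before running them
cat ./utf8FileNames.sh
# Apply the renames to convert the file names to UTF-8
sh ./utf8FileNames.sh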

- Use care to avoid renaming destination directories, as FlexSync re-replicates files when the name of the destination directory is changed.
- When you configure a task, you can only configure one task to write to a destination directory. In other words, you cannot overlap tasks or run multiple tasks on the same directory structure concurrently.

Based on the test scenario and configuration described in detail within this topic, optimal overall performance and maximum FlexSync throughput can be achieved by following these best practices:
- You can use FlexSync with StorNext Storage Manager managed file systems (source or destination), if the FlexSync version is 2.0 or greater and the StorNext version is 6.3.0 or greater. See FlexSync and Managed Files for configuration considerations when using StorNext Storage Manager.
Note: FlexSync version 2.2.1 or greater is ONLY supported on a system running StorNext 7.0.1 or greater.
Caution: Use caution if you attempt to mix Bandwidth Throttling and Storage Manager. If you run FlexSync on a throttled link or if the link is substantially slower than the minimum tape drive read speed, see Considerations for Low Network Bandwidth Tape Retrieval for configuration options.
- FlexSync supports data movement to and from third-party file systems and disk mounts, such as NAS/SMB- and NFS-based mounts.
Note: When you replicate between StorNext file systems, the advanced metadata capabilities within StorNext allow FlexSync to identify and copy only changed and new files without needing to scan the file system, dramatically shortening the time and resources needed to create a file replica. When you use FlexSync to replicate data from a third-party file system, FlexSync must do a full scan of the third-party file system to determine changes before it can create file replicas; this scan results in lower replication performance versus a native StorNext-to-StorNext replication task, and can affect the metadata performance of the third-party storage.
- In order to leverage metadata capabilities, you must enable StorNext metadata archive (mdarchive) on the source system; mdarchive may be turned off on the destination system to reduce the performance impact there. Only disable mdarchive on the target system if other features that also use mdarchive, such as file system auditing and online stripe group management, are not needed on the target system. For FlexSync to connect to the source MDC, the hostname/IP pairs of the MDCs must be resolvable through DNS or by adding the mapping to /etc/hosts on both MDCs.
If you plan to have truncated files on your source system and you are running FlexSync 2.1.x (or any prior release), Quantum recommends you set the Metadata Archive Cache Size to a minimum of 8 GiB on your source system (the default is 2 GiB). To configure the Metadata Archive Cache Size, see Edit a File System, expand the Manual Configuration drop-down, and then expand the Configuration Parameters > Features Tab drop-down.
- It is important to ensure Allocation Session Reservation (ASR) is enabled on both the source and the destination for optimal performance and space utilization. For more information, see Allocation Session Reservation (ASR) and How ASR Works.
- Estimating the size of the destination file system is important to ensuring the solution will scale and encompass future needs. The initial recommendation is to size the destination a minimum of two times (2x) the size of the source. When enabling versioning, increase this factor based on the number of versions that will be kept and the rate of change on the source file system; a minimum of three times (3x) might be a good place to start.
Note: Depending on the task configuration, metadata changes including updates to a file's ACL or Extended Attribute may result in a new copy of the file even if the file's contents did not change.
- The file change rate within the source file system (the rate at which files are created and existing files are changed) affects FlexSync resource usage more than any other factor. FlexSync copy tasks are configured for and operate on portions of the source file system. Defining copy tasks, with each task replicating a variable portion of the file hierarchy based on change rate, can have a considerable influence on FlexSync throughput by load balancing across tasks. For example, separating copy tasks that execute more frequently on very active ("hot") areas from copy tasks defined for regions of the source file system that have comparatively lower change rates, and that run less often, optimizes efficiency. As much as possible, try to define your FlexSync copy tasks so they represent logical units of data; for example, have a copy task correspond to the topmost directory of a large project.
- For an unmanaged file system and data protection scenarios that demand continual replication and minimal performance loss on the primary node of an Xcellis Workflow Director storage system, you should implement a FlexSync solution that distributes FlexSync copy tasks across one or more Xcellis Workflow Extenders. This approach delivers optimal replication performance, while lowering the resource and performance impact caused when running FlexSync processes on the failover node (node 2). For a managed file system, using clients for FlexSync causes the data for truncated files to make two hops, one from the MDC to the source flexsyncd daemon and then from the source flexsyncd daemon to the target flexsyncd daemon, which is inefficient. FlexSync does not understand the concept of a distributed data mover (DDM).
- If you plan to use the versioning capability within FlexSync, be aware of the overhead associated with creating hard links. Consider using a phased approach, starting small and gradually expanding the use of versioning, to ensure that the hard link creation process does not adversely affect the performance of your Xcellis Workflow Director or Xcellis Workflow Extender.
Caution: A hard link created outside of the FlexSync versioning capability is not replicated as a hard link and might create a duplicate file. FlexSync does not preserve this type of hard link during replication; instead, it duplicates the file on the destination when it encounters one, so more space is used on the destination than on the source.
- Distributed replication solutions, comprised of an Xcellis Workflow Director and one or more Xcellis Workflow Extenders, each running FlexSync, can be configured to minimize performance degradation for mission-critical workflows. You can achieve this by distributing copy tasks across one or more Xcellis Workflow Extender nodes.
- For end users in the media and entertainment market segment, follow the recommended best practices outlined in the StorNext and QXS for 4K Media Workflows document.
- If your replication requirements demand uninterrupted replication, and you cannot tolerate any performance degradation due to FlexSync processes running on the active metadata controller (MDC), Quantum recommends deploying FlexSync on another (non-MDC) system, for example, an Xcellis Workflow Extender.
- By default, FlexSync copy tasks use Secure Sockets Layer (SSL) to transmit encrypted data to the destination target. SSL requires about 1 CPU core per 1 GB/sec of throughput to the destination. Consider distributing copy tasks onto Xcellis Workflow Extenders to minimize the impact caused by SSL running on an Xcellis Workflow Director.
- As much as practicable, configure FlexSync copy tasks to run only when latency-sensitive applications are not running.
- Limit the number of large memory and compute-intensive applications that are running while FlexSync tasks are also being executed.
- For systems where FlexSync is configured to run on MDCs, if the primary node (node 1) fails over to node 2, Quantum recommends that you fail back to node 1 as quickly as possible, as this distributes the compute and I/O processes across both nodes.
WARNING: While running in a failover scenario, with all compute and storage processing running on a single node, applications running within the Dynamic Application Environment (DAE) have degraded performance. If you can tolerate degraded performance, then you can continue to run FlexSync on node 2 until you are able to fail back to node 1 as the primary node.
- Tuning the host TCP stack might be necessary to optimize network throughput for FlexSync tasks, but should be done with care, since these changes affect all network activity and resources. For example:
net.core.wmem_max = 268435456
net.core.rmem_max = 268435456
net.ipv4.tcp_rmem = 4096 65536 268435456
net.ipv4.tcp_wmem = 4096 65536 268435456
net.ipv4.tcp_window_scaling = 1
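A minimal sketch of applying these settings persistently, assuming a Linux host with root access and a standard sysctl.d directory (the drop-in file name below is arbitrary):
# Create a drop-in file containing the tuning parameters listed above
cat > /etc/sysctl.d/90-flexsync-tcp.conf <<'EOF'
net.core.wmem_max = 268435456
net.core.rmem_max = 268435456
net.ipv4.tcp_rmem = 4096 65536 268435456
net.ipv4.tcp_wmem = 4096 65536 268435456
net.ipv4.tcp_window_scaling = 1
EOF
# Load the new settings without rebooting
sysctl --system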

To ensure your solution scales and encompasses your future needs, it is important you properly estimate the size of your destination file system.
Quantum recommends you size your destination a minimum of two times (2x) the size of the source.
When you enable Versioning (see Schedule Tab), increase this factor based on the number of versions that are kept and the rate of change on the source file system; a minimum of three times (3x) might be a good place to start.
Note: Depending on the task configuration, metadata changes including updates to a file's ACL or Extended Attribute might result in a new copy of the file even if the file's contents did not change.

When you replicate to a StorNext file system, you might think that the exact same space is consumed since the files are identical. However, there can potentially be a difference in space usage between the source and destination. This is due to StorNext's decisions about how to allocate space for files based on the hardware geometry of the storage and the manner in which the files were written. Therefore, files written by FlexSync onto the destination might end up using a different allocation strategy than files which were landed on the source.
For example, StorNext will often "stripe align" files, that is, make their allocations a multiple of the stripe size and align their start to the beginning of a stripe, to get the best performance out of spinning disk. If your stripe sizes differ on the source and destination storage, then the space used by the files might differ as well. These changes can add up to substantial amounts. Additionally, StorNext will often "pre-allocate" space for files as they are being written, in anticipation of more writes consuming the space. This can result in files consuming more space than required if StorNext over-estimates. Differences in the estimates that StorNext makes can result in files having different sizes on different systems.

Question | Answer |
---|---|
Is there a way to accurately estimate these differences? | No, due to the complexity of the algorithm in the StorNext file system. |
Why does FlexSync with StorNext have this behavior if no one else does? | Most file systems do, in fact, have this behavior, but to a much more limited extent. The reason for this is that most file systems run on an opaque block device and they do not know, or care, about its internal geometry. Therefore, they might pad out files to align with a block size (for example, 4 KB). StorNext, by contrast, might try to align files to a multi-megabyte stripe boundary, so the magnitude of alignment differences might be larger. |
Can I turn off this behavior in StorNext to make it more predictable? | There are changes that can be made to make StorNext less aggressive about performance and alignment. However, the changes have a negative impact on performance. |
Can I configure FlexSync to maintain the space allocations even if the StorNext systems are configured differently? | No. |
How do I find out what the space difference is? | All operating systems have some way to tell you this. On Windows or macOS Finder, you can look at the advanced properties of a file or directory. On Linux, run the du command to report the space usage by default; to report the logical size instead, pass it the --apparent-size parameter (see the example below). |
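For example, the following Linux commands (using a hypothetical directory path) compare the two measurements:
# Physical space consumed (allocated blocks), which can differ between source and destination
du -sh /stornext/snfs1/projectA
# Logical (apparent) size, which should match on source and destination
du -sh --apparent-size /stornext/snfs1/projectA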

You can enable Versioning (see Schedule Tab) to maintain multiple versions of the target file namespace dating backwards over time and make snapshots of files in a file system.
FlexSync accomplishes versioning by creating a "version snapshot". The process creates a directory tree of hard links that point at the destination directory, file-by-file. Any modification, including changes to the permissions or renames, results in a new file copy. Therefore, if you enable versioning, your configuration might result in a substantial amount of increased space usage on the destination.
For example, suppose you configure your replication task to replicate every hour, make a new version "snapshot" once a week on Saturday, and keep your snapshots for four weeks. At any given time, you would have five versions of the target namespace: one representing the current state, one representing the last Saturday, and three more for the three Saturday instances before that.
In this example, the maximum number of copies of a given file you could have is five, since there are four snapshots plus the current version, but if a file has not changed, then it would only have one copy shared by all five.
Note: It does not matter how many times you update an individual file, because versioning is based on a time schedule and not on the number of copies.

Question | Answer |
---|---|
If I rename all my files on the source, will my space usage on the target double? | Yes, your space usage doubles until all versions using the old names have expired and are removed, at which point the space is recovered. |
If I change the permissions on all my files, will my space usage on the target double? | Yes, your space usage doubles until all versions using the old permissions have expired and are removed, at which point the space is recovered. |
Every time I create a new version snapshot, does it make a new copy of all the data? | No, version snapshot creation does not consume any new space (other than a bit of metadata space for the hard links). When you modify something that is "snapshotted", that space usage increases. |
How much space is required on my destination? | You need enough space to hold multiple copies of the source data. The number of copies depends on your organic file change rate as well as how often you are inclined to rename large amounts of files or bulk-change permissions on a large amount of content. Quantum recommends you always have a minimum of at least two times the space on the destination so the operations do not fail. |

FlexSync does not preserve a hard link to a file on the source during replication; instead, it duplicates the file on the destination when it encounters a hard link, which results in more space usage on the destination than on the source.
As a result, if your source file system contains a large number of hard links, you must take this into account on the destination.

If your replication task is configured to NOT delete extraneous files on the destination, then more data accumulates on the destination file system, as older versions of the files in their previous locations are retained.
- Moving or renaming files after a replication has occurred results in duplicates of the files being created on the destination system.
- Renaming an upper-level directory after a replication has occurred results in the files below the renamed directory being treated as new files and replicated to the destination.
- If either of the above occurs regularly on a project, more disk space might be consumed on the destination.
Related to Versioning, remember that even if Delete Extraneous Files is enabled for the destination, files that are deleted or renamed on the source are NOT deleted from older versions on the destination. These versions remain until they expire. This might result in more disk space being consumed on the destination than expected.
If the source file system is a StorNext managed file system and files on that managed file system are truncated, the source might appear to take up less space than the destination file system.

- As much as possible, avoid replicating truncated files.
- If your source has truncation enabled, set the minimum truncation time to be much longer than the replication interval. This provides a grace period to complete replication before files are truncated; note that TSM does not explicitly wait for FlexSync before truncating.
- If files do get truncated, FlexSync retrieves them, but this adds significant time and wears tapes/drives.
- Be conservative and consider your real-world ingest rate, not just the configured interval. For example, you might configure replication to occur every 10 minutes, but in practice your change rate bursts such that a replication pass sometimes takes a day to run. This might require additional system configuration based on observed performance and change rate.
- Make sure your source has sufficient capacity to avoid emergency truncation.

- Business imperatives often drive provisioning the smallest possible system for backup, especially with respect to disk capacity.
- The destination must have enough capacity and performance to comfortably receive, store, and truncate content without triggering an emergency truncation.
- If an emergency truncation is frequently triggered while FlexSync is performing replication, the resulting I/O stalls might cause transfer failures due to timeouts.
- A short minimum truncation time is probably desirable on the destination.
- Consider a scenario where the destination might take over as the source.

- Tape drives have both a maximum throughput and a minimum throughput.
- Reading from a tape drive at below the minimum throughput requires the drive to repeatedly start, stop, and rewind the tape (known as tape head scrubbing).
- This effect can very rapidly wear out tape drives.
- When replicating data from tapes over a WAN, it is important to make sure the number and speed of drives is matched to the WAN speed.

- The FlexSync versioning capability instantiates version namespaces reflecting the state of the data at various times in the past, to allow easy recovery and browsing of historical snapshots of the data.
- This versioning capability is accomplished using hard links.
- StorNext does not permit hard links to span relation points.
- Therefore, if you use FlexSync versioning, the destination directory must be inside a relation point, not a relation point itself or the parent of a relation point.
- This allows FlexSync to create sister version directories and ensures they are replicated within the same relation point, as illustrated in the sketch below.
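The following sketch illustrates a valid layout using hypothetical paths and a hypothetical policy class name; confirm the fsaddrelation syntax and command path for your release:
# Create a Storage Manager relation point on the destination file system
/usr/adic/TSM/exec/fsaddrelation /stornext/snfs1/repl -c backup_policy
# Configure the FlexSync task destination as a directory inside the relation point,
# for example /stornext/snfs1/repl/projectA, so that FlexSync can create its version
# directories as hard links within the same relation point.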

Recommended StorNext Cache Settings for a Managed File System
You can use FlexSync with StorNext Storage Manager managed file systems (source or destination), if the FlexSync version is 3.0 (or later) and the StorNext version is 7.1.x (or later).
Note: FlexSync version 3.0 (or later) is ONLY supported on a system running StorNext 7.1.x (or later).
Considerations for Object-based Replication
If your file system contains a managed relation point, then you can checkout a repository to a file system that is designated as managed; however, you cannot checkout to a directory (or below) where a relation point is configured.
Example
You designate /stornext/snfs1 as a managed file system per the file system configuration.
You create a sub-directory labeled tape_policy and add a Storage Manager relation point to it.
You can checkout a repository to /stornext/snfs1 but you cannot checkout a repository to /stornext/snfs1/tape_policy (or below).
Caution: Use caution if you attempt to mix Bandwidth Throttling and Storage Manager. If you run FlexSync on a throttled link or if the link is substantially slower than the minimum tape drive read speed, see Considerations for Low Network Bandwidth Tape Retrieval for configuration options.
If FlexSync is replicating files managed by Storage Manager, it is possible that some of the files may have been truncated before the replication starts. For these files, Storage Manager subsequently retrieves the file data from the first available tertiary storage (for example, tape, or object storage) copy and transfers it directly to the FlexSync destination.

Quantum recommends you set the following cache settings:
- mdarchive: Set the cache to a minimum value of 10 GB for every 150 million files.
- innodb: Set the cache to a minimum value of 10 GB for every 500 million files.
- buffercachecap: Set the cache to a minimum value of 8 GB for every one billion files.

In the case where the file has been truncated and there are multiple copies in tertiary storage, FlexSync retrieves the first copy in the retrieve order of the managed files' policy class. If a different copy is desired, you can change the retrieve order of the policy class from the CLI.
For example, if there are two copies, the first on tape, and the second on object storage, then the tape copy is preferred if the retrieve order is 1,2, and the object storage copy is preferred if the retrieve order is 2,1.

By default, FlexSync stages data through Storage Manager’s in-memory buffers when processing a FlexSync request. However, in the case where the data is stored on tape media and the available network bandwidth is low in comparison with the tape bandwidth, the buffers might fill completely, requiring the tape to stop intermittently to wait for the network to catch up. Since this behavior is undesirable, it might be appropriate to stage incoming data through temporary disk files rather than through memory. You can configure the behavior by setting the FS_DIRECT_STAGE_ENABLE_SIZE and FS_DIRECT_STAGE_DISK configuration parameters using the following guidelines:
- The parameter, FS_DIRECT_STAGE_ENABLE_SIZE, specifies the minimum request size. If the total amount of data requested for transfer equals or exceeds this value, staging files are used to transfer data from tape to FlexSync. You must specify the value in bytes.
- When staging is enabled, the size of the staging files must also be configured by setting the parameter FS_DIRECT_STAGE_DISK. You must specify the value in bytes.
- While the optimal settings for these parameters depend on multiple external factors, values of FS_DIRECT_STAGE_ENABLE_SIZE greater than or equal to 2 GB and FS_DIRECT_STAGE_DISK greater than or equal to 5 GB are appropriate under most circumstances (see the sketch below).
Note: The parameters affect only retrieval from tape and are not required for retrieving explicitly from object storage media.
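A minimal configuration sketch, assuming the Storage Manager system parameters are set in the standard override file (confirm the file location and syntax in the fs_sysparm.README for your release); a Storage Manager restart is typically required for the change to take effect:
# /usr/adic/TSM/config/fs_sysparm_override (values are in bytes)
# Use disk staging files when a retrieval request totals 2 GB or more
FS_DIRECT_STAGE_ENABLE_SIZE=2147483648;
# Size each staging file at 5 GB
FS_DIRECT_STAGE_DISK=5368709120;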

If a source file is truncated and has never been replicated:
- The file is requested from Storage Manager and streamed to the destination.
- The file is not rehydrated on the source.
If a source file has been replicated and is also truncated, then FlexSync ignores the file.

- Optionally, you could rely on Storage Manager for versioning rather than use FlexSync versioning.
- In this case, you should consider the following:
- FlexSync updates files by atomic rename replacement, so you must enable TSM rename tracking in order for old versions to be properly associated with a current inode.
- Restoring the source to a point in time on the destination is much more complicated than using FlexSync versioning, since the files are versioned ad-hoc as stored by Storage Manager.

As part of the replication process, FlexSync creates temporary files on the destination. If your destination is a managed StorNext directory, it is not desirable to allow StorNext Storage Manager to store the temporary files, as this might cause problems in some cases. To prevent Storage Manager from storing the temporary files, you must update the excludes.store entry in the TSM configuration file; the required entry depends on the version of FlexSync you are running.
If you are running FlexSync 2.2.4 (or later), then your entry must be:
If you are running FlexSync 2.2.2 (or earlier), then your entry must be:
Note: One of the entries might already exist, depending on the StorNext software release. Beginning with StorNext 7.0.1, the entry .__flexsync_tmp was added. Beginning with StorNext 7.0.2, the entry .FlexSync_tmp__ was added.
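To check whether an exclusion entry is already present before adding one, you can search the TSM configuration file; the path below assumes the default Storage Manager install location:
# Look for existing FlexSync temporary-file exclusions
grep -i flexsync /usr/adic/TSM/config/excludes.store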

Caution: In general, you should not perform the changes outlined below unless you are experiencing consistent timeouts or you are at a high risk for timeouts. For example, if you run FlexSync and already know that your replication involves tens of millions of files or a large amount of data to be transferred, and you experience a timeout, you can perform the changes outlined below. The changes in the configuration require you to restart your system (for example, StorNext, Tertiary Storage Manager [TSM], and flexsyncd). The TSM and flexsyncd restart might not be intrusive, and replication can resume at a later time. The StorNext file system restart might have more of an impact and can affect replication.
FlexSync uses REST API over HTTP to communicate with other StorNext components, such as StorNext FSM server and the Storage Manager fs_restd daemon.
You can configure two timeout parameters involved in the HTTP communication:

The connect timeout defines the maximum amount of time required to make an HTTP connection. If the connection is not established within a given period of time, the connection timeout occurs. As a result, the timeout event fails the REST API operation. It is unlikely you need to change the connect timeout value unless the server is extremely busy or the network condition is poor. If you experience a connect timeout error, increase the connect timeout value.
The default connect timeout in flexsyncd is 30 seconds. To change the timeout value, use the option --connect-timeout to specify the connect timeout value in seconds for flexsyncd:

The data timeout defines the maximum amount of time required to read or write data on a connection. If the data is not ready on a connection within the given period of time, the data timeout occurs. As a result, the timeout event fails the REST API operation. You might need to adjust the data timeout value under the following circumstances:
- The REST API servers are overloaded. The REST API servers include the FlexSync source node, the StorNext MDC node running the StorNext file system service and the Storage Manager daemons. If only the source flexsyncd node is overloaded, configure and set the same data timeout value on both the source and the destination flexsyncd nodes. If the MDC running the StorNext file system is overloaded, configure and set the same data timeout on the StorNext process fsm, the source flexsyncd node and the destination flexsyncd node.
- If the source file system is unmanaged and the number of files or the size of files to be copied is very large. For example, if your replication contains millions of files to be copied, or the total size to be copied is more than 20 GB, increase the data timeout value on the source and destination flexsyncd nodes.
- If the replication directory on the source side is managed, your files are truncated based on a policy, and the number of files (millions of files) or the size of files to be copied is very large (greater than 20 GB). You should increase your data timeout value.
- If the primary retrieval copy (generally copy1) is not object storage, then configure and set the same data timeout value for FlexSync flexsyncd processes for the source and destination systems.
- If the primary copy is stored on object storage, then configure and set the same data timeout value for the StorNext MDC fsm, and the FlexSync flexsyncd processes for the source and destination systems.
The maximum number of truncated files that can be replicated per task by FlexSync version 2.0.0 through FlexSync version 2.1.1 is 300,000. Quantum recommends you increase the data timeout to one hour (3,600 seconds) if your truncated files reach this limit. You should increase the timeout for all flexsyncd and fsrestd processes to the same value.
Note: Beginning with FlexSync version 2.1.2, there is no limit in the number of truncated files; Quantum recommends you upgrade to FlexSync version 2.2.4 (or later) if your system contains more than 300,000 truncated files to replicate.
The default data timeout in flexsyncd is 15 minutes (900 seconds). To change the timeout value, use the option --data-timeout to specify the data timeout value in seconds for flexsyncd:
Beginning with FlexSync version 2.2.4, a new timeout option (--tsmdata-timeout) is available with a default value of 30 minutes. If you increase the value of the --data-timeout option beyond 30 minutes, then you should also increase the value of the --tsmdata-timeout option to match the --data-timeout option. The --tsmdata-timeout option only affects truncated files and you do not need to configure the value if your system does not contain truncated files. To change the timeout value, use the option --tsmdata-timeout to specify the tsmdata timeout value in seconds for flexsyncd:
You might need to configure /etc/sysconfig/flexsyncd for the flexsyncd service (see the flexsync, flexsyncd, and flexsyncadmind man pages for more details).
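As an illustrative sketch only (the variable name and service name below are assumptions; confirm the exact syntax in the flexsyncd man page for your release), the timeout options described above could be added to /etc/sysconfig/flexsyncd along these lines:
# Hypothetical /etc/sysconfig/flexsyncd entry; verify the variable name against the man page
FLEXSYNCD_OPTIONS="--connect-timeout 60 --data-timeout 3600 --tsmdata-timeout 3600"
# Restart the flexsyncd service (assumed unit name) so the new options take effect
systemctl restart flexsyncd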
You can configure the parameters conn_timeout and data_timeout for the StorNext file system (fsm) and fsrestd in the configuration file:
Note: The default data_timeout for fsrestd is 5 minutes (300 seconds).
Beginning with StorNext.4.0, the default value in the StorNext file system (SNFS) snfs_rest_config.json file for the attribute/parameter in the table below has changed.
Note: The existing snfs_rest_config.json file is saved to snfs_rest_config.json.rpmsave. If needed, you can modify any settings you might have changed in the new snfs_rest_config.json file.
Attribute/Parameter | Old Default Value | New Default Value |
---|---|---|
data_timeout (for the fsm process) | 10 seconds | 900 seconds |
If you upgrade to StorNext.4.0 from a previous version of StorNext, the installation process relocates the snfs_rest_config.json file to snfs_rest_config.json.rpmsave, and installs the new template with the new default value. The unit of measurement for the timeout in the configuration file is milliseconds. See snfs_rest_config.json in the StorNext Man Pages Reference Guide for more information.
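After an upgrade, one way to carry forward any settings you previously changed is to compare the saved copy against the newly installed template; the path below assumes the default StorNext configuration directory:
# Compare your previous REST configuration with the new template installed by the upgrade
diff /usr/cvfs/config/snfs_rest_config.json.rpmsave /usr/cvfs/config/snfs_rest_config.json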

Beginning with FlexSync 2.1.3, you can configure the number of streams per task, to use to retrieve data from Storage Manager for object storage and for tape media.
Note: You can only perform this operation using the CLI.
Use the flexsyncadmin edit-task command to configure any of the following three task parameters.
Attribute/Parameter | Description |
---|---|
tsm-media-reqs | This parameter defines the maximum number of inflight tape media in the TSM media retrieval. Note: The default value is 0, meaning no limit. |
tsm-obj-reqs | This parameter defines the maximum number of inflight TSM media retrieval requests per object storage media. Note: The default value is 4. |
tsm-req-files | This parameter defines the maximum number of files per TSM media retrieval request. Note: The default value is 10,000. |
- Log in to an SSH client, and connect to your system.
- At the prompt, enter the following command:
/opt/quantum/flexsync/bin/flexsyncadmin -U <username> -P <password> edit-task <dest_host_name> <task_name> set <option_name>=<value>
Example
/opt/quantum/flexsync/bin/flexsyncadmin -U admin -P password edit-task mdc2 project1 set tsm-obj-reqs=8000
- If the command succeeds, then the following output appears:
Changes will take effect when the task becomes idle.