FsBlockSize, Metadata Disk Size, and JournalSize Settings
The FsBlockSize (FSB), metadata disk size, and JournalSize settings all work together.
All file systems use a File System Block Size (FsBlockSize [FSB]) of 4 KB. This is the optimal value and is no longer tunable. Any file systems created with versions prior to StorNext 5 are automatically converted to use a 4 KB FsBlockSize the first time the file system is started with StorNext 5. While file systems upgraded from StorNext 4.x internally use a 4 KB block size, StorNext tools continue to display the original FsBlockSize values. This ensures that StorNext 5 can continue to support clients running prior versions of StorNext.
Metadata Disk Size Setting
For internal metadata, you should provision approximately 2 GB of disk space for every 1 million user files, as each file consumes 1 KB for its inode plus additional space for at least one directory entry. This is slightly generous, but it provides some room for future growth and covers pathological cases where files have several hard links and/or many extended attributes.
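The rule of thumb above can be expressed as a simple calculation. The following sketch is illustrative only; the function name is ours, not part of any StorNext tool:

```python
def internal_metadata_gb(user_files):
    """Estimate internal metadata space in GB.

    Rule of thumb from the text above: roughly 2 GB per 1 million
    user files (about 1 KB per inode plus directory-entry overhead,
    with headroom for hard links and extended attributes).
    """
    return 2.0 * (user_files / 1_000_000)

print(internal_metadata_gb(50_000_000))  # 50 M files -> 100.0
```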
The amount of external metadata needed is a little more complicated to estimate.
- Unmanaged file systems that do not have Metadata Archive enabled require zero external metadata space.
- Unmanaged file systems that have Metadata Archive enabled require about 1 GB of space on the HA shared file system for every 1 million user files.
- Managed file systems require about 2 GB of space on the HA shared file system for every 1 million files due to additional metadata stored in MySQL.
After you calculate the estimated usage, you must triple that number to account for special circumstances, such as a StorNext upgrade moving to a new Metadata Archive format or keeping an extra online copy of the MySQL database for a service-related purpose.
Below is an example calculation of external metadata:
| Description | Value |
|---|---|
| File system snfs1, managed, 50 M files | 100 GB |
| File system snfs2, unmanaged with Metadata Archive enabled, 50 M files | 50 GB |
| File system snfs3, unmanaged with Metadata Archive disabled, 100 M files | 0 GB |
| File system snfs4, managed, 20 M files | 40 GB |
| Sub-total | 190 GB |
| Total (sub-total multiplied by 3) | 570 GB |
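The example calculation can be sketched in code. This mirrors the rates given above (2 GB per million files for managed, 1 GB per million for unmanaged with Metadata Archive enabled, 0 otherwise); the dictionary keys and variable names are ours:

```python
# GB of external metadata per million files, per the guidelines above.
RATE_GB_PER_M = {
    "managed": 2.0,
    "unmanaged_archive": 1.0,      # Metadata Archive enabled
    "unmanaged_no_archive": 0.0,   # Metadata Archive disabled
}

# (name, type, millions of files) -- values from the example table.
file_systems = [
    ("snfs1", "managed", 50),
    ("snfs2", "unmanaged_archive", 50),
    ("snfs3", "unmanaged_no_archive", 100),
    ("snfs4", "managed", 20),
]

subtotal = sum(RATE_GB_PER_M[kind] * millions
               for _name, kind, millions in file_systems)
total = subtotal * 3  # triple for upgrades / extra MySQL copies
print(subtotal, total)  # 190.0 570.0
```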
Additional space is required when Metadata Archive history is enabled. The amount needed depends on factors such as the number of history days configured, the worst-case rate at which files are modified, and whether the audit feature is also enabled. It is difficult to estimate ahead of time exactly how much additional usage results from enabling Metadata Archive history. One relatively safe way to measure its impact is to set metadataArchiveDays to a very small number for a file system and monitor the disk usage of its metadata archive over a period of days. Unfortunately, measuring this way means that the file system is already running, which in turn means that the provisioning decision has already been made. Nevertheless, this may provide guidance on whether expanding external metadata on an existing system is required to enable history.
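One way to track that disk usage over a period of days is a small script run daily (for example, from cron). The sketch below is a generic directory-usage logger, not a StorNext tool; the archive path is an assumption you must replace with the actual metadata archive location on your MDC:

```python
import datetime
import os

def archive_usage_kb(archive_dir):
    """Sum the on-disk size (in KB) of all files under archive_dir."""
    total = 0
    for root, _dirs, files in os.walk(archive_dir):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # a file may vanish while scanning
    return total // 1024

def log_usage(archive_dir, log_path):
    """Append a dated usage line so growth can be reviewed over days."""
    line = f"{datetime.date.today()} {archive_usage_kb(archive_dir)} KB\n"
    with open(log_path, "a") as log:
        log.write(line)

# Hypothetical paths -- substitute your real metadata archive directory.
# log_usage("/path/to/metadata/archive", "/var/log/mdarchive-usage.log")
```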
As described later, if a StorNext system outgrows the storage initially provisioned for internal or external metadata, you can add additional space.
JournalSize Setting
Quantum recommends all new file systems on the current release be created with a JournalSize of 256 megabytes (MB).
Note: The JournalSize log expansion process relies on the cvupdatefs utility; Quantum recommends you perform an offline file system check (cvfsck) prior to the JournalSize expansion.
Increasing the JournalSize beyond 256 MB may be beneficial for workloads where many large directories are created or removed at the same time. For example, workloads that create or remove 100 thousand files per directory, across several directories at once, see improved throughput with a larger journal.
The downside of a larger journal size is potentially longer FSM startup and failover times.
If you use a value less than 256 MB, your failover time might improve, but your file system performance might be reduced.
Note: Journal replay has been optimized, so a 256 MB journal often replays significantly faster than a 16 MB journal.
A file system created with a release prior to StorNext 5 might have been configured with a small JournalSize. This is true for file systems created on Windows MDCs, where the old default journal size was 4 MB. Journals of this size continue to function with StorNext 5.x, but will see a performance benefit if the size is increased to 256 MB. You can adjust the setting by using the cvupdatefs utility. For more information, see the cvupdatefs command in the StorNext MAN Pages Reference Guide.
If a file system previously had been configured with a JournalSize larger than 256 MB, there is no reason to reduce it to 256 MB when upgrading to a current release of StorNext.
Note: To view an example on how to update the JournalSize setting of a file system, see Update the JournalSize Setting of a File System.