Add a File System
The following procedure describes how to create a new file system. The number of file systems you can add is limited only by the number of disks available for configuration.
- On the Configuration menu, click File Systems. The Configuration > File System page displays all currently configured file systems.
Note: You can also access this page from the StorNext Configuration Wizard by choosing the File Systems option.
- Click New to add a new file system.
- Enter the following fields:
- File System Name: Enter the name for the new file system.
- Mount Point: Enter the mount point for the new file system, or accept the displayed default mount point.
- Storage Manager: Select this option if you want this file system to be managed by StorNext Storage Manager.
Note: If you plan to protect the contents of the file system using FlexSync, do not select the Storage Manager option. FlexSync does not support Storage Manager protected file systems. For additional information, see the FlexSync Documentation Center.
- Replication/Deduplication: Select this option if you want to enable data replication/deduplication on the new file system.
Note: The StorNext Replication and Deduplication options are not supported with FlexSync; FlexSync cannot be combined with either option. For additional information, see the FlexSync Documentation Center.
- Stripe Group Configuration: Select Generated or Manual. If you choose Generated, StorNext creates the file system with typical parameters after you enter basic configuration information. If you select Manual, you can create the file system by specifying all parameters yourself; exit this procedure and proceed to Manual Configuration.
StorNext uses Stripe Groups to separate data with different characteristics onto different LUNs. Every StorNext file system has three kinds of Stripe Groups.
- Metadata Stripe Groups hold the file system metadata: the file name and attributes for every file in the file system. Metadata is typically very small and accessed in a random pattern.
- Journal Stripe Groups hold the StorNext Journal: the sequential changes to the file system metadata. Journal data is typically a series of small sequential writes and reads.
- User Data Stripe Groups hold the content of files. User data access patterns depend heavily on the customer workflow, but typical StorNext use is of large files sequentially read and written. Users can define multiple User Data Stripe Groups with different characteristics and assign data to those Stripe Groups with Affinities; see StorNext File System Stripe Group Affinity.
See FSBlockSize, Metadata Disk Size, and JournalSize Settings for additional information about how to determine the size of a stripe group.
Because the typical access patterns for metadata and user data are different, Quantum recommends that you create separate stripe groups for metadata and user data. Journal data access patterns are similar enough to those of metadata that the journal can be placed on the Metadata Stripe Group.
To achieve optimal performance, Quantum recommends that you split the user data, the metadata, and the journal data into separate stripe groups. The create, remove, and allocate (for example, write) operations are very sensitive to the I/O latency of the journal stripe group. However, if create, remove, and allocate performance is not critical, you can share a stripe group between metadata and journal. You must set the exclusive property on the metadata and journal stripe group so that it is not also allocated for user data.
Note: Quantum recommends that you have only a single metadata stripe group. For increased performance, use multiple LUNs (two or four) for the stripe group.
The access patterns on a High Availability (HA) shared file system allow metadata, journal, and user data to share a single stripe group. Some appliances might come from the factory with a single stripe group pre-configured for the HA shared file system. If you expand an HA shared file system configured as a single stripe group, you must add either:
- Another shared metadata and user data stripe group.
- An exclusive metadata stripe group and a data stripe group.
Caution: You cannot add an exclusive user data stripe group.
See the StorNext 6 Tuning Guide for additional information about optimizing your system.
- If you select Generated configuration, click Continue to proceed to the second configuration page.
- Complete the following fields:
- RAID Type: Select the RAID type that corresponds to your system configuration from the drop-down list.
Note: If you are using a StorNext G300 Gateway Appliance, the default value is Quantum Disk.
- Data Disks per LUN: The number of data disks per LUN.
- Segment Size (Bytes): The amount of data that will be written to one drive in a RAID LUN before writing data to the next drive in that LUN. Configure the Segment Size using the RAID user interface.
- Data Stripe Breadth: The amount of data that StorNext writes to a LUN before switching to the next LUN within a stripe group. For best performance in many RAIDs, you can set the Data Stripe Breadth to a value resulting from the following calculation:
Data Stripe Breadth Value = Segment Size x Disks per LUN
Note: Required fields are marked by an asterisk (*).
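For example, using the calculation above: with a Segment Size of 512 KB and 8 data disks per LUN, the Data Stripe Breadth would be 512 KB x 8 = 4 MB. These values are only illustrative; use the actual segment size and disk count from your RAID configuration.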
- Select one or more disks to assign to the file system.
Note: Use the check-box column to select or deselect.
- After selecting one or more disks, click Meta to designate any disks to be used for metadata, or click Journal to designate any disks to be used for journaling. A disk can be used for both metadata and journaling.
- In the field to the left of the Label button, enter a label name. Click Label to apply the label name to the selected disks. Click Unlabel to remove the label name from the selected disks.
- After you are finished entering label information, click Assign to assign the selected disks to the file system. Click Unassign to remove existing associations between disks and the file system. For example, if you assign disks erroneously, clicking Unassign is an easy way to remove associations and reassign disks.
- Click Continue.
- Click the arrows beside the headings Configuration Parameters and Stripe Group/Disk Management to display that information. If desired, make any changes in these areas.
- When you are satisfied with the file system parameters, click Apply. StorNext automatically configures and mounts the file system based on the information you entered.
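To confirm that the new file system is running and mounted, you can check from the MDC command line. A minimal sketch, assuming a file system named snfs1 mounted at /stornext/snfs1:
cvadmin -e 'select'
df -h /stornext/snfs1
The select command with no argument lists the running file systems, and df verifies the mount point.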
If you chose Manual Configuration, you must complete the fields on the Configuration Parameters tabs and the Stripe Group/Disk Management fields.
Note: If necessary, click the arrow to the left of these headings to display the tabs and fields.
- When you are finished entering Configuration Parameters and Stripe Group/Disk Management information for the manually configured file system, click Apply to save your changes and create the file system.
- When a message informs you that the file system was successfully created, click OK.
The Allocation tab contains fields that affect how resources are allocated on your file system.
- Journal Size: Defines the size of the file system journal.
Note: To view an example of how to update the JournalSize setting of a file system, see Update the JournalSize Setting of a File System.
- Strategy: Defines the method for choosing stripe groups when allocating disk blocks. Options are Round, Fill, or Balance.
- Reserved Space: Enables delayed allocations on clients. Reserved space is a performance feature that allows clients to perform buffered writes on a file without first obtaining real allocations from the metadata controller. The allocations are performed later during periodic cache synchronization.
Note: If the Reserved Space option is not enabled, slightly more disk space will be available, but file fragmentation may increase and performance may not be satisfactory.
- Stripe Align Size: Defines the minimum allocation size to trigger automatic stripe-aligned allocations.
- Inode Stripe Width: If non-zero, causes large files to have their allocations striped across stripe groups in chunks of the specified size.
- Allocation Session Reservation Size: The Allocation Session Reservation feature optimizes on-disk allocation behavior. Allocation requests occur whenever a file is written to an area that has no actual disk space allocated, and these requests are grouped into sessions. The amount you specify in this field determines the size of the chunk of space reserved for a session.
In the first field, enter the desired chunk size. In the second field, specify the chunk unit of measure (B=bytes, KB=kilobytes, MB=megabytes, GB=gigabytes, TB=terabytes). For more information about the Allocation Session Reservation feature, refer to the StorNext File System Tuning Guide.
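For example, entering 1 in the first field and selecting GB reserves disk space for each allocation session in 1 GB chunks.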
- Affinity Preference: If checked, permits files of a particular affinity to have their allocations placed on other available stripe groups (with non-exclusive affinities) when the stripe groups of their assigned affinity do not have sufficient space. Otherwise, allocation attempts will fail with an out-of-space error.
For additional information, see Affinity.
- Auto Affinities: Designates the affinity (one or more stripe groups) to which allocations will be targeted for all files on this file system whose names have the specified file extension.
To add a new entry to the Auto Affinities table, type a file extension (omit the "." dot), select an affinity from the Affinity menu, and click Add. Use the file extension "*" (asterisk) to indicate "all other" file extensions that are not explicitly listed. The Affinity menu will list only affinities that are currently assigned to a stripe group of this file system. The affinity NoAffinity indicates that allocations will be targeted at stripe groups that have no affinity. To delete one or more entries, check any rows to delete, and click Delete.
Each unique file extension can be targeted at only one affinity. However, each affinity may serve as the allocation target for more than one file extension. To sort the Auto Affinities table by file extension or affinity name, click the File Extension or Affinity column title, respectively. Multiple clicks cause the sort order to alternate between ascending and descending alphabetic order.
The Performance tab fields allow you to adjust parameters for optimized performance.
- Buffer Cache Size: Defines the amount of memory used by the FSM process for caching metadata.
- Inode Cache Size: Defines the number of inodes that can be cached in the SNFS server. The default and minimum setting for the cache size is 16.
- Use Physical Memory Only: When this option is selected, the file system uses only physical memory, not swapped or paged memory.
- High Priority FSM: Determines whether the FSM process should run with real-time priority.
The Debug tab fields allow you to enable or disable debugging and set parameters for the debug log.
- Enable Debugging: Enables detailed file system debug tracing. When debug tracing is enabled, file system performance could be significantly reduced.
- Debug Log Settings: Settings to turn on debug functions for the file system server. The log information may be useful if a problem occurs. A Quantum Technical Assistance Center representative may ask for certain debug options to be activated to analyze a file system or hardware problem.
- Maximum Log Size: Defines the maximum number of bytes (size) to which a StorNext Server log file can grow. When the log file reaches the specified size, it is rolled and a new log is started. In this situation, the two log files might use twice the maximum log size specified in this field. The range is from 1 to 32 megabytes.
- Maximum Number of Logs: Determines the number of rolled logs kept. Choices range from 4 to 64.
- OP Hang Limit (Seconds): Defines the time threshold (in seconds) used by the FSM process to discover hung operations.
The Features tab fields allow you to enable or disable various file system-related features.
- Case Insensitive: Controls how the FSM reports case sensitivity to clients. The StorNext GUI only allows you to enable the Case Insensitive option on a new file system during its creation. Windows clients are always case insensitive. Mac clients default to case insensitive, but if the FSM is configured as case sensitive, they operate in case sensitive mode. Linux clients follow the configuration variable, but can operate in case insensitive mode on a case sensitive file system by using the caseinsensitive mount option.
Note: Linux clients must be at StorNext 5.4 or later to enable this behavior.
Caution: Before you enable case insensitive, Quantum recommends that you stop the file system and then run cvfsck -A to detect name case collisions (this process might consume a large amount of time). The cvupdatefs command does not enable case insensitive when name case collisions are present in the file system.
Caution: After you run cvfsck -A and the configuration file has been updated, stop the file system (if it is not already stopped) and run the cvupdatefs command to enable or disable case insensitivity. Clients must then re-mount the file system to pick up the change.
How To Enable Case Insensitivity on an Existing File System Using the CLI
Use the CLI on an existing file system to enable case insensitivity.
Note: The following procedure applies only to the CLI.
- Use the CLI to edit the following file, and set the variable caseInsensitive to the value true:
/usr/cvfs/config/snfs1.cfgx
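For reference, the variable is stored as an XML element in the .cfgx file. A minimal excerpt might look like the following; element placement varies by release, so verify against your own configuration file:
<caseInsensitive>true</caseInsensitive>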
- Run the following command:
/usr/adic/util/syncha.pl -primary
- Wait approximately one minute, or run the following command on the backup node:
/usr/adic/util/syncha.pl -secondary
- Run the following command to stop the file system:
cvadmin -e 'stop snfs1'
- Run the following command to detect name case collisions:
cvfsck -A
Note: This process might require a large amount of time.
- Run the following command to commit the configuration change to the system:
cvupdatefs snfs1
- Run the following command to start the file system:
cvadmin -e 'start snfs1'
- I/O Tokens: Allows you to select which coherency model is used when different clients open the same file concurrently. If I/O Tokens is disabled, the coherency model uses three states: exclusive, shared, and shared write. If a file is exclusive, only one client at a time can use the file. Shared indicates that multiple clients can have the file open, but only in read-only mode; this allows clients to cache data in memory. Shared write indicates that multiple clients can have the file open and at least one client has the file open for write; in this mode, coherency is resolved by using DMA I/O and no caching of data.
If I/O Tokens is enabled, there are two cases:
- If all the file opens are read-only, no token is used and all clients can read and cache data. In other words, writes are not allowed.
- If at least one client opens the file for write, each I/O performed by a client must have a token. In other words, clients can do many I/Os while they have the token, and can use the cache until it is invalidated.
As a best practice, if you have multiple writers on a file, enable I/O Tokens, unless you know that the granularity and length of I/Os are safe for DMA.
Note: File locking does not prevent read-modify-write across lock boundaries.
For backward compatibility, if a client opens a file from a prior StorNext release that does not support I/O Tokens, then the coherency model reverts to the Shared Write model using DMA I/O, but on a file-by-file basis.
Note: If the I/O Tokens option is changed and the MDC is restarted, then the files that were open at that time continue to operate in the model before the change. To switch these files to the new value of I/O Tokens, all applications must close the file and wait for a few seconds and then re-open it. Or, if the value was switched from enabled to disabled, then a new client can open the file and all clients are transparently switched to the old model on that file.
For additional information, see StorNext File System Data Coherence.
- Security Model: Determines the scheme for specifying and enforcing security policies. The available options are legacy, unixpermbits, and acl. The default value is legacy.
- If the Security Model is legacy, the Unix Id Mapping field is grayed out (disabled); however, the Windows Security option and the Enforce ACLs option remain enabled.
- If the Security Model is acl, the Unix Id Mapping field is not grayed out; however, the Windows Security and Enforce ACLs options are grayed out (disabled).
- If the Security Model is acl, the Unix Id Mapping field cannot be none. You must select a value from the Unix Id Mapping list.
- Unix Id Mapping: Determines the Unix Id mapping. The available options are none, algorithmic, winbind, and mdc. The default value is none.
- Windows Security: Determines whether Windows ACLs are enabled for the file system.
- Enforce ACLs: Determines whether ACLs are enforced on Xsan clients.
- Windows Global ShareMode: Determines whether Windows Global ShareMode is enabled for the file system. The Global ShareMode variable enables or disables the enforcement of Windows Share Modes across StorNext clients. This feature is limited to StorNext clients running on Microsoft Windows platforms. See the Windows CreateFile documentation for the details on the behavior of share modes. When enabled, sharing violations will be detected between processes on different StorNext clients accessing the same file. Otherwise sharing violations will only be detected between processes on the same system. The default of this variable is false. This value may be modified for existing file systems.
- Quotas: Specifies whether the Quota feature is enabled for the file system.
Note: Quotas are calculated differently on Windows and Linux systems. You cannot migrate a metadata controller running quotas between these different types.
Note: You cannot configure the Quotas feature when the securityModel is set to legacy and windowsSecurity is set to false.
Note: To configure an individual user quota or group quota, use the CLI command snquota. To configure a directory quota, use the StorNext GUI (see Tools > File Systems > Manage Quotas).
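For example, a user quota might be set from the CLI as follows; the file system name, user name, and limits are placeholders, and the exact options should be verified against the snquota man page:
snquota -F snfs1 -S -u jane -h 500g -s 400g -t 1w
This sketch sets a 500 GB hard limit, a 400 GB soft limit, and a one-week grace period for user jane on the file system snfs1.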
- Quota Logs Retention Period: If Quotas is enabled, you can configure the length of time (Days, Weeks, Years) to retain the Quota logs. In other words, Quota logs are not retained after the length of time determined by the Quota Logs Retention Period value.
- Named Streams: Determines whether a file system includes support for the Xsan Named Streams feature. Accessing files with Named Streams from a non-Xsan client is not supported. The Named Streams feature enables the storing of additional file system metadata. Because of this, the Named Streams feature cannot be disabled after it has been applied to a file system. See StorNext Named Streams for Managed File Systems for additional information.
StorNext Named Streams for Managed File Systems
Beginning with StorNext 6.2, you can enable the Named Streams feature for managed file systems. If you enable Named Streams, user-defined metadata created from macOS clients, such as extended attributes and Resource Forks, is stored directly in StorNext metadata instead of creating Apple Double files having the "._" prefix. Besides reducing the number of files, the feature improves overall performance and reduces overhead, especially with macOS Finder.
Note: You can enable Named Streams for an existing managed file system; however, you should immediately run the macOS dot_clean(1) utility to merge Apple Double files into StorNext metadata.
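A minimal sketch of that cleanup, run from a macOS client on which the file system is mounted (the mount path is a placeholder):
dot_clean -m /Volumes/snfs1
The -m option merges the "._" Apple Double files into the corresponding files' metadata and then removes them.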
Considerations for Resource Forks
Below are items you should take into account regarding how Storage Manager interacts with Resource Forks when you enable the Named Streams feature:
- While versions of the primary files are tracked as before, versions of Resource Forks are not kept. If changes are made to a Resource Fork, only the most recent version is kept.
- A change of version in the primary file does not affect an associated Resource Fork.
- Resource Forks can only be truncated by policy.
Note: Their truncation is not tied to the truncation of the primary file. A file can be truncated with a disk resident named stream or vice versa.
- The retrieval of a primary file does not cause the retrieval of an associated Resource Fork; the Resource Fork is not retrieved until (and unless) it is accessed.
- A Resource Fork is not visible in the Linux namespace.
- When a primary file with a Resource Fork is removed, its Resource Fork is removed as well.
- If just the Resource Fork is removed, the information needed to recover that Resource Fork is not kept. This removal does not affect the primary file.
- When a primary file is recovered its Resource Fork is automatically recovered with it. It is not possible to recover only the primary file or the named stream.
- If a stored named stream is updated, both the named stream and the primary file can end up being re-stored, regardless of whether or not the primary file was also updated. This is also true if the primary file is updated; both can be re-stored. See Modification of Files with Named Streams, which provides information about modification of named streams.
Improved Recovery of Deleted Files for Managed File Systems
Beginning with StorNext 6.2, a complete set of metadata is saved when a file under a Storage Manager policy is deleted. Besides basic file attributes, this also includes:
- Extended Windows ACLs and DOS bits
- Extended attributes
- macOS resource forks
This metadata is included when a file is recovered using the Storage Manager fsrecover(1) command, provided the system was running StorNext 6.2 (or later) when the file was deleted. In releases prior to StorNext 6.2, only very basic metadata, such as the following, was saved at the time of deletion:
- Unix file owner
- Unix permission bits
- Modification time
- File size
Modification of Files with Named Streams
While there is a relationship between a primary file and its named stream from a user perspective, from the Storage Manager perspective they are two separate files. Each is stored, truncated, retrieved, and so on, individually as needed.
One attribute the two files do share is the modification time: if either the primary file or the named stream is updated, the one shared modification time is updated.
Within Storage Manager, the modification time for stored files is tracked in the database as a mechanism to help ensure data integrity. For example, if a named stream is updated, the modification time is changed and that modification time does not match the saved value for the primary file.
When a manual store (fsstore) is run on the primary file, where the modification time in the inode does not match the database value, the current version is invalidated and the file is re-stored with a new version.
Note: The fspolicy process does not make the modification time check, so the file is not re-stored by a policy at that point. However, when it is time to truncate the file, the modification time check is made; the current file version is invalidated and the file is added as a store candidate. The following fspolicy re-stores the file.
Note: You can only truncate a named stream using a truncation policy. Because you cannot specify individual files in a policy, you must use the following command to truncate all truncation candidates: fspolicy -tc -m0 -o0
- Spotlight Proxy: Determines if Spotlight proxy is enabled for the file system. For additional information, see Configure Spotlight Proxy.
- Use Active Directory SFU: Determines if Active Directory is enabled for the file system.
- File Locks: Determines whether the FSM tracks and enforces file locks across all clients.
- FileLock Resync Timeout: Defines the timeout for clients re-registering file locks following FSM failover.
- Metadata Archive: Lets you enable or disable metadata archive creation by the FSM process. A metadata archive logs file system operations and is a key piece of restoring a file system after a disaster on non-managed or managed file systems. By default, metadata archive creation is disabled on non-managed file systems, and enabled on managed file systems.
- Metadata Archive Days: Allows you to set the number of days of metadata history to keep available in the Metadata Archive. The default value is zero (no metadata history).
- Metadata Archive Cache Size: Allows you to configure the size of the memory cache for the Metadata Archive. The default value is 2 GiB.
- Metadata Archive Search: Allows you to enable or disable the Metadata Archive Search capability in Metadata Archive. If enabled, Metadata Archive supports advanced searching capabilities that are used by various other StorNext features. Metadata Archive Search is enabled by default and should only be disabled if performance issues are experienced.
- Audit: This option allows you to control whether the file system maintains extra metadata for use with the snaudit command and for tracking client activity on files. By default, this option is disabled.
Note: The Audit feature requires that the Metadata Archive option is enabled.
When the Audit feature is enabled, the FSM requests that file system clients send information about the file I/O they have performed. The FSM then records this information in the Metadata Archive, and the snaudit tool queries the data from there. Because the data originates on the clients, some information might not be gathered from older clients that do not know how to send the necessary data.
- Global Super User: Enable this option (check the box) to allow a user with super-user privileges to assert these privileges on the file system.
- If the Global Super User option is enabled, super users have global access rights on the file system. This selection is the same as the maproot=0 directive in the Network File System (NFS).
- If the Global Super User option is not enabled, super users can modify only files they can access, like any other users.
The LDAP tab fields allow you to enter parameters related to LDAP (Lightweight Directory Access Protocol, an application protocol for querying and modifying directory services running over TCP/IP).
- Unix File Creation Mode on Windows: The number of mode bits for UNIX files.
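For example, a value of 644 creates files with read/write permission for the owner and read-only permission for the group and all other users.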
- Unix Directory Creation Mode on Windows: The number of mode bits for UNIX directories.
- Unix Nobody UID on Windows: UNIX user ID to use if no other mapping can be found.
- Unix Nobody GID on Windows: UNIX group ID to use if no other mapping can be found.
- Unix ID Fabrication on Windows: Allows you to enable or disable using fabricated IDs on a per-file system basis. If enabled, Windows user IDs are mapped using fabricated IDs.
The Advanced tab fields allow you to enable or disable advanced file system-related features.
- File System Capacity Threshold: Defines the file system fill level (in percent) that triggers a RAS event.
- Extent Count Threshold: Defines the number of extents in a file required to trigger a fragmentation RAS event.
- Remote Notification: Determines whether to enable partial support for cluster-wide Windows directory event notification. The list of events supported includes only FILE_NOTIFY_CHANGE_FILE_NAME and FILE_NOTIFY_CHANGE_DIR_NAME. Another limitation is that only a specific directory can be monitored, not the full hierarchy beneath it. If you enable Remote Notification, you might experience higher metadata traffic.
To modify an existing stripe group, under the Stripe Groups heading select the stripe group you want to modify, and then change its properties as desired.
To add a new stripe group to the file system, click Add and then enter the remaining fields for the new stripe group.
(Optional) Select the Skip trim/unmap of thin provisioned disks option to disable the trim operation for thin provisioned disks. By default, this option is not selected and all thin provisioned disks found are trimmed (unmapped) as part of creating a file system. The trim operation can take several minutes depending on the mappings present in the storage array.
When you are finished on the Stripe Group tab, click Apply to save your changes, or Cancel to abandon your changes.
- Stripe Group: Select the stripe group you want to modify or delete.
- Add: Click this button to add a new stripe group.
Note: If the following indented fields are not displayed, they appear after you click Add. Likewise, after you delete a stripe group these fields may not be displayed.
- Name: Enter a name for the new stripe group, or skip this field to accept the displayed name.
- Breadth: Specify the stripe group breadth, which is the number of kilobytes (KB) that is read from or written to each disk in the stripe group.
- Content: Specify whether the stripe group will be used for metadata, journaling, or user data. You can specify one, two, or all of these content types.
- Delete: Click this button to delete the currently selected stripe group.
WARNING: This delete function does not display a confirmation message. The selected stripe group is deleted immediately after you click Delete, and the deletion is permanent and irreversible, so be absolutely sure you want to delete the selected stripe group before you click Delete.
- Affinity: An affinity is a label that allows you to control the physical location of files by placing selected file types on specific stripe groups. Managed file systems are restricted to two affinities, which are used for Disk-to-Disk relocation Storage Manager policies, and the same two affinity names must be used on all managed file systems. By default, the affinity names on a managed file system are "Tier1" and "Tier2".
For unmanaged file systems, using affinities is a two-step process:
- Each stripe group can be assigned one or more affinities during file system configuration.
- A directory is associated with each affinity.
For example, if you configure stripe group SG2 to have affinity AFF2 and then associate the directory special_files with affinity AFF2, all files put into special_files can exist only on the disks that make up SG2. Otherwise, the files put into that directory could exist on any stripe group or on any disk in the file system.
It makes sense to use affinities in environments where performance is critical. For example, you might want to constrain video files to a stripe group made of Fibre Channel disks tuned for video playback, but have audio files reside on slower SCSI disks in a different stripe group.
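From the CLI, you can create a directory that is associated with an affinity by using the cvmkdir command. A sketch using the names from the example above (the mount path is a placeholder):
cvmkdir -k AFF2 /stornext/snfs1/special_files
Files subsequently created in that directory are allocated on the stripe group with affinity AFF2 (SG2 in the example above).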
You cannot remove an affinity from a stripe group if that affinity is not assigned to another stripe group and the Auto Affinities table includes a file extension that is targeted at that affinity. Instead, you must first update the Auto Affinities table to delete that auto affinity entry. See the Auto Affinities section for how to delete an auto affinity.
If you want to associate an affinity to the new stripe group, select the desired options.
- Exclusive: When this option is enabled, the selected stripe group is used exclusively for the affinity's files.
- Access: Specify the permission level for the stripe group:
- Full R/W (read/write)
- Read Only
- Disabled
- Quality of Service: The RTIO/RVIO implementation of Quality of Service is being deprecated in favor of Quality of Service Bandwidth Management (QBM). Specify parameters for the Quality of Service (QOS) feature. QOS allows real-time applications to reserve a specified amount of bandwidth on the storage system.
- RealTime I/O/sec: The amount of I/O per second to reserve for realtime applications.
- RealTime I/O MB/sec: The amount of I/O, in megabytes per second, to reserve for realtime applications.
- Non-RealTime I/O/sec: The amount of I/O per second to reserve for non-realtime applications.
- Non-RealTime I/O MB/sec: The amount of I/O, in megabytes per second, to reserve for non-realtime applications.
- RealTime Timeout secs: The timeout interval to reserve for realtime applications.
- Disk Assignment: Select one or more disks to assign to the file system. Press and hold Shift or Ctrl to select multiple disks.
- Label: In the field to the left of the Label button, enter a label name. Click Label to apply the label name to the selected disks.
- Unlabel: Click Unlabel to remove label names from selected disks.
- Assign: Click Assign to assign selected disks to the file system stripe group.
- Unassign: Click Unassign to remove previous associations between disks and the stripe group.