Scalar i3 iBlades
Overview
There are two types of iBlades supported by Scalar.
The SLTFS iBlade can be used in two ways:
- Single-volume volume groups (scratch disabled) - allows you to manage your media such that you know what data is written to each volume. As volumes fill up, you are responsible for writing data to the next volume.
- Multi-volume volume groups (scratch enabled) - allows you to create volume groups and start writing data to a volume. When the volume is filled to capacity, the system will start writing to another volume in the same volume group. If all volumes in the volume group are full, the system will pull volumes from the scratch pool to ensure data is captured.
As new media are imported into the SLTFS partition, the volumes will initially be in the Discovered Media volume group, or pool. When in the discovered pool, the system does not know the state of the media. It may be new, unformatted tapes, or tapes that have had data written to them previously. To use new, unformatted media in the discovered media pool, it will have to be formatted. Once formatted, the media are moved to the Scratch Media volume group, or pool. All newly formatted media are in the scratch media pool. Media with data may be attached if the data is to be retained. Upon attachment, the media is moved into a volume group.
Additional Information
- iBlades are not compatible with LTO-9 drives.
- See Scalar LTFS Terms for a list of terms specific to the Scalar LTFS iBlade.
Layout
Provides an overview of the media in the SLTFS partition as well as all volume groups. It displays which media belong to which volume groups, how full the media are and the state of the media. Administrators will use this to configure their volume groups and manage media. They can monitor when media needs to be added to the scratch pool or they can combine volume groups or export media.
Lists all the details of a selected ScalarLTFS volume group. This includes configuration details such as free space available, number of assigned media, scratch pool enabled and more.
Shows the configuration details of the installed ScalarLTFS iBlade.
Provides options for iBlade configuration, volume group management and actions.
Configuration | |
---|---|
Add | Allows an administrator to add a new volume group to the SLTFS partition. |
Modify | Allows you to change settings for a volume group. |
Delete | Allows you to delete a volume group. |
Media View | Presents the media view in the North Panel. |
VG View | Presents the volume group view in the North Panel. |
Maintenance | |
---|---|
Job Queue | Allows you to monitor the state of Scalar LTFS operations initiated from the WebGUI. |
Assignment | Allows you to add scratch pool or discovered media to an existing volume group. |
Merge | Allows you to combine two (2) existing volume groups into a single volume group. |
Replication | Allows you to create a copy of a volume group. |
Repair | Allows you to restore a volume group back to an available status if a problem arises during another operation. |
Safe Repair | Allows you to fix a problem so that you may continue using the volume group and/or media. |
Actions | |
---|---|
Format | Allows you to clear all data from media for reuse, or to take brand new media and prepare it for use in the library. |
Attach | Gets media ready and available to use in the library. |
Sequester | Allows you to designate media as not available for file system activity. |
Prepare | Allows you to export media from a volume group. |
Export | Allows you to export either all media or selected media from a volume group. |
Online | Allows you to make a volume/media available for file system activity. |
Offline | Allows you to make a volume/media unavailable for file system activity. |
Tasks
Configuration
The Add feature allows an administrator to add a new volume group to the Scalar LTFS partition. Once added, the new volume group will not appear in the file system until media has been added using either an assign (see Assign Media to a Volume Group) or merge (see Merge Volume Groups).
- From the Navigation panel, select NAS - iBlade.
- In the Operations panel, click Add.
Item | Description | Action |
---|---|---|
Volume Group Name | Allows administrators to assign a name to the volume group. The volume group name must be unique, less than 255 characters, and cannot contain special characters (<>?/:*"\|\). | Enter text. |
Free Threshold Percent | Allows administrators to set a threshold percentage for volume group storage capacity. A message is generated when a volume group exceeds this threshold. | Select a value from the drop-down menu. |
Scratch Pool | Allows administrators to set the volume group to automatically add scratch pool media when media currently assigned to the volume group is full or reaches the maximum file count. Deselecting this feature requires manual addition of media to the volume group when it is full or reaches the maximum file count. | Select the checkbox to enable scratch pool media addition. Deselect the checkbox to disable scratch pool media addition. |
Comment | Allows administrators to add comments about the volume group up to a maximum of 128 characters. | Enter text. |
- Click Apply to save your settings.
- Click Close to exit the window.
This feature allows you to change some settings for a volume group. You can enable the scratch pool for it, add a comment, or change its free threshold.
- From the Navigation panel, select NAS - iBlade.
- In the North Panel, select the check box next to the volume group you want to modify.
- In the Operations panel, click Modify.
Item | Description | Action |
---|---|---|
Free Threshold Percent | Allows administrators to set a threshold percentage for volume groups. An alert is generated when a volume group exceeds this threshold. Scratch media needs to be available when this threshold is exceeded. | Select a value from the drop-down menu. |
Scratch Pool | Allows administrators to set the volume group to automatically add blank media when necessary. | Select the checkbox to enable scratch pool media addition. Deselect the checkbox to disable scratch pool media addition. |
Comment | Allows administrators to add comments about the volume group. | Enter text. |
Vaulted Location | An additional comment field for you to note the location of a vaulted volume group. You can enter up to 128 characters. | Enter text. |
- Click Apply to save your settings.
- Click Close to exit the window.
Deleting a volume group removes it from the North Panel. You can only delete a volume group that contains no volumes (is empty), and you can delete multiple volume groups at one time. You cannot delete the scratch or discovered media pool.
Maintenance
The Job Queue allows you to monitor the state of Scalar LTFS operations initiated from the WebGUI. The screen will update every 60 seconds but there is also a Refresh button that can be used. File system operations (read/write) will not be displayed in the Job Queue. By default, the Job Queue displays all jobs in progress and jobs completed within the last 24 hours.
- From the Navigation panel, select NAS - iBlade.
- In the Operations panel, click Job Queue.
Item | Description | Action |
---|---|---|
Cancel Job | Indicates which job(s) are to be canceled. | Select the check box next to the job you want to cancel and click Apply. |
Job ID | A unique identifier created when a function is initiated by the system. | Information only. |
 | A visual representation of the state of a job. | Information only. |
Source | Indicates the volume group or media the job is being performed on. | Information only. |
Event | Indicates what the job is performing. | Information only. |
State | The state of the job. Values include: completed successfully, failed, canceled, completed with exception, paused, in progress. | Information only. |
Destination | Indicates where the media is going. This is an optional field based on job type. If given, this is the destination volume group. | Information only. |
Started | Indicates when the job was begun. | Information only. |
Progress | Indicates where in the process the job is currently. | Information only. |
Reason | Indicates why the job failed. | Information only. |
Ended | Indicates when the job was completed. | Information only. |
User | Indicates who initiated the job. | Information only. |
Refresh | Allows you to refresh the list. | Click the Refresh button. |
Display List | Allows you to filter the completed jobs displayed in the Job Queue. Values include: 24 Hours (default), 2 Days, 3 Days, 1 Week, 2 Weeks, 1 Month, 2 Months, 3 Months. Note: All jobs are automatically deleted from the system after 90 days. | Click the button and select a value. |
- Click Apply to save your settings.
- Click Close to exit the window.
The Assignment feature allows you to add scratch pool or discovered media to an existing volume group. Discovered media is any media added to the library and available for assignment to a volume group. Scratch media is any media added to the library that has been formatted and is known to be empty. You can assign either discovered or scratch media to a volume group but not both at the same time.
After you've assigned media to a volume group, you can use the Job Queue to view status of the assignment.
- From the Navigation panel, select NAS - iBlade.
- In the North Panel, select the check box next to the discovered or scratch media you want to assign to a volume group.
- In the Operations panel, click Assignment.
Item | Description | Action |
---|---|---|
Destination Volume Group | Displays a list of all available volume groups. The destination volume group must be a non-system (SLTFS) volume group, be in ready state or empty, and be online. | Select a value from the drop-down list. |
- Click Apply to save your settings.
- Click Close to exit the window.
The Merge feature combines two (2), and only two (2), existing volume groups into a single volume group. When merged, the remaining volume group will retain the name of the destination volume group.
Caution: Files that exist in both volume groups (file collision) will fail the merge.
Volume group merge requirements:
Selected volume group from North panel must:
- be in ready state
- have no offline media
- have no sequestered media
- be online
Destination volume group must:
- be non-system (SLTFS) volume group
- be in ready state or empty
- be online
- Select a volume group from the North Panel.
- In the Operations panel, select Merge.
Item | Description | Action |
---|---|---|
Destination Volume Group Name | Displays a list of all available volume groups. | Select a value from the drop-down list. |
 | Merge all volumes - this takes all volumes and data from each volume group and combines them into a single volume group. Merge only volumes with data - this takes only the volumes that have data and moves them to the intended destination volume group. Note: The source volume group will remain with the empty volumes. | Select the desired radio button. |
- Click Apply to save your settings.
- Click Close to exit the window.
The Replication feature allows you to create a copy of a volume group. To copy a volume group, you will need two available drives and, depending on the size of the volume group, a considerable amount of time to complete the copy.
Note: Replication cannot occur if the volume group contains empty media or if more than one (1) volume group is selected. The Replication button will be unavailable in this instance.
- Select a volume group from the North Panel.
- In the Operations panel, select Replicate.
Item | Description | Action |
---|---|---|
Destination Volume Group Name | Allows you to enter a name for the new volume group. The new volume group name must be new (does not exist in the system), be less than 255 characters in length, and not contain these characters: <>?\|:*"/\ | Enter text. |
Verify Data | Allows the system to verify the volume group has been copied. Note: Enabling the Verify Data option will increase the amount of time it takes to complete the volume group replication. | Select the check box to enable verify. Deselect the check box to disable verify. |
- Click Apply to save your settings.
- Click Close to exit the window.
The Repair feature allows you to restore a volume group back to an available status if a problem arises during another operation, such as a merge. Only a single volume group can be repaired at a time. To perform a repair, the following requirements must be met:
- A non-system volume group
- Volume group state must be unavailable
- Volume group may not have any offline media
The Safe Repair feature allows you to fix a problem so that you may continue using the volume group and/or media. To repair a volume group, it must have the following:
- at least one media that is in the Sequestered state, has failed a job request (e.g., Merge Volume Groups, Replicate a Volume Group), or is needed to restore a volume group that is in the Ready for Export state
- selected volume group must be a non-system volume group
- volume group state must be either ready or unavailable
- volume group can have no offline media
If successful, the repaired media will now reside in the new volume group. If you want to return the media to its original volume group, you will have to Assign Media to a Volume Group manually. A Repair a Volume Group or Attach Media can be attempted on the volumes in the new volume group in order to resolve issues with the sequestered media.
Note: Safe Repair cannot occur if the volume group contains empty media or if more than one (1) volume group is selected. The Safe Repair button will be unavailable in this instance.
- Select a volume group from the North Panel.
- In the Operations panel, select Safe Repair.
Item | Description | Action |
---|---|---|
Destination Volume Group Name | Allows you to enter a name for the new volume group. The new volume group name must be new (does not exist in the system), be less than 255 characters in length, and not contain these characters: <>?\|:*"/\ | Enter text. |
- Click Apply to save your settings.
- Click Close to exit the window.
Actions
The Format feature allows you to clear all data from media for reuse, or to take brand new media and prepare it for use in the library. Media can only be formatted if they are in the Sequestered or Auto-Attachable state. Sequestered media that belong to a volume group will be removed from that volume group after formatting. After a format completes, the media will enter the auto-attachable state and become a member of the scratch pool.
You may complete the formatting of multiple media across multiple volume groups at the same time. However, all media in a volume group must be in an online state before you can initiate a format.
WARNING: Formatting media containing data erases all data!
The Attach feature gets media ready and available to use in the library. Media must be either sequestered or in the discovered or scratch pool to be attached. Sequestered media is media that either the user or the system has sequestered. You may sequester media to take it out of action for any reason; the system will sequester media if it encounters an error during an operation. The discovered pool contains media that have been imported into the system. Attaching discovered pool media is the mechanism by which the system determines what is on the media. Successfully attaching a sequestered tape puts it back into action.
During an attach operation, media is mounted and then, depending on what is found on the tape, attached to a volume group. If the media previously belonged to a volume group, the tape will be returned to that volume group. If the media doesn't belong to a volume group, a volume group is created for it using the media's barcode.
After attaching, the volume state will be pending attach and a job will have been created for it in the Job Queue.
The Sequester feature allows you to designate media as not available for file system activity. The system will automatically sequester a tape if a problem is encountered but you may also manually sequester media.
The library will allow you to sequester multiple media, from multiple volume groups as long as the following requirements are met:
- media must be in either attached or vaulted state
- volume group(s) must be online.
The Prepare feature allows you to export media from a volume group. You can select multiple volume groups for export as well as multiple media within the volume group(s). To prepare a volume group(s) for export, the following requirements must be met:
- A non-system volume group
- Volume group(s) must contain at least one media
- Volume groups cannot have any offline media
- Volume groups must be in ready state
- All media in the volume group must be attached (see Attach Media).
Note: Once a volume group(s) is prepared for export, the only way to restore the volume group(s) to be available is to:
1) perform a Repair a Volume Group operation on the selected volume group(s), or
2) complete the export operation and then re-import the media back into the volume group(s)
There are two types of export operations:
Exporting a volume group - the volume group must be prepared for export (see Prepare a Volume Group for Export). This means the media in the volume group must be in Ready for Export state. Once the volume group is in this state and you click the Export button, all media within the volume group is moved to available I/E slots. Until the media is physically removed from the I/E slot, the media in the volume group will be in the Pending Export state and the volume group state will be Unavailable.
Exporting individual media from a volume group - the media must be in sequestered state; any discovered or scratch media can also be exported.
Note: More media than defined I/E slots can be exported. When media is removed from the I/E slots, additional media will be moved to the I/E slots until all media scheduled for export are moved to I/E slots and physically removed from them.
- From the Navigation panel, select NAS - iBlade.
- In the North Panel, select the check box next to the media you want to export.
- In the Operations panel, click Export.
- Click Apply to save your settings.
- The selected media is set to Pending Export in the Job Queue.
- Click Close to exit the window.
- Remove media from the I/E slot.
- From the Navigation panel, select NAS - iBlade.
- In the North Panel, select the check box next to the media you want to export.
- In the Operations panel, click Export.
- The screen displays a list of the coordinate location for each media and its barcode. The coordinate number is broken down in the following way:
- First two numbers are always 1, 1.
- The third number is the library section.
- The fourth number is the magazine column.
- The fifth number is the magazine slot.
Note: For more details on reading coordinates, refer to the library documentation.
- Using the coordinate, navigate to the media's location and remove the media. Do this for all media listed.
- Once all media are manually removed, click Close to exit the window.
There are two modes:
- Online — Volume group is available for file system activity. This is the normal operating mode for the volume groups and media.
- Offline — Volume group is unavailable for file system activity, meaning no data I/O can be performed on selected volume group or media.
Some operations require that the volume group or media be offline.
Scalar LTFS iBlade Best Practices
Network Attached Storage (NAS) is a client-server presentation of a filesystem hosted by a server and made available to one or more clients using a local area network. In this product, the Scalar LTFS iBlade is a NAS server. The user’s desktop or laptop is an example of a NAS client.
A NAS client sees Scalar LTFS media as NAS storage by using NFS (Network File System) or CIFS (Common Internet File System) file sharing protocols. Scalar LTFS media are aggregated into a single name space that can be mounted by NFS clients or mapped by CIFS clients. A set of one or more Scalar LTFS media can be grouped together into volume groups. Each volume group is accessible by a NAS client as a NAS disk folder.
A Linux client can gain access to the Scalar LTFS iBlade volume groups by NFS mounting using either NFSv3 or NFSv4 protocols. A command line example for an NFS mount looks like this:
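A minimal sketch of such a mount, assuming a Linux client, NFSv4, and a local mount point of /mnt/ltfs (the mount point and exact option set are assumptions to adapt for your environment):

```
# Create the local mount point (assumed path), then mount the iBlade share over NFSv4.
# Use vers=3 for NFSv3. The large timeo value is explained in the note below.
mkdir -p /mnt/ltfs
mount -t nfs -o vers=4,timeo=12000 #.#.#.#:/<mount-name> /mnt/ltfs
```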
To mount the Scalar LTFS iBlade automatically upon NFS client restart or power-on, enter the following entry into the NFS client’s /etc/fstab file.
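Under the same assumptions as the mount sketch above, the /etc/fstab entry might look like the following (the _netdev option, which delays the mount until networking is up, is an added assumption):

```
# <device>               <mount point>  <type>  <options>                    <dump> <pass>
#.#.#.#:/<mount-name>    /mnt/ltfs      nfs     vers=4,timeo=12000,_netdev   0      0
```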
Whichever procedure is used to mount the Scalar LTFS iBlade, substitute #.#.#.# with the IP address of the Scalar LTFS iBlade and substitute <mount-name> with the NAS Share name that’s configured for the Scalar LTFS iBlade. Select Devices > Operations > Settings to see what the NAS Share Name is set to. It will be ScalarLTFS if it has never been changed.
Note: A very large timeo value of 12000 deci-seconds (20 minutes) is chosen to help hide the occasional latencies that are incurred when tapes are mounting or tapes are seeking files.
Windows
A Windows client can gain access to the Scalar LTFS iBlade volume groups by connecting the Scalar LTFS iBlade as a CIFS file server.
- Display the computer view on the host machine (i.e. my computer).
- Click Map Network Drive.
- Select an available drive letter from the Drive: drop-down menu.
- Type the following (see the path sketch after these steps):
- Click Finish.
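As an assumption based on the NFS instructions above, the folder path to type combines the iBlade IP address (or hostname) with the NAS Share name in UNC form:

```
\\#.#.#.#\<mount-name>
```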
MacOS
A MacOS client can gain access to the Scalar LTFS Blade volume groups by connecting the Scalar LTFS iBlade as a NAS file server.
- From the Finder, select Go > Connect to Server.
- In the Server Address field, type the Scalar LTFS Appliance IP address that was entered as part of the Setup Wizard.
- Click Connect.
Some NFS client applications write files using I/O patterns that can cause out-of-order file fragments. Media that are written this way will have large indexes describing the files. Such large indexes will cause extensive performance issues when reading directory content or restoring file data.
Mitigation options to prevent future fragmentation can be any of the following:
- Use muCommander - Quantum Edition to write files.
- Use the dd command with the oflag=direct option as the primary copy engine to write files to the LTFS Blade (see the sketch after this list).
- If the dd command cannot be used as the primary copy engine, see if the application can pipe its output to dd to leverage the oflag=direct option.
- If the NFS client application can be modified, then use the O_DIRECT mode to write files to the LTFS Blade.
- Switch over from Linux NFS to Linux CIFS.
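A minimal sketch of the dd approach, assuming the iBlade share is already mounted at /mnt/ltfs and that MyVolumeGroup and the file names are placeholders:

```
# Copy a file with direct (unbuffered) output writes, which helps keep file
# fragments in order on the LTFS Blade
dd if=/local/source.bin of=/mnt/ltfs/MyVolumeGroup/source.bin bs=1M oflag=direct

# If another tool must produce the data, pipe its output through dd so the
# write side still benefits from oflag=direct
tar -cf - /local/dataset | dd of=/mnt/ltfs/MyVolumeGroup/dataset.tar bs=1M oflag=direct
```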
Once a volume group’s media have become fragmented, use Replication to create a replica volume group with defragmented media.
An extra available drive is recommended when spanning files using NFS. When writing files using NFS, the write patterns can sometimes write file fragments out of order. In order to span a full file, the Scalar LTFS iBlade may need to utilize two (2) drives to reorder the incoming out-of-order file fragments. A volume group write failure may occur if an extra drive is not available when file spanning.
Tape motion such as loading tape or seeking tape position can take a long time and cause I/O timeouts for file browser, script, or application requests. Use the following procedure to modify the timeout value in the Windows Registry to avoid I/O timeout failures.
Caution: Before attempting to modify the Windows Registry, it is recommended that users back up the existing registry in case the modifications cause issues with their host.
- From the Windows Start menu, click Run. The Run window displays.
- Type REGEDIT, and click OK. The Registry Editor window displays.
- Locate the following registry key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanWorkstation\Parameters
- Set the value of SessTimeout to 3600.
- If the SessTimeout value does not exist, create it as a new REG_DWORD value:
- Right click in the Registry Editor window. Select New > DWORD value.
- Name the new value SessTimeout.
- Right click SessTimeout > Modify. The DWORD Value window displays.
- In the Value Data field, type 3600.
- Click OK.
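As a sketch, the same SessTimeout change can also be made from an elevated command prompt with the standard reg.exe utility (back up the registry first, as cautioned above):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v SessTimeout /t REG_DWORD /d 3600 /f
```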
Avoid launching and leaving Windows Explorer up without performing I/O. Windows Explorer will periodically send background filesystem requests to the Scalar LTFS iBlade to retrieve directory content for the purpose of checking whether directory content has changed. This may incur performance issues with ongoing jobs (media or volume group) as well as ongoing I/O from another file browser, script, or application.
Avoid Windows Explorer icon and thumbnail generation which results in unsolicited filesystem requests sent to the Scalar LTFS iBlade. The user will experience file write and read latencies when the Scalar LTFS iBlade goes to read a different section of Scalar LTFS media to facilitate icon or thumbnail generation.
File directories should always be set to Details or List mode to make file transfers as easy as possible.
Windows
- Select View > Options > Change Folder and Search Options.
- Select the View tab.
- Click the Apply to Folders button.
- A confirmation box displays asking if you want to apply the view options to all the sub-folders. Click Yes.
Mac OS
- Select the LTFS directory.
- Select View > as List.
- Select View > Show View Options.
- From the View Options window, select the Always open in list view check box.
- Close the window to apply changes.
Note: Mac users will have to apply these settings to each directory. Sub-directories will not inherit properties of their parent directory.
Linux
- Select the LTFS directory.
- Select Edit > Preferences.
- From the View tab of the File Management Preferences window, select List View from the View new folders using: drop-down menu.
- Click Close.
When playing video files in Windows Explorer, ensure that the Duration column is not visible when selecting the file. Displaying this column can cause a full file reload, delaying the playback of the file.
Users may see a Properties Lost dialog while copying files with Windows Explorer. Disregard this dialog since there is no impact to the data transfer.
It is recommended for Windows Explorer users to disable submenu launch on mouse hovering.
Finder is unsupported for the Scalar LTFS iBlade. Some Finder features make it incompatible for use with the Scalar LTFS iBlade. Instead, use muCommander - Quantum Edition. The muCommander - Quantum Edition User's Guide and Release Notes are found on the Release and Reference Documentation page.
Avoid concurrent access to the same volume group from two (2) or more file browsers or applications. A Scalar LTFS tape will undergo shoe-shining when different files are concurrently accessed and the files are spaced away from each other on tape. File write and read performance will be extremely poor and tape drive heads and media can wear out faster when they’re subjected to shoe-shining.
Avoid using file editing applications (e.g., word processors, database software, spreadsheet editors, etc.) directly on Scalar LTFS iBlade files. These file editing applications can fragment files on Scalar LTFS media and cause subsequent read performance issues. The recommended procedure is to:
- Copy the Scalar LTFS iBlade file to local disk
- Run the file editing application
- Copy back the file to Scalar LTFS iBlade
Avoid accessing more volume groups than there are available drives with file browsers or applications. When the number of accessed volume groups exceeds the number of drives, the drives are said to be oversubscribed. Applications and scripts will get EAGAIN filesystem errors and file browsers will get Network Busy pop-ups. The Scalar LTFS iBlade attempts to warn that oversubscribing is happening by posting the M105 “No drives available” system message.
Avoid volume group full failures due to lack of scratch media in the scratch pool when writing files. The Scalar LTFS iBlade helps by warning the user of a dwindling scratch pool by posting the M104 “Scratch pool low capacity threshold reached” system message. Act quickly when this system message is posted by formatting media to make scratch media.
Working with files that are below 100MB may lead to degraded performance in the following areas:
- File Transfer rate
- File Browser Refresh time
- Appliance Startup time
Volume group media can pick up media overhead such as deleted files or redundant indexes. The extra media overhead can degrade read performance. Select Replication to create a replica volume group that’s clean of media overhead to regain read performance.
To check whether a volume group qualifies for reclamation, select the Devices > NAS Blade page, select each constituent volume of your volume group in the North Panel, and check the reclamation rating shown in the Information panel. If any media has a high rating (5 or greater), then select your volume group and replicate it.
Scalar LTFS Terms
Term | Definition |
---|---|
Discovered Pool | Volumes imported into Scalar LTFS via the Library. These volumes can be assigned to a volume group or they can be formatted into scratch media for expanding a volume group's capacity. The discovered pool is displayed on the GUI as the [discovered media] volume group. |
Fibre Channel | A high-speed data transfer technology. Using optical fibre to connect devices, Fibre Channel primarily transports SCSI traffic between computers and I/O devices. |
File Collision | A situation where file names already exist in a volume group and there is an attempt to merge a volume(s) into the volume group, or to attach a sequestered volume belonging to the volume group, that has the same path name. |
File Spanning | Spanning of files across multiple media/volumes. |
LTFS | An acronym for Linear Tape File System, a file system that provides access to files on the latest generations of LTO tape technology as if the files were on the user’s local disk. |
Partition | A logical subset of an underlying physical library that may present a different personality, capacity, or both to the host. It is a representation of real physical elements, combined to create a grouping that is different from the physical library. Also a logical portion of the physical library that is viewed by the host as if it is a complete library. Partitions present the appearance of multiple, separate libraries for purposes of file management, access by multiple users, or dedication to one or more host applications. |
RAS Ticket | A ticket that alerts the user of an issue with the appliance. RAS tickets identify which appliance components are causing an issue. When possible, the RAS ticket provides instructions for resolving the issue. |
Scratch Pool | Volumes formatted and then available to be auto-attached by SLTFS for expanding a volume group's capacity. The scratch media pool is displayed on the GUI as the [scratch media] volume group. |
Sequester | A media state that removes all metadata from the system, such that the tape, and its data, will no longer be seen in the file system. At this point a user can either physically remove it from the system, reformat it, or reattach it. |
URB | User Request Broker. |
Vaulted Media | A media state indicating that volumes have been exported from the system but the system still retains their metadata. |
Volume Group | A volume group is a collection of one or more media that is presented to end users and applications as a directory in the file system. |
Scalar LTFS Frequently Asked Questions
Here are some frequently asked questions and their answers. If the question is regarding an issue, then a resolution is recommended.
Note: Volume Group is abbreviated as VG in this FAQ.
Issue | Resolution |
---|---|
The VG may have been taken offline by a user. | Select NAS - iBlade from the Navigation panel, select your VG in the North Panel, and check to make sure the VG is not offline. Click Online in the Operation panel to bring the VG back online. |
The VG may have been taken offline due to a job failure. | Select NAS - iBlade from the Navigation panel and check whether the state is unavailable. If so, then Repair a Volume Group on the VG to bring it back online. |
The VG index sizes may be large due to file fragmentation, which can cause index load latencies (i.e., file browser refresh delays). File fragmentation is normally caused by the NAS Client I/O patterns during write. | Select NAS - iBlade from the Navigation panel, click on each constituent volume of your VG, and check the fragmentation rating shown in the Information panel. If any volumes have high ratings (5 or greater), then fragmentation is contributing to the sluggish display of VGs. Resolution 1: Prevent future file fragmentation with any of the mitigation options listed in the Scalar LTFS iBlade Best Practices section above. Resolution 2: Defragment existing VGs by selecting NAS - iBlade from the Navigation panel, select your VG in the North Panel, and Replicate a Volume Group. This will create a replica VG with no file fragmentation and the index load latency will be much shorter. |
The VG index sizes may be large due to the VGs being written on the Scalar LTFS Appliance or the Standalone LTFS System. Those systems allowed writing more than 1 million file objects (files, directories, and symbolic links) per volume, which causes the indexes to become large. The large indexes can cause index load latency (i.e., file browser refresh delays). | Select NAS - iBlade from the Navigation panel, select your VG in the North Panel, and Replicate a Volume Group. This will create a replica VG with smaller indexes and the index load latency will be much shorter. |
Note: Your replica VG may have more VG media than the original VG.
Issue | Resolution |
---|---|
The NAS Client wrote files with I/O patterns that cause out-of-order file fragmentation, which causes poor read performance. | Resolution 1: Recode the application to write files with the O_DIRECT mode (see the Scalar LTFS iBlade Best Practices section above). Resolution 2: Select NAS - iBlade from the Navigation panel, click on each constituent volume of your VG in the North Panel, and check the reclamation and fragmentation rating shown in the Information panel. If any volumes have high ratings (5 or greater), then select your VG and Replicate a Volume Group. This will produce a replica VG with better read performance by reducing overhead space (e.g., reclamation) and reordering file fragments (e.g., defragmentation). |
There may be a second NAS client attempting to access the Scalar LTFS iBlade at the same time that your NAS client is reading files. | Shut down the second NAS client application or file browser. |
The Scalar LTFS iBlade may be receiving filesystem requests to manipulate security attributes while your NAS client is reading files. | Select NAS - iBlade from the Navigation panel, select the iBlade in the North Panel, then click Settings in the Operation panel. Deselect the Extended File Attributes check box to regain some performance. |
The NAS Client is reading small files, which results in slow performance. | Slower read performance is expected when reading small files. |
Issue | Resolution |
---|---|
There may be a second NAS client attempting to access the Scalar LTFS iBlade at the same time that your NAS client is writing files. | Shut down the second NAS client application or file browser. |
The Scalar LTFS iBlade may be receiving filesystem requests to manipulate security attributes while your NAS client is writing files. | Select NAS - iBlade from the Navigation panel, select the iBlade in the North Panel, then click Settings in the Operations panel. Deselect the Extended File Attributes check box to regain some performance. |
The NAS Client is writing small files, which results in slow performance. | Slower write performance is expected when writing small files. |
Issue | Resolution |
---|---|
VG capacity cannot grow if there are no scratch media in the scratch media pool. | Select NAS - iBlade from the Navigation panel and check for available scratch media in the scratch pool. If none in the scratch pool, then import more media and format them. Format Media will create scratch media in the scratch pool. Retry your operation. |
VG capacity cannot grow if the per-VG scratch policy is disabled. | Select NAS - iBlade from the Navigation panel and select your VG in the North Panel. Click Modify in the Operation panel. Select the Scratch Pool checkbox to enable scratch pool use. Retry your operation. |
VG capacity cannot grow if the per-Device scratch policy is disabled. | Select Devices from the Navigation panel and select the Scalar LTFS iBlade. Click Settings in the Operations panel. Check the Scratch Pool checkbox to enable scratch pool use. Retry your operation. |
Issue | Resolution |
---|---|
The Export job cannot complete if there are no empty I/E slots. | Select NAS - iBlade from the Navigation panel, then click Job Queue in the Operation panel and check on the job state of your export job. If the job state indicates it’s waiting for an export slot, then remove tapes from the export slots. |
Some jobs require a free drive to complete. | Select NAS - iBlade from the Navigation panel, then click Job Queue in the Operations panel and check on the job state of your job. If the job state indicates it’s paused and waiting for a drive, then give the job adequate time to wait its turn at allocating a drive to complete the job. Note whether prior jobs are finishing and releasing the drives. If a job continues to wait for a drive, then an option to cancel the job is available. |
Some jobs require scratch media to complete. | Select NAS - iBlade from the Navigation panel, then click Job Queue in the Operations panel and check on the job state of your job. If the job state indicates it’s paused and waiting for scratch media, then add more scratch media using Format Media. First, find sequestered media that can be formatted. Import more media into the discovered pool as needed. After sequestered media and/or discovered media have been selected in the North Panel, then click Format to add scratch media to the scratch media pool. |
NAS filesystem activity can prevent a job from completing. | Select NAS - iBlade from the Navigation panel, then click Job Queue in the Operations panel and check on the job state of your job. If the job state indicates it’s paused and waiting for filesystem to become idle, then wait until the filesystem becomes idle and the job will resume on its own. An option is to cancel the job and retry the job later. |
Issue | Resolution |
---|---|
The file resides on vaulted media and it cannot be overwritten, read, or deleted. | Using Windows Explorer, click on the file to attempt to read it. An alert will be generated in the System Message page to indicate an attempt was made to read a vaulted media. Note the vaulted volume name (e.g., barcode) from the system message. Import the vaulted media to reunite the media with its VG. The file icon will then not have an X across it and the file can then be overwritten, read or deleted. |
Issue | Resolution |
---|---|
Your drives might be varied-off. | Select Drives from the Navigation panel and select all drives that are varied off in the North Panel. Click Vary-On. |
Your drives might be offline. | Select Drives from the Navigation panel and select all drives that are offline in the North Panel. Click Online. |
Your drives are busy with other filesystem or job requests. | Retry the application or script again when there’s less drive pressure. |
Issue | Resolution |
---|---|
M101 - Vaulted file not available for read: means an attempt was made to read a vaulted file. | Import the volume indicated by the barcode from the system message details. The import action will reunite the volume with its VG. The desired file can then be read. |
M102 - Vaulted file not available for write: means an attempt was made to write or delete a vaulted file. | Import the volume indicated by the barcode from the system message details. The import action will reunite the volume with its VG. The desired file can then be written or deleted. |
M103 - Volume group low capacity threshold reached: means the VG capacity has reached the threshold that was set when the VG was created. This threshold policy triggers this system message. To change threshold, select the VG from the North Panel and Modify a Volume Group. | Attach scratch media to VG if scratch pool is disabled. Otherwise, do nothing because scratch media will be auto-attached to the VG if scratch pool is enabled and scratch media is available. To enable scratch pool, select the VG from the North Panel and Modify a Volume Group. |
M104 - Scratch pool low capacity threshold reached: means the number of scratch media in the scratch pool is insufficient for proper VG capacity growth. | Add more scratch media to the scratch pool by formatting media. |
M105 - No drives available: means there are more requests than there are drives available. | Resolution 1: Do nothing if the job queue indicates jobs are progressing to completion; the remaining jobs will need to wait their turn to get drives to complete the request. Resolution 2: Check the Scalar LTFS partition (select Partitions from the Navigation panel) to make sure drives have not been taken offline because of failures as indicated by RAS tickets. Attempt to bring the drive online. Resolution 3: Wait for filesystem activities to diminish and retry the operation. |
M106 - Write failed due to no space remaining: means a VG couldn’t grow its capacity automatically. | Check that the scratch pool is enabled. Then, check to see if there are scratch media in the scratch pool. Format media to add scratch media to the scratch pool as needed. To change these settings for the VG, select the VG from the North Panel and Modify a Volume Group. |
M108 - Filename collision: means a filesystem request, an attach job, an assign job, or a merge job has encountered file name duplicates. | The duplicate file names must be changed or removed on the source volume indicated in the system message details. For merge or assign jobs you can attach the source volume indicated in order to modify or remove the files. If the indicated volume is sequestered and is a constituent of the destination VG, then a Safe Repair Volume Group can be issued for the VG in order to attach the indicated volume with the new VG name and change or remove the duplicate file names. |
M109 - Invalid volume media type: means a filesystem request was attempted on a media that is invalid for the drive type. This includes writing an LTO-5 media with an LTO-7 drive or writing an LTO-6 media with an LTO-8 drive. | Replicate a Volume Group to migrate the files from the older generation media to a newer generation media that’s supported by the LTO tape drive. |
M110 - File spanning failed: means a filesystem request or a replication job needed a drive to complete the file spanning, but drives were unavailable. | Retry the operation when more drives are available. |
Feature | Scalar LTFS iBlade | Scalar LTFS Appliance |
---|---|---|
Drive License | Not Required | Supported |
Volume Groups | Supported | Supported |
File Spanning | Supported | Supported |
CIFS ACLs | Unsupported | Supported |
NFS ACLs | Unsupported | Unsupported |
Vaulted File Indicator | Supported | Supported |
System Messages | Supported | Supported |
RAS Tickets | Integrated with Scalar i3 & i6 | Dedicated Scalar LTFS tickets |
Max File Objects Per Media | 1 million | User selectable up to 10 Million |
Max LTFS Media | 2000 | 5000 |
Max LTFS Drives | 4 | 8 |
Max LTFS Partitions | Single | Multiple |
NAS WebGUI | Integrated with Scalar i3 & i6 | Dedicated |
NAS Share Name | User selectable | must be ScalarLTFS |
When writing small sized files, the upper limit of 1 million file objects (i.e., files, directories, or symbolic links) may have been reached on the other VG media. The Scalar LTFS iBlade will consider these media as being full and look to add scratch media to continue writing to the VG.
The other VG media may be considered invalid media types for the Scalar LTFS partition type. For example, a Scalar LTFS partition with only LTO-7 drives cannot write to LTO-5 media, or a Scalar LTFS partition with only LTO-8 drives cannot write to LTO-6 media and earlier generation media. The Scalar LTFS iBlade will consider these VG media as invalid media and look to add scratch media to continue writing to the VG.
No, WORM media is unsupported by the Scalar LTFS iBlade. The Scalar LTFS iBlade uses LTO drives, which cannot partition WORM media. Dual-partition media is required for Scalar LTFS format compliance.
Yes, the Scalar LTFS iBlade supports Library Managed Encryption. See Scalar i3 Encryption for more details.
Issue | Resolution |
---|---|
Deleted files are deleted from the tape index, but the files remain on the Scalar LTFS iBlade tape and continue to use up tape space. | Replicate a Volume Group on the VG to create a replica VG with reclaimed tape space. |
The user should avoid using the following special characters for VG, directory, and file names:
/ : " * ? > < | \
Issue | Resolution |
---|---|
Your VG may have become unavailable due to a Replication or Merge job where your VG was the destination VG. | Specify the source VG from the failed replication or merge job as the target of Repair a Volume Group. |
A repair job was not immediately administered after a failed replication or merge job. The unavailable VG may have lost the repair states due to a sequester job. | Export the Scalar LTFS media by hand and then reinsert into a different slot. This will put the media back in the discovered pool where Attach Media will reunite the media with its VG. |
Issue | Resolution |
---|---|
No, only Prepare a Volume Group for Export can be canceled and your VG made available for NAS filesystem requests or VG jobs. | Run Repair a Volume Group on the VG to return it to available. |