Troubleshooting StorNext File System
This section contains troubleshooting suggestions for issues which pertain to StorNext File System.

Sometimes the local FSMPM does not automatically see newly labeled LUNs; when this happens, a message is logged to the system log.
Answer: To resolve this issue, force a rescan of the disks by executing the command cvadmin -e 'disks refresh' on each system that is unable to see the LUN(s).
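For example, on a Linux system the rescan might look like the following (run as root on each host that cannot see the new LUN):
cvadmin -e 'disks refresh'
cvadmin -e 'disks'
The second command lists the StorNext disk volumes visible to the node so you can confirm that the newly labeled LUN(s) now appear.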

A client cannot mount the file system, and the following error appears in 'install path'\debug\mount..out:
mount.cvfs: Can't mount filesystem 'FILESYSTEMNAME'.
Check system log for details. Invalid argument
Answer: This condition occurs when the system cannot resolve the IP address or hostname defined in the fsnameservers file.
Use the following procedure to troubleshoot this problem.
- Find the failure reported in the file install_path/debug/nssdbg.out:
ERR NSS: Establish Coordinator failed GetHostByName of '[HOST01]' (No such file or directory)
INFO NSS: Primary Name Server is 'HOST01' (unknown IP)
ERR NSS: Establish Coordinator failed GetHostByName of '[HOST02]' (No such file or directory)
INFO NSS: Secondary #1 Name Server is '[HOST02]' (unknown IP)
- If the failure is similar to the events reported above, check the fsnameservers file on all clients and verify that it matches what the MDCs display (see the consistency check after this procedure). The fsnameservers file is located in the following directory, depending upon the product and operating system:
- For Windows StorNext File System: C:\Program Files\StorNext\config
- For Linux or UNIX: /usr/cvfs/config
- Correct the fsnameservers file so that it lists the coordinator IP addresses, for example:
10.65.160.42
10.65.160.78
- If the same error recurs, contact Quantum Technical Support.
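As a quick consistency check, you can compare the client's copy of the file with the MDC's copy (the hostname mdc1 is used here purely for illustration):
cat /usr/cvfs/config/fsnameservers
ssh mdc1 cat /usr/cvfs/config/fsnameservers
Both files should list the same coordinator addresses.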

Answer: One of the common issues in working with StorNext clients is the inability to connect to the StorNext metadata controllers (MDCs). You can usually confirm this problem by running cvadmin on UNIX-based and Windows-based clients and checking whether the file systems from the StorNext MDC(s) are visible. If file systems are not visible at this level, the client is not connected to the MDC(s).
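For example, on a UNIX-based client you can list the active file systems with cvadmin (the path assumes a default installation):
/usr/cvfs/bin/cvadmin -e 'select'
If the file systems served by the MDC(s) do not appear in the output, the client is not connected.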
As described in the StorNext documentation, the MDC(s) and all clients should be on a dedicated and isolated metadata network. The dedicated metadata network should be open to all ports for UDP and TCP traffic. In addition, the metadata controller(s) and network switches should not have firewalling enabled for the dedicated metadata network.
If the client is still not able to connect to the MDCs through the dedicated metadata network, check for the following:
- Is the hostname or IP address of the correct MDC(s) listed in the fsnameservers file (found in /usr/cvfs/config for UNIX-based clients and C:\Program Files\StorNext\config for Windows-based clients)?
- If the hostname (rather than the IP address) is listed in fsnameservers, can the client resolve the hostname (using nslookup at the UNIX prompt or at the command prompt on a Windows-based client)?
If the client cannot resolve the hostname, do one of the following (see the example below):
- Correct the DNS setup or the hosts file setup.
- Enter the IP address of the MDC(s) in the fsnameservers file instead of the hostname.
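For example, a name-resolution check might look like the following (HOST01 is a placeholder hostname, as in the log excerpt earlier in this section):
nslookup HOST01
If the lookup fails, correct DNS or the hosts file, or list the MDC's IP address in fsnameservers instead.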
- Can the client ping the metadata controller? If the client cannot ping the metadata controller, resolve the networking issue to make sure the client is on the same dedicated metadata network and can ping the MDC(s).
- If the client can ping the MDC(s), can the client telnet, ftp, or ssh from the client to the MDC(s)? If not, it is likely that some manner of firewalling is set up between the client and the MDC(s). If possible, disable this firewalling (see the connectivity checks below).
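The connectivity checks might look like the following on a UNIX-based client (the 10.65.160.42 address is illustrative only):
ping 10.65.160.42
ssh 10.65.160.42
If ping fails, resolve the metadata network configuration first; if ping succeeds but ssh (or telnet/ftp) does not, suspect firewalling between the client and the MDC(s).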
- If firewalling is set up on the dedicated metadata network and it is not possible to disable it due to some internal policy (the metadata network should be a dedicated and isolated network), the client can specify a range of ports to be used for metadata traffic.
By creating an fsports file (located in /usr/cvfs/config for UNIX-based clients and C:\Program Files\StorNext\config for Windows-based clients), you can specify a range of ports, both UDP and TCP, that are allowed to pass through the firewall between the client and the MDC(s). If other clients are having problems connecting to the MDC(s), they must also use their own copy of the fsports file. The following is an example of the fsports file:
## File System Port Restriction File
#
# The fsports file provides a way to constrain the TCP
# and UDP ports used by the SNFS server processes.
# This is usually only necessary when the SNFS
# control network configuration must pass through
# a firewall. Use of the fsports file permits
# firewall 'pin-holing' for improved security.
# If no fsports file is used, then port assignment
# is operating system dependent.
#
# If an fsports file exists in the SNFS 'config' directory it
# restricts the TCP and UDP port bindings to the user specified
# window. The format of the fsports file consists of two lines.
# Comments starting with pound-sign (#) in column one
# are skipped.
#
# MinPort VALUE
# MaxPort VALUE
#
# where VALUE is a number. The MinPort to MaxPort values define
# a range of ports that the SNFS server processes can use.
#
#
# Example:
#
# Restrict SNFS server processes to port range 22,000 to 22,100:
#
# MinPort 22000
# MaxPort 22100
#
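If you cannot remove the firewall, the pin-hole on the firewall must match the range defined in fsports. As a rough sketch only (assuming a Linux host using iptables and the example range of 22,000 to 22,100 shown above; adapt the rules to your own firewall product):
iptables -A INPUT -p tcp --dport 22000:22100 -j ACCEPT
iptables -A INPUT -p udp --dport 22000:22100 -j ACCEPT
Equivalent rules are needed wherever firewalling is enforced between the client and the MDC(s).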

Answer: StorNext reserves the first 1 MB of the disk for the label.
- For EFI disk labels, the critical area of the label varies with the disk sector size:
- For 512-byte sectors it is the first 18,432 bytes (36 sectors).
- EFI labels are used by StorNext 2.7 for LUNs larger than 2 TB.
If a StorNext disk label is ever inadvertently overwritten or otherwise damaged, the only method of recovery is to run the cvlabel
utility with the original parameters used when the disk was initially labeled. The nssdbg.out
log file for the system often proves useful in determining what label each disk device on the system had before the problem occurred.
Contact Quantum Technical Support for assistance recovering damaged disk labels.
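As a hedged illustration, listing the labels the system can currently see may help identify which device has lost its label (cvlabel must be run as root; the path assumes a default Linux installation):
/usr/cvfs/bin/cvlabel -l
Compare this output with the label names recorded in nssdbg.out before any relabeling is attempted.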

Umount hangs or fails for StorNext File Systems even though fuser displays nothing. What’s going on?
Answer: If a process opens a UNIX domain socket in a StorNext File System and does not close it, umount hangs or fails even though fuser does not show anyone using the file system. Use the "lsof -U" command to show the UNIX domain socket; the process can be killed even while it still has the socket open.
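A brief illustration of finding the offending process (the /stornext/snfs1 mount point is assumed for this example):
lsof -U | grep /stornext/snfs1
The second column of the lsof output is the PID of the process holding the socket open; terminate that process and retry the umount.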

Answer: You may receive the error:
File System FSS 'File System Name[0]': Invalid inode lookup: 0x2a5b9f markers 0x0/0x0 gen 0x0 nextiel 0x0
Deleting an old file system while an NFS client is still mounted leaves stale data on that client about inodes that no longer exist. The client is out of sync with the file system and requests inodes that no longer exist, which can leave StorNext users concerned that files have been lost and cannot be recovered. Because of this issue, the MDC generates alarming messages about metadata corruption.
Checking the "epoch" field of the NFS request and of the file system shows that these inodes are all zeros and thus invalid. Code can be changed in the NFS handles so that they include a unique identifier such as the "epoch" (microsecond creation time) of the file system.

Answer: Here are three possible scenarios, all of which assume that the file data is no longer on disk and exists only on tape:
- Scenario 1: If the managed directories are on the same file system and have the same policy class, then tape is not accessed.
- Scenario 2: If the managed directories are on different file systems and have the same policy class, the data is retrieved from tape so it can be moved to the new file system, and then gets stored again by the policy.
- Scenario 3: If the managed directories have different policy classes, then the data is retrieved, moved, and then gets stored to media associated with the new policy class.
A StorNext file system client might continuously report that it is restarting the file system, filling up the nssdbg.out file with messages like the following (excerpted from /usr/cvfs/debug/nssdbg.out):
:
[0327 14:40:59] 0x40305960 NOTICE PortMapper: RESTART FSS service 'stornext-fs1[0]' on host stornext-client.
[0327 14:40:59] 0x40305960 NOTICE PortMapper: Starting FSS service 'stornext-fs1[0]' on stornext-client.
[0327 14:40:59] 0x40305960 (debug) Portmapper: FSS 'stornext-fs1' (pid 8666) exited with status 2 (unknown)
[0327 14:40:59] 0x40305960 (debug) FSS 'stornext-fs1' LAUNCHED -> RELAUNCH, next event in 60s
[0327 14:41:59] 0x40305960 (debug) FSS 'stornext-fs1' RELAUNCH -> LAUNCHED, next event in 60s
[0327 14:41:59] 0x40305960 NOTICE PortMapper: RESTART FSS service 'stornext-fs1[0]' on host stornext-client.
[0327 14:41:59] 0x40305960 NOTICE PortMapper: Starting FSS service 'stornext-fs1[0]' on stornext-client.
[0327 14:41:59] 0x40305960 (debug) Portmapper: FSS 'stornext-fs1' (pid 8667) exited with status 2 (unknown)
[0327 14:41:59] 0x40305960 (debug) FSS 'stornext-fs1' LAUNCHED -> RELAUNCH, next event in 60s
[0327 14:42:59] 0x40305960 (debug) FSS 'stornext-fs1' RELAUNCH -> LAUNCHED, next event in 60s
:
This error occurs because the file /usr/cvfs/config/fsmlist was set up and configured on the StorNext client system. However, the fsmlist file belongs to the server components of StorNext and is set up on the MDC only. Verify that the file is present by running the following command on the StorNext client:
ls -l /usr/cvfs/config/fsmlist
On the StorNext client, only the client portion of the StorNext product suite is installed. Verify this by running the command:
/usr/cvfs/bin/cvversions
The following output appears:
qlha2:~ # cvversions
Server not installed.
File System Client:
Client Revision 4.2.0 Build 21233 Branch branches_4.2.0
Built for Linux 2.6.16.60-0.21-smp x86_64
Created on Thu Aug 4 04:11:01 MDT 2011
Built in /home/mlund/nightly/VM-0-SuSE100ES-26x86-64-SP2/sn/buildinfo
Host OS Version:
Linux 2.6.16.60-0.85.1-smp #1 SMP Thu Mar 17 11:45:06 UTC 2011 x86_64
To resolve this issue, delete /usr/cvfs/config/fsmlist and then restart the StorNext services. Before you restart the StorNext services, check the size of /usr/cvfs/debug/nssdbg.out. If the file has grown considerably large, delete or rename it and then restart StorNext. If the problem persists, contact Quantum Technical Support.
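A hedged sketch of the cleanup steps on the client follows; the service name cvfs is an assumption, so use whatever method your installation normally uses to restart StorNext:
rm /usr/cvfs/config/fsmlist
ls -lh /usr/cvfs/debug/nssdbg.out
mv /usr/cvfs/debug/nssdbg.out /usr/cvfs/debug/nssdbg.out.old
service cvfs restart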