View NAS Cluster Information
You can view information about a NAS cluster, including the IP addresses of all nodes, the path to the StorNext file system, node status, VIPs assigned to the cluster, and NFS failover status.
- Log in to the console command line from the master node. See Access the Appliance Controller Console.
- At the prompt, enter the nascluster show command.
Example Output from Master Node:
> nascluster show
NAS Cluster IP: 10.10.100.100/eth0, Master: Yes, SNFS Root: /stornext/snfs1, Joined: Yes
Cluster Hostname: cluster.example.com
DNS Enabled: Yes
DNS Zone Name: cluster.example.com
DNS Name server: ns.cluster.example.com (10.10.100.100)
Load balancing: Proxy Disabled
Master IP: 10.00.000.00
VIP: 10.01.001.00 (active, node:master)
VIP: 10.01.001.03 (active, node:3)
VIP: 10.01.001.02 (active, node:2)
VIP: 10.01.001.01 (active, node:1)
1: 10.65.188.89 (Joined, MDC)
2: 10.65.188.91 (Joined, MDC)
3: 10.65.188.96 (Joined, GW)
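In scripts, the numbered node lines at the end of this output can be parsed to flag nodes that are not joined. The following is a minimal sketch, assuming the N: ip (State, Role) line format shown above; the sample output is embedded with a here-document, whereas in practice you would pipe the real nascluster show output into the awk command:

```shell
# Extract node lines of the form "N: ip (State, Role)" and print any
# whose state is not "Joined". The here-document below stands in for:
#   nascluster show | awk ...
not_joined=$(awk '/^[0-9]+: /{s=$3; gsub(/[(,]/, "", s); if (s != "Joined") print $0}' <<'EOF'
1: 10.65.188.89 (Joined, MDC)
2: 10.65.188.91 (Joined, MDC)
3: 10.65.188.96 (Joined, GW)
EOF
)
if [ -z "$not_joined" ]; then
    echo "all nodes joined"
else
    echo "nodes not joined: $not_joined"
fi
```

With the sample output above, every node reports Joined, so the script prints "all nodes joined".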
The following describes the possible node states:
- Enabled: The node is ready to take client connections, or to become the master node in the event of a NAS failover.
- Ready: The node is ready to join the NAS cluster, but it is not actively ready to take client connections.
- Unknown: One of the following:
  - The node cannot be reached due to an issue with the node's Appliance Controller, network, or NAS cluster management processes.
  - The node has been removed from the cluster with the nascluster leave <ip_addr> command. It can still receive communications from other nodes in the NAS cluster, but it will not actively participate in the cluster. Nodes report this state when a system administrator has taken the node offline to perform maintenance.
- Disabled: The node is still a member of the cluster, but it cannot receive NAS connections, be used as a NAS failover target, or receive configuration updates. Nodes typically report this state when a system administrator has taken a node offline to perform maintenance, or when the node has gone offline for some other reason.
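For monitoring scripts, the state descriptions above can be condensed into a small helper. This is an illustrative sketch, not part of the Appliance Controller CLI: the node_state_summary function name is hypothetical, and it assumes the four states are reported as Enabled, Ready, Unknown, and Disabled.

```shell
# Hypothetical helper: summarize what a node state means for client I/O.
# The state names (Enabled, Ready, Unknown, Disabled) are an assumption.
node_state_summary() {
    case "$1" in
        Enabled)  echo "taking client connections; valid failover target" ;;
        Ready)    echo "ready to join; not taking client connections" ;;
        Unknown)  echo "unreachable, or removed with nascluster leave" ;;
        Disabled) echo "cluster member, but offline for maintenance" ;;
        *)        echo "unrecognized state: $1" ;;
    esac
}

node_state_summary Enabled   # prints: taking client connections; valid failover target
```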