Cluster-Wide Central Control
The purpose of this feature is to provide cluster-wide central control.
Note: The central control file is supported on the Linux platform only.
A central control file called nss_cctl.xml provides a way to restrict the behavior of SNFS cluster nodes (fsm, file system client, cvadmin client) from a central place: an NSS server.
This feature currently supports the following controls that allow you to specify:
- Whether a client is allowed to mount as a proxy client.
- Whether a client is allowed to mount as read/write or read-only.
- Whether a user (especially a local administrator on Windows clients) is allowed to take ownership of a file or directory on a StorNext file system.
- Whether cvadmin running on a certain client is allowed to have super admin privilege to run destructive commands such as starting/stopping the file system, refreshing disks, changing quota settings, and so on.
- Whether cvadmin running on a certain client is allowed to connect to other fsms via the -H option.
- Whether binary executable files on the StorNext file system are allowed to be executed.
- Whether the setuid bit of a file is allowed to take effect.
The control file is in xml format and has a hierarchical structure. The top level element, snfsControl, contains control elements with the securityControl label for certain file systems. If you have different controls for different file systems, each file system should have its own control definition. A special virtual file system, #SNFS_ALL#, is used as the default control for file systems not defined in this control file. It is also used to define the cvadmin-related control on clients.
Note: You cannot have a file system named #SNFS_ALL#.
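The overall structure of the control file can be sketched as follows. This is a minimal skeleton only; the file system name "snfs1" is a placeholder:

<?xml version="1.0" encoding="UTF-8"?>
<snfsControl xmlns="https://www.quantum.com/snfs/cctl/v1.0">
  <!-- Controls for a specific file system (placeholder name "snfs1") -->
  <securityControl fileSystem="snfs1">
    <!-- controlEntry items for this file system go here -->
  </securityControl>
  <!-- Default controls for any file system not listed above -->
  <securityControl fileSystem="#SNFS_ALL#">
    <!-- controlEntry items for the default case go here -->
  </securityControl>
</snfsControl>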
Each file system-related element (indicated by the label securityControl) has a list of controlEntry items. Each controlEntry item defines the client and the controls. The client type can be either host or netgrp. A host can be an IP address or a host name. (Both IPv4 and IPv6 are supported.) Netgrp specifies a group of consecutive IP addresses and has a network IP address (either IPv4 or IPv6) and network mask bits. IP addresses may overlap between an individual host and a netgrp, but the individual host must be defined before the netgrp. If a client node has more than one IP address, then define the controls for each IP address.
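As a sketch, a host entry followed by a netgrp entry covering the same subnet might look like this (addresses and control values are illustrative only). Note that the individual host is defined before the netgrp that overlaps it:

<controlEntry>
  <client type="host">
    <!-- An individual host, by IP address or host name -->
    <hostName value="192.0.2.10"/>
  </client>
  <controls>
    <mountReadOnly value="false"/>
  </controls>
</controlEntry>
<controlEntry>
  <client type="netgrp">
    <!-- A group of consecutive addresses: network plus mask bits -->
    <network value="192.0.2.0"/>
    <maskbits value="24"/>
  </client>
  <controls>
    <mountReadOnly value="true"/>
  </controls>
</controlEntry>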
The following controls are currently supported:
- mountReadOnly: Controls whether a client should mount as read-only. The default is read/write.
- mountDlanClient: Controls whether a client can mount as a proxy client. The default is not allowed.
- takeOwnership: Controls whether users on a Windows client are allowed to take ownership of a file or directory in a StorNext file system. The default is not allowed.
- snfsAdmin: Controls whether cvadmin running on a host is allowed to have super admin privilege to run privileged commands such as starting/stopping a file system. The default is not allowed.
- snfsAdminConnect: Controls whether cvadmin running on a client is allowed to connect to other fsms via the -H option. The default is not allowed.
- exec: Controls whether binary executable files on the file system are allowed to be executed. The default value is “true” (that is, execution is allowed).
- suid: Controls whether the setuid bit is allowed to take effect. The default value is “true”.
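For example, the cvadmin-related controls (snfsAdmin and snfsAdminConnect) are defined under the #SNFS_ALL# virtual file system; the host name "admin-node" below is a placeholder:

<securityControl fileSystem="#SNFS_ALL#">
  <controlEntry>
    <client type="host">
      <hostName value="admin-node"/>
    </client>
    <controls>
      <snfsAdmin value="true"/>
      <snfsAdminConnect value="true"/>
    </controls>
  </controlEntry>
</securityControl>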
If no match is found for a given client's IP address, the client has no privileges. If a file system has been defined but the client is not defined in that file system’s control section (securityControl), the client has no access privileges to the specified file system.
The denyRetrieves control prevents a client from triggering on-demand file retrieves. Set this control to true to prevent the client from triggering retrieves. This control functionality runs on the MDC, so even older clients will have the control enforced after you have configured the nss_cctl.xml file.
The client controls apply to SAN and LAN clients. SMB and NFS client access is routed through those clients' NAS server and cannot be individually controlled. This is particularly important when configuring denyRetrieves for Offline File Manager. Specify client addresses as follows:
- SAN and DLC clients: The addresses of the clients themselves.
- NAS clients via a NAS server on an HA pair: The VIP of the HA pair.
- NAS clients via a NAS server on an MDC: The address of the MDC.
- NAS clients via a NAS server on a gateway: The address of the gateway.
Note: If you set denyRetrieves to true for a NAS server, this disables retrieves for that server and for all NAS clients connecting through that server.
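For example, to deny retrieves for all NAS clients served by a NAS server on an HA pair, you would set denyRetrieves for the VIP of that pair, roughly as follows (the address below is a placeholder):

<controlEntry>
  <client type="host">
    <!-- Placeholder: VIP of the HA pair hosting the NAS server -->
    <hostName value="198.51.100.50"/>
  </client>
  <controls>
    <denyRetrieves value="true"/>
  </controls>
</controlEntry>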
The following sample nss_cctl.xml control file provides a feature overview and describes how to configure the central control file. The values entered in this sample file are for example purposes only and should not necessarily be entered in your own control files.
<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2016. Quantum Corporation. All Rights Reserved. -->
<!-- StorNext is either a trademark or registered trademark of -->
<!-- Quantum Corporation in the US and/or other countries. -->

<!-- Cluster-Wide Central Control File -->

<!-- The nss_cctl.xml file provides a way to restrict the behavior of -->
<!-- SNFS cluster nodes (fsm, file system client, cvadmin client) from -->
<!-- a central place, i.e. on an nss server. As of SNFS 3.5, we support -->
<!-- the following controls: -->
<!-- 1. Whether a client is allowed to mount as a proxy client. -->
<!-- 2. Whether a client is allowed to mount as read/write or read-only.-->
<!-- 3. Whether a user, especially a local admin on Windows, is allowed -->
<!--    to take ownership of a file or directory on a StorNext file -->
<!--    system. -->
<!-- 4. Whether cvadmin running on a certain client is allowed to have -->
<!--    super admin privilege to run destructive commands such as -->
<!--    start/stop file system, refresh disks, change quota settings, -->
<!--    etc. -->
<!-- 5. Whether cvadmin running on a certain client is allowed to -->
<!--    connect to other fsms via the "-H" option. -->
<!-- 6. Whether an executable file on the file system can be executed. -->
<!-- 7. Whether to allow the set-user-identifier bit to take effect. -->

<!-- The control file is in xml format and has a hierarchical structure.-->
<!-- The top level element is "snfsControl"; it contains the control -->
<!-- element "securityControl" for certain file systems. If you have -->
<!-- different controls for different file systems, then each file -->
<!-- system should have its own control definition. A special virtual -->
<!-- file system, "#SNFS_ALL#", is used as the default control for file -->
<!-- systems not defined in this control file. It is also the required -->
<!-- file system name when configuring the snfsAdmin and -->
<!-- snfsAdminConnect options. -->
<!-- Note: you cannot have a real file system named "#SNFS_ALL#". -->

<!-- Each file system related control element (securityControl) has a -->
<!-- list of "controlEntry"; each "controlEntry" defines the client and -->
<!-- the controls. The simplest and preferred way of defining a client -->
<!-- is by specifying its IP address (or hostname) by itself, or -->
<!-- followed by a netmask length separated by a slash (e.g. -->
<!-- "192.0.2.0/24") if one would like to specify a subnet. Both IPv4 -->
<!-- and IPv6 are supported. For backwards compatibility, we support -->
<!-- two other ways of defining a client wherein we explicitly specify -->
<!-- its type: "host" or "netgrp". A "host" can be an IP address or -->
<!-- host name. "netgrp" specifies a group of consecutive IP addresses. -->
<!-- It has a network IP address (either IPv4 or IPv6) and netmask -->
<!-- length. In the case of overlap between client IP addresses, the -->
<!-- controls which correspond to the IP address with the longest -->
<!-- netmask length will take precedence. -->

<!-- Currently there are eight controls supported: -->
<!-- 1. mountReadOnly: controls whether a client should mount as -->
<!--    read-only. The default is read/write. -->
<!-- 2. mountDlanClient: controls whether a client can mount as a proxy -->
<!--    client. The default is "mount not allowed". -->
<!-- 3. takeOwnership: controls whether users on a Windows client are -->
<!--    allowed to take ownership of a file or directory of a StorNext -->
<!--    file system. The default is "take ownership not allowed". -->
<!-- 4. snfsAdmin: whether cvadmin running on a host is allowed to have -->
<!--    super admin privilege to run privileged commands such as -->
<!--    start/stop fs. The default is that super admin privilege is not -->
<!--    honored. -->
<!-- 5. snfsAdminConnect: whether cvadmin running on a client is allowed-->
<!--    to connect to other fsms via the "-H" option. The default is -->
<!--    that "-H" is not allowed. -->
<!-- 6. exec: whether binary files on the file system are allowed to be -->
<!--    executed. -->
<!-- 7. suid: whether the set-user-identifier bit is allowed to take -->
<!--    effect. -->
<!-- 8. denyRetrieves: whether the client is allowed to trigger dmapi -->
<!--    read events and retrieve offline files by reading them. The -->
<!--    default is false; set to true to deny retrieves. The client -->
<!--    will get permission denied errors when reading a truncated -->
<!--    file. -->

<!-- If no match is found for a given client's IP address, then the -->
<!-- client has no privilege to access a SNFS cluster. If a file system -->
<!-- has been defined but the client is not defined in that file -->
<!-- system's control (securityControl), then the client has no access -->
<!-- privilege to the specified file system. -->

<!-- The element "nonVotingCluster" can be included (on the same level -->
<!-- as the "securityControl" element) to set the default client -->
<!-- behavior (voting or non-voting) within the cluster during the -->
<!-- election that will choose the host on which a specific file system -->
<!-- manager will run. The cluster to which this control is applied -->
<!-- will be the one specified in the filename. If no cluster is -->
<!-- specified in the filename, please refer to the beginning of the -->
<!-- DESCRIPTION section of the nss_cctl man page for more information -->
<!-- on which cluster this control will take effect. -->
<!-- NOTE: There always need to be voting clients within the cluster so -->
<!-- that a decision can be derived from the election. Therefore, when -->
<!-- the "nonVotingCluster" element is set to true, it should be used -->
<!-- in conjunction with the "votingClients" element (described in the -->
<!-- following paragraphs), which allows one to specify an explicit -->
<!-- list of voting clients. -->

<!-- It is also possible to specify a group of non-voting clients -->
<!-- within a cluster by creating a list of client addresses with the -->
<!-- element "nonVotingClients" (also used on the same level as that of -->
<!-- the "securityControl" element). The format of the client addresses -->
<!-- within the "nonVotingClients" element is the same as that used in -->
<!-- defining a client in the simplest and preferred way within a -->
<!-- "controlEntry", and there must be at least one address in the -->
<!-- list. To specify a group of voting clients, the same format is -->
<!-- used but replacing "nonVotingClients" with "votingClients". -->
<!-- All three elements (i.e. "nonVotingCluster", "nonVotingClients" -->
<!-- and "votingClients") may be used in the control file at the same -->
<!-- time. The "votingClients" and "nonVotingClients" elements will -->
<!-- take precedence over the "nonVotingCluster" element. When a client -->
<!-- IP address matches elements in both "nonVotingClients" and -->
<!-- "votingClients", the element with the longest netmask will take -->
<!-- precedence; if there is a tie, the "votingClients" element will be -->
<!-- used. -->

<!-- Currently only the Linux platform is supported to be a nss server -->
<!-- capable of parsing this xml file. -->

<!-- The following is an example nss_cctl.xml. It defines the control -->
<!-- of file system "snfs", and also the special virtual file system -->
<!-- "#SNFS_ALL#". -->

<snfsControl xmlns="https://www.quantum.com/snfs/cctl/v1.0">
  <nonVotingCluster value="true"/>
  <votingClients>
    <address value="192.0.2.108/24"/>
    <address value="198.51.100.215"/>
  </votingClients>
  <securityControl fileSystem="snfs">
    <controlEntry>
      <client>
        <address value="192.0.2.108"/>
        <address value="198.51.100.215"/>
      </client>
      <controls>
        <mountReadOnly value="false"/>
        <mountDlanClient value="false"/>
        <takeOwnership value="false"/>
        <exec value="true"/>
        <suid value="false"/>
      </controls>
    </controlEntry>
    <controlEntry>
      <client type="host">
        <hostName value="192.0.2.132"/>
      </client>
      <controls>
        <mountReadOnly value="true"/>
        <mountDlanClient value="true"/>
        <takeOwnership value="false"/>
        <denyRetrieves value="true"/>
        <exec value="true"/>
        <suid value="false"/>
      </controls>
    </controlEntry>
    <controlEntry>
      <client type="netgrp">
        <network value="192.0.2.0"/>
        <maskbits value="24"/>
      </client>
      <controls>
        <takeOwnership value="true"/>
        <mountReadOnly value="true"/>
        <denyRetrieves value="true"/>
      </controls>
    </controlEntry>
  </securityControl>
  <securityControl fileSystem="#SNFS_ALL#">
    <controlEntry>
      <client type="host">
        <hostName value="linux_ludev"/>
      </client>
      <controls>
        <snfsAdmin value="true"/>
        <snfsAdminConnect value="true"/>
      </controls>
    </controlEntry>
  </securityControl>
</snfsControl>
You can also validate a central control file for the portmapper of a StorNext file system with the snvalidatecctl program. See the snvalidatecctl(8) command in the StorNext Man Pages Reference Guide for details.