The QoS Bandwidth Management feature allows administrators to configure I/O bandwidth on a data stripe group/client basis within a particular file system.
- A data stripe group is configured with a specified bandwidth capacity.
- That capacity is then divided among connected clients, based on the configuration.
- Each stripe group may be configured and managed independently.
- Clients are assigned bandwidth based on the configuration of classes and bandwidth.
- Three classes are defined, with different behaviors.
For example, consider a file system with a single data stripe group and five clients using that file system. If the bandwidth capacity of the stripe group is 1000 MB/s, configuring the default client class as Fair Share would result in each client using 200 MB/s if all 5 clients were actively doing I/O and consuming at least 200 MB/s. If only 2 clients were actively doing I/O, they could each be allocated 500 MB/s.
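The equal-split arithmetic in this example can be sketched as follows. This is an illustrative model only, not the actual FSM allocator; the function name and the simplification that active clients split the capacity evenly are assumptions:

```python
def fair_share(capacity_mb, active_clients):
    """Split a stripe group's capacity evenly among active clients.

    Illustrative only: the real QBM allocator also honors each
    client's class and configured minimum/maximum bandwidth.
    """
    if active_clients == 0:
        return 0
    return capacity_mb / active_clients

# 1000 MB/s stripe group, 5 active clients -> 200 MB/s each
print(fair_share(1000, 5))
# Only 2 clients actively doing I/O -> 500 MB/s each
print(fair_share(1000, 2))
```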
This section discusses the terminology and concepts of QoS Bandwidth Management. Quality of Service in this context applies only to bandwidth management. QoS Bandwidth Management is hereafter referred to as QBM.
QBM is used to manage bandwidth on data stripe groups. Any stripe group containing metadata is not eligible for management, which includes mixed metadata/data stripe groups. Thus a file system must have a minimum of two stripe groups to use this feature where one stripe group contains metadata and the other contains user data.
QBM is used to limit or throttle client bandwidth usage. But it is not a guarantee of bandwidth availability, as there is no way to accelerate I/O. However, configuration of this feature will allow particular clients to obtain specified levels of bandwidth regardless of competing client I/O.
The assignment of bandwidth to a client is dynamic, and is based on both the QBM configuration and the current I/O usage of clients. The goal is to use the bandwidth without compromising configured client bandwidth.
QBM is used to limit clients' bandwidth usage based on configuration. The total bandwidth of a stripe group is specified in a configuration file. That total is then allocated to clients using information in the configuration file. If a file system consists of a single data stripe group, allocating bandwidth to stripe groups is synonymous with allocating bandwidth to the file system.
The configuration file consists of three sections: general, stripe groups, and clients. The general section is required, while the stripe group and client sections are optional.
Consider a file system containing a metadata stripe group and a single data stripe group. The general section can be used to specify the total bandwidth capacity, the class used for clients, and default settings for clients.
Use the qbmanage command, along with the following command options, to create and modify the configuration file:
- To start a configuration, use the “new” option.
- To modify the general section, use the “modify” option.
- To add and remove stripe group entries, use the “addsg” and “rmsg” options.
- To add and remove client entries, use the “addclient” and “rmclient” options. Multiple client entries can be specified if they are associated with different stripe groups.
Three classes are defined with different behaviors.
The three allocation classes are as follows. See Configure QBM for information on how to configure stripe groups with these classes.
- First Come: This is the highest priority of allocation. Clients configured for this class of service will either get all the bandwidth they are assigned, or will be denied allocations in the First Come class. See First Come (FC) for additional details.
- Fair Share: This is the second highest priority of allocation. Clients configured for this class of service will either get their configured bandwidth or will share the bandwidth that has not been allocated to the First Come class of clients, in proportion to their configured amounts. You might put clients that are involved in production work in this class. Clients in this class are dynamically changing and need to share a limited resource. See Fair Share (FS) for additional details.
- Low Share: Clients configured for this class of service get their configured bandwidth, or share the bandwidth not allocated to the higher priority clients, in proportion to their configured amounts. You might put clients that mostly perform background work in this class, to do that work as resources permit. See Low Share (LS) for additional details.
These classes are the configuration building blocks. In addition to a class of service, each client can be configured with a minimum bandwidth allocation. The combination of class and the amount of bandwidth requested in each class determines the bandwidth each client is allocated.
This class has priority over all other classes. A client in this class that is granted bandwidth is guaranteed to retain its minimum bandwidth.
The First Come class is an all or nothing class. That is, either all of the client's configured bandwidth is granted, or the client is rejected from the class.
- If a new client mounts the file system, and this new client is configured as a First Come class client with more configured bandwidth than is currently available in the First Come class, that client's First Come bandwidth allocation request is rejected.
- The new client is then assigned to the next lowest priority class, the Fair Share class.
- The total First Come bandwidth is the configured capacity of the configured stripe group, minus any bandwidth reserved exclusively for other classes of service.
- Each client accepted to the First Come class reduces the available bandwidth for that class by its configured amount.
If the First Come class is over-subscribed and some clients are being rejected from the class, consider:
- Changing the configuration file and running qbmanage --reread. Running qbmanage --reread causes QBM to re-evaluate all client allocations according to the current QBM configuration file.
- Reducing the number of clients configured as First Come clients that could mount the file system at the same time.
Consider the case where you need to keep a First Come class client that is running a movie playback jitter free, and a large amount of bandwidth is allocated to that client. If this leaves QBM with no more bandwidth available, a subsequent First Come class client's request will be rejected, and the second client's bandwidth will be fulfilled from the bandwidth available to the Fair Share class. This guarantees that when a First Come client performs properly, it is not subject to future oversubscription problems when subsequent clients are activated. The premise is that it is better to keep some number of client applications running at adequate levels than it is to have all applications running, but in a state where none are able to run sustainably at adequate levels.
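The all-or-nothing admission behavior described above can be sketched as follows. This is an illustrative model, not the actual FSM logic; the function name and the mount-order processing are assumptions:

```python
def admit_first_come(capacity_mb, reserved_other_mb, requests_mb):
    """All-or-nothing First Come admission (illustrative sketch).

    capacity_mb: configured capacity of the stripe group
    reserved_other_mb: bandwidth reserved exclusively for other classes
    requests_mb: configured FC bandwidth of each client, in mount order

    Returns (accepted, rejected) lists of request indices. A rejected
    client falls back to the next lowest priority class, Fair Share.
    """
    available = capacity_mb - reserved_other_mb
    accepted, rejected = [], []
    for i, req in enumerate(requests_mb):
        if req <= available:
            accepted.append(i)
            available -= req    # each accepted client shrinks the pool
        else:
            rejected.append(i)  # all or nothing: no partial grants
    return accepted, rejected

# 1000 MB capacity, 100 MB reserved for other classes; requests of
# 600, 200, and 200 MB -> the third client is rejected from the class
print(admit_first_come(1000, 100, [600, 200, 200]))
```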
This class is second in priority for obtaining bandwidth. QBM shares allocation across clients in proportion to their configuration. For example, with a capacity of 900 MB/s and 4 clients configured with a minimum allocation of 200 and a maximum of 900, each client receives at least 200 MB/s. The 100 MB/s left can be shared by those 4 clients. If only two clients are active, they could both use 450 MB/s. Over-subscription will cause all clients to run with less than the preferred minimum bandwidth, where each client expects to have bandwidth equivalent to other clients with identical configuration. If 9 clients are active and are configured identically, each receives 100 MB/s.
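The numbers in this example can be reproduced with a small sketch. It covers only identically configured clients; the function name and the equal-split simplification are assumptions, since the real allocator shares in proportion to each client's configuration:

```python
def fair_share_alloc(capacity_mb, active_clients, max_bw_mb):
    """Equal-share allocation for identically configured Fair Share
    clients, capped at the configured per-client maximum
    (illustrative sketch only)."""
    if active_clients == 0:
        return 0
    return min(max_bw_mb, capacity_mb / active_clients)

# 900 MB/s capacity, clients configured with min 200 / max 900:
print(fair_share_alloc(900, 4, 900))  # 4 active -> 225 MB/s each
print(fair_share_alloc(900, 2, 900))  # 2 active -> 450 MB/s each
print(fair_share_alloc(900, 9, 900))  # 9 active -> 100 MB/s each,
                                      # below the 200 MB/s minimum
```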
Consider an office where a set of Fair Share clients run applications that need a minimum bandwidth to operate effectively, while other clients perform other activity, such as file movement. You could configure the clients running these applications as Fair Share and the other clients as Low Share. This lets the applications run at an adequate level even when a copy operation on a Low Share client is using considerable bandwidth.
This class is third in priority for obtaining bandwidth. A desired minimum and maximum can be configured to inform QBM of expected activity. Sharing among clients is not guaranteed. This class has no explicit requirements.
The configuration uses JSON formatting, with a configuration file for each file system. The name of the configuration file is <fsname>_qbm.conf. You can use the qbmanage command to create the configuration for a file system by executing a sequence of commands. You can use these commands to create and modify the general section of the file, add a stripe group, remove a stripe group, add a client, and remove a client.
The general configuration includes the on/off status of QBM at file system start, along with whether all stripe groups are considered managed by default.
Among the configurations you can specify are:
- A default value for stripe group capacity.
- A default class and minimum/maximum bandwidth for clients.
- Stripe groups by their name, with a total bandwidth capacity.
- Clients by their IP address, with minimum and maximum allocations as well as the class.
A default value may be overridden by explicit stripe group or client configuration entries.
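To make the three sections concrete, the snippet below builds a hypothetical configuration and prints it as JSON. The key names and layout are assumptions for illustration only; the actual <fsname>_qbm.conf schema is produced and maintained by qbmanage:

```python
import json

# Hypothetical illustration of the general, stripe group, and client
# sections described above. These key names are assumptions, not the
# real <fsname>_qbm.conf schema, which qbmanage generates.
config = {
    "general": {
        "status": True,         # QBM on at file system start
        "allsg": True,          # manage all stripe groups by default
        "capacity": "1000MB",   # default stripe group capacity
        "class": "fair_share",  # default client class
    },
    "stripe_groups": [
        {"sgname": "sg1", "sgcapacity": "500M"},
    ],
    "clients": [
        {"clientname": "10.20.72.128", "minbw": "100MB",
         "maxbw": "400MB", "class": "first_come"},
    ],
}
print(json.dumps(config, indent=2))
```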
There are three classes that affect the behavior of the FSM when allocating bandwidth.
You can use the qbmanage command to create and modify configurations. See the StorNext MAN Pages Reference Guide for complete details.
The following example creates a QBM configuration with one high priority client:
qbmanage --new --fsname snfs1 --status=true --allsg=yes --capacity=1000MB
qbmanage --addclient --fsname snfs1 --clientname=10.20.72.128 --minbw=100MB --maxbw=400MB --class=first_come
The example above creates a configuration with all data stripe groups having a capacity of 1000 MB. The client 10.20.72.128 is configured to have top priority in the first_come class with a minimum of 100 MB and a maximum of 400 MB.
The following example creates a QBM configuration with multiple stripe groups and clients as fair_share:
qbmanage --create --fsname snfs1 --class fair_share
qbmanage --addsg --sgname sg1 --sgcapacity 500M --reserved_fs 400M
qbmanage --addsg --sgname sg2 --sgcapacity 1000M
qbmanage --addsg --sgname sg3 --sgcapacity 800M
qbmanage --addclient --clientname 10.65.178.185 --minbw 50M --maxbw 500M
qbmanage --addclient --clientname 10.65.177.184 --sgname sg2 --minbw 100M --maxbw 200M
qbmanage --addclient --clientname 10.65.178.189 --minbw 25M --maxbw 500M
qbmanage --addclient --clientname 10.65.178.189 --sgname sg3 --minbw 200M --maxbw 800M
Running a bandwidth capacity test can help you make QBM decisions. There are two scripts provided for running these tests. The qbm-ladder test runs a set of tests using various buffer sizes, numbers of I/O streams, clients, and I/O queue depths. The I/O queue depth is the number of I/O requests outstanding at any given time. Specifying a queue depth of 1 means that the I/O is single threaded. The qbm-ladder test uses the qbm-mio test to run I/O tests on the specified clients.
The qbm-ladder test submits sets of qbm-mio tests simultaneously on different clients. Both tests require the file system name and client name. Multiple clients are specified as a comma separated list. For example, the command below uses qbm-mio to run a single test using the default values specified in the StorNext MAN Pages Reference Guide.
The command below uses qbm-ladder to run sets of tests.
The number of streams will start at 1 and progress by 1 up to 16. The queue depth will start at 1 and progress by 1 to 8. The buffer size starts at 64K and progresses to 32M. The tests start with one client, then add one client at a time until all clients have been tested. The example above would run the set of tests using clients cl1 and cl2. You can specify the --short option to run a shorter set of tests than the defaults, as described on the man page.
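The size of the resulting test matrix can be estimated with the sketch below. The stream and queue-depth ranges come from the description above; the assumption that the buffer size doubles at each step from 64K to 32M is mine, not stated in the text:

```python
from itertools import product

# Illustrative enumeration of the qbm-ladder parameter grid.
# Doubling steps for buffer size are an assumption.
streams = range(1, 17)                          # 1 .. 16 I/O streams
depths = range(1, 9)                            # queue depth 1 .. 8
buffers = [64 * 1024 << i for i in range(10)]   # 64K .. 32M, doubling

combos = list(product(streams, depths, buffers))
print(len(combos))              # combinations per client set
print(buffers[0], buffers[-1])  # smallest and largest buffer in bytes
```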
The qbm-ladder test saves results in an sqlite database, which is located in the directory /usr/cvfs/data/<fsname>/qbm/qbm-db. Each test execution is assigned an identification number, labeled Test id, in which the time the group of tests started is used to identify a set of test results.
After you run the qbm-ladder tests, run the qbm-analyze script to analyze the qbm-ladder test results and determine the maximum bandwidth for a stripe group in a file system using multiple clients, multiple streams, variable queue depth, and variable buffer sizes.
You can use the qbm-analyze command option --action list to display the test numbers found in the database, with their associated parameters and the time when qbm-ladder was started. For example:
qbm-analyze --action list --fsname|-f FsName | --db | -f name [--testid test_number] [--verbose]
If the database is removed, the next qbm-ladder command creates a new qbm-db for that file system.
You can use the qbm-analyze command option --testid to display the unique test numbers available for analysis. For example:
qbm-analyze --action testid --fsname|-f FsName | --db | -f name [--verbose]
Use the eval option of --action to show the maximum bandwidth and the number of tests that achieved the maximum bandwidth. For example:
Test id 20925
Largest bandwidth is 117MB/s which occurred in 901 out of 1210 tests.