It is always valuable to understand the file size mix of the target dataset as well as the application I/O characteristics. This includes the number of concurrent streams, proportion of read versus write streams, I/O size, sequential versus random, Network File System (NFS) or Common Internet File System (CIFS) access, and so on.
For example, if the dataset is dominated by small or large files, various settings can be optimized for the target size range.
Similarly, it might be beneficial to optimize for particular application I/O characteristics. For example, to optimize for a sequential 1MB I/O size, it would be beneficial to configure a stripe group with four 4+1 RAID 5 LUNs with a 256K stripe size, so that each 1MB I/O spans exactly one full RAID stripe.
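To see why this layout suits 1MB I/O, consider the full-stripe arithmetic (a quick sketch; the figures are taken from the example above):

    # Each 4+1 RAID 5 LUN has four data disks and a 256K per-disk stripe size.
    # Full stripe = 4 data disks x 256 KiB = 1024 KiB = 1 MiB, so each 1MB
    # sequential I/O fills exactly one stripe and avoids the RAID
    # read/modify/write penalty.
    echo "Full stripe: $((4 * 256)) KiB"   # prints: Full stripe: 1024 KiB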
However, optimizing for random I/O performance can incur a performance trade-off with sequential I/O.
Furthermore, NFS and CIFS access have special requirements to consider as described in the section, Direct Memory Access (DMA) I/O Transfer.
To achieve the highest possible large sequential I/O transfer throughput, SNFS provides DMA-based I/O. To utilize DMA I/O, the application must issue its reads and writes with sufficient size and alignment. This is called well-formed I/O. See the mount command settings auto_dma_read_length and auto_dma_write_length, described in The Metadata Controller System.
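For illustration, these settings can be passed as mount options. The following is a minimal sketch, assuming a file system named snfs1 mounted at /stornext/snfs1 (names and values are examples only; consult the mount_cvfs man page for the defaults and units on your release):

    # Example: treat reads/writes of 1 MiB or larger as DMA candidates.
    # File system name and mount point are hypothetical.
    mount -t cvfs -o auto_dma_read_length=1048576,auto_dma_write_length=1048576 \
        snfs1 /stornext/snfs1

A simple way to check whether large sequential I/O reaches the expected throughput is a direct-I/O test with dd (the target path is hypothetical; a 1 MiB aligned direct I/O is typically well-formed):

    dd if=/dev/zero of=/stornext/snfs1/ddtest bs=1M count=4096 oflag=direct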
Reads and writes that are not well-formed utilize the SNFS buffer cache. This also includes NFS- and CIFS-based traffic, because the NFS and CIFS daemons defeat well-formed I/Os issued by the application.
There are several configuration parameters that affect buffer cache performance. The most critical is the RAID cache configuration because buffered I/O is usually smaller than the RAID stripe size, and therefore incurs a read/modify/write penalty. It might also be possible to match the RAID stripe size to the buffer cache I/O size. However, it is typically most important to optimize the RAID cache configuration settings described earlier.
It is usually best to configure the RAID stripe size no greater than 256K for optimal small file buffer cache performance.
For more buffer cache configuration settings, see The Metadata Controller System.
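As a hedged illustration, some buffer cache behavior can also be adjusted with cvfs mount options such as buffercachecap and cachebufsize; treat the names, units, and values below as assumptions to verify against the mount_cvfs man page for your release:

    # Example only: cap the buffer cache (in MB) and set the cache buffer
    # size to match a buffered, small-I/O workload. Values are illustrative.
    mount -t cvfs -o buffercachecap=512,cachebufsize=65536 snfs1 /stornext/snfs1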
StorNext supports NFS version 3 (NFSv3) and NFS version 4 (NFSv4) with some limitations. For additional information, see Network File System (NFS) Support in StorNext, in the StorNext Compatibility Guide available online at http://www.quantum.com/snsdocs, and also the Appliance Controller Compatibility Guide.
It is best to isolate NFS and/or CIFS traffic from the metadata network to eliminate contention that will impact performance. On NFS clients, use the rsize=1048576 and wsize=1048576 mount options. When possible, it is also best to utilize TCP offload capabilities as well as jumbo frames.
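For example, a Linux NFS client might mount with (server name and paths are hypothetical):

    # 1 MiB read/write transfer sizes over TCP.
    mount -t nfs -o rsize=1048576,wsize=1048576,tcp \
        nfsserver:/stornext/snfs1 /mnt/snfs1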
Note: Jumbo frames should only be configured when all of the relevant networking components in the environment support them.
Note: When Jumbo frames are used, the MTU on the Ethernet interface should be configured to an appropriate size. Typically, the correct value is 9000, but may vary depending on your networking equipment. Refer to the documentation for your network adapter.
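For example, on Linux the MTU can be set and verified as follows (the interface name and address are hypothetical):

    # Set a 9000-byte MTU on the interface carrying NFS/CIFS traffic.
    ip link set dev eth1 mtu 9000

    # Verify jumbo frames pass end to end without fragmentation:
    # 8972 = 9000 (MTU) - 20 (IP header) - 8 (ICMP header).
    ping -M do -s 8972 -c 3 192.168.10.20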
It is best practice to have clients directly attached to the same network switch as the NFS or CIFS server. Any routing required for NFS or CIFS traffic incurs additional latency that impacts performance.
It is critical to make sure the speed/duplex settings are correct, because a mismatch severely impacts performance. Most of the time, auto-negotiation is the correct setting for the Ethernet interface used for the NFS or CIFS traffic.
Whether auto-negotiation is the correct setting depends on the capabilities of the Ethernet switch that the Ethernet interface connects to. Some managed switches cannot negotiate the auto-negotiation capability with a host Ethernet interface and instead allow the speed/duplex (for example, 1000Mb/full) to be set, which disables auto-negotiation and requires the host to be set exactly the same. However, if the settings do not match between switch and host, performance suffers severely. For example, if the switch is set to auto-negotiation but the host is set to 1000Mb/full, you will observe a high error rate along with extremely poor performance. On Linux, the ethtool utility can be very useful for investigating and adjusting speed/duplex settings.
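For example (the interface name is hypothetical):

    # Display current speed, duplex, and auto-negotiation state.
    ethtool eth1

    # Check NIC statistics for error counters that suggest a mismatch.
    ethtool -S eth1 | grep -i err

    # Only if the switch port is hard-set: match it exactly on the host.
    ethtool -s eth1 speed 1000 duplex full autoneg off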
If performance requirements cannot be achieved with NFS or CIFS, consider using a StorNext LAN client or a Fibre Channel-attached client.
A tool such as netperf can be useful for verifying network performance characteristics.
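For example, with netserver already running on the remote host (the host name is hypothetical):

    # Measure TCP bulk-transfer throughput over a 30-second run.
    netperf -H nfsserver -t TCP_STREAM -l 30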
Although supported in previous StorNext releases, the subtree_check option (which controls NFS checks on a file handle being within an exported subdirectory of a file system) is no longer supported.
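On Linux NFS servers, one way to ensure subtree checking is disabled is to specify no_subtree_check explicitly in the export; a sketch of an /etc/exports entry (the path and client specification are hypothetical):

    # /etc/exports: export the StorNext file system without subtree checking.
    /stornext/snfs1 *(rw,sync,no_subtree_check)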