Datastore_fs
Datastore_fs is a read/write FUSE filesystem that is the foundation of most data transfer between vmPRO and VMware vSphere. It is mounted inside the vmPRO appliance at /vmfs/volumes, under which there is a directory for each datastore on each configured ESX hypervisor. The organization mirrors how the VMFS volumes are mounted within the ESX server: each datastore directory is named with the volume's VMFS globally unique ID, or with a locally unique ID if it is not a VMFS volume (e.g. an NFS share). For each ESX datastore, a symbolic link in /vmfs/volumes maps the datastore name to the datastore's unique-ID directory.
The content of /vmfs/volumes on the vmPRO appliance is a reflection of the actual ESX datastore.
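As an illustration of this layout, the short sketch below resolves a datastore name to its unique-ID directory by following the symbolic link that datastore_fs maintains under /vmfs/volumes. The helper name and error handling are illustrative only, not part of vmPRO:

    import os

    # Hypothetical helper (not part of vmPRO): resolve a datastore name to its
    # unique-ID directory under /vmfs/volumes by following the symlink that
    # datastore_fs creates for each datastore name.
    def resolve_datastore_dir(name, root="/vmfs/volumes"):
        link = os.path.join(root, name)      # e.g. /vmfs/volumes/datastore1
        target = os.path.realpath(link)      # e.g. /vmfs/volumes/<VMFS UUID>
        if not os.path.isdir(target):
            raise FileNotFoundError("datastore %r is not mounted under %s" % (name, root))
        return target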
The datastore_fs filesystem is intended for internal use by other vmPRO processes, not for access over NFS or CIFS. For example, during backup it reads VM files from the vSphere datastore and passes data such as flat VMDK files to vm_proxy_fs; during a recovery operation it receives recovered data from import_fs and recreates the VM in the ESX datastore. It also depends on the controller to perform datastore discovery during a full discovery operation.
Datastore_fs interacts with vSphere using 3 interfaces:
If a datastore is shared (mounted on many ESX hosts) and is a VMFS volume, datastore_fs represents it in /vmfs/volumes with a single directory named with the volume's VMFS UUID. When this datastore is accessed, datastore_fs selects one of the ESX hosts through which to reach it: it usually picks the host that the vmPRO appliance itself runs on, if that host has access to the datastore; otherwise it picks a host at random.
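A minimal sketch of this selection rule, assuming a hypothetical list of ESX hosts that have the shared datastore mounted (the function and argument names are illustrative, not vmPRO internals):

    import random

    def pick_host(hosts_with_datastore, local_host=None):
        # Prefer the ESX host the vmPRO appliance itself runs on, when that
        # host has the shared datastore mounted; otherwise pick one at random.
        if local_host is not None and local_host in hosts_with_datastore:
            return local_host
        return random.choice(hosts_with_datastore)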
To improve read and write performance during VM disk extent reads and writes, datastore_fs implements optimizations such as read-ahead and write gathering, which allow it to use larger buffer sizes. Here are some of the "registry keys" that can be used to tune or debug datastore_fs:
vixdisklib.transport.loglevel - VMware transport log level. Default is 6. Log goes to /var/log/datastore_fs
vixdisklib.open.retry - Set the number of attempts to retry after a VDDK open failure. This works around transient open errors transparently.
nfc.loglevel - VDDK NFC log level. The default is 1, but it can be increased to 2 or 3 to get more verbose logging from VMware.
nfc.session.reserved - Control the number of NFC sessions reserved for others to use. Each ESX host or vCenter has a maximum number of NFC connections (9 and 27, respectively). This key limits the number of connections the appliance will use to the server maximum minus the reserved count; with the default of 2, for example, the appliance will use at most 9 - 2 = 7 connections against a standalone ESX host. However, because the appliance cannot find out how many connections are already in use on the ESX server, this is only a best guess; the ESX server may still run out of connections if other processes, such as vMotion, are using more than the reserved 2 NFC connections.
appliance.debug.level - Set a global appliance debug level. This affects datastore_fs and all other processes. Log messages go to the respective logs, such as /var/log/datastore_fs or /var/log/messages.
feature.read_ahead.size - Set the number of bytes to read ahead of the virtual disk for sequential access. Set to 0 to disable read ahead. The default is 1MB. This is used only if the VMDK extent file is opened with the sequential flag.
feature.read_ahead.random.size - Set the number of bytes to read ahead of the virtual disk for random access. Set to 0 to disable read ahead. The default is 64KB. This is used by default if the VMDK extent file is opened without the sequential flag.
feature.read_ahead.count - Set the number of buffers in the read-ahead buffer pool. Set it to 0 to disable asynchronous read ahead and use synchronous read ahead instead (see the sketch after this list).
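The sketch below illustrates what the feature.read_ahead.* keys control; it is not vmPRO's implementation, and the function and parameter names (sequential_chunks, size, count) are assumptions for illustration. A background thread keeps up to count buffers of size bytes read ahead of the consumer, and count = 0 falls back to plain synchronous reads:

    import queue
    import threading

    def sequential_chunks(path, size=1024 * 1024, count=4):
        # Yield a file's contents sequentially with asynchronous read-ahead.
        if count <= 0:
            # Synchronous path: no buffers are filled ahead of the consumer.
            with open(path, "rb") as f:
                while chunk := f.read(size):
                    yield chunk
            return

        buffers = queue.Queue(maxsize=count)    # queue depth = read-ahead depth

        def producer():
            with open(path, "rb") as f:
                while chunk := f.read(size):
                    buffers.put(chunk)          # blocks once `count` chunks are ahead
            buffers.put(b"")                    # end-of-file marker

        threading.Thread(target=producer, daemon=True).start()
        while chunk := buffers.get():
            yield chunk

A larger size helps sequential backup reads (hence the 1MB default for the sequential flag), while the smaller 64KB random-access default keeps read-ahead from wasting bandwidth on scattered I/O.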
The datastore_fs process writes its log entries to /var/log/datastore_fs.