Using the Linux DD Command

 

Overview

 

The DD command can be useful for troubleshooting transfer speed problems with certain ESX hosts and datastores.
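As a quick refresher, dd copies raw data from an input file (if=) to an output file (of=), and prefixing the command with time reports how long the copy took. The two lines below are a minimal sketch of the pattern used throughout this article; the paths are placeholders, not real files on the appliance.

time dd if=/path/to/source of=/path/to/destination
time dd if=/path/to/source of=/dev/null bs=1M     # bs= raises the block size from dd's 512-byte default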

 


 

You can use DD for the following tests.
-       Write from a datastore to /dev/null: Tests how quickly we can read from the disk the VM being tested resides on, through the datastore file system; /dev/null simply discards the data.
o    Syntax Example: time dd if=/vmfs/volumes/… of=/dev/null
-       Write from /dev/zero to the datastore: Tests transfer speed between /dev/zero and the datastore. This mimics a restore, excluding the NAS mount.
o    Syntax Example: time dd if=/dev/zero of=/vmfs/volumes/…
-       Write from /dev/zero to a storage mount: Tests transfer speed when writing zeros to the NAS without the bottleneck of /vmfs/volumes (datastore_fs) or /export (vm_proxy_fs). For either write test, see the bounded-write sketch after this list.
o    Syntax Example: time dd if=/dev/zero of=/storage/.uuid_here/vm/..
-       Write from a storage mount to /dev/null: Tests transfer between the mounted storage and /dev/null. This mimics a restore, excluding the datastore.
o    Syntax Example: time dd if=/storage/.uuid/test of=/dev/null
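For the write tests, it helps to bound the run with bs= and count= so the test file has a known size and does not fill the target. A hedged sketch, reusing the placeholder path from the list above (ddtest is just an illustrative file name):

time dd if=/dev/zero of=/storage/.uuid_here/vm/ddtest bs=1M count=1024     # writes 1 GiB of zeros
rm /storage/.uuid_here/vm/ddtest                                           # remove the test file afterward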

 

 

 

Under /export we have two separate vmPRO folders, Critical and Junk.

 

bash-4.1# pwd
/export
bash-4.1# ls -alh
total 8.0K
drwxr-xr-x   5 root root  100 Jan 15 22:07 .
drwxr-xr-x. 33 root root 4.0K Jan 15 20:47 ..
drwxrwxrwx   2 root root   40 Jan 15 22:07 10.20.230.15
drwxrwxrwx   5 root root  100 Jan 15 17:37 Critical
drwxrwxrwx   3 root root   60 Jan 15 21:47 Junk

Here are their contents.

 

 

bash-4.1# pwd
/export/Junk
bash-4.1# ls -lh
total 0
drwxrwxrwx 2 root root 140 Jan 19 19:38 ASPS-RA-Antero

AND

 

bash-4.1# pwd
/export/Critical
bash-4.1# ls
3.01_1_15_2012_vmPRO              ASPS WEBSERVER -Keep Running
ASPS - Backup Exec 10.20.230.107
bash-4.1#

 

 

Let's explore the Datastores the .vmdk files are stored on.

  1. Run the DD command and test throughput.

bash-4.1# time dd if=ASPS\ -\ Backup\ Exec\ 10.20.230.107/ASPS\ -\ Backup\ Exec-flat.vmdk of=/dev/null
1757153+0 records in
1757152+0 records out
899661824 bytes (900 MB) copied, 55.4631 s, 16.2 MB/s
 

bash-4.1# time dd if=ASPS-RA-Antero/ASPS-RA-Antero-flat.vmdk of=/dev/null
116705+0 records in
116704+0 records out
59752448 bytes (60 MB) copied, 5.75763 s, 10.4 MB/s

 

Notice that one VM (ASPS Webserver) is transferring about 6 MB/s faster than the other VM (Antero): 16.2 MB/s versus 10.4 MB/s, a difference of more than 50%. Why is this? The slower Antero VM is hosted on an archive disk system (DXi), which has a lot of overhead and isn't designed for production workloads.
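Note that dd reads in 512-byte blocks by default, which adds per-block overhead. If you want a number closer to the raw sequential read rate, a hedged variant of the same test is to pass a larger block size, for example:

time dd if=ASPS-RA-Antero/ASPS-RA-Antero-flat.vmdk of=/dev/null bs=1M     # same read, 1 MiB blocks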

 

  2. The DD command can also help test throughput from the /export file system to the attached storage, the DXi in this case. Start by identifying the mount point of the NAS; in our example it is /storage/.171dfb00-440f-4601-bed6-f6a4b5ea7c3b.

bash-4.1# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda3 on /var type ext3 (rw)
/dev/sda6 on /var/cores type ext3 (rw)
/dev/sda5 on /var/log type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
ls_bitmap_fs on /inuse_bitmap type fuse.ls_bitmap_fs (ro,nosuid,default_permissions)
tmpfs on /vmfs/volumes.shadow type tmpfs (rw,size=250M,nr_inodes=50000,mode=0755)
datastore_fs on /vmfs/volumes type fuse.datastore_fs (rw,nosuid,max_read=1048576)
tmpfs on /export.shadow type tmpfs (rw,size=350M,nr_inodes=50000,mode=0755)
vm_proxy_fs on /export type fuse.vm_proxy_fs (rw,nosuid,default_permissions,allow_other,max_read=1048576)
recovery_fs_files on /recover/files type fuse.recovery_fs_files (ro,nosuid,default_permissions,allow_other)
files_fs on /files type fuse.files_fs (ro,nosuid,default_permissions,allow_other)
import_fs on /import type fuse.import_fs (rw,nosuid,nodev,default_permissions,max_read=1048576)
//10.20.224.52/vmpro on /storage/.10afbfc6-6510-4a6e-b305-64b9a38b9ca1 type cifs (rw,mand)
//10.20.230.75/vmPRO on /storage/.a1d08602-4729-4f6f-8ad0-fcfef1859bb9 type cifs (rw,mand)
10.20.230.75:/Q/shares/vmPROnfs on /storage/.171dfb00-440f-4601-bed6-f6a4b5ea7c3b type nfs (rw,soft,intr,addr=10.20.230.75)
recovery_fs on /recover/images type fuse.recovery_fs (ro,nosuid,default_permissions,allow_other)
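On an appliance with this many mounts, it can be easier to filter the output down to just the attached storage mounts; a simple sketch:

mount | grep /storage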

 

  3. Now run a test. Remember, you need to write to a file, not a directory. Establish a test file by doing the following:

bash-4.1# time dd if=ASPS-RA-Antero/ASPS-RA-Antero-flat.vmdk of=/storage/.171dfb00-440f-4601-bed6-f6a4b5ea7c3b/test/test
426754048 bytes (427 MB) copied, 61.1197 s, 7.0 MB/s
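If the test directory does not already exist on the NAS mount, it can be created first, and the test file should be removed once testing is finished so it does not consume space on the backup target. A sketch, assuming the same mount point as above:

mkdir -p /storage/.171dfb00-440f-4601-bed6-f6a4b5ea7c3b/test      # create the test directory if needed
rm /storage/.171dfb00-440f-4601-bed6-f6a4b5ea7c3b/test/test       # clean up the test file afterward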
 

 

  4. Use the DD command to determine disk performance.

vmPRO moves data from datastores attached to the ESX host to a NAS device for backup storage. Datastore performance can affect the speed of a backup. Storage can be Fiber Channel, iSCSI, NAS, local disk, or FCoE attached. Each has its benefits, and performance can vary greatly.
 

The following screen shows examples of datastores that have different types of storage.

 

 

 

Datastores are mounted under /vmfs/volumes and managed by the datastore_fs file system. Below is an example of the VMStorage2 datastore.
 

 

bash-4.1# pwd
/vmfs/volumes/VMStorage2/w2k3-64-vm
bash-4.1# ls -alh
total 5.3G
drwxr-xr-x 3 root root 460 Mar 3 20:15 .
drwx------ 26 root root 580 Mar 3 20:15 ..
drwxr-xr-x 2 root root 60 Mar 3 20:15 phd
-rwxrwxrwx 1 root root 46K Jan 17 2012 vmware-10.log
-rwxrwxrwx 1 root root 46K Feb 25 13:15 vmware-11.log
-rwxrwxrwx 1 root root 78K Feb 25 13:18 vmware-12.log
-rwxrwxrwx 1 root root 47K Feb 25 13:21 vmware-13.log
-rwxrwxrwx 1 root root 62K Feb 25 13:36 vmware-14.log
-rwxrwxrwx 1 root root 47K Jan 17 2012 vmware-9.log
-rwxrwxrwx 1 root root 729K Mar 3 13:01 vmware.log
-rwxrwxrwx 1 root root 20G Mar 3 2013 w2k3-64-vm-000001-delta.vmdk
-rwxrwxrwx 1 root root 327 Mar 3 13:01 w2k3-64-vm-000001.vmdk
-rwxrwxrwx 1 root root 28K Jan 17 2012 w2k3-64-vm-Snapshot1.vmsn
-rwxrwxrwx 1 root root 13 Mar 3 13:01 w2k3-64-vm-aux.xml
-rwxrwxrwx 1 root root 2.0G Feb 25 13:37 w2k3-64-vm-ed440910.vswp
-rwxrwxrwx 1 root root 20G Jan 17 2012 w2k3-64-vm-flat.vmdk
-rwxrwxrwx 1 root root 8.5K Mar 2 12:42 w2k3-64-vm.nvram
-rwxrwxrwx 1 root root 495 Jan 17 2012 w2k3-64-vm.vmdk
-rwxrwxrwx 1 root root 1.4K Mar 3 13:01 w2k3-64-vm.vmsd
-rwxrwxrwx 1 root root 2.9K Mar 3 13:01 w2k3-64-vm.vmx
-rwxrwxrwx 1 root root 1.9K Feb 25 15:11 w2k3-64-vm.vmxf
bash-4.1#

 

However, since the /export mount, managed by vm_proxy_fs, presents the virtual machines currently being exported, it can lead us to the same place. This only works if the virtual disk is set to be exported, but it is a cleaner path than drilling down through /vmfs/volumes. The DD command can be issued against either path as its 'if' source.
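For example, the same flat .vmdk could in principle be read through either mount. Both commands below are a sketch based on the VMStorage2 example above; the <folder> in the /export path is a placeholder for whichever vmPRO folder the VM is exported under:

time dd if=/vmfs/volumes/VMStorage2/w2k3-64-vm/w2k3-64-vm-flat.vmdk of=/dev/null
time dd if=/export/<folder>/w2k3-64-vm/w2k3-64-vm-flat.vmdk of=/dev/null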


Take a file from each Datastore to compare performance.


In the example below, notice that the /vmfs/volumes mount shows roughly twice as many entries as there are datastores on the ESX hosts. This is because each datastore's UUID directory also gets a symbolic link bearing its friendly name, which makes identifying datastores easier.

 

bash-4.1# pwd
/vmfs/volumes
bash-4.1# ls -alh
total 4.0K
drwxr-xr-x 13 root root 480 Feb 27 12:40 .
drwxrwxrwx 4 root root 4.0K Feb 27 12:39 ..
drwx------ 13 root root 320 Mar 3 20:20 4b03813a-80a54654-a217-0030483450b1
drwx------ 2 root root 80 Mar 3 20:20 4b1d74d6-eee84f06-fe5f-0030483450b0
drwx------ 17 root root 400 Mar 3 20:20 4b1d7a3b-d37ae471-f8f1-0030483450b0
drwx------ 2 root root 80 Mar 3 20:20 4b1d7a53-1202fbc3-b7dd-0030483450b0
drwx------ 2 root root 80 Mar 3 20:20 4b5df85e-b23819c3-2025-0030483450b0
drwx------ 26 root root 580 Mar 3 20:20 4b5e21a8-d949a3dc-7743-0030483450b0
drwx------ 2 root root 80 Mar 3 20:20 4b636dd4-cd678a07-419b-0030483450b0
drwx------ 2 root root 80 Mar 3 20:20 4d265379-c3a6d073-c3a2-0030483450b0
drwx------ 2 root root 80 Mar 3 20:20 4d274a30-26e61fe9-c06f-0030483450b0
drwx------ 34 root root 740 Mar 3 20:20 4d374ebe-aba17cba-86ff-0030483450b0
drwx------ 2 root root 80 Mar 3 20:20 5aff0913-024b17eb
lrwxrwxrwx 1 root root 35 Feb 27 12:40 EricStorage -> 4d374ebe-aba17cba-86ff-0030483450b0
lrwxrwxrwx 1 root root 35 Feb 27 12:40 RyanStorage -> 4d274a30-26e61fe9-c06f-0030483450b0
lrwxrwxrwx 1 root root 35 Feb 27 12:40 Storage1 -> 4b03813a-80a54654-a217-0030483450b1
lrwxrwxrwx 1 root root 35 Feb 27 12:40 Storage2 -> 4b1d74d6-eee84f06-fe5f-0030483450b0
lrwxrwxrwx 1 root root 35 Feb 27 12:40 Storage3 -> 4b1d7a3b-d37ae471-f8f1-0030483450b0
lrwxrwxrwx 1 root root 35 Feb 27 12:40 Storage4 -> 4b1d7a53-1202fbc3-b7dd-0030483450b0
lrwxrwxrwx 1 root root 35 Feb 27 12:40 VMStorage1 -> 4b5df85e-b23819c3-2025-0030483450b0
lrwxrwxrwx 1 root root 35 Feb 27 12:40 VMStorage2 -> 4b5e21a8-d949a3dc-7743-0030483450b0
lrwxrwxrwx 1 root root 35 Feb 27 12:40 VMStorage3 -> 4b636dd4-cd678a07-419b-0030483450b0
lrwxrwxrwx 1 root root 35 Feb 27 12:40 VMStorage4 -> 4d265379-c3a6d073-c3a2-0030483450b0
lrwxrwxrwx 1 root root 17 Feb 27 12:40 vmproDSon7500 -> 5aff0913-024b17eb
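To separate the friendly-name links from the UUID directories, the listing can be filtered on the symlink flag; a simple sketch:

ls -l /vmfs/volumes | grep '^l'     # show only the symbolic links (one per datastore)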

 

The example below shows that RyanStorage is Fiber-attached.

 

bash-4.1# pwd
/vmfs/volumes/RyanStorage/dxi0-rdavies2x
bash-4.1# ls -alh
total 35G
drwxr-xr-x 2 root root 340 Mar 3 20:30 .
drwx------ 15 root root 360 Mar 3 20:30 ..
-rwxrwxrwx 1 root root 50G Apr 21 2011 dxi0-rdavies2x-flat.vmdk
-rwxrwxrwx 1 root root 8.5K May 3 2011 dxi0-rdavies2x.nvram
-rwxrwxrwx 1 root root 449 Apr 21 2011 dxi0-rdavies2x.vmdk
-rwxrwxrwx 1 root root 628 May 31 2011 dxi0-rdavies2x.vmsd
-rwxrwxrwx 1 root root 3.4K May 31 2011 dxi0-rdavies2x.vmx
-rwxrwxrwx 1 root root 269 Apr 25 2011 dxi0-rdavies2x.vmxf
-rwxrwxrwx 1 root root 200G May 3 2011 dxi0-rdavies2x_1-flat.vmdk
-rwxrwxrwx 1 root root 480 May 31 2011 dxi0-rdavies2x_1.vmdk
-rwxrwxrwx 1 root root 25G May 3 2011 dxi0-rdavies2x_2-flat.vmdk
-rwxrwxrwx 1 root root 473 May 31 2011 dxi0-rdavies2x_2.vmdk
-rwxrwxrwx 1 root root 65K Apr 21 2011 vmware-1.log
-rwxrwxrwx 1 root root 47K Apr 21 2011 vmware-2.log
-rwxrwxrwx 1 root root 167K May 3 2011 vmware.log
bash-4.1#

 

In the following example, you can see that Storage1 is a local disk.

 

bash-4.1# pwd
/vmfs/volumes/Storage1/ASPS-Ubuntu_Server_9.10
bash-4.1# ls -alh
total 1.9G
drwxr-xr-x 2 root root 240 Mar 3 20:33 .
drwx------ 13 root root 320 Mar 3 20:33 ..
-rwxrwxrwx 1 root root 15G Nov 23 2009 ASPS-Ubuntu_Server_9.10-flat.vmdk
-rwxrwxrwx 1 root root 8.5K Nov 21 2009 ASPS-Ubuntu_Server_9.10.nvram
-rwxrwxrwx 1 root root 488 Nov 23 2009 ASPS-Ubuntu_Server_9.10.vmdk
-rwxrwxrwx 1 root root 518 Nov 23 2009 ASPS-Ubuntu_Server_9.10.vmsd
-rwxrwxrwx 1 root root 3.1K Nov 23 2009 ASPS-Ubuntu_Server_9.10.vmx
-rwxrwxrwx 1 root root 278 Nov 21 2009 ASPS-Ubuntu_Server_9.10.vmxf
-rwxrwxrwx 1 root root 41K Nov 21 2009 vmware-1.log
-rwxrwxrwx 1 root root 142K Nov 23 2009 vmware.log

 

The vmproDSon7500 storage below is a NAS NFS mount.

 

bash-4.1# pwd
/vmfs/volumes/vmproDSon7500
bash-4.1# ls -alh
total 653M
drwx------ 4 root root 240 Mar 3 20:35 .
drwxr-xr-x 13 root root 480 Feb 27 12:40 ..
-rwxrwxrwx 1 root root 329 Mar 3 20:30 .ff6a2cffff08ffffff29ff7866ffff3b36ff3068.json
drwxr-xr-x 2 root root 60 Mar 3 20:35 2013-02
drwxr-xr-x 2 root root 60 Mar 3 20:35 2013-03
-rwxrwxrwx 1 root root 3.5G Mar 1 16:06 CentOS-6.3-i386-bin-DVD1.iso
-rwxrwxrwx 1 root root 1.1G Mar 1 16:13 CentOS-6.3-i386-bin-DVD2.iso
-rwxrwxrwx 1 root root 4.5M Feb 25 13:34 cd110511.iso
-rwxrwxrwx 1 root root 560M Mar 1 15:31 linuxmint-14.1-mate-dvd-32bit.iso.jp7nr1d.partial
-rwxrwxrwx 1 root root 36K Mar 3 13:16 smartmotion.mysqldump
bash-4.1#

Use the DD command to dump a file from each storage type to /dev/null to see how data transfer varies.

 

Below is the test for NAS NFS-Mounted storage:

 

bash-4.1# pwd
/vmfs/volumes/vmproDSon7500
bash-4.1# time dd if=CentOS-6.3-i386-bin-DVD2.iso of=/dev/null
2277956+0 records in
2277956+0 records out
1166313472 bytes (1.2 GB) copied, 11.246 s, 104 MB/s
real 0m11.342s
user 0m0.564s
sys 0m3.825s
bash-4.1#

 

Below is the test for Fiber-Attached storage:

 

bash-4.1# pwd
/vmfs/volumes/RyanStorage
bash-4.1# time dd if=DX_USB_V9.25_3f71fb9ca1d4e3b8a7ee4787b0555c59.img.gz of=/dev/null
347555+1 records in
347555+1 records out
177948471 bytes (178 MB) copied, 1.73414 s, 103 MB/s
real    0m21.964s
user    0m0.080s
sys     0m0.558s
bash-4.1#

 

Below is the test for Local Disk storage:

 

bash-4.1# pwd
/vmfs/volumes/Storage1
bash-4.1# time dd if=DX_USB_V9.25_3f71fb9ca1d4e3b8a7ee4787b0555c59.img.gz of=/dev/null
347555+1 records in
347555+1 records out
177948471 bytes (178 MB) copied, 1.50854 s, 118 MB/s
real    0m22.330s
user    0m0.071s
sys     0m0.539s
bash-4.1#

 

The disk performance tests above show that the Fiber-attached and NAS NFS-mounted storage perform about equally, at roughly 103-104 MB/s. The local disk storage performs slightly faster, coming in at 118 MB/s.
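To repeat this comparison in one pass, a small loop can read the same test file from each datastore in turn. This is a sketch, assuming a file named testfile has already been copied to each of the datastores listed (testfile is a placeholder name, not a file from the examples above):

for ds in RyanStorage Storage1 vmproDSon7500; do
    echo "== $ds =="                                           # label each datastore's result
    time dd if=/vmfs/volumes/$ds/testfile of=/dev/null bs=1M   # read the test file, discard the data
done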

 


What's Next?

TCPdump Tool >