Image-Level (Full) Restore

Overview

This article discusses how to perform image-level restores. It covers performing the restore, an example configuration and its resulting file listing, and using the logs to troubleshoot and verify restores.

Before You Start

Before starting an image-level restore, keep in mind the conflict-handling and target-location considerations described in the steps below.

Performing an Image-Level Restore

  1. To start an image-level restore, open the Web GUI and select SmartMotion Backup > Recover.

 

This opens the Recover Virtual Machines Wizard, which prompts you to select the NAS storage from which you wish to recover the image.

    a. Click the folder icon at the far right and browse to the image file you would like to restore. When you have done so, the Next button will become active.
    b. Click Next.
      Note:
      Although the wizard lets you restore multiple images, in this example we will restore only one.

 SmartMotion will create a directory with the structure:
\\NAS_Host_name(or IP)\share\Year_Month\Year_Month_Day_TimeofSmartmotion\Folder_defined_in_vmPRO\name_of_vm\

Here is an example of a file path. Keep in mind that the share portion of the path is ignored by the restore wizard.
\\freenas.quantum.com\vmpro\2012-12\2012-12-30-0100\Critical\ASPS WEBSERVER -Keep Running\
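
If you want to confirm the image exists on the NAS before launching the wizard, you can browse the share from a shell. A minimal sketch, assuming the SMB share from the example above and a hypothetical mount point with guest access:

mkdir -p /mnt/vmpro
mount -t cifs //freenas.quantum.com/vmpro /mnt/vmpro -o guest   # hypothetical credentials
ls "/mnt/vmpro/2012-12/2012-12-30-0100/Critical/ASPS WEBSERVER -Keep Running/"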

After you click Next, the wizard displays the Configuration screen.

2. Fill in the Configuration section for the .vmx file, as follows. Do NOT click Next until you have also filled in the Virtual Disk Configuration section at the bottom of this screen.

 

  1. VM Name: This is static and cannot be changed, since the VM already exists.
  2. Source VMX Name: This is static and cannot be changed, since the source file already exists.
  3. Target VMX Name: Shows what the VM will be named when it is restored. It is used for reference when the Stop on Conflict function comes into play.
  4. Target Datastore: Where the restored image will be written. The default is the datastore that the image currently resides on (defined in the .cfg file by UUID).
  5. Target Directory: The directory in the datastore into which files will be restored.
  6. Action on Conflict: What to do if a file of the same name already exists in the /export_fs mount point at /vmfs/volumes/<datastore_uuid>/vm_name. There are two options: stop the restore if a file with the same .vmx file name is found, or rename the restored files as they are written. Either option prevents overwriting the original VM. (A manual check is sketched after this list.)
    • Add the VM… and Register… options: Options for having the ESX host inventory the VM.
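
To anticipate a Stop on Conflict halt, you can check the target location manually from the appliance shell before starting the restore. A minimal sketch, using the datastore UUID and target directory from the failed-restore example later in this article:

# Does the target directory already exist? (Path taken from the example logs below.)
ls "/vmfs/volumes/4b1d7a53-1202fbc3-b7dd-0030483450b0/webserver(3)" 2>/dev/null \
  && echo "Target exists: Stop on Conflict will halt; choose Rename instead."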

3.  Fill in the Virtual Disk Configuration section, which specifies the location of the restored .vmdk files. The options are:

 

  • Use the Same Datastore… Puts the .vmdk files into the same directory as the .vmx file. The .vmdk files follow the .vmx configuration above.
  • Use the Original Configuration… Puts the .vmdk files into the same directory that the .cfg files say they came from. The .vmx file is still directed as configured above.
  • Change each Virtual Disk's Configuration: To do this:
  1. Select this option.
  2. Click the Edit Configuration link.
  3. Choose the options for each disk. You can direct each disk to a specific Target Datastore and Target Directory, as well as choose an option for Disk Provisioning.

 
  4. To save your choices, click Save.
  5. When you have set all the options in Steps 2-4, click Next.
  6. The Recover Virtual Machines Wizard will display the configuration of the VM.

    a. If the listing is correct, skip to Step c below. If it is not correct, continue with Step b.
    b. If you need to make changes, click Back to return to the screen shown after Step 1b above, then repeat Steps 2-5.
    c. When the listing is correct, click Start to begin the VM recovery.
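
If you selected the Register option, you can confirm that the restored VM appears in the host's inventory. A sketch, assuming shell access to the ESX host (vim-cmd is the standard ESXi host CLI; the VM name comes from this article's example):

# List all registered VMs and filter for the restored one.
vim-cmd vmsvc/getallvms | grep -i webserver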

Example Configuration Screenshot and the Resulting File Listing

The following is a sample configuration for an image-level restore. It uses the same screen as the one shown after Step 1b in the previous section, but with Action on Conflict set to Rename and a different Target VMX Name.

Note: This listing is more complete than the one displayed by the wizard in Step 6.

datacenter = "ha-datacenter"
resourcepool = "Resources/Production"
hostsystem = "aspsesx.quantum.com"
servername = "10.20.230.15"
vmdk.0.rename = "webserver.vmdk"
vmdk.0.name = "webserver.vmdk"
vmdk.0.dsname = "4d265379-c3a6d073-c3a2-0030483450b0"
vmdk.0.extent-rename = "webserver-flat.vmdk"
vmdk.0.extent-name = "webserver-flat.vmdk"
vmdk.0.directory = "webserver_rename(3)"
vmdk.0.type = "thick"
vmdk.0.size = "42968862720"
vmdk.1.rename = "webserver(1).vmdk"
vmdk.1.name = "webserver(1).vmdk"
vmdk.1.dsname = "4d265379-c3a6d073-c3a2-0030483450b0"
vmdk.1.extent-rename = "webserver(1)-flat.vmdk"
vmdk.1.extent-name = "webserver(1)-flat.vmdk"
vmdk.1.directory = "webserver_rename(3)"
vmdk.1.type = "thick"
vmdk.1.size = "214748364800"
dsname = "4d265379-c3a6d073-c3a2-0030483450b0"
register-after-import = "1"
vmx-name = "webserver.vmx"
directory = "webserver_rename(3)"
version = "5"
vmuuid = "564d6224-e29b-41fc-7cc2-20cdc7a52c2f"
on-conflict = "rename"
folder = "vm"
access-time = "1355994019"
vmname = "ASPS WEBSERVER -Keep Running"
vmx-rename = "webserver_rename.vmx"
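
Because the listing is a flat set of key = "value" pairs, it is easy to inspect from a shell. A minimal sketch, assuming the listing has been saved to a file named vm.cfg, that pulls out each disk's name, directory, type, and size:

# Print the per-disk configuration keys, stripping the quotes from the values.
awk -F' = ' '/^vmdk\.[0-9]+\.(name|directory|type|size)/ { gsub(/"/, "", $2); print $1 " -> " $2 }' vm.cfg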

Using the Logs to Troubleshoot Restore Problems

Looking at the vmPRO import and datastore logs can help you troubleshoot restore problems. The logs are:

Import_fs:

Function:       Prepares data to be moved by the datastore_fs to the destination datastore.
Mount point:   import_fs is mounted at /import, type fuse.import_fs (rw,nosuid,nodev,default_permissions,max_read=1048576).

Datastore_fs:

Function:       Moves data from import_fs to the datastore(s).
Mount point:   datastore_fs is mounted at /vmfs/volumes, type fuse.datastore_fs (rw,nosuid,max_read=1048576).
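
The mount entries above are ordinary mount(8) output, so you can confirm both FUSE mounts are present on the appliance with:

mount | grep -E 'fuse\.(import|datastore)_fs'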


The logs themselves are /var/log/import_fs and /var/log/datastore_fs. The messages log (/var/log/messages) gives good general information.
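
The "==> import_fs <==" headers in the excerpts below are what tail prints when following several files at once; to watch all three logs live during a restore:

tail -f /var/log/import_fs /var/log/datastore_fs /var/log/messages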

Using the Logs to Troubleshoot a Failed Restore

The following log entries show an example of a failed restore. The commentary between the log excerpts explains various aspects of the entries, and how you can use them for troubleshooting.
 
This logging takes place while the .vmx and .vmdk restore parameters are being checked, before you press the Start button.
The Import_fs makes a directory and node:

2013-02-05 16:03:40.896644: import_fs_mkdir: /ASPS WEBSERVER -Keep Running_0
2013-02-05 16:03:40.897590: import_fs_mknod(path /ASPS WEBSERVER -Keep Running_0/vm.cfg, mode 100644)

 
The vm.cfg file is created, and the path of the .cfg file is opened:
 

2013-02-05 16:03:40.897619: Create file /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 16:03:40.897779: import_fs_open(path /ASPS WEBSERVER -Keep Running_0/vm.cfg, flags 100001)
2013-02-05 16:03:40.897801: Open /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 16:03:40.898148: import_fs_flush(path /ASPS WEBSERVER -Keep Running_0/vm.cfg dirty 1)
2013-02-05 16:03:40.898169: Flush /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 16:03:40.898631: import_session: created /ASPS WEBSERVER -Keep Running_0
2013-02-05 16:03:45.253979: Import Session /ASPS WEBSERVER -Keep Running_0:

 
Here is the point of failure. The error:17 code (errno 17 is EEXIST, "File exists") and the following description show that data cannot be written, because the target directory already exists. Since the rename flag has not been set, the restore operation cannot continue.

2013-02-05 16:03:45.253877: WARNING [thread:139964949931776, where:import_session.c:402, error:17] The target directory /vmfs/volumes/4b1d7a53-1202fbc3-b7dd-0030483450b0/webserver(3) already exists. It will not be overwritten.
import_fs[2787]: WARNING [thread:139964949931776, where:import_session.c:402, error:17] The target directory /vmfs/volumes/4b1d7a53-1202fbc3-b7dd-0030483450b0/webserver(3) already exists. It will not be overwritten.
2013-02-05 16:03:45.254836: stats: /ASPS WEBSERVER -Keep Running_0/vm.cfg total 1 kb, write thru 1 kb, skipped 0 kb (0 kb/s), keep alive 0, incoming avg write size 1075, outgoing avg write size 1075
2013-02-05 16:03:45.255077: import_fs_release(path /ASPS WEBSERVER -Keep Running_0/vm.cfg)
2013-02-05 16:03:45.255092: import_fs_flush(path /ASPS WEBSERVER -Keep Running_0/vm.cfg dirty 0)
==> messages <==
Feb  5 16:03:45 localhost import_fs[2787]: WARNING [thread:139964949931776, where:import_session.c:402, error:17] The target directory /vmfs/volumes/4b1d7a53-1202fbc3-b7dd-0030483450b0/webserver(3) already exists. It will not be overwritten.
==> import_fs <==

 
Creation of the cfg.err file: An error file is created in the present working directory, containing the error text in human-readable ASCII form; as the entries below show, it is unlinked again almost immediately. Its text is what is presented in the GUI error display.

2013-02-05 16:03:45.255399: import_fs_open(path /ASPS WEBSERVER -Keep Running_0/vm.cfg.err, flags 100000)
2013-02-05 16:03:45.255932: Open /ASPS WEBSERVER -Keep Running_0/vm.cfg.err
2013-02-05 16:03:45.256499: import_fs_flush(path /ASPS WEBSERVER -Keep Running_0/vm.cfg.err dirty 0)
2013-02-05 16:03:45.256830: import_fs_release(path /ASPS WEBSERVER -Keep Running_0/vm.cfg.err)
2013-02-05 16:03:45.256845: import_fs_flush(path /ASPS WEBSERVER -Keep Running_0/vm.cfg.err dirty 0)
2013-02-05 16:03:45.257258: import_fs_unlink: /ASPS WEBSERVER -Keep Running_0/vm.cfg.err
2013-02-05 16:03:45.257770: Unlink /ASPS WEBSERVER -Keep Running_0/vm.cfg.err
2013-02-05 16:03:45.258117: import_fs_unlink: /ASPS WEBSERVER -Keep Running_0/.vmcfg
2013-02-05 16:03:45.258609: Unlink /ASPS WEBSERVER -Keep Running_0/.vmcfg
2013-02-05 16:03:45.258785: import_fs_unlink: /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 16:03:45.258864: Unlink /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 16:03:45.259048: import_fs_unlink: /ASPS WEBSERVER -Keep Running_0/import.log
2013-02-05 16:03:45.259102: Unlink /ASPS WEBSERVER -Keep Running_0/import.log
2013-02-05 16:03:45.259195: import_fs_rmdir: /ASPS WEBSERVER -Keep Running_0

 
At this point an error appears in the GUI, reporting that the target directory already exists and will not be overwritten.

To fix the error, click the Back button and make configuration changes, such as selecting Rename for Action on Conflict; you can then proceed to a healthy recovery.
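
To find this kind of failure quickly in a long log, search for the WARNING marker, for example:

grep -n 'WARNING' /var/log/import_fs | tail -5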

Using the Logs to Verify a Successful Restore

Here is an example of a successful restore, with interspersed comments.

==> import_fs <==
The Import_fs makes a directory and node:

2013-02-05 15:43:27.170267: import_fs_mkdir: /ASPS WEBSERVER -Keep Running_0
2013-02-05 15:43:27.171262: import_fs_mknod(path /ASPS WEBSERVER -Keep Running_0/vm.cfg, mode 100644)

 
The vm.cfg file is created, and the path of the .cfg file is opened:
 

2013-02-05 15:43:27.171341: Create file /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 15:43:27.171677: import_fs_open(path /ASPS WEBSERVER -Keep Running_0/vm.cfg, flags 100001)
2013-02-05 15:43:27.171701: Open /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 15:43:27.172184: import_fs_flush(path /ASPS WEBSERVER -Keep Running_0/vm.cfg dirty 1)
2013-02-05 15:43:27.172205: Flush /ASPS WEBSERVER -Keep Running_0/vm.cfg

 
The .cfg file is closed (the Samba share will show the file touched then removed) and the import session is created:
 

2013-02-05 15:43:27.172657: import_session: created /ASPS WEBSERVER -Keep Running_0
2013-02-05 15:43:31.970042: Import Session /ASPS WEBSERVER -Keep Running_0:
2013-02-05 15:43:31.995414: vm name      : ASPS WEBSERVER -Keep Running
2013-02-05 15:43:31.995423: server       : aspsesx.quantum.com
2013-02-05 15:43:31.995432: host system  : aspsesx.quantum.com
2013-02-05 15:43:31.995440: datacenter   : ha-datacenter
2013-02-05 15:43:31.995449: resource pool: Resources/Production
2013-02-05 15:43:31.995458: mode         : 1
2013-02-05 15:43:31.995466: register vm  : 1
2013-02-05 15:43:31.995510: vmx:         : webserver.vmx => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver_rename.vmx
2013-02-05 15:43:31.995520: disk 0       : webserver.vmdk => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver.vmdk
2013-02-05 15:43:31.995531: disk 0       : webserver-flat.vmdk => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver-flat.vmdk (42968862720 bytes thick)
2013-02-05 15:43:31.995541: disk 1       : webserver(1).vmdk => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver(1).vmdk
2013-02-05 15:43:31.995552: disk 1       : webserver(1)-flat.vmdk => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver(1)-flat.vmdk (214748364800 bytes thick)
2013-02-05 15:43:31.996245: stats: /ASPS WEBSERVER -Keep Running_0/vm.cfg total 1 kb, write thru 1 kb, skipped 0 kb (0 kb/s), keep alive 0, incoming avg write size 1105, outgoing avg write size 1105
2013-02-05 15:43:31.996640: import_fs_release(path /ASPS WEBSERVER -Keep Running_0/vm.cfg)
2013-02-05 15:43:31.996694: import_fs_flush(path /ASPS WEBSERVER -Keep Running_0/vm.cfg dirty 0)
2013-02-05 15:43:31.997843: import_fs_unlink: /ASPS WEBSERVER -Keep Running_0/.vmcfg
2013-02-05 15:43:31.998365: Unlink /ASPS WEBSERVER -Keep Running_0/.vmcfg
2013-02-05 15:43:32.029875: import_fs_unlink: /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 15:43:32.030017: Unlink /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 15:43:32.045619: import_fs_unlink: /ASPS WEBSERVER -Keep Running_0/import.log
2013-02-05 15:43:32.045814: Unlink /ASPS WEBSERVER -Keep Running_0/import.log
2013-02-05 15:43:32.062663: import_fs_rmdir: /ASPS WEBSERVER -Keep Running_0

 
At this point verification is done, and the GUI is waiting for the user to click the Start button.

Data movement begins here:

==> import_fs <==
 
Import_fs makes its directories and nodes, and opens its paths and import session. This is the same process seen during verification:
 

2013-02-05 15:44:24.942706: import_fs_mkdir: /ASPS WEBSERVER -Keep Running_0
2013-02-05 15:44:24.945176: import_fs_mknod(path /ASPS WEBSERVER -Keep Running_0/vm.cfg, mode 100644)
2013-02-05 15:44:24.945211: Create file /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 15:44:24.945582: import_fs_open(path /ASPS WEBSERVER -Keep Running_0/vm.cfg, flags 100001)
2013-02-05 15:44:24.945634: Open /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 15:44:24.946238: import_fs_flush(path /ASPS WEBSERVER -Keep Running_0/vm.cfg dirty 1)
2013-02-05 15:44:24.946261: Flush /ASPS WEBSERVER -Keep Running_0/vm.cfg
2013-02-05 15:44:24.946774: import_session: created /ASPS WEBSERVER -Keep Running_0
2013-02-05 15:44:28.376393: Import Session /ASPS WEBSERVER -Keep Running_0:
2013-02-05 15:44:28.376474: vm name      : ASPS WEBSERVER -Keep Running
2013-02-05 15:44:28.376488: server       : aspsesx.quantum.com
2013-02-05 15:44:28.376497: host system  : aspsesx.quantum.com
2013-02-05 15:44:28.376506: datacenter   : ha-datacenter
2013-02-05 15:44:28.376514: resource pool: Resources/Production
2013-02-05 15:44:28.376523: mode         : 1
2013-02-05 15:44:28.376532: register vm  : 1

 
The .vmx file is directed from its current datastore to the specified directory and name:
 

2013-02-05 15:44:28.376541: vmx:         : webserver.vmx => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver_rename.vmx
2013-02-05 15:44:28.376551: disk 0       : webserver.vmdk => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver.vmdk

 
The .vmdk file is directed from its current datastore to the specified directory and name: 
 

2013-02-05 15:44:28.376563: disk 0       : webserver-flat.vmdk => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver-flat.vmdk (42968862720 bytes thick)
2013-02-05 15:44:28.376574: disk 1       : webserver(1).vmdk => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver(1).vmdk
2013-02-05 15:44:28.376584: disk 1       : webserver(1)-flat.vmdk => [4d265379-c3a6d073-c3a2-0030483450b0] webserver_rename(3)/webserver(1)-flat.vmdk (214748364800 bytes thick)
2013-02-05 15:44:28.377159: stats: /ASPS WEBSERVER -Keep Running_0/vm.cfg total 1 kb, write thru 1 kb, skipped 0 kb (0 kb/s), keep alive 0, incoming avg write size 1105, outgoing avg write size 1105
2013-02-05 15:44:28.377353: import_fs_release(path /ASPS WEBSERVER -Keep Running_0/vm.cfg)
2013-02-05 15:44:28.377369: import_fs_flush(path /ASPS WEBSERVER -Keep Running_0/vm.cfg dirty 0)

  
The messages log shows the image being opened and the .vmx file being copied:
 

 ==> messages <==
Feb  5 15:44:28 localhost controller[2853]: /recover/images/freenas.quantum.com/2012-12/2012-12-20-0100/Critical/ASPS WEBSERVER -Keep Running
Feb  5 15:44:28 localhost controller[2853]: Copying file: /recover/images/freenas.quantum.com/2012-12/2012-12-20-0100/Critical/ASPS WEBSERVER -Keep Running/webserver.vmx

 
Creating nodes and paths, opening the file, and flushing:
 

==> import_fs <==
2013-02-05 15:44:28.455945: import_fs_mknod(path /ASPS WEBSERVER -Keep Running_0/webserver.vmx, mode 100755)
2013-02-05 15:44:28.456282: Create file /ASPS WEBSERVER -Keep Running_0/webserver.vmx
2013-02-05 15:44:28.456573: import_fs_open(path /ASPS WEBSERVER -Keep Running_0/webserver.vmx, flags 100001)
2013-02-05 15:44:28.456836: Open /ASPS WEBSERVER -Keep Running_0/webserver.vmx
2013-02-05 15:44:28.477210: import_fs_flush(path /ASPS WEBSERVER -Keep Running_0/webserver.vmx dirty 1)
2013-02-05 15:44:28.477309: Flush /ASPS WEBSERVER -Keep Running_0/webserver.vmx

 
Import of the .vmx file, with the .vmdk disks redirected as configured:
 

2013-02-05 15:44:28.477612: importing /vmfs/volumes/4d265379-c3a6d073-c3a2-0030483450b0/webserver_rename(3)/webserver_rename.vmx
2013-02-05 15:44:28.477642: redirecting disk from webserver.vmdk to webserver.vmdk
2013-02-05 15:44:28.477670: redirecting disk from webserver(1).vmdk to webserver(1).vmdk



 
Datastore_fs mkdir and mknod, opening a path, and files being copied:
 

==> datastore_fs <==
2013-02-05 15:44:28.478019: datastore_fs_mkdir /4d265379-c3a6d073-c3a2-0030483450b0/webserver_rename(3) 1ff
2013-02-05 15:44:29.240839: datastore_fs_mknod /4d265379-c3a6d073-c3a2-0030483450b0/webserver_rename(3)/webserver_rename.vmx.tmp mode 81a4 dev 0
2013-02-05 15:44:29.454419: datastore_fs_open(path /4d265379-c3a6d073-c3a2-0030483450b0/webserver_rename(3)/webserver_rename.vmx.tmp, fuse_file_info 0x7ffb6cb23e40): direct_io 0, Random I/O
2013-02-05 15:44:29.489901: Retrieved '[VMStorage4] webserver_rename(3)/webserver_rename.vmx.tmp' in 0 seconds
2013-02-05 15:44:29.932015: datastore_fs_rename /4d265379-c3a6d073-c3a2-0030483450b0/webserver_rename(3)/webserver_rename.vmx.tmp /4d265379-c3a6d073-c3a2-0030483450b0/webserver_rename(3)/webserver_rename.vmx
2013-02-05 15:44:32.068375: pns_prune: removing /4d265379-c3a6d073-c3a2-0030483450b0/webserver_rename(3)/webserver_rename.vmx.tmp: owner 0, generation 1360104269

 
Import_fs is now done importing the .vmx into our datastore_fs mount point (/vmfs/volumes):
 

==> import_fs <==
2013-02-05 15:44:32.068853: done importing /vmfs/volumes/4d265379-c3a6d073-c3a2-0030483450b0/webserver_rename(3)/webserver_rename.vmx
2013-02-05 15:44:37.666433: stats: /ASPS WEBSERVER -Keep Running_0/webserver.vmx total 3 kb, write thru 3 kb, skipped 0 kb (0 kb/s), keep alive 0, incoming avg write size 3356, outgoing avg write size 3356

 
There will be little movement in the logs if you are only doing a restore. By looking at the /vmfs/volumes mount, you can see the directories for the datastores, and the links identifying them by name:

bash-4.1# pwd
/vmfs/volumes
bash-4.1# ls -alh
total 4.0K
drwxr-xr-x 12 root root  440 Feb  4 12:54 .
drwxrwxrwx  4 root root 4.0K Feb  4 12:54 ..
drwx------ 13 root root  320 Feb  6 12:55 4b03813a-80a54654-a217-0030483450b1
drwx------  2 root root   80 Feb  6 12:55 4b1d74d6-eee84f06-fe5f-0030483450b0
drwx------ 17 root root  400 Feb  6 12:55 4b1d7a3b-d37ae471-f8f1-0030483450b0
drwx------ 39 root root  840 Feb  6 12:55 4b1d7a53-1202fbc3-b7dd-0030483450b0
drwx------ 15 root root  360 Feb  6 12:55 4b5df85e-b23819c3-2025-0030483450b0
drwx------  2 root root   80 Feb  6 12:55 4b5e21a8-d949a3dc-7743-0030483450b0
drwx------  2 root root   80 Feb  6 12:55 4b636dd4-cd678a07-419b-0030483450b0
drwx------ 20 root root  460 Feb  6 12:55 4d265379-c3a6d073-c3a2-0030483450b0
drwx------  2 root root   80 Feb  6 12:55 4d274a30-26e61fe9-c06f-0030483450b0
drwx------ 29 root root  640 Feb  6 12:55 4d374ebe-aba17cba-86ff-0030483450b0
lrwxrwxrwx  1 root root   35 Feb  4 12:54 EricStorage -> 4d374ebe-aba17cba-86ff-0030483450b0
lrwxrwxrwx  1 root root   35 Feb  4 12:54 RyanStorage -> 4d274a30-26e61fe9-c06f-0030483450b0
lrwxrwxrwx  1 root root   35 Feb  4 12:54 Storage1 -> 4b03813a-80a54654-a217-0030483450b1
lrwxrwxrwx  1 root root   35 Feb  4 12:54 Storage2 -> 4b1d74d6-eee84f06-fe5f-0030483450b0
lrwxrwxrwx  1 root root   35 Feb  4 12:54 Storage3 -> 4b1d7a3b-d37ae471-f8f1-0030483450b0
lrwxrwxrwx  1 root root   35 Feb  4 12:54 Storage4 -> 4b1d7a53-1202fbc3-b7dd-0030483450b0
lrwxrwxrwx  1 root root   35 Feb  4 12:54 VMStorage1 -> 4b5df85e-b23819c3-2025-0030483450b0
lrwxrwxrwx  1 root root   35 Feb  4 12:54 VMStorage2 -> 4b5e21a8-d949a3dc-7743-0030483450b0
lrwxrwxrwx  1 root root   35 Feb  4 12:54 VMStorage3 -> 4b636dd4-cd678a07-419b-0030483450b0
lrwxrwxrwx  1 root root   35 Feb  4 12:54 VMStorage4 -> 4d265379-c3a6d073-c3a2-0030483450b0
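
The symlinks map friendly datastore names to the UUID directories that appear in the logs. To resolve a name from the listing above:

readlink /vmfs/volumes/VMStorage4
# -> 4d265379-c3a6d073-c3a2-0030483450b0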

 
Here we see an example for the datastore VMStorage4 with a new directory, webserver_the_new directory. Repeating the ls command shows the byte count increasing:

 

bash-4.1# ls -alh
total 2.0K
drwxr-xr-x  2 root root  160 Feb  6 14:00 .
drwx------ 17 root root  400 Feb  6 14:00 ..
-rwxrwxrwx  1 root root    0 Feb  6 14:00 im_a_webserver.vmsd
-rwxrwxrwx  1 root root 3.5K Feb  6 14:00 im_a_webserver.vmx
-rwxrwxrwx  1 root root  269 Feb  6 14:00 im_a_webserver.vmxf
-rwxrwxrwx  1 root root    0 Feb  6 14:00 webserver(1)-flat.vmdk
bash-4.1# ls -alh
total 26G
drwxr-xr-x  2 root root  160 Feb  6 14:01 .
drwx------ 17 root root  400 Feb  6 14:01 ..
-rwxrwxrwx  1 root root    0 Feb  6 14:00 im_a_webserver.vmsd
-rwxrwxrwx  1 root root 3.5K Feb  6 14:00 im_a_webserver.vmx
-rwxrwxrwx  1 root root  269 Feb  6 14:00 im_a_webserver.vmxf
-rwxrwxrwx  1 root root 200G Feb  6 14:01 webserver(1)-flat.vmdk
bash-4.1# ls -alh
total 26G
drwxr-xr-x  2 root root  180 Feb  6 14:04 .
drwx------ 17 root root  400 Feb  6 14:04 ..
-rwxrwxrwx  1 root root    0 Feb  6 14:00 im_a_webserver.vmsd
-rwxrwxrwx  1 root root 3.5K Feb  6 14:00 im_a_webserver.vmx
-rwxrwxrwx  1 root root  269 Feb  6 14:00 im_a_webserver.vmxf
-rwxrwxrwx  1 root root 200G Feb  6 14:01 webserver(1)-flat.vmdk
-rwxrwxrwx  1 root root  453 Feb  6 14:01 webserver(1).vmdk
bash-4.1# pwd
/vmfs/volumes/VMStorage4/webserver_the_new directory
bash-4.1#
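
To monitor progress without retyping ls, you can poll the target directory; a minimal sketch using only standard shell tools:

while true; do ls -alh "/vmfs/volumes/VMStorage4/webserver_the_new directory"; sleep 30; done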


When a restore is done, you will see the -flat.vmdk, .vmdk, and .vmx files listed; in this example the files were restored to a new directory named oranges:

bash-4.1# pwd
/vmfs/volumes/Storage4/oranges
bash-4.1# ls -alh
total 1.6G
drwxr-xr-x  3 root root  160 Feb  6 15:14 .
drwx------ 39 root root  840 Feb  6 15:14 ..
-rwxrwxrwx  1 root root  12G Feb  6 15:04 Quantum vmPRO 3.0-flat.vmdk
-rwxrwxrwx  1 root root  456 Feb  6 14:54 Quantum vmPRO 3.0.vmdk
-rwxrwxrwx  1 root root 3.3K Feb  6 15:04 apples.vmx
drwxr-xr-x  2 root root   60 Feb  6 15:14 phd
bash-4.1#
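
To confirm that the restored .vmx references the restored disks, you can inspect its disk entries (.vmx disk lines use the fileName key):

grep -i 'filename' "/vmfs/volumes/Storage4/oranges/apples.vmx"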

 
Logging at the end of a restore looks something like this:
 

2013-02-06 15:04:39.522377: import_fs_mknod(path /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx, mode 100755)
2013-02-06 15:04:39.522725: Create file /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx
2013-02-06 15:04:39.522773: pns_create_node: creating </import.shadow/Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx>, generation = 1
2013-02-06 15:04:39.522821: Creating proxy header(/import.shadow/Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx), new_time = 1360188279
2013-02-06 15:04:39.522915: import_fs_getattr(path /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx)
2013-02-06 15:04:39.523032: import_fs_open(path /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx, flags 100001)
2013-02-06 15:04:39.523237: Open /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx
2013-02-06 15:04:39.537017: import_fs_getxattr : path /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx, name security.capability, size 0
2013-02-06 15:04:39.537147: pns_create_node: creating </import.shadow/Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx>, generation = 1
2013-02-06 15:04:39.537191: Updating proxy header(/import.shadow/Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx) prev_time = 1360188279, new_time = 1360188279
2013-02-06 15:04:39.537601: import_fs_flush(path /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx dirty 1)
2013-02-06 15:04:39.537674: Flush /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx
2013-02-06 15:04:39.538040: importing /vmfs/volumes/4b1d7a53-1202fbc3-b7dd-0030483450b0/oranges/apples.vmx
2013-02-06 15:04:39.538063: redirecting disk from Quantum vmPRO 3.0.vmdk to Quantum vmPRO 3.0.vmdk
2013-02-06 15:04:49.458715: done importing /vmfs/volumes/4b1d7a53-1202fbc3-b7dd-0030483450b0/oranges/apples.vmx
2013-02-06 15:04:49.458865: stats: /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx total 3 kb, write thru 3 kb, skipped 0 kb (0 kb/s), keep alive 0, incoming avg write size 3175, outgoing avg write size 3175
2013-02-06 15:04:49.459796: import_fs_release(path /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx)
2013-02-06 15:04:49.459814: import_fs_flush(path /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx dirty 0)
2013-02-06 15:04:49.464775: import_fs_getattr(path /)
2013-02-06 15:04:49.464862: import_fs_getattr(path /Quantum vmPRO 3.0_1)
2013-02-06 15:04:49.465468: import_fs_getattr(path /Quantum vmPRO 3.0_1)
2013-02-06 15:04:49.465538: import_fs_getattr(path /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx)
2013-02-06 15:04:49.465603: import_fs_unlink: /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx
2013-02-06 15:04:49.465907: Unlink /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx
2013-02-06 15:04:49.465920: pns_unlink: unlinking </import.shadow/Quantum vmPRO 3.0_1/Quantum vmPRO 3.0.vmx>
2013-02-06 15:04:49.466202: import_fs_getattr(path /Quantum vmPRO 3.0_1)
2013-02-06 15:04:49.466266: import_fs_getattr(path /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0-flat.vmdk)
2013-02-06 15:04:49.466335: import_fs_unlink: /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0-flat.vmdk
2013-02-06 15:04:49.466576: Unlink /Quantum vmPRO 3.0_1/Quantum vmPRO 3.0-flat.vmdk
2013-02-06 15:04:49.466653: pns_unlink: unlinking </import.shadow/Quantum vmPRO 3.0_1/Quantum vmPRO 3.0-flat.v
2013-02-06 15:04:49.466768: import_fs_getattr(path /Quantum vmPRO 3.0_1)
2013-02-06 15:04:49.466826: import_fs_getattr(path /Quantum vmPRO 3.0_1/.vmcfg)
2013-02-06 15:04:49.466878: import_fs_unlink: /Quantum vmPRO 3.0_1/.vmcfg
2013-02-06 15:04:49.467121: Unlink /Quantum vmPRO 3.0_1/.vmcfg
2013-02-06 15:04:49.467134: pns_unlink: unlinking </import.shadow/Quantum vmPRO 3.0_1/.vmcfg>
2013-02-06 15:04:49.467792: import_fs_getattr(path /Quantum vmPRO 3.0_1)
2013-02-06 15:04:49.467858: import_fs_getattr(path /Quantum vmPRO 3.0_1/vm.cfg)
2013-02-06 15:04:49.467913: import_fs_unlink: /Quantum vmPRO 3.0_1/vm.cfg
2013-02-06 15:04:49.468024: Unlink /Quantum vmPRO 3.0_1/vm.cfg
2013-02-06 15:04:49.468037: pns_unlink: unlinking </import.shadow/Quantum vmPRO 3.0_1/vm.cfg>
2013-02-06 15:04:49.468445: import_fs_getattr(path /Quantum vmPRO 3.0_1)
2013-02-06 15:04:49.468507: import_fs_getattr(path /Quantum vmPRO 3.0_1/import.log)
2013-02-06 15:04:49.468603: import_fs_unlink: /Quantum vmPRO 3.0_1/import.log
2013-02-06 15:04:49.468661: Unlink /Quantum vmPRO 3.0_1/import.log
2013-02-06 15:04:49.468673: pns_unlink: unlinking </import.shadow/Quantum vmPRO 3.0_1/import.log>
2013-02-06 15:04:49.469137: import_fs_rmdir: /Quantum vmPRO 3.0_1
2013-02-06 15:04:49.469176: pns_rmdir: unlinking </import.shadow/Quantum vmPRO 3.0_1>
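
A quick way to confirm the import finished is to search for the "done importing" marker shown in the log above:

grep 'done importing' /var/log/import_fs | tail -1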

  


What's Next?

> File-Level Restore