File-Level Restore

Overview

This article discusses how to perform file-level restores. It covers the following main topics:

 Basics of File-Level Recovery
 Restoring Individual Files
 Dealing with Problems When Restoring Individual Files
 Tracking a File-Level Recovery in the recovery_fs Log

Note: File-level restores are not available from the Wizard.


Basics of File-Level Recovery

File-level recovery in vmPRO is done by browsing to a UNC path on the "Master" vmPRO appliance. You can reach this file system with a path such as \\<master_vmpro_ip_address>\recover\files, which corresponds to the /recover/files mount point on the appliance.

 

Note: This should not be confused with the separate Samba share named files (the /files mount, described below).
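
As a quick way to confirm the shares are reachable before attempting a restore, you can list them with the Samba client tools from an administrative workstation. This is a minimal sketch; it uses the appliance address 10.20.230.151 from the examples later in this article and assumes anonymous (guest) access is permitted in your environment:

    # List the shares exported by the vmPRO appliance
    smbclient -L //10.20.230.151 -N

    # Browse the recover share and list the top of the files tree
    smbclient //10.20.230.151/recover -N -c 'cd files; ls'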

 

Here they both are as mount points:

 

    bash-4.1# mount | grep files
    files_fs on /files type fuse.files_fs (ro,nosuid,default_permissions,allow_other)
    recovery_fs_files on /recover/files type fuse.recovery_fs_files (ro,nosuid,default_permissions,allow_other)

 

Both file systems are exported via Samba. Here is an excerpt from the smb.conf file:

 

[files]
    path = /files
    comment = "File Level Access"
    browseable = yes
    writeable = no
    printable = no
    vfs objects = ntfs3g

[recover]
    path = /recover
    comment = "Quantum Recover"
    browseable = yes
    writeable = no
    printable = no
    vfs objects = ntfs3g
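
To confirm that Samba is actually exporting both shares with these settings, one option is to query the appliance itself. This is a sketch and assumes the standard Samba utilities (testparm, smbclient) are available on the appliance shell:

    # Dump the effective Samba configuration, including the [files] and [recover] sections
    bash-4.1# testparm -s

    # List the shares currently being exported (anonymous)
    bash-4.1# smbclient -L localhost -N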

/files 

The /files mount is similar to the /export mount, which is managed by vm_proxy_fs and presents data from /vmfs/volumes/.

 

Here is the reference for this mount point:

 

    bash-4.1# mount | grep export
    tmpfs on /export.shadow type tmpfs (rw,size=350M,nr_inodes=50000,mode=0755)
    vm_proxy_fs on /export type fuse.vm_proxy_fs (rw,nosuid,default_permissions,allow_other,max_read=1048576)
 

Here is the reference for the Samba share:

 

[export]
    path = /export
    comment = "Quantum Export"
    browseable = yes
    writeable = no
    printable = no
    map archive = no
    acl map full control = yes

 

Because the /export file system presents the virtual machines and their associated files that SmartMotion is ready to move to the target NAS (CIFS or NFS), it only gives us a view of what has been "discovered" from the /vmfs/volumes/ file system as managed by datastore_fs.
 
Here is the mount point and its contents, in a directory listing:
 

    bash-4.1# pwd
    /export
    bash-4.1# ls -alh
    total 8.0K
    drwxr-xr-x   6 root root  120 Feb 27 12:40 .
    drwxr-xr-x. 33 root root 4.0K Feb 27 12:39 ..
    drwxrwxrwx   2 root root   40 Feb 27 12:40 10.20.230.15
    drwxrwxrwx   3 root root   60 Feb 27 12:40 WebServer
    drwxrwxrwx   4 root root   80 Feb 27 12:40 WindowsVMs
    drwxrwxrwx   3 root root   60 Feb 27 12:40 test

 

Here they are, viewed from the vmPRO GUI:

 

 

In the screenshot above, we can see that the /export file system simply shows the VMs that are set to Yes.

 

The /files listing shows the same presentation:

 

    bash-4.1# pwd
    /files
    bash-4.1# ls -alh
    total 8.0K
    drwxr-xr-x   1 root root  120 Feb 27 12:40 .
    drwxr-xr-x. 33 root root 4.0K Feb 27 12:39 ..
    drwxrwxrwx   1 root root   40 Feb 27 12:40 10.20.230.15
    drwxrwxrwx   1 root root   60 Feb 27 12:40 WebServer
    drwxrwxrwx   1 root root   80 Feb 27 12:40 WindowsVMs
    drwxrwxrwx   1 root root   60 Feb 27 12:40 test
    bash-4.1#

 

Although both file systems present data from /vmfs/volumes as managed by datastore_fs, the /files listing presents a granular, file-level view, while the /export listing provides an image-level view:

 

    bash-4.1# pwd
    /export/WebServer/ASPS WEBSERVER -Keep Running
    bash-4.1# ls -alh
    total 241G
    drwxrwxrwx 2 root root  180 Feb 28 09:14 .
    drwxrwxrwx 3 root root   60 Feb 27 12:40 ..
    -rwxrwxrwx 1 root root 200G Feb  7 16:17 webserver(1)-flat.vmdk
    -rwxrwxrwx 1 root root  479 Feb 28 02:13 webserver(1).vmdk
    -rwxrwxrwx 1 root root  41G Feb 28  2013 webserver-flat.vmdk
    -rwxrwxrwx 1 root root 6.0K Feb 28 02:13 webserver.cfg
    -rwxrwxrwx 1 root root  474 Feb 28 02:14 webserver.vmdk
    -rwxrwxrwx 1 root root 3.5K Feb 28 02:13 webserver.vmx
    bash-4.1#

 

    bash-4.1# pwd
    /files/WebServer/ASPS WEBSERVER -Keep Running/webserver.volume/0/Documents and Settings
    bash-4.1# ls -alh
    total 76K
    drwxrwxrwx 1 root root 4.0K Feb 17  2009 .
    drwxrwxrwx 1 root root 8.0K Nov 29 11:14 ..
    drwxrwxrwx 1 root root 4.0K Jul 30  2009 Administrator
    drwxrwxrwx 1 root root 4.0K Feb 17  2009 All Users
    drwxrwxrwx 1 root root  48K Feb 17  2009 Default User
    drwxrwxrwx 1 root root 4.0K Feb 17  2009 LocalService
    drwxrwxrwx 1 root root 4.0K Feb 17  2009 NetworkService
    bash-4.1#

 

This means that /files is suitable for a third-party backup application to use for file-level backups, and that /export can be used for image-level backups. Neither is useful for recovery, because they present only the currently discovered image or files.
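
As an illustration of how a third-party backup host might consume the file-level view, the following sketch mounts the files share read-only over CIFS on a Linux host and copies a directory tree from it. The mount point, destination path, and guest-access option are hypothetical; adjust the address and credentials for your environment:

    # Mount the file-level view read-only on a Linux backup host
    mkdir -p /mnt/vmpro_files
    mount -t cifs //10.20.230.151/files /mnt/vmpro_files -o ro,guest

    # Copy a discovered folder tree into the backup staging area (hypothetical destination)
    rsync -a "/mnt/vmpro_files/WebServer/" /backup/staging/WebServer/

    # Unmount when the job is finished
    umount /mnt/vmpro_files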


Restoring Individual Files

Individual files in backed-up virtual machines can be restored without running the recovery process. File-level recovery allows you to use the vmPRO appliance to access the files within the virtual disks that are backed up.

 

To restore individual files:
 

  1. Using Windows Explorer (not a browser) on your local computer, enter the UNC path

\\<vmPRO-Host_IP>\recover\files

 

and then drill down the directory structure to the file.

 

Here's how the recovery path is structured:

 

\\<vmPRO-Host_IP>\recover\files\NAS\Year-Month\Year-Month-Day-Time\Folder\VM\Volume\Disk

 

An example of this might be:

 

\\10.20.230.151\recover\files\freenas.quantum.com\2013-01\2013-01-05-0100\Critical\ASPS WEBSERVER -Keep Running\webserver.volume\0

 

The following image shows the Documents and Settings directory under the path listed above:
 


 
 

  2. To restore one or more files, browse to and select the file(s), then copy and paste them to the desired destination (a command-line equivalent is sketched after the note below).

Note: File-level recovery for dynamic volumes that span multiple disks is currently not supported.  
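
For environments where a scripted copy is preferred over Windows Explorer, the same restore can be performed by mounting the recover share on a Linux host and copying the file out. This is a sketch only; the recovery-point path comes from the example above, while the file name example.txt, the mount point, and the /restore destination are hypothetical placeholders:

    # Mount the recover share read-only (hypothetical mount point, guest access assumed)
    mkdir -p /mnt/vmpro_recover
    mount -t cifs //10.20.230.151/recover /mnt/vmpro_recover -o ro,guest

    # Copy a single file out of the recovery point (example.txt and /restore are placeholders)
    cp "/mnt/vmpro_recover/files/freenas.quantum.com/2013-01/2013-01-05-0100/Critical/ASPS WEBSERVER -Keep Running/webserver.volume/0/Documents and Settings/Administrator/example.txt" /restore/

    # Unmount when done
    umount /mnt/vmpro_recover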


Dealing with Problems When Restoring Individual Files

If you are having problems restoring on a file level, try the following:

 

  1. Check the communication with the NAS.

     

  2. Look at the logs under /var/log, in particular recovery_fs and ls_bitmap_fs. These are an excellent way to see activity for file-level restores and to get information that will help with troubleshooting.

     

    The /recover/files mount, which is browsable via Samba, is managed by recovery_fs_files. When you browse to a file from a UNC path, recovery_fs_files engages /dev/mapper to create block devices. This activity is tracked in /var/log/ls_bitmap_fs (a log file, not a Samba share).

Good resources for troubleshooting a file-level restore problem are:

 

 recovery_fs_files (in /var/log/)

 ls_bitmap_fs (in /var/log/)

 

While tailing /var/log/recovery_fs, you can see the activity in the log as you browse the file path. See the Windows Explorer screenshot below.
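
A minimal sketch of following this activity from the appliance shell while a client browses the share:

    # Follow the file-level recovery logs as the UNC path is browsed from a client
    bash-4.1# tail -F /var/log/recovery_fs /var/log/ls_bitmap_fs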

 

 


Tracking a File-Level Recovery in the recovery_fs Log

The following listings give information that can help you track the recovery process, whether it has been successful or unsuccessful. The comments between the listings explain how to interpret them.

 

Recovery FS during file-level recovery:

    bash-4.1# less /var/log/recovery_fs
    2013-03-14 15:04:15.407647: pan_dev_open /10.20.230.75/2013-03/2013-03-02-0000/WebServer/ASPS WEBSERVER -Keep Running/webserver(1)-flat.vmdk using direct path /storage/10.20.230.75/2013-03/2013-03-02-0000/WebServer/ASPS WEBSERVER -Keep Running/webserver(1)-flat.vmdk and device mapper /dev/mapper/80359d85.629862
    2013-03-14 15:04:15.471846: pan_dev_open: failed to open /inuse_bitmap/80359d85.629862, errno 5
    2013-03-14 15:04:15.471868: vdsk_check_volumes:
        vdsk_extent_path: /storage/10.20.230.75/2013-03/2013-03-02-0000/WebServer/ASPS WEBSERVER -Keep Running/webserver(1)-flat.vmdk
        smartread_enabled: 1
        smartread_lvm_enabled: 0
        smartread_fsck_enabled: 0
        Unrecognised disk label.

    2013-03-14 15:04:15.484001: successfully validated /storage/10.20.230.75/2013-03/2013-03-02-0000/WebServer/ASPS WEBSERVER -Keep Running/webserver-flat.vmdk
    2013-03-14 15:04:15.964412: pan_dev_open /10.20.230.75/2013-03/2013-03-02-0000/WebServer/ASPS WEBSERVER -Keep Running/webserver-flat.vmdk using direct path /storage/10.20.230.75/2013-03/2013-03-02-0000/WebServer/ASPS WEBSERVER -Keep Running/webserver-flat.vmdk and device mapper /dev/mapper/cd2efa6f.629862
    2013-03-14 15:04:16.490356: vdsk_check_volumes:
        vdsk_extent_path: /storage/10.20.230.75/2013-03/2013-03-02-0000/WebServer/ASPS WEBSERVER -Keep Running/webserver-flat.vmdk
        smartread_enabled: 1
        smartread_lvm_enabled: 0
        smartread_fsck_enabled: 0
        dev_path: /dev/mapper/cd2efa6f.629862p1
                vtype: FILESYS (1)
                fstype: ntfs (2)
                fsck: skipped

 

During this interaction, /dev/mapper creates symlinks to the underlying dm-* block devices. The view of the /dev/mapper directory shows this:

    Every 2.0s: ls -alh /dev/mapper/                        Thu Mar 14 15:04:16 2013

    total 0
    drwxr-xr-x  2 root root    580 Mar 14 15:04 .
    drwxr-xr-x 16 root root   9.0K Mar 14 15:04 ..
    ………
    lrwxrwxrwx  1 root root      8 Mar 14 15:04 47a03977.228404 -> ../dm-11
    lrwxrwxrwx  1 root root      8 Mar 14 15:04 47a03977.228404.dup -> ../dm-12
    lrwxrwxrwx  1 root root      8 Mar 14 15:04 47a03977.228404.top -> ../dm-13
    lrwxrwxrwx  1 root root      8 Mar 14 15:04 47a03977.228404p1 -> ../dm-14
    …………………
    lrwxrwxrwx  1 root root      8 Mar 14 15:04 80359d85.629862 -> ../dm-19
    lrwxrwxrwx  1 root root      8 Mar 14 15:04 80359d85.629862.dup -> ../dm-20
    lrwxrwxrwx  1 root root      8 Mar 14 15:04 80359d85.629862.top -> ../dm-21
    lrwxrwxrwx  1 root root      8 Mar 14 15:04 cd2efa6f.629862 -> ../dm-22

 

 

We can see the /dev/mapper devices being mounted under the /var file system, beneath /var/pancetera/recover/files. Notice the correlation between the two listings:

    Every 2.0s: mount | grep /dev/mapper/                    Thu Mar 14 15:00:01 2013

    /dev/mapper/47a03977.228404p1 on /var/pancetera/recover/files/10.20.230.75/2013-02/2013-02-08-1529/WebServer/ASPS WEBSERVER -Keep Running/webserver.volume/47a03977.228404/0 type fuseblk (ro,allow_other,blksize=4096)
    /dev/mapper/cd2efa6f.629862p1 on /var/pancetera/recover/files/10.20.230.75/2013-03/2013-03-02-0000/WebServer/ASPS WEBSERVER -Keep Running/webserver.volume/cd2efa6f.629862/0 type fuseblk (ro,allow_other,blksize=4096)
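
The two listings above were captured with watch, which reruns a command every two seconds. A sketch of reproducing them, assuming watch is available on the appliance shell:

    # Watch device-mapper nodes appear as files are browsed over the recover share
    bash-4.1# watch -n 2 'ls -alh /dev/mapper/'

    # Watch the corresponding fuseblk mounts appear under /var/pancetera/recover/files
    bash-4.1# watch -n 2 'mount | grep /dev/mapper/'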

 

Tailing the ls_bitmap_fs log can help you find errors that will help you troubleshoot problems with file-level restore. For escalations, review these errors together with the logs and directories mentioned above:

    2013-03-15 13:13:23.389890: ERROR: ls_bitmap_open failed : unrecognized block device /dev/mapper/80359d85.123198
    2013-03-15 13:13:23.389922: ERROR: ls_bitmap_fs_open failed to open /80359d85.123198 (res 22)

    Error: /dev/mapper/80359d85.569269: unrecognised disk label
    2013-03-15 13:13:27.056191: ERROR: ls_bitmap_open failed : unrecognized block device /dev/mapper/80359d85.569269
    2013-03-15 13:13:27.056223: ERROR: ls_bitmap_fs_open failed to open /80359d85.569269 (res 22)
    Error: /dev/mapper/80359d85.106016: unrecognised disk label
    2013-03-15 13:13:35.771278: ERROR: ls_bitmap_open failed : unrecognized block device /dev/mapper/80359d85.106016
    2013-03-15 13:13:35.771310: ERROR: ls_bitmap_fs_open failed to open /80359d85.106016 (res 22)
    2013-03-15 13:13:38.706267: ERROR: fscanf(..., line) output of spawn(/sbin/blkid -t /dev/mapper/72fbc6fa.806234p1) failed (ret -1)
    2013-03-15 13:13:38.706444: spawn_wait() failed [where: spawn.c: 257]: /sbin/blkid (rc 2)
    2013-03-15 13:13:38.706459: ERROR: spawn_wait(27985, 0, /sbin/blkid, 120) failed (ret 2)
    Error: /dev/mapper/80359d85.806234: unrecognised disk label
    2013-03-15 13:13:39.527354: ERROR: ls_bitmap_open failed : unrecognized block device /dev/mapper/80359d85.806234
    2013-03-15 13:13:39.527378: ERROR: ls_bitmap_fs_open failed to open /80359d85.806234 (res 22)
    Error: /dev/mapper/80359d85.517278: unrecognised disk label
    2013-03-15 13:15:01.857361: ERROR: ls_bitmap_open failed : unrecognized block device /dev/mapper/80359d85.517278
    2013-03-15 13:15:01.857397: ERROR: ls_bitmap_fs_open failed to open /80359d85.517278 (res 22)
    2013-03-15 13:16:34.110818: ERROR: fscanf(..., line) output of spawn(/sbin/blkid -t /dev/mapper/2ae0c210.717518p1) failed (ret -1)
    2013-03-15 13:16:34.111111: spawn_wait() failed [where: spawn.c: 257]: /sbin/blkid (rc 2)
    2013-03-15 13:16:34.111126: ERROR: spawn_wait(28664, 0, /sbin/blkid, 120) failed (ret 2)
    Error: /dev/mapper/80359d85.929099: unrecognised disk label
    2013-03-15 13:16:34.726264: ERROR: ls_bitmap_open failed : unrecognized block device /dev/mapper/80359d85.929099
    2013-03-15 13:16:34.726318: ERROR: ls_bitmap_fs_open failed to open /80359d85.929099 (res 22)
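
When gathering information for an escalation, it can help to pull just the error lines out of these logs. A sketch, assuming shell access to the appliance:

    # Collect recent error lines from the file-level recovery logs
    bash-4.1# grep -i error /var/log/recovery_fs | tail -n 50
    bash-4.1# grep -i error /var/log/ls_bitmap_fs | tail -n 50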

 

 
 
 
 

