vmPRO: Managing CBT Resets

Overview

Changed Block Tracking (CBT) resets can take some time to complete, depending on how many VMs are being backed up by a vmPRO appliance. This article covers the GUI configuration and log tracking that you will need to carry out during the reset process. After reading this article, you should have a better understanding of how much time to allow between a CBT reset and the next scheduled SmartMotion backup run.

Changed Block Tracking (CBT) is managed from the vmPRO GUI, individually for each backed-up VM. To manage a VM's CBT:

  1. From the Virtual Machines tab, select the VM on which you want to enable CBT (shown on the right in the following image), then click the Edit Selected VMs button (shown on the left).

[Screenshot: Virtual Machines tab, with the Edit Selected VMs button on the left and the VM list on the right]

  2. In the Configure Virtual Machine screen, check the Changed Block Tracking Enabled box, then click Save.

When you click Save, you will see the following activity in the logs.
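
If you want to watch this activity in real time, you can follow the logs from the appliance shell; the "==> /var/log/... <==" headers in the excerpts below are tail's multi-file output markers. A minimal example, using the log paths shown in this article:

bash-4.1# tail -F /var/log/controller /var/log/vm_proxy_fs /var/log/datastore_fs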

The controller runs discovery and configuration updates:

==> /var/log/controller <==
2013-07-05 08:48:35: controller/INFO:: ControllerCommands.py:3362 New controller command: vm_config_update (fefd1a58-ff84-4535-b0c8-779698380091)
2013-07-05 08:48:35: controller/INFO:: ControllerCommands.py:3289 Enter command: vm_config_update (fefd1a58-ff84-4535-b0c8-779698380091), pid: 7193
2013-07-05 08:48:38: controller/INFO:: ControllerCommands.py:3362 New controller command: discover_handler (23322fc2-25cc-4253-b89a-2981d78d29e2)
2013-07-05 08:48:38: controller/INFO:: ControllerCommands.py:3289 Enter command: discover_handler (23322fc2-25cc-4253-b89a-2981d78d29e2), pid: 7204
2013-07-05 08:48:39: controller/INFO:: ControllerCommands.py:3293 Exit command normally: discover_handler (23322fc2-25cc-4253-b89a-2981d78d29e2), pid: 7204

Datastores are updated:

2013-07-05 08:48:39.527333: Updated 14 datastores (0 bad, 2 inaccessible) to gen 1373039319 in 0 seconds
2013-07-05 08:48:39.529547: Pruning old datastores to generation 1373039319
2013-07-05 08:48:39.583716: cnt_thr: done updating generation 1373039319 (controller generation 0-575)

The discovery finishes (PID 7204):

==> /var/log/controller <==
2013-07-05 08:48:39: controller/INFO:: utils.py:81 check_for_finished_cmds(ControllerCommands.pyc:3425): command process finished: 7204

The controller updates vm_proxy_fs:

==> /var/log/vm_proxy_fs <==
2013-07-05 08:48:40.446554: cnt_thr: refreshed 1 HV 36 VM (1 enabled, 1 updated, 0 failed) in 1 seconds (gen 574)
2013-07-05 08:48:40.504002: cnt_thr: updating vm_proxy_fs genId to 0-575
2013-07-05 08:48:41.900682: flat_to_pancbt flat_path: /export.shadow/No retention/ASPS-Commvault/ASPS-Commvault-flat.vmdk
2013-07-05 08:48:41.900738: flat_to_pancbt cbt_path: /export.shadow/No retention/ASPS-Commvault/ASPS-Commvault-pancbt.vmdk
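
As the flat_to_pancbt lines suggest, vmPRO exposes a -pancbt.vmdk companion for each -flat.vmdk in the export. If you want to see both files, you can list the shadow export path from this example (the quotes are needed because the folder name contains a space; this assumes the path is browsable from the appliance shell):

bash-4.1# ls "/export.shadow/No retention/ASPS-Commvault/"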

The datastore path is updated:

==> /var/log/datastore_fs <==
2013-07-05 08:48:41.901051: datastore_fs_open(path /4b5e21a8-d949a3dc-7743-0030483450b0/ASPS-Commvault/ASPS-Commvault.vmx, fuse_file_info 0x7f66585c3e40): direct_io 0, Random I/O
2013-07-05 08:48:42.287476: Retrieved '[VMStorage2] ASPS-Commvault/ASPS-Commvault.vmx' in 1 seconds

The VM Proxy finishes:

==> /var/log/vm_proxy_fs <==
2013-07-05 08:49:04.901599: get_mtime shadow: /export.shadow/No retention/ASPS-Commvault/ASPS-Commvault-flat.vmdk
2013-07-05 08:49:04.901688: get_mtime vdisk_name: ASPS-Commvault-flat.vmdk
2013-07-05 08:49:04.901698: flat_to_pancbt flat_path: ASPS-Commvault-flat.vmdk
2013-07-05 08:49:04.901707: flat_to_pancbt cbt_path: ASPS-Commvault-pancbt.vmdk
2013-07-05 08:49:04.976526: get_mtime cbt_reset_pending:   1
2013-07-05 08:49:04.976595: get_mtime actual_mtime:        1373041560
2013-07-05 08:49:04.976603: get_mtime cbt_reset_time:      1373039342
2013-07-05 08:49:04.976611: get_mtime cbt_reset_time_prev: 1372434542
2013-07-05 08:49:04.976619: get_mtime actual >= cbt_reset_time_prev
2013-07-05 08:49:04.976627: get_mtime updating vdisk table
JSON result:
{ "vhost_uuid": "423250e7-9e6a-e63e-1f5a-ac276d2dc885", "name": "ASPS-Commvault-pancbt.vmdk", "changeid": "uninitialized", "pancbt_size_percentage": 5, "cbt_reset_time_vdisk": 1373039344, "flat_file_size": "16106127360" }
2013-07-05 08:49:05.048942: get_mtime returning:           1373039344
2013-07-05 08:49:05.049161: vd_update adding pancbt file for /vmfs/volumes/4b5e21a8-d949a3dc-7743-0030483450b0/ASPS-Commvault/ASPS-Commvault.vmdk
2013-07-05 08:49:05.049306: flat_to_pancbt flat_path: /export.shadow/No retention/ASPS-Commvault/ASPS-Commvault-flat.vmdk
2013-07-05 08:49:05.049316: flat_to_pancbt cbt_path: /export.shadow/No retention/ASPS-Commvault/ASPS-Commvault-pancbt.vmdk
2013-07-05 08:49:05.049325: get_pancbt_size_percent vm_uuid: 423250e7-9e6a-e63e-1f5a-ac276d2dc885
2013-07-05 08:49:05.049334: get_pancbt_size_percent path: /export.shadow/No retention/ASPS-Commvault/ASPS-Commvault-pancbt.vmdk
2013-07-05 08:49:05.049342: get_pancbt_size_percent disk_name: ASPS-Commvault-pancbt.vmdk
2013-07-05 08:49:05.138008: vd_update overriding pancbt_size with: 4096
2013-07-05 08:49:05.138083: vm_proxy_touch_intern pancbt size: 4096
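
The get_mtime values (actual_mtime, cbt_reset_time, cbt_reset_time_prev) and the cbt_reset_time_vdisk field are Unix epoch timestamps. To correlate them with wall-clock times, you can convert them on the appliance shell (this assumes GNU date is available):

bash-4.1# date -u -d @1373039344
Fri Jul  5 15:49:04 UTC 2013

Note also "changeid": "uninitialized" in the JSON result: the change ID is re-established on the next backup after a reset, which is why that backup is a full.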

In the GUI, you can now see that CBT is enabled:

[Screenshot: Virtual Machines tab showing Changed Block Tracking enabled for the VM]

CBT resets are configured at the folder level:

[Screenshot: folder configuration showing the CBT reset schedule]

Policies dictate when a folder is backed up by SmartMotion, and one policy can back up multiple folders. You will need to allow enough time before the policy start time for the reset to complete on each VM in the folder.

Here we can see that this policy runs daily at 5:00 AM:

[Screenshot: SmartMotion policy schedule showing a daily 5:00 AM start time]

The reset on this folder is scheduled for Friday at 11:00 AM. vmPRO will do a full backup on the next run after the reset, which will be Saturday at 5:00 AM. If you want full backups to happen on Friday, configure the reset for a time before Friday's 5:00 AM run.

Here is the expected logging when a folder resets:

A cron job is started:

==> /var/log/cron <==
Jul 5 10:00:01 localhost CROND[10626]: (root) CMD (/usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e tomato_sync_check -a "{\"time\": \"`date +\"%H%M\"`\"}")
Jul 5 10:00:01 localhost CROND[10627]: (root) CMD (/usr/local/pancetera/bin/upload_cores.sh)
Jul 5 10:00:01 localhost CROND[10628]: (root) CMD (/usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e reset_cbt -a '{"folder": "993671ae-bd06-4c75-bf94-69cf7c788487"}')
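
Based on this entry, the same reset can presumably be triggered by hand by running the command that cron runs, with the folder UUID from the log above (this is an assumption drawn from the crontab, not a documented interface):

bash-4.1# /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e reset_cbt -a '{"folder": "993671ae-bd06-4c75-bf94-69cf7c788487"}'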

If you look at /etc/crontab, you will see a cron job for every folder found in the web GUI:

bash-4.1# cat /etc/crontab
#
# DO NOT EDIT THIS FILE -- it was automatically generated by the Pancetera Controller
#
 
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
 
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
50 3 * * * root /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e check_software_updates
20 7 * * * root /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e send_report
0 5,11,17,23 * * * root /usr/local/pancetera/bin/upload_cores.sh
30 3 * * 0 root /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e reset_cbt -a '{"folder": "f2dfc016-
30 3 * * 0 root /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e reset_cbt -a '{"folder": "84746783-
30 3 * * 0 root /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e reset_cbt -a '{"folder": "12006690-
0 11 * * 0 root /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e reset_cbt -a '{"folder": "dc1a22f2-
30 3 * * 0 root /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e reset_cbt -a '{"folder": "db0e7fd0-
0 11 * * 5 root /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e reset_cbt -a '{"folder": "993671ae-
*/15 * * * * root /usr/local/pancetera/bin/python-redirect /usr/local/pancetera/bin/dispatch-event.py -e tomato_sync_check -a "{\"time\":
# msm_mode == __master__
# Multi-site management: This host is a master.
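
The five leading fields of each entry are the standard cron scheduling fields: minute, hour, day of month, month, and day of week (0 = Sunday). Reading the reset_cbt entries above:

# minute  hour  day-of-month  month  day-of-week
#   30     3         *          *         0       -> 3:30 AM every Sunday
#    0    11         *          *         5       -> 11:00 AM every Friday (the folder from this example)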

The controller runs the reset_cbt command. Track it by its PID; the Enter command line indicates that the reset has begun:

2013-07-02 11:00:02: controller/INFO:: ControllerCommands.py:3362 New controller command: reset_cbt (318c8d2c-eb3f-47fb-9a27-845372325678)
2013-07-02 11:00:02: controller/INFO:: ControllerCommands.py:3289 Enter command: reset_cbt (318c8d2c-eb3f-47fb-9a27-845372325678), pid: 31643

The Exit command line indicates that the controller's part of the reset is finished:

2013-07-02 11:00:02: controller/INFO:: ControllerCommands.py:3293 Exit command normally: reset_cbt (318c8d2c-eb3f-47fb-9a27-845372325678), pid: 31643
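
Because the Enter and Exit lines share a PID, you can confirm that a given reset ran to completion by matching on it; a quick check using the PID from this example:

bash-4.1# grep 'pid: 31643' /var/log/controller
2013-07-02 11:00:02: controller/INFO:: ControllerCommands.py:3289 Enter command: reset_cbt (318c8d2c-eb3f-47fb-9a27-845372325678), pid: 31643
2013-07-02 11:00:02: controller/INFO:: ControllerCommands.py:3293 Exit command normally: reset_cbt (318c8d2c-eb3f-47fb-9a27-845372325678), pid: 31643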

The logging in vm_proxy_fs indicates that the CBT reset has finished for the VM:

2013-07-05 11:00:05.923547: Changed Block Tracking reset/enabled for VM 'ASPS-Commvault' on hypervisor '10.20.230.15'
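
To verify that every VM in the folder completed its reset, you can search vm_proxy_fs for this completion message; expect one line per VM (assuming the log has not rotated since the reset ran):

bash-4.1# grep 'Changed Block Tracking reset/enabled' /var/log/vm_proxy_fs
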
Conclusion

This example required only a couple of seconds to reset one VM. Resetting many VMs can be a significant task for the appliance, so we recommend staggering resets so that each folder has sufficient time to finish before its next scheduled backup.
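
To review every scheduled reset in one place and check its timing against your policy start times, you can filter the generated crontab (do not edit the file itself; the controller regenerates it, and reset schedules are managed from the GUI):

bash-4.1# grep reset_cbt /etc/crontab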