How Does OST Work? (DRAFT)
There is no magic behind OST -- it is just another way to back up data to disk. The backup application works together with a vendor-supplied OST plugin that implements the storage side of the backup solution; this makes it possible for the backup application to keep records of copies that are transferred to a different location for disaster recovery (DR). The backup goes through the backup media server directly to the LSU (logical storage unit): the media server creates a backup image on the share and stores the data, plus a header file with the same OST image name. The data flows from the client to the media server and on to the DXi system.
If customers want a second copy of the backup that was just created, they can now request it directly from within the backup application, and the copy is made by the de-duplication engine.
Before OST, you could either copy the raw data from disk through the media server to a second system, or use the disk system's own replication feature to replicate the data to a second system. In the latter case, the backup server did not know about the second copy.
With OST, you combine both approaches and can perform an optimized duplication.
The backup server initiates the duplication job through the OST plugin, but the data itself is replicated from DXi1 to DXi2 by the internal replication engine. The backup server now knows about the second copy and can restore from it. Because the replication facility is implemented in the de-duplication engine, as little data as possible is transferred to the DR side.
Looking at the DXi file system, you will see that the STS (storage server) is represented as a shared folder, and that the LSU is a folder inside that share. All data in this folder is created by the backup application through OST calls.
STS/LSU
/Q/shares/Dxi1/LSU1/
Inside, we will have files like this:
1349779456 Jul 13 12:30 dundee_1279020655_C1_F1_1279020655.img
1359478784 Jul 13 12:40 dundee_1279020655_C1_F2_1279020655.img
1447559168 Jul 13 12:50 dundee_1279020655_C1_F3_1279020655.img
1657434112 Jul 13 13:04 dundee_1279020655_C1_F4_1279020655.img
8192 Jul 13 12:27 dundee_1279020655_C1_HDR_1279020655.img
Note: The size of these fragment files is defined by the storage unit (STU) maximum fragment size setting in NBU.
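The relationship between image size, fragment size, and fragment count is simple arithmetic. A minimal sketch with made-up numbers (the real value comes from the STU setting; nothing here is taken from an actual configuration):

```python
import math

def expected_fragments(image_bytes, stu_fragment_bytes):
    """How many F* fragment files one backup image would be split into.

    Illustrative arithmetic only: the real fragment size comes from the
    STU "maximum fragment size" setting in NBU.
    """
    return math.ceil(image_bytes / stu_fragment_bytes)

# Made-up example: a 5 GiB image with a 1.5 GiB fragment limit
print(expected_fragments(5 * 2**30, int(1.5 * 2**30)))  # → 4
```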
hostname = dundee
date/time = 1279020655
Copy # = C1 through C99, incremented for every copy of the same image
Type = F<n> (fragment number n of this image) or HDR (header information)
Backup ID = 1279020655
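The naming scheme above can be taken apart mechanically. A hypothetical parser sketch (not part of the product; it assumes the host name contains no underscores, and the optional _R<n> token is the one that appears on replicated copies):

```python
import re

# Hypothetical parser for the OST image-name fields described above.
# Assumes the host name contains no underscores; the optional _R<n>
# token appears only on replicated copies.
OST_NAME = re.compile(
    r"^(?P<host>[^_]+)_(?P<btime>\d+)_C(?P<copy>\d+)"
    r"_(?P<type>F\d+|HDR)(?:_R(?P<rep>\d+))?_(?P<backup_id>\d+)\.img$"
)

def parse_ost_name(name):
    m = OST_NAME.match(name)
    if m is None:
        raise ValueError("not an OST image name: " + name)
    return m.groupdict()

fields = parse_ost_name("dundee_1279020655_C1_F2_1279020655.img")
print(fields["host"], fields["copy"], fields["type"])  # → dundee 1 F2
```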
There is a matching metadata file for each image file in this backup set:
136 Jul 13 12:38 .quantum_meta_dundee_1279020655_C1_F1_1279020655.img
136 Jul 13 12:49 .quantum_meta_dundee_1279020655_C1_F2_1279020655.img
136 Jul 13 13:00 .quantum_meta_dundee_1279020655_C1_F3_1279020655.img
136 Jul 13 13:13 .quantum_meta_dundee_1279020655_C1_F4_1279020655.img
136 Jul 13 12:27 .quantum_meta_dundee_1279020655_C1_HDR_1279020655.img
Looking at the header file, you can see what information is stored in there:
dundee_1279020655_C1_HDR_1279020655.img
CLIENT_TYPE 13
RETENTION_LEVEL 1
SCHEDULE_TYPE 0
COMPRESSION 0
ENCRYPTION 0
TIR_INFO 0
IND_FILE_RESTORE_FROM_RAW 0
IMAGE_DUMP_LEVEL 0
PREV_BLOCK_INCR_TIME 0
BLOCK_INCR_FULL_TIME 0
STREAM_NUMBER 0
BACKUP_COPY 0
BACKUP_ID dundee_1279020655
SCHED_LABEL Full
POLICY dundee
BLOCKSIZE 262144
MEDIA_TYPE 0
MEDIA_SUBTYPE 6
KBYTES_WRITTEN 0
KBYTES_REMAINDER 0
IMAGE_ATTRIBUTE 0
# Any additional entries must be added above this line.
END_OF_HDR
Caution: Never manually modify a header file.
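The header is a simple line-oriented key/value file terminated by END_OF_HDR, with comment lines starting with #. A minimal read-only parser sketch (field names come from the example above; treating every value as a string is an assumption):

```python
def parse_hdr(text):
    """Read the key/value pairs of an OST header file.

    Read-only sketch: per the caution above, a header file must never
    be modified or written back.
    """
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue          # skip blanks and comment lines
        if line == "END_OF_HDR":
            break             # terminator: ignore anything after it
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

sample = (
    "CLIENT_TYPE 13\n"
    "BACKUP_ID dundee_1279020655\n"
    "SCHED_LABEL Full\n"
    "# Any additional entries must be added above this line.\n"
    "END_OF_HDR\n"
)
print(parse_hdr(sample)["BACKUP_ID"])  # → dundee_1279020655
```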
Running a duplication job for the images shown above, the following appears in the Job Details (from the NBU GUI):
7/26/2010 4:31:33 PM - begin Duplicate
7/26/2010 4:31:34 PM - requesting resource TEST-DATA
7/26/2010 4:31:34 PM - granted resource MediaID=@aaaab;DiskVolume=PP1;DiskPool=TEST-DATA;Path=PP1;StorageServer=PP-GDV_10.105.12.65;MediaServer=rename-me
7/26/2010 4:31:34 PM - granted resource TEST-DATA
7/26/2010 4:31:37 PM - requesting resource @aaaad
7/26/2010 4:31:38 PM - granted resource MediaID=@aaaad;DiskVolume=full;DiskPool=Duplication;Path=full;StorageServer=PP-GDV-DUP_10.105.12.34;MediaServer=rename-me
7/26/2010 4:31:39 PM - Info Duplicate(pid=2636) Initiating optimized duplication from @aaaad to @aaaab
7/26/2010 4:31:40 PM - started process bpdm (3776)
7/26/2010 4:32:30 PM - begin writing
7/26/2010 4:32:55 PM - end writing; write time: 00:00:25
7/26/2010 4:33:06 PM - begin writing
7/26/2010 4:33:30 PM - end writing; write time: 00:00:24
7/26/2010 4:33:37 PM - begin writing
7/26/2010 4:33:56 PM - end writing; write time: 00:00:19
7/26/2010 4:34:07 PM - begin writing
7/26/2010 4:34:29 PM - end writing; write time: 00:00:22
7/26/2010 4:34:43 PM - end Duplicate; elapsed time: 00:03:10 the requested operation was successfully completed(0)
The log above is the output of the NBU detailed job status. It shows four separate write phases, which matches the number of fragment files on the file system.
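That correspondence can be checked mechanically by counting the write phases in the job output. A small sketch over the (abridged) log lines from above:

```python
# Count the write phases in the NBU job detail output; the count should
# equal the number of F* fragment files of the image. Lines are abridged
# copies of the log shown above.
log = """\
4:32:30 PM - begin writing
4:32:55 PM - end writing; write time: 00:00:25
4:33:06 PM - begin writing
4:33:30 PM - end writing; write time: 00:00:24
4:33:37 PM - begin writing
4:33:56 PM - end writing; write time: 00:00:19
4:34:07 PM - begin writing
4:34:29 PM - end writing; write time: 00:00:22
"""
write_phases = sum(1 for line in log.splitlines() if "begin writing" in line)
print(write_phases)  # → 4
```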
On DXi2, in the STS DXi2 and the LSU LSU2, the file system now contains the following files:
1349779456 Jul 26 16:32 dundee_1279020655_C2_F1_R1_1279020655.img
1359478784 Jul 26 16:32 dundee_1279020655_C2_F2_R1_1279020655.img
1447559168 Jul 26 16:33 dundee_1279020655_C2_F3_R1_1279020655.img
1657434112 Jul 26 16:34 dundee_1279020655_C2_F4_R1_1279020655.img
8192 Jul 26 16:34 dundee_1279020655_C2_HDR_R1_1279020655.img
136 Jul 26 16:32 .quantum_meta_dundee_1279020655_C2_F1_R1_1279020655.img
136 Jul 26 16:32 .quantum_meta_dundee_1279020655_C2_F2_R1_1279020655.img
136 Jul 26 16:33 .quantum_meta_dundee_1279020655_C2_F3_R1_1279020655.img
136 Jul 26 16:34 .quantum_meta_dundee_1279020655_C2_F4_R1_1279020655.img
136 Jul 26 16:34 .quantum_meta_dundee_1279020655_C2_HDR_R1_1279020655.img
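Comparing the two listings, the replicated files keep the size and fragment number of the source but carry copy number C2 and an extra _R1 token. A hypothetical helper deriving the replica name from a source name (the mapping is inferred from the listings above, not from any documented API):

```python
import re

# Hypothetical mapping from a source (C1) image name to the name its
# optimized-duplication replica gets on the target LSU: the copy number
# changes and an _R<n> token is inserted, as seen in the listing above.
def replica_name(src_name, copy=2, rep=1):
    return re.sub(
        r"_C\d+_(F\d+|HDR)_(\d+\.img)$",
        rf"_C{copy}_\1_R{rep}_\2",
        src_name,
    )

print(replica_name("dundee_1279020655_C1_F1_1279020655.img"))
# → dundee_1279020655_C2_F1_R1_1279020655.img
```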
If these files were removed in any way other than through OST, the configuration files of the DXi system would not be updated, and the LSUs could not be deleted without manually modifying those configuration files.
Each fragment is transferred from the source DXi system to the target DXi using the replication API, and an ACK is returned to the OST plugin for every fragment sent.
If an error is encountered during the transfer, WARN or ERROR messages will appear in both tsunami.log and the ostplugin logs.
This page was generated by the BrainKeeper Enterprise Wiki, © 2018