Snowball – Data Migration To AWS S3 Storage
Snowball is an AWS data migration solution that allows you to easily transfer large amounts of data into or out of the AWS cloud. It uses physical devices to migrate your on-site data into AWS. The devices are shipped to your site, where your data is transferred onto them over a high-speed local network. The devices are then shipped back to an AWS data center, where the data on each device is loaded into AWS S3 storage. The data migration process is simple, fast, secure, and cost-effective. See AWS Snowball for more information.
You can leverage a Snowball device if you need to migrate a large amount of StorNext Storage Manager-managed data into an AWS S3 bucket when creating a new AWS S3 storage tier.

StorNext Storage Manager maintains knowledge of the managed content in each storage tier, such as object name, store time, size, and so on, in its database. For most storage types, it populates the database tables in parallel with the store operations performed by its own store mechanism. Similarly, in order for StorNext Storage Manager to manage content that has been migrated using Snowball, the corresponding database tables must be populated after the Snowball migration.
Snowball offers two ways to transfer your data into the Snowball devices, as follows:
- Snowball Client: A standalone utility that you run on your local machine to perform the data transfer. It provides all the functionality you need to transfer data without extra coding, including handling errors and writing logs to your local machine. However, this data transfer does not involve StorNext Storage Manager. In order to manage the migrated content after the data has been loaded back into the AWS S3 bucket, you must use the StorNext fsobjimport utility to import the migrated content and populate the database tables. See Import Object Storage Media to import content that exists in object storage buckets.
- Amazon S3 Adapter for Snowball: A tool that transfers data programmatically, using a subset of the Amazon Simple Storage Service (Amazon S3) REST API. StorNext Storage Manager fully supports the needed REST APIs with the required signature behavior, such as non-chunked mode V4 signature signing. With this option, you can use StorNext Storage Manager to perform normal store operations to migrate data to the Snowball devices; it is fast and simple. See Migrate Data into AWS S3 Storage Using an AWS Snowball Device.
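The V4 signing mentioned above derives a signing key from the secret access key through a chain of HMAC-SHA256 operations, and signs each request with that key. The following is a minimal sketch of the key derivation only, using the example credentials published in the AWS Signature Version 4 documentation (not values from this procedure):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    # Derive the AWS Signature Version 4 signing key via chained HMAC-SHA256:
    # date -> region -> service -> "aws4_request".
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# Example values from the AWS Signature Version 4 documentation:
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                        "20150830", "us-east-1", "iam")
print(key.hex())  # c4afb1cc5771d871763a393e44b703571b55cc28424d1a5e86da6ed3c154a4b9
```

In non-chunked mode, the payload hash in the signed request covers the entire object body in a single signature rather than per-chunk signatures.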

Perform the following steps to migrate your data using the Amazon S3 Adapter for Snowball.
- Step 1: Create a Snowball Import Job using the AWS Snowball Management Console
- Step 2: Receive the AWS Snowball Device and Obtain Snowball Connecting Credentials
- Step 3: Connect the AWS Snowball Device to Your Local Network
- Step 4: Configure and Transfer Data to the AWS Snowball Device
- Step 5: Stop the AWS Snowball Client, and Power Off the AWS Snowball Device
- Step 6: Disconnect the AWS Snowball Device
- Step 7: Return the AWS Snowball Device
- Step 8: Monitor the Import Status
- Step 9: Redirect the Bucket’s I/O Path to the Amazon AWS Endpoint

Perform this step to create a Snowball import job using the AWS Snowball Management Console, provide shipping details, and provide IAM role and KMS key information.
Snowball devices are available in two storage capacities, listed below. Select the storage capacity that fits your needs.
- 50 TB of storage space
- 80 TB of storage space
- Log in to the AWS Snowball Management Console.
- Click Create Job.
- Click Import into Amazon S3, and then click Next.
- Complete the form and provide your shipping details.
Note: On this page, you provide the shipping address to which you want the Snowball for this job delivered. In some regions, you also choose your shipping speed. See Shipping Speeds for more information.
- Click Next.
- Provide the job details.
Note: On this page, specify the details of your job. These details include the name of your import job, the region for your destination Amazon S3 bucket, the specific Amazon S3 bucket to receive your imported data, and the storage size of the Snowball. If you don't already have an Amazon S3 bucket, you can create one on this page. If you create a new Amazon S3 bucket for your destination, note that the Amazon S3 namespace for buckets is shared universally by all AWS users as a feature of the service. Use a bucket name that is specific and clear for your usage.
- Click Next.
- Set security.
On this page, you specify the following:
- The Amazon Resource Name (ARN) for the IAM role that Snowball assumes to import your data to your destination S3 bucket when you return the Snowball.
- The ARN for the AWS Key Management Service (AWS KMS) master key to be used to protect your data within the Snowball.
- Click Create/Set IAM role. A new page appears requesting permission to use resources in your account.
- Click Allow to give AWS Snowball write access to resources in your account. AWS Snowball uses this role to import your data into Amazon S3. The security page appears, and the selected IAM role ARN is displayed.
- In the KMS key* field, click (default) aws/importexport.
Note: Alternatively, you can create a custom key in the AWS KMS service and use it. For example, the KMS key ID field displays a value such as: e8628981-e962-4f3e-badf-e2e56854defa
- Click Next.
- Set notifications.
Note: On this page, specify the Amazon Simple Notification Service (Amazon SNS) notification options for your job and provide a list of comma-separated email addresses to receive email notifications for this job. You can also choose which job status values trigger these notifications. Alternatively, click Don't send notifications.
- Click Next.
- Review the configuration and information you have provided.
Caution: Review this information carefully, because incorrect information can result in unwanted delays.
Note: To make changes, click the Edit button next to the step to change in the navigation pane, or click Back.
- Click Create job.
Once your job is created, you are taken to the Job dashboard, where you can view and manage your jobs. The last job you created is selected by default, with its Job status pane open.

When you receive the Snowball appliance, you will notice that it does not come in a box. The Snowball is its own physically rugged shipping container. When the Snowball first arrives, inspect it for damage or obvious tampering. If you notice anything that looks suspicious about the Snowball, do not connect it to your internal network. Instead, contact AWS Support and inform them of the issue so that a new Snowball can be shipped to you.
Each AWS Snowball job has a set of credentials that you must get from the AWS Snowball Management Console or the job management API to authenticate your access to the Snowball. These credentials are:
- An encrypted manifest file: The manifest file contains important information about the job and the permissions associated with it. Without it, you will not be able to transfer data.
- An unlock code: The unlock code is used to decrypt the manifest. Without it, you won't be able to communicate with the Snowball.
Note: You can only get your credentials after the Snowball device has been delivered to you. After the device has been returned to AWS, the credentials for your job are no longer available.
Use the following procedure to obtain your credentials by using the console.
- Sign in to the AWS Management Console and open the AWS Snowball Management Console.
- In the AWS Snowball Management Console, search the table for the specific job for which you want to download the job manifest, and then choose that job.
- Expand that job's Job status pane, and select View job details.
- In the details pane that appears, expand Credentials.
Note: Make a note of the unlock code (including the hyphens), because you will need to provide all 29 characters to transfer data.
- Click Download manifest in the dialog box and follow the instructions to download the job manifest file to your computer. The name of your manifest file includes your Job ID.
Note: As a best practice, do not save a copy of the unlock code in the same location on the workstation as the manifest for that job.
Now that you have your credentials, you are ready to transfer data.
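Because the full 29-character unlock code (including hyphens) must be supplied to the adapter, it can help to sanity-check that you copied it completely. The sketch below assumes a five-groups-of-five layout (25 alphanumeric characters plus 4 hyphens = 29); adjust the pattern if your code is formatted differently:

```python
import re

# Assumption: the 29-character unlock code is five 5-character alphanumeric
# groups separated by hyphens (25 characters + 4 hyphens = 29).
UNLOCK_CODE_RE = re.compile(r"[0-9a-zA-Z]{5}(-[0-9a-zA-Z]{5}){4}")

def is_valid_unlock_code(code: str) -> bool:
    # Sanity-check an unlock code before passing it to the Snowball adapter.
    return len(code) == 29 and UNLOCK_CODE_RE.fullmatch(code) is not None

print(is_valid_unlock_code("12345-abcde-12345-abcde-12345"))  # True
```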

- See Connect the AWS Snowball device to Your Local Network to connect the Snowball to your network.
Note: Quantum recommends you use a 10 Gigabit connection from the StorNext MDC to the Snowball device in order to quickly transfer large amounts of data.
- On the E Ink display, tap Network to configure the device's network using either DHCP or a static IP address. The IP address should be reachable from the StorNext MDC. Write down the IP address, as you will need it for subsequent configuration.

- Download the Amazon S3 Adapter for Snowball tool.
See AWS Snowball Resources to download the Amazon S3 Adapter for Snowball tool. For Linux, it is the aws-snowball-adapter-linux.zip file. Install the tool (unzip the file) on a powerful workstation. See Workstation Specifications to view suggested specifications for your workstation.
You will transfer your local data managed by Storage Manager from the StorNext MDC or distributed movers (if configured) to the workstation. This workstation can be the MDC, a distributed mover, or a separate machine.
- Start the adapter.
This step requires the following:
- The credentials from Step 2: Receive the AWS Snowball Device and Obtain Snowball Connecting Credentials.
- The AWS access key credentials.
- The IP address configured for the Snowball device.
- Navigate to the unzipped directory:
snowball-adapter-linux/bin
- Run the following command to make the utility executable:
chmod +x snowball-adapter
- Run the following command to make sure the command will run properly:
./snowball-adapter -h
Note: See the Readme.txt in the unzipped directory. By default, the Snowball adapter uses the credentials defined in ~/.aws/credentials, under the default profile. You can configure the credentials with the access key ID and secret access key from AWS.
- Run the following command:
./snowball-adapter -i <Snowball IP address> -m <path to manifest file> -u <29 character unlock code>
Note: This configuration uses the HTTP protocol and port 8080. If you want to use a different profile's credentials, use the -a <profile-name> option, or use the -s option to explicitly specify the secret access key. The other options are for HTTPS configuration, if HTTPS is preferred.
- Test access to the adapter. You can do this by using the AWS CLI or s3cmd to test the read, write, and list operations against the adapter.
Note: You must configure the AWS CLI to use V4 signing and the path-style URL. If there is an error accessing the Snowball device, check the log file under the ~/.aws/snowball/logs directory for troubleshooting. For additional troubleshooting information, see Troubleshooting for a Standard Snowball.
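The path-style URL requirement means the bucket name appears in the request path rather than in the host name, which matters here because the adapter is reached by a raw IP address. A small illustration of the two addressing styles (the endpoint, bucket, and object names are hypothetical):

```python
def path_style_url(endpoint: str, bucket: str, key: str) -> str:
    # Path-style addressing: the bucket name is part of the request path.
    # Required for the adapter, which is addressed by IP, so the bucket
    # cannot be encoded in the host name.
    return f"{endpoint}/{bucket}/{key}"

def virtual_hosted_url(scheme: str, bucket: str, host: str, key: str) -> str:
    # Virtual-hosted-style addressing: the bucket name is part of the host
    # name. This style does not work against the Snowball adapter.
    return f"{scheme}://{bucket}.{host}/{key}"

# Hypothetical adapter endpoint, bucket, and object:
print(path_style_url("http://192.0.2.10:8080", "my-bucket", "data/file1"))
# http://192.0.2.10:8080/my-bucket/data/file1
```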
- Prepare Storage Manager for the AWS S3 tier.
- Run the fsobjcfg command to configure the AWS appliance, controller, I/O path, and namespace.
- Configure the I/O path using the Snowball adapter's IP address, the path-style URL, the REST protocol (HTTP or HTTPS), and the port (8080).
- Configure the namespace with non-chunked mode (option -J) and V4 signing.
- Specify the copy number for the data to be transferred (option -c).
- Associate the namespace with a particular policy class that is associated with the relation points from which you want to transfer data.
- Configure the policy class to steer a copy to AWS media.
Note: If required, Snowball transfer supports AWS server-side encryption. Configure the policy accordingly.
- Run fsstore for each relation point from which you wish to transfer data into AWS Snowball.
Note: Wait for all of the store operations to complete before proceeding to the next step.
Note: If a policy is already storing data to AWS media, and the media is configured with the same policy class as the AWS Snowball or can be used by any policy class, then you should write-protect these media to direct the stores to the AWS Snowball. Use the fschmedstate command to write-protect the media.
- When all data store operations have completed, you should disallow further store and retrieval operations. Run the following command to change the state of the corresponding media (buckets) to unavail:
fschmedstate -s unavail mediaID
Note: For security reasons, you can also remove the credentials from the default profile if you do not plan to leave them there.

When you've finished transferring data onto the Snowball, prepare it for its return trip to AWS. To prepare it, run the snowball stop command in the terminal of your workstation. Running this command stops all communication to the Snowball from your workstation and performs local cleanup operations in the background. When that command has finished, power off the Snowball by pressing the power button above the E Ink display.

Disconnect the Snowball cables. Secure the Snowball's cables into the cable caddie on the inside of the Snowball back panel and seal the Snowball. When the return shipping label appears on the Snowball's E Ink display, you're ready to drop it off with your region's carrier to be shipped back to AWS.

The prepaid shipping label on the E Ink display contains the correct address to return the Snowball. The Snowball will be delivered to an AWS sorting facility and forwarded to the AWS data center. The carrier will automatically report back a tracking number for your job to the AWS Snowball Management Console. You can access that tracking number, and also a link to the tracking website, by viewing the job's status details in the console, or by making calls to the job management API.
Note: Unless personally instructed otherwise by AWS, never affix a separate shipping label to the Snowball. Always use the shipping label that is displayed on the Snowball's E Ink display.
Additionally, you can track the status changes of your job through the AWS Snowball Management Console, by Amazon SNS notifications if you selected that option during job creation, or by making calls to the job management API. For more information on this API, see AWS Snowball API Reference. The final status values include when the Snowball has been received by AWS, when data import begins, and when the import job is completed.

You can track the status of your job at any time through the AWS Snowball Management Console or by making calls to the job management API. For more information about this API, see AWS Snowball API Reference. Whenever the Snowball is in transit, you can get detailed shipping status information from the tracking website using the tracking number you obtained when your region's carrier received the Snowball.
To monitor the status of your import job in the console, sign in to the AWS Snowball Management Console. Choose the job you want to track from the table, or search for it by your chosen parameters in the search bar above the table. Once you select the job, detailed information appears for that job within the table, including a bar that shows real-time status of your job.
Once your package arrives at AWS and the Snowball is delivered to processing, your job status changes from In transit to AWS to At AWS. On average, it takes a day for your data import into Amazon S3 to begin. When it does, the status of your job changes to Importing. From this point on, it takes an average of two business days for your import to reach Completed status.

When the import is complete, run the fsobjcfg command to modify the I/O path and namespace accordingly.
- Do one of the following:
- If you have not configured AWS Storage, run the following command and options to change the I/O path's AWS S3 endpoint, port, protocol, URL style, signing mode, and so on:
fsobjcfg -m -o iopath_alias [-i connection_endpoint] -e https [-u PATH | VHOST] -n controller_alias
- If you configured a new appliance for the AWS Snowball and another appliance already exists specifying the AWS S3 endpoint, modify the appliance name for the AWS Snowball to specify the name of that AWS Storage appliance:
fsobjcfg -m -f media-id aws-appliance
You can then remove the appliance, controller, and I/O path configurations added for access to the AWS Snowball:
fsobjcfg -d -o snowball-io -n snowball-ctl
fsobjcfg -d -n snowball-ctl
fsobjcfg -d snowball
- If needed, run the following command and options to change the bucket's signature signing and credentials:
fsobjcfg -m -f mediaid [-U username] [-P password] [-S signing_type] [-J y|n]
- Run the following command to set the media state to available so that the media can be accessed:
fschmedstate -s avail mediaID
- Run the following command to check whether you can retrieve the objects you imported using the Snowball device:
fsretrieve
Note: You can only execute one recursive retrieve command at a time. If you execute multiple recursive retrieve commands concurrently, the processes fail and you are notified that an existing recursive retrieve command is in progress.