Solstice Backup 5.1 Administration Guide

Chapter 4 Device and Media Management

This chapter describes the device and media operations you can perform through the Backup server.

Device Configuration

A device is a drive that reads and writes data to storage volumes during backup, recover, and other operations. The Devices resource contains the attributes for each device. The instructions for configuring your devices differ depending on whether the device is standalone or is contained in an autochanger or silo.

For the Backup server to recognize your storage devices, you must configure each storage device individually.

If you use tape drives as your storage devices, you must use no-rewind devices, because Backup writes a file mark on the volume at the end of each backup and then appends data onto the volume based on the position of the file mark. If the device rewinds the media, the file mark position is lost and previously written data is overwritten by the next backup. The pathnames for these devices must follow the Berkeley Storage Device (BSD) semantic rules, for example, /dev/rmt/0mbn. The b in the pathname satisfies the BSD semantics requirement.

If you use a file device, you must enter it as a directory path (the same as other device types) rather than as just a filename. The path /tmpfs is not allowed on Solaris servers.

Storage Devices and Media Types Supported by Backup

Backup ships with the following list of supported storage devices and corresponding backup media types:

Standalone Device Configuration

If you have a standalone device attached to the Backup server or storage node, display the Devices resource on the Backup server and enter or change the settings in that resource's attributes.

Autochanger Device Configuration

Machines such as autochangers and silos contain several devices. Configuring the devices in such a machine involves several steps, which differ depending on whether the machine is an autochanger or a silo.

To configure the devices in an autochanger, install and enable the Backup device drivers on the Backup server or storage node machine, then use the jb_config program to configure the autochanger and define the individual devices in the autochanger in the Devices resource. For detailed information about autochangers, see Chapter 7, Autochanger Module.

To configure devices in a silo for Backup to use, first install and enable the Backup Silo Support Module on the Backup server or storage node machine. Then use the jb_config program to configure the silo and its devices. Do not use the Devices resource to change or delete devices in a silo. See Chapter 10, Silo Support Module for more details about silos.

Hardware Compression Versus Software Compression

Backup client machines can compress data during backup, before the data is moved over the network or written to tape. You can implement software compression by selecting compression directives in the Clients resource or adding compressasm to a custom backup command. The compressasm feature typically achieves a 2:1 compression ratio. In addition to the performance advantages of moving less data across the network, software compression works better than some types of hardware compression in cases where a tape has a bad spot.
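
For example, you can apply software compression selectively with a directive. The following is a minimal sketch of a directive entry that applies compressasm recursively to files under /home; the path and file pattern are illustrative only, and the directive syntax should be verified against your Backup client documentation:

<< /home >>
+compressasm: *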

To handle EOT (end of tape) errors caused by bad spots on a tape, Backup maintains a fixed-size write-behind buffer. When Backup requests the next tape, it flushes the write-behind buffer to the new tape. (The EOT error is not handled if the size of the unflushed data is greater than the Backup buffer.) The write-behind buffer has a finite size so that it can handle noncompressing tape drives. The buffer also works with tape drives that compress data as it is written from the drive's buffer to tape, but not with drives that compress data as it is copied into the drive's buffer. Because of compression, the drive's buffer can represent 1.5 to 3 times as much data as it physically holds, byte for byte, and possibly much more (some drives claim compression ratios of 10:1). To handle a best-case 10:1 compression ratio, the write-behind buffer would have to be very large; the real memory and swap space such a buffer consumes make this prohibitive.

Use the following tips to decide which compression method is better for your environment:

Storage Nodes, Remote Devices, and Multiple Network Interfaces

You can control most operations on local and remote devices, including autochangers and silos, from the Backup administration program on the server. But for some remote autochanger operations (for example, reset) you must use the nsrjb command or the jb_config program on the storage node machine. During data transfer operations, the Backup server uses remote devices the same way it uses local devices.
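
For example, to reset a remote autochanger you might log in to the storage node machine and run a command like the following. This is a sketch only; verify the option against the nsrjb man page:

# nsrjb -H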


Caution -

Backup clients at release 4.2 and later are able to use remote devices for backup, archive, and HSM (hierarchical storage management) functions. Earlier Backup clients cannot back up data to remote devices.


This section also discusses network interfaces. You can change the default network interface, and you can direct different clients to different network interfaces on the same storage node.

Remote Device Configuration

You configure remote standalone devices in an administration session with the controlling Backup server the same way you configure a standalone device that is connected to the Backup server. When you create each device, add a prefix to the device name that includes rd= and the storage node's hostname. For example, rd=omega:/dev/rmt/1mbn creates a device called /dev/rmt/1mbn on a storage node machine called omega. For specific instructions, see the online help for configuring devices.
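
For example, in an nsradmin session on the controlling server, a remote device might be created with a command similar to the following. This is a sketch; the hostname, device path, and media type are illustrative:

nsradmin> create type: NSR device; name: rd=omega:/dev/rmt/1mbn; media type: 8mm 5GB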

There are two steps to configure a remote autochanger or silo device. First, verify that the storage node is listed in the Administrator attribute in the Server resource of the controlling server. It must have the form root@hostname, where hostname is the hostname of the storage node. Then run the jb_config program on the storage node machine to define each device in the autochanger or silo. See "jb_config" or refer to the jb_config man page for the syntax and options for this program.

When the jb_config program completes, you can remove the storage node's hostname from the Administrator list. If you add another autochanger later, you must add the storage node's hostname to the Administrator attribute again before running the jb_config program.
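
As an illustration, the whole sequence might look like the following, where omega is a hypothetical storage node and the attribute values are a sketch to be verified in your own nsradmin session:

server# nsradmin
nsradmin> . type: NSR
nsradmin> update administrator: root@server, root@omega
nsradmin> quit
omega# jb_config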

Multiple Network Interfaces

If you prefer to use an interface other than the default, enter the preferred interface in the client's Server Network Interface attribute.

A storage node can have multiple network interfaces defined for save operations. You specify the storage node's interfaces in the Storage Nodes attribute list, which allows different clients to use different network interfaces for the same storage node.
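
For example, to have the client mars reach the storage node omega through its FDDI interface, you might list that interface's hostname first in the client's Storage Nodes attribute. The hostnames here are hypothetical:

nsradmin> . type: NSR client; name: mars
nsradmin> update storage nodes: omega-fddi, nsrserverhost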

Media Management

This section gives conceptual information about the media management features of Backup. You configure media management functions using the Backup GUI administration program (nwadmin), the nsradmin interface, or the nsrmm command. Detailed explanations of specific attributes are available in the online help. Refer to the nsradmin and nsrmm man pages for details concerning these Backup interfaces.

Pools

A pool is a specific collection of media to which Backup writes data. Backup uses pools to sort and store data. The configuration settings for each pool act as filters that tell Backup which volumes should receive specific data. Backup uses pools in conjunction with label templates to keep track of which data is on which specific volume. For detailed information about label templates, see "Labeling Storage Volumes".

How Backup Uses Pools

The way you configure pools determines which volumes receive data. Each pool configuration contains a list of criteria that the data must meet for the data to be written to associated volumes. When you specify save sets to include in a pool, you can specify exact save set names, or you can use regular expression matching to send a group of save sets to a specific pool. For an example using regular expression matching, see "Example: Directing Client Indexes and Bootstrap to a Separate Pool". For detailed information about regular expression matching, refer to the nsr_regexp man page.

When a scheduled backup occurs, Backup tries to match the save set to a pool configuration. If the save set matches the criteria of a pool configuration, Backup directs the save set to a labeled volume from that pool.

Backup then checks to see whether a correctly labeled volume is mounted on a storage device. If a correctly labeled volume is mounted on a storage device, Backup writes data to the volume. If a correctly labeled volume is not mounted on a storage device, Backup requests that such a volume be mounted and waits until an operator mounts the appropriate volume.

Backup Pool Types

Backup provides preconfigured pool types to keep different types of data separate. Backup does not mix the following types of data on volumes within a pool:

  • Backup
  • Backup clone
  • Archive
  • Archive clone
  • Migration
  • Migration clone

Unless you specify other pools, all backup data is routed to the Default pool and all archive data is routed to the Archive pool. Cloned backup data is routed to the Default Clone pool, and cloned archive data is routed to the Archive Clone pool.

How Backup Uses Pool Criteria to Sort Data

When you configure Backup, you can create additional pools and sort data by pool type and any combination of the following criteria:

  • Group
  • Client
  • Save set
  • Level

If you begin by entering a group name in the Group attribute, the pool is immediately restricted to accept only data associated with the named group. If you add a second group name to the Group attribute, the pool accepts data associated with either group, but no others. Entries for a single attribute function as "OR" clauses; that is, the pool accepts data from clients in either group.

Each of the four configuration criteria, however, functions with the others as an "AND" clause. That is, if you enter configuration criteria in both the Group attribute and Save Set attribute, only data that meets both the Group criteria and the Save Set criteria is written to volumes from the specified pool.

You cannot create pools that share identical settings for pool type, group, client, save set, or level. If the settings for a new pool match the settings for an existing pool, you receive a warning message. Change the appropriate settings and reapply to save the pool resource.

For further information about save sets, see "Specifying Which Data Is Backed Up". For further information about groups or backup levels, see "Backup Levels".

Example: Directing Client Indexes and Bootstrap to a Separate Pool

You can use regular expression matching to direct the client file indexes and bootstrap to a different pool from the one that receives the backup data.

In the following example, the client file indexes are in /nsr/index. To send the Backup server's bootstrap and all the client file indexes from this filesystem to the same pool, create a pool (in the Pools resource) with the following attributes:


name: Index;
pool type: Backup;
save sets: bootstrap, /nsr/index/.*;
levels: ;

When the group's scheduled backup runs, the client save sets are written to a volume labeled for the appropriate save set pools, while the Backup server's bootstrap and /nsr/index save sets are written to a separate volume labeled for the "Index" pool.

When Data Meets the Criteria for More Than One Pool Configuration

Depending on the pool configurations you create, you might have data that matches the criteria for more than one pool configuration. For example, if you configure one pool to accept data from a group called "Accounting," and you configure another pool to accept data from all full backups, Backup has to determine to which pool a full backup for the Accounting group is written. Backup uses the following pool selection criteria:

  1. Group (highest precedence)

  2. Client

  3. Save set

  4. Level (lowest precedence)

When data matches the attributes of two pools, for example, one selected by Group and another by Level, the data is written to the pool specified by the Group attribute, because Group has higher precedence. In the example above, where the data matched the criteria for both the pool configured to accept data from the Accounting group and the pool configured to accept data from all full backups, the data is routed to the pool that accepts data from the Accounting group.

When Data Does Not Meet the Criteria for Any Pool

When you use customized pool configurations to sort your data, you might inadvertently omit a client or save set. During a scheduled backup, if data does not meet the criteria for any customized pool configuration, Backup automatically sends the data to the Default pool. Backup uses the Default pool to ensure that all data for clients in a backup group is backed up to a volume.

When Backup sends data to the Default pool, Backup looks for a labeled volume from the Default pool mounted on a storage device. If no Default pool volume is mounted on a storage device, Backup requests the appropriate volume and waits until an operator mounts the volume. If Backup asks for a Default pool volume in the middle of a scheduled backup but an operator is not present to mount a Default pool volume, the backup pauses until an operator mounts a Default pool volume. If you have an operator available to monitor the backups, it is a good idea to keep a volume labeled for the Default pool close at hand in case this situation unexpectedly arises.

If you plan to use Backup for unattended backups, run a test of the backup after making any configuration changes to ensure that all data is written to the appropriate volumes and to avoid an unexpected Backup request for a Default pool volume. For the procedure to test your scheduled backup, see "Immediate Start of a Scheduled Group Backup".

Configuring a Pool for Incremental Backups

If you want to create a separate pool for incremental backups, be aware that the Backup hierarchy of precedence affects the way the data is stored. If the Level attribute value is "incremental," incremental data is routed to the associated pool but the corresponding changes to the client's file index are not. Backup saves all client file indexes at level 9 to speed the recovery operation, if one is needed.

If the client file indexes do not meet the criteria for the pool associated with the incremental backups, Backup matches the indexes to another pool (usually the Default pool) and looks for an appropriately labeled volume to write to. If you need to recover your data, you might have to use a large number of volumes to recover all your data. Thus, to store the client file indexes along with the incremental backup data and to speed the recovery operation, define the Level value in the Pools resource to accept both level 9 and incremental data.

You can use the Backup preconfigured NonFull pool settings to ensure that the client file indexes belong to the same pool as their incremental backups. When you keep the indexes in the same pool as their incremental backups, you reduce the number of volumes you need for a recovery.

Configuring a Pool for Clone Data

If you want to clone data, Backup requires a specific pool to receive the clone data and a minimum of two devices: one to read the source volume and one to write the clone. If you do not associate the data to be cloned with a customized clone pool, Backup automatically uses the Default Clone pool. You must mount an appropriately labeled volume on a separate storage device for the cloning process to proceed smoothly. See "Cloning" for more information on the Backup cloning feature.

Configuring a Pool for Archive Data

If you want to use Backup Archive to archive data, Backup requires a specific pool to receive the archive data. You can then store these volumes offsite, if you want. If you do not associate data to be archived with a customized archive pool, Backup automatically uses the preconfigured Archive pool. You must mount an appropriately labeled volume on a storage device for the archive process to proceed smoothly. See Chapter 6, Backup Archive for more information on the Backup archive feature.

Configuring a Pool for Migration Data

If you use the HSM feature, Backup requires a specific pool to receive the premigrated and migrated save sets. If you do not associate the migration data with a customized migration pool, Backup automatically uses the preconfigured Migration pool. You must mount an appropriately labeled volume on a storage device for the premigration and migration processes to proceed smoothly. Refer to your Backup HSM documentation for more information on the Backup HSM feature.


Caution -

Archive and migration data are in a different format than regular Backup save set data. Therefore, they must be written to different volumes. Because of these differences, the client file indexes and bootstrap save set created during an archive, premigration, or migration operation are also not written to the same volume as the archived or migrated save sets. By default, they are written to a volume from the Default pool. If you need to direct the client file indexes and bootstrap to a volume pool other than Default, see "Example: Directing Client Indexes and Bootstrap to a Separate Pool " for information.


Configuring a Pool for Manual Backups

You can create a customized pool to receive data from a manual backup by specifying "manual" in the Level attribute. Backup, however, sorts data from a manual backup differently than data from a regularly scheduled backup. Because a manual backup is not performed as part of a scheduled backup group, the data is not associated with any group name. Thus, when you perform a manual backup in which only a single client's save set data is saved, the group normally associated with that client's save set is not included as a criterion for pool assignment. As a consequence, data from a manual backup may be sent to a different pool than the pool in which data from this client's save set is stored during a regularly scheduled backup operation.

If you do not create a customized pool to receive data from manual backups, Backup uses the Default pool and looks for a mounted volume from the Default pool on which to write manually backed-up data. Because Backup tracks the volume location of all backup data, you do not need to worry about tracking which volume contains the manually backed-up data. If you need to recover the data, Backup requests the correct volume.


Caution -

When you perform a manual backup, the client index and server bootstrap are not included in the backup. If you never perform regularly scheduled backups of the clients and server machines, the information vital to data recovery in the event of a disaster is not available.


Using Storage Devices and Pool Configuration to Sort Data

You can configure pools to sort data to different storage devices. You can either use specific media to receive data or designate a specific storage device to receive data from a designated pool.

Volume Pools for Backup Data Directed to a Specific Device

You can associate a pool with a specific storage device. For example, you may want your full backups written to optical disk for off-site storage. You have two ways to ensure that data goes to one specific storage device:

Volume Pools for Backup Data Written to Different Media Types

You can write data across several volumes of different media type (for example, magnetic disk and tapes), as long as the volumes mounted on the storage devices have the appropriate label associated with the pool.

Labeling Storage Volumes

Backup labels (initializes) each storage volume with a unique internal label that corresponds to a pool. During backup and other operations, Backup can identify the pool to which a volume belongs by its label. Backup applies a label template to create a unique internal label for each volume.

Backup uses label templates and pool configuration settings to sort, store, and track data on media volumes. If you need to recover data, Backup prompts you for the specific volume that contains the required data, by volume name and sequence number.

How Backup Uses Label Templates

Backup writes a given set of data to a specific pool. For Backup to recognize that a particular volume belongs to the correct pool, the volume must have an internal identification label that associates it with that pool. The contents of the volume label follow rules defined in a specific label template that you create in the Label Templates resource. You then associate a label template with a specific pool in the Pools resource. If you do not associate data with a specific pool, Backup uses the preconfigured Default pool and corresponding Default label template. Figure 4-1 illustrates how a pool configuration uses its associated label template to label a volume. You must configure a label template before you configure the associated pool, so that your custom template is available in the Pools resource.

Figure 4-1 How Backup Labels a Volume Using a Label Template


How to Customize Label Templates

To customize label templates, display the Label Template resource and specify values for the following attributes:

Table 4-1 Examples: Number Sequences for Volume Labels

Type of Components            Fields      Number Sequence Result       Total Number of Labels
----------------------------  ----------  ---------------------------  ----------------------
Range of numbers              001-100     001, 002, 003, ... 100       100

Character string,             SalesFull   SalesFull.001, ...           100
range of numbers              001-100     SalesFull.100

Range of lowercase letters,   aa-zz       aa.00, ... aa.99,            67,600 (26^2 x 10^2)
range of numbers              00-99       ab.00, ... ab.99, ...
                                          ba.00, ... ba.99, ...
                                          zz.00, ... zz.99

Your label template should allow for expansion of your backup media storage system. For example, it is better to create a template for 100 tapes and not use all of them than to create a template for only 10 tapes and run out of labels. When Backup reaches the end of the template number sequence, Backup wraps around to the start value. In Table 4-1, for example, after Backup uses zz.99 for the 67,600th label, Backup uses aa.00 for the 67,601st label.

Using Label Template Components

Backup is shipped with preconfigured label templates that correspond to the preconfigured pools. If you choose to create your own templates, you can include as many components in the Fields attribute as necessary to suit your organizational structure. However, it is a good idea to keep the template simple with few components. For example, if you create a label template for your Accounting Department, you can customize your label template in several ways, depending on the size of your storage system and media device capabilities. Table 4-2 illustrates several ways you can use components to organize your labels.

Table 4-2 Label Template Components

Type of Organizational Structure     Fields (Components)   Separator    Resulting Volume Labels
-----------------------------------  --------------------  -----------  ------------------------------
Sequential                           AcctFull              Period       AcctFull.001
                                     001-100                            (100 total labels)

Storage oriented (for example, 3     1-3                   Dash         1-1-001, the first tape in
storage racks with 5 shelves each,   1-5                                rack 1 on shelf 1
each shelf holding 100 tapes)        001-100                            (1,500 total labels)

Two-sided media (for example,        AcctFull              Underscore   AcctFull_000_a (side 1)
optical devices)                     000-999                            AcctFull_000_b (side 2)
                                     a-b                                (2,000 total labels)

Storage Management Operations (Labeling and Mounting)

The internal label on a volume contains a unique name that Backup uses to track and recognize storage media. In the media database, Backup refers to volumes by their volume labels. Backup uses the media database records to determine which volumes are needed for backing up or recovering data.

Every volume belongs to a pool. Each pool has a matching label template associated with it. Volumes are labeled according to the rules of these label templates. Label templates provide a way to consistently name and label volumes so you do not have to track the number of volumes you have used. You can use the preconfigured pools and preconfigured (and associated) label templates that come with the Backup product, or create your own pools, label templates, and pool/template associations. Customizing your own label templates gives you more control over your data storage organization.

When you put a new internal label on a volume or relabel a volume to recycle, any existing data stored on the volume under the previous label is no longer available for recovery.

Backup Criteria for Volume Selection and Mounting

When a scheduled or manual backup occurs, Backup searches for a volume from the appropriate pool to accept the data that needs to be written. The storage volumes available for Backup to use are the volumes that are mounted on standalone devices and the volumes accessible to Backup through auto media management or available to Backup through an autochanger or silo.

If you try to back up files when an appropriate volume is not mounted, Backup requests a writable volume by displaying a message similar to the following in the Pending display:


media waiting: backup to pool `Default' waiting for 1 writable
backup tape or disk

When you start a data recovery, Backup displays a message in the Pending display that requests a mount of the volume name that contains the backed-up data, as in:


media waiting: recover waiting for 8mm 5GB volume-name

If you need more than one volume to recover the files, the Pending display lists all of the volumes in the order they are needed. During the recovery process, Backup requests the volumes it needs, one at a time.

If you mount more than one volume on the storage devices used by Backup, Backup uses the following hierarchy to select a volume on which to write data:

How to Label a Volume

A volume label is a unique internal code applied by Backup that initializes the volume for Backup to use and identifies a storage volume as part of a specific pool. To label a volume, follow these steps:

  1. Place an unlabeled or recyclable volume in the Backup storage device.

  2. Use Backup to label the volume. You can use either the Backup administration program or the nsrmm command; a command-line example follows these steps.

    There are three options:

    • If you do not select a pool for the volume that you are about to label, Backup automatically applies the label template associated with the Default pool.

    • To create individual label names not associated with a template, edit the Volume Name attribute in the Label resource and enter a unique label name.

    • If you enable the Manual Recycle attribute when you label a volume, the volume cannot automatically be marked as recyclable according to the retention policy. Only an administrator can mark the volume recyclable.

    When Backup labels a volume, Backup first verifies that the volume is unlabeled. Then Backup labels the volume with the name specified in the Volume Name attribute, using either the next sequential label from the label template associated with the chosen pool or an override volume name you entered.

    If you relabel a recyclable volume from the same pool, the volume label name and sequence number remain the same, but access to the original data on the volume is destroyed and the volume becomes available for new data.

    After a volume is labeled and mounted in a device, the volume is available to receive data. Because the Backup label is internal and machine-readable, it is a good idea to put an adhesive label on each volume that matches the internal volume label. To use barcode labels with an autochanger, see "How Backup Uses Barcode Labels with Autochangers". To use barcode labels with a silo, see "Labeling Volumes in a Silo".
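
For example, from the command line (step 2 above), you can label the volume loaded in a standalone device for a specific pool with a command similar to the following. The pool name and device path are illustrative; verify the options against the nsrmm man page:

# nsrmm -l -b Index -f /dev/rmt/0mbn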

How to Mount or Unmount a Volume

When you issue the command to mount a volume or when Backup mounts a volume through auto media management, a volume that is loaded in the storage device is prepared to receive data from Backup. For example, when a tape is mounted, the read/write head of the device is placed at the beginning of the blank part of the tape, ready to write.

To mount the volume in the device, you can use either the Backup administration program or the command line.

After you label and mount a volume, the volume name is displayed in the Devices resource beside the pathname of the device in the Backup administration program.

To perform an unattended backup using a standalone device, you must mount labeled volumes in the device before leaving it unattended.
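
For example, the following commands are a sketch of mounting and then unmounting a volume with nsrmm; the device path is illustrative, and the options should be verified against the nsrmm man page:

# nsrmm -m -f /dev/rmt/0mbn
# nsrmm -u -f /dev/rmt/0mbn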


Caution -

You can only use nonrewinding devices with Backup. If you use a rewinding device, the read/write head is repositioned at the beginning of the volume and the previously backed-up data is overwritten.


Timeout Settings for Remote Devices

You can let a mount request on a remote device time out so that the save is redirected to another storage node. Set the Save Mount Timeout and Save Lockout attributes in the Devices resource to change the timeout of a save mount request on a remote device. If the mount request is not satisfied within the number of minutes specified by the Save Mount Timeout attribute, the storage node is locked out from receiving saved data for the number of minutes specified by the Save Lockout attribute. The default value for Save Mount Timeout is 30 minutes. The default value for Save Lockout is zero, which means the device on the storage node continues to receive mount requests for the saved data.
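
For example, the following nsradmin sketch sets a 30-minute mount timeout and a 10-minute lockout on a remote device; the device name and the values are illustrative:

nsradmin> . type: NSR device; name: rd=omega:/dev/rmt/1mbn
nsradmin> update save mount timeout: 30; save lockout: 10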


Caution -

The Save Mount Timeout attribute applies only to the initial volume of a save request.


How to Find a Volume Name

If the adhesive label on the volume is missing or illegible, you can determine the volume's name from either the Backup administration program or the command line.

How Backup Selects a Storage Volume for Relabeling

Backup data is destined for volumes from a specific pool. When the data is ready to be written, Backup monitors the active devices to locate a volume from the appropriate pool.

If only one volume from the pool is mounted and appendable, the data is directed to that volume.

If two volumes from the same pool are mounted on devices, Backup considers the following factors to guide its volume selection:

If Backup cannot find a mounted volume from the appropriate pool, a mount request is initiated. If auto media management is not enabled or if Backup has only standalone devices available, mount requests continue to be generated until a volume is mounted and backup can begin.

Auto Media Management

The auto media management feature gives Backup automatic control over media loaded in the storage device. If you enable the auto media management feature in the Devices resource, Backup automatically labels, mounts, and overwrites a volume it considers unlabeled, and automatically recycles volumes eligible for reuse that are loaded into the device. The auto media management feature is only enabled for standalone devices in the Devices resource. To enable auto media management for devices in an autochanger, see "Auto Media Management With Autochanger Devices".

Backup considers a volume unlabeled under any of the following conditions:

  • The volume has no internal label.
  • The volume is labeled with information other than a Backup label.
  • The volume has a Backup label, but the density indicated on the internal label differs from the density of the device where the volume is mounted.

Because the auto media management feature can relabel a volume with a different density, it is possible to inadvertently overwrite data that still has value. For this reason, be careful if Backup volumes are shared between devices with different densities.

If you do not enable the auto media management feature, Backup ignores unlabeled media and does not consider it for backup.

If you enable the auto media management feature for a standalone device, Backup exhibits the following behavior when a volume becomes full during a backup:

  1. Backup issues a notification that it is waiting for a writable volume. At the same time, Backup waits for the full, verified volume to be unmounted.

  2. Backup monitors the device and waits for another volume to be inserted into the device.

  3. After a volume is detected, Backup checks whether the volume is labeled. If it is, Backup mounts the volume and checks whether the volume is a candidate to receive data. If it is, the write operation continues. If it is not, Backup continues to wait for a writable volume to continue the backup.

  4. If the volume is recyclable and is a member of the required pool, Backup recycles it the next time a writable volume is needed.

  5. If the volume is unlabeled, Backup labels it when the next writable volume is needed for a save.

In general, if a non-full volume is unmounted from a standalone drive and auto media management is enabled, Backup waits 60 minutes before it automatically remounts the volume in the drive. This hour is considered a reasonable delay to give you or an operator time to unload the volume after unmounting it.


Caution -

If auto media management is enabled, Backup considers volumes that were labeled by a different application to be valid relabel candidates. Once Backup relabels the volume, the previously stored data is lost.


Storage Volume Status

Different reports and windows provide information on the status of storage volumes using parameters such as Written, %Used, Location, and Mode. This section defines some of the most common terms contained in reports about volumes.

In the Backup administration program, the volume name displayed is the same as the name that appears on the volume label. At the end of the volume name, the following designations might be displayed:

The value of Written always indicates the exact number of bytes written to the volume.

The value of %Used is based on an estimate of the total capacity of the volume, which is derived from the specified value of the Media Type of the device resource. Backup does not use the value of %Used to determine whether to write to a volume. Even if a volume is marked 100% used (a %Used value of 100% means that the value of Written is equal to or exceeds the estimate for the volume), Backup continues to write to the volume until it is marked full. Backup marks a volume full when it reaches the end of the media or encounters a write error.

The storage volume location refers to a character field you define in the Volumes resource that describes a physical location meaningful in your environment, such as 2nd shelf, Cabinet 2, Room 42.

Table 4-3 lists all the possible storage volume modes and their definitions within Backup.

Table 4-3 Storage Volume Modes

Mode Value  Meaning          Description
----------  ---------------  ---------------------------------------------------------
appen       Appendable       This volume contains empty space. Data that meets the
                             acceptance criteria for the pool to which this volume
                             belongs can be appended.

man         Manual recycle   This volume is exempt from automatic recycling. The mode
                             can only be changed manually.

(R)         Read-only        The save sets on this volume are considered read-only.
                             The mode can only be changed manually.

recyc       Recyclable       This volume is eligible for automatic recycling. Before
                             the volume can be overwritten, it must first be relabeled.

In general, a storage volume becomes recyclable when all the individual save sets located on the volume have assumed the status of recyclable. For more information about save set status, see "Save Set Status Values".

Save Set Staging

Save set staging is a process of moving data from one storage medium to another and removing the data from its original location. If the data was on a file device type, the space is reclaimed so that the disk space can be used for other purposes. Use save set staging to move save sets that you have backed up, archived, or migrated. Staging is especially recommended for save sets that you backed up to a file device type to move the data to more permanent storage, such as an optical or tape volume.

You can configure policies in the Staging resource to have Backup perform automatic staging once the criteria you set are met, or you can use the nsrstage program to perform staging manually.

When you issue the nsrstage command, Backup creates a clone of the save set you specify on a clone volume of the medium you specify. If you stored the save set on a file device type, Backup deletes the save set from its original location to free the space the save set occupied. Backup tracks the location of the save set in the media database. The retention policy for the save set does not change when the data is staged.

To stage a save set using the command line, enter the nsrstage command at the shell prompt. For example, to stage an individual save set, enter the following command:


# nsrstage -s server -b pool -m -S save-set-ID

Refer to the nsrstage(1m) man page for the syntax and options for the nsrstage program.

To set or change staging policies, use the nsradmin command, or use the Customize resource in the nwadmin GUI. Refer to the online help for more details about the Staging resource.

Cloning

Cloning is a process of reproducing complete save sets from a storage volume to a clone volume. You can clone save set data from backups, archives, or migration. You can clone save sets automatically (as part of a backup, archive, or migration operation) or manually at another time.

Use cloning for higher reliability or convenience. For example, you can store clones offsite, send your data to another location, or verify backed-up data.

The cloning operation happens in two steps: first, Backup recovers data from the source volume. Then, Backup writes the data to a clone volume (a volume from a pool of type "clone"). Cloning requires at least two active devices, because one is required for reading the source volume and one is required for writing the new, cloned data. During cloning, the reproduction of data is from source volume to clone volume. Cloning does not involve data stored on the clients or server. Backup allows only one clone of a save set per volume. Therefore, if you specify three clones of a save set, each is written to a separate volume.

Automatic cloning (that is, cloning associated with a scheduled group backup operation) is performed after all backup operations are complete. The savegroup completion report that is issued after a scheduled backup also includes a report of the success or failure of the cloning operation for each save set.

The location of the devices where the clone data is written is determined by the list in the Storage Nodes attribute in the Clients resource for the Backup server. You can add or remove the names of storage nodes and the Backup server at any time, but you cannot have a different list of storage nodes to receive clone data than to receive backup data.

If you want to perform cloning long after a group has finished, you must do the cloning manually, volume by volume, or from the command line using a script in combination with a batch file. If you execute cloning manually, no report is generated.

When you clone data, different capacities of storage media may mean that more or fewer clone volumes are required. The cloning operation leaves traceable information entries in both the client file index and the media database. The capability to track cloned data distinguishes cloning from an operating system or hardware device copy operation.

To initiate cloning for a complete scheduled backup operation, enable cloning as part of the Group configuration. To clone individual save sets or clone single storage volumes, use the Save Set Clone or Volume Clone windows in the nwadmin GUI, or the nsrclone program from the command line.
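
For example, from the command line you might clone an entire volume or a single save set with commands similar to the following, where server, the pool name, and the identifiers are placeholders:

# nsrclone -s server -b "Default Clone" volume-name
# nsrclone -s server -b "Default Clone" -S save-set-ID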

When you specify that a particular volume be cloned, Backup uses the save sets on the specified volume as the source data.

When you specify a clone of a particular save set, Backup determines whether the save set already has a clone. If multiple clones of a save set exist, clones of save sets on volumes in an autochanger are generally selected as the source data, rather than a volume that requires human intervention to mount. Command line options enable you to specify the precise save set clone to use as the source, if you want.

If you execute a clone operation manually, no completion report is generated. Messages generated by the nsrclone program are displayed in a message window in the administration program's GUI and are also logged to the /nsr/logs/messages Backup message file.

Clone Storage Node Affinity

The link between a storage node client's resource and the list of storage nodes available to receive cloned save sets from that client is called "clone storage node affinity." Data is cloned from the media that contains the original save sets to media on the specified clone storage node. You define clone storage node affinity in the Clone Storage Nodes attribute, which is found in the Clients resource of a storage node. When you change the Clone Storage Nodes attribute in the Client resource for a storage node client, the changed value is propagated to any additional Clients resources configured for that storage node client.

The Clone Storage Nodes attribute allows you to specify a different network interface for storage nodes that perform cloning operations than the network interface you specify for the storage node's remote device.

The server uses the exact hostname you specify in the Clone Storage Nodes attribute, instead of the hostname prefix of the remote device name configured in the Devices resource.

When a volume is being cloned, the Backup server checks the value of the Clone Storage Nodes attribute for that storage node client. If the Clone Storage Nodes attribute has a null value, then the value listed in the server's Clone Storage Nodes attribute is used. If that list also contains a null value, then the server's Storage Nodes attribute is used.

Compatibility is maintained with the existing clone function, which follows the server's Storage Nodes attribute.

To independently direct clones from each storage node, add the hostname of the storage node you want to receive the directed clones to the Clone Storage Nodes attribute in the Client resource configured for the storage node. The first entry made on the list that has a functional, enabled device is selected to receive the cloned data from the storage node.

To direct clones from all storage nodes to the same destination, leave the Clone Storage Nodes attribute blank for the Clients resources you configure for the storage nodes, and only configure the Backup server's Clone Storage Nodes attribute. This tactic provides a single source of control for clone destination.

The file index and media database entries for the save sets cloned to media on a remote device on a storage node still reside on the Backup server, which enforces the browse and retention policies in the same manner as for any cloned save sets that reside on the media in a device that is locally-attached to the server.

Cloning Versus Duplication of Volumes

When you clone a volume, the volume is not simply duplicated. Each save set on the volume is reproduced completely, which could mean that more or less space is used on the clone volume than on the source volume.

You might prefer to make exact copies (duplicates) of Backup volumes to provide additional disaster recovery protection. This approach, which on UNIX relies on the tcopy command, is not recommended, but it might serve a specific environment adequately. If you rely on an exact copy command, you must first ensure that the destination volume can hold the number of bytes contained on the source Backup volume. In addition, be aware that Backup has no knowledge of the duplicated volume, because the volume is not entered into the server's media database. If you enable auto media management and leave the duplicate volume in an autochanger managed by Backup, the volume may be considered eligible for relabeling and reuse during a scheduled backup, because it does not have a valid Backup label.

Similarly, it is possible to make an exact copy of an archive volume. However, the annotation that is associated with each archive save set is information that is stored in the Backup server's media database, not on the volume itself. Therefore, a duplicate volume of the archived save set does not include the annotation. If the entry of the original archive save set is removed from the media database, the annotation that describes it is also removed.

Cloning and Data Tracking Information

The clone operation does not insert entries into the client file index. Cloned save sets are only tracked through the media database. During a clone operation, the location of a cloned save set is added to the existing save set entry in the media database. That is, each save set clone shares the same ssid as the source save set. All characteristics that are true for the source save set are also true for the clone save set. If the source save sets are still browsable, the clone status is also browsable. If the source save sets have passed their browse policies, the clone status is recoverable.
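
You can observe this shared entry with the mminfo program. For example, a query similar to the following lists every copy of a save set; the copies share one ssid and differ only in volume and clone ID (the ssid value shown is a placeholder):

# mminfo -r "volume,ssid,cloneid" -q "ssid=1234567890"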

Volumes that belong to a clone pool are also tracked through volume entries in the media database. The fact that all save set clones share the same media database save set entry has implications for the following actions, which are executed on a "per save set" basis and not on a "per volume" basis:

  • Changing the mode of a clone volume to recyclable
  • Purging a clone volume
  • Deleting a clone volume


Caution -

If you manually change the mode of a cloned volume to recyc with the intent of reusing a particular clone volume, be aware that the mode of a volume only changes to recyclable when all the save sets on that volume are recyclable. Therefore, when the mode of the volume changes to recyc, you effectively change the status of all save sets on the clone volume to recyc. Because the save sets share the same entry in the media database, there is no distinction between "original" and "clone" save sets. The end result is that all the save sets that reside on the now recyclable volume or on any other volume become candidates for immediate recycling.


To prevent inadvertent loss of data, if you want to reuse a particular clone volume and still protect the instances of a save set that exist on other volumes, first change the mode of the volumes to be protected to man_recyc. This means that Backup cannot automatically recycle the volume. Then, you can safely change the volume that you intend for reuse to mode recyc.

Similarly, if you purge a clone volume, you effectively remove from the client file index all file entries associated with all save sets that reside (in whole or in part) on the particular clone volume.

If you delete a clone volume, the nsrim index management program locates the media database entry for each save set that resides on the clone volume and marks for deletion the location information for the save set clone that resides on the deleted volume. This action is performed for each save set entry. In addition, nsrim marks the entry for the particular clone volume (identified by its volume ID number) for deletion from the database.

Cloning Performance

In general, a volume write that occurs as part of a backup operation and a volume write that occurs as part of a cloning operation proceed at the same speed. However, if a clone operation is automatically requested as part of a scheduled backup, you may experience a performance degradation in other scheduled backups that follow. Backup generally attempts to complete one group's scheduled backup before a scheduled backup is initiated for another group. However, Backup considers that a group backup is finished when the backup operations are complete, not when any automatic cloning is complete. Therefore, if another group starts its backup while the previous group's clone operation is underway, you may experience contention for nsrmmd resources or specific volumes. To avoid this problem, you may decide to refrain from automatic cloning and instead initiate a single clone operation by passing a set of ssids to nsrclone as part of a job that runs at a nonpeak time after all backups are complete.
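
For example, you might collect the save set IDs from the most recent backups with mminfo and pass them to nsrclone in a job scheduled for a nonpeak time. This is a sketch only; the query and options must be adapted to and verified in your environment:

# nsrclone -s server -b "Default Clone" -S `mminfo -r ssid -q "savetime>yesterday"`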

Cloning and Recovery

A clone volume is used for recovery any time Backup attempts to recover a particular save set and either the original save set volume has been deleted or the status of the original save set is marked "suspect."

You can always execute the scanner program on a clone volume to rebuild entries in the client file index, the media database, or both. After you re-create the entries, traditional recovery is available. Refer to the Solstice Backup 5.1 Disaster Recovery Guide for information on how to recover data with the scanner program.