Solstice Backup 5.1 includes the following new features and improvements:
Save Set Staging Improvement - In the previous release, save set staging was invoked manually with the nsrstage command. With Solstice Backup 5.1, you can configure automatic staging based on defined policies, through a new Backup resource named Staging. Automatic staging works only with file devices; you must have a file device defined before it is available as a selection in the Staging resource.
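A staging policy configured through the new resource might look something like the following nsradmin-style fragment. The resource and the requirement for a file device come from this release; the individual attribute names and values shown are illustrative assumptions, not a definitive list:

```
type: NSR stage;
name: nightly-stage;                  # hypothetical policy name
enabled: Yes;
device: /space/stage/filedev;         # must be a previously defined file device
destination pool: Default;            # pool that receives the staged save sets
high water mark (%): 90;              # assumed threshold attributes: stage when
low water mark (%): 60;               # the file device passes the high mark
```

Check the Staging resource in nwadmin or nsradmin for the exact attribute names supported by your installation.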
Storage Node Support Improvement - You no longer have to keep root@storagenodehost on the Administrator list for storage node backups. The entry must be present while you run the jb_config program, but you can remove it as soon as the program completes.
Backup supports designating another system on the network that has an attached storage device to act as a storage node of the Backup server. You designate, in order of priority, the storage nodes to which a client's data is directed by entering their hostnames in the Storage Nodes attribute of the Clients resource. If the first storage node listed for the client is not available, the next storage node on the list is contacted to receive the backup data.
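In nsradmin terms, the priority list is simply the order of the hostnames in the Storage Nodes attribute. A sketch of a client resource (hostnames hypothetical) might look like this:

```
type: NSR client;
name: mars;                              # hypothetical client
storage nodes: jupiter, saturn;          # tried in this order; if jupiter is
                                         # unavailable, saturn receives the data
```

The exact resource layout depends on your configuration; this fragment only illustrates how the ordering expresses priority.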
To use the support for storage nodes, you must purchase an enabler for each storage node that you want to add. If you want to use an autochanger for backups to a storage node, you must also obtain an autochanger enabler code for each autochanger you want to use with Backup. Enter all enabler codes on the Backup server, regardless of where the software is installed. Then install the client and storage node software and the Backup device drivers on each system that you want to designate as a storage node.
If the backup devices reside in an autochanger, first use the administration program (nwadmin or nsradmin) on the Backup server to manually add root@storagenode-hostname to the Backup server's Administrator list; then run the jb_config program on the storage node system to configure the remote autochanger for use with Backup.
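The sequence described above might look like the following session (hostnames hypothetical; nwadmin can be used in place of nsradmin for the first step):

```
# On the Backup server: add the storage node's root account to the
# Administrator list before configuring the remote autochanger.
nsradmin -s backupserver
nsradmin> update administrator: root@backupserver, root@jupiter

# On the storage node (jupiter): configure the remote autochanger.
jb_config
```

Once jb_config completes, the root@jupiter entry can be removed from the Administrator list, as noted above.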
During a backup, the storage node software invokes the Backup media daemons that are responsible for sending backup data received from each client to the mounted backup media. The storage node software also invokes the daemon responsible for sending entries to the media database on the Backup server. Each client's local save program relays information for entries in the client index back to the Backup server. The Backup server keeps track of which set of backup media a client's save set data was sent to, for potential recovery later.
Refer to Chapters 3, 4, and 5 in the Solstice Backup 5.1 Administration Guide for information on how to configure support for storage nodes and remote devices, and to Appendix A for a description of how storage nodes function.
Storage Node Save Mount Timeout - This release includes a timeout mechanism for save mount requests on storage nodes. This allows a save to be redirected to another storage node, if an appropriate volume is not mounted within the timeout period.
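The timeout is configured per device. As a sketch (the attribute name and units shown are assumptions; verify them against the NSR device resource on your server):

```
type: NSR device;
name: rd=jupiter:/dev/rmt/0cbn;       # remote device on a storage node
save mount timeout: 30;               # assumed: minutes to wait for a mount
                                      # before the save is redirected to the
                                      # next storage node on the client's list
```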
Override of a Daily Forced Incremental Backup - A new attribute in the Groups resource, named Force Incremental, lets you perform more than one full backup per 24-hour period. Use the View option from the Details menu (nwadmin) or Options > Display Options > Hidden to view and change the setting for Force Incremental.
The default setting for the Force Incremental attribute is Yes, which means an incremental backup occurs if the group is run more than once in a 24-hour period. Set this attribute to No to perform more than one full backup per day.
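Because Force Incremental is a hidden attribute, it is easiest to see in nsradmin's hidden view. A group that allows a second full backup the same day might look like this fragment (group name hypothetical):

```
type: NSR group;
name: midday-full;            # hypothetical group
force incremental: No;        # default is Yes; No allows repeated full backups
```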
64-Bit Filesystem Support - This release includes support for 64-bit filesystems for clients of Solaris 2.6. You can archive, back up, browse, and recover files larger than 2 gigabytes. If your clients are not 64-bit capable, you can browse files larger than 2 gigabytes, but you cannot recover them.
Pre- and Post-Processing for Client Backup Improvement - A new pre- and post-processing command, savepnpc, allows you to invoke a set of pre-processing commands once before the first save set on the client begins its backup, and a set of post-processing commands that run only after the last save set on the client completes its backup. Previously, the pre- and post-processing commands were run for each save set specified for the client. You use the savepnpc command in place of save in your customized backup script, and savepnpc uses the same syntax as save. See Appendix B in the Solstice Backup 5.1 Administration Guide and the man pages for more detailed information about savepnpc.
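Since savepnpc takes the same syntax as save, the change to a customized backup script is a one-word substitution. A sketch (server name and path hypothetical; any options you pass to save carry over unchanged):

```
# Before: pre/post commands run around every save set
save -s backupserver /export/home

# After: pre commands run once before the client's first save set,
# post commands once after its last save set
savepnpc -s backupserver /export/home
```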
File Device Update - If you use a file device, you must enter it as a directory path (as with other device types) rather than just a filename. The path /tmpfs is not allowed on Solaris servers.
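For example, a file device definition now names a directory, not a file. A sketch of such a device resource (the directory path is hypothetical):

```
type: NSR device;
name: /space/backup/filedev;      # a directory path, not a plain filename
media type: file;
```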
Power Edition Performance Improvement - In this release, immediate recovery is added for Solaris.
Clone Storage Node Affinity - This release includes clone storage node affinity: the link between a storage node's client resource and a list of storage nodes available to receive cloned save sets from that storage node client. Data is cloned from the media that contain the original save sets to media on the specified clone storage node.
Backup Portmapper Name Change - The term Backup Portmapper has been changed to Storage Management Portmapper with this release.
Backup Resource and Attribute Changes - This release includes several Backup resource and attribute changes. The Name attribute of the NSR device resource now takes a two-part value. The first part is an optional hostname, prefixed with RD= for a remote device; if the hostname is not specified, the device is assumed to reside on the local server. The second part is the device name, separated from the hostname by a colon.
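The two-part value splits cleanly on the first colon after the optional RD= prefix. As a sketch (hostname and device path hypothetical), a script could take a device name apart like this:

```shell
# Parse a two-part device name of the form [RD=hostname:]devicepath.
dev='RD=jupiter:/dev/rmt/0cbn'

case "$dev" in
  RD=*)
    rest="${dev#RD=}"          # strip the remote-device prefix
    host="${rest%%:*}"         # hostname is everything before the first colon
    path="${rest#*:}"          # device name is everything after it
    ;;
  *)
    host=""                    # no prefix: device resides on the local server
    path="$dev"
    ;;
esac

echo "host=$host device=$path"
```

Running this prints `host=jupiter device=/dev/rmt/0cbn`, showing how the server distinguishes a remote device from a local one.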
A new hidden attribute has been added to the NSR device resource. It is called Recover Only. The default for this attribute is No. It can be set to Yes when you define the device or update its definition. A Yes indicates the device can be used for recovery operations only.
Another attribute, Target Sessions, has been added to the NSR device resource. Its default value is 4. You can set it to a value from 1 to 512 when a device is configured on the Backup server, depending on the base enabler and other enablers loaded. This attribute replaces the sessions per device defined in the NSR resource in pre-5.0.1 releases.
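Both new device attributes appear in the NSR device resource. A combined sketch (device name hypothetical; Recover Only is a hidden attribute):

```
type: NSR device;
name: rd=jupiter:/dev/rmt/0cbn;
recover only: No;             # Yes restricts the device to recovery operations
target sessions: 8;           # 1 to 512; default is 4
```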
A Storage Nodes attribute has been added to the NSR Client resource. This attribute lists the storage nodes for the client and directs the server to store the client's data either to local devices or to a particular storage node's remote devices.
A Clone Storage Nodes attribute has also been added. It allows you to specify, for storage nodes that perform cloning operations, a network interface different from the one specified for the storage node's remote device.
Two new attributes have been added to the NSR resource. The first, nsrmmd polling interval, controls the time between polls of remote nsrmmds. The second, nsrmmd restart interval, sets the time that nsrd waits before restarting an nsrmmd. The restart interval begins when nsrd first detects that an nsrmmd has terminated. For local nsrmmds, this detection is immediate; for remote nsrmmds, detection usually occurs when the nsrmmd is missing from a polling reply or a poll times out without a reply.
The default value for both intervals is 2 minutes. The polling interval can range from 1 to 60 minutes; the restart interval can range from 0 to 60 minutes, where 0 means an immediate restart. Changing the value of either attribute resets the currently running nsrmmds. For example, changing the polling interval from 2 minutes to 10 minutes resets the next polling interval for all currently running nsrmmds to 10 minutes from now. Similarly, changing the restart interval from 2 minutes to 0 causes any nsrmmds waiting to restart immediately.
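The two intervals described above live in the server's NSR resource and can be adjusted with nsradmin. A sketch of the relevant fragment (the values chosen are only examples within the stated ranges):

```
type: NSR;
nsrmmd polling interval: 10;      # minutes between polls of remote nsrmmds;
                                  # range 1-60, default 2
nsrmmd restart interval: 0;       # minutes nsrd waits before restarting an
                                  # nsrmmd; range 0-60, 0 = restart at once
```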