This appendix provides instructions for administering Solstice DiskSuite disksets and metadevices, and for administering VERITAS Volume Manager objects. The procedures documented in this appendix depend on your volume management software.
This section describes using DiskSuite to administer:
Disksets
Disks in disksets
Multihost metadevices
Local metadevices
Refer to the Solstice DiskSuite documentation for a complete discussion of administering DiskSuite objects.
Metadevices and disksets are created and administered using either Solstice DiskSuite command-line utilities or the DiskSuite Tool (metatool(1M)) graphical user interface.
Read the information in this chapter before using the Solstice DiskSuite documentation to administer disksets and metadevices in a Sun Cluster configuration.
Disksets are groups of disks. The primary administration task that you perform on disksets involves adding and removing disks.
Before using a disk that you have placed in a diskset, you must set up a metadevice using the disk's slices. A metadevice can be a concatenation, stripe, mirror, or UFS logging device (also called a trans device). You can also create hot spare pools that contain slices to serve as replacements when a metadevice is errored.
Metadevice names begin with d and are followed by a number. By default in a Sun Cluster configuration, there are 128 unique metadevices in the range 0 to 127. Each UFS logging device that you create will use at least seven metadevice names. Therefore, in a large Sun Cluster configuration, you might need more than the 128 default metadevice names. For instructions on changing the default quantity, refer to the Solstice DiskSuite documentation. Hot spare pool names begin with hsp and are followed by a number. You can have up to 1,000 hot spare pools ranging from hsp000 to hsp999.
This section provides overview information on disksets and their relationship to logical hosts, and procedures on how to add and remove disks from the diskset associated with the logical host.
Sun Cluster logical hosts are mastered by physical hosts. Only the physical host that currently masters a logical host can access the logical host's diskset. When a physical host masters a logical host's diskset, it is said to have ownership of the diskset. In general, Sun Cluster takes care of diskset ownership. However, if the logical host is in maintenance state, as reported by the hastat(1M) command, you can use the DiskSuite metaset -t command to manually take diskset ownership. Before returning the logical host to service, release diskset ownership with the metaset -r command.
If the logical hosts are up and running, you should never perform diskset administration using either the -t (take ownership) or -r (release ownership) options of the metaset(1M) command. These options are used internally by the Sun Cluster software and must be coordinated between the cluster nodes.
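For example, with the logical host in maintenance state, you might take and later release ownership of a diskset named hahost1 (the name is illustrative) as follows:

# metaset -s hahost1 -t
(perform the maintenance work)
# metaset -s hahost1 -r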
If the disk being added to a diskset will be used as a submirror, you must have two disks available on two different multihost disk expansion units to allow for mirroring. However, if the disk will be used as a hot spare, you can add a single disk.
Ensure that no data is on the disk.
This is important because the partition table will be rewritten and space for a metadevice state database replica will be allocated on the disk.
Insert the disk device into the multihost disk expansion unit.
Use the instructions in the hardware documentation for your disk expansion unit for information on disk addition and removal procedures.
Add the disk to a diskset.
The syntax for the command is shown below. In this example, diskset is the name of the diskset to which the disk is to be added, and drive is the DID name of the disk in the form dN (for new installations of Sun Cluster), or cNtYdZ (for installations that upgraded from HA 1.3).
# metaset -s diskset -a drive
After adding the disks to the diskset by using the metaset(1M) command, use the scadmin(1M) command to reserve and enable failfast on the specified disks.
phys-hahost1# scadmin reserve drivename
You can remove a disk from a diskset at any time, as long as none of the slices on the disk are currently in use in metadevices or hot spare pools.
Use the metastat(1M) command to ensure that none of the slices are in use as metadevices or as hot spares.
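For example, to list the metadevices and hot spares in a diskset named hahost1 (the name is illustrative) and confirm that the disk's slices are unused:

# metastat -s hahost1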
Use the metaset(1M) command to remove the target disk from the diskset.
The syntax for the command is shown below. In this example, diskset is the name of the diskset containing the (failed) disk to be removed, and drive is the DID name of the disk in the form dN (for new installations of Sun Cluster), or cNtYdZ (for installations that upgraded from HA 1.3).
# metaset -s diskset -d drive
This operation can take fifteen minutes or more, depending on the size of your configuration and the number of disks.
The following sections contain information about the differences between administering metadevices in the multihost Sun Cluster environment and in a single-host environment.
Unless otherwise noted in the following sections, you can use the instructions in the Solstice DiskSuite documentation. Keep in mind, however, that the Solstice DiskSuite books were written for single-host configurations.
The following sections describe the Solstice DiskSuite command-line programs to use when performing a task. Optionally, you can use the metatool(1M) graphical user interface for all the tasks unless directed otherwise. Use the -s option when running metatool(1M), because it allows you to specify the diskset name.
For ongoing management of metadevices, you must constantly monitor the metadevices for errors in operation, as discussed in "Monitoring Utilities".
When hastat(1M) reports a problem with a diskset, use the metastat(1M) command to locate the errored metadevice.
You must use the -s option when running either metastat(1M) or metatool(1M), so that you can specify the diskset name.
You should save the metadevice configuration information when you make changes to the configuration. Use metastat -p to create output similar to what is in the md.tab file and then save the output. Refer to "Saving Disk Partition Information (Solstice DiskSuite)", for details on saving partitioning data.
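For example, the following captures the configuration of a diskset named hahost1 to a file; the diskset name and output location are illustrative:

# metastat -s hahost1 -p > /var/tmp/md.tab.hahost1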
Mirrored metadevices can be used as part of a logging UFS file system for Sun Cluster highly available applications.
Idle slices on disks within a diskset can be configured into metadevices by using the metainit(1M) command.
Sun Cluster highly available database applications can use raw mirrored metadevices for database storage. While these are not mentioned in the dfstab.logicalhost file or in the vfstab file for each logical host, they appear in the related Sun Cluster database configuration files. The mirror must be removed from these files, and the Sun Cluster database system must stop using the mirror. Then the mirror can be deleted by using the metaclear(1M) command.
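For example, once the database configuration files no longer reference the mirror, a command similar to the following removes it; the diskset and metadevice names are illustrative:

# metaclear -s hahost1 d20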
If you are using SPARCstorage Arrays, note that before replacing or adding a disk drive in a SPARCstorage Array tray, all metadevices on that tray must be taken offline.
In symmetric configurations, taking the submirrors offline for maintenance is complex because disks from each of the two disksets might be in the same tray in the SPARCstorage Array. You must take the metadevices from each diskset offline before removing the tray.
Use the metaoffline(1M) command to take offline all submirrors on every disk in the tray.
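For example, to take a submirror offline from its mirror and bring it back online after the tray is serviced (the diskset, mirror, and submirror names are illustrative):

# metaoffline -s hahost1 d50 d52
# metaonline -s hahost1 d50 d52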
After a disk is added to a diskset, create new metadevices using metainit(1M) or metatool(1M). If the new devices will be hot spares, use the metahs(1M) command to place the hot spares in a hot spare pool.
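As a sketch, the following commands build a two-way mirror from two slices; the diskset, metadevice, and component names are illustrative, and the component form (dN DID names or cNtYdZ names) depends on your installation:

# metainit -s hahost1 d51 1 1 /dev/did/rdsk/d5s0
# metainit -s hahost1 d52 1 1 /dev/did/rdsk/d6s0
# metainit -s hahost1 d50 -m d51
# metattach -s hahost1 d50 d52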
When replacing an errored metadevice component, use the metareplace(1M) command.
A replacement slice (or disk) must be available. This could be an existing device that is not in use, or a new device that you have added to the diskset.
You also can return to service drives that have sustained transient errors (for example, as a result of a chassis power failure) by using the metareplace -e command.
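For example, to replace an errored component with a new slice, or to re-enable a component after a transient error (the diskset, mirror, and component names are illustrative):

# metareplace -s hahost1 d50 c1t0d0s0 c2t1d0s0
# metareplace -s hahost1 -e d50 c1t0d0s0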
Before deleting a metadevice, verify that none of the components in the metadevice is in use by Sun Cluster HA for NFS. Then use the metaclear(1M) command to delete the metadevice.
To grow a metadevice, you must have at least two slices (disks) in different multihost disk expansion units available. Each of the two new slices should be added to a different submirror with the metainit(1M) command. You then use the growfs(1M) command to grow the file system.
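For illustration only, assuming the file system is mounted at /hahost1/1 on metadevice d1 in diskset hahost1, the final step would resemble:

# growfs -M /hahost1/1 /dev/md/hahost1/rdsk/d1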
When the growfs(1M) command is running, clients might experience interruptions of service.
If a takeover occurs while the file system is growing, the file system will not be grown. You must reissue the growfs(1M) command after the takeover completes.
The file system that contains /logicalhost/statmon cannot be grown. Because the statd(1M) program modifies this directory, it would be blocked for extended periods while the file system is growing. This would have unpredictable effects on the network file locking protocol. This is a problem only for configurations using Sun Cluster HA for NFS.
You can add or delete hot spare devices to or from hot spare pools at any time, as long as they are not in use. In addition, you can create new hot spare pools and associate them with submirrors using the metahs(1M) command.
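For example, to add a slice to a hot spare pool (the diskset, pool, and component names are illustrative):

# metahs -s hahost1 -a hsp000 /dev/did/rdsk/d7s0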
All UFS logs on multihost disks are mirrored. When a submirror fails, it is reported as an errored component. Repair the failure using either metareplace(1M) or metatool(1M).
If the entire mirror that contains the UFS log fails, you must unmount the file system, back up any accessible data, repair the error, repair the file system (using fsck(1M)), and remount the file system.
All UFS file systems within a logical host must be logging UFS file systems to ensure that the failover or haswitch(1M) timeout criteria can be met. This facilitates fast switchovers and takeovers.
The logging UFS file system is set up by creating a trans device with a mirrored logging device and a mirrored UFS master file system. Both the logging device and UFS master device must be mirrored.
Typically, Slice 6 of each drive in a diskset can be used for UFS log submirrors. If the slices are smaller than the log size you want, several can be concatenated. Typically, one Mbyte of log per 100 Mbytes of file system is adequate, up to a maximum of 64 Mbytes. Ideally, log slices should be drive-disjoint from the UFS master device.
If you must repartition the disk to gain space for UFS logs, then preserve the existing Slice 7, which starts on Cylinder 0 and contains at least two Mbytes. This space is required and reserved for metadevice state database replicas. The Tag and Flag fields (as reported by the format(1M) command) must be preserved for Slice 7. The metaset(1M) command sets the Tag and Flag fields correctly when the initial configuration is built.
After the trans device has been configured, create the UFS file system using newfs(1M) on the trans device.
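As a sketch, assuming a mirrored UFS master device d1 and a mirrored log device d4 already exist in the diskset hahost1 (all names illustrative), the trans device and file system could be created as follows:

# metainit -s hahost1 d10 -t d1 d4
# newfs /dev/md/hahost1/rdsk/d10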
After the newfs process is completed, add the UFS file system to the vfstab file for the logical host by editing the /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost file to update the administrative and multihost UFS file system information.
Make sure that the vfstab.logicalhost files of all cluster nodes contain the same information. Use the cconsole(1) facility to make simultaneous edits to vfstab.logicalhost files on all nodes in the cluster.
Here's a sample vfstab.logicalhost file showing the administrative file system and four other UFS file systems:
#device                  device                   mount      FS   fsck mount mount
#to mount                to fsck                  point      type pass all   options
#
/dev/md/hahost1/dsk/d11  /dev/md/hahost1/rdsk/d11 /hahost1   ufs  1    no    -
/dev/md/hahost1/dsk/d1   /dev/md/hahost1/rdsk/d1  /hahost1/1 ufs  1    no    -
/dev/md/hahost1/dsk/d2   /dev/md/hahost1/rdsk/d2  /hahost1/2 ufs  1    no    -
/dev/md/hahost1/dsk/d3   /dev/md/hahost1/rdsk/d3  /hahost1/3 ufs  1    no    -
/dev/md/hahost1/dsk/d4   /dev/md/hahost1/rdsk/d4  /hahost1/4 ufs  1    no    -
If the file system will be shared by Sun Cluster HA for NFS, follow the procedure for sharing NFS file systems as described in Chapter 11 in the Sun Cluster 2.2 Software Installation Guide.
The new file system will be mounted automatically at the next membership monitor reconfiguration. To force membership reconfiguration, use the following command:
# haswitch -r
Local disks can be mirrored. If a single mirror fails, use the instructions in the Solstice DiskSuite documentation to replace the failed mirror and resynchronize the replacement disk with the good disk.
The metadevice actions that are not supported in Sun Cluster configurations include:
Creation of a configuration with too few metadevice state database replicas on the local disks
Modification of metadevice state database replicas on multihost disks, unless there are explicit instructions to do so in this or another Sun Cluster book
VERITAS Volume Manager (VxVM) and the VxVM cluster feature are variations of the same volume manager. The VxVM cluster feature is only used in Oracle Parallel Server (OPS) configurations. This section describes using disks under the control of the volume manager to administer:
Volume manager disks
Disk groups
Subdisks
Plexes
Volumes
Refer to the appropriate section for a complete discussion of administering these objects.
Objects under the control of a volume manager are created and administered using either command-line utilities or the Visual Administrator graphical user interface.
Read the information in this chapter before using the VxVM documentation to administer objects under the control of a volume manager in a Sun Cluster configuration. The procedures presented here are one method for performing the following tasks. Use the method that works best for your particular configuration.
These objects generally have the following relationship:
Disks are placed under volume manager control and are grouped into disk groups.
One or more subdisks (each representing a specific portion of a disk) are combined to form plexes, or mirrors.
A volume is composed of one or more plexes.
The default disk group is rootdg (the root disk group). You can create additional disk groups as necessary. The primary administration tasks that you perform on disk groups involve adding and removing disks.
Before using a disk that you have placed in a disk group, you must set up disks and subdisks (under volume manager control) to build plexes, or mirrors, using the physical disk's slices. A plex can be a concatenation or stripe.
With VxVM, applications access volumes (created on volume manager disks) rather than slices.
The following sections describe the VxVM command-line programs to use when performing a task. Optionally, you can use the graphical user interface for all the tasks unless directed otherwise.
On nodes running Sun Cluster HA data services, never manually run the vxdg import or deport options on a disk group that is under the control of Sun Cluster, unless the logical host for that disk group is in maintenance mode. Before manually importing or deporting a disk group, you must either stop Sun Cluster on all nodes that can master that disk group (by running scadmin stopnode on all such nodes), or use the haswitch -m command to switch any corresponding logical host into maintenance mode. When you are ready to return control of the disk group to Sun Cluster, the safest course is to deport the disk group before running scadmin startnode or before using haswitch(1M) to place the logical host back under the control of Sun Cluster.
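For example, assuming a logical host hahost1 whose disk group is dg1 (names illustrative), a manual maintenance session might look like this:

# haswitch -m hahost1
# vxdg -t import dg1
(perform the maintenance work)
# vxdg deport dg1
# haswitch phys-hahost1 hahost1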
Before a disk can be used by VxVM, it must be identified, or initialized, as a disk that is under control of a volume manager. A fully initialized disk can be added to a disk group, used to replace a previously failed disk, or used to create a new disk group.
Ensure that no data is on the disk.
This is important because existing data is destroyed if the disk is initialized.
Insert the disk device and install it in the disk enclosure by following the instructions in the accompanying hardware documentation.
Initialize the disk and add it to a disk group.
This is commonly done by using either the vxdiskadm menus or the graphical user interface. Alternatively, you can use the command-line utilities vxdisksetup and vxdg addisk to initialize the disk and place it in a disk group.
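For example, the command-line sequence might resemble the following; the device and disk group names are illustrative, and vxdisksetup may need to be run from the VxVM bin directory if it is not in your PATH:

# vxdisksetup -i c2t3d0
# vxdg -g acct adddisk disk05=c2t3d0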
Occasionally, you may need to take a physical disk offline. If the disk is corrupted, you need to disable it and remove it. You also must disable a disk before moving the physical disk device to another location to be connected to another system.
To take a physical disk offline, first remove the disk from its disk group. Then place the disk offline by using the vxdisk(1M) command.
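For example, to remove the disk disk05 from the acct disk group and then place the underlying device offline (names illustrative):

# vxdg -g acct rmdisk disk05
# vxdisk offline c2t3d0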
You can remove a disk to move it to another system, or you may remove the disk because the disk is failing or has failed. Alternatively, if the volumes are no longer needed, they can be removed.
To remove a disk from the disk group, use the vxdg(1M) command. To remove the disk from volume manager control by removing the private and public partitions, use the vxdiskunsetup(1M) command. Refer to the vxdg(1M) and vxdiskunsetup(1M) man pages for complete information on these commands.
For VxVM, it is most convenient to create and populate disk groups from the active node that is the default master of the particular disk group. In an N+1 configuration, each of these default master nodes shares multihost disk connectivity with only one other node in the cluster, the hot-standby node. By using these nodes to populate the disk groups, you avoid the risk of generating improperly configured groups.
You can use either the vxdiskadm menus or the graphical user interface to create a new disk group. Alternatively, you can use the command-line utility vxdg init.
Once the disk groups have been created and populated, each one should be deported by using the vxdg deport command. Then, each group should be imported onto the hot-standby node by using the -t option. The -t option is important, as it prevents the import from persisting across the next boot. All VxVM plexes and volumes should be created, and volumes started, before continuing.
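As a sketch, with illustrative disk group, disk media, and device names, the sequence on the default master and hot-standby nodes might resemble:

phys-hahost1# vxdg init dg1 disk01=c1t0d0
phys-hahost1# vxdg -g dg1 adddisk disk02=c2t0d0
phys-hahost1# vxdg deport dg1
phys-hahost2# vxdg -t import dg1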
Use the following procedure to move a disk to a different disk group.
To move a disk between disk groups, remove the disk from one disk group and add it to the other.
This example moves the physical disk c1t0d1 from disk group acct to disk group log_node1 by using command-line utilities.
Use the vxprint(1M) command to determine if the disk is in use.
# vxprint -g acct
TY NAME      ASSOC      KSTATE   LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg acct      acct       -        -       -      -      -      -
dm c1t0d0    c1t0d0s2   -        2050272 -      -      -      -
dm c1t0d1    c1t0d1s2   -        2050272 -      -      -      -
dm c2t0d0    c2t0d0s2   -        2050272 -      -      -      -
dm c2t0d1    c2t0d1s2   -        2050272 -      -      -      -
v  newvol    gen        ENABLED  204800  -      ACTIVE -      -
pl newvol-01 newvol     ENABLED  205632  -      ACTIVE -      -
sd c1t0d1-01 newvol-01  ENABLED  205632  0      -      -      -
pl newvol-02 newvol     ENABLED  205632  -      ACTIVE -      -
sd c2t0d1-01 newvol-02  ENABLED  205632  0      -      -      -
v  vol01     gen        ENABLED  1024000 -      ACTIVE -      -
pl vol01-01  vol01      ENABLED  1024128 -      ACTIVE -      -
sd c1t0d0-01 vol01-01   ENABLED  1024128 0      -      -      -
pl vol01-02  vol01      ENABLED  1024128 -      ACTIVE -      -
sd c2t0d0-01 vol01-02   ENABLED  1024128 0      -      -      -
Use the vxedit(1M) command to remove the volume to free up the c1t0d1 disk.
You must run the vxedit command from the node mastering the shared disk group.
# vxedit -g acct -fr rm newvol
The -f option forces an operation. The -r option makes the operation recursive.
Remove the c1t0d1 disk from the acct disk group.
You must run the vxdg command from the node mastering the shared disk group.
# vxdg -g acct rmdisk c1t0d1
Add the c1t0d1 disk to the log_node1 disk group.
# vxdg -g log_node1 adddisk c1t0d1
This procedure does not save the configuration or data on the disk.
This is the acct disk group after c1t0d1 is removed.
# vxprint -g acct
TY NAME      ASSOC      KSTATE   LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg acct      acct       -        -       -      -      -      -
dm c1t0d0    c1t0d0s2   -        2050272 -      -      -      -
dm c2t0d0    c2t0d0s2   -        2050272 -      -      -      -
dm c2t0d1    c2t0d1s2   -        2050272 -      -      -      -
v  vol01     gen        ENABLED  1024000 -      ACTIVE -      -
pl vol01-01  vol01      ENABLED  1024128 -      ACTIVE -      -
sd c1t0d0-01 vol01-01   ENABLED  1024128 0      -      -      -
pl vol01-02  vol01      ENABLED  1024128 -      ACTIVE -      -
sd c2t0d0-01 vol01-02   ENABLED  1024128 0      -      -      -
This is the log_node1 disk group after c1t0d1 is added.
# vxprint -g log_node1
TY NAME      ASSOC      KSTATE   LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg log_node1 log_node1  -        -       -      -      -      -
dm c1t0d1    c1t0d1s2   -        2050272 -      -      -      -
dm c1t3d0    c1t3d0s2   -        2050272 -      -      -      -
dm c2t3d0    c2t3d0s2   -        2050272 -      -      -      -
#
To change permissions or ownership of volumes, you must use the vxedit command.
Do not use chmod or chgrp. The permissions and ownership set by chmod or chgrp are automatically reset to root during a reboot.
Here is an example of the permissions and ownership of the volumes vol01 and vol02 in the /dev/vx/rdsk directory before a change.
# ls -l
crw-------   1 root     root     nnn,nnnnn date time vol01
crw-------   1 root     root     nnn,nnnnn date time vol02
...
This is an example of changing the permissions and ownership for vol01.
# vxedit -g group_name set mode=755 user=oracle vol01
After the edit, note how the permissions and ownership have changed.
# ls -l
crwxr-xr-x   1 oracle   root     nnn,nnnnn date time vol01
crw-------   1 root     root     nnn,nnnnn date time vol02
...
Volumes, or virtual disks, can contain file systems or applications such as databases. A volume can consist of up to 32 plexes, each of which contains one or more subdisks. In order for a volume to be usable, it must have at least one associated plex with at least one associated subdisk. Note that all subdisks within a volume must belong to the same disk group.
Use the graphical user interface or the command-line utility vxassist(1M) to create volumes in each disk group, and to create an associated mirror for each volume.
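For example, to create a 500-Mbyte mirrored volume in the acct disk group (the volume name, disk group, and size are illustrative):

# vxassist -g acct make vol01 500m layout=mirror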
The actual size of a VxVM device is slightly less than the full disk drive size. VxVM reserves a small amount of space for private use, called the private region.
The use of the same volume name is allowed if the volumes belong to different disk groups.
Dirty Region Logging (DRL) is an optional property of a volume, used to provide a speedy recovery of mirrored volumes after a system failure. DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume and uses this information to recover only the portions of the volume that need to be recovered.
Log subdisks are used to store the dirty region log of a volume that has DRL enabled. A volume with DRL has at least one log subdisk; multiple log subdisks can be used to mirror the dirty region log. Each log subdisk is associated with one of the volume's plexes. Only one log subdisk can exist per plex. If the plex contains only a log subdisk and no data subdisks, that plex can be referred to as a log plex. The log subdisk can also be associated with a regular plex containing data subdisks, in which case the log subdisk risks becoming unavailable in the event that the plex must be detached due to the failure of one of its data subdisks.
Use the graphical user interface or the command-line utility vxassist(1M) to create a log for an existing volume.
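For example, to add a DRL log to an existing mirrored volume (the volume and disk group names are illustrative):

# vxassist -g acct addlog vol01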
Hot-relocation is the ability of a system to automatically react to I/O failures on redundant (mirrored or RAID5) volume manager objects, and to restore redundancy and access to those objects. Hot-relocation is supported only on configurations using VxVM. VxVM detects I/O failures on volume manager objects and relocates the affected subdisks to disks designated as spare disks or free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them redundant and accessible again.
When a partial disk failure occurs (that is, a failure affecting only some subdisks on a disk), redundant data on the failed portion of the disk is relocated, and the existing volumes consisting of the unaffected portions of the disk remain accessible.
Hot-relocation is performed only for redundant (mirrored or RAID5) subdisks on a failed disk. Non-redundant subdisks on a failed disk are not relocated, but you are notified of their failure.
A spare disk must be initialized and placed in a disk group as a spare before it can be used for replacement purposes. If no disks have been designated as spares when a failure occurs, VxVM automatically uses any available free space in the disk group in which the failure occurs. If there is not enough spare disk space, a combination of spare space and free space is used. You can designate one or more disks as hot-relocation spares within each disk group. Disks can be designated as spares with the vxedit(1M) command.
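For example, to designate the disk disk05 in the acct disk group as a hot-relocation spare (names illustrative):

# vxedit -g acct set spare=on disk05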
You can configure and specify either UFS or VxFS file systems associated with a logical host's disk groups on volumes of type fsgen. When a cluster node masters a logical host, the logical host's file systems associated with the disk groups are mounted on the mastering node's specified mount points.
During a logical host reconfiguration sequence, it is necessary to check file systems with the fsck(1M) command. Though this process is performed in non-interactive parallel mode on UFS file systems, it can affect the overall time of the reconfiguration sequence. The logging feature of UFS, SDS, and VxFS file systems greatly reduces the time that fsck(1M) takes prior to mounting file systems.
When the switchover of a data service also requires volume recovery, the recovery can take longer than the time allowed for the reconfiguration steps, causing step time-outs and node aborts.
Consequently, when setting up mirrored volumes, always add a DRL log to decrease volume recovery time in the event of a system crash. When mirrored volumes are used in the cluster environment, DRL must be assigned for volumes greater than 500 Mbytes.
Use VxFS if large file systems (greater than 500 Mbytes) are used for HA data services. Note that VxFS is not bundled with Sun Cluster and must be purchased separately from VERITAS.
Although it is possible to configure logical hosts with very small mirrored file systems, you should use Dirty Region Logging (DRL) or VxFS file systems because of the possibility of time-outs as the size of the file system increases.
To grow a striped or RAID5 volume containing a file system, you must have free space on the same number of disks that are currently in the stripe or RAID5 volume. For example, if you have four 1GB disks striped together (giving you a 4GB file system) and you want to add 1GB of space (to yield a 5GB file system), you must have four new disks, each with at least .25GB of free space. In other words, you cannot add a single disk to a four-disk stripe.
The VxVM graphical user interface will choose the disks on which to grow your file system. To select the specific disks on which to grow the file system, use the command line interface instead.
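As a sketch, assuming the volume vol01 in disk group acct holds a UFS file system mounted at /hahost1/1 (all names and sizes illustrative), you could grow the volume and then the file system from the command line; appending specific disk names to the vxassist command controls where the new space is allocated:

# vxassist -g acct growby vol01 1g
# growfs -M /hahost1/1 /dev/vx/rdsk/acct/vol01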
UFS file systems cannot be shrunk. The only way to "shrink" a file system is to recreate the volume, run newfs on the volume, and then restore the data from backup.
Local disks can be mirrored. If a single mirror fails, use the instructions in your volume manager documentation to replace the failed mirror and resynchronize the replacement disk with the good disk.
This section contains suggestions for using Solstice Backup(TM) to back up Sun Cluster file systems.
Solstice Backup is designed to run each copy of the server software on a single server. Solstice Backup expects files to be recovered using the same physical server from which they were backed up.
Solstice Backup has considerable data about the physical machines (host names and host IDs) corresponding to the server and clients. Solstice Backup's information about the underlying physical machines on which the logical hosts are configured affects how it stores client indexes.
Do not put the Solstice Backup /nsr database on the multihost disks. Conflicts can arise if two different Solstice Backup servers attempt to access the same /nsr database.
Because of the way Solstice Backup stores client indexes, do not back up a particular client using different Solstice Backup servers on different days. Make sure that a particular logical host is always mastered by the same physical server whenever backups are performed. This will enable future recover operations to succeed.
By default, Sun Cluster systems will not generate the full file system list for your backup configuration. If the save set list consists of the keyword All, then the /etc/vfstab file will be examined to determine which file systems should be saved. Because Sun Cluster vfstab files are kept in /etc/opt/SUNWcluster/conf/hanfs by default, Solstice Backup will not find them unless you explicitly list the Sun Cluster file systems to be saved. When you are testing your backup procedures, verify that all of the Sun Cluster file systems that need to be backed up appear in the Solstice Backup file system list.
Four methods of configuring Solstice Backup are presented here. You might prefer one depending on your particular Sun Cluster configuration. Switchover times could influence your decision. Once you decide on a method, continue using that method so that future recover operations will succeed.
Here is a description of the configuration methods:
Use a non-cluster node, non-high availability server configured as a Solstice Backup server.
Configure an additional server apart from the Sun Cluster servers to act as the Solstice Backup server. Configure the logical hosts as clients of the server. For best results, always ensure that the logical hosts are configured on their respective default masters before doing the daily backup. This might require a switchover. Having the logical hosts mastered by alternate servers on different days (possibly as the result of a takeover) could cause Solstice Backup to become confused upon attempting a recover operation, due to the way Solstice Backup stores client indexes.
Use one Sun Cluster server configured to perform local backups.
Configure one of the Sun Cluster servers to perform local backups. Always switch the logical hosts to the Solstice Backup server before performing the daily backup. That is, if phys-hahost1 and phys-hahost2 are the Sun Cluster servers, and phys-hahost1 is the Solstice Backup server, always switch the logical hosts to phys-hahost1 before performing backups. When backups are complete, switch back the logical host normally mastered by phys-hahost2.
Use the Sun Cluster servers configured as Solstice Backup servers.
Configure each Sun Cluster server to perform local backups of the logical host it masters by default. Always ensure that the logical hosts are configured on their respective default masters before performing the daily backup. This might require a switchover. Having the logical hosts mastered by alternate servers on different days (possibly as the result of a takeover) could cause Solstice Backup to become confused upon attempting a recover operation, due to the way Solstice Backup stores client indexes.
Use one Sun Cluster server configured as the Solstice Backup server.
Configure one Sun Cluster server to back up its logical host locally and to back up its sibling's logical host over the network. Always ensure that the logical hosts are configured on their respective default masters before doing the daily backup. This might require a switchover. Having the logical hosts mastered by alternate servers on different days (possibly as the result of a takeover) could cause Solstice Backup to become confused upon attempting a recover operation, due to the way Solstice Backup stores client indexes.
In all four of the above backup options, you can have another server configured to temporarily perform backups in the event the designated Solstice Backup server is down. Note that you will not be able to use the temporary Solstice Backup server to recover files backed up by the normal Solstice Backup server, and that you cannot recover files backed up by the temporary server from the normal backup server.