4204883 - The confccdssa(1M) command will fail when you select the controller that contains the boot disk, and will display the misleading error message: "First RE may not be NULL. WARNING: All disks on this SSA (ctlr: nn) are either already in disk groups, have already been selected as one of the devices for the shared CCD or are otherwise unavailable." To prevent this problem, do not select the controller that contains the boot disk.
4235744 - The scconf clustername -F logicalhost command can create the primary and mirror of the HA administrative volume dg-stat on two different disks in the same storage device. If that storage device fails, or if the connection to it is lost, automatic volume recovery is not possible; you must manually fix and restart the volume.
To diagnose and correct this problem, perform the following steps.
1. Check whether your existing administrative file system was created with both mirrors on the same controller. If not, no further action is needed. If the administrative file system volumes are mirrored on disks on the same controller, proceed with the remaining steps to rebuild the administrative file system so that the volumes are correctly mirrored across controllers.
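For example, you can inspect the layout of the dg-stat volume with vxprint(1M); the disk group name dg-name below is a placeholder for your actual disk group:

# vxprint -g dg-name -ht dg-stat

In the output, compare the device column of the subdisk (sd) lines under each plex; if both plexes use disks on the same controller (for example, both device names begin with c2), the mirrors are not separated across controllers.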
2. Back up any data that is in the administrative file system (/logicalhost) directory.
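For example, one way to back up the directory contents with tar(1); the archive path is a placeholder:

# cd /logicalhost
# tar cf /var/tmp/logicalhost-admin.tar .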
3. Put the logical host in maintenance mode.
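For example, using haswitch(1M); the logical host name logicalhost is a placeholder:

# haswitch -m logicalhost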
4. Using VERITAS Volume Manager commands, manually import the disk group in which the administrative file system resides, remove the dg-stat volume, then re-create a volume with the same name, dg-stat, specifying a mirror layout across controllers.
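The following is one possible command sequence; dg-name is a placeholder for the disk group name, and the volume size (here 2m) must match the size of the original dg-stat volume, which you can obtain from the vxprint output:

# vxdg import dg-name
# vxedit -g dg-name -rf rm dg-stat
# vxassist -g dg-name make dg-stat 2m layout=mirror mirror=ctlr

The mirror=ctlr storage attribute directs vxassist to place the mirrors on disks attached to different controllers.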
5. Recreate the administrative file system:
# scconf clustername -F logicalhost
The command will find that an administrative file system volume (dg-stat) already exists, and will use that volume to create the administrative file system.
6. Unmount the newly created file system.
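For example, where /logicalhost stands for your administrative file system mount point:

# umount /logicalhost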
7. Deport the disk group.
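For example, with the placeholder disk group name used earlier:

# vxdg deport dg-name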
8. Bring up the logical host by using the haswitch(1M) command.
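For example, to master the logical host on node phys-node1 (both names are placeholders):

# haswitch phys-node1 logicalhost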
9. Restore any data to the /logicalhost directory.
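For example, to restore the tar archive created earlier:

# cd /logicalhost
# tar xf /var/tmp/logicalhost-admin.tar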
4240225 - A umount operation will fail during a switchover if the df command is run before the partition is unmounted. This causes the cluster to attempt to re-master the logical host on the original node, which fails, leaving the logical host in a partially mastered state. The error message produced in this situation is cryptic: "ID[SUNWcluster.scnfs.4010]: unmount /mail/spool failed." To work around the problem, switch the logical host into maintenance mode by using haswitch(1M) or scconf(1M), and then re-master the logical host by using the scconf command. See the scconf(1M) man page for details.
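For example, one plausible sequence using haswitch(1M); the logical host and node names are placeholders, and you should consult the scconf(1M) man page for the equivalent scconf procedure:

# haswitch -m logicalhost
# haswitch phys-node1 logicalhost

The first command places the logical host in maintenance mode; the second re-masters it on the desired node.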