Sun Cluster 3.0 Release Notes Supplement

Known Problems

In addition to the known problems documented in the Sun Cluster 3.0 Release Notes, the following known problems affect the operation of the Sun Cluster 3.0 GA release.

Bug ID 4349995

Problem Summary: The DiskSuite Tool (metatool) graphical user interface is incompatible with Sun Cluster 3.0.

Workaround: Use command line interfaces to configure and manage shared disksets.
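
For example, you can create and populate a shared diskset from the command line with the metaset(1M) command. The following is a sketch only; the diskset, node, and DID device names are placeholders for your configuration.


    # metaset -s diskset -a -h node1 node2
    # metaset -s diskset -a /dev/did/rdsk/d2 /dev/did/rdsk/d3


See the metaset(1M) man page for more information.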

Bug ID 4386412

Problem Summary: Upgrade from Sun Cluster 2.2 to 3.0 fails when the shared Cluster Configuration Database (CCD) is enabled.

Workaround: For clusters that use VERITAS Volume Manager (VxVM) only, disable the shared CCD after you back up the cluster (Step 7 of the procedure "How to Shut Down the Cluster") and before you stop the Sun Cluster 2.2 software (Step 8). The following procedure describes how to disable the shared CCD in a Sun Cluster 2.2 configuration.

  1. From either node, create a backup copy of the shared CCD.


    # ccdadm -c backup-filename
    

    See the ccdadm(1M) man page for more information.

  2. On each node of the cluster, remove the shared CCD.


    # scconf clustername -S none 
    

  3. On each node, run the mount(1M) command to determine on which node the ccdvol is mounted.

    The ccdvol entry looks similar to the following.


    # mount
    ...
    /dev/vx/dsk/sc_dg/ccdvol        /etc/opt/SUNWcluster/conf/ccdssa        ufs     suid,rw,largefiles,dev=27105b8  982479320

  4. Run the cksum(1) command on each node to ensure that the ccd.database file (typically located in /etc/opt/SUNWcluster/conf) is identical on both nodes.


    # cksum ccd.database
    

  5. If the ccd.database files are different, restore the shared CCD backup that you created in Step 1. You can run the restore from either node.


    # ccdadm -r backup-filename
    

  6. Stop the Sun Cluster 2.2 software on the node on which the ccdvol is mounted.


    # scadmin stopnode
    

  7. From the same node, unmount the ccdvol.


    # umount /etc/opt/SUNWcluster/conf/ccdssa 
    

Bug ID 4388265

Problem Summary: You may encounter I/O errors while replacing a SCSI cable from the Sun StorEdge A3500 controller board to the disk tray. These errors are temporary and should disappear when the cable is securely in place.

Workaround: After replacing a SCSI cable in a Sun StorEdge A3500 disk array, use your volume management recovery procedure to recover from I/O errors.
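
For example, if the affected devices are under VERITAS Volume Manager control, recovery might start with commands similar to the following, which rescan the device tree and reattach disks that have become accessible again. This is a sketch only; the supported procedure depends on your volume manager and configuration.


    # vxdctl enable
    # vxreattach


If the devices are components of a Solstice DiskSuite mirror, errored components can be re-enabled with the metareplace -e command. See your volume manager documentation for the complete recovery procedure.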

Bug ID 4410535

Problem Summary: When using the Sun Cluster module for Sun Management Center 3.0, you cannot add a previously deleted resource group.

Workaround:

  1. Click Resource Group->Status->Failover Resource Groups.

  2. Right-click the name of the resource group to be deleted and select Delete Selected Resource Group.

  3. Click the Refresh icon and verify that the row corresponding to the deleted resource group is gone.

  4. Right-click Resource Groups in the left pane and select Create New Resource Group.

  5. Enter the same resource group name that you deleted before and click Next>>. A dialog box appears stating that the resource group name is already in use.
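
If the resource group cannot be re-created through the GUI, you can create it from the command line with the scrgadm(1M) command instead. The resource group name below is a placeholder.


    # scrgadm -a -g resource-group-name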

Known Documentation Problems

This section discusses documentation errors you might encounter and steps to correct these problems. This information supplements the known documentation problems listed in the Sun Cluster 3.0 Release Notes.

Release Notes

The Sun Cluster 3.0 Release Notes contain three incorrect URLs, which point to Sun internal sites. These references need to be changed to refer to the corresponding external sites.

Installation Guide

Do not perform Step 2 in "How to Install the Solaris Operating Environment" or in "How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes." This instruction, which checks and corrects the value of the local-mac-address variable, occurs before Solaris software is installed and therefore does not work. Because the value of local-mac-address is set to false by default, the step is also unnecessary.


Note -

Sun Cluster software requires that the local-mac-address variable be set to false; changing the variable value to true is not supported.
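
You can verify the setting from the Solaris command line with the eeprom(1M) command, as in the following sketch. Note that the OpenBoot PROM variable name ends in a question mark, which must be quoted from the shell.


    # eeprom "local-mac-address?"
    local-mac-address?=false


If the command reports true, set the variable back to false and reboot the node:


    # eeprom "local-mac-address?=false"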


Data Services Installation and Configuration Guide

Additional documentation is needed when installing Sun Cluster HA for Oracle Parallel Server using the instructions in Chapter 8, "Installing and Configuring Sun Cluster HA for Oracle Parallel Server."

The section "How to Install Sun Cluster HA for Oracle Parallel Server Packages" does not describe how to set up the LUNs if you are using hardware RAID 5 rather than VERITAS Volume Manager.

This section should state that if you are using the Sun StorEdge A3500 disk array with hardware RAID, you use the RAID Manager (RM6) software to configure the Sun StorEdge A3500 LUNs. The LUNs can be configured as raw devices on top of the /dev/did/rdsk devices.

The section should also provide an example of how to configure the LUNs when using hardware RAID. This example should include the following steps:

  1. Create LUNs on the Sun StorEdge A3500 disk array.

    Refer to Appendix A, "Installing and Maintaining a Sun StorEdge A3500/A3500FC System," for information about creating the LUNs.

  2. After you create the LUNs, run the format(1M) command to partition the Sun StorEdge A3500 LUNs into as many slices as you need. In the format disk list, the A3500 LUNs appear as Symbios devices, similar to the following.


    0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
    1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
    2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,0
    3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,1
    4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,0
    5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,1
    6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@3/rdriver@4,2


    Note -

    If you use slice 0, do not start the partition at cylinder 0. Cylinder 0 contains the disk label, and writing to a raw partition that includes it can destroy the label.


  3. Run the scdidadm(1M) command to find the raw DID device that corresponds to the LUNs that you created in Step 1.

    The following example lists output from the scdidadm -L command.


    1        phys-visa-1:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    1        phys-visa-2:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    2        phys-visa-1:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    2        phys-visa-2:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    3        phys-visa-2:/dev/rdsk/c4t4d0   /dev/did/rdsk/d3
    3        phys-visa-1:/dev/rdsk/c1t5d0   /dev/did/rdsk/d3
    4        phys-visa-2:/dev/rdsk/c3t5d0   /dev/did/rdsk/d4
    4        phys-visa-1:/dev/rdsk/c2t5d0   /dev/did/rdsk/d4
    5        phys-visa-2:/dev/rdsk/c4t4d1   /dev/did/rdsk/d5
    5        phys-visa-1:/dev/rdsk/c1t5d1   /dev/did/rdsk/d5
    6        phys-visa-2:/dev/rdsk/c3t5d1   /dev/did/rdsk/d6
    6        phys-visa-1:/dev/rdsk/c2t5d1   /dev/did/rdsk/d6

  4. Use the raw DID device that the scdidadm output identifies to configure OPS.

    For example, the scdidadm output might identify that the raw DID device that corresponds to the Sun StorEdge A3500 LUNs is d4. In this instance, use the /dev/did/rdsk/d4sx raw device, where x is the slice number, to configure OPS.
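
    For example, to make that slice accessible to the Oracle software owner before configuring OPS, you might change the ownership and permissions of the raw DID device. The oracle user and dba group below are assumptions; substitute the account that owns your Oracle installation.


    # chown oracle:dba /dev/did/rdsk/d4s1
    # chmod 600 /dev/did/rdsk/d4s1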