Sun Cluster 3.0 Release Notes Supplement

Chapter 1 Sun Cluster 3.0 Release Notes Supplement

This document supplements the standard user documentation, including the Sun Cluster 3.0 Release Notes, shipped with the Sun Cluster 3.0 product. These "online release notes" provide the most current information on the Sun Cluster 3.0 product. This document includes a revision record, new features, restrictions and requirements, known problems, and known documentation problems.

Revision Record

The following table lists the information contained in this document and provides the revision date for this information.

Table 1-1 Sun Cluster 3.0 Release Notes Supplement

July 2001

  Support for iPlanet Web Server 6.0. See "Support for iPlanet Web Server 6.0".

  "Bug ID 4386412": the shared CCD must be disabled before upgrading from Sun Cluster 2.2 to 3.0.

  Support for coexistence of Solstice DiskSuite and VERITAS Volume Manager (VxVM). See "VxVM Subcluster Installation".

  New procedure to unencapsulate the root disk. See "How to Unencapsulate the Root Disk".

May 2001

  "Bug ID 4349995": restriction on using the Solstice DiskSuite metatool.

  Information concerning Sun Fire support using VERITAS Volume Manager 3.1.1. See "Limited Support for VxVM 3.1.1".

April 2001

  Appendix of hardware installation and maintenance procedures for Sun StorEdge T3 disk arrays. Firmware level 1.16a (patch 110760-01) is required to use a Sun StorEdge T3 disk tray in a cluster. See Appendix B, Installing, Configuring, and Maintaining a Sun StorEdge T3 Disk Array.

  Appendix of information about installing and using the Sun Cluster module with the Sun Management Center 3.0 graphical user interface. See Appendix C, Sun Management Center 3.0.

  Correction to the Sun Cluster 3.0 Installation Guide concerning a Solaris installation instruction for setting the local-mac-address variable that does not work as documented. See "Installation Guide".

March 2001

  Solution when using Oracle Parallel Server with Sun StorEdge A3500 disk arrays. See "Data Services Installation and Configuration Guide" under "Known Documentation Problems".

  Appendix of hardware installation and maintenance procedures for Sun StorEdge A3500 and A3500FC disk arrays. See Appendix A, Installing and Maintaining a Sun StorEdge A3500/A3500FC System.

November 2000

  "Bug ID 4388265".

  Update to the Sun Cluster 3.0 Hardware Guide procedure "How to Replace a Host Adapter." (This procedure is now part of Appendix A, Installing and Maintaining a Sun StorEdge A3500/A3500FC System.)

New Features

In addition to features documented in Sun Cluster 3.0 Release Notes, this release now includes support for the following features.

Support for iPlanet Web Server 6.0

Sun Cluster 3.0 now supports iPlanet Web Server 6.0.

Two procedures have changed for iPlanet Web Server 6.0.

Installing Certificates on Secure Instances of iPlanet Web Server 6.0

The procedure for installing certificates on secure instances of iPlanet Web Server has changed for version 6.0. If you plan to run secure instances of iPlanet Web Server 6.0, complete the following steps when you install security certificates. This installation procedure requires that you create a certificate on one node, and then create symbolic links to that certificate on the other cluster nodes.

  1. Run the administrative server on node1.

  2. From your Web browser, connect to the administrative server as http://node1.domain:port.

    For example, http://phys-schost-1.eng.sun.com:8888. Use whatever port number you specified as the administrative server port during installation. The default port number is 8888.

  3. Install the certificate on node1.

    This installation creates three certificate files. One file, secmod.db, is common to all nodes, and the other two are specific to node1. These files are located in the alias subdirectory, under the directory in which the iPlanet Web Server files are installed.

  4. If you installed iPlanet Web Server on a global file system, complete the following tasks. If you installed iPlanet Web Server on a local file system, go to Step 5.

    1. Note the location and file names for the three files created when installing the certificate in Step 3.

      For example, if you installed iPlanet Web Server in /global/iws/servers, and you used the IP address "IPx" when installing the certificate, then the paths to the files on node1 would be

      /global/iws/servers/alias/secmod.db

      /global/iws/servers/alias/https-IPx-node1-cert7.db

      /global/iws/servers/alias/https-IPx-node1-key3.db

    2. Create symbolic links for all other cluster nodes to the node-specific files for node1.

      In the following example, substitute the appropriate file paths for your system.


      # ln -s /global/iws/servers/alias/https-IPx-node1-cert7.db \
              /global/iws/servers/alias/https-IPx-node2-cert7.db
      # ln -s /global/iws/servers/alias/https-IPx-node1-key3.db \
              /global/iws/servers/alias/https-IPx-node2-key3.db
      

  5. If you installed iPlanet Web Server on a local file system, complete the following tasks.

    1. Note the location and file names for the three files created on node1 when installing the certificate in Step 3.

      For example, if you installed iPlanet Web Server in /local/iws/servers, and you used the IP address "IPx" when installing the certificate, then the paths to the files on node1 would be

      /local/iws/servers/alias/secmod.db

      /local/iws/servers/alias/https-IPx-node1-cert7.db

      /local/iws/servers/alias/https-IPx-node1-key3.db

    2. Move the three certificate files to a location on the global file system.

      In the following example, substitute the appropriate file paths for your system.


      # mv /local/iws/servers/alias/secmod.db \
           /global/secure/secmod.db
      # mv /local/iws/servers/alias/https-IPx-node1-cert7.db \
           /global/secure/https-IPx-node1-cert7.db
      # mv /local/iws/servers/alias/https-IPx-node1-key3.db \
           /global/secure/https-IPx-node1-key3.db
      

    3. Create symbolic links between the local and global paths of the three certificate files.

      Create the symbolic links on each node in the cluster.

      In the following example, substitute the appropriate file paths for your system.


      # Symbolic links for node1
      # ln -s /global/secure/secmod.db \
              /local/iws/servers/alias/secmod.db
      # ln -s /global/secure/https-IPx-node1-cert7.db \
              /local/iws/servers/alias/https-IPx-node1-cert7.db
      # ln -s /global/secure/https-IPx-node1-key3.db \
              /local/iws/servers/alias/https-IPx-node1-key3.db
      
      # Symbolic links for node2
      # ln -s /global/secure/secmod.db \
              /local/iws/servers/alias/secmod.db
      # ln -s /global/secure/https-IPx-node1-cert7.db \
              /local/iws/servers/alias/https-IPx-node2-cert7.db
      # ln -s /global/secure/https-IPx-node1-key3.db \
              /local/iws/servers/alias/https-IPx-node2-key3.db
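
After you create the links, you can optionally confirm on each node that every certificate file in the alias directory resolves to its counterpart on the global file system. The following check is a minimal sketch for the local file system case in Step 5, reusing the example paths from this procedure; substitute the paths for your system.


  # ls -l /local/iws/servers/alias


Each of the three certificate files should appear as a symbolic link that points to its counterpart under /global/secure.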
      

Specifying the Location of the Access Logs

The procedure for specifying the location of the access logs while configuring an iPlanet Web Server has changed for iPlanet Web Server 6.0. To specify the location of the access logs, complete the following steps.

This change replaces Step 6 through Step 8 in the procedure "How to Configure an iPlanet Web Server" in Chapter 3, "Installing and Configuring Sun Cluster HA for iPlanet Web Server," in the Sun Cluster 3.0 Data Services Installation and Configuration Guide.

  1. Edit the ErrorLog, PidLog, and access log entries in the magnus.conf file to reflect the directory created in Step 5 of the procedure "How to Configure an iPlanet Web Server" in Chapter 3 of the Sun Cluster 3.0 Data Services Installation and Configuration Guide, and synchronize the changes from the administrator's interface.

    The magnus.conf file specifies the locations of the error, access, and PID files and is located in the config directory of the iPlanet server instance. If the instance directory is located on the local file system, you must modify the magnus.conf file on each of the nodes.

    Change the following entries:


    ErrorLog /global/data/netscape/https-schost-1/logs/error
    PidLog /global/data/netscape/https-schost-1/logs/pid
    ...
    Init fn=flex-init access="$accesslog" ...
    

    to


    ErrorLog /var/pathname/http-instance/logs/error
    PidLog /var/pathname/http-instance/logs/pid
    ...
    Init fn=flex-init access="/var/pathname/http-instance/logs/access" ...
    

    As soon as the administrator's interface detects your changes, the interface displays a warning message, as follows.


    Warning: Manual edits not loaded
    Some configuration files have been edited by hand. Use the "Apply"
    button on the upper right side of the screen to load the latest
    configuration files.

  2. Click Apply as prompted.

    The administrator's interface displays a new web page.

  3. Click Load Configuration Files.
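
As a quick check after you load the configuration files, you can confirm from the command line that the new log locations are in effect. The following sketch assumes the instance directory used in the earlier example, /global/data/netscape/https-schost-1; substitute your instance's config directory. If the instance directory is located on the local file system, run the check on each node.


  # egrep 'ErrorLog|PidLog|flex-init' /global/data/netscape/https-schost-1/config/magnus.conf
  ErrorLog /var/pathname/http-instance/logs/error
  PidLog /var/pathname/http-instance/logs/pid
  Init fn=flex-init access="/var/pathname/http-instance/logs/access" ...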

Limited Support for VxVM 3.1.1

Sun Cluster has recently announced support for the Sun Fire server platforms. As part of this support, if you are using VERITAS Volume Manager (VxVM) as your volume manager, you must use VxVM 3.1.1.

VxVM 3.1.1 requires that Dynamic Multipathing (DMP) be enabled when VxVM is installed. If DMP is disabled using the steps described in the Sun Cluster 3.0 Installation Guide, VxVM 3.1.1 automatically re-enables DMP when the VRTSvxvm package is installed. Although DMP is not supported for managing multiple paths to the same device from the same node, DMP can safely remain enabled when each node has only a single path to each device.

DMP support has only been qualified with VxVM 3.1.1. You should still disable DMP for VxVM 3.0.4 and 3.1.
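
If you want to confirm the DMP state on a node after you install VxVM 3.1.1, the following commands are a minimal sketch: modinfo verifies that the vxdmp driver is loaded, vxdmpadm listctlr lists the controllers that DMP sees, and vxdmpadm getsubpaths lists the paths through one controller so that you can confirm that the node has only a single path to each device. The controller name c1 is only an example, and output formats vary by VxVM release.


  # modinfo | grep vxdmp
  # vxdmpadm listctlr all
  # vxdmpadm getsubpaths ctlr=c1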

VxVM Subcluster Installation

Sun Cluster software now supports coexistence of the Solstice DiskSuite and VERITAS Volume Manager (VxVM) volume managers in the same cluster or on the same cluster node. The procedure that follows supports a VxVM subcluster installation, in which VxVM is installed on only a subset of the cluster nodes.

To support VxVM subcluster installation, perform the following steps on each cluster node that is not installed with VxVM.


Note -

Do not create any shared disk groups until you have performed this procedure on all non-VxVM cluster nodes.


  1. Become superuser on a node in the cluster that you do not intend to install with VxVM.

  2. Create the /dev/vx, /dev/vx/dsk, and /dev/vx/rdsk directories on the node. These directories must be owned by root and have world read-write-execute permissions.


    # mkdir /dev/vx
    # mkdir /dev/vx/dsk
    # mkdir /dev/vx/rdsk
    # chmod a+rwx /dev/vx
    # chmod a+rwx /dev/vx/dsk
    # chmod a+rwx /dev/vx/rdsk
    

  3. Add a vxio entry to the /etc/name_to_major file on the node.

    The entry, in particular the value of the major number nnn, must be identical to the entries on those nodes that are installed with VxVM.


    # vi /etc/name_to_major
    vxio nnn
    


    Note -

    If you later install VxVM on this node, you must first remove the vxio entry before you begin VxVM installation.


  4. Initialize the vxio entry.


    # drvconfig -b -i vxio -m nnn
    


    Note -

    The next time you reboot this node, you might see messages similar to the following. These messages are harmless and can be ignored.



    /sbin/rcS: /usr/sbin/vxrecover: not found
    /etc/rc2.d/S75MOUNTGFSYS: /usr/sbin/vxdctl: not found

  5. Repeat Step 1 through Step 4 on all other nodes in the cluster that are not installed with VxVM.
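
If you prefer to script Step 2 through Step 4, the following sketch runs the same commands in one pass on a node that is not installed with VxVM. The value nnn is a placeholder for the vxio major number; one way to find it is to run grep vxio /etc/name_to_major on a node that already has VxVM installed.


  #!/bin/sh
  # Sketch only: run as superuser on each node that is not installed with VxVM.
  # Replace nnn with the vxio major number used on the nodes installed with VxVM.
  NNN=nnn

  # Create the /dev/vx directories with world read-write-execute permissions (Step 2).
  mkdir -p /dev/vx/dsk /dev/vx/rdsk
  chmod a+rwx /dev/vx /dev/vx/dsk /dev/vx/rdsk

  # Add the vxio entry if it is not already present (Step 3).
  grep vxio /etc/name_to_major > /dev/null || echo "vxio $NNN" >> /etc/name_to_major

  # Initialize the vxio entry (Step 4).
  drvconfig -b -i vxio -m $NNN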

How to Unencapsulate the Root Disk

Perform this procedure to unencapsulate the root disk.


Note -

This procedure is valid for Sun Cluster 3.0 configurations. To unencapsulate the root disk on a Sun Cluster 2.2 configuration, use procedures provided in your VxVM documentation.


  1. Ensure that only Solaris root file systems are present on the root disk: root (/), swap, the global-devices namespace, /usr, /var, /opt, and /home.

    If any other file systems reside on the root disk, back them up and remove them from the root disk.

  2. Become superuser on the node you intend to unencapsulate.

  3. Evacuate all resource groups and device groups from the node.


    # scswitch -S -h node
    
    -S

    Evacuates all resource groups and device groups

    -h node

    Specifies the name of the node from which to evacuate resource or device groups

  4. Determine the node ID number of the node.


    # clinfo -n
    N
    

  5. Unmount the global-devices file system for this node, where N is the node ID number returned in Step 4.


    # umount /global/.devices/node@N
    

  6. View the /etc/vfstab file and determine which VxVM volume corresponds to the global-devices file system.


    # vi /etc/vfstab
    #device           device        mount   FS      fsck    mount   mount
    #to mount         to fsck       point   type    pass    at boot options
    #                       
    #NOTE: volume rootdiskxNvol (/global/.devices/node@N) encapsulated partition cNtXdYsZ
    
  7. Remove the VxVM volume that corresponds to the global-devices file system from the rootdg disk group.


    # vxedit -rf rm rootdiskxNvol
    


    Note -

    All data in the global-devices file system is destroyed when you remove the VxVM volume, but the data is restored later in this procedure, after the root disk is unencapsulated.


  8. Unencapsulate the root disk.


    # /etc/vx/bin/vxunroot
    

    See your VxVM documentation for details.

  9. Use the format(1M) command to add a 100-Mbyte partition to the root disk to use for the global-devices file system.


    Tip -

    Use the same slice that was allocated to the global-devices file system before the root disk was encapsulated, as specified in the /etc/vfstab file.


  10. Set up a file system on the partition you created in Step 9.


    # newfs /dev/rdsk/cNtXdYsZ
    


  11. Determine the device ID (DID) name of the root disk.


    # scdidadm -l cNtXdY
    1        phys-schost-1:/dev/rdsk/cNtXdY   /dev/did/rdsk/dN
    

  12. In the /etc/vfstab file, replace the path names in the global-devices file system entry with the DID path you identified in Step 11.

    The original entry would look similar to the following.


    # vi /etc/vfstab
    /dev/vx/dsk/rootdiskxNvol /dev/vx/rdsk/rootdiskxNvol /global/.devices/node@N ufs  2  no  global

    The revised entry that uses the DID path would look similar to the following.


    /dev/did/dsk/dNsX /dev/did/rdsk/dNsX /global/.devices/node@N ufs 2  no  global

  13. Mount the global-devices file system.

    You do not need to perform a global mount.


    # mount /global/.devices/node@N
    

  14. Repopulate the global-devices file system with device nodes for any raw disk and Solstice DiskSuite devices.

    VxVM devices are re-created during the next reboot.


    # scgdevs
    

  15. Reboot the node.


    # reboot
    

  16. Repeat this procedure on each node of the cluster to unencapsulate the root disk on those nodes.
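
Before you repeat the procedure on the next node, you can optionally confirm that the global-devices file system is mounted on the DID device and that the node is participating in the cluster. The following sketch reuses the node ID N from Step 4; output formats can vary slightly between updates.


  # df -k /global/.devices/node@N
  # scstat -n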

Restrictions and Requirements

The following restrictions and requirements have been added or updated since the Sun Cluster 3.0 GA release.

Known Problems

In addition to known problems documented in Sun Cluster 3.0 Release Notes, the following known problems affect the operation of the Sun Cluster 3.0 GA release.

Bug ID 4349995

Problem Summary: The DiskSuite Tool (metatool) graphical user interface is incompatible with Sun Cluster 3.0.

Workaround: Use command line interfaces to configure and manage shared disksets.
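
For reference, the following command-line sketch covers the most common metatool tasks: the first command creates a shared diskset and adds the hosts that can master it, the second adds a drive by its DID name, and the third displays the status of the diskset. The diskset name oracle-set, the node names, and the DID device are placeholders only; see the metaset(1M) and metastat(1M) man pages for details.


  # metaset -s oracle-set -a -h phys-schost-1 phys-schost-2
  # metaset -s oracle-set -a /dev/did/rdsk/d4
  # metastat -s oracle-set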

Bug ID 4386412

Problem Summary: Upgrade from Sun Cluster 2.2 to 3.0 fails when the shared Cluster Configuration Database (CCD) is enabled.

Workaround: For clusters that use VERITAS Volume Manager (VxVM) only, disable the shared CCD after you back up the cluster (Step 7 of the procedure "How to Shut Down the Cluster") and before you stop the Sun Cluster 2.2 software (Step 8). The following procedure describes how to disable the shared CCD in a Sun Cluster 2.2 configuration.

  1. From either node, create a backup copy of the shared CCD.


    # ccdadm -c backup-filename
    

    See the ccdadm(1M) man page for more information.

  2. On each node of the cluster, remove the shared CCD.


    # scconf clustername -S none 
    

  3. On each node, run the mount(1M) command to determine on which node the ccdvol is mounted.

    The ccdvol entry looks similar to the following.


    # mount
    ...
    /dev/vx/dsk/sc_dg/ccdvol        /etc/opt/SUNWcluster/conf/ccdssa        ufs     suid,rw,largefiles,dev=27105b8  982479320

  4. Run the cksum(1) command on each node to ensure that the ccd.database file is identical on both nodes.


    # cksum ccd.database
    

  5. If the ccd.database files are different, from either node restore the shared CCD backup that you created in Step 1.


    # ccdadm -r backup-filename
    

  6. Stop the Sun Cluster 2.2 software on the node on which the ccdvol is mounted.


    # scadmin stopnode
    

  7. From the same node, unmount the ccdvol.


    # umount /etc/opt/SUNWcluster/conf/ccdssa 
    

Bug ID 4388265

Problem Summary: You may encounter I/O errors while replacing a SCSI cable from the Sun StorEdge A3500 controller board to the disk tray. These errors are temporary and should disappear when the cable is securely in place.

Workaround: After replacing a SCSI cable in a Sun StorEdge A3500 disk array, use your volume management recovery procedure to recover from I/O errors.
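
The exact recovery commands depend on your volume manager; the following lines are only a sketch of typical starting points and are not a substitute for the recovery procedure in your volume management documentation. For VERITAS Volume Manager, you typically reattach the disks and restart volume recovery.


  # vxreattach
  # vxrecover -s

For Solstice DiskSuite, you typically re-enable the errored submirror components, for example:


  # metareplace -e d10 c1t5d0s0

The metadevice d10 and the component c1t5d0s0 are placeholders for the affected devices on your system.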

Bug ID 4410535

Problem Summary: When using the Sun Cluster module with Sun Management Center 3.0, you cannot add a previously deleted resource group.

Workaround:

  1. Click Resource Group->Status->Failover Resource Groups.

  2. Right-click the name of the resource group to be deleted and select Delete Selected Resource Group.

  3. Click the Refresh icon and make sure that the row corresponding to the deleted resource group is gone.

  4. Right-click Resource Groups in the left pane and select Create New Resource Group.

  5. Enter the same resource group name that was deleted before and click Next>>. A dialog box appears, stating that the resource group name is already in use.

Known Documentation Problems

This section discusses documentation errors you might encounter and steps to correct these problems. This information is in addition to known documentation problems documented in the Sun Cluster 3.0 Release Notes.

Release Notes

The Sun Cluster 3.0 Release Notes contain three incorrect URLs. These URLs point to Sun-internal sites and need to be changed to refer to external sites.

Installation Guide

Do not perform Step 2 in "How to Install the Solaris Operating Environment" and in "How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes." This instruction, which checks and corrects the value of the local-mac-address variable, occurs before Solaris software is installed and therefore does not work. Because the value of local-mac-address is set to false by default, the step is also unnecessary. Skip Step 2 when you perform either procedure.


Note -

Sun Cluster software requires that the local-mac-address variable be set to false, and does not support changing the variable value to true.
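
If you want to verify the setting after the Solaris software is installed, you can query the OpenBoot PROM variable from the operating environment, as in the following sketch. Note that the full variable name carries a trailing question mark, which must be quoted from the shell.


  # eeprom 'local-mac-address?'
  local-mac-address?=false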


Data Services Installation and Configuration Guide

Additional documentation is needed when installing Sun Cluster HA for Oracle Parallel Server using the instructions in Chapter 8, "Installing and Configuring Sun Cluster HA for Oracle Parallel Server."

The section "How to Install Sun Cluster HA for Oracle Parallel Server Packages" does not describe how to set up the LUNs if you are using RAID 5 rather than VERITAS Volume Manager.

This section should state that if you are using the Sun StorEdge A3500 disk array with hardware RAID, you use the RAID Manager (RM6) software to configure the Sun StorEdge A3500 LUNs. The LUNs can then be accessed as raw devices through the corresponding /dev/did/rdsk device paths.

The section should also provide an example of how to configure the LUNs when using hardware RAID. This example should include the following steps:

  1. Create LUNs on the Sun StorEdge A3500 disk array.

    Refer to the information in Appendix A, Installing and Maintaining a Sun StorEdge A3500/A3500FC System to create the LUNs.

  2. After you create the LUNs, run the format(1M) command to partition the Sun StorEdge A3500 LUNs into as many slices as you need.


    0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
    1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
    2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,0
    3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,1
    4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,0
    5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,1
    6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@3/rdriver@4,2


    Note -

    If you use slice 0, do not start the partition at cylinder 0.


  3. Run the scdidadm(1M) command to find the raw DID device that corresponds to the LUNs that you created in Step 1.

    The following example lists output from the scdidadm -L command.


    1        phys-visa-1:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    1        phys-visa-2:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    2        phys-visa-1:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    2        phys-visa-2:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    3        phys-visa-2:/dev/rdsk/c4t4d0   /dev/did/rdsk/d3
    3        phys-visa-1:/dev/rdsk/c1t5d0   /dev/did/rdsk/d3
    4        phys-visa-2:/dev/rdsk/c3t5d0   /dev/did/rdsk/d4
    4        phys-visa-1:/dev/rdsk/c2t5d0   /dev/did/rdsk/d4
    5        phys-visa-2:/dev/rdsk/c4t4d1   /dev/did/rdsk/d5
    5        phys-visa-1:/dev/rdsk/c1t5d1   /dev/did/rdsk/d5
    6        phys-visa-2:/dev/rdsk/c3t5d1   /dev/did/rdsk/d6
    6        phys-visa-1:/dev/rdsk/c2t5d1   /dev/did/rdsk/d6

  4. Use the raw DID device that the scdidadm output identifies to configure OPS.

    For example, the scdidadm output might identify that the raw DID device that corresponds to the Sun StorEdge A3500 LUNs is d4. In this instance, use the /dev/did/rdsk/d4sx raw device, where x is the slice number, to configure OPS.
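
When the raw DID devices are used directly by OPS, the Oracle software must also be able to open them. The following is a hypothetical example only; it assumes that the Oracle user and group are named oracle and dba and that slice 1 of device d4 holds a database file. Adjust the names for your installation, and repeat the commands on each node that runs OPS, because the /dev/did device nodes are local to each node.


  # chown oracle:dba /dev/did/rdsk/d4s1
  # chmod 600 /dev/did/rdsk/d4s1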