Sun Cluster 3.0 Release Notes Supplement

New Features

In addition to the features documented in the Sun Cluster 3.0 Release Notes, this release now includes support for the following features.

Support for iPlanet Web Server 6.0

Sun Cluster 3.0 now supports iPlanet Web Server 6.0.

Two procedures have changed for iPlanet Web Server 6.0.

Installing Certificates on Secure Instances of iPlanet Web Server 6.0

The procedure for installing certificates on secure instances of iPlanet Web Server has changed for version 6.0. If you plan to run secure instances of iPlanet Web Server 6.0, complete the following steps when you install security certificates. This installation procedure requires that you create a certificate on one node, and then create symbolic links to that certificate on the other cluster nodes.

  1. Run the administrative server on node1.

  2. From your Web browser, connect to the administrative server as http://node1.domain:port.

    For example, http://phys-schost-1.eng.sun.com:8888. Use whatever port number you specified as the administrative server port during installation. The default port number is 8888.

  3. Install the certificate on node1.

    This installation creates three certificate files. One file, secmod.db, is common to all nodes, and the other two are specific to node1. These files are located in the alias subdirectory, under the directory in which the iPlanet Web Server files are installed.
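
    For example, if you installed iPlanet Web Server in /global/iws/servers (the hypothetical path used in the examples that follow), you can list the new files as follows. The listing should include secmod.db and the two node1-specific .db files.


    # ls /global/iws/servers/alias
    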

  4. If you installed iPlanet Web Server on a global file system, complete the following tasks. If you installed iPlanet Web Server on a local file system, go to Step 5.

    1. Note the location and file names for the three files created when installing the certificate in Step 3.

      For example, if you installed iPlanet Web Server in /global/iws/servers, and you used the IP address "IPx" when installing the certificate, then the paths to the files on node1 would be

      /global/iws/servers/alias/secmod.db

      /global/iws/servers/alias/https-IPx-node1-cert7.db

      /global/iws/servers/alias/https-IPx-node1-key3.db

    2. Create symbolic links for all other cluster nodes to the node-specific files for node1.

      In the following example, substitute the appropriate file paths for your system.


      # ln -s /global/iws/servers/alias/https-IPx-node1-cert7.db \
              /global/iws/servers/alias/https-IPx-node2-cert7.db
      # ln -s /global/iws/servers/alias/https-IPx-node1-key3.db \
              /global/iws/servers/alias/https-IPx-node2-key3.db
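
      To verify the links, you can, for example, list the alias directory; the node2 entries should appear as symbolic links to the corresponding node1 files.


      # ls -l /global/iws/servers/alias
      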
      

  5. If you installed iPlanet Web Server on a local file system, complete the following tasks.

    1. Note the location and file names for the three files created on node1 when installing the certificate in Step 3.

      For example, if you installed iPlanet Web Server in /local/iws/servers, and you used the IP address "IPx" when installing the certificate, then the paths to the files on node1 would be

      /local/iws/servers/alias/secmod.db

      /local/iws/servers/alias/https-IPx-node1-cert7.db

      /local/iws/servers/alias/https-IPx-node1-key3.db

    2. Move the three certificate files to a location on the global file system.

      In the following example, substitute the appropriate file paths for your system.


      # mv /local/iws/servers/alias/secmod.db \
           /global/secure/secmod.db
      # mv /local/iws/servers/alias/https-IPx-node1-cert7.db \
           /global/secure/https-IPx-node1-cert7.db
      # mv /local/iws/servers/alias/https-IPx-node1-key3.db \
           /global/secure/https-IPx-node1-key3.db
      

    3. Create symbolic links between the local and global paths of the three certificate files.

      Create the symbolic links on each node in the cluster.

      In the following example, substitute the appropriate file paths for your system.


      # Symbolic links for node1
      # ln -s /global/secure/secmod.db \
              /local/iws/servers/alias/secmod.db
      # ln -s /global/secure/https-IPx-node1-cert7.db \
              /local/iws/servers/alias/https-IPx-node1-cert7.db
      # ln -s /global/secure/https-IPx-node1-key3.db \
              /local/iws/servers/alias/https-IPx-node1-key3.db
      
      # Symbolic links for node2
      # ln -s /global/secure/secmod.db \
              /local/iws/servers/alias/secmod.db
      # ln -s /global/secure/https-IPx-node1-cert7.db \
              /local/iws/servers/alias/https-IPx-node2-cert7.db
      # ln -s /global/secure/https-IPx-node1-key3.db \
              /local/iws/servers/alias/https-IPx-node2-key3.db
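
      On each node, you can, for example, verify that the links resolve to the certificate files in /global/secure.


      # ls -l /local/iws/servers/alias
      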
      

Specifying the Location of the Access Logs

The procedure for specifying the location of the access logs while configuring an iPlanet Web Server has changed for iPlanet Web Server 6.0. To specify the location of the access logs, complete the following steps.

This change replaces Step 6 through Step 8 in the procedure "How to Configure an iPlanet Web Server" in Chapter 3, "Installing and Configuring Sun Cluster HA for iPlanet Web Server," in the Sun Cluster 3.0 Data Services Installation and Configuration Guide.

  1. Edit the ErrorLog, PidLog, and access log entries in the magnus.conf file to reflect the directory created in Step 5 of the "How to Configure an iPlanet Web Server" procedure in Chapter 3 of the Sun Cluster 3.0 Data Services Installation and Configuration Guide, and synchronize the changes from the administrator's interface.

    The magnus.conf file, which is located in the config directory of the iPlanet server instance, specifies the locations of the error, access, and PID files. Edit this file to change those locations to the directory that you created in Step 5 of the "How to Configure an iPlanet Web Server" procedure. If the instance directory is located on a local file system, you must modify the magnus.conf file on each of the nodes.
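
    For example, if the instance directory is /global/data/netscape/https-schost-1, as suggested by the example entries below, you would edit the following file. Substitute your own instance path.


    # vi /global/data/netscape/https-schost-1/config/magnus.conf
    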

    Change the following entries:


    ErrorLog /global/data/netscape/https-schost-1/logs/error
    PidLog /global/data/netscape/https-schost-1/logs/pid
    ...
    Init fn=flex-init access="$accesslog" ...
    

    to


    ErrorLog /var/pathname/http-instance/logs/error
    PidLog /var/pathname/http-instance/logs/pid
    ...
    Init fn=flex-init access="/var/pathname/http-instance/logs/access" ...
    

    As soon as the administrator's interface detects your changes, the interface displays a warning message, as follows.


    Warning: Manual edits not loaded
    Some configuration files have been edited by hand. Use the "Apply"
    button on the upper right side of the screen to load the latest
    configuration files.
  2. Click Apply as prompted.

    The administrator's interface displays a new web page.

  3. Click Load Configuration Files.

Limited Support for VxVM 3.1.1

Sun Cluster has recently announced support for the Sun Fire server platforms. As part of this support, if you are using VERITAS Volume Manager (VxVM) as your volume manager, you must use VxVM 3.1.1.

VxVM 3.1.1 requires that Dynamic Multipathing (DMP) be enabled when VxVM is installed. If DMP is disabled using the steps described in the Sun Cluster 3.0 Installation Guide, VxVM 3.1.1 will automatically re-enable DMP when the VRTSvxvm package is installed. While DMP functionality is not supported for managing multiple paths to the same devices from the same node, you can safely enable DMP when each node has only a single path to each device.

DMP support has been qualified only with VxVM 3.1.1. You should still disable DMP for VxVM 3.0.4 and 3.1.
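
If you are not sure which VxVM version a node runs, or whether DMP is currently enabled, you can, for example, check the installed package version and list the controllers that DMP claims. The following commands assume the standard Solaris pkginfo(1) command and the vxdmpadm utility delivered with VxVM.


    # pkginfo -l VRTSvxvm | grep VERSION
    # vxdmpadm listctlr all
    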

VxVM Subcluster Installation

Sun Cluster software now supports coexistence of the Solstice DiskSuite and VERITAS Volume Manager (VxVM) volume managers in the same cluster or on the same cluster node.

To support this coexistence in a VxVM subcluster installation, perform the following steps on each cluster node that is not installed with VxVM.


Note -

Do not create any shared disk groups until you have performed this procedure on all non-VxVM cluster nodes.


  1. Become superuser on a node in the cluster that you do not intend to install with VxVM.

  2. Create the /dev/vx/dsk and /dev/vx/rdsk directories on the node.

    These directories must be owned by root and have world read-write-execute permissions.


    # mkdir /dev/vx
    # mkdir /dev/vx/dsk
    # mkdir /dev/vx/rdsk
    # chmod a+rwx /dev/vx
    # chmod a+rwx /dev/vx/dsk
    # chmod a+rwx /dev/vx/rdsk
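
    You can verify the ownership and permissions afterward, for example:


    # ls -ld /dev/vx /dev/vx/dsk /dev/vx/rdsk
    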
    

  3. Add a vxio entry to the /etc/name_to_major file on the node.

    The entry, in particular the value of the major number nnn, must be identical to the entries on those nodes that are installed with VxVM.


    # vi /etc/name_to_major
    vxio nnn
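
    To determine the correct value for nnn, you can, for example, display the existing entry on a node that is already installed with VxVM.


    # grep vxio /etc/name_to_major
    vxio nnn
    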
    


    Note -

    If you later install VxVM on this node, you must first remove the vxio entry before you begin VxVM installation.


  4. Initialize the vxio entry.


    # drvconfig -b -i vxio -m nnn
    


    Note -

    The next time you reboot this node, you might see messages similar to the following. These messages are harmless and can be ignored.



    /sbin/rcS: /usr/sbin/vxrecover: not found
    /etc/rc2.d/S75MOUNTGFSYS: /usr/sbin/vxdctl: not found
  5. Repeat Step 1 through Step 4 on all other nodes in the cluster that are not installed with VxVM.

How to Unencapsulate the Root Disk

Perform this procedure to unencapsulate the root disk.


Note -

This procedure is valid for Sun Cluster 3.0 configurations. To unencapsulate the root disk on a Sun Cluster 2.2 configuration, use procedures provided in your VxVM documentation.


  1. Ensure that only Solaris root file systems--root (/), swap, the global devices namespace, /usr, /var, /opt, and /home--are present on the root disk.

    If any other file systems reside on the root disk, back them up and remove them from the root disk.

  2. Become superuser on the node you intend to unencapsulate.

  3. Evacuate all resource groups and device groups from the node.


    # scswitch -S -h node
    
    -S

    Evacuates all resource groups and device groups

    -h node

    Specifies the name of the node from which to evacuate resource or device groups
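
    For example, to evacuate the node phys-schost-1:


    # scswitch -S -h phys-schost-1
    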

  4. Determine the node ID number of the node.


    # clinfo -n
    N
    

  5. Unmount the global-devices file system for this node, where N is the node ID number returned in Step 4.


    # umount /global/.devices/node@N
    

  6. View the /etc/vfstab file and determine which VxVM volume corresponds to the global-devices file system.


    # vi /etc/vfstab
    #device           device        mount   FS      fsck    mount   mount
    #to mount         to fsck       point   type    pass    at boot options
    #                       
    #NOTE: volume rootdiskxNvol (/global/.devices/node@N) encapsulated partition cNtXdYsZ
    
  7. Remove the VxVM volume that corresponds to the global-devices file system from the rootdg disk group.


    # vxedit -rf rm rootdiskxNvol
    


    Note -

    All data in the global-devices file system is destroyed when you remove the VxVM volume, but is restored after the root disk is unencapsulated.


  8. Unencapsulate the root disk.


    # /etc/vx/bin/vxunroot
    

    See your VxVM documentation for details.

  9. Use the format(1M) command to add a 100-Mbyte partition to the root disk to use for the global-devices file system.


    Tip -

    Use the same slice that was allocated to the global-devices file system before the root disk was encapsulated, as specified in the /etc/vfstab file.
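
    Before you run format, you can, for example, review the disk's current partition table with the prtvtoc(1M) command. Substitute the appropriate device name; slice 2 conventionally represents the entire disk.


    # prtvtoc /dev/rdsk/cNtXdYs2
    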


  10. Set up a file system on the partition you created in Step 9.


    # newfs /dev/rdsk/cNtXdYsZ
    


  11. Determine the device ID (DID) name of the root disk.


    # scdidadm -l cNtXdY
    1        phys-schost-1:/dev/rdsk/cNtXdY   /dev/did/rdsk/dN
    

  12. In the /etc/vfstab file, replace the path names in the global-devices file system entry with the DID path you identified in Step 11.

    The original entry would look similar to the following.


    # vi /etc/vfstab
    /dev/vx/dsk/rootdiskxNvol /dev/vx/rdsk/rootdiskxNvol /global/.devices/node@N ufs  2  no  global

    The revised entry that uses the DID path would look similar to the following.


    /dev/did/dsk/dNsX /dev/did/rdsk/dNsX /global/.devices/node@N ufs 2  no  global

  13. Mount the global-devices file system.

    You do not need to perform a global mount.


    # mount /global/.devices/node@N
    

  14. Repopulate the global-devices file system with device nodes for any raw disk and Solstice DiskSuite devices.

    VxVM devices are re-created during the next reboot.


    # scgdevs
    


  15. Reboot the node.


    # reboot
    

  16. Repeat this procedure on each node of the cluster to unencapsulate the root disk on those nodes.