Sun Cluster 3.0 5/02 Release Notes

Known Documentation Problems

This section discusses known errors or omissions in the documentation, online help, and man pages, and the steps to correct these problems.

SunPlex Manager Online Help Correction

A note in SunPlex Manager's online help is inaccurate. The note appears in the Oracle data service installation procedure. The correction is as follows.

Incorrect:

Note: If no entries exist for the shmsys and semsys variables in the /etc/system file when SunPlex Manager packages are installed, default values for these variables are automatically put in the /etc/system file. The system must then be rebooted. Check Oracle installation documentation to verify that these values are appropriate for your database.

Correct:

Note: If no entries exist for the shmsys and semsys variables in the /etc/system file when you install the Oracle data service, default values for these variables can be automatically put in the /etc/system file. The system must then be rebooted. Check Oracle installation documentation to verify that these values are appropriate for your database.
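
For reference, shmsys and semsys entries in the /etc/system file take the following form. The values shown here are placeholders only, not recommendations; use the values that your Oracle installation documentation specifies.

set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256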

Sun Cluster HA for Oracle Packages

The introductory paragraph to “Installing Sun Cluster HA for Oracle Packages” in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide does not discuss the additional package needed by clusters that run Sun Cluster HA for Oracle with 64‐bit Oracle. The following section corrects that introductory paragraph.

Installing Sun Cluster HA for Oracle Packages

Depending on your configuration, use the scinstall(1M) utility to install one or both of the following packages on your cluster. Do not use the -s option to non‐interactive scinstall to install all of the data service packages.


Note –

SUNWscor is the prerequisite package for SUNWscorx.


If you installed the SUNWscor data service package as part of your initial Sun Cluster installation, proceed to “Registering and Configuring Sun Cluster HA for Oracle” on page 30. Otherwise, use the following procedure to install the SUNWscor and SUNWscorx packages.
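
To check whether these packages are already installed on a node, you can use the pkginfo command, for example:

# pkginfo SUNWscor SUNWscorx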

Simple Root Disk Groups With VERITAS Volume Manager

Simple root disk groups are not supported as disk types with VERITAS Volume Manager on Sun Cluster software. As a result, if you perform the procedure “How to Restore a Non-Encapsulated root (/) File System (VERITAS Volume Manager)” in the Sun Cluster 3.0 12/01 System Administration Guide, eliminate Step 9, which tells you to determine whether the root disk group (rootdg) is on a single slice on the root disk. Complete Step 1 through Step 8, skip Step 9, and then proceed with Step 10 through the end of the procedure.

Upgrading to a Sun Cluster 3.0 Software Update Release

The following is a correction to Step 8 of “How to Upgrade to a Sun Cluster 3.0 Software Update Release” in the Sun Cluster 3.0 12/01 Software Installation Guide.

    (Optional) Upgrade Solaris 8 software.

    1. Temporarily comment out all global device entries in the /etc/vfstab file.

      Do this to prevent the Solaris upgrade from attempting to mount the global devices.
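
      For example, a global mount entry similar to the following hypothetical line (the device paths and mount point are illustrative only) would be commented out by adding a # at the start of the line.

      #/dev/md/dsk/d4 /dev/md/rdsk/d4 /global/.devices/node@1 ufs 2 no global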

    2. Shut down the node to upgrade.


      # shutdown -y -g0
      ok

    3. Follow instructions in the installation guide for the Solaris 8 Maintenance Update version you want to upgrade to.


      Note –

      Do not reboot the node when prompted to reboot.


    4. In the /a/etc/vfstab file, uncomment all global device entries that you commented out in Step 1.

    5. Install any Solaris software patches and hardware-related patches, and download any needed firmware contained in the hardware patches.

      If any patches require rebooting, reboot the node in non-cluster mode as described in Step 6.

    6. Reboot the node in non-cluster mode.

      Include the double dashes (--) and both quotation marks in the command.


      # reboot -- "-x"
      

Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software

The following upgrade procedures contain changes and corrections made to the procedures since the release of the Sun Cluster 3.0 12/01 Software Installation Guide.

To upgrade from Sun Cluster 2.2 to Sun Cluster 3.0 5/02 software, perform the following procedures instead of the versions documented in the Sun Cluster 3.0 12/01 Software Installation Guide.

How to Upgrade Cluster Software Packages

  1. Become superuser on a cluster node.

  2. If you are installing from the CD‐ROM, insert the Sun Cluster 3.0 5/02 CD-ROM into the CD‐ROM drive on a node.

    If the volume daemon vold(1M) is running and configured to manage CD‐ROM devices, it automatically mounts the CD‐ROM on the /cdrom/suncluster_3_0_u3 directory.
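
    If vold is not running, you can mount the CD-ROM manually. The following commands are a sketch only; the device name /dev/dsk/c0t6d0s0 is an assumption, and your CD-ROM device might differ.

    # mkdir -p /cdrom/suncluster_3_0_u3
    # mount -F hsfs -o ro /dev/dsk/c0t6d0s0 /cdrom/suncluster_3_0_u3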

  3. Change to the /cdrom/suncluster_3_0_u3/SunCluster_3.0/Packages directory.


    # cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Packages
    

  4. If your volume manager is Solstice DiskSuite, install the latest Solstice DiskSuite mediator package (SUNWmdm) on each node.

    1. Add the SUNWmdm package.


      # pkgadd -d . SUNWmdm
      

    2. Reboot the node.


      # shutdown -g0 -y -i6
      

    3. Repeat on the other node.

  5. Reconfigure mediators.

    1. Determine which node has ownership of the diskset to which you will add the mediator hosts.


      # metaset -s setname
      
      -s setname

      Specifies the diskset name

    2. If no node has ownership, take ownership of the diskset.


      # metaset -s setname -t
      
      -t

      Takes ownership of the diskset

    3. Recreate the mediators.


      # metaset -s setname -a -m mediator‐host‐list
      
      -a

      Adds to the diskset

      -m mediator‐host‐list

      Specifies the names of the nodes to add as mediator hosts for the diskset

    4. Repeat for each diskset.

  6. On each node, shut down the rpc.pmfd daemon.


    # /etc/init.d/initpmf stop
    

  7. Upgrade the first node to Sun Cluster 3.0 5/02 software.

    These procedures will refer to this node as the first-installed node.

    1. On the first node to upgrade, change to the /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools directory.


      # cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools
      

    2. Upgrade the cluster software framework.


      # ./scinstall ‐u begin ‐F
      
      -F

      Specifies that this is the first-installed node in the cluster

      See the scinstall(1M) man page for more information.

    3. Install any Sun Cluster patches on the first node.

      See the Sun Cluster 3.0 5/02 Release Notes for the location of patches and installation instructions.

    4. Reboot the node.


      # shutdown -g0 -y -i6
      

      When the first node reboots into cluster mode, it establishes the cluster.

  8. Upgrade the second node to Sun Cluster 3.0 5/02 software.

    1. On the second node, change to the /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools directory.


      # cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools
      

    2. Upgrade the cluster software framework.


      # ./scinstall ‐u begin ‐N node1
      
      ‐N node1

      Specifies the name of the first-installed node in the cluster, not the name of the second node to be installed

      See the scinstall(1M) man page for more information.

    3. Install any Sun Cluster patches on the second node.

      See the Sun Cluster 3.0 5/02 Release Notes for the location of patches and installation instructions.

    4. Reboot the node.


      # shutdown -g0 -y -i6
      

  9. After both nodes are rebooted, verify from either node that both nodes are cluster members.


    # scstat -n

    -- Cluster Nodes --
                       Node name      Status
                       ---------      ------
      Cluster node:    phys-schost-1  Online
      Cluster node:    phys-schost-2  Online

    See the scstat(1M) man page for more information about displaying cluster status.

  10. Choose a shared disk to be the quorum device.

    You can use any disk shared by both nodes as a quorum device. From either node, use the scdidadm(1M) command to determine the shared disk's device ID (DID) name. You specify this device name in the -q globaldev=DIDname option to the scinstall -u finish command in How to Finish Upgrading Cluster Software.


    # scdidadm ‐L
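
    The output lists each DID instance with its physical device path on each node; a shared disk appears once for each node that is connected to it, as in the following hypothetical excerpt.

    1       phys-schost-1:/dev/rdsk/c0t0d0   /dev/did/rdsk/d1
    2       phys-schost-1:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2
    2       phys-schost-2:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2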
    

  11. Configure the shared quorum device.

    1. Start the scsetup(1M) utility.


      # scsetup
      

      The Initial Cluster Setup screen is displayed.

      If the quorum setup process is interrupted or fails to complete successfully, rerun scsetup.

    2. At the prompt Do you want to add any quorum disks?, configure a shared quorum device.

      A two-node cluster remains in install mode until a shared quorum device is configured. After the scsetup utility configures the quorum device, the message Command completed successfully is displayed.

    3. At the prompt Is it okay to reset "installmode"?, answer Yes.

      After the scsetup utility sets quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed and the utility returns you to the Main Menu.

    4. Exit from the scsetup utility.

  12. From any node, verify the device and node quorum configurations.

    You do not need to be superuser to run this command.


    % scstat -q
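
    The output is similar to the following hypothetical summary; the vote counts shown are examples for a two-node cluster with one quorum device.

    -- Quorum Summary --
      Quorum votes possible:      3
      Quorum votes needed:        2
      Quorum votes present:       3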
    

  13. From any node, verify that cluster install mode is disabled.

    You do not need to be superuser to run this command.


    % scconf -p | grep "Cluster install mode:"
    Cluster install mode:                                  disabled

  14. Update the directory paths.

    Go to “How to Update the Root Environment” in the Sun Cluster 3.0 12/01 Software Installation Guide.

Example—Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 5/02 Software – Begin Process

The following example shows the beginning process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 5/02 software. The cluster node names are phys‐schost‐1, the first-installed node, and phys‐schost‐2, which joins the cluster that phys‐schost‐1 established. The volume manager is Solstice DiskSuite and both nodes are used as mediator hosts for the diskset schost‐1.


(Install the latest Solstice DiskSuite mediator package
on each node)
# cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Packages
# pkgadd -d . SUNWmdm
 
(Restore the mediator)
# metaset -s schost-1 -t
# metaset -s schost‐1 -a -m phys‐schost‐1 phys‐schost‐2
 
(Shut down the rpc.pmfd daemon)
# /etc/init.d/initpmf stop
 
(Begin upgrade on the first node and reboot it)
phys‐schost‐1# cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools
phys‐schost‐1# ./scinstall ‐u begin ‐F
phys-schost-1# shutdown -g0 -y -i6
 
(Begin upgrade on the second node and reboot it)
phys‐schost‐2# cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools
phys‐schost‐2# ./scinstall ‐u begin ‐N phys‐schost‐1
phys-schost-2# shutdown -g0 -y -i6
 
(Verify cluster membership)
# scstat
 
(Choose a shared disk and configure it as the quorum
device)
# scdidadm -L
# scsetup
Select Quorum>Add a quorum disk
 
(Verify that the quorum device is configured)
# scstat -q
 
(Verify that the cluster is no longer in install
mode)
% scconf -p | grep "Cluster install mode:"
Cluster install mode:                                  disabled

How to Finish Upgrading Cluster Software

This procedure finishes the scinstall(1M) upgrade process begun in How to Upgrade Cluster Software Packages. Perform these steps on each node of the cluster.

  1. Become superuser on each node of the cluster.

  2. Is your volume manager VxVM?

    • If no, go to Step 3.

    • If yes, install VxVM and any VxVM patches and create the root disk group (rootdg) as you would for a new installation.

      • To install VxVM and encapsulate the root disk, perform the procedures in “How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk” in the Sun Cluster 3.0 12/01 Software Installation Guide. To mirror the root disk, perform the procedures in “How to Mirror the Encapsulated Root Disk” in the Sun Cluster 3.0 12/01 Software Installation Guide.

      • To install VxVM and create rootdg on local, non-root disks, perform the procedures in “How to Install VERITAS Volume Manager Software Only” and in “How to Create a rootdg Disk Group on a Non-Root Disk” in the Sun Cluster 3.0 12/01 Software Installation Guide.

  3. Are you upgrading Sun Cluster HA for NFS?

    If yes, go to Step 4.

    If no, go to Step 5.

  4. Finish Sun Cluster 3.0 software upgrade and convert Sun Cluster HA for NFS configuration.

    If you are not upgrading Sun Cluster HA for NFS, perform Step 5 instead.

    1. Insert the Sun Cluster 3.0 Agents 5/02 CD-ROM into the CD‐ROM drive on a node.

      This step assumes that the volume daemon vold(1M) is running and configured to manage CD‐ROM devices.

    2. Finish the cluster software upgrade on that node.


      # scinstall ‐u finish ‐q globaldev=DIDname \
      -d /cdrom/scdataservices_3_0_u3 -s nfs
      
      -q globaldev=DIDname

      Specifies the device ID (DID) name of the quorum device

      -d /cdrom/scdataservices_3_0_u3

      Specifies the directory location of the CD‐ROM image

      -s nfs

      Specifies the Sun Cluster HA for NFS data service to configure


      Note –

      An error message similar to the following might be generated. You can safely ignore it.


      ** Installing Sun Cluster - Highly Available NFS Server **
      Skipping "SUNWscnfs" - already installed


    3. Eject the CD‐ROM.

    4. Repeat Step 1 through Step 3 on the other node.

      When completed on both nodes, cluster install mode is disabled and all quorum votes are assigned.

    5. Skip to Step 6.

  5. Finish Sun Cluster 3.0 software upgrade on each node.

    If you are upgrading Sun Cluster HA for NFS, perform Step 4 instead.


    # scinstall ‐u finish ‐q globaldev=DIDname
    
    -q globaldev=DIDname

    Specifies the device ID (DID) name of the quorum device

  6. If you are upgrading any data services other than Sun Cluster HA for NFS, configure resources for those data services as you would for a new installation.

    See the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for procedures.

  7. If your volume manager is Solstice DiskSuite, from either node bring pre-existing disk device groups online.


    # scswitch ‐z ‐D disk-device-group ‐h node
    
    -z

    Performs the switch

    -D disk-device-group

    Specifies the name of the disk device group, which for Solstice DiskSuite software is the same as the diskset name

    -h node

    Specifies the name of the cluster node that serves as the primary of the disk device group

  8. From either node, bring pre-existing data service resource groups online.

    At this point, Sun Cluster 2.2 logical hosts are converted to Sun Cluster 3.0 5/02 resource groups, and the names of logical hosts are appended with the suffix -lh. For example, a logical host named lhost‐1 is upgraded to a resource group named lhost‐1‐lh. Use these converted resource group names in the following command.


    # scswitch ‐z ‐g resource-group ‐h node
    
    -g resource-group

    Specifies the name of the resource group to bring online

    You can use the scrgadm -p command to display a list of all resource types and resource groups in the cluster. The scrgadm -pv command displays this list with more detail.

  9. If you are using Sun Management Center to monitor your Sun Cluster configuration, install the Sun Cluster module for Sun Management Center.

    1. Ensure that you are using the most recent version of Sun Management Center.

      See your Sun Management Center documentation for installation or upgrade procedures.

    2. Follow guidelines and procedures in “Installation Requirements for Sun Cluster Monitoring” in the Sun Cluster 3.0 12/01 Software Installation Guide to install the Sun Cluster module packages.

  10. Verify that all nodes have joined the cluster.

    Go to “How to Verify Cluster Membership” in the Sun Cluster 3.0 12/01 Software Installation Guide.

Example—Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 5/02 Software – Finish Process

The following example shows the finish process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 5/02 software. The cluster node names are phys‐schost‐1 and phys‐schost‐2, the device group names are dg‐schost‐1 and dg‐schost‐2, and the data service resource group names are lh‐schost‐1 and lh‐schost‐2. The scinstall command automatically converts the Sun Cluster HA for NFS configuration.


(Finish upgrade on each node)
phys‐schost‐1# scinstall ‐u finish ‐q globaldev=d1 \
-d /cdrom/scdataservices_3_0_u3 -s nfs
phys‐schost‐2# scinstall ‐u finish ‐q globaldev=d1 \
-d /cdrom/scdataservices_3_0_u3 -s nfs
 
(Bring device groups and data service resource groups
on each node online)
phys‐schost‐1# scswitch ‐z ‐D dg‐schost‐1 ‐h phys‐schost‐1
phys‐schost‐1# scswitch ‐z ‐g lh-schost-1 ‐h phys‐schost‐1
phys‐schost‐1# scswitch ‐z ‐D dg‐schost‐2 ‐h phys‐schost‐2 
phys‐schost‐1# scswitch ‐z ‐g lh-schost-2 ‐h phys‐schost‐2

Bringing a Node Out of Maintenance State

The procedure “How to Bring a Node Out of Maintenance State” in the Sun Cluster 3.0 12/01 System Administration Guide does not apply to a two-node cluster. A procedure appropriate for a two-node cluster will be evaluated for the next release.

Man Pages

scgdevs(1M) Man Page

The following paragraph clarifies behavior of the scgdevs command. This information is not currently included in the scgdevs(1M) man page.

New Information:

When called from the local node, scgdevs(1M) performs its work on remote nodes asynchronously. Therefore, completion of the command on the local node does not necessarily mean that the command has completed its work clusterwide.

SUNW.sap_ci(5) Man Page

SUNW.sap_as(5) Man Page

rg_properties(5) Man Page

The following new resource group property should be added to the rg_properties(5) man page.

Auto_start_on_new_cluster

This property controls whether the Resource Group Manager starts the resource group automatically when a new cluster is forming.

The default is TRUE. If set to TRUE, the Resource Group Manager attempts to start the resource group automatically to achieve Desired_primaries when all nodes of the cluster are simultaneously rebooted. If set to FALSE, the Resource Group does not start automatically when the cluster is rebooted.

Category: Optional
Default: True
Tunable: Any time
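
For example, to prevent a resource group from being started automatically when the cluster reboots, you can change this property with the scrgadm command. The resource group name rg-example below is hypothetical.

# scrgadm -c -g rg-example -y Auto_start_on_new_cluster=False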