Sun Cluster 3.0 12/01 Software Installation Guide

Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Update 2 Software

Perform the following tasks to upgrade your two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 Update 2 (12/01) software. To upgrade from Sun Cluster 3.0 7/01 (Update 1) software to Sun Cluster 3.0 12/01 software, go to "Upgrading to a Sun Cluster 3.0 Software Update Release".

Table 3-1 Task Map: Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 12/01 Software

Task: Read upgrade conditions and restrictions, and plan a root disk partitioning scheme to support Sun Cluster 3.0 12/01 software.
For Instructions, Go To: "Overview of Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 12/01 Software"

Task: Take the cluster out of production. For VERITAS Volume Manager (VxVM), also disable shared CCD.
For Instructions, Go To: "How to Shut Down the Cluster"

Task: If your cluster uses VxVM, deport disk groups and remove VxVM software packages.
For Instructions, Go To: "How to Uninstall VERITAS Volume Manager Software From a Sun Cluster 2.2 Configuration"

Task: Upgrade to the Solaris 8 operating environment if necessary, add a new /globaldevices file system, and change file system allocations to support Sun Cluster 3.0 12/01 software. If your cluster uses Solstice DiskSuite software, also remove mediators and upgrade Solstice DiskSuite software.
For Instructions, Go To: "How to Upgrade the Solaris Operating Environment"

Task: Upgrade to Sun Cluster 3.0 12/01 framework software. If your cluster uses Solstice DiskSuite software, also recreate mediators.
For Instructions, Go To: "How to Upgrade Cluster Software Packages"

Task: Update the PATH and MANPATH.
For Instructions, Go To: "How to Update the Root Environment"

Task: Upgrade to Sun Cluster 3.0 12/01 data services software. If necessary, upgrade third-party applications.
For Instructions, Go To: "How to Upgrade Data Service Software Packages"

Task: Assign a quorum device, finish the cluster software upgrade, and start device groups and data services. If your cluster uses VERITAS Volume Manager (VxVM), reinstall VxVM software packages and import and register disk groups. If your cluster uses Solstice DiskSuite software, restore mediators.
For Instructions, Go To: "How to Finish Upgrading Cluster Software"

Task: Verify that all nodes have joined the cluster.
For Instructions, Go To: "How to Verify Cluster Membership"

Overview of Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 12/01 Software

This section provides conditions, restrictions, and planning guidelines for upgrading from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 software.

Conditions and Restrictions

The following conditions must be met to upgrade from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 software.

Planning the Upgrade

To support Sun Cluster 3.0 12/01 software, you probably need to change your current system disk layout. Consider the following when planning your new partitioning scheme.

See "System Disk Partitions" for more information about disk space requirements to support Sun Cluster 3.0 12/01 software.

How to Shut Down the Cluster

Before you upgrade the software, take the cluster out of production.

  1. Have available the CD-ROMs, documentation, and patches for all the software products you are upgrading.

    • Solaris 8 operating environment

    • Solstice DiskSuite software or VERITAS Volume Manager

    • Sun Cluster 3.0 12/01 framework

    • Sun Cluster 3.0 12/01 data services (agents)

    • Third-party applications

    Solstice DiskSuite software and documentation are now part of the Solaris 8 product.


    Note -

    These procedures assume you are installing from CD-ROMs. If you are installing from a network, ensure that the CD-ROM image for each software product is loaded on the network.


    See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.

  2. Notify users that the cluster will be down.

  3. Become superuser on each node of the cluster.

  4. Search the /var/adm/messages log for unresolved error or warning messages.

    Correct any problems.
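
    For example, one way to scan the log for error and warning entries (the search pattern is only a suggestion) is to use egrep.


    # egrep -i 'error|warning' /var/adm/messages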

  5. Verify that no logical hosts are in the maintenance state.

    1. Become superuser on a node of the cluster.

    2. Use the hastat(1M) command to display the status of the cluster.


      # hastat
      HIGH AVAILABILITY CONFIGURATION AND STATUS
      -------------------------------------------
      ...
      LOGICAL HOSTS IN MAINTENANCE STATE

      If the screen output displays NONE, no logical hosts are in the maintenance state. Go to Step 6.

    3. If a logical host is in the maintenance state, use the haswitch(1M) command to perform switchover.


      # haswitch hostname logical-hostname
      
      hostname

      Specifies the name of the node that is to own the logical host

      logical-hostname

      Specifies the name of the logical host

    4. Run the hastat command to verify the switchover completed successfully.

  6. Ensure that the size of each logical host administrative file system is at least 10 Mbytes.


    # df -k /logical-hostname
    

    A logical host administrative file system that is smaller than the required minimum of 10 Mbytes will not be mountable after the upgrade to Sun Cluster 3.0 12/01 software. If a logical host administrative file system is smaller than 10 Mbytes, follow the procedure in your volume manager documentation to grow the file system.
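
    As an illustration only for Solstice DiskSuite, growing the file system typically means concatenating an additional slice to the underlying metadevice and then growing the mounted file system; the diskset name, metadevice, and slice shown here are hypothetical.


    # metattach -s schost-1 d100 c1t2d0s0
    # growfs -M /logical-hostname /dev/md/schost-1/rdsk/d100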

  7. Back up your system.

    Ensure that all users are logged off the system before you back it up.
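
    For example, you might back up each file system with a level 0 ufsdump(1M); the tape device /dev/rmt/0 is only an assumption about your backup hardware, and you would repeat the command for each file system.


    # ufsdump 0ucf /dev/rmt/0 /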

  8. (VxVM only) Disable the shared Cluster Configuration Database (CCD).

    1. From either node, create a backup copy of the shared CCD.


      # ccdadm -c backup-filename
      

      See the ccdadm(1M) man page for more information.

    2. On each node of the cluster, remove the shared CCD.


      # scconf clustername -S none 
      

    3. On each node, run the mount(1M) command to determine on which node the ccdvol is mounted.

      The ccdvol entry looks similar to the following.


      # mount
      ...
      /dev/vx/dsk/sc_dg/ccdvol  /etc/opt/SUNWcluster/conf/ccdssa  ufs suid,rw,largefiles,dev=27105b8  982479320

    4. Run the cksum(1) command on each node to ensure that the ccd.database file is identical on both nodes.


      # cksum ccd.database
      

    5. If the ccd.database files are different, restore from either node the backup copy of the shared CCD that you created earlier in this step.


      # ccdadm -r backup-filename
      

    6. Stop the Sun Cluster 2.2 software on the node on which the ccdvol is mounted.


      # scadmin stopnode
      

    7. From the same node, unmount the ccdvol.


      # umount /etc/opt/SUNWcluster/conf/ccdssa 
      

  9. Stop the Sun Cluster 2.2 software on each node of the cluster.


    # scadmin stopnode
    

  10. Run the hastat command to verify that no nodes are in the cluster.

  11. Does the cluster use VERITAS Volume Manager?

    • If yes, go to "How to Uninstall VERITAS Volume Manager Software From a Sun Cluster 2.2 Configuration".

    • If no, go to "How to Upgrade the Solaris Operating Environment".

How to Uninstall VERITAS Volume Manager Software From a Sun Cluster 2.2 Configuration

If your cluster uses VERITAS Volume Manager (VxVM), perform this procedure on each node of the cluster to uninstall the VxVM software. Existing disk groups are retained and automatically reimported after you have upgraded all software.


Note -

To upgrade to Sun Cluster 3.0 12/01 software, you must remove VxVM software and later reinstall it, regardless of whether you have the latest version of VxVM installed.


  1. Become superuser on a cluster node.

  2. Uninstall VxVM.

    Follow procedures in your VxVM documentation. This process involves the following tasks.

    • Deport all VxVM disk groups. Ensure that disks containing data to be preserved are not used for other purposes during the upgrade.

    • Unencapsulate the root disk, if it is encapsulated.

    • Shut down VxVM.

    • Remove all installed VxVM software packages.
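
    As an illustration only, these tasks commonly map to commands similar to the following; the disk group name datadg and the package list are assumptions, and the exact commands and package names depend on your VxVM version, so follow your VxVM documentation.


    (Illustration only - repeat the deport for each disk group)
    # vxdg deport datadg
    # /etc/vx/bin/vxunroot
    # vxdctl stop
    # pkgrm VRTSvmsa VRTSvmdoc VRTSvmman VRTSvmdev VRTSvxvm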

  3. Remove the VxVM device namespace.


    # rm -rf /dev/vx
    

  4. Repeat Step 1 through Step 3 on the other cluster node.

  5. Upgrade or prepare the Solaris operating environment to support Sun Cluster 3.0 12/01 software.

    Go to "How to Upgrade the Solaris Operating Environment".

How to Upgrade the Solaris Operating Environment

Perform this procedure on each node in the cluster to upgrade or prepare the Solaris operating environment to support Sun Cluster 3.0 12/01 software.

  1. Become superuser on the cluster node.

  2. If your volume manager is Solstice DiskSuite and you are using mediators, unconfigure mediators.

    1. Run the following command to verify that no mediator data problems exist.


      # medstat -s setname
      
      -s setname

      Specifies the diskset name

      If the value in the Status field is Bad, repair the affected mediator host by following the procedure "How to Fix Bad Mediator Data".

      See the medstat(1M) man page for more information.

    2. List all mediators.

      Use this information to determine which node, if any, has ownership of the diskset from which you will remove mediators.


      # metaset -s setname
      

      Save this information for when you restore the mediators during the procedure "How to Upgrade Cluster Software Packages".

    3. If no node has ownership, take ownership of the diskset.


      # metaset -s setname -t
      
      -t

      Takes ownership of the diskset

    4. Unconfigure all mediators.


      # metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the diskset name

      -d

      Deletes from the diskset

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the diskset

      See the mediator(7) man page for further information about mediator-specific options to the metaset command.

    5. Remove the mediator software.


      # pkgrm SUNWmdm
      

  3. Does your configuration currently run Solaris 8 software?

    • If no, go to Step 4.

    • If yes, perform the following steps.

    1. Create a file system of at least 100 Mbytes and set its mount point as /globaldevices.
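
      As a minimal sketch, assuming an unused slice such as c0t0d0s4 is available (the slice name is hypothetical), you could create the file system, add a matching /etc/vfstab entry, and mount it as follows.


      # newfs /dev/rdsk/c0t0d0s4
      # mkdir /globaldevices
      (Add an /etc/vfstab entry similar to the following, then mount the file system)
      /dev/dsk/c0t0d0s4  /dev/rdsk/c0t0d0s4  /globaldevices  ufs  2  yes  -
      # mount /globaldevices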


      Note -

      The /globaldevices file system is necessary for Sun Cluster 3.0 12/01 software installation to succeed.


    2. Reallocate space in other partitions as needed to support Sun Cluster 3.0 12/01 software.

      See "System Disk Partitions" for guidelines.

    3. Go to Step 6.

  4. Determine which procedure to use to upgrade to Solaris 8 software.

    Volume Manager: Solstice DiskSuite
      Procedure to Use: Upgrading both Solaris and Solstice DiskSuite software
      For Instructions, Go To: Solstice DiskSuite installation documentation

    Volume Manager: VxVM
      Procedure to Use: Performing a standard Solaris software installation
      For Instructions, Go To: Solaris 8 installation documentation

  5. Upgrade to Solaris 8 software, following the procedure you selected in Step 4.

    During installation, make the following changes to the root disk partitioning scheme.

    • Create a file system of at least 100 Mbytes and set its mount point as /globaldevices. The /globaldevices file system is necessary for Sun Cluster 3.0 12/01 software installation to succeed.

    • Reallocate space in other partitions as needed to support Sun Cluster 3.0 12/01 software.

    See "System Disk Partitions" for partitioning guidelines.


    Note -

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. See the ifconfig(1M) man page for more information about Solaris interface groups.


  6. Install any Solaris software patches.

    See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.

  7. Install any hardware-related patches.

    See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.

  8. For Solstice DiskSuite software, install any Solstice DiskSuite software patches.

    See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.

  9. Upgrade to Sun Cluster 3.0 12/01 software.

    Go to "How to Upgrade Cluster Software Packages".

Example--Unconfiguring Mediators

The following example shows the mediator host phys-schost-1 unconfigured from the Solstice DiskSuite diskset schost-1 before the upgrade to Solaris 8 software.


(Check mediator status)
# medstat -s schost-1
 
(List all mediators)
# metaset -s schost-1
 
(Unconfigure the mediator)
# metaset -s schost-1 -d -m phys-schost-1
 
(Remove mediator software)
# pkgrm SUNWmdm
 
(Begin software upgrade)

How to Upgrade Cluster Software Packages

Perform this procedure on each node. You can perform this procedure on both nodes simultaneously if you have two copies of the Sun Cluster 3.0 12/01 CD-ROM.


Note -

The scinstall(1M) upgrade command is divided into a two-step process--the -u begin option and the -u finish option. This procedure runs the begin option. The finish option is run in "How to Finish Upgrading Cluster Software".


  1. Become superuser on a cluster node.

  2. If you are installing from the CD-ROM, insert the Sun Cluster 3.0 12/01 CD-ROM into the CD-ROM drive on a node.

    If the volume daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0_u2 directory.

  3. Change to the /cdrom/suncluster_3_0_u2/SunCluster_3.0/Packages directory.


    # cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Packages
    

  4. If your volume manager is Solstice DiskSuite, install the latest Solstice DiskSuite mediator package (SUNWmdm) on each node.

    1. Add the SUNWmdm package.


      # pkgadd -d . SUNWmdm
      

    2. Reboot the node.


      # shutdown -g0 -y -i6
      

    3. Repeat on the other node.

  5. Reconfigure mediators.

    1. Determine which node has ownership of the diskset to which you will add the mediator hosts.


      # metaset -s setname
      
      -s setname

      Specifies the diskset name

    2. If no node has ownership, take ownership of the diskset.


      # metaset -s setname -t
      
      -t

      Takes ownership of the diskset

    3. Recreate the mediators.


      # metaset -s setname -a -m mediator-host-list
      
      -a

      Adds to the diskset

      -m mediator-host-list

      Specifies the names of the nodes to add as mediator hosts for the diskset

    4. Repeat for each diskset.

  6. Begin upgrade to Sun Cluster 3.0 12/01 software.

    1. On one node, change to the /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools directory.


      # cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
      

    2. Upgrade the cluster software framework.

      Node To Upgrade: First node
        Command to Use: ./scinstall -u begin -F

      Node To Upgrade: Second node
        Command to Use: ./scinstall -u begin -N node1

      -F

      Specifies that this is the first-installed node in the cluster

      -N node1

      Specifies the name of the first-installed node in the cluster, not the name of the second node to be installed

      See the scinstall(1M) man page for more information.

    3. Reboot the node.


      # shutdown -g0 -y -i6
      

      When the first node reboots into cluster mode, it establishes the cluster. The second node waits if necessary for the cluster to be established before completing its own processes and joining the cluster.

    4. Repeat on the other cluster node.

  7. On each node, install any Sun Cluster patches.

    See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.

  8. Update the directory paths.

    Go to "How to Update the Root Environment".

Example--Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 12/01 Software - Begin Process

The following example shows the beginning process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 software. The cluster node names are phys-schost-1, the first-installed node, and phys-schost-2, which joins the cluster that phys-schost-1 established. The volume manager is Solstice DiskSuite and both nodes are used as mediator hosts for the diskset schost-1.


(Install the latest Solstice DiskSuite mediator package
on each node)
# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Packages
# pkgadd -d . SUNWmdm
 
(Restore the mediator)
# metaset -s schost-1 -t
# metaset -s schost-1 -a -m phys-schost-1 phys-schost-2
 
(Begin upgrade on the first node)
phys-schost-1# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
phys-schost-1# ./scinstall -u begin -F
 
(Begin upgrade on the second node)
phys-schost-2# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
phys-schost-2# ./scinstall -u begin -N phys-schost-1
 
(Reboot each node)
# shutdown -g0 -y -i6

How to Update the Root Environment

Perform the following tasks on each node of the cluster.


Note -

In a Sun Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell before attempting to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See the Solaris system administration documentation for more information on customizing a user's work environment.
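
For example, in a .cshrc file you might confine terminal output to interactive shells with a test similar to the following; the echo command is just a placeholder for any command that writes to the terminal.


if ( $?prompt ) then
    echo "Logged in to `hostname`"
endif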


  1. Become superuser on a cluster node.

  2. Modify the .cshrc file PATH and MANPATH entries.

    1. Set the PATH to include /usr/sbin and /usr/cluster/bin.

      • For VERITAS Volume Manager, also set your PATH to include /etc/vx/bin. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/bin to your PATH.

      • For VERITAS File System, also set your PATH to include /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs/bin, and /etc/fs/vxfs.

    2. Set the MANPATH to include /usr/cluster/man. Also include the volume manager-specific paths.

      • For Solstice DiskSuite software, also set your MANPATH to include /usr/share/man.

      • For VERITAS Volume Manager, also set your MANPATH to include /opt/VRTSvxvm/man. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/man to your MANPATH.

      • For VERITAS File System, also add /opt/VRTS/man to your MANPATH.
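
    For example, a minimal .cshrc sketch for a Solstice DiskSuite configuration might look like the following; add the VERITAS paths listed above only if those products are installed.


    set path = ( /usr/sbin /usr/cluster/bin $path )
    setenv MANPATH /usr/cluster/man:/usr/share/man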

  3. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

  4. Start a new shell to activate the environment changes.

  5. Repeat Step 1 through Step 4 on the other node.

  6. Upgrade to Sun Cluster 3.0 12/01 data service software.

    Go to "How to Upgrade Data Service Software Packages".

How to Upgrade Data Service Software Packages

Perform this procedure on each cluster node.

  1. Become superuser on a node of the cluster.

  2. Upgrade applications and apply application patches as needed.

    See your application documentation for installation instructions.


    Note -

    If the applications are stored on shared disks, you must master the relevant disk groups and manually mount the relevant file systems before you upgrade the application.
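
    As an illustration only for Solstice DiskSuite, mastering a diskset and manually mounting a file system might look like the following; the diskset name, metadevice, and mount point are hypothetical. For VxVM, you would import the disk group and mount its volumes instead.


    # metaset -s schost-1 -t
    # mount -F ufs /dev/md/schost-1/dsk/d100 /app-files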


  3. Add data services.

    1. Insert the Sun Cluster 3.0 Agents 12/01 CD-ROM into the CD-ROM drive on the node.

    2. Enter the scinstall(1M) utility.


      # scinstall
      

      Follow these guidelines to use the interactive scinstall utility.

      • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

      • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

    3. To add data services, type 4 (Add support for a new data service to this cluster node).

      Follow the prompts to add data services.

    4. Eject the CD-ROM.

  4. Install any Sun Cluster data service patches.

    See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.

  5. Repeat Step 1 through Step 4 on the other node of the cluster.

  6. Shut down the second node to be upgraded to Sun Cluster 3.0 12/01 software.


    phys-schost-2# shutdown -g0 -y -i0
    

    Leave the second node shut down until after the first-installed node is rebooted.

  7. Reboot the first-installed node of the cluster.

    Ensure that the second node is shut down before rebooting the first-installed node. Otherwise, the second node will panic because quorum votes are not yet assigned.


    phys-schost-1# shutdown -g0 -y -i6
    

  8. After the first-installed node has completed booting, boot the second node.


    ok boot
    

  9. After both nodes are rebooted, verify from either node that both nodes are cluster members.


    # scstat -n
    
    -- Cluster Nodes --
                       Node name      Status
                       ---------      ------
      Cluster node:    phys-schost-1  Online
      Cluster node:    phys-schost-2  Online

    See the scstat(1M) man page for more information about displaying cluster status.

  10. Assign a quorum device and finish the upgrade.

    Go to "How to Finish Upgrading Cluster Software".

How to Finish Upgrading Cluster Software

This procedure finishes the scinstall(1M) upgrade process begun in "How to Upgrade Cluster Software Packages". Perform these steps on each node of the cluster.


Caution -

If you must reboot the first-installed node, first shut down the cluster by using the scshutdown(1M) command, then reboot. Do not reboot the first-installed node of the cluster until after the cluster is shut down.


Until cluster install mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in install mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down. To determine which node is the first-installed node, view quorum vote assignments by using the scconf -p command. The only node that has a quorum vote is the first-installed node.

After you complete Step 7, quorum votes are assigned and this reboot restriction is no longer necessary.

  1. Become superuser on each node of the cluster.

  2. Choose a shared disk to be the quorum device.

    You can use any disk shared by both nodes as a quorum device. From either node, use the scdidadm(1M) command to determine the shared disk's device ID (DID) name. You specify this device name in Step 5, in the -q globaldev=DIDname option to scinstall.


    # scdidadm -L
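
    The command displays output similar to the following; the device names here are only examples. A disk that appears under the same DID name from both nodes, such as d2 in this sketch, is shared by both nodes and is a candidate quorum device.


    1    phys-schost-1:/dev/rdsk/c0t0d0   /dev/did/rdsk/d1
    2    phys-schost-1:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2
    2    phys-schost-2:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2
    3    phys-schost-2:/dev/rdsk/c0t0d0   /dev/did/rdsk/d3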
    

  3. If your volume manager is VxVM, reinstall and configure the VxVM software on each node of the cluster, including any patches.

    Otherwise, go to Step 4.

    1. Install VxVM and create the root disk group (rootdg) as for a new installation.

    2. If you have any existing disk groups, import them.

      Perform the procedures in "How to Make an Existing Disk Group Into a Disk Device Group" in the Sun Cluster 3.0 12/01 System Administration Guide.

    3. Create any additional disk groups.

      Perform the procedures in "How to Create a New Disk Group When Encapsulating Disks" or "How to Create a New Disk Group When Initializing Disks" in the Sun Cluster 3.0 12/01 System Administration Guide.

  4. Insert the Sun Cluster 3.0 Agents 12/01 CD-ROM into the CD-ROM drive on a node.

    This step assumes that the volume daemon vold(1M) is running and configured to manage CD-ROM devices.

  5. Finish the cluster software upgrade on that node.


    # scinstall -u finish -q globaldev=DIDname \
    -d /cdrom/scdataservices_3_0_u2 -s srvc[,srvc]
    
    -q globaldev=DIDname

    Specifies the device ID (DID) name of the quorum device

    -d /cdrom/scdataservices_3_0_u2

    Specifies the directory location of the CD-ROM image

    -s srvc

    Specifies the name of the data service to configure


    Note -

    An error message similar to the following might be generated. You can safely ignore it.


    ** Installing Sun Cluster - Highly Available NFS Server **
    Skipping "SUNWscnfs" - already installed


  6. Eject the CD-ROM.

  7. Repeat Step 4 through Step 6 on the other node.

    When completed on both nodes, cluster install mode is disabled and all quorum votes are assigned.

  8. If your volume manager is Solstice DiskSuite, from either node bring pre-existing disk device groups online.


    # scswitch -z -D disk-device-group -h node
    
    -z

    Performs the switch

    -D disk-device-group

    Specifies the name of the disk device group, which for Solstice DiskSuite software is the same as the diskset name

    -h node

    Specifies the name of the cluster node that serves as the primary of the disk device group

  9. From either node, bring pre-existing data service resource groups online.

    At this point, Sun Cluster 2.2 logical hosts are converted to Sun Cluster 3.0 12/01 resource groups, and the names of logical hosts are appended with the suffix -lh. For example, a logical host named lhost-1 is upgraded to a resource group named lhost-1-lh. Use these converted resource group names in the following command.


    # scswitch -z -g resource-group -h node
    
    -g resource-group

    Specifies the name of the resource group to bring online

    You can use the scrgadm -p command to display a list of all resource types and resource groups in the cluster. The scrgadm -pv command displays this list with more detail.

  10. If you are using Sun Management Center to monitor your Sun Cluster configuration, install the Sun Cluster module for Sun Management Center.

    1. Ensure that you are using the most recent version of Sun Management Center.

      See your Sun Management Center documentation for installation or upgrade procedures.

    2. Follow guidelines and procedures in "Installation Requirements for Sun Cluster Monitoring" to install the Sun Cluster module packages.

  11. Verify that all nodes have joined the cluster.

    Go to "How to Verify Cluster Membership".

Example--Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 12/01 Software - Finish Process

The following example shows the finish process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 software. The cluster node names are phys-schost-1 and phys-schost-2, the device group names are dg-schost-1 and dg-schost-2, and the data service resource group names are lh-schost-1 and lh-schost-2.


(Determine the DID of the shared quorum device)
phys-schost-1# scdidadm -L
 
(Finish upgrade on each node)
phys-schost-1# scinstall -u finish -q globaldev=d1 \
-d /cdrom/scdataservices_3_0_u2 -s nfs
phys-schost-2# scinstall -u finish -q globaldev=d1 \
-d /cdrom/scdataservices_3_0_u2 -s nfs
 
(Bring device groups and data service resource groups
on each node online)
phys-schost-1# scswitch -z -D dg-schost-1 -h phys-schost-1
phys-schost-1# scswitch -z -g lh-schost-1 -h phys-schost-1
phys-schost-1# scswitch -z -D dg-schost-2 -h phys-schost-2 
phys-schost-1# scswitch -z -g lh-schost-2 -h phys-schost-2

How to Verify Cluster Membership

Perform this procedure to verify that all nodes have joined the cluster.

  1. Become superuser on any node in the cluster.

  2. Display cluster status.

    Verify that cluster nodes are online and that the quorum device, device groups, and data services resource groups are configured and online.


    # scstat
    

    See the scstat(1M) man page for more information about displaying cluster status.

  3. On each node, display the list of all devices that the system checks, to verify their connectivity to the cluster nodes.

    The output on each node should be the same.


    # scdidadm -L
    

The cluster upgrade is complete. You can now return the cluster to production.