Sun Cluster 3.2 Release Notes for Solaris OS

System Administration Guide

This section discusses errors and omissions in the Sun Cluster System Administration Guide for Solaris OS.

Taking a Solaris Volume Manager Metaset From Nodes Booted in Non-Cluster Mode

Procedure: How to Take a Solaris Volume Manager Metaset From Nodes Booted in Non-Cluster Mode

Use this procedure to run an application outside the cluster for testing purposes.

  1. Determine whether the quorum device is used in the Solaris Volume Manager metaset, and whether the quorum device uses scsi2 or scsi3 reservations.


    # clquorum show
    
    1. If the quorum device is in the Solaris Volume Manager metaset, add a new quorum device that is not part of the metaset that you will later take in non-cluster mode.


      # clquorum add did
      
    2. Remove the old quorum device.


      # clquorum remove did
      
    3. If the quorum device uses a scsi2 reservation, scrub the scsi2 reservation from the old quorum device and verify that no scsi2 reservations remain.


      # /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/dids2
      # /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/dids2
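The clquorum show output reports each quorum device's access mode, which is what tells you whether the scsi2 scrub in step 1.3 is needed. As a minimal sketch, assuming saved output in the illustrative layout below (the device d4 and host names are made up, not from a live cluster), the mode can be extracted like this:

```shell
# Hypothetical `clquorum show` output saved to a variable; the quorum
# device name d4 and the host names are assumptions for illustration.
sample='Quorum Device Name:                             d4
  Enabled:                                      yes
  Votes:                                        1
  Global Name:                                  /dev/did/rdsk/d4s2
  Type:                                         scsi
  Access Mode:                                  scsi2
  Hosts (enabled):                              phys-node-1, phys-node-2'

# Print the access mode; a scsi2 device needs the pgre scrub in step 1.3.
printf '%s\n' "$sample" | awk -F': *' '/Access Mode/ {print $2}'
```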
      
  2. Evacuate the node you want to boot in non-cluster mode.


    # clresourcegroup evacuate -n targetnode
    
  3. Take offline any resource groups that contain HAStorage or HAStoragePlus resources, or that contain devices or file systems affected by the metaset that you will later take in non-cluster mode.


    # clresourcegroup offline resourcegroupname
    
  4. Disable all the resources in the resource groups you took offline.


    # clresource disable resourcename
    
  5. Unmanage the resource groups.


    # clresourcegroup unmanage resourcegroupname
    
  6. Take offline the corresponding device group or device groups.


    # cldevicegroup offline devicegroupname
    
  7. Disable the device group or device groups.


    # cldevicegroup disable devicegroupname
    
  8. Boot the evacuated node into non-cluster mode.


    # reboot -- -x
    
  9. Verify that the boot process has completed on the evacuated node before proceeding.

    • Solaris 9

      The login prompt appears only after the boot process has completed, so no action is required.

    • Solaris 10


      # svcs -x
      
  10. Determine whether there are any scsi3 reservations on the disks in the metaset or metasets by running the following command on each disk in the metasets.


    # /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dids2
    
  11. If there are any scsi3 reservations on the disks, scrub them.


    # /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/dids2
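Steps 10 and 11 repeat for every disk in the metaset. As a hedged sketch, one way to review the per-disk work before touching any reservations is to print the command lines first; the DID instances d5, d6, and d7 below are illustrative placeholders for the disks in your metaset:

```shell
# Dry-run sketch: print the inkeys/scrub command line for each disk
# instead of executing it. The DID names d5 d6 d7 are placeholders.
for did in d5 d6 d7; do
    dev="/dev/did/rdsk/${did}s2"
    echo "/usr/cluster/lib/sc/scsi -c inkeys -d $dev"
    echo "/usr/cluster/lib/sc/scsi -c scrub -d $dev"
done
```

Once the list looks right, run the real inkeys command per disk and scrub only those disks whose inkeys output shows registered keys.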
    
  12. Take the metaset on the evacuated node.


    # metaset -s name -C take -f
    
  13. Mount the file system or file systems that contain the devices defined in the metaset.


    # mount device mountpoint
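The device argument is the metadevice path inside the taken set; metadevices of a taken set appear under /dev/md/<setname>/dsk/. As an illustration only (the set name apps, metadevice d0, and mount point are assumptions):

```
# Hypothetical example: set "apps", metadevice d0, mount point /mnt/apps
mount /dev/md/apps/dsk/d0 /mnt/apps
```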
    
  14. Start the application and perform the desired test. After finishing the test, stop the application.

  15. Reboot the node and wait until the boot process has finished.


    # reboot
    
  16. Bring online the device group or device groups.


    # cldevicegroup online -e devicegroupname
    
  17. Start the resource group or resource groups.


    # clresourcegroup online -eM resourcegroupname
    

Using Solaris IP Filtering with Sun Cluster

Sun Cluster supports Solaris IP Filtering, subject to the restrictions noted in the procedure below.

Procedure: How to Set Up Solaris IP Filtering

  1. In the /etc/iu.ap file, modify the public NIC entries to list clhbsndr pfil as the module list.

    The pfil module must be the last module in the list.


    Note –

    If you have the same type of adapter for the private and the public network, your edits to the /etc/iu.ap file will also push pfil onto the private network streams. However, the cluster transport module automatically removes all unwanted modules at stream creation, so pfil is removed from the private network streams.
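As a hedged illustration of the edit (the e1000g driver name is an assumption; substitute your public adapter's driver), an /etc/iu.ap entry before and after might look like:

```
# Before: pfil alone autopushed on the public adapter
e1000g  -1      0       pfil
# After: clhbsndr added, with pfil kept as the last module in the list
e1000g  -1      0       clhbsndr pfil
```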


  2. To ensure that the IP filter works in non-cluster mode, update the /etc/ipf/pfil.ap file.

    Updates to the /etc/ipf/pfil.ap file are slightly different from the changes made to /etc/iu.ap. See the IP Filter documentation for more information.

  3. Reboot all affected nodes.

    You can boot the nodes in a rolling fashion.

  4. Add filter rules to the /etc/ipf/ipf.conf file on all affected nodes. For information on IP filter rule syntax, see ipf(4).

    Keep in mind the following guidelines and requirements when you add filter rules to Sun Cluster nodes.

    • Sun Cluster fails over network addresses from node to node. No special procedure or code is needed at the time of failover.

    • All filtering rules that reference IP addresses of logical hostname and shared address resources must be identical on all cluster nodes.

    • Rules on a standby node reference an IP address that does not currently exist on that node. Such a rule is still part of the IP filter's active rule set and becomes effective when the node receives the address after a failover.

    • All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a rule is interface-specific, the same rule must also exist for all other interfaces in the same IPMP group.
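Given those guidelines, a sketch of /etc/ipf/ipf.conf rules for a failover address follows. The address 192.168.10.50 and the port are illustrative assumptions; identical rules would be placed on every cluster node so that the standby node's copy takes effect after failover:

```
# Allow client traffic to the (hypothetical) logical hostname address.
pass in quick proto tcp from any to 192.168.10.50 port = 80 keep state
# Deny everything else; repeat the same rule set on every node and on
# every NIC in the same IPMP group.
block in all
```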

  5. Enable the ipfilter SMF service.


    # svcadm enable svc:/network/ipfilter:default