Performing Zone Cluster Administrative Tasks

You can perform other administrative tasks on a zone cluster, such as moving the zone path, preparing a zone cluster to run applications, and cloning a zone cluster. All of these commands must be performed from a node of the global cluster.


Note - You can create a new zone cluster or add a file system or storage device by using the clsetup utility to launch the zone cluster configuration wizard. The zones in a zone cluster are configured when you run clzonecluster install -c to configure the profiles. See Creating and Configuring a Zone Cluster in Oracle Solaris Cluster Software Installation Guide for instructions on using the clsetup utility or the -c config_profile option.
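
For example, a sketch of that install step (the profile path and the zone-cluster name sczone are hypothetical; see the installation guide for the authoritative syntax):

phys-schost# clzonecluster install -c /var/tmp/zc-profile.xml sczone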



Note - The Oracle Solaris Cluster commands that you run only from a node in the global cluster are not valid for use with zone clusters. See the appropriate Oracle Solaris Cluster man page for information about the valid use of a command in zones.


Table 9-3 Other Zone-Cluster Tasks

Task: Move the zone path to a new zone path
Instructions: clzonecluster move -f zonepath zoneclustername

Task: Prepare the zone cluster to run applications
Instructions: clzonecluster ready -n nodename zoneclustername

Task: Clone a zone cluster
Instructions: clzonecluster clone -Z target-zoneclustername [-m copymethod] source-zoneclustername

Halt the source zone cluster before you use the clone subcommand. The target zone cluster must already be configured. A worked example follows this table.

Task: Add a network address to a zone cluster
Instructions: See How to Add a Network Address to a Zone Cluster.

Task: Remove a zone cluster
Instructions: See How to Remove a Zone Cluster.

Task: Remove a file system from a zone cluster
Instructions: See How to Remove a File System From a Zone Cluster.

Task: Remove a storage device from a zone cluster
Instructions: See How to Remove a Storage Device From a Zone Cluster.

Task: Troubleshoot a node uninstallation
Instructions: See Troubleshooting a Node Uninstallation.

Task: Create, set up, and manage the Oracle Solaris Cluster SNMP Event MIB
Instructions: See Creating, Setting Up, and Managing the Oracle Solaris Cluster SNMP Event MIB.
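
For example, the following sketch clones a zone cluster (source-zc and target-zc are hypothetical names; the source must be halted and the target already configured, as noted in the table):

phys-schost# clzonecluster halt source-zc
phys-schost# clzonecluster clone -Z target-zc source-zc
phys-schost# clzonecluster boot target-zc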

How to Add a Network Address to a Zone Cluster

This procedure adds a network address for use by an existing zone cluster. A network address is used to configure logical host or shared IP address resources in the zone cluster. You can run the clsetup utility multiple times to add as many network addresses as you need.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.
  2. Start the clsetup utility.

    phys-schost# clsetup

    The Main Menu is displayed.

  3. Choose the Zone Cluster menu item.
  4. Choose the Add Network Address to a Zone Cluster menu item.
  5. Choose the zone cluster where you want to add the network address.
  6. Choose the property to specify the network address you want to add.
    address=value

    Specifies the network address used to configure logical host or shared IP address resources in the zone cluster. For example, 192.168.100.101.

    The following types of network addresses are supported:

    • A valid IPv4 address, optionally followed by / and a prefix length.

    • A valid IPv6 address, which must be followed by / and a prefix length.

    • A hostname that resolves to an IPv4 address. Hostnames that resolve to IPv6 addresses are not supported.

    See the zonecfg(1M) man page for more information about network addresses.
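
    For example, each of the following would be a valid value for the address property (hypothetical values):

    • 192.168.100.101 (IPv4 address)

    • 192.168.100.101/24 (IPv4 address with an optional prefix length)

    • 2001:db8::1/64 (IPv6 address; the prefix length is required)

    • phys-schost-4 (a hostname that resolves to an IPv4 address)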

  7. To add an additional network address, type a.
  8. Type c to save the configuration change.

    The results of your configuration change are displayed. For example:

     >>> Result of Configuration Change to the Zone Cluster(sczone) <<<
    
        Adding network address to the zone cluster...
    
        The zone cluster is being created with the following configuration
    
            /usr/cluster/bin/clzonecluster configure sczone
            add net
            set address=phys-schost-1
            end
    
        All network address added successfully to sczone.
  9. When finished, exit the clsetup utility.
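
    You can then confirm the new address from the global cluster (a sketch; sczone is the zone-cluster name from the example above):

    phys-schost# clzonecluster show -v sczone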

How to Remove a Zone Cluster

You can delete a specific zone cluster or use a wildcard to remove all zone clusters that are configured on the global cluster. The zone cluster must be configured before you remove it.
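
For example, once the resource groups in each zone cluster have been deleted, a sketch that removes every configured zone cluster at once by substituting the wildcard operand (+) for zoneclustername in the commands that follow:

phys-schost# clzonecluster halt +
phys-schost# clzonecluster uninstall +
phys-schost# clzonecluster delete +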

  1. Assume a role that provides solaris.cluster.modify RBAC authorization on a node of the global cluster.

    Perform all steps in this procedure from a node of the global cluster.

  2. Delete all resource groups and their resources from the zone cluster.
    phys-schost# clresourcegroup delete -F -Z zoneclustername +

    Note - This step is performed from a global-cluster node. To perform this step from a node of the zone cluster instead, log into the zone-cluster node and omit -Z zoneclustername from the command.
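
    For example, from a zone-cluster node (a sketch; zcnode represents a hypothetical zone-cluster node prompt):

    zcnode# clresourcegroup delete -F +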


  3. Halt the zone cluster.
    phys-schost# clzonecluster halt zoneclustername
  4. Uninstall the zone cluster.
    phys-schost# clzonecluster uninstall zoneclustername
  5. Unconfigure the zone cluster.
    phys-schost# clzonecluster delete zoneclustername

Example 9-11 Removing a Zone Cluster From a Global Cluster

phys-schost# clresourcegroup delete -F -Z sczone +
phys-schost# clzonecluster halt sczone
phys-schost# clzonecluster uninstall sczone
phys-schost# clzonecluster delete sczone
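
To confirm the removal, you can list the zone clusters that remain configured (a sketch):

phys-schost# clzonecluster list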

How to Remove a File System From a Zone Cluster

A file system can be exported to a zone cluster using either a direct mount or a loopback mount. Each mount type supports a specific set of file-system types; for those lists, see Adding File Systems to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.

You configure an HAStoragePlus or ScalMountPoint resource to manage the mounting of the file system. For instructions on adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.
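
For reference, such a resource is typically created along these lines (a sketch only, with a hypothetical resource-group name fs-rg and the hasp-rs resource name used in the examples below; see the installation guide for the authoritative procedure):

phys-schost# clresource create -Z sczone -g fs-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/local/ufs-1 hasp-rs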

The phys-schost# prompt reflects a global-cluster prompt. This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    Some steps in this procedure are performed from a node of the global cluster. Other steps are performed from a node of the zone cluster.

  2. Delete the resources related to the file system being removed.
    1. Identify and remove the Oracle Solaris Cluster resource types, such as SUNW.HAStoragePlus and SUNW.ScalMountPoint, that are configured for the zone cluster's file system that you are removing.
      phys-schost# clresource delete -F -Z zoneclustername fs_zone_resources
    2. If applicable, identify and remove the Oracle Solaris Cluster resources that are configured in the global cluster for the file system that you are removing.
      phys-schost# clresource delete -F fs_global_resources

      Use the -F option carefully because it forces the deletion of all the resources you specify, even if you did not disable them first. All the resources you specified are removed from the resource-dependency settings of other resources, which can cause a loss of service in the cluster. Dependent resources that are not deleted can be left in an invalid state or in an error state. For more information, see the clresource(1CL) man page.


    Tip - If the resource group for the removed resource later becomes empty, you can safely delete the resource group.
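
    For example (a sketch with a hypothetical resource-group name):

    phys-schost# clresourcegroup delete fs-rg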


  3. Determine the path to the file-system mount point directory.

    For example, list the file systems that are configured in the zone cluster:

    phys-schost# clzonecluster show -v zoneclustername
  4. Remove the file system from the zone-cluster configuration.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> remove fs dir=filesystemdirectory
    clzc:zoneclustername> commit

    The file system mount point is specified by dir=.

  5. Verify the removal of the file system.
    phys-schost# clzonecluster show -v zoneclustername

Example 9-12 Removing a Highly Available Local File System in a Zone Cluster

This example shows how to remove a file system with a mount-point directory (/local/ufs-1) that is configured in a zone cluster called sczone. The resource is hasp-rs and is of the type HAStoragePlus.

phys-schost# clzonecluster show -v sczone
...
 Resource Name:                           fs
   dir:                                     /local/ufs-1
   special:                                 /dev/md/ds1/dsk/d0
   raw:                                     /dev/md/ds1/rdsk/d0
   type:                                    ufs
   options:                                 [logging]
 ...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/local/ufs-1
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone

Example 9-13 Removing a Highly Available ZFS File System in a Zone Cluster

This example shows how to remove a ZFS file system in a ZFS pool called HAzpool, which is configured in the sczone zone cluster in a resource named hasp-rs of type SUNW.HAStoragePlus.

phys-schost# clzonecluster show -v sczone
...
 Resource Name:                           dataset
   name:                                     HAzpool
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove dataset name=HAzpool
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone

How to Remove a Storage Device From a Zone Cluster

You can remove storage devices, such as Solaris Volume Manager disk sets and DID devices, from a zone cluster by performing this procedure.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    Some steps in this procedure are performed from a node of the global cluster. Other steps can be performed from a node of the zone cluster.

  2. Delete the resources related to the devices being removed.

    Identify and remove the Oracle Solaris Cluster resource types, such as SUNW.HAStoragePlus and SUNW.ScalDeviceGroup, that are configured for the zone cluster's devices that you are removing.

    phys-schost# clresource delete -F -Z zoneclustername dev_zone_resources
  3. Determine the match entry for the devices to be removed.
    phys-schost# clzonecluster show -v zoneclustername
    ...
     Resource Name:       device
        match:              <device_match>
     ...
  4. Remove the devices from the zone-cluster configuration.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> remove device match=<device_match>
    clzc:zoneclustername> commit
    clzc:zoneclustername> end
  5. Reboot the zone cluster.
    phys-schost# clzonecluster reboot zoneclustername
  6. Verify the removal of the devices.
    phys-schost# clzonecluster show -v zoneclustername

Example 9-14 Removing an SVM Disk Set From a Zone Cluster

This example shows how to remove a Solaris Volume Manager disk set called apachedg configured in a zone cluster called sczone. The set number of the apachedg disk set is 3. The devices are used by the zc_rs resource that is configured in the cluster.

phys-schost# clzonecluster show -v sczone
...
  Resource Name:      device
     match:             /dev/md/apachedg/*dsk/*
  Resource Name:      device
     match:             /dev/md/shared/3/*dsk/*
...
phys-schost# clresource delete -F -Z sczone zc_rs

phys-schost# ls -l /dev/md/apachedg
lrwxrwxrwx 1 root root 8 Jul 22 23:11 /dev/md/apachedg -> shared/3
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/md/apachedg/*dsk/*
clzc:sczone> remove device match=/dev/md/shared/3/*dsk/*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone

Example 9-15 Removing a DID Device From a Zone Cluster

This example shows how to remove DID devices d10 and d11, which are configured in a zone cluster called sczone. The devices are used by the zc_rs resource that is configured in the cluster.

phys-schost# clzonecluster show -v sczone
...
 Resource Name:       device
     match:             /dev/did/*dsk/d10*
 Resource Name:       device
    match:              /dev/did/*dsk/d11*
...
phys-schost# clresource delete -F -Z sczone zc_rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/did/*dsk/d10*
clzc:sczone> remove device match=/dev/did/*dsk/d11*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone