Oracle Solaris Cluster System Administration Guide, Oracle Solaris Cluster 4.1
Performing Zone Cluster Administrative Tasks

You can perform other administrative tasks on a zone cluster, such as moving the zone path, preparing a zone cluster to run applications, and cloning a zone cluster. All of these tasks must be performed from a node of the global cluster.
Note - You can create a new zone cluster or add a file system or storage device by using the clsetup utility to launch the zone cluster configuration wizard. The zones in a zone cluster are configured when you run clzonecluster install -c to configure the profiles. See Creating and Configuring a Zone Cluster in Oracle Solaris Cluster Software Installation Guide for instructions on using the clsetup utility or the -c config_profile option.
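The -c config_profile option mentioned in the note takes the path to a system configuration profile. A minimal, hedged illustration only: the zone cluster name sczone and the profile path /net/install/zc-config.xml are hypothetical, and you should confirm the exact syntax in the clzonecluster(1CL) man page for your release.

phys-schost# clzonecluster install -c /net/install/zc-config.xml sczone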
Note - The Oracle Solaris Cluster commands that you run only from a node in the global cluster are not valid for use with zone clusters. See the appropriate Oracle Solaris Cluster man page for information about the valid use of a command in zones.
Table 9-3 Other Zone-Cluster Tasks
How to Add a Network Address to a Zone Cluster

This procedure adds a network address for use by an existing zone cluster. A network address is used to configure logical host or shared IP address resources in the zone cluster. You can run the clsetup utility multiple times to add as many network addresses as you need.
Start the clsetup utility.
phys-schost# clsetup
The Main Menu is displayed.
Follow the menu prompts to choose the zone cluster and to add a network address to it. When prompted, specify the network address to use for configuring logical host or shared IP address resources in the zone cluster, for example, 192.168.100.101.
The following types of network addresses are supported:
A valid IPv4 address, optionally followed by / and a prefix length.
A valid IPv6 address, which must be followed by / and a prefix length.
A hostname that resolves to an IPv4 address. Hostnames that resolve to IPv6 addresses are not supported.
See the zonecfg(1M) man page for more information about network addresses.
The results of your configuration change are displayed. For example:
>>> Result of Configuration Change to the Zone Cluster(sczone) <<<

Adding network address to the zone cluster...

The zone cluster is being created with the following configuration

    /usr/cluster/bin/clzonecluster configure sczone
    add net
    set address=phys-schost-1
    end

All network address added successfully to sczone.
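The result shows the clzonecluster commands that the clsetup utility runs on your behalf. If you prefer not to use the wizard, a minimal sketch of adding an address directly follows; the zone cluster name sczone and the address 192.168.100.101 are only illustrative:

phys-schost# clzonecluster configure sczone
clzc:sczone> add net
clzc:sczone> set address=192.168.100.101
clzc:sczone> end
clzc:sczone> commit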
How to Remove a Zone Cluster

You can delete a specific zone cluster or use a wildcard to remove all zone clusters that are configured on the global cluster. The zone cluster must be configured before you remove it.
Perform all steps in this procedure from a node of the global cluster.
Delete all resource groups and their resources in the zone cluster.

phys-schost# clresourcegroup delete -F -Z zoneclustername +
Note - This step is performed from a global-cluster node. To perform this step from a node of the zone cluster instead, log into the zone-cluster node and omit -Z zonecluster from the command.
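For example, the same deletion run from either location might look like the following sketch; the zcnode# prompt is hypothetical:

From a node of the global cluster:
phys-schost# clresourcegroup delete -F -Z zoneclustername +

From a node of the zone cluster:
zcnode# clresourcegroup delete -F +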
Halt the zone cluster.

phys-schost# clzonecluster halt zoneclustername

Uninstall the zone cluster.

phys-schost# clzonecluster uninstall zoneclustername

Delete the zone cluster configuration.

phys-schost# clzonecluster delete zoneclustername
Example 9-11 Removing a Zone Cluster From a Global Cluster
phys-schost# clresourcegroup delete -F -Z sczone +
phys-schost# clzonecluster halt sczone
phys-schost# clzonecluster uninstall sczone
phys-schost# clzonecluster delete sczone
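To confirm that the removal completed, you can list the zone clusters that remain configured on the global cluster; the removed zone cluster should no longer appear (output omitted):

phys-schost# clzonecluster status
phys-schost# clzonecluster show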
How to Remove a File System From a Zone Cluster

A file system can be exported to a zone cluster using either a direct mount or a loopback mount.
Zone clusters support direct mounts for the following:
UFS local file system
Oracle Solaris ZFS (exported as a data set)
NFS from supported NAS devices
Zone clusters can manage loopback mounts for the following:
UFS local file system
UFS cluster file system
You configure an HAStoragePlus or ScalMountPoint resource to manage the mounting of the file system. For instructions on adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide .
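As context for the removal steps that follow, this is a minimal sketch of how such an HAStoragePlus resource might originally have been created; the resource group name fs-rg is hypothetical, while the resource name hasp-rs and the mount point /local/ufs-1 echo Example 9-12 below:

phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone fs-rg
phys-schost# clresource create -Z sczone -g fs-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/local/ufs-1 hasp-rs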
The phys-schost# prompt reflects a global-cluster prompt. This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Some steps in this procedure are performed from a node of the global cluster. Other steps are performed from a node of the zone cluster.
Remove the file system resources that are configured in the zone cluster.

phys-schost# clresource delete -F -Z zoneclustername fs_zone_resources

If necessary, remove the file system resources that are configured in the global cluster.

phys-schost# clresource delete -F fs_global_resources
Use the -F option carefully because it forces the deletion of all the resources you specify, even if you did not disable them first. All the resources you specified are removed from the resource-dependency settings of other resources, which can cause a loss of service in the cluster. Dependent resources that are not deleted can be left in an invalid state or in an error state. For more information, see the clresource(1CL) man page.
Tip - If the resource group for the removed resource later becomes empty, you can safely delete the resource group.
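Before forcing a deletion, you can review the dependency settings of the resources in the zone cluster to see whether anything depends on the resource that you plan to remove. A minimal check, using the sczone zone cluster name from the examples in this section (output omitted):

phys-schost# clresource show -Z sczone -p Resource_dependencies +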
Remove the file system from the zone cluster configuration. For example:

phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> remove fs dir=filesystemdirectory
clzc:zoneclustername> commit
The file system mount point is specified by dir=.
Verify the removal of the file system.

phys-schost# clzonecluster show -v zoneclustername
Example 9-12 Removing a Highly Available Local File System in a Zone Cluster
This example shows how to remove a file system with a mount-point directory (/local/ufs-1) that is configured in a zone cluster called sczone. The resource is hasp-rs and is of the type HAStoragePlus.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:       fs
   dir:                  /local/ufs-1
   special:              /dev/md/ds1/dsk/d0
   raw:                  /dev/md/ds1/rdsk/d0
   type:                 ufs
   options:              [logging]
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/local/ufs-1
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone
Example 9-13 Removing a Highly Available ZFS File System in a Zone Cluster
This example shows how to remove a ZFS file system in a ZFS pool called HAzpool, which is configured in the sczone zone cluster in resource hasp-rs of type SUNW.HAStoragePlus.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:       dataset
   name:                 HAzpool
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove dataset name=HAzpool
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone
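Removing the dataset from the zone cluster configuration does not destroy the pool or its data; the pool is simply no longer delegated to the zone cluster. If the pool is currently imported on the node, you can confirm from the global zone that it still exists, a minimal sketch using standard ZFS commands:

phys-schost# zpool list HAzpool
phys-schost# zfs list -r HAzpool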
How to Remove a Storage Device From a Zone Cluster

You can remove storage devices, such as Solaris Volume Manager disk sets and DID devices, from a zone cluster. Perform this procedure to remove a storage device from a zone cluster.
Some steps in this procedure are performed from a node of the global cluster. Other steps can be performed from a node of the zone cluster.
Identify and remove the Oracle Solaris Cluster resources, such as those of type SUNW.HAStoragePlus and SUNW.ScalDeviceGroup, that are configured for the zone cluster devices that you are removing.
phys-schost# clresource delete -F -Z zoneclustername dev_zone_resources
Identify the match entries for the devices that are configured in the zone cluster.

phys-schost# clzonecluster show -v zoneclustername
...
 Resource Name:       device
   match:                <device_match>
...
Remove the devices from the zone cluster configuration.

phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> remove device match=<device_match>
clzc:zoneclustername> commit
clzc:zoneclustername> end
Reboot the zone cluster.

phys-schost# clzonecluster reboot zoneclustername
Verify the removal of the devices.

phys-schost# clzonecluster show -v zoneclustername
Example 9-14 Removing an SVM Disk Set From a Zone Cluster
This example shows how to remove a Solaris Volume Manager disk set called apachedg configured in a zone cluster called sczone. The set number of the apachedg disk set is 3. The devices are used by the zc_rs resource that is configured in the cluster.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:       device
   match:                /dev/md/apachedg/*dsk/*
 Resource Name:       device
   match:                /dev/md/shared/3/*dsk/*
...
phys-schost# clresource delete -F -Z sczone zc_rs

phys-schost# ls -l /dev/md/apachedg
lrwxrwxrwx 1 root root 8 Jul 22 23:11 /dev/md/apachedg -> shared/3

phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/md/apachedg/*dsk/*
clzc:sczone> remove device match=/dev/md/shared/3/*dsk/*
clzc:sczone> commit
clzc:sczone> end

phys-schost# clzonecluster reboot sczone

phys-schost# clzonecluster show -v sczone
Example 9-15 Removing a DID Device From a Zone Cluster
This example shows how to remove DID devices d10 and d11, which are configured in a zone cluster called sczone. The devices are used by the zc_rs resource that is configured in the cluster.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:       device
   match:                /dev/did/*dsk/d10*
 Resource Name:       device
   match:                /dev/did/*dsk/d11*
...
phys-schost# clresource delete -F -Z sczone zc_rs

phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/did/*dsk/d10*
clzc:sczone> remove device match=/dev/did/*dsk/d11*
clzc:sczone> commit
clzc:sczone> end

phys-schost# clzonecluster reboot sczone

phys-schost# clzonecluster show -v sczone
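If you need to confirm which physical disks DID names such as d10 and d11 map to before removing them, you can list the DID device mappings from a global-cluster node (output omitted):

phys-schost# cldevice list -v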