Oracle Solaris Cluster System Administration Guide
You can perform other administrative tasks on a zone cluster, such as moving the zone path, preparing a zone cluster to run applications, and cloning a zone cluster. All of these tasks must be performed from the voting node of the global cluster.
Note - The Oracle Solaris Cluster commands that you run only from the voting node in the global cluster are not valid for use with zone clusters. See the appropriate Oracle Solaris Cluster man page for information about the valid use of a command in zones.
Table 9-3 Other Zone-Cluster Tasks
You can delete a specific zone cluster or use a wildcard to remove all zone clusters that are configured on the global cluster. A zone cluster must be configured before you can remove it.
Delete all the resource groups and their resources that are configured in the zone cluster.
phys-schost# clresourcegroup delete -F -Z zoneclustername +
Note - This step is performed from a global-cluster node. To perform this step from a node of the zone cluster instead, log into the zone-cluster node and omit -Z zonecluster from the command.
Halt the zone cluster.
phys-schost# clzonecluster halt zoneclustername
Uninstall the zone cluster.
phys-schost# clzonecluster uninstall zoneclustername
Remove the zone-cluster configuration.
phys-schost# clzonecluster delete zoneclustername
Example 9-11 Removing a Zone Cluster From a Global Cluster
phys-schost# clresourcegroup delete -F -Z sczone +
phys-schost# clzonecluster halt sczone
phys-schost# clzonecluster uninstall sczone
phys-schost# clzonecluster delete sczone
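The four steps above can be wrapped in a small function. This is a sketch, not part of the guide: the run helper and its DRY_RUN flag are illustrative conveniences that print each command instead of executing it, which is useful for reviewing the sequence before running it on a live cluster.

```shell
# Sketch of the zone-cluster removal sequence from this section.
# DRY_RUN=1 prints each command instead of executing it (illustrative only).
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

remove_zonecluster() {
    zc="$1"
    run clresourcegroup delete -F -Z "$zc" +    # delete all resource groups
    run clzonecluster halt "$zc"                # halt the zone cluster
    run clzonecluster uninstall "$zc"           # uninstall the zone cluster
    run clzonecluster delete "$zc"              # remove the configuration
}
```

For example, running remove_zonecluster sczone with DRY_RUN=1 prints the four commands for the sczone zone cluster without executing them.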
A file system can be exported to a zone cluster using either a direct mount or a loopback mount.
Zone clusters support direct mounts for the following:
UFS local file system
VxFS local file system
QFS standalone file system
QFS shared file system, only when used to support Oracle RAC
ZFS (exported as a data set)
NFS from supported NAS devices
Zone clusters can manage loopback mounts for the following:
UFS local file system
VxFS local file system
QFS standalone file system
QFS shared file system, only when used to support Oracle RAC
UFS cluster file system
VxFS cluster file system
You configure an HAStoragePlus or ScalMountPoint resource to manage the mounting of the file system. For instructions on adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.
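For orientation, the reverse operation (adding a file system that HAStoragePlus will manage) is done in a clzonecluster configure session. The following is a sketch only; the mount point and device paths are placeholders, and the full procedure is in the referenced installation guide.

```
phys-schost# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/local/ufs-1
clzc:sczone:fs> set special=/dev/md/ds1/dsk/d0
clzc:sczone:fs> set raw=/dev/md/ds1/rdsk/d0
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> commit
clzc:sczone> exit
```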
The phys-schost# prompt reflects a global-cluster prompt. This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Delete the file-system resources that are configured in the zone cluster.
phys-schost# clresource delete -F -Z zoneclustername fs_zone_resources
If applicable, delete the corresponding file-system resources that are configured in the global cluster.
phys-schost# clresource delete -F fs_global_resources
Use the -F option carefully because it forces the deletion of all the resources you specify, even if you did not disable them first. All the resources you specified are removed from the resource-dependency settings of other resources, which can cause a loss of service in the cluster. Dependent resources that are not deleted can be left in an invalid state or in an error state. For more information, see the clresource(1CL) man page.
Tip - If the resource group for the removed resource later becomes empty, you can safely delete the resource group.
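Because -F forces deletion even of resources that are still enabled, a more conservative sequence is to disable a resource first and then delete it without forcing. A sketch, using the placeholder resource name hasp-rs:

```
phys-schost# clresource disable -Z zoneclustername hasp-rs
phys-schost# clresource delete -Z zoneclustername hasp-rs
```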
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> remove fs dir=filesystemdirectory
clzc:zoneclustername> commit
The file system mount point is specified by dir=.
phys-schost# clzonecluster show -v zoneclustername
Example 9-12 Removing a Highly Available File System in a Zone Cluster
This example shows how to remove a file system with a mount-point directory (/local/ufs-1) that is configured in a zone cluster called sczone. The resource is hasp-rs and is of type HAStoragePlus.
phys-schost# clzonecluster show -v sczone
...
 Resource Name: fs
 dir: /local/ufs-1
 special: /dev/md/ds1/dsk/d0
 raw: /dev/md/ds1/rdsk/d0
 type: ufs
 options: [logging]
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/local/ufs-1
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone
Example 9-13 Removing a Highly Available ZFS File System in a Zone Cluster
This example shows how to remove a ZFS file system in a ZFS pool called HAzpool, which is configured in the sczone zone cluster and managed by the hasp-rs resource of type SUNW.HAStoragePlus.
phys-schost# clzonecluster show -v sczone
...
 Resource Name: dataset
 name: HAzpool
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove dataset name=HAzpool
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone
You can remove storage devices, such as Solaris Volume Manager (SVM) disk sets and DID devices, from a zone cluster. Perform the following procedure to remove a storage device from a zone cluster.
phys-schost# clresource delete -F -Z zoneclustername dev_zone_resources
phys-schost# clzonecluster show -v zoneclustername
...
 Resource Name: device
 match: <device_match>
...
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> remove device match=<device_match>
clzc:zoneclustername> commit
clzc:zoneclustername> end
phys-schost# clzonecluster reboot zoneclustername
phys-schost# clzonecluster show -v zoneclustername
Example 9-14 Removing an SVM Disk Set From a Zone Cluster
This example shows how to remove an SVM disk set called apachedg configured in a zone cluster called sczone. The set number of the apachedg disk set is 3. The devices are used by the zc_rs resource that is configured in the cluster.
phys-schost# clzonecluster show -v sczone
...
 Resource Name: device
 match: /dev/md/apachedg/*dsk/*
 Resource Name: device
 match: /dev/md/shared/3/*dsk/*
...
phys-schost# clresource delete -F -Z sczone zc_rs
phys-schost# ls -l /dev/md/apachedg
lrwxrwxrwx 1 root root 8 Jul 22 23:11 /dev/md/apachedg -> shared/3
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/md/apachedg/*dsk/*
clzc:sczone> remove device match=/dev/md/shared/3/*dsk/*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone
Example 9-15 Removing a DID Device From a Zone Cluster
This example shows how to remove DID devices d10 and d11, which are configured in a zone cluster called sczone. The devices are used by the zc_rs resource that is configured in the cluster.
phys-schost# clzonecluster show -v sczone
...
 Resource Name: device
 match: /dev/did/*dsk/d10*
 Resource Name: device
 match: /dev/did/*dsk/d11*
...
phys-schost# clresource delete -F -Z sczone zc_rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/did/*dsk/d10*
clzc:sczone> remove device match=/dev/did/*dsk/d11*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone
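After the final clzonecluster show, you can script the check that no device entries remain in the zone-cluster configuration. A minimal sketch: check_no_devices is a hypothetical helper, not part of the product; it assumes clzonecluster is on the PATH and simply greps the show output.

```shell
# Sketch: report whether a zone cluster still has "device" resource entries.
# check_no_devices is a hypothetical helper for illustration only.
check_no_devices() {
    zc="$1"
    if clzonecluster show -v "$zc" | grep -q "Resource Name:[ ]*device"; then
        echo "device entries still configured in $zc"
        return 1
    fi
    echo "no device entries in $zc"
}
```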