You can perform other administrative tasks on a zone cluster, such as moving the zone path, preparing a zone cluster to run applications, and cloning a zone cluster. All of these commands must be performed from the voting node of the global cluster.
The Sun Cluster commands that you run only from the voting node in the global cluster are not valid for use with zone clusters. See the appropriate Sun Cluster man page for information about the valid use of a command in zones.
| Task | Instructions |
|---|---|
| Move the zone path to a new zone path | `clzonecluster move -f zonepath zoneclustername` |
| Prepare the zone cluster to run applications | `clzonecluster ready -n nodename zoneclustername` |
| Clone a zone cluster | `clzonecluster clone -Z source-zoneclustername [-m copymethod] zoneclustername` Halt the source zone cluster before you use the clone subcommand. The target zone cluster must already be configured. |
| Remove a zone cluster | How to Remove a Zone Cluster |
| Remove a file system from a zone cluster | How to Remove a File System From a Zone Cluster |
| Remove a storage device from a zone cluster | How to Remove a Storage Device From a Zone Cluster |
| Troubleshoot a node uninstallation | Troubleshooting a Node Uninstallation |
| Create, set up, and manage the Sun Cluster SNMP Event MIB | Creating, Setting Up, and Managing the Sun Cluster SNMP Event MIB |
You can delete a specific zone cluster or use a wildcard to remove all zone clusters that are configured on the global cluster. The zone cluster must be configured before you remove it.
Become a superuser or assume a role that provides solaris.cluster.modify RBAC authorization on the node of the global cluster. Perform all steps in this procedure from a node of the global cluster.
Delete all resource groups and their resources from the zone cluster.
phys-schost# clresourcegroup delete -F -Z zoneclustername +
This step is performed from a global-cluster node. To perform this step from a node of the zone cluster instead, log into the zone-cluster node and omit -Z zonecluster from the command.
Halt the zone cluster.
phys-schost# clzonecluster halt zoneclustername
Uninstall the zone cluster.
phys-schost# clzonecluster uninstall zoneclustername
Unconfigure the zone cluster.
phys-schost# clzonecluster delete zoneclustername
phys-schost# clresourcegroup delete -F -Z sczone +
phys-schost# clzonecluster halt sczone
phys-schost# clzonecluster uninstall sczone
phys-schost# clzonecluster delete sczone
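The four-step teardown above lends itself to a small wrapper script. The sketch below is hypothetical: the cl* shell functions are stubs that only echo the arguments the real Sun Cluster binaries would receive, so the sequence can be traced on any machine. On a real global-cluster node you would delete the stubs and let the actual commands run, stopping at the first failure.

```shell
#!/bin/sh
# Hypothetical wrapper for the four teardown steps above.
# The cl* functions are stubs that echo what the real Sun Cluster
# commands would run; delete them on an actual global-cluster node.
clresourcegroup() { echo "clresourcegroup $*"; }
clzonecluster()  { echo "clzonecluster $*"; }

remove_zone_cluster() {
    zc=$1
    # Each step runs only if the previous one succeeded.
    clresourcegroup delete -F -Z "$zc" + &&
    clzonecluster halt "$zc"             &&
    clzonecluster uninstall "$zc"        &&
    clzonecluster delete "$zc"
}

remove_zone_cluster sczone
```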
Perform this procedure to remove a file system from a zone cluster. Supported file system types in a zone cluster include UFS, VxFS, standalone QFS, ZFS (exported as a data set), and loopback file systems. For instructions on adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.
The phys-schost# prompt reflects a global-cluster prompt. This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser on a node of the global cluster that hosts the zone cluster. Some steps in this procedure are performed from a node of the global cluster. Other steps are performed from a node of the zone cluster.
Delete the resources related to the file system being removed.
Identify and remove the Sun Cluster resource types, such as HAStoragePlus and SUNW.ScalMountPoint, that are configured for the zone cluster's file system that you are removing.
phys-schost# clresource delete -F -Z zoneclustername fs_zone_resources
If applicable, identify and remove the Sun Cluster resources of type SUNW.qfs that are configured in the global cluster for the file system that you are removing.
phys-schost# clresource delete -F fs_global_resources
Use the -F option carefully because it forces the deletion of all the resources you specify, even if you did not disable them first. All the resources you specified are removed from the resource-dependency settings of other resources, which can cause a loss of service in the cluster. Dependent resources that are not deleted can be left in an invalid state or in an error state. For more information, see the clresource(1CL) man page.
If the resource group for the removed resource later becomes empty, you can safely delete the resource group.
Determine the path to the file-system mount point directory. For example:
phys-schost# clzonecluster configure zoneclustername
Remove the file system from the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> remove fs dir=filesystemdirectory
clzc:zoneclustername> commit
The file system mount point is specified by dir=.
Verify the removal of the file system.
phys-schost# clzonecluster show -v zoneclustername
This example shows how to remove a file system with a mount-point directory (/local/ufs-1) that is configured in a zone cluster called sczone. The resource is hasp-rs and is of the type HAStoragePlus.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:      fs
   dir:              /local/ufs-1
   special:          /dev/md/ds1/dsk/d0
   raw:              /dev/md/ds1/rdsk/d0
   type:             ufs
   options:          [logging]
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/local/ufs-1
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone
This example shows how to remove a ZFS file system in a ZFS pool called HAzpool, which is configured in the sczone zone cluster in the resource hasp-rs of type SUNW.HAStoragePlus.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:      dataset
   name:             HAzpool
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove dataset name=HAzpool
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone
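The same file-system removal can be scripted by feeding the clzc subcommands to clzonecluster configure on standard input (the documented alternative is a -f command file). This is a hedged sketch: both cl* commands are stubbed so it runs anywhere, and the zone-cluster, resource, and directory names are taken from the example above.

```shell
#!/bin/sh
# Sketch: remove a zone-cluster file system in one pass by piping the
# configure subcommands on stdin.  The stubs below echo the command
# line and prefix stdin lines with "  > " so the flow is visible;
# remove the stubs on a real global-cluster node.
clresource()    { echo "clresource $*"; }
clzonecluster() { echo "clzonecluster $*"; sed 's/^/  > /'; }

ZC=sczone; RS=hasp-rs; DIR=/local/ufs-1
clresource delete -F -Z "$ZC" "$RS"
clzonecluster configure "$ZC" <<EOF
remove fs dir=$DIR
commit
EOF
```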
You can remove storage devices, such as SVM disksets and DID devices, from a zone cluster. Perform this procedure to remove a storage device from a zone cluster.
Become superuser on a node of the global cluster that hosts the zone cluster. Some steps in this procedure are performed from a node of the global cluster. Other steps can be performed from a node of the zone cluster.
Delete the resources related to the devices being removed. Identify and remove the Sun Cluster resource types, such as SUNW.HAStoragePlus and SUNW.ScalDeviceGroup, that are configured for the zone cluster's devices that you are removing.
phys-schost# clresource delete -F -Z zoneclustername dev_zone_resources
Determine the match entry for the devices to be removed.
phys-schost# clzonecluster show -v zoneclustername
...
 Resource Name:      device
   match:            <device_match>
...
Remove the devices from the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> remove device match=<devices_match>
clzc:zoneclustername> commit
clzc:zoneclustername> end
Reboot the zone cluster.
phys-schost# clzonecluster reboot zoneclustername
Verify the removal of the devices.
phys-schost# clzonecluster show -v zoneclustername
This example shows how to remove an SVM disk set called apachedg configured in a zone cluster called sczone. The set number of the apachedg disk set is 3. The devices are used by the zc_rs resource that is configured in the cluster.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:      device
   match:            /dev/md/apachedg/*dsk/*
 Resource Name:      device
   match:            /dev/md/shared/3/*dsk/*
...
phys-schost# clresource delete -F -Z sczone zc_rs
phys-schost# ls -l /dev/md/apachedg
lrwxrwxrwx 1 root root 8 Jul 22 23:11 /dev/md/apachedg -> shared/3
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/md/apachedg/*dsk/*
clzc:sczone> remove device match=/dev/md/shared/3/*dsk/*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone
This example shows how to remove DID devices d10 and d11, which are configured in a zone cluster called sczone. The devices are used by the zc_rs resource that is configured in the cluster.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:      device
   match:            /dev/did/*dsk/d10*
 Resource Name:      device
   match:            /dev/did/*dsk/d11*
...
phys-schost# clresource delete -F -Z sczone zc_rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/did/*dsk/d10*
clzc:sczone> remove device match=/dev/did/*dsk/d11*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone
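The shared/N pattern in the disk-set example comes from the /dev/md symlink shown by ls -l. A small sketch can derive both match patterns from the set name; here a sample layout is built under /tmp so the sketch runs anywhere, and on a real node you would point MDROOT at /dev/md instead.

```shell
#!/bin/sh
# Sketch: derive the two "match" patterns for an SVM disk set from the
# /dev/md symlink (set name -> shared/<set-number>).  The /tmp layout
# below only simulates the example's symlink for illustration.
MDROOT=/tmp/md.sample
mkdir -p "$MDROOT/shared/3"
rm -f "$MDROOT/apachedg"
ln -s shared/3 "$MDROOT/apachedg"

setname=apachedg
# readlink yields "shared/3"; basename extracts the set number.
setnum=$(basename "$(readlink "$MDROOT/$setname")")
echo "/dev/md/$setname/*dsk/*"
echo "/dev/md/shared/$setnum/*dsk/*"
```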
Perform this procedure to uninstall Sun Cluster software from a global-cluster node before you disconnect it from a fully established cluster configuration. You can use this procedure to uninstall software from the last remaining node of a cluster.
To uninstall Sun Cluster software from a node that has not yet joined the cluster or is still in installation mode, do not perform this procedure. Instead, go to “How to Uninstall Sun Cluster Software to Correct Installation Problems” in the Sun Cluster Software Installation Guide for Solaris OS.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Ensure that you have correctly completed all prerequisite tasks in the task map to remove a cluster node.
See Table 8–2.
Ensure that you have removed the node from the cluster configuration by using clnode remove before you continue with this procedure.
Become superuser on an active member of the global cluster other than the global-cluster node that you are uninstalling. Perform this procedure from a global-cluster node.
From the active cluster member, add the node that you intend to uninstall to the cluster's node authentication list.
phys-schost# claccess allow -h hostname

hostname
Specifies the name of the node to be added to the node's authentication list.
Alternately, you can use the clsetup(1CL) utility. See How to Add a Node to the Authorized Node List for procedures.
Become superuser on the node to uninstall.
If you have a zone cluster, uninstall it.
phys-schost# clzonecluster uninstall -F zoneclustername

For specific steps, see How to Remove a Zone Cluster.
If your node has a dedicated partition for the global devices namespace, reboot the global-cluster node into noncluster mode.
On a SPARC based system, run the following command.
# shutdown -g0 -y -i0
ok boot -x
On an x86 based system, run the following commands.
# shutdown -g0 -y -i0
...
                     <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/sd@0,0:a
Boot args:

Type    b [file-name] [boot-flags] <ENTER>   to boot with options
or      i <ENTER>                            to enter boot interpreter
or      <ENTER>                              to boot with defaults

                     <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter: b -x
In the /etc/vfstab file, remove all globally mounted file-system entries except the /global/.devices global mounts.
If you intend to reinstall Sun Cluster software on this node, remove the Sun Cluster entry from the Sun Java Enterprise System (Java ES) product registry.
If the Java ES product registry contains a record that Sun Cluster software was installed, the Java ES installer shows the Sun Cluster component grayed out and does not permit reinstallation.
Start the Java ES uninstaller.
Run the following command, where ver is the version of the Java ES distribution from which you installed Sun Cluster software.
# /var/sadm/prod/SUNWentsysver/uninstall
Follow the prompts to select Sun Cluster to uninstall.
For more information about using the uninstall command, see Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.
If you do not intend to reinstall the Sun Cluster software on this cluster, disconnect the transport cables and the transport switch, if any, from the other cluster devices.
If the uninstalled node is connected to a storage device that uses a parallel SCSI interface, install a SCSI terminator to the open SCSI connector of the storage device after you disconnect the transport cables.
If the uninstalled node is connected to a storage device that uses Fibre Channel interfaces, no termination is necessary.
Follow the documentation that shipped with your host adapter and server for disconnection procedures.
If you use a loopback file interface (lofi) device, the Java ES uninstaller automatically removes the lofi file, which is called /.globaldevices. For more information about migrating a global-devices namespace to a lofi, see Migrating the Global-Devices Namespace.
This section describes error messages that you might receive when you run the scinstall -r command and the corrective actions to take.
The following error messages indicate that the global-cluster node you removed still has cluster file systems referenced in its vfstab file.
Verifying that no unexpected global mounts remain in /etc/vfstab ... failed
scinstall:  global-mount1 is still configured as a global mount.
scinstall:  global-mount1 is still configured as a global mount.
scinstall:  /global/dg1 is still configured as a global mount.
scinstall:  It is not safe to uninstall with these outstanding errors.
scinstall:  Refer to the documentation for complete uninstall instructions.
scinstall:  Uninstall failed.
To correct this error, return to How to Uninstall Sun Cluster Software From a Cluster Node and repeat the procedure. Ensure that you successfully complete Step 7 in the procedure before you rerun the clnode remove command.
The following error messages indicate that the node you removed is still listed with a device group.
Verifying that no device services still reference this node ... failed
scinstall:  This node is still configured to host device service "service".
scinstall:  This node is still configured to host device service "service2".
scinstall:  This node is still configured to host device service "service3".
scinstall:  This node is still configured to host device service "dg1".
scinstall:  It is not safe to uninstall with these outstanding errors.
scinstall:  Refer to the documentation for complete uninstall instructions.
scinstall:  Uninstall failed.
This section describes how to create, set up, and manage the Simple Network Management Protocol (SNMP) event Management Information Base (MIB). This section also describes how to enable, disable, and change the Sun Cluster SNMP event MIB.
The Sun Cluster software currently supports one MIB, the event MIB. The SNMP manager software traps cluster events in real time. When enabled, the SNMP manager automatically sends trap notifications to all hosts that are defined by the clsnmphost command. The MIB maintains a read-only table of the most current 50 events. Because clusters generate numerous notifications, only events with a severity of warning or greater are sent as trap notifications. This information does not persist across reboots.
The SNMP event MIB is defined in the sun-cluster-event-mib.mib file and is located in the /usr/cluster/lib/mib directory. You can use this definition to interpret the SNMP trap information.
The default port number for the event SNMP module is 11161, and the default port for the SNMP traps is 11162. These port numbers can be changed by modifying the Common Agent Container property file, which is /etc/cacao/instances/default/private/cacao.properties.
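A sketch of the port change, applied to a sample copy of the file. The property keys shown are hypothetical placeholders, not confirmed key names; verify the exact keys in your cacao.properties before editing it, and note that the common agent container typically must be restarted for a port change to take effect.

```shell
#!/bin/sh
# Sketch: raise both default SNMP ports by 1000 in a sample copy of
# cacao.properties.  The key names are placeholders for illustration;
# check the real file for the actual keys.
PROPS=/tmp/cacao.properties.sample
cat > "$PROPS" <<'EOF'
snmp.adaptor.port=11161
snmp.adaptor.trap.port=11162
EOF
# Rewrite the default port values into a new file for review.
sed -e 's/=11161$/=12161/' -e 's/=11162$/=12162/' "$PROPS" > "$PROPS.new"
cat "$PROPS.new"
```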
Creating, setting up, and managing a Sun Cluster SNMP event MIB can involve the following tasks.
Table 9–3 Task Map: Creating, Setting Up, and Managing the Sun Cluster SNMP Event MIB
| Task | Instructions |
|---|---|
| Enable an SNMP event MIB | How to Enable an SNMP Event MIB |
| Disable an SNMP event MIB | How to Disable an SNMP Event MIB |
| Change an SNMP event MIB | How to Change an SNMP Event MIB |
| Add an SNMP host to the list of hosts that will receive trap notifications for the MIBs | How to Enable an SNMP Host to Receive SNMP Traps on a Node |
| Remove an SNMP host | How to Disable an SNMP Host From Receiving SNMP Traps on a Node |
| Add an SNMP user | How to Add an SNMP User on a Node |
| Remove an SNMP user | How to Remove an SNMP User From a Node |
This procedure shows how to enable an SNMP event MIB.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Enable the SNMP event MIB.
phys-schost-1# clsnmpmib enable [-n node] MIB

-n node
Specifies the node on which the event MIB that you want to enable is located. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

MIB
Specifies the name of the MIB that you want to enable. In this case, the MIB name must be event.
This procedure shows how to disable an SNMP event MIB.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Disable the SNMP event MIB.
phys-schost-1# clsnmpmib disable -n node MIB

-n node
Specifies the node on which the event MIB that you want to disable is located. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

MIB
Specifies the type of the MIB that you want to disable. In this case, you must specify event.
This procedure shows how to change the protocol for an SNMP event MIB.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Change the protocol of the SNMP event MIB.
phys-schost-1# clsnmpmib set -n node -p version=value MIB

-n node
Specifies the node on which the event MIB that you want to change is located. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

-p version=value
Specifies the version of SNMP protocol to use with the MIBs. You specify value as follows:

version=SNMPv2
version=snmpv2
version=2
version=SNMPv3
version=snmpv3
version=3

MIB
Specifies the name of the MIB or MIBs to which to apply the subcommand. In this case, you must specify event.
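When the protocol must change on every node, the command can be run in a loop. In this sketch clsnmpmib is stubbed with echo so it runs anywhere, and the two node names are hypothetical; on a real cluster node you would drop the stub and substitute your own node list.

```shell
#!/bin/sh
# Sketch: set the event MIB to SNMPv3 on each node of a hypothetical
# two-node list.  The stub echoes the command line instead of calling
# the real clsnmpmib binary; remove it on a real cluster node.
clsnmpmib() { echo "clsnmpmib $*"; }

for node in phys-schost-1 phys-schost-2; do
    clsnmpmib set -n "$node" -p version=SNMPv3 event
done
```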
This procedure shows how to add an SNMP host on a node to the list of hosts that will receive trap notifications for the MIBs.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Add the host to the SNMP host list of a community on another node.
phys-schost-1# clsnmphost add -c SNMPcommunity [-n node] host

-c SNMPcommunity
Specifies the SNMP community name that is used in conjunction with the hostname.
You must specify the SNMP community name SNMPcommunity when you add a host to a community other than public. If you use the add subcommand without the -c option, the subcommand uses public as the default community name.
If the specified community name does not exist, this command creates the community.

-n node
Specifies the name of the node of the SNMP host that is provided access to the SNMP MIBs in the cluster. You can specify a node name or a node ID. If you do not specify this option, the current node is used by default.

host
Specifies the name, IP address, or IPv6 address of a host that is provided access to the SNMP MIBs in the cluster.
This procedure shows how to remove an SNMP host on a node from the list of hosts that will receive trap notifications for the MIBs.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Remove the host from the SNMP host list of a community on the specified node.
phys-schost-1# clsnmphost remove -c SNMPcommunity -n node host

remove
Removes the specified SNMP host from the specified node.

-c SNMPcommunity
Specifies the name of the SNMP community from which the SNMP host is removed.

-n node
Specifies the name of the node on which the SNMP host is removed from the configuration. You can specify a node name or a node ID. If you do not specify this option, the current node is used by default.

host
Specifies the name, IP address, or IPv6 address of the host that is removed from the configuration.
To remove all hosts in the specified SNMP community, use a plus sign (+) for host with the -c option. To remove all hosts, use the plus sign (+) for host.
This procedure shows how to add an SNMP user to the SNMP user configuration on a node.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Add the SNMP user.
phys-schost-1# clsnmpuser create -n node -a authentication \
              -f password user

-n node
Specifies the node on which the SNMP user is added. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

-a authentication
Specifies the authentication protocol that is used to authorize the user. The value of the authentication protocol can be SHA or MD5.

-f password
Specifies a file that contains the SNMP user passwords. If you do not specify this option when you create a new user, the command prompts for a password. This option is valid only with the add subcommand.
You must specify user passwords on separate lines in the following format:
user:password
Passwords cannot contain the following characters or a space:
; (semicolon)
: (colon)
\ (backslash)
\n (newline)
user
Specifies the name of the SNMP user that you want to add.
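Because the password file format is strict, it can be worth checking the file before passing it to -f. The awk sketch below flags any line that is not exactly one user:password pair or whose password contains a semicolon, backslash, or space (a stray colon already shows up as an extra field, and a newline cannot occur inside a line); the sample file and its entries are illustrative.

```shell
#!/bin/sh
# Sketch: validate a clsnmpuser password file.  Flags a line when it
# does not split into exactly two colon-separated fields, or when the
# password field contains a semicolon, backslash, or space.
PWFILE=/tmp/snmpusers.sample
cat > "$PWFILE" <<'EOF'
alice:s3cretWord
bob:bad;pass
EOF
awk -F: 'NF != 2 || $2 ~ /[;\\ ]/ { print FILENAME ":" NR ": invalid entry" }' "$PWFILE"
```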
This procedure shows how to remove an SNMP user from the SNMP user configuration on a node.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Remove the SNMP user.
phys-schost-1# clsnmpuser delete -n node user

-n node
Specifies the node from which the SNMP user is removed. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

user
Specifies the name of the SNMP user that you want to remove.