This chapter provides the software procedures for administering the Sun Cluster interconnects and public networks.
Administering the cluster interconnects and public networks consists of both hardware and software procedures. Typically, you configure the cluster interconnects and public networks, including Internet Protocol (IP) Network Multipathing groups, when you initially install and configure the cluster. If you later need to alter a cluster interconnect network configuration, you can use the software procedures in this chapter. For information about configuring IP Network Multipathing groups in a cluster, see the section Administering the Public Network.
This chapter provides information and procedures for the following topics.
For a high-level description of the related procedures in this chapter, see Table 7–1 and Table 7–3.
Refer to the Sun Cluster Concepts Guide for Solaris OS document for background and overview information about the cluster interconnects and public networks.
This section provides the procedures for reconfiguring cluster interconnects, such as cluster transport adapters and cluster transport cables. These procedures require that you install Sun Cluster software.
Most of the time, you can use the clsetup utility to administer the cluster transport for the cluster interconnects. See the clsetup(1CL) man page for more information. If you are running on the Solaris 10 OS, all cluster interconnect commands must be run in the global-cluster voting node.
For cluster software installation procedures, see the Sun Cluster Software Installation Guide for Solaris OS. For procedures about servicing cluster hardware components, see the Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
You can usually choose to use the default port name, where appropriate, during cluster interconnect procedures. The default port name is the same as the internal node ID number of the node that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI.
Task | Instructions
---|---
Administer the cluster transport by using clsetup(1CL) | 
Check the status of the cluster interconnect by using clinterconnect status | 
Add a cluster transport cable, transport adapter, or switch by using clsetup | How to Add Cluster Transport Cables, Transport Adapters, or Transport Switches
Remove a cluster transport cable, transport adapter, or transport switch by using clsetup | How to Remove Cluster Transport Cables, Transport Adapters, and Transport Switches
Enable a cluster transport cable by using clsetup | 
Disable a cluster transport cable by using clsetup | 
Determine a transport adapter's instance number | 
Change the IP address or the address range of an existing cluster | How to Change the Private Network Address or Address Range of an Existing Cluster
You must consider a few issues when completing dynamic reconfiguration (DR) operations on cluster interconnects.
All of the requirements, procedures, and restrictions that are documented for the Solaris DR feature also apply to Sun Cluster DR support (except for the operating system quiescence operation). Therefore, review the documentation for the Solaris DR feature before using the DR feature with Sun Cluster software. You should review in particular the issues that affect non-network IO devices during a DR detach operation.
The Sun Cluster software rejects DR remove-board operations performed on active private interconnect interfaces.
You must completely remove an active adapter from the cluster in order to perform DR on an active cluster interconnect. Use the clsetup menu or the appropriate clinterconnect commands.
Sun Cluster software requires that each cluster node has at least one functioning path to every other cluster node. Do not disable a private interconnect interface that supports the last path to any cluster node.
Complete the following procedures in the order indicated when performing DR operations on cluster interconnects.
Table 7–2 Task Map: Dynamic Reconfiguration with Cluster Interconnects

Task | Instructions
---|---
1. Disable and remove the interface from the active interconnect | 
2. Perform the DR operation on the interconnect interface | Sun Enterprise 10000 DR Configuration Guide and the Sun Enterprise 10000 Dynamic Reconfiguration Reference Manual (from the Solaris 9 on Sun Hardware and Solaris 10 on Sun Hardware collections)
You can also accomplish this procedure by using the Sun Cluster Manager GUI. See the Sun Cluster Manager online help for more information.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
You do not need to be logged in as superuser to perform this procedure.
Check the status of the cluster interconnect.
% clinterconnect status
Refer to the following table for common status messages.
Status Message | Description and Possible Action
---|---
Path online | The path is currently functioning correctly. No action is necessary.
Path waiting | The path is currently being initialized. No action is necessary.
Faulted | The path is not functioning. This can be a transient state when paths transition between the waiting and online states. If the message persists when clinterconnect status is rerun, take corrective action.
The following example shows the status of a functioning cluster interconnect.
% clinterconnect status

-- Cluster Transport Paths --

                   Endpoint               Endpoint               Status
                   --------               --------               ------
  Transport path:  phys-schost-1:qfe1     phys-schost-2:qfe1     Path online
  Transport path:  phys-schost-1:qfe0     phys-schost-2:qfe0     Path online
  Transport path:  phys-schost-1:qfe1     phys-schost-3:qfe1     Path online
  Transport path:  phys-schost-1:qfe0     phys-schost-3:qfe0     Path online
  Transport path:  phys-schost-2:qfe1     phys-schost-3:qfe1     Path online
  Transport path:  phys-schost-2:qfe0     phys-schost-3:qfe0     Path online
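Because the status output is line-oriented, it can be screened from a script. The following is a minimal sketch, not part of the product: the clinterconnect output is simulated with a heredoc, and any transport path whose status is not Path online is counted.

```shell
# Sketch: count transport paths that do not report "Path online".
# The status text is simulated here; on a live cluster it would come
# from `clinterconnect status` instead of the heredoc.
status_output=$(cat <<'EOF'
Transport path:  phys-schost-1:qfe1   phys-schost-2:qfe1   Path online
Transport path:  phys-schost-1:qfe0   phys-schost-2:qfe0   faulted
EOF
)
# Filter to the path lines, then count those NOT ending in "Path online".
bad=$(printf '%s\n' "$status_output" \
  | grep '^Transport path:' | grep -cv 'Path online$')
echo "paths needing attention: $bad"
```

A count above zero would be the cue to rerun the status check and, if the fault persists, take the corrective action described in the table above.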
For information about the requirements for the cluster private transport, see Interconnect Requirements and Restrictions in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
You can also accomplish this procedure by using the Sun Cluster Manager GUI. See the Sun Cluster Manager online help for more information.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Ensure that the physical cluster transport cables are installed.
For the procedure on installing a cluster transport cable, see the Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
Become superuser on any node in the cluster.
Start the clsetup utility.
# clsetup
The Main Menu is displayed.
Type the number that corresponds to the option for displaying the Cluster Interconnect Menu.
If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter connections (the port name) during the “Add” portion of this procedure. Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch to which the node is physically cabled.
Type the number that corresponds to the option for adding a transport cable.
Follow the instructions and type the requested information.
Type the number that corresponds to the option for adding the transport adapter to a node.
Follow the instructions and type the requested information.
If you plan to use any of the following adapters for the cluster interconnect, add the relevant entry to the /etc/system file on each cluster node. The entry becomes effective after the next system boot.
Adapter | Entry
---|---
ce | set ce:ce_taskq_disable=1
ipge | set ipge:ipge_taskq_disable=1
ixge | set ixge:ixge_taskq_disable=1
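Each row in the table above follows the same set driver:driver_taskq_disable=1 pattern. As a hedged sketch (writing to a scratch file rather than the real /etc/system, and assuming the driver name is one of ce, ipge, or ixge), the entry could be generated like this:

```shell
# Sketch: generate the /etc/system entry for a given adapter driver.
# Writes to a scratch file for illustration only; editing the real
# /etc/system requires root, and the entry takes effect after a reboot.
driver=ce
entry="set ${driver}:${driver}_taskq_disable=1"
scratch=$(mktemp)
printf '%s\n' "$entry" >> "$scratch"
echo "appended to scratch copy: $entry"
rm -f "$scratch"
```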
Type the number that corresponds to the option for adding the transport switch.
Follow the instructions and type the requested information.
Verify that the cluster transport cable, transport adapter, or transport switch is added.
# clinterconnect show node:adapter,adapternode
# clinterconnect show node:adapter
# clinterconnect show node:switch
The following example shows how to add a transport cable, transport adapter, or transport switch to a node by using the clsetup utility.
[Ensure that the physical cable is installed.]
[Start the clsetup utility:]
# clsetup
[Select Cluster interconnect.]
[Select either Add a transport cable, Add a transport adapter to a node, or Add a transport switch.]
[Answer the questions when prompted.]
[You Will Need:]   Information:       Example:
                   node names         phys-schost-1
                   adapter names      qfe2
                   switch names       hub2
                   transport type     dlpi
[Verify that the clinterconnect command completed successfully:]
Command completed successfully.
[Quit the clsetup Cluster Interconnect Menu and Main Menu.]
[Verify that the cable, adapter, and switch are added:]
# clinterconnect show phys-schost-1:qfe2,hub2
=== Transport Cables ===
Transport Cable:                 phys-schost-1:qfe2@0,hub2
  Endpoint1:                     phys-schost-2:qfe0@0
  Endpoint2:                     ethernet-1@2
  State:                         Enabled

# clinterconnect show phys-schost-1:qfe2
=== Transport Adapters for qfe2 ===
Transport Adapter:                            qfe2
  Adapter State:                              Enabled
  Adapter Transport Type:                     dlpi
  Adapter Property (device_name):             ce
  Adapter Property (device_instance):         0
  Adapter Property (lazy_free):               1
  Adapter Property (dlpi_heartbeat_timeout):  10000
  Adapter Property (dlpi_heartbeat_quantum):  1000
  Adapter Property (nw_bandwidth):            80
  Adapter Property (bandwidth):               70
  Adapter Property (ip_address):              172.16.0.129
  Adapter Property (netmask):                 255.255.255.128
  Adapter Port Names:                         0
  Adapter Port State (0):                     Enabled

# clinterconnect show phys-schost-1:hub2
=== Transport Switches ===
Transport Switch:                hub2
  Switch State:                  Enabled
  Switch Type:                   switch
  Switch Port Names:             1 2
  Switch Port State (1):         Enabled
  Switch Port State (2):         Enabled
To check the interconnect status of your cluster transport cable see How to Check the Status of the Cluster Interconnect.
You can also accomplish this procedure by using the Sun Cluster Manager GUI. See the Sun Cluster Manager online help for more information.
Use the following procedure to remove cluster transport cables, transport adapters, and transport switches from a node configuration. When a cable is disabled, the two endpoints of the cable remain configured. An adapter cannot be removed if it is still in use as an endpoint on a transport cable.
Each cluster node needs at least one functioning transport path to every other node in the cluster. No two nodes should be isolated from one another. Always verify the status of a node's cluster interconnect before disabling a cable. Only disable a cable connection after you have verified that it is redundant. That is, ensure that another connection is available. Disabling a node's last remaining working cable takes the node out of cluster membership.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser on any node in the cluster.
Check the status of the remaining cluster transport path.
# clinterconnect status
If you receive an error such as “path faulted” while attempting to remove one node of a two-node cluster, investigate the problem before continuing with this procedure. Such a problem could indicate that a node path is unavailable. Removing the remaining operational path takes the node out of cluster membership and could result in a cluster reconfiguration.
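One way to honor this caution from a script is to count the online paths to the affected peer before disabling anything. A rough sketch follows; the status lines are simulated with a heredoc, and the peer name and line format are assumptions based on the sample output shown in How to Check the Status of the Cluster Interconnect.

```shell
# Sketch: refuse to disable a cable when only one online path to the
# peer node remains. Status lines are simulated; on a live cluster
# they would come from `clinterconnect status`.
peer=phys-schost-2
status_output=$(cat <<'EOF'
Transport path:  phys-schost-1:qfe1   phys-schost-2:qfe1   Path online
Transport path:  phys-schost-1:qfe0   phys-schost-2:qfe0   Path online
EOF
)
# Count online paths that involve the peer node.
online=$(printf '%s\n' "$status_output" \
  | grep "$peer" | grep -c 'Path online$')
if [ "$online" -le 1 ]; then
  echo "refusing: last online path to $peer"
else
  echo "redundant paths to $peer: $online"
fi
```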
Start the clsetup utility.
# clsetup
The Main Menu is displayed.
Type the number that corresponds to the option for accessing the Cluster Interconnect menu.
Type the number that corresponds to the option for disabling the transport cable.
Follow the instructions and type the requested information. You need to know the applicable node names, adapter names, and switch names.
Type the number that corresponds to the option for removing the transport cable.
Follow the instructions and type the requested information. You need to know the applicable node names, adapter names, and switch names.
If you are removing a physical cable, disconnect the cable between the port and the destination device.
Type the number that corresponds to the option for removing the transport adapter from a node.
Follow the instructions and type the requested information. You need to know the applicable node names, adapter names, and switch names.
If you are removing a physical adapter from a node, see the Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for hardware service procedures.
Type the number that corresponds to the option for removing a transport switch.
Follow the instructions and type the requested information. You need to know the applicable node names, adapter names, and switch names.
A switch cannot be removed if any of the ports are still in use as endpoints on any transport cables.
Verify that the cable, adapter, or switch has been removed.
# clinterconnect show node:adapter,adapternode
# clinterconnect show node:adapter
# clinterconnect show node:switch
The transport cable or adapter removed from the respective node should not appear in the output from this command.
The following example shows how to remove a transport cable, transport adapter, or transport switch by using the clsetup command.
[Become superuser on any node in the cluster.]
[Start the utility:]
# clsetup
[Select Cluster interconnect.]
[Select either Remove a transport cable, Remove a transport adapter from a node, or Remove a transport switch.]
[Answer the questions when prompted.]
You Will Need:     Information:       Example:
                   node names         phys-schost-1
                   adapter names      qfe1
                   switch names       hub1
[Verify that the clinterconnect command completed successfully:]
Command completed successfully.
[Quit the clsetup utility Cluster Interconnect Menu and Main Menu.]
[Verify that the cable, adapter, or switch is removed:]
# clinterconnect show phys-schost-1:qfe2,hub2
=== Transport Cables ===
Transport Cable:                 phys-schost-2:qfe2@0,hub2
  Cable Endpoint1:               phys-schost-2:qfe0@0
  Cable Endpoint2:               ethernet-1@2
  Cable State:                   Enabled

# clinterconnect show phys-schost-1:qfe2
=== Transport Adapters for qfe2 ===
Transport Adapter:                            qfe2
  Adapter State:                              Enabled
  Adapter Transport Type:                     dlpi
  Adapter Property (device_name):             ce
  Adapter Property (device_instance):         0
  Adapter Property (lazy_free):               1
  Adapter Property (dlpi_heartbeat_timeout):  10000
  Adapter Property (dlpi_heartbeat_quantum):  1000
  Adapter Property (nw_bandwidth):            80
  Adapter Property (bandwidth):               70
  Adapter Property (ip_address):              172.16.0.129
  Adapter Property (netmask):                 255.255.255.128
  Adapter Port Names:                         0
  Adapter Port State (0):                     Enabled

# clinterconnect show phys-schost-1:hub2
=== Transport Switches ===
Transport Switch:                hub2
  Switch State:                  Enabled
  Switch Type:                   switch
  Switch Port Names:             1 2
  Switch Port State (1):         Enabled
  Switch Port State (2):         Enabled
You can also accomplish this procedure by using the Sun Cluster Manager GUI. See the Sun Cluster Manager online help for more information.
This option is used to enable an already existing cluster transport cable.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser on any node in the cluster.
Start the clsetup utility.
# clsetup
The Main Menu is displayed.
Type the number that corresponds to the option for accessing the Cluster Interconnect menu and press the Return key.
Type the number that corresponds to the option for enabling the transport cable and press the Return key.
Follow the instructions when prompted. You need to provide both the node and the adapter names of one of the endpoints of the cable that you are trying to identify.
Verify that the cable is enabled.
# clinterconnect show node:adapter,adapternode
This example shows how to enable a cluster transport cable on adapter qfe1, located on the node phys-schost-2.
[Become superuser on any node.]
[Start the clsetup utility:]
# clsetup
[Select Cluster interconnect > Enable a transport cable.]
[Answer the questions when prompted.]
[You will need the following information.]
You Will Need:     Information:       Example:
                   node names         phys-schost-2
                   adapter names      qfe1
                   switch names       hub1
[Verify that the clinterconnect command completed successfully:]
clinterconnect enable phys-schost-2:qfe1
Command completed successfully.
[Quit the clsetup Cluster Interconnect Menu and Main Menu.]
[Verify that the cable is enabled:]
# clinterconnect show phys-schost-1:qfe2,hub2
  Transport cable:   phys-schost-2:qfe1@0   ethernet-1@2   Enabled
  Transport cable:   phys-schost-3:qfe0@1   ethernet-1@3   Enabled
  Transport cable:   phys-schost-1:qfe0@0   ethernet-1@1   Enabled
You can also accomplish this procedure by using the Sun Cluster Manager GUI. See the Sun Cluster Manager online help for more information.
You might need to disable a cluster transport cable to temporarily shut down a cluster interconnect path. A temporary shutdown is useful when troubleshooting a cluster interconnect problem or when replacing cluster interconnect hardware.
When a cable is disabled, the two endpoints of the cable remain configured. An adapter cannot be removed if it is still in use as an endpoint in a transport cable.
Each cluster node needs at least one functioning transport path to every other node in the cluster. No two nodes should be isolated from one another. Always verify the status of a node's cluster interconnect before disabling a cable. Only disable a cable connection after you have verified that it is redundant. That is, ensure that another connection is available. Disabling a node's last remaining working cable takes the node out of cluster membership.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser on any node in the cluster.
Check the status of the cluster interconnect before disabling a cable.
# clinterconnect status
If you receive an error such as “path faulted” while attempting to remove one node of a two-node cluster, investigate the problem before continuing with this procedure. Such a problem could indicate that a node path is unavailable. Removing the remaining operational path takes the node out of cluster membership and could result in a cluster reconfiguration.
Start the clsetup utility.
# clsetup
The Main Menu is displayed.
Type the number that corresponds to the option for accessing the Cluster Interconnect Menu and press the Return key.
Type the number that corresponds to the option for disabling the transport cable and press the Return key.
Follow the instructions and provide the requested information. All of the components on this cluster interconnect will be disabled. You need to provide both the node and the adapter names of one of the endpoints of the cable that you are trying to identify.
Verify that the cable is disabled.
# clinterconnect show node:adapter,adapternode
This example shows how to disable a cluster transport cable on adapter qfe1, located on the node phys-schost-2.
[Become superuser on any node.]
[Start the clsetup utility:]
# clsetup
[Select Cluster interconnect > Disable a transport cable.]
[Answer the questions when prompted.]
[You will need the following information.]
You Will Need:     Information:       Example:
                   node names         phys-schost-2
                   adapter names      qfe1
                   switch names       hub1
[Verify that the clinterconnect command completed successfully:]
Command completed successfully.
[Quit the clsetup Cluster Interconnect Menu and Main Menu.]
[Verify that the cable is disabled:]
# clinterconnect show -p phys-schost-1:qfe2,hub2
  Transport cable:   phys-schost-2:qfe1@0   ethernet-1@2   Disabled
  Transport cable:   phys-schost-3:qfe0@1   ethernet-1@3   Enabled
  Transport cable:   phys-schost-1:qfe0@0   ethernet-1@1   Enabled
You need to determine a transport adapter's instance number to ensure that you add and remove the correct transport adapter through the clsetup command. The adapter name is a combination of the type of the adapter and the adapter's instance number. This procedure uses an SCI-PCI adapter as an example.
Based on the slot number, find the adapter's name.
The following screen is an example and might not reflect your hardware.
# prtdiag
...
========================= IO Cards =========================
                    Bus  Max
     IO  Port Bus   Freq Bus  Dev,
Type ID  Side Slot  MHz  Freq Func State Name                              Model
---- ---- ---- ---- ---- ---- ---- ----- -------------------------------- ----
PCI   8   B    2    33   33   2,0  ok    pci11c8,0-pci11c8,d665.11c8.0.0
PCI   8   B    3    33   33   3,0  ok    pci11c8,0-pci11c8,d665.11c8.0.0
...
Using the adapter's path, find the adapter's instance number.
The following screen is an example and might not reflect your hardware.
# grep sci /etc/path_to_inst
"/pci@1f,4000/pci11c8,0@2" 0 "sci"
"/pci@1f,4000/pci11c8,0@4" 1 "sci"
Using the adapter's name and slot number, find the adapter's instance number.
The following screen is an example and might not reflect your hardware.
# prtconf
...
pci, instance #0
        pci11c8,0, instance #0
        pci11c8,0, instance #1
...
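The lookups above can be combined: /etc/path_to_inst stores its fields in the fixed order "physical path" instance "driver", so the instance number for a known device path can be pulled out with awk. A sketch against simulated file content (on a live system you would read /etc/path_to_inst itself):

```shell
# Sketch: derive an adapter's instance number, and thus its name, from
# /etc/path_to_inst-style content. Fields: "path" instance "driver".
path_to_inst=$(cat <<'EOF'
"/pci@1f,4000/pci11c8,0@2" 0 "sci"
"/pci@1f,4000/pci11c8,0@4" 1 "sci"
EOF
)
# Match the quoted device path in field 1 and print the instance field.
instance=$(printf '%s\n' "$path_to_inst" \
  | awk '$1 == "\"/pci@1f,4000/pci11c8,0@4\"" { print $2 }')
echo "adapter name: sci$instance"
```

The adapter name passed to clsetup is then the driver name with the instance appended, sci1 in this simulated case.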
Use this procedure to change a private network address or the range of network addresses used or both.
Ensure that remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser is enabled to all cluster nodes.
Reboot all cluster nodes into noncluster mode by performing the following substeps on each cluster node:
Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the cluster node to be started in noncluster mode.
Shut down the node by using the clnode evacuate and cluster shutdown commands.
The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from voting or non-voting nodes on the specified node to the next-preferred voting or non-voting node.
# clnode evacuate node
# cluster shutdown -g0 -y
From one node, start the clsetup utility.
When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.
Type the number that corresponds to the option for Change IP Address Range and press the Return key.
The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.
To change either the private-network IP address or the IP address range, type yes and press the Return key.
The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.
Change or accept the private-network IP address.
To accept the default private-network IP address and proceed to changing the IP address range, type yes and press the Return key.
The clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter your response.
To change the default private-network IP address, perform the following substeps.
Type no in response to the clsetup utility question about whether it is okay to accept the default address, then press the Return key.
The clsetup utility will prompt for the new private-network IP address.
Type the new IP address and press the Return key.
The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.
Change or accept the default private-network IP address range.
On the Solaris 9 OS, the default netmask is 255.255.248.0. This default IP address range supports up to 64 nodes and up to 10 private networks in the cluster. On the Solaris 10 OS, the default netmask is 255.255.240.0. This default IP address range supports up to 64 nodes, up to 12 zone clusters, and up to 10 private networks in the cluster.
To accept the default IP address range, type yes and press the Return key.
Then skip to the next step.
To change the IP address range, perform the following substeps.
Type no in response to the clsetup utility's question about whether it is okay to accept the default address range, then press the Return key.
When you decline the default netmask, the clsetup utility prompts you for the number of nodes and private networks, and zone clusters on the Solaris 10 OS, that you expect to configure in the cluster.
Enter the number of nodes and private networks, and zone clusters on the Solaris 10 OS, that you expect to configure in the cluster.
From these numbers, the clsetup utility calculates two proposed netmasks:
The first netmask is the minimum netmask to support the number of nodes and private networks, and zone clusters on the Solaris 10 OS, that you specified.
The second netmask supports twice the number of nodes and private networks, and zone clusters on the Solaris 10 OS, that you specified, to accommodate possible future growth.
Specify either of the calculated netmasks, or specify a different netmask that supports the expected number of nodes and private networks, and zone clusters on the Solaris 10 OS.
Type yes in response to the clsetup utility's question about proceeding with the update.
When finished, exit the clsetup utility.
Reboot each cluster node back into cluster mode by completing the following substeps for each cluster node:
Boot the node.
On SPARC based systems, run the following command.
ok boot
On x86 based systems, run the following commands.
When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
Verify that the node has booted without error, and is online.
# cluster status -t node
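The default netmasks quoted in this procedure come down to bit arithmetic. The node, zone-cluster, and private-network capacity split is internal to the clsetup utility, so only the raw address count is computed in this illustrative sketch:

```shell
# Sketch: how many addresses each default private-network netmask spans.
# 255.255.248.0 is a /21 (Solaris 9 default); 255.255.240.0 is a /20
# (Solaris 10 default). Host bits = 32 - prefix length.
for bits in 21 20; do
  addresses=$(( 1 << (32 - bits) ))
  echo "/$bits netmask spans $addresses addresses"
done
```

The /20 default thus spans twice the address space of the /21, which is what allows the Solaris 10 configuration to also accommodate zone clusters.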
Sun Cluster 3.1 4/04, Sun Cluster 3.1 8/05, Sun Cluster 3.2, and Sun Cluster 3.2 2/08 support the Solaris software implementation of Internet Protocol (IP) Network Multipathing for public networks. Basic IP Network Multipathing administration is the same for both cluster and noncluster environments. Multipathing administration is covered in the appropriate Solaris OS documentation. However, review the guidelines that follow before administering IP Network Multipathing in a Sun Cluster environment.
Before performing IP Network Multipathing procedures on a cluster, consider the following guidelines.
Each public network adapter must belong to a multipathing group.
The local-mac-address? variable must have a value of true for Ethernet adapters.
You must configure a test IP address for each adapter in the following kinds of multipathing groups:
All multiple-adapter multipathing groups in a cluster that runs on the Solaris 9 or Solaris 10 OS. Single-adapter multipathing groups on the Solaris 9 or Solaris 10 OS do not require test IP addresses.
Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.
Test IP addresses must not be used by normal applications because they are not highly available.
No restrictions are placed on multipathing group naming. However, when configuring a resource group, the netiflist naming convention is the multipathing group name followed by either the nodeID number or the node name. For example, given a multipathing group named sc_ipmp0, the netiflist naming could be either sc_ipmp0@1 or sc_ipmp0@phys-schost-1, where the adapter is on the node phys-schost-1, which has the nodeID of 1.
Avoid unconfiguring (unplumbing) or bringing down an adapter of an IP Network Multipathing group without first switching over the IP addresses from the adapter to be removed to an alternate adapter in the group, using the if_mpadm(1M) command.
Avoid rewiring adapters to different subnets without first removing them from their respective multipathing groups.
Logical adapter operations can be done on an adapter even if monitoring is on for the multipathing group.
You must maintain at least one public network connection for each node in the cluster. The cluster is inaccessible without a public network connection.
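The netiflist convention described in these guidelines is mechanical enough to script. The following sketch builds both accepted forms for a hypothetical group sc_ipmp0 on node phys-schost-1 (node ID 1); the names are illustrative, not taken from a real configuration:

```shell
# Sketch: build the two equivalent netiflist forms for an IPMP group,
# group@nodeID and group@nodename, per the convention above.
group=sc_ipmp0
node_id=1
node_name=phys-schost-1
by_id="${group}@${node_id}"
by_name="${group}@${node_name}"
echo "netiflist candidates: $by_id or $by_name"
```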
To view the status of IP Network Multipathing groups on a cluster, use the clinterconnect status command.
For more information about IP Network Multipathing, see the appropriate documentation in the Solaris OS system administration documentation set.
Table 7–3 Task Map: Administering the Public Network
Solaris Operating System Release | Instructions
---|---
SPARC: Solaris 9 Operating System | “IP Network Multipathing Topics” in System Administration Guide: IP Services
Solaris 10 Operating System | Part VI, IPMP, in System Administration Guide: IP Services
For cluster software installation procedures, see the Sun Cluster Software Installation Guide for Solaris OS. For procedures about servicing public networking hardware components, see the Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
You must consider a few issues when completing dynamic reconfiguration (DR) operations on public network interfaces in a cluster.
All of the requirements, procedures, and restrictions that are documented for the Solaris DR feature also apply to Sun Cluster DR support (except for the operating system quiescence operation). Therefore, review the documentation for the Solaris DR feature before using the DR feature with Sun Cluster software. You should review in particular the issues that affect non-network IO devices during a DR detach operation.
DR remove-board operations can succeed only when public network interfaces are not active. Before removing an active public network interface, switch the IP addresses from the adapter to be removed to another adapter in the multipathing group, using the if_mpadm(1M) command.
If you try to remove a public network interface card without having properly disabled it as an active network interface, Sun Cluster rejects the operation and identifies the interface that would be affected by the operation.
For multipathing groups with two adapters, if the remaining network adapter fails while you are performing the DR remove operation on the disabled network adapter, availability is impacted. The remaining adapter has no place to fail over for the duration of the DR operation.
Complete the following procedures in the order indicated when performing DR operations on public network interfaces.
Table 7–4 Task Map: Dynamic Reconfiguration With Public Network Interfaces

Task | Instructions
---|---
1. Switch the IP addresses from the adapter to be removed to another adapter in the multipathing group, using the if_mpadm command | if_mpadm(1M) man page. The appropriate Solaris OS documentation: Solaris 9: “IP Network Multipathing Topics” in System Administration Guide: IP Services. Solaris 10: Part VI, IPMP, in System Administration Guide: IP Services
2. Remove the adapter from the multipathing group by using the ifconfig command | ifconfig(1M) man page. The appropriate Solaris OS documentation: Solaris 9: “IP Network Multipathing Topics” in System Administration Guide: IP Services. Solaris 10: Part VI, IPMP, in System Administration Guide: IP Services
3. Perform the DR operation on the public network interface | Sun Enterprise 10000 DR Configuration Guide and the Sun Enterprise 10000 Dynamic Reconfiguration Reference Manual (from the Solaris 9 on Sun Hardware and Solaris 10 on Sun Hardware collections)