This chapter provides the software procedures for administering the Sun Cluster interconnects and public networks.
Administering the cluster interconnects and public networks consists of both hardware and software procedures. Typically, you configure the cluster interconnects and public networks, including NAFO groups, when you initially install and configure the cluster. If you later need to alter a cluster interconnect or public network configuration, you can use the software procedures in this chapter.
This is a list of the procedures in this chapter.
"5.1.3 How to Add Cluster Transport Cables, Transport Adapters, or Transport Junctions"
"5.1.4 How to Remove Cluster Transport Cables, Transport Adapters, and Transport Junctions"
"5.2.8 How to Change Public Network Management Tunable Parameters"
For a high-level description of the related procedures in this chapter, see Table 5-1 and Table 5-3.
Refer to the Sun Cluster 3.0 12/01 Concepts document for background and overview information on the cluster interconnects and public networks.
This section provides the procedures for reconfiguring cluster interconnects, such as cluster transport adapters and cluster transport cables. These procedures require that you install Sun Cluster software.
Most of the time, you can use the scsetup utility to administer the cluster transport for the cluster interconnects. See the scsetup(1M) man page for more information.
For cluster software installation procedures, see the Sun Cluster 3.0 12/01 Software Installation Guide. For procedures about servicing cluster hardware components, see the Sun Cluster 3.0 12/01 Hardware Guide.
You can usually choose to use the default port name, where appropriate, during cluster interconnect procedures. The default port name is the same as the internal node ID number of the node that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI.
Table 5-1 Task Map: Administering the Cluster Interconnect

| Task | For Instructions, Go To... |
|---|---|
| Administer the cluster transport - Use scsetup | |
| Check the status of the cluster interconnect - Use scstat | |
| Add a cluster transport cable, transport adapter, or transport junction - Use scsetup | "5.1.3 How to Add Cluster Transport Cables, Transport Adapters, or Transport Junctions" |
| Remove a cluster transport cable, transport adapter, or transport junction - Use scsetup | "5.1.4 How to Remove Cluster Transport Cables, Transport Adapters, and Transport Junctions" |
| Enable a cluster transport cable - Use scsetup | |
| Disable a cluster transport cable - Use scsetup | |
There are a few issues you must consider when completing dynamic reconfiguration (DR) operations on cluster interconnects.
All of the requirements, procedures, and restrictions that are documented for the Solaris 8 DR feature also apply to Sun Cluster DR support (except for the operating environment quiescence operation). Therefore, review the documentation for the Solaris 8 DR feature before using the DR feature with Sun Cluster software. You should review in particular the issues that affect non-network IO devices during a DR detach operation.
DR remove operations cannot be performed on active private interconnect interfaces.
If the DR remove operation would affect an active private interconnect interface, the system rejects the operation and identifies the interface that would be affected by the operation.
When an interface is replaced on the private interconnect, its state remains the same, avoiding any need for additional Sun Cluster reconfiguration steps.
Sun Cluster requires that each cluster node has at least one functioning path to every other cluster node. Do not disable a private interconnect interface that supports the last path to any cluster node.
Complete the following procedures in the order indicated when performing DR operations on cluster interconnects.
Table 5-2 Task Map: Dynamic Reconfiguration with Cluster Interconnects
| Task | For Instructions, Go To... |
|---|---|
| 1. Disable and remove the interface from the active interconnect | "5.1.4 How to Remove Cluster Transport Cables, Transport Adapters, and Transport Junctions" |
| 2. Perform the DR operation on the interconnect interface | Sun Enterprise 10000 Dynamic Reconfiguration User Guide and the Sun Enterprise 10000 Dynamic Reconfiguration Reference Manual (from the Solaris 8 on Sun Hardware collection) |
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
You do not need to be logged in as superuser to perform this procedure.
Check the status of the cluster interconnect.
# scstat -W
Refer to the following for common status messages.
| Status Message | Description and Possible Action |
|---|---|
| Path online | The path is currently functioning correctly. No action is necessary. |
| Path waiting | The path is currently being initialized. No action is necessary. |
| Path faulted | The path is not functioning. This can be a transient state when paths are transitioning between the waiting and online states. If the message persists when scstat -W is rerun, take corrective action. |
The following example shows the status of a functioning cluster interconnect.
# scstat -W

-- Cluster Transport Paths --

                    Endpoint               Endpoint               Status
                    --------               --------               ------
  Transport path:   phys-schost-1:qfe1     phys-schost-2:qfe1     Path online
  Transport path:   phys-schost-1:qfe0     phys-schost-2:qfe0     Path online
  Transport path:   phys-schost-1:qfe1     phys-schost-3:qfe1     Path online
  Transport path:   phys-schost-1:qfe0     phys-schost-3:qfe0     Path online
  Transport path:   phys-schost-2:qfe1     phys-schost-3:qfe1     Path online
  Transport path:   phys-schost-2:qfe0     phys-schost-3:qfe0     Path online
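A status check like the one above is easy to automate in a preflight or monitoring script by filtering for any path that is not online. The helper below is a sketch, not a Sun Cluster command; the function name check_paths and the embedded sample output are illustrative, and in practice you would pipe `scstat -W` into it directly.

```shell
#!/bin/sh
# check_paths: print every transport path whose status is not "Path online",
# reading scstat -W output from stdin. Exits 0 if all paths are online,
# nonzero otherwise. (Hypothetical helper, not a Sun Cluster command.)
check_paths() {
    awk '/Transport path:/ && !/Path online/ { print; bad = 1 }
         END { exit bad }'
}

# Demonstration on embedded sample output; in practice:
#   scstat -W | check_paths || <notify the administrator>
cat <<'EOF' | check_paths || echo "WARNING: some transport paths are not online"
  Transport path:  phys-schost-1:qfe1   phys-schost-2:qfe1   Path online
  Transport path:  phys-schost-1:qfe0   phys-schost-2:qfe0   Path faulted
EOF
```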
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
Ensure that the physical cluster transport cables are installed.
For the procedure on installing a cluster transport cable, see the Sun Cluster 3.0 12/01 Hardware Guide.
Become superuser on any node in the cluster.
Enter the scsetup utility.
# scsetup
The Main Menu is displayed.
Access the Cluster Interconnect Menu by typing 3 (Cluster interconnect).
If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter connections (the port name) during the "Add" portion of this procedure. Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch to which the node is physically cabled.
Add the transport cable by typing 1 (Add a transport cable).
Follow the instructions and enter the requested information.
Add the transport adapter by typing 2 (Add a transport adapter to a node).
Follow the instructions and enter the requested information.
Add the transport junction by typing 3 (Add a transport junction).
Follow the instructions and enter the requested information.
Verify that the cluster transport cable, transport adapter, or transport junction is added.
# scconf -p | grep cable
# scconf -p | grep adapter
# scconf -p | grep junction
The following example shows how to add a transport cable, transport adapter, or transport junction to a node using the scsetup command.
[Ensure the physical cable is installed.]
[Become superuser on any node in the cluster.]
# scsetup
Select Cluster interconnect.
Select either Add a transport cable, Add a transport adapter to a node, or Add a transport junction.
Answer the questions when prompted.
   You Will Need:     Example:
   node names         phys-schost-1
   adapter names      qfe2
   junction names     hub2
   transport type     dlpi
[Verify that the scconf command completed successfully:]
Command completed successfully.
Quit the scsetup Cluster Interconnect Menu and Main Menu.
[Verify that the cable, adapter, and junction are added:]
# scconf -p | grep cable
Transport cable:   phys-schost-2:qfe0@1 ethernet-1@2    Enabled
Transport cable:   phys-schost-3:qfe0@1 ethernet-1@3    Enabled
Transport cable:   phys-schost-1:qfe0@0 ethernet-1@1    Enabled
# scconf -p | grep adapter
Node transport adapters:    qfe2 hme1 qfe0
Node transport adapter:     qfe0
Node transport adapters:    qfe0 qfe2 hme1
Node transport adapter:     qfe0
Node transport adapters:    qfe0 qfe2 hme1
Node transport adapter:     qfe0
# scconf -p | grep junction
Cluster transport junctions:   hub0 hub1 hub2
Cluster transport junction:    hub0
Cluster transport junction:    hub1
Cluster transport junction:    hub2
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
Use the following procedure to remove cluster transport cables, transport adapters, and transport junctions from a node configuration. When a cable is disabled, the two endpoints of the cable remain configured. An adapter cannot be removed if it is still in use as an endpoint on a transport cable.
Each cluster node needs at least one functioning transport path to every other node in the cluster. No two nodes should be isolated from one another. Always verify the status of a node's cluster interconnect before disabling a cable. Only disable a cable connection after you have verified that it is redundant; that is, that another connection is available. Disabling a node's last remaining working cable takes the node out of cluster membership.
Become superuser on any node in the cluster.
Check the status of the remaining cluster transport path.
# scstat -W
If you receive an error such as "path faulted" while attempting to remove one node of a two-node cluster, investigate the problem before continuing with this procedure. Such a problem could indicate that a node path is unavailable. Removing the remaining good path takes the node out of cluster membership and could result in a cluster reconfiguration.
Enter the scsetup utility.
# scsetup
The Main Menu is displayed.
Access the Cluster Interconnect Menu by typing 3 (Cluster interconnect).
Remove the cable by typing 4 (Remove a transport cable).
Follow the instructions and enter the requested information. You will need to know the applicable node names, adapter names, and junction names.
If you are removing a physical cable, disconnect the cable between the port and the destination device.
Remove the adapter by typing 5 (Remove a transport adapter from a node).
Follow the instructions and enter the requested information. You will need to know the applicable node names, adapter names, and junction names.
If you are removing a physical adapter from a node, see the Sun Cluster 3.0 12/01 Hardware Guide for hardware service procedures.
Remove the junction by typing 6 (Remove a transport junction).
Follow the instructions and enter the requested information. You will need to know the applicable node names, adapter names, and junction names.
A junction cannot be removed if any of the ports are still in use as endpoints on any transport cables.
Verify that the cable or the adapter has been removed.
# scconf -p | grep cable
# scconf -p | grep adapter
# scconf -p | grep junction
The transport cable or adapter removed from the given node should not appear in the output from this command.
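If you script this verification, the absence check can be made explicit. The sketch below is a hypothetical helper (assert_removed is not a Sun Cluster command, and the sample output is illustrative); it fails whenever the named component still appears in the scconf -p output.

```shell
#!/bin/sh
# assert_removed: read scconf -p output on stdin and fail if the named
# component still appears. (Hypothetical helper, not a Sun Cluster command.)
assert_removed() {
    if grep "$1" >/dev/null; then
        echo "ERROR: $1 still appears in the configuration" >&2
        return 1
    fi
}

# Demonstration on sample output; in practice: scconf -p | assert_removed qfe1
cat <<'EOF' | assert_removed qfe1 && echo "qfe1 removed"
Transport cable:   phys-schost-2:qfe0@1 ethernet-1@2    Enabled
Node transport adapter:     qfe0
EOF
```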
The following example shows how to remove a transport cable, transport adapter, or transport junction using the scsetup command.
[Become superuser on any node in the cluster.]
[Enter the utility:]
# scsetup
Type 3 (Cluster interconnect).
Select either Remove a transport cable, Remove a transport adapter from a node, or Remove a transport junction.
Answer the questions when prompted.
   You Will Need:     Example:
   node names         phys-schost-1
   adapter names      qfe1
   junction names     hub1
[Verify that the scconf command completed successfully:]
"Command completed successfully."
Quit the scsetup Cluster Interconnect Menu and Main Menu.
[Verify that the cable, adapter, or junction is removed:]
# scconf -p | grep cable
Transport cable:   phys-schost-2:qfe0@1 ethernet-1@2    Enabled
Transport cable:   phys-schost-3:qfe0@1 ethernet-1@3    Enabled
Transport cable:   phys-schost-1:qfe0@0 ethernet-1@1    Enabled
# scconf -p | grep adapter
Node transport adapters:    qfe2 hme1 qfe0
Node transport adapter:     qfe0
Node transport adapters:    qfe0 qfe2 hme1
Node transport adapter:     qfe0
Node transport adapters:    qfe0 qfe2 hme1
Node transport adapter:     qfe0
# scconf -p | grep junction
Cluster transport junctions:   hub0 hub2
Cluster transport junction:    hub0
Cluster transport junction:    hub2
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
Use this procedure to enable an existing cluster transport cable.
Become superuser on any node in the cluster.
Enter the scsetup utility.
# scsetup
The Main Menu is displayed.
Access the Cluster Interconnect Menu by typing 3 (Cluster interconnect).
Enable the transport cable by typing 7 (Enable a transport cable).
Follow the instructions when prompted. You need to enter both the node and the adapter names of one of the endpoints of the cable you are trying to identify.
Verify that the cable is enabled.
# scconf -p | grep cable
This example shows how to enable a cluster transport cable on adapter qfe1 located on the node phys-schost-2.
[Become superuser on any node.]
[Enter the scsetup utility:]
# scsetup
Select Cluster interconnect>Enable a transport cable.
Answer the questions when prompted. You will need the following information.
   You Will Need:     Example:
   node names         phys-schost-2
   adapter names      qfe1
   junction names     hub1
[Verify that the scconf command completed successfully:]
scconf -c -m endpoint=phys-schost-2:qfe1,state=enabled
Command completed successfully.
Quit the scsetup Cluster Interconnect Menu and Main Menu.
[Verify that the cable is enabled:]
# scconf -p | grep cable
Transport cable:   phys-schost-2:qfe1@0 ethernet-1@2    Enabled
Transport cable:   phys-schost-3:qfe0@1 ethernet-1@3    Enabled
Transport cable:   phys-schost-1:qfe0@0 ethernet-1@1    Enabled
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
You might need to disable a cluster transport cable to temporarily shut down a cluster interconnect path. This is useful when troubleshooting a cluster interconnect problem or when replacing cluster interconnect hardware.
When a cable is disabled, the two endpoints of the cable remain configured. An adapter cannot be removed if it is still in use as an endpoint in a transport cable.
Each cluster node needs at least one functioning transport path to every other node in the cluster. No two nodes should be isolated from one another. Always verify the status of a node's cluster interconnect before disabling a cable. Only disable a cable connection after you have verified that it is redundant; that is, that another connection is available. Disabling a node's last remaining working cable takes the node out of cluster membership.
Become superuser on any node in the cluster.
Check the status of the cluster interconnect before disabling a cable.
# scstat -W
If you receive an error such as "path faulted" while attempting to remove one node of a two-node cluster, investigate the problem before continuing with this procedure. Such a problem could indicate that a node path is unavailable. Removing the remaining good path takes the node out of cluster membership and could result in a cluster reconfiguration.
Enter the scsetup utility.
# scsetup
The Main Menu is displayed.
Access the Cluster Interconnect Menu by typing 3 (Cluster interconnect).
Disable the cable by typing 8 (Disable a transport cable).
Follow the instructions and enter the requested information. All of the components on this cluster interconnect will be disabled. You need to enter both the node and the adapter names of one of the endpoints of the cable you are trying to identify.
Verify that the cable is disabled.
# scconf -p | grep cable
This example shows how to disable a cluster transport cable on adapter qfe1 located on the node phys-schost-2.
[Become superuser on any node.]
[Enter the scsetup utility:]
# scsetup
Select Cluster interconnect>Disable a transport cable.
Answer the questions when prompted. You will need the following information.
   You Will Need:     Example:
   node names         phys-schost-2
   adapter names      qfe1
   junction names     hub1
[Verify that the scconf command completed successfully:]
scconf -c -m endpoint=phys-schost-2:qfe1,state=disabled
Command completed successfully.
Quit the scsetup Cluster Interconnect Menu and Main Menu.
[Verify that the cable is disabled:]
# scconf -p | grep cable
Transport cable:   phys-schost-2:qfe1@0 ethernet-1@2    Disabled
Transport cable:   phys-schost-3:qfe0@1 ethernet-1@3    Enabled
Transport cable:   phys-schost-1:qfe0@0 ethernet-1@1    Enabled
If you need to alter a public network configuration, you can use the software procedures in this section.
When administering public network adapters, pay attention to the following points.
Avoid unconfiguring (unplumbing) or bringing down the active adapter of a Network Adapter Failover (NAFO) group without first switching over the active adapter to a backup adapter in the group. See "5.2.6 How to Switch a NAFO Group's Active Adapter".
Avoid rewiring backup adapters to different subnets without first removing them from their respective NAFO groups.
Logical adapter operations can be done on the active adapter even if monitoring is on for the group.
You must maintain at least one public network connection for each node in the cluster. The cluster is inaccessible without a public network connection.
For cluster software installation procedures, see the Sun Cluster 3.0 12/01 Software Installation Guide. For procedures about servicing public networking hardware components, see the Sun Cluster 3.0 12/01 Hardware Guide.
Table 5-3 Task Map: Administering the Public Network
| Task | For Instructions, Go To... |
|---|---|
| Create a NAFO group on a node | |
| Add more public network adapters to a node | |
| Delete a NAFO group | |
| Remove backup adapters from an existing NAFO group | |
| Switch the active adapter to a backup adapter | |
| Check the status of NAFO groups | |
| Change parameters to tune the PNM fault detection and failover process | "5.2.8 How to Change Public Network Management Tunable Parameters" |
There are a few issues you must consider when completing dynamic reconfiguration (DR) operations on public network interfaces in a cluster.
All of the requirements, procedures, and restrictions that are documented for the Solaris 8 DR feature also apply to Sun Cluster DR support (except for the operating environment quiescence operation). Therefore, review the documentation for the Solaris 8 DR feature before using the DR feature with Sun Cluster software. You should review in particular the issues that affect non-network IO devices during a DR detach operation.
DR remove operations can be performed on public network interfaces that are not active. Any active public network interface must first be removed from active status in a NAFO group.
When an interface is replaced on the private interconnect, its state remains the same, avoiding any need for additional Sun Cluster reconfiguration steps.
If you try to remove a public network interface card without having properly disabled it as the active network adapter, the system rejects the operation and identifies the interface that would be affected by the operation.
If the active network adapter fails while you are performing the DR remove operation on the disabled network adapter, availability is impacted. The active adapter has no place to fail over for the duration of the DR operation.
Complete the following procedures in the order indicated when performing DR operations on public network interfaces.
Table 5-4 Task Map: Dynamic Reconfiguration with Public Network Interfaces
| Task | For Instructions, Go To... |
|---|---|
| 1. Switch the active adapter to be a backup adapter, so it can be removed from the NAFO group | |
| 2. Remove the adapter from the NAFO group | |
| 3. Perform the DR operation on the public network interface | Sun Enterprise 10000 Dynamic Reconfiguration User Guide and the Sun Enterprise 10000 Dynamic Reconfiguration Reference Manual (from the Solaris 8 on Sun Hardware collection) |
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
Note the following requirements for creating a NAFO group:
All public network adapters must be configured to belong to a NAFO group.
For any given node, there can be at most one NAFO group on a given subnet.
All adapters in a given NAFO group must be connected to the same subnet.
Only one adapter in a given NAFO group can have a hostname association, that is, an /etc/hostname.adapter file.
A public network adapter can belong to only one NAFO group.
Become superuser on the node being configured for a NAFO group.
For this node, find out the public network adapters that are physically connected to the same subnet.
These adapters form the backup adapters for the NAFO group.
Create the /etc/hostname.adapter file for one of the public network adapters, if the file does not already exist.
The adapter specified in this file will be the default active adapter for the NAFO group.
# vi /etc/hostname.<adapter>
phys-schost-1
Edit the /etc/inet/hosts file to add the IP address and corresponding hostname assigned to the public network adapter.
For example, the following shows the IP address 192.29.75.101 and hostname phys-schost-1 added to the /etc/inet/hosts file.
# vi /etc/inet/hosts
192.29.75.101   phys-schost-1
If a naming service is used, this information should also exist in the naming service database.
Create the NAFO group.
# pnmset -c nafo-group -o create adapter [adapter ...]

-c nafo-group
    Performs a configuration subcommand for the specified NAFO group. NAFO groups must be named nafoN, where N is a nonnegative integer identifier for the group. Group names are local to each node. Thus, the same NAFO group name can be used on multiple nodes.
-o create
    Creates the new NAFO group.
adapter [adapter ...]
    Specifies the public network adapter(s) that serve as backup adapters. See Step 3.
If an adapter is already configured, it will be chosen as the active adapter and the pnmset command does not alter its state. Otherwise, one of the backup adapters will be configured and assigned the IP address found in the /etc/hostname.adapter file for the NAFO group.
Verify the status of the NAFO group.
# pnmstat -l
The following example shows the creation of a NAFO group (nafo0) configured with two network adapters (qfe0 and qfe1).
# pnmstat -l
# vi /etc/hostname.qfe0
phys-schost-1
# vi /etc/inet/hosts
192.168.0.0 phys-schost-1
# pnmset -c nafo0 -o create qfe0 qfe1
# pnmstat -l
group   adapters    status   fo_time   act_adp
nafo0   qfe0:qfe1   OK       NEVER     qfe0
You can add adapters to an existing NAFO group to provide additional backup adapters for the NAFO group and thereby increase the availability of public network connectivity for the cluster node.
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
Do you need to install the new public network adapter card(s) in the node(s)?
If yes, see the Sun Cluster 3.0 12/01 Hardware Guide for instructions.
If no, proceed to Step 2.
Make sure the adapter to be added to the NAFO group is connected to the same subnet as the active adapter for the NAFO group.
Make sure the adapter is not plumbed, and that it does not have an associated /etc/hostname.adapter file.
Become superuser on the node that contains the NAFO group to which the new adapter is being added.
Add the adapter to the NAFO group.
# pnmset -c nafo-group -o add adapter

-c nafo-group
    Specifies the NAFO group to which the new adapter is being added.
adapter
    Specifies the public network adapter being added to the named NAFO group.
Verify the status of the NAFO group.
# pnmstat -l
The following example shows the addition of adapter qfe2 to NAFO group nafo0, which already contained two adapters (qfe0, qfe1).
# pnmstat -l
group   adapters         status   fo_time   act_adp
nafo0   qfe0:qfe1        OK       NEVER     qfe0
# pnmset -c nafo0 -o add qfe2
# pnmstat -l
group   adapters         status   fo_time   act_adp
nafo0   qfe0:qfe1:qfe2   OK       NEVER     qfe0
Delete a NAFO group when you do not want monitoring and failover for any adapter in the group. To be deleted, a NAFO group cannot be in use by logical host resource groups or shared address resource groups.
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
Become superuser on the node that contains the NAFO group that is being deleted.
Identify whether the NAFO group is being used by any logical host or shared address resources.
# scrgadm -pv
You can also use scrgadm -pvv (with two v flags) to locate the resources that are using the NAFO group you are going to delete.
Switch the logical host resource groups and shared address resource groups that use this NAFO group.
# scswitch -z -g resource-group -h nodelist

-z -g resource-group
    Switches the specified resource group.
-h nodelist
    Specifies the name of the node to switch the resource group to.
Delete the NAFO group.
# pnmset -c nafo-group -o delete

-c nafo-group
    Specifies the NAFO group to be deleted.
-o delete
    Deletes the NAFO group.
Verify the status of the NAFO group.
The deleted NAFO group should not appear in the listing.
# pnmstat -l
The following example shows the NAFO group named nafo1 deleted from the system. Logical host resource group lh-rg-1, which uses this NAFO group, is first switched to a different node.
# scswitch -z -g lh-rg-1 -h phys-schost-2
# pnmstat -l
group   adapters    status   fo_time   act_adp
nafo0   qfe0:qfe1   OK       NEVER     qfe0
nafo1   qfe2        OK       NEVER     qfe2
# pnmset -c nafo1 -o delete
# pnmstat -l
group   adapters    status   fo_time   act_adp
nafo0   qfe0:qfe1   OK       NEVER     qfe0
Remove backup adapters from an existing NAFO group to enable the adapter to be removed from the system, to be replaced, or to be reconnected to a different subnet and used as backup for another NAFO group.
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
Removing the last backup adapter from a NAFO group results in no protection against faults detected on the active adapter, reducing public network availability for the cluster node.
If you want to remove the active adapter, first switch to another adapter in the group.
As superuser, remove the adapter from the NAFO group.
# pnmset -c nafo-group -o remove adapter

-c nafo-group
    Specifies the NAFO group from which to remove the adapter.
-o remove adapter
    Removes the specified adapter from the NAFO group.
Verify the status of the NAFO group.
The deleted adapter should not appear in the listing for the NAFO group.
# pnmstat -l
The following example removes adapter qfe2 from NAFO group nafo0.
# pnmstat -l
group   adapters         status   fo_time   act_adp
nafo0   qfe0:qfe1:qfe2   OK       NEVER     qfe0
# pnmset -c nafo0 -o remove qfe2
# pnmstat -l
group   adapters         status   fo_time   act_adp
nafo0   qfe0:qfe1        OK       NEVER     qfe0
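Given the caution about removing a group's last backup adapter, a removal script can first count how many adapters the group still has. The sketch below uses a hypothetical backup_count helper and sample pnmstat output; it is not a Sun Cluster command.

```shell
#!/bin/sh
# backup_count: given pnmstat -l output on stdin, print the number of
# adapters configured in the named NAFO group. (Hypothetical helper.)
backup_count() {
    awk -v group="$1" '$1 == group { print split($2, parts, ":") }'
}

# Demonstration on sample output; in practice: pnmstat -l | backup_count nafo0
cat <<'EOF' | backup_count nafo0
group  adapters        status  fo_time  act_adp
nafo0  qfe0:qfe1:qfe2  OK      NEVER    qfe0
EOF
# A result of 2 or more means a backup remains after removing one adapter.
```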
Switch the active adapter to a backup adapter so that the current active adapter can be removed from the NAFO group. The pnmd(1M) daemon moves all IP addresses hosted by the current active adapter to the new active adapter in a similar fashion as a fault-triggered adapter failover.
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
Connections can experience a delay while the switchover is taking place. This delay can last for several minutes. Otherwise, the operation is transparent to higher-level applications.
Ensure the physical connectivity of the new active adapter is identical to that of the current active adapter.
If the new active adapter fails to host some of the IP addresses that the previous active adapter hosted, network and data services that depend on those IP addresses are interrupted until the physical connectivity is fixed or a subsequent failover succeeds.
Become superuser on the node that contains the NAFO group whose active adapter you want to switch.
Switch the active adapter.
# pnmset -c nafo-group -o switch adapter

-c nafo-group
    Specifies the NAFO group containing the adapter to switch.
-o switch adapter
    Makes the specified adapter the active adapter in the NAFO group.
Rename the /etc/hostname.adapter file for the old active adapter to reflect the new active adapter.
# mv /etc/hostname.<old_adapter> /etc/hostname.<new_adapter>
Verify the status of the NAFO group.
The "switched-to" adapter should now appear as the active adapter.
# pnmstat -l
The following example switches the active adapter to qfe1 from qfe0.
# pnmstat -l
group   adapters    status   fo_time   act_adp
nafo0   qfe0:qfe1   OK       NEVER     qfe0
# pnmset -c nafo0 -o switch qfe1
# mv /etc/hostname.qfe0 /etc/hostname.qfe1
# pnmstat -l
group   adapters    status   fo_time   act_adp
nafo0   qfe0:qfe1   OK       11        qfe1
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
Run the pnmstat(1M) command to list information about the current setup and status of all NAFO groups on a node.
# pnmstat -l
You can also use the pnmptor(1M) and pnmrtop(1M) commands to get information on adapters.
The following example shows the status of a node's three NAFO groups.
# pnmstat -l
Group   adapters   status   fo_time   act_adp
nafo0   qfe5       OK       NEVER     qfe5
nafo1   qfe6       OK       NEVER     qfe6
nafo2   qfe7       OK       NEVER     qfe7
The following example shows that the active adapter in NAFO group nafo0 is adapter qfe5.
# pnmptor nafo0
qfe5
The following example shows that adapter qfe5 belongs to NAFO group nafo0.
# pnmrtop qfe5
nafo0
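For monitoring scripts, the same pnmstat -l output can be reduced to just the groups that need attention. The sketch below is a hypothetical helper (not a Sun Cluster command) demonstrated on embedded sample output; in practice you would pipe `pnmstat -l` into it.

```shell
#!/bin/sh
# not_ok_groups: read pnmstat -l output on stdin and print the name of
# every NAFO group whose status column is not OK. (Hypothetical helper.)
not_ok_groups() {
    awk 'NR > 1 && $3 != "OK" { print $1 }'
}

# Demonstration on sample output; in practice: pnmstat -l | not_ok_groups
cat <<'EOF' | not_ok_groups
group  adapters  status  fo_time  act_adp
nafo0  qfe5      OK      NEVER    qfe5
nafo1  qfe6      DOUBT   12       qfe6
EOF
# Prints: nafo1
```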
The PNM fault detection and failover algorithm has four tunable parameters.
inactive_time
ping_timeout
repeat_test
slow_network
These parameters provide an adjustable trade-off between speed and correctness of fault detection. See Table 5-5 for more information.
Use this procedure to change the default Public Network Management (PNM) values for the pnmd(1M) daemon.
Become superuser on any node in the cluster.
If it does not already exist, create the /etc/cluster/pnmparams file.
# vi /etc/cluster/pnmparams
Use the following table to set PNM parameters.
Settings in the /etc/cluster/pnmparams file apply to all NAFO groups on the node. Lines starting with a pound sign (#) are ignored. All other lines in the file must have the format variable=value.
Table 5-5 Public Network Management Tunable Parameters

| Parameter | Description |
|---|---|
| inactive_time | Number of seconds between successive probes of the packet counters of the current active adapter. Default is 5. |
| ping_timeout | Time-out value in seconds for the ALL_HOST_MULTICAST and subnet broadcast pings. Default is 4. |
| repeat_test | Number of times to do the ping sequence before declaring that the active adapter is faulty and triggering failover. Default is 3. |
| slow_network | Number of seconds waited after each ping sequence before checking packet counters for any change. Default is 2. |
| warmup_time | Number of seconds waited after failover to a backup adapter before resuming fault monitoring. This allows extra time for any slow driver or port initialization. Default is 0. |
The changes do not take effect until the next time the pnmd daemon is started.
The following shows a sample /etc/cluster/pnmparams file, with two parameters changed from their default values.
inactive_time=3
repeat_test=5
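Because pnmd only reads this file at startup, a syntax slip is easy to miss. The sketch below (check_pnmparams is a hypothetical helper, not a Sun Cluster command, and it assumes numeric values as in Table 5-5) verifies that every non-comment line has the variable=value form described above.

```shell
#!/bin/sh
# check_pnmparams: read a pnmparams file on stdin and report any line that
# is neither a comment nor of the form variable=value with a numeric value.
# (Hypothetical helper; all parameters in Table 5-5 take numeric values.)
check_pnmparams() {
    awk '/^#/ { next }
         /^[A-Za-z_][A-Za-z_]*=[0-9][0-9]*$/ { next }
         NF { print "bad line " NR ": " $0; bad = 1 }
         END { exit bad }'
}

# Demonstration; in practice: check_pnmparams < /etc/cluster/pnmparams
cat <<'EOF' | check_pnmparams && echo "pnmparams syntax OK"
# tuned for a slow network
inactive_time=3
repeat_test=5
EOF
```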