Chapter 6 - CST Utilities
Several command-line utilities are included in the SUNWcstu package. These utilities are used to perform various administrative functions. Executing these utilities requires root-level access to the system where the packages are installed.
Below is a list of these utilities and the location of the section for each utility:
cstattach - binds a CST agent to a CST middleware system.
This utility is provided as part of the CST middleware package and is run from the CST middleware system (the default location is /opt/SUNWcstu/bin). cstattach is needed for CST 3.0 agents that are installed in unattached mode. By attaching an agent to a CST middleware system, you can:
cstattach is supported on Solaris 2.6, 7, 8, and 9. It is designed to work with CST 3.0.
Before you use cstattach, please check the following:
1. The agent being attached must be up and running when the attachment process is performed. Ensure that there is no scheduled downtime for the agent around the time you want to use this command. cstattach usually takes around 30 seconds per agent.
2. The middleware CST daemons that perform the attach process (cstd.svr and csthb.svr) must be up and running.
Note - When working with versions of CST prior to CST 3.0, the CST daemon names are cstd and csthb.
3. Make sure that the agent version is CST 1.5.1 or later.
4. For a large number of monitored systems, plan the hierarchy tree in advance to avoid mistakes that would require moving an agent from one hierarchy to another. (The hierarchy is optional.) See Managing Users and Groups in Chapter 5, Administration, of this document for steps to relocate a CST agent node to another hierarchy.
5. Collect the agent key from each agent. The agent key is the value inside /var/opt/SUNWcst/akey for each monitored agent.
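For example, you can read the key on each agent directly from the file named above; the sample value shown matches the examples that follow:
# cat /var/opt/SUNWcst/akey
123456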
cstattach [-h hierarchy] <agentname> <agentkey> [middleware]
Note - Hierarchy must be a relative path under the middleware repository where the agent directory and its data are maintained.
Assume the middleware hostname is cstserver, running in the eng.sun.com domain, and the agent being attached is cstnode1.eng.sun.com.
The agent key (/var/opt/SUNWcst/akey) on cstnode1 machine is 123456.
The following attach command can be used:
# cstattach cstnode1.eng.sun.com 123456
[repository_path]/cstnode1.eng.sun.com
Same as Example 1, but with a hierarchy path specified.
# cstattach -h department1/group2 cstnode1.eng.sun.com 123456
[repository_path]/department1/group2/cstnode1.eng.sun.com
Same as Example 1, but the agent is located in a different domain, germany.sun.com.
# cstattach -h europe/germany cstnode2.germany.sun.com 123456 cstserver.eng.sun.com
[repository_path]/europe/germany/cstnode2.germany.sun.com
Note - The [repository_path] is the location specified during middleware installation. This value is also stored under the ROOT_PATH key in /var/opt/SUNWcst/cst.pref on the CST middleware system.
See also the section on the cstattagt utility.
cstattagt - binds a CST agent to a CST middleware system
This utility provides similar functionality to the cstattach utility. The differences between this utility and cstattach are:
cstattagt is supported on Solaris 2.6, 7, 8 and 9.
Before you use cstattagt, please check the following:
1. The CST agent being attached must be up and running. It can take from a few seconds to 2 minutes for the attach procedure to complete.
2. The middleware that the agent will attach to must be up and running.
3. To attach a CST 3.0 agent, the middleware server must be running CST version 3.0 or later.
4. For a large number of monitored systems, plan the hierarchy tree in advance to avoid mistakes that would require moving an agent from one hierarchy to another.
See Chapter 5, Administration, of this document for details on how to relocate a CST agent node from one hierarchy to another.
cstattagt [-h hierarchy] <middleware>
The -h parameter is optional. If you have many CST agent systems, it is recommended that you use a hierarchy to organize the monitored systems and to improve UI loading performance.
Attach an agent to cstserver.eng.sun.com
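The original command for this example is not shown here. Based on the synopsis above, a minimal invocation run on the agent system would be (assuming the utility is installed in the default /opt/SUNWcstu/bin):
# /opt/SUNWcstu/bin/cstattagt cstserver.eng.sun.com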
Attach an agent with hierarchy defined
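A sketch of the corresponding command, derived from the synopsis and the resulting repository path shown below:
# /opt/SUNWcstu/bin/cstattagt -h europe/uk/eng cstserver.eng.sun.com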
This puts the agent in [cstserver_repository_path]/europe/uk/eng/agenthostname
cstfind - finds nodes with matching patterns.
cstfind [ -s key=value ] [ -h pattern ] [ -p package ] [ -x patch# ] [ -i inputfile ] [ -o outputfile ]
The cstfind utility searches for a pattern in the "probe.current" file under each input directory node and returns the list of matching directory nodes.
cstfind should be run only on a machine running the CST server.
The following options are supported for cstfind:
To list all the agent nodes configured under this CST server.
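A sketch of the command, assuming that running cstfind with no search options lists every node:
# cstfind -o inputfile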
The list of nodes is captured in the file inputfile. This file is used in the later examples.
To list all the nodes with OS version Solaris 2.8.
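A sketch of the command; the key name "OS Version" is an assumption about the field name in the System Information section:
# cstfind -s "OS Version=2.8" -i inputfile -o outputfile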
The file inputfile should contain all the directory nodes under which the search is to be done. The outputfile is created containing the list of nodes whose OS version matches "2.8" in the "System Information" section of the probe.current file.
To list all the nodes without a patch number ABCD1234.
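A sketch of the command, using the -x option from the synopsis:
# cstfind -x ABCD1234 -i inputfile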
Since the '-o' option is not specified, the list is printed on stdout.
To list all the nodes with number of CPUs equal to 12 and with CPU type sparc.
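A sketch of the command; the field names and the ability to repeat the -s option are assumptions:
# cstfind -s "Number of CPUs=12" -s "CPU Type=sparc" -i inputfile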
The following exit values are returned:
findevent 1.0 - CST Event finder
findevent scans all the nodes in the middleware repository, or in the specified hierarchy, for events of user interest. You can specify search criteria including:
You can also define multiple rules by creating a rule file. A rule file is useful for advanced event filtering. findevent also supports the "extend" format, which can be used as input to other CST utilities (for example, setcause, to globally set a cause code for shutdown events that match the specified period, weekdays, and time). Running this program as root is recommended, as this guarantees that the CST history files and cause code files can be accessed.
findevent is supported on Solaris 2.6, 7, 8, and 9. It should work with all existing CST versions including CST1.5_AMS, CST1.5.1_AMS, CST2.1_AMS, CST2.1U1_AMS, and CST 3.0.
Usage: ./findevent [-p PERIOD] [-w WEEKDAY(S)] [-t TIMESTART TIMEEND] [-h HIERARCHY] [-n NODENAME] [-f FORMAT] [-r RULEFILE] [-c] EVENT_TYPE(S)
EVENT_TYPE can be one or more of the following types:
This field is the only mandatory field. More than one event_type can be specified by separating them with spaces. There is a special event_type, "allboot", which prints all boot-related events.
PERIOD can be one of the following (Default is since INCEPTION):
start_date and end_date format: mm/dd/yyyy
WEEKDAY can be one or more of the following (Default is any day):
Two or more days can be specified with comma separation and without spaces.
For example, -w SAT,SUN searches for Saturday and Sunday.
TIMESTART and TIMEEND are in the 24-hour time format hh:mm:ss.
The default behavior is to search at any time of day.
FORMAT can be one of the following (Default is txt):
The extend format is also used as input to the setcause utility.
HIERARCHY is the relative hierarchy path under the CST middleware repository. Searching is applied only to the agents in the specified hierarchy.
By default, the program scans all hierarchies.
RULEFILE is useful when more than one set of rules is needed.
For example, to find events that happened during non-business hours (assuming 6pm-8am Mon-Fri and all day Sat-Sun), create a rule file with the following content:
The -c option indicates that the cause code text is also printed.
Note - The findevent utility can also import the node list from the cstfind utility.
Specify the -i INPUTFILE option, where INPUTFILE is the output from the cstfind utility.
1. Find any events that happened during this week. Type this command:
2. Find any PANIC, Shutdown, Reboot events that happened during this week. Type this command:
3. Find any PANIC, Shutdown, Reboot events that happened on Mon-Fri from 8am to 5pm since INCEPTION. Type this command:
4. Find all machine names that newly deployed CST this year. Type this command:
5. Find software upgrades, including CST software upgrades, that occurred on weekends during this year. Type this command:
6. Find any events generated by the app_event utility (not matching any category).
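The original commands for these examples are not reproduced here. The following sketches, based on the usage synopsis, show plausible invocations for examples 2 and 3; the PERIOD name THISWEEK and the event type names panic, shutdown, and reboot are assumptions (only allboot is documented above):
# ./findevent -p THISWEEK panic shutdown reboot
# ./findevent -w MON,TUE,WED,THU,FRI -t 08:00:00 17:00:00 panic shutdown reboot
The second sketch omits -p because the default period is since INCEPTION.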
cstlshost - lists selected fields of system information.
cstlshost [ -f field name ] [ -i inputfile ] [ -o outputfile]
The cstlshost utility retrieves the fields specified by the -f option from the system information section of the probe.current file, under all nodes or under the set of nodes listed in the file given by the -i option, and writes the result either to standard output or to the file specified by the -o option.
cstlshost should be run only on a machine running the CST server.
The following options are supported for cstlshost:
Specify a field name to be retrieved from the system information section of the probe.current file. The valid field names are:
For details, refer to the examples given at the end of this section.
Specify an input file, which contains a list of directory nodes. The probe.current file under the specified nodes is searched to retrieve the fields. The input file should contain the directory path under which the probe.current file resides.
For details, refer to the examples given at the end of this section.
Specify an output file, where the retrieved field information is stored.
If the -o option is not specified, the output defaults to stdout. For details, refer to the examples given at the end of this section.
Note - Multiple usage of the -f option is allowed. See the examples given in the next section for further details.
To list system information, such as the host name and host ID, of all the nodes configured under this CST server.
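A sketch of the command; the field names shown are hypothetical and must be replaced with valid field names from the options description above:
# cstlshost -f hostname -f hostid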
To list host name, serial number, system model, total disks, and hierarchy of all the nodes and output to a file.
The resulting outputfile contains:
In the output of cstlshost, fields in a record are separated by tab characters. For more aligned, formatted output, use the cstprint utility.
To list the hostname, MAC address, and IP address for a set of nodes specified in the input file.
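A sketch of the command, again with hypothetical field names:
# cstlshost -f hostname -f "mac address" -f "ip address" -i inputfile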
Since the -o option is not specified, the list is printed out on stdout.
The following exit values are returned:
The cstlshost utility requires case-sensitive, full field names.
setcause v1.0 - applies event reasons to CST outage events
The setcause utility can be used to apply outage reasons (cause codes) to certain outage events on a system, a group of systems, or all the systems in the middleware repository.
This utility is supported on the following middleware versions:
There are three usage patterns, described below:
This usage sets a cause code only for certain events on a certain node. HIERARCHY/NODENAME represents a relative path under the middleware repository where the node data is stored.
Note - See section "SET CAUSE CODE PROCEDURE (Usage 1)" for more information.
PERIOD can be one of the following:
start_date and end_date format: mm/dd/yyyy
RULEFILE contains a set of rules indicating the time ranges to which the cause code is applied. Each line in the rule file is in the following format:
Example: To set a cause code only for time periods outside business hours, e.g., anytime except Mon-Fri 8am-7pm and Sat-Sun 9am-4pm, the rule file would look like:
The four rules above can be translated as:
1. Obtain cause code triplet values (L1 L2 L3) by using setcause -c
2. Use the findevent utility to list all outage events, e.g., shutdown and panic.
3. Use setcause -N hierarchy/nodename -p event_number L1 L2 L3 for a selected outage.
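A sketch of this sequence; the hierarchy path, event number (12), and cause code triplet (3 1 4) are hypothetical values:
# ./setcause -c
# ./findevent -f extend shutdown panic
# ./setcause -N lab1/group1/cstnode1.eng.sun.com -p 12 3 1 4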
This usage can be useful for the following cases:
1. Obtain cause code triplet values (L1 L2 L3) by using setcause -c
2. Setting a cause code for a certain list of nodes requires an input file with the node list. To create an input file:
a. Change directories to the repository directory. Type:
b. Find the events to be included in the file. Type:
c. To remove some machines from the list, type:
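A sketch of steps a through c; the findevent flags and the use of a text editor to prune the list are assumptions:
# cd [repository_path]
# ./findevent -f extend -p THISMTH shutdown > inputfile
# vi inputfile    (delete the lines for machines to be excluded)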
3. Specify the time PERIOD in which the cause code will be applied. For example, the value THISMTH narrows the event range to events that happened within this month.
4. Create a rule file. See the RULE FILE section for more information. The rule file is used as a filter in which you can specify the time of day on specific weekday(s) when the cause code will be applied.
app_event - creates an application-specific CST event
app_event "event-type" ["comment"]
The app_event command is used to create an application-specific CST event, which is recorded in the CST history log file in the directory /var/opt/SUNWcst.
event-type is defined by the user to specify the characteristics of the event.
Note - The event-type operand must be enclosed in double quotes.
Recommended format: "<word list><space><verb>"
User comment about the application event.
Recommended format: <name>=<value>;<name>=<value>
It is good practice to avoid white space around the "=" token. Name-value pairs should be separated by the ";" delimiter. For example: Comment=<comment>;Cause Code=<cause code>;Application Type=<application type>
The following exit values are returned:
The following example adds a machine password change as a CST event to the CST history log.
# /opt/SUNWcstu/bin/app_event "password changed" "admin=xyz"
If you want to track password changes automatically, you can add the following script to your crontab file and run it at your desired interval, e.g., every hour or every day.
#!/bin/sh
# This script creates a CST application event of event type
# "Password Changed" by calling the CST command app_event.
# You need to add this script to your crontab file.
# Get the time stamp of the file /etc/passwd
ls -l /etc/passwd | awk '{print $6, $7, $8}' > /tmp/cst_app_passwd_new
if test -f /tmp/cst_app_passwd
then
# Compare the time stamps of the passwd file to see if there
# is any difference.
diff /tmp/cst_app_passwd /tmp/cst_app_passwd_new > /tmp/cst_result
result=`ls -l /tmp/cst_result | awk '{print $5}'`
if test "$result" -eq 0
then
echo "The password is not changed" > /tmp/cstpasswd
else
cp /tmp/cst_app_passwd_new /tmp/cst_app_passwd
# Call app_event to register the event
/opt/SUNWcstu/bin/app_event "Password Changed" "admin=xyz"
fi
rm /tmp/cst_app_passwd_new
rm /tmp/cst_result
else
# First run: record the current time stamp for later comparisons
mv /tmp/cst_app_passwd_new /tmp/cst_app_passwd
fi
# End of the script
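For example, to run the script every hour, a crontab entry like the following could be used (the script path /usr/local/bin/cst_passwd_check.sh is hypothetical):
0 * * * * /usr/local/bin/cst_passwd_check.sh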
addsunfire - configures CST to track a Sun Fire system controller.
The addsunfire command is provided as part of the CST middleware package and is to be run from the CST middleware system. The default location is /opt/SUNWcstu/bin.
The command is used to configure CST to track data from a SunFire system controller. On successful execution, the command updates the cst.conf file with the configuration information.
The command can be used to set up CST tracking for Sun Fire 3800, 4800, 4810, and 6800 systems.
addsunfire is an interactive command and accepts the inputs described below. Most of the input data can be gathered by running the showplatform command (with the -v option) on the Sun Fire system controller. Consult the Sun Fire system administration documentation for details on the showplatform and setupplatform commands.
The cst.conf file has a property specification defined for Sun Fire platform tracking.
SUNFIRE<tab><platform>|<sc-name>|<sc-hostid>|<snmp-port>|<snmp-community>|<hierarchy>|<polling interval>|4800
Assume the platform name of the Sun Fire system is sunfire1 and that the Sun Fire platform is configured with a logical hostname, sf1-sc.sun.com.
Platform name (not the hostname, no white-space): sunfire1
SC Hostname (if a virtual hostname is configured, then it must be provided, otherwise use the main SC hostname): sf1-sc.sun.com
SC (main) Hostid (check showplatform -v on the main SC): 8308fcd3
Is SNMP agent enabled on the SC ? [y/n]: y
SNMP agent port (check the SNMP settings on the SC): 161
SNMP agent community (SNMP public community): public-comm
Hierarchy (Relative to CST repository, consult CST docs/faq): lab1/sunfires
Polling Interval (in seconds) (min 900, recommended default [1800]): 1800
Setting polling interval to 1800s.
Testing. This could take up to 20 minutes. Please wait...
Activating the system controller tracking daemon.
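Based on the inputs above, the resulting cst.conf entry would look something like the following sketch (assembled from the property specification; the trailing 4800 field is assumed to be the platform model):
SUNFIRE<tab>sunfire1|sf1-sc.sun.com|8308fcd3|161|public-comm|lab1/sunfires|1800|4800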
The command returns 0 on success and a non-zero integer value on failure. The cst.conf file is updated only on completion of a successful probe.
cstdecomm - ends (EOLs) an agent's reporting to the CST server/AMS.
cstdecomm [-t "mm/dd/yyyy hh:mm:ss"] [-h hierarchy] <agentname>
The purpose of the cstdecomm utility is to notify AMS that no further data will be transmitted for this host and that the data for this system is stale. A system may be decommissioned from a server when it is no longer going to remain operational, or when it is being moved out of that environment and AMS should no longer consider it in availability calculations.
Note - Before you can execute this utility, you must uninstall the agent package on the system to be decommissioned.
This utility should be executed from the server where the repository resides and requires root permission for execution.
The utility is designed to do the following:
1. Create the following event in the history file on the server for that monitored host:
Tracking Ended <timestamp> WEF:<dd:mm:yyyy hh:mm:ss>
2. Move the directory containing the agent data on the server to:
Caution - Remove the SUNWcstu package on the agent before running the decommission utility for that agent on the CST server.
CST tracking of a monitored host is designed, and recommended, to continue until the end of life (EOL) of that system. Tracking should not be stopped even if the system is moved to a different environment, except for a specific short-term CST software upgrade or similar maintenance occurrence.
cstdecomm is supported on Solaris 2.6, 7, 8 and 9.
It is designed to work with CST 3.0. This utility does not work with versions below CST 3.0.
The -t parameter is optional.
./cstdecomm -t "05/12/2000 21:34:09" -h lab1/gp1 cstnode1.eng.sun.com
./cstdecomm -h lab1/group1 cstnode1.eng.sun.com
./cstdecomm -t "02/12/2000 12:23:03" cstnode2.eng.sun.com
./cstdecomm cstnode2.eng.sun.com
To decommission specific hosts of a multi-node/domain platform, use the utility described above on those specific hosts. Ending tracking for the entire platform requires ending tracking on all the domains as well as on the main and spare system controllers/service processors (SCs/SSPs). The decommission utility has to be run iteratively on each of the domains, SCs, and SSPs until all the directories are moved to:
Failing to run pkgrm SUNWcst on the agent machine before decommissioning it from the CST server leads to unpredictable results.
The middleware or the platform in multi-domain systems cannot be decommissioned.
Copyright © 2003, Sun Microsystems, Inc. All rights reserved.