Chapter 6

CST Utilities

Several command-line utilities are included in the SUNWcstu package. These utilities perform various administrative functions. Executing them requires root-level access on the system where the packages are installed.

This chapter describes the following utilities, each in its own section: cstattach, cstattagt, cstfind, findevent, cstlshost, setcause, app_event, addsunfire, and cstdecomm.


cstattach

Name

cstattach - binds a CST agent to a CST middleware system.

Description

This utility is part of the CST server installation and is to be run on the CST server system. The default location is:

/opt/SUNWcstu/bin

cstattach can be used to attach CST 3.5 agents that are installed and running in an unattached mode. The protocol to be used between the agent and the server can also be specified.

By attaching an agent to a CST server system, you can:

Supported Platforms

cstattach is supported on Solaris 2.6, 7, 8, and 9. It is designed to work with CST 3.5.

Checklist

Before you use cstattach, please check the following:

1. The agent being attached must be up and running when the attachment process is performed. Ensure that there is no scheduled downtime for the agent around the time you want to use this command. cstattach usually takes around 30 seconds per agent.

2. The middleware CST daemons that perform the attach process (cstd.svr and csthb.svr) must be up and running.



Note - When working with versions of CST prior to CST 3.5, the CST daemon names are cstd and csthb.



3. Make sure that the agent version is CST 1.5.1 or later.



Note - Older versions of CST do not support the TCP protocol. To use the TCP protocol between the agent and the server, both must be running CST 3.5. For details on the TCP/UDP protocols, see the CST Administration chapter, Network Protocols section.



4. For a large number of monitored systems, plan the hierarchy tree ahead of time to avoid mistakes that would require moving an agent from one hierarchy to another (the hierarchy is optional). See the CST Administration chapter, the To Move Agent Data to a Different Hierarchy section, for steps to relocate a CST agent node to another hierarchy.

5. Collect the agent key from each agent. The agent key is the value inside /var/opt/SUNWcst/akey for each monitored agent.
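The key collection in step 5 can be scripted. The sketch below is illustrative, not part of CST: the host list file is hypothetical, remote shell access to each agent is an assumption, and a placeholder key stands in for the real akey contents so the fragment is self-contained.

```shell
# Hypothetical list of monitored agents, one hostname per line.
cat > /tmp/agentlist <<'EOF'
cstnode1.eng.sun.com
cstnode2.eng.sun.com
EOF

# For each agent, print the attach command to run on the server.
# On a live network the commented line would fetch the real key over
# a remote shell; a placeholder key keeps the sketch self-contained.
while read agent; do
    # key=`ssh "$agent" cat /var/opt/SUNWcst/akey`
    key=123456
    echo "cstattach -a $agent -k $key"
done < /tmp/agentlist
```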

Usage

cstattach [-h hierarchy] -a <agentname> -k <agentkey> [-t transport] [-m middleware]



Note - Hierarchy must be a relative path under the middleware repository where the agent directory and its data are maintained.



Examples

Example 1

Assume the middleware hostname is cstserver, running in the eng.sun.com domain, and the agent being attached is cstnode1.eng.sun.com.

The agent key (/var/opt/SUNWcst/akey) on the cstnode1 machine is 123456.

The following attach command can be used:

# cstattach -a cstnode1.eng.sun.com -k 123456

This puts the agent data in:

[repository_path]/cstnode1.eng.sun.com

Example 2

Same as Example 1 with hierarchy path specified.

# cstattach -h department1/group2 -a cstnode1.eng.sun.com -k 123456

This puts the agent data in:

[repository_path]/department1/group2/cstnode1.eng.sun.com

Example 3

Same as Example 1 except that the agent is located in a different domain, germany.sun.com.

# cstattach -h europe/germany -a cstnode2.germany.sun.com -k 123456 -m cstserver.eng.sun.com

This locates the agent data in:

[repository_path]/europe/germany/cstnode2.germany.sun.com



Note - The [repository_path] is the location specified during middleware installation. This value is also stored under the ROOT_PATH key in /var/opt/SUNWcst/cst.pref on the CST middleware system.
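As a sketch, the repository path can be read from cst.pref with standard tools. A mock preferences file and path are used here so the fragment is self-contained; on a real middleware system the file is /var/opt/SUNWcst/cst.pref.

```shell
# Mock preferences file; on a real middleware system read
# /var/opt/SUNWcst/cst.pref instead.
cat > /tmp/cst.pref <<'EOF'
ROOT_PATH=/export/cstdata
EOF

# Extract the repository path ([repository_path] in the examples above).
repo=`grep '^ROOT_PATH' /tmp/cst.pref | cut -d= -f2`
echo "$repo"
```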



Example 4

Same as Example 1 except this example enables TCP transport between the CST server and the CST agent:

# cstattach -a cstnode1.eng.sun.com -k 123456 -t TCP



Note - The default protocol is UDP. Use this command with -t TCP to set up full-duplex TCP communication between the CST server and the agent. The -t setting is permanent and applies to all communications after the attach operation has completed. For information on CST 3.5 support for TCP and UDP, see the CST Administration chapter, the To Set Network Transport Between Agent and Server section.



See also the section on the utility, cstattagt.


cstattagt

Name

cstattagt - binds a CST agent to a CST middleware system

Description

This utility provides similar functionality to the cstattach utility. The differences between this utility and cstattach are:

Supported Platforms

cstattagt is supported on Solaris 2.6, 7, 8, and 9.

Checklist

Before you use cstattagt, please check the following:

1. The CST agent being attached must be up and running. It can take from a few seconds to 2 minutes for the attach procedure to complete.

2. The middleware that the agent will attach to must be up and running.

3. To attach a CST 3.5 agent, the middleware server must be running CST version 3.0 or later.

4. For a large number of monitored systems, plan the hierarchy tree ahead of time to avoid mistakes that would require moving an agent from one hierarchy to another.

See the CST Administration chapter, the To Move Agent Data to a Different Hierarchy section, for details on how to relocate CST agent data from one hierarchy to another.

Usage

cstattagt [-h hierarchy] [-t transport] <middleware>

Middleware is the hostname of the middleware system to which the agent attaches.

Transport is the transport-layer protocol to use, either TCP or UDP. UDP is the default.

Hierarchy is the relative path under the middleware repository where the agent directory and its data are maintained.

This parameter is optional. If you have many CST agent systems, use the hierarchy to organize the monitored systems and to improve UI loading performance.

Example: department1/group2

Example 1

Attach an agent to cstserver.eng.sun.com

cstattagt cstserver.eng.sun.com

Example 2

Attach an agent with hierarchy defined and specifying TCP protocol:

cstattagt -h europe/uk/eng -t TCP cstserver.eng.sun.com

This puts the agent in: [cstserver_repository_path]/europe/uk/eng/agenthostname

See Also

cstattach.


cstfind

Name

cstfind - find nodes with matching patterns.

Synopsis

cstfind [ -o outputfile ]

cstfind [ -s key=value ] [ -h pattern ] [ -p package ] [ -x patchnum ] [ -i inputfile ] [ -o outputfile ]

Description

The cstfind utility searches for a pattern in the "probe.current" file under each input directory node and returns the list of matching directory nodes.

cstfind should be run only on a machine running the CST server.

Options

The following options are supported for cstfind:

TABLE 6-1 cstfind Options

Option

Description

-s key=value

Specify a key and value to be searched for under the "System Information" section of the probe.current file. For details, refer to the examples given at the end of this section.

If the specified value starts with a '!', the nodes returned by cstfind are the ones that do not match the value pattern.

-h hardwareinfo

Specify a hardware type to be searched for under the "Installed Hardware" section of the probe.current file. For details, refer to the examples given at the end of this section. If the specified hardwareinfo starts with a '!', the nodes returned by cstfind are the ones that do not match the hardwareinfo pattern.

-p pkgname

Specify a software package to be searched for under the "Software Package(s)" section of the probe.current file.

For details, refer to the examples given at the end of this section. If the specified pkgname pattern starts with a '!', the nodes returned by cstfind are the ones that do not match the pkgname pattern.

-x patchnum

Specify a patch number to be searched for under the "Software Patch(es)" section of the probe.current file. For details, refer to the examples given at the end of this section. If the specified patchnum pattern starts with a '!', the nodes returned by cstfind are the ones that do not match the patchnum pattern.

-i inputfile

Specify an input file containing a list of directory nodes. The "probe.current" file under each specified node is searched for the matching pattern. The input file should contain the directory paths under which the "probe.current" files reside.

 

If the -i option is not specified and the system is used as a CST server, cstfind searches for the pattern in every "probe.current" file under ROOT_PATH. The ROOT_PATH variable is read from the /var/opt/SUNWcst/cst.pref file.

 

For details refer to the examples given at the end of this section.

-o outputfile

Specify an output file, where the list of directory nodes whose "probe.current" matches with the specified pattern will be stored.

 

If the -o option is not specified, the output defaults to stdout.

 

For details refer to the examples given at the end of this section.

 

The -s, -h, -p, and -x options can be combined, and the same option may be used multiple times. See the examples in the next section for further details.


Examples

Example 1

To list all the agent nodes configured under this CST server.

example% cstfind -o inputfile

The list of nodes is captured in the file inputfile; the later examples use this file.

Example 2

To list all the nodes with OS version Solaris 2.8.

example% cstfind -s os=2.8 -i inputfile -o outputfile

The file inputfile should contain all the directory nodes under which the search is to be done. The file outputfile is created containing the list of nodes whose probe.current "System Information" section matches OS version 2.8.

Example 3

To list all the nodes without a patch number ABCD1234.

example% cstfind -x !ABCD1234 -i inputfile

Since the '-o' option is not specified, the list is printed on stdout.

Example 4

To list all the nodes with number of CPUs equal to 12 and with CPU type sparc.

example% cstfind -s cpu=12 -s cpu=sparc -i inputfile
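Combining the filters described above, a hedged sketch of a single query follows. The option values, including the patch number, are illustrative, and since cstfind must run on the CST server, the command is only echoed here rather than executed:

```shell
# Solaris 8 sparc nodes that do NOT carry an (illustrative) patch number;
# the '!' prefix negates the match, as described under the -x option.
cmd="cstfind -s os=2.8 -s cpu=sparc -x '!105181-05' -i inputfile -o outputfile"
echo "$cmd"
```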

Exit Status

The following exit values are returned:

TABLE 6-2 Exit Values

Value  Definition

0      All nodes were traversed successfully.

1      An error occurred.




Note - cstfind uses a case-insensitive search algorithm.




findevent

Name

findevent 1.0 - CST Event finder

Description

findevent scans all the nodes in the middleware repository, or in the specified hierarchy, for events of user interest. The user can specify search criteria including:

TABLE 6-3 findevent Search Criteria Choices

Criteria       Rules

event type(s)  (hardware change, reboot, ...)

period         (last month, since inception, ...)

weekday(s)     (mon,tue,wed,thu,fri,sat,sun)

time           (time-of-day period, i.e., from 8am to 5pm)


The user can also define multiple rules by creating a rule file.

A rule file is useful for advanced event filtering. findevent also supports an "extend" format that can be used as input to other CST utilities, i.e., setcause, to globally set a cause code for shutdown events that match the specified period, weekdays, and time. Running this program as root is recommended, as this guarantees that the CST history files and cause code files can be accessed.

Supported Platforms

findevent is supported on Solaris 2.6, 7, 8, and 9. It should work with all existing CST versions including CST1.5_AMS, CST1.5.1_AMS, CST2.1_AMS, CST2.1U1_AMS, and CST 3.5.

Usage

Usage: ./findevent [-p PERIOD] [-w WEEKDAY(S)] [-t TIMESTART TIMEEND] [-h HIERARCHY] [-n NODENAME] [-f FORMAT] [-r RULEFILE] [-c] EVENT_TYPE(S)

EVENT_TYPE (Default is "any")

EVENT_TYPE can be one or more of the following types:

This field is the only mandatory field. More than one event_type can be specified by separating them with space. There is a special event_type, "allboot," which prints all boot-related events.

TABLE 6-4 Event Types

hardware      hardware change events
software      software change events
shutdown      system shutdown events
panic         system panic events
reboot        system reboot events
sysavail      system available events
allboot       shutdown, panic, reboot, and sysavail events
service       service events
domain        e10k domain change events
boardpwr      e10k board power on/off events
syscfgchange  e10k system configuration change events
cstpkgadd     CST package upgrade events
cstpkgrm      CST package remove events
cststart      CST manual start events
cststop       CST manual stop events
cstinception  CST inception event
custominfo    customer info update events
other         other events
any           any events


PERIOD

PERIOD can be one of the following (Default is since INCEPTION):

start_date and end_date format: mm/dd/yyyy

WEEKDAY

WEEKDAY can be one or more of the following (default is any day):

Two or more days can be specified, comma-separated and without spaces.

Example: -w SAT,SUN searches for Saturday and Sunday.

TIMESTART and TIMEEND

TIMESTART and TIMEEND are in 24-hour time format, hh:mm:ss.

By default, any time of day is searched.

FORMAT

FORMAT can be one of the following (Default is txt):

TABLE 6-5 Format Options

txt     plain text output
extend  prints the event number and the planned/unplanned value


The extend format is also used as input to the setcause utility.

HIERARCHY

HIERARCHY is the relative hierarchy path under the CST middleware repository. Searching applies only to the agents in the specified hierarchy.

Default: the program scans all hierarchies.

RULEFILE

RULEFILE is useful when more than one set of rules is needed.

For example, to find events that happen during non-business hours (assuming 6pm-8am Mon-Fri and all day Sat-Sun), create a rule file with the following content:

MON,TUE,WED,THU,FRI 00:00:00 08:00:00

MON,TUE,WED,THU,FRI 18:00:00 23:59:59

SAT,SUN 00:00:00 23:59:59
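The rule file above can be created with a here-document and passed to findevent through the -r option. The file path is illustrative, and since findevent must run on the CST server, its invocation is echoed rather than executed in this sketch:

```shell
# Build the non-business-hours rule file from the example above.
cat > /tmp/offhours.rules <<'EOF'
MON,TUE,WED,THU,FRI 00:00:00 08:00:00
MON,TUE,WED,THU,FRI 18:00:00 23:59:59
SAT,SUN 00:00:00 23:59:59
EOF

# On a CST server the file would then be passed to findevent:
echo "findevent -r /tmp/offhours.rules allboot"
```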

Option

The -c option indicates that the cause code text is also printed.



Note - The findevent utility can also import the node list from cstfind utility.



The user can specify the -i INPUTFILE option, where INPUTFILE is the output of the cstfind utility.

Usage Examples

1. Find any events that happened during this week. Type this command:

# findevent -p THISWK any

2. Find any panic, shutdown, or reboot events that happened during this week. Type this command:

# findevent -p THISWK allboot

3. Find any panic, shutdown, or reboot events that happened Mon-Fri from 8am to 5pm since INCEPTION. Type this command:

# findevent -w MON,TUE,WED,THU,FRI -t 08:00:00 17:00:00 allboot

4. Find the names of all machines that newly deployed CST this year. Type this command:

# findevent -p THISYR cstinception |grep -v " "| cut -d ":" -f 1

5. Find software upgrades, including CST software upgrades, that occurred on weekends this year. Type this command:

# findevent -p THISYR -w SAT,SUN software cstpkgadd

6. Find any events generated by the app_event utility (events not matching any category). Type this command:

# findevent other


cstlshost

Name

cstlshost - list selected fields of system information.

Synopsis

cstlshost [ -f field name ] [ -i inputfile ] [ -o outputfile]

Description

The cstlshost utility retrieves the fields specified by the -f option from the system information section of the probe.current file, under all nodes or under the set of nodes listed in the file given by the -i option, and writes the result either to standard output or to the file specified by the -o option.

cstlshost should be run only on a machine running the CST server.

Options

The following options are supported for cstlshost:

-f field name

Specify a field name to be retrieved from the system information section of the probe.current file. The valid field names are:

For details, refer to the examples given at the end of this section.

-i inputfile

Specify an input file, which contains a list of directory nodes. The probe.current file under the specified nodes is searched to retrieve the fields. The input file should contain the directory path under which the probe.current file resides.

For details, refer to the examples given at the end of this section.

-o outputfile

Specify an output file, where the retrieved field information is stored.

If the -o option is not specified, output defaults to stdout. For details, refer to the examples given at the end of this section.



Note - The -f option may be used multiple times. See the examples in the next section for further details.



Examples

Example 1

To list system information, such as the host name and host ID, of all the nodes configured under this CST server.

example% cstlshost -f "Host Name" -f "Host ID"

TABLE 6-6 Example 1 Output

Host Name  Host ID

Diag1      80f69438

Diag2      80342e20


Example 2

To list host name, serial number, system model, total disks, and hierarchy of all the nodes and output to a file.

example% cstlshost -f "Host Name" -f "Serial Number" -f "System Model" -f "Total Disks" -f "HIERARCHY" -o outputfile

The resulting outputfile contains:

TABLE 6-7 Example 2 Output

Host Name  Serial Number  System Model                                 Total Disks  Hierarchy

Diag1      2159449448     Ultra 60 UPA/PCI (2 X UltraSPARC II 450MHz)  1            NULL

Diag2      2155097632     SPARCserver 1000                             13           a/b/c/d


In the output of cstlshost, fields within a record are separated by tab characters. For more aligned, formatted output, use the cstprint utility.
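Because each record is tab-separated, standard awk can also produce aligned columns when cstprint is not at hand. The sketch below mimics Example 1's output with a mock data file (the file path and field widths are illustrative):

```shell
# Sample tab-separated cstlshost output (values from Example 1).
printf 'Diag1\t80f69438\nDiag2\t80342e20\n' > /tmp/hosts.tsv

# Re-print the two fields as fixed-width, aligned columns.
awk -F'\t' '{ printf "%-12s %s\n", $1, $2 }' /tmp/hosts.tsv
```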

Example 3

To list the host name, MAC address, and IP address for a set of nodes specified in the input file.

example% cstlshost -f "Host Name" -f "MAC Address" -f "Host Address(es)" -i inputfile

TABLE 6-8 Example 3 Output

Host Name  MAC Address      Host Address(es)

TestSys1   8:0:20:b8:95:68  129.16.203.14

TestSys2   8:0:20:67:2e:20  132.29.88.33


Since the -o option is not specified, the list is printed out on stdout.

Exit Status

The following exit values are returned:

TABLE 6-9 Exit Values

Value  Description

0      All nodes were traversed successfully.

1      An error occurred.


The cstlshost utility matches field names case-sensitively and requires full field names.


setcause

Name

setcause v1.0 - apply event reasons to CST outage events

Synopsis

The setcause utility can be used to apply outage reasons (cause codes) to certain outage events on a system, a group of systems, or all the systems in the middleware repository.

Supported CST Releases

This utility is supported on middleware:

Features

The patterns of usage are shown below:

Usage 1

./setcause -N HIERARCHY/NODENAME -p EVENT_NUM L1 L2 L3

This usage sets cause code for only certain events on a certain node. HIERARCHY/NODENAME represents a relative path under the middleware repository where the node data is stored.



Note - See section "SET CAUSE CODE PROCEDURE (Usage 1)" for more information.



Usage 2

./setcause [-i INPUTFILE or -G] -p PERIOD -r RULEFILE L1 L2 L3

-i INPUTFILE = set cause code for a list of nodes in INPUTFILE.

INPUTFILE can be created/edited from the setcause -h or -n output

-G = set cause code for all nodes (not recommended)

PERIOD

PERIOD can be one of the following:

start_date and end_date format: mm/dd/yyyy

RULEFILE

RULEFILE contains a set of rules indicating the time ranges to which the cause code is applied. Each line in the rule file has the following format:

<Day(s) of Week> <Time range in 24HRS format>

Example: To set the cause code only for time periods outside business hours, e.g., anytime except Mon-Fri 8am-7pm and Sat-Sun 9am-4pm, the rule file would look like:

MON,TUE,WED,THU,FRI 00:00:00 07:59:59

MON,TUE,WED,THU,FRI 19:00:01 23:59:59

SAT,SUN 00:00:00 08:59:59

SAT,SUN 16:00:01 23:59:59

The four rules above can be translated as:

Monday to Friday morning, from midnight to just before 8am

Monday to Friday evening, from 7pm to just before midnight

Saturday and Sunday morning, from midnight to just before 9am

Saturday and Sunday evening, from 4pm to just before midnight

Set Cause Code Procedure (Usage 1)

1. Obtain cause code triplet values (L1 L2 L3) by using setcause -c

2. Use the findevent utility to list all outage events, e.g., shutdown and panic.

3. Use setcause -N hierarchy/nodename -p event_number L1 L2 L3 for a selected outage.

Set Cause Code Procedure (Usage 2)

This usage is useful when a cause code must be applied across many nodes at once. The procedure is as follows:

1. Obtain cause code triplet values (L1 L2 L3) by using setcause -c

2. Setting the cause code for a specific list of nodes requires an input file with the node list. To create an input file:

a. Change directories to the repository directory. Type:

cd <RepositoryDirectoryName>

To discover the repository location, type:
grep ROOT_PATH /var/opt/SUNWcst/cst.pref

 

b. Find the systems to be included in the file. Type:

find . -type d |cut -b3- |grep -v Applications \

| grep -v CLUSTER_PLATFORM > mynodelist

 

c. Edit the file (e.g., mynodelist) as needed to remove any machines to which no cause codes are assigned.

d. Alternatively, to include every node without filtering, type:

find . -type d |cut -b3- > mynodelist

Optionally, you can sort the node list.

e. To remove some machines from the list, type:

vi mynodelist

3. Specify the time PERIOD for which the cause code will be applied. For example, the value THISMTH narrows the event range to events that happened within this month.

4. Create a rule file. See the RULEFILE section for more information. The rule file acts as a filter in which the user specifies the time of day on specific weekday(s) for which the cause code will be applied.
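The steps above can be strung together as a sketch. The node list contents, rule file name, period, and the cause code triplet "1 2 3" are all illustrative, and the setcause invocation is echoed rather than executed since it must run on the CST server:

```shell
# Hypothetical node list, one repository-relative path per line,
# as built in step 2 above.
cat > /tmp/mynodelist <<'EOF'
department1/group2/cstnode1.eng.sun.com
EOF

# Apply an (illustrative) cause code triplet to this month's matching
# events, filtered by a rule file named myrules (see RULEFILE above).
echo "setcause -i /tmp/mynodelist -p THISMTH -r myrules 1 2 3"
```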


app_event

Name

app_event - create application specific CST event

Synopsis

app_event "event-type" ["comment"]

Availability

SUNWcstu

Description

The app_event command creates an application-specific CST event, which is recorded in the CST history log file in the directory /var/opt/SUNWcst.

Operands

event-type

event-type is defined by the user to specify the characteristics of the event.



Note - The event-type operand must be enclosed in double quotes.



Recommended format: "<word list><space><verb>"

<word list> could be the name of the application or entity that the user can relate to, typically one whose availability measurement would be useful.

<verb> should be a verb, preferably in the past tense. Possible verbs are:

comment

User comment about the application event.

Recommended format: <name>=<value>;<name>=<value>

It is good practice to avoid white space around the "=" token. Separate name-value pairs with a ";" delimiter. For example: Comment=<comment>;Cause Code=<cause code>;Application Type=<application type>
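A minimal sketch of building a comment string in the recommended format follows; the names and values are illustrative, and the app_event invocation is echoed rather than executed since it must run on a CST agent:

```shell
# Assemble name=value pairs separated by ';' with no space around '='.
# The names and values here are illustrative.
comment="Comment=planned maintenance;Cause Code=1.2.3;Application Type=database"
echo "$comment"

# On an agent the string would then be passed to app_event:
echo "app_event \"database stopped\" \"$comment\""
```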

Exit Status

The following exit values are returned:

TABLE 6-10 Exit Status Values

 0   Successful operation

-1   Unable to resolve hostname

-2   UDP transport down on the system

-3   CST processes not running

-4   RPC timeout

-5   Command could not be completed successfully


Examples

The following example adds a machine password change as a CST event to the CST history log.

# /opt/SUNWcstu/bin/app_event "password changed" "admin=xyz"

If you want to track password changes automatically, you can add the following script to your crontab file and run it at the desired interval, e.g., every hour or every day.

 
        #!/bin/sh
        # This script is used for creating a cst application event of
        # event type string "Password Change" by calling cst command
        # app_event. You need to add this script to your crontab file.
 
        # Get the CST package name
        for i in SUNWcstu SUNWcst SUNWcstv SUNWcstve
        do
                pkginfo $i 2> /dev/null 1> /dev/null
                if test $? = 0
                then
                        packname=$i
                        break
                fi
        done
 
        # Get the installation base
        instbase=`pkginfo -r $packname`
 
        # Set the full path to app_event
        app_event=$instbase"/"$packname"/bin/app_event"
 
        # Get the time info of the file: /etc/passwd
        if test -f /tmp/cst_app_passwd
        then
                ls -l /etc/passwd | awk '{print $6, $7, $8}' > /tmp/cst_app_passwd_new
 
                # Compare the time stamp of the passwd file, to see if there
                # any difference.
                diff /tmp/cst_app_passwd /tmp/cst_app_passwd_new > /tmp/cst_result
                result=`ls -l /tmp/cst_result | awk '{print $5}'`
                if test "$result" != 0
                then
                        cp /tmp/cst_app_passwd_new /tmp/cst_app_passwd
                        # Call cst_app_event to register event
                        $app_event "Password Changed" "Changed by owner" 1> /dev/null
                fi
        else
                ls -l /etc/passwd | awk '{print $6, $7, $8}' > /tmp/cst_app_passwd
        fi
        rm -f /tmp/cst_app_passwd_new
        rm -f /tmp/cst_result
        # End of the script


Note - The app_event utility is not designed to handle high event volumes. Do not use this utility to generate periodic or high-volume events, as this may cause unnecessary traffic to the middleware server. app_event should be invoked on a per-event or per-occurrence basis to indicate changes to the system.



Files

/var/opt/SUNWcst/cst_history


addsunfire

Name

addsunfire - configures CST to track a Sun Fire system controller.

Description

The addsunfire command is provided as part of the CST middleware package and is to be run from the CST middleware system. The default location is:

/opt/SUNWcstu/bin

The command is used to configure CST to track data from a SunFire system controller. On successful execution, the command updates the cst.conf file with the configuration information.

Supported Sun Fire Platforms

The command can be used to set up CST tracking for Sun Fire 3800, 4800, 4810, and 6800 systems.

Inputs

addsunfire is an interactive command and accepts the inputs described below. Most of the data for the inputs can be gathered by running the showplatform (-v option) command on the Sun Fire system controller. Consult the Sun Fire system administration documentation for details on showplatform and setupplatform commands.

TABLE 6-11 Inputs

Input

Description

Platform Name

This is the name for the complete platform. It is recommended that a single word be used, with no special characters or white space. Usually it coincides with the "System Description" as seen on the Sun Fire system controller.

SC Hostname

This is the network nodename of the MAIN system controller board. If a logical hostname is configured for the system controller, then that should be used instead of the MAIN system controller hostname.

 

Do not use a SPARE system controller hostname.

SC Hostid

This is the MAIN system controller hostid. It is recommended that the hostid of SSC0 be used. The hostid can be obtained by running the showplatform (-v option) command.

 

It is very important that the correct hostid be used or data corruption can occur.

SNMP Agent Enabled

This is a question posed to the user. It is expected that the user has verified that SNMP is enabled on the system controller and that all the SNMP settings are correct before answering this query with 'y'. Any other input causes the command to exit. If SNMP is not set up on the system controller, use the setupplatform command on the system controller.

SNMP Agent Port

This is the port number to which all SNMP requests should be sent. It is usually 161.

SNMP agent community

This is the public community string from the SNMP settings. If this value is inaccurate, all SNMP queries to the system controller will fail.

Hierarchy

This is the user-defined part of the hierarchy path (relative to the CST middleware repository) where the agent data is maintained. CST appends the Platform Name to the defined hierarchy path and maintains all the data files under it.

 

The utility does not validate hierarchy paths. It is expected that the input provided is accurate.

Polling Interval

This is the time interval (in seconds) between two successive CST probes. The minimum interval is 900s. The recommended (default) value is 1800s. The maximum is 86400s (once a day).




Note - The system controller data is recorded in the CST repository, on the CST server system, under the hierarchy:
<user-specified Hierarchy>/<user-specified platform name>

This path must be unique for each system controller, otherwise data corruption could result. It is recommended that you use a valid, non-generic hierarchy path and platform name when you add tracking for a system controller using the addsunfire command.



Format

The cst.conf file has a property specification defined for Sun Fire platform tracking.

 

SUNFIRE<tab><platform>|<sc-name>|<sc-hostid>|<snmp-port>|<snmp-community>|<hierarchy>|<polling interval>|4800
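As a hedged illustration, the inputs from Example 1 below would yield an entry along these lines. The real line is written by addsunfire on a successful probe, and the field values are site-specific:

```
SUNFIRE<tab>sunfire1|sf1-sc.sun.com|8308fcd3|161|public-comm|lab1/sunfires|1800|4800
```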

Usage Examples

Example 1

Assume the platform name of the Sun Fire system is sunfire1 and that the Sun Fire platform is configured with a logical hostname, sf1-sc.sun.com:

 

Platform name (not the hostname, no white-space): sunfire1
SC Hostname (if a virtual hostname is configured, then it must be provided, otherwise use the main SC hostname): sf1-sc.sun.com
SC (main) Hostid (check showplatform -v on the main SC): 8308fcd3
Is SNMP agent enabled on the SC ? [y/n]: y
SNMP agent port (check SNMP settings the SC): 161
SNMP agent community (SNMP public community): public-comm
Hierarchy (Relative to CST repository, consult CST docs/faq): lab1/sunfires
Polling Interval (in seconds) (min 900, recommended default [1800]): 1800
Setting polling interval to 1800s.
Testing. This could take upto 20 minutes. Please wait...
Activating the system controller tracking daemon.

Exit Status

The command returns 0 on success and a nonzero integer on failure. The cst.conf file is updated only on completion of a successful probe.


cstdecomm

Name

cstdecomm - end-of-life (EOL) an agent from reporting to the CST server/AMS.

Synopsis

cstdecomm [-t "mm/dd/yyyy hh:mm:ss"] [-h hierarchy] <agentname>

Description

The cstdecomm utility notifies AMS that no further data for a host will be transmitted and that the data for that system is stale. A system may be decommissioned from a server when it will no longer remain operational, or when it is being moved out of that environment and AMS should no longer consider it in availability calculations.



Note - Before you can execute this utility, you must uninstall the agent package on the system to be decommissioned.



This utility should be executed from the server where the repository resides and requires root permission for execution.

The utility is designed to do the following:

1. Create the following event in the history file on the server for that monitored host:

Tracking Ended <timestamp> WEF:<dd:mm:yyyy hh:mm:ss>

2. Move the directory containing the agent data on the server to:

<cst db_root>/DECOMMISSIONED_AGENTS




Caution - Remove the CST agent package (SUNWcst) on the agent before running the decommission utility for that agent on the CST server.



CST tracking of a monitored host is designed, and recommended, to continue until the end of life (EOL) of that system. It should not be stopped even if the system is moved to a different environment, except for a specific short-term CST software upgrade or similar maintenance occurrence.

Parameters

-t mm/dd/yyyy hh:mm:ss

The date-time stamp is optional and can be supplied if the agent is already non-operational.

-h hierarchy

The relative path after the repository that was created to store the agent information. This option must be omitted if there is no hierarchy available.

agentname is the name of the agent as stored in the repository. This is a mandatory parameter, and agentname cannot refer to a CST server.

Supported Platforms

cstdecomm is supported on Solaris 2.6, 7, 8 and 9.

It is designed to work with CST 3.5. This utility does not work with versions below CST 3.0.

Examples

The -t parameter is optional.

Nodes with Hierarchy

Example 1

./cstdecomm -t "05/12/2000 21:34:09" -h lab1/gp1 cstnode1.eng.sun.com

Example 2

./cstdecomm -h lab1/group1 cstnode1.eng.sun.com

Nodes with no Hierarchy

Example 1

./cstdecomm -t "02/12/2000 12:23:03" cstnode2.eng.sun.com

Example 2

./cstdecomm cstnode2.eng.sun.com

Notes

Multi-domain/Node Systems

To decommission specific hosts of a multi-node/domain platform, use the utility described above for those specific hosts. Ending tracking for the entire platform requires ending tracking on all the domains as well as the main and spare system controllers/service processors (SCs/SSPs). The decommission utility must be used iteratively on each of the domains, SCs, and SSPs until all the directories are moved to:

<cst root_path>/DECOMMISSIONED_AGENTS
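The iteration over domains and controllers can be sketched as a loop. The hostnames, node list file, and hierarchy path are hypothetical, and the cstdecomm invocations are echoed rather than executed since they must run on the CST server:

```shell
# Hypothetical domains and system controllers of one platform.
cat > /tmp/platform_nodes <<'EOF'
domainA.eng.sun.com
domainB.eng.sun.com
sf1-sc.eng.sun.com
EOF

# Run the decommission utility once per node of the platform.
while read node; do
    echo "cstdecomm -h lab1/sunfire1 $node"
done < /tmp/platform_nodes
```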

Examples of Usage

Not running pkgrm SUNWcst on the agent machine before decommissioning it from the CST server leads to unpredictable results.

The middleware or the platform in multi-domain systems cannot be decommissioned.