Troubleshooting an Issue
Evaluating System Status
A good first diagnostic step is to run the Unix top command or equivalent (topas on AIX; prstat on Solaris). This displays information such as which processes are running, current memory usage, and free memory.
Examining Log Files
Log files are the best tools for tracking down the source of a problem because very seldom does something crash or behave strangely without an entry being logged. Before reporting an issue to Oracle Customer Support, it is important to review log files for critical information that may help Oracle Customer Support solve your problem.
There are several logs that are especially useful for troubleshooting issues with NMS implementations; these include services logs and PID logs. The sections that follow describe important troubleshooting logs.
Oracle Utilities Network Management System Log Files
Application log files are located in the directory specified by the NMS_LOG_DIR environment variable, which is defined in the .nmsrc file.
Note: By default, NMS_LOG_DIR is set to $NMS_HOME/logs.
There will be one log file in this directory for each actively running service.
After a process has been stopped and restarted, the old log file for that particular process is moved to the old_log subdirectory within the NMS_LOG_DIR directory.
After the number of days specified in $NMS_DAYS_TO_LOG, old log files for a given process in the $NMS_LOG_DIR/old_log directory will be purged on the next attempt to start that process. The default for NMS_DAYS_TO_LOG is 7 (days). Thus, old logs will only be retained for 1 week by default.
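The retention behavior can be approximated with a standard find invocation. This is only a sketch of the idea (assuming GNU coreutils and findutils); the actual purge is performed at process startup, and the directory below is fabricated for the demo:

```shell
# Sketch: delete logs older than $NMS_DAYS_TO_LOG days (7 by default)
# from an old_log-style directory. Demo directory and files are fabricated.
days=${NMS_DAYS_TO_LOG:-7}
mkdir -p old_log_demo
touch -d "10 days ago" old_log_demo/DBService.stale.log   # older than the cutoff
touch old_log_demo/DBService.fresh.log                    # recent, should survive
# -mtime +7 matches files more than 7 days old
find old_log_demo -name '*.log' -mtime +"$days" -delete
ls old_log_demo
```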
Service Logs
Looking for DBService errors is a common starting place in determining if the problem is a database issue or a services issue. DBService errors can appear in DBService, TCDBService, and MBDBService or some other *DBService, depending upon which service is having a problem interacting with the database.
If a particular service cores, Customer Support will want to know if the service has any error messages in the log file right before it failed. The most relevant portion of the log is the text concerning what happened right before the dump. Often, there are important messages explaining why the service exited.
Another key service log is the SMService log. This log records if/when SMService attempts to restart other services.
 
Oracle Utilities Network Management System Log File Naming Conventions
Within the log directory, the following naming conventions apply:
There is one log file for each Service actively executing on the server. Service logs are named [Service Name].[date].[time].log. Example log files would be:
DBService.20100528.111721.log
DDService.20100528.111800.log
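Given this convention, a log file name can be split into its parts with ordinary shell parameter expansion; the file name below is taken from the example above:

```shell
# Split a service log name of the form [Service Name].[date].[time].log
log="DDService.20100528.111800.log"
service=${log%%.*}                                 # strip everything after the first dot
rest=${log#*.}                                     # 20100528.111800.log
date_part=${rest%%.*}                              # 20100528
time_part=${rest#*.}; time_part=${time_part%%.*}   # 111800
echo "$service $date_part $time_part"
```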
Trimming and Archiving Oracle Utilities Network Management System Application Log Files
As log files grow, they generally need to be removed or archived. When determining the maximum size and content of log files, consider your company's needs:
If accounting files need to be kept for an audit, a larger log file is justifiable. Backups of those files might even be in order.
After the number of days specified in $NMS_DAYS_TO_LOG (environment variable), the old log files for a given process in the $NMS_LOG_DIR/old_log directory will be purged on the next attempt to start that process. The default for $NMS_DAYS_TO_LOG is 7 (days). Thus, old logs will only be retained for 1 week by default.
Issues like these should be carefully assessed, and you should develop a policy around your company's specific needs.
PID Logs
PID logs are files with an integer value suffixed by .log. PID logs are generated in one of two ways.
The cmd snapshot command. This creates PID logs for all Isis processes currently running, whether they are services or tools. They appear in the following locations:
Services will appear in the $NMS_LOG_DIR/run.<service> directory of the user that starts services.
Tools will appear in the directory where the tool was started (typically the user's HOME directory; a tool started from the command line logs to the directory it was started from).
kill -usr2 <pid>. This will not kill the tool. Rather, it will send a signal to the process that will create a [pid].log for that PID.
Note: You can do this multiple times; as long as the process continues to run, additional dumps are appended to the same log file rather than removing or replacing earlier ones. Customer Support recommends removing these logs once the investigation of an issue is complete.
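The append-on-signal pattern can be demonstrated with a plain shell process standing in for an NMS tool; the trap, log file name, and loop below are fabricated for the demo and only mimic the behavior described above:

```shell
# A stand-in "tool": appends a line to usr2_demo.log whenever it receives USR2,
# and keeps running afterward (as NMS tools do on kill -usr2).
rm -f usr2_demo.log
sh -c 'trap "echo \"dump requested\" >> usr2_demo.log" USR2; i=0; while [ "$i" -lt 10 ]; do sleep 1; i=$((i+1)); done' &
pid=$!
sleep 1                    # give the trap time to be installed
kill -USR2 "$pid"          # request a dump; the process keeps running
sleep 2                    # the trap fires after the current sleep finishes
kill "$pid" 2>/dev/null    # done with the demo process
wait "$pid" 2>/dev/null || true
cat usr2_demo.log
```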
Java Application Server Log Files
The WebLogic server log files are written to the following location:
MW_HOME/user_projects/domains/DOMAIN_NAME/servers/SERVER_NAME/logs
where:
MW_HOME: Oracle WebLogic Server installation directory.
DOMAIN_NAME: WebLogic domain name used for Oracle Utilities Network Management System.
SERVER_NAME: WebLogic server name used for Oracle Utilities Network Management System.
Using EJB_STATISTICS
EJB_STATISTICS adds timing entries to the standard log for each EJB call that takes longer than the configured threshold (two seconds by default).
To enable EJB_STATISTICS logging, run:
Action any.publisher* ejb debug EJB_STATISTICS=1
Alternatively, EJB_STATISTICS can be added to nms-log4j.xml, by adding this section:
<Logger name="EJB_STATISTICS" level="debug" additivity="false">
<AppenderRef ref="Console"/>
</Logger>
To change the threshold from the default of 2 seconds, modify this setting in CentricityServer.properties:
log_requests_over_ms = 2000
Java Client Application Logs
Java client applications generate a log file each time the application is started. The log files are named according to the following convention:
[server name]_[application]_[server date]_[server time].log
For example, xyz.opal.com_WebWorkspace_20200603_1106899.log.
The log files are saved to the following locations:
Microsoft Windows 7/10: C:\Users\[user]\AppData\Local\Temp\OracleNMS
Linux: /tmp/[user]/OracleNMS/
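As a quick way to locate the most recent client log, the newest file in the per-user directory can be listed; the Linux path follows the locations above, and the directory and file here are created just for the demo:

```shell
# Sketch: newest client log first (ls -t sorts by modification time).
# The directory layout and file name are fabricated for the demo.
dir="/tmp/${USER:-demo}/OracleNMS"
mkdir -p "$dir"
touch "$dir/host_WebWorkspace_20200603_110659.log"
ls -t "$dir" | head -n 1
```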
 
In addition, the Java Web Start applications log to the Java console. If you want to see the log messages in real time, enable the Java console.
For example, in Microsoft Windows, enable the console by doing the following:
1. Open the Windows Control Panel.
2. Open the Java Control Panel by double-clicking on the Java icon.
3. Select the Advanced tab.
4. Set Java console parameter to ‘Show console’ (Java console will be started maximized) or ‘Hide console’ (Java console will be started minimized).
Error Codes
NMS utilizes error codes to help implementers and support personnel diagnose issues with the system. Error codes are not intended for the end user; they represent an issue with the software or the project's configuration. Error codes are displayed in the client and/or server log in the format [ModuleName]‑[ErrorCode], where [ModuleName] represents the NMS module in context (for example, WSW for Web Switching, JBOT for JBot configuration) and [ErrorCode] uniquely identifies the error within that module.
If an error code is identified, further details for the error code can be found on the NMS Errors portal page. The portal page will be installed at the following location:
http://[WebLogic Host]:[port]/nms/errordocs/index.html
In some cases the error code will produce an Error Code error dialog with a short description for the error code. The dialog will also include buttons that allow you to review additional details about the code or to report the issue. The Report Issue... button will gather the environment's logs and initiate your default email client with the logs attached. The Details... button will bring up the extended details for the error code in your default web browser.
Error code short descriptions, which show up on the Error Code error dialog, can be overridden by adding an entry for the code in the MessageCode_en_US.properties file. Here is an example:
OmsErrorCodeException.TEST_EXAMPLE-1000 = This is a test message with arguments [{0}] [{1}].
This text will be displayed in the Error Code error dialog, but will not replace the message displayed in the log file. The message override option should not be necessary. However, if a particular issue were to be introduced with a patch that results in the error code being displayed on a regular basis, the project may want to override the message to give its users further details on how to deal with the issue.
The override option also allows the error code descriptions to be translated to another language in case the need arises.
Performing a Java Stack Dump
If a Java application hangs, it is usually necessary to provide Oracle Support with a stack dump of the application to debug the issue. To add the stack dump to the log file, press CTRL+ALT+D.
Note: If there are multiple applications open, a stack dump will be added for each.
Emailing the Log File
To email the log file, either select the environment’s Email Log Files... from the environment Help menu or press CTRL+ALT+M, which will create a new email message (using your default email client) with the log file attached; the email can then be addressed and sent.
Notes:
If there are multiple applications open, separate log files will be attached for each; any unneeded logs can be deleted from the email prior to sending it.
If for any reason the NMS environment is hung, the Help menu option will be unavailable, but the CTRL+ALT+M sequence will continue to function. All NMS operators are strongly advised to learn and use this feature in case they encounter a hung environment; Oracle Customer Support will require these logs when a service request is created.
If the menu option is used, only the log file for the current application will be attached. If the CTRL+ALT+M hotkey is used, the log files for all current applications, plus NMSMonitor, will be attached.
Isis Log Files
There are two types of isis log files:
The isis startup log tracks configuration information along with other notable information that occurs before protos is completely started. The nms-isis start program starts isis (isis in turn starts protos) using the nohup command, which makes protos immune to hang-ups, such as exiting the terminal after starting isis. The startup log is called isis.yyyymmdd.HHmmss.log and can be found in $NMS_LOG_DIR/run.yyyymmdd.HHmmss.log. If you cannot start isis, check this log.
The protos log contains log information for the running protos process. This file is site-specific, and the name is based on the site number of the machine on which protos is running. The log for the protos process can be found in $NMS_LOG_DIR/run.isis/[site #].logdir/[site #]_protos.[date].[time].log.
When isis is restarted, the old log files will be archived into the $NMS_LOG_DIR/run.isis/[site#].logdir/old_log directory. They will be automatically removed after the number of days specified by $NMS_DAYS_TO_LOG if/when isis is restarted.
Oracle RDBMS Log Files
Many times, an error in an application log file points to some sort of database problem. For example, DBService may log that at a certain time the database was unavailable to answer queries. Look in the database logs to find the answer; these logs can alert you to problems with the RDBMS configuration, software, and operations. Other DBService instances (TCDBService, PFDBService, MBDBService) may also be configured and running; each should be reviewed for errors.
Refer to the Oracle RDBMS documentation for locations and instructions for viewing Oracle RDBMS logs.
Operating System Log Files
Another place to look for problems is in the operating system logs. Refer to the operating system documentation for locations and instructions for viewing operating system logs (generally various forms of syslog, such as /var/log/messages for Linux).
It is generally recommended that syslog be turned on for a production system. In particular, Oracle Utilities Network Management System uses syslog to track fatal errors and to log the start/stop time of every Oracle Utilities Network Management System-specific isis process.
Entries like the following can be useful when trying to track down which application binary a particular Unix process ID belongs to:
May 30 12:47:57 msp-pelin01 CES::corbagateway[26346]: my_address = (2/7:26346.0)
May 30 12:48:00 msp-pelin01 CES::corbagateway[26346]: **INFO*** [corbagateway-26346] for [msp-pelin01] exiting....
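For example, the binary name can be pulled out of such entries with grep and sed; the sample lines are the ones shown above, and the pattern assumes the CES::[binary][pid] layout they exhibit:

```shell
# Sample syslog lines (from the document) written to a scratch file.
cat > syslog_sample.txt <<'EOF'
May 30 12:47:57 msp-pelin01 CES::corbagateway[26346]: my_address = (2/7:26346.0)
May 30 12:48:00 msp-pelin01 CES::corbagateway[26346]: **INFO*** [corbagateway-26346] for [msp-pelin01] exiting....
EOF
# Map a Unix PID to the NMS binary that logged it.
pid=26346
grep "\[$pid\]" syslog_sample.txt \
  | sed -n "s/.* CES::\([^[]*\)\[$pid\].*/\1/p" \
  | sort -u
```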
Using the Action Command to Start a New Log File
The Action command can also be used to start a new log file without stopping anything. This can be very useful for isolating a portion of the log file when recreating a problem. The command is:
Action any.<NMS_ISIS_process_name> relog
For example: Action any.JMService relog
The Action command can also be used to turn debug on and off for services or tools. This can also be used with the relog feature to better isolate debug for a particular user scenario.
The following command will turn debug on:
Action any.<service> debug 1
The following command will turn debug off:
Action any.<service> debug 0
Each Oracle Utilities Network Management System isis (daemon/service/adapter) process typically supports facility specific debug that can be enabled to help track down issues with the facility in question. In general, you must consult with Oracle Support to get details on what facilities are currently available and what level to set them to for a given situation.
Examining Core Files
On Unix, if a process has either committed an error or over-taxed the system resources, the operating system will kill it rather than letting it take down the operating system. When this happens, the operating system dumps the contents of the memory occupied by the process into a file named "core." These files can sometimes be analyzed to better understand the reason for the failure.
Unix operating systems have a per-user mechanism (ulimit -c) that determines how much disk space a core file can use. If this value is set to 0 or too low, the operating system may not be able to generate a useful core file. It is recommended that production NMS systems run with "ulimit -c" set to a sufficiently high value (typically "unlimited"). Note that it is left to NMS end users to monitor and manage any disk space used by core files. Core files typically have a very short shelf life of value (often a few days, while any analysis happens), after which they can be removed.
Normally, you should question the production of a core file to see if there are any extraneous reasons why the OS dumped the process. If you do not find anything, retrieve the core file and analyze it.
Note: see the Pre-Installation chapter of the Oracle Utilities Network Management System Installation Guide for information on core file naming conventions.
Core files are located in the NMS_LOG_DIR/run.[service] directory in the username that started services, or in the directory where a tool was started (usually the home directory of the user).
After performing a kill -USR2 on a hung process, it can be useful to follow with
kill -abrt [pid]
This will cause the process to dump core, terminating the process.
Note: Always use kill -USR2 before kill -abrt because the ‑abrt option terminates the process. Make sure it is okay to terminate the process before attempting kill -abrt.
The file core command will generally (depending on the operating system involved) identify which process generated the core. Later core files can overwrite earlier core files; renaming the core file to something like core.[process] can prevent this.
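A minimal sketch of that inspect-and-rename workflow follows; the "core" here is a stand-in text file, so file will report plain text rather than a real core format, and the JMService suffix is just an example:

```shell
# Fabricate a stand-in core file for the demo.
printf 'stand-in for a core dump\n' > core
# On a real core file, `file` names the process that generated it.
if command -v file >/dev/null; then file core; fi
# Rename with the process name and date so a later core cannot overwrite it.
mv core "core.JMService.$(date +%Y%m%d)"
ls core.JMService.*
```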
SMService can be set up to automatically find, rename, and consolidate core files into a single directory ($NMS_LOG_DIR/SavedCores by default). You can change what happens to core files captured by SMService by modifying the sms‑core‑save script.
When a tool or service cores, the investigation is helped by including the stack trace in the incident report. A stack trace can be generated using the dbx (Solaris, AIX) or gdb (Linux) tool. The syntax is as follows:
Solaris:
dbx [path to binary directory] [path to corefile]
Linux:
gdb [path to binary directory] [path to corefile]
For example:
dbx $NMS_BASE/bin/JMService ~/run.JMService/core
Press the space bar until you get a prompt and then enter the following commands:
Solaris:
where
threads
dump
regs
quit
 
AIX:
where
thread
dump
registers
quit
Linux:
where
info threads
info locals
info all-reg
thread apply all where
Include the results of these commands in your incident report.
Searching for Core Files
To search for core files, complete these steps:
1. Search for core files with the find command:
$ find . -name "core*" -exec ls -l {} \;
Expected result:
-rw------- 1 ces users 32216692 Oct 15 16:05 ./core
This executes an "ls -l" on any files found in the tree starting from the current working directory. This should be done from the $NMS_HOME directory and (if it differs from $NMS_HOME) the $HOME directory.
If a service cores, the core file can be found in the $NMS_LOG_DIR/SavedCores or (if SMService failed or is not configured with a CoreScript to detect and/or move the core file) the $NMS_LOG_DIR/run.[service] directory. Note that SMService will rename a service core file to [hostname]-[service]‑[date].[time].core to minimize the chance of core files overwriting each other.
2. Type the following to determine where a core file came from:
$ file ./core
Below is a sample result from an AIX server:
core: AIX core file fulldump 64-bit, JMService - received SIGBUS
The core file referenced above is the result of a JMService core dump. The output gives:
the file name (which is always "core"),
which program/process the file came from (JMService), and
optionally, the message that the program received from the OS (SIGBUS).
3. Generally, the most useful thing you can do is to identify the core stack trace: the specific functions that were called (in order) leading up to the violation that caused the operating system to generate the core file. The stack trace is often a useful piece of information that, if available, should be captured for later analysis. Details on navigating a core trace can be found later in this document.
4. Use the strings command to get some more information out of the file, if possible. Type:
$ strings core | head
Sometimes the messages returned, such as "Out of memory" or "I/O error," give an idea of what might have happened.
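The search from step 1 can be exercised end to end against a fabricated tree; note the quoted pattern, which keeps the shell from expanding core* before find sees it:

```shell
# Fabricate a run.[service] directory containing a core file for the demo.
mkdir -p scratch/run.JMService
printf 'x' > scratch/run.JMService/core
# Quoting "core*" prevents the shell from globbing it in the current directory.
find scratch -name 'core*' -exec ls -l {} \;
```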
Identifying Memory Leaks with monitor‑ps‑sizes
The monitor‑ps‑sizes script monitors the size of processes to identify potential leaks. It performs periodic snapshots of all running processes and warns the user of any processes that have grown greater than the specified size. It supports the following command-line options:
-A: Log the command's output.
-a <command>: Command to run on a process when generating a warning. You can pass the program's name and/or PID via #PROGRAM# and #PID#.
-f <list of user names>: Monitor processes for this comma-separated list of users.
-G <number>: A warning about a process is guaranteed to be generated if the process exceeds this size. Default: 40000 (units reported by ps).
-g <number>: The growth factor that triggers a report. Default: 1.75 (floating-point numbers greater than 1 are valid).
-l <line number>: The line number that specifies the stable size in the process-size log file. Default: 3 (line numbers begin counting with 1).
-n <program names>: A comma-separated list of program names to monitor.
-O <number>: The maximum number of seconds to retain log files. Default: 172800 (seconds). If 0, old log files are not erased.
-P <number>: The minimum number of seconds to wait between warnings. Default: 0 (seconds).
-p <number>: The number of seconds to wait between snapshots. Default: 3600 (seconds).
-R <number>: The minimum process size that can be reported. Default: 5000 (units reported by ps).
-s <email subject line>: The subject line used to title email warnings about processes that are too big. Default: "process size warning for prod_model".
-u <email names>: A comma-separated list of users to email when there are process warnings. Default: no email sent.
For example, to monitor JMService and MTService for user "nms" when either gets larger than 500 MB or grows by 10 percent, use:
monitor‑ps‑sizes ‑n MTService,JMService -f nms ‑R 500000 -g 1.1
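The essence of what the script checks can be sketched with ps and awk; this is not the actual monitor‑ps‑sizes implementation, just a one-shot filter using the same RSS threshold as the -R example above:

```shell
# One-shot sketch: report processes whose resident size (RSS, in KB as
# reported by ps) exceeds a threshold. 500000 KB mirrors the -R example.
threshold=500000
# pid=, comm=, rss= suppress the header line so awk sees only data rows.
ps -eo pid=,comm=,rss= | awk -v max="$threshold" '$3 > max { print $1, $2, $3 }'
```

A real monitor would run this periodically and compare against a baseline snapshot, which is what the -g growth factor controls.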
Validating the WebLogic Caches with NMS Services
The NMS application deployed to WebLogic (nmsejb.ear) contains various caches that are used to lessen the load on the NMS services. Normally, they are kept in sync automatically; however, in certain circumstances it is desirable to force the system to refresh these caches.
The general command is:
Action any.publisher* ejb refresh
This command causes the system to reload the configuration and forces the client to re-request all data that it is currently displaying. This puts significant load on the system, so it should only be done when necessary in a production environment.
The following commands do not put much load on the server, so they are safer to call in a production environment:
If only the viewer symbology needs to be reloaded, use this command:
Action any.publisher* ejb reload_symbology
To validate the event cache use:
Action any.publisher* ejb resync
If there were any changes due to the re-synchronization, they will be logged to the WebLogic log file.
Monitoring EclipseLink Related Database Transactions
EclipseLink provides an Object-Relational Mapping (ORM) structure that NMS uses in some cases to manage database data. EclipseLink is not used by all of the NMS tools, but it is heavily used by Web Switching and Web Safety. In some cases, project configuration can cause database-related issues that are not obvious to the implementer from the standard log messages. When additional details about the database queries are required, the EclipseLink debugging level can be turned up and even redirected to a specific log. The log file is written on the same server where your WebLogic installation is running. Since the debug can place additional strain on an environment, it is strongly advised that EclipseLink-related debug not be generated in a production environment unless absolutely necessary to diagnose an issue. Projects should always run their production environments with the standard EclipseLink configuration file included with NMS.
The following directions should be used to define your project version of the EclipseLink configuration file.
1. In your [project]/jconfig directory, create a subdirectory structure as follows:
[project]/jconfig/override/fwserver.jar/META-INF/
2. Copy the Product version of the persistence.xml file. This file can be found in the product/jconfig/override/fwserver.jar/META-INF/ directory.
3. Save the file to the META-INF directory that you created in step 1.
4. You should find entries like the following commented out in the configuration file.
<property name="eclipselink.logging.logger" value="ServerLogger"/>
<property name="eclipselink.logging.file" value="/users/nms1/nmslogs/eclipselink.out" />
Uncomment the entries and set the logging.file entry to a directory that exists on the server where your WebLogic instance is running. If this directory does not exist, the NMS application deployed to WebLogic will not start.
5. Find the following line:
<property name="eclipselink.logging.level" value="SEVERE"/>
Set the value to FINE.
<property name="eclipselink.logging.level" value="FINE"/>
Valid values include ALL, FINEST, FINE, CONFIG, INFO, WARNING, SEVERE, and OFF. Use caution with FINEST and ALL because they generate a large amount of debug information. For diagnosing most issues, FINE is a good place to start.
6. Locate the following line:
<jta-data-source>jdbc/intersys</jta-data-source>
Change the jta‑data‑source to match the config.datasource value in $NMS_CONFIG/jconfig/build.properties. For example, if config.datasource is set to jdbc/intersys/build23, then change the line to read:
<jta-data-source>jdbc/intersys/build23</jta-data-source>
7. When the changes are complete, build a new cesejb.ear file and deploy this to your WebLogic instance.
Logging Options
Logging is accomplished with log4j2.
WebLogic logging is configured using nms‑log4j.xml.
Client side logging is defined in log4j2.properties.
 
To configure logging in log4j2.properties, you need two entries. For example:
#logger.ProxyInvocationHandler.name=com.splwg.oms.client.util.proxy.ProxyInvocationHandler
#logger.ProxyInvocationHandler.level = debug
logger.ProxyInvocationHandler is an arbitrary name; it must be unique and must match in both lines.
 
Full details of the log4j format can be found on the Apache Log4j website.
Performance Testing
It is helpful to have certain debug options enabled when doing performance/scalability testing in order to analyze issues after the test. The following debug is recommended.
In system.dat:
corbagateway -pgtiming on -debug GATEWAY_MESSAGE 1
JMService -debug API 1 -debug TIMING 2
DDService -debug MESSAGES 1
MTService -debug MESSAGES 1
PFService -debug MESSAGES 1
FLMService -debug MESSAGES 1
And to turn on Agent debug in WebLogic:
Action any.publisher* ejb debug com.splwg.oms.ejb.session.Agent=DEBUG
It may also be desirable to enable EJB Logging or EJB_STATISTICS (described above) or to enable ProxyInvocationHandler debug in the clients. ProxyInvocationHandler debug outputs one line to the client log (for example, WebWorkspace.log) for each remote method call made by the client. To enable ProxyInvocationHandler debug for individual users at run time, turn on "com.splwg.oms.client.util.proxy.ProxyInvocationHandler" in the Set Debug dialog box (see Setting Debug in the Configuration Assistant chapter of the Oracle Utilities Network Management System User's Guide) or run the following command (changing [userid] to the user's login ID):
Action any.publisher* ejb client <userid> debug com.splwg.oms.client.util.proxy.ProxyInvocationHandler=DEBUG
To enable ProxyInvocationHandler debug for all users, add the following line to $NMS_CONFIG/jconfig/global/properties/log4j2.properties:
logger.ProxyInvocationHandler.name=com.splwg.oms.client.util.proxy.ProxyInvocationHandler
logger.ProxyInvocationHandler.level = debug
For OMA and Flex Operations, MDB_TIMING and MobileBeanProxy debug can be turned on to log timing of requests from the nms-ws deployment to the cesejb deployment. MobileBeanProxy debug outputs to the WebLogic server log where the nms-ws deployment runs, with each log message giving the timing of a request it made to cesejb. The MDB_TIMING debug outputs to the WebLogic server log where the cesejb deployment runs with each log message giving the timing of a request made from nms-ws. These debug facilities can be turned on with the following command:
Action any.publisher* ejb debug MDB_TIMING=1 com.splwg.oms.ws.MobileBeanProxy=1
The WebLogic access log can also provide useful data. By default, the access log does not output the time taken for requests; this can be enabled via the WebLogic Administration Console.
1. In the WebLogic Administration Console, navigate to the managed server supporting the nms-ws deployment.
2. Click the Logging tab, and then select the HTTP tab.
3. Expand the Advanced pane.
4. In the Advanced pane, set Format to Extended.
5. In the Extended Logging Format Fields field, enter:
date time cs-method cs-uri-stem sc-status bytes time-taken
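Once time-taken is in the access log, per-URI timing can be summarized with awk in the spirit of st-access-stats; the log lines below are fabricated examples of the extended format, with time-taken as the last field:

```shell
# Hypothetical extended-format access log lines (time-taken is the last field).
cat > access_sample.log <<'EOF'
2024-05-30 12:00:01 GET /nms/rest/events 200 512 0.120
2024-05-30 12:00:02 GET /nms/rest/events 200 498 0.340
2024-05-30 12:00:03 POST /nms/rest/switching 200 1024 1.250
EOF
# Average time-taken and request count per URI stem ($4).
awk '{ sum[$4] += $NF; n[$4]++ }
     END { for (u in sum) printf "%s avg=%.3f count=%d\n", u, sum[u]/n[u], n[u] }' \
    access_sample.log
```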
It is also recommended to run st‑call‑rate for the duration of the test. This script periodically queries the NMS database to report information about calls/incidents, outages, and device operations. The recommended options for this script are:
st-call-rate -i 60 -m <minutes>
where <minutes> specifies the number of minutes to run. For example, to run for 4 hours and output to a file named call_rate.out, run the following:
nohup st-call-rate -i 60 -m 240 > call_rate.out 2>&1 &
Run st‑call‑rate ‑h for the most current documentation of command-line options and columns of output from this script.
It is also helpful to monitor CPU and memory usage of the processes running on the various hosts running NMS. This can be accomplished by running top in batch mode like this:
top -b -d <delay-seconds> -n <number-of-iterations>
For example, this will run top at 30 second intervals for 4 hours:
nohup top -b -d 30 -n 480 > top.out 2>&1 &
The following scripts can be run on log files to summarize timing results. Each script takes a log file to read as a parameter.
st‑pg‑report: Summarizes the process group timing in service logs. Run this on the corbagateway log or any other service log that pgtiming was enabled on.
st‑agent‑report: Summarizes the Agent timing debug in the WebLogic log. Run this on the file configured to get log4j messages for com.splwg.oms.* (typically the .out file).
st‑client‑report: Summarizes the ProxyInvocationHandler timing debug in the client log.
st-ejb-stats-report: Summarizes the EJB_STATISTICS debug in the WebLogic log.
st-mdb-stats-report: Summarizes the MDB_TIMING debug in the WebLogic log.
st-queue-stats-report: Summarizes the MobileBeanProxy debug in the WebLogic log for the managed server where the nms-ws deployment runs (namely, the Flex/OMA gateway).
st-access-stats: Summarizes the WebLogic access log. Run this on the access.log for the WebLogic managed server where the nms-ws deployment runs.
The output of the above scripts is very similar with one line of output per API or method call and the following columns:
low: The lowest duration for the api/method call.
median: The median duration for the api/method call.
avg: The average (mean) duration for the api/method call.
high: The highest duration for the api/method call.
total: The total duration of all calls to this api/method.
count: The number of invocations of this api/method.
api/method: The name or identifier of the api/method.
 
After a performance test, grep the service logs for any of the following strings:
"^Dump requested"
"Congested"
"time warp"
"ORA-"
"ERROR"
 
Note that "ERROR" can still appear quite a bit during performance testing for situations like a user (automated or real) performing an action on an event that either grouped into another event or has been canceled or completed.
In the WebLogic log4j log (typically the .out file) grep for:
"ORA-"
"ERROR"
Other Troubleshooting Utilities
Using the JMS API Command Line Utility to Manually Change a Job
The JMS API command line utility (jms‑api) provides a restricted set of options to modify an event when the event cannot be changed with the NMS user interface. It is primarily intended for cleaning up stranded events or other issues where normal NMS functionality will not work. Indiscriminate use of jms‑api for frequent or high-volume activity alongside normal NMS operation can negatively impact NMS performance and is not recommended.
Standard Usage
$ jms‑api [option] [event]
where
[option] is the jms‑api option
[event] is an event handle of the form 800.[event#] (for example, 800.10257)
To return the jms‑api usage options and arguments:
$ jms-api
To complete an event:
$ jms-api complete [event] "[comments]"
Note: Does not allow an RDO still affecting customers to be completed.
To cancel an event:
$ jms‑api cancel [event] "[comments]"
Note: Does not allow an RDO still affecting customers to be canceled.
To complete a Master Switching Job or Planned Outage:
$ jms‑api swplan_complete [event] "[comments]"
Note: Does not allow an active Planned Outage, or a Master Switching Job with active Planned Outages, to be completed.
To cancel ("reschedule") a Master Switching Job:
$ jms‑api swplan_cancel [event] "[comments]"
To remove association between event and switch sheet:
$ jms‑api remove_assoc [event] "[comments]"
To change the estimated restore time of a job:
$ jms‑api set_est_rest_time [event] [time]
Note: [time] must be a valid ISO-8601 date/time string (for example, 2020-02-27T15:30).
To set the external id of a job:
$ jms‑api set_external_id [event] [value]
To set the customers out for a job:
$ jms‑api set_cust_out [event] [value]
To set the trouble code of a job:
$ jms‑api set_trouble_code [event] [value]
Note: Does not modify the calls on a job; [value] must be a valid numeric trouble code.
Alternative API for completing an event
$ jms‑api complete2 [event]
Note: The complete2 parameter does not work for a Master Switching Job or Planned Outage. It does not validate that an RDO event is restored before completing it.