Oracle® Process Manager and Notification Server Administrator's Guide
10g Release 3 (10.1.3)
B15976-01

1 What's New in OPMN?

This chapter describes the new features of Oracle Process Manager and Notification Server (OPMN) available in Oracle Application Server 10g Release 3 (10.1.3).

This chapter includes the following topics:

Section 1.1, "Grid Computing and OPMN"

Section 1.2, "opmn.xml Configuration"

Section 1.3, "OPMN Logging Mechanism"

Section 1.4, "Dynamic Discovery"

Section 1.5, "Dynamic Resource Management"

Section 1.6, "Control at the Application Level"

Section 1.7, "Service Failover"

Section 1.8, "Progressive Request Report"

Section 1.9, "Sequential Requests"

Section 1.10, "opmnctl commands"

Section 1.11, "IPv6"

1.1 Grid Computing and OPMN

Grid computing is a software architecture designed to effectively pool together large groups of modular servers to create a virtual computing resource across which work can be transparently distributed. Grid computing enables computing capacity to be used effectively, at low cost, and with high availability.

With the new configurations and functionality of OPMN in 10.1.3, you can effectively utilize the possibilities inherent in the grid computing model. You can manage all of the computers in the grid using available OPMN commands.
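For example, scoped opmnctl requests can address every instance in the grid at once. The following is a sketch only: the @cluster scope prefix is part of the opmnctl request syntax, and the component name OC4J_home is a hypothetical example, not a default.

```
> opmnctl @cluster startall
> opmnctl @cluster status
> opmnctl @cluster restartproc ias-component=OC4J_home
```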

1.2 opmn.xml Configuration

In Oracle Application Server 10g Release 2 (10.1.2), configuration for the Oracle Notification Server (ONS) daemon was located in the ons.conf file. This file existed separately from the opmn.xml file, which is used for configuration of OPMN.

In 10.1.3, configuration for ONS is an element of, and can be configured from, the opmn.xml file.

The information that was stored in ons.conf is now configured within the topology section under the notification-server element in the opmn.xml file.

The following is an example of the topology element in the 10.1.3 opmn.xml file:

<notification-server>
   <topology>
      <nodes list="node-list"/>
      <discover list="discover-list"/>
      <gateway list="gateway-list"/>
   </topology>
</notification-server>

In 10.1.2, OPMN extracted the instance id, instance name, cluster id, and cluster name values from the dcm.conf file. In 10.1.3, these values are specified directly as attributes of the process-manager element.

The following is an example of the process-manager element attributes in the 10.1.3 opmn.xml file:

<process-manager
   id="instance id"
   name="instance name"
   cluster-id="cluster id"
   cluster-name="cluster name">

1.3 OPMN Logging Mechanism

OPMN and OPMN-managed processes generate log files during processing to enable you to troubleshoot problems you might encounter during process execution.

In 10.1.2, standard and debug messages were located in either the ipm.log or ons.log files.

In 10.1.3, standard messages are located in the opmn.log file and debug messages are located in the opmn.dbg file.

In 10.1.3, logging is configured by component codes rather than level codes. Log messages contain literal logging-level values rather than integer values; for example: none, fatal, error, warn, notify, debug1, debug2, debug3, and debug4.

You can dynamically query either the standard or debug log parameters using opmnctl commands. For example:

> opmnctl query target=log
> opmnctl query target=debug

You can also dynamically set the standard or debug log parameters using opmnctl commands. For example:

> opmnctl set target=log comp=<component codes>
> opmnctl set target=debug comp=<component codes>

The new setting is reset to the value in the opmn.xml file after OPMN restarts.
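As a sketch of how the standard and debug logs might be configured in opmn.xml, the following shows log and debug elements; the path values, component codes, and rotation size here are illustrative assumptions, not documented defaults:

```
<log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm" rotation-size="1500000"/>
<debug path="$ORACLE_HOME/opmn/logs/opmn.dbg" comp="" rotation-size="1500000"/>
```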

1.4 Dynamic Discovery

In 10.1.2, each OPMN instance had to be configured with the host and port values of the other ONS servers with which it communicated. This list was maintained by DCM in the ons.conf file, which listed all of the ONS servers in a cluster. Whenever this file changed, OPMN had to be restarted for the change to take effect. In a grid environment, where the number of servers that OPMN communicates with may grow into the hundreds and where servers may come and go frequently, this type of static configuration was not desirable.

In 10.1.3, OPMN uses dynamic discovery of other ONS servers. Instead of configuring a list of all other servers to connect to, OPMN uses a discovery mechanism consisting of a multicast address or a list of discovery servers. ONS uses the discovery mechanism to announce new servers and join them into the ONS topology dynamically. This reduces the amount of configuration necessary for each Oracle Application Server instance and eliminates both configuration changes and OPMN restarts when the topology changes.

With dynamic discovery, the ONS network topology includes all of the Oracle Application Server instances that have been configured with the same discovery information.
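For example, a multicast discovery address can be specified in the topology section of opmn.xml. This is a sketch: the leading asterisk marks a multicast address in the discover list syntax, and the address and port values shown are illustrative only:

```
<notification-server>
   <topology>
      <discover list="*224.0.0.37:8205"/>
   </topology>
</notification-server>
```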

1.5 Dynamic Resource Management

In 10.1.2, some features of process management for grid computing included the following:

In 10.1.3, the process management for grid computing is further enhanced with Dynamic Resource Management (DRM). DRM functionality provides a way for you to customize the management of your processes through configuration changes only. DRM enables you to have process management commands issued based on system conditions according to a set of user-configured directives.

DRM is designed to operate on each Oracle Application Server instance. In a cluster environment, DRM functionality is available on each separate local instance.

DRM enables you to specify a set of conditions that will trigger process management commands to be automatically issued. This is accomplished using Resource Management Directives (RMDs). RMDs describe a condition and action that should be taken when the condition occurs.

DRM operates on the DMS metrics that are available to OPMN. Metrics-based load balancing between Oracle Application Server instances in which OC4J passes information to mod_oc4j is available in 10.1.3.

Some examples of RMDs are:

For more information about RMDs, refer to Section 3.3, "Dynamic Resource Management".
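As a rough, hypothetical sketch of the configuration-only nature of an RMD (the element and attribute names, condition expression, and exec path below are assumptions for illustration; see Section 3.3 for the actual syntax), a directive pairs a condition with an action:

```
<rmd name="restart-on-high-heap" interval="10">
   <conditions>
      <!-- hypothetical condition over a DMS metric -->
      heapUsage > 512
   </conditions>
   <actions>
      <!-- hypothetical action: restart the affected component -->
      <exec path="$ORACLE_HOME/opmn/bin/opmnctl" args="restartproc ias-component=OC4J"/>
   </actions>
</rmd>
```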

1.6 Control at the Application Level

In 10.1.2, OPMN did not permit management of processes at the process-set or process-type level. For example, in 10.1.2, the smallest unit that OPMN could manage for Oracle Containers for J2EE (OC4J) was a single Java Virtual Machine (JVM). That JVM might actually be running multiple applications, all of which would be affected by a start, stop, or restart.

In 10.1.3, J2EE applications are supported using the OPMN process request mechanism. Application state and metrics are updated automatically. You can start, stop, restart, and check the status of your J2EE applications using opmnctl commands.

OPMN manages applications with the help of runtime changes from OC4J and Oracle HTTP Server. This allows the shutdown or restart of an individual application, which provides much finer-grained control for operations such as application upgrades or resetting unresponsive applications.
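A sketch of application-level control with opmnctl follows; the application attribute, the process-type name home, and the application name myapp are illustrative assumptions, not confirmed syntax:

```
> opmnctl restartproc process-type=home application=myapp
> opmnctl status
```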

1.7 Service Failover

In 10.1.2, OPMN was meant to keep configured Oracle Application Server components running. For example, in 10.1.2, if an opmnctl startall command was entered, OPMN started all configured Oracle Application Server components.

In 10.1.3, to allow for more flexible decisions at runtime, a dormant state for configured components has been implemented. When a configured Oracle Application Server component is in the dormant state, it is not started initially but is ready to be activated if needed.

The new dormant state implemented in 10.1.3 also introduces the concept of service failover. Service failover is a mechanism in which a single, or limited number of, configured Oracle Application Server components are kept running on one of the servers within the computing grid. A configured Oracle Application Server component on one of the computers within the grid is started if the original running component fails. This can be thought of as having a dormant application configured on several or all grid computers and having OPMN constantly maintain a state across the grid where one instance is currently running the application. This new service failover functionality also enables you to configure preferential selection of each computer within the grid. For example, you may wish to ensure that an OC4J instance for your enterprise runs on a computer with a specific hardware setup.
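As a hedged configuration sketch, a failover-enabled component might be declared on a process-type element in opmn.xml; the attribute names service-failover (number of instances to keep running) and service-weight (host preference), and the values shown, are assumptions for illustration:

```
<process-type id="home" module-id="OC4J"
              service-failover="1" service-weight="100">
   <!-- process-set and port configuration for the component -->
</process-type>
```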

1.8 Progressive Request Report

In 10.1.2, all parts of a user OPMN request had to finish before the results were reported back to the user.

In 10.1.3, user OPMN requests are reported in sequence and are available for review as each part of the request completes. The new progressive request report functionality, which utilizes the new report=true attribute, causes OPMN to report back on each part of a request as it completes. The reports are available for process start, stop, and restart requests as well as debug requests. For scoped requests, each participating OPMN sends reports back to the originator of the request as each part completes.
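For example, a progressive report can be requested by adding the report=true attribute to a request; the component name OC4J here is illustrative:

```
> opmnctl startproc ias-component=OC4J report=true
```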

1.9 Sequential Requests

In 10.1.2, an OPMN request was run for all affected processes at the same time, unless a dependency dictated a specific ordering. In 10.1.3, sequential requests are introduced: a mechanism that enables OPMN to perform a user request one process at a time.

By default OPMN issues jobs for all processes in parallel unless a dependency dictates a specific ordering.

In 10.1.3, if you issue a request in which sequential=true is specified as part of the command, then OPMN runs the request on only a single process at a time, waiting for the request to complete on one process before running it on the next.

Dependencies are still honored, and take part in the request sequentially as well.
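For example, a sequential restart can be requested by adding the sequential=true attribute; the component name OC4J here is illustrative:

```
> opmnctl restartproc ias-component=OC4J sequential=true
```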

1.10 opmnctl commands

In 10.1.3, the opmnctl commands offer greater ability to monitor your processes as well as an easier method for modification of the opmn.xml file.

The following list describes the new 10.1.3 opmnctl commands:

1.11 IPv6

In 10.1.2, ONS used version 4 of the Internet Protocol (IPv4): a four-byte unsigned integer (the 32-bit address) and a two-byte unsigned integer (the remote port value) identified any node in a cluster.

In 10.1.3, ONS supports both the IPv4 and the version 6 of the Internet Protocol (IPv6) network interfaces.

IPv6 is intended to address the concern that there are too few IP addresses available for the future demand of device connectivity (especially cell phones and mobile devices). For more information, refer to Section 3.10, "IPv6 Support".