Oracle® Containers for J2EE Configuration and Administration Guide
10g Release 3 (10.1.3)
Part No. B14432-01

8 Configuring and Managing Clusters

This chapter explains how to configure and manage cluster topologies in an Oracle Application Server environment. It includes the following topics:

  • Clustering Overview

  • Configuring a Cluster

  • Viewing the Status of a Cluster

  • Load Balancing with Oracle HTTP Server

  • Replicating Changes Across a Cluster

  • Creating and Managing Additional OC4J Instances

Note that application clustering - the clustering of applications deployed to Oracle Application Server nodes for the purpose of session or state replication - is covered in Chapter 9, "Application Clustering in OC4J".

Clustering Overview

This section provides an overview of the clustering mechanisms supported in Oracle Application Server 10g Release 3 (10.1.3) and notes the significant changes in functionality between the current release and previous releases. It includes the following topics:

  • How Clustering Works

  • Supported Clustering Models

  • Changes in Clustering

How Clustering Works

In the current release, a cluster topology is defined as two or more loosely connected Oracle Application Server nodes.

The connectivity provided within a cluster is a function of Oracle Notification Server (ONS), which manages communications between Oracle Application Server components, including OC4J and OHS. The ONS server is a component of Oracle Process Manager and Notification Server (OPMN), which is installed by default on every Oracle Application Server host. When configuring a cluster topology, you are actually connecting the ONS servers running on each Oracle Application Server node.

Previous releases of Oracle Application Server supported clustering of a fully connected set of server nodes only, which meant that each node had to be explicitly specified in the ONS configuration file (ons.conf). When a node was added to or removed from the cluster, the configuration had to be updated on each server node, and each server restarted.

The current release supports a new "dynamic discovery" mechanism, enabling the cluster to essentially manage itself. In this framework, each ONS maintains a map of the current cluster topology. When a new ONS is added to the cluster, each existing ONS adds the new node and its connection information to its map. At the same time, the new ONS adds all of the existing nodes to its map. Conversely, when an ONS is removed from the cluster, the maps for the remaining nodes are updated with this change.

As of Oracle Application Server Release 3 (10.1.3), the ONS configuration file (ons.conf) is no longer used. Instead, ONS configuration data is set in the <notification-server> element within opmn.xml, the OPMN configuration file located in the ORACLE_HOME/opmn/conf directory on each node. Clustering configuration in turn is set within a <topology> subelement. Note that only one <topology> subelement is allowed.

The example below illustrates a cluster topology configuration in opmn.xml:

<notification-server>
  <topology>
    <discover list="*225.0.0.20:8001"/>
  </topology>
  ...
</notification-server>

The clustering configuration specified in the <topology> element applies to all instances of Oracle Application Server components - including OHS and OC4J - installed on the node. Note that all nodes within a cluster topology must have the same configuration specified in the opmn.xml file.

Supported Clustering Models

The following clustering models are supported:

  • Dynamic node discovery

    In this configuration, each ONS node within the same subnet announces its presence with a multicast message. The cluster topology map for each node is automatically updated as nodes are added or removed, enabling the cluster to be self-managing.

    See "Configuring Dynamic Node Discovery Using Multicast" for configuration instructions.

  • Static hubs as "discovery servers"

    Specific nodes within a cluster are configured to serve as "discovery servers", which maintain the topology map for the cluster; the remaining nodes then connect with one another via this server. Hubs in one topology can be connected to those in another.

    See "Configuring Static Discovery Servers".

  • Connection of isolated topologies via gateways

    This configuration is used to connect topologies separated by firewalls or on different subnets using specified "gateway" nodes.

    See "Configuring Cross-Topology Gateways" for details.

  • Manual node configuration

    In this configuration, the host address and port for each node in the cluster are manually specified in the configuration. This is the same clustering mechanism supported in Oracle Application Server 10g Release 2 (10.1.2) and is supported primarily to provide backward compatibility.

    See "Configuring Static Node-to-Node Communication" for instructions.

Changes in Clustering

The following are changes in cluster configuration in Oracle Application Server 10g Release 3 (10.1.3) from previous releases.

  • The Distributed Configuration Management (DCM) framework, used in prior releases of Oracle Application Server to replicate common configuration information across a cluster, is not included in the current release. This means that:

    • Configuration using the dcmctl command line utility or Application Server Control Console is no longer supported.

    • Cluster configurations must now be manually replicated in the opmn.xml file installed on each node within the cluster.

  • The ONS configuration file (ons.conf) is no longer used. ONS connection data is now set in the <notification-server> element within opmn.xml, the OPMN configuration file located in the ORACLE_HOME/opmn/conf directory on each node containing an OC4J or OHS instance.

  • Each node is no longer required to be manually configured to connect to every other node in the cluster.

Configuring a Cluster

This section contains instructions on configuring the following clustering models:

  • Configuring Dynamic Node Discovery Using Multicast

  • Configuring Static Discovery Servers

  • Configuring Cross-Topology Gateways

  • Configuring Static Node-to-Node Communication

Configuring Dynamic Node Discovery Using Multicast

Dynamic node discovery is the most straightforward clustering configuration. In this model, each ONS node broadcasts a simple multicast message announcing its presence, enabling nodes within the cluster to dynamically "discover" one another.

The following tools can be used to add OC4J instances to a cluster using multicast discovery:

  • The opmnassociate command-line utility. See "Configuring Multicast Discovery with opmnassociate".

  • The opmnctl command-line tool. See "Configuring Multicast Discovery with opmnctl".

Each ONS maintains its own map of the cluster topology. When a new ONS is added to the cluster, each existing ONS adds the new node and its connection information to its map. At the same time, the new ONS adds all of the existing nodes to its map. Conversely, when an ONS is removed from the cluster, the maps for the remaining nodes are updated with this change.

Figure 8-1 Dynamic Discovery Model


Because multicast messages may be restricted by different network configurations, dynamic node discovery may be an option only for ONS nodes that are on the same subnet. However, multiple subnets using dynamic node discovery may be connected using gateway servers. See "Configuring Cross-Topology Gateways" for details.


Notes:

  • All nodes within the topology must be configured to use the same multicast address and port.

  • The multicast address must be within the valid address range, which is 224.0.0.1 to 239.255.255.255.

    Ideally, multicast address and port assignments should be managed by your systems administration staff to avoid potential conflicts with other applications.


The dynamic discovery configuration is set within a <discover> subelement of the <topology> element in the opmn.xml file on each Oracle Application Server instance in the topology. To add a new node to the cluster, simply add this element to its opmn.xml file. To remove a node from the cluster, remove this element.

Set the multicast IP address and port as the value for the list attribute. Note the asterisk (*) preceding the IP address. This character is critical, as it informs OPMN that the value specified is a multicast address. Multiple values can be specified, each separated from the next by a comma.

<opmn>
 <notification-server>
  <port ...  />
  <ssl ... />
  <topology>
    <discover list="*225.0.0.20:8001"/>
  </topology>
 </notification-server>
 ...
</opmn>
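For example, a node that should participate in two multicast groups could list both values in the same attribute; the addresses and ports shown here are illustrative only:

<topology>
  <discover list="*225.0.0.20:8001,*225.0.0.21:8001"/>
</topology>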

Note:

The opmn.xml file must be reloaded for changes made to take effect. Run the following command on the affected node to reload opmn.xml:
opmnctl reload

Note that this command will not affect OPMN-managed components, including OHS, OC4J and deployed applications.
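After the reload, you can confirm from any node that the topology has formed by checking the cluster status, as described in "Viewing the Status of a Cluster":

opmnctl @cluster status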


Configuring Multicast Discovery with opmnassociate

The opmnassociate utility provides a solution for adding an OC4J instance to a cluster using multicast discovery. It performs the following steps:

  • Inserts or updates the <discover> element in opmn.xml with the specified multicast address and port

  • Configures the default Web site to receive and respond to requests from Oracle HTTP Server using the Apache JServ Protocol (AJP) by modifying the corresponding <port> element in opmn.xml

  • Restarts OPMN to load the new configuration into the runtime

The opmnassociate tool is installed in the ORACLE_HOME/bin directory on each OC4J instance. The tool must be run individually on each instance, and will update only the opmn.xml file on that instance.

The syntax is as follows:

opmnassociate "*multicastAddress:multicastPort" [-restart]

For example:

opmnassociate "*225.0.0.20:8001" -restart

The asterisk (*) preceding the IP address is required.

Note that this tool can only be used to add the default home OC4J instance to a cluster; to add other OC4J instances—for example, home2—use opmnctl as outlined below.

Configuring Multicast Discovery with opmnctl

The OPMN command-line tool, opmnctl, supports a new config topology command that allows you to specify, update, or delete the multicast <discover> entry within opmn.xml.

The opmnctl tool is installed in the ORACLE_HOME/opmn/bin directory on each node. The tool must be run individually on each node and will update only the opmn.xml file on that node.


Note for Adding OPMN-Managed Standalone OC4J Instances:

The default Web site in an OPMN-managed OC4J instance that does not include Oracle HTTP Server (J2EE Server and Process Management install type) is configured to listen for HTTP requests.

When adding the instance to a cluster, you must configure the Web site to use the Apache JServ Protocol (AJP). This modification is necessary to enable the OC4J instance to receive and respond to requests from Oracle HTTP Server.

Ideally, you should use the opmnctl config port update command to modify the default Web site configuration defined in opmn.xml. See "Configuring Web Sites with opmnctl" for details.


Inserting or Updating Discovery Data

The update command inserts or updates the <discover> element with the specified values. The syntax is as follows:

opmnctl config topology update discover="*multicastAddress:multicastPort"

For example:

opmnctl config topology update discover="*225.0.0.20:8001"

opmnctl reload

Deleting Discovery Data

The delete command removes the <discover> element from opmn.xml, effectively removing the node from the cluster. If the <topology> element contains no other subelements, it will be removed as well.

opmnctl config topology delete discover

opmnctl reload

Configuring Static Discovery Servers

This configuration is similar to a peer-to-peer clustering model, with one or more ONS nodes within the same cluster configured to serve as static hubs, or "discovery servers."

Each ONS node in the cluster establishes a connection with a discovery server, which maintains the topology map for the cluster. The discovery server provides the connecting node with the current topology map, enabling the connecting node to communicate with the other ONS nodes within the cluster.

Note that you can use opmnctl to configure the connection to a static discovery server. See "Configuring a Static Discovery Server Connection with opmnctl" for details.

Figure 8-2 Static Discovery Server Model


Set the TCP/IP connection information for the discovery server within the <discover> element in the opmn.xml file on each node within the cluster. For example:

<opmn>
 <notification-server>
  <port ...  />
  <ssl ... />
  <topology>
   <discover list="node1.company.com:6200"/>
  </topology>
 </notification-server>
 ...
</opmn>

The required information is as follows:

  • The host name or IP address of the static discovery server

  • The OPMN remote port, which is defined in the <port> element within the opmn.xml file installed on the static server, as illustrated below.

    <port local="6100" remote="6200" request="6003"/>
    

Note:

The opmn.xml file must be reloaded for changes to take effect in the OPMN runtime. Run the following command on the affected node to reload opmn.xml:
opmnctl reload

Note that this command will not affect OPMN-managed components, including OHS, OC4J and deployed applications.
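If the cluster contains more than one discovery server, the <discover> element can presumably list each of them in the same list attribute, using the comma-separated syntax shown earlier for multicast addresses; the host names and port below are illustrative:

<topology>
 <discover list="node1.company.com:6200,node2.company.com:6200"/>
</topology>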


Configuring a Static Discovery Server Connection with opmnctl

The OPMN command-line tool, opmnctl, supports a new config topology command that allows you to specify, update, or delete the <discover> entry within opmn.xml.

The opmnctl tool is installed in the ORACLE_HOME/opmn/bin directory on each node. The tool must be run individually on each node, and will only update the opmn.xml file on that node.

Inserting or Updating Discovery Data

The update command inserts or updates the <discover> element with the specified values. The syntax is as follows:

opmnctl config topology update discover="serverHost:opmnRemotePort"

For example:

opmnctl config topology update discover="node.company.com:6200"

opmnctl reload

Deleting Discovery Data

The delete command removes the <discover> element from opmn.xml, effectively removing the node from the cluster. If the <topology> element contains no other subelements, it will be removed as well.

opmnctl config topology delete discover

opmnctl reload

Configuring Cross-Topology Gateways

For situations in which cluster topologies are on different subnets or are isolated by firewalls or physical locations, specific ONS nodes can be configured as "gateways", enabling ONS notifications to be sent across the disparate topologies.

Figure 8-3 Using Gateway Servers to Connect Topologies


In this model, an ONS node within each isolated topology is configured as a gateway server, which serves as an entry point into the cluster. The gateway configuration is specified within a <gateway> subelement of the <topology> element.

Set the host and port for the source gateway node and each target node it will connect to as the value for the list attribute. The order in which the nodes are listed does not matter.

  • For each node, specify the host name or IP address of the server and the OPMN remote port, which is defined in the <port> element within the opmn.xml file installed on that node, as illustrated below.

    <port local="6100" remote="6200" request="6003"/>
    
    
  • Separate the data for each node with an ampersand (&), which must be specified as &amp;.

  • Include a / at the end of the list of nodes.

The example below shows the opmn.xml configuration for node1, which will connect with gateway nodes node2 and node3. This same configuration can be set on each of these gateway nodes. Note the / at the end of the list:

<opmn>
 <notification-server>
  <port ...  />
  <ssl ... />
  <topology>
   <gateway list="node1.com:6201&amp;node2.com:6202&amp;node3.com:6203/"/>
   <discover list="*224.0.0.37:8205"/>
  </topology>
 </notification-server>
 ...
</opmn>

Note that in addition to the <gateway> element, the <topology> element includes the <discover> element, which contains the multicast address and port used for dynamic discovery within the node's own cluster.

Alternatively, the entire <topology> element in the preceding example can be copied to the opmn.xml file on every node within the cluster topology. Only node1 will utilize the <gateway> configuration; it will be ignored by the other nodes.

To simplify configuration, you can set the connection data for all gateway nodes - sources and targets - in the <gateway> subelement and then copy this element to the opmn.xml file on each gateway node. Again, the order of the nodes does not matter; each node will simply ignore its own entry in the list.


Note:

The opmn.xml file must be reloaded for changes to take effect in the OPMN runtime. Run the following command on the affected node to reload opmn.xml:
opmnctl reload

Note that this command will not affect OPMN-managed components, including OHS, OC4J and deployed applications.


Configuring Static Node-to-Node Communication

The static configuration model is essentially the same mechanism used in Oracle Application Server 10.1.2 and 9.0.4. It continues to be supported primarily to provide backward compatibility with these earlier releases.

Figure 8-4 Static Node-to-Node Model


In this configuration, a "node list" containing the host address and ONS remote listener port for each node in the cluster is supplied. Prior to Oracle Application Server 10.1.3, in which ONS configuration data was integrated into opmn.xml, this configuration was set in the ons.conf configuration file.

Define the host address and the ONS remote listener port - specified within the <port> subelement of <notification-server> - for each node in the cluster within the <nodes> subelement. Separate each node from the next with a comma.

For example:

<opmn>
 <notification-server>
  <port local="6101" remote="6202" request="6004"/>
  <ssl ... />
  <topology>
   <nodes list="node1-sun:6201,node2-sun:6202"/>
  </topology>
 </notification-server>
 ...
</opmn>

Supply the same list for each node in the cluster; each ONS instance will identify itself in the list and ignore that entry.


Note:

The opmn.xml file must be reloaded for changes to take effect in the OPMN runtime. Run the following command on the affected node to reload opmn.xml:
opmnctl reload

Note that this command will not affect OPMN-managed components, including OHS, OC4J and deployed applications.


Viewing the Status of a Cluster

You can view the current status of the Oracle Application Server components within a cluster, using either opmnctl or Application Server Control Console.

Viewing Cluster Status with opmnctl

You can check the status of the cluster using opmnctl on any Oracle Application Server node within the cluster.

opmnctl @cluster status

The output shows the status of the components installed on each active Oracle Application Server instance within the cluster:

Processes in Instance: instance1
-------------------+--------------------+---------+---------
ias-component      | process-type       |     pid | status
-------------------+--------------------+---------+---------
OC4J               | home               |   26880 | Alive
HTTP_Server        | HTTP_Server        |   26879 | Alive

Processes in Instance: instance2
-------------------+--------------------+---------+---------
ias-component      | process-type       |     pid | status
-------------------+--------------------+---------+---------
OC4J               | home               |   26094 | Alive
HTTP_Server        | HTTP_Server        |   26093 | Alive

Viewing Cluster Status in Application Server Control Console

Click the Cluster Topology link in the upper left corner of the Application Server Control Console home page.

The resulting page displays each Oracle Application Server instance that is active within the cluster, as well as the active applications on each instance. Note that you can access an instance or a deployed application within the cluster through this page.

Load Balancing with Oracle HTTP Server

The term load balancing refers to the process of distributing incoming service requests over server instances within a cluster. Load balancing in an Oracle Application Server cluster is managed by the mod_oc4j module of Oracle HTTP Server (OHS). In this configuration, the OHS instance acts as front-end listener for incoming HTTP/HTTPS requests; mod_oc4j then routes each request to an OC4J instance serving the requested application.

In Oracle Application Server Release 3 (10.1.3), load balancing is completely dynamic, and no additional OHS or mod_oc4j configuration is required "out of the box". New load-balancing features include:

  • Dynamic configuration of application mount points as applications are deployed to or removed from the cluster. See "Configuring Application Mount Points".

  • The ability to control which OC4J instances an OHS instance routes requests to, using routing IDs. See "Using Web Server Routing IDs to Control OC4J Request Routing".

The only requirement is that the ONS servers within the various OHS and OC4J nodes within the cluster be connected using one of the clustering configuration mechanisms outlined in this chapter. See "Configuring a Cluster" for details.

Using Web Server Routing IDs to Control OC4J Request Routing

Every OHS and OC4J instance in an OPMN-managed installation is assigned a "routing ID" that is passed in at startup from opmn.xml. An OHS instance will route incoming Web requests only to OC4J instances that share its routing ID. This means that you can effectively define the set of OC4J instances that a specific OHS instance will route requests to.

A default routing ID is assigned to all component instances, so that upon installation, every OHS instance in a cluster can route requests to any OC4J instance within the cluster.

The routing ID is defined in opmn.xml in a <data> element where the id attribute equals routing-id. The <data> element entry is a subelement of <category id="start-parameters">, which specifies parameters passed to the instance at startup. The default routing-id value set for each instance is "g_rt_id".

<category id="start-parameters">
  <data id="routing-id" value="g_rt_id"/>
</category>

The <data> element containing the default routing ID is set within the <ias-instance> element, which contains the OPMN configuration data for the Oracle Application Server instance. Because the routing ID is set at this level, the routing-id value set in this <data> element is applied to all instances of the OHS and OC4J components installed within the Oracle Application Server instance.

<opmn>
 <process-manager>
  ...
  <ias-instance id="instance1" name="instance1">
   ...
    <environment>
     ...
    </environment>
    <module-data>
     <category id="start-parameters">
      <data id="routing-id" value="g_rt_id"/>
     </category>
    </module-data>
   <ias-component id="HTTP_Server">
    ...
   </ias-component>
   <ias-component id="OC4J">
    ...
   </ias-component>
  </ias-instance>
 </process-manager>
</opmn>

However, the routing ID can be set at the individual OHS or OC4J instance level by adding a <data> element within the <category id="start-parameters"> element for the component. This value overrides the routing ID assigned at the Oracle Application Server instance level.

Note that you can specify any string as the value of the routing-id attribute; there is no required format for this identifier.

The following entry in opmn.xml sets the routing ID for an OHS instance:

<opmn>
 <process-manager>
  ...
  <ias-instance id="instance1" name="instance1">
   ...
   <ias-component id="HTTP_Server">
     <environment>
      ...
     </environment>
     <process-type id="HTTP_Server" module-id="OHS">
       <module-data>
        <category id="start-parameters">
          <data id="start-mode" value="ssl-enabled"/>
          <data id="routing-id" value="group_b_id"/>
        </category>
       </module-data>
       <process-set id="HTTP_Server" numprocs="1"/>
      </process-type>
     </ias-component>
  </ias-instance>
 </process-manager>
</opmn>

The following entry in opmn.xml sets the routing ID for the OC4J home instance:

<opmn>
 <process-manager>
  ...
  <ias-instance id="instance1" name="instance1">
   ...
    <ias-component id="OC4J">
     <environment>
     </environment>
     <process-type id="home" module-id="OC4J" status="enabled">
       <module-data>
        <category id="start-parameters">
          <data id="java-options" ... />
          <data id="routing-id" value="group_b_id"/>
        </category>
       </module-data>
       <process-set id="HTTP_Server" numprocs="1"/>
       <port id="default-web-site" range="12501-12600" protocol="ajp" />       <port id="rmi" range="12401-12500"/>
       <port id="jms" range="12601-12700"/
       <process-set id="default_group" numprocs="1"/>
      </process-type>
     </ias-component>
  </ias-instance>
 </process-manager>
</opmn>
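Because the routing ID is passed to OHS and OC4J at startup, a change to the routing-id value is not picked up by running processes when opmn.xml is merely reloaded. A minimal sketch, assuming the standard component names used in the examples above, is to reload the configuration and then restart the affected components:

opmnctl reload
opmnctl restartproc ias-component=HTTP_Server
opmnctl restartproc ias-component=OC4J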

Configuring Application Mount Points

To route incoming requests, OHS utilizes a list of application-specific mount points that map the URL supplied in a request to the OC4J instance that will service the request. This section includes the following topics on mount point creation:

  • Enabling Dynamic Configuration of Application Mount Points

  • Changing the Mount Point Configuration Algorithm

  • Viewing Mount Point Configuration Data

See the Oracle HTTP Server Administrator's Guide for additional details on mount point configuration.

Enabling Dynamic Configuration of Application Mount Points

In previous releases of Oracle Application Server, the list of application mount points had to be managed manually in the mod_oc4j configuration file, mod_oc4j.conf.

In the current release, the mount point list is dynamically updated as new nodes and applications are added to—or removed from—the cluster. This dynamic discovery mechanism is enabled by default and requires no additional configuration.

Using ONS notifications, every OC4J instance within the cluster sends mount point data for each of its deployed applications to mod_oc4j, which adds this information to its internal routing table.

The mount point information sent by each OC4J instance to OHS includes:

  • The OC4J host address

  • OC4J port information, including the Apache JServ Protocol (AJP) listener port

    This value is the lowest available port assigned to AJP in the opmn.xml file on the OC4J node.

  • The Web module name

    This value is set in the name attribute of the <web-app> element defined for the module in the *-web-site.xml configuration file to which the module is bound.

  • The Web context(s) defined for the application

    This value is set in the root attribute of the <web-app> element defined for the module in the *-web-site.xml configuration file.


Note:

Dynamically-configured mount points are not written to the mod_oc4j configuration file (mod_oc4j.conf).

When a new application is deployed to an OC4J instance, its mount point information is transmitted to OHS, enabling mod_oc4j to dynamically "discover" the application and begin routing requests to it.

Conversely, when an application is stopped or removed from an OC4J instance, the mod_oc4j routing table is updated to reflect the application's absence, causing mod_oc4j to stop routing requests to the application instance.

Changing the Mount Point Configuration Algorithm

Although dynamic mount point creation is enabled by default, you do have the option of continuing to use manually configured mount points, which is the default mechanism supported in previous releases of Oracle Application Server.

Static mount points are defined in the mod_oc4j configuration file, mod_oc4j.conf, which is installed in the ORACLE_HOME/Apache/Apache/conf directory. By default, OHS will create dynamic mount points as applications are deployed; however, static mount points defined in mod_oc4j.conf will also be honored.

The mount point configuration mechanism to use is specified in the Oc4jRoutingMode parameter in mod_oc4j.conf. Table 8-1 lists the values for this variable. See the Oracle HTTP Server Administrator's Guide for details on mount point configuration and using mod_oc4j.conf.

Table 8-1 Oc4jRoutingMode Values

Value            Description
Dynamic          Dynamically configured mount points are used exclusively. Static mount points will be ignored.
Static           Static, manually configured mount points defined in mod_oc4j.conf are used exclusively. Dynamic mount points will not be created for new applications.
DynamicOverride  Both dynamic and static mount points are used. In the event of a conflict, the dynamically configured mount point will be used.
StaticOverride   Both dynamic and static mount points are used; however, in the event of a conflict, the static, manually configured mount point will be used. This is the default mode, although it is not defined in mod_oc4j.conf by default.


The mod_oc4j.conf example below enables the DynamicOverride mode, in which dynamically configured mount points take precedence over static mount points in the event of a conflict:

#########################################################
# Oracle iAS mod_oc4j configuration file: mod_oc4j.conf #
#########################################################

LoadModule oc4j_module libexec/mod_oc4j.so
Oc4jRoutingMode DynamicOverride
<IfModule mod_oc4j.c>
  <Location /oc4j-service>
    SetHandler oc4j-service-handler
  </Location>
    Oc4jMount /j2ee/*
    Oc4jMount /webapp home
    Oc4jMount /webapp/* home
    Oc4jMount /cabo home
    Oc4jMount /cabo/* home
    Oc4jMount /stressH home
    Oc4jMount /stressH/* home
</IfModule>

Viewing Mount Point Configuration Data

You can configure OHS to output mount point configuration data to a Web page generated on the OHS host.

Add the following entry to the OHS configuration file, httpd.conf, on the OHS host machine. This file is installed in ORACLE_HOME/Apache/Apache/conf.

<IfModule mod_oc4j.c>
   Oc4jSet StatusUri /oc4j-status
</IfModule>
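Changes to httpd.conf are read by Oracle HTTP Server at startup, so the OHS component must be restarted before the status page becomes available. One way to do this through OPMN, assuming the default HTTP_Server component name, is:

opmnctl restartproc ias-component=HTTP_Server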

You will now be able to view mount point data by appending the /oc4j-status context URI to the OHS server URL:

http://ohsHost:ohsPort/oc4j-status

For example:

http://node1.company.com:7777/oc4j-status

The following is sample output displayed in the resulting Web page, with comments:

hostname          : node1.company.com
local instance    : node1.company.com
select method     : Round-Robin
select affinity   : None
# OHS routing configuration
routing mode      : Static-Dynamic
routing ID        : g_rt_id

OC4J Dynamic routing
# Applications using dynamic routing

# 'ascontrol' application
application       : ascontrol
  context         : /em
  process (Jgroup): 0

# 'demos' application
application       : demos
  context         : /ojspdemos/jstl, /ojspdemos
  process (Jgroup): 0 (demos)

OC4J Process List

  process,ias instance,host,port,status
  0 : home.node1.company.com, node1.company.com, 12502, ALIVE
  1 : home.node1.company.com, node1.company.com, 12501, ALIVE
  2 : home.node1.company.com, node1.company.com, 12503, ALIVE

Replicating Changes Across a Cluster

Because the Distributed Configuration Management (DCM) framework is not provided in Oracle Application Server Release 3 (10.1.3), changes made to individual configuration files must be manually replicated to each OC4J instance within the cluster. Table 8-2 below summarizes the files that may need to be replicated.

Table 8-2 Configuration Files to Replicate Across a Cluster

File Location in ORACLE_HOME Data to Replicate/Manage
application.xml /j2ee/instance/config
  • Changes made to configuration data applied by default to all deployed applications.
  • References to data sources or other shared resources.

  • Shared library definitions within the <imported-shared-libraries> element. Note that the code sources for custom shared libraries must be installed on the OC4J host, and the libraries must be referenced in server.xml on the OC4J instance.

data-sources.xml /j2ee/instance/config
  • Configuration data for custom data sources that must be made available to deployed applications.
default-web-site.xml /j2ee/instance/config
  • Secure Web site (HTTPS) configuration, if applicable.
*-web-site.xml /j2ee/instance/config
  • Copy the configuration files for any additional Web sites that will be utilized on the OC4J instance to the specified location. Note that references to Web site configuration files must be added to opmn.xml or server.xml as outlined in "Creating a New Web Site in OC4J".
global-web-application.xml /j2ee/instance/config
j2ee-logging.xml /j2ee/instance/config
  • Any logging configuration changes.
javacache.xml /j2ee/instance/config
  • Any Java cache configuration changes.
jazn.xml /j2ee/instance/config
  • Configuration for either XML- or LDAP-based security providers.
jazn-data.xml /j2ee/instance/application-deployments/appName
  • Replicate the XML-based provider configuration to the specified location for all applications using this provider. Not required for applications using an LDAP-based provider.
jms.xml /j2ee/instance/config
  • Any destination or connection factory additions.
rmi.xml /j2ee/instance/config
  • Any RMI configuration changes, such as logging configuration.
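As an illustration of manual replication, the following sketch copies a modified jms.xml from the node on which it was edited to a second node, then restarts the OC4J component on that node so the change is picked up. The host name, user, and instance name are hypothetical, and the appropriate restart depends on which file was changed:

scp ORACLE_HOME/j2ee/home/config/jms.xml oracle@node2.company.com:ORACLE_HOME/j2ee/home/config/
ssh oracle@node2.company.com "ORACLE_HOME/opmn/bin/opmnctl restartproc ias-component=OC4J"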

Creating and Managing Additional OC4J Instances

OC4J includes tools for creating or removing additional OC4J instances within an Oracle Application Server instance. Once created, new OC4J instances can be accessed and managed through the Application Server Control Console.

This section includes the following topics:

  • Creating an Additional OC4J Instance

  • Accessing and Managing a New Instance

  • Removing an OC4J Instance

Creating an Additional OC4J Instance

The createinstance utility enables you to create additional OC4J instances within an Oracle Application Server instance.

The createinstance utility is installed in the ORACLE_HOME/bin directory. The syntax is as follows:

createinstance -instanceName instanceName [-port httpPort]

Note that you must supply an HTTP listener port as the value for httpPort when creating a new instance in a standalone OPMN-managed OC4J instance (J2EE Server and Process Management install type). This HTTP listener port will be set in the default-web-site.xml Web site configuration file created for the instance.
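For example, the following command (the instance name and port are illustrative) creates an instance named home2 with an HTTP listener on port 8888:

createinstance -instanceName home2 -port 8888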

As part of the creation process, you will be asked to enter a password. This password will be tied to the oc4jadmin user for this instance. For consistency, you may want to enter the same password used to access the home instance with the oc4jadmin user.


Usage Notes:

  • The createinstance utility can be used regardless of whether the Oracle Application Server instance is in a running or stopped state.

  • If OPMN is running, you must reload opmn.xml to load the new instance configuration:

    opmnctl reload
    
    
  • If the new OC4J instance will be required to accept ORMI over SSL (ORMIS) requests, you must configure ORMIS in the instance-specific rmi.xml file and update opmn.xml with the ORMIS port information as described in the Oracle Containers for J2EE Security Guide.


Note that you can optionally supply an HTTP port for the value of -port. This feature can be used when the Oracle Application Server instance does not include Oracle HTTP Server. Setting an HTTP port makes it possible to access the OC4J instance's "home page" directly.

The new instance will be created within a new ORACLE_HOME/j2ee/instanceName directory, the same location as the default home OC4J instance. A new <process-type> element containing the instance configuration will also be added to the opmn.xml configuration file.

The following directories and files are generated in the new ORACLE_HOME/j2ee/instanceName directory structure:

applib/
applications/
config/
  contains default versions of all server-level configuration files
config/database-schemas/
  contains all database schema XML files packaged with OC4J
connectors/
  contains RAR files packaged with OC4J
log/
persistence/

The new instance does not include the OC4J binary libraries; instead, the instance will utilize the libraries installed in the home instance. The default application is deployed to the instance; however, binaries and configuration files for other deployed applications, including Application Server Control Console, are not copied to the instance.

Accessing and Managing a New Instance

Once the new instance is started by OPMN, you can access it through the Cluster Topology page in Application Server Control Console.

Log in as the oc4jadmin user and supply the password set when the instance was created using the createinstance utility.

Once logged in, you can perform the full range of administrator tasks on the instance, including deploying applications to it.

Removing an OC4J Instance

You can delete an OC4J instance by using the removeinstance utility, which deletes the directory created for the instance from the ORACLE_HOME/j2ee/ directory structure and removes configuration data for the instance from opmn.xml.

The removeinstance utility is installed in the ORACLE_HOME/bin directory. The syntax is as follows:

removeinstance -instanceName instanceName
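For example, to remove an instance named home2:

removeinstance -instanceName home2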

Usage Notes:

  • The OC4J instance to be deleted must be in a stopped state.

  • If OPMN is running when the tool is in use, you must invoke opmnctl reload to reload the updated opmn.xml into the runtime.

  • The default home instance cannot be deleted.