This chapter describes how to install Oracle Enterprise Repository into a clustered environment.
This chapter contains the following sections:
Oracle Enterprise Repository uses a server-side cache on each application server. Cached data is used when it is available; otherwise, the database delivers the data to the cache and to the application.
When Oracle Enterprise Repository runs in a cluster, the cluster members must communicate with each other using HTTP. An edit that occurs on one cluster member invalidates the cached element on that cluster member, and communicates the edit to other cluster members. This is accomplished by a system property called cachesyncurl, which accepts a URL to the application as a valid value.
On startup, the system writes the cachesyncurl value to the database and fetches the list of the other servers' URLs from the database. A message announcing the presence of the new cluster member is sent to all discovered URLs. Each server then refreshes its server list from the database. On a clean server shutdown, the value is removed from the list and a cache-refresh notification is broadcast to the servers on the list.
When an edit invalidates an element in the local cache, a message is sent to all of the other servers noting which cached elements must be invalidated. Upon receipt of the message, each server removes the designated element from its cache. On the next request for that data, the cache is empty, so the database delivers the data to the application and the cache is repopulated.
Server-side HTTP cache communication
Requires session management and persistent sessions
The application servers listed here are currently supported for use with clustering for Oracle Enterprise Repository:
Oracle WebLogic Server
IBM WebSphere Application Server
For information about the supported versions of these application servers, see the Supported Configurations documentation, available on the Oracle Enterprise Repository index page at
To install Oracle Enterprise Repository on a clustered environment:
Install and configure Oracle Enterprise Repository.
Create the clustered environment that will host Oracle Enterprise Repository.
Install and deploy the Oracle Enterprise Repository application on one member of the clustered application servers.
Validate the deployment of the application on one member of the cluster.
Move the application properties to the database.
Shut down the cluster.
Install and deploy the application on all of the other cluster members.
Configure a cluster.properties file on each cluster member.
Start the cluster and all members.
Validate the cluster.
For information regarding the installation of Oracle Enterprise Repository, see Chapter 2, "Installing Oracle Enterprise Repository".
For information about clustering on WebLogic or WebSphere, please refer to the application server documentation, and to organizational standards.
Refer to Using WebLogic Server Clusters, available from Oracle.
For WebSphere Application Server:
Refer to WebSphere Software Information Center. Locate the documentation for the specific appserver version and navigate to: All topics by feature -> Servers -> Clusters -> Balanced workloads with clusters.
For information on deployment and validation of Oracle Enterprise Repository on an application server, see Section 3.1, "Configure Your Application Server". (For example, all of the sample names should be changed.)
Note: Before using the Move settings to database option, you must enable the cmee.eventframework.clustering.enabled property if JMS clustering is to be used.
Property files always take precedence when Oracle Enterprise Repository reads its properties. The application looks for properties and their corresponding values first in the database, and then in the property files; any property read from the database is overridden by a corresponding property in the files. If no property files are present, only the values in the database are used; properties that exist solely within the database are never overridden.
This procedure begins with deploying one application.
In the Admin screen, click System Settings in the left pane. The System Settings section is displayed in the main pane, as shown in Figure 5-1.
Scroll to the bottom and click the Move settings to database button.
A confirmation message appears.
Remove the properties files from the classpath.
Restart the application server.
Locate the configuration files folder (usually located within the ./WEB-INF/classes/ folder or oer_home) within the application server.
Remove the property files listed below from the configuration folder:
These properties are written to the entSettings table within the database.
Modify the cmee.properties file. Remove all property values except those containing URL values. Update the URL references to point to the proxy server path being used to load balance access to the cluster members.
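As an illustration of this step, the trimmed cmee.properties retains only its URL-valued properties, rewritten to point at the load-balancing proxy. The proxy hostname below is a placeholder, and cmee.server.paths.servlet is the only property name taken from this chapter; any other URL-valued properties in your file should be updated the same way:

```
# cmee.properties (illustrative excerpt after trimming)
# The URL now points at the proxy that load-balances the cluster,
# not at an individual cluster member.
cmee.server.paths.servlet=http://proxy.example.com/cluster01
```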
Now, redeploy the application on all the servers.
Note: Any properties enabled after this procedure are written to the database, not to the properties files.
To configure the cluster.properties file on each cluster member:
Stop each cluster member.
On each cluster member, create a file called cluster.properties, which resides in the same location as all other .properties files.
For exploded directory deployments this location is the WEB-INF/classes directory beneath the webapp.
For ear file deployments, this location is the oer_home directory.
The contents of the cluster.properties file are based on the property cmee.server.paths.servlet in the cmee.properties file. However, the hostname in the path should refer to the hostname of the cluster member, not the proxy hostname of the entire cluster.
#cluster.properties
cachesyncurl=http://<SERVLET-PATH>/<APP_PATH>

Example:

#cluster.properties
cachesyncurl=http://server.example.com:7221/cluster01

Other properties that are optional:

# alias is used as an alternate/convenient name to refer
# to the server
# example: server1
# default: same value as cachesyncurl
alias=EclipseServer

# registrationIntervalSeconds is the number of seconds between
# attempts to update the server's registration record in the database
# default: 120
registrationIntervalSeconds=120

# registrationTimeoutSeconds is the number of seconds before a server
# is considered to be inactive/not running
# make sure this value is higher than the registrationIntervalSeconds
# default: 240
registrationTimeoutSeconds=240

# maxFailures is the number of consecutive attempts that will be made
# to deliver a message to another server, after which it will be
# determined to be unreachable
# default: 20
maxFailures=20

# maxQueueLength is the number of messages that will be queued up to
# send to another server, after which the server will be determined to
# be unreachable
# default: 4000
maxQueueLength=5000

# email.to is the address of the email recipient for clustering status
# messages
email.to=email@example.com

# email.from is the address of the sender for clustering status messages
email.from=firstname.lastname@example.org

# email.subject is the subject line of the message for clustering status
# messages
email.subject=Oracle Enterprise Repository Clustering communication failure

# email.message is the body of the message for clustering status messages
email.message=This is an automated message from the Oracle Enterprise Repository informing you of a cluster member communication failure.
Example of a cluster.properties file
cachesyncurl=http://server.example.com:7221/cluster01
alias=node1
registrationIntervalSeconds=120
registrationTimeoutSeconds=240
maxFailures=20
maxQueueLength=5000
email.to=email@example.com
email.from=firstname.lastname@example.org
email.subject=Oracle Enterprise Repository Clustering communication failure
email.message=This is an automated message from the Oracle Enterprise Repository informing you of a cluster member communication failure
The time difference between the application server and the database server should not be more than 120 seconds. Network Time Protocol is recommended to keep these servers in sync, because the clustering process calculates the time difference when messaging between the nodes of the cluster.
Before restarting the server, if JMS clustering is enabled, add an eventing.properties file containing the cmee.eventframework.jms.producers.client.id property, set to a unique value on each cluster member. For example, cmee.eventframework.jms.producers.client.id=OER_JmsProducer1
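Put together, a minimal eventing.properties for one cluster member might look like the fragment below. The client id value is taken from the example above; each cluster member must use a different value:

```
# eventing.properties on the first cluster member
# (each cluster member needs its own unique producer client id)
cmee.eventframework.jms.producers.client.id=OER_JmsProducer1
```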
Restart each cluster member.
Note: Once a cluster member is inactivated because it exceeded maxFailures, the only way to reactivate it is to restart the server.
Messages are sent to the standard out log of each cluster member.
"running in single server mode"
Indicates that Oracle Enterprise Repository clustering is not configured and the application is running in single server mode.
"running in multi server mode with a sync-url of..."
Indicates that Oracle Enterprise Repository clustering is functioning and the application is running in clustered mode.
The sync-url is the value of cachesyncurl in the cluster.properties file, which references the same URL as the individual node's instance with the path /cachesync appended. Most cluster configurations have a proxy server load-balancing the nodes within the cluster.
It is also possible to validate the clustering installation by viewing the clustering diagnostic page from the Oracle Enterprise Repository Diagnostics screen. Click Cluster Info on the Diagnostics screen to view the Cluster Diagnostic page. This page lists information about all servers registered in the cluster, as well as information about inter-server communications.
If cluster nodes are deployed via a centralized administration console, it may be necessary to apply a JVM parameter to allow proper Oracle Enterprise Repository clustering operation in the absence of the cluster.properties file.
This JVM parameter should be applied statically for each member of the cluster, or within the managed server startup command file. It can be set within the JAVA_OPTIONS environment variable for WebLogic application servers, or within JAVA_OPTS for Tomcat servers. The JVM parameter is as follows:
-Dcmee.cachesyncurl=http://<member host name>:<port>/<APP_PATH>
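As a sketch, the parameter could be appended to the startup environment of a WebLogic managed server as shown below. The hostname, port, and application path follow the example values used earlier in this chapter (server.example.com:7221/cluster01), not required names; substitute each member's own values:

```shell
# Append the clustering JVM parameter for this cluster member.
# server.example.com:7221/cluster01 follows the cluster.properties
# example earlier in this chapter; substitute this member's own
# host, port, and application path.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dcmee.cachesyncurl=http://server.example.com:7221/cluster01"
export JAVA_OPTIONS
echo "${JAVA_OPTIONS}"
```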
Note: This feature is available only when using the "Advanced Registration Flows" subsystem for automating the asset registration process. Also, "JMS Clustering" applies only to the embedded ActiveMQ JMS servers in Oracle Enterprise Repository, not to external JMS servers. JDBC persistence is required if you are using ActiveMQ.
In a clustered Oracle Enterprise Repository environment using the Advanced Registration Flows subsystem, each member Oracle Enterprise Repository server in the cluster will have one embedded ActiveMQ JMS server for increased reliability and scalability. For example, for a two-node cluster, there would be two Oracle Enterprise Repository servers, such as server01 and server02, with each having one embedded JMS server. JMS server clustering is enabled using the Oracle Enterprise Repository "Eventing" System Settings, as described in External Integrations: Eventing. Once clustering is enabled for the embedded JMS servers, you then need to specify the connection URL information for the embedded JMS servers on server01 and server02.
For more information, see the "Configuring JMS Servers for Oracle Enterprise Repository" section of the Oracle Fusion Middleware Configuration Guide for Oracle Enterprise Repository.