Oracle® Communications Order and Service Management Installation Guide
Release 7.2.2

E35412-06

7 Installing OSM in a Clustered Environment

This chapter describes how you can use the clustering feature in Oracle Communications Order and Service Management (OSM) to ensure high availability of your system or to process a large volume of orders.

See Appendix B, "OSM High-Availability Guidelines and Best Practices" for information about a highly-available OSM system.

Overview of Installing an OSM Cluster

You install OSM in a WebLogic Server clustered environment by doing the following:

  1. Install WebLogic Server. See "Installing and Configuring WebLogic Server".

  2. Create the WebLogic Server domain and configure the required managed servers and cluster. See "Configuring the WebLogic Server Domain for an OSM Cluster".

  3. Configure the Coherence networking parameter file. See "Configuring Oracle Coherence for an OSM Cluster".

  4. Replicate the WebLogic Server domain on all the machines within the domain. See "Replicating the Domain on Other Machines Within the Domain".

  5. Configure Node Manager on all machines in the domain. See "Configuring Node Manager on all Machines in the Domain".

  6. Start up the administration server and the managed servers in the cluster and verify the installation. See "Starting and Verifying the Cluster".

  7. Install OSM to the cluster. See "Installing OSM in a Clustered Environment".

OSM WebLogic Clustering Overview

You can install OSM in a clustered or non-clustered environment. In an OSM clustered environment, the WebLogic domain consists of an administration server and a cluster of managed servers. Clustering provides continuous availability of your OSM server and improves performance by enabling load balancing, scalability, and failover. You may choose to use the clustering feature in OSM if:

  • You want to minimize unexpected system downtime.

  • Your order volume is very high and cannot be sustained with a single WebLogic Server instance or physical host.

OSM supports the following load balancing:

  • Load balancing for JMS messages: The native WebLogic load balancing options for JMS messages help OSM maximize server resource use and order throughput. Load balancing also enables OSM to minimize server response time and processing delays that can occur if some servers are overloaded with orders while others remain unused. Load balancing allows rolling downtimes of servers without client impact, as long as a sufficient number of servers remain up and running.

  • Load balancing for HTTP and HTTPS messages: In addition to the native WebLogic support for load balancing JMS messages, Oracle recommends installing a software or hardware HTTP load balancer for load balancing incoming HTTP or HTTPS messages.

After the cluster is installed in the environment, the following operations can be done on a cluster to support OSM scalability:

  • Add a new server into the cluster.

  • Drop an existing server from the cluster.

To ensure high availability, the load balancing mechanisms (both the native WebLogic JMS load balancing and the HTTP load balancer) forward messages to other managed servers if one of the managed servers fails. Orders that were being processed by the failed server are delayed until that server is either restarted or migrated.

Configuring the WebLogic Server Domain for an OSM Cluster

This section describes how to create the WebLogic Server domain and configure the required server instances and cluster.

For additional information about WebLogic clustering, refer to the WebLogic Server documentation.

To configure the WebLogic Server domain for an OSM cluster:

  1. Log on to the machine that will be hosting the administration server in your domain.

  2. Do one of the following:

    • On a Windows platform, start the configuration wizard from the Start menu.

    • On UNIX or Linux platforms, run the following command:

      WLS_home/common/bin/config.sh
      

    The Welcome window is displayed.

  3. Ensure that Create a new WebLogic domain is selected, and click Next.

    The Select Domain Source window is displayed.

  4. Select Generate a domain configured automatically to support the following products, and then select Oracle JRF - 11.1.1.0.

  5. Click Next.

    The Specify Domain Name and Location window is displayed.

  6. In the Domain name field, enter the domain name.

  7. In the Domain location field, enter the domain location, and click Next.

    The Configure Administrator Username and Password window is displayed.

  8. In the Name field, enter the WebLogic Administrator user name.

  9. In the User password field and again in the Confirm user password field, enter the WebLogic Administrator password.

    Note:

    The password must contain at least eight characters, including one or more numeric characters or one or more of the following special characters:
    ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~
    
  10. In the Description field, add a description, and then click Next.

    The Configure Server Start Mode and JDK window is displayed.

  11. Do one of the following:

    • To install OSM in a development environment, select Development Mode.

    • To install OSM in a production environment, select Production Mode.

  12. Select a 32-bit or 64-bit JDK, and then click Next.

    The Select Optional Configuration window is displayed.

  13. Select Administration Server, Managed Servers, Clusters and Machines, and Deployments and Services, and click Next.

    The Configure the Administration Server window is displayed.

  14. In the Name field, enter a name for the administration server.

  15. In the Listen address field, enter a value for the listen address. For example, an IP address or DNS name.

  16. In the Listen port field, enter a value for the listen port.

  17. (Optional) In the SSL listen port field, enter a value for the SSL listen port.

  18. (Optional) Select SSL enabled. Oracle recommends configuring an SSL listen port and enabling SSL to ensure secure communication over the Internet.

  19. Click Next.

    The Configure Managed Servers window is displayed.

  20. Click Add, which creates a managed server. Repeat this step for any additional managed servers.

  21. Click Add again if you want to add a managed server that you can later designate as a proxy HTTP server. You can perform this step if you are setting up a development system and you do not have a dedicated Oracle HTTP Server load balancer for HTTP and HTTPS messages.

  22. Enter managed server and proxy HTTP server names (if applicable), IP addresses (DNS names are recommended for high availability), and port numbers for the servers. If required, enable SSL.

  23. Click Next.

    The Configure Clusters window is displayed.

  24. Click Add, which creates a cluster.

  25. In the Name field, enter a name.

  26. In the Cluster messaging mode list, select multicast or unicast. See "About the WebLogic Messaging Mode and OSM Cluster Size" for more information about which messaging mode option to choose.

  27. If you selected multicast, do the following:

    1. In the Multicast port field, enter a multicast port number. The multicast port must be between 1 and 65535.

    2. In the Multicast address field, enter a multicast address. The multicast address must be in the range 224.0.0.1 to 239.255.255.255.

  28. In the Cluster address field, enter the IP address (or DNS name) and port of each managed server in sequence, separated by commas. For example:

    10.177.42.220:9910,10.177.42.219:9920
    
  29. Click Next.

    The Assign Servers to Clusters window is displayed.

  30. In the Server area, select all managed servers. Do not select the proxy server (if applicable).

  31. Click the right arrow, which moves all managed servers under the cluster.

  32. Click Next.

    The Create HTTP Proxy Applications window is displayed.

  33. If you want to create a proxy HTTP server, select Create HTTP Proxy and select the newly created proxy server from the list. Click Next.

    The Configure Machines window is displayed.

  34. Click the Unix Machine tab and click Add to add all of the UNIX or Linux machines in your domain that will run Node Manager. Enter the machine names, IP addresses, and ports from your configuration details. Click Next.

    The Assign Servers to Machines window is displayed.

  35. Add the available servers to the appropriate machines as decided upon in your configuration details. Click Next.

    The Target Deployments to Clusters or Servers window is displayed.

  36. In the Target pane, select the cluster where you want to install OSM.

  37. In the Deployments pane, click Select All (Application and Library) to include all components for the cluster, and click Next.

    Note:

    Do not select any deployments to be assigned to the proxy server (if used), or the Administration server.

    Figure 7-1 Target Deployments to Clusters or Servers Window

    The Target Services to Clusters or Servers window is displayed.

  38. In the Target pane, select the cluster where you want to install OSM.

  39. In the Service pane, click Select All, and click Next.

    Note:

    If you are creating a proxy HTTP server, do not select any services to be assigned to the proxy server.

    The Configuration Summary window is displayed, which provides a summary of the applications, services, and libraries to be deployed in the domain.

    Figure 7-2 Configuration Summary Window for Cluster Configuration

  40. Review the information in the Configuration Summary window and confirm that the cluster organization matches your requirements. If you find any discrepancies, click the Previous button to return to the appropriate screen and make the necessary changes. When you are done, click Create.

    The Creating Domain window shows the progress of your domain creation.

  41. When the domain creation process completes, click Done.

Preparing the WebLogic Domain for OSM Installation

Before you install OSM, you must configure the domain to run in IPv4 mode and configure memory settings for each managed server in your cluster.

  1. Force all WebLogic Server domain IP communication to IPv4:

    1. Go to domain_home/bin for your base domain.

    2. Using a text editor, open startWeblogic.sh.

    3. Add the following property immediately after the line ${DOMAIN_HOME}/bin/setDomainEnv.sh $* (see the excerpt at the end of this procedure):

      JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.net.preferIPv4Stack=true"
      
    4. Save and close the file.

  2. Run the following command to start the Administration server:

    startWebLogic.sh
    
  3. Open the Administration Console.

  4. From the Domain Structure, navigate to Environment, Servers, managed_server, Configuration, Server Start, and Arguments (where managed_server is the name of the managed server).

  5. Modify the arguments setting with the following values:

    -Dweblogic.wsee.useRequestHost=true -XX:MaxPermSize=356m -XX:PermSize=356m -Xms1524m -Xmx3024m
    
  6. Save the changes.

  7. Repeat these steps for all managed servers.

  8. Restart the Administration Server.
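For reference, the following is a sketch of the relevant section of startWeblogic.sh after the edit in step 1, assuming the stock script layout:

. ${DOMAIN_HOME}/bin/setDomainEnv.sh $*

# Force all WebLogic Server IP communication to IPv4
JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.net.preferIPv4Stack=true"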

Setting up Secure HTTPS Connections

This section provides steps to secure your OSM client communication by setting up HTTPS connections.

Note:

This is a sample procedure for creating trusted root authorities and intermediate authorities using the keytool application included with Java 6. You can, however, use the keytool application included with other Java versions. Ensure that you use the same keytool application for both the OSM server and the OSM clients or HTTP load balancing applications.
  1. If you are using HTTPS for communicating with the OSM Web Clients, do the following:

    1. From the Administration Console, in the Domain Structure, navigate to Environment, Clusters, cluster_name, Configuration tab, and General sub tab (where cluster_name is the name of the cluster you are installing OSM to).

    2. Expand Advanced.

    3. Select WebLogic Plug-In Enabled.

    4. Click Save.

  2. If you are running a software or hardware load balancer (for example, Oracle HTTP Server), do the following:

    1. From the Administration Console, in the Domain Structure, navigate to Environment, Clusters, cluster_name, Configuration tab, and HTTP sub tab (where cluster_name is the name of the cluster you are installing OSM to).

    2. In the Frontend Host field, enter the host name or IP address of the load balancer.

    3. In the Frontend HTTP Port field, enter the HTTP port of the load balancer.

    4. In the Frontend HTTPS Port field, enter the HTTPS port of the load balancer.

    5. Click Save.

  3. Obtain trusted root certificate authority and intermediate certificate authority files from, for example, a commercial vendor or your company's IT security department. For example:

    domain_home/config/security/osm_root.pem
    domain_home/config/security/osm_server.pem
    

    where:

    • osm_root is the trusted root certificate authority.

    • osm_server is the intermediate certificate authority.

  4. Import the trusted root certificate authority. For example:

    /usr/local/JDK6/bin/keytool -importcert -file domain_home/config/security/osm_root.pem -trustcacerts -alias osm_root_ca -keystore osm.jks
    

    where:

    • osm_root_ca is an alias.

    • osm is the name of the Java Keystore (.jks) file.

  5. Import the intermediate trusted certificate authority. For example:

    /usr/local/JDK6/bin/keytool -importcert -trustcacerts -alias osm_server_ca -file domain_home/config/security/osm_server.pem -keystore osm.jks
    

    where osm_server_ca is an alias.

  6. List the imported certificates in the keystore. For example:

    /usr/local/JDK6/bin/keytool -list -keystore osm.jks
    
  7. Generate a key pair in the keystore for the machine or VM running the server instance. For example:

    /usr/local/JDK6/bin/keytool -genkeypair -keyalg RSA -keysize 2048 -validity 365 -alias hostname -keystore osm.jks -dname "CN=FQDN, OU=Business_Unit, O=Oracle, L=Ottawa, ST=ON, C=CA"
    

    where:

    • hostname is the short form hostname.

    • FQDN is the fully qualified domain name.

  8. Create a certificate signing request (CSR) for the machine or VM running the server instance. For example:

    /usr/local/JDK6/bin/keytool -certreq -alias hostname -keystore osm.jks -file hostname.csr
    
  9. Repeat steps 7 and 8 to generate key pairs and certificate request files for all other managed servers and the administration server (a scripted version of these steps appears at the end of this section). For example:

    /usr/local/JDK6/bin/keytool -genkeypair -keyalg RSA -keysize 2048 -validity 365 -alias hostname1 -keystore osm.jks -dname "CN=FQDN1, OU=Business_Unit, O=Oracle, L=Ottawa, ST=ON, C=CA"
    /usr/local/JDK6/bin/keytool -certreq -alias hostname1 -keystore osm.jks -file hostname1.csr
    /usr/local/JDK6/bin/keytool -genkeypair -keyalg RSA -keysize 2048 -validity 365 -alias hostname2 -keystore osm.jks -dname "CN=FQDN2, OU=Business_Unit, O=Oracle, L=Ottawa, ST=ON, C=CA"
    /usr/local/JDK6/bin/keytool -certreq -alias hostname2 -keystore osm.jks -file hostname2.csr
    /usr/local/JDK6/bin/keytool -genkeypair -keyalg RSA -keysize 2048 -validity 365 -alias hostname3 -keystore osm.jks -dname "CN=FQDN3, OU=Business_Unit, O=Oracle, L=Ottawa, ST=ON, C=CA"
    /usr/local/JDK6/bin/keytool -certreq -alias hostname3 -keystore osm.jks -file hostname3.csr
    
  10. Send all CSR files to a trusted certificate authority for issuing user certificates, and wait until the certificate authority sends the signed certificate files back.

  11. After you receive the signed certificate files from the certificate authority, import them into the keystore. For example:

    /usr/local/JDK6/bin/keytool -importcert -keystore osm.jks -alias hostname -file hostname.pem
    /usr/local/JDK6/bin/keytool -importcert -keystore osm.jks -alias hostname1 -file hostname1.pem
    /usr/local/JDK6/bin/keytool -importcert -keystore osm.jks -alias hostname2 -file hostname2.pem
    /usr/local/JDK6/bin/keytool -importcert -keystore osm.jks -alias hostname3 -file hostname3.pem
    
  12. Copy the keystore file (osm.jks) into the WebLogic Server domain directory on each managed server and the administration server.

  13. Do the following for each server:

    1. From the Administration Console, navigate to Environment, Servers, server_name, Configuration tab, and General sub tab (where server_name is the name of a managed server).

    2. Click Lock & Edit.

    3. Ensure that SSL Listen Port Enabled is checked.

    4. Ensure that an SSL port has been specified.

    5. Click Save.

    6. Click the Keystores subtab.

    7. In the Keystores field, click Change.

    8. In the Keystores field, select Custom Identity and Custom Trust.

    9. Click Save.

    10. In the Custom Identity Keystore and Custom Trust Keystore fields, enter the full path to the custom trust certificate authority osm.jks file.

    11. In the Custom Identity Keystore Type and Custom Trust Keystore Type fields, enter JKS.

      Note:

      You must keep all passphrase fields blank to prevent a security breach on the keystore.
    12. Click Save.

    13. Click the SSL subtab.

    14. In the Private Key Alias field, enter the alias name you used for the key pair associated with this server.

    15. In the Private Key Passphrase field, enter a passphrase.

    16. In the Confirm Private Key Passphrase field, repeat the passphrase.

    17. Click Save.

    18. Click View Changes and Restarts.

    19. Select the changes.

    20. Click Activate Changes.

  14. Restart each managed server.

    Note:

    After you have installed OSM, test the HTTPS configuration using the OSM Web clients. See "Configuring and Verifying HTTPS Connectivity for OSM Client Browsers" for more information.
  15. Do the following:

    1. If you are using the Oracle HTTP Server, see the knowledge management note Setting up an Oracle HTTP Server for OSM Cluster Load Balancing (Doc ID 1618630.1) for information about configuring an SSL certificate wallet on the Oracle HTTP Server.

    2. If you are using another hardware or software load balancer, refer to the product documentation for more information about configuring SSL.

    3. If you are using a proxy managed server for HTTP and HTTPS, no further action is required.
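The key pair and CSR generation in steps 7 through 9 can be scripted when the cluster has many servers. The following is a minimal sketch based on the keytool commands above; the host list and the DN fields (CN, OU, and so on) are placeholders that you must replace with your own values:

#!/bin/sh
KEYTOOL=/usr/local/JDK6/bin/keytool
for HOST in hostname1 hostname2 hostname3; do
    # Generate a key pair for this server in the shared keystore
    $KEYTOOL -genkeypair -keyalg RSA -keysize 2048 -validity 365 \
        -alias $HOST -keystore osm.jks \
        -dname "CN=$HOST, OU=Business_Unit, O=Oracle, L=Ottawa, ST=ON, C=CA"
    # Create the matching certificate signing request
    $KEYTOOL -certreq -alias $HOST -keystore osm.jks -file $HOST.csr
done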

Configuring Oracle Coherence for an OSM Cluster

This section provides configuration suggestions and best practices to avoid conflicts with Oracle Coherence in a clustered OSM environment.

For information about configuring and troubleshooting Oracle Coherence, refer to the Coherence documentation. For performance tuning details, see the Coherence Knowledge Base Web site:

http://wiki.tangosol.com/display/COH33UG/Performance+Tuning

Increasing Buffer Sizes to Support Coherence

Oracle recommends that you configure your OS for larger buffers:

On Oracle Linux and Red Hat Enterprise Linux, run the following commands as root:

sysctl -w net.core.rmem_max=2096304
sysctl -w net.core.wmem_max=2096304

On Oracle Solaris, run the following command as root:

ndd -set /dev/udp udp_max_buf 2096304

On IBM AIX, run the following commands as root:

no -o rfc1323=1
no -o sb_max=4194304

Note:

Windows does not impose a buffer size restriction by default, so no changes need to be made to increase buffer sizes on Windows.
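On Linux, the sysctl -w settings shown above do not survive a reboot. To make them persistent, you can also add the values to /etc/sysctl.conf and reload the configuration, as in this sketch:

# Append the Coherence buffer settings to /etc/sysctl.conf (run as root)
echo "net.core.rmem_max=2096304" >> /etc/sysctl.conf
echo "net.core.wmem_max=2096304" >> /etc/sysctl.conf

# Reload the settings without rebooting
sysctl -p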

About Configuring Coherence for an OSM Installation

When installing OSM in a WebLogic cluster, you must provide network information for Oracle Coherence, which is software for managing data within the cluster. Coherence for OSM must operate in unicast mode, where individual Coherence cluster nodes communicate over point-to-point TCP/IP connections, with up to n*(n-1)/2 individual connections (where n is the number of active cluster nodes). For example, a small cluster of five nodes has 5 x (5-1)/2 = 10 connections, so that each node has a dedicated connection to each of the other nodes in the cluster. A larger cluster of 15 nodes has 15 x (15-1)/2 = 105 connections.

To configure Coherence for OSM, you must create a file that specifies the necessary Coherence parameters, such as the Well Known Addresses (WKA) and authorized hosts. The WKAs allow cluster members to discover other nodes and join the cluster using unicast instead of the default multicast communication. The authorized hosts list prevents unauthorized access to the Coherence node. A copy of this configuration must be accessible by every OSM managed server.

For more information, see the Coherence Knowledge Base Web site:

http://wiki.tangosol.com/display/COH33UG/well-known-addresses

To create an OSM startup script with a Coherence configuration:

  1. Create a file named osm-coherence-cluster-config.xml (where osm-coherence-cluster-config is the name of the OSM Coherence configuration file).

  2. Open the file with a text editor.

  3. Add the following XML snippet to the file and replace the placeholders as indicated:

    <cluster-config xml-override="osm.coherence.cluster.node.config.override">
        <member-identity>
            <!--
            Choose a unique name for the cluster, such that no other OSM instance in your network has the same name.
            -->
            <cluster-name system-property="tangosol.coherence.cluster">OSM_cluster</cluster-name>
        </member-identity>
        <unicast-listener>
            <!--
            Add more <socket-address>socket_address</socket-address> elements to the well-known addresses, one for each member in your OSM WebLogic cluster.
            -->
            <well-known-addresses>
                <socket-address id="id">
                    <address>IP_address</address>
                    <port>port_number</port>
                </socket-address>
            </well-known-addresses>
            <authorized-hosts>
                <host-address id="id">IP_address</host-address>
            </authorized-hosts>
            <!--
            You can also use the host-range element to specify a range of IP addresses:
            <authorized-hosts>
                <host-range id="id">
                    <from-address>IP_address</from-address>
                    <to-address>IP_address</to-address>
                </host-range>
            </authorized-hosts>
            -->
        </unicast-listener>
    </cluster-config>
    

    where:

    • osm.coherence.cluster.node.config.override: This parameter specifies the name of a custom Java system property that allows you to add descriptive information about each managed server that can be accessed using the Enterprise Pack for Coherence or the Coherence JMX feature. See "Adding Coherence Details for Display in Enterprise Manager".

    • OSM_cluster is the name of your OSM cluster. Coherence uses the value of the <cluster-name> element to define which servers are allowed to join the cluster. All members of the cluster must supply the same name, and any member that supplies a different name fails to start. This can happen if you mistakenly use different configurations on different member servers, or if your network configuration (the content of the <well-known-addresses> element) conflicts or overlaps with a different OSM cluster running at the same time.

    • id is the socket address ID that differentiates one socket address from another, or one authorized host from another. Make sure the value of the first is 1, the value of the second is 2, and so on.

    • IP_address is the IP address or hostname of a member server in your OSM cluster. For each well-known address, add a <host-address>IP_address</host-address> entry under <authorized-hosts/>. If you have multiple OSM instances using the same IP address, add the IP address only once to the list of authorized hosts.

      Note:

      If you want to monitor OSM Coherence from Oracle Enterprise Manager, add entries under <authorized-hosts/> for each JMX management node, but do not define corresponding WKAs.
    • port_number is the port number for a member server in your OSM cluster. You must select an unused port to avoid port conflicts.

      Note:

      The values of <port> elements must be different from the values of the ports used by WebLogic for the corresponding OSM member server. Coherence uses a different networking protocol than WebLogic to communicate with members in the cluster.

    The following example shows the configuration for an OSM cluster with two member servers, OSM_MS1, and OSM_MS2, with Coherence cluster addresses 198.51.100.219:18488 and 198.51.100.220:28488 respectively:

    ...
    <well-known-addresses>
        <socket-address id="1"> <!-- member OSM_MS1 -->
            <address>198.51.100.219</address>
            <port>18488</port>
        </socket-address>
        <socket-address id="2"> <!-- member OSM_MS2 -->
            <address>198.51.100.220</address>
            <port>28488</port>
        </socket-address>
    </well-known-addresses>
    <authorized-hosts>
        <host-address id="1">198.51.100.219</host-address>
        <host-address id="2">198.51.100.220</host-address>
    </authorized-hosts>
    ...
    
  4. Save the osm-coherence-cluster-config.xml file and make it available to all member servers, either on a shared file system or by copying it to each member server's local file system. Ideally, choose a directory under the WebLogic domain directory, such as domain_home/config/coherence. In a later step, a domain template is created for distribution to other machines; placing the Coherence configuration file under domain_home ensures that it is included in the template.

  5. If you are using scripts to start and stop each managed server (for example, if you are not using node manager):

    1. Make a copy of startManagedWebLogic.sh.

    2. Rename the copy.

      For example: startOSM_MS1.sh and startOSM_MS2.sh.

    3. Edit the startup script as follows:

      export JAVA_OPTIONS="${JAVA_OPTIONS} \
      -Dosm.coherence.cluster.config.override=Coherence_config_full \
      -Dtangosol.coherence.localhost=hostname \
      -Dtangosol.coherence.localport=port"
      

      where:

    • Coherence_config_full is the name of the Coherence configuration file you created including the full absolute path to the file (for example, /opt/oracle/Middleware/user_projects/domains/domainCluster/bin/osm-coherence-cluster-config.xml).

    • hostname is the hostname or IP address of the managed server you are starting. This hostname must match the address of the corresponding socket address in the osm-coherence-cluster-config.xml file.

    • port is the port number for the member server in your OSM cluster. This port must match the port of the corresponding socket address in the osm-coherence-cluster-config.xml file.

  6. If you are using node manager to start and stop each managed server (for example, if you are not using start and stop scripts):

    1. From the Domain Structure, navigate to Environment, Servers, managed_server, Configuration, and Server Start (where managed_server is the name of the managed server).

    2. In the Arguments field, enter:

      -Dosm.coherence.cluster.config.override=Coherence_config_full -Dtangosol.coherence.localhost=hostname -Dtangosol.coherence.localport=port
      

      where:

    • Coherence_config_full is the name of the Coherence configuration file you created including the full absolute path to the file (for example, /opt/oracle/Middleware/user_projects/domains/domainCluster/bin/osm-coherence-cluster-config.xml).

    • hostname is the hostname or IP address of the managed server you are starting. This hostname must match the address of the corresponding socket address in the osm-coherence-cluster-config.xml file.

    • port is the port number for the member server in your OSM cluster. This port must match the port of the corresponding socket address in the osm-coherence-cluster-config.xml file.
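For example, using the two-member sample configuration shown earlier, the startup script (or Node Manager arguments) for member OSM_MS1 would set the following values; the path to the configuration file is a placeholder for your own domain location:

export JAVA_OPTIONS="${JAVA_OPTIONS} \
-Dosm.coherence.cluster.config.override=/opt/oracle/Middleware/user_projects/domains/domainCluster/config/coherence/osm-coherence-cluster-config.xml \
-Dtangosol.coherence.localhost=198.51.100.219 \
-Dtangosol.coherence.localport=18488"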

Adding Coherence Details for Display in Enterprise Manager

To add descriptive information about the Coherence configuration that can be accessed from Enterprise Manager, do the following:

  1. For each OSM managed server, create an XML file coherence-node-managedServerName.xml (where coherence-node-managedServerName is a unique name for the managed-server-specific configuration file).

  2. Add the following XML content to each configuration file:

    Note:

    Each element inside <member-identity> is optional, but you must keep the order shown.
    <cluster-config>
        <member-identity>
            <!-- Uncomment the following two elements only when appropriate:
                <site-name>dataCenterName</site-name>
                <rack-name>rackIdentifier</rack-name>
            -->
            <machine-name>physicalHost</machine-name>
            <member-name>managedServerName</member-name>
            <!-- Do NOT uncomment the following element:
            <priority>priorityValue</priority>
            -->
        </member-identity>
    </cluster-config>
    

    where:

    • dataCenterName is the name of the datacenter for the physical server.

    • rackIdentifier is the rack, blade, enclosure, or engineered system.

    • physicalHost is the physical server or virtual machine name.

    • managedServerName is the name of the Coherence instance. Oracle recommends setting this value and reusing the name of the WebLogic managed server, because the managed server name is the primary identifier for management clients like Enterprise Manager.

    • priorityValue is a parameter that influences the Coherence health check capability. Oracle recommends that you do not set this value.

  3. Depending on your choice of server start configuration, add the command-line parameter -Dosm.coherence.cluster.node.config.override=managed_server_config_file_path (where managed_server_config_file_path is the path to the managed server configuration file) to either the managed server's start script or the Node Manager arguments.
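For example, a completed coherence-node-OSM_MS1.xml for managed server OSM_MS1 might look as follows; the machine name is a placeholder:

<cluster-config>
    <member-identity>
        <machine-name>host01</machine-name>
        <member-name>OSM_MS1</member-name>
    </member-identity>
</cluster-config>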

Replicating the Domain on Other Machines Within the Domain

The newly created domain is now installed on a single machine. This section describes the steps necessary to replicate the domain on other machines within the domain. WebLogic provides two utilities to do this: pack and unpack.

Creating a Managed Server Domain Template for Use on Other Machines

To create a managed server domain directory (template) that can be used on other machines within the domain:

  1. On the machine that contains the administration server and the definition of managed servers, go to the WLS_home/common/bin directory.

  2. Run the following command:

    pack.sh -domain=domain_home -template=template.jar -template_name="template_name" -managed=true 
    

    where:

    • domain_home is the full or relative path of the WebLogic domain from which the template is to be created

    • template is the full or relative path of the template, and the filename of the template to be created

    • template_name is a descriptive name for the template

    For example:

    pack.sh -domain=/opt/oracle/Middleware/user_projects/domains/cluster_demo -template=/opt/oracle/Middleware/user_projects/domains/cluster_demo.jar -template_name="cluster_demo" -managed=true 
    

Replicating the Managed Server Domain Template on all Other Machines in the Cluster

Use the following steps to replicate the created template file to all other machines in the domain.

  1. Establish a session with the remote machine and copy the template to it.

  2. Navigate to the WLS_home/common/bin directory.

  3. Run the following command:

    unpack.sh -template=template.jar -domain=domain
    

    where:

    • template is the full or relative path of the template that you copied to the remote machine

    • domain is the full or relative path of the domain to be created

    For example:

    unpack.sh -template=/opt/oracle/Middleware/user_projects/domains/cluster_demo.jar -domain=/opt/oracle/Middleware/user_projects/domains/cluster_demo 
     
    

Creating the boot.properties File

You must create a boot.properties file for each managed server in your cluster. This file enables the servers to start without prompting you for the administrator user name and password.

To create the boot.properties file:

  1. Open a command window.

  2. Go to the domain_home/servers directory.

  3. Create the following directory for each cluster-managed server in your system:

    servers/ServerName/security
    

    where ServerName is the cluster-managed server name

  4. In the directory you just created, use a text editor to create the boot.properties file.

  5. Add the following lines to the file:

    username=username
    password=pwd
    

    where

    • username is the administrator user name for the WebLogic administration server.

    • pwd is the password for the WebLogic administration server.

    
    

    These values are entered in clear text but will be encrypted when you start the server for the first time.
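For example, the following commands create the security directory and boot.properties file for managed servers OSM_MS1 and OSM_MS2; the server names and credentials are placeholders for your own values:

cd domain_home/servers
for SERVER in OSM_MS1 OSM_MS2; do
    mkdir -p $SERVER/security
    # Values are stored in clear text until the first server start encrypts them
    printf 'username=weblogic\npassword=password\n' > $SERVER/security/boot.properties
done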

Starting the Administration Server

To start the administration server:

  1. Navigate to the domain_home directory and start the administration server by typing:

    nohup ./startWebLogic.sh 2>&1 &
    
  2. Verify that the administration server starts properly by using the tail command with nohup.out (for example, tail -f nohup.out). A log entry should indicate that the server is running with the following line:

    Server started in RUNNING mode
    

Configuring Node Manager on all Machines in the Domain

This section describes how to configure the machines in your domain that will host Node Manager. Oracle recommends using Node Manager to automatically restart a managed server after an unexpected failure.

Editing the Properties File

For each machine that will host Node Manager, do the following:

  1. Open the WLS_home/common/nodemanager/nodemanager.properties file.

  2. Set the following values:

    StartScriptEnabled=true
    StopScriptEnabled=true
    Interface=ethx
    NetMask=networkmask
    UseMACBroadCast=true
    

    where:

    • x is the interface number for the floating IP.

      Do not specify the sub-interface, such as eth0:1 or eth0:2; specify the interface without the :0 or :1 suffix. For example, valid values in Linux environments are eth0, eth1, eth2, eth3, and so on, depending on the number of interfaces configured.

    • networkmask is the net mask for the interface hosting the floating IP (it must be the same as the net mask configured on the interface).

Setting the Environment Variable and Superuser Privileges

For each machine that will host Node Manager, do the following:

  1. Set the UNIX PATH environment variable to include the directories that contain the WebLogic Server files indicated in Table 7-1 (an example appears after this procedure).

    Table 7-1 Files Required for the PATH Environment Variable

    File/Folder       Located in this directory
    wlsifconfig.sh    domain_home/bin/server_migration
    wlscontrol.sh     WLS_home/common/bin
    nodemanager       WLS_home/common


  2. Grant sudo privileges for the wlsifconfig.sh script:

    1. Configure sudo to work without a password prompt.

    2. Grant sudo privilege to the WebLogic user (oracle) with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries.

    3. Make sure the script is executable by the WebLogic user.

      The following is an example of an entry in /etc/sudoers that grants the oracle user passwordless sudo execution of ifconfig and arping:

      oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
      
  3. Run the following command to verify the interface configuration:

    wlsifconfig.sh -listif eth0
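For example, you might extend the PATH in the oracle user's shell profile as follows; the domain and WebLogic Server installation paths are placeholders for your own locations:

# Add the server migration and Node Manager script directories to PATH
export PATH=$PATH:/opt/oracle/Middleware/user_projects/domains/cluster_demo/bin/server_migration
export PATH=$PATH:/opt/oracle/Middleware/wlserver_10.3/common/bin
export PATH=$PATH:/opt/oracle/Middleware/wlserver_10.3/common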
    

Enrolling Each Machine with the Domain

Each machine that will host Node Manager (which starts and stops the managed servers and the proxy server) must be enrolled with the domain.

To enroll each machine that will host Node Manager:

  1. Start the administration server.

  2. Run the WLS_home/common/bin/wlst.sh tool.

  3. At the command prompt, run the following commands:

    connect('username', 'pwd', 't3://IP_address:port')
    
    nmEnroll('domain_home', 'MW_home/wlserver_10.3/common/nodemanager')
    
    exit()
    

    where

    • username is the administrator user name for the WebLogic administration server.

    • pwd is the password for the WebLogic administration server.

    • IP_address is the IP address for the WebLogic administration server.

    • port is the port number for the WebLogic administration server.

Starting Node Manager on Each Machine

After Node Manager has been prepared on each machine, it is good practice to start it. On each machine, follow these steps:

  1. Navigate to WLS_home/server/bin/.

  2. Start Node Manager by issuing the following command:

    nohup ./startNodeManager.sh 2>&1 &
    
  3. Verify that Node Manager starts properly by using the tail command with nohup.out (for example, tail -f nohup.out). A log entry should indicate that Node Manager is running with the following line:

    INFO: Secure socket listener started on port port
    

    where port is the port number used by the Node Manager.

Starting and Verifying the Cluster

This section contains information about starting the WebLogic Server cluster and verifying the setup of the cluster.

Starting the Administration Server

To start the administration server:

  1. Go to domain_home/bin for your base domain.

  2. Run the following command to start the Administration server:

    startWebLogic.sh
    

Starting the Managed Servers and the Proxy Server

To start the managed servers and the proxy server:

  1. Go to domain_home/bin for your base domain on each machine running a managed server.

  2. Run the following command:

    startupscript.sh server_name http://IP_address:port
    

    where

    • startupscript is the name of the startup script for the managed server you are starting.

    • server_name is the name of the managed server or proxy server you are starting.

    • IP_address is the IP address of the administration server.

    • port is the port number of the administration server.
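For example, to start managed server OSM_MS1 with the startup script created earlier, where the administration server listens at 198.51.100.100:7001 (placeholder values):

nohup ./startOSM_MS1.sh OSM_MS1 http://198.51.100.100:7001 2>&1 &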

Verifying the Cluster Setup

To verify the cluster setup:

  1. Log in to the WebLogic Administration Console.

    The WebLogic Administration Console is displayed.

  2. Click Servers.

    The summary page shows which servers belong to the cluster along with their configured listen address, port, state, and health status.

Installing OSM in a Clustered Environment

To install OSM in a clustered environment:

  1. Start the administration server, at least one managed server in the cluster, and the proxy server (if used).

    Note:

    The OSM installer requires that at least one managed server is running in the cluster during the installation process. The OSM Administration server configures any remaining managed servers when they are started.
  2. Do one of the following:

    • Perform an interactive OSM installation using the procedure described in "Interactive Installation of OSM" with the following modifications:

      • On the WebLogic Server Connection Information window, enter the IP address and port number of the administration server.

      • On the BEA WebLogic Server/Cluster Selection window, select the cluster you created from the WebLogic Server/Cluster list, and click Next.

    • Perform a silent OSM installation as described in "Installing OSM in Silent Mode".

      If you are using an install_cfg.xml file generated by a non-clustered installation, you must set the Handler-Factory parameter to the following value:

      com.mslv.oms.handler.cluster.ClusteredHandlerFactory
      
  3. After OSM is installed, restart all the servers included in the cluster.

Adding or Removing Managed Servers in an OSM Cluster

The following sections describe how to add or remove a managed server in an OSM cluster.

Adding a New Managed Server to a Clustered Environment

To add a new managed server to a clustered environment:

  1. Ensure that the managed servers and the cluster are created, that the managed servers have been assigned to the cluster, and that OSM is installed on the cluster.

  2. Create a new managed server and do the following:

    1. If a proxy is used in the cluster configuration, add the new managed server's IP address and port number to the proxy server's configuration. For example, if you use a WebLogic HTTP proxy, update the WebLogicCluster parameter in the domain_home/apps/OracleProxy4_proxyserver_proxy/WEB-INF/web.xml file (where proxyserver is the proxy server name) and redeploy the proxy for the change to take effect.

    2. If the cluster address of the cluster is set, update it with the new managed server's IP address and port number.

  3. If you are adding a new managed server to an Oracle Real Application Cluster (Oracle RAC) active-active deployment, consider the following:

    Load balancing is achieved by dividing WebLogic managed servers into two groups that interact through a multi data source with the same primary/failover pair of Oracle RAC instances, but in reverse order. You must decide to which group the new managed server should belong, based on which Oracle RAC instance you want to be the primary one for that server (the other is used for failover).

    You then add the managed server to the targets of the multi data source and the regular data sources that correspond to that group; for example, osm_pool_group_a, osm_pool_rac1_group_a, and osm_pool_rac2_group_a. For load balancing purposes, Oracle recommends that you distribute managed servers evenly between the two groups (see Figure B-5, "Data Source Configuration for Oracle RAC Active-Active").

  4. Based on the persistent store you selected when you first installed OSM (for more information, see "Installing OSM in Interactive Mode"), do the following:

    1. If you use the default persistent store (not recommended), skip this step.

    2. If you use custom file stores, create a new file store for the new server.

    3. If you use JDBC stores, create a new JDBC store and associate it with the data source targeted to the new server. If you do not use Oracle RAC, this is the CP-oms_pool data source. If you use Oracle RAC, choose the multi data source targeted to the new server.

      Note:

      MS2 in the following steps indicates the name of the managed server.
  5. Create a JMS server with the following settings:

    • Name = oms_jms_server_MS2

      Note:

      The name of the JMS server must always be set to oms_jms_server_ManagedServerName where ManagedServerName is the exact name of the managed server.
    • Target = MS2

    • PersistentStore = oms_jms_store_MS2

      (required for custom file stores but not for the default persistent store or the JDBC store)

  6. Start the new managed server.

  7. Open the domain_home/config/config.xml file.

  8. In the section for the newly added server, make the following changes:

    • Within the <server> </server> element, add:

      <execute-queue> 
         <name>oms.automation</name> 
      </execute-queue> 
      <execute-queue> 
        <name>oms.web</name> 
      </execute-queue> 
      <execute-queue> 
        <name>oms.xml</name> 
      </execute-queue> 
      
    • Before the closing </server> tag, add:

      <web-service>
      <messaging-queue-mdb-run-as-principal-name>oms-internal</messaging-queue-mdb-run-as-principal-name>
      </web-service>
      
  9. Create a SubDeployment under the JMS module oms_jms_module with the following settings:

    • Name = oms_jms_server_MS2

    • Target = oms_jms_server_MS2

  10. Create the following Queues under the JMS module oms_jms_module (a scripted sketch for creating one of these queues appears at the end of this procedure):

    Queue

    • Name = oms_behavior_queue_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_behavior_queue.Quota

    • JNDIName = mslv/oms/oms1/internal/jms/behaviors_MS2

    Queue

    • Name = oms_cartridge_deploy_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_cartridge_deploy.Quota

    • JNDIName = mslv/provisioning/internal/ejb/deployCartridgeQueue_MS2

    Queue

    • Name = oms_events_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_events.Quota

    • JNDIName = mslv/oms/oms1/internal/jms/events_MS2

    Queue

    • Name = OrchestrationDependenciesQueue_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = OrchestrationDependenciesQueue.Quota

    • JNDIName = oracle/communications/ordermanagement/OrchestrationDependenciesQueue_MS2

    Topic

    • Name = oms_order_events_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_order_events.Quota

    • JNDIName = mslv/provisioning/external/orderevents_MS2

    Queue

    • Name = oms_order_updates_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_order_updates.Quota

    • JNDIName = mslv/provisioning/internal/ejb/orderupdates_MS2

    Queue

    • Name = oms_ws_requests_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_ws_requests.Quota

    • JNDIName = oracle/communications/ordermanagement/WebServiceQueue_MS2

    Queue

    • Name = oms_ws_cluster_requests_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_ws_cluster_requests.Quota

    • JNDIName = oracle/communications/ordermanagement/WebServiceClusterRequestQueue_MS2

    Queue

    • Name = oms_ws_cluster_responses_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_ws_cluster_responses.Quota

    • JNDIName = oracle/communications/ordermanagement/WebServiceClusterResponseQueue_MS2

    Queue

    • Name = oms_ws_cluster_correlates_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_ws_cluster_correlates.Quota

    • JNDIName = oracle/communications/ordermanagement/WebServiceClusterCorrelateQueue_MS2

  11. Set the JMSPriority destination key for the oms_ws_requests_MS2 queue as follows:

    1. In the WebLogic Server Administration Console, in the Domain Structure tree, expand Services, then expand Messaging, and select JMS Modules.

    2. From the JMS Modules table, select oms_jms_module.

    3. From the Summary of Resources table, select oms_ws_requests_MS2.

    4. In the Destination Keys section, move DestinationKey-JMSPriority to the Chosen list.

    5. Save the destination key settings.

  12. Create the following Topic under the JMS module oms_jms_module:

    • Name = oms_signal_topic_MS2

    • SubDeploymentName = oms_jms_server_MS2

    • Quota = oms_signal_topic.Quota

    • JNDIName = mslv/oms/oms1/internal/jms/InternalSignalTopic_MS2

  13. Add the following member queues and topics to the following distributed queues:

    • Add member queue oms_behavior_queue_MS2 to distributed queue oms_distributed_behavior_queue

    • Add member queue oms_events_MS2 to distributed queue oms_distributed_events_queue

    • Add member queue OrchestrationDependenciesQueue_MS2 to distributed queue oms_distributed_orchestration_dependencies_queue

    • Add member topic oms_order_events_MS2 to distributed topic oms_distributed_order_events_queue

    • Add member queue oms_order_updates_MS2 to distributed queue oms_distributed_order_updates_queue

    • Add member queue oms_ws_requests_MS2 to distributed queue oms_distributed_ws_requests_queue

    • Add member queue oms_ws_cluster_requests_MS2 to distributed queue oms_distributed_cluster_ws_requests_proxy_queue

    • Add member queue oms_ws_cluster_responses_MS2 to distributed queue oms_distributed_cluster_ws_response_proxy_queue

    • Add member queue oms_ws_cluster_correlates_MS2 to distributed queue oms_distributed_cluster_ws_correlates_queue

    • Add member topic oms_signal_topic_MS2 to distributed topic oms_distributed_signal_topic

  14. If an Order-to-Activate cartridge is already deployed, run the ant task config_All.

  15. Add the new managed server's well-known address (IP address and port number) to the Coherence configuration file (see "About Configuring Coherence for an OSM Installation").
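The queue and SubDeployment configuration in steps 9 through 12 can also be scripted with WLST instead of using the Administration Console. The following minimal sketch creates one of the queues online; the connection credentials and URL are placeholders, and the quota and destination key assignments shown in the steps above are omitted:

connect('username', 'pwd', 't3://IP_address:port')
edit()
startEdit()
# Navigate to the JMS resource of the oms_jms_module JMS module
cd('/JMSSystemResources/oms_jms_module/JMSResource/oms_jms_module')
# Create the per-server queue and wire it to the new SubDeployment
queue = cmo.createQueue('oms_behavior_queue_MS2')
queue.setJNDIName('mslv/oms/oms1/internal/jms/behaviors_MS2')
queue.setSubDeploymentName('oms_jms_server_MS2')
save()
activate()
disconnect()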

Removing a Managed Server from a Clustered Environment

Before you can remove an OSM WebLogic server from a cluster, you must delete the following queues (where ManagedServer is the name of the server you want to remove):

  • oms_behavior_queue_ManagedServer

  • oms_cartridge_deploy_ManagedServer

  • oms_events_ManagedServer

  • oms_order_updates_ManagedServer

  • oms_ws_cluster_correlates_ManagedServer

  • oms_ws_cluster_requests_ManagedServer

  • oms_ws_cluster_responses_ManagedServer

  • oms_ws_requests_ManagedServer

  • OrchestrationDependenciesQueue_ManagedServer

Before you can remove these queues, you must remove their membership in the following distributed queues:

  • oms_distributed_behavior_queue

  • oms_distributed_cluster_ws_correlates_queue

  • oms_distributed_cluster_ws_requests_proxy_queue

  • oms_distributed_cluster_ws_response_proxy_queue

  • oms_distributed_events_queue

  • oms_distributed_orchestration_dependencies_queue

  • oms_distributed_order_updates_queue

  • oms_distributed_ws_requests_queue

If you have installed an Order-to-Activate cartridge, you must remove the following queues:

  • OSM_FalloutQueue_ManagedServer

  • OSM_PIPFalloutQueue_ManagedServer

  • OSM_ORPFalloutQueue_ManagedServer

  • OSM_InBoundMessageRecoveryQueue_ManagedServer

  • OSM_WebServiceFindTroubleTicketCFResponseQueue_ManagedServer

  • OSM_WebServiceFalloutCFResponseQueue_ManagedServer

  • AIA_CreateCustomerQueue_ManagedServer

  • AIA_CreateCustomerResponseQueue_ManagedServer

  • AIA_CreateProvisioningOrderQueue_ManagedServer

  • AIA_CreateProvisioningOrderResponseQueue_ManagedServer

  • AIA_CreateBillingOrderQueue_ManagedServer

  • AIA_CreateBillingOrderResponseQueue_ManagedServer

  • AIA_UpdateFulfillmentOrderQueue_ManagedServer

  • AIA_UpdateSalesOrderQueue_ManagedServer

  • AIA_CreateTroubleTicketRequestQueue_ManagedServer

  • AIA_CreateTroubleTicketResponseQueue_ManagedServer

  • AIA_UpdateTroubleTicketRequestQueue_ManagedServer

  • OSM_OrderActivityQueue_ManagedServer

  • OSM_WebServiceFalloutLFResponseQueue_ManagedServer

  • AIA_CreateErrorFaultQueue_ManagedServer

  • OSM_WebServiceResponseQueue_ManagedServer

  • OSM_ServiceProvisioningUpdateQueue_ManagedServer

  • OSM_LFAbortOrderPropagationRespQueue_ManagedServer

  • OSM_WebServiceRetryResponseQueue_ManagedServer

In the same way, before you can remove these Order-to-Activate queues, you must remove their membership in the following Order-to-Activate distributed queues:

  • Distributed_FalloutQueue

  • Distributed_OSM_PIPFalloutQueue

  • Distributed_OSM_ORPFalloutQueue

  • Distributed_OSM_InBoundMessageRecoveryQueue

  • Distributed_OSM_WebServiceFindTroubleTicketCFResponseQueue

  • Distributed_OSM_WebServiceFalloutCFResponseQueue

  • Distributed_AIA_CreateCustomerQueue

  • Distributed_AIA_CreateCustomerResponseQueue

  • Distributed_AIA_CreateProvisioningOrderQueue

  • Distributed_AIA_CreateProvisioningOrderResponseQueue

  • Distributed_AIA_CreateBillingOrderQueue

  • Distributed_AIA_CreateBillingOrderResponseQueue

  • Distributed_AIA_UpdateFulfillmentOrderQueue

  • Distributed_AIA_UpdateSalesOrderQueue

  • Distributed_AIA_CreateTroubleTicketRequestQueue

  • Distributed_AIA_CreateTroubleTicketResponseQueue

  • Distributed_AIA_UpdateTroubleTicketRequestQueue

  • Distributed_OSM_OrderActivityQueue

  • Distributed_OSM_WebServiceFalloutLFResponseQueue

  • Distributed_AIA_CreateErrorFaultQueue

  • Distributed_OSM_WebServiceResponseQueue

  • Distributed_OSM_ServiceProvisioningUpdateQueue

  • Distributed_OSM_LFAbortOrderPropagationRespQueue

  • Distributed_OSM_WebServiceRetryResponseQueue

To remove an OSM managed server from a WebLogic cluster:

  1. Shut down the Oracle database server or servers used by your OSM instance.

  2. Log in to the WebLogic Administration console.

    The WebLogic Administration Console is displayed.

  3. Click Environment.

  4. Click Servers.

    The Summary of Servers screen is displayed.

  5. Click Control.

  6. Select all managed servers. Do not select the administration server (followed by "(admin)" in the list).

    Note:

    Note which OSM managed server you want to remove.
  7. Click Shutdown.

  8. Select Force Shutdown Now.

    The State changes from RUNNING to SHUTDOWN.

  9. Click Services.

  10. Click JMS Modules.

    The JMS Modules screen is displayed.

  11. Click oms_jms_module.

    The Settings for oms_jms_module screen is displayed.

  12. For every OSM or Order-to-Activate distributed queue, do the following:

    1. Click the distributed_queue, where distributed_queue is the name of the OSM distributed queue.

      The Settings for distributed_queue screen is displayed.

    2. Click Members.

      The Distributed Queue Members screen is displayed.

    3. Select the queue_ManagedServer, where queue is the OSM or Order-to-Activate queue and ManagedServer is the managed server you want to remove.

    4. Click Delete.

  13. Click Services.

  14. Click JMS Modules.

    The JMS Modules screen is displayed.

  15. Click oms_jms_module.

    The Settings for oms_jms_module screen is displayed.

  16. Delete every queue_ManagedServer or topic_ManagedServer, where queue and topic are the names of the OSM queues and topics.

  17. Click Subdeployments.

    The Subdeployments screen is displayed.

  18. Select the oms_jms_server_ManagedServer you want to delete.

    Note:

    You cannot delete a managed server subdeployment until there are no resources associated with it. If any resources still appear, either delete the resource if it is a queue, or remove the managed server from the member list if it is a distributed queue.
  19. Click Delete.

  20. Click Services.

  21. Click JMS Servers.

    The Summary of JMS Servers screen is displayed.

  22. Select the oms_jms_server_ManagedServer you want to delete.

  23. Click Delete.

  24. Click Environment.

  25. Click Servers.

    The Summary of Servers screen is displayed.

  26. Select the managed server you want to delete.

  27. Click Delete.

  28. Click Environment.

  29. Click Clusters.

    The Summary of Clusters screen is displayed.

  30. Click the name of the cluster that the managed server you deleted was associated with.

  31. In the Clusters Address field, remove the IP address and port number for the managed server you deleted.

  32. In the Number of Servers In Cluster Address field, reduce the number by one.

  33. Click Save.

  34. If a proxy is used in the cluster configuration, delete the managed server's IP address and port number from the proxy server's configuration. For example, if you use a WebLogic HTTP proxy, update the WebLogicCluster parameter in the domain_home/apps/OracleProxy4_proxyserver_proxy/WEB-INF/web.xml file (where proxyserver is the proxy server name) and redeploy the proxy.