8 Extending the Domain with Oracle I/PM

This chapter describes how to extend a domain with Oracle Imaging and Process Management (Oracle I/PM) using the Oracle Fusion Middleware Configuration Wizard.

Important:

Oracle strongly recommends that you read the release notes for any additional installation and deployment considerations prior to starting the setup process.

8.1 About Adding Oracle I/PM to a Domain

The Oracle Imaging and Process Management (Oracle I/PM) system is installed using the WL_HOME and ORACLE_HOME locations created in Chapter 3, "Installing the Software" on shared storage. ECMHOST1 and ECMHOST2 mount MW_HOME and reuse the existing WLS, SOA, and ECM binary installations. The pack and unpack utilities are used to bootstrap the domain configuration for the WLS_IPM1 and WLS_IPM2 servers on these two new nodes, so you do not need to install any software on them. For the I/PM system to work properly, ECMHOST1 and ECMHOST2 must maintain the same system requirements and configuration that were required for installing Oracle Fusion Middleware on SOAHOST1 and SOAHOST2. Otherwise, unpredictable behavior in the execution of binaries may occur.

8.2 Enabling VIP4 and VIP5 in ECMHOST1 and ECMHOST2

The I/PM system uses virtual host names as the listen addresses for the managed servers on which it runs. These virtual host names and corresponding virtual IPs are required to enable server migration for the I/PM component. You must enable VIP4, mapping to ECMHOST1VHN1 on ECMHOST1, and VIP5, mapping to ECMHOST2VHN1 on ECMHOST2, and the host names must resolve correctly in the network system used by the topology (either through a DNS server or hosts resolution).

To enable the VIPs, follow the example described in Section 6.2, "Enabling SOAHOST1VHN1 on SOAHOST1 and SOAHOST2VHN1 on SOAHOST2."
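
For reference, the following is a minimal sketch of enabling VIP4 on a Linux host. The interface name (eth0), alias index (:1), and the VIP4_ADDRESS and NETMASK values are placeholders for your environment, and Section 6.2 remains the authoritative procedure. Run the commands as root, and repeat on ECMHOST2 with VIP5 and ECMHOST2VHN1.

    ECMHOST1> /sbin/ifconfig eth0:1 VIP4_ADDRESS netmask NETMASK
    ECMHOST1> /sbin/arping -q -U -c 3 -I eth0 VIP4_ADDRESS

The arping command broadcasts the new IP-to-MAC mapping so that other hosts on the subnet update their ARP caches.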

8.3 Extending the Domain to Include Oracle I/PM

You extend the domain configured in Chapter 7, "Extending the Domain with Oracle UCM" to include Oracle I/PM. The instructions in this section assume that the Oracle I/PM deployment uses the same database service as the Oracle UCM deployment (ecmedg.mycompany.com). However, a deployment may choose to use a different database service specifically for Oracle I/PM.

Note:

Before performing these steps, back up the domain as described in the Oracle Fusion Middleware Administrator's Guide.

Perform these steps to extend the domain to include Oracle I/PM:

  1. Ensure that the database where you installed the repository is running. For Oracle RAC databases, it is recommended that all instances be running, so that the validation check performed later is more reliable.

  2. Change the directory to the location of the Oracle Fusion Middleware Configuration Wizard:

    SOAHOST1> cd ORACLE_COMMON_HOME/common/bin
    
  3. Start the Configuration Wizard:

    SOAHOST1> ./config.sh
    
  4. In the Welcome screen, select Extend an existing WebLogic domain, and click Next.

  5. In the WebLogic Domain Directory screen, select the WebLogic domain directory (ORACLE_BASE/admin/domain_name/aserver/domain_name), and click Next.

  6. In the Select Extension Source screen, do the following:

    • Select Extend my domain automatically to support the following added products.

    • Select the following product:

      • Oracle Image and Process Management - 11.1.1.0 [ecm]

    Click Next.

  7. The Configure JDBC Component Schema screen opens (Figure 8-1).

    Figure 8-1 Configure JDBC Component Schema Screen for Oracle I/PM

    In the Configure JDBC Component Schema screen, do the following:

    • Select IPM Schema only. Do not select any of the other existing schemas!

    • Select Configure selected component schemas as RAC multi data source schemas in the next panel.

    Click Next.

  8. The Configure RAC Multi Data Sources Component Schema screen opens (Figure 8-2).

    Figure 8-2 Configure RAC Multi Data Source Component Schema Screen for Oracle I/PM

    In the Configure RAC Multi Data Sources Component Schema screen, do the following:

    1. Select IPM Schema. Leave the other data sources as they are.

    2. Enter values for the following fields, specifying the connect information for the Oracle RAC database that was seeded with RCU:

      • Driver: Select Oracle driver (Thin) for RAC Service-Instance connections, Versions: 10 and later.

      • Service Name: Enter the service name of the database (ecmedg.mycompany.com).

      • Username: Enter the complete user name (including the prefix) for the schemas. The user names shown in Figure 8-2 assume that DEV was used as the prefix for schema creation from RCU.

      • Password: Enter the password to use to access the schemas.

    3. Click Add and enter the details for the first Oracle RAC instance.

    4. Repeat the previous step for each Oracle RAC instance.

      Note:

      Leave the UCM Schema, SOA Infrastructure, User Messaging Service, OWSM MDS Schema, and SOA MDS Schema information as they are.
    5. Click Next.

  9. In the Test JDBC Data Sources screen, the connections should be tested automatically. The Status column displays the results. Ensure that all connections were successful. If not, click Previous to return to the previous screen and correct your entries.

    Click Next when all the connections are successful.

  10. In the Optional Configuration screen, select the following:

    • JMS Distributed Destination

    • Managed Servers, Clusters and Machines

    • Deployment and Services

    Click Next.

  11. In the Select JMS Distributed Destination Type screen, select UDD from the drop-down list for the JMS modules of all Oracle Fusion Middleware components. Click Next.

    If an override warning appears, click OK to acknowledge it.

  12. In the Configure Managed Servers screen, add the required managed servers. A server called 'ipm_server1' is created automatically. Rename this to WLS_IPM1 and add a new server called WLS_IPM2. Give these servers the attributes listed in Table 8-1. Do not modify the other servers that appear in this screen; leave them as they are.

    Table 8-1 Managed Servers

    Name        Listen Address    Listen Port    SSL Listen Port    SSL Enabled
    WLS_IPM1    ECMHOST1VHN1      16000          n/a                No
    WLS_IPM2    ECMHOST2VHN1      16000          n/a                No


    Click Next.

  13. In the Configure Clusters screen, click Add to add the clusters as shown in Table 8-2. Do not modify the other clusters that appear in this screen; leave them as they are.

    Table 8-2 Clusters

    Name           Cluster Messaging Mode    Multicast Address    Multicast Port    Cluster Address
    IPM_Cluster    unicast                   n/a                  n/a               Leave empty


    Click Next.

  14. In the Assign Servers to Clusters screen, add the following. Do not modify the other assignments that appear in this screen; leave them as they are.

    • IPM_Cluster:

      • WLS_IPM1

      • WLS_IPM2

    Click Next.

  15. In the Configure Machines screen, click the Unix Machine tab. Verify that the ECMHOST1 and ECMHOST2 machines appear and that the tab has the following entries:

    Table 8-3 Machines

    Name         Node Manager Listen Address
    SOAHOST1     SOAHOST1
    SOAHOST2     SOAHOST2
    ADMINHOST    localhost
    ECMHOST1     ECMHOST1
    ECMHOST2     ECMHOST2


    Leave all other fields at their default values. Click Next.

  16. In the Assign Servers to Machines screen, assign servers to machines as follows:

    • Assign WLS_IPM1 to ECMHOST1.

    • Assign WLS_IPM2 to ECMHOST2.

    Click Next.

  17. In the Target Deployments to Clusters or Servers screen, ensure the following targets:

    • usermessagingserver and usermessagingdriver-email should be targeted only to SOA_Cluster. (The usermessaging-xmpp, usermessaging-smpp, and usermessaging-voicexml applications are optional.)

    • WSM-PM should be targeted only to SOA_Cluster.

    • The oracle.rules*, oracle.sdp.* and oracle.soa.* deployments should be targeted to SOA_Cluster only, except for the oracle.soa.workflow.wc library, which should be targeted to both SOA_Cluster and IPM_Cluster.

    • The oracle.wsm.seedpolicies library should be targeted to SOA_Cluster and IPM_Cluster (and any servers expected to host WSM-PM protected web services).

    Click Next.

  18. In the Target Service to Cluster or Servers screen, target JOC Startup Class and JOC Shutdown Class only to SOA_Cluster.

    Click Next.

  19. In the Configuration Summary screen, click Extend.

  20. If a dialog window appears warning about conflicts in ports for the domain, click OK. This warning is caused by the pre-existing servers in the nodes and can be ignored.

  21. In the Creating Domain screen, click Done.

  22. Restart the Administration Server to make these changes take effect. See Section 8.6, "Restarting the Administration Server."

8.4 Propagating the Domain Configuration to ECMHOST1 and ECMHOST2 Using the unpack Utility

Perform these steps to propagate the domain configuration:

  1. Run the pack command on SOAHOST1 to create a template pack using the following commands:

    SOAHOST1> cd ORACLE_COMMON_HOME/common/bin
    
    SOAHOST1> ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name -template=edgdomaintemplateIPM.jar -template_name=edgdomain_templateIPM
    
  2. Copy the template to ECMHOST2:

    Note:

    If ECMHOST1 shares the ORACLE_HOME with SOAHOST1, the template is already present in the same directory on ECMHOST1; otherwise, copy it to ECMHOST1 as well.
    SOAHOST1> scp edgdomaintemplateIPM.jar oracle@ECMHOST2:ORACLE_BASE/product/fmw/oracle_common/common/bin
    
  3. Run the unpack command on ECMHOST1 to unpack the propagated template.

    Note:

    Make sure to run unpack from the ORACLE_COMMON_HOME/common/bin directory, not from WL_HOME/common/bin.
    ECMHOST1> cd ORACLE_COMMON_HOME/common/bin
    
    ECMHOST1> ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name -template=edgdomaintemplateIPM.jar -app_dir=ORACLE_BASE/admin/domain_name/mserver/applications -overwrite_domain=true
    

    Note:

    The ORACLE_BASE/admin/domain_name/mserver directory must exist before running unpack. In addition, the ORACLE_BASE/admin/domain_name/mserver/applications directory must be empty. (A quick check for both prerequisites is sketched after these steps.)
  4. Repeat step 3 for ECMHOST2.
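
Before running unpack on either ECM host, a quick check such as the following confirms the prerequisites called out in the note above. This is a hedged sketch; the paths simply follow this guide's ORACLE_BASE/admin/domain_name/mserver conventions.

    ECMHOST1> mkdir -p ORACLE_BASE/admin/domain_name/mserver/applications
    ECMHOST1> ls -A ORACLE_BASE/admin/domain_name/mserver/applications

The ls -A command should return no output; if it lists any files, clear the applications directory before running unpack.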

8.5 Starting Node Manager on ECMHOST1 and ECMHOST2

Perform these steps to start Node Manager on ECMHOST1 and ECMHOST2 if Node Manager has not started already:

  1. On each server, run the setNMProps.sh script, which is located in the ORACLE_COMMON_HOME/common/bin directory, to set the StartScriptEnabled property to 'true' before starting Node Manager:

    ECMHOSTn> cd ORACLE_COMMON_HOME/common/bin
    ECMHOSTn> ./setNMProps.sh
    

    Note:

    You must use the StartScriptEnabled property to avoid class loading failures and other problems. See also Section 12.8.3, "Incomplete Policy Migration After Failed Restart of SOA Server."

    Note:

If the I/PM server shares the MW_HOME on local or shared storage with UCM, as suggested in the shared storage configuration described in Chapter 2, "Database and Environment Preconfiguration," you do not need to run setNMProps.sh again. In this case, Node Manager has already been configured to use a start script and is likely already running on the node for UCM.
  2. Run the following commands on both ECMHOST1 and ECMHOST2 to start Node Manager:

    ECMHOSTn> cd WL_HOME/server/bin
    ECMHOSTn> ./startNodeManager.sh
    

8.6 Restarting the Administration Server

Restart the Administration Server to make these changes take effect. To restart the Administration Server, stop it first using the Administration Console and then start it again as described in Section 5.5, "Starting the Administration Server on SOAHOST1."
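
If you prefer the command line, the restart can also be sketched as follows. This assumes the Administration Server domain directory layout used throughout this guide (aserver); Section 5.5 remains the authoritative start procedure, including any boot.properties and Node Manager considerations.

    SOAHOST1> cd ORACLE_BASE/admin/domain_name/aserver/domain_name/bin
    SOAHOST1> ./stopWebLogic.sh
    SOAHOST1> nohup ./startWebLogic.sh > admin_server.out 2>&1 &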

8.7 Configuring a JMS Persistence Store for Oracle I/PM JMS

Configure the location for the JMS persistence stores as a directory that is visible from both nodes. By default, the JMS servers used by Oracle I/PM are configured with no persistent store and use the WebLogic Server default store (ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/data/store/default). You must change each Oracle I/PM JMS server's persistent store to use a shared base directory as follows:

  1. Log in to the Oracle WebLogic Server Administration Console.

  2. In the Domain Structure window, expand the Services node and then click the Persistence Stores node. The Summary of Persistence Stores page opens.

  3. Click Lock & Edit.

  4. Click New, and then Create File Store.

  5. Enter a name (for example, 'IPMJMSServer1Store', which allows you to identify the service it is created for) and target WLS_IPM1. Enter a directory that is located in shared storage so that it is accessible from both ECMHOST1 and ECMHOST2 (ORACLE_BASE/admin/domain_name/ipm_cluster/jms).

  6. Click OK and activate the changes.

  7. In the Domain Structure window, expand the Services node and then click the Messaging->JMS Servers node. The Summary of JMS Servers page opens.

  8. Click on the IpmJmsServer1 JMS Server (represented as a hyperlink) from the Name column of the table. The Settings page for the JMS server opens.

  9. Click Lock & Edit.

  10. In the Persistent Store drop-down list, select IPMJMSServer1Store.

  11. Click Save and activate the changes.

  12. Repeat these steps to create IPMJMSServer2Store for IpmJmsServer2, targeting WLS_IPM2.
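
If you prefer to script this configuration, the following WLST sketch performs the equivalent steps for the first store. The administrator credentials, the admin URL, and the shared directory are assumptions to adjust for your environment, and the file name (create_ipm_stores.py) is only an example; run it with ORACLE_COMMON_HOME/common/bin/wlst.sh.

    # create_ipm_stores.py - sketch only; adjust credentials, URL, and paths
    connect('weblogic', 'admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()
    # Create a file store on shared storage and target it to WLS_IPM1
    cd('/')
    cmo.createFileStore('IPMJMSServer1Store')
    cd('/FileStores/IPMJMSServer1Store')
    cmo.setDirectory('ORACLE_BASE/admin/domain_name/ipm_cluster/jms')
    cmo.addTarget(getMBean('/Servers/WLS_IPM1'))
    # Attach the new store to the I/PM JMS server
    cd('/JMSServers/IpmJmsServer1')
    cmo.setPersistentStore(getMBean('/FileStores/IPMJMSServer1Store'))
    save()
    activate()
    disconnect()

Repeat the same pattern for IPMJMSServer2Store, IpmJmsServer2, and WLS_IPM2.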

8.8 Configuring a Default Persistence Store for Transaction Recovery

Each server has a transaction log that stores information about committed transactions that are coordinated by the server and that may not have been completed. WebLogic Server uses this transaction log for recovery from system crashes or network failures. To leverage the migration capability of the Transaction Recovery Service for the servers within a cluster, store the transaction log in a location accessible to the server and to the other servers in the cluster.

Note:

Preferably, this location should be a dual-ported SCSI disk or on a Storage Area Network (SAN).

Perform these steps to set the location for the default persistence store for WLS_IPM1:

  1. Log in to the Oracle WebLogic Server Administration Console.

  2. In the Domain Structure window, expand the Environment node and then click the Servers node. The Summary of Servers page opens.

  3. Click WLS_IPM1 (represented as a hyperlink) in the Name column of the table. The settings page for the WLS_IPM1 server opens with the Configuration tab active.

  4. Click the Services tab.

  5. Click Lock & Edit.

  6. In the Default Store section of the page, enter the path to the folder where the default persistent store will store its data files. The directory structure of the path is as follows:

    ORACLE_BASE/admin/domain_name/ipm_cluster_name/tlogs
    
  7. Click Save and activate the changes.

  8. Repeat these steps for the WLS_IPM2 server.

Note:

To enable migration of the Transaction Recovery Service, specify a location on a persistent storage solution that is available to other servers in the cluster. Both ECMHOST1 and ECMHOST2 must be able to access this directory. This directory must also exist before you restart the server.
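
Because the directory must exist before the servers are restarted, create it up front on the shared storage. The following is a minimal sketch, assuming ORACLE_BASE/admin/domain_name is on storage mounted by both ECMHOST1 and ECMHOST2:

    ECMHOST1> mkdir -p ORACLE_BASE/admin/domain_name/ipm_cluster_name/tlogs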

8.9 Disabling Host Name Verification for the WLS_IPM Managed Servers

This step is required if you have not set up the appropriate certificates to authenticate the different nodes with the Administration Server (see Chapter 9, "Setting Up Node Manager"). If you have not configured the server certificates, you will receive errors when managing the different WebLogic Servers. To avoid these errors, disable host name verification while setting up and validating the topology, and enable it again once the EDG topology configuration is complete as described in Chapter 9, "Setting Up Node Manager."

Perform these steps to disable host name verification:

  1. Log in to Oracle WebLogic Server Administration Console.

  2. In the Administration Console, select WLS_IPM1, then SSL, and then Advanced.

  3. Click Lock & Edit.

  4. Set host name verification to 'None'.

  5. In the Administration Console, select WLS_IPM2, then SSL, and then Advanced.

  6. Set host name verification to 'None'.

  7. Save and activate the changes.
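
If you prefer scripting this change, the following WLST sketch applies the equivalent setting (HostnameVerificationIgnored) to both servers. The credentials, admin URL, and file name are assumptions; run it with ORACLE_COMMON_HOME/common/bin/wlst.sh.

    # disable_hostname_verification.py - sketch only
    connect('weblogic', 'admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()
    for server in ('WLS_IPM1', 'WLS_IPM2'):
        # 'None' in the console corresponds to HostnameVerificationIgnored=true
        cd('/Servers/%s/SSL/%s' % (server, server))
        cmo.setHostnameVerificationIgnored(true)
    save()
    activate()
    disconnect()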

8.10 Starting the Oracle I/PM System

Perform these steps to start the WLS_IPM1 managed server on ECMHOST1:

  1. Start the WLS_IPM1 managed servers:

    1. Log in to the Oracle WebLogic Server Administration Console at http://ADMINVHN:7001/console.

    2. In the Domain Structure window, expand the Environment node and then select Servers. The Summary of Servers page opens.

    3. Click the Control tab.

    4. Select WLS_IPM1 from the Servers column of the table.

    5. Click Start.

  2. Access http://ECMHOST1VHN1:16000/imaging to verify the status of WLS_IPM1. The Imaging and Process Management login page appears. Enter your Oracle WebLogic Server administration user name and password to log in.

    If errors occur during startup, verify that the PROCESSES initialization parameter for the database is set to a high enough value; this problem often occurs when you start servers subsequent to the first server. See Section 2.1.1.3, "Initialization Parameters" for details. A quick way to check the current value is sketched after these steps.

  3. Start the WLS_IPM2 managed servers:

    1. Log in to the Oracle WebLogic Server Administration Console at http://ADMINVHN:7001/console.

    2. In the Domain Structure window, expand the Environment node and then select Servers. The Summary of Servers page opens.

    3. Click the Control tab.

    4. Select WLS_IPM2 from the Servers column of the table.

    5. Click Start.

  4. Access http://ECMHOST2VHN1:16000/imaging to verify the status of WLS_IPM2. The Imaging and Process Management login page appears. Enter your Oracle WebLogic Server administration user name and password to log in.
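
One way to check the current PROCESSES value mentioned in step 2 is to query the database directly. In the sketch below, the DBHOST1 prompt and the use of operating system authentication are assumptions; connect to one of the Oracle RAC nodes in whatever way your database administrator prefers.

    DBHOST1> sqlplus / as sysdba
    SQL> show parameter processes

The show parameter command lists every initialization parameter whose name contains "processes"; compare the processes value against the recommendation in Section 2.1.1.3.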

Note:

These instructions assume that host name verification was disabled previously for the WSM-PM or SOA managed servers in SOAHOST2 and that Node Manager is already running on SOAHOST2.

8.11 Configuring System MBeans for Oracle I/PM

Perform these steps to configure the following system MBeans for Oracle I/PM:

  • InputDirectories

  • SampleDirectory

  • GDFontPath

  1. Log in to Oracle Fusion Middleware Control at http://ADMINVHN:7001/em (Figure 8-3).

    Figure 8-3 System MBean Browser

  2. In the left pane, expand the farm domain name, then expand WebLogic Domain, then the domain name, then IPM Cluster, and then click the "WLS_IPM1" link.

  3. At the top of the right pane, click the WebLogic Server drop-down menu and choose System MBean Browser.

  4. Expand Application Defined MBeans and then oracle.imaging.

  5. Expand Server: WLS_IPM1 and then config.

  6. Click the config bean link.

  7. In the right pane, set the InputDirectories MBean to specify the path to the input files: ORACLE_BASE/admin/domain_name/ipm_cluster_name/input_files.

    Please note that all Oracle UCM servers involved must be able to resolve this location (that is, via the NFS mount point).

  8. Set the SampleDirectory MBean: ORACLE_BASE/admin/domain_name/ipm_cluster_name/input_files/Samples.

    To process input files, the input agent must have the appropriate permissions on the input directory, and the input directory must allow file locking. The input agent requires that the user account running the WebLogic Server service have read and write privileges to the input directory and to all files and subdirectories in it. These privileges are required so that the input agent can move the files to the various directories as it works on them. File locking on the share is needed by the input agent to coordinate actions between servers in the cluster. (A directory-creation sketch follows these steps.)

  9. Set the GDFontPath MBean to specify the path to the GD fonts for the X Windows environment. Check with your system administrator for the correct location; typical defaults are /usr/share/X11/fonts/TTF and /usr/lib/X11/fonts/TTF.

  10. Click Apply.
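
As noted in the InputDirectories step above, the account running WebLogic Server needs read and write access to the input tree on shared storage, and the share must support file locking. The following is a minimal sketch for creating the directories; the oracle user, the oinstall group, and the 775 mode are assumptions, so substitute the ownership and permissions used in your environment.

    ECMHOST1> mkdir -p ORACLE_BASE/admin/domain_name/ipm_cluster_name/input_files/Samples
    ECMHOST1> chown -R oracle:oinstall ORACLE_BASE/admin/domain_name/ipm_cluster_name/input_files
    ECMHOST1> chmod -R 775 ORACLE_BASE/admin/domain_name/ipm_cluster_name/input_files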

8.12 Enabling Oracle I/PM in Oracle UCM

Perform these steps to enable Oracle I/PM in Oracle UCM:

  1. Log in to Oracle Content Server at http://ECMHOST1:16200/cs.

  2. Open the Administration tray or menu, and choose Admin Server. The Component Manager page opens.

  3. Enable the IpmRepository component.

  4. Click Update and confirm the action.

  5. Restart the managed server, and then restart all managed servers in the UCM cluster.

8.13 Adding the Oracle I/PM Server Listen Addresses to the List of Allowed Hosts in Oracle UCM

Perform these steps to add the host names of the WLS_IPM1 and WLS_IPM2 managed servers (ECMHOST1VHN1 and ECMHOST2VHN1, respectively) to the SocketHostNameSecurityFilter parameter list:

  1. Open the file ORACLE_BASE/admin/domain_name/ucm_cluster/cs/config/config.cfg in a text editor.

  2. Remove or comment out the following line:

    SocketHostAddressSecurityFilter=127.0.0.1|ECMHOST1|ECMHOST2|WEBHOST1|WEBHOST2
    
  3. Add the following two lines to include the WLS_IPM1 and WLS_IPM2 listen addresses to the list of addresses that are allowed to connect to Oracle UCM:

    SocketHostNameSecurityFilter=localhost|localhost.mycompany.com|ECMHOST1|ECMHOST2|ECMHOST1VHN1|ECMHOST2VHN1
    AlwaysReverseLookupForHost=Yes
    
  4. Save the modified config.cfg file and restart the UCM servers for the changes to take effect.

8.14 Creating a Connection to UCM System

Perform these steps to create a connection to the Oracle UCM system:

  1. Log in to WLS_IPM1 imaging console at http://ECMHOST1VHN1:16000/imaging.

  2. In the left-hand pane, click Manage Connections, and then Create Content Server Connection.

  3. Enter a name and description for the new connection, and then click Next.

  4. In the Connection Settings screen, do the following:

    • Make sure the Use Local Content Server check box is selected.

    • Set the Content Server port to 4444.

    • Add two servers to the Content Server pool:

      • ECMHOST1:4444

      • ECMHOST2:4444

    Click Next.

  5. In the Connection Security screen, leave the default selections for the WebLogic user, and then click Next.

  6. Review the connection details and click Submit.

8.15 Configuring BPEL CSF Credentials

When connecting to a BPEL system from Oracle I/PM, you must configure the credential that Oracle I/PM uses to communicate with the SOA system. To add this credential, follow these steps:

  1. Change directory to the common/bin location under the ECM Oracle Home on SOAHOST1 (where your Administration Server resides):

    SOAHOST1>cd ORACLE_HOME/common/bin
    

    (ORACLE_HOME is the ECM home under MW_HOME/ecm.)

  2. Run the Oracle WebLogic Scripting Tool (WLST):

    SOAHOST1>./wlst.sh
    
  3. Run connect() and supply the username, password, and administration server URL (t3://ADMINVHN:7001).

    wls:/offline> connect()
    
  4. Create a CSF (Credential Store Framework) credential. This credential is the credential that I/PM will use to connect to the BPEL system. It should be a BPEL admin user. CSF credentials are username/password pairs that are keyed by an "alias" and stored inside a named "map" in the CSF. Because of its integration with OWSM web services, Oracle I/PM is currently leveraging the standard OWSM CSF map named "oracle.wsm.security". To create a credential, use the createCred WLST command:

    wls:/ecm_domain/serverConfig> createCred(map="oracle.wsm.security", key="basic.credential", user="weblogic", password="password_for_credential")
    

    The "key" in the command is the "alias," which is used for the 'Credential Alias' property of the BPEL connection definition in the Oracle I/PM administration user interface (also the Connection.CONNECTION_BPEL_CSFKEY_KEY property in the API). The alias "basic.credential" is used in the example because it is a standard default name used by OWSM and BPEL. However, the alias can be anything as long as it is unique within the map.

  5. Run the list credentials command to verify that the credential was created:

    wls:/ecm_domain/serverConfig> listCred(map="oracle.wsm.security", key="basic.credential")
    {map=oracle.wsm.security, key=basic.credential}
    Already in Domain Runtime Tree
    
    [Name : weblogic, Description : null, expiry Date : null]
    PASSWORD: password_for_credential
    

8.16 Configuring a Workflow Connection

Perform these steps to configure a workflow connection:

  1. Log in to the WLS_IPM1 imaging console at http://ECMHOST1VHN1:16000/imaging.

  2. From the navigator pane, under Manage Connections, click the Add icon and then Create Workflow Connection. The Workflow Connection Basic Information Page opens.

  3. Enter a name for the connection. The name will display in the Manage Connections panel. This field is required. Optionally, enter a brief description of the connection. The connection type defaults to Workflow Connection.

  4. Click Next.

  5. In the Workflow Connection Settings Page, do the following:

    1. In the HTTP Front End Address field, specify the host name or IP address, domain, and port number of the workflow server: http://soainternal.mycompany.com:80. This field is required.

    2. In the Credential Alias field, provide the credential alias created earlier as described in Section 8.15, "Configuring BPEL CSF Credentials."

    3. In the Provider field, enter your two SOA server listen addresses separated by a comma: t3://SOAHOST1VHN1,SOAHOST2VHN1:8001

    4. Click the Test Connection button to confirm the connection parameters and see what composites exist on that BPEL machine.

    5. Click Next.

  6. Modify the security grants if desired.

  7. Click Next.

  8. Click Submit.

8.17 Configuring Oracle HTTP Server for the WLS_IPM Managed Servers

To enable Oracle HTTP Server to route to IPM_Cluster, which contains the WLS_IPM1 and WLS_IPM2 managed servers, you must set the WebLogicCluster parameter to the list of nodes in the cluster as follows:

  1. On WEBHOST1 and WEBHOST2, add the following lines to ORACLE_BASE/admin/instance_name/config/OHS/component_name/mod_wl_ohs.conf:

    # I/PM Application
    <Location /imaging >
        WebLogicCluster ECMHOST1VHN1:16000,ECMHOST2VHN1:16000
        SetHandler weblogic-handler
        WLProxySSL ON
        WLProxySSLPassThrough ON
    </Location>
    
    # AXF WS Invocation
    <Location /axf-ws >
        WebLogicCluster ECMHOST1VHN1:16000,ECMHOST2VHN1:16000
        SetHandler weblogic-handler
        WLProxySSL ON
        WLProxySSLPassThrough ON
    </Location>
    
  2. Restart Oracle HTTP Server on both WEBHOST1 and WEBHOST2:

    WEBHOST1> ORACLE_BASE/admin/instance_name/bin/opmnctl restartproc ias-component=ohs1
    
    WEBHOST2> ORACLE_BASE/admin/instance_name/bin/opmnctl restartproc ias-component=ohs2
    

8.18 Validating Access Through Oracle HTTP Server

Verify URLs to ensure that appropriate routing and failover is working from the HTTP Server to the IPM_Cluster. Perform these steps to verify the URLs:

  1. While WLS_IPM2 is running, stop WLS_IPM1 using the Oracle WebLogic Server Administration Console.

  2. Access http://WEBHOST1:7777/imaging to verify it is functioning properly. (Please note that you will not be able to retrieve reports or data since the I/PM server is down.)

  3. Start WLS_IPM1 from the Oracle WebLogic Server Administration Console.

  4. Stop WLS_IPM2 from the Oracle WebLogic Server Administration Console.

  5. Access http://WEBHOST1:7777/imaging to verify it is functioning properly.

  6. Start WLS_IPM2 from the Oracle WebLogic Server Administration Console.
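
A command-line probe can complement the browser checks above. The following hedged sketch requests only the response headers from Oracle HTTP Server; an HTTP 200 or a redirect to the I/PM login page indicates that routing to the surviving server is working.

    WEBHOST1> curl -I http://WEBHOST1:7777/imaging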

8.19 Setting the Frontend HTTP Host and Port

You must set the frontend HTTP host and port for the Oracle WebLogic Server IPM cluster:

  1. Log in to Oracle WebLogic Server Administration Console.

  2. Go to the Change Center section and click Lock & Edit.

  3. Expand the Environment node in the Domain Structure window.

  4. Click Clusters. The Summary of Clusters page opens.

  5. Select IPM_Cluster.

  6. Click the HTTP tab.

  7. Set the following values:

    • Frontend Host: ecm.mycompany.com

    • Frontend HTTPS Port: 443

    • Frontend HTTP Port: 80 (if not using SSL)

  8. Click Save.

  9. Click Activate Changes in the Change Center section of the Administration Console.

  10. Restart the servers to make the frontend host directive in the cluster take effect.
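
As an alternative to the console steps above, the same frontend values can be set with WLST. This is a sketch only; the credentials, admin URL, and file name are assumptions, and a server restart is still required afterward.

    # set_ipm_frontend.py - sketch only
    connect('weblogic', 'admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()
    cd('/Clusters/IPM_Cluster')
    cmo.setFrontendHost('ecm.mycompany.com')
    cmo.setFrontendHTTPSPort(443)
    cmo.setFrontendHTTPPort(80)
    save()
    activate()
    disconnect()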

8.20 Configuring Node Manager for the WLS_IPM Managed Servers

It is assumed that the host names used by the WLS_IPM managed servers as listen addresses have already been configured for host name verification as explained in Section 7.12, "Configuring Node Manager for the WLS_UCM and WLS_IPM Managed Servers."

Now that the WLS_IPM servers have been added to the domain, perform the procedure in Section 9.3.5, "Configuring Managed WLS Servers to Use the Custom Keystores" so that the servers are configured to use the custom keystores.

8.21 Configuring Server Migration for the WLS_IPM Managed Servers

Server migration is required for proper failover of the Oracle I/PM components in the event of failure in any of the ECMHOST1 and ECMHOST2 nodes. See Chapter 10, "Configuring Server Migration" for further details. For Oracle I/PM, use the following values for the variables in that chapter:

  • Server names:

    • WLS_SERVER1: WLS_IPM1

    • WLS_SERVER2: WLS_IPM2

  • Host names:

    • HOST1: ECMHOST1

    • HOST2: ECMHOST2

  • Cluster name:

    • CLUSTER: IPM_Cluster

8.22 Backing Up the Installation

After you have verified that the extended domain is working, back up the installation. This is a quick backup for the express purpose of immediate restore in case of problems in later steps. The backup destination is the local disk. You can discard this backup once the enterprise deployment setup is complete, at which point the regular deployment-specific backup and recovery process can be initiated. The Oracle Fusion Middleware Administrator's Guide provides further details. For information about the Oracle HTTP Server data that must be backed up and restored, refer to the "Backup and Recovery Recommendations for Oracle HTTP Server" section in that guide. For information on how to recover components, see the "Recovery of Components" and "Recovery After Loss of Component" sections in the guide. For recommendations specific to recovering from the loss of a host, see the "Recovering Oracle HTTP Server to a Different Host" section in the guide. Also refer to the Oracle Database Backup and Recovery Guide for information on database backup.

Perform these steps to back up the installation at this point:

  1. Back up the web tier:

    1. Shut down the instance using opmnctl.

      WEBHOST1> ORACLE_BASE/admin/instance_name/bin/opmnctl stopall
      
    2. Back up the Middleware Home on the web tier using the following command (as root):

      WEBHOST1> tar -cvpf BACKUP_LOCATION/web.tar MW_HOME
      
    3. Back up the Oracle Instance Home on the web tier using the following command:

      WEBHOST1> tar -cvpf BACKUP_LOCATION/web_instance_name.tar ORACLE_INSTANCE
      
    4. Start the instance using opmnctl:

      WEBHOST1> cd ORACLE_BASE/admin/instance_name/bin
      WEBHOST1> opmnctl startall
      
  2. Back up the database. This is a full database backup (either hot or cold) using Oracle Recovery Manager (recommended) or operating system tools such as tar for cold backups if possible.

  3. Back up the Administration Server and managed server domain directory to save your domain configuration. The configuration files all exist in the ORACLE_BASE/admin/domain_name directory. Run the following command to create the backup:

    SOAHOST1> tar -cvpf edgdomainback.tar ORACLE_BASE/admin/domain_name