19 Managing the Topology for an Enterprise Deployment

This chapter describes some operations that you can perform after you have set up the topology, including monitoring, scaling, and backing up your topology.

19.1 Overview of Managing the Oracle WebCenter Content Topology

After configuring the Oracle WebCenter Content enterprise deployment, you can use the information in this chapter to manage the topology. Table 19-1 lists some tasks you can perform to manage the topology.

Table 19-1 Tasks for Managing the Oracle WebCenter Content Topology

Task: Configure Imaging for high performance, scalability, and high availability in an Imaging cluster
Description: Define an optimal input file strategy.
More Information: Section 19.2, "Defining an Optimal Input File Strategy for Imaging"

Task: Manage the Oracle SOA Suite subsystem used by Imaging
Description: Deploy SOA composites to a server address, manage space in the Oracle SOA Suite infrastructure, and configure UMS drivers.
More Information: Section 19.3, "Deploying Composites and Artifacts in the Oracle WebCenter Content Enterprise Deployment Topology"; Section 19.4, "Managing Space in the SOA Infrastructure Database"; Section 19.5, "Configuring UMS Drivers for Oracle WebCenter Content: Imaging"

Task: Expand the topology by scaling it up, or out
Description: Add new Managed Servers to nodes, or add new nodes.
More Information: Section 19.6, "Scaling Up the Oracle WebCenter Content Topology"; Section 19.7, "Scaling Out the Oracle WebCenter Content Topology"

Task: Back up the topology before and after any configuration changes
Description: Back up directories and files to protect against failure as a result of configuration changes.
More Information: Section 19.9, "Performing Backups and Recoveries in Oracle WebCenter Content Enterprise Deployments"

Task: Prevent connection timeouts
Description: Configure the firewall so that the database connection is not timed out.
More Information: Section 19.10, "Preventing Timeouts for SQLNet Connections"

Task: Configure Oracle Web Services Manager (Oracle WSM) security policies for Oracle WebCenter Content and Oracle WebCenter Content: Imaging web services
Description: Use the appropriate Oracle WSM policy enforcements instead of basic HTTP authentication.
More Information: Section 19.12, "Configuring Oracle Web Service Manager Security Policies for Oracle WebCenter Content and Imaging Services"

Task: Troubleshoot problems
Description: Implement solutions for known issues that may occur after you have configured the topology.
More Information: Section 19.13, "Troubleshooting the Oracle WebCenter Content Enterprise Deployment Topology"


For more information about managing Oracle WebCenter Content, see the Oracle WebCenter Content documentation library.

19.2 Defining an Optimal Input File Strategy for Imaging

The input file is the smallest unit of work that the input agent can schedule and process. Several factors must be taken into consideration to achieve the highest performance, scalability, and high availability in an Imaging cluster:

  • All of the machines in an Imaging cluster share a common input directory.

  • Input files from this directory are distributed to each machine through a JMS queue.

  • The frequency with which the directory is polled for new files is configurable.

  • Each machine has multiple parsing agents that process the input files. The number of parsing agents is configured through the Work Manager created within the Imaging deployment.

Optimum performance will be achieved as follows:

  • Each Imaging cluster instance has the maximum affordable number of parsing agents configured through the Work Manager without compromising the performance of the other Imaging activities, such as the user interface and Web services.

  • The inbound flow of documents is partitioned into input files containing the appropriate number of documents. On average there should be two input files queued for every parsing agent within the cluster.

  • If one or more machines within a cluster fails, active machines will continue processing the input files. Input files from a failed machine will remain in limbo until the server is restarted. Smaller input files ensure that machine failures do not place large numbers of documents into this limbo state.

For example, consider 10,000 inbound documents per hour processed by two servers. A configuration of two parsing agents per server produces acceptable overall performance, with each agent ingesting two documents per second. Four parsing agents at two documents per second each is eight documents per second, or 28,800 documents per hour. Note that a single input file of 10,000 documents would not be processed within the hour, because the single parsing agent working on it, at 7,200 documents per hour, cannot complete it in that time. However, dividing the inbound documents into eight input files of 1,250 documents each keeps all four parsing agents fully utilized, and the 10,000 documents are completed within the one-hour period. Also, if a failure occurs on one of the servers, the other can continue processing the work remaining on its parsing agents until that work is successfully completed.
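As a quick sanity check, you can script this sizing arithmetic. The following bash sketch simply reproduces the numbers from the example above; all of the values are assumptions to be replaced with your own inbound volume and measured per-agent ingestion rate:

#!/bin/bash
# Sizing sketch for Imaging input files (example values only).
INBOUND_DOCS_PER_HOUR=10000   # documents arriving per hour
SERVERS=2                     # Imaging cluster instances
AGENTS_PER_SERVER=2           # parsing agents per instance (Work Manager threads)
DOCS_PER_SEC_PER_AGENT=2      # measured ingestion rate per parsing agent

TOTAL_AGENTS=$(( SERVERS * AGENTS_PER_SERVER ))
CLUSTER_DOCS_PER_HOUR=$(( TOTAL_AGENTS * DOCS_PER_SEC_PER_AGENT * 3600 ))

# Aim for roughly two queued input files per parsing agent.
TARGET_INPUT_FILES=$(( TOTAL_AGENTS * 2 ))
DOCS_PER_INPUT_FILE=$(( INBOUND_DOCS_PER_HOUR / TARGET_INPUT_FILES ))

echo "Cluster capacity:   ${CLUSTER_DOCS_PER_HOUR} documents per hour"
echo "Target input files: ${TARGET_INPUT_FILES}"
echo "Documents per file: ${DOCS_PER_INPUT_FILE}"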

19.3 Deploying Composites and Artifacts in the Oracle WebCenter Content Enterprise Deployment Topology

When deploying SOA composites to the Oracle SOA Suite subsystem used by Imaging, deploy to a specific server's address and not to the load balancer address (wcc.mycompany.com). Deploying to the load balancer address may require direct connection from the deployer nodes to the external load balancer address, which may require additional ports to be opened in the firewalls used by the system.
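For example, a composite can be deployed from the command line with the ant-sca-deploy.xml script that ships with Oracle SOA Suite, pointing serverURL at one specific SOA Managed Server rather than at the load balancer. This is only a sketch: the host name, port, SAR location, and user shown below are placeholders for your environment.

# Deploy directly to one SOA Managed Server (placeholder host, port, SAR path, and user).
cd ORACLE_HOME/bin
ant -f ant-sca-deploy.xml deploy \
    -DserverURL=http://WCCHOST1VHN1:8001 \
    -DsarLocation=/tmp/sca_MyComposite_rev1.0.jar \
    -Doverwrite=true \
    -Duser=weblogic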

19.4 Managing Space in the SOA Infrastructure Database

Although not all composites may use the database frequently, the service engines generate a considerable amount of data in the CUBE_INSTANCE and MEDIATOR_INSTANCE schemas. Lack of space in the database may prevent SOA composites from functioning. Watch for generic errors, such as oracle.fabric.common.FabricInvocationException, in Oracle Enterprise Manager Fusion Middleware Control (dashboard for instances). Search also in the Oracle SOA Suite server's logs for errors, such as this one:

Error Code: 1691
...
ORA-01691: unable to extend lob segment
SOAINFRA.SYS_LOB0000108469C00017$$ by 128 in tablespace SOAINFRA

These messages are typically indicators of space issues in the database that likely require adding more data files or more space to the existing files. The SOA database administrator should determine the extension policy and parameters to be used when adding space. Additionally, old composite instances can be purged to reduce the size of the SOA infrastructure database. Oracle does not recommend using Oracle Enterprise Manager Fusion Middleware Control for this type of operation because in most cases such operations cause a transaction timeout.
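As a quick check before extending the tablespace, a database administrator can query the remaining free space and, if necessary, add a data file. The following sketch assumes a tablespace named SOAINFRA (your tablespace name may carry an RCU prefix, such as DEV_SOAINFRA) and uses a placeholder data file path and sizes:

# Check remaining free space in the SOAINFRA tablespace (run as a DBA user).
sqlplus -s / as sysdba <<'EOF'
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'SOAINFRA'
GROUP  BY tablespace_name;

-- Example only: add an autoextending data file (path and sizes are placeholders).
-- ALTER TABLESPACE SOAINFRA
--   ADD DATAFILE '/u01/oradata/wccdb/soainfra02.dbf'
--   SIZE 2G AUTOEXTEND ON NEXT 256M MAXSIZE 8G;
EOF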

For more details on the possible operations included in the SQL packages provided, see "Deploying and Managing SOA Composite Applications" in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite. Always use the scripts provided for a correct purge. Deleting rows in just the composite_dn table may leave dangling references in other tables used by the Oracle Fusion Middleware SOA Infrastructure.

19.5 Configuring UMS Drivers for Oracle WebCenter Content: Imaging

UMS driver configuration is not automatically propagated in an Oracle SOA Suite cluster. When UMS is used by the Oracle SOA Suite system that Oracle WebCenter Content: Imaging invokes, you need to configure the UMS drivers.

Note:

This step is required only if the Oracle SOA Suite system used by Oracle WebCenter Content: Imaging is using Unified Messaging System (UMS).

To configure UMS drivers for Imaging:

  1. Apply the UMS driver configuration on each and every one of the servers in the enterprise deployment topology that uses the driver.

  2. When server migration is used, servers are moved to a different node's domain directory. It is necessary to pre-create the UMS driver configuration in the failover node. The UMS driver configuration file location follows:

    ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/
    tmp/_WL_user/ums_driver_name/*/configuration/driverconfig.xml
    

    In the path, * represents a directory whose name is randomly generated by Oracle WebLogic Server during deployment; for example, 3682yq.

To create the file in preparation for possible failovers, users can force a server migration and copy the file from the source node. For example:

  1. Configure the driver for WLS_IMG1 in WCCHOST1.

  2. Force a failover of WLS_IMG1 to WCCHOST2. Verify the directory structure for the UMS driver configuration in the failover node:

    cd ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/
    tmp/_WL_user/ums_driver_name/*/configuration
    

    In the path, * represents a directory whose name is randomly generated by Oracle WebLogic Server during deployment; for example, 3682yq.

  3. Run the following command on WCCHOST1 to do a remote copy of the driver configuration file from WCCHOST1 to WCCHOST2:

    scp ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/
    tmp/_WL_user/ums_driver_name/*/configuration/
    driverconfig.xml oracle@WCCHOST2:ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/
    

    In the command, * represents a directory whose name is randomly generated by Oracle WebLogic Server during deployment; for example, 3682yq.
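    Because the randomly generated directory name usually differs between the two hosts, the wildcard in the scp destination may not resolve to the intended directory. As an alternative (a sketch only; ums_driver_name and the paths are the same placeholders used above), resolve the directory on each host first and then copy to the exact path:

      # Resolve the random staging directory on the local and remote hosts (placeholder paths).
      SRC=$(ls -d ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration)
      DST=$(ssh oracle@WCCHOST2 'ls -d ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration')

      # Copy the driver configuration file into the resolved remote directory.
      scp ${SRC}/driverconfig.xml oracle@WCCHOST2:${DST}/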

You must restart the driver for these changes to take effect (that is, for the driver to consume the modified configuration).

To restart the driver:

  1. Log in to the WebLogic Server Administration Console.

  2. Expand the Environment node in the navigation tree.

  3. Click Deployments.

  4. Select the driver.

  5. Click Stop and then When work completes, and confirm the operation.

  6. Wait for the driver to transition to the Prepared state (refresh the Administration Console page, if required).

  7. Select the driver again, and click Start and then Servicing all requests, and confirm the operation.

Verify in Fusion Middleware Control that the properties for the driver have been preserved.
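If you prefer to script the restart rather than use the Administration Console, the standard WLST deployment commands can be used instead. The following is a sketch only: the driver deployment name (ums_driver_name), the Administration Server URL, and the credentials are placeholders for your environment.

# Restart the UMS driver deployment through WLST (name, URL, and credentials are placeholders).
ORACLE_COMMON_HOME/common/bin/wlst.sh <<'EOF'
connect('weblogic', 'password', 't3://ADMINVHN:7001')
stopApplication('ums_driver_name')    # stop the driver deployment
startApplication('ums_driver_name')   # start it again so it reads the new configuration
disconnect()
EOF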

19.6 Scaling Up the Oracle WebCenter Content Topology

When you scale up the topology, you add new Managed Servers to nodes that are already running one or more Managed Servers. You already have a node that runs a Managed Server that is configured with the necessary components. The node contains a WebLogic Server home and a Middleware home in shared storage. Use these existing installations (such as WebLogic Server home, Middleware home, and domain directories) when you create the new Managed Servers. You do not need to install WebLogic Server binaries at a new location or to run pack and unpack.

Note:

A shared domain directory for a Managed Server with Content Server does not work because certain files within the domain, such as intradoc.cfg, are specific to each node. To prevent issues with node-specific files, use a local (per node) domain directory for each Oracle WebCenter Content and Oracle WebCenter Content: Inbound Refinery Managed Server.

The scale-up procedure depends on the topology component:

19.6.1 Scale-Up Procedure for WebCenter Content

Only one WebCenter Content Managed Server per node per domain is supported by Oracle Fusion Middleware. To add additional WebCenter Content Managed Servers, follow the steps in Section 19.7.1, "Scale-Out Procedure for WebCenter Content" to add a WebCenter Content Managed Server to a new node.

19.6.2 Scale-Up Procedure for Oracle WebCenter Content: Imaging

Before scaling up the Imaging servers, you must enable a virtual IP address on WCCHOST1 (say, WCCHOST1VHN2), and you must also correctly resolve the host names in the network system used by the topology (with either DNS server or host resolution). For information about how to enable the virtual IP addresses, see Section 11.6, "Configuring Node Manager for Managed Servers."

To scale up the Imaging servers in the enterprise deployment topology:

  1. Using the WebLogic Server Administration Console, clone WLS_IMG1 to a new Managed Server. The source Managed Server to clone should be one that already exists on the node where you want to run the new Managed Server.

    To clone a Managed Server:

    1. In the Domain Structure window of the WebLogic Server Administration Console, expand the Environment node and then Servers.

    2. On the Summary of Servers page, click Lock & Edit, and then select the Managed Server that you want to clone (WLS_IMG1).

    3. Click Clone.

    4. Name the new Managed Server WLS_IMGn, where n is a number that identifies the new Managed Server.

    5. For the server listen address, assign the host name or IP address to use for this new Managed Server. If you are planning to use server migration for this server (which Oracle recommends), this should be the virtual host name for the server. This virtual host name should be different from the one used for the existing Managed Server.

    6. For the server listen port, enter the listening port number for the Imaging cluster (IMG_Cluster). The reference topology in this book uses port number 16000.

    7. Click OK and Activate Changes. You should now see the newly created server WLS_IMGn in the summary of servers.

    The remainder of the steps that follow are based on the assumption that you are adding a new server to WCCHOST1, which is already running WLS_IMG1.

  2. Configure JMS persistence stores and JMS servers for Oracle WebCenter Content: Imaging.

    Configure the location for the JMS persistence stores as a directory that is visible from both nodes. By default, the JMS servers used by Imaging are configured with no persistence store and use the WebLogic Server store (ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/data/store/default).

    You must change each Oracle JMS server persistence store to use a shared base directory, as follows:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Services node, and then click the Persistence Stores node.

    3. On the Summary of Persistence Stores page, click Lock & Edit.

    4. Click New, and then click Create FileStore.

    5. On the Create a New File Store page, enter the following information:

      - Name: IMGJMSServerNStore (for example, IMGJMSServer3Store, which allows you to identify the service it is created for)

      - Target: WLS_IMGn (for example, WLS_IMG3).

      - Directory: Specify a directory that is located in shared storage so that it is accessible from both WCCHOST1 and WCCHOST2 (ORACLE_BASE/admin/domain_name/img_cluster_name/jms).

    6. Click OK, and activate the changes.

    7. In the Domain Structure window, expand the Services node, then the Messaging node, and then click JMS Servers.

    8. On the Summary of JMS Servers page, click New, and then enter the following information:

      - Name: IpmJmsServern (for example, IpmJmsServer3)

      - Persistent Store: select the persistence store that you created earlier: IMGJMSServerNStore (for example, IMGJMSServer3Store).

      Click Next, and then specify WLS_IMGn (for example, WLS_IMG3) as the target.

    9. Click Finish, and then activate the changes.

  3. Configure a default persistence store for WLS_IMGn for transaction recovery:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node, and then click the Servers node.

    3. On the Summary of Servers page, click WLS_IMGn (represented as a hyperlink) in the Name column of the table. The settings page for the WLS_IMGn server opens with the Configuration tab active.

    4. Click the Services tab.

    5. Click Lock & Edit.

    6. In the Default Store section of the page, enter the path to the folder where the default persistence store will store its data files. The directory structure of the path is as follows:

      ORACLE_BASE/admin/domain_name/img_cluster_name/tlogs
      
    7. Click Save, and activate the changes.

  4. Disable host name verification for the new Managed Server, as described in Section 9.4.5, "Disabling Host Name Verification," if not already disabled.

    Before you can start and verify the WLS_IMGn Managed Server, you must disable host name verification. You can re-enable it after you have configured server certificates for communication between the Administration Server and Node Manager in WCCHOSTn. If host name verification was already disabled on the source server from which the new one has been cloned, then these steps are not required (the host name verification setting is propagated to the cloned server).

  5. Start the newly created Managed Server (WLS_IMGn):

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node and then click Servers.

    3. On the Summary of Servers page, open the Control tab, and shut down all existing WLS_IMGn Managed Servers in the cluster.

    4. Ensure that the newly created Managed Server, WLS_IMGn, is running.

  6. Add the host name of the WLS_IMGn Managed Server (WCCHOSTnVHN2) to the SocketHostNameSecurityFilter parameter list:

    1. Open the file ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs/config/config.cfg in a text editor.

    2. Add the WLS_IMGn Managed Server listen addresses to the list of addresses that are allowed to connect to Oracle WebCenter Content:

      SocketHostNameSecurityFilter=localhost|localhost.mycompany.com|WCCHOST1|WCCHOST2|WCCHOSTnVHNn

    3. Save the modified config.cfg file, and restart the Oracle WebCenter Content servers for the changes to take effect, using the WebLogic Server Administration Console.

  7. Verify that server migration is configured for the new Managed Server.

    Note:

    Since this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration, so the following steps are only for verification. The floating IP address for the new Imaging Managed Server should also be already present.

    To verify that server migration is configured:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node and then click Servers.

    3. On the Summary of Servers page, click the name of the new Managed Server (represented as a hyperlink) in the Name column of the table for which you want to configure migration.

    4. On the settings page for the selected server, open the Migration subtab.

    5. In the Migration Configuration section, select the servers that participate in migration in the Available window and click the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example, for new Managed Servers on WCCHOST1, which is already running WLS_IMG1, select WCCHOST2. For new Managed Servers on WCCHOST2, which is already running WLS_IMG2, select WCCHOST1.

      Note:

      The appropriate resources must be available to run the Managed Servers concurrently during migration.

    6. Verify that the Automatic Server Migration Enabled option is selected.

      This option enables Node Manager to start a failed server on the target node automatically.

    7. Click Save.

    8. Restart Node Manager, the Administration Server, and the servers for which server migration has been configured:

      • First, stop the Managed Servers through the Administration Console.

      • Then restart the Node Manager and Administration Server, as described in step 11 under Section 9.4.5, "Disabling Host Name Verification."

      • Finally, start the Managed Servers again, through the Administration Console.

  8. Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

    • Abruptly stop the WLS_IMGn Managed Server. To do this, run kill -9 pid on the PID of the Managed Server process (a combined one-line version of this command appears after this list). You can identify the PID using the following command:

      ps -ef | grep WLS_IMGn
      
    • Watch the Node Manager Console for a message indicating that WLS_IMGn's floating IP address has been disabled.

    • Wait for Node Manager to attempt a second restart of WLS_IMGn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    • Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.
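    If you prefer, the abrupt stop in the first bullet can be issued as a single command; for example (assuming the Managed Server name appears in the Java process arguments):

      # Kill the WLS_IMGn JVM abruptly to trigger server migration (example only).
      kill -9 $(ps -ef | grep WLS_IMGn | grep -v grep | awk '{print $2}')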

Note:

After a server is migrated, to fail it back to its original node or machine, stop the Managed Server from the WebLogic Server Administration Console and then start it again. The appropriate Node Manager will start the Managed Server on the machine to which it was originally assigned.

19.6.3 Scale-Up Procedure for Oracle WebCenter Capture

Before scaling up the Oracle WebCenter Capture servers, you must enable a virtual IP address on WCCHOST1 (say, WCCHOST1VHN3), and you must also correctly resolve the host names in the network system used by the topology (with either DNS server or host resolution). For information about how to enable the virtual IP addresses, see Section 11.6, "Configuring Node Manager for Managed Servers."

To scale up the Capture servers in the enterprise deployment topology:

  1. Using the WebLogic Server Administration Console, clone WLS_CPT1 to a new Managed Server. The source Managed Server to clone should be one that already exists on the node where you want to run the new Managed Server.

    To clone a Managed Server:

    1. In the Domain Structure window of the WebLogic Server Administration Console, expand the Environment node and then Servers.

    2. On the Summary of Servers page, click Lock & Edit and then select the Managed Server that you want to clone (WLS_CPT1).

    3. Click Clone.

    4. Name the new Managed Server WLS_CPTn, where n is a number that identifies the new Managed Server.

    5. For the server listen address, assign the host name or IP address to use for this new Managed Server. If you are planning to use server migration for this server (which Oracle recommends), this should be the virtual host name for the server. This virtual host name should be different from the one used for the existing Managed Server.

    6. For the server listen port, enter the listening port number for the Capture cluster (CPT_Cluster). The reference topology in this book uses port number 16400.

    7. Click OK and Activate Changes. You should now see the newly created server WLS_CPTn in the summary of servers.

    The remainder of the steps that follow are based on the assumption that you are adding a new server to WCCHOST1, which is already running WLS_CPT1.

  2. Configure JMS persistence stores and JMS servers for Capture.

    Configure the location for the JMS persistence stores as a directory that is visible from both nodes. By default, the JMS servers used by Capture are configured with no persistence store and use the WebLogic Server store (ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/data/store/default).

    You must change each Oracle JMS server persistence store to use a shared base directory, as follows:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Services node, and then click the Persistence Stores node.

    3. On the Summary of Persistence Stores page, click Lock & Edit.

    4. Click New, and then Create FileStore.

    5. On the Create a New File Store page, enter the following information:

      - Name: CPTJMSServernStore (for example, CPTJMSServer3Store, which enables you to identify the service it is created for)

      - Target: WLS_CPTn (for example, WLS_CPT3).

      - Directory: Specify a directory that is located in shared storage so that it is accessible from both WCCHOST1 and WCCHOST2 (ORACLE_BASE/admin/domain_name/cpt_cluster_name/jms).

    6. Click OK, and activate the changes.

    7. In the Domain Structure window, expand the Services node, then the Messaging node, and then click JMS Servers.

    8. On the Summary of JMS Servers page, click New, and then enter the following information:

      - Name: CaptureJmsServern (for example, CaptureJmsServer3)

      - Persistent Store: Select the persistence store that you created earlier: CPTJMSServernStore (for example, CPTJMSServer3Store).

      Click Next, and then specify WLS_CPTn (for example, WLS_CPT3) as the target.

    9. Click Finish, and activate the changes.

  3. Configure a default persistence store for WLS_CPTn for transaction recovery:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node, and then click the Servers node.

    3. On the Summary of Servers page, click WLS_CPTn (represented as a hyperlink) in the Name column of the table. The settings page for the WLS_CPTn server opens with the Configuration tab active.

    4. Click the Services tab.

    5. Click Lock & Edit.

    6. In the Default Store section of the page, enter the path to the folder where the default persistence store will store its data files. The directory structure of the path follows:

      ORACLE_BASE/admin/domain_name/cpt_cluster_name/tlogs
      
    7. Click Save and activate the changes.

  4. Disable host name verification for the new Managed Server, as described in Section 9.4.5, "Disabling Host Name Verification," if not already disabled.

    Before you can start and verify the WLS_CPTn Managed Server, you must disable host name verification. You can re-enable it after you have configured server certificates for communication between the Administration Server and Node Manager in WCCHOSTn. If host name verification was already disabled on the source server from which the new one has been cloned, then these steps are not required (the host name verification setting is propagated to the cloned server).

  5. Start the newly created Managed Server (WLS_CPTn):

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node and then click Servers.

    3. On the Summary of Servers page, open the Control tab, and shut down all existing WLS_CPTn Managed Servers in the cluster.

    4. Ensure that the newly created Managed Server, WLS_CPTn, is running.

  6. Add the host name of the WLS_CPTn Managed Server (WCCHOSTnVHN3) to the SocketHostNameSecurityFilter parameter list:

    1. Open the file ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs/config/config.cfg in a text editor.

    2. Add the WLS_CPTn Managed Server listen addresses to the list of addresses that are allowed to connect to Oracle WebCenter Content:

      SocketHostNameSecurityFilter=localhost|localhost.mycompany.com|WCCHOST1|WCCHOST2|WCCHOSTnVHNn

    3. Save the modified config.cfg file, and restart the Oracle WebCenter Content servers for the changes to take effect, using the WebLogic Server Administration Console.

  7. Verify that server migration is configured for the new Managed Server.

    Note:

    Since this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration, so the following steps are only for verification. The floating IP address for the new Capture Managed Server should also be already present.

    To verify that server migration is configured:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node and then click Servers.

    3. On the Summary of Servers page, click the name of the new Managed Server (represented as a hyperlink) in the Name column of the table for which you want to configure migration.

    4. On the settings page for the selected server, open the Migration subtab.

    5. In the Migration Configuration section, select the servers that participate in migration in the Available window, and then click the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example, for new Managed Servers on WCCHOST1, which is already running WLS_CPT1, select WCCHOST2. For new Managed Servers on WCCHOST2, which is already running WLS_CPT2, select WCCHOST1.

      Note:

      The appropriate resources must be available to run the Managed Servers concurrently during migration.

    6. Verify that the Automatic Server Migration Enabled option is selected.

      This option enables Node Manager to start a failed server on the target node automatically.

    7. Click Save.

    8. Restart Node Manager, the Administration Server, and the servers for which server migration has been configured:

      • First, stop the Managed Servers through the Administration Console.

      • Then restart the Node Manager and Administration Server, as described in step 11 under Section 9.4.5, "Disabling Host Name Verification."

      • Finally, start the Managed Servers again, through the Administration Console.

  8. Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

    • Abruptly stop the WLS_CPTn Managed Server. To do this, run kill -9 pid on the PID of the Managed Server process. You can identify the PID using the following command:

      ps -ef | grep WLS_CPTn
      
    • Watch the Node Manager Console for a message indicating that WLS_CPTn's floating IP address has been disabled.

    • Wait for Node Manager to attempt a second restart of WLS_CPTn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    • Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.

Note:

After a server is migrated, to fail it back to its original node or machine, stop the Managed Server from the WebLogic Server Administration Console, and then start it again. The appropriate Node Manager will start the Managed Server on the machine to which it was originally assigned.

19.6.4 Scale-Up Procedure for Oracle SOA Suite

Before scaling up the Oracle SOA Suite servers, you must enable a virtual IP address on WCCHOST1 (say, WCCHOST1VHN1), and you must also correctly resolve the host names in the network system used by the topology (with either DNS server or host resolution). For information about how to enable the virtual IP addresses, see Section 11.6, "Configuring Node Manager for Managed Servers."

Note:

To scale up the Oracle SOA Suite subsystem used by Oracle WebCenter Content: Imaging, refer to the Oracle SOA Suite enterprise deployment topology documentation.

To scale up the Oracle SOA Suite servers in the enterprise deployment topology:

  1. Using the WebLogic Server Administration Console, clone WLS_SOA1 to a new Managed Server. The source Managed Server to clone should be one that already exists on the node where you want to run the new Managed Server.

    To clone a Managed Server:

    1. In the Domain Structure window of the WebLogic Server Administration Console, expand the Environment node and then Servers.

    2. On the Summary of Servers page, select the Managed Server that you want to clone (WLS_SOA1).

    3. Click Clone.

    4. Name the new Managed Server WLS_SOAn, where n is a number that identifies the new Managed Server.

      Note:

      The remainder of the steps are based on the assumption that you are adding a new server to WCCHOST1, which is already running WLS_SOA1.

  2. For the listen address, assign the host name or IP address to use for this new Managed Server. If you are planning to use server migration for this server (which Oracle recommends), this should be the virtual IP address (also called a floating IP address) to enable it to move to another node. The virtual IP address should be different from the one used by the Managed Server that is already running.

  3. Create JMS servers for Oracle SOA Suite and UMS on the new Managed Server:

    1. Use the WebLogic Server Administration Console to create a new persistence store for the new SOAJMSServer and name it, for example, SOAJMSFileStore_N. Specify the path for the store. This should be a directory on shared storage, as recommended in Section 4.3, "About Recommended Locations for the Different Directories":

      ORACLE_BASE/admin/domain_name/soa_cluster_name/jms/
      
    2. Create a new JMS server for Oracle SOA Suite (for example, SOAJMSServer_N). Use SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N server to the recently created Managed Server (WLS_SOAn).

    3. Create a new persistence store for the new UMSJMSServer (for example, UMSJMSFileStore_N). Specify the path for the store. This should be a directory on shared storage, as recommended in Section 4.3, "About Recommended Locations for the Different Directories":

      ORACLE_BASE/admin/domain_name/soa_cluster_name/jms/
      

      Note:

      You can also assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistence stores are used in the following steps.

    4. Create a new JMS server for UMS (for example, UMSJMSServer_N). Use UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created Managed Server (WLS_SOAn).

    5. Update the subdeployment targets for the SOA JMS Module to include the recently created SOA JMS server. To do this, expand the Services node in the Domain Structure tree on the left of the WebLogic Server Administration Console, and then expand the Messaging node. Click JMS Modules. The JMS Modules page appears. Click SOAJMSModule (represented as a hyperlink in the Names column of the table). The Settings page for SOAJMSModule appears. Open the SubDeployments tab. The subdeployment module for SOAJMS appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click the SOAJMSServerXXXXXX subdeployment. Add the new JMS server for Oracle SOA Suite called SOAJMSServer_N to this subdeployment. Click Save.

    6. Update the subdeployment targets for the UMSJMSSystemResource to include the recently created UMS JMS server. To do this, expand the Services node in the Domain Structure tree on the left of the WebLogic Server Administration Console and then expand the Messaging node. Click JMS Modules. The JMS Modules page appears. Click UMSJMSSystemResource (represented as a hyperlink in the Names column of the table). The Settings page for UMSJMSSystemResource appears. Open the SubDeployments tab. The subdeployment module for UMSJMS appears.

      Note:

      This subdeployment module name is a random name in the form of UMSJMSServerXXXXXX resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click the UMSJMSServerXXXXXX subdeployment. Add the new JMS server for UMS called UMSJMSServer_N to this subdeployment. Click Save.

  4. Configure Oracle Coherence for deploying composites for the new server as described in Section 13.5, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field needs to be changed for the server. Replace the localhost value with the listen address of the newly added server:
    -Dtangosol.coherence.localhost=SOAHOST1VHNn
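    For reference, the Coherence-related server start arguments for the new server would then look similar to the following sketch. The well-known-address values and the local port are placeholders; keep whatever values your existing WLS_SOA1 and WLS_SOA2 servers already use, and change only the localhost entry:

      -Dtangosol.coherence.wka1=existing_SOA1_listen_address
      -Dtangosol.coherence.wka2=existing_SOA2_listen_address
      -Dtangosol.coherence.localhost=SOAHOST1VHNn
      -Dtangosol.coherence.localport=8088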

  5. Configure a Tx persistence store for the new server. This should be a location visible from other nodes as indicated in the recommendations about shared storage (see Section 4.3, "About Recommended Locations for the Different Directories").

    From the Administration Console, select the server name (WLS_SOAn), and then open the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistence store to store its data files:

    ORACLE_BASE/admin/domain_name/soa_cluster_name/tlogs
    
  6. Disable host name verification for the new Managed Server, as described in Section 9.4.5, "Disabling Host Name Verification," if not already disabled.

    Before you can start and verify the WLS_SOAn Managed Server, you must disable host name verification. You can re-enable it after you have configured server certificates for communication between the Administration Server and Node Manager in WCCHOSTn. If host name verification was already disabled on the source server from which the new one has been cloned, then these steps are not required (the host name verification setting is propagated to the cloned server).

  7. Start and test the new Managed Server from the WebLogic Server Administration Console:

    1. Ensure that the newly created Managed Server, WLS_SOAn, is running.

    2. Access the application on the LBR (http://wccinternal.mycompany.com/soa-infra). The application should be functional.

    Note:

    The Oracle HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in the Oracle HTTP Server *_vh.conf files. However, routing to new servers in the cluster will take place only if at least one of the servers listed in the WebLogicCluster directive is running.
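    For reference, the relevant directive in an Oracle HTTP Server *_vh.conf file looks similar to the following excerpt. The location, host names, and port shown here are placeholders for your environment; at least one of the listed servers must be running for requests to be routed to the new cluster member:

      <Location /soa-infra>
        SetHandler weblogic-handler
        WebLogicCluster WCCHOST1VHN1:8001,WCCHOST2VHN1:8001
      </Location>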

  8. Configure server migration for the new Managed Server.

    Note:

    Since this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration. The floating IP address for the new Oracle SOA Suite Managed Server should also be already present.

    To configure server migration:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node and then click Servers.

    3. On the Summary of Servers page, click the name of the new Managed Server (represented as a hyperlink) in the Name column of the table for which you want to configure migration.

    4. On the settings page for the selected server, open the Migration subtab.

    5. In the Migration Configuration section, select the servers that participate in migration in the Available window and click the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example, for new Managed Servers on WCCHOST1, which is already running WLS_SOA1, select WCCHOST2. For new Managed Servers on WCCHOST2, which is already running WLS_SOA2, select WCCHOST1.

      Note:

      The appropriate resources must be available to run the Managed Servers concurrently during migration.

    6. Verify that the Automatic Server Migration Enabled option is selected.

      This option enables Node Manager to start a failed server on the target node automatically.

    7. Click Save.

    8. Restart Node Manager, the Administration Server, and the servers for which server migration has been configured:

      • First, stop the Managed Servers through the Administration Console.

      • Then restart the Node Manager and Administration Server, as described in step 11 under Section 9.4.5, "Disabling Host Name Verification."

      • Finally, start the Managed Servers again, through the Administration Console.

  9. Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

    • Abruptly stop the WLS_SOAn Managed Server. To do this, run kill -9 pid on the PID of the Managed Server process. You can identify the PID using the following command:

      ps -ef | grep WLS_SOAn
      
    • Watch the Node Manager Console for a message indicating that WLS_SOAn's floating IP address has been disabled.

    • Wait for Node Manager to attempt a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    • Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.

Note:

After a server is migrated, to fail it back to its original node or machine, stop the Managed Server from the WebLogic Server Administration Console and then start it again. The appropriate Node Manager will start the Managed Server on the machine to which it was originally assigned.

19.7 Scaling Out the Oracle WebCenter Content Topology

When you scale out the topology, you add new Managed Servers that are configured on new nodes.

Prerequisites

Before performing the steps in this section, check that you meet these requirements:

  • There must be existing nodes running Managed Servers configured with Oracle Fusion Middleware within the topology.

  • The new node can access the existing home directories for WebLogic Server and Fusion Middleware. (Use the existing installations in shared storage for creating a new Managed Server. You do not need to install WebLogic Server or Fusion Middleware binaries in a new location, but you do need to run pack and unpack to bootstrap the domain configuration in the new node.)

  • When an ORACLE_HOME or WL_HOME is shared by multiple servers in different nodes, it is recommended that you keep the Oracle Inventory and Middleware home list in those nodes updated for consistency in the installations and application of patches. To update the oraInventory directory in a node and attach an installation in a shared storage to it, use ORACLE_HOME/oui/bin/attachHome.sh.

    To update the Middleware home list to add or remove a WL_HOME, edit the User_Home/bea/beahomelist file. See the following steps.

  • The new server can use a new individual domain directory or, if the other Managed Servers' domain directories reside on shared storage, reuse the domain directories on those servers.

    Note:

    A shared domain directory for a Managed Server with Content Server does not work because certain files within the domain, such as intradoc.cfg, are specific to each node. To prevent issues with node-specific files, use a local (per node) domain directory for each Oracle WebCenter Content and Oracle WebCenter Content: Inbound Refinery Managed Server.

Scale-Out Procedure

The scale-out procedure depends on the topology component:

19.7.1 Scale-Out Procedure for WebCenter Content

The scale-out procedure for WebCenter Content adds a WLS_WCCn Managed Server configured on a new node.

To scale out the WebCenter Content servers in the enterprise deployment topology:

  1. On the new node, mount the existing Middleware home, which should include the Oracle WebCenter Content installation and domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

    Note:

    These steps are based on the assumption that you are adding a new WebCenter Content server to node n, where no Managed Server was running previously.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following commands on WCCHOSTn (if not already done):

    cd ORACLE_COMMON_HOME/oui/bin/
    
    ./attachHome.sh -jreLoc ORACLE_BASE/product/fmw/jrockit_160_version
    

    Note:

    The examples documented in this guide use Oracle JRockit. Any certified version of Java can be used for this procedure and is fully supported unless otherwise noted.

    To update the Middleware home list, create (or edit, if another WebLogic Server installation exists in the node) the MW_HOME/bea/beahomelist file, and add ORACLE_BASE/product/fmw to it.
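    For example, the following commands append the Middleware home to the home list, creating the file if it does not already exist (a sketch using the paths from this guide):

      mkdir -p MW_HOME/bea
      echo "ORACLE_BASE/product/fmw" >> MW_HOME/bea/beahomelist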

  3. Log in to the WebLogic Server Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

    Note:

    If the WLS_WCCn server is scaled out on a host where a domain and applications directories already exist, then the -overwrite_domain option must be specified in the unpack command.

  5. Update the machine's Node Manager address to map the IP address of the node that is being used for scale-out.

  6. Use the WebLogic Server Administration Console to clone WLS_WCC1 into a new Managed Server. Name it WLS_WCCn, where n is a number.

    Note:

    These steps are based on the assumption that you are adding a new server to node n, where no Managed Server was running previously.

  7. Assign the host name or IP address of WCCHOSTn as the Server Listen Address of the new Managed Server.

  8. Assign a WLS_WCCn Managed Server to the newly created machine.

    Select a WLS_WCCn server from the list of available servers, and click Finish and Activate Changes.

  9. Run the pack command on WCCHOST1 to create a template pack:

    Note:

    You need to do this step and the next two steps because the domain directory for Managed Servers has not yet been created on WCCHOSTn.

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name -template=edgdomaintemplateScaleWCC.jar -template_name=edgdomain_templateScaleWCC
    
  10. Run the following command on WCCHOST1 to copy the created template file to WCCHOSTn:

    scp edgdomaintemplateScaleWCC.jar oracle@WCCHOSTn:/ORACLE_COMMON_HOME/common/bin
    
  11. Run the unpack command on WCCHOSTn to unpack the template in the Managed Server domain directory:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name -template=edgdomaintemplateScaleWCC.jar -app_dir=ORACLE_BASE/admin/domain_name/mserver/applications
    
  12. Run the following commands on WCCHOSTn to start Node Manager:

    cd WL_HOME/server/bin
    ./startNodeManager.sh
    
  13. Disable host name verification for the new Managed Server, as described in Section 9.4.5, "Disabling Host Name Verification," if not already disabled.

    Before you can start and verify the WLS_WCCn Managed Server, you must disable host name verification. You can re-enable it after you have configured server certificates for communication between the Administration Server and Node Manager in WCCHOSTn. If host name verification was already disabled on the source server from which the new one has been cloned, then these steps are not required (the host name verification setting is propagated to the cloned server).

  14. Start the new Managed Server, WLS_WCCn, from the WebLogic Server Administration Console, and verify that the server status is reported as Running.

  15. Configure the new Managed Server, WLS_WCCn:

    1. Log in to WLS_WCCn at http://WCCHOSTn:16200/cs using your WebLogic Server administration user name and password. The WebCenter Content Configuration page opens.

      Note:

      At the end of the page, you should see this text: Since the Revisions table in the database is not empty, you are not able to configure this server as a new instance. You may only configure this server to be a node in an existing cluster.

    2. Change the following values on the server configuration page. Make sure that Cluster Node Identifier is set to match your Managed Server name, such as WLS_WCC1.

      - Content Server Instance Folder: Set this to ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs.

      - Native File Repository Location: Set this to ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs/vault.

      - WebLayout Folder: Set this to ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs/weblayout.

      - User Profile: Set this to ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs/data/users/profiles.

    3. Click Submit when finished, and restart the new Managed Server, using the WebLogic Server Administration Console.

  16. Add the WLS_WCCn listen addresses to the list of allowed hosts in Oracle WebCenter Content in the file ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs/config/config.cfg (an example excerpt appears at the end of this step). Restart the Oracle WebCenter Content Managed Servers through the Administration Console.

    The servers and listen addresses to be added in the Content Server pool follow:

    • WCCHOST1:4444

    • WCCHOST2:4444

    • WCCHOST3:4444

    For more information, see Section 14.5.13, "Creating a Connection to Content Server."
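    For example, the relevant line in config.cfg would then look similar to the following excerpt (the host names are examples; list every host that runs a server connecting to Content Server):

      SocketHostNameSecurityFilter=localhost|localhost.mycompany.com|WCCHOST1|WCCHOST2|WCCHOSTn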

  17. Test the WLS_WCCn Managed Server by accessing the application on the LBR (https://wcc.mycompany.com/cs). The application should be functional.

Note:

The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in the Oracle HTTP Server *_vh.conf files. However, routing to new servers in the cluster will take place only if at least one of the servers listed in the WebLogicCluster directive is running.

19.7.2 Scale-Out Procedure for Oracle WebCenter Content: Imaging

The scale-out procedure for Imaging adds a WLS_IMGn Managed Server configured on a new node.

To scale out the Imaging servers in the enterprise deployment topology:

  1. On the new node, mount the existing Middleware home, which should include the Oracle WebCenter Content installation and (optionally, if the domain directory for Managed Servers in other nodes resides on shared storage) the domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following commands on WCCHOSTn (if not already done):

    cd ORACLE_COMMON_HOME/oui/bin/
    
    ./attachHome.sh -jreLoc ORACLE_BASE/product/fmw/jrockit_160_version
    

    To update the Middleware home list, create (or edit, if another WebLogic Server installation exists in the node) the MW_HOME/bea/beahomelist file, and add ORACLE_BASE/product/fmw to it.

    Note:

    The examples documented in this guide use Oracle JRockit. Any certified version of Java can be used for this procedure and is fully supported unless otherwise noted.

  3. Log in to the WebLogic Server Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

    Note:

    If the WLS_IMGn server is scaled out on the same machine as the WLS_WCCn server, then you do not need to create a new machine. Use the same machine created in Section 19.7.1, "Scale-Out Procedure for WebCenter Content," Step 4.

  5. Update the machine's Node Manager address to map the IP address of the node that is being used for scale-out.

  6. Use the WebLogic Server Administration Console to clone WLS_IMG1 into a new Managed Server. Name it WLS_IMGn, where n is a number.

    Note:

    These steps are based on the assumption that you are adding a new server to node n, where no Managed Server was running previously.

    1. Assign the host name or IP address to use for the new Managed Server for the listen address of the Managed Server. If you are planning to use server migration for this server (which Oracle recommends), this should be the virtual IP address (also called a floating IP address) for the server. This virtual IP address should be different from the one used for the existing Managed Server.

      Note:

      You must enable a virtual IP address on node n, and you must also correctly resolve the host names in the network system used by the topology (with either DNS server or host resolution). For information about how to enable the virtual IP addresses, see Section 6.6, "Enabling Virtual IP Addresses"

    2. Select Yes, make this server a member of an existing cluster.

    3. Assign the newly created server to the machine that you added in step 4. Otherwise, the new server keeps the machine assignment of the source server it was cloned from.

  7. Create a JMS server for Oracle WebCenter Content: Imaging on the new Managed Server:

    1. Use the WebLogic Server Administration Console to first create a new persistence store for the new IpmJmsServerN (which will be created in a later step) and name it, for example, IMGJMSServerNStore. Specify the path for the store as recommended in Section 4.3, "About Recommended Locations for the Different Directories" as the directory for the JMS persistence stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      
    2. Create a new JMS server for Imaging; for example, IpmJmsServerN.

      Use the IMGJMSServerNStore created previously for this JMS server. Target the IpmJmsServerN server to the recently created Managed Server (WLS_IMGn).

    3. Create a default persistence store for transaction recovery.

      • In the WebLogic Server Administration Console, select the newly created WLS_IMGn server.

      • Open the Services tab.

      • In the Default Store section, in Directory, enter the path to the folder where the default persistence store will store its data files.

        The directory structure of the path follows:

        ORACLE_BASE/admin/domain_name/img_cluster_name/tlogs
        
  8. Run the pack command on WCCHOST1 to create a template pack:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name -template=edgdomaintemplateScaleIMG.jar -template_name=edgdomain_templateScaleIMG
    

    Note:

    If the domain directory for other Managed Servers resides on a shared directory, this step and the next two steps are not required. Instead, the new nodes mount the already existing domain directory and use it for the newly added Managed Server.

  9. Run the following command on WCCHOST1 to copy the created template file to WCCHOSTn:

    scp edgdomaintemplateScaleIMG.jar oracle@WCCHOSTn:/ORACLE_COMMON_HOME/common/bin
    

    Note:

    If the new host, WCCHOSTn, will use the same Middleware home as WCCHOST1, then this step is not required.

  10. Run the unpack command on WCCHOSTn to unpack the template in the Managed Server domain directory:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name -template=edgdomaintemplateScaleIMG.jar -app_dir=ORACLE_BASE/admin/domain_name/mserver/applications
    

    Note:

    If the WLS_IMGn server is scaled out on a host where a domain and applications directories already exist, then the -overwrite_domain option must be specified in the unpack command.

  11. Disable host name verification for the new Managed Server, as described in Section 9.4.5, "Disabling Host Name Verification," if not already disabled.

    Before you can start and verify the WLS_IMGn Managed Server, you must disable host name verification. You can re-enable it after you have configured server certificates for communication between the Administration Server and Node Manager in WCCHOSTn. If host name verification was already disabled on the source server from which the new one has been cloned, then these steps are not required (the host name verification setting is propagated to the cloned server).

  12. Disable the Automatic Server Migration Enabled option:

    1. In the Domain Structure tree on the left of the WebLogic Server Administration Console, expand Environment, and select Servers

    2. Select the WLS_IMGn Managed Server, and click the Migration tab.

    3. Unselect Automatic Server Migration Enabled.

    4. Click Save, and activate the changes.

  13. Start Node Manager on the new node if not already started. Run the following commands on WCCHOSTn to start Node Manager:

    cd WL_HOME/server/bin
    ./startNodeManager.sh
    
  14. Start the new Managed Server, WLS_IMGn, from Fusion Middleware Control and make sure it is running.

  15. Add the new Imaging server listen addresses to the list of allowed hosts in Oracle WebCenter Content Server. Follow the steps in Section 14.5.12, "Adding the Imaging Server Listen Addresses to the List of Allowed Hosts in Oracle WebCenter Content" to add the new server to the SocketHostNameSecurityFilter configuration for Oracle WebCenter Content.

    Note:

    Make sure that the host names of all Imaging Managed Servers have been added to the SocketHostNameSecurityFilter parameter list in the ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs/config/config.cfg file.

  16. Restart all Oracle WebCenter Content Managed Servers, using the WebLogic Server Administration Console.

  17. Test the WLS_IMGn Managed Server by accessing the application on the LBR (https://wcc.mycompany.com/imaging). The application should be functional.

    Note:

    The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in the Oracle HTTP Server *_vh.conf files. However, routing to new servers in the cluster will take place only if at least one of the servers listed in the WebLogicCluster directive is running.

  18. Configure server migration for the newly added server.

    It is assumed that the source server from which the new one has been cloned (here WLS_IMG1) already had server migration configured. If this is the case, the following steps are not required because the server migration settings are propagated to the cloned server.

    If host-name verification certificates have been set up for Node Manager, then, as a prerequisite, you should perform these steps for the newly created server. For more information, see Section 16.3, "Enabling Host Name Verification Certificates for Node Manager."

    Note:

    Since this new node uses an existing shared storage installation, the node is already using a Node Manager and an environment configured for server migration that includes the netmask, interface, superuser privileges for the wlsifconfig script, and so on. Verify the privileges defined on the new node to make sure server migration will work. For more information about privilege requirements, see Chapter 17, "Configuring Server Migration for an Enterprise Deployment."

    To configure server migration:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node and then click Servers.

    3. On the Summary of Servers page, click the name of the server (represented as a hyperlink) in the Name column of the table for which you want to configure migration.

    4. On the settings page for the selected server, open the Migration subtab.

    5. In the Available field of the Migration Configuration section, select the machines to which to allow migration and click the right arrow.

      Note:

      Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional Managed Server.

    6. Choose the Automatic Server Migration Enabled option. This enables Node Manager to start a failed server on the target node automatically.

    7. Click Save.

    8. Restart Node Manager, the Administration Server, and the servers for which server migration has been configured:

      • First, stop the Managed Servers through the Administration Console.

      • Then restart the Node Manager and Administration Server, as described in step 11 under Section 9.4.5, "Disabling Host Name Verification."

      • Finally, start the Managed Servers again, through the Administration Console.

  19. Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

    • Abruptly stop the WLS_IMGn Managed Server. To do this, run kill -9 pid on the PID of the Managed Server. You can identify the PID of the Managed Server process using the following command:

      ps -ef | grep WLS_IMGn
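      # (Sketch) Find and kill the process in one step; double-check the PID before killing in a real environment:
      kill -9 $(ps -ef | grep WLS_IMGn | grep -v grep | awk '{print $2}')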
      
    • Watch the Node Manager Console for a message indicating that WLS_IMGn's floating IP address has been disabled.

    • Wait for Node Manager to attempt a second restart of WLS_IMGn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    • Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.

Note:

After a server is migrated, to fail it back to its original node or machine, stop the Managed Server from the WebLogic Server Administration Console and then start it again. The appropriate Node Manager will start the Managed Server on the machine to which it was originally assigned.

19.7.3 Scale-Out Procedure for Oracle WebCenter Capture

The scale-out procedure for Capture adds a WLS_CPTn Managed Server configured on a new node.

To scale out the Capture servers in the enterprise deployment topology:

  1. On the new node, mount the existing Middleware home, which should include the Oracle WebCenter Content installation and (optionally, if the domain directory for Managed Servers in other nodes resides on shared storage) the domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following commands on WCCHOSTn (if not already done):

    cd ORACLE_COMMON_HOME/oui/bin/
    
    ./attachHome.sh -jreLoc ORACLE_BASE/product/fmw/jrockit_160_version
    

    To update the Middleware home list, create (or edit, if another WebLogic Server installation exists on the node) the MW_HOME/bea/beahomelist file, and add ORACLE_BASE/product/fmw to it.
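
    For example, a minimal way to add the entry from the shell (a sketch; MW_HOME and ORACLE_BASE stand for the actual paths used in your environment):

    echo ORACLE_BASE/product/fmw >> MW_HOME/bea/beahomelist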

    Note:

    The examples documented in this guide use Oracle JRockit. Any certified version of Java can be used for this procedure and is fully supported unless otherwise noted.

  3. Log in to the WebLogic Server Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain, if not already done.

  5. Update the machine's Node Manager address to map the IP address of the node that is being used for scale-out.

  6. Use the WebLogic Server Administration Console to clone WLS_CPT1 into a new Managed Server. Name it WLS_CPTn, where n is a number.

    Note:

    These steps are based on the assumption that you are adding a new server to node n, where no Managed Server was running previously.

    1. Assign the host name or IP address to use for the new Managed Server for the listen address of the Managed Server. If you are planning to use server migration for this server (which Oracle recommends), this should be the virtual IP address (also called a floating IP address) for the server. This virtual IP address should be different from the one used for the existing Managed Server.

      Note:

      You must enable a virtual IP address on node n, and you must also correctly resolve the host names in the network system used by the topology (with either DNS server or host resolution). For information about how to enable the virtual IP addresses, see Section 6.6, "Enabling Virtual IP Addresses."

    2. Assign the newly created server to the machine you added in step 4. Otherwise, the new server keeps the machine assignment of the cloned server.

  7. Create a JMS server for Capture on the new Managed Server:

    1. Use the WebLogic Server Administration Console to first create a new persistence store for the new CaptureJmsServerN (which will be created in a later step) and name it, for example, CPTJMSServerNStore. As the path for the store, specify the directory recommended for JMS persistence stores in Section 4.3, "About Recommended Locations for the Different Directories":

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      
    2. Create a new JMS server for Capture; for example, CaptureJmsServerN. Use the CPTJMSServerNStore persistence store created in the preceding step for this JMS server. Target the CaptureJmsServerN server to the recently created Managed Server (WLS_CPTn).

    3. Create a default persistence store for transaction recovery:

      • In the WebLogic Server Administration Console, select the newly created WLS_CPTn server.

      • Open the Services tab.

      • Under Default Store, in Directory, enter the path to the folder where the default persistence store will store its data files.

        The directory structure of the path is as follows:

        ORACLE_BASE/admin/domain_name/cpt_cluster_name/tlogs
        
  8. Run the pack command on WCCHOST1 to create a template pack:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name -template=edgdomaintemplateScaleCPT.jar -template_name=edgdomain_templateScaleCPT
    

    Note:

    If the domain directory for other Managed Servers resides on a shared directory, this step and the next two steps are not required. Instead, the new nodes mount the already existing domain directory and use it for the newly added Managed Server.

  9. Run the following command on WCCHOST1 to copy the created template file to WCCHOSTn:

    scp edgdomaintemplateScaleCPT.jar oracle@WCCHOSTn:/ORACLE_COMMON_HOME/common/bin
    

    Note:

    If the new host, WCCHOSTn, will use the same MW_HOME as WCCHOST1, then this step is not required.

  10. Run the unpack command on WCCHOSTn to unpack the template in the Managed Server domain directory:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name -template=edgdomaintemplateScaleCPT.jar -app_dir=ORACLE_BASE/admin/domain_name/mserver/applications
    

    Note:

    If the unpack command for the WLS_CPTn server runs on a host where the domain and applications directories already exist, then the -overwrite_domain option must be specified in the unpack command.

  11. Disable host name verification for the new Managed Server, as described in Section 9.4.5, "Disabling Host Name Verification," if not already disabled.

    Before you can start and verify the WLS_CPTn Managed Server, you must disable host name verification. You can re-enable it after you have configured server certificates for communication between the Administration Server and Node Manager on WCCHOSTn. If host name verification was already disabled on the source server from which the new one has been cloned, then these steps are not required (the host name verification setting is propagated to the cloned server).

  12. Disable the Automatic Server Migration Enabled option:

    1. In the Domain Structure tree on the left of the WebLogic Server Administration Console, expand Environment, and select Servers.

    2. Select the WLS_CPTn Managed Server, and click the Migration tab.

    3. Unselect Automatic Server Migration Enabled.

    4. Click Save, and then click Activate Changes.

  13. Start Node Manager on the new node if not already started. Run the following commands on WCCHOSTn to start Node Manager:

    cd WL_HOME/server/bin
    ./startNodeManager.sh
    
  14. Start the new Managed Server, WLS_CPTn, from Fusion Middleware Control and make sure it is running.

  15. Add the new Capture server listen addresses to the list of allowed hosts in Oracle WebCenter Content Server. Follow the steps in Section 14.5.12, "Adding the Imaging Server Listen Addresses to the List of Allowed Hosts in Oracle WebCenter Content" to add the new server to the SocketHostNameSecurityFilter configuration for Oracle WebCenter Content.

    Note:

    Make sure that the host names of all Capture Managed Servers have been added to the SocketHostNameSecurityFilter parameter list in the ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs/config/config.cfg file.

  16. Restart all Oracle WebCenter Content Managed Servers, using the WebLogic Server Administration Console.

  17. Test the WLS_CPTn Managed Server by accessing the application on the LBR (https://wcc.mycompany.com/dc-console for the Capture console, and https://wcc.mycompany.com/dc-client for the Capture client). The application should be functional.

    Note:

    The HTTP servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required before one reaches the new server). It is not required to add all of the servers in a cluster to the WebLogicCluster directive in the Oracle HTTP Server *_vh.conf files. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.

  18. Configure server migration for the newly added server.

    It is assumed that the source server from which the new one has been cloned (here WLS_CPT1) already had server migration configured. If this is the case, the following steps are not required because the server migration settings are propagated to the cloned server.

    If host-name verification certificates have been set up for Node Manager, then, as a prerequisite, you should perform these steps for the newly created server. For more information, see Section 16.3, "Enabling Host Name Verification Certificates for Node Manager."

    Note:

    Since this new node uses an existing shared storage installation, the node is already using a Node Manager and an environment configured for server migration that includes the netmask, interface, superuser privileges for the wlsifconfig script, and so on. Verify the privileges defined on the new node to make sure server migration will work. For more information about privilege requirements, see Chapter 17, "Configuring Server Migration for an Enterprise Deployment."

    To configure server migration:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node and then click Servers.

    3. On the Summary of Servers page, click the name of the server (represented as a hyperlink) in the Name column of the table for which you want to configure migration.

    4. On the settings page for the selected server, open the Migration subtab.

    5. In the Available field of the Migration Configuration section, select the machines to which to allow migration and click the right arrow.

      Note:

      Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional Managed Server.

    6. Choose the Automatic Server Migration Enabled option. This enables Node Manager to start a failed server on the target node automatically.

    7. Click Save.

    8. Restart Node Manager, the Administration Server, and the servers for which server migration has been configured:

      • First, stop the Managed Servers through the Administration Console.

      • Then restart the Node Manager and Administration Server, as described in step 11 under Section 9.4.5, "Disabling Host Name Verification."

      • Finally, start the Managed Servers again, through the Administration Console.

  19. Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

    • Abruptly stop the WLS_CPTn Managed Server. To do this, run kill -9 pid on the PID of the Managed Server. You can identify the PID of the Managed Server process using the following command:

      ps -ef | grep WLS_CPTn
      
    • Watch the Node Manager Console for a message indicating that WLS_CPTn's floating IP address has been disabled.

    • Wait for Node Manager to attempt a second restart of WLS_CPTn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    • Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.

Note:

After a server is migrated, to fail it back to its original node or machine, stop the Managed Server from the WebLogic Server Administration Console and then start it again. The appropriate Node Manager will start the Managed Server on the machine to which it was originally assigned.

19.7.4 Scale-Out Procedure for Oracle SOA Suite

The scale-out procedure for Oracle SOA Suite adds a WLS_SOAn Managed Server configured on a new node.

Note:

To scale out the Oracle SOA Suite subsystem used by Capture, refer to the Oracle SOA Suite enterprise deployment topology documentation.

To scale out the Oracle SOA Suite servers in the enterprise deployment topology:

  1. On the new node, mount the existing Middleware home, which should include the Oracle SOA Suite installation and domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, run the following commands on WCCHOSTn (if not already done):

    cd ORACLE_COMMON_HOME/oui/bin/
    
    ./attachHome.sh -jreLoc ORACLE_BASE/product/fmw/jrockit_160_version
    

    To update the Middleware home list, create (or edit, if another WebLogic Server installation exists on the node) the MW_HOME/bea/beahomelist file, and add ORACLE_BASE/product/fmw to it.

    Note:

    The examples documented in this guide use Oracle JRockit. Any certified version of Java can be used for this procedure and is fully supported unless otherwise noted.

  3. Log in to the WebLogic Server Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

    Note:

    If the WLS_SOAn server is scaled out on a host where the domain and applications directories already exist, then the -overwrite_domain option must be specified in the unpack command.

  5. Update the machine's Node Manager address to map the IP address of the node that is being used for scale-out.

  6. Use the WebLogic Server Administration Console to clone WLS_SOA1 into a new Managed Server. Name it WLS_SOAn, where n is a number.

    Note:

    These steps are based on the assumption that you are adding a new server to node n, where no Managed Server was running previously.

  7. Assign the host name or IP address to use for the new Managed Server for the listen address of the Managed Server.

    If you are planning to use server migration for this server (which Oracle recommends), this should be the virtual IP address (also called a floating IP address) for the server. This virtual IP address should be different from the one used for the existing Managed Server.

  8. Run the pack command on WCCHOST1 to create a template pack:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name -template=edgdomaintemplateScaleSOA.jar -template_name=edgdomain_templateScaleSOA
    

    Note:

    If the domain directory for other Managed Servers resides on a shared directory, this step is not required. Instead, the new nodes mount the already existing domain directory and use it for the newly added Managed Server.

  9. Run the following command on WCCHOST1 to copy the created template file to WCCHOST2:

    scp edgdomaintemplateScaleSOA.jar oracle@WCCHOST2:/ORACLE_COMMON_HOME/common/bin
    
  10. Run the unpack command on WCCHOST2 to unpack the template in the Managed Server domain directory:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name -template=edgdomaintemplateScaleSOA.jar -app_dir=ORACLE_BASE/admin/domain_name/mserver/applications
    

    Note:

    If the WLS_SOAn server is scaled out on a host where the domain and applications directories already exist, then the -overwrite_domain option must be specified in the unpack command.

  11. Create JMS servers for Oracle SOA Suite and UMS on the new Managed Server:

    1. Use the WebLogic Server Administration Console to create a new persistence store for the new SOAJMSServer and name it, for example, SOAJMSFileStore_N. Specify the path for the store. This should be a directory on shared storage, as recommended in Section 4.3, "About Recommended Locations for the Different Directories":

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      
    2. Create a new JMS server for Oracle SOA Suite (for example, SOAJMSServer_N). Use SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N server to the recently created Managed Server (WLS_SOAn).

    3. Create a new persistence store for the new UMSJMSServer (for example, UMSJMSFileStore_N). Specify the path for the store. This should be a directory on shared storage, as recommended in Section 4.3, "About Recommended Locations for the Different Directories":

      ORACLE_BASE/admin/domain_name/cluster_name/jms
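
      For example, if this shared directory does not already exist, you might pre-create it before creating the file stores in the console (a sketch using the placeholder paths from this guide):

      mkdir -p ORACLE_BASE/admin/domain_name/cluster_name/jms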
      

      Note:

      You can also assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistence stores are used in the following steps.

    4. Create a new JMS server for UMS (for example, UMSJMSServer_N). Use UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created Managed Server (WLS_SOAn).

    5. Update the subdeployment targets for the SOA JMS Module to include the recently created SOA JMS server. To do this, expand the Services node in the Domain Structure tree on the left of the WebLogic Server Administration Console, and then expand the Messaging node. Click JMS Modules. The JMS Modules page appears. Click SOAJMSModule (represented as a hyperlink in the Names column of the table). The Settings page for SOAJMSModule appears. Open the SubDeployments tab. The subdeployment module for SOAJMS appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click the SOAJMSServerXXXXXX subdeployment. Add the new JMS server for Oracle SOA Suite called SOAJMSServer_N to this subdeployment. Click Save.

    6. Update the subdeployment targets for the UMSJMSSystemResource to include the recently created UMS JMS server. To do this, expand the Services node in the Domain Structure tree on the left of the WebLogic Server Administration Console, and then expand the Messaging node. Click JMS Modules. The JMS Modules page appears. Click UMSJMSSystemResource (represented as a hyperlink in the Names column of the table). The Settings page for UMSJMSSystemResource appears. Open the SubDeployments tab. The subdeployment module for UMSJMS appears.

      Note:

      This subdeployment module name is a random name in the form of UMSJMSServerXXXXXX resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click the UMSJMSServerXXXXXX subdeployment. Add the new JMS server for UMS called UMSJMSServer_N to this subdeployment. Click Save.

  12. Configure a Tx persistence store for the new server. This should be a location visible from other nodes as indicated in the recommendations about shared storage (see Section 4.3, "About Recommended Locations for the Different Directories").

    From the Administration Console, select the server name (WLS_SOAn), and then open the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistence store to store its data files:

    ORACLE_BASE/admin/domain_name/soa_cluster_name/tlogs
    
  13. Disable host name verification for the new Managed Server, as described in Section 9.4.5, "Disabling Host Name Verification," if not already disabled.

    Before you can start and verify the WLS_SOAn Managed Server, you must disable host name verification. You can re-enable it after you have configured server certificates for communication between the Administration Server and Node Manager in WCCHOSTn. If host name verification was already disabled on the source server from which the new one has been cloned, then these steps are not required (the host name verification setting is propagated to the cloned server).

  14. Start Node Manager on the new node. To start Node Manager, use the installation in shared storage from the already existing nodes and then start Node Manager by passing the host name of the new node as a parameter as follows (on WCCHOSTn):

    WL_HOME/server/bin/startNodeManager.sh New_Node_IP
    

    Note:

    If you used the paths shown in Chapter 7, "Installing Oracle WebLogic Server and Creating a Middleware Home," WL_HOME would be ORACLE_BASE/product/fmw/wlserver_10.3.

  15. Start the new Managed Server, WLS_SOAn, from the WebLogic Server Administration Console, and make sure it is running.

  16. Test the WLS_SOAn Managed Server by accessing the application on the LBR (http://wccinternal.mycompany.com/soa-infra). The application should be functional.

    Note:

    The HTTP servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required before one reaches the new server). It is not required to add all of the servers in a cluster to the WebLogicCluster directive in the Oracle HTTP Server *_vh.conf files. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.

  17. Configure server migration for the newly added server.

    Note:

    Since this new node uses an existing shared storage installation, the node is already using a Node Manager and an environment configured for server migration that includes the netmask, interface, superuser privileges for the wlsifconfig script, and so on. The floating IP address for the new Oracle SOA Suite Managed Server is already present on the new node.

    To configure server migration:

    1. Log in to the WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node and then click Servers.

    3. On the Summary of Servers page, click the name of the server (represented as a hyperlink) in the Name column of the table for which you want to configure migration.

    4. On the settings page for the selected server, open the Migration subtab.

    5. In the Available field of the Migration Configuration section, select the machines to which to allow migration and click the right arrow.

      Note:

      Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional Managed Server.

    6. Choose the Automatic Server Migration Enabled option. This enables Node Manager to start a failed server on the target node automatically.

    7. Click Save.

    8. Restart Node Manager, the Administration Server, and the servers for which server migration has been configured:

      • First, stop the Managed Servers through the Administration Console.

      • Then restart the Node Manager and Administration Server, as described in step 11 under Section 9.4.5, "Disabling Host Name Verification."

      • Finally, start the Managed Servers again, through the Administration Console.

  18. Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

    • Abruptly stop the WLS_SOAn Managed Server. To do this, run kill -9 pid on the PID of the Managed Server. You can identify the PID of the Managed Server process using the following command:

      ps -ef | grep WLS_SOAn
      
    • Watch the Node Manager Console for a message indicating that WLS_SOAn's floating IP address has been disabled.

    • Wait for Node Manager to attempt a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    • Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.

Note:

After a server is migrated, to fail it back to its original node or machine, stop the Managed Server from the WebLogic Server Administration Console and then start it again. The appropriate Node Manager will start the Managed Server on the machine to which it was originally assigned.

19.8 Verifying Manual Failover of the Administration Server

If a node fails, you can fail over the Administration Server to another node. This section describes how to fail over the Administration Server from WCCHOST1 to WCCHOST2.

19.8.1 Assumptions and Procedure

Note the following assumptions:

  • The Administration Server is configured to listen on ADMINVHN, and not on any address. See step 13 in Section 9.3, "Running the Configuration Wizard on WCCHOST1 to Create a Domain."

  • The Administration Server is failed over from WCCHOST1 to WCCHOST2, and the two nodes have these IP addresses:

    • WCCHOST1: 100.200.140.165

    • WCCHOST2: 100.200.140.205

    • ADMINVHN: 100.200.140.206. This is the virtual IP address where the Administration Server is running, assigned to ethX:Y.

  • The domain directory where the Administration Server is running in WCCHOST1 is on shared storage and is also mounted from WCCHOST2.

  • Oracle WebLogic Server and Oracle Fusion Middleware components have been installed on WCCHOST2 as described in Chapter 7, "Installing the Software for an Enterprise Deployment" (that is, the same paths for ORACLE_HOME and MW_HOME that exist on WCCHOST1 are also available on WCCHOST2).

To fail over the Administration Server to a different node (WCCHOST2):

  1. Stop the Administration Server if it is running.

  2. Migrate the IP address to the second node:

    1. Run the following command as root on WCCHOST1 (where X:Y is the current interface used by ADMINVHN):

      /sbin/ifconfig ethX:Y down
      
    2. Run the following command on WCCHOST2:

      /sbin/ifconfig interface:index IP_address netmask netmask
      

      For example:

      /sbin/ifconfig eth0:1 100.200.140.206 netmask 255.255.255.0
      

      Note:

      Make sure that the netmask and interface to be used match the available network configuration in WCCHOST2. Also, make sure that the location of the Administration Server application directory is mounted as described in Section 4.3, "About Recommended Locations for the Different Directories."

    3. Update the routing tables on WCCHOST2 using arping; for example:

      /sbin/arping -q -U -c 3 -I eth0 100.200.140.206
      
  3. Start Node Manager on WCCHOST2, as described in Section 9.4.2, "Starting Node Manager on WCCHOST1."

  4. Start the Administration Server on WCCHOST2, as described in Section 9.4.3, "Starting the Administration Server on WCCHOST1."

  5. Test that you can access the Administration Server on WCCHOST2 as follows:

    1. Ensure that you can access the WebLogic Server Administration Console at http://ADMINVHN:7001/console.

    2. Check that you can access and verify the status of components in Oracle Enterprise Manager Fusion Middleware Control at http://ADMINVHN:7001/em.

19.8.2 Validating Access to WCCHOST2 Through the Load Balancer

Perform the same steps as in Section 9.5.5, "Validating Access Through the Load Balancer." This is to check that you can access the Administration Server when it is running on WCCHOST2.

19.8.3 Failing the Administration Server Back to WCCHOST1

This step checks that you can fail back the Administration Server; that is, stop it on WCCHOST2 and run it on WCCHOST1 again.

To migrate ADMINVHN back to the WCCHOST1 node:

  1. Make sure the Administration Server is not running.

  2. Run the following command on WCCHOST2:

    /sbin/ifconfig ethZ:N down
    
  3. Run the following command on WCCHOST1:

    /sbin/ifconfig ethX:Y 100.200.140.206 netmask 255.255.255.0
    

    Note:

    Make sure that the netmask and interface to be used match the available network configuration in WCCHOST1.

  4. Update the routing tables using arping. Run the following command from WCCHOST1:

    /sbin/arping -q -U -c 3 -I ethZ 100.200.140.206
    
  5. Start the Administration Server again on WCCHOST1, as described in Section 9.4.3, "Starting the Administration Server on WCCHOST1."

  6. Test that you can access the WebLogic Server Administration Console at http://ADMINVHN:7001/console.

  7. Check that you can access and verify the status of components in Oracle Enterprise Manager Fusion Middleware Control at http://ADMINVHN:7001/em.

19.9 Performing Backups and Recoveries in Oracle WebCenter Content Enterprise Deployments

Table 19-2 lists the static artifacts to back up in the Oracle WebCenter Content 11g enterprise deployment.

Table 19-2 Static Artifacts to Back Up in the Oracle WebCenter Content 11g Enterprise Deployment

Type Host Location Tier

Oracle home (DB)

CUSTDBHOST1 and CUSTDBHOST2

The location is user defined.

Data tier

Middleware home (OHS)

WEBHOST1 and WEBHOST2

ORACLE_BASE/product/fmw/

Web tier

Middleware home (this includes the Oracle WebCenter Content and Oracle SOA Suite Oracle homes as well)

WCCHOST1 and WCCHOST2*

MW_HOME

The Oracle WebCenter Content and Oracle SOA Suite Oracle homes are also under MW_HOME: ORACLE_HOME

Application tier

Installation-related files

 

OraInventory, User_Home/bea/beahomelist,
oraInst.loc, oratab

N/A


Table 19-3 lists the runtime artifacts to back up in the Oracle WebCenter Content 11g enterprise deployment.

Table 19-3 Runtime Artifacts to Back Up in the Oracle WebCenter Content 11g Enterprise Deployment

Type Host Location Tier

Application artifacts (EAR and WAR files)

WCCHOST1 and WCCHOST2

Find the application artifacts by viewing all of the deployments through the administration console.

Application tier

Oracle WebCenter Content runtime artifacts

WCCHOST1 or WCCHOST2

ORACLE_BASE/admin/domain_name/wcc_cluster_name

Application tier

Oracle WebCenter Content: Imaging runtime artifacts

WCCHOST1 or WCCHOST2

ORACLE_BASE/admin/domain_name/img_cluster_name

Application tier

Oracle WebCenter Capture runtime artifacts

WCCHOST1 or WCCHOST2

ORACLE_BASE/admin/domain_name/cpt_cluster_name

Application tier

Oracle SOA Suite runtime artifacts

WCCHOST1 or WCCHOST2

ORACLE_BASE/admin/domain_name/soa_cluster_name

Application tier

Customized Managed Server configuration for Oracle WebCenter Content

WCCHOST1 or WCCHOST2

ORACLE_BASE/admin/domain_name/mserver/domain_name/ucm/cs/bin/intradoc.cfg

and

ORACLE_BASE/admin/domain_name/mserver/domain_name/bin/server_migration/wlsifconfig.sh

Application tier

Customized Managed Server configuration for Oracle SOA Suite

WCCHOST1 or WCCHOST2

If using UMS: DOMAIN_HOME/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/driverconfig.xml

(where * represents a directory whose name is randomly generated by WebLogic Server during deployment; for example, 3682yq)

and

ORACLE_BASE/admin/domain_name/mserver/domain_name/bin/server_migration/wlsifconfig.sh

Application tier

Oracle HTTP Server instance home

WEBHOST1 and WEBHOST2

ORACLE_BASE/admin/instance_name

Web tier

Oracle RAC databases

CUSTDBHOST1 and CUSTDBHOST2

The location is user-defined.

Data tier


For more information on backup and recovery of Oracle Fusion Middleware components, see the Oracle Fusion Middleware Administrator's Guide.

19.10 Preventing Timeouts for SQLNet Connections

Much of the production enterprise deployment involves firewalls. Because database connections are made across firewalls, Oracle recommends that the firewall be configured so that the database connection is not timed out. For Oracle Real Application Clusters (RAC), the database connections are made on Oracle RAC virtual IP addresses and the database listener port. You must configure the firewall to not time out such connections. If such a configuration is not possible, set the SQLNET.EXPIRE_TIME=n parameter in the ORACLE_HOME/network/admin/sqlnet.ora file on the database server, where n is the time in minutes. Set this value to less than the known value of the timeout for the network device (that is, a firewall). For Oracle RAC, set this parameter in all of the Oracle home directories.
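
For example, to have the database send a probe every 10 minutes (a sketch; 10 is an assumed value that must be lower than your firewall's idle timeout, and ORACLE_HOME stands for the database Oracle home):

echo "SQLNET.EXPIRE_TIME=10" >> ORACLE_HOME/network/admin/sqlnet.ora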

19.11 Resetting Imaging Security for the New Identity Store

If you have already configured your Imaging Managed Server and you change the LDAP provider, the global user IDs (GUIDs) in the Imaging security tables will be invalid. Imaging caches the GUIDs from an external LDAP provider in its local security tables and uses these IDs for authentication. You can refresh the GUID values in the Imaging security tables with WLST commands or with Oracle Enterprise Manager Fusion Middleware Control.

Only users and groups that exist in both LDAP providers will have their GUIDs refreshed. Imaging permissions assigned to users and groups from the previous LDAP provider are carried over to the matching users and groups in the new LDAP provider. The refreshIPMSecurity command ignores any users or groups that do not match users or groups in the new LDAP provider.

Note:

During the refresh, users or groups for whom matching identifying information is not found are ignored. As security changes are made, invalid users or groups are removed from the Imaging database.

19.11.1 Refreshing GUID values in Imaging Security Tables with WLST

If you want to refresh GUID values from a command line, you can use the Oracle WebLogic Scripting Tool (WLST).

To refresh GUID values in Imaging security tables with WLST:

  1. Start the Administration Server for your Oracle WebLogic Server domain, as described in Section 9.4.3, "Starting the Administration Server on WCCHOST1."

  2. Log in to the Oracle WebLogic Server Administration Server.

  3. Navigate to the Oracle WebCenter Content home directory: MW_HOME/WCC_ORACLE_HOME.

  4. Invoke WLST:

    cd common/bin
    ./wlst.sh
    
  5. At the WLST command prompt, enter these commands:

    wls:/offline> connect()
    Please enter your username :weblogic
    Please enter your password : XXXXXXXXXXXXX
    Please enter your server URL [t3://localhost:7001]
     :t3://host_name:16000
    Connecting to t3://host_name:16000 with userid weblogic ...
    Successfully connected to Managed Server 'IPM_server1' that belongs to domain
    'domainName'.

    Warning: An insecure protocol was used to connect to the
    server. To ensure on-the-wire security, the SSL port or
    Admin port should be used instead.

    wls:/domainName/serverConfig> listIPMConfig()
    (Run listIPMConfig() only to verify that you are connected to the correct Imaging server.)

    wls:/domainName/serverConfig> refreshIPMSecurity()
    (This command refreshes the GUIDs in the Imaging security tables.)

    wls:/domainName/serverConfig> exit()
    
  6. Log in to Imaging to verify user and group security.

19.11.2 Refreshing GUID values in Imaging Security Tables with Fusion Middleware Control

If you want to refresh GUID values through an MBean, you can use the System MBean Browser in Fusion Middleware Control.

To refresh GUID values in Imaging security tables with Fusion Middleware Control:

  1. Log in to Fusion Middleware Control.

  2. In the navigation tree on the left, expand WebLogic Domain, then the Oracle WebCenter Content domain folder, then IPM_Cluster, and then the name of the Imaging server, such as IPM_server1.

  3. On the right, click the WebLogic Server drop-down menu, and choose System MBean Browser.

  4. In the System MBean Browser navigation tree, expand Application Defined MBeans, then oracle.imaging, then Server: IPM_server1, and then cmd, and click cmd.

  5. Click refreshIPMSecurity on the right.

  6. Press the Invoke button.

  7. Log in to Imaging to verify user and group security.

19.12 Configuring Oracle Web Service Manager Security Policies for Oracle WebCenter Content and Imaging Services

When first installed, the Oracle WebCenter Content and Imaging web services are configured with no Oracle Web Service Manager (OWSM) security policies applied. When no security policies are applied, the services leverage the basic HTTP authentication mechanism, where user credentials (user ID and password) are transmitted in the web service HTTP message header. Oracle recommends using the appropriate Oracle WSM policy enforcements instead of basic HTTP authentication. To configure Oracle WSM security policies for Oracle WebCenter Content and Imaging web services, follow the steps in Oracle Fusion Middleware Developing Oracle WebCenter Content: Imaging and Oracle Fusion Middleware Installing and Configuring Oracle WebCenter Content.

19.13 Troubleshooting the Oracle WebCenter Content Enterprise Deployment Topology

This section provides solutions to common problems that you might encounter in the Oracle WebCenter Content enterprise deployment topology.

19.13.1 Page Not Found When Accessing soa-infra Application Through Load Balancer

Problem: A 404 page not found message is displayed in the web browser when you try to access the soa-infra application using the load balancer address. The error is intermittent and Oracle SOA Suite servers appear as Running in the WebLogic Server Administration Console.

Solution: Even when the Oracle SOA Suite Managed Servers are up and running, some of the applications contained in them can be in Admin, Prepared or other states different from Active. The soa-infra application may be unavailable while the Oracle SOA Suite server is running. Check the Deployments page in the Administration Console to verify the status of the soa-infra application. It should be in the Active state. Check the Oracle SOA Suite server's output log for errors pertaining to the soa-infra application and try to start it from the Deployments page in the Administration Console.

19.13.2 Soa-infra Application Fails to Start Due to Deployment Framework Issues (Coherence)

Problem: The soa-infra application fails to start after changes to the Coherence configuration for deployment have been applied. The Oracle SOA Suite server output log reports the following:

Cluster communication initialization failed. If you are using multicast, Please make sure multicast is enabled on your network and that there is no interference on the address in use. Please see the documentation for more details.

Solutions:

  1. When using multicast instead of unicast for cluster deployments of SOA composites, you may get a message similar to the preceding one if a multicast conflict arises when starting the soa-infra application (that is, starting the Managed Server on which Oracle SOA Suite runs). These messages, which occur when Oracle Coherence throws a runtime exception, also include the details of the exception itself. If such a message appears, check the multicast configuration in your network. Verify that you can ping multicast addresses. In addition, check for other clusters that may have the same multicast address but have a different cluster name in your network, as this may cause a conflict that prevents soa-infra from starting. If multicast is not enabled in your network, you can change the deployment framework to use unicast as described in the Oracle Coherence Developer's Guide.

  2. When entering the well-known address list for unicast (in the server start parameters), make sure that the addresses entered for the localhost and clustered servers are correct. Error messages like the following one are reported in the server's output log if any of the addresses is not resolved correctly:

    oracle.integration.platform.blocks.deploy.CompositeDeploymentCoordinatorMessages errorUnableToStartCoherence
    

19.13.3 Incomplete Policy Migration After Failed Restart of SOA Server

Problem: The Oracle SOA Suite server fails to start through the Administration Console before the Node Manager property startScriptEnabled=true is set. The server does not come up after the property is set either. The Oracle SOA Suite server output log reports the following:

SEVERE: <.> Unable to Encrypt data
Unable to Encrypt data.
Check installation/post-installation steps for errors. Check for errors during SOA server startup.

ORABPEL-35010
 .
Unable to Encrypt data.
Unable to Encrypt data.
Check installation/post-installation steps for errors. Check for errors
 during SOA server startup.
 .
 at oracle.bpel.services.common.util.EncryptionService.encrypt(EncryptionService.java:56)
...

Solution: Incomplete policy migration results from an unsuccessful start of the first Oracle SOA Suite server in a cluster. To enable full migration, edit the <jazn-policy> element in the system-jazn-data.xml file to grant permission to bpm-services.jar:

<grant>
  <grantee>
    <codesource>
<url>file:${oracle.home}/soa/modules/oracle.soa.workflow_11.1.1/bpm-services.jar</url>
    </codesource>
  </grantee>
  <permissions>
    <permission>
      <class>java.security.AllPermission</class>
    </permission>
  </permissions>
</grant>

19.13.4 WebCenter Content, Oracle SOA Suite, or Imaging Server Fails to Start Due to Maximum Number of Processes Available in Database

Problem: A WebCenter Content, Oracle SOA Suite, or Imaging server fails to start. The domain has been extended for a new type of Managed Server (for example, WebCenter Content extended for Imaging) or the system has been scaled up (new servers of the same type have been added). The WebCenter Content, Oracle SOA Suite, or Imaging server output log reports the following exception:

<Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "SOADataSource-rac0": Listener refused the connection with the following error:

ORA-12516, TNS:listener could not find available handler with matching protocol stack >

Solution: Verify the maximum number of processes allowed in the database and adjust it accordingly. As the SYS user, check the current value with the SHOW PARAMETER command:

SHOW PARAMETER processes

Then set the initialization parameter to an appropriate value using the following SQL command; for example:

ALTER SYSTEM SET processes=300 SCOPE=SPFILE;

Restart the database.

Note:

The method that you use to change a parameter's value depends on whether the parameter is static or dynamic, and on whether your database uses a parameter file or a server parameter file. For information about parameter files, server parameter files, and how to change parameter values, see the Oracle Database Administrator's Guide.

19.13.5 Administration Server Fails to Start After a Manual Failover

Problem: The Administration Server fails to start after the Administration Server node failed and a manual failover to another node was performed. The Administration Server output log reports the following:

<Feb 19, 2009 3:43:05 AM PST> <Warning> <EmbeddedLDAP> <BEA-171520> <Could not obtain an exclusive lock for directory: ORACLE_BASE/admin/edg_domain/aserver/edg_domain/servers/AdminServer/data/ldap/ldapfiles. Waiting for 10 seconds and then retrying in case existing WebLogic Server is still shutting down.>

Solution: When restoring a node after a node crash and using shared storage for the domain directory, you may see this error in the log for the Administration Server due to unsuccessful lock cleanup. To resolve this error, remove the file ORACLE_BASE/admin/domain_name/aserver/domain_name/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok.
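
For example (a sketch using the placeholder paths from this guide):

rm ORACLE_BASE/admin/domain_name/aserver/domain_name/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok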

19.13.6 Error While Activating Changes in Administration Console

Problem: Activation of changes in Administration Console fails after changes to a server's start configuration have been performed. The Administration Console reports the following when Activate Changes is clicked:

An error occurred during activation of changes, please see the log for details.
 [Management:141190]The commit phase of the configuration update failed with an exception:
In production mode, it's not allowed to set a clear text value to the property: PasswordEncrypted of ServerStartMBean

Solution: This may happen when start parameters are changed for a server in the Administration Console. In this case, provide user name and password information in the server start configuration in the Administration Console for the specific server whose configuration was being changed.

19.13.7 SOA, Imaging, or Capture Server Not Failed Over After Server Migration

Problem: After the maximum number of restart attempts by the local Node Manager has been reached, Node Manager in the failover node tries to restart the server, but it does not come up. Node Manager's output indicates that the server has been failed over. The virtual IP address used by the Oracle SOA Suite, Oracle WebCenter Content: Imaging, or Oracle WebCenter Capture server is not enabled in the failover node after Node Manager tries to migrate it (ifconfig in the failover node does not report the virtual IP address on any interface). Executing the command sudo ifconfig $INTERFACE $ADDRESS $NETMASK does not enable the IP address in the failover node.

Solution: The sudo configuration for the user that runs Node Manager must allow the required commands to run without prompting for a password. Verify the configuration of sudo with your system administrator so that sudo works without a password prompt.
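
For example, from the account that runs Node Manager, you can check that sudo runs the migration commands without prompting (a sketch; -n makes sudo fail rather than prompt, and the command path may differ on your system):

sudo -n /sbin/ifconfig -a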

19.13.8 SOA, Imaging, or Capture Server Not Reachable from Browser After Server Migration

Problem: Server migration is working (Oracle SOA Suite or Oracle WebCenter Content: Imaging server is restarted in the failed-over node), but the Virtual_Hostname:8001/soa-infra URL cannot be accessed in the web browser. The server has been killed in its original host, and Node Manager in the failover node reports that the virtual IP address has been migrated and the server started. The virtual IP address used by the Oracle SOA Suite, Imaging, or Capture server cannot be pinged from the client's node (that is, the node where the browser is being used).

Solution: The arping command executed by Node Manager to update ARP caches did not broadcast the update properly. In this case, the migrated virtual IP address is not reachable from external nodes. Either update the nodemanager.properties file to include the MACBroadcast or execute a manual arping:

/sbin/arping -b -q -c 3 -A -I INTERFACE ADDRESS > $NullDevice 2>&1

Where INTERFACE is the network interface where the virtual IP address is enabled and ADDRESS is the virtual IP address.

19.13.9 OAM Configuration Tool Does Not Remove URLs

Problem: The OAM Configuration Tool has been used, and a set of URLs was added to the policies in Oracle Access Manager. One of the URLs had a typo. Executing the OAM Configuration Tool again with the correct URLs completes successfully; however, when accessing Policy Manager, the incorrect URL is still there.

Solution: The OAM Configuration Tool only adds new URLs to existing policies when executed with the same app_domain name. To remove a URL, use the Policy Manager Console in Oracle Access Manager. Log in to the Access Administration site for Oracle Access Manager, click My Policy Domains, click the created policy domain (SOA_EDG), then click the Resources tab, and remove the incorrect URLs.

19.13.10 Redirection of Users to Login Screen After Activating Changes in the Administration Console

Problem: After configuring OHS and LBR to access the WebLogic Server Administration Console, activating some changes causes a redirection to the login screen for the Administration Console.

Solution: This is the result of the console attempting to follow changes to port, channel, and security settings as a user makes these changes. For certain changes, the console may redirect to the Administration Server's listen address. Activation is completed regardless of the redirection. It is not required to log in again; users can simply update the URL to wcc.mycompany.com/console/console.portal and directly access the home page for the Administration Console.

Note:

This problem will not occur if you have disabled tracking of the changes described in this section.

19.13.11 Redirection of Users to Administration Console's Home Page After Activating Changes to Oracle Access Manager

Problem: After configuring Oracle Access Manager, activating some changes causes a redirection to the Administration Console's home page (instead of the context menu where the activation was performed).

Solution: This is expected when Oracle Access Manager SSO is configured and the Administration Console is set to follow configuration changes (redirections are performed by the Administration Server when activating some changes). Activations should complete regardless of this redirection. For successive changes not to redirect, access the Administration Console, choose Preferences, then Shared Preferences, and unselect the Follow Configuration Changes checkbox.

19.13.12 Configured JOC Port Already in Use

Problem: Attempts to start a Managed Server that uses the Java Object Cache, such as OWSM Managed Servers, fail. The following errors appear in the logs:

J2EE JOC-058 distributed cache initialization failure
J2EE JOC-043 base exception:
J2EE JOC-803 unexpected EOF during read.

Solution: Another process is using the same port that JOC is attempting to obtain. Either stop that process, or reconfigure JOC for this cluster to use another port in the recommended port range.

19.13.13 Using CredentialAccessPermissions to Allow Oracle WebCenter Content: Imaging to Read Credentials from the Credential Store

Problem: Oracle WebCenter Content: Imaging creates the credential access permissions during startup and updates its local domain directory copy of the system-jazn-data.xml file. While testing the environment without an LDAP policy store being configured, the Administration Server may push manual updates to the system-jazn-data.xml file to the domain directories where the Imaging servers reside. This can cause the copy of the file to be overwritten, giving rise to a variety of exceptions and errors on restart or when accessing the Imaging console.

Solution: To re-create the credential access permissions and update the Administration Server's domain directory copy of the system-jazn-data.xml file, use the grantIPMCredAccess command from the Oracle WebLogic Scripting Tool. To do this, start wlst.sh from the ORACLE_HOME associated with Oracle WebCenter Content, connect to the Administration Server, and execute the grantIPMCredAccess() command on WCCHOST1:

cd ORACLE_HOME/common/bin

./wlst.sh

wls:/offline> connect()

wls:/domain_name/serverConfig> grantIPMCredAccess()

Note:

When connecting, provide the credentials and address for the Administration Server.

19.13.14 Improving Performance with Very Intensive Document Uploads from Oracle WebCenter Content: Imaging to Oracle WebCenter Content

Problem: If a host name-based security filter is used in Oracle WebCenter Content (config.cfg file), high latency and a performance impact may be observed in the system in the event of very intensive document uploads from Oracle WebCenter Content: Imaging to Oracle WebCenter Content. This is caused by the reverse DNS lookup that Oracle WebCenter Content must perform to allow the connections from the Imaging servers.

Solution: Using a host name-based security filter is recommended in preparation for configuring the system for disaster protection and to restore to a different host (since the configuration used is IP-agnostic when using a host name-based security filter). However, if the performance of the uploads needs to be improved, you can use an IP-based security filter instead of a host name-based filter.

To change the host name-based security filter in Oracle WebCenter Content to an IP-based filter:

  1. Open the file ORACLE_BASE/admin/domain_name/wcc_cluster_name/cs/config/config.cfg in a text editor.

  2. Remove or comment out the following lines:

    SocketHostNameSecurityFilter=localhost|localhost.mycompany.com|wcchost1vhn1|wcchost2vhn1
    AlwaysReverseLookupForHost=Yes
    
  3. Add the IP addresses (listen addresses) of the WLS_CPT1 and WLS_CPT2 Managed Servers (WCCHOST1VHN3 and WCCHOST2VHN3, respectively) to the SocketHostAddressSecurityFilter parameter list:

    SocketHostAddressSecurityFilter=127.0.0.1|0:0:0:0:0:0:0:1|X.X.X.X|Y.Y.Y.Y
    

    In this example, X.X.X.X and Y.Y.Y.Y are the listen addresses of WLS_CPT1 and WLS_CPT2, respectively. (The address 127.0.0.1 must be included in the list as well.)

  4. Save the modified config.cfg file, and restart the Oracle WebCenter Content servers for the changes to take effect, using the WebLogic Server Administration Console.

19.13.15 Out-of-Memory Issues on Managed Servers

Problem: You are experiencing out-of-memory issues on Managed Servers.

Solution: Increase the size of the memory heap allocated for the Java VM to at least one gigabyte.

To increase the size of the memory heap allocated for the Java VM:

  1. Log in to the WebLogic Server Administration Console.

  2. Click Environment, then Servers.

  3. Click a Managed Server name.

  4. Open the Configuration tab.

  5. Open the Server Start tab in the second row of tabs.

  6. Include the memory parameters in the Arguments box, for example:

    -Xms256m -Xmx1024m -XX:CompileThreshold=8000 -XX:PermSize=128m -XX:MaxPermSize=1024m
    

    Note:

    The memory parameter requirements may differ between various JVMs (Sun JDK, Oracle JRockit, or others). For more information, see "Increasing the Java VM Heap Size for Managed Servers" in Oracle Fusion Middleware Installing and Configuring Oracle WebCenter Content.

  7. Save the configuration changes.

  8. Restart all running Managed Servers, using the WebLogic Server Administration Console.

19.13.16 Regenerating the Master Password for Oracle WebCenter Content Managed Servers

If the cwallet.sso file of the Oracle WebCenter Content Managed Servers domain home becomes inconsistent across the cluster, is deleted, or is accidentally overwritten by an invalid copy in the ORACLE_BASE/admin/domain_name/aserver/domain_name/config/fmwconfig/ directory, you can regenerate the file.

To regenerate the master password for Oracle WebCenter Content Managed Servers:

  1. Stop all Oracle WebCenter Content Managed Servers (WLS_WCCx).

  2. Remove the cwallet.sso file from ORACLE_BASE/admin/domain_name/mserver/domain_name/config/fmwconfig/.

  3. Remove the password.hda file from ORACLE_BASE/admin/domain_name/aserver/wcc_cluster_name/cs/config/private.

  4. Start the WLS_WCC1 server on WCCHOST1.

  5. Verify that the cwallet.sso file is created or updated in ORACLE_BASE/admin/domain_name/mserver/domain_name/config/fmwconfig/ and that the password.hda file is created in ORACLE_BASE/admin/domain_name/aserver/wcc_cluster_name/cs/config/private/.

  6. Use the Oracle WebCenter Content System Properties command-line tool to update the passwords for the database.

  7. Verify that the standalone Oracle WebCenter Content applications (Batchloader, System Properties, and so on) are working correctly.

  8. Copy the cwallet.sso file from ORACLE_BASE/admin/domain_name/mserver/domain_name/config/fmwconfig/ to the Administration Server's domain directory at ORACLE_BASE/admin/domain_name/aserver/domain_name/config/fmwconfig/.

  9. Start the second Oracle WebCenter Content Managed Server, and verify that the Administration Server pushes the updated cwallet.sso file to ORACLE_BASE/admin/domain_name/mserver/domain_name/config/fmwconfig/ on WCCHOST2 and that the file is the same as the one created or updated by Oracle WebCenter Content Server on WCCHOST1.

  10. Verify that the standalone Oracle WebCenter Content applications (Batchloader, System Properties, and so on) are working correctly.

  11. Verify that the standalone Oracle WebCenter Content applications work correctly on both nodes at the same time.
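
If you want to script the file cleanup in steps 2 and 3, the following minimal Python sketch backs up and then removes the two files so that they are regenerated on the next startup. The ORACLE_BASE default and the domain and cluster directory names are placeholders that must match your environment.

    # Minimal sketch (Python): back up and remove cwallet.sso and password.hda
    # (steps 2 and 3 above) so that they are regenerated on the next startup.
    # ORACLE_BASE, domain_name, and wcc_cluster_name are placeholders.
    import os
    import shutil
    import time

    ORACLE_BASE = os.environ.get("ORACLE_BASE", "/u01/oracle")
    FILES = [
        os.path.join(ORACLE_BASE,
                     "admin/domain_name/mserver/domain_name/config/fmwconfig/cwallet.sso"),
        os.path.join(ORACLE_BASE,
                     "admin/domain_name/aserver/wcc_cluster_name/cs/config/private/password.hda"),
    ]

    suffix = time.strftime(".%Y%m%d%H%M%S.bak")
    for path in FILES:
        if os.path.exists(path):
            shutil.copy2(path, path + suffix)   # keep a timestamped backup
            os.remove(path)                     # remove the original so it is regenerated
            print("Backed up and removed: " + path)
        else:
            print("Not found (nothing to remove): " + path)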

19.13.17 Logging Out from the WebLogic Server Administration Console Does Not End the User Session

When you log in to the WebLogic Server Administration Console using Oracle Access Manager single sign-on (SSO), clicking the logout button does not end the user session. You are not redirected to the Oracle Access Manager login page, which is in accordance with the SSO logout guidelines; instead, the home page is reloaded. To truly log out, you may need to manually clear the cookies in your web browser.

19.13.18 Transaction Timeout Error

Problem: The following transaction timeout error appears in the log:

Internal Exception: java.sql.SQLException: Unexpected exception while enlisting XAConnection
java.sql.SQLException: XA error: XAResource.XAER_NOTA start() failed on resource
'SOADataSource_soaedg_domain': XAER_NOTA : The XID is not valid

Solution: Check your transaction timeout settings, and be sure that the JTA transaction timeout is less than the DataSource XA Transaction Timeout, which is less than the distributed_lock_timeout (at the database).

With the out-of-the-box configuration, the SOA data sources do not set the XA timeout to any value: the Set XA Transaction Timeout configuration parameter is unchecked in the WebLogic Server Administration Console. In this case, the data sources use the domain-level JTA timeout, which is set to 30 seconds. Also, the default distributed_lock_timeout value for the database is 60 seconds. As a result, the default SOA configuration works correctly for any system where transactions are expected to complete within these values. Adjust these values according to the transaction times your specific operations are expected to take.
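
If you need to adjust these settings, the following WLST (Jython) sketch shows one way to set the domain-level JTA timeout and enable an explicit XA transaction timeout on the SOA data source. The connection details and timeout values are placeholders; the data source and domain names (SOADataSource, soaedg_domain) are taken from the error message above, so verify the MBean paths against your own configuration. The distributed_lock_timeout value itself is changed at the database, not through WLST.

    # WLST (Jython) sketch: align the three timeouts described above so that
    # JTA timeout < data source XA timeout < database distributed_lock_timeout.
    # URL, credentials, and timeout values are placeholders.
    connect('weblogic_admin_user', 'password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    # Domain-level JTA timeout (the smallest of the three values).
    cd('/JTA/soaedg_domain')
    cmo.setTimeoutSeconds(30)

    # Data source XA transaction timeout (larger than the JTA timeout and
    # smaller than the database distributed_lock_timeout).
    cd('/JDBCSystemResources/SOADataSource/JDBCResource/SOADataSource/JDBCXAParams/SOADataSource')
    cmo.setXaSetTransactionTimeout(true)
    cmo.setXaTransactionTimeout(45)

    save()
    activate()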

19.13.19 Caching and Locking Files

Oracle WebCenter Content uses its own locking mechanism for files, so it needs to access those files without file-attribute caching and without locking by the cluster nodes. If one of the nodes accesses a status file that happens to be cached, that node might attempt to run a process that another node is already working on. Similarly, if a particular file is locked by one of the node clients, this can interfere with access by another node. Because disabling file-attribute caching on the entire file share can impact performance, it is important to disable caching and locking only on the particular folders that require it. For instance, if you are creating the share through Network File System (NFS), use the noac and nolock mount options.
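
On Linux, you can quickly confirm that a share is mounted with these options. The following minimal Python sketch reads /proc/mounts and reports whether the noac and nolock options are present for a given mount point; the mount point shown is a placeholder for your environment.

    # Minimal sketch (Python, Linux): check that an NFS mount used for the
    # shared content folders includes the noac and nolock options.
    # The mount point is a placeholder; adjust it for your environment.
    MOUNT_POINT = "/u01/oracle/admin"

    def mount_options(mount_point):
        with open("/proc/mounts") as mounts:
            for line in mounts:
                fields = line.split()
                if len(fields) >= 4 and fields[1] == mount_point:
                    return fields[3].split(",")
        return None

    options = mount_options(MOUNT_POINT)
    if options is None:
        print("Mount point not found: " + MOUNT_POINT)
    else:
        for required in ("noac", "nolock"):
            print("%s: %s" % (required, "present" if required in options else "missing"))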

19.13.20 Modifying Upload and Stage Directories for Applications Deployed Remotely

If you are deploying applications remotely, you might need to change your upload and stage directories to absolute paths after you create the domain and unpack it to the mserver directory. Using absolute path names can prevent issues with remote deployments and avoid confusion for deployments that use stage mode.

The default path names for these directories follow:

./servers/AdminServer/upload

./servers/server_name/stage
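
If you prefer to make this change with a script instead of the console, the following WLST (Jython) sketch sets absolute upload and stage directories. The URL, credentials, server names, and directory paths are placeholders and should be replaced with the actual directories used in your domain.

    # WLST (Jython) sketch: set absolute upload and stage directories.
    # URL, credentials, server names, and paths are placeholders.
    connect('weblogic_admin_user', 'password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    # Absolute upload directory for the Administration Server.
    cd('/Servers/AdminServer')
    cmo.setUploadDirectoryName(
        '/u01/oracle/admin/domain_name/aserver/domain_name/servers/AdminServer/upload')

    # Absolute stage directory for a Managed Server.
    cd('/Servers/WLS_WCC1')
    cmo.setStagingDirectoryName(
        '/u01/oracle/admin/domain_name/mserver/domain_name/servers/WLS_WCC1/stage')

    save()
    activate()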