2 Database and Environment Preconfiguration

This chapter describes the database and network environment preconfiguration required by the Oracle ECM enterprise deployment topology, as well as recommendations for shared storage and directory structure. It contains the following sections:

  • Section 2.1, "Database"

  • Section 2.2, "Network"

  • Section 2.3, "Shared Storage and Recommended Directory Structure"

  • Section 2.4, "LDAP as Credential and Policy Store"

2.1 Database

You must install the Oracle Fusion Middleware repository before you can configure the Oracle Fusion Middleware components. You install the Oracle Fusion Middleware metadata repository into an existing database using the Repository Creation Utility (RCU), which is available from the RCU DVD or from the location listed in Table 1-2.

For the enterprise topology, an Oracle Real Application Clusters (RAC) database is highly recommended. When you configure the Oracle ECM components, the Oracle Fusion Middleware Configuration Wizard will prompt you to enter the information for connecting to the database that contains the metadata repository.

This section covers these topics:

  • Section 2.1.1, "Setting Up the Database"

  • Section 2.1.2, "Loading the Oracle Fusion Middleware Metadata Repository in the Oracle RAC Database"

  • Section 2.1.3, "Backing Up the Database"

2.1.1 Setting Up the Database

Before loading the metadata repository into your database, check that the database meets the requirements described in these sections:

  • Section 2.1.1.1, "Database Host Requirements"

  • Section 2.1.1.2, "Supported Database Versions"

  • Section 2.1.1.3, "Initialization Parameters"

  • Section 2.1.1.4, "Database Services"

2.1.1.1 Database Host Requirements

On the hosts CUSTDBHOST1 and CUSTDBHOST2 in the data tier, note the following requirements:

  • Oracle Clusterware

    For 11g Release 1 (11.1) for Linux, refer to the Oracle Clusterware Installation Guide for Linux.

  • Oracle Real Application Clusters

    For 11g Release 1 (11.1) for Linux, refer to the Oracle Real Application Clusters Installation Guide for Linux and UNIX. For 10g Release 2 (10.2) for Linux, refer to Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide for Linux.

  • Automatic Storage Management (optional)

    ASM is installed for the node as a whole. It is recommended that you install it in a separate Oracle home from the Database Oracle home. This option is offered by runInstaller: in the Select Configuration page, select the Configure Automatic Storage Management option to create a separate ASM home.

2.1.1.2 Supported Database Versions

Oracle Enterprise Content Management Suite requires the presence of a supported database and schemas. To check if your database is certified or to see all certified databases, refer to the "Oracle Fusion Middleware 11g Release 1 (11.1.1.x)" product area on the Oracle Fusion Middleware Supported System Configurations page:

http://www.oracle.com/technology/software/products/ias/files/fusion_certification.html

To check the release of your database, you can query the PRODUCT_COMPONENT_VERSION view as follows:

SQL> SELECT VERSION FROM SYS.PRODUCT_COMPONENT_VERSION WHERE PRODUCT LIKE 'Oracle%';

Note:

Oracle ECM requires that the database used to store its metadata (either 10g or 11g) supports the AL32UTF8 character set. Check the database documentation for information on choosing a character set for the database.
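
You can verify the character set of an existing database by querying the NLS_DATABASE_PARAMETERS data dictionary view; for a correctly configured database, the query returns AL32UTF8:

SQL> SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER = 'NLS_CHARACTERSET';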

2.1.1.3 Initialization Parameters

Ensure that the following initialization parameter is set to the required minimum value. The Repository Creation Utility checks this value.

Table 2-1 Required Initialization Parameters

Configuration | Parameter | Required Value | Parameter Class
SOA           | PROCESSES | 400 or greater | Static
ECM           | PROCESSES | 100 or greater | Static
SOA and ECM   | PROCESSES | 500 or greater | Static


To check the value of the initialization parameter using SQL*Plus, you can use the SHOW PARAMETER command.

As the SYS user, issue the SHOW PARAMETER command as follows:

SQL> SHOW PARAMETER processes

Set the initialization parameter using the following command:

SQL> ALTER SYSTEM SET processes=500 SCOPE=SPFILE;

Restart the database.

Note:

The method that you use to change a parameter's value depends on whether the parameter is static or dynamic, and on whether your database uses a parameter file or a server parameter file. See the Oracle Database Administrator's Guide for details on parameter files, server parameter files, and how to change parameter values.

2.1.1.4 Database Services

Oracle recommends using the Oracle Enterprise Manager Cluster Managed Services Page to create database services that client applications will use to connect to the database. For complete instructions on creating database services, see the chapter on workload management in the Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.

You can also use SQL*Plus to configure this using the following instructions:

  1. Use the CREATE_SERVICE subprogram to create the ecmedg.mycompany.com database service. Log in to SQL*Plus as a user with SYSDBA privileges and run the following command:

    SQL> EXECUTE DBMS_SERVICE.CREATE_SERVICE
    (SERVICE_NAME => 'ecmedg.mycompany.com',
    NETWORK_NAME => 'ecmedg.mycompany.com'
    );
    
  2. Add the service to the database and assign it to the instances using srvctl:

    prompt> srvctl add service -d ecmdb -s ecmedg -r ecmdb1,ecmdb2
    
  3. Start the service using srvctl:

    prompt> srvctl start service -d ecmdb -s ecmedg
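
To confirm that the service was created and is running, you can check its status with srvctl and, for example, query the DBA_SERVICES data dictionary view (the database and service names below match the examples in this section):

prompt> srvctl status service -d ecmdb -s ecmedg

SQL> SELECT NAME FROM DBA_SERVICES WHERE NAME LIKE 'ecmedg%';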
    

Note:

For more information about the SRVCTL command, see the Oracle Real Application Clusters Administration and Deployment Guide.

Oracle recommends using a dedicated database service for each product suite, even when suites share the same database. The service used should also be different from the default database service. In this case, the database is ecmdb.mycompany.com and the default service is one with the same name. The ECM installation is configured to use the service ecmedg.mycompany.com. For SOA, it is recommended that a service named soaedg.mycompany.com be used.

Note:

For simplicity, the datasource configuration screens in this guide use the same service name (ecmedg.mycompany.com).

2.1.2 Loading the Oracle Fusion Middleware Metadata Repository in the Oracle RAC Database

Perform these steps to load the Oracle Fusion Middleware Repository into a database:

  1. Insert the Repository Creation Utility (RCU) DVD, and then start RCU from the bin directory in the RCU home directory:

    cd RCU_HOME/bin
    ./rcu
    
  2. In the Welcome screen (if displayed), click Next.

  3. In the Create Repository screen, select Create to load component schemas into a database. Click Next.

  4. In the Database Connection Details screen, enter connect information for your database:

    • Database Type: Select 'Oracle Database'.

    • Host Name: Specify the name of the node on which the database resides. For the Oracle RAC database, specify the VIP name or one of the node names as the host name: CUSTDBHOST1-VIP.

    • Port: Specify the listen port number for the database: 1521.

    • Service Name: Specify the service name of the database (ecmedg.mycompany.com).

    • Username: Specify the name of the user with DBA or SYSDBA privileges: SYS.

    • Password: Enter the password for the SYS user.

    • Role: Select the database user's role from the list: SYSDBA (required by the SYS user).

    Click Next.

    Figure 2-1 Database Connection Details Screen

  5. In the Select Components screen, do the following:

    • Select Create a new Prefix, and enter a prefix to use for the database schemas, for example DEV or PROD. You can specify up to six characters as a prefix. Prefixes are used to create logical groupings of multiple repositories in a database. For more information, see Oracle Fusion Middleware Repository Creation Utility User's Guide.

      Tip:

      Note the name of the schema because the upcoming steps require this information.
    • Select the following components:

      • AS Common Schemas:

        - Metadata Services

      • SOA and BPM Infrastructure:

        - SOA Infrastructure

        - User Messaging

      • Enterprise Content Management:

        - Oracle Content Server 11g - Complete

        - Oracle Imaging and Process Management

    Click Next.

    Figure 2-2 Select Components Screen

  6. In the Schema Passwords screen, enter passwords for the main and additional (auxiliary) schema users, and click Next.

    Tip:

    Note the name of the schema because the upcoming steps require this information.
  7. In the Map Tablespaces screen, choose the tablespaces for the selected components, and click Next.

  8. In the Summary screen, click Create.

  9. In the Completion Summary screen, click Close.
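
To confirm that RCU created the schemas, you can, for example, query DBA_USERS for accounts that use your prefix. This sketch assumes the DEV prefix; substitute the prefix you entered in the Select Components screen:

SQL> SELECT USERNAME FROM DBA_USERS WHERE USERNAME LIKE 'DEV\_%' ESCAPE '\' ORDER BY USERNAME;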

Note:

Oracle recommends using the database used for identity management (see Chapter 11, "Integration with Oracle Identity Management") to store the Oracle WSM policies. Use the IM database information for the OWSM MDS schemas, which will therefore be different from the database used for the rest of the SOA schemas. To create the required schemas in that database, repeat the steps above using the IM database information, but select only "AS Common Schemas: Metadata Services" in the Select Components screen (step 5).

2.1.3 Backing Up the Database

After you have loaded the metadata repository in your database, you should make a backup.

The purpose of this backup is to enable quick recovery from any issue that may occur in the subsequent steps. You can use your regular backup strategy for the database, or simply make a backup using operating system tools or RMAN. Oracle recommends using Oracle Recovery Manager (RMAN) for the database, particularly if the database was created using Oracle ASM. If possible, you can also perform a cold backup using operating system tools such as tar.
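
For example, a minimal whole-database backup with RMAN might look like the following. This is a sketch only; it assumes the database runs in ARCHIVELOG mode and that your RMAN configuration, such as the backup destination, is already in place:

prompt> rman target /

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;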

2.2 Network

This section covers these topics:

  • Section 2.2.1, "Virtual Server Names"

  • Section 2.2.2, "Load Balancers"

  • Section 2.2.3, "IPs and Virtual IPs"

  • Section 2.2.4, "Firewalls and Ports"

2.2.1 Virtual Server Names

The Oracle ECM enterprise topology uses the following virtual server names:

  • ecm.mycompany.com

  • admin.mycompany.com

  • soainternal.mycompany.com

  • ecminternal.mycompany.com

Ensure that the virtual server names are associated with IP addresses and are part of your DNS. The nodes running Oracle Fusion Middleware must be able to resolve these virtual server names.

2.2.1.1 ecm.mycompany.com

ecm.mycompany.com is a virtual server name that acts as the access point for all HTTP traffic to the run-time Oracle ECM components. The incoming traffic from clients is SSL-enabled. Clients access this service using the address ecm.mycompany.com:443. This virtual server is defined on the load balancer.

2.2.1.2 admin.mycompany.com

admin.mycompany.com is a virtual server name that acts as the access point for all internal HTTP traffic that is directed to administration services such as Oracle WebLogic Administration Server Console and Oracle Enterprise Manager.

The incoming traffic from clients is not SSL-enabled. Clients access this service using the address admin.mycompany.com:80 and the requests are forwarded to port 7777 on WEBHOST1 and WEBHOST2.

This virtual server is defined on the load balancer.

2.2.1.3 soainternal.mycompany.com

soainternal.mycompany.com is a virtual server name used for internal invocations of SOA services. This URL is not exposed to the internet and is only accessible from the intranet. (For SOA systems, users can set this URL, either while modeling composites or at run time through the appropriate Enterprise Manager MBeans, as the URL to be used for internal service invocations.)

The incoming traffic from clients is not SSL-enabled. Clients access this service using the address soainternal.mycompany.com:80 and the requests are forwarded to port 7777 on WEBHOST1 and WEBHOST2.

This virtual server is defined on the load balancer.

2.2.1.4 ecminternal.mycompany.com

ecminternal.mycompany.com is a virtual server name used for internal invocations of ECM services. This URL is not exposed to the internet and is only accessible from the intranet.

The incoming traffic from clients is not SSL-enabled. Clients access this service using the address ecminternal.mycompany.com:80 and the requests are forwarded to port 7777 on WEBHOST1 and WEBHOST2.

This virtual server is defined on the load balancer.

2.2.2 Load Balancers

This enterprise topology uses an external load balancer. For more information on load balancers, see Section 1.7.2, "Web Tier."

Note:

The Oracle Technology Network (http://otn.oracle.com) provides a list of validated load balancers and their configuration at http://www.oracle.com/technology/products/ias/hi_av/Tested_LBR_FW_SSLAccel.html.

Configuring the Load Balancer

Perform these steps to configure the load balancer:

  1. Create a pool of servers. You will assign this pool to virtual servers.

  2. Add the addresses of the Oracle HTTP Server hosts to the pool. For example:

    • WEBHOST1:7777

    • WEBHOST2:7777

  3. Configure a virtual server in the load balancer for soainternal.mycompany.com:80.

    • For this virtual server, use your internal SOA address as the virtual server address (for example, soainternal.mycompany.com). This address is typically not externalized.

    • Specify HTTP as the protocol.

    • Enable address and port translation.

    • Enable reset of connections when services and/or nodes are down.

    • Assign the pool created in step 1 to the virtual server.

  4. Configure a virtual server in the load balancer for ecm.mycompany.com:443.

    • For this virtual server, use your system's frontend address as the virtual server address (for example, ecm.mycompany.com). The frontend address is the externally facing host name used by your system and that will be exposed in the Internet.

    • Configure this virtual server with port 80 and port 443. Any request that goes to port 80 should be redirected to port 443.

    • Specify HTTP as the protocol.

    • Enable address and port translation.

    • Enable reset of connections when services and/or nodes are down.

    • Assign the pool created in step 1 to the virtual server.

    • Create rules to filter out access to /console and /em on this virtual server.

  5. Configure a virtual server in the load balancer for admin.mycompany.com:80.

    • For this virtual server, use your internal administration address as the virtual server address (for example, admin.mycompany.com). This address is typically not externalized.

    • Specify HTTP as the protocol.

    • Enable address and port translation.

    • Enable reset of connections when services and/or nodes are down.

    • Optionally, create rules to allow access only to /console and /em on this virtual server.

    • Assign the pool created in step 1 to the virtual server.

  6. Configure a virtual server in the load balancer for ecminternal.mycompany.com:80.

    • For this virtual server, use your internal ECM address as the virtual server address (for example, ecminternal.mycompany.com). This address is typically not externalized.

    • Specify HTTP as the protocol.

    • Enable address and port translation.

    • Enable reset of connections when services and/or nodes are down.

    • Assign the pool created in step 1 to the virtual server.

    • Optionally, create rules to filter out access to /console and /em on this virtual server.

  7. Configure monitors for the Oracle HTTP Server nodes to detect failures in these nodes.

    • Set up a monitor to regularly ping the "/" URL context.

      Tip:

      Use GET /\n\n instead if the Oracle HTTP Server's document root does not include index.htm and Oracle WebLogic Server returns a 404 error for "/".
    • For the ping interval, specify a value that does not overload your system. You can try 5 seconds as a starting point.

    • For the timeout period, specify a value that can account for the longest response time that you can expect from your SOA system, that is, specify a value greater than the longest period of time any of your requests to HTTP servers can take.
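
Before relying on the load balancer monitors, you can check the same URL manually from a host that can reach the web tier. A quick probe with curl (assuming curl is installed) should return an HTTP status code such as 200:

prompt> curl -s -o /dev/null -w "%{http_code}\n" http://WEBHOST1:7777/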

2.2.3 IPs and Virtual IPs

Configure the Administration Server and the managed servers to listen on different virtual IPs and physical IPs as illustrated in Figure 2-3.

Figure 2-3 IPs and VIPs Mapped to Administration Server and Managed Servers

As shown in Figure 2-3, each VIP and IP is attached to the Oracle WebLogic Server instance that uses it. VIP1 is failed over manually when the Administration Server is restarted on SOAHOST2. VIP2 and VIP3 fail over from SOAHOST1 to SOAHOST2 and from SOAHOST2 to SOAHOST1, respectively, through the Oracle WebLogic Server migration feature. WLS_IPM1 and WLS_IPM2 also use server migration to fail over VIP4 and VIP5, respectively, from ECMHOST1 to ECMHOST2. See the Oracle Fusion Middleware High Availability Guide for information on the WebLogic Server migration feature. Physical (non-virtual) IPs are fixed to each node: IP1 is the physical IP of ECMHOST1 and is used as the listen address by the WLS_UCM1 server, and IP2 is the physical IP of ECMHOST2 and is used as the listen address by the WLS_UCM2 server.
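
As an illustration only, on Linux a virtual IP is typically enabled as a secondary address on the public network interface and then announced with a gratuitous ARP. The interface name, IP address, and netmask below are placeholders; replace them with the values used in your environment:

SOAHOST1> /sbin/ifconfig eth0:1 ADMINVHN_IP netmask 255.255.255.0

SOAHOST1> /sbin/arping -q -U -c 3 -I eth0 ADMINVHN_IP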

Table 2-2 provides descriptions of the various virtual hosts.

Table 2-2 Virtual Hosts

Virtual IP | VIP Maps to... | Description

VIP1 | ADMINVHN | ADMINVHN is the virtual host name that is the listen address for the Administration Server and fails over with manual failover of the Administration Server. It is enabled on the node where the Administration Server process is running (SOAHOST1 by default).

VIP2 | SOAHOST1VHN1 | SOAHOST1VHN1 is the virtual host name that maps to the listen address for WLS_SOA1 and fails over with server migration of this managed server. It is enabled on the node where the WLS_SOA1 process is running (SOAHOST1 by default).

VIP3 | SOAHOST2VHN1 | SOAHOST2VHN1 is the virtual host name that maps to the listen address for WLS_SOA2 and fails over with server migration of this managed server. It is enabled on the node where the WLS_SOA2 process is running (SOAHOST2 by default).

VIP4 | ECMHOST1VHN1 | ECMHOST1VHN1 is the virtual host name that maps to the listen address for WLS_IPM1 and fails over with server migration of this managed server. It is enabled on the node where the WLS_IPM1 process is running (ECMHOST1 by default).

VIP5 | ECMHOST2VHN1 | ECMHOST2VHN1 is the virtual host name that maps to the listen address for WLS_IPM2 and fails over with server migration of this managed server. It is enabled on the node where the WLS_IPM2 process is running (ECMHOST2 by default).


2.2.4 Firewalls and Ports

Many Oracle Fusion Middleware components and services use ports. As an administrator, you must know the port numbers used by these services and ensure that the same port number is not used by two services on a host.

Most port numbers are assigned during installation.
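
To confirm that a port is free on a given host before assigning it to a service, you can, for example, check for existing listeners with netstat (assuming the net-tools utilities are installed); no output means the port is not in use:

prompt> netstat -an | grep LISTEN | grep :7001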

Table 2-3 lists the ports used in the Oracle ECM topology, including the ports that you must open on the firewalls in the topology.

Firewall notation:

  • FW0 refers to the outermost firewall.

  • FW1 refers to the firewall between the web tier and the application tier.

  • FW2 refers to the firewall between the application tier and the data tier.

Table 2-3 Ports Used

Type | Firewall | Port and Port Range | Protocol / Application | Inbound / Outbound | Other Considerations and Timeout Guidelines

Browser request | FW0 | 80 | HTTP / Load Balancer | Inbound | Timeout depends on all HTML content and the type of process model used for SOA.

Browser request | FW0 | 443 | HTTPS / Load Balancer | Inbound | Timeout depends on all HTML content and the type of process model used for SOA.

Load balancer to Oracle HTTP Server | n/a | 7777 | HTTP | n/a | See Section 2.2.2, "Load Balancers."

OHS registration with Administration Server | FW1 | 7001 | HTTP/t3 | Inbound | Set the timeout to a short period (5-10 seconds).

OHS management by Administration Server | FW1 | OPMN port (6701) and OHS Admin Port (7779) | TCP and HTTP, respectively | Outbound | Set the timeout to a short period (5-10 seconds).

SOA and WSM server access | FW1 | 8001; Range: 8000-8080 | HTTP / WLS_SOAn | Inbound | Timeout varies based on the type of process model used for SOA.

UCM access | FW1 | 16200 | HTTP / WLS_UCMn | Inbound | Browser-based access. Configurable session timeouts.

I/PM access | FW1 | 16000 | HTTP / WLS_IPMn | Inbound | Browser-based access. Configurable session timeouts.

I/PM connection to UCM | n/a | 4444 | HTTP / WLS_IPMn | Inbound | Persistent connection. Timeout configurable on the UCM server.

Communication between SOA Cluster members | n/a | 8001 | TCP/IP Unicast | n/a | By default, this communication uses the same port as the server's listen address.

Communication between UCM Cluster members | n/a | 16200 | TCP/IP Unicast | n/a | By default, this communication uses the same port as the server's listen address.

Communication between IPM Cluster members | n/a | 16000 | TCP/IP Unicast | n/a | By default, this communication uses the same port as the server's listen address.

Session replication within a WebLogic Server cluster | n/a | n/a | n/a | n/a | By default, this communication uses the same port as the server's listen address.

Administration Console access | FW1 | 7001 | HTTP / Administration Server and Enterprise Manager; t3 | Both | You should tune this timeout based on the type of access to the Administration Console (whether it is planned to use the Oracle WebLogic Server Administration Console from application tier clients or from clients external to the application tier).

Node Manager | n/a | 5556 | TCP/IP | n/a | n/a

Access Server access | FW1 | 6021 | OAP | Inbound | For actual values, see "Firewalls and Ports" in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management.

Identity Server access | FW1 | 6022 | OAP | Inbound | For actual values, see "Firewalls and Ports" in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management.

Database access | FW2 | 1521 | SQL*Net | Both | Timeout depends on all database content and on the type of process model used for SOA.

Coherence for deployment | n/a | 8088; Range: 8000-8090 | n/a | n/a | n/a

Oracle Internet Directory access | FW2 | 389 | LDAP | Inbound | You should tune the directory server's parameters based on the load balancer, and not the other way around.

Oracle Internet Directory access | FW2 | 636 | LDAP SSL | Inbound | You should tune the directory server's parameters based on the load balancer, and not the other way around.

JOC for OWSM | n/a | 9991; Range: 9988-9998 | TCP/IP | n/a | n/a


Note:

The firewall ports depend on the definition of TCP/IP ports.

2.3 Shared Storage and Recommended Directory Structure

This section details the directories and directory structure that Oracle recommends for the reference enterprise deployment topology in this guide. Other directory layouts are possible and supported, but the model adopted in this guide was chosen for maximum availability: it provides the best isolation of components, keeps the configuration symmetric, and facilitates backup and disaster recovery. The rest of the document uses this directory structure and directory terminology.

This section covers these topics:

  • Section 2.3.1, "Terminology for Directories and Directory Environment Variables"

  • Section 2.3.2, "Recommended Locations for the Different Directories"

  • Section 2.3.3, "Shared Storage Configuration"

2.3.1 Terminology for Directories and Directory Environment Variables

This enterprise deployment guide uses the following references to directory locations:

  • ORACLE_BASE: This environment variable and related directory path refers to the base directory under which Oracle products are installed.

  • MW_HOME: This environment variable and related directory path refers to the location where Fusion Middleware (FMW) resides.

  • WL_HOME: This environment variable and related directory path contains installed files necessary to host a WebLogic Server.

  • ORACLE_HOME: This environment variable and related directory path refers to the location where Oracle Fusion Middleware SOA Suite or Oracle Enterprise Content Management Suite is installed.

  • ORACLE_COMMON_HOME: This environment variable and related directory path refers to the Oracle home that contains the binary and library files required for the Oracle Enterprise Manager Fusion Middleware Control and Java Required Files (JRF).

  • Domain directory: This directory path refers to the location where the Oracle WebLogic domain information (configuration artifacts) is stored. Different WLS Servers can use different domain directories even when in the same node.

  • ORACLE_INSTANCE: An Oracle instance contains one or more system components, such as Oracle Web Cache, Oracle HTTP Server, or Oracle Internet Directory. An Oracle instance directory contains updatable files, such as configuration files, log files, and temporary files.

Tip:

You can simplify directory navigation by using environment variables as shortcuts to the locations in this section. For example, you could use an environment variable called $ORACLE_BASE in Linux to refer to /u01/app/oracle (that is, the recommended ORACLE_BASE location). In Windows, you would use %ORACLE_BASE% and use Windows-specific commands.
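
For example, in a Bourne-compatible shell on Linux, you could define the variables used in this chapter as follows. The values are the recommended locations from Section 2.3.2; adjust them if your directory layout differs:

SOAHOST1> export ORACLE_BASE=/u01/app/oracle
SOAHOST1> export MW_HOME=$ORACLE_BASE/product/fmw
SOAHOST1> export WL_HOME=$MW_HOME/wlserver_10.3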

2.3.2 Recommended Locations for the Different Directories

Oracle Fusion Middleware 11g allows you to create multiple managed servers from a single binary installation. This allows the binaries to be installed in a single location on shared storage and reused by the servers on different nodes. However, for maximum availability, Oracle recommends using redundant binary installations. In the EDG model, two Oracle Fusion Middleware homes (MW_HOME), each of which has a WL_HOME and an ORACLE_HOME for each product suite, are installed on shared storage. Additional servers (when scaling out or up) of the same type can use either one of these two locations without requiring more installations. Ideally, use two different volumes (referred to as VOL1 and VOL2 below) for the redundant binary locations, thus isolating as much as possible the failures in each volume. For additional protection, Oracle recommends that these volumes be disk mirrored. If multiple volumes are not available, Oracle recommends using mount points to simulate the same mount location in a different directory in the shared storage. Although this does not provide the protection that multiple volumes provide, it does protect against user deletions and individual file corruption.

When an ORACLE_HOME or a WL_HOME is shared by multiple servers on different nodes, it is recommended that you keep the Oracle Inventory (oraInventory) and the Middleware home list on those nodes up to date, for consistency in the installations and application of patches. To update the oraInventory on a node and "attach" an installation in shared storage to it, use ORACLE_HOME/oui/bin/attachHome.sh. To update the Middleware home list to add or remove a WL_HOME, edit the user_home/bea/beahomelist file. This is required for any node added beyond the two used in this EDG. An example of the oraInventory and beahomelist updates is provided in the scale-out steps included in this guide.
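
For example, after mounting the shared MW_HOME on an additional node, you could register it on that node as follows. This sketch assumes that the ORACLE_HOME environment variable is set as described in Section 2.3.1 and that the Middleware home list is located at ~/bea/beahomelist:

SOAHOST2> $ORACLE_HOME/oui/bin/attachHome.sh

Then edit ~/bea/beahomelist on that node to add or remove the corresponding WL_HOME entry.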

Oracle also recommends separating the domain directory used by the Administration Server from the domain directories used by managed servers. This allows a symmetric configuration for the domain directories used by managed servers, and isolates the failover of the Administration Server. The domain directory for the Administration Server must reside on shared storage to allow failover to another node with the same configuration. The domain directories of the managed servers can reside on local or shared storage.

You can use a shared domain directory for all managed servers in different nodes or use one domain directory per node. Sharing domain directories for managed servers facilitates the scale-out procedures. In this case, the deployment should conform to the requirements (if any) of the storage system to facilitate multiple machines mounting the same shared volume. The configuration steps provided in this enterprise deployment topology assume that a local (per node) domain directory is used for each managed server.

All procedures that apply to multiple local domains apply to a single shared domain. Hence, this enterprise deployment guide uses a model in which one domain directory is used per node. The directory can be local or reside on shared storage. JMS file stores and JTA transaction logs must be placed on shared storage so that they are available to multiple nodes for recovery in the case of a server failure or migration.

Based on the above assumptions, the following paragraphs describe the recommended directories. Wherever a shared storage location is directly specified, shared storage is required for that directory. When local disk can be used and shared storage is optional, the mount specification is qualified with "if using a shared disk." The shared storage locations are examples and can be changed as long as the provided mount points are used. However, Oracle recommends this structure in the shared storage device for consistency and simplicity.

ORACLE_BASE:

/u01/app/oracle

MW_HOME (application tier):

ORACLE_BASE/product/fmw

  • Mount point: ORACLE_BASE/product/fmw

  • Shared storage location: ORACLE_BASE/product/fmw (VOL1 and VOL2)

  • Mounted from: Nodes alternately mount VOL1 or VOL2 in such a way that at least half of the nodes use one installation and the other half use the other one. In the EDG for ECM, SOAHOST1 and ECMHOST1 mount VOL1, and SOAHOST2 and ECMHOST2 mount VOL2. When only one volume is available, nodes alternately mount two different directories in the shared storage (for example, SOAHOST1 would use ORACLE_BASE/product/fmw1 as the shared storage location and SOAHOST2 would use ORACLE_BASE/product/fmw2 as the shared storage location).

    Note:

    When there is just one volume available in the shared storage, you can provide redundancy using different directories to protect from accidental file deletions and for patching purposes. Two MW_HOMEs would be available; at least one at ORACLE_BASE/product/fmw1, and another at ORACLE_BASE/product/fmw2. These MW_HOMEs are mounted on the same mount point in all nodes.

MW_HOME (web tier):

ORACLE_BASE/product/fmw/web

  • Mount point: ORACLE_BASE/product/fmw

  • Shared storage location: ORACLE_BASE/product/fmw (VOL1 and VOL2)

  • Mounted from: For shared storage installations, nodes alternately mount VOL1 or VOL2 in such a way that at least half of the nodes use one installation and the other half use the other one. In the EDG for ECM, WEBHOST1 would mount VOL1 and WEBHOST2 would mount VOL2. When only one volume is available, nodes alternately mount the two suggested directories in the shared storage (that is, WEBHOST1 would use ORACLE_BASE/product/fmw1 as the shared storage location and WEBHOST2 would use ORACLE_BASE/product/fmw2 as the shared storage location).

    Note:

    Web tier installation is usually performed on storage local to the WEBHOST nodes. When using shared storage, consider appropriate security restrictions for access to the storage device across tiers.

WL_HOME:

MW_HOME/wlserver_10.3

ORACLE_HOME:

MW_HOME/soa or MW_HOME/ecm

ORACLE_COMMON_HOME:

MW_HOME/oracle_common

ORACLE_INSTANCE:

ORACLE_BASE/admin/instance_name

  • If you are using a shared disk, the mount point on the machine is ORACLE_BASE/admin/instance_name mounted to ORACLE_BASE/admin/instance_name (VOL1).

    Note:

    (VOL1) is optional; you could also use (VOL2).

Domain Directory for Administration Server Domain Directory:

ORACLE_BASE/admin/domain_name/aserver/domain_name (the last "domain_name" is added by the Configuration Wizard)

  • Mount point on machine: ORACLE_BASE/admin/domain_name/aserver

  • Shared storage location: ORACLE_BASE/admin/domain_name/aserver

  • Mounted from: Only the node where the administration server is running needs to mount this directory. When the administration server is relocated (failed over) to a different node, the node then mounts the same shared storage location on the same mount point. The remaining nodes in the topology do not need to mount this location.

Domain Directory for Managed Server Directory:

ORACLE_BASE/admin/domain_name/mserver/domain_name

  • If you are using a shared disk, the mount point on the machine is ORACLE_BASE/admin/domain_name/mserver mounted to ORACLE_BASE/admin/domain_name/Noden/mserver/ (each node uses a different domain directory for managed servers).

Note:

This procedure depends on the type of shared storage used. The above example is specific to NAS, but other storage types may provide this redundancy with different types of mappings.

Location for JMS file-based stores and Tlogs:

ORACLE_BASE/admin/domain_name/cluster_name/jms

ORACLE_BASE/admin/domain_name/cluster_name/tlogs

  • Mount point: ORACLE_BASE/admin/domain_name/cluster_name

  • Shared storage location: ORACLE_BASE/admin/domain_name/cluster_name

  • Mounted from: All nodes running SOA and ECM components need to mount this shared storage location so that transaction logs and JMS stores are available when server migration to another node takes place.

Location for Oracle I/PM input files, images, and samples input directories:

ORACLE_BASE/admin/domain_name/ipm_cluster_name/input_files

ORACLE_BASE/admin/domain_name/ipm_cluster_name/input_files/Samples

ORACLE_BASE/admin/domain_name/ipm_cluster_name/images

  • Mount point: ORACLE_BASE/admin/domain_name/ipm_cluster_name

  • Shared storage location: ORACLE_BASE/admin/domain_name/ipm_cluster_name

  • Mounted from: All nodes containing I/PM mount these locations (all nodes need to have access to input files and the images to process).

The location of input files and images may vary according to each customer's implementation needs. It is important, however, that image files be located on a device isolated from other concurrent access that could degrade the performance of the system. A separate volume can be used for this purpose. In general, it is good practice to place the files under the cluster directory structure for consistent backups and maintenance.

In a multinode installation of Oracle I/PM, this location is shared among all the input agents and must be accessible by all agents. If input agents are on different machines, this must be a shared network location.

Note:

In order to process input files, the input agent must have the appropriate permissions on the input directory and the input directory must allow file locking. The input agent requires that the user account that is running the WebLogic Server service have read and write privileges to the input directory and all files and subdirectories in the input directory. These privileges are required so that the input agent can move the files to the various directories as it works on them. File locking on the share is needed by the input agent to coordinate actions between servers in the cluster.
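
As an illustration of granting that access, the following commands give ownership of the input directory tree to the operating system account that runs WebLogic Server. The oracle user, the oinstall group, and the use of the recommended directory path are assumptions; substitute the account and path used in your environment:

ECMHOST1> chown -R oracle:oinstall ORACLE_BASE/admin/domain_name/ipm_cluster_name/input_files
ECMHOST1> chmod -R 770 ORACLE_BASE/admin/domain_name/ipm_cluster_name/input_files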

Location for Oracle UCM's vault (native file repository):

ORACLE_BASE/admin/domain_name/ucm_cluster_name/cs/vault

  • Mount point: ORACLE_BASE/admin/domain_name/ucm_cluster_name

  • Shared storage location: ORACLE_BASE/admin/domain_name/ucm_cluster_name

  • Mounted from: All nodes containing the UCM server mount this location (all UCM nodes need to have access to the same native file repository).

Location for application directory for administration server:

ORACLE_BASE/admin/domain_name/aserver/applications

  • Mount point: ORACLE_BASE/admin/domain_name/aserver/applications

  • Shared storage location: ORACLE_BASE/admin/domain_name/aserver

Location for application directory for managed server:

ORACLE_BASE/admin/domain_name/mserver/applications

Note:

This directory is local in the context of the EDG for ECM.

Figure 2-4 shows this directory structure in a diagram.

Figure 2-4 EDG Directory Structure for Oracle ECM

The directory structure in Figure 2-4 does not show other required internal directories such as oracle_common and jrockit.

Table 2-4 explains what the various color-coded elements in Figure 2-4 mean.

Table 2-4 Directory Structure Elements

Element | Explanation

Admin Server element | The administration server domain directories, applications, deployment plans, file adapter control directory, JMS and TX logs, and the entire MW_HOME are on a shared disk.

Managed server elements | The managed server domain directories can be on a local disk or a shared disk. Further, if you want to share the managed server domain directories on multiple nodes, then you must mount the same shared disk location across the nodes. The instance_name directory for the web tier can be on a local disk or a shared disk.

Fixed name element | Fixed name.

Installation-dependent names | Installation-dependent name.


Figure 2-5 shows an example configuration for shared storage with multiple volumes for SOA and ECM.

Figure 2-5 Example Configuration for Shared Storage

Table 2-5 summarizes the directory structure for the domain.

Table 2-5 Contents of Shared Storage

Server | Type of Data | Volume in Shared Storage | Directory | Files

WLS_SOA1 | Tx Logs | VOL1 | ORACLE_BASE/admin/domain_name/soa_cluster_name/tlogs | The transaction directory is common (decided by WebLogic Server), but the files are separate.

WLS_SOA2 | Tx Logs | VOL1 | ORACLE_BASE/admin/domain_name/soa_cluster_name/tlogs | The transaction directory is common (decided by WebLogic Server), but the files are separate.

WLS_SOA1 | JMS Stores | VOL1 | ORACLE_BASE/admin/domain_name/soa_cluster_name/jms | The JMS store directory is common (decided by WebLogic Server), but the files are separate; for example: SOAJMSStore1, UMSJMSStore1, and so on.

WLS_SOA2 | JMS Stores | VOL1 | ORACLE_BASE/admin/domain_name/soa_cluster_name/jms | The JMS store directory is common (decided by WebLogic Server), but the files are separate; for example: SOAJMSStore2, UMSJMSStore2, and so on.

WLS_SOA1 | WLS Install | VOL1 | MW_HOME | Individual in each volume, but both servers see the same directory structure.

WLS_SOA2 | WLS Install | VOL2 | MW_HOME | Individual in each volume, but both servers see the same directory structure.

WLS_SOA1 | SOA Install | VOL1 | MW_HOME/soa | Individual in each volume, but both servers see the same directory structure.

WLS_SOA2 | SOA Install | VOL2 | MW_HOME/soa | Individual in each volume, but both servers see the same directory structure.

WLS_SOA1 | Domain Config | VOL1 | ORACLE_BASE/admin/domain_name/mserver/domain_name | Individual in each volume, but both servers see the same directory structure.

WLS_SOA2 | Domain Config | VOL2 | ORACLE_BASE/admin/domain_name/mserver/domain_name | Individual in each volume, but both servers see the same directory structure.


2.3.3 Shared Storage Configuration

The following steps show how to create and mount shared storage locations so that SOAHOST1 and SOAHOST2 can see the same location for binary installation in two separate volumes.

"nasfiler" is the shared storage filer.

From SOAHOST1:

SOAHOST1> mount nasfiler:/vol/vol1/u01/app/oracle/product/fmw /u01/app/oracle/product/fmw -t nfs

From SOAHOST2:

SOAHOST2> mount nasfiler:/vol/vol2/u01/app/oracle/product/fmw /u01/app/oracle/product/fmw -t nfs

If only one volume is available, you can provide redundancy for the binaries by using two different directories in the shared storage and mounting them to the same directory on the SOA servers:

From SOAHOST1:

SOAHOST1> mount nasfiler:/vol/vol1/u01/app/oracle/product/fmw1 /u01/app/oracle/product/fmw -t nfs

From SOAHOST2:

SOAHOST2> mount nasfiler:/vol/vol1/u01/app/oracle/product/fmw2 /u01/app/oracle/product/fmw -t nfs

The following commands show how to share the SOA TX logs location across different nodes:

SOAHOST1> mount nasfiler:/vol/vol1/u01/app/oracle/stores/soadomain/soa_cluster/tlogs /u01/app/oracle/stores/soadomain/soa_cluster/tlogs -t nfs

SOAHOST2> mount nasfiler:/vol/vol1/u01/app/oracle/stores/soadomain/soa_cluster/tlogs /u01/app/oracle/stores/soadomain/soa_cluster/tlogs -t nfs

Note:

The shared storage can be a NAS or SAN device. The following illustrates an example of creating storage for a NAS device from SOAHOST1. The options may differ.
SOAHOST1> mount nasfiler:/vol/vol1/fmw11shared ORACLE_BASE/wls -t nfs -o rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768

Contact your storage vendor and machine administrator for the correct options for your environment.

2.4 LDAP as Credential and Policy Store

With Oracle Fusion Middleware, you can use different types of credential and policy stores in a WebLogic domain. Domains can use stores based on XML files or on different types of LDAP providers. When a domain uses an LDAP store, all policy and credential data is kept and maintained in a centralized store. However, when using XML policy stores, the changes made on managed servers are not propagated to the administration server unless they use the same domain home.

An Oracle ECM enterprise deployment topology uses different domain homes for the Administration Server and the managed servers, as described in Section 2.3, "Shared Storage and Recommended Directory Structure." Because of this, and for integrity and consistency purposes, Oracle requires the use of an LDAP server as the policy and credential store in the context of an Oracle ECM enterprise deployment topology. Follow the steps in Section 11.1.2.1, "Creating the LDAP Authenticator" to configure the Oracle ECM enterprise deployment with an LDAP server as the credential and policy store.