5 Configuring High Availability for Oracle Identity Manager Components

This chapter describes how to design and deploy a high availability environment for Oracle Identity Manager.

Oracle Identity Manager (OIM) is a user provisioning and administration solution that automates the process of adding, updating, and deleting user accounts from applications and directories. It also improves regulatory compliance by providing granular reports that attest to who has access to what. OIM is available as a stand-alone product or as part of Oracle Identity and Access Management Suite.

For details about OIM, see the Oracle Fusion Middleware Administrator's Guide for Oracle Identity Manager.

This section includes the following topics:

5.1 Oracle Identity Manager Component Architecture

Figure 5-1 shows the Oracle Identity Manager architecture:

Figure 5-1 Oracle Identity Manager Component Architecture

Description of Figure 5-1 follows
Description of "Figure 5-1 Oracle Identity Manager Component Architecture"

5.1.1 Oracle Identity Manager Component Characteristics

Oracle Identity Manager Server is Oracle's self-contained, standalone identity management solution. It provides User Administration, Workflow and Policy, Password Management, Audit and Compliance Management, User Provisioning, and Organization and Role Management functionality.

Oracle Identity Manager (OIM) is a standard Java EE application that is deployed on WebLogic Server and uses a database to store runtime and configuration data. The MDS schema contains configuration information; the runtime and user information is stored in the OIM schema.

OIM connects to the SOA Managed Servers over RMI to invoke SOA EJBs.

OIM uses the human workflow module of Oracle SOA Suite to manage its request workflow. OIM connects to SOA using the T3 URL for the SOA server, which is the front-end URL for SOA. For clustered SOA servers, Oracle recommends using the load balancer or web server URL. When the workflow completes, SOA calls back OIM web services using OIMFrontEndURL. Oracle SOA is deployed along with OIM.
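
For illustration, a clustered SOA T3 URL configured in OIM might look like the following; the host names and port are placeholders drawn from the example topology in Section 5.2.1:

t3://soavhn1.example.com:8001,soavhn2.example.com:8001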

Several OIM modules use JMS queues. Each queue is processed by a separate Message Driven Bean (MDB), which is also part of the OIM application. Message producers are also part of the OIM application.

OIM uses embedded Oracle Entitlements Server, which is also part of the OIM engine. Oracle Entitlements Server (OES) is used for authorization checks inside OIM. For example, one of the policy constraints determines that only users with certain roles can create users. This is defined using the OIM user interface.

OIM uses a Quartz-based scheduler for scheduled activities. Various scheduled activities occur in the background, such as disabling users after their end date.

You deploy and configure Oracle BI Publisher as part of the OIM domain. Because there is no interaction between BI Publisher and OIM runtime components, you can locate BI Publisher in the same domain as OIM or in a different domain; in either case, integration consists of configuring a static URL. BI Publisher is configured to use the same OIM database schema for reporting purposes.

When you enable LDAPSync, OIM communicates directly with external directory servers such as Oracle Internet Directory, ODSEE, and Microsoft Active Directory. To support high availability and failover for these directories, you must configure the Identity Virtualization Library (libOVD).

To configure libOVD, use the WLST command addLDAPHost. To manage libOVD, see Managing Identity Virtualization Library (libOVD) Adapters in Oracle Fusion Middleware Administrator's Guide for Oracle Identity Manager for a list of WLST commands.
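
For illustration, the following WLST session sketches adding a failover directory host to a libOVD adapter; the adapter name, host, and port are placeholder assumptions, and you should confirm the exact addLDAPHost parameters for your release in the guide referenced above:

# Launch WLST and connect to the Administration Server
MW_HOME/oracle_common/common/bin/wlst.sh
connect('weblogic','admin_password','t3://oimhost1.example.com:7001')
# Add a second LDAP host to the adapter so libOVD can fail over
addLDAPHost(adapterName='oimIdStore', host='idstore2.example.com', port=389)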

5.1.2 Runtime Processes

Oracle Identity Manager deploys on WebLogic Server as a no-stage application. The OIM server initializes when the WebLogic Server it is deployed on starts up. As part of application initialization, the Quartz-based scheduler also starts. After initialization completes, the system is ready to receive requests from clients.

You must start Remote Manager and Design Console as standalone utilities separately.

5.1.3 Component and Process Lifecycle

Oracle Identity Manager deploys to WebLogic Server as an externally managed application. By default, WebLogic Server starts, stops, monitors, and manages lifecycle events for the OIM application.

OIM starts after the application server components start. It uses an authenticator, part of the OIM component mechanism, that starts up before WebLogic JNDI initializes and the application starts.

OIM uses a Quartz technology-based scheduler that starts the scheduler thread on all WebLogic Server instances. It uses the database as centralized storage for picking and running scheduled activities. If one scheduler instance picks up a job, other instances do not pick up that same job.

You can configure Node Manager to monitor the server process and restart it in case of failure.

Use Oracle Enterprise Manager Fusion Middleware Control to monitor the application and to modify its configuration.

5.1.4 Starting and Stopping Oracle Identity Manager

You manage OIM lifecycle events with these command line tools and consoles:

  • Oracle WebLogic Scripting Tool (WLST)

  • WebLogic Server Administration Console

  • Oracle Enterprise Manager Fusion Middleware Control

  • Oracle WebLogic Node Manager

5.1.5 Configuration Artifacts

The OIM server configuration is stored in the MDS repository at /db/oim-config.xml. The oim-config.xml file is the main configuration file. Manage OIM configuration using the MBean browser through Oracle Enterprise Manager Fusion Middleware Control or command line MDS utilities. For more information about MDS utilities, see the MDS utilities section in Developing and Customizing Applications for Oracle Identity Manager.
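
For example, the following WLST sketch exports oim-config.xml from MDS so that you can inspect or edit it offline; the application name OIMMetadata and the server name wls_OIM1 are assumptions based on a default OIM deployment, so verify both against your environment before running it:

# Launch WLST, connect to the Administration Server, and export the document
MW_HOME/oracle_common/common/bin/wlst.sh
connect('weblogic','admin_password','t3://oimhost1.example.com:7001')
exportMetadata(application='OIMMetadata', server='wls_OIM1', toLocation='/tmp/oim_config', docs='/db/oim-config.xml')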

The installer configures JMS out-of-the-box; all necessary JMS queues, connection pools, and data sources are configured on the WebLogic application servers. These queues are created when OIM deploys:

  • oimAttestationQueue

  • oimAuditQueue

  • oimDefaultQueue

  • oimKernelQueue

  • oimProcessQueue

  • oimReconQueue

  • oimSODQueue

The xlconfig.xml file stores Design Console and Remote Manager configuration.

5.1.6 External Dependencies

Oracle Identity Manager uses the Worklist and Human workflow modules of the Oracle SOA Suite for request flow management. OIM interacts with external repositories to store configuration and runtime data, and the repositories must be available during initialization and runtime. The OIM repository stores all OIM credentials. External components that OIM requires are:

  • WebLogic Server

    • Administration Server

    • Managed Server

  • Data Repositories

    • Configuration Repository (MDS Schema)

    • Runtime Repository (OIM Schema)

    • User Repository (OIM Schema)

    • SOA Repository (SOA Schema)

    • BI Publisher Repository (BIPLATFORM Schema)

  • External LDAP Stores (when using LDAP Sync)

  • BI Publisher

The Design Console is a tool used by the administrator for development and customization. The Design Console communicates directly with the OIM engine, so it relies on the same components that the OIM server relies on.

Remote Manager is an optional independent standalone application, which calls the custom APIs on the local system. It needs JAR files for custom APIs in its classpath.

5.1.7 Oracle Identity Manager Log File Locations

Because OIM is a Java EE application deployed on WebLogic Server, all server log messages are logged to the server log file. OIM-specific messages are logged into the diagnostic log file of the WebLogic Server where the application is deployed.

WebLogic Server log files are in the directory:

DOMAIN_HOME/servers/serverName/logs

The three main log files are serverName.log, serverName.out, and serverName-diagnostic.log, where serverName is the name of the WebLogic Server. For example, if the WebLogic Server name is wls_OIM1, then the diagnostic log file name is wls_OIM1-diagnostic.log. Use Oracle Enterprise Manager Fusion Middleware Control to view log files.
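
For example, to follow OIM-specific messages on OIMHOST1, assuming the Managed Server name wls_OIM1 used elsewhere in this chapter:

# Follow the diagnostic log for the wls_OIM1 Managed Server
tail -f DOMAIN_HOME/servers/wls_OIM1/logs/wls_OIM1-diagnostic.log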

5.2 Oracle Identity Manager High Availability Concepts

This section includes the following topics:

Note:

Note the following when you deploy OIM:
  • You can deploy OIM on an Oracle RAC database, but Oracle RAC failover is not transparent for OIM in this release. If Oracle RAC failover occurs, end users may have to resubmit requests.

  • OIM always requires the availability of at least one node in the SOA cluster. If the SOA cluster is not available, end user requests fail. OIM does not retry for a failed SOA call. Therefore, the end user must retry when a SOA call fails.

5.2.1 Oracle Identity Manager High Availability Architecture

Figure 5-2 shows OIM deployed in a high availability architecture.

Figure 5-2 Oracle Identity Manager High Availability Architecture

Description of Figure 5-2 follows
Description of "Figure 5-2 Oracle Identity Manager High Availability Architecture"

On OIMHOST1, the following installations have been performed:

  • An OIM instance is installed in the WLS_OIM1 Managed Server and a SOA instance is installed in the WLS_SOA1 Managed Server.

  • A BI Publisher instance is installed in the WLS_BI1 Managed Server.

  • The Oracle RAC database is configured in a GridLink data source to protect the instance from Oracle RAC node failure.

  • A WebLogic Server Administration Server has been installed. Under normal operations, this is the active Administration Server.

On OIMHOST2, the following installations have been performed:

  • An OIM instance is installed in the WLS_OIM2 Managed Server, a SOA instance is installed in the WLS_SOA2 Managed Server, and a BI Publisher instance is installed in the WLS_BI2 Managed Server.

  • The Oracle RAC database is configured in a GridLink data source to protect the instance from Oracle RAC node failure.

  • The instances in the WLS_OIM1 and WLS_OIM2 Managed Servers on OIMHOST1 and OIMHOST2 are configured as the OIM_Cluster cluster.

  • The instances in the WLS_SOA1 and WLS_SOA2 Managed Servers on OIMHOST1 and OIMHOST2 are configured as the SOA_Cluster cluster.

  • The instances in the WLS_BI1 and WLS_BI2 Managed Servers on OIMHOST1 and OIMHOST2 are configured as the BI_Cluster cluster.

  • An Administration Server is installed. Under normal operations, this is the passive Administration Server. You make this Administration Server active if the Administration Server on OIMHOST1 becomes unavailable.

Figure 5-2 uses these virtual host names in the OIM high availability configuration:

  • OIMVHN1 is the virtual hostname that maps to the listen address for the WLS_OIM1 Managed Server, and it fails over with server migration of the WLS_OIM1 Managed Server. It is enabled on the node where the WLS_OIM1 Managed Server is running (OIMHOST1 by default).

  • OIMVHN2 is the virtual hostname that maps to the listen address for the WLS_OIM2 Managed Server, and it fails over with server migration of the WLS_OIM2 Managed Server. It is enabled on the node where the WLS_OIM2 Managed Server is running (OIMHOST2 by default).

  • SOAVHN1 is the virtual hostname that is the listen address for the WLS_SOA1 Managed Server, and it fails over with server migration of the WLS_SOA1 Managed Server. It is enabled on the node where the WLS_SOA1 Managed Server is running (OIMHOST1 by default).

  • SOAVHN2 is the virtual hostname that is the listen address for the WLS_SOA2 Managed Server, and it fails over with server migration of the WLS_SOA2 Managed Server. It is enabled on the node where the WLS_SOA2 Managed Server is running (OIMHOST2 by default).

  • BIPVHN1 is the virtual hostname that is the listen address for the WLS_BI1 Managed Server, and it fails over with server migration of the WLS_BI1 Managed Server. It is enabled on the node where the WLS_BI1 Managed Server is running (OIMHOST1 by default).

  • BIPVHN2 is the virtual hostname that is the listen address for the WLS_BI2 Managed Server, and it fails over with server migration of the WLS_BI2 Managed Server. It is enabled on the node where the WLS_BI2 Managed Server is running (OIMHOST2 by default).

  • VHN refers to the virtual IP addresses for the Oracle Real Application Clusters (Oracle RAC) database hosts.

5.2.2 Starting and Stopping the OIM Cluster

By default, WebLogic Server starts, stops, monitors, and manages lifecycle events for the application. The OIM application leverages high availability features of clusters. In case of hardware or other failures, session state is available to other cluster nodes that can resume the work of the failed node.

Use these command line tools and consoles to manage OIM lifecycle events:

  • WebLogic Server Administration Console

  • Oracle Enterprise Manager Fusion Middleware Control

  • Oracle WebLogic Scripting Tool (WLST)

5.2.3 Cluster-Wide Configuration Changes

For high availability environments, changing the configuration of one OIM instance changes the configuration of all the other instances, because all the OIM instances share the same configuration repository.

5.2.4 Considerations for Synchronizing with LDAP

Reconciliation, a scheduled process that runs in the background, handles synchronization between LDAP and the OIM database. If an LDAP outage occurs during synchronization, the data that did not reach OIM is picked up during the next run of the reconciliation task.

5.3 High Availability Directory Structure Prerequisites

Before you configure high availability, verify that your environment meets the requirements that Section 6.3, "High Availability Directory Structure Prerequisites" describes.

5.4 Oracle Identity Manager High Availability Configuration Steps

This section provides high-level instructions for setting up a high availability deployment for OIM and includes these topics:

5.4.1 Prerequisites for Configuring Oracle Identity Manager

Before you configure OIM for high availability, you must:

5.4.1.1 Running RCU to Create the OIM Schemas in a Database

The schemas you create depend on the products you want to install and configure. Use a Repository Creation Utility (RCU) that is version compatible with the product you install. See the Oracle Fusion Middleware Installation Planning Guide for Oracle Identity and Access Management and Oracle Fusion Middleware Repository Creation Utility User's Guide to run RCU.

5.4.1.2 Installing Oracle WebLogic Server

To install Oracle WebLogic Server, see Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.

Note:

On 64-bit platforms, the JDK does not install when you install WebLogic Server using the generic jar file. You must install the JDK separately, before installing WebLogic Server.

5.4.1.3 Installing Oracle SOA Suite on OIMHOST1 and OIMHOST2

See Installing Oracle SOA Suite (Oracle Identity Manager Users Only) in Installation Guide for Oracle Identity and Access Management.

5.4.1.4 Installing Oracle Identity and Access Management on OIMHOST1 and OIMHOST2

See "Installing and Configuring Identity and Access Management" in Installation Guide for Oracle Identity and Access Management.

5.4.1.5 Creating wlfullclient.jar Library on OIMHOST1 and OIMHOST2

Oracle Identity Manager requires the wlfullclient.jar library for some operations. For example, the Design Console uses the library for server connections. Oracle does not ship this library; you must create it manually. Oracle recommends creating this library under the MW_HOME/wlserver_10.3/server/lib directory on all application tier machines in your environment. You do not need to create this library on directory tier machines such as OIDHOST1, OIDHOST2, OVDHOST1, and OVDHOST2. See "Developing a WebLogic Full Client" in Oracle Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server for more information.

To create the wlfullclient.jar file:

  1. Go to the MW_HOME/wlserver_10.3/server/lib directory.

  2. Set your JAVA_HOME to your JDK path and ensure that your JAVA_HOME/bin directory is in your path.

  3. Create the wlfullclient.jar file by running:

    java -jar wljarbuilder.jar
    

5.4.2 Creating and Configuring a WebLogic Domain for OIM, SOA, and BI Publisher on OIMHOST1

To create a domain, see the topic "Creating a new WebLogic Domain for Oracle Identity Manager, SOA, and BI Publisher" in Oracle Fusion Middleware Installation Guide for Oracle Identity and Access Management.

5.4.3 Configuring the Database Security Store for the Domain

You must configure the database security store after you configure the domain but before you start Administration Server. See "Configuring Database Security Store for an Oracle Identity and Access Management Domain" in Installation Guide for Oracle Identity and Access Management for more information.

5.4.4 Post-Installation Steps on OIMHOST1

This section describes post-installation steps for OIMHOST1. It includes these topics:

5.4.4.1 Creating boot.properties for the Administration Server on OIMHOST1

The boot.properties file enables the Administration Server to start without prompting for the administrator username and password.

To create the boot.properties file:

  1. On OIMHOST1, create the following directory:

    MW_HOME/user_projects/domains/domainName/servers/AdminServer/security
    

    For example:

    $ mkdir -p \
    MW_HOME/user_projects/domains/domainName/servers/AdminServer/security
    
  2. Use a text editor to create a file named boot.properties under the security directory. Enter the following lines in the file:

    username=adminUser
    password=adminUserPassword
    

    Note:

    When you start Administration Server, username and password entries in the file get encrypted. For security reasons, minimize the time that file entries are left unencrypted. After you edit the file, start the server as soon as possible so that entries get encrypted.

5.4.4.2 Update Node Manager on OIMHOST1

Before you start Managed Servers, Node Manager requires that the StartScriptEnabled property be set to true.

To do this, run the setNMProps.sh script located under the following directory:

MW_HOME/oracle_common/common/bin
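
For example, run:

# Sets StartScriptEnabled=true (among other properties) for Node Manager
MW_HOME/oracle_common/common/bin/setNMProps.sh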

5.4.4.3 Start Node Manager on OIMHOST1

Start Node Manager on OIMHOST1 using the startNodeManager.sh script located under the following directory:

MW_HOME/wlserver_10.3/server/bin
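
For example, a minimal sketch that starts Node Manager in the background; the output file location is an arbitrary choice:

# Start Node Manager and keep it running after the shell exits
cd MW_HOME/wlserver_10.3/server/bin
nohup ./startNodeManager.sh > /tmp/nodemanager.out 2>&1 &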

5.4.4.4 Start the Administration Server on OIMHOST1

To start the Administration Server and validate its startup:

  1. Start the Administration Server on OIMHOST1 by issuing the command:

    DOMAIN_HOME/bin/startWebLogic.sh
    
  2. Validate that the Administration Server started up successfully by opening a web browser and accessing the following pages:

    • Administration Console at:

      http://oimhost1.example.com:7001/console
      
    • Oracle Enterprise Manager Fusion Middleware Control at:

      http://oimhost1.example.com:7001/em
      

    Log into these consoles using the weblogic user credentials.

5.4.5 Configuring Oracle Identity Manager on OIMHOST1

This section includes the following topics:

5.4.5.1 Prerequisites for Configuring Oracle Identity Manager

Before configuring OIM, verify the following tasks are completed:

Note:

This section is required only for LDAPSync-enabled OIM installations and for OIM installations that integrate with Oracle Access Management.

If you do not plan to enable the LDAPSync option or to integrate with Oracle Access Management, skip this section.

  1. "Extending the Directory Schema for Oracle Identity Manager"

  2. "Creating Users and Groups for Oracle Identity Manager"

Extending the Directory Schema for Oracle Identity Manager

Pre-configuring the Identity Store extends the schema in the back end directory regardless of directory type.

To pre-configure the Identity Store, perform these steps on OIMHOST1:

  1. Set the environment variables MW_HOME, JAVA_HOME and ORACLE_HOME.

    Set ORACLE_HOME to IAM_ORACLE_HOME.

  2. Create a properties file extend.props that contains the following:

    IDSTORE_HOST: idstore.example.com
    IDSTORE_PORT: 389
    IDSTORE_BINDDN: cn=orcladmin
    IDSTORE_USERNAMEATTRIBUTE: cn
    IDSTORE_LOGINATTRIBUTE: uid
    IDSTORE_USERSEARCHBASE: cn=Users,dc=example,dc=com
    IDSTORE_GROUPSEARCHBASE: cn=Groups,dc=example,dc=com
    IDSTORE_SEARCHBASE: dc=example,dc=com
    IDSTORE_SYSTEMIDBASE: cn=systemids,dc=example,dc=com

    Where:

    • IDSTORE_HOST and IDSTORE_PORT are the host and port of your Identity Store directory. If you are using a non-OID directory, specify the Oracle Virtual Directory host and port instead (idstore.example.com in this example).

    • IDSTORE_BINDDN is an administrative user in the Identity Store directory.

    • IDSTORE_USERSEARCHBASE is the directory location where users are stored.

    • IDSTORE_GROUPSEARCHBASE is the directory location where groups are stored.

    • IDSTORE_SEARCHBASE is the directory location where users and groups are stored.

    • IDSTORE_SYSTEMIDBASE is the location of a container in the directory where users can be placed when you do not want them in the main user container. This happens rarely but one example is the OIM reconciliation user, which is also used for the bind DN user in Oracle Virtual Directory adapters.

  3. Configure Identity Store using the command idmConfigTool, located at IAM_ORACLE_HOME/idmtools/bin.

    The command syntax is:

    idmConfigTool.sh -preConfigIDStore input_file=configfile

    For example:

    idmConfigTool.sh -preConfigIDStore input_file=extend.props
    

    After the command runs, the system prompts you to enter the password of the account with which you are connecting to the ID Store.

    Sample command output:

    ./preconfig_id.sh 
    Enter ID Store Bind DN password : 
    Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/idm_idstore_groups_template.ldif
    Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/idm_idstore_groups_acl_template.ldif
    Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/systemid_pwdpolicy.ldif
    Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/idstore_tuning.ldif
    Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oid_schema_extn.ldif
    The tool has completed its operation. Details have been logged to automation.log
    
  4. Check the log file for any errors or warnings and correct them.

Creating Users and Groups for Oracle Identity Manager

Add the oimadmin user to the Identity Store and assign it to an OIM administrative group. You must also create a user outside of the standard cn=Users location to perform reconciliation. Oracle recommends that you use this user as the bind DN when connecting to directories with Oracle Virtual Directory.

Note:

This command also creates a container in your Identity Store for reservations.

To add the xelsysadm user to the Identity Store and assign it to an administrative group, perform the following tasks on OIMHOST1:

  1. Set the Environment Variables: MW_HOME, JAVA_HOME, IDM_HOME, and ORACLE_HOME

    Set IDM_HOME to IDM_ORACLE_HOME

    Set ORACLE_HOME to IAM_ORACLE_HOME

  2. Create a properties file oim.props that contains the following:

    IDSTORE_HOST: idstore.example.com
    IDSTORE_PORT: 389
    IDSTORE_BINDDN: cn=orcladmin
    IDSTORE_USERNAMEATTRIBUTE: cn
    IDSTORE_LOGINATTRIBUTE: uid
    IDSTORE_USERSEARCHBASE: cn=Users,dc=example,dc=com
    IDSTORE_GROUPSEARCHBASE: cn=Groups,dc=example,dc=com
    IDSTORE_SEARCHBASE: dc=example,dc=com
    POLICYSTORE_SHARES_IDSTORE: true
    IDSTORE_SYSTEMIDBASE: cn=systemids,dc=example,dc=com
    IDSTORE_OIMADMINUSER: oimadmin
    IDSTORE_OIMADMINGROUP: OIMAdministrators

    Where:

    • IDSTORE_HOST and IDSTORE_PORT are, respectively, the host and port of your Identity Store directory. Specify the back-end directory here, rather than OVD.

    • IDSTORE_BINDDN is an administrative user in the Identity Store directory.

    • IDSTORE_OIMADMINUSER is the name of the administration user you would like to use to log in to the OIM console.

    • IDSTORE_OIMADMINGROUP is the name of the group you want to create to hold your OIM administrative users.

    • IDSTORE_USERSEARCHBASE is the location in your Identity Store where users are placed.

    • IDSTORE_GROUPSEARCHBASE is the location in your Identity Store where groups are placed.

    • IDSTORE_SYSTEMIDBASE is the location in your directory where the OIM reconciliation user is placed.

    • POLICYSTORE_SHARES_IDSTORE is set to true if your Policy and Identity stores are in the same directory. If not, it is set to false.

  3. Configure Identity Store. Go to idmConfigTool at IAM_ORACLE_HOME/idmtools/bin:

    idmConfigTool.sh -prepareIDStore mode=OIM input_file=configfile

    For example:

    idmConfigTool.sh -prepareIDStore mode=OIM input_file=oim.props
    

    When the command runs, the system prompts you for the password of the account with which you connect to the Identity Store, and then requests the passwords you want to assign to the accounts it creates, such as:

    IDSTORE_OIMADMINUSER
    oimadmin
    

    Oracle recommends that you set the oimadmin password to the same value as the password of the account you create as part of the OIM configuration.

    Sample command output:

    Enter ID Store Bind DN password : 
    *** Creation of oimadmin ***
    Apr 5, 2011 4:58:51 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_user_template.ldif
    Enter User Password for oimadmin: 
    Confirm User Password for oimadmin: 
    Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_group_template.ldif
    Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_group_member_template.ldif
    Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_groups_acl_template.ldif
    Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_reserve_template.ldif
    *** Creation of Xel Sys Admin User ***
    Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
    INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oam_user_template.ldif
    Enter User Password for xelsysadm: 
    Confirm User Password for xelsysadm: 
    The tool has completed its operation. Details have been logged to /home/oracle/idmtools/oim.log
    
  4. Check the log file for errors and warnings and correct them.

5.4.5.2 Updating the Coherence Configuration for the Coherence Cluster

To update the Coherence configuration for the SOA Managed Servers:

  1. Log into the Administration Console.

  2. Click Lock and Edit in the top left corner.

  3. In the Domain Structure window, expand the Environment node.

  4. Click Servers. The Summary of Servers page appears.

  5. Click the name of the server (represented as a hyperlink) in the Name column of the table. The settings page for the selected server appears.

  6. Click the Server Start tab.

  7. Enter the following for WLS_SOA1 and WLS_SOA2 into the Arguments field.

    For WLS_SOA1, enter the following (on a single line, without a carriage return):

    -Dtangosol.coherence.wka1=soahost1vhn1 -Dtangosol.coherence.wka2=soahost2vhn1 -Dtangosol.coherence.localhost=soahost1vhn1
    

    For WLS_SOA2, enter the following (on a single line, without a carriage return):

    -Dtangosol.coherence.wka1=soahost1vhn1 -Dtangosol.coherence.wka2=soahost2vhn1 -Dtangosol.coherence.localhost=soahost2vhn1
    
  8. Click Save and activate the changes.

    Start WLS_SOA1 from the Administration Console.

5.4.5.3 Running the Oracle Identity Management Configuration Wizard

You must configure the OIM server instances before you can start the OIM Managed Servers. You perform these configuration steps only once, for example, during the initial creation of the domain. The Oracle Identity Management Configuration Wizard loads OIM metadata into the database and configures the instance.

Before running the Configuration Wizard, you must verify the following:

The Oracle Identity Management Configuration Wizard is located under the Identity Management Oracle home. Enter:

IAM_ORACLE_HOME/bin/config.sh

To run the OIM Configuration Wizard:

  1. On the Welcome screen, click Next

  2. On the Components to Configure screen, select OIM Server. Select OIM Design Console and OIM Remote Manager, if required in your topology.

    Click Next.

  3. On the Database screen, provide the following values:

    • Connect String: The connect string for the OIM database. For example:

      oimdbhost1-vip.example.com:1521:oimdb1^oimdbhost2-vip.example.com:1521:oimdb2@oim.example.com

    • OIM Schema User Name: HA_OIM

    • OIM Schema password: password

    • MDS Schema User Name: HA_MDS

    • MDS Schema Password: password

    Click Next.

  4. On the WebLogic Administration Server screen, enter the following details:

    • URL: URL to connect to the Administration Server. For example: t3://oimhost1.example.com:7001

    • UserName: weblogic

    • Password: Password for the weblogic user

    Click Next.

  5. On the OIM Server screen, enter the following values:

    • OIM Administrator Password: Password for the OIM Administrator. This is the password for the xelsysadm user, the same password you entered earlier for idmconfigtool.

    • Confirm Password: Confirm the password.

    • OIM HTTP URL: Reverse proxy URL for the OIM server. This is the URL for the hardware load balancer that fronts the OHS servers for OIM. For example: http://oiminternal.example.com:80.

    • Key Store Password: Key store password. The password must have an uppercase letter and a number. For example: MyPassword1

    • Confirm KeyStore Password: Confirm the KeyStore password.

    • Enable OIM for Suite integration: Select this checkbox only if you are configuring OIM for OAM or OAM-OAAM integration.

    Click Next.

  6. On the LDAP Server screen, provide the following LDAP server details:

    • Directory Server Type: The directory server type. Select OID, ACTIVE_DIRECTORY, IPLANET, or OVD. The default is OID.

    • Directory Server ID: The directory server ID.

    • Server URL: The URL to access the LDAP server. For example: ldap://ovd.example.com:389 if you use Oracle Virtual Directory, or ldap://oid.example.com:389 if you use Oracle Internet Directory.

    • Server User: The username to connect to the server. For example: cn=orcladmin.

    • Server Password: The password to connect to the LDAP server.

    • Server SearchDN: The Search DN. For example: dc=example,dc=com.

    Click Next.

  7. On the LDAP Server Continued screen, enter the following LDAP server details:

    • LDAP Role Container: The DN for the Role Container, where OIM roles are stored. For example: cn=Groups,dc=example,dc=com.

    • LDAP User Container: The DN for the User Container, where the OIM users are stored. For example: cn=Users,dc=example,dc=com.

    • User Reservation Container: The DN for the User Reservation Container.

      Note:

      Use the same container DN Values that idmconfigtool creates during the procedure "Creating Users and Groups for Oracle Identity Manager."

    Click Next.

  8. On the Remote Manager screen, provide the following values:

    Note:

    This screen appears only if you selected the Remote Manager utility in step 2.
    • Service Name: HA_RManager

    • RMI Registry Port: 12345

    • Listen Port (SSL): 12346

  9. On the Configuration Summary screen, verify the summary information.

    Click Configure to configure the Oracle Identity Manager instance.

  10. On the Configuration Progress screen, once the configuration completes successfully, click Next.

  11. On the Configuration Complete screen, view the details of the Oracle Identity Manager Instance configured.

    Click Finish to exit the Configuration Assistant.

5.4.5.4 Post-Configuration Steps: Start WLS_SOA1, WLS_OIM1, and WLS_BIP1 Managed Servers on OIMHOST1

To start the Managed Servers on OIMHOST1:

  1. Stop the Administration Server and SOA Managed Servers on OIMHOST1 using the Administration Console.

  2. Start the Administration Server on OIMHOST1 using the startWebLogic.sh script under the DOMAIN_HOME/bin directory. For example:

    /u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 &
    
  3. Open the Administration Console to validate that the Administration Server started successfully.

  4. Start the WLS_SOA1 Managed Server using the Administration Console.

  5. Start the WLS_BIP1 Managed Server using the Administration Console.

  6. Start the WLS_OIM1 Managed Server using the Administration Console.

5.4.6 Validate the Oracle Identity Manager Instance on OIMHOST1

Validate the Oracle Identity Manager server instance on OIMHOST1 by opening the Oracle Identity Manager Console in a web browser.

The URL for the Oracle Identity Manager Console is:

http://identityvhn1.example.com:14000/identity

Log in using the xelsysadm username and password.

5.4.7 Propagating Oracle Identity Manager to OIMHOST2

After the configuration succeeds on OIMHOST1, you can propagate it to OIMHOST2 by packing the domain on OIMHOST1 and unpacking it on OIMHOST2.

Note:

Oracle recommends that you perform a clean shut down of all Managed Servers on OIMHOST1 before you propagate the configuration to OIMHOST2.

To pack the domain on OIMHOST1 and unpack it on OIMHOST2:

  1. On OIMHOST1, invoke the pack utility in the MW_HOME/oracle_common/common/bin directory:

    pack.sh -domain=MW_HOME/user_projects/domains/OIM_Domain \
      -template=/u01/app/oracle/admin/templates/oim_domain.jar \
      -template_name="OIM Domain" -managed=true
    
  2. The previous step created the oim_domain.jar file in the following directory:

    /u01/app/oracle/admin/templates
    

    Copy oim_domain.jar from OIMHOST1 to a temporary directory on OIMHOST2.

  3. On OIMHOST2, invoke the unpack utility in the MW_HOME/oracle_common/common/bin directory and specify the oim_domain.jar file location in its temporary directory:

    unpack.sh -domain=MW_HOME/user_projects/domains/OIM_Domain \
      -template=/tmp/oim_domain.jar
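
Putting these steps together, the following sketch shows one possible end-to-end sequence; the scp user and target directory are assumptions, so substitute the hosts and paths for your environment:

# On OIMHOST1: pack the domain (Managed Servers only)
MW_HOME/oracle_common/common/bin/pack.sh \
  -domain=MW_HOME/user_projects/domains/OIM_Domain \
  -template=/u01/app/oracle/admin/templates/oim_domain.jar \
  -template_name="OIM Domain" -managed=true

# Copy the domain template to OIMHOST2
scp /u01/app/oracle/admin/templates/oim_domain.jar oracle@oimhost2.example.com:/tmp/

# On OIMHOST2: unpack the domain from the copied template
MW_HOME/oracle_common/common/bin/unpack.sh \
  -domain=MW_HOME/user_projects/domains/OIM_Domain \
  -template=/tmp/oim_domain.jar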
    

5.4.8 Post-Installation Steps on OIMHOST2

This section includes these topics:

5.4.8.1 Update Node Manager on OIMHOST2

Before you can start Managed Servers with the Administration Console, you must set the Node Manager StartScriptEnabled property to true.

To do this, run the setNMProps.sh script located under the following directory:

MW_HOME/oracle_common/common/bin

5.4.8.2 Start Node Manager on OIMHOST2

Start the Node Manager on OIMHOST2 using the startNodeManager.sh script located under the following directory:

MW_HOME/wlserver_10.3/server/bin

5.4.8.3 Start WLS_SOA2, WLS_OIM2, and WLS_BIP2 Managed Servers on OIMHOST2

To start Managed Servers on OIMHOST2:

  1. Validate that the Administration Server started up successfully by bringing up the Administration Console.

  2. Start the WLS_SOA2 Managed Server using the Administration Console.

  3. Start the WLS_BIP2 Managed Server using the Administration Console.

  4. Start the WLS_OIM2 Managed Server using the Administration Console. The WLS_OIM2 Managed Server must be started after the WLS_SOA2 Managed Server is started.

5.4.9 Validate Managed Server Instances on OIMHOST2

Validate the Oracle Identity Manager (OIM) and BI Publisher Managed Server instances on OIMHOST2.

Open the OIM Console with this URL:

http://identityvhn2.example.com:14000/oim

Log in using the xelsysadm username and password.

The URL for BI Publisher is:

http://identityvhn2.example.com:9704/xmlpserver

Log in using the xelsysadm username and password.

5.4.10 Configuring BI Publisher

To configure BI Publisher:

  1. Verify that all BI servers use the same BI configuration. To do this, copy the contents of the DOMAIN_HOME/config/bipublisher/repository directory to the shared configuration folder location.

    Note:

    You can use any folder location, as long as it exists on shared storage (NFS or cluster file system) that both hosts can access at the same mount point on each host.
  2. On OIMHOST1, log in to BI Publisher with Administrator credentials and select the Administration tab.

  3. Under System Maintenance, select Server Configuration.

  4. In the Path field under the Configuration Folder, enter the shared location for the Configuration Folder.

  5. In the BI Publisher Repository field under Catalog, enter the shared location for the BI Publisher Repository. Apply the changes.

Repeat the preceding procedure for each Managed Server that BI is running on.

To restart the BI Publisher application:

  1. Log in to the Administration Console.

  2. Click Deployments in the Domain Structure window then select bipublisher(11.1.1).

  3. Click Stop then select When work completes or Force Stop Now.

  4. When the application stops, click Start then select Servicing All Requests.

  5. Log in to BI Publisher again to confirm that the configuration change succeeded.

    Note:

    If you enter an incorrect shared configuration folder path, you may see this error when logging in to BI Publisher after restarting it:

    example.xdo.servlet.resources.ResourceNotFoundException: INCORRECT_REPO_PATH/Admin/Security/principals.xml

    INCORRECT_REPO_PATH is the incorrect repository path. To recover from this error, manually edit DOMAIN_HOME/config/bipublisher/xmlp-server-config.xml to correct the invalid path, then restart BI Publisher.

    Continue with the following procedures:

5.4.10.1 Setting Scheduler Configuration Options

To set Scheduler configuration options:

  1. On OIMHOST1, log in to BI Publisher with Administrator credentials and select the Administration tab.

  2. Under System Maintenance, select Scheduler Configuration.

  3. Select Quartz Clustering under the Scheduler Selection then click Apply.

5.4.10.2 Configuring JMS for BI Publisher

In this procedure, you configure the location for all persistence stores to a directory that is visible from both nodes. You then change all persistent stores to use this shared base directory.

  1. Log into the Administration Console. In the Domain Structure window, expand the Services node and click the Persistent Stores node.

  2. Click Lock & Edit in the Change Center. Click on existing File Store (for example, BipJmsStore), and verify the target. If it is WLS_BIP2, the new File Store must target WLS_BIP1.

  3. Click New and Create File Store.

  4. Enter a name, such as BipJmsStore1, and target WLS_BIP1. Enter a directory located in shared storage so that OIMHOST1 and OIMHOST2 can access it:

    ORACLE_BASE/admin/domain_name/bi_cluster/jms
    
  5. Click OK and Activate Changes.

  6. In the Domain Structure window, expand the Services node and click the Messaging > JMS Servers node.

  7. Click Lock & Edit in the Change Center then click New.

  8. Enter a name, such as BipJmsServer1. In the Persistence Store drop-down list, select BipJmsStore1 and click Next.

  9. Select WLS_BIP1 as the target. Click Finish and Activate Changes.

  10. In the Domain Structure window, expand the Services node and click the Messaging > JMS Modules node.

  11. Click Lock & Edit in the Change Center.

  12. Click BipJmsResource and click the Subdeployments tab. Select BipJmsSubDeployment under Subdeployments.

  13. Add the new BI Publisher JMS Server, BipJmsServer1, as an additional target for the subdeployment.

  14. Click Save and Activate Changes.

Note:

In a high availability set up, you must keep the clocks of BI, OIM, and SOA nodes synchronized.

To validate the JMS configuration for BI Publisher, follow the steps in Section 5.4.10.3, "Validating the BI Publisher Scheduler Configuration."

5.4.10.3 Validating the BI Publisher Scheduler Configuration

Follow this procedure to validate the JMS Shared Temp Directory for the BI Publisher Scheduler. You run this procedure on one OIMHOST only: OIMHOST1 or OIMHOST2.

To validate the BI Publisher Scheduler configuration:

  1. Log in to BI Publisher at one of the following URLs:

    http://OIMHOST1VHN1:9704/xmlpserver
    http://OIMHOST2VHN1:9704/xmlpserver
    
  2. Click Administration then click Scheduler Configuration under System Maintenance to open the Scheduler Configuration page.

  3. Update the Shared Directory by entering a directory that is located in the shared storage. This shared storage is accessible from both OIMHOST1 and OIMHOST2.

  4. Click Test JMS.

    Note:

    If you do not see a confirmation message for a successful test, verify that the JNDI URL is set to cluster:t3://bi_cluster
  5. Click Apply. Check the Scheduler status in the Scheduler Diagnostics tab.

  6. Restart WLS_BIP1 and WLS_BIP2.

For more information, see the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Business Intelligence.

5.4.11 Configuring Oracle Internet Directory using the LDAP Configuration Post-setup Script

Note:

This section is required only for LDAPSync-enabled OIM installations and for OIM installations that integrate with Oracle Access Management.

If you do not plan to enable the LDAPSync option or to integrate with Oracle Access Management, you can skip this section.

In the current release, the LDAPConfigPostSetup script enables all the LDAPSync-related incremental Reconciliation Scheduler jobs, which are disabled by default. The LDAP configuration post-setup script is located under the IAM_ORACLE_HOME/server/ldap_config_util directory. To run the script, follow these steps:

  1. Edit the ldapconfig.props file located under the IAM_ORACLE_HOME/server/ldap_config_util directory and provide the following values:

    • OIMProviderURL: t3://OIMHOST1VHN.example.com:14000,OIMHOST2VHN.example.com:14000
      The list of Oracle Identity Manager Managed Servers.

    • LDAPURL: the Oracle Virtual Directory instance URL, for example ldap://idstore.example.com:389
      The Identity Store URL. Required only if the Identity Store is accessed using Oracle Virtual Directory.

    • LDAPAdminUserName: cn=oimadmin,cn=systemids,dc=example,dc=com
      The name of the user that connects to the Identity Store. Required only if your Identity Store is in Oracle Virtual Directory. This user should not be located in cn=Users,dc=example,dc=com.

    • LIBOVD_PATH_PARAM: MSERVER_HOME/config/fmwconfig/ovd/oim
      Required unless you access your Identity Store using Oracle Virtual Directory.

    Note:

    usercontainerName, rolecontainername, and reservationcontainername are not used in this step.
  2. Save the file.

  3. Set the JAVA_HOME, WL_HOME, APP_SERVER, OIM_ORACLE_HOME, and DOMAIN_HOME environment variables, where:

    • JAVA_HOME is set to MW_HOME/JRE-JDK_version

    • WL_HOME is set to MW_HOME/wlserver_10.3

    • APP_SERVER is set to weblogic

    • OIM_ORACLE_HOME is set to IAM_ORACLE_HOME

    • DOMAIN_HOME is set to MSERVER_HOME

  4. Run LDAPConfigPostSetup.sh, passing the location of the property file. The script prompts for the LDAP administrator password and the OIM administrator password. The syntax is:

    IAM_ORACLE_HOME/server/ldap_config_util/LDAPConfigPostSetup.sh path_to_property_file
    

    For example:

    IAM_ORACLE_HOME/server/ldap_config_util/LDAPConfigPostSetup.sh IAM_ORACLE_HOME/server/ldap_config_util
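
Putting steps 3 and 4 together, the following shell sketch shows the environment setup and the run; all values are the placeholders used above, so substitute the actual paths for your environment:

# Environment variables from step 3 (placeholders, not literal paths)
export JAVA_HOME=MW_HOME/JRE-JDK_version
export WL_HOME=MW_HOME/wlserver_10.3
export APP_SERVER=weblogic
export OIM_ORACLE_HOME=IAM_ORACLE_HOME
export DOMAIN_HOME=MSERVER_HOME

# Run the script, pointing at the directory that contains ldapconfig.props;
# it prompts for the LDAP and OIM administrator passwords
IAM_ORACLE_HOME/server/ldap_config_util/LDAPConfigPostSetup.sh IAM_ORACLE_HOME/server/ldap_config_util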
    

5.4.12 Configuring Server Migration for OIM, SOA, and BI Publisher Managed Servers

For this high availability topology, Oracle recommends that you configure server migration for the WLS_OIM1, WLS_SOA1, WLS_OIM2, and WLS_SOA2 Managed Servers. See Section 3.9, "Whole Server Migration" for information on the benefits of using Whole Server Migration and why Oracle recommends it.

  • The WLS_OIM1 and WLS_SOA1 Managed Servers on OIMHOST1 are configured to restart automatically on OIMHOST2 if a failure occurs on OIMHOST1.

  • The WLS_OIM2 and WLS_SOA2 Managed Servers on OIMHOST2 are configured to restart automatically on OIMHOST1 if a failure occurs on OIMHOST2.

In this configuration, the WLS_OIM1, WLS_SOA1, WLS_OIM2 and WLS_SOA2 servers listen on specific floating IPs that WebLogic Server Migration fails over.

The following steps enable server migration for the WLS_OIM1, WLS_SOA1, WLS_OIM2, and WLS_SOA2 Managed Servers, which in turn enables a Managed Server to fail over to another node if a server or process failure occurs:

5.4.12.1 Setting Up a User and Tablespace for the Server Migration Leasing Table

The first step is to set up a user and tablespace for the server migration leasing table:

Note:

If other servers in the same domain are already configured with server migration, use the same tablespace and data sources. In this case, you do not need to recreate the data sources and GridLink data source for database leasing; however, you must retarget them to the clusters you are configuring for server migration.
  1. Create a tablespace named leasing. For example, log on to SQL*Plus as the sysdba user and run the following command:

    SQL> create tablespace leasing logging datafile 'DB_HOME/oradata/orcl/leasing.dbf' size 32m autoextend on next 32m maxsize 2048m extent management local;
    

    Note: Omit DB_HOME/oradata/orcl/leasing.dbf (the data file path) if you have configured Oracle Managed Files (OMF). If you are using Oracle Automatic Storage Management (ASM), you can provide the name of the ASM disk group here, for example, +DATA. If omitted, the default disk group configured in the DB_CREATE_FILE_DEST database initialization parameter is used.

  2. Create a user named leasing and assign to it the leasing tablespace:

    SQL> create user leasing identified by password;
    SQL> grant create table to leasing;
    SQL> grant create session to leasing;
    SQL> alter user leasing default tablespace leasing;
    SQL> alter user leasing quota unlimited on LEASING;
    
  3. Create the leasing table using the leasing.ddl script:

    1. Copy the leasing.ddl file located in either the WL_HOME/server/db/oracle/817 or the WL_HOME/server/db/oracle/920 directory to your database node.

    2. Connect to the database as the leasing user.

    3. Run the leasing.ddl script in SQL*Plus:

      SQL> @Copy_Location/leasing.ddl;
      

      Note:

      The following errors are normal; you can ignore them:
      SP2-0734: unknown command beginning "WebLogic S..." - rest of line ignored.
      SP2-0734: unknown command beginning "Copyright ..." - rest of line ignored.
      DROP TABLE ACTIVE
                 *
      ERROR at line 1:
      ORA-00942: table or view does not exist
      

5.4.12.2 Creating a GridLink Data Source

To create a GridLink data source, see "Creating a GridLink Data Source" in the Oracle Fusion Middleware Configuring and Managing JDBC Data Sources for Oracle WebLogic Server guide.

5.4.12.3 Editing Node Manager's Properties File

You must edit the nodemanager.properties file to add the following properties for each node where you configure server migration:

Interface=eth0
eth0=*,NetMask=255.255.248.0
UseMACBroadcast=true
  • Interface: Specifies the interface name for the floating IP (such as eth0).

    Note:

    Do not specify the sub-interface, such as eth0:1 or eth0:2. Use the interface name without the :0 or :1 suffix; Node Manager's scripts traverse the different :X enabled IPs to determine which to add or remove. For example, valid values in Linux environments are eth0, eth1, eth2, and so on, depending on the number of interfaces configured.
  • NetMask: Net mask for the interface for the floating IP. The net mask should be the same as the net mask on the interface; 255.255.255.0 is an example. The actual value depends on your network.

  • UseMACBroadcast: Specifies whether or not to use a node's MAC address when sending ARP packets, that is, whether or not to use the -b flag in the arping command.

Restart Node Manager, then verify in its output (the shell where Node Manager starts) that these properties are being used; otherwise, problems may arise during migration. You should see an entry similar to the following in Node Manager's output:

...
StateCheckInterval=500
Interface=eth0
NetMask=255.255.255.0
...

5.4.12.4 Setting Environment and Superuser Privileges for the wlsifconfig.sh Script

To set environment and superuser privileges for the wlsifconfig.sh script for each node where you configure server migration:

  1. Modify the login profile of the user account that you use to run Node Manager to ensure that the PATH environment variable for the Node Manager process includes the directories housing the wlsifconfig.sh and wlscontrol.sh scripts and the nodemanager.domains configuration file (see the sketch after these steps). Ensure that your PATH environment variable includes these files:

    Table 5-1 Files Required for the PATH Environment Variable

    File                   Located in this directory
    wlsifconfig.sh         DOMAIN_HOME/bin/server_migration
    wlscontrol.sh          WL_HOME/common/bin
    nodemanager.domains    WL_HOME/common

  2. Grant sudo configuration for the wlsifconfig.sh script.

    • Configure sudo to work without a password prompt.

    • For security reasons, Oracle recommends restricting sudo to the subset of commands required to run the wlsifconfig.sh script. For example, perform the following steps to set the environment and superuser privileges for the wlsifconfig.sh script:

    • Grant sudo privilege to the WebLogic user (oracle) with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries.

    • Ensure that the script is executable by the WebLogic user. The following is an example of an entry inside /etc/sudoers granting sudo execution privilege for oracle and also over ifconfig and arping:

      oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
      

    Note:

    Ask the system administrator for the sudo and system rights as appropriate to this step.
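
For step 1, the login profile additions might look like the following minimal sketch; it assumes the default file locations from Table 5-1, and DOMAIN_HOME and WL_HOME must be expanded to the actual paths in your environment:

# Append the server migration script and configuration locations
# to the PATH of the user that runs Node Manager
export PATH=$PATH:DOMAIN_HOME/bin/server_migration
export PATH=$PATH:WL_HOME/common/bin:WL_HOME/common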

5.4.12.5 Configuring Server Migration Targets

You first assign all available nodes as candidates for the cluster's members, and then specify candidate machines (in order of preference) for each server that is configured with server migration. To configure server migration targets in a cluster:

  1. Log into the Administration Console.

  2. In the Domain Structure window, expand Environment and select Clusters.

  3. Click the cluster you want to configure migration for in the Name column.

  4. Click the Migration tab.

  5. Click Lock and Edit.

  6. In the Available field, select the machines to enable migration to, and click the right arrow.

  7. Select the data source to use for automatic migration. In this case, select the leasing data source.

  8. Click Save.

  9. Click Activate Changes.

  10. Set the candidate machines for server migration. You must perform this task for all Managed Servers as follows:

    1. In the Domain Structure window of the Administration Console, expand Environment and select Servers.

      Tip:

      Click Customize this table in the Summary of Servers page and move Current Machine from the Available window to the Chosen window to view the machine that the server runs on. This will be different from the configuration if the server migrates automatically.
    2. Select the server that you want to configure migration for.

    3. Click the Migration tab.

    4. In the Available field, located in the Migration Configuration section, select the machines you want to enable migration to and click the right arrow.

    5. Select Automatic Server Migration Enabled. This enables Node Manager to start a failed server on the target node automatically.

    6. Click Save then Click Activate Changes.

    7. Repeat the steps above for any additional Managed Servers.

    8. Restart the Administration Server, Node Managers, and the servers for which server migration has been configured.

5.4.12.6 Testing the Server Migration

To verify that server migration works properly:

From OIMHOST1:

  1. Stop the WLS_OIM1 Managed Server by running the command:

    OIMHOST1> kill -9 pid
    

    where pid specifies the process ID of the Managed Server. You can identify the pid in the node by running this command:

    OIMHOST1> ps -ef | grep WLS_OIM1
    
  2. Watch the Node Manager console. You should see a message indicating that WLS_OIM1's floating IP has been disabled.

  3. Wait for Node Manager to try a second restart of WLS_OIM1. It waits for a fence period of 30 seconds before trying this restart.

  4. Once Node Manager restarts the server, stop it again. Node Manager should now log a message indicating that the server will not be restarted again locally.

From OIMHOST2:

  1. Watch the local Node Manager console. About 30 seconds after the last restart attempt for WLS_OIM1 on OIMHOST1, Node Manager on OIMHOST2 should report that the floating IP for WLS_OIM1 is being brought up and that the server is being restarted on this node.

  2. Access the soa-infra console at the same IP address.

Follow the steps above to test server migration for the WLS_OIM2, WLS_SOA1, and WLS_SOA2 Managed Servers.

Table 5-2 shows the Managed Servers and the hosts they migrate to in case of a failure.

Table 5-2 WLS_OIM1, WLS_OIM2, WLS_SOA1, WLS_SOA2 Server Migration

Managed Server    Migrated From    Migrated To
WLS_OIM1          OIMHOST1         OIMHOST2
WLS_OIM2          OIMHOST2         OIMHOST1
WLS_SOA1          OIMHOST1         OIMHOST2
WLS_SOA2          OIMHOST2         OIMHOST1

Verification From the Administration Console

To verify migration in the Administration Console:

  1. Log into the Administration Console at http://oimhost1.example.com:7001/console using administrator credentials.

  2. Click Domain on the left console.

  3. Click the Monitoring tab and then the Migration sub tab.

    The Migration Status table provides information on the status of the migration.

Note:

After a server migrates, to fail it back to its original node/machine, stop the Managed Server in the Administration Console then start it again. The appropriate Node Manager starts the Managed Server on the machine it was originally assigned to.

5.4.13 Configuring a Default Persistence Store for Transaction Recovery

Each Managed Server has a transaction log that stores information about in-flight transactions coordinated by the Managed Server that may not complete. WebLogic Server uses the transaction log to recover from system and network failures. To leverage the Transaction Recovery Service migration capability, store the transaction log in a location that all Managed Servers in a cluster can access. Without shared storage, other servers in the cluster cannot run transaction recovery in the event of a server failure, so the operation may need to be retried.

Note:

Oracle recommends a location on a Network Attached Storage (NAS) device or Storage Area Network (SAN).

To set the location for default persistence stores for the OIM and SOA Servers:

  1. Log into the Administration Console at http://oimhost1.example.com:7001/console using administrator credentials.

  2. In the Domain Structure window, expand the Environment node and then click the Servers node. The Summary of Servers page opens.

  3. Select the name of the server (represented as a hyperlink) in the Name column of the table. The Settings page for the server opens to the Configuration tab.

  4. Select the Services subtab of the Configuration tab (not the Services top-level tab).

  5. In the Default Store section, enter the path to the folder where the default persistent stores store their data files. The directory structure of the path should be:

    • For the WLS_SOA1 and WLS_SOA2 servers, use a directory structure similar to:

      ORACLE_BASE/admin/domainName/soaClusterName/tlogs
      
    • For the WLS_OIM1 and WLS_OIM2 servers, use a directory structure similar to:

      ORACLE_BASE/admin/domainName/oimClusterName/tlogs
      
  6. Click Save.

    Note:

    To enable migration of Transaction Recovery Service, specify a location on a persistent storage solution that is available to the Managed Servers in the cluster. WLS_SOA1, WLS_SOA2, WLS_OIM1, and WLS_OIM2 must be able to access this directory.
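
For example, creating the shared transaction log directories might look like the following; the paths follow the examples above, and ORACLE_BASE is assumed to be on shared storage visible to both hosts:

# Create tlog directories on shared storage (run once from either host)
mkdir -p ORACLE_BASE/admin/domainName/soaClusterName/tlogs
mkdir -p ORACLE_BASE/admin/domainName/oimClusterName/tlogs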

5.4.14 Install Oracle HTTP Server on WEBHOST1 and WEBHOST2

Install Oracle HTTP Server on WEBHOST1 and WEBHOST2.

5.4.15 Configuring Oracle Identity Manager to Work with the Web Tier

This section describes how to configure OIM to work with the Oracle Web Tier.

5.4.15.1 Prerequisites to Configure OIM to Work with the Web Tier

Verify that the following tasks have been performed:

  1. Oracle Web Tier has been installed on WEBHOST1 and WEBHOST2.

  2. OIM is installed and configured on OIMHOST1 and OIMHOST2.

  3. The load balancer has been configured with a virtual hostname (sso.example.com) pointing to the web servers on WEBHOST1 and WEBHOST2. sso.example.com is customer facing and the main point of entry; it is typically SSL terminated.

  4. The load balancer has been configured with a virtual hostname (oiminternal.example.com) pointing to web servers WEBHOST1 and WEBHOST2. oiminternal.example.com is for internal callbacks and is not customer facing.

5.4.15.2 Configuring Oracle HTTP Servers to Front End OIM, SOA, and BI Publisher Managed Servers

  1. On each of the web servers on WEBHOST1 and WEBHOST2, create a file named oim.conf in the directory ORACLE_INSTANCE/config/OHS/COMPONENT/moduleconf.

    This file must contain the following information:

    # oim admin console (idmshell based)
    <Location /admin>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    # oim self and advanced admin webapp consoles (canonic webapp)
    <Location /oim>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    <Location /identity>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    <Location /sysadmin>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    # SOA callback webservice for SOD - provide the SOA Managed Server ports
    <Location /sodcheck>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster soavhn1.example.com:8001,soavhn2.example.com:8001
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    # Callback webservice for SOA. SOA calls this when a request is approved/rejected.
    # Provide the SOA Managed Server port
    <Location /workflowservice>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    # xlWebApp - legacy 9.x webapp (struts based)
    <Location /xlWebApp>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    # Nexaweb WebApp - used for workflow designer and DM
    <Location /Nexaweb>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    # Used for FA callback service.
    <Location /callbackResponseService>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    # spml xsd profile
    <Location /spml-xsd>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    <Location /HTTPClnt>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    <Location /reqsvc>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    <Location /integration>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster soavhn1.example.com:8001,soavhn2.example.com:8001
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    <Location /provisioning-callback>
      SetHandler weblogic-handler
      WLCookieName oimjsessionid
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    <Location /xmlpserver>
      SetHandler weblogic-handler
      WLCookieName JSESSIONID
      WebLogicCluster oimvhn1.example.com:9704,oimvhn2.example.com:9704
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>

    <Location /CertificationCallbackService>
      SetHandler weblogic-handler
      WLCookieName JSESSIONID
      WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
      WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
      WLProxySSL ON
      WLProxySSLPassThrough ON
    </Location>
    
  2. Create a file called virtual_hosts.conf in ORACLE_INSTANCE/config/OHS/COMPONENT/moduleconf. The file must contain the following information:

    Note:

    COMPONENT is typically ohs1 or ohs2. However, the name depends on choices you made during OHS installation.

    NameVirtualHost *:7777
    <VirtualHost *:7777>
      ServerName http://sso.example.com:7777
      RewriteEngine On
      RewriteOptions inherit
      UseCanonicalName On
    </VirtualHost>

    <VirtualHost *:7777>
      ServerName http://oiminternal.example.com:80
      RewriteEngine On
      RewriteOptions inherit
      UseCanonicalName On
    </VirtualHost>
    
  3. Save the file on both WEBHOST1 and WEBHOST2.

  4. Stop and start the Oracle HTTP Server instances on both WEBHOST1 and WEBHOST2.
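    For example, on each web host (opmnctl is in the bin directory of the OHS instance):

    ORACLE_INSTANCE/bin/opmnctl stopall
    ORACLE_INSTANCE/bin/opmnctl startall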

5.4.16 Validate the Oracle HTTP Server Configuration

To validate that Oracle HTTP Server is configured properly, follow these steps:

  1. In a web browser, enter the following URL for the Oracle Identity Manager Console:

    http://sso.example.com:7777/identity
    

    The Oracle Identity Manager Console login page should display.
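    If a browser is not available on the host, a quick check with curl (assuming curl is installed) confirms that the web tier routes the request:

    curl -I http://sso.example.com:7777/identity

    An HTTP 200 response, or a redirect to the login page, indicates that Oracle HTTP Server is proxying requests to the OIM Managed Servers.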

  2. Log into the Oracle Identity Manager Console using the credentials for the xelsysadm user.

5.4.17 Oracle Identity Manager Failover and Expected Behavior

In a high availability environment, you configure Node Manager to monitor Oracle WebLogic Servers. In case of failure, Node Manager restarts the WebLogic Server.

A hardware load balancer load balances requests between multiple OIM instances. If one OIM Managed Server fails, the load balancer detects the failure and routes requests to surviving instances.

In a high availability environment, state and configuration information is stored in a database that all cluster members share. Surviving OIM instances continue to seamlessly process any unfinished transactions started on the failed instance because state information is in the shared database, available to all cluster members.

When an OIM instance fails, its database and LDAP connections are released. Surviving instances in the active-active deployment make their own connections and continue processing the unfinished transactions started on the failed instance.

When you deploy OIM in a high availability configuration:

  • You can deploy OIM on an Oracle RAC database, but Oracle RAC failover is not transparent for OIM in this release. If Oracle RAC failover occurs, end users may have to resubmit their requests.

  • Oracle Identity Manager always requires the availability of at least one node in the SOA cluster. If the SOA cluster is not available, end user requests fail. OIM does not retry a failed SOA call, so the end user must resubmit the request when a SOA call fails.

5.4.18 Scaling Up Oracle Identity Manager

You can scale out or scale up the OIM high availability topology. When you scale up the topology, you add new Managed Servers to nodes that are already running one or more Managed Servers. When you scale out the topology, you add new Managed Servers to new nodes. See Section 5.4.19, "Scaling Out Oracle Identity Manager" to scale out.

In a scale-up scenario, you already have a node that runs a Managed Server configured with SOA and BI components. The node contains:

  • A Middleware home

  • An Oracle home (SOA)

  • An Oracle home (BIP)

  • A domain directory for existing Managed Servers

You can use the existing installations (Middleware home and domain directories) to create new WLS_OIM, WLS_SOA, and WLS_BIP Managed Servers. You do not need to install OIM, SOA, and BIP binaries in a new location or run pack and unpack.

This procedure describes how to clone OIM, SOA, and BIP Managed Servers. You may clone two or all three of these component types, as long as one of them is OIM.

Note the following:

  • This procedure refers to WLS_OIM, WLS_SOA, and WLS_BIP. However, you may not be scaling up all three components. For each step, choose the component(s) that you are scaling up in your environment. Also, some steps do not apply to all components.

  • The persistent store's shared storage directory for JMS Servers must exist before you start the Managed Server or the start operation fails.

  • Each time you specify the persistent store's path, it must be a directory on shared storage.

To scale up the topology:

  1. In the Administration Console, clone WLS_OIM1/WLS_SOA1/WLS_BIP1. The Managed Server that you clone should be one that already exists on the node where you want to run the new Managed Server.

    1. Select Environment -> Servers from the Administration Console.

    2. Select the Managed Server(s) that you want to clone.

    3. Select Clone.

    4. Name the new Managed Server WLS_OIMn/WLS_SOAn/WLS_BIPn, where n is a number that identifies the new Managed Server.

    The rest of the steps assume that you are adding a new Managed Server to OIMHOST1, which is already running WLS_OIM1, WLS_SOA1, and WLS_BIP1.

  2. For the listen address, assign the hostname or IP for the new Managed Server(s). If you plan to use server migration, use the VIP (floating IP) to enable Managed Server(s) to move to another node. Use a VIP different from the VIP that the existing Managed Server uses.

  3. Create JMS Servers for OIM/SOA/BIP, BPM, UMS, JRFWSAsync, and PS6SOA on the new Managed Server. (A WLST sketch of the store-and-server pattern follows this list.)

    1. In the Administration Console, create a new persistent store for the OIM/SOA/BIP JMS Server(s) and name it, for example, SOAJMSFileStore_n or BipJmsStore_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      
    2. Create a new JMS Server for OIM/SOA/BIP, for example, SOAJMSServer_n or BipJmsServer_n. Use the persistent store you created in the previous step (for example, SOAJMSFileStore_n) for this JMS Server. Target the new JMS Server to the new Managed Server(s).

    3. Create a persistence store for the new UMSJMSServer(s), for example, UMSJMSFileStore_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms/UMSJMSFileStore_n
      
    4. Create a new JMS Server for UMS, for example, UMSJMSServer_n. Target it to the new Managed Server (WLS_SOAn).

    5. Create a persistence store for the new BPMJMSServer(s), for example, BPMJMSFileStore_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms/BPMJMSFileStore_n
      
    6. Create a new JMS Server for BPM, for example, BPMJMSServer_n. Target it to the new Managed Server (WLS_SOAn).

    7. Create a persistence store for the new BipJmsServer(s), for example, BipJmsStore_n. Specify the store's path, a directory on shared storage:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/BipJmsStore_n
      
    8. Create a new persistence store for the new JRFWSAsyncJMSServer, for example, JRFWSAsyncJMSFileStore_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms/JRFWSAsyncJMSFileStore_n
      
    9. Create a JMS Server for JRFWSAsync, for example, JRFWSAsyncJMSServer_n. Use JRFWSAsyncJMSFileStore_n for this JMSServer. Target JRFWSAsyncJMSServer_n to the new Managed Server (WLS_OIMn).

      Note:

      You can also assign SOAJMSFileStore_n as store for the new JRFWSAsync JMS Servers. For clarity and isolation, individual persistent stores are used in the following steps.
    10. Create a persistence store for the new PS6SOAJMSServer, for example, PS6SOAJMSFileStore_auto_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms/PS6SOAJMSFileStore_auto_n
      
    11. Create a JMS Server for PS6SOA, for example, PS6SOAJMSServer_auto_n. Use PS6SOAJMSFileStore_auto_n for this JMSServer. Target PS6SOAJMSServer_auto_n to the new Managed Server (WLS_SOAn).

      Note:

      You can also assign SOAJMSFileStore_n as store for the new PS6 JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.
    12. Update SubDeployment targets for the SOA JMS Module to include the new SOA JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click SOAJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the SOAJMSServerXXXXXX subdeployment and add SOAJMSServer_n to it. Click Save.

      Note:

      A subdeployment module name is a random name in the form COMPONENTJMSServerXXXXXX. It comes from the Configuration Wizard JMS configuration for the first two Managed Servers (WLS_COMPONENT1 and WLS_COMPONENT2).
    13. Update SubDeployment targets for UMSJMSSystemResource to include the new UMS JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click UMSJMSSystemResource (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the UMSJMSServerXXXXXX subdeployment and add UMSJMSServer_n to it. Click Save.

    14. Update SubDeployment targets for OIMJMSModule to include the new OIM JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click OIMJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the OIMJMSServerXXXXXX subdeployment and add OIMJMSServer_n to it. Click Save.

    15. Update SubDeployment targets for the JRFWSAsyncJmsModule to include the new JRFWSAsync JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click JRFWSAsyncJmsModule (hyperlink in the Names column of the table). In the Settings page, click the SubDeployments tab. Click the JRFWSAsyncJMSServerXXXXXX subdeployment and add JRFWSAsyncJMSServer_n to this subdeployment. Click Save.

    16. Update the SubDeployment targets for PS6SOAJMSModule to include the new PS6SOA JMS Server. Expand the Services node and the Messaging node. Choose JMS Modules from the Domain Structure window. Click PS6SOAJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. Click the PS6SOAJMSServerXXXXXX subdeployment. Add PS6SOAJMSServer_auto_n to this subdeployment. Click Save.

    17. Update SubDeployment targets for the BPM JMS Module to include the new BPM JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click BPMJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the BPMJMSServerXXXXXX subdeployment and add BPMJMSServer_n to it. Click Save.
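    Each store-and-server pair above follows the same pattern. The following is a minimal WLST sketch of that pattern; the store name, directory path, and target server WLS_SOA3 are illustrative placeholders, not values prescribed by this guide.

      connect('weblogic', 'admin_password', 't3://oimhost1.example.com:7001')
      edit()
      startEdit()
      cd('/')
      # Create a persistent file store on shared storage and target the new server.
      store = cmo.createFileStore('UMSJMSFileStore_3')
      store.setDirectory('/u01/app/oracle/admin/domain_name/cluster_name/jms/UMSJMSFileStore_3')
      store.addTarget(getMBean('/Servers/WLS_SOA3'))
      # Create the JMS server, attach the store, and target the same Managed Server.
      jmsServer = cmo.createJMSServer('UMSJMSServer_3')
      jmsServer.setPersistentStore(store)
      jmsServer.addTarget(getMBean('/Servers/WLS_SOA3'))
      save()
      activate()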

  4. For SOA Managed Servers only, configure Oracle Coherence for deploying composites.

    Note:

    Replace the localhost field only for the new server, using the new server's listen address. For example:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn
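    In full, the Coherence arguments in the new server's Server Start arguments typically resemble the following; the host names here are illustrative, and the wka values should match the well-known addresses the cluster already uses:

    -Dtangosol.coherence.wka1=SOAHOST1VHN1 -Dtangosol.coherence.wka2=SOAHOST2VHN1
    -Dtangosol.coherence.localhost=SOAHOST1VHNn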

  5. Configure the transaction persistent store for the new server in a shared storage location visible from other nodes.

    From the Administration Console, select Server_name > Services tab. Under Default Store, in Directory, enter the path to the default persistent store.

  6. Disable hostname verification for the new Managed Server (required before starting or verifying a WLS_SOAn Managed Server). You can re-enable it after you configure server certificates for Administration Server / Node Manager communication in SOAHOSTn. If the source server (from which you cloned the new Managed Server) had hostname verification disabled, these steps are not required; hostname verification settings propagate to a cloned server.

    To disable hostname verification (a WLST equivalent is sketched after these steps):

    1. In the Administration Console, expand the Environment node in the Domain Structure window.

    2. Click Servers. Select WLS_SOAn in the Names column of the table.

    3. Click the SSL tab. Click Advanced.

    4. Set Hostname Verification to None. Click Save.
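    The equivalent WLST sketch, with illustrative connection details and server name:

      connect('weblogic', 'admin_password', 't3://oimhost1.example.com:7001')
      edit()
      startEdit()
      # SSL settings are a child MBean of the server, keyed by the server name.
      cd('/Servers/WLS_SOA3/SSL/WLS_SOA3')
      cmo.setHostnameVerificationIgnored(true)
      save()
      activate()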

  7. Start and test the new Managed Server from the Administration Console.

    1. Shut down the existing Managed Servers in the cluster.

    2. Ensure that the newly created Managed Server is up.

    3. Access the application on the newly created Managed Server to verify that it works. A login page opens for OIM and BI Publisher. For SOA, an HTTP basic authentication prompt opens.

    Table 5-3 Managed Server Test URLs

    Component       Managed Server Test URL

    SOA             http://vip:port/soa-infra
    OIM             http://vip:port/identity
    BI Publisher    http://vip:port/xmlpserver


  8. In the Administration Console, select Services, then Foreign JNDI provider. Confirm that ForeignJNDIProvider-SOA targets cluster:t3://soa_cluster, not an individual Managed Server. You target the cluster so that new Managed Servers do not require configuration. If ForeignJNDIProvider-SOA does not target the cluster, retarget it to the cluster. (A read-only WLST check is sketched below.)
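    A minimal read-only WLST sketch of this check; the provider name comes from this guide, while the credentials and URL are illustrative:

      connect('weblogic', 'admin_password', 't3://oimhost1.example.com:7001')
      # Print the current targets of ForeignJNDIProvider-SOA; expect the SOA cluster name.
      provider = getMBean('/ForeignJNDIProviders/ForeignJNDIProvider-SOA')
      for target in provider.getTargets():
          print target.getName()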

  9. Configure Server Migration for the new Managed Server.

    Note:

    For scale up, the node must have a Node Manager, an environment configured for server migration, and the floating IP for the new Managed Server(s).

    To configure server migration (a WLST sketch follows these steps):

    1. Log into the Administration Console.

    2. In the left pane, expand Environment and select Servers.

    3. Select the server (hyperlink) that you want to configure migration for.

    4. Click the Migration tab.

    5. In the Available field, in the Migration Configuration section, select machines to enable migration for and click the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example:

      For new Managed Servers on SOAHOST1, which is already running WLS_SOA1, select SOAHOST2.

      For new Managed Servers on SOAHOST2, which is already running WLS_SOA2, select SOAHOST1.

      Verify that the appropriate resources are available to run Managed Servers concurrently during migration.

    6. Select the Automatic Server Migration Enabled option to enable Node Manager to start a failed server on the target node automatically.

    7. Click Save.

    8. Restart the Administration Server, Managed Servers, and Node Manager.

    9. Repeat these steps to configure server migration for the newly created WLS_OIMn Managed Server.
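    As a scripted alternative, this minimal WLST sketch applies the same migration settings; the server and machine names are illustrative placeholders.

      connect('weblogic', 'admin_password', 't3://oimhost1.example.com:7001')
      edit()
      startEdit()
      server = getMBean('/Servers/WLS_SOA3')
      # Candidate machines mirror the migration targets of the existing servers on this node.
      import jarray
      from weblogic.management.configuration import MachineMBean
      server.setCandidateMachines(jarray.array([getMBean('/Machines/OIMHOST2')], MachineMBean))
      # Allow Node Manager to start the failed server on the target node automatically.
      server.setAutoMigrationEnabled(true)
      save()
      activate()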

  10. To test server migration for this new server, follow these steps from the node where you added the new server:

    1. Stop the Managed Server.

      Run kill -9 pid on the PID of the Managed Server. To identify the PID, enter, for example, ps -ef | grep WLS_SOAn. Substitute BIP for SOA if necessary.

    2. Watch Node Manager Console for a message indicating that the Managed Server floating IP is disabled.

    3. Wait for Node Manager to try a second restart of the Managed Server. Node Manager waits for 30 seconds before trying this restart.

    4. After Node Manager restarts the server, stop it again. Node Manager logs a message indicating that the server will not restart again locally.

  11. Edit the OHS configuration file to add the new Managed Server(s). See Section 5.4.19.1, "Configuring Oracle HTTP Server to Recognize New Managed Servers."

5.4.19 Scaling Out Oracle Identity Manager

When you scale out the topology, you add new Managed Servers configured with software to new nodes.

Note:

Steps in this procedure refer to WLS_OIM, WLS_SOA, and WLS_BIP. However, you may not be scaling out all three components. For each step, choose the component(s) that you are scaling out in your environment. Some steps do not apply to all components.

Before you scale out, check that you meet these requirements:

  • Existing nodes running Managed Servers configured with OIM, SOA, and/or BIP in the topology.

  • The new node can access existing home directories for WebLogic Server, SOA, and BIP. (Use the existing installations in shared storage to create the new Managed Servers. You do not need to install WebLogic Server or component binaries in a new location, but you must run pack and unpack to bootstrap the domain configuration on the new node.)

    Note:

    If there is no existing installation in shared storage, you must install WebLogic Server and SOA in the new nodes.

    Note:

    When multiple servers in different nodes share ORACLE_HOME or WL_HOME, Oracle recommends keeping the Oracle Inventory and Middleware home list in those nodes updated for consistency in the installations and application of patches. To update the oraInventory in a node and "attach" an installation in shared storage to it, use ORACLE_HOME/oui/bin/attachHome.sh. To update the Middleware home list to add or remove a WL_HOME, edit the user_home/bea/beahomelist file.

To scale out the topology:

  1. On the new node, mount the existing Middleware home. Include the SOA and/or BIP installation and the domain directory, and ensure the new node has access to this directory, just like the rest of the nodes in the domain.

  2. Attach ORACLE_HOME in shared storage to the local Oracle Inventory. For example:

    cd /u01/app/oracle/soa/
    ./attachHome.sh -jreLoc /u01/app/JRE-JDK_version
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists on the node) the MW_HOME/bea/beahomelist file and add /u01/app/oracle to it.
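    For example, assuming the Middleware home is /u01/app/oracle (substitute the actual path to the beahomelist file):

    echo /u01/app/oracle >> MW_HOME/bea/beahomelist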

  3. Log in to the Administration Console.

  4. Create a new machine for the new node. Add the machine to the domain.

  5. Update the machine's Node Manager's address to map the IP of the node that is being used for scale out.

  6. Clone WLS_OIM1/WLS_SOA1/WLS_BIP1. The Managed Server that you clone should be one that already exists on the node where you want to run the new Managed Server.

    To clone OIM, SOA, and/or BIP:

    1. Select Environment -> Servers from the Administration Console.

    2. Select the Managed Server(s) that you want to clone.

    3. Select Clone.

    4. Name the new Managed Server WLS_OIMn/WLS_SOAn/WLS_BIPn, where n is a number that identifies the new Managed Server.

    Note:

    These steps assume that you are adding a new server to node n, where no Managed Server was running previously.
  7. Assign the hostname or IP to use for the new Managed Server for the listen address of the Managed Server.

    If you plan to use server migration for this server (which Oracle recommends), this should be the server VIP (floating IP). This VIP should be different from the one used for the existing Managed Server.

  8. Create JMS servers for SOA, OIM (if applicable), UMS, BPM, JRFWSAsync, and PS6SOA on the new Managed Server.

    1. In the Administration Console, create a new persistent store for the OIM/SOA/BIP JMS Server(s) and name it, for example, SOAJMSFileStore_n or BipJmsStore_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      
    2. Create a new JMS Server for OIM/SOA/BIP, for example, SOAJMSServer_n or BipJmsServer_n. Use the persistent store you created in the previous step (for example, SOAJMSFileStore_n) for this JMS Server. Target the new JMS Server to the new Managed Server(s).

    3. Create a persistence store for the new UMSJMSServer(s), for example, UMSJMSFileStore_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms/UMSJMSFileStore_n
      
    4. Create a new JMS Server for UMS, for example, UMSJMSServer_n. Target it to the new Managed Server (WLS_SOAn).

    5. Create a persistence store for the new BPMJMSServer(s), for example, BPMJMSFileStore_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms/BPMJMSFileStore_n
      
    6. Create a new JMS Server for BPM, for example, BPMJMSServer_n. Target it to the new Managed Server (WLS_SOAn).

    7. Create a persistence store for the new BipJmsServer(s), for example, BipJmsStore_n. Specify the store's path, a directory on shared storage:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/BipJmsStore_n
      
    8. Create a new persistence store for the new JRFWSAsyncJMSServer, for example, JRFWSAsyncJMSFileStore_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms/JRFWSAsyncJMSFileStore_n
      
    9. Create a JMS Server for JRFWSAsync, for example, JRFWSAsyncJMSServer_n. Use JRFWSAsyncJMSFileStore_n for this JMSServer. Target JRFWSAsyncJMSServer_n to the new Managed Server (WLS_OIMn).

      Note:

      You can also assign SOAJMSFileStore_n as store for the new JRFWSAsync JMS Servers. For clarity and isolation, the following steps use individual persistent stores.
    10. Create a persistence store for the new PS6SOAJMSServer, for example, PS6SOAJMSFileStore_auto_n. Specify the store's path, a directory on shared storage.

      ORACLE_BASE/admin/domain_name/cluster_name/jms/PS6SOAJMSFileStore_auto_n
      
    11. Create a JMS Server for PS6SOA, for example, PS6SOAJMSServer_auto_n. Use PS6SOAJMSFileStore_auto_n for this JMSServer. Target PS6SOAJMSServer_auto_n to the new Managed Server (WLS_SOAn).

      Note:

      You can also assign SOAJMSFileStore_n as store for the new PS6 JMS Servers. For clarity and isolation, the following steps use individual persistent stores.
    12. Update SubDeployment targets for the SOA JMS Module to include the new SOA JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click SOAJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the SOAJMSServerXXXXXX subdeployment and add SOAJMSServer_n to it. Click Save.

      Note:

      A subdeployment module name is a random name in the form COMPONENTJMSServerXXXXXX. It comes from the Configuration Wizard JMS configuration for the first two Managed Servers (WLS_COMPONENT1 and WLS_COMPONENT2).
    13. Update SubDeployment targets for UMSJMSSystemResource to include the new UMS JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click UMSJMSSystemResource (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the UMSJMSServerXXXXXX subdeployment and add UMSJMSServer_n to it. Click Save.

    14. Update SubDeployment targets for OIMJMSModule to include the new OIM JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click OIMJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the OIMJMSServerXXXXXX subdeployment and add OIMJMSServer_n to it. Click Save.

    15. Update SubDeployment targets for the JRFWSAsyncJmsModule to include the new JRFWSAsync JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click JRFWSAsyncJmsModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. Click the JRFWSAsyncJMSServerXXXXXX subdeployment and add JRFWSAsyncJMSServer_n to this subdeployment. Click Save.

    16. Update the SubDeployment targets for PS6SOAJMSModule to include the new PS6SOA JMS Server. Expand the Services node and the Messaging node. Choose JMS Modules from the Domain Structure window. Click PS6SOAJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. Click the PS6SOAJMSServerXXXXXX subdeployment. Add PS6SOAJMSServer_auto_n to this subdeployment. Click Save.

    17. Update SubDeployment targets for the BPM JMS Module to include the new BPM JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click BPMJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the BPMJMSServerXXXXXX subdeployment and add BPMJMSServer_n to it. Click Save.

  9. Run the pack command on SOAHOST1 and/or BIPHOST1 to create a template pack. For example, for SOA:

    cd ORACLE_COMMON_HOME/common/bin
    ./pack.sh -managed=true \
      -domain=MW_HOME/user_projects/domains/soadomain \
      -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
    

    Run the following command on HOST1 to copy the template file created to HOSTn:

    scp soadomaintemplateScale.jar oracle@SOAHOSTN:ORACLE_BASE/product/fmw/soa/common/bin
    

    Run the unpack command on HOSTn to unpack the template in the Managed Server domain directory. For example, for SOA:

    cd ORACLE_BASE/product/fmw/soa/common/bin
    ./unpack.sh \
      -domain=ORACLE_BASE/product/fmw/user_projects/domains/soadomain \
      -template=soadomaintemplateScale.jar
    
    
  10. Configure Oracle Coherence for deploying composites.

    Note:

    This step is required for SOA Managed Servers only, not OIM or BIP Managed Servers.

    Note:

    Change the localhost field only for the new server. Replace localhost with the listen address of the new server, for example:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn

  11. Configure the transaction persistent store for the new server. This should be a shared storage location visible from other nodes.

    From the Administration Console, select Server_name > Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  12. Disable hostname verification for the new Managed Server; you must do this before starting or verifying the Managed Server. You can re-enable it after you configure server certificates for communication between the Administration Server and Node Manager. If the source Managed Server (the server from which you cloned the new one) already had hostname verification disabled, these steps are not required. Hostname verification settings propagate to cloned servers.

    To disable hostname verification:

    1. Open the Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

    4. Select WLS_SOAn in the Names column of the table. The Settings page for the server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  13. Start Node Manager on the new node. To start the Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the hostname of the new node as a parameter as follows:

    WL_HOME/server/bin/startNodeManager new_node_ip
    
  14. Start and test the new Managed Server from the Administration Console.

    1. Shut down the existing Managed Server in the cluster.

    2. Ensure that the newly created Managed Server is up.

    3. Access the application on the newly created Managed Server to verify that it works. A login page appears for OIM and BI Publisher. For SOA, an HTTP basic authentication prompt opens.

    Table 5-4 Managed Server Test URLs

    Component       Managed Server Test URL

    SOA             http://vip:port/soa-infra
    OIM             http://vip:port/identity
    BI Publisher    http://vip:port/xmlpserver


  15. Configure Server Migration for the new Managed Server.

    Note:

    Because this new node uses an existing shared storage installation, it already has a Node Manager and an environment configured for server migration (netmask, interface, and wlsifconfig script superuser privileges). The floating IP for the new Managed Server is already present on the new node.

    To configure server migration:

    1. Log into the Administration Console.

    2. In the left pane, expand Environment and select Servers.

    3. Select the server (represented as a hyperlink) for which you want to configure migration. The Settings page for that server appears.

    4. Click the Migration tab.

    5. In the Available field, in the Migration Configuration section, select machines to which to enable migration and click the right arrow.

      Note:

      Specify the least-loaded machine as the new server's migration target. Complete the required capacity planning so that this node has the available resources to sustain an additional Managed Server.
    6. Select the Automatic Server Migration Enabled option. This enables the Node Manager to start a failed server on the target node automatically.

    7. Click Save.

    8. Restart the Administration Server, Managed Servers, and Node Manager.

  16. Test server migration for this new server from the node where you added it:

    1. Stop the Managed Server.

      Run kill -9 pid on the PID of the Managed Server. Identify the PID of the Managed Server using, for example, ps -ef | grep WLS_SOAn.

    2. Watch the Node Manager Console for a message indicating that the floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of the new Managed Server. Node Manager waits for a fence period of 30 seconds before restarting.

    4. After Node Manager restarts the server, stop it again. Node Manager should log a message that the server will not restart again locally.

  17. Edit the OHS configuration file to add the new Managed Server(s). See Section 5.4.19.1, "Configuring Oracle HTTP Server to Recognize New Managed Servers."

5.4.19.1 Configuring Oracle HTTP Server to Recognize New Managed Servers

To complete scale up/scale out, you must edit the oim.conf file to add the new Managed Servers, then restart the Oracle HTTP Servers.

  1. Go to the directory ORACLE_INSTANCE/config/OHS/COMPONENT/moduleconf.

  2. Edit oim.conf to add the new Managed Server to the WebLogicCluster directive. You must take this step for each URL defined for OIM, SOA, or BI Publisher. Each product must have a separate <Location> section, and the ports must refer to the Managed Server ports. For example:

    <Location /oim>
        SetHandler weblogic-handler
        WebLogicCluster host1.example.com:14200,host2.example.com:14200
    </Location>
    
    
  3. Restart Oracle HTTP Server on WEBHOST1 and WEBHOST2:

    WEBHOST1>opmnctl stopall
    WEBHOST1>opmnctl startall
    
    WEBHOST2>opmnctl stopall
    WEBHOST2>opmnctl startall
    

Note:

If you are not using a shared storage system (Oracle recommends shared storage), copy oim.conf to the other Oracle HTTP Server hosts.

Note:

See the General Parameters for WebLogic Server Plug-Ins in Oracle Fusion Middleware Using Web Server 1.1 Plug-Ins with Oracle WebLogic Server for additional parameters that can facilitate deployments.