This chapter describes how to design and deploy a high availability environment for Oracle Identity Manager.
Oracle Identity Manager (OIM) is a user provisioning and administration solution that automates the process of adding, updating, and deleting user accounts from applications and directories. It also improves regulatory compliance by providing granular reports that attest to who has access to what. OIM is available as a stand-alone product or as part of Oracle Identity and Access Management Suite.
For details about OIM, see the Oracle Fusion Middleware Administrator's Guide for Oracle Identity Manager.
This section includes the following topics:
Section 5.1, "Oracle Identity Manager Component Architecture"
Section 5.2, "Oracle Identity Manager High Availability Concepts"
Section 5.3, "High Availability Directory Structure Prerequisites"
Section 5.4, "Oracle Identity Manager High Availability Configuration Steps"
Figure 5-1 shows the Oracle Identity Manager architecture:
Figure 5-1 Oracle Identity Manager Component Architecture
Oracle Identity Manager Server is Oracle's self-contained, standalone identity management solution. It provides User Administration, Workflow and Policy, Password Management, Audit and Compliance Management, User Provisioning and Organization and Role Management functionalities.
Oracle Identity Manager (OIM) is a standard Java EE application that is deployed on WebLogic Server and uses a database to store runtime and configuration data. The MDS schema contains configuration information; the runtime and user information is stored in the OIM schema.
OIM connects to the SOA Managed Servers over RMI to invoke SOA EJBs.
OIM uses the human workflow module of Oracle SOA Suite to manage its request workflow. OIM connects to SOA using the T3 URL for the SOA server, which is the front-end URL for SOA. Oracle recommends using the load balancer or web server URL for clustered SOA servers. When the workflow completes, SOA calls back OIM web services using OIMFrontEndURL. Oracle SOA is deployed along with OIM.
Several OIM modules use JMS queues. Each queue is processed by a separate Message Driven Bean (MDB), which is also part of the OIM application. Message producers are also part of the OIM application.
OIM uses embedded Oracle Entitlements Server, which is also part of the OIM engine. Oracle Entitlements Server (OES) is used for authorization checks inside OIM. For example, one of the policy constraints determines that only users with certain roles can create users. This is defined using the OIM user interface.
OIM uses a Quartz based scheduler for scheduled activities. Various scheduled activities occur in the background, such as disabling users after their end date.
You deploy and configure Oracle BI Publisher as part of the OIM domain, and BI Publisher is configured to use the same OIM database schema for reporting purposes. Oracle recommends that you locate BI Publisher in the same domain as OIM to facilitate integration; integration then consists of integrating a static URL. There is no interaction between BI Publisher and OIM runtime components.
When you enable LDAPSync to communicate directly with external Directory Servers such as Oracle Internet Directory, ODSEE, and Microsoft Active Directory, support for high availability/failover features requires that you configure the Identity Virtualization Library (libOVD).
To configure libOVD, use the WLST command addLDAPHost. To manage libOVD, see "Managing Identity Virtualization Library (libOVD) Adapters" in Oracle Fusion Middleware Administrator's Guide for Oracle Identity Manager for a list of WLST commands.
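As a hedged sketch only (the exact addLDAPHost signature can vary by release, and the connection URL, credentials, adapter name, host, and port below are all hypothetical placeholders), registering a second LDAP host for failover might look like this. The snippet writes the WLST calls to a script file that you would run with wlst.sh:

```shell
# Write a small WLST script that registers a second LDAP host with a
# libOVD adapter for failover. All values below are example placeholders.
cat > /tmp/add_ldap_hosts.py <<'EOF'
# Run with: MW_HOME/oracle_common/common/bin/wlst.sh /tmp/add_ldap_hosts.py
connect('weblogic', 'password', 't3://oimhost1.example.com:7001')
addLDAPHost(adapterName='DirectoryAdapter', host='ldaphost2.example.com', port=389)
EOF
cat /tmp/add_ldap_hosts.py
```

Keeping the calls in a script makes it easy to repeat the registration consistently on each host in the topology.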
Oracle Identity Manager deploys on WebLogic Server as a no-stage application. The OIM server initializes when the WebLogic Server it is deployed on starts up. As part of application initialization, the quartz-based scheduler is also started. Once initialization is done, the system is ready to receive requests from clients.
You must start Remote Manager and Design Console as standalone utilities separately.
Oracle Identity Manager deploys to a WebLogic Server as an externally managed application. By default, WebLogic Server starts, stops, monitors and manages other lifecycle events for the OIM application.
OIM starts after the application server components start. The OIM authenticator, which is part of the OIM component mechanism, starts before WebLogic JNDI initializes and before the application starts.
OIM uses a Quartz technology-based scheduler that starts the scheduler thread on all WebLogic Server instances. It uses the database as centralized storage for picking and running scheduled activities. If one scheduler instance picks up a job, other instances do not pick up that same job.
You can configure Node Manager to monitor the server process and restart it in case of failure.
Use Oracle Enterprise Manager Fusion Middleware Control to monitor the application and to modify its configuration.
You manage OIM lifecycle events with these command line tools and consoles:
Oracle WebLogic Scripting Tool (WLST)
WebLogic Server Administration Console
Oracle Enterprise Manager Fusion Middleware Control
Oracle WebLogic Node Manager
The OIM server configuration is stored in the MDS repository at /db/oim-config.xml. The oim-config.xml file is the main configuration file. Manage the OIM configuration using the MBean browser in Oracle Enterprise Manager Fusion Middleware Control or with the command line MDS utilities. For more information about MDS utilities, see the MDS utilities section in Developing and Customizing Applications for Oracle Identity Manager.
The installer configures JMS out of the box; all necessary JMS queues, connection pools, and data sources are configured on the WebLogic application servers. These queues are created when OIM deploys:
oimAttestationQueue
oimAuditQueue
oimDefaultQueue
oimKernelQueue
oimProcessQueue
oimReconQueue
oimSODQueue
The xlconfig.xml file stores the Design Console and Remote Manager configuration.
Oracle Identity Manager uses the Worklist and Human workflow modules of the Oracle SOA Suite for request flow management. OIM interacts with external repositories to store configuration and runtime data, and the repositories must be available during initialization and runtime. The OIM repository stores all OIM credentials. External components that OIM requires are:
WebLogic Server
Administration Server
Managed Server
Data Repositories
Configuration Repository (MDS Schema)
Runtime Repository (OIM Schema)
User Repository (OIM Schema)
SOA Repository (SOA Schema)
BI Publisher Repository (BIPLATFORM Schema)
External LDAP Stores (when using LDAP Sync)
BI Publisher
The Design Console is a tool used by the administrator for development and customization. The Design Console communicates directly with the OIM engine, so it relies on the same components that the OIM server relies on.
Remote Manager is an optional independent standalone application, which calls the custom APIs on the local system. It needs JAR files for custom APIs in its classpath.
Because OIM is a Java EE application deployed on WebLogic Server, all server log messages are written to the server log file. OIM-specific messages are written to the diagnostic log file of the WebLogic Server where the application is deployed.
WebLogic Server log files are in the directory:
DOMAIN_HOME/servers/serverName/logs
The three main log files are serverName.log, serverName.out, and serverName-diagnostic.log, where serverName is the name of the WebLogic Server. For example, if the WebLogic Server name is wls_OIM1, then the diagnostic log file name is wls_OIM1-diagnostic.log. Use Oracle Enterprise Manager Fusion Middleware Control to view log files.
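Under the naming convention above, the three log file paths for the WLS_OIM1 server resolve as follows (the DOMAIN_HOME value here is a hypothetical example path):

```shell
# Resolve the three main log files for a given Managed Server.
DOMAIN_HOME=/u01/app/oracle/admin/OIM   # example path; substitute your own
SERVER_NAME=wls_OIM1
LOG_DIR="$DOMAIN_HOME/servers/$SERVER_NAME/logs"
echo "$LOG_DIR/$SERVER_NAME.log"
echo "$LOG_DIR/$SERVER_NAME.out"
echo "$LOG_DIR/$SERVER_NAME-diagnostic.log"
# tail -f "$LOG_DIR/$SERVER_NAME-diagnostic.log"   # follow live OIM messages
```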
This section includes the following topics:
Note:
Note the following when you deploy OIM:
You can deploy OIM on an Oracle RAC database, but Oracle RAC failover is not transparent to OIM in this release. If Oracle RAC failover occurs, end users may have to resubmit their requests.
OIM always requires the availability of at least one node in the SOA cluster. If the SOA cluster is not available, end user requests fail. OIM does not retry for a failed SOA call. Therefore, the end user must retry when a SOA call fails.
Figure 5-2 shows OIM deployed in a high availability architecture.
Figure 5-2 Oracle Identity Manager High Availability Architecture
On OIMHOST1, the following installations have been performed:
An OIM instance is installed in the WLS_OIM1 Managed Server and a SOA instance is installed in the WLS_SOA1 Managed Server.
A BI Publisher instance is installed in the WLS_BI1 Manager Server.
The Oracle RAC database is configured in a GridLink data source to protect the instance from Oracle RAC node failure.
A WebLogic Server Administration Server has been installed. Under normal operations, this is the active Administration Server.
On OIMHOST2, the following installations have been performed:
An OIM instance is installed in the WLS_OIM2 Managed Server, a SOA instance is installed in the WLS_SOA2 Managed Server, and a BI Publisher instance is installed in the WLS_BI2 Managed Server.
The Oracle RAC database is configured in a GridLink data source to protect the instance from Oracle RAC node failure.
The instances in the WLS_OIM1 and WLS_OIM2 Managed Servers on OIMHOST1 and OIMHOST2 are configured as the OIM_Cluster cluster.
The instances in the WLS_SOA1 and WLS_SOA2 Managed Servers on OIMHOST1 and OIMHOST2 are configured as the SOA_Cluster cluster.
The instances in the WLS_BI1 and WLS_BI2 Managed Servers on OIMHOST1 and OIMHOST2 are configured as the BI_Cluster cluster.
An Administration Server is installed. Under normal operations, this is the passive Administration Server. You make this Administration Server active if the Administration Server on OIMHOST1 becomes unavailable.
Figure 5-2 uses these virtual host names in the OIM high availability configuration:
OIMVHN1 is the virtual hostname that maps to the listen address for the WLS_OIM1 Managed Server, and it fails over with server migration of the WLS_OIM1 Managed Server. It is enabled on the node where the WLS_OIM1 Managed Server is running (OIMHOST1 by default).
OIMVHN2 is the virtual hostname that maps to the listen address for the WLS_OIM2 Managed Server, and it fails over with server migration of the WLS_OIM2 Managed Server. It is enabled on the node where the WLS_OIM2 Managed Server is running (OIMHOST2 by default).
SOAVHN1 is the virtual hostname that is the listen address for the WLS_SOA1 Managed Server, and it fails over with server migration of the WLS_SOA1 Managed Server. It is enabled on the node where the WLS_SOA1 Managed Server is running (OIMHOST1 by default).
SOAVHN2 is the virtual hostname that is the listen address for the WLS_SOA2 Managed Server, and it fails over with server migration of the WLS_SOA2 Managed Server. It is enabled on the node where the WLS_SOA2 Managed Server is running (OIMHOST2 by default).
BIPVHN1 is the virtual hostname that is the listen address for the WLS_BI1 Managed Server, and it fails over with server migration of the WLS_BI1 Managed Server. It is enabled on the node where the WLS_BI1 Managed Server is running (OIMHOST1 by default).
BIPVHN2 is the virtual hostname that is the listen address for the WLS_BI2 Managed Server, and it fails over with server migration of the WLS_BI2 Managed Server. It is enabled on the node where the WLS_BI2 Managed Server is running (OIMHOST2 by default).
VHN refers to the virtual IP addresses for the Oracle Real Application Clusters (Oracle RAC) database hosts.
By default, WebLogic Server starts, stops, monitors, and manages lifecycle events for the application. The OIM application leverages high availability features of clusters. In case of hardware or other failures, session state is available to other cluster nodes that can resume the work of the failed node.
Use these command line tools and consoles to manage OIM lifecycle events:
WebLogic Server Administration Console
Oracle Enterprise Manager Fusion Middleware Control
Oracle WebLogic Scripting Tool (WLST)
For high availability environments, changing the configuration of one OIM instance changes the configuration of all the other instances, because all the OIM instances share the same configuration repository.
Reconciliation, a scheduled process that runs in the background, handles synchronization between LDAP and the OIM database. If an LDAP outage occurs during synchronization, the next run of the reconciliation task picks up the data that did not reach OIM.
Before you configure high availability, verify that your environment meets the requirements that Section 5.3, "High Availability Directory Structure Prerequisites" describes.
This section provides high-level instructions for setting up a high availability deployment for OIM and includes these topics:
Section 5.4.1, "Prerequisites for Configuring Oracle Identity Manager"
Section 5.4.3, "Configuring the Database Security Store for the Domain"
Section 5.4.5, "Configuring Oracle Identity Manager on OIMHOST1"
Section 5.4.6, "Validate the Oracle Identity Manager Instance on OIMHOST1"
Section 5.4.7, "Propagating Oracle Identity Manager to OIMHOST2"
Section 5.4.9, "Validate Managed Server Instances on OIMHOST2"
Section 5.4.12, "Configuring Server Migration for OIM, SOA, and BI Publisher Managed Servers"
Section 5.4.13, "Configuring a Default Persistence Store for Transaction Recovery"
Section 5.4.14, "Install Oracle HTTP Server on WEBHOST1 and WEBHOST2"
Section 5.4.15, "Configuring Oracle Identity Manager to Work with the Web Tier"
Section 5.4.16, "Validate the Oracle HTTP Server Configuration"
Section 5.4.17, "Oracle Identity Manager Failover and Expected Behavior"
Before you configure OIM for high availability, you must:
Install the Oracle Database. See "Database Requirements" in Installation Guide for Oracle Identity and Access Management.
Run the Repository Creation Utility to create the OIM schemas in a database. See Section 5.4.1.1, "Running RCU to Create the OIM Schemas in a Database."
Install the JDK on OIMHOST1 and OIMHOST2. See "Preparing for Installation" in Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.
Install WebLogic Server on OIMHOST1 and OIMHOST2. See Section 5.4.1.2, "Installing Oracle WebLogic Server."
Install the Oracle SOA Suite on OIMHOST1 and OIMHOST2. See Part II, "Installing Oracle SOA Suite on OIMHOST1 and OIMHOST2."
Install the Oracle Identity Management software on OIMHOST1 and OIMHOST2. See Section 5.4.1.4, "Installing Oracle Identity and Access Management on OIMHOST1 and OIMHOST2".
Ensure that a highly available LDAP implementation is available.
Note:
This is required only for LDAPSync-enabled OIM installations and OIM installations that integrate with Oracle Access Management. Skip this section if you don't plan to enable LDAPSync or integrate with Oracle Access Management.
OIM does not communicate directly with Oracle Internet Directory (OID). It communicates with Oracle Virtual Directory, which communicates with OID.
Create the wlfullclient.jar file on OIMHOST1 and OIMHOST2. See Section 5.4.1.5, "Creating wlfullclient.jar Library on OIMHOST1 and OIMHOST2."
The schemas you create depend on the products you want to install and configure. Use a Repository Creation Utility (RCU) that is version compatible with the product you install. See the Oracle Fusion Middleware Installation Planning Guide for Oracle Identity and Access Management and Oracle Fusion Middleware Repository Creation Utility User's Guide to run RCU.
To install Oracle WebLogic Server, see Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.
Note:
On 64-bit platforms, the JDK does not install when you install WebLogic Server using the generic jar file. You must install the JDK separately, before you install WebLogic Server.
See "Installing Oracle SOA Suite (Oracle Identity Manager Users Only)" in Installation Guide for Oracle Identity and Access Management.
See "Installing and Configuring Identity and Access Management" in Installation Guide for Oracle Identity and Access Management.
Oracle Identity Manager requires the wlfullclient.jar library for some operations; for example, the Design Console uses it for server connections. Oracle does not ship this library; you must create it manually. Oracle recommends creating this library under the MW_HOME/wlserver_10.3/server/lib directory on all machines in the application tier of your environment. You do not need to create it on directory tier machines such as OIDHOST1, OIDHOST2, OVDHOST1, and OVDHOST2. See "Developing a WebLogic Full Client" in Oracle Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server for more information.
To create the wlfullclient.jar file:
Go to the MW_HOME/wlserver_10.3/server/lib directory.
Set JAVA_HOME to your JDK path and ensure that the JAVA_HOME/bin directory is in your path.
Create the wlfullclient.jar file by running:
java -jar wljarbuilder.jar
To create a domain, see the topic "Creating a new WebLogic Domain for Oracle Identity Manager, SOA, and BI Publisher" in Oracle Fusion Middleware Installation Guide for Oracle Identity and Access Management.
You must configure the database security store after you configure the domain but before you start Administration Server. See "Configuring Database Security Store for an Oracle Identity and Access Management Domain" in Installation Guide for Oracle Identity and Access Management for more information.
This section describes post-installation steps for OIMHOST1. It includes these topics:
Section 5.4.4.1, "Creating boot.properties for the Administration Server on OIMHOST1"
Section 5.4.4.4, "Start the Administration Server on OIMHOST1"
The boot.properties file enables the Administration Server to start without prompting for the administrator username and password.
To create the boot.properties file:
On OIMHOST1, create the following directory:
MW_HOME/user_projects/domains/domainName/servers/AdminServer/security
For example:
$ mkdir -p MW_HOME/user_projects/domains/domainName/servers/AdminServer/security
Use a text editor to create a file named boot.properties under the security directory. Enter the following lines in the file:
username=adminUser
password=adminUserPassword
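The two steps above can be sketched as a shell snippet; the domain path defaults to a scratch directory here, and the credentials are placeholders you must replace with your own:

```shell
# Create the security directory and boot.properties for the Administration
# Server. DOMAIN_HOME and the credentials below are example placeholders.
DOMAIN_HOME="${DOMAIN_HOME:-/tmp/OIM_Domain}"
SEC_DIR="$DOMAIN_HOME/servers/AdminServer/security"
mkdir -p "$SEC_DIR"
cat > "$SEC_DIR/boot.properties" <<'EOF'
username=weblogic
password=MyAdminPassword1
EOF
# WebLogic encrypts both entries the first time the Administration Server starts.
cat "$SEC_DIR/boot.properties"
```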
Note:
When you start the Administration Server, the username and password entries in the file are encrypted. For security reasons, minimize the time the entries are left unencrypted: after you edit the file, start the server as soon as possible so that the entries are encrypted.
Before you start the Managed Servers, Node Manager requires that the StartScriptEnabled property be set to true.
To do this, run the setNMProps.sh script located in the following directory:
MW_HOME/oracle_common/common/bin
Start Node Manager on OIMHOST1 using the startNodeManager.sh script located under the following directory:
MW_HOME/wlserver_10.3/server/bin
To start the Administration Server and validate its startup:
Start the Administration Server on OIMHOST1 by issuing the command:
DOMAIN_HOME/bin/startWebLogic.sh
Validate that the Administration Server started up successfully by opening a web browser and accessing the following pages:
Administration Console at:
http://oimhost1.example.com:7001/console
Oracle Enterprise Manager Fusion Middleware Control at:
http://oimhost1.example.com:7001/em
Log into these consoles using the weblogic user credentials.
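A simple availability check can also be scripted with curl; the host and port match the examples above, and the curl call is commented out because it requires a running Administration Server:

```shell
# Check that the Administration Console and Fusion Middleware Control respond.
ADMIN_URL="http://oimhost1.example.com:7001"   # example host and port
for APP in console em; do
  echo "would check: $ADMIN_URL/$APP"
  # curl -s -o /dev/null -w "%{http_code} $ADMIN_URL/$APP\n" "$ADMIN_URL/$APP"
done
```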
This section includes the following topics:
Section 5.4.5.1, "Prerequisites for Configuring Oracle Identity Manager"
Section 5.4.5.2, "Updating the Coherence Configuration for the Coherence Cluster"
Before configuring OIM, verify the following tasks are completed:
Note:
This section is required only for LDAPSync-enabled OIM installations and for OIM installations that integrate with Oracle Access Management.
If you do not plan to enable the LDAPSync option or to integrate with Oracle Access Management, skip this section.
Extending the Directory Schema for Oracle Identity Manager
Pre-configuring the Identity Store extends the schema in the back end directory regardless of directory type.
To pre-configure the Identity Store, perform these steps on OIMHOST1:
Set the environment variables MW_HOME, JAVA_HOME, and ORACLE_HOME.
Set ORACLE_HOME to IAM_ORACLE_HOME.
Create a properties file extend.props that contains the following:
IDSTORE_HOST : idstore.example.com
IDSTORE_PORT : 389
IDSTORE_BINDDN: cn=orcladmin
IDSTORE_USERNAMEATTRIBUTE: cn
IDSTORE_LOGINATTRIBUTE: uid
IDSTORE_USERSEARCHBASE: cn=Users,dc=example,dc=com
IDSTORE_GROUPSEARCHBASE: cn=Groups,dc=example,dc=com
IDSTORE_SEARCHBASE: dc=example,dc=com
IDSTORE_SYSTEMIDBASE: cn=systemids,dc=example,dc=com
Where:
IDSTORE_HOST and IDSTORE_PORT are the host and port of your Identity Store directory. If you are using a non-OID directory, specify the Oracle Virtual Directory host (which should be IDSTORE.example.com).
IDSTORE_BINDDN is an administrative user in the Identity Store directory.
IDSTORE_USERSEARCHBASE is the directory location where users are stored.
IDSTORE_GROUPSEARCHBASE is the directory location where groups are stored.
IDSTORE_SEARCHBASE is the directory location where users and groups are stored.
IDSTORE_SYSTEMIDBASE is the location of a container in the directory where you can place users that you do not want in the main user container. This is rare, but one example is the OIM reconciliation user, which is also used as the bind DN user in Oracle Virtual Directory adapters.
Configure the Identity Store using the idmConfigTool command, located at IAM_ORACLE_HOME/idmtools/bin.
The command syntax is:
idmConfigTool.sh -preConfigIDStore input_file=configfile
For example:
idmConfigTool.sh -preConfigIDStore input_file=extend.props
After the command runs, the system prompts you to enter the password of the account with which you are connecting to the ID Store.
Sample command output:
./preconfig_id.sh
Enter ID Store Bind DN password :
Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/idm_idstore_groups_template.ldif
Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/idm_idstore_groups_acl_template.ldif
Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/systemid_pwdpolicy.ldif
Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/idstore_tuning.ldif
Apr 5, 2011 3:39:25 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oid_schema_extn.ldif
The tool has completed its operation. Details have been logged to automation.log
Check the log file for any errors or warnings and correct them.
Creating Users and Groups for Oracle Identity Manager
Add the oimadmin user to the Identity Store and assign it to an OIM administrative group. You must also create a user outside of the standard cn=Users location to perform reconciliation. Oracle recommends that you select this user as the bind DN when connecting to directories with Oracle Virtual Directory.
Note:
This command also creates a container in your Identity Store for reservations.
To add the xelsysadm user to the Identity Store and assign it to an administrative group, perform the following tasks on OIMHOST1:
Set the environment variables MW_HOME, JAVA_HOME, IDM_HOME, and ORACLE_HOME.
Set IDM_HOME to IDM_ORACLE_HOME.
Set ORACLE_HOME to IAM_ORACLE_HOME.
Create a properties file oim.props that contains the following:
IDSTORE_HOST : idstore.example.com
IDSTORE_PORT : 389
IDSTORE_BINDDN : cn=orcladmin
IDSTORE_USERNAMEATTRIBUTE: cn
IDSTORE_LOGINATTRIBUTE: uid
IDSTORE_USERSEARCHBASE: cn=Users,dc=example,dc=com
IDSTORE_GROUPSEARCHBASE: cn=Groups,dc=example,dc=com
IDSTORE_SEARCHBASE: dc=example,dc=com
POLICYSTORE_SHARES_IDSTORE: true
IDSTORE_SYSTEMIDBASE: cn=systemids,dc=example,dc=com
IDSTORE_OIMADMINUSER: oimadmin
IDSTORE_OIMADMINGROUP: OIMAdministrators
Where:
IDSTORE_HOST and IDSTORE_PORT are, respectively, the host and port of your Identity Store directory. Specify the back-end directory here, rather than OVD.
IDSTORE_BINDDN is an administrative user in the Identity Store directory.
IDSTORE_OIMADMINUSER is the name of the administrative user you use to log in to the OIM console.
IDSTORE_OIMADMINGROUP is the name of the group you want to create to hold your OIM administrative users.
IDSTORE_USERSEARCHBASE is the location in your Identity Store where users are placed.
IDSTORE_GROUPSEARCHBASE is the location in your Identity Store where groups are placed.
IDSTORE_SYSTEMIDBASE is the location in your directory where the OIM reconciliation user is placed.
POLICYSTORE_SHARES_IDSTORE is set to true if your Policy and Identity stores are in the same directory; otherwise, set it to false.
Configure the Identity Store using the idmConfigTool command at IAM_ORACLE_HOME/idmtools/bin:
idmConfigTool.sh -prepareIDStore mode=OIM input_file=configfile
For example:
idmConfigTool.sh -prepareIDStore mode=OIM input_file=oim.props
When the command runs, the system prompts you for the password of the account with which you are connecting, and then requests the passwords you want to assign to the accounts, such as IDSTORE_OIMADMINUSER (oimadmin).
Oracle recommends that you set the oimadmin password to the same value as the account you create as part of the OIM configuration.
Sample command output:
Enter ID Store Bind DN password :
*** Creation of oimadmin ***
Apr 5, 2011 4:58:51 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_user_template.ldif
Enter User Password for oimadmin:
Confirm User Password for oimadmin:
Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_group_template.ldif
Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_group_member_template.ldif
Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_groups_acl_template.ldif
Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oim_reserve_template.ldif
*** Creation of Xel Sys Admin User ***
Apr 5, 2011 4:59:01 AM oracle.ldap.util.LDIFLoader loadOneLdifFile
INFO: -> LOADING: /u01/app/oracle/product/fmw/IAM/idmtools/templates/oid/oam_user_template.ldif
Enter User Password for xelsysadm:
Confirm User Password for xelsysadm:
The tool has completed its operation. Details have been logged to /home/oracle/idmtools/oim.log
Check the log file for errors and warnings and correct them.
To update the Coherence configuration for the SOA Managed Servers:
Log into the Administration Console.
Click Lock and Edit in the top left corner.
In the Domain Structure window, expand the Environment node.
Click Servers. The Summary of Servers page appears.
Click the name of the server (represented as a hyperlink) in the Name column of the table. The settings page for the selected server appears.
Click the Server Start tab.
Enter the following for WLS_SOA1 and WLS_SOA2 into the Arguments field.
For WLS_SOA1, enter the following (on a single line, without a carriage return):
-Dtangosol.coherence.wka1=soahost1vhn1 -Dtangosol.coherence.wka2=soahost2vhn1 -Dtangosol.coherence.localhost=soahost1vhn1
For WLS_SOA2, enter the following (on a single line, without a carriage return):
-Dtangosol.coherence.wka1=soahost1vhn1 -Dtangosol.coherence.wka2=soahost2vhn1 -Dtangosol.coherence.localhost=soahost2vhn1
Click Save and activate the changes.
Start WLS_SOA1 from the Administration Console.
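The arguments above follow one pattern: wka1 and wka2 list both SOA virtual host names, and localhost is the server's own listen address. A sketch of how the argument string for WLS_SOA1 is assembled (for WLS_SOA2, only the LOCAL value changes):

```shell
# Build the Coherence Server Start arguments from the virtual host names.
WKA1=soahost1vhn1
WKA2=soahost2vhn1
LOCAL=soahost1vhn1   # use soahost2vhn1 when generating WLS_SOA2's arguments
ARGS="-Dtangosol.coherence.wka1=$WKA1 -Dtangosol.coherence.wka2=$WKA2 -Dtangosol.coherence.localhost=$LOCAL"
echo "$ARGS"
```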
You must configure the OIM server instances before you can start the OIM Managed Servers. You perform these configuration steps only once, for example, during the initial creation of the domain. The Oracle Identity Management Configuration Wizard loads OIM metadata into the database and configures the instance.
Before running the Configuration Wizard, you must verify the following:
The administration server is up and running.
You updated the Coherence configuration for the Coherence cluster, as Section 5.4.5.2, "Updating the Coherence Configuration for the Coherence Cluster" describes.
wls_soa1 is running.
The environment variables DOMAIN_HOME and WL_HOME are not set in the current shell.
The Oracle Identity Management Configuration Wizard is located under the Identity Management Oracle home. Enter:
IAM_ORACLE_HOME/bin/config.sh
To run the OIM Configuration Wizard:
On the Welcome screen, click Next.
On the Components to Configure screen, select OIM Server. Select OIM Design Console, if required in your topology.
Click Next.
On the Database screen, provide the following values:
Connect String: The connect string for the OIM database. For example:
oimdbhost1-vip.example.com:1521:oimdb1^oimdbhost2-vip.example.com:1521:oimdb2@oim.example.com
OIM Schema User Name: HA_OIM
OIM Schema password: password
MDS Schema User Name: HA_MDS
MDS Schema Password: password
Click Next.
On the WebLogic Administration Server screen, enter the following details:
URL: URL to connect to the Administration Server. For example: t3://oimhost1.example.com:7001
UserName: weblogic
Password: Password for the weblogic user.
Click Next.
On the OIM Server screen, enter the following values:
OIM Administrator Password: Password for the OIM Administrator. This is the password for the xelsysadm user, the same password you entered earlier for idmconfigtool.
Confirm Password: Confirm the password.
OIM HTTP URL: Reverse proxy URL for the OIM Server. This is the URL for the hardware load balancer that fronts the OHS servers for OIM. For example: http://oiminternal.example.com:80.
Key Store Password: Key store password. The password must have an uppercase letter and a number. For example: MyPassword1
Confirm KeyStore Password: Confirm the KeyStore password.
Enable OIM for Suite integration: Select this checkbox only if you are configuring OIM for OAM or OAM-OAAM integration.
Click Next.
On the LDAP Server screen, provide the following LDAP server details:
Directory Server Type: The directory server type. Select OID, ACTIVE_DIRECTORY, IPLANET, or OVD. The default is OID.
Directory Server ID: The directory server ID.
Server URL: The URL to access the LDAP server. For example: ldap://ovd.example.com:389 if you use Oracle Virtual Directory, or ldap://oid.example.com:389 if you use Oracle Internet Directory.
Server User: The username to connect to the server. For example: cn=orcladmin.
Server Password: The password to connect to the LDAP server.
Server SearchDN: The Search DN. For example: dc=example,dc=com.
Click Next.
On the LDAP Server Continued screen, enter the following LDAP server details:
LDAP Role Container: The DN for the Role Container, where OIM roles are stored. For example: cn=Groups,dc=example,dc=com.
LDAP User Container: The DN for the User Container, where the OIM users are stored. For example: cn=Users,dc=example,dc=com.
User Reservation Container: The DN for the User Reservation Container.
Note:
Use the same container DN values that idmconfigtool creates during the procedure "Creating Users and Groups for Oracle Identity Manager."
Click Next.
On the Remote Manager screen, provide the following values:
Note:
This screen appears only if you selected the Remote Manager utility in step 2.
Service Name: HA_RManager
RMI Registry Port: 12345
Listen Port (SSL): 12346
On the Configuration Summary screen, verify the summary information.
Click Configure to configure the Oracle Identity Manager instance.
On the Configuration Progress screen, once the configuration completes successfully, click Next.
On the Configuration Complete screen, view the details of the Oracle Identity Manager Instance configured.
Click Finish to exit the Configuration Assistant.
To start the Managed Servers on OIMHOST1:
Stop the Administration Server and SOA Managed Servers on OIMHOST1 using the Administration Console.
Start the Administration Server on OIMHOST1 using the startWebLogic.sh script under the DOMAIN_HOME/bin directory. For example:
/u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 &
Open the Administration Console to validate that the Administration Server started successfully.
Start the WLS_SOA1 Managed Server using the Administration Console.
Start the WLS_BIP1 Managed Server using the Administration Console.
Start the WLS_OIM1 Managed Server using the Administration Console.
Validate the Oracle Identity Managed Server instance on OIMHOST1 by opening the Oracle Identity Manager Console in a web browser.
The URL for the Oracle Identity Manager Console is:
http://identityvhn1.example.com:14000/identity
Log in using the xelsysadm password.
After the configuration succeeds on OIMHOST1, you can propagate it to OIMHOST2 by packing the domain on OIMHOST1 and unpacking it on OIMHOST2.
Note:
Oracle recommends that you perform a clean shutdown of all Managed Servers on OIMHOST1 before you propagate the configuration to OIMHOST2.
To pack the domain on OIMHOST1 and unpack it on OIMHOST2:
On OIMHOST1, invoke the pack utility in the ORACLE_HOME/oracle_common/common/bin directory:
pack.sh -domain=MW_HOME/user_projects/domains/OIM_Domain \
  -template=/u01/app/oracle/admin/templates/oim_domain.jar \
  -template_name="OIM Domain" -managed=true
The previous step created the oim_domain.jar file in the following directory:
/u01/app/oracle/admin/templates
Copy oim_domain.jar from OIMHOST1 to a temporary directory on OIMHOST2.
On OIMHOST2, invoke the unpack utility in the MW_HOME/oracle_common/common/bin directory and specify the oim_domain.jar file location in its temporary directory:
unpack.sh -domain=MW_HOME/user_projects/domains/OIM_Domain \
  -template=/tmp/oim_domain.jar
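The pack, copy, and unpack steps above can be collected into one short script. The following is a sketch only: the MW_HOME value and the scp destination are assumptions for illustration, and the Oracle commands are shown as comments because they require an installed Middleware home.

```shell
#!/bin/sh
# Sketch of the domain propagation flow from OIMHOST1 to OIMHOST2.
# MW_HOME and the scp target below are illustrative assumptions.
MW_HOME=/u01/app/oracle/product/fmw
TEMPLATE=/u01/app/oracle/admin/templates/oim_domain.jar

# On OIMHOST1 -- pack the domain into a managed-server template:
#   $MW_HOME/oracle_common/common/bin/pack.sh \
#     -domain=$MW_HOME/user_projects/domains/OIM_Domain \
#     -template=$TEMPLATE -template_name="OIM Domain" -managed=true
#   scp $TEMPLATE oimhost2:/tmp/oim_domain.jar

# On OIMHOST2 -- unpack the template into the same domain path:
#   $MW_HOME/oracle_common/common/bin/unpack.sh \
#     -domain=$MW_HOME/user_projects/domains/OIM_Domain \
#     -template=/tmp/oim_domain.jar

echo "propagation flow: pack on OIMHOST1, copy, unpack on OIMHOST2 ($TEMPLATE)"
```

Keeping `-managed=true` is what restricts the template to Managed Server configuration, which is why the Administration Server is not duplicated on OIMHOST2.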
Before you can start Managed Servers with the Administration Console, you must set the Node Manager StartScriptEnabled property to true.
To do this, run the setNMProps.sh
script located under the following directory:
MW_HOME/oracle_common/common/bin
Start the Node Manager on OIMHOST2 using the startNodeManager.sh
script located under the following directory:
MW_HOME/wlserver_10.3/server/bin
To start Managed Servers on OIMHOST2:
Validate that the Administration Server started up successfully by bringing up the Administration Console.
Start the WLS_SOA2 Managed Server using the Administration Console.
Start the WLS_BIP2 Managed Server using the Administration Console.
Start the WLS_OIM2 Managed Server using the Administration Console. The WLS_OIM2 Managed Server must be started after the WLS_SOA2 Managed Server is started.
Validate the Oracle Identity Manager (OIM) and BI Publisher Managed Server instances on OIMHOST2.
Open the OIM Console with this URL:
http://identityvhn2.example.com:14000/oim
Log in using the xelsysadm password.
The URL for the BI Publisher is:
http://identityvhn2.example.com:9704/xmlpserver
Log in using the xelsysadm password.
To configure BI Publisher:
Verify that all BI servers use the same BI configuration. To do this, copy the contents of the DOMAIN_HOME/config/bipublisher/repository directory to the shared configuration folder location.
Note:
You can use any folder location, as long as it exists on shared storage (NFS or cluster file system) that both hosts can access at the same mount point on each host.
On OIMHOST1, log in to BI Publisher with Administrator credentials and select the Administration tab.
Under System Maintenance, select Server Configuration.
In the Path field under the Configuration Folder, enter the shared location for the Configuration Folder.
In the BI Publisher Repository field under Catalog, enter the shared location for the BI Publisher Repository. Apply the changes.
Repeat the preceding procedure for each Managed Server that BI is running on.
To restart the BI Publisher application:
Log in to the Administration Console.
Click Deployments in the Domain Structure window then select bipublisher(11.1.1).
Click Stop then select When work completes or Force Stop Now.
When the application stops, click Start then select Servicing All Requests.
Log in to BI Publisher again to confirm that the configuration change succeeded.
Note:
If you enter an incorrect shared configuration folder path, you may see this error when logging in to BI Publisher after restarting it:
example.xdo.servlet.resources.ResourceNotFoundException: INCORRECT_REPO_PATH/Admin/Security/principals.xml
INCORRECT_REPO_PATH is the incorrect repository path. To recover from this error, manually edit DOMAIN_HOME/config/bipublisher/xmlp-server-config.xml to correct the invalid path, then restart BI Publisher.
Continue on to the following procedures:
To set Scheduler configuration options:
On OIMHOST1, log in to BI Publisher with Administrator credentials and select the Administration tab.
Under System Maintenance, select Scheduler Configuration.
Select Quartz Clustering under the Scheduler Selection then click Apply.
In this procedure, you configure the location for all persistence stores to a directory that is visible from both nodes. You then change all persistent stores to use this shared base directory.
Log into the Administration Console. In the Domain Structure window, expand the Services node and click the Persistent Stores node.
Click Lock & Edit in the Change Center. Click the existing File Store (for example, BipJmsStore) and verify its target. If the target is WLS_BIP2, the new File Store you create must target WLS_BIP1.
Click New and Create File Store.
Enter a name, such as BipJmsStore1, and target WLS_BIP1. Enter a directory located on shared storage so that OIMHOST1 and OIMHOST2 can access it:
ORACLE_BASE/admin/domain_name/bi_cluster/jms
Click OK and Activate Changes.
In the Domain Structure window, expand the Services node and click the Messaging > JMS Servers node.
Click Lock & Edit in the Change Center then click New.
Enter a name, such as BipJmsServer1. In the Persistence Store drop-down list, select BipJmsStore1 and click Next.
Select WLS_BIP1 as the target. Click Finish and Activate Changes.
In the Domain Structure window, expand the Services node and click the Messaging > JMS Modules node.
Click Lock & Edit in the Change Center.
Click BipJmsResource and click the Subdeployments tab. Select BipJmsSubDeployment under Subdeployments.
Add the new BI Publisher JMS Server, BipJmsServer1, as an additional target for the subdeployment.
Click Save and Activate Changes.
Note:
In a high availability setup, you must keep the clocks of the BI, OIM, and SOA nodes synchronized.
To validate the JMS configuration for BI Publisher, follow the steps in Section 5.4.10.3, "Validating the BI Publisher Scheduler Configuration."
Follow this procedure to validate the JMS Shared Temp Directory for the BI Publisher Scheduler. You run this procedure on one OIMHOST only: OIMHOST1 or OIMHOST2.
To validate the BI Publisher Scheduler configuration:
Log in to BI Publisher at one of the following URLs:
http://OIMHOST1VHN1:9704/xmlpserver
http://OIMHOST2VHN1:9704/xmlpserver
Click Administration then click Scheduler Configuration under System Maintenance to open the Scheduler Configuration page.
Update the Shared Directory by entering a directory that is located in the shared storage. This shared storage is accessible from both OIMHOST1 and OIMHOST2.
Click Test JMS.
Note:
If you do not see a confirmation message for a successful test, verify that the JNDI URL is set to cluster:t3://bi_cluster.
Click Apply. Check the Scheduler status in the Scheduler Diagnostics tab.
Restart WLS_BIP1 and WLS_BIP2.
For more information, see the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Business Intelligence.
Note:
This section is required only for LDAPSync-enabled OIM installations and for OIM installations that integrate with Oracle Access Management.
If you do not plan to enable the LDAP-Sync option or to integrate with Oracle Access Management, you can skip this section.
In the current release, the LDAPConfigPostSetup script enables all the LDAPSync-related incremental Reconciliation Scheduler jobs, which are disabled by default. The LDAP configuration post-setup script is located under the IAM_ORACLE_HOME/server/ldap_config_util directory. To run the script, follow these steps:
Edit the ldapconfig.props file located under the IAM_ORACLE_HOME/server/ldap_config_util directory and provide the following values:
Parameter | Value | Description |
---|---|---|
OIMProviderURL | t3://OIMHOST1VHN.example.com:14000,OIMHOST2VHN.example.com:14000 | List of Oracle Identity Manager Managed Servers |
LDAPURL | ldap://idstore.example.com:389 | Identity Store URL. Required only if the Identity Store is accessed using Oracle Virtual Directory |
LDAPAdminUserName | cn=oimadmin,cn=systemids,dc=example,dc=com | Name of the user used to connect to the Identity Store. Required only if your Identity Store is in Oracle Virtual Directory. This user should not be located in cn=Users,dc=example,dc=com |
LIBOVD_PATH_PARAM | MSERVER_HOME/config/fmwconfig/ovd/oim | Required unless you access your identity store using Oracle Virtual Directory |
Note:
usercontainerName, rolecontainername, and reservationcontainername are not used in this step.
Save the file.
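Put together, a minimal ldapconfig.props for this topology might look like the following sketch. The values are the examples from the table above; LIBOVD_PATH_PARAM is shown commented out because it applies only when the identity store is not fronted by Oracle Virtual Directory.

```properties
# Sample ldapconfig.props (illustrative values from the table above).
# OVD-fronted identity store: set LDAPURL and LDAPAdminUserName.
OIMProviderURL=t3://OIMHOST1VHN.example.com:14000,OIMHOST2VHN.example.com:14000
LDAPURL=ldap://idstore.example.com:389
LDAPAdminUserName=cn=oimadmin,cn=systemids,dc=example,dc=com
# Without OVD, set LIBOVD_PATH_PARAM instead of the two LDAP* entries:
# LIBOVD_PATH_PARAM=MSERVER_HOME/config/fmwconfig/ovd/oim
```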
Set the JAVA_HOME, WL_HOME, APP_SERVER, OIM_ORACLE_HOME, and DOMAIN_HOME environment variables, where:
JAVA_HOME is set to MW_HOME/JRE-JDK_version
WL_HOME is set to MW_HOME/wlserver_10.3
APP_SERVER is set to weblogic
OIM_ORACLE_HOME is set to IAM_ORACLE_HOME
DOMAIN_HOME is set to MSERVER_HOME
Run LDAPConfigPostSetup.sh, passing the path to the property file. The script prompts for the LDAP administrator password and the OIM administrator password:
IAM_ORACLE_HOME/server/ldap_config_util/LDAPConfigPostSetup.sh path_to_property_file
For example:
IAM_ORACLE_HOME/server/ldap_config_util/LDAPConfigPostSetup.sh IAM_ORACLE_HOME/server/ldap_config_util
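The environment settings from the previous step can be captured in a small file that you source before running the script. This is a sketch only: MW_HOME, the JDK directory name, and the domain path are assumptions to adapt to your installation.

```shell
#!/bin/sh
# Write an environment file for LDAPConfigPostSetup.sh.
# All paths below are illustrative; substitute your real homes.
MW_HOME=/u01/app/oracle/product/fmw
IAM_ORACLE_HOME=$MW_HOME/iam
ENV_FILE=/tmp/ldap_postsetup_env.sh

cat > "$ENV_FILE" <<EOF
export JAVA_HOME=$MW_HOME/jdk
export WL_HOME=$MW_HOME/wlserver_10.3
export APP_SERVER=weblogic
export OIM_ORACLE_HOME=$IAM_ORACLE_HOME
export DOMAIN_HOME=/u01/app/oracle/admin/OIM/mserver/OIM_Domain
EOF

# Then, on the OIM host:
#   . "$ENV_FILE"
#   $IAM_ORACLE_HOME/server/ldap_config_util/LDAPConfigPostSetup.sh path_to_property_file
grep -c '^export' "$ENV_FILE"
```

Sourcing one file keeps the five variables consistent if you need to rerun the script later.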
For this high availability topology, Oracle recommends that you configure server migration for the WLS_OIM1, WLS_SOA1, WLS_OIM2, and WLS_SOA2 Managed Servers. See Section 3.9, "Whole Server Migration" for information on the benefits of using Whole Server Migration and why Oracle recommends it.
The WLS_OIM1 and WLS_SOA1 Managed Servers on OIMHOST1 are configured to restart automatically on OIMHOST2 if a failure occurs on OIMHOST1.
The WLS_OIM2 and WLS_SOA2 Managed Servers on OIMHOST2 are configured to restart automatically on OIMHOST1 if a failure occurs on OIMHOST2.
In this configuration, the WLS_OIM1, WLS_SOA1, WLS_OIM2 and WLS_SOA2 servers listen on specific floating IPs that WebLogic Server Migration fails over.
The following steps enable server migration for the WLS_OIM1, WLS_SOA1, WLS_OIM2, and WLS_SOA2 Managed Servers, which in turn enables a Managed Server to fail over to another node if a server or process failure occurs:
Step 1: Setting Up a User and Tablespace for the Server Migration Leasing Table
Step 2: Creating a GridLink Data Source
Step 4: Setting Environment and Superuser Privileges for the wlsifconfig.sh Script
Step 6: Testing the Server Migration
To set up a user and tablespace for the server migration leasing table:
Note:
If other servers in the same domain are already configured with server migration, use the same tablespace and data sources. In this case, you do not need to re-create the GridLink data source for database leasing; however, you must retarget it to the clusters that you are configuring for server migration.
Create a tablespace named leasing. For example, log on to SQL*Plus as the sysdba user and run the following command:
SQL> create tablespace leasing logging datafile 'DB_HOME/oradata/orcl/leasing.dbf' size 32m autoextend on next 32m maxsize 2048m extent management local;
Note: Omit the data file path (DB_HOME/oradata/orcl/leasing.dbf) if you have configured Oracle Managed Files (OMF). If you are using Oracle Automatic Storage Management (ASM), you can provide the name of the ASM disk group instead, for example, +DATA. If the path is omitted, the default disk group configured in the DB_CREATE_FILE_DEST database initialization parameter is used.
Create a user named leasing and assign it the leasing tablespace:
SQL> create user leasing identified by password;
SQL> grant create table to leasing;
SQL> grant create session to leasing;
SQL> alter user leasing default tablespace leasing;
SQL> alter user leasing quota unlimited on LEASING;
Create the leasing table using the leasing.ddl script:
Copy the leasing.ddl file located in either the WL_HOME/server/db/oracle/817 or the WL_HOME/server/db/oracle/920 directory to your database node.
Connect to the database as the leasing user.
Run the leasing.ddl script in SQL*Plus:
SQL> @Copy_Location/leasing.ddl;
Note:
The following errors are normal; you can ignore them:
SP2-0734: unknown command beginning "WebLogic S..." - rest of line ignored.
SP2-0734: unknown command beginning "Copyright ..." - rest of line ignored.
DROP TABLE ACTIVE
*
ERROR at line 1:
ORA-00942: table or view does not exist
To create a GridLink data source, see "Creating a GridLink Data Source" in the Oracle Fusion Middleware Configuring and Managing JDBC Data Sources for Oracle WebLogic Server guide.
You must edit the nodemanager.properties
file to add the following properties for each node where you configure server migration:
Interface=eth0
NetMask=255.255.248.0
UseMACBroadcast=true
Interface: Specifies the interface name for the floating IP (such as eth0).
Note:
Do not specify the sub-interface, such as eth0:1 or eth0:2. This interface is to be used without the :0 or :1 suffix. Node Manager's scripts traverse the different :X enabled IPs to determine which to add or remove. For example, valid values in Linux environments are eth0, eth1, eth2, eth3, and so on, depending on the number of interfaces configured.
NetMask: Net mask for the interface for the floating IP. The net mask should be the same as the net mask on the interface; 255.255.255.0 is an example. The actual value depends on your network.
UseMACBroadcast: Specifies whether or not to use a node's MAC address when sending ARP packets, that is, whether or not to use the -b flag in the arping command.
Verify in Node Manager's output (the shell where Node Manager starts) that these properties are in use; otherwise, problems may arise during migration. (You must restart Node Manager for the new properties to take effect.) You should see an entry similar to the following in Node Manager's output:
... StateCheckInterval=500 Interface=eth0 NetMask=255.255.255.0 ...
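Before restarting Node Manager, a quick grep loop confirms that all three migration properties are present. The sketch below writes a sample file under /tmp so it is self-contained; point the F variable at your real nodemanager.properties instead.

```shell
#!/bin/sh
# Self-contained sketch: write a sample nodemanager.properties fragment,
# then verify that the three server-migration keys are present.
F=/tmp/nodemanager.properties.sample   # use your real nodemanager.properties path
cat > "$F" <<'EOF'
Interface=eth0
NetMask=255.255.248.0
UseMACBroadcast=true
EOF

# Report each expected key that the file defines.
for key in Interface NetMask UseMACBroadcast; do
  grep -q "^${key}=" "$F" && echo "$key present"
done
```

A missing key here is exactly the situation that later surfaces as a failed IP migration, so this check is cheaper than debugging a failover.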
To set environment and superuser privileges for the wlsifconfig.sh
script for each node where you configure server migration:
Modify the login profile of the user account that you use to run Node Manager to ensure that the PATH environment variable for the Node Manager process includes the directories housing the wlsifconfig.sh and wlscontrol.sh scripts and the nodemanager.domains configuration file.
Grant sudo configuration for the wlsifconfig.sh script.
Configure sudo to work without a password prompt.
For security reasons, Oracle recommends restricting sudo to the subset of commands required to run the wlsifconfig.sh script. For example, perform the following steps to set the environment and superuser privileges for the wlsifconfig.sh script:
Grant sudo privilege to the WebLogic user (oracle) with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries.
Ensure that the script is executable by the WebLogic user. The following is an example of an entry inside /etc/sudoers that grants sudo execution privilege to oracle for ifconfig and arping:
oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
Note:
Ask the system administrator for the sudo and system rights as appropriate to this step.
You first assign all available nodes as candidates for the cluster's members, and then specify candidate machines (in order of preference) for each server that is configured with server migration. To configure cluster migration:
Log into the Administration Console.
In the Domain Structure window, expand Environment and select Clusters.
Click the cluster you want to configure migration for in the Name column.
Click the Migration tab.
Click Lock and Edit.
In the Available field, select the machine to which to enable migration and click the right arrow.
Select the data source to use for automatic migration. In this case, select the leasing data source.
Click Save.
Click Activate Changes.
Set the candidate machines for server migration. You must perform this task for all Managed Servers as follows:
In the Domain Structure window of the Administration Console, expand Environment and select Servers.
Tip:
Click Customize this table in the Summary of Servers page and move Current Machine from the Available window to the Chosen window to view the machine that the server runs on. This will differ from the configuration if the server migrates automatically.
Select the server that you want to configure migration for.
Click the Migration tab.
In the Available field, located in the Migration Configuration section, select the machines you want to enable migration to and click the right arrow.
Select Automatic Server Migration Enabled. This enables Node Manager to start a failed server on the target node automatically.
Click Save, then click Activate Changes.
Repeat the steps above for any additional Managed Servers.
Restart the Administration Server, the Node Managers, and the servers for which server migration has been configured.
To verify that server migration works properly:
Stop the WLS_OIM1 Managed Server by running the command:
OIMHOST1> kill -9 pid
where pid specifies the process ID of the Managed Server. You can identify the pid in the node by running this command:
OIMHOST1> ps -ef | grep WLS_OIM1
Watch the Node Manager console. You should see a message indicating that WLS_OIM1's floating IP has been disabled.
Wait for Node Manager to try a second restart of WLS_OIM1. It waits for a fence period of 30 seconds before trying this restart.
Once Node Manager restarts the server, stop it again. Node Manager should now log a message indicating that the server will not be restarted again locally.
Watch the local Node Manager console. Thirty seconds after the last attempt to restart WLS_OIM1 on OIMHOST1, Node Manager on OIMHOST2 should report that the floating IP for WLS_OIM1 is being brought up and that the server is being restarted on that node.
Access the soa-infra console at the same IP address.
Follow the steps above to test server migration for the WLS_OIM2, WLS_SOA1, and WLS_SOA2 Managed Servers.
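The manual failure injection in step 1 can be wrapped in a few lines of shell. The bracketed character class keeps grep from matching its own command line; the server name WLS_OIM1 is the one from the text.

```shell
#!/bin/sh
# Locate the WLS_OIM1 JVM and kill it to simulate a server failure.
# The [W] prevents grep from matching the grep process itself.
PID=$(ps -ef | grep '[W]LS_OIM1' | awk '{print $2}' | head -1)
if [ -n "$PID" ]; then
  echo "killing WLS_OIM1 (pid $PID)"
  kill -9 "$PID"
else
  echo "WLS_OIM1 process not found on this host"
fi
```

Run it on OIMHOST1 while watching the Node Manager consoles on both hosts, as described in the steps above.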
Table 5-2 shows the Managed Servers and the hosts they migrate to in case of a failure.
Table 5-2 WLS_OIM1, WLS_OIM2, WLS_SOA1, WLS_SOA2 Server Migration
Managed Server | Migrated From | Migrated To |
---|---|---|
WLS_OIM1 | OIMHOST1 | OIMHOST2 |
WLS_OIM2 | OIMHOST2 | OIMHOST1 |
WLS_SOA1 | OIMHOST1 | OIMHOST2 |
WLS_SOA2 | OIMHOST2 | OIMHOST1 |
Verification From the Administration Console
To verify migration in the Administration Console:
Log into the Administration Console at http://oimhost1.example.com:7001/console using administrator credentials.
Click Domain on the left console.
Click the Monitoring tab and then the Migration sub tab.
The Migration Status table provides information on the status of the migration.
Note:
After a server migrates, to fail it back to its original node/machine, stop the Managed Server in the Administration Console and then start it again. The appropriate Node Manager starts the Managed Server on the machine it was originally assigned to.
Each Managed Server has a transaction log that stores information about in-flight transactions that the Managed Server coordinates but that may not complete. WebLogic Server uses the transaction log to recover from system and network failures. To leverage the Transaction Recovery Service migration capability, store the transaction log in a location that all Managed Servers in a cluster can access. Without shared storage, other servers in the cluster cannot run transaction recovery if a server fails, so the operation may need to be retried.
Note:
Oracle recommends a location on a Network Attached Storage (NAS) device or Storage Area Network (SAN).
To set the location for default persistence stores for the OIM and SOA Servers:
Log into the Administration Console at http://oimhost1.example.com:7001/console using administrator credentials.
In the Domain Structure window, expand the Environment node and then click the Servers node. The Summary of Servers page opens.
Select the name of the server (represented as a hyperlink) in the Name column of the table. The Settings page for the server opens to the Configuration tab.
Select the Services subtab of the Configuration tab (not the Services top-level tab).
In the Default Store section, enter the path to the folder where the default persistent stores store their data files. The directory structure of the path should be:
For the WLS_SOA1 and WLS_SOA2 servers, use a directory structure similar to:
ORACLE_BASE/admin/domainName/soaClusterName/tlogs
For the WLS_OIM1 and WLS_OIM2 servers, use a directory structure similar to:
ORACLE_BASE/admin/domainName/oimClusterName/tlogs
Click Save.
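You can create the shared tlog directories ahead of time so the path you enter in the console already exists. In this sketch, ORACLE_BASE, the domain name, and the cluster names are placeholders to replace with your own values.

```shell
#!/bin/sh
# Pre-create the shared transaction-log directories for both clusters.
# ORACLE_BASE, OIM_Domain, and the cluster names are illustrative.
ORACLE_BASE=/tmp/oracle_base_demo
for CLUSTER in soa_cluster oim_cluster; do
  mkdir -p "$ORACLE_BASE/admin/OIM_Domain/$CLUSTER/tlogs"
done
ls "$ORACLE_BASE/admin/OIM_Domain"
```

On a real deployment, run this once on the shared storage mount so that every cluster member sees the same directories.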
Note:
To enable migration of the Transaction Recovery Service, specify a location on a persistent storage solution that is available to the Managed Servers in the cluster. WLS_SOA1, WLS_SOA2, WLS_OIM1, and WLS_OIM2 must be able to access this directory.
Install Oracle HTTP Server on WEBHOST1 and WEBHOST2.
This section describes how to configure OIM to work with the Oracle Web Tier.
Verify that the following tasks have been performed:
Oracle Web Tier has been installed on WEBHOST1 and WEBHOST2.
OIM is installed and configured on OIMHOST1 and OIMHOST2.
The load balancer has been configured with a virtual hostname (sso.example.com) pointing to the web servers on WEBHOST1 and WEBHOST2. sso.example.com is customer facing and the main point of entry; it is typically SSL terminated.
The load balancer has been configured with a virtual hostname (oiminternal.example.com) pointing to the web servers on WEBHOST1 and WEBHOST2. oiminternal.example.com is for internal callbacks and is not customer facing.
On each of the web servers on WEBHOST1 and WEBHOST2, create a file named oim.conf in the ORACLE_INSTANCE/config/OHS/COMPONENT/moduleconf directory. This file must contain the following information:
# oim admin console(idmshell based)
<Location /admin>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

# oim self and advanced admin webapp consoles(canonic webapp)
<Location /oim>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

<Location /identity>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

<Location /sysadmin>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

# SOA Callback webservice for SOD - Provide the SOA Managed Server Ports
<Location /sodcheck>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster soavhn1.example.com:8001,soavhn2.example.com:8001
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

# Callback webservice for SOA. SOA calls this when a request is approved/rejected
# Provide the SOA Managed Server Port
<Location /workflowservice>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

# xlWebApp - Legacy 9.x webapp (struts based)
<Location /xlWebApp>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

# Nexaweb WebApp - used for workflow designer and DM
<Location /Nexaweb>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

# used for FA Callback service.
<Location /callbackResponseService>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

# spml xsd profile
<Location /spml-xsd>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

<Location /HTTPClnt>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

<Location /reqsvc>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

<Location /integration>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster soavhn1.example.com:8001,soavhn2.example.com:8001
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

<Location /provisioning-callback>
    SetHandler weblogic-handler
    WLCookieName oimjsessionid
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

<Location /xmlpserver>
    SetHandler weblogic-handler
    WLCookieName JSESSIONID
    WebLogicCluster oimvhn1.example.com:9704,oimvhn2.example.com:9704
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>

<Location /CertificationCallbackService>
    SetHandler weblogic-handler
    WLCookieName JSESSIONID
    WebLogicCluster oimvhn1.example.com:14000,oimvhn2.example.com:14000
    WLLogFile "${ORACLE_INSTANCE}/diagnostics/logs/mod_wl/oim_component.log"
    WLProxySSL ON
    WLProxySSLPassThrough ON
</Location>
Create a file called virtual_hosts.conf in the ORACLE_INSTANCE/config/OHS/COMPONENT/moduleconf directory. The file must contain the following information:
Note:
COMPONENT is typically ohs1 or ohs2. However, the name depends on choices you made during OHS installation.
NameVirtualHost *:7777
<VirtualHost *:7777>
    ServerName http://sso.example.com:7777
    RewriteEngine On
    RewriteOptions inherit
    UseCanonicalName On
</VirtualHost>
<VirtualHost *:7777>
    ServerName http://oiminternal.example.com:80
    RewriteEngine On
    RewriteOptions inherit
    UseCanonicalName On
</VirtualHost>
Save the file on both WEBHOST1 and WEBHOST2.
Stop and start the Oracle HTTP Server instances on both WEBHOST1 and WEBHOST2.
To validate that Oracle HTTP Server is configured properly, follow these steps:
In a web browser, enter the following URL for the Oracle Identity Manager Console:
http://sso.example.com:7777/identity
The Oracle Identity Manager Console login page should display.
Log into the Oracle Identity Manager Console using the credentials for the xelsysadm user.
In a high availability environment, you configure Node Manager to monitor Oracle WebLogic Servers. In case of failure, Node Manager restarts the WebLogic Server.
A hardware load balancer load balances requests between multiple OIM instances. If one OIM Managed Server fails, the load balancer detects the failure and routes requests to surviving instances.
In a high availability environment, state and configuration information is stored in a database that all cluster members share. Surviving OIM instances continue to seamlessly process any unfinished transactions started on the failed instance because state information is in the shared database, available to all cluster members.
When an OIM instance fails, its database and LDAP connections are released. Surviving instances in the active-active deployment make their own connections to continue processing unfinished transactions on the failed instance.
When you deploy OIM in a high availability configuration:
You can deploy OIM on an Oracle RAC database, but Oracle RAC failover is not transparent for OIM in this release. If Oracle RAC failover occurs, end users may have to resubmit their requests.
Oracle Identity Manager always requires the availability of at least one node in the SOA cluster. If the SOA cluster is not available, end user requests fail. OIM does not retry for a failed SOA call. Therefore, the end user must retry when a SOA call fails.
You can scale out or scale up the OIM high availability topology. When you scale up the topology, you add new Managed Servers to nodes that are already running one or more Managed Servers. When you scale out the topology, you add new Managed Servers to new nodes. See Section 5.4.19, "Scaling Out Oracle Identity Manager" to scale out.
In this case, you have a node that runs a Managed Server configured with SOA and BI components. The node contains:
A Middleware home
An Oracle Home (SOA)
An Oracle Home (BIP)
A domain directory for existing Managed Servers
You can use the existing installations (Middleware home and domain directories) to create new WLS_OIM, WLS_SOA, and WLS_BIP Managed Servers. You do not need to install OIM, SOA, and BIP binaries in a new location or run pack and unpack.
This procedure describes how to clone OIM, SOA, and BIP Managed Servers. You may clone all three or two of these component types, as long as one of them is OIM.
Note the following:
This procedure refers to WLS_OIM, WLS_SOA, and WLS_BIP. However, you may not be scaling up all three components. For each step, choose the component(s) that you are scaling up in your environment. Also, some steps do not apply to all components.
The persistent store's shared storage directory for JMS Servers must exist before you start the Managed Server or the start operation fails.
Each time you specify the persistent store's path, it must be a directory on shared storage
To scale up the topology:
In the Administration Console, clone WLS_OIM1/WLS_SOA1/WLS_BIP1. The Managed Server that you clone should be one that already exists on the node where you want to run the new Managed Server.
Select Environment -> Servers from the Administration Console.
Select the Managed Server(s) that you want to clone.
Select Clone.
Name the new Managed Server WLS_OIMn/WLS_SOAn/WLS_BIPn, where n is a number that identifies the new Managed Server.
The rest of the steps assume that you are adding a new Managed Server to OIMHOST1, which is already running WLS_OIM1, WLS_SOA1, and WLS_BIP1.
For the listen address, assign the hostname or IP for the new Managed Server(s). If you plan to use server migration, use the VIP (floating IP) to enable Managed Server(s) to move to another node. Use a VIP different from the VIP that the existing Managed Server uses.
Create JMS Servers for OIM/SOA/BIP, BPM, UMS, JRFWSAsync, and PS6SOA on the new Managed Server.
In the Administration Console, create a new persistent store for the OIM/SOA/BIP JMS Server(s) and name it, for example, SOAJMSFileStore_n or BipJmsStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms
Create a new JMS Server for OIM/SOA/BIP, for example, SOAJMSServer_n or BipJmsServer_n. Use SOAJMSFileStore_n (or BipJmsStore_n) for this JMS Server. Target the new JMS Server to the new Managed Server(s).
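For repeatable scale-up, the console steps for creating the persistent store and JMS Server can also be scripted. The following is a hedged WLST (online) sketch only, not verified against a live domain; the admin URL, credentials, directory, and the WLS_SOA3/SOAJMSFileStore_3/SOAJMSServer_3 names are placeholders:

```
connect('weblogic', 'admin_password', 't3://ADMINVHN:7001')
edit()
startEdit()
# Create the file store on shared storage and target the new server.
store = cmo.createFileStore('SOAJMSFileStore_3')
store.setDirectory('/u01/app/oracle/admin/oim_domain/soa_cluster/jms')
store.addTarget(getMBean('/Servers/WLS_SOA3'))
# Create the JMS Server, back it with the store, and target the new server.
jmsServer = cmo.createJMSServer('SOAJMSServer_3')
jmsServer.setPersistentStore(store)
jmsServer.addTarget(getMBean('/Servers/WLS_SOA3'))
save()
activate()
disconnect()
```

Run this with wlst.sh against the Administration Server; verify the MBean names against your domain before use.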
Create a persistent store for the new UMSJMSServer(s), for example, UMSJMSFileStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/UMSJMSFileStore_n
Create a new JMS Server for UMS, for example, UMSJMSServer_n. Target it to the new Managed Server (WLS_SOAn).
Create a persistent store for the new BPMJMSServer(s), for example, BPMJMSFileStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/BPMJMSFileStore_n
Create a new JMS Server for BPM, for example, BPMJMSServer_n. Target it to the new Managed Server (WLS_SOAn).
Create a persistent store for the new BipJmsServer(s), for example, BipJmsStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/BipJmsStore_n
Create a new persistent store for the new JRFWSAsyncJMSServer, for example, JRFWSAsyncJMSFileStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/JRFWSAsyncJMSFileStore_n
Create a JMS Server for JRFWSAsync, for example, JRFWSAsyncJMSServer_n. Use JRFWSAsyncJMSFileStore_n for this JMS Server. Target JRFWSAsyncJMSServer_n to the new Managed Server (WLS_OIMn).
Note:
You can also assign SOAJMSFileStore_n as the store for the new JRFWSAsync JMS Servers. For clarity and isolation, individual persistent stores are used in the following steps.
Create a persistent store for the new PS6SOAJMSServer, for example, PS6SOAJMSFileStore_auto_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/PS6SOAJMSFileStore_auto_n
Create a JMS Server for PS6SOA, for example, PS6SOAJMSServer_auto_n. Use PS6SOAJMSFileStore_auto_n for this JMS Server. Target PS6SOAJMSServer_auto_n to the new Managed Server (WLS_SOAn).
Note:
You can also assign SOAJMSFileStore_n as the store for the new PS6 JMS Servers. For clarity and isolation, individual persistent stores are used in the following steps.
Update SubDeployment targets for the SOA JMS Module to include the new SOA JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click SOAJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the SOAJMSServerXXXXXX subdeployment and add SOAJMSServer_n to it. Click Save.
Note:
A subdeployment module name is a random name in the form COMPONENTJMSServerXXXXXX, generated by the Configuration Wizard JMS configuration for the first two Managed Servers (WLS_COMPONENT1 and WLS_COMPONENT2).
Update SubDeployment targets for UMSJMSSystemResource to include the new UMS JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click UMSJMSSystemResource (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the UMSJMSServerXXXXXX subdeployment and add UMSJMSServer_n to it. Click Save.
Update SubDeployment targets for OIMJMSModule to include the new OIM JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click OIMJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the OIMJMSServerXXXXXX subdeployment and add OIMJMSServer_n to it. Click Save.
Update SubDeployment targets for the JRFWSAsyncJmsModule to include the new JRFWSAsync JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click JRFWSAsyncJmsModule (hyperlink in the Names column of the table). In the Settings page, click the SubDeployments tab. Click the JRFWSAsyncJMSServerXXXXXX subdeployment and add JRFWSAsyncJMSServer_n to this subdeployment. Click Save.
Update the SubDeployment targets for PS6SOAJMSModule to include the new PS6SOA JMS Server. Expand the Services node and the Messaging node. Choose JMS Modules from the Domain Structure window. Click PS6SOAJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. Click the PS6SOAJMSServerXXXXXX subdeployment. Add PS6SOAJMSServer_auto_n to this subdeployment. Click Save.
Update SubDeployment targets for the BPM JMS Module to include the new BPM JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click BPMJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the BPMJMSServerXXXXXX subdeployment and add BPMJMSServer_n to it. Click Save.
For SOA Managed Servers only, configure Oracle Coherence for deploying composites.
Note:
Change only the localhost field for the new server, replacing localhost with the new server's listen address. For example:
-Dtangosol.coherence.localhost=SOAHOST1VHNn
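For reference, a hedged sketch of what the full set of Coherence start arguments for a new WLS_SOAn server might look like; the well-known-address hosts and port below are examples only, so verify them against the start arguments of your existing SOA servers:

```
-Dtangosol.coherence.wka1=SOAHOST1VHN1
-Dtangosol.coherence.wka2=SOAHOST2VHN1
-Dtangosol.coherence.localhost=SOAHOST1VHNn
-Dtangosol.coherence.localport=8088
```

Set these in the Arguments field of the Server Start tab for the new Managed Server.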
Configure the transaction persistent store for the new server in a shared storage location visible from other nodes.
From the Administration Console, select Server_name > Services tab. Under Default Store, in Directory, enter the path to the default persistent store.
Disable hostname verification for the new Managed Server; this is required before starting and verifying a WLS_SOAn Managed Server. You can re-enable it after you configure server certificates for Administration Server/Node Manager communication in SOAHOSTn. If the source server (from which you cloned the new Managed Server) had hostname verification disabled, these steps are not required; hostname verification settings propagate to a cloned server.
To disable hostname verification:
In the Administration Console, expand the Environment node in the Domain Structure window.
Click Servers. Select WLS_SOAn in the Names column of the table.
Click the SSL tab. Click Advanced.
Set Hostname Verification to None. Click Save.
Start and test the new Managed Server from the Administration Console.
Shut down the existing Managed Servers in the cluster.
Ensure that the newly created Managed Server is up.
Access the application on the newly created Managed Server to verify that it works. A login page opens for OIM and BI Publisher. For SOA, an HTTP basic authentication dialog opens.
In the Administration Console, select Services, then Foreign JNDI Provider. Confirm that ForeignJNDIProvider-SOA targets cluster:t3://soa_cluster, not an individual Managed Server. You target the cluster so that new Managed Servers do not require configuration. If ForeignJNDIProvider-SOA does not target the cluster, retarget it to the cluster.
Configure Server Migration for the new Managed Server.
Note:
For scale up, the node must have a Node Manager, an environment configured for server migration, and the floating IP for the new Managed Server(s).
To configure server migration:
Log into the Administration Console.
In the left pane, expand Environment and select Servers.
Select the server (hyperlink) that you want to configure migration for.
Click the Migration tab.
In the Available field, in the Migration Configuration section, select machines to enable migration for and click the right arrow. Select the same migration targets as for the servers that already exist on the node.
For example:
For new Managed Servers on SOAHOST1, which is already running WLS_SOA1, select SOAHOST2.
For new Managed Servers on SOAHOST2, which is already running WLS_SOA2, select SOAHOST1.
Verify that the appropriate resources are available to run Managed Servers concurrently during migration.
Select the Automatic Server Migration Enabled option to enable Node Manager to start a failed server on the target node automatically.
Click Save.
Restart the Administration Server, Managed Servers, and Node Manager.
Repeat these steps to configure server migration for the newly created WLS_OIMn Managed Server.
To test server migration for this new server, follow these steps from the node where you added the new server:
Stop the Managed Server.
Run kill -9 pid on the PID of the Managed Server. To identify the PID of the Managed Server, enter, for example, ps -ef | grep WLS_SOAn. Substitute BIP for SOA if necessary.
Watch Node Manager Console for a message indicating that the Managed Server floating IP is disabled.
Wait for Node Manager to try a second restart of the Managed Server. Node Manager waits for 30 seconds before trying this restart.
After Node Manager restarts the server, stop it again. Node Manager logs a message indicating that the server will not restart again locally.
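The PID lookup and kill in the migration test above can be sketched as a small helper; this is a hedged illustration, and the server-name pattern (for example, WLS_SOA3) is a placeholder:

```shell
# Hedged sketch of the manual migration test: find the Managed Server's
# PID by matching its name in ps output, then kill it with SIGKILL.
# "grep -v grep" excludes the grep process itself from the match.
kill_server() {
  pattern="$1"
  pid=$(ps -ef | grep "$pattern" | grep -v grep | awk '{print $2}' | head -n 1)
  if [ -n "$pid" ]; then
    kill -9 "$pid"
  fi
}
```

For example, `kill_server WLS_SOA3` kills the first process whose command line matches WLS_SOA3; Node Manager should then detect the failure and attempt the restarts described above.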
Edit the OHS configuration file to add the new Managed Server(s). See Section 5.4.19.1, "Configuring Oracle HTTP Server to Recognize New Managed Servers."
When you scale out the topology, you add new Managed Servers configured with software to new nodes.
Note:
Steps in this procedure refer to WLS_OIM, WLS_SOA, and WLS_BIP. However, you may not be scaling out all three components. For each step, choose the component(s) that you are scaling out in your environment. Some steps do not apply to all components.
Before you scale out, check that you meet these requirements:
Existing nodes running Managed Servers configured with OIM, SOA, and/or BIP in the topology.
The new node can access existing home directories for WebLogic Server, SOA, and BIP. (Use the existing installations in shared storage to create the new Managed Servers. You do not need to install WebLogic Server or component binaries in a new location, but you must run pack and unpack to bootstrap the domain configuration on the new node.)
Note:
If there is no existing installation in shared storage, you must install WebLogic Server and SOA on the new nodes.
Note:
When multiple servers on different nodes share an ORACLE_HOME or WL_HOME, Oracle recommends keeping the Oracle Inventory and Middleware home list on those nodes updated for consistency in the installations and application of patches. To update the oraInventory on a node and attach an installation in shared storage to it, use ORACLE_HOME/oui/bin/attachHome.sh. To update the Middleware home list to add or remove a WL_HOME, edit the user_home/bea/beahomelist file, as shown in the following steps.
To scale out the topology:
On the new node, mount the existing Middleware home. Include the SOA and/or BIP installation and the domain directory, and ensure the new node has access to this directory, just like the rest of the nodes in the domain.
Attach ORACLE_HOME in shared storage to the local Oracle Inventory. For example:
cd /u01/app/oracle/soa/
./attachHome.sh -jreLoc /u01/app/JRE-JDK_version
To update the Middleware home list, create (or edit, if another WebLogic installation exists on the node) the MW_HOME/bea/beahomelist file and add u01/app/oracle to it.
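As a hedged sketch, the beahomelist update can be made idempotent so that repeating the scale-out procedure does not add duplicate entries. The helper name is hypothetical, this sketch writes one entry per line, and you should check the separator convention of your existing beahomelist file before editing it:

```shell
# Hedged sketch: append a Middleware home path to the beahomelist file
# only if it is not already listed (one entry per line in this sketch).
add_mw_home() {
  home="$1"; list="$2"
  mkdir -p "$(dirname "$list")"
  grep -qx "$home" "$list" 2>/dev/null || echo "$home" >> "$list"
}
```

For example, `add_mw_home /u01/app/oracle "$MW_HOME/bea/beahomelist"` adds the entry once, even if run on every node that shares the installation.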
Log in to the Administration Console.
Create a new machine for the new node. Add the machine to the domain.
Update the machine's Node Manager address to the IP of the node being used for scale out.
Clone WLS_OIM1/WLS_SOA1/WLS_BIP1. The Managed Server that you clone should be one that already exists on the node where you want to run the new Managed Server.
To clone OIM, SOA, and/or BIP:
Select Environment -> Servers from the Administration Console.
Select the Managed Server(s) that you want to clone.
Select Clone.
Name the new Managed Server WLS_OIMn/WLS_SOAn/WLS_BIPn, where n is a number that identifies the new Managed Server.
Note:
These steps assume that you are adding a new server to node n, where no Managed Server was running previously.
For the listen address of the Managed Server, assign the hostname or IP to use for the new Managed Server.
If you plan to use server migration for this server (which Oracle recommends), this should be the server VIP (floating IP). This VIP should be different from the one used for the existing Managed Server.
Create JMS servers for SOA, OIM (if applicable), UMS, BPM, JRFWSAsync, and PS6SOA on the new Managed Server.
In the Administration Console, create a new persistent store for the OIM/SOA/BIP JMS Server(s) and name it, for example, SOAJMSFileStore_n or BipJmsStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms
Create a new JMS Server for OIM/SOA/BIP, for example, SOAJMSServer_n or BipJmsServer_n. Use SOAJMSFileStore_n (or BipJmsStore_n) for this JMS Server. Target the new JMS Server to the new Managed Server(s).
Create a persistent store for the new UMSJMSServer(s), for example, UMSJMSFileStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/UMSJMSFileStore_n
Create a new JMS Server for UMS, for example, UMSJMSServer_n. Target it to the new Managed Server (WLS_SOAn).
Create a persistent store for the new BPMJMSServer(s), for example, BPMJMSFileStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/BPMJMSFileStore_n
Create a new JMS Server for BPM, for example, BPMJMSServer_n. Target it to the new Managed Server (WLS_SOAn).
Create a persistent store for the new BipJmsServer(s), for example, BipJmsStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/BipJmsStore_n
Create a new persistent store for the new JRFWSAsyncJMSServer, for example, JRFWSAsyncJMSFileStore_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/JRFWSAsyncJMSFileStore_n
Create a JMS Server for JRFWSAsync, for example, JRFWSAsyncJMSServer_n. Use JRFWSAsyncJMSFileStore_n for this JMS Server. Target JRFWSAsyncJMSServer_n to the new Managed Server (WLS_OIMn).
Note:
You can also assign SOAJMSFileStore_n as the store for the new JRFWSAsync JMS Servers. For clarity and isolation, the following steps use individual persistent stores.
Create a persistent store for the new PS6SOAJMSServer, for example, PS6SOAJMSFileStore_auto_n. Specify the store's path, a directory on shared storage:
ORACLE_BASE/admin/domain_name/cluster_name/jms/PS6SOAJMSFileStore_auto_n
Create a JMS Server for PS6SOA, for example, PS6SOAJMSServer_auto_n. Use PS6SOAJMSFileStore_auto_n for this JMS Server. Target PS6SOAJMSServer_auto_n to the new Managed Server (WLS_SOAn).
Note:
You can also assign SOAJMSFileStore_n as the store for the new PS6 JMS Servers. For clarity and isolation, the following steps use individual persistent stores.
Update SubDeployment targets for the SOA JMS Module to include the new SOA JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click SOAJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the SOAJMSServerXXXXXX subdeployment and add SOAJMSServer_n to it. Click Save.
Note:
A subdeployment module name is a random name in the form COMPONENTJMSServerXXXXXX, generated by the Configuration Wizard JMS configuration for the first two Managed Servers (WLS_COMPONENT1 and WLS_COMPONENT2).
Update SubDeployment targets for UMSJMSSystemResource to include the new UMS JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click UMSJMSSystemResource (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the UMSJMSServerXXXXXX subdeployment and add UMSJMSServer_n to it. Click Save.
Update SubDeployment targets for OIMJMSModule to include the new OIM JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click OIMJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the OIMJMSServerXXXXXX subdeployment and add OIMJMSServer_n to it. Click Save.
Update SubDeployment targets for the JRFWSAsyncJmsModule to include the new JRFWSAsync JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click JRFWSAsyncJmsModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. Click the JRFWSAsyncJMSServerXXXXXX subdeployment and add JRFWSAsyncJMSServer_n to this subdeployment. Click Save.
Update the SubDeployment targets for PS6SOAJMSModule to include the new PS6SOA JMS Server. Expand the Services node and the Messaging node. Choose JMS Modules from the Domain Structure window. Click PS6SOAJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. Click the PS6SOAJMSServerXXXXXX subdeployment. Add PS6SOAJMSServer_auto_n to this subdeployment. Click Save.
Update SubDeployment targets for the BPM JMS Module to include the new BPM JMS Server. Expand the Services node, then expand the Messaging node. Choose JMS Modules from the Domain Structure window. Click BPMJMSModule (hyperlink in the Names column). In the Settings page, click the SubDeployments tab. In the subdeployment module, click the BPMJMSServerXXXXXX subdeployment and add BPMJMSServer_n to it. Click Save.
Run the pack command on SOAHOST1 and/or BIPHOST1 to create a template pack. For example, for SOA:
cd ORACLE_COMMON_HOME/common/bin
./pack.sh -managed=true -domain=MW_HOME/user_projects/domains/soadomain -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
Run the following command on HOST1 to copy the template file created to HOSTn:
scp soadomaintemplateScale.jar oracle@SOAHOSTn:ORACLE_BASE/product/fmw/soa/common/bin
Run the unpack command on HOSTn to unpack the template in the Managed Server domain directory. For example, for SOA:
cd ORACLE_BASE/product/fmw/soa/common/bin
./unpack.sh -domain=ORACLE_BASE/product/fmw/user_projects/domains/soadomain -template=soadomaintemplateScale.jar
Configure Oracle Coherence for deploying composites.
Note:
This step is required for SOA Managed Servers only, not for OIM or BIP Managed Servers.
Note:
Change the localhost field only for the new server, replacing localhost with the listen address of the new server. For example:
-Dtangosol.coherence.localhost=SOAHOST1VHNn
Configure the transaction persistent store for the new server. This should be a shared storage location visible from other nodes.
From the Administration Console, select Server_name > Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.
Disable hostname verification for the new Managed Server; you must do this before starting and verifying the Managed Server. You can re-enable it after you configure server certificates for the communication between the Administration Server and Node Manager. If the source Managed Server (the server from which you cloned the new one) already had hostname verification disabled, these steps are not required. Hostname verification settings propagate to cloned servers.
To disable hostname verification:
Open the Administration Console.
Expand the Environment node in the Domain Structure window.
Click Servers.
Select WLS_SOAn in the Names column of the table. The Settings page for the server appears.
Click the SSL tab.
Click Advanced.
Set Hostname Verification to None.
Click Save.
Start Node Manager on the new node. To start the Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the hostname of the new node as a parameter as follows:
WL_HOME/server/bin/startNodeManager new_node_ip
Start and test the new Managed Server from the Administration Console.
Shut down the existing Managed Server in the cluster.
Ensure that the newly created Managed Server is up.
Access the application on the newly created Managed Server to verify that it works. A login page appears for OIM and BI Publisher. For SOA, an HTTP basic authentication dialog opens.
Configure Server Migration for the new Managed Server.
Note:
Because this new node uses an existing shared storage installation, it already has a Node Manager and an environment configured for server migration, including the netmask, interface, and wlsifconfig script superuser privileges. The floating IP for the new Managed Server is already available on the new node.
To configure server migration:
Log into the Administration Console.
In the left pane, expand Environment and select Servers.
Select the server (represented as a hyperlink) for which you want to configure migration. The Settings page for that server appears.
Click the Migration tab.
In the Available field, in the Migration Configuration section, select machines to which to enable migration and click the right arrow.
Note:
Specify the least-loaded machine as the new server's migration target. Complete the required capacity planning so that this node has the available resources to sustain an additional Managed Server.
Select the Automatic Server Migration Enabled option. This enables Node Manager to start a failed server on the target node automatically.
Click Save.
Restart the Administration Server, Managed Servers, and Node Manager.
Test server migration for this new server from the node where you added it:
Stop the Managed Server.
Run kill -9 pid on the PID of the Managed Server. Identify the PID using, for example, ps -ef | grep WLS_SOAn.
Watch the Node Manager Console for a message indicating that the floating IP has been disabled.
Wait for the Node Manager to try a second restart of the new Managed Server. Node Manager waits for a fence period of 30 seconds before restarting.
After Node Manager restarts the server, stop it again. Node Manager should log a message that the server will not restart again locally.
Edit the OHS configuration file to add the new Managed Server(s). See Section 5.4.19.1, "Configuring Oracle HTTP Server to Recognize New Managed Servers."
To complete the scale up or scale out, you must edit the oim.conf file to add the new Managed Servers, then restart the Oracle HTTP Servers.
Go to the directory ORACLE_INSTANCE/config/OHS/component/moduleconf.
Edit oim.conf to add the new Managed Server to the WebLogicCluster directive. You must take this step for each URL defined for OIM, SOA, or BI Publisher. Each product must have a separate <Location> section, and the ports must refer to the Managed Servers. For example:
<Location /oim>
    SetHandler weblogic-handler
    WebLogicCluster host1.example.com:14200,host2.example.com:14200
</Location>
Restart Oracle HTTP Server on WEBHOST1 and WEBHOST2:
WEBHOST1> opmnctl stopall
WEBHOST1> opmnctl startall
WEBHOST2> opmnctl stopall
WEBHOST2> opmnctl startall
Note:
If you are not using a shared storage system (Oracle recommends shared storage), copy oim.conf to the other OHS servers.
Note:
See "General Parameters for WebLogic Server Plug-Ins" in Oracle Fusion Middleware Using Web Server 1.1 Plug-Ins with Oracle WebLogic Server for additional parameters that can facilitate deployments.