Oracle9iAS Portal Configuration Guide Release 3.0.9 Part Number A90096-01
In a distributed Oracle Portal configuration, there is a centralized Login Server, and two or more Oracle Portal nodes which all access the same Login Server for Single Sign-On authentication. Furthermore, each Portal node is on a separate database instance.
There are many benefits to such an environment, including the ability to share portlet provider information across all nodes as well as increased scalability, availability, and system throughput.
Specific topics covered include the benefits of a distributed environment, node requirements, setting up a distributed Oracle Portal environment, and adding nodes to an existing environment.
Figure 4-1 illustrates a distributed Oracle Portal environment, showing the communication channels that exist between the nodes themselves, between each node and the Oracle HTTP Server powered by Apache, and between the nodes and the Login Server.
A distributed environment refers to several installations of Oracle Portal to create a multi-node environment. Each node is a complete Oracle Portal installation which resides in a separate database instance and is configured to operate in a distributed manner. Each node in the system may operate either independently of the other nodes or in conjunction with the other nodes.
The node containing the page that you are currently viewing is considered the local node. All other installations are considered remote nodes. However, a page in Oracle Portal can contain portlets that were created on either the local or remote nodes.
Node registration refers to associating nodes with each other so that they can share information. Node registration is done by completing a set of configuration steps, which are discussed later in this chapter.
A distributed or multi-node Oracle Portal environment provides the following benefits over a single node environment.
In a distributed Oracle Portal environment, provider information can be shared across nodes. During node registration, provider registration also occurs. When a provider is registered on a remote node, the portlets for that provider are populated in the node's portlet repository which allows you to build pages with portlets residing on remote nodes. In addition to sharing provider information, the distributed environment also lets you group providers accordingly.
When provider registration occurs, the provider registry information is replicated on the remote node. Only the provider registry information is replicated, not the actual provider implementation. The provider implementation package resides only on the host node of that provider.
A page may consist of portlets from any number of nodes that participate in the distributed environment. When such a page is rendered, the portlets are executed on the host node of each portlet provider (the node where the provider implementation package resides).
For example, consider the following scenario which can be implemented in scalable environments:
A page consists of portlet 1 which resides on node a and portlet 2 which resides on node b. When the page is rendered, portlet 1 is executed on node a and portlet 2 is executed on node b.
Figure 4-2 illustrates the display of portlets created on different nodes on a single page.
Such a scenario enables you to access multiple machines with increased performance and increased system throughput since the rendering of a page is distributed among several database instances. The execution of the portlets on the different databases is done in parallel.
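This parallel fan-out can be sketched roughly as follows; the node names and the render function are hypothetical stand-ins for illustration only, not part of Oracle Portal:

```shell
# Hypothetical stand-in for executing a portlet on its host node.
render_portlet() {
  echo "portlet $1 executed on node $2"
}

# Fan out: both portlets render in parallel on their host nodes,
# then the page is assembled from the results.
out="$(render_portlet 1 a & render_portlet 2 b & wait)"
echo "$out"
echo "page assembled"
```

The point of the sketch is only that the two executions are independent and concurrent; in the real product the parallel work happens across separate database instances.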
The distributed environment also provides high availability. If one node fails, the other nodes continue to operate, and users retain full access to everything except the portlets residing on the failed node.
You must meet the following node requirements for configuring a distributed Oracle Portal architecture:
The cookie domain for the Oracle Portal session cookie must be common to all nodes that participate in the distributed Oracle Portal environment. If the cookie domain is changed on one node, it must be changed on all other nodes. Otherwise, the Oracle Portal nodes in your environment fail to operate in a distributed manner.
Cookies are scoped to the host which created them, unless they are specified to be scoped to a larger domain. By default, the Oracle Portal session cookies are scoped to the root path of the server that generated them. For more information, see Section A.1.5, "Login Server Configuration Table".
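The effect of cookie scoping can be sketched as a small shell check; the host and domain names below are hypothetical examples, not values from any real installation:

```shell
# Would a browser send a cookie scoped to domain $2 when requesting
# host $1?  A cookie is sent only when the request host falls within
# the cookie's domain (a suffix match on the host name).
cookie_sent() {
  case "$1" in
    *"$2") echo yes ;;
    *)     echo no ;;
  esac
}

# Cookie scoped to the common domain: sent to every node in it.
cookie_sent portal-b.us.example.com .us.example.com          # yes
# Cookie scoped to the issuing host only: not sent to other nodes.
cookie_sent portal-b.us.example.com portal-a.us.example.com  # no
```

This is why a session cookie scoped to a single node's host name never reaches the other nodes, and why a common cookie domain is required.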
A portlet provider that resides on a node may be accessed by any other node in the network for which a communication path has been established. The Oracle HTTP Server powered by Apache is responsible for establishing the communication path and for displaying the portlets of each node.
Choose either of the following scenarios in your distributed environment:
When the browser communicates with Oracle Portal, the Oracle Portal session cookie is sent with each portlet execution request, and the cookie domain consists of the <host.domain:port>.

When using multiple Oracle HTTP Servers, each server has a different <host.domain:port>, and only one node has the same cookie domain as the Login Server. Thus, in this case, when the user tries to access a node by clicking a portlet's URL, the Oracle Portal session cookie is not sent by the browser.
To resolve this situation, a common cookie domain name is required. To set this up, run the ctxckupd.sql script on all nodes in your distributed environment.
In an Oracle Portal distributed environment, each Oracle HTTP Server powered by Apache must have a Database Access Descriptor (DAD) configuration for each of the portal nodes that participate in the distributed system. Also, the Session Cookie Name field in the DAD configuration must be the same across nodes.
All the nodes that participate in the distributed Oracle Portal environment must use the same Login Server. Otherwise, you may encounter a runtime error if a node that is registered to participate in the distributed Oracle Portal environment is not using the same Login Server as the other nodes. In this case, you would fail to log onto the Oracle Portal node via Single Sign-On (SSO) and not have access to the portlets on that node.
All nodes included in the distributed architecture must be symmetrically registered between themselves. For example, if the distributed Oracle Portal environment consists of three nodes (a, b, and c), make sure that the following registrations exist.
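Symmetric registration means every ordered pair of distinct nodes is registered, that is, n × (n - 1) registrations for n nodes. A quick sketch enumerating the required registrations for the three-node example:

```shell
# Enumerate the registrations needed for symmetric node registration
# among three example nodes (a, b, c).
nodes="a b c"
count=0
for src in $nodes; do
  for dst in $nodes; do
    if [ "$src" != "$dst" ]; then
      echo "register node $src on node $dst"
      count=$((count + 1))
    fi
  done
done
echo "$count registrations required"   # 6 for three nodes
```

For three nodes this yields six registrations: a on b, b on a, a on c, c on a, b on c, and c on b.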
If you are creating your own custom portlets using the Oracle Portal Development Kit (PDK), use absolute URLs (not relative URLs) for portlets destined to be run in a distributed Oracle Portal environment.
You must have the required privileges on the node and on the Login Server to perform the steps in this section.
This section describes the process for setting up a distributed Oracle Portal environment. For the purpose of the following example, the environment consists of two nodes, named node a and node b.
The steps include the following:
As stated earlier, a node is an Oracle Portal installation. To configure a distributed Oracle Portal environment, you must have at least two Oracle Portal installations, one for node a and the other for node b.
To create a node, install Oracle Portal as instructed in the Oracle9i Application Server Installation Guide for your particular operating system.
After creating the first node, additional nodes can be created without associated Login Server schemas. The -nosso parameter creates only an Oracle Portal schema. For more information, see Section B.2, "Manually Installing Oracle Portal with the winstall Script".
You must perform an installation of Oracle Portal for each node you want to have in your distributed environment.
To resolve the issue of a different <host.domain:port> configuration for each node, the same cookie domain must exist across nodes in a distributed Oracle Portal environment in order for the Oracle Portal session cookie to be sent successfully by the browser. The solution is to run the ctxckupd.sql script on all the nodes in your distributed Oracle Portal environment.
To create the same cookie domain on all nodes:
1. Stop the Oracle HTTP Server powered by Apache:
<ORACLE_HOME>/Apache/Apache/bin/apachectl stop
2. Using SQL*Plus, connect to the node's Oracle Portal schema. For example, for node a:
sqlplus nodea/nodea
3. Run the cookie update script:
@ctxckupd.sql
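Taken together, the per-node steps look roughly like the script below. The ORACLE_HOME value and the nodea/nodea schema credentials are examples only and must match your own installation; the location of ctxckupd.sql and the final server restart are assumptions, not stated in the steps above:

```shell
# Sketch: update the session cookie domain on one node.
# Run an equivalent of this on every node in the distributed environment.
ORACLE_HOME=/u01/app/oracle/product/portal   # example path only

# 1. Stop the Oracle HTTP Server powered by Apache.
$ORACLE_HOME/Apache/Apache/bin/apachectl stop

# 2. Connect to the node's Oracle Portal schema and run the script
#    (directory assumed to be the Portal admin/plsql directory).
cd $ORACLE_HOME/portal30/admin/plsql
sqlplus nodea/nodea @ctxckupd.sql

# 3. Restart the Oracle HTTP Server (assumed follow-up step).
$ORACLE_HOME/Apache/Apache/bin/apachectl start
```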
A distributed Oracle Portal environment requires that each node has a separate Database Access Descriptor (DAD) for each Oracle HTTP Server powered by Apache. Also, the Session Cookie Name field in the DAD configuration must be the same across nodes.
Upon installation, a DAD is created for each node. This step requires you to edit the DAD on each node and specify a common cookie name across nodes.
DADs are created from the Database Access Descriptor configuration page in Oracle Portal which you can access in the following ways:
In the Services portlet, click Listener Gateway Settings. By default, the Services portlet is located on the Oracle Portal home page's Administer tab.
In your browser, enter the following:
http://<hostname.domain>:<port>/pls/admin_/dadentries.htm
To provide the same cookie name for Oracle Portal nodes in your distributed environment:
In the Session Cookie Name field of the DAD for node a, enter a name. For example:
dist_portal_session_cookie
In the Session Cookie Name field of the DAD for node b, enter the same name:
dist_portal_session_cookie
In the Session Cookie Name field of the Login Server DAD, enter a cookie name that is different from the name given for node a and node b. For example:
dist_portal_sso_session_cookie
For the purpose of our example, we must make node a and node b share the same Login Server. Otherwise, any node that is not sharing the same Login Server as the other nodes in the distributed environment fails when performing any type of distributed functionality.
Go to the <ORACLE_HOME>/portal30/admin/plsql/ directory of the Oracle home in which Oracle Portal for node a is installed. Run the ssodatan script to associate a node with the Login Server.
For parameter descriptions, see Section B.4, "Configuring a New Oracle Portal Instance and Login Server with the ssodatan Script".
Table 4-1 Partner Application Configuration Example
The Edit Partner Application page displays.
Go to the <ORACLE_HOME>/portal30/admin/plsql/ directory of the Oracle home in which Oracle Portal for node a is installed. Run the ssodatax script to associate a node with a specific Login Server:
For parameter descriptions, see Section B.5, "Updating an Existing Portal Instance with the ssodatax Script".
ssodatax -i 1323 -t G06U7W36 -k a21255e6b139ca34 -w http://OraclePortalsvr.us.oracle.com:5000/pls/<node b>/ -l http://OraclePortalsvr.us.oracle.com:5000/pls/<node_A_SSO>/ -s node_B -v v1.0 -o node_A_SSO -c w816dev5
You have completed this step. Node a and node b are associated to the same Login Server.
In this step, you need to create a user on the Login Server with full administrator privileges on node b. This user must be the schema owner of node b.
Table 4-2 Login Server Create New User Example
Parameter | Sample Entry |
---|---|
User Name | <schema_of_node_b> |
Password | <schema_of_node_b> |
Confirm Password | <schema_of_node_b> |
A new user for the Login Server is created.
You must have the name of each node if you plan on registering the node.
Table 4-3 Node a to node b registration information
Field | Example Entry |
---|---|
Remote Node Name | Name of the remote node (node a) obtained in Section 4.4.6, "Step 6: Discover the Name of Each Node". |
Oracle Portal Database User | The schema owner for node a. |
Oracle Portal Database Password | The schema password for node a. |
Database Link Name | Oracle recommends that you leave this field blank. The default name is used when the database link is created on this page. Note that the default name is not displayed on this page. |
TNS Name | The TNS names alias (connect string) for the database on which node a is installed. Example: w816dev6 |
Remote Oracle Portal DAD | The DAD for node a created in Section 4.4.3, "Step 3: Edit Oracle Portal DADs". |
Remote Listener URL | The machine name on which the Oracle HTTP Server powered by Apache is installed. |
Remote Listener Port | The port on which the Oracle HTTP Server powered by Apache is running for that node. Example: 5000 |
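As a quick sanity check of the Table 4-3 entries, the remote listener URL, port, and DAD combine into the base URL the local node calls; the host name below is this chapter's example and the DAD name is an assumption:

```shell
# Assemble the remote node's base URL from the registration fields.
remote_listener=OraclePortalsvr.us.oracle.com  # example host from this chapter
remote_port=5000                               # example port
remote_dad=portal30                            # assumed DAD name
portal_url="http://$remote_listener:$remote_port/pls/$remote_dad/"
echo "$portal_url"

# To verify reachability from the local node (requires network access):
#   curl -s -o /dev/null -w '%{http_code}' "$portal_url"
```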
When this step is completed successfully, the Oracle Portal nodes are fully configured to operate in the distributed environment.
The providers on each node that is configured for a distributed Oracle Portal environment can be used by the other nodes. However, the Portlet Repository must be refreshed on each node before the providers and portlets created on remote nodes become visible.
To refresh the portlet repository:
The Portlet Repository Content Area is displayed.
Once this step is completed, the distributed portlets appear in the Portlet Repository. Providers that are not local (that is, remote providers) are easily identifiable by their names, which are prefixed with the name of the node to which they belong.
The distributed portlets are now displayed on the Add Portlets page and can be used when creating a page.
You can always create additional nodes to participate in the distributed Oracle Portal environment. For example, to register node c, complete the following steps in the order presented:
The registration of nodes must be symmetric. In addition, it is important to register the new node, in this case node c, on an existing node, either node a or node b, before registering an existing node on the new node. This is required to maintain the cookie encryption key used by the other nodes of the distributed environment.
Copyright © 2001 Oracle Corporation. All Rights Reserved.