In a high availability scenario, multiple Portal Server instances and Access Manager instances exist. An end user can access any of the Portal Server instances. When a session failover occurs, the user is automatically redirected to an available Portal Server instance. This chapter covers various high availability scenarios.
This chapter contains the following scenarios:
Installing Portal Server and Access Manager in a High Availability Scenario with Berkeley Database
Clustering in Portal Server 7.1 on BEA WebLogic 8.1 Service Pack 4 and Service Pack 5
Setting Up Portlet Session Failover on BEA WebLogic 8.1 Service Pack 5
This section explains how to install Portal Server and Access Manager in a high availability scenario using Berkeley database. Berkeley database is installed when you install Access Manager. In a high availability scenario, Berkeley database is used to store session variables of the user.
In the procedures in this section, you do the following:
Install Directory Server, Application Server, Access Manager, and Portal Server on Node 1 and Node 2.
Install a Portal Server instance on Node 2. (The portal ID for Node 1 and Node 2 is the same.)
Install a Load Balancer on Node 3.
These instructions require the following:
Directory Server on Node 1 is not in the multi master replication (MMR) mode. Only one instance of Directory Server exists.
Access Manager on Node 1 is installed in Legacy mode. The data can be stored only in Directory Server.
On Node 1, install Directory Server, Access Manager, and Application Server.
Verify whether Access Manager is installed properly by accessing amconsole.
http://node1.domain-name:8080/amconsole
Log in to amconsole on Node 1. In the Organization Aliases List, add the Fully Qualified Domain Name (FQDN) of Node 2.
Click Service Configuration and click Platform in the right panel.
In the Platform Server List, add the following.
http://node2.domain-name:8080|02
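Each Platform Server List entry combines the instance URL and a unique two-character server ID, separated by a vertical bar. A quick shell illustration of how such an entry splits (the entry value is taken from the step above):

```shell
# Split a Platform Server List entry of the form URL|serverID.
entry='http://node2.domain-name:8080|02'
echo "URL: ${entry%|*}"        # everything before the |
echo "server ID: ${entry#*|}"  # everything after the |
```

The server ID must be unique across the Platform Server List; Node 1's entry uses a different ID (for example, 01).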
On Node 2, run the Java ES installer to install Access Manager.
On the page that asks whether Directory Server is already provisioned with data, select Yes and proceed with installing Access Manager.
Ensure that the password encryption key on Node 2 is the same as the password encryption key on Node 1. The same key must be used for the LDAP internal password on both nodes.
On Node 2, start Application Server and verify whether Access Manager is installed properly by accessing amconsole.
http://node2.domain-name:8080/amconsole
In a text editor, open the AMConfig.properties file on Node 1 and Node 2.
The file is located in the AccessManager_base/SUNWam/config directory.
On Node 3, install the Load Balancer plugin that is provided with Application Server 8.2. Select Web Server as a component to install with the Load Balancer plugin.
In a text editor, open the loadbalancer.xml file on Node 3.
This file is located in the WebServer_base/SUNWwbsvr7/https-Node3/config directory.
Edit the file so that the Load Balancer balances the load between the two Access Manager instances.
Edit the listeners with the appropriate values.
A sample loadbalancer.xml file that balances the load across the Portal Server and Access Manager instances on Node 1 and Node 2 is as follows:
<!DOCTYPE loadbalancer PUBLIC "-//Sun Microsystems Inc.//DTD Sun ONE Application Server 7.1//EN" "sun-loadbalancer_1_1.dtd">
<loadbalancer>
  <cluster name="cluster1">
    <!--
      Configure the listeners as space separated URLs, as in
      listeners="http://host:port https://host:port"
      For example:
      <instance name="instance1" enabled="true"
          disable-timeout-in-minutes="60"
          listeners="http://localhost:80 https://localhost:443"/>
    -->
    <instance name="instance1" enabled="true" disable-timeout-in-minutes="60"
        listeners="http://node1.domain-name:8080"/>
    <instance name="instance2" enabled="true" disable-timeout-in-minutes="60"
        listeners="http://node2.domain-name:8080"/>
    <web-module context-root="/portal" enabled="true"
        disable-timeout-in-minutes="60" error-url="sun-http-lberror.html"/>
    <web-module context-root="/psconsole" enabled="true"
        disable-timeout-in-minutes="60" error-url="sun-http-lberror.html"/>
    <web-module context-root="/amserver" enabled="true"
        disable-timeout-in-minutes="60" error-url="sun-http-lberror.html"/>
    <web-module context-root="/amconsole" enabled="true"
        disable-timeout-in-minutes="60" error-url="sun-http-lberror.html"/>
    <web-module context-root="/ampassword" enabled="true"
        disable-timeout-in-minutes="60" error-url="sun-http-lberror.html"/>
    <web-module context-root="/amcommon" enabled="true"
        disable-timeout-in-minutes="60" error-url="sun-http-lberror.html"/>
    <web-module context-root="/" enabled="true"
        disable-timeout-in-minutes="60" error-url="sun-http-lberror.html"/>
    <health-checker url="/" interval-in-seconds="10" timeout-in-seconds="30"/>
  </cluster>
  <property name="reload-poll-interval-in-seconds" value="60"/>
  <property name="response-timeout-in-seconds" value="30"/>
  <property name="https-routing" value="true"/>
  <property name="require-monitor-data" value="false"/>
  <property name="active-healthcheck-enabled" value="false"/>
  <property name="number-healthcheck-retries" value="3"/>
  <property name="rewrite-location" value="true"/>
</loadbalancer>
Start the Web Server.
On Node 1 and Node 2, start Access Manager, Directory Server, and Application Server.
Edit the Application Server domain.xml file on Node 1 and Node 2 to add the locations of the jms.jar and imq.jar files.
<JAVA javahome="/usr/jdk/entsys-j2se"
    server-classpath="/usr/share/lib/imq.jar:/usr/share/lib/jms.jar:..."
When you create a Message Queue instance, do not use the default Message Queue instance that starts with Application Server, and do not use the guest user for Message Queue.
Start Message Queue on Node 1 and Node 2.
/bin/imqbrokerd -tty -name mqins -port 7777 &
where mqins is the Message Queue instance name.
Add a user to this message queue.
imqusermgr add -u amsvrusr -p secret12 -i mqins -g admin
where amsvrusr is the name of the new user that is used instead of guest.
Inactivate the guest user.
imqusermgr update -u guest -i mqins -a false
Create an encrypted file for the message queue on Node 1 and Node 2.
./amsfopasswd -f /AccessManager_base/SUNWam/.password -e password-file
Edit the amsfo.conf file on both the nodes.
Sample entries in the amsfo.conf file are as follows:
AM_HOME_DIR=/opt/SUNWam
AM_SFO_RESTART=true
CLUSTER_LIST=node1.domain-name:7777,node2.domain-name:7777
DATABASE_DIR="/tmp/amsession/sessiondb"
DELETE_DATABASE=true
LOG_DIR="/tmp/amsession/logs"
START_BROKER=true
BROKER_INSTANCE_NAME=amsfo
BROKER_PORT=7777
BROKER_VM_ARGS="-Xms256m -Xmx512m"
USER_NAME=amsvrusr
PASSWORDFILE=$AM_HOME_DIR/.password
AMSESSIONDB_ARGS=""
lbServerPort=8080
lbServerProtocol=http
lbServerHost=node3.domain-name
SiteID=10
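As a sanity check before running amsfoconfig, you can confirm that CLUSTER_LIST parses into the expected broker host:port pairs. This sketch only parses the list (the value is copied from the sample above); testing actual broker reachability is left to your environment:

```shell
# Parse CLUSTER_LIST (comma-separated host:port pairs) and print each broker.
CLUSTER_LIST=node1.domain-name:7777,node2.domain-name:7777
for entry in $(echo "$CLUSTER_LIST" | tr ',' ' '); do
  echo "broker host: ${entry%:*}  port: ${entry##*:}"
done
```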
Configure amsfo.conf on Node 1.
AccessManager_base/SUNWam/bin/amsfoconfig
After running the script, the following output is displayed:
Session Failover Configuration Setup script.
=========================================================
=========================================================
Checking if the required files are present...
=========================================================
Running with the following Settings.
-------------------------------------------------
Environment file: /etc/opt/SUNWam/config/amProfile.conf
Resource file: /opt/SUNWam/lib/amsfo.conf
-------------------------------------------------
Using /opt/SUNWam/bin/amadmin
Validating configuration information.
Done...
Please enter the LDAP Admin password: (nothing will be echoed): password1
Verify: password1
Please enter the JMQ Broker User password: (nothing will be echoed): password2
Verify: password2
Retrieving Platform Server list...
Validating server entries.
Done...
Retrieving Site list...
Validating site entries.
Done...
Validating host: http://amhost1.example.com:7001|02
Validating host: http://amhost2.example.com:7001|01
Done...
Creating Platform Server XML File...
Platform Server XML File created successfully.
Creating Session Configuration XML File...
Session Configuration XML File created successfully.
Creating Organization Alias XML File...
Organization Alias XML File created successfully.
Loading Session Configuration schema File...
Session Configuration schema loaded successfully.
Loading Platform Server List File...
Platform Server List server entries loaded successfully.
Loading Organization Alias List File...
Organization Alias List loaded successfully.
Please refer to the log file /var/tmp/amsfoconfig.log for additional information.
###############################################################
Session Failover Setup Script.
Execution end time 10/05/05 13:34:44
###############################################################
Edit the amsessiondb script so that the following variables point to the correct paths and directories:
JAVA_HOME=/usr/jdk/entsys-j2se/
IMQ_JAR_PATH=/usr/share/lib
JMS_JAR_PATH=/usr/share/lib
BDB_JAR_PATH=/usr/share/db.jar
BDB_SO_PATH=/usr/lib
AM_HOME=/opt/SUNWam
Use the following commands to start and stop the Message Queue instance running on port 7777.
AccessManager_base/SUNWam/bin/amsfo start
AccessManager_base/SUNWam/bin/amsfo stop
Restart Access Manager, Directory Server, Application Server, and Web Server on all the nodes.
Log in to amconsole through the Load Balancer.
http://node3.domain-name:80/amconsole
Stop the Application Server on Node 1.
The session is handled by Access Manager on Node 2.
Invoke the Java ES installer and install Portal Server on Node 1 in the Configure Now mode.
Access Portal Server to verify the installation.
http://node1.domain-name:8080/portal
Create a Portal Server instance on Node 2.
Invoke the Java ES installer, and install Portal Server in the Configure Now mode.
Copy example2.xml to a temporary directory to make a backup of the original file.
cp PortalServer_base/SUNWportal/samples/psconfig/example2.xml /tmp-directory
Edit the original example2.xml file to replace the tokens with the machine information for Node 1.
Configure Portal Server using the example2.xml file as the configuration XML file.
PortalServer_base/SUNWportal/bin/psconfig --config example2.xml
Copy the Webcontainer.properties template file to the Portal Server installation bin directory.
cp PortalServer_base/SUNWportal/template/Webcontainer.properties PortalServer_base/SUNWportal/bin
Modify the Webcontainer.properties file as per your requirements.
vi PortalServer_base/SUNWportal/bin/Webcontainer.properties
See Creating Multi-Portal for more information about changing the Webcontainer.properties file.
Create a Portal Server instance.
PortalServer_base/SUNWportal/bin/psadmin create-instance -u amadmin -f ps_password -p portal1 -w Webcontainer.properties
Restart Directory Server, Application Server, Access Manager, and Portal Server on Node 1 and Node 2.
Restart Web Server on Node 3.
Access the portal through the Load Balancer.
You can verify the node to which the portal is connected by tracking the access logs of the container. After you log in to the portal, kill the Application Server on the node to which the portal is connected. Then click any link on the desktop; the session is maintained and you are automatically connected to Node 2.
All nodes are in the same subnet.
Servers are installed with the latest OS patch level.
Name resolution of all servers is correct on each server, either through the hosts file or DNS.
The fully qualified hostname is the first entry after the IP address in the /etc/hosts file. The other machine details are also entered in hosts file.
Any previously installed Java Enterprise System components are removed from the system before starting the installation procedure.
The installation requires shared memory to be configured.
Check the physical memory of the nodes.
prtconf | grep Mem
Calculate the value of the shminfo_shmmax parameter.
shminfo_shmmax = ( Server's Physical Memory in MB / 256 MB ) * 10000000
For example, if the physical memory is 512 MB, the value of the shminfo_shmmax parameter is 20000000.
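The calculation can be expressed directly in shell arithmetic (512 MB is the example value from above; integer division matches the formula's intent):

```shell
# shminfo_shmmax = (physical memory in MB / 256 MB) * 10000000
PHYSMEM_MB=512
SHMMAX=$(( PHYSMEM_MB / 256 * 10000000 ))
echo "$SHMMAX"   # prints 20000000
```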
Add the following parameter to the /etc/system configuration file.
set shmsys:shminfo_shmmax=0x40000000
set shmsys:shminfo_shmseg=20
set semsys:seminfo_semmni=16
set semsys:seminfo_semmns=128
set semsys:seminfo_semmnu=1000
Reboot the server.
Set up secure shell (ssh).
The secure shell is used by the HADB component of the Sun Application Server Enterprise Edition to exchange file information between the server nodes in the application server cluster. Additionally, the HADB utility commands can operate on multiple server nodes at the same time to keep them in sync.
Root ssh login is required between servers without the need for password authentication. This is achieved by enabling non-console root login and configuring the ssh certificates.
Check and implement the following steps on each application server cluster node to ensure successful installation, configuration, and operation of the software.
Ensure that the hostname has a fully qualified domain name in the /etc/hosts file as the first entry after the IP address.
For example, 10.10.10.2 as1.example.com as1 loghost
Check that hostname lookup and reverse lookup is functioning correctly.
Check the contents of the /etc/nsswitch.conf file hosts entry.
cat /etc/nsswitch.conf | grep hosts
Allow non-console root login by commenting out the CONSOLE=/dev/console entry in the /etc/default/login file.
cat /etc/default/login | grep "CONSOLE="
If you need to enable remote root ftp, comment out the root entry in the /etc/ftpd/ftpusers file.
cat /etc/ftpd/ftpusers | grep root
Permit ssh root login. Set PermitRootLogin to yes in the /etc/ssh/sshd_config file, and restart the ssh daemon process.
cat /etc/ssh/sshd_config | grep PermitRootLogin
/etc/init.d/sshd stop
/etc/init.d/sshd start
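The PermitRootLogin change can also be scripted. This sketch operates on a throwaway copy under /tmp rather than the live /etc/ssh/sshd_config, so the path and initial contents are illustrative only:

```shell
# Create a throwaway copy that mimics the relevant sshd_config line.
printf 'Port 22\nPermitRootLogin no\n' > /tmp/sshd_config.copy
# Flip PermitRootLogin from no to yes, writing the result to a new file.
sed 's/^PermitRootLogin no/PermitRootLogin yes/' /tmp/sshd_config.copy \
  > /tmp/sshd_config.copy.new
grep PermitRootLogin /tmp/sshd_config.copy.new   # prints: PermitRootLogin yes
```

On the real file, make the same edit in place and then restart the ssh daemon.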
Generate the ssh public and private key pair.
ssh-keygen -t dsa
When running the ssh-keygen utility, press Return without entering a passphrase. Otherwise, the Application Server is prompted for the passphrase whenever it uses ssh, which breaks the automated scripts.
Generate the keys on all Application Server nodes before proceeding to the next step where the public key values are combined into the authorized_keys file.
Copy all the public key values to each server's authorized_keys file. Create the authorized_keys file on one server and then copy that to the other servers.
root@as1# cd ~/.ssh
root@as1# cp id_dsa.pub authorized_keys
root@as1# scp as2.example.com:/.ssh/id_dsa.pub authorized_keys.as2
root@as1# cat authorized_keys.as2 >> authorized_keys
root@as1# rm authorized_keys.as2
root@as1# scp authorized_keys as2.example.com:/.ssh/authorized_keys
Verify that ssh functions correctly between the Application Server nodes without the need for a password to be entered.
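The authorized_keys merge above can be rehearsed locally. The key material and /tmp paths below are placeholders; on the real nodes the files live in root's ~/.ssh directory and contain actual DSA public keys:

```shell
# Stand-in public keys for two nodes (placeholder key material).
mkdir -p /tmp/ssh-key-demo
echo 'ssh-dss AAAAB3...key1 root@as1' > /tmp/ssh-key-demo/id_dsa.pub.as1
echo 'ssh-dss AAAAB3...key2 root@as2' > /tmp/ssh-key-demo/id_dsa.pub.as2
# Combine both public keys into a single authorized_keys file.
cat /tmp/ssh-key-demo/id_dsa.pub.as1 /tmp/ssh-key-demo/id_dsa.pub.as2 \
  > /tmp/ssh-key-demo/authorized_keys
grep -c '^ssh-dss' /tmp/ssh-key-demo/authorized_keys   # prints: 2
```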
Create node agents on the servers on Host A, Host B, and Host C.
Create the cluster.
Create a server instance for each server at the DAS.
Start the HADB management agent (ma) on all the nodes.
cd /opt/SUNWhadb/4/bin; ./ma &
Create the HA cluster on Host A.
asadmin configure-ha-cluster --user admin --devicesize 256 --hosts HostB,HostC pscluster
This section explains how to install Portal Server 7.1 in an Application Server cluster environment. In a cluster environment, a primary node exists where Portal Server is installed. A cluster is created in the primary node. One or more secondary nodes exist where instances of Portal Server are created. The user accesses the portal through a load balancer. In such an environment, if any of the servers installed on any node goes down, the load balancer automatically redirects the user to the other available Portal Server instances.
If Portal Server is installed on a clustered environment, any deployment or undeployment of container specific files should be done on the primary instance, where DAS is installed.
On Node 1, install Access Manager, Web Server, and Directory Server using the Java ES installer. Directory Server is in the MMR (Multi Master Replication) mode. Access Manager and Directory Server must be in the HA Configuration mode.
Verify whether Access Manager is installed properly.
http://node1.domain-name:80/amconsole
Install Portal Server on Node 2.
Install Application Server and Access Manager SDK using the Java ES installer in the Configure Now mode.
Select the Application Server components, such as Domain Application Server (DAS) and Command-Line Interface.
Start Domain Application Server.
ApplicationServer_base/SUNWappserver/sbin/asadmin start-domain --user admin domain1
Create a node agent, for example ps1.
ApplicationServer_base/SUNWappserver/sbin/asadmin create-node-agent --user admin ps1
Start the node agent.
ApplicationServer_base/SUNWappserver/sbin/asadmin start-node-agent --user admin ps1
Create a cluster, for example pscluster.
ApplicationServer_base/SUNWappserver/sbin/asadmin create-cluster --user admin pscluster
Creating a cluster creates a configuration, namely pscluster-config.
Create an Application Server instance, for example server1-ps1.
ApplicationServer_base/SUNWappserver/sbin/asadmin create-instance --user admin --cluster pscluster --nodeagent ps1 --systemproperties HTTP_LISTENER_PORT=80 server1-ps1
Start the Application Server instance.
ApplicationServer_base/SUNWappserver/sbin/asadmin start-instance --user admin server1-ps1
Using the Java ES installer, install Portal Server in the Configure Later mode.
Create a Portal Server instance by modifying the example14.xml file with the installation parameters.
Also, set the WebcontainerInstanceName attribute to the Application Server Cluster, pscluster. Set the host name as the primary host, node1.domain-name.
PortalServer_base/SUNWportal/bin/psconfig --config example14.xml
Delete the com.sun.portal.instance.id option from the pscluster configuration, and add it to the server1-ps1 instance.
ApplicationServer_base/SUNWappserver/sbin/asadmin delete-jvm-options --user admin --target pscluster "-Dcom.sun.portal.instance.id=ps1-80"
ApplicationServer_base/SUNWappserver/sbin/asadmin create-system-properties --user admin --target server1-ps1 com.sun.portal.instance.id=ps1-80
ps1-80 is the name of the instance specified in the configuration file.
Install the Application Server node agent and command-line interface, and the Access Manager SDK, in the Configure Now mode using the Java ES installer.
Configure the Application Server's node agent to use node1.domain-name as the Domain Application Server.
The Java ES installer creates a node agent, node3.
Install Portal Server in the Configure Later mode using the Java ES installer.
Configure Access Manager SDK to use Access Manager Directory Server installed on node1.domain-name.
Start the node agent.
ApplicationServer_base/SUNWappserver/sbin/asadmin start-node-agent --user admin node3
Create an Application Server instance, ps2-80.
ApplicationServer_base/SUNWappserver/sbin/asadmin create-instance --user admin --cluster pscluster --nodeagent node3 --systemproperties HTTP_LISTENER_PORT=80 ps2-80 --host node2
Start the Application Server instance.
Delete the ps_util.jar entry from the Application Server instance's class path.
When you create a Portal Server instance, the create-instance subcommand checks the class path for ps_util.jar to verify that the Application Server instance has not already been configured for a Portal Server instance. For instances that are part of an Application Server cluster, the configuration and applications are deployed automatically, so ps_util.jar is already on the class path and the create-instance subcommand fails unless you remove it.
Configure the common agent container by modifying the example2.xml file with the deployment values.
PortalServer_base/SUNWportal/bin/psconfig --config example2.xml
Create a Webcontainer.properties.ps2 file by modifying the Webcontainer.properties.SJSAS81 file with the newly created instance's parameters, such as host, port, scheme, and file paths.
The Webcontainer.properties.SJSAS81 file is located at the PortalServer_base/SUNWportal/template directory.
Create a Portal Server instance in the newly created Application Server instance.
PortalServer_base/SUNWportal/bin/psadmin create-instance -u amadmin -f password --portal myPortal --instance ps2-80 --webconfig Webcontainer.properties.ps2
Add the com.sun.portal.instance.id property to the ps2-80 instance.
ApplicationServer_base/SUNWappserver/sbin/asadmin create-system-properties --user admin --target ps2-80 com.sun.portal.instance.id=ps2-80 --host node1
ps2-80 is the name of the instance specified in the configuration file.
When you configure Portal Server on an Application Server cluster, the default portlets are not displayed on the WSRP tab of the desktop. To display portlets on the WSRP tab, do the following.
Create a producer with the portlets that you want to add to the WSRP tab.
Configure a consumer with the producer.
Add the consumer to the WSRPSamplesTabPanel container.
For more information on how to create and configure a producer, see Technical Note: Web Services for Remote Portlets for Sun Java System Portal Server 7.1.
You can also do the following to display the default portlets on the WSRP tab:
Create a producer with Bookmark, JSP Remote, Notepad, and Weather portlets.
Configure a consumer with the producer.
Copy the producer entity ID after configuring the producer.
Go to Manage Channels and Container.
Under Developer Sample, select the WSRPSamplesTabPanel container.
This container displays Bookmark, JSP Remote, Notepad, and Weather portlets.
Select the portlet and paste the producer entity ID into the Producer Entity ID field.
The portal instances should be clustered and HADB should be installed.
Undeploy portal.war from the Application Server 8.2 DAS (administration server), and add the <distributable/> tag to the WEB-INF/web.xml of portal.war. Refer to the following sample web.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
  <display-name>Sun Java Enterprise System Portal Server Desktop Web Application</display-name>
  <description>Provides a user customizable interface for content that is aggregated from Portlet applications</description>
  <distributable/>
  <context-param>
    <param-name>desktop.configContextClassName</param-name>
    <param-value>com.sun.portal.desktop.context.PropertiesConfigContext</param-value>
Add the same <distributable/> tag to the web.xml of the portlet.war file that is used for storing session variables.
Ensure that the Availability option is selected for both portal.war and the portlet.war files.
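Inserting the <distributable/> tag can be scripted once web.xml is extracted from the .war file. This sketch works on a minimal stand-in file under /tmp (the path and contents are illustrative; in practice you extract web.xml with jar xf, edit it, and repackage the archive):

```shell
# Minimal stand-in for an extracted web.xml.
printf '<web-app>\n  <display-name>demo</display-name>\n</web-app>\n' \
  > /tmp/web.xml.demo
# Insert <distributable/> immediately after the opening <web-app> tag.
sed 's|<web-app>|<web-app><distributable/>|' /tmp/web.xml.demo \
  > /tmp/web.xml.demo.new
grep -c '<distributable/>' /tmp/web.xml.demo.new   # prints: 1
```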
This section explains how to cluster Portal Server 7.1 on BEA WebLogic 8.1 Service Pack 4 and Service Pack 5.
In this section, you do the following:
Install Directory Server and Web Server on Node 1. (psconsole of Sun Java System Portal Server does not support WebSphere and WebLogic.)
Install BEA WebLogic 8.1 Service Pack 4 or WebLogic 8.1 Service Pack 5 using the Sun Java Development Kit.
Create a node agent on Node 1 on port 5555.
Create three managed servers on the following ports of the node agent: port 7011, port 7022, and port 55555. The first two managed servers are used for creating the Portal Server instances. The third managed server is used as a proxy server for load balancing.
Create a BEA WebLogic managed server on Node 2.
Create a node agent and managed server on Node 2 using the Administrator Server on Node 1.
Install Access Manager on Administrator Server on Node 1.
Install Portal Server on port 7011 of Node 1, and create instances of Portal Server on the managed servers created on Node 1 and Node 2.
Configure a cluster and a load balancer.
Install a Gateway on Node 3.
Ensure that the node on which you perform the task has a BEA license for clustering to work.
Ensure that all clustered nodes are in the same subnet.
Run the BEA WebLogic 8.1 installer to install BEA WebLogic 8.1.
Run the script quickStart.sh after a successful installation.
Bea-base/weblogic81/common/bin/quickStart.sh
Click Create a New Domain Configuration or Extend an Existing One.
The installation panel appears.
Enter admin and the admin password.
Leave the other defaults as they are.
Click Finish to complete the installation.
To start WebLogic, run the startWebLogic.sh script.
Bea-base/user_projects/domains/mydomain/startWebLogic.sh
Access the Administrator server.
http://node1.domain-name:7001/console
By default, the administrator server of WebLogic is created on port 7001.
This procedure creates a node agent on port 5555, which is the default port for node agents. A node agent is required to create managed servers.
Log in to the administrator server.
http://node1.domain-name:7001/console
Click Machines and click Configure a New Machine.
Enter a name for the node agent, and click Create. For example, node_1.
By default, the node agent runs on port 5555. If you require a different port, enter the port number in the Node Manager tab.
On Node 1, start the node agent.
Bea-base/weblogic81/server/bin/startNodeManager.sh node1-ip-address 5555
This procedure creates managed servers on port 7011, port 7022, and port 55555. The first two managed servers are used for creating Portal Server instances. The last one is used as a proxy server to set up the load balancer.
Log in to the administrator server.
http://node1.domain-name:7001/console
Click Servers, and click Configure a New Server.
Enter a name for the managed server. For example, node1_7011.
Select node_1 from the Machine list box.
Enter the Listen Port as 7011, and click Create.
To start the managed server, click the Control tab and click Start the Server.
(Optional) You can also start the managed server using the command-line interface.
bea_base/user_projects/domains/mydomain/startManagedWebLogic.sh node1_7011
Use steps 2 through 5 to create a managed server named node1_7022 with a listen port of 7022.
Start the second managed server.
Use steps 2 through 5 to create a managed server named proxy with a listen port of 55555.
This managed server is used for the proxy settings.
Start the third managed server.
This procedure creates a node agent on port 5555 on Node 2 using the administrator console on Node 1.
Install BEA WebLogic on Node 2.
Log in to the administrator server of Node 1.
http://node1.domain-name:7001/console
Click Machines and click Configure a New Machine.
Enter node_2 for the node agent name, and click Create.
In the Listen Address field, enter the IP address of Node 2.
On Node 2, start the node agent.
Bea-base/weblogic81/server/bin/startNodeManager.sh node2-ip-address 5555
In the nodemanager.Hosts file of Node 2, add the IP address of Node 1.
Bea-base/weblogic81/common/nodemanager.Hosts
You need to do this to ensure that Node 2's node agent can accept commands from the administrator server on Node 1.
This procedure creates a managed server on Node 2 on port 7033 using the administrator console on Node 1.
Log in to the administration server.
http://node1.domain-name:7001/console
Click Servers, and click Configure a New Server.
Enter a name for the managed server. For example, node2_7033.
Select node_2 from the Machine list box.
Enter the Listen Address as the IP address of Node 2.
Enter the Listen Port as 7033, and click Create.
Click the Control tab and click Start the Server.
(Optional) You can also start the managed server using the command-line interface on Node 2.
bea_base/user_projects/domains/mydomain/startManagedWebLogic.sh node2_7033
This procedure installs Access Manager on the Administrator server (BEA WebLogic 8.1 Administrator Server) on Node 1. By default, the Administrator Server port is 7001.
Install Directory Server and Web Server using the Java ES installer.
Install Access Manager in the Configure Later mode.
Change the values in the amsamplesilent file as appropriate, and then run it.
AccessManager_base/SUNWam/bin/amconfig -s amsamplesilent
Access amconsole and verify whether Access Manager is running.
http://node1.domain-name:7001/amconsole
This procedure installs Portal Server 7.1 on port 7011 of the managed server on Node 1.
Install Portal Server 7.1 using the Java ES installer.
In the installer, select the managed server.
Access the portal.
http://node1.domain-name:7011/portal/dt
The Portal Welcome page appears.
This procedure creates an instance of Portal Server on port 7022 of the managed server on Node 1.
Modify the webcontainer.properties file in BEA WebLogic to change the values corresponding to the managed server node1_7022.
Create an instance of Portal Server.
PortalServer_base/SUNWportal/bin/psadmin create-instance -u amadmin -f password -p portal1 -w webcontainer.properties.BEAWL8
Restart the managed server named node1_7022.
Access the portal.
http://node1.domain-name:7022/portal/dt
The Portal Welcome page appears.
This procedure creates an instance of Portal Server on port 7033 of the managed server on Node 2.
Install Portal Server 7.1 and Access Manager SDK in the Configure Later mode using the installer.
Modify the amsamplesilent script with appropriate values. Ensure that DEPLOY_LEVEL is set as 4.
The amsamplesilent script is located in the AccessManager_base/SUNWam/bin directory.
Run the amsamplesilent script.
AccessManager_base/SUNWam/bin/amconfig -s amsamplesilent
Copy the example15.xml file to a temporary directory and modify it with appropriate values.
The example15.xml file is located in the PortalServer_base/SUNWportal/samples/psconfig directory.
Configure Portal Server using the example15.xml as the configuration XML file.
PortalServer_base/SUNWportal/bin/psconfig --config example15.xml
Modify the webcontainer.properties file in BEA WebLogic to change the values corresponding to the managed server node2_7033.
Create a Portal Server instance.
PortalServer_base/SUNWportal/bin/psadmin create-instance -u amadmin -f password -p portal1 -w webcontainer.properties.BEAWL8
Restart the managed server node2_7033.
Access the portal.
http://node1.domain-name:7033/portal/dt
The Portal Welcome page appears.
This procedure configures a cluster for the three managed servers that were created: node1_7011, node1_7022, and node2_7033.
Log in to the administrator server.
http://node1.domain-name:7001/console
Click Cluster and click Configure a New Cluster.
Enter a cluster name and cluster address.
Enter the cluster name as new_cluster and enter the cluster address as node1.domain-name.
Select round robin as the Default Load Algorithm, and click Create.
Click the Server tab, select node1_7011, node1_7022, and node2_7033, and drag them into the list box.
Stop and restart all of the servers.
Each time you change the cluster configuration you must stop and restart all of the servers in the cluster.
Select the Monitoring tab.
After the cluster is configured successfully, the following is displayed.
Number of servers configured for this cluster: 2
Number of servers currently participating in this cluster: 2
This procedure creates a proxy server .war file for load balancing.
Telnet to Node 1.
Create files web.xml and weblogic.xml in the new proxy/WEB-INF directory.
mkdir -p proxy/WEB-INF
cd proxy/WEB-INF
Add the following contents to the web.xml file.
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
  <servlet>
    <servlet-name>HttpClusterServlet</servlet-name>
    <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
    <init-param>
      <param-name>WebLogicCluster</param-name>
      <param-value>HOST.DOMAIN:7011|HOST.DOMAIN:7022</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>/</url-pattern>
  </servlet-mapping>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>*.jsp</url-pattern>
  </servlet-mapping>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>*.htm</url-pattern>
  </servlet-mapping>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>*.html</url-pattern>
  </servlet-mapping>
</web-app>
Add the following contents to the weblogic.xml file.
<!DOCTYPE weblogic-web-app PUBLIC "-//BEA Systems, Inc.//DTD Web Application 8.1//EN" "http://www.bea.com/servers/wls810/dtd/weblogic810-web-jar.dtd">
<weblogic-web-app>
  <context-root>/</context-root>
</weblogic-web-app>
Change directories to the proxy directory.
Run the following command:
/usr/jdk/jdk1.5.0_01/bin/jar cvf proxy.war WEB-INF
Access the administrator console.
http://node1.domain-name:7001/console
Click Server > Configure a New Server.
Enter the new server name as proxy and the listen port as 55555, and click Apply.
Click Deployment in the left pane, click Web Applications Module, and click Deploy New.
Select proxy.war on node1.domain-name and deploy it on the independent server named proxy.
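As an alternative to the administrator console, the deployment can be sketched with the weblogic.Deployer tool; the user name and password below are assumptions for illustration.

```shell
# Sketch: deploy proxy.war to the independent server named proxy.
# The WebLogic user name and password are placeholders.
java weblogic.Deployer -adminurl http://node1.domain-name:7001 \
  -username weblogic -password weblogic-password \
  -deploy -name proxy -source proxy.war -targets proxy
```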
This procedure redeploys all of the portal .war files on the cluster.
Log in to the administrator server.
Click Web Application Modules.
For each Portal Web Application module, click the Target tab, select All Servers in the Cluster option, and click Apply.
Click Services > JDBC > Connection Pools.
For each connection pool related to Portal Server, click the Target tab, select All Servers in the Cluster option, and click Apply.
Click Services > JDBC > Data Source.
For each data source related to Portal Server, click the Target tab, select All Servers in the Cluster option, and click Apply.
Restart all servers.
Install Gateway on Node 3.
Enter the load balancer URL, http://node1.domain-name:55555/portal, as the portal URL while installing Gateway.
Log in to the Gateway as developer/developer.
If the proxy is working, the URL changes to the following: https://GWhost:443/http://node1.domain-name:55555/portal/dt
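A quick way to confirm that the proxy is forwarding requests is to probe the proxy port directly from the command line; the host name below is the one used in this scenario.

```shell
# Sketch: an HTTP 200 (or a redirect toward the portal desktop) indicates
# that HttpClusterServlet is forwarding requests to the managed servers.
curl -s -o /dev/null -w "%{http_code}\n" http://node1.domain-name:55555/portal/dt
```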
In this scenario, more than two Portal Server instances are clustered. The user accesses a portlet from a Portal Server instance. The user can set session variables in the portlet. In case of a failover of a Portal Server instance that the user accesses, the user automatically gets directed to the other Portal Server instance, and the session variables are retained.
This section explains how to set up portlet session failover on BEA WebLogic 8.1 Service Pack 5.
Two or more portal instances should already be clustered. To cluster Portal Server instances, refer to Clustering in Portal Server 7.1 on BEA WebLogic 8.1 Service Pack 4 and Service Pack 5.
Undeploy the portal.war from the managed servers using the WebLogic administrator console.
Add the following into the weblogic.xml file of portal.war:
<session-descriptor>
  <session-param>
    <param-name>PersistentStoreType</param-name>
    <param-value>replicated</param-value>
  </session-param>
</session-descriptor>
Both portal.war and the portlet that is used for session failover must have the session-descriptor set in their weblogic.xml files.
After the modification, the content of the weblogic.xml file of portal.war is as follows:
<!DOCTYPE weblogic-web-app PUBLIC "-//BEA Systems, Inc.//DTD Web Application 8.1//EN" "http://www.bea.com/servers/wls810/dtd/weblogic810-web-jar.dtd">
<weblogic-web-app>
  <reference-descriptor>
    <resource-description>
      <res-ref-name>jdbc/communitymc</res-ref-name>
      <jndi-name>jdbc/communitymc</jndi-name>
    </resource-description>
  </reference-descriptor>
  <session-descriptor>
    <session-param>
      <param-name>PersistentStoreType</param-name>
      <param-value>replicated</param-value>
    </session-param>
  </session-descriptor>
  <virtual-directory-mapping>
    <local-path>/opt/SUNWam/public_html</local-path>
    <url-pattern>/online_help/*</url-pattern>
  </virtual-directory-mapping>
</weblogic-web-app>
Deploy the portal.war onto the Portal Server instance using the BEA WebLogic administrator console.
Add the same session-descriptor into the weblogic.xml file of the portlet to be used for session failover.
If the weblogic.xml file does not exist, create it under WEB-INF.
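If you need to create the file from scratch, a minimal weblogic.xml for the portlet could look like the following sketch (the DOCTYPE matches the one used for portal.war in this procedure):

```xml
<!DOCTYPE weblogic-web-app PUBLIC "-//BEA Systems, Inc.//DTD Web Application 8.1//EN" "http://www.bea.com/servers/wls810/dtd/weblogic810-web-jar.dtd">
<weblogic-web-app>
  <session-descriptor>
    <session-param>
      <param-name>PersistentStoreType</param-name>
      <param-value>replicated</param-value>
    </session-param>
  </session-descriptor>
</weblogic-web-app>
```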
Deploy the portlet through the psconsole and add the portlet to the desktop.
Set session variables in the portlet, and then bring down the Portal Server instance that you used to access the desktop.
The session variables will be retained after you start accessing another Portal Server instance.
Java DB is the default database for Portal Server. This section explains how to replace Java DB with the Oracle database to provide high availability and scalability. This procedure is divided into the following tasks:
Setting Up General Requirements
Preparing Oracle
Prepare the Database.
Establish appropriate privileges for the user accounts.
Configure Community Membership and Configuration.
Load the schema onto the database.
Edit the jdbc/communitymc JDBC resource to point to the new database.
On some of the web containers, you might need to edit the corresponding JDBC connection pool instead of the JDBC resource.
Configure and install portlet applications.
Locate the portlet applications.
portal-data-dir/portals/portal-id/portletapps
Configure portlet applications to use the new database by editing tokens_xxx.properties.
Using the administration console or command-line tool provided by the web container, create a JDBC resource for the application using the values from the tokens_xxx.properties.
JNDI name: jdbc/DB_JNDI_NAME
Resource type: javax.sql.DataSource
Datasource class name: DB_DATASOURCE
User: DB_USER
Password: DB_PASSWORD
URL: DB_URL
Some of the web containers might require you to set up a connection pool prior to setting up the JDBC resource.
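On Application Server, for example, the pool and resource creation can be sketched with the asadmin command-line tool. The DB_* placeholders stand for the values from tokens_xxx.properties, and the pool name DBPool is an assumption for illustration.

```shell
# Sketch: create a connection pool, then bind a JDBC resource to it.
# Replace the DB_* placeholders with the values from tokens_xxx.properties.
# Note: colons inside the JDBC URL must be escaped (\:) in --property.
asadmin create-jdbc-connection-pool \
  --datasourceclassname DB_DATASOURCE \
  --restype javax.sql.DataSource \
  --property user=DB_USER:password=DB_PASSWORD:url=DB_URL \
  DBPool
asadmin create-jdbc-resource --connectionpoolid DBPool jdbc/DB_JNDI_NAME
```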
Undeploy existing portlets that use the Java DB database as a datastore.
Deploy the newly configured portlet applications.
Prepare Oracle.
Install Oracle 10g Release 2.
Create a database instance named portal (the SID is portal).
Log in to Oracle Enterprise Manager (http://hostname:5500/em) as SYSTEM.
Create a tablespace communitymc_portal-id, for example, communitymc_portal1.
For Wiki, FileSharing, and Surveys portlets, the tablespace and user accounts are created during the deployment of the Oracle configured portlet.
Create a user account with the following information:
User name: portal
Password: portal
Default tablespace: communitymc_portal-id
Roles: CONNECT and RESOURCE
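The tablespace and user creation can also be scripted through sqlplus instead of Oracle Enterprise Manager. This is a sketch only: it assumes sqlplus is on the PATH, and the SYSTEM password and datafile name are placeholders.

```shell
# Sketch: create the tablespace, create the portal user, and grant the roles.
# SYSTEM-password and the datafile name are placeholders.
sqlplus SYSTEM/SYSTEM-password@portal <<'EOF'
CREATE TABLESPACE communitymc_portal1
  DATAFILE 'communitymc_portal1.dbf' SIZE 100M;
CREATE USER portal IDENTIFIED BY portal
  DEFAULT TABLESPACE communitymc_portal1;
GRANT CONNECT, RESOURCE TO portal;
EOF
```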
Prepare the web container for the new database.
Locate the Oracle JDBC driver ojdbc14.jar.
$ORACLE_HOME/jdbc/lib/ojdbc14.jar
Alternatively, you can download the JDBC driver from the Oracle web site. Ensure that you download the version that is compatible with the Oracle RDBMS you use.
Using the administration console or the CLI, add the JDBC driver ojdbc14.jar to the JVM classpath, and register the driver with the following JVM option: -Djdbc.drivers=oracle.jdbc.OracleDriver
For Web Server 7.0:
Log in to the Web Server 7 administrator console.
Click the Configuration tab and select the respective configuration.
Click the Java tab and add the location of ojdbc14.jar to the classpath suffix.
Click the JVM Settings tab.
Replace any existing -Djdbc.drivers entry as below:
-Djdbc.drivers=oracle.jdbc.OracleDriver
If the -Djdbc.drivers entry does not exist, add the following:
-Djdbc.drivers=oracle.jdbc.OracleDriver
Click Save.
Click Deploy Pending and deploy the changes.
For Application Server 8.2:
Log in to the Application Server administrator console.
Click Configurations > server-config (Admin Config) > JVM Settings > Path Settings.
Add the location of the ojdbc14.jar to the classpath suffix.
Click the JVM Settings tab.
Replace any existing -Djdbc.drivers entry as below:
-Djdbc.drivers=oracle.jdbc.OracleDriver
If the -Djdbc.drivers entry does not exist, add the following:
-Djdbc.drivers=oracle.jdbc.OracleDriver
Click Save.
Configure Community Membership and Configuration.
Edit the communitymc database configuration file.
% vi portal-data-dir/portals/portal-id/config/portal.dbadmin
db.driver=oracle.jdbc.OracleDriver
db.driver.classpath=JDBC-driver-path/ojdbc14.jar
url=jdbc:oracle:thin:@oracle-host:oracle-port:portal
Remove or comment out the following property from the communitymc configuration file.
% vi portal-data-dir/portals/portal-id/config/communitymc.properties
#javax.jdo.option.Mapping=derby
Load community schema onto the Oracle database.
% cd portal-data-dir/portals/portal-id/config
% ant -Dportal.id=portal-id -f config.xml configure
Edit the jdbc/communitymc JDBC resource to point to Oracle.
For Web Server 7.0:
Log in to the Web Server administration console.
Click the Configuration tab and select the respective configuration.
Click the Java tab > Resources and add a new JDBC resource.
Click jdbc/communitymc and edit the Datasource Class Name.
Set the Datasource classname to oracle.jdbc.pool.OracleDataSource
Set the following properties: user: portal and password: portal.
Delete the following derby properties: Database Name, Port Number, and Server Name.
Add the following property: url: jdbc:oracle:thin:@oracle-host:oracle-port:portal
Click Save.
Click Deploy Pending and deploy the changes.
For Application Server:
Log in to the Application Server administration console.
Click Resources > JDBC > Connection Pools > communitymcPool
Set the Datasource classname to oracle.jdbc.pool.OracleDataSource
Set the following properties: user: portal and password: portal.
Delete the following derby properties: Database Name, Port Number, and Server Name.
Add the following property: url: jdbc:oracle:thin:@oracle-host:oracle-port:portal
Click Save.
Configure and install Portlet applications.
Run the filesharing script.
portal-data-dir/portals/portal-id/portletapps/filesharing
Configure tokens_ora.properties with the information that is used when initially loading the schema.
JDBC driver classpath: $ORACLE_HOME/jdbc/lib/ojdbc14.jar
JNDI name: OracleFileSharingDB
Administrative URL: jdbc:oracle:thin:@<ORACLE_HOST>:<ORACLE_PORT>:portal
Administrative user: <ORACLE_SYSTEM_USER>
Administrative password: <ORACLE_SYSTEM_PASSWORD>
Database URL: jdbc:oracle:thin:@<ORACLE_HOST>:<ORACLE_PORT>:portal
Database user: portalfs
Database password: portalfs
JDBC driver: $ORACLE_HOME/jdbc/lib/ojdbc14.jar
Tablespace name: Filesharingdb_<PORTAL_ID>
Datafile name: filesharingdb_<PORTAL_ID>
Tablespace size: 100M
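Assembled as a file, the edited tokens might look like the following sketch. The token names on the left are assumptions based on the DB_* names referenced later in this procedure; verify them against the tokens_ora.properties that ships with the portlet application.

```properties
# Sketch only: token names are assumptions; values come from the list above.
DB_JNDI_NAME=OracleFileSharingDB
DB_DATASOURCE=oracle.jdbc.pool.OracleDataSource
DB_USER=portalfs
DB_PASSWORD=portalfs
DB_URL=jdbc:oracle:thin:@<ORACLE_HOST>:<ORACLE_PORT>:portal
```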
Using the administration console or the command-line tool provided by the web container, create the JDBC resource using the values from the tokens_ora.properties.
For Web Server 7.0:
Create a JDBC resource with the following properties:
JNDI name: jdbc/OracleFilesharingDB (this value must match DB_JNDI_NAME in tokens_ora.properties)
Datasource class name: oracle.jdbc.pool.OracleDataSource (this value must match DB_DATASOURCE in tokens_ora.properties)
User: portalfs (this value must match DB_USER in tokens_ora.properties)
Password: portalfs (this value must match DB_PASSWORD in tokens_ora.properties)
URL: jdbc:oracle:thin:@<ORACLE_HOST>:<ORACLE_PORT>:portal (this value must match DB_URL in tokens_ora.properties)
For Application Server:
Create a new connection pool with the following properties:
Name: OracleFilesharingDBPool (this value must match DB_JNDI_NAME in tokens_ora.properties)
Resource type: javax.sql.DataSource
Datasource class name: oracle.jdbc.pool.OracleDataSource (this value must match DB_DATASOURCE in tokens_ora.properties)
In the Properties list, delete all the default properties and add the following:
user: portalfs (this value must match DB_USER in tokens_ora.properties)
password: portalfs (this value must match DB_PASSWORD in tokens_ora.properties)
url: jdbc:oracle:thin:@<ORACLE_HOST>:<ORACLE_PORT>:portal (this value must match DB_URL in tokens_ora.properties)
Create a JDBC resource with the following values:
JNDI name: jdbc/OracleFilesharingDB (this value must match DB_JNDI_NAME in tokens_ora.properties)
Pool name: OracleFilesharingDBPool (this value must match the pool that was created in the previous step)
Add the available target to the Selected list. Click OK.
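The same pool and resource can be created from the command line with asadmin instead of the administration console; this is a sketch, and the default admin connection settings are assumed.

```shell
# Sketch: create the Oracle connection pool and the filesharing JDBC resource.
# Colons inside the JDBC URL must be escaped with a backslash for --property.
asadmin create-jdbc-connection-pool \
  --datasourceclassname oracle.jdbc.pool.OracleDataSource \
  --restype javax.sql.DataSource \
  --property "user=portalfs:password=portalfs:url=jdbc\\:oracle\\:thin\\:@<ORACLE_HOST>\\:<ORACLE_PORT>\\:portal" \
  OracleFilesharingDBPool
asadmin create-jdbc-resource \
  --connectionpoolid OracleFilesharingDBPool jdbc/OracleFilesharingDB
```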
Undeploy existing portlets that use the Java DB database as a datastore.
/opt/SUNWportal/bin/psadmin undeploy-portlet -u \
uid=amadmin,ou=people,dc=acme,dc=com -f password-file \
-p portal-id -i portal-instance-id
Deploy the newly configured file sharing portlet.
cd portal-data-dir/portals/portal-id/portletapps/filesharing
ant -Dapp.version=ora
This ant command performs several tasks, including regenerating the .war image, loading the schema onto the database, and deploying the newly built portlet. If the ant command fails and you want to unload the schema, use the following command:
ant -Dapp.version=ora unconfig_backend
During deployment, provide the Access Manager administrator password.
If the ant -Dapp.version=ora command fails with the error “Error: Password file does not exist or is not readable,” run ant deploy from the command line to deploy the portlet.
Repeat this procedure for the other portlet applications, such as Surveys and Wiki.