Sun Java System Portal Server 7.1 Configuration Guide

Chapter 8 Installing and Configuring Portal Server 7.1 in High Availability Scenarios

In a high availability scenario, multiple Portal Server and Access Manager instances exist. An end user can access any of the Portal Server instances. When a session failover occurs, the user is automatically redirected to an available Portal Server instance. This chapter covers various high availability scenarios.

This chapter contains the following scenarios:

Installing Portal Server and Access Manager in a High Availability Scenario with Berkeley Database

This section explains how to install Portal Server and Access Manager in a high availability scenario using Berkeley database. Berkeley database is installed when you install Access Manager. In a high availability scenario, Berkeley database is used to store session variables of the user.

In the procedures in this section, you do the following:

Figure 8–1 Portal Server With Berkeley Database

Portal Server and Access Manager in a High Availability Scenario with Berkeley Database

Procedure: To Install Portal Server and Access Manager in a High Availability Scenario with Berkeley Database

These instructions require the following:

  1. On Node 1, install Directory Server, Access Manager, and Application Server.

  2. Verify whether Access Manager is installed properly by accessing amconsole.

    http://node1.domain-name:8080/amconsole

  3. Log in to amconsole on Node 1. In the Organization Aliases List, add the Fully Qualified Domain Name (FQDN) of Node 2.

  4. Click Service Configuration and click Platform in the right panel.

  5. In the Platform Server List, add the following.

    http://node2.domain-name:8080|02

  6. On Node 2, run the Java ES installer to install Access Manager.

    On the page that asks whether Directory Server is already provisioned with data, select Yes and proceed with installing Access Manager.


    Note –

    Ensure that the password encryption key on Node 2 is the same as the password encryption key on Node 1. The password encryption key should be the same for the LDAP internal password on both of the nodes.


  7. On Node 2, start Application Server and verify whether Access Manager is installed properly by accessing amconsole.

    http://node2.domain-name:8080/amconsole

  8. In a text editor, open the AMConfig.properties file on Node 1 and Node 2.

    The file is located in the AccessManager_base/SUNWam/config directory.

    1. Edit the com.iplanet.am.cookie.encode property to be false.

    2. Edit the com.sun.identity.server.fqdnMap entry, replacing isserver.mydomain.com with the Fully Qualified Domain Name of the Load Balancer.
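
      For example, after the edits the two properties might read as follows (a hedged example; using node3.domain-name as the Load Balancer FQDN is an assumption):

      com.iplanet.am.cookie.encode=false
      com.sun.identity.server.fqdnMap[node3.domain-name]=node3.domain-name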

Procedure: To Install the Load Balancer on Node 3

  1. Install the Load Balancer plugin, which is provided with Application Server 8.2, on Node 3. Select Web Server as a component to install with the Load Balancer plugin.

  2. In a text editor, open the loadbalancer.xml file on Node 3.

    This file is located in the WebServer_base/SUNWwbsvr7/https-Node3/config directory.

  3. Edit the file so that the Load Balancer balances the load between the two Access Manager instances.

    Edit the listeners with the appropriate values.

    A sample loadbalancer.xml file that balances the load across the Portal Server and Access Manager instances on Node 1 and Node 2 follows:


    <!DOCTYPE loadbalancer PUBLIC
    "-//Sun Microsystems Inc.//DTD Sun ONE Application Server 7.1//EN"
    "sun-loadbalancer_1_1.dtd">
    <loadbalancer>
      <cluster name="cluster1">
        <!--
        Configure the listeners as space-separated URLs, such as
        listeners="http://host:port https://host:port". For example:
        <instance name="instance1" enabled="true"
          disable-timeout-in-minutes="60"
          listeners="http://localhost:80 https://localhost:443"/>
        -->
        <instance name="instance1" enabled="true"
          disable-timeout-in-minutes="60"
          listeners="http://node1.domain-name:8080"/>
        <instance name="instance2" enabled="true"
          disable-timeout-in-minutes="60"
          listeners="http://node2.domain-name:8080"/>
        <web-module context-root="/portal" enabled="true"
          disable-timeout-in-minutes="60" error-url="sun-http-lberror.html" />
        <web-module context-root="/psconsole" enabled="true"
          disable-timeout-in-minutes="60" error-url="sun-http-lberror.html" />
        <web-module context-root="/amserver" enabled="true"
          disable-timeout-in-minutes="60" error-url="sun-http-lberror.html" />
        <web-module context-root="/amconsole" enabled="true"
          disable-timeout-in-minutes="60" error-url="sun-http-lberror.html" />
        <web-module context-root="/ampassword" enabled="true"
          disable-timeout-in-minutes="60" error-url="sun-http-lberror.html" />
        <web-module context-root="/amcommon" enabled="true"
          disable-timeout-in-minutes="60" error-url="sun-http-lberror.html" />
        <web-module context-root="/" enabled="true"
          disable-timeout-in-minutes="60" error-url="sun-http-lberror.html" />
        <health-checker url="/" interval-in-seconds="10" timeout-in-seconds="30" />
      </cluster>
      <property name="reload-poll-interval-in-seconds" value="60"/>
      <property name="response-timeout-in-seconds" value="30"/>
      <property name="https-routing" value="true"/>
      <property name="require-monitor-data" value="false"/>
      <property name="active-healthcheck-enabled" value="false"/>
      <property name="number-healthcheck-retries" value="3"/>
      <property name="rewrite-location" value="true"/>
    </loadbalancer>
  4. Start the Web Server.
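
    For example, assuming the default Web Server 7 instance scripts (the instance directory is an assumption based on the path used earlier in this procedure):

    WebServer_base/SUNWwbsvr7/https-Node3/bin/startserv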

  5. On Node 1 and Node 2, start Access Manager, Directory Server, and Application Server.
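
    A hedged sketch of the start commands (the amserver script location and default domain name are assumptions; exact paths depend on your installation):

    # Start Directory Server with the start script for your Directory Server version.
    AccessManager_base/SUNWam/bin/amserver start
    ApplicationServer_base/SUNWappserver/sbin/asadmin start-domain --user admin domain1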

Procedure: To Configure Session Failover with Message Queue and Berkeley Database

  1. Edit the Application Server domain.xml file on Node 1 and Node 2 to add the locations of the jms.jar and imq.jar files.


    <JAVA javahome="/usr/jdk/entsys-j2se"
    server-classpath="/usr/share/lib/imq.jar:/usr/share/lib/jms.jar:..."

    Note –

    When you create a Message Queue instance, do not use the default Message Queue instance that starts with Application Server, and do not use the guest user for Message Queue.


  2. Start Message Queue on Node 1 and Node 2.

    /bin/imqbrokerd -tty -name mqins -port 7777 &

    where mqins is the Message Queue instance name.

  3. Add a user to this message queue.

    imqusermgr add -u amsvrusr -p secret12 -i mqins -g admin

    where amsvrusr is the name of the new user that is used instead of guest.

  4. Inactivate the guest user.

    imqusermgr update -u guest -i mqins -a false

  5. Create an encrypted file for the message queue on Node 1 and Node 2.

    ./amsfopasswd -f /AccessManager_base/SUNWam/.password -e password-file

  6. Edit the amsfo.conf file on both nodes.

    Sample entries in the amsfo.conf file follow:


    AM_HOME_DIR=/opt/SUNWam
    AM_SFO_RESTART=true
    CLUSTER_LIST=node1.domain-name:7777,node2.domain-name:7777
    DATABASE_DIR="/tmp/amsession/sessiondb"
    DELETE_DATABASE=true
    LOG_DIR="/tmp/amsession/logs"
    START_BROKER=true
    BROKER_INSTANCE_NAME=amsfo
    BROKER_PORT=7777
    BROKER_VM_ARGS="-Xms256m -Xmx512m"
    USER_NAME=amsvrusr
    PASSWORDFILE=$AM_HOME_DIR/.password
    AMSESSIONDB_ARGS=""
    lbServerPort=8080
    lbServerProtocol=http
    lbServerHost=node3.domain-name
    SiteID=10
  7. Configure session failover by running the amsfoconfig script on Node 1.

    AccessManager_base/SUNWam/bin/amsfoconfig

    After running the script, the following output is displayed:


    Session Failover Configuration Setup script.
    =========================================================
    =========================================================
    Checking if the required files are present...
    =========================================================
    
    Running with the following Settings.
    -------------------------------------------------
    Environment file: /etc/opt/SUNWam/config/amProfile.conf
    Resource file: /opt/SUNWam/lib/amsfo.conf
             -------------------------------------------------
    Using /opt/SUNWam/bin/amadmin
    
    Validating configuration information.
    Done...
    
    Please enter the LDAP Admin password: 
    (nothing will be echoed): password1
    Verify: password1
    Please enter the JMQ Broker User password: 
    (nothing will be echoed): password2
    Verify: password2
    
    Retrieving Platform Server list...
    Validating server entries.
    Done...
    
    Retrieving Site list...
    Validating site entries.
    Done...
    
    Validating host: http://amhost1.example.com:7001|02
    Validating host: http://amhost2.example.com:7001|01
    Done...
    
    Creating Platform Server XML File...
    Platform Server XML File created successfully.
    
    Creating Session Configuration XML File...
    Session Configuration XML File created successfully.
    
    Creating Organization Alias XML File...
    Organization Alias XML File created successfully.
    
    Loading Session Configuration schema File...
    Session Configuration schema loaded successfully.
    
    Loading Platform Server List File...
    Platform Server List server entries loaded successfully.
    
    Loading Organization Alias List File...
    Organization Alias List loaded successfully.
    
    Please refer to the log file /var/tmp/amsfoconfig.log for additional
    information.
    ###############################################################
    Session Failover Setup Script. Execution end time 10/05/05 13:34:44
    ###############################################################
  8. Edit the amsessiondb script so that the following variables contain the correct default paths and directories:


    JAVA_HOME=/usr/jdk/entsys-j2se/
    IMQ_JAR_PATH=/usr/share/lib
    JMS_JAR_PATH=/usr/share/lib
    BDB_JAR_PATH=/usr/share/db.jar
    BDB_SO_PATH=/usr/lib
    AM_HOME=/opt/SUNWam
  9. Start and stop the Message Queue instance running on port 7777 by using the amsfo script.

    AccessManager_base/SUNWam/bin/amsfo start

    AccessManager_base/SUNWam/bin/amsfo stop

  10. Restart Access Manager, Directory Server, Application Server, and Web Server on all the nodes.

  11. Log in to the amconsole through Load Balancer.

    http://node3.domain-name:80/amconsole

  12. Stop the Application Server on Node 1.

    The session is handled by Access Manager on Node 2.
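
    For example, a hedged sketch assuming the default domain1:

    ApplicationServer_base/SUNWappserver/sbin/asadmin stop-domain domain1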

Procedure: To Install Portal Server on Node 1

  1. Invoke the Java ES installer and install Portal Server on Node 1 in the Configure Now mode.

  2. Access Portal Server to verify the installation.

    http://node1.domain-name:8080/portal

  3. Create a Portal Server instance on Node 2.

Procedure: To Create a Portal Server Instance on Node 2

  1. Invoke the Java ES installer, and install Portal Server in the Configure Now mode.

  2. Copy example2.xml to a temporary directory to make a backup of the original file.

    cp PortalServer_base/SUNWportal/samples/psconfig/example2.xml /tmp-directory

  3. Edit the original example2.xml file to replace the tokens with the machine information for Node 1.

  4. Configure Portal Server using the example2.xml file as the configuration XML file.

    PortalServer_base/SUNWportal/bin/psconfig --config example2.xml

  5. Copy the Webcontainer.properties template file to the Portal Server installation bin directory.

    cp PortalServer_base/SUNWportal/template/Webcontainer.properties \
    PortalServer_base/SUNWportal/bin

  6. Modify the Webcontainer.properties file as required.

    vi PortalServer_base/SUNWportal/bin/Webcontainer.properties

    Refer to the Creating Multi-Portal documentation for more information about changing the Webcontainer.properties file.

  7. Create a Portal Server instance.

    PortalServer_base/SUNWportal/bin/psadmin create-instance -u amadmin -f ps_password -p portal1 -w Webcontainer.properties

  8. Restart Directory Server, Application Server, Access Manager, and Portal Server on Node 1 and Node 2.

  9. Restart Web Server on Node 3.

  10. Access the portal through the Load Balancer.

    You can verify the node to which the portal is connected by tracking the access logs of the container. After you log in to the portal, kill the Application Server on the node to which it is connected. Then, click any of the links on the desktop; the session is maintained and you are automatically connected to Node 2.

Configuring HADB for Session Failover

Procedure: To Configure HADB for Session Failover

Before You Begin
  1. Check the physical memory of the nodes.

    prtconf | grep Mem

  2. Calculate the value of the shminfo_shmmax parameter.

    shminfo_shmmax = ( Server's Physical Memory in MB / 256 MB ) * 10000000

    For example, if the physical memory is 512 MB, the value of the shminfo_shmmax parameter is 20000000.
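
    A hedged shell sketch of this calculation (assumes Solaris prtconf output of the form "Memory size: 512 Megabytes"):

    # Extract physical memory in MB, then apply the formula above.
    MEM_MB=`prtconf | grep Mem | awk '{print $3}'`
    SHMMAX=`expr $MEM_MB / 256 \* 10000000`
    echo "set shmsys:shminfo_shmmax=$SHMMAX"   # prints 20000000 for 512 MB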

  3. Add the following parameter to the /etc/system configuration file.


    set shmsys:shminfo_shmmax=0x40000000
    set shmsys:shminfo_shmseg=20
    set semsys:seminfo_semmni=16
    set semsys:seminfo_semmns=128
    set semsys:seminfo_semmnu=1000
  4. Reboot the server.

  5. Set up secure shell (ssh).

    The secure shell is used by the HADB component of Sun Application Server Enterprise Edition to exchange file information between the server nodes in the Application Server cluster. Additionally, the HADB utility commands can operate on multiple server nodes at the same time to keep them synchronized.


    Note –

    Root ssh login between the servers, without password authentication, is required. This is achieved by enabling non-console root login and configuring the ssh keys.


  6. Perform the following steps on each Application Server cluster node to ensure successful installation, configuration, and operation of the software.

  7. Ensure that the hostname has a fully qualified domain name in the /etc/hosts file as the first entry after the IP address.

    For example, 10.10.10.2 as1.example.com as1 loghost

  8. Check that hostname lookup and reverse lookup are functioning correctly.
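
    For example, you can verify both directions with getent (the hostname and address are illustrative):

    getent hosts as1.example.com
    getent hosts 10.10.10.2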

  9. Check the contents of the /etc/nsswitch.conf file hosts entry.

    cat /etc/nsswitch.conf | grep hosts

  10. Allow non-console root login by commenting out the CONSOLE=/dev/console entry in the /etc/default/login file.

    cat /etc/default/login | grep "CONSOLE="

  11. If you need to enable remote root ftp, comment out the root entry in the /etc/ftpd/ftpusers file.

    cat /etc/ftpd/ftpusers | grep root

  12. Permit ssh root login. Set PermitRootLogin to yes in the /etc/ssh/sshd_config file, and restart the ssh daemon process.

    cat /etc/ssh/sshd_config | grep PermitRootLogin


    /etc/init.d/sshd stop
    /etc/init.d/sshd start
  13. Generate the ssh public and private key pair.

    ssh-keygen -t dsa


    Note –

    When running the ssh-keygen utility, do NOT enter a passphrase; just press Return. Otherwise, whenever the Application Server uses ssh, you are prompted for the passphrase, which breaks the automated scripts.


  14. Generate the keys on all Application Server nodes before proceeding to the next step where the public key values are combined into the authorized_keys file.

  15. Copy all the public key values to each server's authorized_keys file. Create the authorized_keys file on one server and then copy that to the other servers.


    root@as1# cd ~/.ssh
    root@as1# cp id_dsa.pub authorized_keys
    root@as1# scp as2.example.com:/.ssh/id_dsa.pub authorized_keys.as2
    root@as1# cat authorized_keys.as2 >> authorized_keys
    root@as1# rm authorized_keys.as2
    root@as1# scp authorized_keys as2.example.com:/.ssh/authorized_keys
  16. Verify that ssh functions correctly between the Application Server nodes without the need for a password to be entered.
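
    For example, the following command should print the remote hostname without prompting for a password:

    ssh as2.example.com hostname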

  17. Create node agents on the servers on Host A, Host B, and Host C.

  18. Create the cluster.

  19. Create a server instance for each server at the DAS.
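
    A hedged sketch of the corresponding asadmin commands (the agent and instance names are illustrative; the detailed, version-specific steps appear in the next section):

    asadmin create-node-agent --user admin agent1
    asadmin create-cluster --user admin pscluster
    asadmin create-instance --user admin --cluster pscluster --nodeagent agent1 instance1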

  20. Start the management agent (ma) on all the nodes.

    cd /opt/SUNWhadb/4/bin; ./ma &

  21. Create the HA cluster on Host A.

    asadmin configure-ha-cluster --user admin --devicesize 256 --hosts HostB,HostC pscluster

Installing Portal Server on an Application Server Cluster

This section explains how to install Portal Server 7.1 in an Application Server cluster environment. In a cluster environment, Portal Server is installed on a primary node, where the cluster is created, and instances of Portal Server are created on one or more secondary nodes. The user accesses the portal through a load balancer. In such an environment, if a server on any node goes down, the load balancer automatically redirects the user to another available Portal Server instance.


Note –

If Portal Server is installed on a clustered environment, any deployment or undeployment of container specific files should be done on the primary instance, where DAS is installed.


Figure 8–2 Portal Server on Application Server Cluster

The user accesses any of the Application Server instances.

Procedure: To Install Portal Server on an Application Server Cluster

  1. On Node 1, install Access Manager, Web Server, and Directory Server using the Java ES installer. Directory Server is in the MMR (Multi Master Replication) mode. Access Manager and Directory Server must be in the HA Configuration mode.

  2. Verify whether Access Manager is installed properly.

    http://node1.domain-name:80/amconsole

  3. Install Portal Server on Node 2.

Procedure: To Install Portal Server on Node 2

  1. Install Application Server and Access Manager SDK using the Java ES installer in the Configure Now mode.


    Note –

    Select the Application Server components, such as Domain Administration Server (DAS) and Command-Line Interface.


  2. Start the Domain Administration Server.

    ApplicationServer_base/SUNWappserver/sbin/asadmin start-domain --user admin domain1

  3. Create a node agent, for example ps1.

    ApplicationServer_base/SUNWappserver/sbin/asadmin create-node-agent --user admin ps1

  4. Start the node agent.

    ApplicationServer_base/SUNWappserver/sbin/asadmin start-node-agent --user admin ps1

  5. Create a cluster, for example pscluster.

    ApplicationServer_base/SUNWappserver/sbin/asadmin create-cluster --user admin pscluster

    Creating a cluster also creates a configuration named pscluster-config.

  6. Create an Application Server instance, for example server1-ps1.

    ApplicationServer_base/SUNWappserver/sbin/asadmin create-instance --user admin --cluster pscluster --nodeagent ps1 --systemproperties HTTP_LISTENER_PORT=80 server1-ps1

  7. Start the Application Server instance.

    ApplicationServer_base/SUNWappserver/sbin/asadmin start-instance --user admin server1-ps1

  8. Using the Java ES installer, install Portal Server in the Configure Later mode.

  9. Create a Portal Server instance by modifying the example14.xml file with the installation parameters.

    Also, set the WebcontainerInstanceName attribute to the Application Server cluster name, pscluster. Set the host name to the primary host, node1.domain-name.

    PortalServer_base/SUNWportal/bin/psconfig --config example14.xml

  10. Delete the com.sun.portal.instance.id option from the pscluster configuration, and add it to the server1-ps1 instance.

    ApplicationServer_base/SUNWappserver/sbin/asadmin delete-jvm-options --user admin --target pscluster "-Dcom.sun.portal.instance.id=ps1-80"

    ApplicationServer_base/SUNWappserver/sbin/asadmin create-system-properties --user admin --target server1-ps1 com.sun.portal.instance.id=ps1-80

    ps1-80 is the name of the instance specified in the configuration file.

Procedure: To Install Portal Server on Node 3

  1. Install the Application Server node agent and command-line interface, and the Access Manager SDK, in the Configure Now mode using the Java ES installer.


    Note –

    Configure the Application Server node agent to use node1.domain-name as the Domain Administration Server.

    The Java ES installer creates a node agent, node3.


  2. Install Portal Server in the Configure Later mode using the Java ES installer.

  3. Configure the Access Manager SDK to use the Access Manager and Directory Server installed on node1.domain-name.

  4. Start the node agent.

    ApplicationServer_base/SUNWappserver/sbin/asadmin start-node-agent --user admin node3

  5. Create an Application Server instance, ps2–80.

    ApplicationServer_base/SUNWappserver/sbin/asadmin create-instance --user admin --host node2 --cluster pscluster --nodeagent node3 --systemproperties HTTP_LISTENER_PORT=80 ps2-80

  6. Start the Application Server instance.
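
    For example, following the pattern used on Node 2:

    ApplicationServer_base/SUNWappserver/sbin/asadmin start-instance --user admin ps2-80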

  7. Delete the ps_util.jar classpath entry from the Application Server instance.

    When you create a Portal Server instance, the create-instance subcommand checks the ps_util.jar classpath entry to verify that the Application Server instance has not already been configured for a Portal Server instance. For instances that are part of an Application Server cluster, the configuration and applications are deployed automatically, so the create-instance subcommand fails unless you remove this entry.

    1. Log in to Application Server.

    2. Click Configuration > pscluster-config > JVM Settings.

    3. In the Classpath Suffix list box, delete PortalServer_base/SUNWportal/lib/ps_util.jar.

  8. Configure the common agent container by modifying the example2.xml file with the deployment values.

    PortalServer_base/SUNWportal/bin/psconfig --config example2.xml

  9. Create a Webcontainer.properties.ps2 file by modifying the Webcontainer.properties.SJSAS81 file with the parameters of the newly created instance, such as host, port, scheme, and file paths.

    The Webcontainer.properties.SJSAS81 file is located in the PortalServer_base/SUNWportal/template directory.
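
    For example (the file locations follow the pattern used earlier in this chapter):

    cp PortalServer_base/SUNWportal/template/Webcontainer.properties.SJSAS81 \
    PortalServer_base/SUNWportal/bin/Webcontainer.properties.ps2
    vi PortalServer_base/SUNWportal/bin/Webcontainer.properties.ps2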

  10. Create a Portal Server instance in the newly created Application Server instance.

    PortalServer_base/SUNWportal/bin/psadmin create-instance -u amadmin -f password --portal myPortal --instance ps2-80 --webconfig Webcontainer.properties.ps2

  11. Add the com.sun.portal.instance.id property to the ps2-80 instance.

    ApplicationServer_base/SUNWappserver/sbin/asadmin create-system-properties --user admin --host node1 --target ps2-80 com.sun.portal.instance.id=ps2-80

    ps2-80 is the name of the default instance specified in the configuration file.

Procedure: To Display the Default WSRP Portlets in the WSRP Tab of the Portal Desktop

When you configure Portal Server on an Application Server cluster, the default portlets are not displayed on the WSRP tab of the desktop. Do the following to display portlets on the WSRP tab.

  1. Create a producer with the portlets that you want to add to the WSRP tab.

  2. Configure a consumer with the producer.

  3. Add the consumer to the WSRPSamplesTabPanel container.

    For more information on how to create and configure a producer, see Technical Note: Web Services for Remote Portlets for Sun Java System Portal Server 7.1.


    Note –

    You can also do the following to display the default portlets on the WSRP tab:

    1. Create a producer with Bookmark, JSP Remote, Notepad, and Weather portlets.

    2. Configure a consumer with the producer.

    3. Copy the producer entity ID after configuring the producer.

    4. Go to Manage Channels and Containers.

    5. Under Developer Sample, select the WSRPSamplesTabPanel container.

      This container displays Bookmark, JSP Remote, Notepad, and Weather portlets.

    6. Select the portlet and paste the producer entity ID into the Producer Entity ID field.


Procedure: To Configure Portlet Session Failover on Application Server 8.2

Before You Begin

The portal instances should be clustered and HADB should be installed.

  1. Undeploy the portal.war file from the Application Server 8.2 DAS (administration server), and add the <distributable/> tag to the WEB-INF/web.xml file of portal.war. Refer to the sample web.xml file displayed below:


    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "http://java.sun.com/dtd/web-app_2_3.dtd">
    <web-app>
    <display-name>Sun Java Enterprise System Portal Server Desktop Web Application</display-name>
    <description>Provides a user customizable interface for content that is aggregated from Portlet applications</description>
    <distributable/>
    <context-param>
    <param-name>desktop.configContextClassName</param-name>
    <param-value>com.sun.portal.desktop.context.PropertiesConfigContext</param-value>
  2. Add the same <distributable/> tag to the web.xml file of the portlet.war that is used for storing session variables.

  3. Ensure that the Availability option is selected for both portal.war and the portlet.war files.

    1. Log in to the administration console of Application Server.

    2. Click Web Applications > portal.

    3. Select the Availability option in the right panel.

Clustering in Portal Server 7.1 on BEA WebLogic 8.1 Service Pack 4 and Service Pack 5

This section explains how to cluster Portal Server 7.1 on BEA WebLogic 8.1 Service Pack 4 and Service Pack 5.

Figure 8–3 Portal Server on WebLogic Cluster Environment

The user accesses any of the managed servers that are clustered.

Procedure: To Cluster Portal Server 7.1 on WebLogic 8.1 Service Pack 4

In this section, you do the following:

Before You Begin
  1. Run the BEA WebLogic 8.1 installer to install BEA WebLogic 8.1.

  2. Run the script quickStart.sh after a successful installation.

    Bea-base/weblogic81/common/bin/quickStart.sh

  3. Click Create a New Domain Configuration or Extend an Existing One.

    The installation panel appears.

  4. Enter admin and the admin password.

    Leave the other defaults as they are.

  5. Click Finish to complete the installation.

  6. To start WebLogic, run the startWebLogic.sh script.

    Bea-base/user_projects/domains/mydomain/startWebLogic.sh

  7. Access the administrator server.

    http://node1.domain-name:7001/console

    By default, the administrator server of WebLogic is created on port 7001.

Procedure: To Create a Node Agent on Node 1

This procedure creates a node agent on port 5555, which is the default port for node agents. A node agent is required to create managed servers.

  1. Log in to the administrator server.

    http://node1.domain-name:7001/console

  2. Click Machines and click Configure a New Machine.

  3. Enter a name for the node agent, for example node_1, and click Create.

    By default, the node agent runs on port 5555. If you need a different port, enter the port number in the Node Manager tab.

  4. On Node 1, start the node agent.

    Bea-base/weblogic81/server/bin/startNodeManager.sh node1-ip-address 5555

Procedure: To Create WebLogic Managed Servers on Node 1

This procedure creates managed servers on port 7011, port 7022, and port 55555. The first two managed servers are used for creating Portal Server instances. The last one is used as a proxy server to set up the load balancer.

  1. Log in to the administrator server.

    http://node1.domain-name:7001/console

  2. Click Servers, and click Configure a New Server.

  3. Enter a name for the managed server. For example, node1_7011.

  4. Select node_1 from the Machine list box.

  5. Enter the Listen Port as 7011, and click Create.

  6. To start the managed server, click the Control tab and click Start the Server.

  7. (Optional) You can also start the managed server using the command-line interface.

    bea_base/user_projects/domains/mydomain/startManagedWebLogic.sh node1_7011

  8. Use steps 2 through 5 to create a managed server named node1_7022 with a listen port of 7022.

  9. Start the second managed server.

  10. Use steps 2 through 5 to create a managed server named proxy with a listen port of 55555.

    This managed server is used for the proxy settings.

  11. Start the third managed server.
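
    As with the first managed server, you can alternatively start these managed servers from the command line:

    bea_base/user_projects/domains/mydomain/startManagedWebLogic.sh node1_7022
    bea_base/user_projects/domains/mydomain/startManagedWebLogic.sh proxy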

Procedure: To Create a Node Agent on Node 2

This procedure creates a node agent on port 5555 on Node 2 by using the administrator console installed on Node 1.

  1. Install BEA WebLogic on Node 2.

  2. Log in to the administrator server of Node 1.

    http://node1.domain-name:7001/console

  3. Click Machines and click Configure a New Machine.

  4. Enter node_2 for the node agent name, and click Create.

  5. In the Listen Address field, enter the IP address of Node 2.

  6. On Node 2, start the node agent.

    Bea-base/weblogic81/server/bin/startNodeManager.sh node2-ip-address 5555

  7. In the nodemanager.Hosts file of Node 2, add the IP address of Node 1.

    Bea-base/weblogic81/common/nodemanager.Hosts

    You need to do this to ensure that Node 2's node agent can accept commands from the administrator server on Node 1.

Procedure: To Create WebLogic Managed Servers on Node 2

This procedure creates a managed server on Node 2 on port 7033 by using the administrator console installed on Node 1.

  1. Log in to the administration server.

    http://node1.domain-name:7001/console

  2. Click Servers, and click Configure a New Server.

  3. Enter a name for the managed server. For example, node2_7033.

  4. Select node_2 from the Machine list box.

  5. Enter the Listen Address as the IP address of Node 2.

  6. Enter the Listen Port as 7033, and click Create.

  7. Click the Control tab and click Start the Server.

  8. (Optional) You can also start the managed server using the command-line interface on Node 2.

    bea_base/user_projects/domains/mydomain/startManagedWebLogic.sh node2_7033

Procedure: To Install Access Manager on the Administrator Server

This procedure installs Access Manager on the BEA WebLogic 8.1 administrator server on Node 1. By default, the administrator server port is 7001.

  1. Install Directory Server and Web Server using the Java ES installer.

  2. Install Access Manager in the Configure Later mode.

  3. Edit the values in the amsamplesilent file as appropriate.

  4. Run the amconfig script with the amsamplesilent file.

    AccessManager_base/SUNWam/bin/amconfig -s amsamplesilent

  5. Access amconsole and verify whether Access Manager is running.

    http://node1.domain-name:7001/amconsole

Procedure: To Install Portal Server 7.1 on Node 1

This procedure installs Portal Server 7.1 on port 7011 of the managed server on Node 1.

  1. Install Portal Server 7.1 using the Java ES installer.

  2. In the installer, select Managed Server while installing.

  3. Access the portal.

    http://node1.domain-name:7011/portal/dt

    The Portal Welcome page appears.

Procedure: To Create an Instance of Portal Server 7.1 on Node 1

This procedure creates an instance of Portal Server on port 7022 of the managed server on Node 1.

  1. Modify the webcontainer.properties.BEAWL8 file to change the values corresponding to the managed server node1_7022.

  2. Create an instance of Portal Server.

    PortalServer_base/SUNWportal/bin/psadmin create-instance -u amadmin -f password -p portal1 -w webcontainer.properties.BEAWL8

  3. Restart the managed server named node1_7022.

  4. Access the portal.

    http://node1.domain-name:7022/portal/dt

    The Portal Welcome page appears.

Procedure: To Create an Instance of Portal Server 7.1 on Node 2

This procedure creates an instance of Portal Server on port 7033 of the managed server on Node 2.

  1. Install Portal Server 7.1 and Access Manager SDK in the Configure Later mode using the installer.

  2. Modify the amsamplesilent script with appropriate values. Ensure that DEPLOY_LEVEL is set to 4.

    The amsamplesilent script is located in the AccessManager_base/SUNWam/bin directory.

  3. Run the amsamplesilent script.

    AccessManager_base/SUNWam/bin/amconfig -s amsamplesilent

  4. Copy the example15.xml file to a temporary directory and modify it with appropriate values.

    The example15.xml file is located in the PortalServer_base/SUNWportal/samples/psconfig directory.

  5. Configure Portal Server using the example15.xml as the configuration XML file.

    PortalServer_base/SUNWportal/bin/psconfig --config example15.xml

  6. Modify the webcontainer.properties.BEAWL8 file to change the values corresponding to the managed server node2_7033.

  7. Create a Portal Server instance.

    PortalServer_base/SUNWportal/bin/psadmin create-instance -u amadmin -f password -p portal1 -w webcontainer.properties.BEAWL8

  8. Restart the managed server node2_7033.

  9. Access the portal.

    http://node2.domain-name:7033/portal/dt

    The Portal Welcome page appears.

Procedure: To Configure a Cluster

This procedure configures a cluster for the three managed servers that were created: node1_7011, node1_7022, and node2_7033.

  1. Log in to the administrator server.

    http://node1.domain-name:7001/console

  2. Click Cluster and click Configure a New Cluster.

  3. Enter a cluster name and cluster address.

    Enter the cluster name as new_cluster and enter the cluster address as node1.domain-name.

  4. Select round-robin as the Default Load Algorithm, and click Create.

  5. Click the Server tab, select node1_7011, node1_7022, and node2_7033, and drag them into the list box.

  6. Stop and restart all of the servers.

    Each time you change the cluster configuration you must stop and restart all of the servers in the cluster.

  7. Select the Monitoring tab.

    After a successful configuration of the cluster, the following is displayed.


    Number of servers configured for this cluster: 2
    Number of servers currently participating in this cluster: 2

Procedure: To Install Proxy Servlet for Load Balancing

This procedure creates a proxy server .war file for load balancing.

  1. Telnet to Node 1.

  2. Create files web.xml and weblogic.xml in the new proxy/WEB-INF directory.


    mkdir -p proxy/WEB-INF
    cd proxy/WEB-INF
  3. Add the following contents to the web.xml file.


    <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "http://java.sun.com/dtd/web-app_2_3.dtd">
    <web-app>
    
    <servlet>
            <servlet-name>HttpClusterServlet</servlet-name>
            <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
            <init-param>
                    <param-name>WebLogicCluster</param-name>
                    <param-value>HOST.DOMAIN:7011|HOST.DOMAIN:7022</param-value>
            </init-param>
    </servlet>
    
    <servlet-mapping>
            <servlet-name>HttpClusterServlet</servlet-name>
            <url-pattern>/</url-pattern>
    </servlet-mapping>
    
    <servlet-mapping>
            <servlet-name>HttpClusterServlet</servlet-name>
            <url-pattern>*.jsp</url-pattern>
    </servlet-mapping>
    
    <servlet-mapping>
            <servlet-name>HttpClusterServlet</servlet-name>
            <url-pattern>*.htm</url-pattern>
    </servlet-mapping>
    
    <servlet-mapping>
            <servlet-name>HttpClusterServlet</servlet-name>
            <url-pattern>*.html</url-pattern>
    </servlet-mapping>
    
    </web-app>
  4. Add the following contents to the weblogic.xml file.


    <!DOCTYPE weblogic-web-app PUBLIC "-//BEA Systems, Inc.//DTD Web Application 8.1//EN"
    "http://www.bea.com/servers/wls810/dtd/weblogic810-web-jar.dtd">
    <weblogic-web-app>
            <context-root>/</context-root>
    </weblogic-web-app>
  5. Change directories to the proxy directory.

  6. Run the following command:

    /usr/jdk/jdk1.5.0_01/bin/jar cvf proxy.war WEB-INF

  7. Access the administrator console.

    http://node1.domain-name:7001/console

  8. Click Servers > Configure a New Server.

  9. Enter the new server name as proxy and the listen port as 55555, and click Apply.

  10. Click Deployment in the left pane, click Web Application Modules, and click Deploy New.

  11. Select proxy.war from node1.domain-name and deploy it on the independent server named proxy.

Procedure: To Deploy .war Files on the Cluster

This procedure redeploys all of the portal .war files on the cluster.

  1. Log in to the administrator server.

  2. Click Web Application Modules.

  3. For each Portal Web Application module, click the Target tab, select All Servers in the Cluster option, and click Apply.

  4. Click Services > JDBC > Connection Pools.

  5. For each connection pool related to Portal Server, click the Target tab, select All Servers in the Cluster option, and click Apply.

  6. Click Services > JDBC > Data Source.

  7. For each data source related to Portal Server, click the Target tab, select All Servers in the Cluster option, and click Apply.

  8. Restart all servers.

Procedure: To Install Gateway on the Gateway Host

  1. Install Gateway on Node 3.

  2. Enter the load balancer URL, http://node1.domain-name:55555/portal, as the portal URL while installing the gateway.

  3. Log in to the gateway as developer/developer.

    If the proxy is working, the URL changes to the following: https://GWhost:443/http://node1.domain-name:55555/portal/dt

Setting Up Portlet Session Failover on BEA WebLogic 8.1 Service Pack 5

In this scenario, two or more Portal Server instances are clustered. The user accesses a portlet from a Portal Server instance and can set session variables in the portlet. If the Portal Server instance that the user accesses fails over, the user is automatically directed to another Portal Server instance, and the session variables are retained.

This section explains how to set up portlet session failover on BEA WebLogic 8.1 Service Pack 5.

Procedure: To Set Up Portlet Session Failover on BEA WebLogic 8.1 Service Pack 5

Before You Begin

Two or more portal instances should already be clustered. To cluster Portal Server instances, refer to Clustering in Portal Server 7.1 on BEA WebLogic 8.1 Service Pack 4 and Service Pack 5.

  1. Undeploy the portal.war from the managed servers using the WebLogic administrator console.

  2. Add the following to the weblogic.xml file of portal.war:


    <session-descriptor>
    <session-param>
    <param-name>PersistentStoreType</param-name>
    <param-value>replicated</param-value>
    </session-param>
    </session-descriptor>

    Both portal.war and the portlet that is used for session failover should have the session-descriptor set in the weblogic.xml file.

    After the modification, the content of the weblogic.xml file of portal.war is as follows:


    <!DOCTYPE weblogic-web-app PUBLIC "-//BEA Systems, Inc.//DTD Web Application 8.1//EN"
    "http://www.bea.com/servers/wls810/dtd/weblogic810-web-jar.dtd">
    <weblogic-web-app>
    <reference-descriptor>
    <resource-description>
    <res-ref-name>jdbc/communitymc</res-ref-name>
    <jndi-name>jdbc/communitymc</jndi-name>
    </resource-description>
    </reference-descriptor>
    <session-descriptor>
    <session-param>
    <param-name>PersistentStoreType</param-name>
    <param-value>replicated</param-value>
    </session-param>
    </session-descriptor>
    <virtual-directory-mapping>
    <local-path>/opt/SUNWam/public_html</local-path>
    <url-pattern>/online_help/*</url-pattern>
    </virtual-directory-mapping>
    </weblogic-web-app>
  3. Deploy the portal.war onto the Portal Server instance using the BEA WebLogic administrator console.

  4. Add the same session-descriptor into the weblogic.xml file of the portlet to be used for session failover.


    Note –

    If the weblogic.xml file does not exist, create it under WEB-INF.


  5. Deploy the portlet through the psconsole and add the portlet to the desktop.

  6. Set session variables in the portlet, and bring down the Portal Server instance that you use to access the desktop.

    The session variables will be retained after you start accessing another Portal Server instance.

Replacing Java DB With Oracle Database

Java DB is used as the default database for Portal Server. This section explains how to replace Java DB with the Oracle database to ensure high availability and scalability. This procedure is divided into the following tasks:

  1. Setting Up General Requirements

  2. Preparing Oracle

Procedure: To Prepare the Database

  1. Prepare the Database.

    1. Install RDBMS or identify the RDBMS that already exists on the system.

    2. Create a database instance (tablespace in case of Oracle) to be used for collaboration.

    3. Create the database user account or accounts.

  2. Establish appropriate privileges for the user accounts.

    1. Locate the JDBC driver.

    2. Add JDBC driver to the web container's JVM classpath.

    3. Add JVM option:


      -Djdbc.drivers=<JDBC_DRIVER_CLASS>
  3. Configure Community Membership and Configuration.

    1. Configure the communitymc database settings by editing the following configuration file:


      portal-data-dir/portals/portal-id/config/portal.dbadmin
    2. Remove the Java DB specific property in the communitymc.properties file.


      portal-data-dir/portals/portal-id/config/communitymc.properties
  4. Load the schema onto the database.

  5. Edit the jdbc/communitymc JDBC resource to point to the new database.


    Note –

    On some of the web containers, you might need to edit the corresponding JDBC connection pool instead of the JDBC resource.


  6. Configure and install portlet applications.

    1. Locate the portlet applications.


      portal-data-dir/portals/portal-id/portletapps
    2. Configure portlet applications to use the new database by editing tokens_xxx.properties.

    3. Using the administration console or command-line tool provided by the web container, create a JDBC resource for the application using the values from the tokens_xxx.properties.

      Resource JNDI Name

      jdbc/DB_JNDI_NAME

      Resource Type

      javax.sql.DataSource

      Datasource Classname

      DB_DATASOURCE

      User

      DB_USER

      Password

      DB_PASSWORD

      URL

      DB_URL


      Note –

      Some of the web containers might require you to set up a connection pool prior to setting up the JDBC resource.


    4. Undeploy existing portlets that use the Java DB Database as a datastore.

    5. Deploy the newly configured portlet applications.

Procedure: To Prepare the Oracle Database

  1. Prepare Oracle.

    1. Install Oracle 10g Release 2.

    2. Create a database instance named portal (the SID is portal).

    3. Log in to Oracle Enterprise Manager (http://hostname:5500/em) as SYSTEM.

    4. Create a tablespace communitymc_portal-id, for example, communitymc_portal1.


      Note –

      For Wiki, FileSharing, and Surveys portlets, the tablespace and user accounts are created during the deployment of the Oracle configured portlet.


    5. Create a user account with the following information:

      Username:

      portal

      Password:

      portal

      Default Tablespace

      communitymc_portal-id

      Assign roles

      CONNECT and RESOURCE

  2. Prepare the web container for the new database.

    1. Locate the Oracle JDBC driver ojdbc14.jar.

      $ORACLE_HOME/jdbc/lib/ojdbc14.jar

      Alternatively, you can download the JDBC driver from the Oracle web site. Ensure that you download the version that is compatible with the Oracle RDBMS you use.

    2. Using the administration console or the CLI, add the JDBC driver ojdbc14.jar to the JVM classpath by adding the following JVM option: -Djdbc.drivers=oracle.jdbc.OracleDriver

      • For Web Server 7.0:

        1. Log in to Web Server 7 administrator console.

        2. Click the Configuration tab and select the respective configuration.

        3. Click the Java tab and add the location of the ojdbc14.jar to the classpath suffix.

        4. Click the JVM Settings tab.

        5. Replace any existing -Djdbc.drivers entry as below:

          -Djdbc.drivers=oracle.jdbc.OracleDriver

        6. If the -Djdbc.drivers entry does not exist, add the following:

          -Djdbc.drivers=oracle.jdbc.OracleDriver

        7. Click Save.

        8. Click Deploy Pending and deploy the changes.

      • For Application Server 8.2

        1. Log in to Application Server administrator console.

        2. Click Configurations > server-config (Admin Config) > JVM Settings > Path Settings.

        3. Add the location of the ojdbc14.jar to the classpath suffix.

        4. Click the JVM Settings tab.

        5. Replace any existing -Djdbc.drivers entry as below:

          -Djdbc.drivers=oracle.jdbc.OracleDriver

        6. If the -Djdbc.drivers entry does not exist, add the following:

          -Djdbc.drivers=oracle.jdbc.OracleDriver

        7. Click Save.

  3. Configure Community Membership and Configuration.

    1. Edit the communitymc database configuration file.


      % vi portal-data-dir/portals/portal-id/config/portal.dbadmin
      db.driver=oracle.jdbc.OracleDriver
      db.driver.classpath=JDBC-driver-path/ojdbc14.jar
      url=jdbc:oracle:thin:@oracle-host:oracle-port:portal
    2. Remove or comment out the following property from the communitymc configuration file.


      % vi portal-data-dir/portals/portal-id/config/communitymc.properties
      #javax.jdo.option.Mapping=derby
    3. Load community schema onto the Oracle database.


      % cd portal-data-dir/portals/portal-id/config
      % ant -Dportal.id=portal-id -f config.xml configure 
    4. Edit the jdbc/communitymc JDBC resource to point to Oracle.

      • For Web Server 7.0

        1. Log in to the Web Server administration console.

        2. Click the Configuration tab and select the respective configuration.

        3. Click the Java tab > Resources and add a new JDBC resource.

        4. Click jdbc/communitymc and edit the Datasource Class Name.

        5. Set the Datasource classname to oracle.jdbc.pool.OracleDataSource

        6. Set the following properties: user: portal and password: portal.

        7. Delete the following derby properties: Database Name, Port Number, and Server Name.

        8. Add the following property: url: jdbc:oracle:thin:@oracle-host:oracle-port:portal

        9. Click Save.

        10. Click Deploy Pending and deploy the changes.

      • For Application Server

        1. Log in to the Application Server administration console.

        2. Click Resources > JDBC > Connection Pools > communitymcPool

        3. Set the Datasource classname to oracle.jdbc.pool.OracleDataSource

        4. Set the following properties: user: portal and password: portal.

        5. Delete the following derby properties: Database Name, Port Number, and Server Name.

        6. Add the following property: url: jdbc:oracle:thin:@oracle-host:oracle-port:portal

        7. Click Save.

  4. Configure and install Portlet applications.

    1. Locate the filesharing portlet application.


      portal-data-dir/portals/portal-id/portletapps/filesharing
    2. Configure tokens_ora.properties with the information used when initially loading the schema.

      DB_ADMIN_DRIVER_CLASSPATH

      $ORACLE_HOME/jdbc/lib/ojdbc14.jar

      DB_JNDI_NAME

      OracleFilesharingDB

      DB_ADMIN_URL

      jdbc:oracle:thin:@<ORACLE_HOST>:<ORACLE_PORT>:portal

      DB_ADMIN_USER

      <ORACLE_SYSTEM_USER>

      DB_ADMIN_PASSWORD

      <ORACLE_SYSTEM_PASSWORD>

      DB_URL

      jdbc:oracle:thin:@<ORACLE_HOST>:<ORACLE_PORT>:portal

      DB_USER

      portalfs

      DB_PASSWORD

      portalfs

      DB_DRIVER_JAR

      $ORACLE_HOME/jdbc/lib/ojdbc14.jar

      DB_TABLESPACE_NAME

      Filesharingdb_<PORTAL_ID>

      DB_TABLESPACE_DATAFILE

      filesharingdb_<PORTAL_ID>

      DB_TABLESPACE_INIT_SIZE

      100M

    3. Using the administration console or the command-line tool provided by the web container, create the JDBC resource using the values from the tokens_ora.properties.

      • For Web Server 7.0:

        1. Create a JDBC Resource with the following properties:

          Resource JNDI Name

          jdbc/OracleFilesharingDB

          This value must match DB_JNDI_NAME in tokens_ora.properties.

          Datasource Class Name

          oracle.jdbc.pool.OracleDataSource

          This value must match DB_DATASOURCE in tokens_ora.properties.

          User

          portalfs

          This value must match DB_USER in tokens_ora.properties.

          Password

          portalfs

          This value must match DB_PASSWORD in tokens_ora.properties.

          URL

          jdbc:oracle:thin:@<ORACLE_HOST>:<ORACLE_PORT>:portal

          This value must match DB_URL in tokens_ora.properties.

      • For Application Server:

        1. Create a new connection pool with the following properties:

          Name

          OracleFilesharingDBPool

          This value must match DB_JNDI_NAME in tokens_ora.properties.

          Resource Type

          javax.sql.DataSource

          Datasource Class Name

          oracle.jdbc.pool.OracleDataSource

          This value must match DB_DATASOURCE in tokens_ora.properties.

        2. In the Properties list, delete all the default properties and add the following:

          User

          portalfs

          This value must match DB_USER in tokens_ora.properties.

          Password

          portalfs

          This value must match DB_PASSWORD in tokens_ora.properties.

          URL

          jdbc:oracle:thin:@<ORACLE_HOST>:<ORACLE_PORT>:portal

          This value must match DB_URL in tokens_ora.properties.

        3. Create a JDBC resource with the following value:

          Resource JNDI Name

          jdbc/OracleFilesharingDB

          This value must match DB_JNDI_NAME in tokens_ora.properties.

          Connection Pool

          OracleFilesharingDBPool

          This value must match the pool that is created in the previous step.

        4. Add the available target to the Selected list. Click OK.

    4. Undeploy existing portlets that use Java DB Database as a datastore.


      /opt/SUNWportal/bin/psadmin undeploy-portlet -u \
      uid=amadmin,ou=people,dc=acme,dc=com -f password-file \
      -p portal-id -i portal-instance-id
      
    5. Deploy the newly configured file sharing portlet.


      cd portal-data-dir/portals/portal-id/portletapps/filesharing
      ant -Dapp.version=ora

      This ant command performs several tasks, including regenerating the war image, loading the schema onto the database, and deploying the newly built portlet. If the ant command fails and you want to unload the schema, use the following command:

      ant -Dapp.version=ora unconfig_backend


      Note –

      During deployment, provide the Access Manager administrator password.

      If the ant -Dapp.version=ora command fails with the error “Error: Password file does not exist or is not readable,” run ant deploy from the command line to deploy the portlet.


    6. Repeat this procedure for the other portlet applications, such as Surveys and Wiki.