Oracle® Collaboration Suite Installation Guide
10g Release 1 (10.1.1) for Solaris Operating System (SPARC)

Part Number B14483-02

10 Installing Oracle Collaboration Suite in High Availability Environments

This chapter contains the following sections:

  • Section 10.1, "Understanding High Availability Configurations: Overview and Common Requirements"

  • Section 10.2, "Preparing to Install Oracle Collaboration Suite in High Availability Environments"

  • Section 10.3, "Installing Oracle Calendar Server in High Availability Environments"

10.1 Understanding High Availability Configurations: Overview and Common Requirements

This section provides an overview of the high availability configurations supported by Oracle Collaboration Suite.

This section contains the following topics:

  • Section 10.1.1, "Understanding the Common High Availability Principles"

  • Section 10.1.2, "Overview of High Availability Configurations"

  • Section 10.1.3, "Installation Order for High Availability Configurations"

  • Section 10.1.4, "Requirements for High Availability Configurations"

10.1.1 Understanding the Common High Availability Principles

Oracle Collaboration Suite High Availability Solutions installation includes the following primary components:

10.1.1.1 Oracle Collaboration Suite Database Tier

The Oracle Collaboration Suite Database tier is built on Oracle Database 10g, which serves as the repository for the Oracle Collaboration Suite schema information and the OracleAS 10g Release 10.1.2.0.1 Metadata Repository. The default version of the database installed by Oracle Collaboration Suite is 10.1.0.4.

The processes in this tier are the database instance processes and the database listener.

For high availability, Oracle recommends that this database be deployed as an Oracle Real Application Clusters database in an active-active configuration.

Oracle home is installed on each node of the hardware cluster. All Oracle homes use a single shared oraInventory on each node.

The hardware requirements for the Oracle Collaboration Suite Database tier are as follows:

  • Hardware cluster with vendor clusterware or Oracle Cluster Ready Services or both

  • Shared storage for the Oracle Real Application Clusters database files and CRS registry and quorum device. Oracle Database files can be on RAW devices, Network Attached Storage (NAS), OCFS for Linux, or use Oracle Automatic Storage Management (ASM).

  • A virtual IP address for each cluster node

10.1.1.2 Identity Management Service

The Identity Management tier consists of the following:

  • Oracle Internet Directory tier

    The Oracle Internet Directory tier may be colocated with the database tier or the OracleAS Single Sign-On tier, or it may be deployed separately. Colocation means being on the same computer and, in many cases, sharing the same Oracle home.

    The main processes in this tier are the Oracle Internet Directory and Oracle Directory Integration and Provisioning processes.

    For high availability, Oracle recommends that multiple instances of this tier be deployed or that the deployment be designed to fail over the service to any available computer. An active-active deployment of this tier requires a hardware load balancer.

  • OracleAS Single Sign-On tier

    This tier may be colocated with the Oracle Internet Directory tier or deployed separately. Colocation means being on the same computer and, in many cases, sharing the same Oracle home. Also, OracleAS Single Sign-On and Oracle Delegated Administration Services are typically deployed together.

    The main processes in this tier are the Oracle HTTP Server and the OC4J instances hosting the OracleAS Single Sign-On and Oracle Delegated Administration Services applications.

    For high availability, Oracle recommends that multiple instances of this tier be deployed or that the deployment be designed to fail over the service to any available computer. An active-active deployment of this tier requires a hardware load balancer.

Oracle home is on each node of the hardware cluster. All Oracle homes use a single shared oraInventory on each node.

The hardware requirements for the Identity Management tier are as follows:

  • Single node

  • Local storage

  • A load balancer that functions as a front end to the nodes and routes requests to the Identity Management services on both nodes of the cluster

10.1.1.3 Oracle Calendar

Oracle Calendar includes the file system-level database that stores all Calendar-related data. This database is not an Oracle Database, and therefore cannot provide the same high availability features as the Oracle Database.

To ensure an Oracle Collaboration Suite high availability solution, Oracle Calendar Server (one server for each calendar node) is placed on a Cold Failover Cluster because it is a single point of failure. This Cold Failover Cluster installation requires shared storage for the Oracle home and oraInventory directory trees. The Oracle Calendar Server file system database is contained under the Oracle home directory tree. To facilitate a cold failover cluster, a virtual IP address and host are required.

Oracle home and oraInventory are located on a dedicated shared storage of the hardware cluster. This Oracle home should have a separate oraInventory from the Oracle home of other components so that when the shared file system is failed over, the oraInventory is also failed over with it using the same mount point.

The hardware requirements for Oracle Calendar are as follows:

  • Hardware cluster. On Linux, Oracle Cluster Ready Services and the Red Hat Cluster Manager cannot coexist. As a result, failover must be manual, or Oracle Calendar must be placed on a cluster separate from the Oracle Real Application Clusters database.

  • Shared storage for the Calendar Server ORACLE_HOME and oraInventory directory.

  • A virtual IP address.

10.1.1.4 Oracle Collaboration Suite Applications Tier

This tier contains all the Oracle Collaboration Suite Applications components, except Oracle Calendar, that are installed independently on multiple nodes. Typically, the Applications tiers are deployed in the demilitarized zone (DMZ). A load balancer virtual server forms the front end for multiple Applications tiers. Client requests to the Oracle Collaboration Suite Applications tiers are load balanced across the Applications nodes by the load balancer using the load balancer virtual server.

Oracle home is installed on each node of the hardware cluster. All Oracle homes use a single shared oraInventory on each node.

The hardware requirements for Applications tier are as follows:

  • Single node

  • Local storage

  • Load Balancer virtual server

10.1.2 Overview of High Availability Configurations

This section provides a brief overview of the typical high availability configurations supported by Oracle Collaboration Suite. For a detailed description of the configurations, refer to the Oracle Collaboration Suite Deployment Guide.

Oracle Collaboration Suite supports the following types of high availability configurations:

  • Single Cluster Architecture

  • Colocated Identity Management Architecture

  • Distributed Identity Management Architecture

Section 10.1.2.4 summarizes the differences among the high availability configurations.

10.1.2.1 Single Cluster Architecture

This is a minimal configuration where all Oracle Collaboration Suite High Availability components, Oracle Collaboration Suite Database, Identity Management, Oracle Calendar, and Applications, are installed on a single cluster. This architecture is not an out-of-box solution and requires multiple installations of Oracle Collaboration Suite and manual postinstallation configuration.

In this architecture, both Identity Management and Applications are configured as an active-active high availability configuration. The high availability configuration for Oracle Collaboration Suite Database is also active-active.

A Single Cluster Architecture configuration has the following characteristics:

  • Active nodes: All the nodes in a Single Cluster Architecture configuration are active. This means that all the nodes can handle requests. If a node fails, the remaining nodes handle all the requests.

  • Shared disk: Typically, you install Oracle Collaboration Suite on the shared disk. All nodes have access to the shared disk, but only one node mounts the shared disk at any given time. However, Oracle Collaboration Suite Database is not on the shared disk that only one node mounts at a time: all nodes running the Oracle Real Application Clusters database must have concurrent access to its shared storage.

  • Hardware cluster: This can be vendor-specific clusterware, Oracle Cluster Ready Services, or both.

  • Load balancer. You need a load balancer to load-balance the requests to all the active nodes. The load balancer is only required for Identity Management and Applications-related requests. The requests for Applications tier are routed through the virtual IP address and the requests for Oracle Real Application Clusters database are routed through Oracle Net using the virtual IP addresses for the cluster nodes.

Figure 10-1 illustrates a typical Single Cluster Architecture configuration.

Figure 10-1 Typical Single Cluster Architecture Configuration

Single Cluster Architecture Configuration

Refer to Chapter 11 for details on Single Cluster Architecture installation.

10.1.2.2 Colocated Identity Management Architecture

This architecture separates the Oracle Collaboration Suite Database tier and the Identity Management tier rather than sharing nodes as in the Single Cluster architecture. This architecture is not an out-of-box solution and requires multiple installations of Oracle Collaboration Suite and manual postinstallation configuration.

As the name suggests, Colocated Identity Management Architecture is used for installing Identity Management components in a high availability configuration.

In this architecture, both Identity Management and Oracle Collaboration Suite Database are configured as an active-active high availability configuration.

A Colocated Identity Management Architecture configuration has the following characteristics:

  • Active nodes: All the nodes are active and handle requests. If a node fails, the remaining nodes handle all the requests.

  • Shared disk: Typically, you install Oracle Collaboration Suite on the shared disk. All nodes have access to the shared disk, but only one node mounts the shared disk at any given time. However, Oracle Collaboration Suite Database is not on the shared disk that only one node mounts at a time: all nodes running the Oracle Real Application Clusters database must have concurrent access to its shared storage.

  • Hardware cluster. This can either be vendor-specific clusterware or Oracle Cluster Ready Services if Oracle Real Application Clusters is used.

  • Load balancer. A hardware load balancer is a front end to the nodes with the Identity Management tier and load balances the Identity Management traffic.

  • Nonclustered servers. You need multiple nonclustered servers for the Identity Management tier.

  • Virtual IP and host name: You must set up a virtual IP address and host name for the nodes. During installation, you provide the virtual host name. Clients use the virtual host name to access Oracle Collaboration Suite in an Active Failover Cluster configuration (for example, the virtual host name is used in URLs). The virtual IP address and host name point to an active node. If the active node fails, they switch to point to another active node.

Figure 10-2 illustrates a typical Colocated Identity Management Architecture configuration.

Figure 10-2 Typical Colocated Identity Management Architecture Configuration

Colocated Identity Management Architecture

Refer to Chapter 12 for details about Colocated Identity Management Architecture installation.

10.1.2.3 Distributed Identity Management Architecture

This configuration is similar to Colocated Identity Management Architecture except that the Identity Management components, Oracle Internet Directory and Oracle Application Server Single Sign-On, are distributed across multiple nonclustered servers in a demilitarized zone.

In this architecture, Oracle Internet Directory and Oracle Application Server Single Sign-On share an active-active high availability configuration. However, the high availability configuration for Oracle Collaboration Suite Database can be either active-active or active-passive.

A Distributed Identity Management Architecture configuration has the following characteristics:

  • Active nodes: The active node handles all the requests.

  • Shared disk: Typically, you install Oracle Collaboration Suite on the shared disk. The active and passive nodes have access to the shared disk, but only one node (the active node) mounts the shared disk at any given time.

  • Hardware cluster. This can either be vendor-specific clusterware or Oracle Cluster Ready Services if Oracle Real Application Clusters is used.

  • Load balancer. You need a load balancer to load-balance the requests to all the active nodes. During installation, you enter the virtual server name configured on your load balancer. During run time, clients use the virtual server name to access the OracleAS Cluster (Identity Management) configuration. The load balancer then directs the request to the appropriate node.

  • Nonclustered servers. You need multiple nonclustered servers both for Oracle Internet Directory as well as Oracle Application Server Single Sign-On.

  • Virtual IP and host name: You must set up a virtual IP address and host name for the nodes. During installation, you provide the virtual host name. Clients use the virtual host name to access Oracle Collaboration Suite in an Active Failover Cluster configuration (for example, the virtual host name is used in URLs). The virtual IP address and host name point to an active node. If the active node fails, they switch to point to another active node.

Figure 10-3 illustrates a typical Distributed Identity Management Architecture configuration.

Figure 10-3 Typical Distributed Identity Management Architecture Configuration

Distributed Identity Management Architecture

Refer to Chapter 13 for details about Distributed Identity Management Architecture installation.

10.1.2.4 Summary of Differences

Table 10-1 summarizes the differences among the high availability configurations.

Table 10-1 Differences Among the High Availability Configurations


                        Single Cluster    Colocated Identity          Distributed Identity
                        Architecture      Management Architecture     Management Architecture
Node configuration      Active-Active     Active-Active               Active-Active
Hardware cluster        Yes               Yes                         Yes
Virtual host name       No                Yes                         Yes
Load balancer           Yes               Yes                         Yes
Shared storage          Yes               Yes                         Yes
Nonclustered servers    No                Yes (combined for the       Yes
                                          Identity Management tier)

10.1.3 Installation Order for High Availability Configurations

For all high availability configurations, you install the components in the following order:

  1. Oracle Collaboration Suite Database

  2. Identity Management components

    If you are distributing the Identity Management components, you install them in the following order:

    1. Oracle Internet Directory and Oracle Directory Integration and Provisioning

    2. Oracle Application Server Single Sign-On and Oracle Delegated Administration Services

  3. Oracle Collaboration Suite application components

10.1.4 Requirements for High Availability Configurations

If you plan to install Oracle Collaboration Suite in high availability environments, remember the following requirements:

  • Database requirement

    You need an existing Oracle Real Application Clusters database. You will install the OracleAS Metadata Repository on this database using the Metadata Repository Creation Assistant.

  • OracleAS Metadata Repository requirement

    When you perform the installation on the first node, you must specify an OracleAS Metadata Repository that is not registered with any Oracle Internet Directory. The installer checks for this. If the installer finds that the OracleAS Metadata Repository is already registered with an Oracle Internet Directory, then it assumes that you are installing on subsequent nodes, and that you want to join the cluster that was created when you installed on the first node. It prompts you for the existing cluster name and the connect information for the Oracle Internet Directory.

  • Components requirement

    Because the installer clusters the components in an Identity Management configuration, you must select the same components in the Select Configuration Options screen for all the nodes in the cluster.

    For example, if you select Oracle Internet Directory, OracleAS Single Sign-On, and Oracle Delegated Administration Services for the installation on node 1, then you must select the same set of components in subsequent installations.

    Clustering will fail if you select different components in each installation.

The requirements common to all high availability configurations are:

  • Section 10.1.4.1, "Check Minimum Number of Nodes"

  • Section 10.1.4.2, "Check That Groups Are Defined Identically on All Nodes"

  • Section 10.1.4.3, "Check the Properties of the oracle User"

  • Section 10.1.4.4, "Check for Previous Oracle Installations on All Nodes"

In addition to these common requirements, each configuration has its own specific requirements. Refer to the individual chapters for details.

Note:

In addition to the requirements specific to the high availability configuration that you plan to use, you still must meet the requirements listed in Chapter 2.

10.1.4.1 Check Minimum Number of Nodes

You need at least two nodes in a high availability configuration. If a node fails for any reason, the second node takes over.

10.1.4.2 Check That Groups Are Defined Identically on All Nodes

Check that the /etc/group file on all nodes in the cluster contains the operating system groups that you plan to use. You should have one group for the oraInventory directory, and one or two groups for database administration. The group names and the group IDs must be the same for all nodes.
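The name and GID comparison can be scripted. The following is a minimal sketch, not part of the product: the group names (oinstall, dba) and node names are examples, and the helper simply parses /etc/group-format input.

```shell
# Extract the GID for a named group from /etc/group-format input on stdin.
gid_of() {
    awk -F: -v g="$1" '$1 == g { print $3 }'
}

# Illustrative cross-node comparison (node names are placeholders):
# gid1=$(ssh node1 cat /etc/group | gid_of oinstall)
# gid2=$(ssh node2 cat /etc/group | gid_of oinstall)
# [ "$gid1" = "$gid2" ] || echo "oinstall GID differs between nodes"
```

Run the same comparison for each group (oraInventory group and database administration groups) and each pair of nodes.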

Refer to Section 2.6 for details.

10.1.4.3 Check the Properties of the oracle User

Check that the oracle operating system user (the user that you log in as to install Oracle Collaboration Suite) has the following properties:

  • It belongs to the oinstall group and to the osdba group. The oinstall group is for the oraInventory directory, and the osdba group is a database administration group. Refer to Section 2.6 for details.

  • It has write privileges on remote directories.
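Group membership can be verified with the id command. A minimal sketch, assuming the conventional group names (adjust oinstall and dba to the names used at your site):

```shell
# Return success if the named user belongs to the named group.
user_in_group() {
    id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

# Illustrative checks for the oracle user:
# user_in_group oracle oinstall || echo "oracle is not in oinstall"
# user_in_group oracle dba      || echo "oracle is not in dba"
```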

10.1.4.4 Check for Previous Oracle Installations on All Nodes

Check that all the nodes where you want to install Oracle Collaboration Suite in a high availability configuration do not have existing oraInventory directories.

You must do this because you want the installer to prompt you to enter a location for the oraInventory directory. The location of the existing oraInventory directory might not be ideal for the Oracle Collaboration Suite instance that you are about to install. For example, you want the oraInventory directory to be on the shared storage. If the installer finds an existing oraInventory directory, it will automatically use it and will not prompt you to enter a location.

To check if a node contains an oraInventory directory that could be detected by the installer:

  1. On each node, check for the /var/opt/oracle/oraInst.loc file.

    If a node does not contain the file, then it does not have an oraInventory directory that will be used by the installer. You can check the next node.

  2. For nodes that contain the oraInst.loc file, rename the oracle directory to something else so that the installer does not see it. The installer then prompts you to enter a location for the oraInventory directory.

    The following example renames the oracle directory to oracle.orig (you must be root to do this):

    # su
    Password: root_password
    # cd /var/opt
    # mv oracle oracle.orig
    
    

When you run the installer to install Oracle Collaboration Suite, the installer creates a new /var/opt/oracle directory and new files in it. You might need both oracle and oracle.orig directories. Do not delete either directory or rename one over the other.

The installer uses the /var/opt/oracle directory and its files. Be sure that the right oracle directory is in place before running the installer (for example, if you are deinstalling or expanding a product).
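The check in step 1 can be scripted on each node. A minimal sketch (the helper takes an optional root prefix so it can be exercised safely; the Solaris pointer-file path is taken from the step above):

```shell
# Report whether a given root contains an oraInventory pointer file.
# On Solaris, oraInst.loc lives under /var/opt/oracle.
has_orainst_loc() {
    [ -f "${1:-}/var/opt/oracle/oraInst.loc" ]
}

# Illustrative per-node check:
if has_orainst_loc ""; then
    echo "oraInst.loc found; rename /var/opt/oracle before installing"
else
    echo "no oraInventory pointer; the installer will prompt for a location"
fi
```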

10.2 Preparing to Install Oracle Collaboration Suite in High Availability Environments

This section covers the following topics:

  • Section 10.2.1, "Preinstallation Steps"

  • Section 10.2.2, "About Oracle Internet Directory Passwords"

  • Section 10.2.3, "About Configuring SSL and Non-SSL Ports for Oracle HTTP Server"

10.2.1 Preinstallation Steps

Before installing an Identity Management configuration, you must perform the following tasks:

10.2.1.1 Use the Same Path for the Oracle Home Directory (Recommended)

For all the nodes that will be running Identity Management components, use the same full path for the Oracle home. This practice is recommended, but not required.

10.2.1.2 Synchronize Clocks on All Nodes

Synchronize the system clocks on all nodes.
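One way to verify synchronization is to compare UTC epoch seconds across nodes and flag skew beyond a tolerance. A minimal sketch (the node names and the 5-second tolerance are examples; a time service such as NTP is the usual long-term mechanism):

```shell
# Return success if two epoch-second readings differ by at most $3 seconds.
within_skew() {
    local diff=$(( $1 - $2 ))
    [ "${diff#-}" -le "$3" ]
}

# Illustrative cross-node comparison:
# for node in node1 node2; do
#     remote=$(ssh "$node" date -u +%s)
#     within_skew "$remote" "$(date -u +%s)" 5 || echo "$node clock skew too large"
# done
```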

10.2.1.3 Configure Virtual Server Names and Ports for the Load Balancer

Configure your load balancer with two virtual server names and associated ports:

  • Configure a virtual server name for LDAP connections. For this virtual server, you must configure two ports: one for SSL and one for non-SSL connections.

    Note:

    Ensure that the same ports that you configured for the LDAP virtual server are available on the nodes on which you will be installing Oracle Internet Directory.

    The installer will configure Oracle Internet Directory to use the same port numbers that are configured on the LDAP virtual server. In other words, Oracle Internet Directory on all the nodes and the LDAP virtual server will use the same port number.

  • Configure a virtual server name for HTTP connections. For this virtual server, you also must configure two ports: one for SSL and one for non-SSL connections.

    Note:

    The ports for the HTTP virtual server can be different from the Oracle HTTP Server Listen ports.

The installer will prompt you for the virtual server names and port numbers.

In addition, check that the virtual server names are associated with IP addresses and are part of your DNS. The nodes that will be running Oracle Collaboration Suite must be able to access these virtual server names.
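Resolution of the virtual server names can be checked from each node. A minimal sketch (the name login.example.com is a placeholder for your virtual server name):

```shell
# Return success if the given host name resolves via the system resolver.
resolves() {
    getent hosts "$1" > /dev/null 2>&1
}

# Illustrative check of an LDAP or HTTP virtual server name:
# resolves login.example.com || echo "virtual server name does not resolve"
```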

10.2.1.4 Configure Your LDAP Virtual Server to Direct Requests to Node 1 Initially

Note that this procedure applies only to the LDAP virtual server configured on your load balancer. This does not apply to the HTTP virtual server configured on your load balancer.

Before you start the installation, configure your LDAP virtual server to direct requests to node 1 only. After you complete an installation on a node, then you can add that node to the virtual server.

For example, if you have three nodes:

  1. Configure the LDAP virtual server to direct requests to node 1 only.

  2. Install Identity Management components on node 1.

  3. Install Identity Management components on node 2.

  4. Add node 2 to the LDAP virtual server.

  5. Install Identity Management components on node 3.

  6. Add node 3 to the LDAP virtual server.

10.2.1.5 Set Up Cookie Persistence on the Load Balancer

On your load balancer, set up cookie persistence for HTTP traffic. Specifically, set up cookie persistence for URIs starting with /oiddas/. This is the URI for Oracle Delegated Administration Services. If your load balancer does not allow you to set cookie persistence at the URI level, then set the cookie persistence for all HTTP traffic. In either case, set the cookie to expire when the browser session expires.

Refer to your load balancer documentation for details.

10.2.2 About Oracle Internet Directory Passwords

In Identity Management configurations, you install Oracle Internet Directory on multiple nodes, and in each installation, you enter the instance password in the Specify Instance Name and ias_admin Password screen.

The password specified in the first installation is used as the password for the cn=orcladmin and orcladmin users, not just in the first Oracle Internet Directory installation, but in all Oracle Internet Directory installations in the cluster.

This means that to access the Oracle Internet Directory on any node, you must use the password that you entered in the first installation. You cannot use the passwords that you entered in subsequent installations.

Accessing the Oracle Internet Directory includes:

  • Logging in to Oracle Delegated Administration Services (URL: http://hostname:port/oiddas)

  • Logging in to Oracle Application Server Single Sign-On (URL: http://hostname:port/pls/orasso)

  • Connecting to Oracle Internet Directory using the Oracle Directory Manager

You still need the passwords that you entered in the individual installations for logging in to Application Server Control.

10.2.3 About Configuring SSL and Non-SSL Ports for Oracle HTTP Server

When you are installing Identity Management configurations, the installer displays the Specify HTTP Load Balancer Host and Listen Ports screen.

This screen has the following two sections:

  • In the load balancer section, you specify the HTTP virtual server name and port number of the load balancer. You also indicate whether the port is for SSL or non-SSL requests.

  • In the Oracle HTTP Server section, you specify the port number that you want for the Oracle HTTP Server Listen port. You also indicate whether the port is for SSL or non-SSL requests.

    The virtual server and the Oracle HTTP Server Listen port can use different port numbers.

You can use this screen to set up the type of communication (SSL or non-SSL) between client, load balancer, and Oracle HTTP Server. Three cases are possible:

  • Case 1: Communications between clients and the load balancer use HTTP, and communications between the load balancer and Oracle HTTP Server also use HTTP. Refer to Section 10.2.3.1.

  • Case 2: Communications between clients and the load balancer use HTTPS (secure HTTP), and communications between the load balancer and Oracle HTTP Server also use HTTPS. Refer to Section 10.2.3.2.

  • Case 3: Communications between clients and the load balancer use HTTPS, but communications between the load balancer and Oracle HTTP Server use HTTP. Refer to Section 10.2.3.3.

Note:

Because the values you specify in this dialog override the values specified in the staticports.ini file, you should not specify port numbers for the Oracle HTTP Server Listen port in the staticports.ini file.

10.2.3.1 Case 1: Client and the Load Balancer Use HTTP and the Load Balancer and Oracle HTTP Server Also Use HTTP for Communication

To set up this type of communication, specify the following values:

HTTP Listener: Port: Enter the port number that you want to use as the Oracle HTTP Server Listen port. This will be the value of the Listen directive in the httpd.conf file.

HTTP Listener: Enable SSL: Do not select this option. The installer tries the default port number for the SSL port.

HTTP Load Balancer: Hostname: Enter the name of the virtual server on the load balancer configured to handle HTTP requests.

HTTP Load Balancer: Port: Enter the port number that the HTTP virtual server listens on. This will be the value of the Port directive in the httpd.conf file.

HTTP Load Balancer: Enable SSL: Do not select this option.

Table 10-2 lists the screen and configuration file values.

Table 10-2 Case 1: Screen and Configuration File Values

Values in Screen:

  HTTP Listener: Port: 8000
  HTTP Listener: Enable SSL: Unchecked
  HTTP Load Balancer: Port: 80
  HTTP Load Balancer: Enable SSL: Unchecked

Resulting Values in Configuration Files:

  In httpd.conf:
    Port 80
    Listen 8000

  In ssl.conf:
    Port <default port number assigned by installer>
    Listen <default port number assigned by installer>

10.2.3.2 Case 2: Client and the Load Balancer Use HTTPS and the Load Balancer and Oracle HTTP Server Also Use HTTPS for Communication

To set up this type of communication, specify the following values:

HTTP Listener: Port: Enter the port number that you want Oracle HTTP Server to listen on. This will be the value of the Listen directive in the ssl.conf file.

HTTP Listener: Enable SSL: Select this option.

HTTP Load Balancer: Hostname: Enter the name of the virtual server on the load balancer configured to handle HTTPS requests.

HTTP Load Balancer: Port: Enter the port number that the HTTP virtual server listens on. This will be the value of the Port directive in the ssl.conf file.

HTTP Load Balancer: Enable SSL: Select this option.

In opmn.xml, the installer sets the ssl-enabled line in the Oracle HTTP Server section to true.

Table 10-3 lists the screen and resulting configuration file values.

Table 10-3 Case 2: Screen and Configuration File Values

Values in Screen:

  HTTP Listener: Port: 90
  HTTP Listener: Enable SSL: Checked
  HTTP Load Balancer: Port: 443
  HTTP Load Balancer: Enable SSL: Checked

Resulting Values in Configuration Files:

  In httpd.conf:
    Port <default port number assigned by installer>
    Listen <default port number assigned by installer>

  In ssl.conf:
    Port 443
    Listen 90

10.2.3.3 Case 3: Client and the Load Balancer Use HTTPS and the Load Balancer and Oracle HTTP Server Use HTTP for Communication

To set up this type of communication, specify the following values:

HTTP Listener: Port: Enter the port number that you want Oracle HTTP Server to listen on. This will be the value of the Listen directive in the httpd.conf file.

HTTP Listener: Enable SSL: Do not select this option.

HTTP Load Balancer: Hostname: Enter the name of the virtual server on the load balancer configured to handle HTTPS requests.

HTTP Load Balancer: Port: Enter the port number that the HTTP virtual server listens on. This will be the value of the Port directive in the httpd.conf file.

HTTP Load Balancer: Enable SSL: Select this option.

The installer will change the following lines:

  • In opmn.xml, the installer sets the ssl-enabled line in the Oracle HTTP Server section to true.

  • In httpd.conf, the installer adds the following lines:

    LoadModule certheaders_module libexec/mod_certheaders.so
    SimulateHttps on
    
    

Table 10-4 lists the screen and configuration file values.

Table 10-4 Case 3: Screen and Configuration File Values

Values in Screen:

  HTTP Listener: Port: 9000
  HTTP Listener: Enable SSL: Unchecked
  HTTP Load Balancer: Port: 443
  HTTP Load Balancer: Enable SSL: Checked

Resulting Values in Configuration Files:

  In httpd.conf:
    Port 443
    Listen 9000

  In ssl.conf:
    Port <default port number assigned by installer>
    Listen <default port number assigned by installer>

10.3 Installing Oracle Calendar Server in High Availability Environments

This section describes how to install Oracle Calendar Server in Cold Failover configurations.

10.3.1 High Availability Configuration for Oracle Calendar

In the Oracle Collaboration Suite high availability architectures, a Cold Failover Cluster configuration is used for Oracle Calendar. In a Cold Failover Cluster configuration, you have an active node, a passive node, and shared storage that either node can access.

During normal operation, the active node runs Oracle Calendar server processes and manages requests from clients. If the active node fails, then a failover event occurs. The passive node takes over and becomes the active node. It mounts the shared storage and runs the processes.

10.3.2 Preinstallation Steps for Installing Oracle Calendar in High Availability Environments

Before installing Oracle Calendar Server in a high availability environment, perform the following tasks:

10.3.2.1 Check That Clusterware Is Running

For Cold Failover Cluster, each node in a cluster must be running hardware vendor clusterware. If you are running Oracle Cluster Ready Services, you still need the clusterware from the hardware vendor. Running Oracle Cluster Ready Services without the hardware vendor clusterware is not supported for Cold Failover Cluster.

To check that the clusterware is running, use the command appropriate for your clusterware.
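The command depends on the vendor: for example, Sun Cluster provides scstat and Veritas Cluster Server provides hastatus (verify the exact invocation against your clusterware documentation). A small sketch that first confirms the tool is installed:

```shell
# Return success if a clusterware status command is on the PATH.
cluster_cmd_available() {
    command -v "$1" > /dev/null 2>&1
}

# Illustrative: pick the command for your clusterware (examples only):
# cluster_cmd_available scstat   && scstat -g        # Sun Cluster
# cluster_cmd_available hastatus && hastatus -sum    # Veritas Cluster Server
```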

10.3.2.2 Map the Virtual Host Name and Virtual IP Address

Each node in an Oracle Calendar Server Cold Failover Cluster configuration is associated with its own physical Internet Protocol (IP) address. In addition, the active node in the cluster is associated with a virtual host name and virtual IP address. This enables clients to access the Cold Failover Cluster using the virtual host name.

Virtual host names and virtual IP addresses can be any valid host name and IP address within the subnet that contains the hardware cluster.

Note:

Map the virtual host name and virtual IP address only to the active node. Do not map them to both the active and secondary nodes at the same time. Map the virtual host name and IP address to the secondary node only when the current active node fails over; the secondary node then becomes the active node.

The following example configures a virtual host name called vhost.mydomain.com, with a virtual IP of 138.1.12.191:

Note:

Before attempting this procedure, ask the system or network administrator to review all the required steps. The procedure reconfigures the network settings on the cluster nodes and may vary with different network implementations.
  1. Become the root user.

    prompt> su
    Password: root_password
    
    
  2. Determine the public network interface.

    # ifconfig -a
    lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    lo0:1: flags=1008849<UP,LOOPBACK,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 8232 index 1
            inet 172.16.193.1 netmask ffffffff
    ge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.13.146 netmask fffffc00 broadcast 138.1.15.255
            ether 8:0:20:fd:1:23
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
            ether 8:0:20:fd:1:23
    hme0:2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.194.6 netmask fffffffc broadcast 172.16.194.7
    ge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
            inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
            ether 8:0:20:fd:1:23
    
    

    From the output, ge0 is the public network interface: it is neither a loopback interface nor a private interface.

  3. Add the virtual IP to the ge0 network interface.

    # ifconfig ge0 addif 138.1.12.191 up
    
    

    In the preceding command, ge0 and the IP address, 138.1.12.191, are values specific to this example. Replace them with values appropriate for your cluster.

  4. Check that the new interface was added:

    # ifconfig -a
    lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    lo0:1: flags=1008849<UP,LOOPBACK,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 8232 index 1
            inet 172.16.193.1 netmask ffffffff
    ge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.13.146 netmask fffffc00 broadcast 138.1.15.255
            ether 8:0:20:fd:1:23
    ge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.12.191 netmask ffff0000 broadcast 138.1.255.255
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
            ether 8:0:20:fd:1:23
    hme0:2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.194.6 netmask fffffffc broadcast 172.16.194.7
    ge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
            inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
            ether 8:0:20:fd:1:23
    
    

    The virtual IP appears in the ge0:1 entry. During installation, when you enter vhost.mydomain.com as the virtual host name in the Specify Virtual Hostname screen, the installer checks that vhost.mydomain.com is a valid interface.
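
The manual inspection in step 2 can also be scripted. The sketch below abbreviates the ifconfig -a listing shown above (on a live node, capture the real output with ifconfig -a) and selects the first interface that is neither loopback nor private:

```shell
# Sketch: pick the public interface from `ifconfig -a` output.
# The sample below abbreviates the listing in step 2; on a live node use:
#   ifconfig_output=$(ifconfig -a)
ifconfig_output='lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
ge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 138.1.13.146 netmask fffffc00 broadcast 138.1.15.255
hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
        inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
ge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255'

# Interface header lines start in column 1; skip loopback and private ones.
public_if=$(echo "$ifconfig_output" | awk -F: '/^[a-z]/ && !/LOOPBACK/ && !/PRIVATE/ {print $1; exit}')
echo "public interface: $public_if"    # prints "public interface: ge0"
```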

On Failover

If the active node fails, then the secondary node takes over. If you do not have a clusterware agent to map the virtual IP from the failed node to the secondary node, then you must do it manually. Remove the virtual IP mapping from the failed node and map it to the secondary node.

  1. On the failed node, become superuser and remove the virtual IP.

    If the node failed completely (that is, it does not boot up), you can skip this step and go to Step 2. If the node failed partially (for example, because of disk or memory problems) and you can still ping it, you must perform this step.

    prompt> su
    Password: root_password
    # ifconfig ge0 removeif 138.1.12.191
    
    

    "ge0" and the IP address are values specific to this example. Replace them with values appropriate for your cluster.

  2. On the secondary node, add the virtual IP to the ge0 network interface.

    # ifconfig ge0 addif 138.1.12.191 up
    
    

    In the preceding command, ge0 and the IP address, 138.1.12.191, are values specific to this example. Replace them with values appropriate for your cluster.

  3. On the secondary node, check that the new interface was added:

    # ifconfig -a
    ...
    ge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.12.191 netmask ffff0000 broadcast 138.1.255.255
    ...
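
The failover steps above can be wrapped in a small helper script. This is a hypothetical sketch: ge0 and 138.1.12.191 are this section's example values, and with DRYRUN=1 the script only prints the commands so they can be reviewed; set DRYRUN=0 and run the relevant half as root on each node to apply them.

```shell
# Hypothetical failover helper. ge0 and 138.1.12.191 are this section's
# example values; replace them for your cluster. DRYRUN=1 prints the
# commands instead of executing them.
DRYRUN=1
IF=ge0
VIP=138.1.12.191

run() {
  if [ "$DRYRUN" = 1 ]; then echo "$*"; else "$@"; fi
}

# On the failed node (only if it is still reachable):
run ifconfig "$IF" removeif "$VIP"
# On the secondary node:
run ifconfig "$IF" addif "$VIP" up
```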
    

10.3.2.3 Set Up a File System That Can Be Mounted from Both Nodes

Although the hardware cluster has shared storage, you must create a file system on this shared storage such that both nodes of the Oracle Calendar Server Cold Failover Cluster can mount this file system. You will use this file system for the following directories:

  • Oracle home directory for the Infrastructure

  • The oraInventory directory

For more information about disk space requirements, refer to Section 2.1.

If you are running a volume manager on the cluster to manage the shared storage, refer to the volume manager documentation for steps to create a volume. Once a volume is created, you can create the file system on that volume.

If you do not have a volume manager, you can create a file system on the shared disk directly. Ensure that the hardware vendor supports this, that the file system can be mounted from either node of the Oracle Calendar Cold Failover Cluster, and that the file system is repairable from either node if a node fails.

To check that the file system can be mounted from either node, perform the following steps:

  1. Set up and mount the file system from node 1.

  2. Unmount the file system from node 1.

  3. Mount the file system from node 2 using the same mount point that you used in Step 1.

  4. Unmount the file system from node 2, and mount it on node 1, because you will be running the installer from node 1.

Note:

Only one node of the Oracle Calendar Cold Failover Cluster should mount the file system at any given time. File system configuration files on all nodes of the cluster should not include an entry for the automatic mount of the file system upon a node restart or execution of a global mount command. For example, on Solaris, do not include an entry for this file system in the /etc/vfstab file.
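
As a quick check of the note above, you can scan the mount-point column of the Solaris /etc/vfstab file for the shared file system. In this sketch, the mount point /oracle/shared and the sample vfstab content are assumptions for illustration; on a live node, read the real file instead:

```shell
# Sketch: confirm the shared file system has no automatic-mount entry in
# /etc/vfstab. The mount point /oracle/shared and the sample vfstab content
# are assumptions for illustration; on a live node read the real file:
#   vfstab_content=$(cat /etc/vfstab)
mountpoint=/oracle/shared
vfstab_content='/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
/dev/dsk/c0t0d0s4 /dev/rdsk/c0t0d0s4 /var ufs 1 no -'

# Field 3 of a vfstab entry is the mount point.
if echo "$vfstab_content" | awk '{print $3}' | grep -qx "$mountpoint"; then
  fs_check="remove-entry"
  echo "remove the $mountpoint entry from /etc/vfstab"
else
  fs_check="ok"
  echo "ok: no automatic mount entry for $mountpoint"
fi
```

Run this on every node of the cluster, because an automatic-mount entry on any node violates the single-mount requirement.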

10.3.2.4 Review Recommendations for Automatic Storage Management (ASM)

If you plan to use ASM instances for the OracleAS Metadata Repository database, consider these recommendations:

  • If you plan to use ASM with Oracle Database instances from multiple database homes on the same node, then you should run the ASM instance from an Oracle home that is different from the database homes.

  • The ASM home should be installed on every cluster node. This prevents the accidental removal of ASM instances that are in use by databases from other homes during the deinstallation of a database Oracle home.

10.3.3 Installing Oracle Calendar

Figure 10-4 shows an Oracle Calendar Server high availability configuration.

Figure 10-4 Oracle Calendar High Availability Configuration

Description of cfc.gif follows
Description of the illustration cfc.gif

Figure 10-4 depicts:

  • Two nodes running clusterware.

  • Storage devices local to each node.

  • Storage device that can be accessed by both nodes. You install Infrastructure on this shared storage device.

During normal operation, one node ("node 1") acts as the active node. It mounts the shared storage to access Infrastructure, runs Oracle Collaboration Suite 10g Infrastructure processes, and handles all requests.

If the active node goes down for any reason, the clusterware fails over Oracle Collaboration Suite Infrastructure processes to the other node ("node 2"), which now becomes the active node. It mounts the shared storage, runs the processes, and handles all requests.

These nodes appear as one computer to clients through the use of a virtual address. To access the Oracle Collaboration Suite 10g Infrastructure, clients, including Applications tier components, use the virtual address associated with the cluster. The virtual address is associated with the active node (node 1 during normal operation, node 2 if node 1 goes down). Clients do not need to know which node (node 1 or node 2) is servicing requests.

You use the virtual host name in URLs that access the Infrastructure. For example, if vhost.mydomain.com is the virtual host name, the URLs for the Oracle HTTP Server and the Application Server Control would look like the following:

URL for: Example URL
Oracle HTTP Server, Welcome page http://vhost.mydomain.com:7777
Oracle HTTP Server, secure mode https://vhost.mydomain.com:4443
Application Server Control http://vhost.mydomain.com:1156

To install Oracle Calendar in an Oracle Application Server Cold Failover Cluster configuration, perform the steps listed in the following topics:

10.3.3.1 Installing Oracle Calendar Server in a Cold Failover Cluster Configuration

Before installing Oracle Calendar in a Cold Failover Cluster configuration, make sure that the virtual IP address and virtual host name are enabled on the installation node.

To install Oracle Calendar in a Cold Failover Cluster configuration, follow the steps listed in Table 10-5.

Table 10-5 Installing Oracle Calendar Server in a Cold Failover Cluster Configuration

Step Screen Action
1. None Start the installer. Refer to Section 3.4, "Starting Oracle Universal Installer".
2. Select Installation Method Select Advanced Installation.

Note: Refer to Section 1.7 for detailed information on Basic and Advanced installations.

Click Next.

3. Specify Inventory Directory and Credentials

(Advanced installation only)

This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the full path for the inventory directory: Specify a directory for the installer files. This directory must be different from the Oracle home directory for the product files.

Example: /private/oracle/oraInventory

Click OK.

4. UNIX Group Name

(Advanced installation only)

This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the name of the operating system group to have write permission for the inventory directory.

Example: dba

Click Next.

5. Run orainstRoot.sh

(Advanced installation only)

This screen appears only if this is the first installation of any Oracle product on this computer.

Run the orainstRoot.sh script in a different shell as the root user. The script is located in the oraInventory directory.

Click Continue.

6. Specify File Locations

(Advanced installation only)

Enter the full path of the Source directory in the Path field for Source, if required.

Name: Enter a name to identify this Oracle home. The name cannot contain spaces, and has a maximum length of 16 characters.

Example: OH_apptier_10_1_1

Destination Path: Enter the full path to the destination directory. This is the Oracle home. If the directory does not exist, the installer creates it. To create the directory beforehand, create it as the oracle user; do not create it as the root user.

Example: /private/oracle/OH_apptier_10_1_1

Click Next.

7. Specify Hardware Cluster Installation Mode

(Advanced installation only)

This screen appears only if the computer is part of a hardware cluster.

When you are installing Oracle Collaboration Suite Applications, select Local Installation because hardware cluster installation is not supported for Oracle Collaboration Suite Applications.

Click Next.

8. Select a Product to Install

(Advanced installation only)

Select Oracle Collaboration Suite Applications 10.1.1.0.2 to install Oracle Collaboration Suite Applications.

If you need to install additional languages, click Product Languages. Refer to Section 1.8 for details.

Click Next.

9. Product-specific Prerequisite Checks

(Advanced installation only)

The installer verifies requirements such as memory, disk space, and operating system version. If any check fails, make the required changes and click Retry. Refer to Chapter 2 for the list of hardware and software requirements.

Click Next.

10. Select Components to Configure

(Advanced installation only)

Select the components that you would like to configure during the installation. The selected components will automatically start at the end of the installation.

Note: You can also configure any component after installation. Refer to Section 8.7 for more information.

Click Next.

11. Register with Oracle Internet Directory

(Advanced installation only)

Host: Enter the name of the computer where Oracle Internet Directory is running.

Port: Enter the port number at which Oracle Internet Directory is listening. If you do not know the port number, refer to Section 8.5.

Use SSL to connect to Oracle Internet Directory: Select this option if you want Oracle Collaboration Suite components to use only SSL to connect to Oracle Internet Directory.

Click Next.

12. Specify UserName and Password for Oracle Internet Directory

(Advanced installation only)

Username: Enter the user name to use to log in to Oracle Internet Directory.

Password: Enter the user password.

Click Next.

Note: Use cn=orcladmin as the user name if you are the Oracle Internet Directory Superuser.

13. OracleAS Metadata Repository

(Advanced installation only)

Database Connection String: Select the OracleAS Metadata Repository that you want to use for this Applications tier instance. The installer will register this instance with the selected OracleAS Metadata Repository.

Click Next.

14. Select Database for Components

(Advanced installation only)

This screen shows the database to be used for each of the components that were earlier selected in the "Select Components to Configure" screen.

Click Next.

Note: If multiple instances of Oracle Collaboration Suite Databases are available in Oracle Internet Directory, click the Database Name column and select the correct database for each component from the drop-down list. When you click Next to go to the next screen, the selection might not be retained; to ensure that it is retained, click the Database Name column again after selecting the required database for each component.

15. Specify Port Configuration Options

(Advanced installation only)

Select the method in which you want the ports to be configured for Oracle Collaboration Suite.

Click Next.

Note: If you manually configure the ports, then you must specify the port values for each port.

Note: The Automatic option only uses ports in the range 7777-7877 for Oracle HTTP Server and 4443-4543 for Oracle HTTP Server with SSL. If you need to set the port numbers as 80 for Oracle HTTP Server and 443 for Oracle HTTP Server with SSL, then you must select the Manually Specify Ports option.

16. Specify Administrative Password and Instance Name

(Advanced installation only)

Instance Name: Specify the name of the OracleAS instance for the Oracle Collaboration Suite administrative accounts.

Administrative Password: Specify the initial password for the Oracle Collaboration Suite administrative accounts.

Confirm Password: Confirm the password.

Click Next.

17. Oracle Calendar Server Host Alias

(Advanced installation only)

Host or Alias: Specify either the host address or the alias of the Calendar server instance.

Click Next.

Note: Oracle recommends that you use an alias in place of the host name if you might later move the Calendar server instance or change the host name. Specify the host name only if an alias is not configured.

18. Summary Verify your selections and click Install.
19. Install Progress This screen displays the progress of the installation.
20. Run root.sh Note: Do not run the root.sh script until this dialog appears.
  1. When you see this dialog, run the root.sh script in a different shell as the root user. The script is located in the Oracle home directory of this instance.

  2. Click OK.

21. Configuration Assistants This screen shows the progress of the configuration assistants. Configuration assistants configure components.
22. End of Installation Click Exit to quit the installer.
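
Before selecting Manually Specify Ports in step 15, you can check from a bash shell whether a desired port is already in use on the host. This sketch uses bash's /dev/tcp pseudo-device; 4443 is the example SSL port from this chapter, so substitute the port you plan to assign:

```shell
# Sketch (bash-specific): probe whether a port is already in use locally
# via bash's /dev/tcp pseudo-device. 4443 is the example SSL port from
# this chapter; substitute the port you intend to assign.
port=4443
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  port_state="in-use"
else
  port_state="free"
fi
echo "port $port is $port_state"
```

A connection attempt that succeeds means another process is already listening on the port, so you must choose a different value in the Manually Specify Ports screen.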

10.3.3.2 Performing the Postinstallation Steps

You might have to perform postinstallation steps to address the following problem:

About the chmod Warning While the root.sh Script Is Running

While the root.sh script is running, you might get a chmod: warning. This warning appears because the set-ID bit is disabled on emtgtctl2; set-ID requires the execute permission.

Ignore the warning and continue with the installation process.

10.3.3.3 Installing Oracle Collaboration Suite Applications

You can install and run the Applications tiers on other nodes (nodes that are not running Infrastructure). During installation, you set up the Applications tiers to use services from the Oracle Collaboration Suite 10g Infrastructure installed on the shared storage device.