Oracle® Collaboration Suite Installation Guide
10g Release 1 (10.1.2) for Solaris Operating System (SPARC)

Part Number B25462-11

8 Installing Oracle Collaboration Suite in High Availability Environments

This chapter contains the following sections:

8.1 Understanding High Availability Configurations: Overview and Common Requirements

This section provides an overview of the high availability configurations supported by Oracle Collaboration Suite.

This section contains the following topics:

8.1.1 Understanding the Common High Availability Principles

Oracle Collaboration Suite High Availability Solutions installation includes the following primary components:

8.1.1.1 Oracle Collaboration Suite Database Tier

The Oracle Collaboration Suite Database tier is built on Oracle Database 10g, which serves as the repository for the Oracle Collaboration Suite schema information and the OracleAS 10g Release 10.1.2.0.1 Metadata Repository. The default version of the database when installed from Oracle Collaboration Suite is .

The processes in this tier are the database instance processes and the database listener.

For high availability, Oracle recommends that this database be deployed as an Oracle Real Application Clusters database in an active-active configuration.

An Oracle Collaboration Suite Database Oracle home is installed on each node of the hardware cluster. Each node has its own oraInventory, which is shared by other Oracle homes on that node.

The hardware requirements for the Oracle Collaboration Suite Database tier are as follows:

  • Hardware cluster with Oracle Cluster Ready Services

  • Shared storage for the Oracle Real Application Clusters database files and the CRS Oracle Cluster Registry and voting disk. Oracle Database files can be on raw devices, Network Attached Storage (NAS), or Oracle Automatic Storage Management (ASM).

  • A virtual IP address for each cluster node

8.1.1.2 Identity Management Service

The Identity Management tier consists of the following:

  • Oracle Internet Directory tier

    The Oracle Internet Directory tier may be collocated with the database tier or the OracleAS Single Sign-On tier, or it may be deployed separately. Collocation means that the tiers are on the same computer and, in many cases, share the same Oracle home.

    The main processes in this tier are the Oracle Internet Directory and Oracle Directory Integration and Provisioning processes.

    For high availability, Oracle recommends that multiple instances of this tier be deployed or that the deployment be designed to fail over the service to any available computer. An active-active deployment of this tier requires a hardware load balancer.

  • OracleAS Single Sign-On tier

    This tier may be collocated with the Oracle Internet Directory tier or deployed separately. Collocation means that the tiers are on the same computer and, in many cases, share the same Oracle home. Typically, OracleAS Single Sign-On and Oracle Delegated Administration Services are deployed together.

    The main processes in this tier are the Oracle HTTP Server and the OC4J instances hosting the OracleAS Single Sign-On and Oracle Delegated Administration Services applications.

    For high availability, Oracle recommends that multiple instances of this tier be deployed or that the deployment be designed to fail over the service to any available computer. An active-active deployment of this tier requires a hardware load balancer.

An Oracle home is installed on each node. All Oracle homes use a single shared oraInventory on each node.

The hardware requirements for the Identity Management tier are as follows:

  • Single node

  • Local storage

  • A load balancer functions as a front end for the nodes and routes requests to the Identity Management services on all nodes of the Oracle Identity Management cluster.

8.1.1.3 Oracle Calendar Server

The Oracle Calendar server includes the file system-level database that stores all Calendar-related data. This database is not an Oracle Database and therefore cannot provide the same high availability features as an Oracle Database. This Oracle Calendar installation includes only the Oracle Calendar server component; it does not include the Oracle Calendar application system, which is deployed with the Oracle Collaboration Suite Applications tier.

To ensure an Oracle Collaboration Suite high availability solution, the Oracle Calendar server (one server for each calendar node) is placed on a Cold Failover Cluster because it is a single point of failure. This Cold Failover Cluster installation requires shared storage for the Oracle home tree, because the Oracle Calendar server file system database is contained under the Oracle home directory tree. To facilitate a cold failover cluster, a virtual IP address and virtual host name are required.

The hardware requirements for the Oracle Calendar server are as follows:

  • Hardware cluster.

  • Shared storage for the Oracle Calendar server ORACLE_HOME directory.

  • A virtual IP address.

8.1.1.4 Oracle Collaboration Suite Applications Tier

This tier contains all the Oracle Collaboration Suite Applications components, except the Oracle Calendar server, installed independently on multiple nodes. Typically, the Applications tiers are deployed in the demilitarized zone (DMZ). The DMZ is the part of the network between an intranet and the Internet, often referred to as the neutral zone. It allows only certain services of the hosts in an intranet to be accessible to hosts on the Internet. This subnetwork is typically used for public-access servers such as Web servers. A load balancer virtual server forms the front end for the multiple Applications tiers. Client requests to the Oracle Collaboration Suite 10g Applications tiers are load balanced across the Applications nodes by the load balancer using the load balancer virtual server.

An Oracle home is installed on each node. All Oracle homes use a single shared oraInventory on each node.

The hardware requirements for Applications tier are as follows:

  • Single node

  • Local storage

  • A load balancer functions as a front end to the Oracle Collaboration Suite Applications tier nodes and routes requests to the active nodes of the cluster.

8.1.2 High Availability Configurations

This section explains the features of high availability configurations and provides a brief overview of the typical high availability configurations supported by Oracle Collaboration Suite. For a detailed description of the configurations, refer to the Oracle Collaboration Suite High Availability Guide.

Oracle Collaboration Suite supports the following types of high availability configurations:

  • Single Cluster Architecture (Section 8.1.2.1)

  • Collocated Identity Management Architecture (Section 8.1.2.2)

  • Distributed Identity Management Architecture (Section 8.1.2.3)

The features of the high availability configurations of Oracle Collaboration Suite are as follows:

  • Shared storage: All nodes must have access to the shared storage. In the case of the Oracle Calendar server installation, only one node mounts the shared disk containing the ORACLE_HOME at any given time. All nodes running the Oracle Real Application Clusters database must have concurrent access to the shared storage that contains the Oracle Collaboration Suite Database. If the Oracle Mobile Data Sync feature is going to be used, it also requires shared storage in the Oracle Collaboration Suite Applications tier.

  • Hardware cluster: A hardware cluster is a hardware architecture that enables multiple computers to share access to data, software, or peripheral devices. A hardware cluster is required for Oracle Real Application Clusters. Oracle Real Application Clusters takes advantage of such architecture by running multiple database instances that share a single physical database. The database instances on each node communicate with each other by means of a high speed network interconnect.

  • Nonclustered servers: You need multiple nonclustered servers for the Identity Management tier and the Oracle Collaboration Suite Applications tier. This does not apply to the minimal Single Cluster Architecture, because that architecture consists only of the hardware cluster nodes.

  • Load balancer: You need a load balancer to load-balance the requests to all the active nodes. The load balancer is required for Identity Management and Applications-related requests. The requests for the Identity Management and Applications tiers are routed through the load balancer virtual server names and ports.

8.1.2.1 Single Cluster Architecture

This is a minimal configuration in which all Oracle Collaboration Suite high availability components (Oracle Collaboration Suite Database, Identity Management, Oracle Calendar server, and Applications) are installed on a single cluster. This architecture is not an out-of-box solution and requires multiple installations of Oracle Collaboration Suite and manual postinstallation configuration.

In this architecture, the Oracle Collaboration Suite Database is installed on RAC and both Identity Management and Applications are configured as an active-active high availability configuration. The Oracle Calendar server is installed in a Cold Failover Cluster configuration as previously described.

A Single Cluster Architecture configuration has the following characteristics:

  • Active nodes: All the nodes in a Single Cluster Architecture configuration are active. This means that all the nodes can handle requests. If a node fails, the remaining nodes handle all the requests.

  • Shared disk: Typically, you install Oracle Collaboration Suite on the shared disk. All nodes have access to the shared disk, but only one node mounts the shared disk at any given time. However, Oracle Collaboration Suite Database (if it is installed on the same cluster as the Oracle Calendar Server cold failover cluster configuration) must be on a separate shared storage from the Oracle Calendar Server shared storage. Also, all nodes running the Oracle Real Application Clusters database must have concurrent access to the shared disk.

  • Hardware cluster: This can be vendor-specific clusterware, Oracle Cluster Ready Services, or both.

  • Load balancer: You need a load balancer to load-balance the requests to all the active nodes. The load balancer is required for Identity Management and Applications-related requests. The requests for the Identity Management and Applications tiers are routed through the load balancer virtual server names and ports.

Figure 8-1 illustrates a typical Single Cluster Architecture configuration.

Figure 8-1 Typical Single Cluster Architecture Configuration

Single Cluster Architecture Configuration
Description of the illustration single_cluster.gif

Refer to Chapter 9 for details on Single Cluster Architecture installation.

8.1.2.2 Collocated Identity Management Architecture

This architecture separates the different tiers onto multiple computers, rather than sharing nodes for all tiers as in the Single Cluster Architecture: the hardware cluster-dependent tiers (the Oracle Real Application Clusters database and the Oracle Calendar server Cold Failover Cluster installation) remain on the hardware cluster in the secured intranet, while the Identity Management and Oracle Collaboration Suite Applications tiers are moved to a set of nonclustered computers residing in the DMZ. The term collocated refers to the fact that the Identity Management tier contains both the Oracle Internet Directory and OracleAS Single Sign-On tiers in a single ORACLE_HOME. This architecture is not an out-of-box solution and requires multiple installations of Oracle Collaboration Suite and manual postinstallation configuration.

In this architecture, both Identity Management and Oracle Collaboration Suite Database are configured as an active-active high availability configuration.

Figure 8-2 illustrates a typical Collocated Identity Management Architecture configuration.

Figure 8-2 Typical Collocated Identity Management Architecture Configuration

Colocated Identity Management Architecture
Description of the illustration colocated.gif

Refer to Chapter 9 for details about Collocated Identity Management Architecture installation.

8.1.2.3 Distributed Identity Management Architecture

This configuration is very similar to the Collocated Identity Management Architecture. This architecture still separates the different tiers onto multiple computers, leaving the hardware cluster-dependent tiers (the Oracle Real Application Clusters database and the Oracle Calendar server Cold Failover Cluster installation) on the hardware cluster in the secured intranet. The difference from the Collocated architecture is that the Identity Management components, Oracle Internet Directory and Oracle Application Server Single Sign-On, are separately installed and distributed across multiple nonclustered servers in the DMZ; hence the name Distributed Identity Management. This architecture is not an out-of-box solution and requires multiple installations of Oracle Collaboration Suite and manual postinstallation configuration.

In this architecture, Oracle Internet Directory and Oracle Application Server Single Sign-On each use an active-active high availability configuration. However, the high availability configuration for Oracle Collaboration Suite Database can be either active-active or active-passive.

Figure 8-3 illustrates a typical Distributed Identity Management Architecture configuration.

Figure 8-3 Typical Distributed Identity Management Architecture Configuration

Distributed Identity Management Architecture
Description of the illustration distributed.gif

Refer to Chapter 9 for details about Distributed Identity Management Architecture installation.

8.1.3 Installation Order for High Availability Configurations

For all high availability configurations, you install the components in the following order:

  1. Oracle Collaboration Suite Database

  2. Identity Management components

    If you are distributing the Identity Management components, you install them in the following order:

    1. Oracle Internet Directory and Oracle Directory Integration and Provisioning

    2. Oracle Application Server Single Sign-On and Oracle Delegated Administration Services

  3. Oracle Calendar Server

  4. Oracle Collaboration Suite Applications components

8.1.4 Requirements for High Availability Configurations

If you plan to install Oracle Collaboration Suite in high availability environments, remember the following requirements:

  • Database requirement

    You need to have Oracle Cluster Ready Services (CRS) installed. Subsequently, when running the Oracle Collaboration Suite installer, select Cluster Installation. This installs the Oracle Collaboration Suite Real Application Clusters database, including the Oracle Collaboration Suite Infrastructure. These steps are detailed in the installation guides. A quick way to verify the CRS installation is sketched after this list.

  • Components requirement

    Because the installer clusters the components in an Identity Management configuration, you must select the same components in the Select Configuration Options screen for all the nodes in the cluster.

    For example, if you select Oracle Internet Directory, OracleAS Single Sign-On, and Oracle Delegated Administration Services for the installation on node 1, then you must select the same set of components in subsequent installations.

    Clustering will fail if you select different components in each installation.
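The following is a minimal sketch of how you might confirm that CRS is installed and running on every cluster node before starting the installer. The CRS home path and node names are examples only; substitute the values for your cluster.

# /u01/crs/bin/olsnodes -n
node1   1
node2   2
# /u01/crs/bin/crs_stat -t

The olsnodes command lists the nodes that CRS knows about, and crs_stat -t summarizes the state of the CRS resources. Both commands are run from the CRS home, not from an Oracle Collaboration Suite Oracle home.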

The requirements common to all high availability configurations are:

  • Check Minimum Number of Nodes (Section 8.1.4.1)

  • Check That Groups Are Defined Identically on All Nodes (Section 8.1.4.2)

  • Check the Properties of the oracle User (Section 8.1.4.3)

  • Check for Previous Oracle Installations on All Nodes (Section 8.1.4.4)

In addition to these common requirements, each configuration has its own specific requirements. Refer to the individual chapters for details.


Note:

In addition to the requirements specific to the high availability configuration that you plan to use, you still must meet the requirements listed in Chapter 2.

8.1.4.1 Check Minimum Number of Nodes

You need at least two nodes in a high availability configuration. If a node fails for any reason, the second node takes over.

8.1.4.2 Check That Groups Are Defined Identically on All Nodes

Check that the /etc/group file on all nodes in the cluster contains the operating system groups that you plan to use. You should have one group for the oraInventory directory, and one or two groups for database administration. The group names and the group IDs must be the same for all nodes.

Refer to Section 2.6 for details.
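For example, a quick way to compare group definitions is to inspect /etc/group on each node. The group names oinstall and dba below are examples; use the groups you plan to create, and confirm that both the names and the numeric group IDs match on every node.

$ egrep '^(oinstall|dba):' /etc/group
oinstall::200:
dba::201:oracle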

8.1.4.3 Check the Properties of the oracle User

Refer to Section 2.6 for details.

8.1.4.4 Check for Previous Oracle Installations on All Nodes

Check that all the nodes where you want to install Oracle Collaboration Suite in a high availability configuration do not have existing oraInventory directories.

You must do this because you want the installer to prompt you to enter a location for the oraInventory directory. The location of the existing oraInventory directory might not be ideal for the Oracle Collaboration Suite instance that you are about to install. For example, you want the oraInventory directory to be on the shared storage. If the installer finds an existing oraInventory directory, it will automatically use it and will not prompt you to enter a location.

To check if a node contains an oraInventory directory that could be detected by the installer:

  1. On each node, check for the /var/opt/oracle/oraInst.loc file.

    If a node does not contain the file, then it does not have an oraInventory directory that will be used by the installer. You can check the next node.

  2. For nodes that contain the oraInst.loc file, rename the oracle directory to something else so that the installer does not see it. The installer then prompts you to enter a location for the oraInventory directory.

    The following example renames the oracle directory to oracle.orig (you must be root to do this):

    # su
    Password: root_password
    # cd /var/opt
    # mv oracle oracle.orig
    
    

When you run the installer to install Oracle Collaboration Suite, the installer creates a new /var/opt/oracle directory and new files in it. You might need both the oracle and oracle.orig directories. Do not delete either directory or rename one over the other.

The installer uses the /var/opt/oracle directory and its files. Be sure that the right oracle directory is in place before running the installer (for example, if you are deinstalling or expanding a product).
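If you are unsure which inventory a node points to, you can inspect the pointer file directly. The following output is only an example; the inventory location and install group on your system will differ.

# cat /var/opt/oracle/oraInst.loc
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall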

8.2 Preparing to Install Oracle Collaboration Suite in High Availability Environments

This section covers the following topics:

8.2.1 Review Recommendations for Automatic Storage Management (ASM)

If you plan to use ASM instances for the OracleAS Metadata Repository database, consider these recommendations:

  • If you plan to use ASM with Oracle Database instances from multiple database homes on the same node, then you should run the ASM instance from an Oracle home that is different from the database homes.

  • The ASM home should be installed on every cluster node. This prevents the accidental removal of ASM instances that are in use by databases from other homes during the deinstallation of a database Oracle home.

8.2.2 Identity Management Preinstallation Steps

Before installing an Identity Management configuration, you must perform the following tasks:

8.2.2.1 Use the Same Path for the Oracle Home Directory (Recommended)

For all the nodes that will be running Identity Management components, use the same full path for the Oracle home. This practice is recommended, but not required.

8.2.2.2 Synchronize Clocks on All Nodes

Synchronize the system clocks on all nodes.
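One common way to do this on Solaris is to point every node at the same NTP server. The commands below are a sketch; ntp.mydomain.com is an example host name, and the way the NTP daemon is started varies between Solaris releases.

# /usr/sbin/ntpdate ntp.mydomain.com
# echo "server ntp.mydomain.com" >> /etc/inet/ntp.conf
# /etc/init.d/xntpd start

On Solaris 10, the NTP service is managed through the Service Management Facility, so you would run svcadm enable ntp instead of the init script.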

8.2.2.3 Configure Virtual Server Names and Ports for the Load Balancer

Configure your load balancer with two virtual server names and associated ports:

  • Configure a virtual server name for LDAP connections. For this virtual server, you must configure two ports: one for SSL and one for non-SSL connections.


    Note:

    Ensure that the same ports that you configured for the LDAP virtual server are available on the nodes on which you will be installing Oracle Internet Directory.

    The installer will configure Oracle Internet Directory to use the same port numbers that are configured on the LDAP virtual server. In other words, Oracle Internet Directory on all the nodes and the LDAP virtual server will use the same port number.


  • Configure a virtual server name for HTTP connections. For this virtual server, you also must configure two ports: one for SSL and one for non-SSL connections.


    Note:

    The ports for the HTTP virtual server can be different from the Oracle HTTP Server Listen ports.

The installer will prompt you for the virtual server names and port numbers.

In addition, check that the virtual server names are associated with IP addresses and are part of your DNS. The nodes that will be running Oracle Collaboration Suite must be able to access these virtual server names.
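For example, assuming the LDAP virtual server is ldap.mydomain.com with non-SSL port 389 and SSL port 636 (example name and ports), you can confirm from each Oracle Internet Directory node that the name resolves in DNS and that the ports are not already in use:

$ nslookup ldap.mydomain.com
$ netstat -an | egrep '\.389 |\.636 '

No output from the netstat command means that the ports are free on that node.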

8.2.2.4 Configure Your LDAP Virtual Server to Direct Requests to Node 1 Initially

Note that this procedure applies only to the LDAP virtual server configured on your load balancer. This does not apply to the HTTP virtual server configured on your load balancer.

Before you start the installation, configure your LDAP virtual server to direct requests to node 1 only. After you complete an installation on a node, then you can add that node to the virtual server.

For example, if you have three nodes:

  1. Configure the LDAP virtual server to direct requests to node 1 only.

  2. Install Identity Management components on node 1.

  3. Install Identity Management components on node 2.

  4. Add node 2 to the LDAP virtual server.

  5. Install Identity Management components on node 3.

  6. Add node 3 to the LDAP virtual server.

8.2.2.5 Set Up Cookie Persistence on the Load Balancer

On your load balancer, set up cookie persistence for HTTP traffic. Specifically, set up cookie persistence for URIs starting with /oiddas/. This is the URI for Oracle Delegated Administration Services. If your load balancer does not allow you to set cookie persistence at the URI level, then set the cookie persistence for all HTTP traffic. In either case, set the cookie to expire when the browser session expires.

Refer to your load balancer documentation for details.

8.2.2.6 Configure Shared Storage for Calendar Oracle Mobile Data Sync

In a multiple mid-tier deployment there will be multiple Calendar Sync Server tiers. Thus, you must point each mid-tier to a central, unified storage location for this information. Failure to do this can result in many unnecessary slow (full) synchronizations.

See "Oracle Mobile Data Sync Tiers and Storage of Synchronization Information" in Oracle Collaboration Suite Deployment Guide for details.

8.2.3 About Oracle Internet Directory Passwords

In Identity Management configurations, you install Oracle Internet Directory on multiple nodes, and in each installation, you enter the instance password in the Specify Instance Name and ias_admin Password screen.

The password specified in the first installation is used as the password for the cn=orcladmin and orcladmin users not just in the first Oracle Internet Directory, but in all Oracle Internet Directory installations in the cluster.

This means that to access the Oracle Internet Directory on any node, you must use the password that you entered in the first installation.

Accessing the Oracle Internet Directory includes:

  • Logging in to Oracle Delegated Administration Services (URL: http://hostname:port/oiddas)

  • Logging in to Oracle Application Server Single Sign-On (URL: http://hostname:port/pls/orasso)

  • Connecting to Oracle Internet Directory using the Oracle Directory Manager
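For example, you can confirm that the password from the first installation works against the directory on any node by performing a command-line bind with the ldapbind tool installed with Oracle Internet Directory. The host name and port below are examples; use your LDAP virtual server name (or an individual node name) and your non-SSL LDAP port.

$ $ORACLE_HOME/bin/ldapbind -h ldap.mydomain.com -p 389 -D cn=orcladmin -w first_installation_password
bind successful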

You still need the passwords that you entered in each installation to log in to Application Server Control.

8.2.4 About Configuring SSL and Non-SSL Ports for Oracle HTTP Server

When you are installing Identity Management configurations, the installer displays the Specify HTTP Load Balancer Host and Listen Ports screen.

This screen has the following two sections:

  • In the load balancer section, you specify the HTTP virtual server name and port number of the load balancer. You also indicate whether the port is for SSL or non-SSL requests.

  • In the Oracle HTTP Server section, you specify the port number that you want for the Oracle HTTP Server Listen port. You also indicate whether the port is for SSL or non-SSL requests.

    The virtual server and the Oracle HTTP Server Listen port can use different port numbers.

You can use this screen to set up the type of communication (SSL or non-SSL) between client, load balancer, and Oracle HTTP Server. Three cases are possible:

  • Case 1: Communications between clients and the load balancer use HTTP, and communications between the load balancer and Oracle HTTP Server also use HTTP. Refer to Section 8.2.4.1.

  • Case 2: Communications between clients and the load balancer use HTTPS (secure HTTP), and communications between the load balancer and Oracle HTTP Server also use HTTPS. Refer to Section 8.2.4.2.

  • Case 3: Communications between clients and the load balancer use HTTPS, but communications between the load balancer and Oracle HTTP Server use HTTP. Refer to Section 8.2.4.3.


Note:

Because the values you specify in this dialog override the values specified in the staticports.ini file, you should not specify port numbers for the Oracle HTTP Server Listen port in the staticports.ini file.

8.2.4.1 Case 1: Client and the Load Balancer Use HTTP and the Load Balancer and Oracle HTTP Server Also Use HTTP for Communication

To set up this type of communication, specify the following values:

HTTP Listener: Port: Enter the port number that you want to use as the Oracle HTTP Server Listen port. This will be the value of the Listen directive in the httpd.conf file.

Enable SSL: Do not select this option. The installer tries the default port number for the SSL port.

HTTP Load Balancer: Hostname: Enter the name of the virtual server on the load balancer configured to handle HTTP requests.

HTTP Load Balancer: Port: Enter the port number that the HTTP virtual server listens on. This will be the value of the Port directive in the httpd.conf file.

Enable SSL: Do not select this option.

Table 8-1 lists the screen and configuration file values.

Table 8-1 Case 1: Screen and Configuration File Values

Values in Screen:

  HTTP Listener: Port: 8000
  Enable SSL: Unchecked
  HTTP Load Balancer: Port: 80
  Enable SSL: Unchecked

Resulting Values in Configuration Files:

  In httpd.conf:

    Port 80
    Listen 8000

  In ssl.conf:

    Port <default port number assigned by installer>
    Listen <default port number assigned by installer>

8.2.4.2 Case 2: Client and the Load Balancer Use HTTPS and the Load Balancer and Oracle HTTP Server Also Use HTTPS for Communication

To set up this type of communication, specify the following values:

HTTP Listener: Port: Enter the port number that you want Oracle HTTP Server to listen on. This will be the value of the Listen directive in the ssl.conf file.

Enable SSL: Select this option.

HTTP Load Balancer: Hostname: Enter the name of the virtual server on the load balancer configured to handle HTTPS requests.

HTTP Load Balancer: Port: Enter the port number that the HTTP virtual server listens on. This will be the value of the Port directive in the ssl.conf file.

Enable SSL: Select this option.

In opmn.xml, the installer sets the ssl-enabled line in the Oracle HTTP Server section to true.

Table 8-2 lists the screen and resulting configuration file values.

Table 8-2 Case 2: Screen and Configuration File Values

Values in Screen:

  HTTP Listener: Port: 90
  Enable SSL: Checked
  HTTP Load Balancer: Port: 443
  Enable SSL: Checked

Resulting Values in Configuration Files:

  In httpd.conf:

    Port <default port number assigned by installer>
    Listen <default port number assigned by installer>

  In ssl.conf:

    Port 443
    Listen 90

8.2.4.3 Case 3: Client and the Load Balancer Use HTTPS and the Load Balancer and Oracle HTTP Server Use HTTP for Communication

To set up this type of communication, specify the following values:

HTTP Listener: Port: Enter the port number that you want Oracle HTTP Server to listen on. This will be the value of the Listen directive in the httpd.conf file.

Enable SSL: Do not select this option.

HTTP Load Balancer: Hostname: Enter the name of the virtual server on the load balancer configured to handle HTTPS requests.

HTTP Load Balancer: Port: Enter the port number that the HTTP virtual server listens on. This will be the value of the Port directive in the httpd.conf file.

Enable SSL: Select this option.

The installer will change the following lines:

  • In opmn.xml, the installer sets the ssl-enabled line in the Oracle HTTP Server section to true.

  • In httpd.conf, the installer adds the following lines:

    LoadModule certheaders_module libexec/mod_certheaders.so
    SimulateHttps on
    
    

Table 8-3 lists the screen and configuration file values.

Table 8-3 Case 3: Screen and Configuration File Values

Values in Screen:

  HTTP Listener: Port: 9000
  Enable SSL: Unchecked
  HTTP Load Balancer: Port: 443
  Enable SSL: Checked

Resulting Values in Configuration Files:

  In httpd.conf:

    Port 443
    Listen 9000

  In ssl.conf:

    Port <default port number assigned by installer>
    Listen <default port number assigned by installer>

8.3 Installing the Oracle Calendar Server in High Availability Environments

This section describes how to install the Oracle Calendar server in Cold Failover configurations.

8.3.1 High Availability Configuration for Oracle Calendar Server

In the Oracle Collaboration Suite high availability architectures, a Cold Failover Cluster configuration is used for the Oracle Calendar server. In a Cold Failover Cluster configuration, you have an active and a passive node, and shared storage that can be accessed by either node.

During normal operation, the active node runs the Oracle Calendar server processes and manages requests from clients. If the active node fails, then a failover event occurs. The passive node takes over and becomes the active node. It mounts the shared storage and runs the processes.

Figure 8-4 shows an Oracle Calendar server high availability configuration.

Figure 8-4 Oracle Calendar Server High Availability Configuration

Description of cfc.gif follows
Description of the illustration cfc.gif

Figure 8-4 depicts:

  • Two nodes running clusterware.

  • Storage devices local to each node.

  • Storage device that can be accessed by both nodes.

During normal operation, one node ("node 1") acts as the active node. It mounts the shared storage to access the Oracle Calendar server, runs the Oracle Calendar server processes, and handles all requests.

If the active node goes down for any reason, the clusterware fails over the Oracle Calendar server processes to the other node ("node 2"), which then becomes the active node. It mounts the shared storage, runs the processes, and handles all requests. The clusterware automatically detects the failure and fails over the processes, the virtual IP address, and the shared storage only if a failover package has been set up. With the default out-of-box Oracle Collaboration Suite installation, you must fail over the processes, the virtual IP address, and the shared storage manually.
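The following is a minimal sketch of such a manual failover. It uses the example interface and virtual IP address from Section 8.3.2.1 (ge0 and 138.1.12.191), and it assumes that the shared file system is on device /dev/dsk/c2t0d0s6, is mounted at /u01/app/oracle, and that the Oracle Calendar server is controlled with the unistop and unistart commands under its Oracle home. All of these values are examples; adapt them to your environment and clusterware.

On node 1, if it is still reachable:

# $ORACLE_HOME/ocal/bin/unistop
# umount /u01/app/oracle
# ifconfig ge0 removeif 138.1.12.191

On node 2:

# ifconfig ge0 addif 138.1.12.191 up
# mount /dev/dsk/c2t0d0s6 /u01/app/oracle
# $ORACLE_HOME/ocal/bin/unistart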

These nodes appear as one computer to clients through the use of a virtual address. To access the Oracle Calendar server, clients, including Applications tier components, use the virtual address associated with the cluster. The virtual address is associated with the active node (node 1 during normal operation, node 2 if node 1 goes down). Clients do not need to know which node (node 1 or node 2) is servicing requests.

You use the virtual host name in URLs that access the Oracle Calendar server. For example, if vhost.mydomain.com is the virtual host name, the URLs for the Oracle HTTP Server and the Application Server Control would look like the following:

  • Oracle HTTP Server, Welcome page: http://vhost.mydomain.com:7777

  • Oracle HTTP Server, secure mode: https://vhost.mydomain.com:4443

  • Application Server Control: http://vhost.mydomain.com:1156

8.3.2 Preinstallation Steps for Installing the Oracle Calendar Server in High Availability Environments

Before installing the Oracle Calendar server in a high availability environment, perform the following tasks:

8.3.2.1 Map the Virtual Host Name and Virtual IP Address

Each node in an Oracle Calendar server Cold Failover Cluster configuration is associated with its own physical Internet Protocol (IP) address. In addition, the active node in the cluster is associated with a virtual host name and virtual IP address. This enables clients to access the Cold Failover Cluster using the virtual host name.

Virtual host names and virtual IP addresses are any valid host name and IP address in the context of the subnet containing the hardware cluster.


Note:

Map the virtual host name and virtual IP address only to the active node. Do not map the virtual host name and IP address to both active and secondary nodes at the same time. When you fail over the current active node, only then do you map the virtual host name and IP address to the secondary node, which is now the active node.

The following example configures a virtual host name called vhost.mydomain.com, with a virtual IP of 138.1.12.191:


Note:

Before attempting to complete this procedure, ask the system or network administrator to review all the steps required. The procedure will reconfigure the network settings on the cluster nodes and may vary with differing network implementations.

  1. Become the root user.

    prompt> su
    Password: root_password
    
    
  2. Determine the public network interface.

    # ifconfig -a
    lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    lo0:1: flags=1008849<UP,LOOPBACK,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 8232 index 1
            inet 172.16.193.1 netmask ffffffff
    ge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.13.146 netmask fffffc00 broadcast 138.1.15.255
            ether 8:0:20:fd:1:23
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
            ether 8:0:20:fd:1:23
    hme0:2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.194.6 netmask fffffffc broadcast 172.16.194.7
    ge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
            inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
            ether 8:0:20:fd:1:23
    
    

    From the output, ge0 is the public network interface. It is not a loopback interface and not a private interface.

  3. Add the virtual IP to the ge0 network interface.

    # ifconfig ge0 addif 138.1.12.191 up
    
    

    In the preceding command, ge0 and the IP address, 138.1.12.191, are values specific to this example. Replace them with values appropriate for your cluster.

  4. Check that the new interface was added:

    # ifconfig -a
    lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    lo0:1: flags=1008849<UP,LOOPBACK,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 8232 index 1
            inet 172.16.193.1 netmask ffffffff
    ge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.13.146 netmask fffffc00 broadcast 138.1.15.255
            ether 8:0:20:fd:1:23
    ge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.12.191 netmask ffff0000 broadcast 138.1.255.255
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
            ether 8:0:20:fd:1:23
    hme0:2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.194.6 netmask fffffffc broadcast 172.16.194.7
    ge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
            inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
            ether 8:0:20:fd:1:23
    
    

    The virtual IP appears in the ge0:1 entry. During installation, when you enter vhost.mydomain.com as the virtual host name in the Specify Virtual Hostname screen, the installer checks that vhost.mydomain.com is a valid interface.

On Failover

If the active node fails, then the secondary node takes over. If you do not have a clusterware agent to map the virtual IP from the failed node to the secondary node, then you must do it manually. Remove the virtual IP mapping from the failed node and map it to the secondary node.

  1. On the failed node, become superuser and remove the virtual IP.

    If the node has failed completely (that is, it does not boot up), you can skip this step and go to Step 2. If the node failed only partially (for example, because of disk or memory problems) and you can still ping it, you must perform this step.

    prompt> su
    Password: root_password
    # ifconfig ge0 removeif 138.1.12.191
    
    

    "ge0" and the IP address are values specific to this example. Replace them with values appropriate for your cluster.

  2. On the secondary node, add the virtual IP to the ge0 network interface.

    # ifconfig ge0 addif 138.1.12.191 up
    
    

    In the preceding command, ge0 and the IP address, 138.1.12.191, are values specific to this example. Replace them with values appropriate for your cluster.

  3. On the secondary node, check that the new interface was added:

    # ifconfig -a
    ...
    ge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.12.191 netmask ffff0000 broadcast 138.1.255.255
    ...
    

8.3.2.2 Set Up a File System That Can Be Mounted from Both Nodes

Although the hardware cluster has shared storage, you must create a file system on this shared storage such that both nodes of the Oracle Calendar server Cold Failover Cluster can mount this file system. You will use this file system for the following directories:

  • Oracle home directory for the Oracle Calendar server

  • The oraInventory directory

For more information about disk space requirements, refer to Section 2.1.

If you are running a volume manager on the cluster to manage the shared storage, refer to the volume manager documentation for steps to create a volume. Once a volume is created, you can create the file system on that volume.

If you do not have a volume manager, you can create a file system on the shared disk directly. Ensure that the hardware vendor supports this, that the file system can be mounted from either node of the Oracle Calendar server Cold Failover Cluster, and that the file system is repairable from either node if a node fails.

To check that the file system can be mounted from either node, perform the following steps:

  1. Set up and mount the file system from node 1.

  2. Unmount the file system from node 1.

  3. Mount the file system from node 2 using the same mount point that you used in Step 1.

  4. Unmount the file system from node 2, and mount it on node 1, because you will be running the installer from node 1.
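The following is a sketch of these steps for a UFS file system created directly on a shared disk slice. The device name and mount point are examples only; if you use a volume manager, create the file system on a volume instead.

On node 1:

# newfs /dev/rdsk/c2t0d0s6
# mkdir -p /u01/app/oracle
# mount /dev/dsk/c2t0d0s6 /u01/app/oracle
# umount /u01/app/oracle

On node 2:

# mkdir -p /u01/app/oracle
# mount /dev/dsk/c2t0d0s6 /u01/app/oracle
# umount /u01/app/oracle

On node 1 again, before running the installer:

# mount /dev/dsk/c2t0d0s6 /u01/app/oracle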


Note:

Only one node of the Oracle Calendar server Cold Failover Cluster should mount the file system at any given time. File system configuration files on all nodes of the cluster should not include an entry for the automatic mount of the file system upon a node restart or execution of a global mount command. For example, on UNIX platforms, do not include an entry for this file system in the /etc/fstab file.