3 Design Considerations

Learn about the design considerations to keep in mind when you adapt an Oracle Fusion Middleware Disaster Recovery solution for your enterprise deployment.

This chapter provides instructions for setting up Oracle Fusion Middleware Disaster Recovery production and standby sites on the Linux and UNIX operating systems. The procedures use the Oracle SOA Suite enterprise deployment (see Figure 3-1) in the examples to illustrate how to set up the Oracle Fusion Middleware Disaster Recovery solution for that enterprise deployment. After you understand how to set up Disaster Recovery for the Oracle SOA Suite enterprise topology, use that information to set up Disaster Recovery for your other enterprise deployments as well.

Note:

The Oracle Fusion Middleware Disaster Recovery symmetric topology uses the Oracle SOA Suite enterprise deployment shown in Figure 3-1 at both the production site and the standby site. Figure 3-1 shows the deployment for only one site; the high level of detail shown for this deployment precludes showing the deployment for both sites in a single figure.

Figure 3-1 shows the deployment that is used at both the Oracle Fusion Middleware Disaster Recovery production and standby sites.

Figure 3-1 Deployment Used at Production and Standby Sites for Oracle Fusion Middleware Disaster Recovery


Figure 3-1 shows a diagram of the Oracle SOA, Business Process Management (BPM), and the Oracle Service Bus enterprise deployment topology. See the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite for detailed information about installing and configuring an Oracle SOA Suite enterprise deployment.

The Oracle Fusion Middleware Disaster Recovery topology that you design must be symmetric between the production site and the standby site with respect to the following:

  • Directory names and paths

    Every file that exists at a production site host must exist in the same directory path at the standby site peer host.

    Thus, Oracle home names and directory paths must be the same at the production site and standby site.

  • Port numbers

    Port numbers are used by listeners and for the routing of requests. Port numbers are stored in the configuration and must be the same at the production site hosts and their standby site peer hosts.

    Starting with an Existing Site describes how to check for port conflicts between production site and standby site hosts.

  • Security

    The same user accounts must exist at both the production site and standby site. Also, you must configure the file system, SSL, and single sign-on identically at the production site and standby site. For example, if the production site uses SSL, then the standby site must also use SSL that is configured in exactly the same way as the production site.

  • Load balancers and virtual server names

    A front-end load balancer should be set up with virtual server names for the production site, and an identical front-end load balancer should be set up with the same virtual server names for the standby site.

  • Software

    The same versions of software must be used on the production site and standby site. Also, the operating system patch level must be the same at both sites, and patches to Oracle or third-party software must be made to both the production site and standby site.

This chapter includes the following topics:

Network Considerations

When you plan your Disaster Recovery solution, consider host names, load balancers, and external clients.

This section includes the following topics:

Planning Host Names

In a Disaster Recovery topology, the production site host names must be resolvable to the IP addresses of the corresponding peer systems at the standby site.

It is important to plan the host names for the production site and standby site. After failover from a primary site to a standby site, the alias host name for the middle tier host on the standby site becomes active. If you set up an alias for the standby site, you do not need to reconfigure the host name for the host on the standby site.

Creating aliases for physical host names is required only when you use a single global DNS server to resolve host names.

This section describes how to plan physical host names and alias host names for the middle tier hosts that use the Oracle Fusion Middleware instances at the production site and standby site. It uses the Oracle SOA Suite enterprise deployment shown in Figure 3-1 for the host name examples. The host name examples in this section assume that a symmetric Disaster Recovery site is being set up, where the production site and standby site have the same number of hosts. Each host at the production site and standby site has a peer host at the other site. The peer hosts are configured the same, for example, using the same ports as their counterparts at the other site.

When you configure each component, use host-name-based configuration instead of IP-based configuration, unless the component requires you to use IP-based configuration. For example, instead of configuring the listen address of an Oracle Fusion Middleware component as a specific IP address (such as 172.16.10.255), use the host name SOAHOST1.EXAMPLE.COM, which resolves to 172.16.10.255.

The following sections show how to set up host names at the Disaster Recovery production and standby sites:

Note:

In the examples listed, IP addresses for hosts at the initial production site have the format 172.16.x.x and IP addresses for hosts at the initial standby site have the format 172.26.x.x.

Host Names for the Oracle SOA Suite Production and Standby Site Hosts

Learn about the Oracle SOA Suite production and standby sites.

Table 3-1 shows the IP addresses and physical host names that are used for the Oracle SOA Suite Enterprise Deployment Guide (EDG) deployment production site hosts. Figure 3-1 shows the configuration for the Oracle SOA Suite EDG deployment at the production site.

Table 3-1 IP Addresses and Physical Host Names for SOA Suite Production Site Hosts

IP Address      Physical Host Name    Host Name Alias
172.16.2.111    WEBHOST1              None
172.16.2.112    WEBHOST2              None
172.16.2.113    SOAHOST1              None
172.16.2.114    SOAHOST2              None

Figure 3-2 shows the physical host names that are used for the Oracle SOA Suite EDG deployment at the standby site.

Note:

If you use separate DNS servers to resolve host names, then you can use the same physical host names for the production site hosts and standby site hosts, and you do not need to define the alias host names on the standby site hosts. For more information about using separate DNS servers to resolve host names, see Resolving Host Names Using Separate DNS Servers.

Figure 3-2 Physical Host Names Used at Oracle SOA Suite Deployment Standby Site


The Administration Server and the Managed Servers require floating IP addresses to be provisioned on each site (see Table 3-2). Ensure that you provision the floating IP addresses with the same virtual host names on the production site and the standby site.

Table 3-2 Floating IP Addresses

Physical Host Name    Virtual Host Name    Floating IP
AdminServer           ADMINVHN             172.16.2.134
WEBHOST1              WEBVHN1              172.16.2.135
WEBHOST2              WEBVHN2              172.16.2.136
SOAHOST1              SOAVHN1              172.16.2.137
SOAHOST2              SOAVHN2              172.16.2.138

The following topics describe the host name resolution and testing:

Host Name Resolution

Host name resolution means mapping a host name to the proper IP address for communication.

Host name resolution can be configured in one of the following ways:

  • Resolving host names locally

    Local host name resolution uses the host name to IP address mapping that is specified by the /etc/hosts file on each host.

    For more information about using the /etc/hosts file to implement local host name file resolution, see Resolving Host Names Locally.

  • Resolving host names using DNS

    A DNS server is a dedicated server or a service that provides DNS name resolution in an IP network.

    For more information about two methods of implementing DNS server host name resolution, see Resolving Host Names Using Separate DNS Servers and Resolving Host Names Using a Global DNS Server.

You must determine the method of host name resolution that you will use for your Oracle Fusion Middleware Disaster Recovery topology when you plan the deployment of the topology. Most site administrators use a combination of these resolution methods in a precedence order to manage host names.

The Oracle Fusion Middleware hosts and the shared storage system for each site must be able to communicate with each other.

Host Name Resolution Precedence

To determine the host name resolution method used by a particular host, search for the value of the hosts parameter in the /etc/nsswitch.conf file on the host.

If you want to resolve host names locally on the host, make the files entry the first entry for the hosts parameter, as shown in Example 3-1. When files is the first entry for the hosts parameter, entries in the host /etc/hosts file are used first to resolve host names.

If you want to resolve host names by using DNS on the host, make the dns entry the first entry for the hosts parameter, as shown in Example 3-2. When dns is the first entry for the hosts parameter, DNS server entries are used first to resolve host names.

For simplicity and consistency, Oracle recommends that all the hosts within a site (production site or standby site) should use the same host name resolution method (resolving host names locally or resolving host names using separate DNS servers or a global DNS server).

The recommendations in the following sections are high-level recommendations that you can adapt to meet the host name resolution standards used by your enterprise.

Example 3-1 Specifying the Use of Local Host Name Resolution

hosts:   files   dns   nis

Example 3-2 Specifying the Use of DNS Host Name Resolution

hosts:   dns    files   nis
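The precedence check can be sketched as a small shell helper. This is a minimal illustration, not part of the product; the function name and the way the hosts line is parsed are assumptions for this example.

```shell
# Report which resolution method a host tries first, given the hosts line
# from its /etc/nsswitch.conf.
resolution_order() {
  # $1 = the full hosts line, for example "hosts: files dns nis"
  first=$(echo "$1" | awk '{print $2}')
  case "$first" in
    files) echo "local" ;;    # /etc/hosts entries are consulted first
    dns)   echo "dns"   ;;    # DNS servers are consulted first
    *)     echo "unknown" ;;
  esac
}

# On a real host, you would feed it the live configuration:
#   resolution_order "$(grep '^hosts:' /etc/nsswitch.conf)"
resolution_order "hosts:   files   dns   nis"   # Example 3-1: prints "local"
resolution_order "hosts:   dns    files   nis"  # Example 3-2: prints "dns"
```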
Resolving Host Names Locally

Local host name resolution uses the host name to IP mapping that is defined in the /etc/hosts file of a host.

When you resolve host names for your Disaster Recovery topology in this way, consider the following procedure:

  1. Ensure that the hosts parameter in the /etc/nsswitch.conf file on all the production site and standby site hosts looks like this:
    hosts:   files   dns   nis
    
  2. The /etc/hosts file entries on the hosts of the production site should have their physical host names mapped to their IP addresses. For simplicity and ease of maintenance, Oracle recommends that you provide the same entries on all the hosts of the production site. Example 3-3 shows the /etc/hosts file for the production site of a SOA enterprise deployment topology.
  3. The /etc/hosts file entries on the hosts of the standby site should have their physical host names mapped to their IP addresses along with the physical host names of their corresponding peer on the production site defined as the alias host names. For simplicity and ease of maintenance, Oracle recommends that you have the same entries on all the hosts of the standby site. Example 3-4 shows the /etc/hosts file for the standby site of a SOA enterprise deployment topology.
  4. After you set up host name resolution by using /etc/host file entries, use the ping command to test host name resolution. For a system configured with static IP addressing and the /etc/hosts file entries shown in Example 3-3, a ping webhost1 command on the production site returns the correct IP address (172.16.2.111) and indicates that the host name is fully qualified.
  5. Similarly, for a system configured with static IP addressing and the /etc/hosts file entries shown in Example 3-4, a ping webhost1 command on the standby site returns the correct IP address (172.26.2.111) and it shows that the name WEBHOST1 is associated with that IP address.
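Steps 4 and 5 can be scripted. The following is a hypothetical helper, assuming that the getent command is available (it consults the same sources, in the same order, as configured in /etc/nsswitch.conf); the host names and IP addresses follow Example 3-3 and Example 3-4.

```shell
# Confirm that a host name resolves to the expected IP address.
check_host() {
  # $1 = host name, $2 = expected IP address
  resolved=$(getent hosts "$1" | awk '{print $1; exit}')
  if [ "$resolved" = "$2" ]; then
    echo "OK: $1 -> $resolved"
  else
    echo "MISMATCH: $1 resolved to '$resolved' (expected $2)" >&2
    return 1
  fi
}

# On a production site host (values from Example 3-3):
#   check_host WEBHOST1.EXAMPLE.COM 172.16.2.111
# On a standby site host (values from Example 3-4):
#   check_host WEBHOST1.EXAMPLE.COM 172.26.2.111
```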

Example 3-3 Making /etc/hosts File Entries for a Production Site Host

127.0.0.1      localhost.localdomain    localhost
172.16.2.111    WEBHOST1.EXAMPLE.COM    WEBHOST1
172.16.2.112    WEBHOST2.EXAMPLE.COM    WEBHOST2
172.16.2.113    SOAHOST1.EXAMPLE.COM    SOAHOST1
172.16.2.114    SOAHOST2.EXAMPLE.COM    SOAHOST2

Example 3-4 Making /etc/hosts File Entries for a Standby Site Host

127.0.0.1      localhost.localdomain    localhost
172.26.2.111    STBYWEB1.EXAMPLE.COM    STBYWEB1 WEBHOST1.EXAMPLE.COM WEBHOST1 
172.26.2.112    STBYWEB2.EXAMPLE.COM    STBYWEB2 WEBHOST2.EXAMPLE.COM WEBHOST2
172.26.2.113    STBYSOA1.EXAMPLE.COM    STBYSOA1 SOAHOST1.EXAMPLE.COM SOAHOST1
172.26.2.114    STBYSOA2.EXAMPLE.COM    STBYSOA2 SOAHOST2.EXAMPLE.COM SOAHOST2

Note:

The subnets in the production site and standby site are different.
Resolving Host Names Using Separate DNS Servers

Use separate DNS servers to resolve host names for your Disaster Recovery topology.

The term separate DNS servers refers to a Disaster Recovery topology, where the production site and the standby site have separate and distinct DNS servers. When you use separate DNS servers to resolve host names for your Disaster Recovery topology, consider the following procedure:

  1. Ensure that the hosts parameter in the /etc/nsswitch.conf file on all the production site and standby site hosts looks like this:
    hosts:   dns   files   nis
    
  2. The DNS servers on the production site and standby site must not be aware of each other and must contain entries for host names used within their own site.
  3. The DNS server entries on the production site should have the physical host names mapped to their IP addresses. Example 3-5 shows the DNS server entries for the production site of a SOA enterprise deployment topology.
  4. The DNS server entries on the standby site should have the physical host names of the production site mapped to their IP addresses. Example 3-6 shows the DNS server entries for the standby site of a SOA enterprise deployment topology.
  5. Ensure that there are no entries in the /etc/hosts file for any host at the production site or standby site.
  6. Test the host name resolution by using the ping command. For a system configured with the production site DNS entries, as shown in Example 3-5, a ping webhost1 command on the production site returns the correct IP address (172.16.2.111) and indicates that the host name is fully qualified.
  7. Similarly, for a system configured with the standby site DNS entries shown in Example 3-6, a ping webhost1 command on the standby site returns the correct IP address (172.26.2.111) and indicates that the host name is fully qualified.

Example 3-5 DNS Entries for a Production Site Host in a Separate DNS Servers Configuration

WEBHOST1.EXAMPLE.COM    IN    A    172.16.2.111
WEBHOST2.EXAMPLE.COM    IN    A    172.16.2.112
SOAHOST1.EXAMPLE.COM    IN    A    172.16.2.113
SOAHOST2.EXAMPLE.COM    IN    A    172.16.2.114

Example 3-6 DNS Entries for a Standby Site Host in a Separate DNS Servers Configuration

WEBHOST1.EXAMPLE.COM    IN    A    172.26.2.111
WEBHOST2.EXAMPLE.COM    IN    A    172.26.2.112
SOAHOST1.EXAMPLE.COM    IN    A    172.26.2.113
SOAHOST2.EXAMPLE.COM    IN    A    172.26.2.114
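As an illustration of the record layout, the following sketch extracts A records from the Example 3-5 zone data, inlined here as a here-document. On a live system you would query the site's DNS server instead (for example, with dig +short WEBHOST1.EXAMPLE.COM). The function name is hypothetical.

```shell
# Look up the A record for a fully qualified host name in the inlined
# production-site zone data from Example 3-5.
lookup_a_record() {
  # $1 = fully qualified host name
  awk -v host="$1" '$1 == host && $3 == "A" {print $4}' <<'EOF'
WEBHOST1.EXAMPLE.COM    IN    A    172.16.2.111
WEBHOST2.EXAMPLE.COM    IN    A    172.16.2.112
SOAHOST1.EXAMPLE.COM    IN    A    172.16.2.113
SOAHOST2.EXAMPLE.COM    IN    A    172.16.2.114
EOF
}

lookup_a_record SOAHOST1.EXAMPLE.COM   # prints 172.16.2.113
```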
Resolving Host Names Using a Global DNS Server

Use a global DNS server to resolve host names for your Disaster Recovery topology.

The term global DNS server refers to a Disaster Recovery topology, where a single DNS server is used for both the production site and the standby site. When you use a global DNS server to resolve host names for your Disaster Recovery topology, consider the following procedure:

  1. When you use a global DNS server, for the sake of simplicity, use a combination of local host name resolution and DNS host name resolution.
  2. In this example, it is assumed that the production site uses DNS host name resolution and the standby site uses local host name resolution.
  3. The global DNS server should have the entries for both the production and standby site hosts. Example 3-7 shows the entries for a SOA enterprise deployment topology.
  4. Ensure that the hosts parameter in the /etc/nsswitch.conf file on all the production site hosts looks like this:
    hosts:   dns   files   nis
    
  5. Ensure that the hosts parameter in the /etc/nsswitch.conf file on all the standby site hosts looks like this:
    hosts:   files   dns   nis
    
  6. The /etc/hosts file entries on the hosts of the standby site should have their physical host names mapped to their IP addresses along with the physical host names of their corresponding peer on the production site defined as the alias host names. For simplicity and ease of maintenance, Oracle recommends that you have the same entries on all the hosts of the standby site. Example 3-8 shows the /etc/hosts file for the standby site of a SOA enterprise deployment topology.
  7. Test the host name resolution by using the ping command. A ping webhost1 command on the production site returns the correct IP address (172.16.2.111) and indicates that the host name is fully qualified.
  8. Similarly, a ping webhost1 command on the standby site returns the correct IP address (172.26.2.111) and indicates that the host name is fully qualified.

Example 3-7 DNS Entries for Production Site and Standby Site Hosts When Using a Global DNS Server Configuration

WEBHOST1.EXAMPLE.COM    IN    A    172.16.2.111
WEBHOST2.EXAMPLE.COM    IN    A    172.16.2.112
SOAHOST1.EXAMPLE.COM    IN    A    172.16.2.113
SOAHOST2.EXAMPLE.COM    IN    A    172.16.2.114
STBYWEB1.EXAMPLE.COM    IN    A    172.26.2.111
STBYWEB2.EXAMPLE.COM    IN    A    172.26.2.112
STBYSOA1.EXAMPLE.COM    IN    A    172.26.2.113
STBYSOA2.EXAMPLE.COM    IN    A    172.26.2.114

Example 3-8 Standby Site /etc/hosts File Entries When Using a Global DNS Server Configuration

127.0.0.1      localhost.localdomain    localhost
172.26.2.111    STBYWEB1.EXAMPLE.COM    WEBHOST1
172.26.2.112    STBYWEB2.EXAMPLE.COM    WEBHOST2
172.26.2.113    STBYSOA1.EXAMPLE.COM    SOAHOST1
172.26.2.114    STBYSOA2.EXAMPLE.COM    SOAHOST2
Testing the Host Name Resolution

Validate the host name assignment by connecting to each host at the production site and by using the ping command to ensure that the host can locate the other hosts at the production site.

In addition, connect to each host at the standby site and use the ping command to ensure that the host can locate the other hosts at the standby site.
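This validation can be scripted as a loop over the site's host list. The following is a minimal sketch; the function name is illustrative, the host lists are taken from the SOA Suite example topology, and ping is assumed to be available.

```shell
# Ping every peer host at a site once and report which, if any, cannot be
# located. Returns the number of unreachable hosts.
check_site_hosts() {
  failures=0
  for h in "$@"; do
    if ping -c 1 -W 2 "$h" > /dev/null 2>&1; then
      echo "OK:   $h"
    else
      echo "FAIL: $h"
      failures=$((failures + 1))
    fi
  done
  return $failures
}

# At the production site:
#   check_site_hosts WEBHOST1 WEBHOST2 SOAHOST1 SOAHOST2
# At the standby site:
#   check_site_hosts STBYWEB1 STBYWEB2 STBYSOA1 STBYSOA2
```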

Virtual IP and Virtual Host Name Considerations

When the system that hosts the Oracle WebLogic Administration Server fails, virtual IP addresses and host names are required to enable the Oracle WebLogic Administration Server to continue servicing requests.

Virtual IP addresses enable Managed Servers in your domain to participate in server migration. Virtual IP addresses should be provisioned in the application tier so that they can be bound to a network interface on any host in the application tier.

In a Disaster Recovery topology, the production site virtual IP host names must be resolvable to the IP addresses of the corresponding peer systems at the standby site. Therefore, it is important to plan the host names for the production site and the standby site. After failover from a primary site to a standby site, the alias host name for the middle tier host on the standby site becomes active. You do not need to reconfigure a host name for the host on the standby site if you set up aliases for the standby site.

This section describes how to plan virtual IP host names and alias host names for the middle tier hosts that use the Oracle Fusion Middleware instances at the production site and the standby site. This is required when you have a single corporate DNS.

It uses the Oracle SOA Suite enterprise deployment shown in Figure 3-1 for the host name examples. The host name examples in this section assume that a symmetric disaster recovery site is being set up, where the production site and standby site have the same number of hosts. Each host at the production site and the standby site has a peer host at the other site. The peer hosts are configured the same, for example, by using the same ports as their counterparts at the other site.

The following subsections show how to set up virtual IP addresses and host names at the Disaster Recovery production site and standby site for the following enterprise deployments:

Virtual IP Addresses and Virtual Host Names for the Oracle SOA Suite Production Site and Standby Site Hosts

Table 3-3 shows the virtual IP addresses and virtual host names that are used for the Oracle SOA Suite EDG deployment production site hosts. Figure 3-1 shows the configuration for the Oracle SOA Suite EDG deployment at the production site.

Table 3-3 Virtual IP Addresses and Virtual Host Names for the SOA Suite Production Site Hosts

Virtual IP Address    Virtual Host Name    Alias Host Name
172.16.2.115          ADMINVHN             None
172.16.2.116          SOAVHN1              None
172.16.2.117          SOAVHN2              None

Table 3-4 shows the virtual IP addresses, virtual host names, and alias host names that are used for the Oracle SOA Suite EDG deployment standby site hosts. Figure 3-2 shows the physical host names that are used for the Oracle SOA Suite EDG deployment at the standby site. The alias host names shown in Table 3-4 should be defined for the Oracle SOA Suite standby site hosts, as shown in Figure 3-2.

Note:

If you use separate DNS servers to resolve host names, then you can use the same virtual IP addresses and virtual host names for the production site hosts and standby site hosts, and you do not need to define the alias host names.

For more information about using separate DNS servers to resolve host names, see Resolving Host Names Using Separate DNS Servers.

Table 3-4 Virtual IP Addresses, Virtual Host Names, and Alias Host Names for SOA Suite Standby Site Hosts

Virtual IP Address    Virtual Host Name    Host Name Alias
172.26.2.115          STBYADMINVHN         ADMINVHN
172.26.2.116          STBYSOAVHN1          SOAVHN1
172.26.2.117          STBYSOAVHN2          SOAVHN2

Load Balancer Considerations

Oracle Fusion Middleware components require a hardware load balancer when deployed in high availability topologies.

Oracle recommends that your hardware load balancer support the following features:

  • Ability to load balance traffic to a pool of real servers through a virtual host name: Clients access services by using the virtual host name instead of using actual host names. The load balancer can then load balance requests to the servers in the pool.

  • Port translation configuration.

  • Monitoring of ports (HTTP and HTTPS).

  • Virtual servers and port configuration: Ability to configure virtual server names and ports on your external load balancer. The virtual server names and ports must meet the following requirements:

    • The load balancer should allow configuration of multiple virtual servers. For each virtual server, the load balancer should allow configuration of traffic management on more than one port. For example, for Oracle Internet Directory clusters, you must configure the load balancer with a virtual server and ports for LDAP and LDAPS traffic.

    • The virtual server names must be associated with IP addresses and be part of your DNS. Clients must be able to access the load balancer through the virtual server names.

  • Ability to detect node failures and immediately stop routing traffic to the failed node.

  • Resource monitoring, port monitoring, and process failure detection: The load balancer must be able to detect service and node failures (through notification or some other means) and stop directing non-Oracle Net traffic to the failed node. If your load balancer can automatically detect failures, you should use this feature.

  • Fault-tolerant mode: It is highly recommended that you configure the load balancer to be in fault-tolerant mode.

  • It is highly recommended that you configure the load balancer virtual server to return immediately to the calling client when the back-end services to which it forwards traffic are unavailable. This is preferred over the client disconnecting on its own after a timeout based on the TCP/IP settings on the client system.

  • Sticky routing capability: Ability to maintain sticky connections to components based on cookies or URLs.

  • SSL acceleration: This feature is recommended, but not required.

  • For the Identity Management configuration with Oracle Access Manager, configure the virtual servers in the load balancer for the directory tier with a high value for the connection timeout for TCP connections. This value should be more than the maximum expected time over which no traffic is expected between the Oracle Access Manager and the directory tier.

  • Ability to preserve the client IP addresses: The load balancer must have the capability to insert the original client IP address of a request in an X-Forwarded-For HTTP header to preserve the client IP address.

Virtual Server Considerations

You must configure the virtual servers and the associated ports on the load balancer for the different types of network traffic and monitoring.

Map each virtual server to the appropriate real hosts and ports for the services that are running. Also, configure the load balancer to monitor the real hosts and ports for availability so that traffic to them is stopped as soon as possible when a service is down. This ensures that incoming traffic on a given virtual host is not directed to an unavailable service in the other tiers.

Oracle recommends that you use two load balancers when you deal with external and internal traffic. In such a topology, one load balancer is set up for external HTTP traffic and the other load balancer is set up for internal LDAP traffic. A deployment may choose to use a single load balancer device for a variety of reasons. Although this is supported, the deployment should consider the security implications and, if appropriate, open the relevant firewall ports to allow traffic across the various DMZs. In either case, it is highly recommended that you deploy a given load balancer device in fault-tolerant mode.

Some of the virtual servers defined in the load balancer are used for inter-component communication. These virtual servers are used for internal traffic and are defined in the internal DNS of a company. When you use a single global DNS server to resolve host names, Oracle highly recommends that you create aliases for these virtual servers.

Creating aliases is not required when you use separate DNS servers to resolve host names.

The virtual servers required for the various Oracle Fusion Middleware products are described in Table 3-5 and Table 3-6.

Table 3-5 Virtual Servers for Oracle SOA Suite Production Site

Components                 Access      Virtual Server Name        Alias Name
Oracle SOA                 External    soa.example.com            None
Oracle SOA                 Internal    soainternal.example.com    None
Administration Consoles    Internal    admin.example.com          None

Table 3-6 Virtual Servers for Oracle SOA Suite Standby Site

Components                 Access      Virtual Server Name            Alias Virtual Server Name
Oracle SOA                 External    soa.example.com                None
Oracle SOA                 Internal    stbysoainternal.example.com    soainternal.example.com
Administration Consoles    Internal    admin.example.com              None

External Clients Considerations

Systems directly accessing the servers in the topology need to be aware of the listen address that is used by the different Oracle WebLogic Server instances.

Appropriate host name resolution must be provided to the clients so that the host name alias that the servers use as the listen address is correctly resolved. This also applies to Oracle JDeveloper deployments: the client that hosts Oracle JDeveloper must map the SOAHOSTx and SOAVHNx aliases to the correct IP addresses for deployments to succeed.
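For example, on a client that resolves host names through its local hosts file, the mapping might look like the following sketch. The addresses are the production site values used in this chapter; this is an illustration, not a prescribed configuration.

```
172.16.2.115    ADMINVHN.EXAMPLE.COM    ADMINVHN
172.16.2.116    SOAVHN1.EXAMPLE.COM     SOAVHN1
172.16.2.117    SOAVHN2.EXAMPLE.COM     SOAVHN2
```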

Wide Area DNS Operations

When a site switchover or failover is carried out, client requests must be redirected transparently to the new site that is playing the production role.

To direct client requests to the entry point of a production site, use DNS resolution. To accomplish this redirection, the wide area DNS that resolves requests to the production site has to be switched over to the standby site. The DNS switchover can be accomplished by either using a global load balancer or manually changing DNS names.

Note:

A hardware load balancer is assumed to serve as a front end for each site. Check for supported load balancers at:

http://support.oracle.com

This section includes the following topics:

Using a Global Load Balancer

A global load balancer deployed in front of the production and standby sites provides fault detection services and performance-based routing redirection for the two sites.

In addition, the load balancer can provide authoritative DNS name server equivalent capabilities.

During normal operations, you can configure the global load balancer with the production site's load balancer name-to-IP mapping. When a DNS switchover is required, this mapping in the global load balancer is changed to map to the standby site's load balancer IP. This allows requests to be directed to the standby site, which now has the production role.

This method of DNS switchover works for both site switchover and failover. One advantage of using a global load balancer is that the time for a new name-to-IP mapping to take effect can be almost immediate. The downside is that an additional investment must be made for the global load balancer.

Manually Changing DNS Names

The DNS switchover involves manually changing the name-to-IP mapping of the production site's load balancer.

The mapping is changed to map to the IP address of the standby site's load balancer. Follow these instructions to perform the switchover:

  1. Note the current Time to Live (TTL) value of the production site's load balancer mapping. This mapping is in the DNS cache, and it remains there until the TTL expires. As an example, assume that the TTL is 3600 seconds.
  2. Modify the TTL value to a short interval (for example, 60 seconds).
  3. Wait one interval of the original TTL (in this example, the 3600 seconds noted in Step 1) so that all cached entries that use the old TTL expire.
  4. Ensure that the standby site is switched over to receive requests.
  5. Modify the DNS mapping to resolve to the standby site's load balancer, giving it the appropriate TTL value for normal operation (for example, 3600 seconds).

This method of DNS switchover works for both switchover and failover operations. The short TTL value set in Step 2 limits the period during which client requests cannot be fulfilled. The modification of the TTL effectively changes the caching semantics of the address resolution from a long period of time to a short one. Because of the shortened caching period, an increase in DNS requests can be observed.
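To confirm that the shortened TTL from Step 2 has propagated, you can inspect the TTL field of a DNS answer. The following sketch parses an illustrative answer line; on a live system you would capture the line with dig +noall +answer soa.example.com (the host name and IP address shown are example values, not real records).

```shell
# The second field of a dig answer line is the remaining TTL in seconds.
# This answer line is illustrative, not captured from a real server.
answer='soa.example.com.  60  IN  A  203.0.113.10'
ttl=$(echo "$answer" | awk '{print $2}')
echo "remaining TTL: ${ttl}s"
```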

If the clients that point to SOA run on Java, an additional TTL setting must be taken into account. Java maintains its own DNS cache for successful DNS resolutions, so a DNS server change is not picked up until the JVM is restarted. You can change this behavior by setting the networkaddress.cache.ttl property to a low value:
  • You can do it globally, for all the applications that are running on the JVM, by modifying the property in JAVA_HOME/jre/lib/security/java.security file: networkaddress.cache.ttl=60

  • You can define it for a specific application only, by setting that property in the application's initialization code: java.security.Security.setProperty("networkaddress.cache.ttl" , "60")

Storage Considerations

When you design storage for your Disaster Recovery solution, consider Fusion Middleware artifacts, storage replication, and file-based persistent stores.

This section includes the following topics:

Oracle Fusion Middleware Artifacts

Oracle Fusion Middleware components in a given environment are usually interdependent on one another, so it is important that the components in the topology be synchronized.

This synchronization is important when you design volumes and consistency groups. Some artifacts are static whereas others are dynamic.

Static Artifacts

Static artifacts are files and directories that do not change frequently. These include:

  • Middleware home: The Middleware home usually consists of an Oracle home and an Oracle WebLogic Server home.

  • Oracle Inventory: This includes oraInst.loc and oratab files, which are located in the /etc directory.

Dynamic or Runtime Artifacts

Dynamic or runtime artifacts are files that change frequently. Runtime artifacts include:

  • Domain home: Domain directories of the Administration Server and the Managed Servers.

  • Oracle instances: Oracle Instance home directories.

  • Application artifacts, such as .ear or .war files.

  • Database artifacts, such as the MDS repository.

  • Database metadata repositories that are used by Oracle Fusion Middleware.

  • Persistent stores, such as JMS providers and transaction logs.

  • Deployment plans: Used for updating technology adapters, such as file and JMS adapters. They need to be saved in a location that is accessible to all nodes in the cluster that the artifacts are being deployed to.

Oracle Home and Oracle Inventory

Oracle Fusion Middleware enables you to create multiple Oracle WebLogic Server Managed Servers from a single binary installation.

You can install the binary files in a single location on shared storage and reuse the installation from servers on different nodes. Note that, for maximum availability, Oracle recommends that you use redundant binary installations.

When multiple servers on different nodes share an Oracle home or a WebLogic home, Oracle recommends that you keep the Oracle Inventory and Oracle home list on those nodes updated, for consistency in the installations and in the application of patches.

To update the inventory files in a node and attach an installation in a shared storage to it, use the ORACLE_HOME/oui/bin/attachHome.sh file.

Storage Replication

Learn about the guidelines to create volumes on a shared storage.

Depending on the capabilities of the storage replication technology that is available with your preferred storage device, you may need to create mount points, directories, and symbolic links on each of the nodes within a tier.

If your storage device's storage replication technology guarantees consistent replication across multiple volumes, then complete the following:

  • Create one volume per server running on that tier. For example, on the application tier, you can create one volume for the WebLogic Administration Server and another volume for the Managed Servers.

  • Create one consistency group for each tier with the volumes for that tier as its members.

  • If a volume is mounted by two systems simultaneously, a clustered file system may be required, depending on the storage subsystem. However, there is no known case of a single file or directory tree being concurrently accessed by Oracle processes on different systems. Because NFS is a shared file system, no additional clustered file system software is required when you use NFS-attached storage.

If your storage device's storage replication technology does not guarantee consistent replication across multiple volumes, then complete the following:

  • Create a volume for each tier. For example, you can create one volume for the application tier, one for the web tier, and so on.

  • Create a separate directory for each node in that tier. For example, you can create a directory for SOAHOST1 under the application tier volume, create a directory for WEBHOST1 under the web tier volume, and so on.

  • Create a mount point directory on each node to the directory on the volume.

  • Create a symbolic link to the mount point directory. This enables the same directory structure to be used across the nodes in a tier.

  • If a volume is mounted by two systems simultaneously, a clustered file system may be required, depending on the storage subsystem. However, there is no known case of a single file or directory tree being concurrently accessed by Oracle processes on different systems. Because NFS is a shared file system, no additional clustered file system software is required when you use NFS-attached storage.
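The per-node directory, mount point, and symbolic link pattern described in the list above can be sketched as follows. The SOAHOST1 directory name and the path layout are illustrative assumptions, and temporary directories stand in for the replicated tier volume and the node's local file system.

```shell
#!/bin/sh
# Sketch of the per-node directory pattern on a tier volume.
VOLUME=$(mktemp -d)                     # stands in for the mounted application-tier volume
NODE_DIR="$VOLUME/SOAHOST1"             # separate directory for this node on the volume
mkdir -p "$NODE_DIR/u01/app/oracle"     # the node's middleware tree lives here

# Each node links a common local path to its own directory on the volume,
# so the same directory structure is seen on every node in the tier.
LINK=$(mktemp -d)/oracle                # stands in for the node-local path, e.g. /u01/app/oracle
ln -s "$NODE_DIR/u01/app/oracle" "$LINK"

ls -ld "$LINK"
```

Because every node resolves the same logical path through its own symbolic link, files replicate to the standby site under identical paths, which is what the Disaster Recovery topology requires.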

Note:

Before you set up the shared storage for your Disaster Recovery sites, read the high availability chapter in the Oracle Fusion Middleware Release Notes to learn of any known shared storage-based deployment issues in high availability environments.

File-Based Persistent Store

The Java Message Service (JMS) and transaction logs (TLogs) can use a file-based persistent store in the Oracle SOA Suite topology. However, for Disaster Recovery deployments, Oracle recommends that you use JDBC persistent stores for both JMS and TLogs.

Oracle WebLogic Server instances are usually clustered for high availability, and a file-based persistent store must reside on shared storage that is accessible to all members of the cluster.

A Storage Area Network (SAN) storage system should use either a host-based clustered file system or a shared file system technology, such as Oracle Cluster File System (OCFS2). OCFS2 is a symmetric, shared-disk cluster file system that allows each node to read and write both metadata and data directly to the SAN.

Additional clustered file systems are not required when you use NAS storage systems.

Database Considerations

When you plan your Disaster Recovery solution, consider synchronizing the databases in your system with Oracle Data Guard.

This section provides the recommendations and considerations to set up Oracle databases that are used in an Oracle Fusion Middleware Disaster Recovery topology.

  • Oracle recommends that you create Oracle Real Application Clusters (Oracle RAC) databases on both the production site and standby site, as required by your topology.

  • Oracle Data Guard is the recommended disaster protection technology for the databases running the metadata repositories. You can also use Oracle Active Data Guard or Oracle GoldenGate.

    Note:

    You can use Oracle GoldenGate in an active-passive configuration only.

  • Base the Oracle Data Guard configuration on the data loss requirements of the database and on network considerations, such as the available bandwidth and latency relative to the redo generation rate. Ensure that you determine these factors correctly before you set up the Oracle Data Guard configuration.

  • Ensure that your network is configured for low latency with sufficient bandwidth, because synchronous redo transmission can affect the response time and throughput.

  • The LOG_ARCHIVE_DEST_n parameter on standby site databases should have the SYNC or ASYNC attribute. If no attribute is specified, ASYNC is the default.

  • The standby site database should run in managed recovery mode, which keeps the standby databases in a constant state of media recovery and enables shorter failover times.

  • The tnsnames.ora file on the production site and the standby site must have entries for databases on both the production and standby sites.

  • Oracle strongly recommends that you force Oracle Data Guard to perform manual database synchronization whenever middle tier synchronization is performed. This is especially important for components that store configuration data in the metadata repositories.

  • Oracle strongly recommends that you set up aliases for the database host names on both the production and standby sites. This enables seamless switchovers, switchbacks, and failovers.

  • When the database at one site is an Oracle RAC database and the peer site uses a single-instance database, the single-instance database must have the same value for INSTANCE_NAME.

    Note:

    • The values for ORACLE_HOME, the Middleware home, ORACLE_INSTANCE, and DOMAIN_HOME in the middle tier must be identical.

    • The values for DB_NAME, INSTANCE_NAME, Listen Port, and ORACLE_SID in the database tier must be identical.

    • To avoid manipulation of the WebLogic Server data sources, the SERVICE_NAME that is specified in the application data source must be identical on both sites. However, each database can have additional services defined.
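The database host name aliasing recommended in the list above can be sketched as follows. A temporary file stands in for /etc/hosts, and all addresses and host names are illustrative assumptions; the point is that the same alias resolves to the local database server at each site.

```shell
#!/bin/sh
# Sketch: the same alias (dbhost.example.com) maps to the local DB host
# at each site, so middle tier configuration never needs to change on
# switchover. A temp file stands in for /etc/hosts.
HOSTS_FILE=$(mktemp)

# On the production site, the alias points at the production DB host:
printf '10.0.0.21  proddbhost1.example.com  dbhost.example.com\n' >> "$HOSTS_FILE"

# On the standby site, the equivalent entry points the same alias at the
# standby DB host instead:
#   10.1.0.21  stbydbhost1.example.com  dbhost.example.com

grep dbhost.example.com "$HOSTS_FILE"
```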

This section includes the following topics:

Recommended Setup for Two Node Cluster Database with ASM

Learn about the recommended setup for a database with ASM (Automatic Storage Management) with two cluster nodes.

ASM is a volume manager and a file system for Oracle Database files that supports single-instance Oracle Database and Oracle Real Application Clusters (Oracle RAC) configurations. ASM is Oracle's recommended storage management solution and provides an alternative to conventional volume managers, file systems, and raw devices.

For additional information about Oracle RAC and ASM, see the Oracle Database documentation.

Sysctl Setup

  1. Set the values of sysctl by running the following commands. To set up a database with cluster nodes custdbhost1.example.com and custdbhost2.example.com, run the commands on both custdbhost1.example.com and custdbhost2.example.com.

    Note:

    The values and parameters mentioned below are the minimum required; they may vary according to your particular system requirements.
    • /sbin/sysctl -w net.ipv4.ip_forward=0

    • /sbin/sysctl -w net.ipv4.conf.default.rp_filter=1

    • /sbin/sysctl -w net.ipv4.tcp_tw_recycle=1

    • /sbin/sysctl -w kernel.sysrq=1

    • /sbin/sysctl -w kernel.panic=60

    • /sbin/sysctl -w kernel.shmall=2097152

    • /sbin/sysctl -w kernel.shmmni=4096

    • /sbin/sysctl -w kernel.shmmax=8178892800

    • /sbin/sysctl -w fs.file-max=6815744

    • /sbin/sysctl -w kernel.msgmni=2878

    • /sbin/sysctl -w kernel.sem="50 32000 100 142"

    • /sbin/sysctl -w net.core.rmem_default=1048576

    • /sbin/sysctl -w net.core.rmem_max=4194304

    • /sbin/sysctl -w net.core.wmem_default=524288

    • /sbin/sysctl -w net.core.wmem_max=1048576

    • /sbin/sysctl -w fs.aio-max-nr=1048576

    • /sbin/sysctl -w net.ipv4.ip_local_port_range="9000 65500"

  2. To load the changed values, run the following command:
    • /sbin/sysctl -p
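The values set with /sbin/sysctl -w change only the running kernel; to survive a reboot, the same settings belong in /etc/sysctl.conf. The sketch below uses a temporary file as a stand-in for /etc/sysctl.conf (editing the real file requires root), with a subset of the parameters from the list above.

```shell
#!/bin/sh
# Sketch: persist kernel parameters so they survive a reboot.
# A temp file stands in for /etc/sysctl.conf.
SYSCTL_CONF=$(mktemp)

cat >> "$SYSCTL_CONF" <<'EOF'
fs.file-max = 6815744
fs.aio-max-nr = 1048576
kernel.shmmax = 8178892800
net.ipv4.ip_local_port_range = 9000 65500
EOF

# Reload the settings (run as root against the real /etc/sysctl.conf):
#   /sbin/sysctl -p

grep file-max "$SYSCTL_CONF"
```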

Create the Swap Size

To set up the swap file:

  1. Ensure that the system has at least 500 MB of swap space and at least 500 MB of temp space (/tmp).

  2. Create the swap file. For example, for a 6 GB file:

    dd if=/dev/zero of=/root/myswapfile bs=1M count=6144

  3. chmod 600 /root/myswapfile

  4. /sbin/mkswap /root/myswapfile

  5. /sbin/swapon /root/myswapfile

  6. Edit the /etc/fstab file to add the following entry:

    /root/myswapfile swap  swap defaults  0 0
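The swap-file steps can be combined into one script, sketched below. A small file under a temporary directory stands in for /root/myswapfile, and the mkswap/swapon/fstab steps are shown as comments because they require root privileges on the real file.

```shell
#!/bin/sh
# Sketch of the swap-file procedure. Uses a 4 MB stand-in file; use
# count=6144 on the real system for a 6 GB swap file.
SWAPFILE=$(mktemp -d)/myswapfile

dd if=/dev/zero of="$SWAPFILE" bs=1M count=4 2>/dev/null
chmod 600 "$SWAPFILE"   # swap files must not be readable by other users

# As root, on the real file:
#   /sbin/mkswap /root/myswapfile
#   /sbin/swapon /root/myswapfile
#   echo '/root/myswapfile swap  swap defaults  0 0' >> /etc/fstab

ls -l "$SWAPFILE"
```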

Set Soft & Hard Limits

Edit the /etc/security/limits.conf file as the root user and add the following entries. The first column specifies the user or group that each limit applies to; the entries below use * to apply the limits to all users, but you can scope them to the Oracle software owner (for example, aime1) instead.

Note:

The values mentioned below are the minimum required; they may vary according to your particular system requirements.

*    soft    nproc          2047

*    hard    nproc          16384

*    soft    nofile         8192

*    hard    nofile         65536

Create the Folder Structure for Mount and Grid Installation

The following commands create a sample file structure and may vary per user.

To create the folder structure, run the following commands as the root user:

mkdir -p /u01/app/12.1.0/grid

chown aime1:svrtech /u01/app/12.1.0/grid

mkdir -p /u01/app/aime1

mkdir -p /u01/app/oracle/db/RACDATA

chown aime1:svrtech /u01/app/oracle/db/RACDATA

mkdir -p /u01/app/oracle/db/12.1.0.1

chown aime1:svrtech /u01/app/oracle/db/12.1.0.1

mkdir -p /u01/app/oracle/crl

chown aime1:svrtech /u01/app/oracle/crl

Making TNSNAMES.ORA Entries for Databases

Oracle Data Guard is used to synchronize the production and standby databases, so the production and standby databases must be able to reference each other.

Oracle Data Guard uses the tnsnames.ora file entries to direct requests to the production and standby databases, so entries for production and standby databases must be made to the tnsnames.ora file. See Oracle Data Guard Concepts and Administration in the Oracle Database documentation set for more information about using tnsnames.ora files with Oracle Data Guard.
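A minimal sketch of such tnsnames.ora entries follows. The net service names, host names, and service names are illustrative assumptions, and a temporary file stands in for the real tnsnames.ora; the point is that each file carries entries for both the production and the standby database.

```shell
#!/bin/sh
# Sketch: tnsnames.ora entries for both databases, written to a temp file.
TNSNAMES=$(mktemp)

cat > "$TNSNAMES" <<'EOF'
SOAPROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = proddbhost1.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = soaprod.example.com))
  )

SOASTBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbydbhost1.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = soastby.example.com))
  )
EOF

grep -c DESCRIPTION "$TNSNAMES"
```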

Synchronizing Databases Manually

Use SQL to synchronize production and standby databases.

The SQL alter system archive log all statement switches logs and thus forces a synchronization of the production and standby site databases.

To manually synchronize production and standby site databases, use the following SQL statement:

ALTER SYSTEM ARCHIVE LOG ALL;
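The statement above can be scripted, for example as part of a middle tier synchronization job. This is a sketch: the connection string is an assumption, and the statement is only executed when sqlplus is available on the host.

```shell
#!/bin/sh
# Sketch: force a log switch on the primary to synchronize the standby.
SYNC_SQL='ALTER SYSTEM ARCHIVE LOG ALL;'

echo "Running on primary: $SYNC_SQL"

if command -v sqlplus >/dev/null 2>&1; then
    # Run as a SYSDBA-capable user against the primary database.
    echo "$SYNC_SQL" | sqlplus -s "/ as sysdba" \
        || echo "sqlplus returned an error; check connectivity to the primary."
else
    echo "sqlplus not found; run the statement manually on the primary."
fi
```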

Setting Up Data Guard-Ready Data Sources in the Middle Tier

Configure the data sources that Oracle Fusion Middleware uses so that connections automatically fail over when the primary database fails over or switches over.

Configure all the data sources that are used in the domain, including the data sources used by JDBC persistence stores, leasing data sources, and custom data sources. The GridLink data sources must be modified to include information about the standby database.

For detailed information about how to configure Data Guard-ready data sources, see Configuring Data Sources for Oracle Fusion Middleware Active-Passive Deployment.

Starting Points

When you plan your Disaster Recovery solution, consider starting with an existing site or creating a new site.

Before setting up the standby site, the administrator must evaluate the starting point of the project. The starting point for designing an Oracle Fusion Middleware Disaster Recovery topology is usually one of the following:

  • The production site is already created and the standby site is being planned and created.

    Starting with an Existing Site describes how to design the Oracle Fusion Middleware Disaster Recovery standby site when you have an existing production site.

  • There is no existing production site or standby site. Both need to be designed and created.

    Starting with a New Site describes how to design a new Oracle Fusion Middleware Disaster Recovery production site and standby site when you do not have an existing production site or standby site.

  • Some hosts or components may exist at a current production site, but new hosts or components must be added at that site or at a standby site to set up a functioning Oracle Fusion Middleware Disaster Recovery topology.

    Use the pertinent information in this chapter to design and implement an Oracle Fusion Middleware Disaster Recovery topology.

This section includes the following topics:

Starting with an Existing Site

When you start with an existing production site, the configuration data and the Oracle binary files for the production site are already on the file system.

In addition, the host names, ports, and user accounts are already defined.

To migrate an existing production site to shared storage, see the following topic:

Migrating an Existing Production Site to Shared Storage

The Oracle Fusion Middleware Disaster Recovery solution relies on shared storage to implement storage replication for disaster protection of the Oracle Fusion Middleware middle tier configuration. When a production site has already been created, it is likely that the Oracle home directories for the Oracle Fusion Middleware instances that comprise the site are not located on the shared storage. If this is the case, then these homes must be migrated completely to the shared storage to implement the Oracle Fusion Middleware Disaster Recovery solution.

Follow these guidelines for migrating the production site from the local disk to shared storage:

Starting with a New Site

When you start with a new production site for an Oracle Fusion Middleware Disaster Recovery topology, consider host names and ensure that storage replication is set up to copy the configuration (based on these names) to the standby site.

When you design a new production site, plan the standby site as well, and use Oracle Universal Installer to install software on the production site. Parameters such as alias host names and software paths must be carefully designed to ensure that they are the same for both sites.

When you create new Oracle Fusion Middleware Disaster Recovery production and standby sites, consider the following:

  • Design your Oracle Fusion Middleware Disaster Recovery solution so that each host at the production site and at the standby site has the desired alias host name and physical host name. For more information about host name planning, see Planning Host Names.

  • Choose the Oracle home name and Oracle home directory for each Fusion Middleware installation.

    Designing and creating your own site is easier than modifying an existing site to meet the design requirements described in this chapter.

  • Assign ports for the Oracle Fusion Middleware installations for the production site hosts that do not conflict with the ports used by the standby site hosts.

    This setup is easier than checking for and resolving port conflicts between an existing production site and standby site.

Topology Considerations

When you plan for your Disaster Recovery solution, consider whether to design a symmetric or an asymmetric topology.

This section includes the following topics:

Design Considerations for a Symmetric Topology

A symmetric topology is an Oracle Fusion Middleware Disaster Recovery configuration that is identical across tiers on the production and standby sites.

In a symmetric topology, the production site and standby site have identical numbers of hosts, load balancers, instances, and applications. The same ports are used for both sites. The systems are configured identically, and the applications access the same data. This manual describes how to set up a symmetric Oracle Fusion Middleware Disaster Recovery topology for an enterprise configuration.

Design Considerations for an Asymmetric Topology

An asymmetric topology is an Oracle Fusion Middleware Disaster Recovery configuration that differs across some tiers on the production and standby sites.

In an asymmetric topology, the standby site can use less hardware (for example, the production site could include four hosts with four Oracle Fusion Middleware instances while the standby site includes two hosts with four Oracle Fusion Middleware instances).

Another asymmetric topology uses fewer Oracle Fusion Middleware instances at the standby site (for example, four Oracle Fusion Middleware instances at the production site and only two at the standby site).

Another asymmetric topology includes a different configuration for a database (for example, using an Oracle Real Application Clusters (Oracle RAC) database at the production site and a single-instance database at the standby site).