Oracle Application Server Web Cache Administrator's Guide 10g (9.0.4) Part Number B10401-01
The Oracle Application Server 10g Concepts and Oracle Application Server 10g Installation Guide provide an overview of the recommended topologies for Oracle Application Server. This chapter presents several detailed scenarios for deploying OracleAS Web Cache.
This chapter contains these topics:
Figure 5-1 shows OracleAS Web Cache in a common Oracle Application Server configuration. A tier of OracleAS Web Cache servers caches content for a tier of application Web servers. The application Web servers app1-host1 and app1-host2 provide content for the site www.app1.company.com, and app2-host provides content for www.app2.company.com. The two OracleAS Web Cache servers reside on dedicated, fast one- or two-CPU computers. To increase the availability and capacity of a Web site, these servers are configured as either a cache cluster or a failover pair.
The Load Balancer is configured to ping each OracleAS Web Cache server periodically to check the status of the cache.
As a cache cluster, the two OracleAS Web Cache servers provide failure detection and failover. If one OracleAS Web Cache server fails, the other members of the cache cluster detect the failure, take over ownership of the failed member's cached content, and mask the failure from clients. OracleAS Web Cache maintains a single virtual cache of content despite the failure. The Load Balancer distributes incoming requests among the cache cluster members, which process them. For requests whose content is not stored in the cache, OracleAS Web Cache forwards the requests to an application Web server for the appropriate site.
As a failover pair, both OracleAS Web Cache servers are configured to cache the same content. When both OracleAS Web Cache servers are running, a Load Balancer distributes the load among both servers. If one server fails, the other server receives and processes all incoming requests.
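The cluster behavior described above can be sketched in simple terms. The following Python snippet is an illustrative model only, not OracleAS Web Cache's actual ownership protocol: the host names follow the figure, and the hashing scheme is an assumption made for the example.

```python
import hashlib

# Illustrative sketch of cache-cluster content ownership and takeover.
# The hashing scheme is an assumption, not OracleAS Web Cache internals.

CLUSTER = ["webche1-host", "webche2-host"]

def owner(url: str, live_members: list[str]) -> str:
    """Pick the cluster member that owns a URL's cached content."""
    digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return live_members[digest % len(live_members)]

live = list(CLUSTER)
url = "http://www.app1.company.com/catalog/item?id=42"
first_owner = owner(url, live)

# Simulate a cache failure: a surviving member takes over ownership,
# so the virtual single cache stays intact.
live.remove(first_owner)
assert owner(url, live) != first_owner
```

A production cluster would typically use consistent hashing so that only the failed member's content is reassigned; the simple modulo hashing shown here remaps more keys than necessary on a membership change.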
To configure this topology:

1. Register the site host names www.app1.company.com and www.app2.company.com.

2. Set up a Load Balancer for cache servers webche1-host and webche2-host, and configure it to ping each cache server periodically to check the status of the cache.

3. Configure webche1-host and webche2-host as cluster members.

   See Also: "Configuring a Cache Cluster" for instructions on creating a cache cluster

4. Configure both caches with the following:

   - Application Web servers app1-host1, app1-host2, and app2-host on designated listening ports
   - Site definitions:
     - www.app1.company.com mapped to app1-host1 and app1-host2
     - www.app2.company.com mapped to app2-host
This section describes the following specialized topologies:
Many Web sites have several data centers. For networks with a distributed topology, you can deploy OracleAS Web Cache at each of the data centers in a distributed cache hierarchy. Figure 5-2 shows a distributed topology in which OracleAS Web Cache servers are distributed between offices in the United States and Japan. The application Web servers are located in the United States office, centralizing the data source in one geographic location. The central caches in the United States cache content for application Web servers app1-host1, app1-host2, and app2-host, and the remote cache in Japan caches content from the central caches.

Browsers make requests to local DNS servers to resolve www.app1.company.com and www.app2.company.com. The local DNS servers route the requests to the authoritative DNS server for the respective sites. The authoritative DNS server uses the IP address of the browser to pick the closest OracleAS Web Cache server to satisfy the request. Then, it returns the IP address of that OracleAS Web Cache server to the browser.
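The DNS resolution step just described can be sketched as follows. This is an illustrative model only: the region lookup is a crude stand-in for a real geo-IP database, and the IP prefixes are hypothetical.

```python
# Illustrative sketch of an authoritative DNS server picking the cache
# closest to the client. The IP-to-region mapping is a made-up stand-in
# for a real geo-IP lookup; host names follow the figure.

CACHES_BY_REGION = {
    "us": ["us.webche1-host", "us.webche2-host"],
    "jp": ["jp.webche-host"],
}

def region_of(client_ip: str) -> str:
    """Crude stand-in for a geo-IP lookup: route 203.* clients to Japan."""
    return "jp" if client_ip.startswith("203.") else "us"

def resolve(site: str, client_ip: str) -> str:
    """Return the cache host the authoritative DNS would answer with."""
    caches = CACHES_BY_REGION[region_of(client_ip)]
    # Round-robin or load-based selection would go here; take the first.
    return caches[0]

print(resolve("www.app1.company.com", "203.0.113.9"))   # a Japanese client
print(resolve("www.app1.company.com", "198.51.100.7"))  # a US client
```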
To configure this topology:

1. Register the site host names www.app1.company.com and www.app2.company.com.

2. Set up a Load Balancer for central cache servers us.webche1-host and us.webche2-host, and configure it to ping each cache server periodically to check the status of the cache.

3. Configure the central caches us.webche1-host and us.webche2-host with the following:

   - Application Web servers app1-host1, app1-host2, and app2-host on designated listening ports
   - Site definitions:
     - www.app1.company.com mapped to app1-host1 and app1-host2
     - www.app2.company.com mapped to app2-host

4. Configure the remote cache jp.webche-host with the following:

   - Central caches us.webche1-host and us.webche2-host on designated listening ports
   - Site definitions:
     - www.app1.company.com mapped to app1-host1 and app1-host2
     - www.app2.company.com mapped to app2-host
You can make OracleAS Web Cache highly available by configuring the operating system to load-balance incoming requests across multiple caches. You configure multiple caches as nodes of the same cluster, and incoming requests are distributed among them. When the operating system detects that one of the caches has failed, automatic IP takeover is used to redistribute the load to the remaining caches in the cluster configuration.
This feature is supported on many operating systems, including Linux, Windows 2000 Advanced Server, Windows 2000 Datacenter Server, and Windows 2003 (all editions).
This feature is not intended to replace a hardware Load Balancer. Consider using it as an alternative to a hardware Load Balancer when the primary requirement for your topology is to distribute requests. If you require firewall operations or ping URL mechanisms, deploy a hardware Load Balancer.
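The IP takeover idea can be sketched as follows. This is a conceptual illustration only, assuming hypothetical virtual IP addresses; it is not the operating system's actual takeover mechanism.

```python
# Hedged sketch of operating-system IP takeover: each cache node owns
# one or more virtual IPs, and when a node fails its addresses are
# redistributed among the survivors. Addresses are made up.

def assign_vips(vips: list[str], live_nodes: list[str]) -> dict[str, list[str]]:
    """Spread virtual IPs across the live nodes round-robin."""
    assignment: dict[str, list[str]] = {node: [] for node in live_nodes}
    for i, vip in enumerate(vips):
        assignment[live_nodes[i % len(live_nodes)]].append(vip)
    return assignment

vips = ["10.0.0.10", "10.0.0.11"]
nodes = ["webche1-host", "webche2-host"]
print(assign_vips(vips, nodes))
# {'webche1-host': ['10.0.0.10'], 'webche2-host': ['10.0.0.11']}

# If webche1-host fails, its address is taken over by the survivor:
print(assign_vips(vips, ["webche2-host"]))
# {'webche2-host': ['10.0.0.10', '10.0.0.11']}
```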
Many Web sites contain cacheable public content and non-cacheable, transactional or protected content. For these Web sites, you can use OracleAS Web Cache servers to cache content for only the portions of the Web site with the cacheable content.
Figure 5-3 shows a Layer 7 (L7) switch passing catalog requests to OracleAS Web Cache servers webche1-host and webche2-host and order entry requests to application Web server app1-host1. An L7 switch operates at Layer 7, the Application Layer, of the Open Systems Interconnection (OSI) model. L7 switches determine where to send requests based on URL content.
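The URL-based decision an L7 switch makes can be sketched as follows. The path prefixes (/catalog, /order) are assumptions chosen to match the figure's catalog and order entry traffic; a real switch is configured through its own rule language.

```python
from urllib.parse import urlsplit

# Sketch of L7 (URL-content-based) routing: cacheable catalog requests
# go to the caches, transactional order requests to the app server.
# Path prefixes are illustrative assumptions.

ROUTES = [
    ("/catalog", ["webche1-host", "webche2-host"]),  # cacheable content
    ("/order",   ["app1-host1"]),                    # transactional content
]

def route(url: str) -> list[str]:
    """Return candidate backends for a request, by URL path prefix."""
    path = urlsplit(url).path
    for prefix, backends in ROUTES:
        if path.startswith(prefix):
            return backends
    return ["app1-host1"]  # default: bypass the cache

print(route("http://www.app1.company.com/catalog/shoes"))
# ['webche1-host', 'webche2-host']
print(route("http://www.app1.company.com/order/checkout"))
# ['app1-host1']
```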
To configure this topology:

1. Register the site host name www.app1.company.com.

2. Set up an L7 switch for cache servers webche1-host and webche2-host.

3. Configure both caches with the following:

   - Application Web servers app1-host1 and app1-host2 on designated listening ports
   - Site definition www.app1.company.com mapped to app1-host1 and app1-host2
This section describes the following topologies:
You can deploy OracleAS Web Cache inside or outside a firewall. Deploying OracleAS Web Cache inside a firewall ensures that HTTP traffic enters the Demilitarized Zone (DMZ), but only authorized traffic from the application Web servers can directly interact with the database. When deploying OracleAS Web Cache outside a firewall, the throughput burden is placed on OracleAS Web Cache rather than the firewall. The firewall receives only requests that must go to the application Web servers. This topology requires securing OracleAS Web Cache from intruders.
Security experts disagree about whether caches should be placed outside the DMZ. Oracle Corporation recommends that you check your company's policy before deploying OracleAS Web Cache outside the DMZ.
You can configure OracleAS Web Cache to receive both HTTP and HTTPS requests. You can off-load some of the strain of SSL operations on the OracleAS Web Cache CPUs by using SSL acceleration hardware.
Figure 5-4 shows OracleAS Web Cache servers webche1-host and webche2-host deployed with SSL acceleration hardware. Both servers receive HTTP and HTTPS requests, with the HTTPS requests being processed by the SSL acceleration hardware. Both OracleAS Web Cache servers send origin server requests to app1-host1 and app1-host2.
Note: Oracle Application Server supports nCipher's BHAPI-compliant hardware for deployment on OracleAS Web Cache servers.
To configure this topology:

1. Register the site host name www.app1.company.com.

2. Set up a Load Balancer for cache servers webche1-host and webche2-host. Configure the Load Balancer to send HTTP and HTTPS requests to webche1-host and webche2-host and to ping each cache server periodically to check the status of the cache.

3. Configure both caches with the following:

   - Application Web servers app1-host1 and app1-host2 on designated listening ports
   - Site definition www.app1.company.com, with HTTP and HTTPS ports, mapped to app1-host1 and app1-host2

Note: In a cache cluster, you must disallow acceptance of headers containing client-side certificate information to prevent receipt of certificate information in a header from any entity other than a peer cluster member. See "Task 7: (Optional) Require Client-Side Certificates" for more information.
You can configure one OracleAS Web Cache server to listen for HTTP requests and another OracleAS Web Cache server to listen for HTTPS requests.
Figure 5-5 shows two OracleAS Web Cache servers receiving requests. HTTP requests are served from server webche1-host, and HTTPS requests are served from server webche2-host. Both OracleAS Web Cache servers send origin server requests to app1-host1 and app1-host2.
To configure this topology:

1. Register the site host name www.app1.company.com.

2. Set up a Load Balancer for cache servers webche1-host and webche2-host. Configure the Load Balancer to send HTTP requests to webche1-host and HTTPS requests to webche2-host and to ping each cache server periodically to check the status of the cache.

3. Configure webche1-host with the following:

   - Application Web servers app1-host1 and app1-host2 on designated HTTP listening ports
   - Site definition www.app1.company.com, with an HTTP port, mapped to app1-host1 and app1-host2

4. Configure webche2-host with the following:

   - Application Web servers app1-host1 and app1-host2 on designated HTTP listening ports
   - Site definition www.app1.company.com, with an HTTPS port, mapped to app1-host1 and app1-host2

Note: In a cache cluster, you must disallow acceptance of headers containing client-side certificate information to prevent receipt of certificate information in a header from any entity other than a peer cluster member. See "Task 7: (Optional) Require Client-Side Certificates" for more information.
For many applications, HTTPS is required for secure transactions that should not be cached. For example, checkout pages on an e-commerce site that require credit card information should not be cached. For this type of Web site, you can use a Load Balancer to pass all HTTP requests to OracleAS Web Cache, and pass all HTTPS requests for secure pages directly to a particular application Web server.
Figure 5-6 shows a Load Balancer passing HTTP requests to OracleAS Web Cache servers webche1-host and webche2-host and HTTPS requests to application Web server app1-host2. Note that HTTPS requests could also be passed to app1-host1.
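The Load Balancer's dispatch rule for this topology can be sketched as follows. The host names mirror the figure; the round-robin policy for HTTP traffic is an illustrative assumption, since the document does not specify the distribution algorithm.

```python
from itertools import cycle

# Sketch of the Load Balancer rule in Figure 5-6: plain HTTP goes to
# the caches, HTTPS (secure, non-cacheable pages) goes straight to an
# application Web server. Round-robin state is an assumption.

http_caches = cycle(["webche1-host", "webche2-host"])

def dispatch(scheme: str) -> str:
    if scheme == "https":
        return "app1-host2"       # secure pages bypass the cache
    return next(http_caches)      # plain HTTP alternates between caches

print(dispatch("http"))   # webche1-host
print(dispatch("https"))  # app1-host2
print(dispatch("http"))   # webche2-host
```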
To configure this topology:

1. Register the site host name www.app1.company.com.

2. Set up a Load Balancer for cache servers webche1-host and webche2-host and application Web server host name app1-host2.

3. Configure both caches with the following:

   - Application Web server app1-host1 on an HTTP listening port
   - Site definition www.app1.company.com, with an HTTP port, mapped to app1-host1
Because login and logout requests to Single Sign-On servers are both secure and time sensitive, you cannot cache content from the Single Sign-On servers. You can configure OracleAS Web Cache as a software load balancer in front of multiple Single Sign-On mid-tiers, similar to a hardware-based Load Balancer approach. You can also configure OracleAS Web Cache to cache content for Oracle HTTP Servers running Oracle Application Server Single Sign-On partner applications. By default, mod_osso-protected pages are marked as non-cacheable to ensure that requests for these pages are redirected through Oracle Application Server Single Sign-On servers.
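OracleAS Web Cache evaluates the Surrogate-Control response header when deciding whether a page may be cached. The helper below is a hypothetical origin-side sketch of marking protected pages non-cacheable; it is not mod_osso's actual implementation, and the header values shown are the standard surrogate directives rather than anything specific to this deployment.

```python
# Hypothetical sketch of an origin application marking a protected page
# non-cacheable for a surrogate cache via the Surrogate-Control header.
# This is illustrative, not mod_osso's internal mechanism.

def with_cache_policy(headers: dict[str, str], protected: bool) -> dict[str, str]:
    """Attach a surrogate cache directive to a response's headers."""
    headers = dict(headers)  # copy; do not mutate the caller's dict
    if protected:
        headers["Surrogate-Control"] = "no-store"     # never cache
    else:
        headers["Surrogate-Control"] = "max-age=300"  # cache for 5 minutes
    return headers

print(with_cache_policy({"Content-Type": "text/html"}, protected=True))
```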
Figure 5-7 shows OracleAS Web Cache caching content for the Oracle HTTP Servers app1-host1 and app1-host2, and a Load Balancer routing requests from the partner applications to the Single Sign-On servers sso-host1 and sso-host2. No content from the Single Sign-On servers is cached. This topology requires that the Oracle HTTP Servers be clustered so that OracleAS Web Cache can send requests to either partner application.
To configure this topology:

1. Register the site host name www.app1.company.com.

2. Set up a Load Balancer for cache servers webche1-host and webche2-host. Configure the Load Balancer to send HTTP and HTTPS requests to webche1-host and webche2-host.

3. Configure both caches with the following:

   - Oracle HTTP Servers app1-host1 and app1-host2 on designated listening ports
   - Site definition www.app1.company.com, with HTTP and HTTPS ports, mapped to app1-host1 and app1-host2

Note: In a cache cluster, you must disallow acceptance of headers containing client-side certificate information to prevent receipt of certificate information in a header from any entity other than a peer cluster member. See "Task 7: (Optional) Require Client-Side Certificates" for more information.

See Also: configuring mod_osso on the Oracle HTTP Servers
Copyright © 2002, 2003 Oracle Corporation. All Rights Reserved.