Sun Java System Reference Configuration Series: Portal Service on Application Server Cluster

Chapter 3 Deployment Specifications

The deployment architecture is a high-level design of the portal service reference configuration. Before you can actually deploy the reference configuration in your environment, you need to specify additional information required during the installation and configuration process. The deployment specifications are meant to help you gather and organize this additional information.

This chapter describes the deployment specifications that are needed for the portal service reference configuration. It consists of the following sections:

Software Component Specification

The reference configuration that is described in this book uses specific versions of the software components in the deployment architecture shown in Figure 2–2. In particular, the deployment architecture is implemented using Sun Java Enterprise System 5, Update 1, which includes the following versions of the components that are used in the reference configuration:

Computer Hardware and Operating System Specification

A computer hardware and operating system specification describes the hardware and operating system configuration for the computers in your deployment. You want to size your hardware to the level of performance you require.

Table 3–1 lists the computer hardware that has been chosen for the Portal Service on Application Server Cluster reference configuration. This specification is meant to satisfy the requirements in Chapter 1, Performance Requirements.

In general, a hardware specification is based upon a sizing analysis that takes into account the size of the user base, the resource needs of each component, and the relative number of interactions (or hits) that are made on each component (see Interactions Between Reference Configuration Components). For the reference configuration, however, the approach has been to select the same hardware for each computer in the deployment architecture, and then use performance tests to determine the utilization of each computer under load conditions.

Using this approach, the absolute and relative sizing of the different computers in the deployment architecture can be determined and documented. For this purpose, the Sun Fire™ T2000 server was selected as a basic, low-end, high-performance computer.


Note –

The T2000 server has performance limitations for deployments in which write-intensive Directory Server operations are required. Write operations are serialized, and the T2000 cannot perform them in parallel. As a result, CPU utilization can be lower than 50 percent. This reference configuration does not involve write-intensive operations. However, if your solution has such requirements, consider using computers with a faster clock rate than the T2000 for the directory service module.


If your performance requirements are significantly different from the requirements of the reference configuration, you can specify hardware with more or fewer CPUs, more or less memory, and so on.

Table 3–1 Computer Hardware and Operating System Specification

Computer(s) | Service Module | Components Installed | Hardware Model | Operating System
ds1, ds2 | Directory Service | Directory Server | Sun Fire T2000 server, 8 core 1.2 GHz UltraSPARC® T1 processor, 16 Gbyte DDR2 memory | Solaris 10 8/07 OS with the Solaris Zones facility
am1, am2 | Access Manager Service | Access Manager, Message Queue, Application Server | Sun Fire T2000 server, 8 core 1.2 GHz UltraSPARC T1 processor, 16 Gbyte DDR2 memory | Solaris 10 8/07 OS with the Solaris Zones facility
ps1, ps2 | Portal Service | Portal Server, Application Server, Access Manager SDK, Java DB, HADB | Sun Fire T2000 server, 8 core 1.2 GHz UltraSPARC T1 processor, 16 Gbyte DDR2 memory | Solaris 10 8/07 OS with the Solaris Zones facility
sra1, sra2 | SRA Gateway Service | Portal Server SRA, Access Manager SDK | Sun Fire T2000 server, 8 core 1.2 GHz UltraSPARC T1 processor, 16 Gbyte DDR2 memory | Solaris 10 8/07 OS with the Solaris Zones facility

Solaris OS Minimization and Hardening

The Solaris OS version that is used to build the Portal Service on Application Server Cluster reference configuration is Solaris 10 8/07. However, the architecture and implementation are expected to be supported by later versions of the Solaris 10 operating system.

For maximum security of your portal service, use a minimized version of the Solaris 10 OS. Most implementations of the reference configuration portal service will be exposed to the Internet or some other public or untrusted network, which makes minimization especially important. If your portal service will be exposed to these conditions, you must reduce the Solaris OS installation to the minimum number of packages that are required to support the portal service components. This minimization of services, libraries, and component software increases security by reducing the number of subsystems that must be disabled, patched, and maintained.
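For example, you can check which Solaris software group (metacluster) a system was installed with, and how many packages are present, as a rough indication of how minimal the installation is:

    # Display the software group that was selected when the OS was installed
    cat /var/sadm/system/admin/CLUSTER

    # Count the packages currently installed on the system
    pkginfo | wc -l

These commands are only a quick check; a proper minimization effort works from the required package lists rather than from package counts.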

Minimization increases the security of the computer systems, but it also limits the software that you can run on the computer systems. Therefore, you need to use the appropriate minimal configuration for your environment. Minimizing the operating system you use for a portal service involves the following:

The operating systems that were used in testing the reference configuration described in this guide were installed with the minimal number of Solaris packages required to run the Java Enterprise System components, as described in the Platform Requirements and Issues in Sun Java Enterprise System 5 Release Notes for UNIX. Most of the required packages are included in the "Core System Solaris Software Group (SUNWCreq)." The additional packages needed are:

Solaris Zones

The Solaris 10 OS provides the Solaris Zones facility, which allows application components to be isolated from one another, even though the zones share a single instance of the operating system. From an application perspective, a zone is a fully functional Solaris OS environment. Multiple zones can be created on a single computer system, each zone serving its own set of applications. Detailed information about the use and features that are provided by Solaris zones can be found in the Solaris OS documentation.

It is possible to replace each of the computers in the portal service reference configuration's deployment architecture with a dedicated zone. The installation and configuration steps in this document would apply equally to a deployment in Solaris non-global zones. The installation of Java ES components in Solaris zones (whole root or sparse) is supported with certain restrictions, as described in Appendix A, Java ES and Solaris 10 Zones, in the Sun Java Enterprise System 5 Installation Planning Guide.

One reason to use Solaris zones is improved security. A non-global zone can be used to run applications (for example, Directory Server, Access Manager, or Portal Server), while administration and monitoring are done from the global zone. Because a non-global zone cannot access resources in the global zone, the management and monitoring applications installed in the global zone are not visible to, and do not interfere with, the applications installed in the non-global zones.

Another reason to use Solaris zones is better resource utilization. The portal service reference configuration uses a modularized deployment architecture that is based on a number of dedicated computers. This approach improves the manageability, scalability, and availability of the reference configuration. Using zones, it is possible to install multiple modules on the same computer and still achieve the reference configuration quality-of-service goals. For example, it is possible to install the directory, Access Manager, and portal service modules on a single computer, with each module using a dedicated Solaris zone. You still need to size each system properly: the memory, disk, and processing requirements of every component must be considered when sizing the computer as a whole. Solaris Resource Management can be used in conjunction with Solaris zones. The benefit of this approach is that resources (memory, CPU cycles) can be dynamically allocated to each zone, providing better overall resource utilization.

Beyond this general explanation, this guide does not provide procedures for implementing the reference configuration in Solaris zones. The procedures are very similar, except that the zones need to be configured and networked before you install any of the Java ES components.
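For illustration only, the following sketch shows how a sparse non-global zone for one portal service module instance might be created, networked, and optionally given dedicated CPUs through Solaris Resource Management. The zone name, zone path, network interface, IP address, and CPU count are hypothetical; substitute values appropriate for your environment:

    # Define a sparse non-global zone for a portal service module instance
    zonecfg -z ps1-zone
    zonecfg:ps1-zone> create
    zonecfg:ps1-zone> set zonepath=/zones/ps1-zone
    zonecfg:ps1-zone> set autoboot=true
    zonecfg:ps1-zone> add net
    zonecfg:ps1-zone:net> set physical=e1000g0
    zonecfg:ps1-zone:net> set address=10.0.3.11/24
    zonecfg:ps1-zone:net> end
    zonecfg:ps1-zone> add dedicated-cpu
    zonecfg:ps1-zone:dedicated-cpu> set ncpus=4
    zonecfg:ps1-zone:dedicated-cpu> end
    zonecfg:ps1-zone> verify
    zonecfg:ps1-zone> commit
    zonecfg:ps1-zone> exit

    # Install and boot the zone, then install the Java ES components inside it
    zoneadm -z ps1-zone install
    zoneadm -z ps1-zone boot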

Network Connectivity Specification

Before you can install and configure Java ES components, the computers that you are using must be assigned IP addresses and attached to your network. The network topology for the portal service reference configuration uses several subnets with different ranges of IP addresses for each subnet. A network connectivity specification shows the network connections and the IP addresses that are needed to implement the reference configuration.

A network connectivity specification is typically a graphical representation of the required network configuration. The following figure shows the specification for the Portal Service on Application Server Cluster reference configuration. In the specification, all computers are shown in a pstest.com domain and are assigned the IP addresses that are used to establish the required network topology.


Note –

The procedures in this guide use the host names, domain name, and IP addresses shown in Figure 3–1. However, you must map these host names, domain name, and IP addresses to equivalent names and addresses in your environment. For this reason, the procedures in this guide show host names, domain name, and IP addresses as variables.


Figure 3–1 Network Connectivity Specification for the Reference Configuration Deployment

Graphic representation of the network connectivity specification,
showing three networks, as described in the text.

This figure illustrates how the different modules in the architecture are connected. Each module consists of two component instances, as well as a load balancer that provides a single entry point for the module. Each load balancer is configured to provide a virtual service address that accepts all requests for its respective service. The load balancer is configured to route such requests among the component instances in the module.

Portal Service Subnet

In Figure 3–1, the directory, Access Manager, and portal service modules reside in a network zone that is isolated from the main corporate network. Within this zone are separate subnets that are used to help secure each service.

Each service is accessed only through its respective load balancer. Clients of the service address their requests to the virtual IP address that is configured into the load balancer. Behind the load balancer, the computers that are running the component instances are isolated on their own subnets with private IP addresses. In Figure 3–1, the following five subnets are used:

The directory service load balancer is on the same subnet as the Access Manager and Portal Server instances because the latter directly access directory services.

These subnets are bridged by the load balancers, and all communication between the subnets is routed through routers. Therefore, if one subnet is compromised, there is no direct route to the other services.

Gateway Service Subnet

The Gateway service runs in a separate subnet (the DMZ) that is isolated from the portal service subnet by an Internal Firewall and from the public Internet by an External Firewall, as shown in Figure 3–1.

In the DMZ, only the Gateway service load balancer (at sra.pstest.com) is exposed to traffic from the public Internet, and only through the External Firewall. Other hardware in the DMZ is assigned private IP addresses, in keeping with the philosophy of minimizing the attack surface. In Figure 3–1, the DMZ subnet is created with private IP addresses in the 10.0.4.0/24 range. These private addresses are not recognized on the Internet and are not routed outside the network.


Note –

In Figure 3–1, the gateway service load balancer is shown with the IP address 10.0.5.10. When you deploy your reference configuration, you must configure this load balancer with a real, publicly accessible IP address that is appropriate for your site.


The firewall rules that are used to establish the Gateway service subnet are shown in the following tables.

Table 3–2 Internal Firewall Rules

Rule Number | Source | Destination | Type/Port | Action
1 | sra1.pstest.com, sra2.pstest.com | am.pstest.com | TCP/80 | ALLOW
2 | sra1.pstest.com, sra2.pstest.com | ps.pstest.com (Portal Server) | TCP/80 | ALLOW
3 | sra1.pstest.com, sra2.pstest.com | ps.pstest.com (Rewriter Proxy) | TCP/10433 | ALLOW
4 | sra1.pstest.com, sra2.pstest.com | ps.pstest.com (Netlet Proxy) | TCP/10555 | ALLOW
5 | am1.pstest.com, am2.pstest.com | sra1.pstest.com, sra2.pstest.com | TCP/443 | ALLOW
6 | Any | Any | Any | DENY

Rules 1 through 4 in the previous table allow the Gateway instances to reach the virtual service IP addresses (the load balancers) for the Access Manager and portal services, including the Rewriter Proxy and Netlet Proxy ports. Rule 5 allows the session notifications that are generated by the Access Manager instances to reach the Gateway instances. Rule 6 denies all other traffic. The firewall automatically adds rules to allow the response traffic.
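As an illustration only, rules of this kind could be expressed with Solaris IP Filter roughly as follows. The reference configuration does not mandate a particular firewall product; a production rule set would normally use IP addresses rather than host names, and only one instance pair per pattern is shown here:

    # Illustrative /etc/ipf/ipf.conf fragment for the internal firewall
    # Gateway instance to the Access Manager and Portal Server load balancers
    pass in quick proto tcp from sra1.pstest.com to am.pstest.com port = 80 keep state
    pass in quick proto tcp from sra1.pstest.com to ps.pstest.com port = 80 keep state
    # Gateway instance to the Rewriter Proxy and Netlet Proxy ports
    pass in quick proto tcp from sra1.pstest.com to ps.pstest.com port = 10433 keep state
    pass in quick proto tcp from sra1.pstest.com to ps.pstest.com port = 10555 keep state
    # Access Manager instance session notifications to a Gateway instance
    pass in quick proto tcp from am1.pstest.com to sra1.pstest.com port = 443 keep state
    # (repeat the rules above for sra2, am2, and the remaining instance pairs)
    # "keep state" allows the response traffic without additional rules
    # Deny everything else
    block in all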

Table 3–3 External Firewall Rules

Rule Number | Source | Destination | Type/Port | Action
1 | Any | sra.pstest.com | TCP/443 | ALLOW
2 | Any | Any | Any | DENY

The rules in the previous table allow only the Gateway service load balancer to be accessed from the Internet.

DNS Considerations

In implementing a network connectivity specification, you must coordinate the setting of virtual service IP addresses with the configuration of your DNS servers (or whatever naming service your network uses). Doing so ensures that service names resolve to the correct IP addresses, both publicly and internally. In Figure 3–1, the externally accessible DNS server maps the name www.pstest.com to the virtual service IP address for the Gateway service load balancer. Similarly, the internal DNS server maps the host name sra.pstest.com to the same virtual service address.
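For example, with BIND-style zone data the mappings might look like the following. The address 10.0.5.10 is the placeholder value from Figure 3–1; in a real deployment the external record points at the publicly accessible address of the Gateway load balancer:

    ; External DNS zone (visible from the Internet):
    ; map the public portal name to the Gateway load balancer virtual address
    www.pstest.com.    IN  A   10.0.5.10

    ; Internal DNS zone:
    ; map the Gateway service name to the same virtual service address
    sra.pstest.com.    IN  A   10.0.5.10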

Other Networks

Figure 3–1 also shows two additional networks that are often used to implement a deployment architecture:

Load Balancer Configuration Specification

In the reference configuration's modular architecture, each module has a load balancer that routes traffic among the component instances in the module. For each module, the load balancer is configured with a virtual IP address for the service that the module provides. All of the requests for the service are delivered to the load balancer. The load balancer then routes this traffic among the component instances in the module.

For example, in Figure 3–1, the directory service module consists of two computers that are running instances of Directory Server (ds1.pstest.com and ds2.pstest.com) and a load balancer (ds.pstest.com) that is placed in front of the two computers. Requests for directory services are addressed to the load balancer at ds.pstest.com, and the load balancer is configured to distribute these requests between the Directory Server instances running on ds1.pstest.com and ds2.pstest.com.

In configuring a load balancer, three categories of configurable parameters need to be specified, as described in the following sections: IP address configuration, routing characteristics, and health-check configuration.

IP Address Configuration

The virtual IP addresses and real IP addresses that are used to configure each load balancer are shown in Figure 3–1. In configuring your load balancers, substitute the service names, host names, and IP addresses that you will be using on your network. Details of setting up each load balancer are provided in the implementation procedure for the respective module.

Configuration of Routing Characteristics

The following table specifies characteristics that are required for each load balancer in the reference configuration to properly route requests. For example, the bottom row of the table below describes how each load balancer needs to be configured to maintain session persistence (stickiness).

Table 3–4 Specification for Load Balancer Routing

Parameter | Directory Service | Access Manager Service | Portal Service | Gateway Service
Virtual Service Name | ds.pstest.com | am.pstest.com | ps.pstest.com | sra.pstest.com
Protocol | LDAP | HTTP | HTTP | HTTPS or HTTP, depending on whether SSL is terminated at the load balancer
Port | 389 | 80 | 80 | 443
Virtual Service Type | Layer-4 (TCP) | Layer-7 (HTTP) | Layer-7 (HTTP) | Layer-7 (HTTP) or SSL, depending on whether SSL is terminated
Scheduling | Least Connections or Round Robin | Least Connections or Round Robin | Least Connections or Round Robin | Least Connections or Round Robin
Session Persistence (Stickiness) | Long persistent TCP connections | Based on the server-side cookie amlbcookie | Load balancer-managed cookie | SSL session ID or load balancer-managed cookie

Health-check Configuration

Load balancers use a health-check mechanism to establish whether a service instance is working properly and can process requests from clients. If the health-checks succeed, the load balancer includes the service instance in the pool of available instances, and requests are routed to the instance based on the configured scheduling rules. If the health-checks fail, however, the instance is removed from the load balancer's scheduling list.

A health-check is considered failed if the response differs from the expected one, or if no response is received within a specified timeout. The timeout must be tuned carefully: if it is too short, a sporadically overloaded service that is slow to respond can be considered down; if it is too long, the load balancer takes too long to detect failures, and users notice the lack of response.

The simplest health-check is to try to open a TCP connection to the service instance. However, this health-check only proves that the application is listening on the assigned port. It does not show that the instance can process requests. To better establish that the instance is properly working, the health-check must actually exercise the service instance.

The load balancer performs health-checks at a specified interval. The interval needs to be as short as possible so that the load balancer will quickly detect failures. However, too many health-check requests can cause performance degradation. In the worst case, frequent health-checks can overload the service instances.

To determine whether a server instance is down, the load balancer monitors the number of consecutive failed health-checks. If this number reaches a specified threshold, the instance is considered down. The time it takes to make this determination equals the number of consecutive failed health-checks multiplied by the health-check interval; for example, a 30-second interval and a threshold of three failed checks mean that a failure is detected after roughly 90 seconds. During this time, the load balancer considers the failed instance to be operating correctly, and users notice a lack of response.
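To see what a health-check of each type actually exercises, you can issue the equivalent request manually against a component instance. The following is a sketch using the query and result values from Table 3–5; the exact ldapsearch and curl options may vary with the tool versions installed on your systems:

    # Directory service: anonymous base search against the root suffix,
    # expecting any LDAP success code
    ldapsearch -h ds1.pstest.com -p 389 -b "dc=pstest,dc=com" -s base "(objectclass=*)"

    # Access Manager service: expect HTTP 200 from the isAlive page
    curl -i http://am1.pstest.com/amserver/isAlive.jsp

    # Portal service: expect an HTTP 302 redirect
    curl -i http://ps1.pstest.com/portal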

The health-check parameters need to be tuned separately for each service module. The following table specifies health-check parameter values that can be used as a starting point for the reference configuration.

Table 3–5 Specification for Load Balancer Health-Checks

Parameter | Directory Service | Access Manager Service | Portal Service | Gateway Service
Health-check Type | LDAP (simple, anonymous bind) | HTTP | HTTP | HTTP
Query | DN: <None>, Base: dc=pstest,dc=com, Scope: Base, Query: (objectclass=*) | GET /amserver/isAlive.jsp | GET /portal | GET
Expected Result | Any LDAP success code | HTTP 200 | HTTP 302 | HTTP 302
Health-check Timeout | 20 seconds | 10 seconds | 5 seconds | 5 seconds
Interval Between Checks | 60 seconds | 30 seconds | 30 seconds | 30 seconds
Consecutive Failed Health-check Threshold | | | |


Note –

In the reference configuration, Gateway SSL sessions are terminated at the load balancer, and the Gateway instances run plain HTTP. If the SSL sessions are terminated at the Gateway instances instead of at the Gateway load balancer, then the health-check needs to be configured to use the SSL protocol.


Administrator Account Specification

When deploying the portal service reference configuration, you install and configure a number of components with administrative interfaces, as well as administrator accounts for accessing these interfaces. Some of these administrator accounts are used by multiple components.

In many environments, different administrator accounts are used to manage different services. However, if there are no specific reasons to use different passwords for the different administrator accounts, you can streamline the installation, configuration, and maintenance of your deployment by using the same password for all such accounts.


Note –

It is important to determine, in advance, the administrative account IDs and passwords that you will use when deploying the reference configuration.


The following table shows the administrator account IDs that are needed to deploy the reference configuration, the variables that are used in this guide to represent the corresponding passwords, and the interfaces that are managed by each of the administrator accounts.

Table 3–6 Administrator Accounts in Reference Configuration

Account ID | Password Variable | Interfaces
admin | directory-admin-password | Directory Server dsconf command; Directory Service Control Center (DSCC)
cn=Directory Manager | directory-manager-password | Accessing directory data; ldapmodify and ldapsearch commands
amadmin | access-manager-admin-password | Access Manager amadmin command; Portal Server psadmin command; Access Manager Console; Portal Server Console
amldapuser | access-manager-LDAP-password | Access Manager's Directory Server account
admin | app-server-admin-password | Application Server asadmin command; Application Server Admin Console
 | app-server-master-password | Application Server cluster features
admin | MQ-admin-password | Message Queue imqcmd command

When you use command-line interfaces in the implementation procedures in this guide, you can provide the administrator account password in either of the following ways: interactively, in response to a prompt from the command, or in a password file that is passed to the command.

When implementing the reference configuration, you are free to choose whichever approach you wish. For consistency, however, the last approach is used in all the implementation procedures in this guide.
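For example, the Application Server asadmin command can read the administrator and master passwords from a file instead of prompting for them. The file path below is hypothetical, and the file should be readable only by the administrator; the values correspond to the app-server-admin-password and app-server-master-password variables in Table 3–6:

    # Create a password file for asadmin (replace the values with your passwords)
    cat > /export/config/asadmin.pwd <<EOF
    AS_ADMIN_PASSWORD=app-server-admin-password
    AS_ADMIN_MASTERPASSWORD=app-server-master-password
    EOF
    chmod 600 /export/config/asadmin.pwd

    # Use the password file with a remote asadmin subcommand
    asadmin list-clusters --user admin --host localhost --port 4848 \
        --passwordfile /export/config/asadmin.pwd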

User Management Specification

The process of deploying the portal service reference configuration establishes an LDAP directory schema and the basic tree structure of the LDAP directory. Before beginning the installation and configuration process, analyze your directory requirements and design a schema and a directory tree structure that supports your application system needs. Preparing a user management specification, in advance, ensures that you have the directory you need after having completed deployment.

LDAP Schema

Installing and configuring the reference configuration components creates a basic LDAP schema, as follows:

Directory Tree

Access Manager 7.0 introduced a new configuration data structure. The new realm mode separates configuration data and user data into different repositories, thus supporting different data formats for user data and corresponding interfaces for accessing that data. In contrast to the earlier legacy mode, in which both configuration data and user data are stored in a single LDAP directory tree, realm mode enables Access Manager to plug in multiple user repositories while storing service configuration data in a single realm repository.

The Portal Service on Application Server Cluster reference configuration is based on legacy mode configuration of Access Manager. Legacy mode fully supports Portal Server access to data. In this mode, the Access Manager service and policy configuration data are merged with user data in the same LDAP directory.

However, realm mode can also support Portal Server as long as Access Manager is configured to use the Access Manager SDK datasource plugin that Portal Server uses to access service data in Directory Server. Using Access Manager in realm mode for the reference configuration requires additional configuration to map elements in the realm repository to elements in the user repository. Nevertheless, this realm mode configuration is outside the scope of this reference configuration guide.

Installing and configuring the reference configuration in legacy mode creates a basic LDAP directory tree. Input supplied during the installation and configuration process determines the directory tree root suffix, as follows:

The procedures in this guide for installing Directory Server create the directory tree structure shown in the following figure.

The root suffix in the figure is shown as dc=pstest,dc=com.
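After the directory service module is installed, you can confirm that the root suffix and its immediate child entries exist. The following is a sketch using ldapsearch with the reference-configuration host name, suffix, and password variable; map these to the values used in your environment:

    # List the entries directly under the root suffix
    ldapsearch -h ds.pstest.com -p 389 \
        -D "cn=Directory Manager" -w directory-manager-password \
        -b "dc=pstest,dc=com" -s one "(objectclass=*)"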


Note –

The procedures in this guide use the root suffix shown in Figure 3–2. However, you must replace dc=pstest,dc=com with a root suffix that is suitable for your organization. For this reason, the procedures in this guide show dc=pstest,dc=com as a variable.


Additional user management specifications are needed to support custom content and service channels in your portal.

Figure 3–2 Basic LDAP Directory Tree for the Reference Configuration

Graphical representation of the reference configuration
directory tree.