Sun Java Enterprise System 2005Q4 Deployment Example: Telecommunications Provider Scenario
Chapter 3
The Architecture

A Java ES architecture is a high-level technical description of a Java ES solution. You develop an architecture to identify the combination of Java ES components and other technologies that will deliver the services described in your requirements. The architecture described in this chapter is based on the requirements described in Chapter 2, "The Requirements".
An architecture is developed in two stages:
- The deployment scenario. The deployment scenario identifies the Java ES components that provide the services described in the requirements, and, separately, lists the quality of service requirements.
- The deployment architecture. The deployment architecture integrates the information in the deployment scenario. Where the deployment scenario simply identifies the components, the deployment architecture specifies how many instances of each component must be installed and configured, with what redundancy strategies, on what kind of hardware, and how the instances are distributed across the network to provide the required services at the required quality of service level.
This chapter describes the Java ES architecture that Telco developed to satisfy their business and technical requirements. This chapter contains the following sections:
The Deployment Scenario

The deployment scenario for Telco's Java ES solution comprises the following:
- The logical architecture, which identifies the Java ES components needed to provide the services described in Detailed Service Requirements.
- The quality of service requirements, which specify the performance required from the Java ES component set.
The Logical Architecture
The Java ES components needed to provide the services listed in Detailed Service Requirements are displayed in Figure 3-1.
Figure 3-1 Telco Deployment Logical Architecture
Notice that some basic design decisions are implied in Figure 3-1: the Messaging Server sub-components are to be deployed separately.
The main user interactions with this set of components are illustrated in Figure 3-2, Figure 3-3, Figure 3-4, and Figure 3-5. These figures show how the Java ES components in the proposed logical architecture deliver the specified services. As the design process continues, you analyze the component interactions represented in these figures, factor in the user base and usage patterns, and begin to make decisions about an architecture that supports these interactions with the specified quality of service.
Notice, too, that the security requirements are being considered at this stage of the analysis. The figures include proposed access zones for the deployment.
Figure 3-2 User Login Interactions
The interactions shown in Figure 3-2 are the following:
Step 1a A Messenger Express web browser-based client opens a connection to the Messenger Express Multiplexor (MEM). This is an HTTP connection. Notice that this applies only to consumer-class customers; business-class customers have web browser access to email through the portal desktop. For more information, see Figure 3-5.
Step 1b An email client program opens a connection to the Messaging Multiplexor (MMP). This is an IMAP connection.
Step 2a The MEM connects to the directory via the directory proxy service and authenticates the login ID and password against LDAP data.
Step 2b The MMP connects to the directory via the directory proxy service and authenticates the login ID and password against LDAP data.
Step 3a If the user is authenticated, the MEM connects to the HTTP server on the message store and transfers stored messages to the Messenger Express client.
Step 3b If the user is authenticated, the MMP connects to the message store and transfers stored messages to the email client program.
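The login flow in Steps 1 through 3 can be sketched in miniature. The following Python sketch simulates the directory data and the message store with in-memory dictionaries (all names, passwords, and messages are hypothetical); a real deployment performs these steps over LDAP and IMAP or HTTP connections.

```python
# Simulated LDAP data reached through the directory proxy (hypothetical entries).
DIRECTORY = {"alice": {"password": "secret", "store": "storeA"}}

# Simulated back-end message stores (hypothetical data).
MESSAGE_STORES = {"storeA": {"alice": ["Welcome to Telco mail"]}}

def proxy_authenticate(login_id, password):
    """Step 2: authenticate the login ID and password against LDAP data."""
    entry = DIRECTORY.get(login_id)
    return entry is not None and entry["password"] == password

def multiplexor_login(login_id, password):
    """Steps 1-3: accept the client connection, authenticate via the
    directory proxy, then fetch stored messages from the message store."""
    if not proxy_authenticate(login_id, password):
        return None  # login rejected, no message store connection is opened
    store = DIRECTORY[login_id]["store"]
    return MESSAGE_STORES[store][login_id]
```

The point of the multiplexor layer is visible even in this sketch: the client talks only to the multiplexor, which decides which back-end store to contact after authentication succeeds.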
Figure 3-3 Incoming Mail Interactions
The interactions shown in Figure 3-3 are the following:
Step 1 Incoming messages are delivered to an instance of the Messaging Server Message Transfer Agent (MTA) that is configured to serve as the incoming mail relay (IMR). The IMR verifies the addresses on incoming messages against the LDAP directory. The IMR also uses the LDAP directory to determine the correct message store instance for the address.
Step 2 The IMR routes messages to the correct message store. This is a local mail transfer protocol (LMTP) connection.
Step 3 If the user is logged in with an email client, the email client periodically polls the MMP to see if there are any new messages and fetches them into the client when requested to do so.
Step 4 If the user is logged in with a web browser-based Messenger Express client, the MEM notifies the user when there are any new messages. The user can then view the message. The interaction between the web browser and the MEM is HTTP.
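The IMR's routing decision in Steps 1 and 2 can be sketched as a directory lookup. In this Python sketch the LDAP directory is simulated with a dictionary mapping addresses to message store hosts; the addresses and store names are hypothetical.

```python
# Simulated directory data: each provisioned address maps to the
# message store instance that holds its mailbox (hypothetical names).
USER_STORE = {
    "alice@telcomail.com": "business-store",  # business-class mailbox
    "bob@telcomail.com": "consumer-store",    # consumer-class mailbox
}

def imr_route(address):
    """Step 1: verify the address against the directory.
    Step 2: return the message store that should receive the message."""
    store = USER_STORE.get(address)
    if store is None:
        raise ValueError("unknown recipient: " + address)
    return store
```

A message for an unknown recipient is rejected at the relay, before it ever reaches a message store.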
Figure 3-4 Outgoing Mail Interactions
The interactions shown in Figure 3-4 are the following:
Step 1a The user composes a message in the Messenger Express mail client. The Messenger Express Client connects to the MEM. This is an HTTP connection.
Step 2a The MEM routes the composed message to the HTTP server on the message store (the mshttpd).
Step 3a The HTTP server routes one copy of the message to the user's Sent folder and another copy to the Messaging Server Message Transfer Agent (MTA) that is configured to serve as the outgoing mail relay (OMR). The OMR relays the message to the Internet.
Step 1b The user composes a message in the stand-alone email client program. The email client routes one copy of the composed message to the instance of the Messaging Server Message Transfer Agent (MTA) that is configured to serve as the outgoing mail relay (OMR). This is an SMTP connection. The OMR relays the message to the Internet.
Step 2b The email client also routes another copy of the message, by way of the MMP, to the user’s Sent mail folder.
Figure 3-5 Portal Access Interactions
The interactions shown in Figure 3-5 are the following:
Step 1 In a web browser, the user opens the publicly accessible URL for the portal desktop. This URL is actually a logical service name for the Portal Server Secure Remote Access service. Portal Server Secure Remote Access connects to Portal Server, obtains the basic desktop page, and relays it to the web browser. The user sees user ID and password fields.
Step 2 The user supplies a user ID and password. Portal Server Secure Remote Access connects to Access Manager for authentication. Access Manager connects to the Directory Proxy Server service, and ultimately to the Directory Server service, and authenticates the user ID and password. Access Manager returns a single sign-on cookie to the user's web browser session.
Step 3 Portal Server Secure Remote Access contacts Portal Server with the single sign-on cookie.
Step 4 To format the user's desktop, Portal Server connects to Messaging Server and Calendar Server. Portal Server uses a proxy authentication mechanism to open these connections.
Step 5 Messaging Server obtains the user's mailbox location from the LDAP directory. The Messaging Server also formats a summary display of the user's mail for the portal desktop mail channel. This is an IMAP and SMTP connection to the messaging back end.
Step 6 Calendar Server obtains the user's calendar preferences from the LDAP directory. The Calendar Server also formats a summary display of the user's calendar for the portal desktop calendar channel. This is a WCAP connection.
Step 7 Portal Server obtains the user's display preferences from the LDAP directory. Portal Server formats the desktop page and relays it to Portal Server Secure Remote Access and, ultimately, to the user's web browser.
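The single sign-on cookie at the center of this flow lets each service accept the user without repeating the directory authentication. The following Python sketch illustrates the concept only: a token carrying the user ID plus an HMAC signature over a key shared by the issuing and validating services. The key, token format, and function names are hypothetical; the real cookie format is defined by Access Manager, not by this example.

```python
import hashlib
import hmac

# Hypothetical key shared between the authentication service and the
# services that trust its cookies. Real deployments manage this securely.
SHARED_KEY = b"hypothetical-shared-key"

def issue_sso_cookie(user_id):
    """After authentication succeeds, return a signed single sign-on token."""
    sig = hmac.new(SHARED_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id + ":" + sig

def validate_sso_cookie(cookie):
    """A downstream service validates the token instead of re-authenticating.
    Returns the user ID if the signature checks out, otherwise None."""
    user_id, _, sig = cookie.partition(":")
    expected = hmac.new(SHARED_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None
```

The design point is that validation needs only the shared key, not a round trip to the directory, which is why Steps 3 through 7 can proceed without a second login.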
The Quality of Service Requirements
The logical architecture identifies the Java ES components that provide the services specified in the requirements, but does not tell you how to install the components on your network. In a typical production deployment, quality of service requirements such as response time, service availability, and service reliability are satisfied by installing and configuring multiple instances of the components and distributing the components among several computers. For example, configuring two computers as cluster nodes, and then installing the Messaging Server back-end software on those computers provides fail-over capability and high availability for the messaging back-end service.
The quality of service requirements for the Telco deployment are described in the following sections:
The Deployment Architecture

The deployment architecture integrates the information in the logical architecture and the quality of service requirements. The deployment architecture answers such questions as the following:
- Which redundancy strategies are you using to meet your availability and reliability requirements? (The main redundancy strategies available to you are installing multiple instances of a component and load balancing them, installing multiple instances of a component on Sun Cluster nodes, and using multiple instances of Directory Server that are synchronized through the multimaster replication features.)
- How many instances of each component must be installed and configured in order to implement the redundancy strategies used in the solution?
- How are your component instances combined on your computers? For example, in a medium-sized solution, you could install and configure instances of both Messaging Server and Calendar Server on a single computer or cluster instance. In a larger solution with more user activity, you might install Messaging Server and Calendar Server on separate, dedicated computers to meet your performance requirements.
- How many CPUs are needed on each computer to achieve the performance specified in your quality of service requirements?
The answers to these questions lead to a deployment architecture for Telco.
The deployment architecture is the result of analyzing use cases and usage information and determining how the Java ES components can be installed to provide the specified services at the specified quality of services levels.
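The CPU sizing question can be approached with back-of-the-envelope arithmetic before load testing refines it. The following Python sketch shows the shape of such an estimate; the numbers in the test case are entirely hypothetical placeholders, and actual per-CPU capacity figures come from load testing and vendor sizing guides, not from this example.

```python
import math

def cpus_needed(concurrent_users, users_per_cpu, headroom=0.30):
    """Estimate CPUs for one service: divide peak concurrent users by the
    measured per-CPU capacity, then add headroom (here a hypothetical 30%)
    for growth and for absorbing load when a peer instance fails."""
    raw = concurrent_users / users_per_cpu
    return math.ceil(raw * (1 + headroom))
```

For example, a service expected to carry 10,000 concurrent users on hardware measured at 2,500 users per CPU would need 4 CPUs at steady state, rounded up to 6 once the 30% headroom is applied.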
A deployment architecture is typically represented graphically, in a set of boxes that represent the computers in the deployment. Each box in the figure is labeled with the name of the computer and the components that are installed on the computer. The deployment architecture for the Telco deployment is illustrated in Figure 3-6.
Figure 3-6 The Deployment Architecture
Redundancy Strategies Used in the Architecture
The architecture in Figure 3-6 makes use of all three possible redundancy strategies for Java ES components. The redundancy strategies are chosen for the following reasons:
- Load balancing. This is the preferred solution for components that are stateless or only mildly stateful, or for which instances of the component do not need to synchronize database updates. Load balancing uses redundant hardware and software components to distribute requests for a service among multiple component instances that provide the service, so that no single instance is overloaded. This redundancy also means that if any one instance of a component fails, other instances are available to assume a heavier load. Depending on the latent capacity built into this approach, failure might not result in significant degradation of performance. Load balancing is used for many components in the Telco architecture, for example the Portal Server and Access Manager components on jesPAM1 and jesPAM2.
- Sun Cluster software. This is the preferred solution for the back-end components that have read/write access to disk storage, namely the Messaging Server and Calendar Server components. In the Telco deployment, Sun Cluster software manages redundant hardware and software to provide failover for these components and for their access to disk storage. The Telco architecture makes use of Sun Cluster software on two separate back-end mail stores. The back-end mail store for business customers is on jesMCS1b and jesMCS2b, which function as a single logical host. The back-end mail store for consumer customers is on jesMS1c and jesMS2c, which also function as a single logical host.
- Directory Server Multimaster Replication. This is the preferred solution for Directory Server, which provides data that is crucial to the operation of the entire system. Multimaster replication is specifically designed for Directory Server and is therefore relatively easy to implement. The Telco architecture uses Directory Server multimaster replication for all Directory Server instances.
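The load balancing strategy can be sketched in a few lines. This Python sketch models a round-robin balancer that skips failed instances; the instance names echo the hosts above, but the health-tracking mechanism is hypothetical, since the Telco architecture uses hardware load balancers with their own health checks.

```python
import itertools

class LoadBalancer:
    """Round-robin distribution across redundant component instances,
    with failover: a failed instance is skipped, and its load shifts
    to the remaining healthy instances."""

    def __init__(self, instances):
        self.health = {name: True for name in instances}
        self._cycle = itertools.cycle(instances)

    def mark_down(self, name):
        self.health[name] = False  # take a failed instance out of rotation

    def route(self):
        """Return the next healthy instance for an incoming request."""
        for _ in range(len(self.health)):
            candidate = next(self._cycle)
            if self.health[candidate]:
                return candidate
        raise RuntimeError("no healthy instances available")
```

With two instances, losing one leaves the service available but doubles the surviving instance's load, which is why the latent capacity mentioned above matters.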
Security Strategies Used in the Architecture
Telco provides mail and calendar services that are accessible to the public over the Internet. However, the network that provides mail and calendar services also runs other services that must not be compromised. The directory service, for example, has confidential data about Telco's employees, and similar confidential data about Telco's business and consumer class customers. (For more information, see Security Requirements.)
Telco's challenge is to develop an architecture that both provides the required publicly accessible services and secures the other services and resources that run on the same network. Telco assumes that the most significant threats would come from outside the local area network. Therefore, Telco's security strategy concentrates on preventing unauthorized outsiders from accessing the network at all, and preventing authenticated users from accessing any services or data they are not authorized to use.
The basic approach that Telco uses is to divide the network into access zones. The access zones are demarcated by firewalls. The firewalls and the access zones are shown in Figure 3-6.
In addition to the firewalls, Telco’s plan includes a number of techniques and technologies that make it more difficult for would-be attackers to penetrate the firewalls and compromise the computers running the Java ES services.
The outermost zone in Figure 3-6 is the so-called de-militarized zone, or DMZ. The DMZ is reasonably secure. Each of the services behind the firewall can only be accessed at a specific URL. For example, business users who connect to the portal service access the service at https://www.telcomail.com:80. The firewall blocks all other ports and addresses. The firewall also imposes similar restrictions for accessing the other services in the DMZ, the messaging multiplexor and the mail relay services.
In addition to being deployed behind Firewall 3, the four Java ES services that are exposed to the Internet are protected in the following ways:
- The services require users to authenticate themselves. For example, users who open www.telcomail.com in their web browsers are presented with a login page. The MMP and the mail relay services impose similar restrictions for access.
- The computers that provide these services are behind hardware load balancers. The load balancers provide a single point of contact for each service, regardless of how many component instances are running on how many computers. This means that for each service there is only one hole in the firewall, and all of the traffic for that service is routed through the load balancer.
- Access control features in the Java ES components. For example, access control rules are established for the Directory Proxy Server instances running on jesDPA1 and jesDPA2. Only traffic from the trusted proxy group (network 192.168.11.0, netmask 255.255.255.0) is allowed to access the directory proxy servers through the load balancer. Any other traffic is blocked in the software.
- Not shown in Figure 3-6, but implied in the deployment architecture, is a network topology defined by private IP addresses. These private IP addresses define subnets that are invisible to the outside world. These subnets are connected only through the load balancers, further impeding the ability of intruders to see the actual computers behind the public URLs.
- Also not shown in Figure 3-6, the individual computers running the Java ES services are hardened.
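The software access-control rule described above, accepting only the trusted proxy subnet, amounts to a membership test against a network range. A minimal Python sketch using the standard ipaddress module, with the subnet taken from the architecture and everything else illustrative:

```python
import ipaddress

# The trusted proxy group from the access control rules
# (network 192.168.11.0, netmask 255.255.255.0).
TRUSTED_NET = ipaddress.ip_network("192.168.11.0/255.255.255.0")

def allow_connection(source_ip):
    """Accept only traffic that originates in the trusted proxy group;
    any other source is blocked in the software."""
    return ipaddress.ip_address(source_ip) in TRUSTED_NET
```

Layering this check behind the firewall means an attacker who breaches the firewall still cannot reach the directory proxy from an untrusted address.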
Figure 3-6 indicates where Firewall 1 and Firewall 2 are placed in order to define the inner zones. Figure 3-6 also indicates some of the additional measures that Telco uses to further secure the inner zones.
- The only openings in Firewall 2 are those shown in Figure 3-6. Notice that these are connections from trusted private IP addresses.
- Firewall 2 is established with different hardware, from a different manufacturer, than Firewall 3. This ensures that an intruder who recognized Firewall 3, and was able to exploit a known weakness, would not be able to repeat the same exploit on Firewall 2.
- The actual portal service is provided by Portal Server instances in Zone 2. These instances are protected by the Portal Server Secure Remote Access service in Zone 3. All access to the portal service is through the Portal Server Secure Remote Access service. This aspect of the architecture allows the portal service to reside behind an additional firewall and an additional layer of hardware load balancers.
- The computers that provide these services in Zone 2 are also behind hardware load balancers, establishing a single point of contact for each service in Zone 2 and minimizing the openings in Firewall 2.
- The computers in Zone 2 are on a different subnet, defined by private IP addresses, from the computers in Zone 3. The only bridges between the subnets are the hardware load balancers. The load balancers in Zone 2 accept connections only from the load balancers in Zone 3, and the firewall also rejects other connections.
- The individual computers running the Java ES services in Zone 2 are hardened.
Zone 1 is the most secure, and contains the directory service. In addition to Firewall 1, Zone 1 is protected by the following measures:
- There is no direct access to the Directory Server instances. All access is through the Directory Proxy Server instances in Zone 2. The directory service only accepts requests that originate with the directory proxy service.
- Firewall 1 only allows traffic from the Directory Proxy Server instances. All other traffic is blocked.
- Firewall 1 is established with different hardware, from a different manufacturer, than either Firewall 2 or Firewall 3.
- The individual computers running the directory services in Zone 1 are hardened.
For more information on the implementation of this security strategy, see The Network and Connectivity Specification.
Planning for Scalability in the Architecture
The Messaging Multiplexor (MMP) and Messenger Express Multiplexor (MEM) are both capable of handling incoming client connections that are routed to multiple back-end mail stores. In the architecture illustrated in Figure 3-6, the MMP and MEM instances are co-located on computers jesMMP1 and jesMMP2. Depending on whether incoming client connections come from business or consumer customers, they are routed to one of the two back-end message stores.
This architecture could be scaled to handle more incoming connections in several ways:
- The user base is currently divided into business users and consumer users, and each group is assigned to its own message store. (Note that each message store actually comprises multiple instances represented by a logical host.) As the number of users grows, the user accounts could be divided into more groups (for example, divide the consumer users alphabetically or by location), and the number of computers running message store instances could be increased. The architecture would remain essentially the same, but the MMP and MEM would be distributing a greater number of incoming connections among a greater number of back-end message store instances. The load on each message store instance would remain constant.
- The MMP and MEM instances could be installed on separate computers, giving each instance more computing resources.
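The first scaling option above, dividing consumer accounts into more groups, can be sketched as a partitioning function. In this Python sketch the consumer users are split alphabetically by surname; the store names and split points are hypothetical, and a real deployment would record the assignment in the LDAP directory rather than in code.

```python
# Hypothetical alphabetical partitions for consumer users. Adding
# capacity means adding a row here and a message store behind it;
# the multiplexors keep distributing connections the same way.
CONSUMER_PARTITIONS = [
    ("a", "m", "consumer-store-1"),  # surnames a through m
    ("n", "z", "consumer-store-2"),  # surnames n through z
]

def store_for_user(user_class, surname):
    """Route business users to the business store; partition consumer
    users across stores by the first letter of the surname."""
    if user_class == "business":
        return "business-store"
    first = surname[0].lower()
    for low, high, store in CONSUMER_PARTITIONS:
        if low <= first <= high:
            return store
    raise ValueError("no partition for surname: " + surname)
```

Because each partition maps to its own logical host, the per-store load stays roughly constant as the user base and the partition table grow together.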