
Sun Java System Portal Server 6 2005Q4 Deployment Planning Guide 

Chapter 5
Creating Your Portal Design

This chapter describes how to create your high-level and low-level portal design and provides information on creating specific sections of your design plan.

This chapter contains the following sections:

Portal Design Approach
Overview of High-Level Portal Design
Overview of Low-Level Portal Design
Logical Portal Architecture
Portal Server and Scalability
Portal Server and High Availability
Working with Portal Server Building Modules
Designing Portal Use Case Scenarios
Designing Portal Security Strategies
Designing SRA Deployment Scenarios

Portal Design Approach

At this point in the Sun Java™ System Portal Server deployment process, you’ve identified your business and technical requirements, and communicated these requirements to the stakeholders for their approval. Now you are ready to begin the design phase, in which you develop your high- and low-level designs.

Your high-level portal design communicates the architecture of the system and provides the basis for the low-level design of your solution. Further, the high-level design needs to describe a logical architecture that meets the business and technical needs that you previously established. The logical architecture is broken down according to the various applications that comprise the system as a whole and the way in which users interact with it. In general, the logical architecture includes Portal Server Secure Remote Access (SRA), high availability, security (including Access Manager), and Directory Server architectural components. See Logical Portal Architecture for more information.

The high- and low-level designs also need to account for any factors beyond the control of the portal, including your network, hardware failures, and improper channel design.

Once developed, the high-level design leads toward the creation of the low-level design. The low-level design specifies such items as the physical architecture, network infrastructure, Portal Desktop channel and container design, and the actual hardware and software components. Once you have completed the high- and low-level designs, you can begin a trial deployment for testing within your organization.

Overview of High-Level Portal Design

The high-level design is your first iteration of an architecture approach to support both the business and technical requirements. The high-level design addresses questions such as:

Overview of Low-Level Portal Design

The low-level design focuses on specifying the processes and standards you use to build your portal solution, and specifying the actual hardware and software components of the solution, including:

Logical Portal Architecture

Your logical portal architecture defines all the components that make up the portal, including (but not limited to) the following:

Additionally, you need to consider how the following three network zones fit into your design:

The logical architecture describes the Portal Desktop look and feel, including potential items such as:

The logical architecture is where you also develop a caching strategy, if your site requires one. If the pages returned to your users contain references to large numbers of images, Portal Server can deliver these images for all users. However, if these types of requests can be offloaded to a reverse proxy type of caching appliance, you can free up system resources so that Portal Server can service additional users. Additionally, by placing a caching appliance closer to end users, these images can be delivered to end users somewhat more quickly, thus enhancing the overall end user experience.

Portal Server and Scalability

Scalability is a system’s ability to accommodate a growing user population, without performance degradation, by the addition of processing resources. The two general means of scaling a system are vertical and horizontal scaling. The subject of this section is the application of scaling techniques to the Portal Server product.

Benefits of scalable systems include:

Vertical Scaling

In vertical scaling, CPUs, memory, multiple instances of Portal Server, or other resources are added to one machine. This enables more process instances to run simultaneously. With Portal Server, take advantage of vertical scaling by planning and sizing for the number of CPUs you need. See Chapter 4, "Pre-Deployment Considerations" for more information.

Horizontal Scaling

In horizontal scaling, machines are added. This also enables multiple simultaneous processing and a distributed work load. In Portal Server, you make use of horizontal scaling because you can run the Portal Server, Directory Server and Access Manager on different nodes. Horizontal scaling can also make use of vertical scaling, by adding more CPUs, for example.

Additionally, you can scale a Portal Server installation horizontally by installing server component instances on multiple machines. Each installed server component instance executes an HTTP process, which listens on a TCP/IP port whose number is determined at installation time. Gateway components use a round-robin algorithm to assign new session requests to server instances. Once a session is established, an HTTP cookie stored on the client indicates the session server; all subsequent requests go to that server.
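The round-robin assignment with cookie-based stickiness described above can be modeled in a few lines (an illustrative sketch only; the class name and server addresses are invented and this is not the actual Gateway implementation):

```python
from itertools import cycle

class StickyRoundRobinBalancer:
    """Toy model of round-robin assignment with cookie-based stickiness.

    New sessions are assigned to server instances in round-robin order;
    once assigned, a session cookie pins all later requests to the same
    instance, mirroring the Gateway behavior described above.
    """

    def __init__(self, instances):
        self._rotation = cycle(instances)

    def route(self, cookies):
        # Honor an existing session cookie (sticky session).
        if "session_server" in cookies:
            return cookies["session_server"]
        # Otherwise pick the next instance and record it in the cookie.
        instance = next(self._rotation)
        cookies["session_server"] = instance
        return instance

balancer = StickyRoundRobinBalancer(["portal1:8080", "portal2:8080"])
alice, bob = {}, {}
print(balancer.route(alice))  # portal1:8080 (first new session)
print(balancer.route(bob))    # portal2:8080 (round-robin)
print(balancer.route(alice))  # portal1:8080 (sticky: same server as before)
```

The stickiness matters because, as noted later in this chapter, Access Manager requires that a session, once created on an instance, always return to that instance.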

The section Working with Portal Server Building Modules, discusses an approach to a specific type of configuration that provides optimum performance and horizontal scalability.

Portal Server and High Availability

High availability ensures that your portal platform is accessible 24 hours a day, seven days a week. Today, organizations require that data and applications always be available. High availability has become a requirement that applies not only to mission-critical applications, but also to the whole IT infrastructure.

System availability is affected not only by computer hardware and software, but also by people and processes, which can account for up to 80 percent of system downtime. Availability can be improved through a systematic approach to system management and by using industry best practices to minimize the impact of human error.

One important issue to consider is that not all systems have the same level of availability requirements. Most applications can be categorized into the following three groups:

The goals of these levels are to improve the following:

The more mission critical the application, the more you need to focus on availability to eliminate any single point of failure (SPOF), and resolve people and processes issues.

Even if a system is always available, instances of failure recovery might not be transparent to end users. Depending on the kind of failure, users can lose the context of their portal application, and might have to login again to get access to their Portal Desktop.

System Availability

System availability is often expressed as a percentage of the system uptime. A basic equation to calculate system availability is:

Availability = uptime / (uptime + downtime) * 100

For instance, a service level agreement uptime of four nines (99.99 percent) means that in a month the system can be unavailable for only about four minutes. Furthermore, system downtime is the total time the system is not available for use. This total includes not only unplanned downtime, such as hardware failures and network outages, but also planned downtime: preventive maintenance, software upgrades, and patches.

If the system is supposed to be available seven days a week, 24 hours a day, the architecture needs to include redundancy to avoid planned and unplanned downtime to ensure high availability.
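Applying the availability equation above, the downtime budget implied by a given service level is easy to compute (a quick sketch, assuming a 30-day month):

```python
def allowed_downtime_minutes(availability_percent, period_minutes=30 * 24 * 60):
    """Downtime budget for a period, given an availability percentage."""
    return period_minutes * (1 - availability_percent / 100)

for level in (99.0, 99.9, 99.99):
    minutes = allowed_downtime_minutes(level)
    print(f"{level}% uptime -> {minutes:.1f} minutes of downtime per month")
```

Note how quickly the budget shrinks: 99 percent allows over seven hours of monthly downtime, while 99.99 percent allows only about four minutes, which is why the higher levels require the redundancy techniques described below.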

Degrees of High Availability

High availability is not just a switch that you can turn on and off. Rather, there are various degrees of high availability, which refer to the system’s ability to recover from failures and to the ways in which availability is measured. The degree of high availability you need depends on your organization’s fault tolerance requirements.

For example, your organization might tolerate the need to reauthenticate after a system failure, so that a request resulting in a redirection to another login screen would be considered successful. For other organizations, this might be considered a failure, even though the service is still being provided by the system.

Session failover alone is not the ultimate answer to transparent failover, because the context of a particular portal application can be lost after a failover. For example, consider the case where a user is composing a message in NetMail Lite, has attached several documents to the email, then the server fails. The user is redirected to another server and NetMail Lite will have lost the user’s session and the draft message. Other providers, which store contextual data in the current JVM™, have the same problem.

Achieving High Availability for Portal Server

Making Portal Server highly available involves ensuring high availability on each of the following components:

Portal Server System Communication Links

Figure 5-1 shows the processes and communication links of a Portal Server system that are critical to the availability of the solution.

Figure 5-1  Portal Server Communication Links

This figure shows the links between the services and software components.

In this figure, the box encloses the Portal Server instance running on Web Server technology. Within the instance are five servlets (Authentication, Access Manager administration console, Portal Desktop, Communication Channel, and Search), and the three SDKs (Access Manager SSO, Access Manager Logging, and Access Manager Management). The Authentication service servlet also makes use of an LDAP service provider module.

A user uses either a browser or the Gateway to communicate with Portal Server. This traffic is directed to the appropriate servlet. Communication occurs between the Authentication service’s LDAP module and the LDAP authentication server; between the Communications channel servlet and the SMTP/IMAP messaging server; between the Access Manager SSO SDK and the LDAP server; and between the Access Manager Management SDK and the LDAP server.

Working with Portal Server Building Modules

Because deploying Portal Server is a complex process involving many other systems, this section describes a specific configuration that provides optimum performance and horizontal scalability. This configuration is known as a Portal Server building module.

A Portal Server building module is a hardware and software construct with limited or no dependencies on shared services. A typical deployment uses multiple building modules to achieve optimum performance and horizontal scalability. Figure 5-2 shows the building module architecture.

Figure 5-2  Portal Server Building Module Architecture

Building Module Architecture


The Portal Server building module is simply a recommended configuration. In some cases, a different configuration might result in slightly better throughput (usually at the cost of added complexity). For example, adding another instance of Portal Server to a four CPU system might result in up to ten percent additional throughput, at the cost of requiring a load balancer even when using just a single system.

Building Modules and High Availability Scenarios

Portal Server provides three scenarios for high availability:

Possible supported architectures include the following:

This section explains how to implement these architectures, leveraging the building module concept from a high-availability standpoint.

Table 5-1 summarizes these high availability scenarios along with their supporting techniques.

Table 5-1  Portal Server High Availability Scenarios

Component Requirements                                Best Effort   NSPOF   Transparent Failover
Hardware Redundancy                                   Yes           Yes     Yes
Portal Server Building Modules                        No            Yes     Yes
Multi-master Configuration                            No            Yes     Yes
Load Balancing                                        No            Yes     Yes
Stateless Applications and Checkpointing Mechanisms   No            No      Yes
Session Failover                                      No            No      Yes
Directory Server Clustering                           No            No      Yes

Load balancing is not provided out-of-the-box with the Web Server product.

Best Effort

In this scenario, you install Portal Server and Directory Server on a single node that has a secured hardware configuration for continuous availability, such as Sun Fire UltraSPARC® III machines. (Securing a Solaris™ Operating Environment system requires that changes be made to its default configuration.)

This type of server features full hardware redundancy, including redundant power supplies, fans, and system controllers; dynamic reconfiguration; CPU hot-plug; online upgrades; and a disk rack that can be configured in RAID 0+1 (striping plus mirroring) or RAID 5 using a volume management system, which prevents loss of data in case of a disk crash. Figure 5-3 shows a small, best effort deployment using the building module architecture.

Figure 5-3  Best Effort Scenario

Best Effort scenario with browser and building module

In this scenario, one building module with four CPUs and eight GB of RAM (4x8) is sufficient. The Access Manager console is outside of the building module so that it can be shared with other resources. (Your actual sizing calculations might result in a different allocation amount.)

This scenario might suffice for task critical requirements. Its major weakness is that a maintenance action necessitating a system shutdown results in service interruption.

When SRA is used, and a software crash occurs, a watchdog process automatically restarts the Gateway, Netlet Proxy, and Rewriter Proxy.

No Single Point of Failure

Portal Server natively supports the no single point of failure (NSPOF) scenario. NSPOF is built on top of the best effort scenario, and in addition, introduces replication and load balancing.

Figure 5-4  No Single Point of Failure Example

The NSPOF example shows the Best Effort scenario plus a second building module, MMR, and a load balancer.

As stated earlier, a building module consists of a Portal Server instance, a Directory Server master replica for profile reads, and a search engine database. As such, at least two building modules are necessary to achieve NSPOF, thereby providing a backup if one of the building modules fails. Each building module consists of four CPUs with eight GB of RAM.

When the load balancer detects Portal Server failures, it redirects users’ requests to a backup building module. Accuracy of failure detection varies among load balancing products. Some products are capable of checking the availability of a system by probing a service involving several functional areas of the server, such as the servlet engine, and the JVM. In particular, most vendor solutions from Resonate, Cisco, Alteon, and others enable you to create arbitrary scripts for server availability. As the load balancer is not part of the Portal Server software, you must acquire it separately from a third-party vendor.


The Access Manager product requires that you set up load balancing to enforce sticky sessions. This means that once a session is created on a particular instance, the load balancer needs to always return to the same instance for that session. The load balancer achieves this by binding the session cookie to the instance identification. In principle, that binding is reestablished when a failed instance is decommissioned. Sticky sessions are also recommended for performance reasons.

Multi-master replication (MMR) takes place between the building modules. The changes that occur on each directory are replicated to the other, which means that each directory plays both the supplier and consumer roles. For more information on MMR, refer to the Directory Server 6 Deployment Guide.


In general, the Directory Server instance in each building module is configured as a replica of a master directory, which runs elsewhere. However, nothing prevents you from using a master directory as part of the building module. The use of masters on dedicated nodes does not improve the availability of the solution. Use dedicated masters for performance reasons.

Redundancy is equally important to the directory master so that profile changes through the administration console or the Portal Desktop, along with consumer replication across building modules, can always be maintained. Portal Server and Access Manager support MMR. The NSPOF scenario uses a multi-master configuration. In this configuration, two suppliers can accept updates, synchronize with each other, and update all consumers. The consumers can refer update requests to both masters.

SRA follows the same replication and load balancing pattern as Portal Server to achieve NSPOF. As such, two SRA Gateways and a pair of proxies are necessary in this scenario. The SRA Gateway detects a Portal Server instance failure when the instance does not respond to a request after a certain time-out value. When this occurs, the HTTPS request is routed to a backup server. The SRA Gateway performs a periodic check for availability until the first Portal Server instance is up again.
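The failure-detection behavior described above — time out on a failed instance, route to a backup, and periodically re-probe — can be modeled as follows (an illustrative sketch only; the names and the 30-second retry interval are assumptions, and the real SRA logic is internal to the Gateway):

```python
import time

class FailoverRouter:
    """Toy failover: try the primary, fall back on failure, re-probe later."""

    def __init__(self, primary, backup, probe, retry_interval=30.0):
        self.primary, self.backup = primary, backup
        self.probe = probe                  # callable: is this server up?
        self.retry_interval = retry_interval
        self._primary_down_since = None

    def pick_server(self, now=None):
        now = time.monotonic() if now is None else now
        if self._primary_down_since is not None:
            # Within the retry interval, send traffic to the backup.
            if now - self._primary_down_since < self.retry_interval:
                return self.backup
            self._primary_down_since = None  # time to re-probe the primary
        if self.probe(self.primary):
            return self.primary
        self._primary_down_since = now
        return self.backup

up = {"ps1": False, "ps2": True}
router = FailoverRouter("ps1", "ps2", probe=lambda s: up[s])
print(router.pick_server(now=0.0))    # ps2: primary is down
print(router.pick_server(now=10.0))   # ps2: still within retry interval
up["ps1"] = True
print(router.pick_server(now=40.0))   # ps1: re-probe succeeds
```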

The NSPOF high availability scenario is suitable for business critical deployments. However, some high availability limitations in this scenario might not fulfill the requirements of a mission critical deployment.

Transparent Failover

Transparent failover uses the same replication model as the NSPOF scenario but provides additional high availability features, which make the failover to a backup server transparent to end users.

Figure 5-5 shows a transparent failover scenario. Two building modules are shown, each with four CPUs and eight GB of RAM. Load balancing is responsible for detecting Portal Server failures and redirecting users’ requests to a backup Portal Server in the building module. Building Module 1 stores sessions in the sessions repository. If a crash occurs, the application server retrieves sessions created by Building Module 1 from the sessions repository.

Figure 5-5  Transparent Failover Example Scenario

Transparent Failover is NSPOF plus a Sessions Repository.

The session repository is provided by the application server software. Portal Server is running in an application server. Portal Server supports transparent failover on application servers that support HttpSession failover. See Appendix C, "Portal Server and Application Servers" for more information.

With session failover, users do not need to reauthenticate after a crash. In addition, portal applications can rely on session persistence to store context data used for checkpointing. You configure session failover by setting the relevant configuration property to true.

The Netlet Proxy cannot support the transparent failover scenario because of a limitation of the TCP protocol. The Netlet Proxy tunnels TCP connections, and you cannot migrate an open TCP connection to another server. A Netlet Proxy crash drops all outstanding connections, which must be reestablished.

Building Module Constraints

The constraints on the scalability of building modules are given by the number of LDAP writes resulting from profile updates and the maximum size of the LDAP database. For more information, see Directory Server Requirements.


If the LDAP server is configured to keep its _db files in the /tmp directory, those files are lost when the server restarts after a crash. This improves performance but also affects availability.

If the analysis at your specific site indicates that the number of LDAP write operations is indeed a constraint, possible solutions include creating building modules that replicate only a specific branch of the directory, with a layer in front that directs incoming requests to the appropriate portal instance.

Deploying Your Building Module Solution

This section describes guidelines for deploying your building module solution.

Deployment Guidelines

How you construct your building module affects performance. Consider the following recommendations to deploy your building module properly:

Directory Server Requirements

Identify your Directory Server requirements for your building module deployment. For specific information on Directory Server deployment, see the Directory Server Deployment Guide.

Consider the following Directory Server guidelines when you plan your Portal Server deployment:

Search Engine Structure

When you deploy the Search Engine as part of your building module solution, consider the following:

Designing Portal Use Case Scenarios

Use case scenarios are written scenarios used to test and present the system’s capabilities and form an important part of your high-level design. Though you implement use case scenarios toward the end of the project, formulate them early on in the project, once you have established your requirements.

When available, use cases can provide valuable insight into how the system is to be tested. Use cases are beneficial in identifying how you need to design the user interface from a navigational perspective. When designing use cases, compare them to your requirements to get a thorough view of their completeness and how you are to interpret the test results.

Use cases provide a method for organizing your requirements. Instead of a bulleted list of requirements, you organize them in a way that tells a story of how someone can use the system. This provides for greater completeness and consistency, and also gives you a better understanding of the importance of a requirement from a user perspective.

Use cases help to identify and clarify the functional requirements of the portal. Use cases capture all the different ways a portal would be used, including the set of interactions between the user and the portal as well as the services, tasks, and functions the portal is required to perform.

A use case defines a goal-oriented set of interactions between external actors and the portal system. (Actors are parties outside the system that interact with the system, and can be a class of users, roles users can play, or other systems.)

Use case steps are written in an easy-to-understand structured narrative using the vocabulary of the domain.

Use case scenarios are an instance of a use case, representing a single path through the use case. Thus, there may be a scenario for the main flow through the use case and other scenarios for each possible variation of flow through the use case (for example, representing each option).

Elements of Portal Use Cases

When developing use cases for your portal, keep the following elements in mind:

Example Use Case: Authenticate Portal User

Table 5-2 describes a use case for a portal user to authenticate with the portal.

Table 5-2  Use Case: Authenticate Portal User  




Priority

Must have.

Context of Use

Only authenticated users are allowed to gain access to the portal resources. This access restriction applies to all portal resources, including content and services. This portal relies on the user IDs maintained in the corporate LDAP directory.


The portal users identify themselves only once per online session. If an idle timeout occurs, the users must identify themselves again. If portal user identification fails more than a specified number of allowed retries, access to the intranet is revoked or limited (deactivated) until a system administrator reactivates the account; in this case, the portal user should be advised to contact the authorized person. Identified portal users are able to access only the data and information that they are authorized for.

Primary User

Portal end user.

Special Requirements



Portal end user.


The portal user is an authorized user.
Standard corporate LDAP user ID.
Must be provided to each employee.
Authorized LDAP entry.
Every employee has access to the corporate intranet.
No guest account.

Minimal Guarantees

Friendly customer-centric message.
Status—with error message indicating whom to call.

Success Guarantees

Presented with Portal Desktop home page.
Personal information.


Trigger

When any portal page is accessed and the user is not yet logged in.


  1. User enters the portal URL.
  2. If the customization parameter [remember login] is set, the user is logged in automatically and given a session ID.
  3. If first-time user, prompt for LDAP user ID and password.
  4. User enters previously assigned user ID and password.
  5. Information is passed to Access Manager for validation.
  6. If authentication passes, assign a session ID and continue.
  7. If authentication fails, display an error message and return the user to the login page; decrement the remaining attempts; if the number of failed attempts exceeds the preset limit, notify the user and lock the account.
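The main flow of this use case can be sketched in code (a hypothetical illustration only; validate_credentials stands in for the call to Access Manager, and the three-attempt limit is an assumed value):

```python
import uuid

MAX_ATTEMPTS = 3          # assumed lockout threshold, not a product default
failed_attempts = {}
locked_accounts = set()

def validate_credentials(user_id, password, directory):
    # Stand-in for the Access Manager validation step (step 5 above).
    return directory.get(user_id) == password

def login(user_id, password, directory):
    if user_id in locked_accounts:
        return {"status": "locked", "message": "Contact your administrator."}
    if validate_credentials(user_id, password, directory):
        failed_attempts.pop(user_id, None)          # reset retry counter
        return {"status": "ok", "session_id": uuid.uuid4().hex}  # step 6
    failed_attempts[user_id] = failed_attempts.get(user_id, 0) + 1  # step 7
    if failed_attempts[user_id] >= MAX_ATTEMPTS:
        locked_accounts.add(user_id)                # lock out the account
        return {"status": "locked", "message": "Contact your administrator."}
    return {"status": "retry",
            "remaining": MAX_ATTEMPTS - failed_attempts[user_id]}

ldap = {"jdoe": "secret"}   # hypothetical corporate LDAP entry
print(login("jdoe", "wrong", ldap)["status"])   # retry
print(login("jdoe", "secret", ldap)["status"])  # ok
```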

Designing Portal Security Strategies

Security is the set of hardware, software, practices, and technologies that protect a server and its users from malicious outsiders. In that regard, security protects against unexpected behavior.

You need to address security globally and include people and processes as well as products and technologies. Unfortunately, too many organizations rely solely on firewall technology as their only security strategy. These organizations do not realize that many attacks come from employees, not outsiders. Therefore, you need to consider additional tools and processes when creating a secure portal environment.

Operating Portal Server in a secure environment involves making certain changes to the Solaris™ Operating Environment, the Gateway and server configuration, the installation of firewalls, and user authentication through Directory Server and SSO through Access Manager. In addition, you can use certificates, SSL encryption, and group and domain access.

Securing the Operating Environment

Reduce the potential risk of security breaches in the operating environment by performing the following steps, often termed "system hardening":

Using Platform Security

Usually you install Portal Servers in a trusted network. However, even in this secure environment, security of these servers requires special attention.

UNIX User Installation

You can install and configure Portal Server to run under three different UNIX users:

Limiting Access Control

While the traditional UNIX security model is typically viewed as all-or-nothing, you can use alternative tools to provide some additional flexibility. These tools provide the mechanisms needed to create fine-grained access control to individual resources, such as different UNIX commands. For example, this toolset enables Portal Server to be run as root, while allowing certain users and roles superuser privileges to start, stop, and maintain the Portal Server framework.

These tools include:

Using a Demilitarized Zone (DMZ)

For maximum security, the Gateway is installed in the DMZ between two firewalls. The outermost firewall enables only SSL traffic from the Internet to the Gateways, which then direct traffic to servers on the internal network.

Portal Server and Access Manager on Different Nodes

Portal Server and Access Manager can be located on different nodes. This type of deployment provides the following advantages:


When Portal Server and Access Manager are on different nodes, the Access Manager SDK must reside on the same node as Portal Server. The web application and supporting authentication daemons can reside on a separate node from the Portal Server instance.

The Access Manager SDK consists of the following components:

Identity Management SDK–provides the framework to create and manage users, roles, groups, containers, organizations, organizational units, and sub-organizations.

Authentication API and SPI–provides remote access to the full capabilities of the Authentication Service.

Utility API–manages system resources.

Logging API and SPI–records, among other things, access approvals, access denials, and user activity.

Client Detection API–detects the type of client browser that is attempting to access its resources and responds with the appropriately formatted pages.

SSO API–provides interfaces for validating and managing session tokens, and for maintaining the user’s authentication credentials.

Policy API–evaluates and manages Access Manager policies and provides additional functionality for the Policy Service.

SAML API–exchanges acts of authentication, authorization decisions and attribute information.

Federation Management API–adds functionality based on the Liberty Alliance Project specifications.

Figure 5-6 illustrates Access Manager and Portal Server residing on separate nodes.

Figure 5-6  Portal Server and Access Manager on Different Nodes

This illustration shows Access Manager on one node and Portal Server on another node. The Access Manager SDK must reside on the Portal Server node when Access Manager is on a separate node.

As a result of this separation of Portal Server and Access Manager, other topology permutations are possible for portal services architecture deployments, as shown in the next three figures.

Figure 5-7 shows two Portal Server instances configured to work with a single Access Manager and two Directory Servers, where both the Access Manager and the Directory Servers operate in a Java Enterprise System Sun Cluster environment. This configuration is ideal when the Access Manager and Directory Server instances are not the bottleneck.

Figure 5-7  Two Portal Servers and One Access Manager

This illustration shows two Portal Servers behind a load balancer, connected to one Access Manager, which is connected to two Directory Servers.

Figure 5-9 shows a configuration for maximum horizontal scalability and higher availability, achieved by a horizontal server farm. Two Portal Servers can be fronted with a load balancer for maximum throughput and high availability.

Another load balancer can be put between the Portal Servers and the Access Managers to distribute the authentication and policy processing load and to provide a failover mechanism for higher availability.

In this scenario, Blade 1500s can be used to host the portal services and distribute the load; similar blades can host the Access Manager services and Directory services, respectively. With the architecture shown in Figure 5-9, a redundancy of services exists for each layer of the product stack; therefore, most unplanned downtime can be minimized or eliminated.

However, planned downtime is still an issue. If an upgrade or patch includes changes to the Directory Server software schema used by the Access Manager software, all of the software components must be stopped to update the schema information stored in the Directory Server. However, updating schema information is a fairly rare occurrence in most patch upgrades.

Figure 5-9  Two Portal Servers and Two Access Managers

This illustration shows a load balancer in front of two Portal Servers and a load balancer in front of two Access Managers, which are each connected to Directory Servers.

When two instances of Portal Server and Access Manager servers share the same LDAP directories, use the following workaround for all subsequent Portal Server, Access Manager, and Gateway installations:

  1. In /etc/opt/SUNWam/config/ums, modify the following entries in serverconfig.xml so that they are in sync with the first installed instance of the Portal Server and Access Manager servers:

     <DirDN>
     cn=puser,ou=DSAME Users,dc=sun,dc=net
     </DirDN>

     <DirDN>
     cn=dsameuser,ou=DSAME Users,dc=sun,dc=net
     </DirDN>

  2. Restart the amserver services.
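One way to verify that the DirDN entries of two serverconfig.xml files are in sync is to compare them programmatically (a sketch only; the enclosing element layout shown here is an assumption, and in practice you would read each node's serverconfig.xml from disk):

```python
import xml.etree.ElementTree as ET

def dir_dns(serverconfig_xml):
    """Extract the text of every <DirDN> element from a serverconfig.xml string."""
    root = ET.fromstring(serverconfig_xml)
    return sorted(e.text.strip() for e in root.iter("DirDN"))

# Hypothetical fragments standing in for each node's serverconfig.xml.
first = """<ServerConfig>
  <DirDN>cn=puser,ou=DSAME Users,dc=sun,dc=net</DirDN>
  <DirDN>cn=dsameuser,ou=DSAME Users,dc=sun,dc=net</DirDN>
</ServerConfig>"""
second = first  # in a real check, load the second node's file instead

print("in sync" if dir_dns(first) == dir_dns(second) else "OUT OF SYNC")
```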

Designing SRA Deployment Scenarios

The SRA Gateway provides the interface and security barrier between the remote user sessions originating from the Internet and your organization’s intranet. The Gateway serves two main functions:

For Internet access, use 128-bit SSL to provide the best security arrangement and encryption for communication between the user’s browser and Portal Server. The Gateway, Netlet, NetFile, Netlet Proxy, Rewriter Proxy, and Proxylet constitute the major components of SRA.

This section lists some of the possible configurations of these components. Choose the right configuration based on your business needs. This section is meant only as a guide, not a complete deployment reference.


To set up the authlessanonymous page to display through the Gateway, add /portal/dt to the non-authenticated URLs of the gateway profile. However, this means that even for normal users, portal pages do not require authentication, and no session validation is performed.

Basic SRA Configuration

Figure 5-10 shows the simplest possible configuration for SRA. The figure shows a client browser running NetFile and Netlet. The Gateway is installed on a separate machine in the DMZ between two firewalls. The Portal Server is located on a machine beyond the second firewall in the intranet. The other application hosts that the client accesses are also located beyond the second firewall in the intranet.

The Gateway is in the DMZ with the external port open in the firewall, through which the client browser communicates with the Gateway. Through the second firewall, for HTTP or HTTPS traffic, the Gateway can communicate directly with internal hosts. If security policies do not permit direct communication, use SRA proxies between the Gateway and the internal hosts. For Netlet traffic, the connection is direct from the Gateway to the destination host.

Without an SRA proxy, SSL traffic terminates at the Gateway, and traffic from the Gateway to the internal host is unencrypted (unless the internal host is running in HTTPS mode). Any internal host to which the Gateway has to initiate a Netlet connection must be directly accessible from the DMZ. This can be a potential security problem, and hence this configuration is recommended only for the simplest of installations.

Figure 5-10  Basic SRA Configuration

This figure shows a simple configuration, a client, gateway, portal server and host.

Disable Netlet

Figure 5-11 shows a scenario similar to the basic SRA configuration, except that Netlet is disabled. If the client deployment is not going to use Netlet for securely running applications that need to communicate with the intranet, use this setup for a performance improvement.

You can extend this configuration and combine it with other deployment scenarios to provide better performance and a scalable solution.

Figure 5-11  Disable Netlet

This figure shows SRA without Netlet


Proxylet

Figure 5-12 shows Proxylet, which enables users to securely access intranet resources through the Internet without exposing these resources to the client. Proxylet inherits the transport mode (either HTTP or HTTPS) from the Gateway.

Figure 5-12  Proxylet

This illustration shows Proxylet applet on the client with the gateway in the DMZ and the Portal Server and Host on the intranet.

Multiple Gateway Instances

Figure 5-13 shows an extension of the SRA basic configuration. Multiple Gateway instances run on the same machine or multiple machines. You can start multiple Gateway instances with different profiles. See Chapter 2, “Configuring the Gateway,” in the Portal Server Secure Remote Access 6 Administration Guide for details.

Figure 5-13  Multiple Gateway Instances

This figure shows SRA with multiple Gateway instances, and NetFile and Netlet on the client.


Although Figure 5-13 shows a one-to-one correspondence between the Gateways and the Portal Servers, this need not be the case in a real deployment. You can have multiple Gateway instances and multiple Portal Server instances, and any Gateway can contact any Portal Server, depending on the configuration.

The disadvantage to this configuration is that multiple ports need to be opened in the second firewall for each connection request. This could cause potential security problems.

Netlet and Rewriter Proxies

Figure 5-14 shows a configuration with a Netlet Proxy and a Rewriter Proxy on the intranet. With these proxies, only two open ports are necessary in the second firewall.

With the proxies in place, the Gateway need not contact the application hosts directly; instead, it forwards all Netlet traffic to the Netlet Proxy and all Rewriter traffic to the Rewriter Proxy. Because the Netlet Proxy is within the intranet, it can directly contact all the required application hosts without opening multiple ports in the second firewall.

The traffic between the Gateway in the DMZ and the Netlet Proxy is encrypted, and gets decrypted only at the Netlet Proxy, thereby enhancing security.

If the Rewriter Proxy is enabled, all traffic is directed through the Rewriter Proxy, irrespective of whether the request is for the Portal Server node or not. This ensures that the traffic from the Gateway in the DMZ to the intranet is always encrypted.

Because the Netlet Proxy, Rewriter Proxy, and Portal Server are all running on the same node, there might be performance issues in such a deployment scenario. This problem is overcome when the proxies are installed on separate nodes to reduce the load on the Portal Server node.

Figure 5-14  Netlet and Rewriter Proxies

This figure shows Portal Server with NetFile and Netlet on the clients, and Rewriter and Netlet Proxies on the Portal Servers

Netlet and Rewriter Proxies on Separate Nodes

To reduce the load on the Portal Server node and still provide the same level of security with increased performance, you can install the Netlet and Rewriter Proxies on separate nodes. This deployment has an added advantage: you can use a proxy to shield the Portal Server from the DMZ. The node that runs these proxies needs to be directly accessible from the DMZ.

Figure 5-15 shows the Netlet Proxy and Rewriter Proxy on separate nodes. Traffic from the Gateway is directed to the separate node, which in turn directs the traffic through the proxies and to the required intranet hosts.

You can have multiple instances or installations of Netlet and Rewriter Proxies. You can configure each Gateway to try to contact various instances of the proxies in a round robin manner depending on availability.

Figure 5-15  Proxies on Separate Nodes

This figure shows a Portal Server with Rewriter and Netlet Proxies on separate nodes

Using Two Gateways and Netlet Proxy

Figure 5-16 shows two Gateways behind a load balancer. Load balancers provide a failover mechanism and higher availability through redundancy of services on the Portal Servers and Access Managers.

Figure 5-16  Two Gateways and Netlet Proxy

This illustration shows two Gateways with a Load Balancer between them and connecting to a Netlet Proxy

Using an Accelerator

You can configure an external SSL device to run in front of the Gateway in open mode. It provides the SSL link between the client and SRA. For information on accelerators, see the Portal Server Secure Remote Access 6 Administration Guide.

Figure 5-17  SRA Gateway with External Accelerator

External Accelerator between the client and the Gateway

Netlet with a Third-Party Proxy

Figure 5-18 illustrates using a third-party proxy to limit the number of ports in the second firewall to one. You can configure the Gateway to use a third-party proxy to reach the Rewriter and the Netlet Proxies.

Figure 5-18  Netlet and Third-Party Proxy

This figure shows Netlet using a third-party proxy to limit the number of ports in the second firewall to one.

Reverse Proxy

A proxy server serves Internet content to the intranet, while a reverse proxy serves intranet content to the Internet. Certain reverse proxy deployments are also configured to serve Internet content, to achieve load balancing and caching.

Figure 5-19 illustrates how you can configure a reverse proxy in front of the Gateway to serve both Internet and intranet content to authorized users. Whenever the Gateway serves web content, it needs to ensure that all subsequent browser requests based on that content are routed through the Gateway. This is achieved by identifying all URLs in the content and rewriting them as appropriate.

Figure 5-19  Using a Reverse Proxy in Front of the Gateway

Reverse Proxy in front of the Gateway

Designing for Localization

Localization is the process of adapting text and cultural content to a specific audience. Localization can be approached in two different ways:

  1. Localization of the entire product into a language that is not provided. This is usually done by a professional services organization.
  2. Localization of the customizable parts of Portal Server. The parts that can be translated include:
    • Template and JSP files
    • Resource bundles
    • Display profile properties

For advanced language localization, create a well-defined directory structure for template directories.

To preserve the upgrade path, maintain custom content and code outside of default directories. See the Portal Server 6 Developer’s Guide for more information on localization.

Content and Design Implementation

The Portal Desktop provides the primary end-user interface for Portal Server and a mechanism for extensible content aggregation through the Provider Application Programming Interface (PAPI). The Portal Desktop includes a variety of providers that enable the container hierarchy and provide the basic building blocks for building some types of channels. For storing content provider and channel data, the Portal Desktop implements a display profile data storage mechanism on top of an Access Manager service.

The various techniques you can use for content aggregation include:

See the Portal Server 6 Developer’s Guide and Portal Server 6 Desktop Customization Guide for more information.

Placement of Static Portal Content

Place your static portal content in the web-container-install-root/SUNWam/public_html directory (the document root for the web container) or in a subdirectory under it. Do not place your content in the web-container-install-root/SUNWps/web-apps/https-server/portal/ directory, as this is a private directory. Any content there is subject to deletion when the Portal Server web application is redeployed during a patch or other update.
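As a sketch of the placement described above; the DOCROOT path below is a stand-in for your actual web-container-install-root/SUNWam/public_html directory:

```shell
# Stage custom static content under the web container document root.
# DOCROOT here defaults to a scratch path for illustration; on a real
# system it would be web-container-install-root/SUNWam/public_html.
DOCROOT="${DOCROOT:-/tmp/SUNWam/public_html}"
mkdir -p "$DOCROOT/custom"
cat > "$DOCROOT/custom/welcome.html" <<'EOF'
<html><body><h1>Welcome to the portal</h1></body></html>
EOF
echo "Staged: $DOCROOT/custom/welcome.html"
```

Content placed this way survives redeployment of the portal web application, because the document root is not touched when the web application is redeployed.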

Integration Design

This section provides information on integration areas that you need to account for in your low-level design.

Creating a Custom Access Manager Service

Service Management in Access Manager provides a mechanism for you to define, integrate, and manage groups of attributes as an Access Manager service. Readying a service for management involves:

  1. Creating an XML service file
  2. Configuring an LDIF file with any new object classes and importing both the XML service file and the new LDIF schema into Directory Server
  3. Registering multiple services to organizations or sub-organizations using the Access Manager administration console
  4. Managing and customizing the attributes (once registered) on a per organization basis

See the Access Manager documentation for more information.
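A minimal sketch of what such an XML service file might look like follows. The service name and attribute names here are hypothetical, and a real file must conform to Access Manager's sms.dtd:

```xml
<!-- Hypothetical service definition sketch, loosely following the sms.dtd layout. -->
<ServicesConfiguration>
  <Service name="SampleEmployeeService" version="1.0">
    <Schema serviceHierarchy="/other.configuration/SampleEmployeeService">
      <Organization>
        <AttributeSchema name="sample-employee-building"
                         type="single" syntax="string">
          <DefaultValues>
            <Value>Building 10</Value>
          </DefaultValues>
        </AttributeSchema>
      </Organization>
    </Schema>
  </Service>
</ServicesConfiguration>
```

Once a file like this is imported and the service is registered to an organization, the attribute can be managed per organization through the administration console.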

Integrating Applications

Integrating and deploying applications with Portal Server is one of your most important deployment tasks. The application types include:

Independent Software Vendors

Listed below are some types of independent software vendor (ISV) integrations.

The “depth” to which user interface integration occurs with Portal Server indicates how complete the integration is. Depth is a term used to describe the complementary nature of the integration, and points to such items as:

In general, the degree to which an application integrates in Portal Server can be viewed as follows:

Integrating Microsoft Exchange

Using the JavaMail™ API is one of the primary options for integrating the Microsoft Exchange messaging server with Portal Server. The JavaMail API provides a platform-independent and protocol-independent framework for building Java technology-based mail and messaging applications. The JavaMail API is implemented as a Java platform optional package and is also available as part of the Java™ 2 Platform, Enterprise Edition.

JavaMail provides a common, uniform API for managing mail. It enables service providers to present a standard interface to their standards-based or proprietary messaging systems using the Java programming language. Using this API, applications can access message stores, and compose and send messages.

Identity and Directory Structure Design

A major part of implementing your portal involves designing your directory information tree (DIT). The DIT organizes your users, organizations, and suborganizations into a logical or hierarchical structure that enables you to efficiently administer and assign appropriate access to users.

The top of the organization tree in Access Manager is called dc=fully-qualified-domain-name by default, but can be changed or specified at install time. Additional organizations can be created after installation to manage separate enterprises. All created organizations fall beneath the top-level organization. Within these suborganizations other suborganizations can be nested. The depth of the nested structure is not limited.
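As an illustration, a nested organization structure might be expressed in LDIF as follows. The names are hypothetical, and a real Access Manager deployment adds its own object classes to organization entries:

```ldif
# Hypothetical DIT fragment: suborganizations nested under the root suffix.
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example

dn: o=Engineering,dc=example,dc=com
objectClass: top
objectClass: organization
o: Engineering

# A suborganization nested within Engineering; nesting depth is not limited.
dn: o=Quality,o=Engineering,dc=example,dc=com
objectClass: top
objectClass: organization
o: Quality
```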


The top of the tree does not have to be called dc. Your organization can change this to fit its needs. However, when a tree is organized with a generic top, for example, dc, then organizations within the tree can share roles.

Roles are a grouping mechanism designed to be more efficient and easier to use for applications. Each role has members, or entries that possess the role. As with groups, you can specify role members either explicitly or dynamically.

The roles mechanism automatically generates the nsRole attribute containing the distinguished name (DN) of all role definitions in which the entry is a member. Each role contains a privilege or set of privileges that can be granted to a user or users. Multiple roles can be assigned to a single user.

The privileges for a role are defined in Access Control Instructions (ACIs). Portal Server includes several predefined roles. The Access Manager administration console enables you to edit a role’s ACI to assign access privileges within the Directory Information Tree. Built-in examples include SuperAdmin Role and TopLevelHelpDeskAdmin roles. You can create other roles that can be shared across organizations.
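For illustration, a managed role definition and a member entry might be expressed in Directory Server LDIF as follows. The DNs and role name are hypothetical:

```ldif
# Hypothetical managed role definition; uses standard Directory Server
# role object classes.
dn: cn=HelpDeskStaff,o=Engineering,dc=example,dc=com
objectClass: top
objectClass: ldapsubentry
objectClass: nsRoleDefinition
objectClass: nsSimpleRoleDefinition
objectClass: nsManagedRoleDefinition
cn: HelpDeskStaff

# A user becomes a member of a managed role through the nsRoleDN attribute;
# the server then computes the read-only nsRole attribute on the entry.
dn: uid=jdoe,ou=People,o=Engineering,dc=example,dc=com
nsRoleDN: cn=HelpDeskStaff,o=Engineering,dc=example,dc=com
```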

See the Portal Server 6 Administration Guide, Directory Server Deployment Guide, and the Access Manager Deployment Guide for more information on planning your Access Manager and Directory Server structure.

Implementing Single Sign-On

Single sign-on (SSO) to Portal Server is managed by Access Manager. SSO provides a user with the ability to use any application that has its access policy managed by Access Manager, if allowed through the policy. The user need not re-authenticate to that application.

Various SSO scenarios include:

Portal Desktop Design

The performance of Portal Server itself largely depends upon how fast individual channels perform. In addition, the user experience of the portal is based upon the speed with which the Portal Desktop is displayed. The Portal Desktop can only load as fast as the slowest displayed channel. For example, consider a Portal Desktop composed of ten channels. If nine channels are rendered in one millisecond but the tenth takes three seconds, the Portal Desktop does not appear until that tenth channel is processed by the portal. By making sure that each channel can process a request in the shortest possible time, you provide a better performing Portal Desktop.

Choosing and Implementing the Correct Aggregation Strategy

The options for implementing portal channels for speed and scalability include:

Working with Providers

Consider the following when planning to deploy providers:

Client Support

Portal Server supports the following browsers as clients:

See the Portal Server 6 Release Notes for updates to this list.

Multiple client types, whether based on HTML, WML, or other protocols, can access Access Manager and hence Portal Server. For this functionality to work, Access Manager uses the Client Detection service (client detection API) to detect the client type that is accessing the portal. The client type is then used to select the portal template and JSP files and the character encoding that is used for output.


Currently, Access Manager defines client data only for supported HTML client browsers, including Internet Explorer and Netscape Communicator. See the Access Manager documentation for more information.

Sun Java System Portal Server Mobile Access 6.3 software extends the services and capabilities of the Portal Server platform to mobile devices and provides a framework for voice access. The software enables portal site users to obtain the same content that they access using HTML browsers.

Mobile Access software supports mobile markup languages, including XHTML, cHTML, HDML, HTML, and WML. It can support any mobile device that is connected to a wireless network through a LAN or WAN using either the HTTP or HTTPS protocol. In fact, the Portal Server Mobile Access software can support any number of clients, including automobiles, set-top boxes, PDAs, cellular phones, and voice interfaces.


Part No: 819-4155.   Copyright 2005 Sun Microsystems, Inc. All rights reserved.