Sun ONE Portal Server 6.0 Deployment Guide
Chapter 7 Creating Your Portal Design
This chapter describes how to create your high-level and low-level portal design and provides information on creating specific sections of your design plan.
This chapter contains the following sections:
- Portal Design Approach
- Understanding the Goals of Portal High-Level Design
- Designing Portal SHARP Services
- Working with Portal Server Building Modules
- Designing Portal Use Case Scenarios
- Designing Portal Security Strategies
- Designing Secure Remote Access Deployment Scenarios
- Designing for Localization
- Specifying the Low-level Architecture Structure
Portal Design Approach
At this point in the Portal Server deployment process, you've identified your business and technical requirements, and communicated these requirements to the stakeholders for their approval. Now you are ready to begin the design phase, in which you develop your high- and low-level designs.
Your high-level portal design communicates the architecture of the system and provides the basis for the low-level design of your solution. Further, the high-level design needs to describe a logical architecture that meets the business and technical needs that you previously established. The logical architecture is broken down according to the various applications that comprise the system as a whole and the way in which users interact with it. In general, the logical architecture includes Portal Server, Secure Remote Access, high availability, security (including Identity Server), and Directory Server architectural components. See "Logical Portal Architecture" for more information.
The high- and low-level designs also need to account for any factors beyond the control of the portal, including your network, hardware failures, and improper channel design.
Once developed, the high-level design leads toward the creation of the low-level design. The low-level design specifies such items as the physical architecture, network infrastructure, Desktop channel and container design, and the actual hardware and software components. After you have completed the high- and low-level designs, you can begin a trial deployment for testing within your organization.
Overview of High-Level Portal Design
The high-level design is your first iteration of an architecture approach to support both the business and technical requirements. The high-level design addresses questions such as:
- Does the proposed architecture support both the business and technical requirements?
- Can any modifications strengthen this design?
- Are there alternative architectures that might accomplish this?
- What is the physical layout of the system?
- What is the mapping of various components and connectivity?
- What is the logical definition describing the different categories of users and the systems and applications they have access to?
- Does the design account for adding more hardware to the system as required by the increase in web traffic over time?
Overview of Low-Level Portal Design
The low-level design focuses on specifying the processes and standards you use to build your portal solution, and specifying the actual hardware and software components of the solution, including:
- The Sun ONE Portal Server complex of servers.
- Network connectivity, describing how the portal complex attaches to the "outside world." Within this topic, you need to take into account security issues, protocols, speeds, and connections to other applications or remote sites.
- Information architecture, including user interfaces, content presentation and organization, data sources, and feeds.
- Identity architecture, including the strategy and design of organizations, suborganizations, roles, groups, and users, which is critical to long-term success.
- Integration strategy, including how the portal acts as an integration point for consolidating and integrating various information, and bringing people together in new ways.
The low-level design is described in more detail in later portions of this chapter.
Logical Portal Architecture
Your logical portal architecture defines all the components that make up the portal, including (but not limited to) the following:
- Portal Server itself
- Contents from RDBMs
- Third-party content providers
- Custom developed providers and content
- Integration with messaging and calendaring systems
- Integration with web servers
- Whether the portal runs in open or secure mode (requires Secure Remote Access)
- Usage estimates, which include your assumptions on the total number of registered users, average percentage of registered users logged in per day, average concurrent users that are logged in per day, average login time, average number of content channels that a logged in user has selected, and average number of application channels that a logged in user has selected.
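The usage estimates in the last item drive your sizing work. As a quick illustration (with purely hypothetical figures), the average concurrent user load can be derived from the registered user base, the daily login rate, and the average session length:

```python
# Illustrative sizing arithmetic for the usage estimates above.
# All input figures are hypothetical -- substitute your own estimates.

registered_users = 50_000        # total registered users
pct_login_per_day = 0.20         # average share of registered users logging in per day
avg_session_minutes = 20         # average login (session) time
business_day_minutes = 8 * 60    # window over which logins are spread

daily_logins = registered_users * pct_login_per_day
# Rough average concurrency: total session-minutes spread over the day
avg_concurrent = daily_logins * avg_session_minutes / business_day_minutes

print(f"Logins per day:           {daily_logins:.0f}")
print(f"Average concurrent users: {avg_concurrent:.0f}")
```

Peak concurrency is typically several times this average, so treat the result as a floor, not a target.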
Additionally, you need to consider how the following three network zones fit into your design:
- Internet - The public Internet is any network outside of the intranet and DMZ. Users securely access the gateway and Portal Server from here.
- Demilitarized Zone (DMZ) - A secure area between two firewalls, enabling access to internal resources while limiting potential for unauthorized entry. The gateway resides here where it can securely direct traffic from the application and content servers to the Internet.
- Intranet - Contains all resource servers. This includes intranet applications, web content servers, and application servers. The Portal Server and Directory Server reside here.
The logical architecture describes the Desktop look and feel, including potential items such as:
- Default page, with its default banner, logo, channels, and so forth; total page weight, that is, the total number of bytes of all the components of the page, including HTML, style sheet, JavaScript, and image files; and total number of HTTP requests for the page, that is, how many HTTP requests are required to completely download the page.
- Personalized pages, with channels that users can conceivably display, what preferences are available, and so forth.
The logical architecture is where you also develop a caching strategy, if your site requires one. If the pages returned to your users contain references to large numbers of images, Portal Server can deliver these images for all users. However, if these types of requests can be offloaded to a reverse proxy type of caching appliance, you can free up system resources so that Portal Server can service additional users. Additionally, by placing a caching appliance closer to end users, these images can be delivered more quickly, enhancing the overall end user experience.
Understanding the Goals of Portal High-Level Design
Table 7-1 provides a series of goals for developing your portal high-level design. Prioritize these goals according to your organization's own requirements. This two-column table lists the goal in the first column and the description in the second column.
Designing Portal SHARP Services
A major focus of your portal design concerns SHARP (Scalability, High Availability, Reliability, and Performance) services. SHARP services provide horizontal (for example, the directory's replication mechanisms) and vertical (for example, multiple instance support) scaling.
In the portal design phase, you need to consider high availability, distributed multi-site functionality, high performance, and other requirements that will impact or stress the architecture. Also consider any known technical or business requirements that are deemed a high-risk. Then address these high risks in the design phase, at least in conceptual or strategic terms.
Portal Server and Scalability
Scalability is a system's ability to accommodate a growing user population, without performance degradation, by the addition of processing resources. The subject of this section is the application of scaling techniques to the Portal Server product.
Benefits of scalable systems include:
- Improved response time
- Fault tolerance
- Manageability
- Expandability
- Simplified application development
The two general means of scaling a system are vertical and horizontal scaling. In vertical scaling, CPUs, memory, or other resources are added to one machine, enabling more process instances to run simultaneously. In Portal Server, you take advantage of this by planning and sizing the number of CPUs you will need. See Chapter 5 "Sizing Your Portal" for more information.
In horizontal scaling, machines are added. This also enables multiple simultaneous processing, and distributed work load. In Portal Server, you make use of horizontal scaling because you run the Portal Server software on one machine, the Directory Server software on another, and so forth. Horizontal scaling can also make use of vertical scaling, by adding more CPUs, for example.
Additionally, you can scale a Portal Server installation horizontally by installing server component instances on multiple machines. Each installed server component instance executes an HTTP process, which listens on a TCP/IP port whose number is determined at installation time. Gateway components use a round-robin algorithm to assign new session requests to server instances. While a session is established, an HTTP cookie stored on the client indicates the session server. All subsequent requests go to that server.
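The round-robin assignment and cookie-based session stickiness described above can be sketched as follows. The server names and cookie key are illustrative placeholders, not actual product values:

```python
# Sketch of the gateway behavior described above: round-robin assignment of
# new sessions, with an HTTP-cookie-style record pinning subsequent requests.

from itertools import cycle

servers = ["portal1.example.com:8080", "portal2.example.com:8080"]
round_robin = cycle(servers)

def route(request_cookies):
    """Return the server for this request, setting stickiness on first contact."""
    if "session-server" in request_cookies:        # established session: stay put
        return request_cookies["session-server"]
    server = next(round_robin)                     # new session: round-robin pick
    request_cookies["session-server"] = server     # gateway sets the cookie
    return server

c1, c2 = {}, {}
first = route(c1)    # new session, assigned by round robin
second = route(c2)   # next new session goes to the other instance
again = route(c1)    # sticky: same server as the first request
```

The key property is that only the first request of a session is load-balanced; everything afterward follows the cookie.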
The section "Working with Portal Server Building Modules" discusses an approach to a specific type of configuration that provides optimum performance and horizontal scalability.
Portal Server and High Availability
High Availability ensures your portal platform is accessible 24 hours a day, seven days a week. Today, organizations require that data and applications always be available. High availability has become a requirement that applies not only to mission-critical applications, but also to the whole IT infrastructure.
System availability is affected not only by computer hardware and software, but also by people and processes, which can account for up to 80 percent of system downtime. Availability can be improved through a systematic approach to system management and by using industry best practices to minimize the impact of human error.
One important issue to consider is that not all systems have the same level of availability requirements. Most applications can be categorized into the following three groups:
- Task critical - Affects limited number of users; not visible to customers; small impact on costs and profits
- Business critical - Affects significant number of users; might be visible to some customers; significant impact on costs and profits
- Mission critical - Affects a large number of users; visible to customers; major impact on costs and profits
The goals of these levels are to:
- Improve processes by reducing human error, automating procedures, and reducing planned downtime
- Improve hardware and software availability by eliminating single-point-of-failure configurations and balancing processing load
The more mission critical the application, the more you need to focus on availability to eliminate any single point of failure (SPOF), and resolve the people and processes issues.
Even if a system is always available, instances of failure recovery might not be transparent to end users. Depending on the kind of failure, users can lose the context of their portal application, and might have to log in again to get access to their Desktop.
System Availability
System availability is often expressed as a percentage of the system uptime. A basic equation to calculate system availability is:
Availability = uptime / (uptime + downtime) * 100
For instance, a service level agreement uptime of four nines (99.99 percent) means that in a month the system can be unavailable for only about four minutes. System downtime is the total time the system is not available for use. This total includes not only unplanned downtime, such as hardware failures and network outages, but also planned downtime: preventive maintenance, software upgrades, patches, and so on.
If the system is supposed to be available 7x24 (seven days a week, 24 hours a day), the architecture needs to include redundancy to avoid planned and unplanned downtime to ensure high availability.
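The availability equation can be used to sanity-check an SLA figure. The following sketch computes the downtime budget that a 99.99 percent ("four nines") monthly SLA allows:

```python
# Applying the availability equation above to a 30-day month.

def availability(uptime, downtime):
    """Availability as a percentage, per the equation in this section."""
    return uptime / (uptime + downtime) * 100

month_hours = 30 * 24                            # 720 hours in a 30-day month
allowed_downtime = month_hours * (1 - 0.9999)    # hours of downtime at 99.99%

print(f"Allowed downtime: {allowed_downtime * 60:.1f} minutes per month")
print(f"Check: {availability(month_hours - allowed_downtime, allowed_downtime):.2f}%")
```

Note how quickly the budget shrinks: 99 percent allows about seven hours per month, while 99.99 percent allows only about four minutes, which is why planned downtime must also fit inside the budget.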
Degrees of High Availability
High availability is not just a switch that you can turn on and off. There are various degrees of high availability, which refer to the system's ability to recover from failures and to the ways of measuring system availability. The degree of high availability you need depends on your organization's fault tolerance requirements and on how you measure availability.
For example, your organization might tolerate the need to reauthenticate after a system failure, so that a request resulting in a redirection to another login screen would be considered successful. For other organizations, this might be considered a failure, even though the service is still being provided by the system.
Session failover alone is not the ultimate answer to transparent failover, because the context of a particular portal application can be lost after a failover. For example, consider a user who is composing a message in NetMail Lite and has attached several documents to the email when the server fails. The user is redirected to another server, and NetMail Lite loses the user's session and the draft message. Other stateful providers, which store contextual data in the current JVM, have the same problem.
Achieving High Availability for Portal Server Components
Making Portal Server highly available involves ensuring high availability on each of the following components:
- Gateway - A load balancer used with the gateway detects a failed gateway component and routes new requests to other gateways. Routing is restored when the failed gateway recovers. Gateway components are stateless (session information is stored on the client in an HTTP cookie) so rerouting around a failed gateway is transparent to users.
- Server - In open mode, you can use a load balancer to detect a failed server component and redirect requests to other servers. A load balancer also has the ability to intelligently distribute the workload across the server pool. In secure mode, gateway components can detect the presence of a failed server component and redirect requests to other servers. (This is valid as long as the web server is Sun ONE Web Server.)
- Directory Server - A number of options make the LDAP directory highly available. See "Building Modules and High Availability Scenarios" for more information.
- Netlet proxy - In the case of a software crash, a watchdog process automatically restarts the Netlet proxy. In addition, the gateway performs load balancing for the Netlet proxy, and supports failure detection and failover for Netlet proxies.
Portal Server System Components
Figure 7-1 shows the processes and communication links of a Portal Server system that are critical to the availability of the solution.
Figure 7-1    Portal Server System Components
In this figure, the box encloses the Portal Server instance running on Sun ONE Web Server. Within the instance are the four servlets (Sun ONE Identity Server administration console, the Authentication service, the Desktop service, and the NetMail service), and the three SDKs (Identity Server SSO, Identity Server Logging, and Identity Server Management). The Authentication service servlet also makes use of an LDAP service provider module. Either a user, by using a browser and the HTTP protocol, or the gateway, by using the HTTPS protocol, communicates with Portal Server. This traffic is directed to the appropriate servlet. Communication occurs between the Authentication service's LDAP module and the LDAP authentication server; between the NetMail servlet and the SMTP/IMAP messaging server; between the Identity Server SSO SDK and the LDAP server; and between the Identity Server Management SDK and the LDAP server.
Figure 7-1 shows that if the following processes or communication links fail, the portal solution becomes unavailable to end users:
- Portal Server instance - Runs in the context of a web container (either Sun ONE Web Server or certain application servers). Components within an instance communicate through the JVM using Java APIs. An instance is identified by a fully qualified domain name and a TCP port number. Portal Server services are web applications that are implemented as servlets or JSP files.
Portal Server is built on top of Identity Server for authentication, Single Sign-on (session) management, policy, and profile database access. Thus, Portal Server inherits all the benefits (and constraints) of Identity Server with respect to availability and fault tolerance.
By design, Identity Server's services are either stateless or they can share context data so that they can recover to the previous state in case of a service failure.
Within Portal Server, the Desktop and NetMail services do not share state data among instances. This means that an instance redirect causes the user context to be rebuilt for the enabled services. Usually, redirected users do not notice this because Portal Server services can rebuild a user context from the user's profile and from contextual data stored in the request. While this statement is generally true for out-of-the-box services, it might not be true for channels or custom code. Developers need to avoid designing stateful channels, so that context is not lost upon instance failover.
- Profile database server - The profile database server is implemented by Sun ONE Directory Server. Although this server is not strictly part of core Portal Server, availability of the server and integrity of the database are fundamental to the availability of the system. Indeed, the major focus of this chapter is on making the directory highly available.
- Authentication server - This is the directory server for LDAP authentication (usually, the same server as the profile database server). You can apply the same high availability techniques to this server as for the profile database server.
- Secure Remote Access gateway and proxies - The Secure Remote Access gateway is a standalone Java process that can be considered stateless, because state information can be rebuilt transparently to end users. The Secure Remote Access profile maintains a list of Portal Server instances and does round robin load balancing across those instances. Session stickiness is not required in front of a gateway, although it is recommended for performance reasons. On the other hand, session stickiness to Portal Server instances is enforced by Secure Remote Access.
Secure Remote Access includes other Java processes called Netlet proxy and Rewriter proxy. You use these proxies to extend the security perimeter from behind the firewall, and limit the number of holes in the DMZ. The Rewriter proxy always sits on the Portal Server node. You can install the Netlet proxy on a different node.
Working with Portal Server Building Modules
Because deploying Portal Server is a complex process involving many other systems, this section describes a specific configuration that provides optimum performance and horizontal scalability. This configuration is known as a Sun ONE Portal Server building module.
A Portal Server building module is a hardware and software construct with limited or no dependencies on shared services. A typical deployment uses multiple building modules to achieve optimum performance and horizontal scalability.
Understanding Building Modules
In general, vertical or horizontal scalability solutions are used when planning a deployment. For Portal Server, use the building module horizontal scalability solution that is introduced in this section.
The building module contains multiple Portal Server instances, a search engine database, and a Sun ONE Directory Server. Depending on your type of deployment, the Directory Server can be a supplier (or master) or a consumer (also referred to as a replica).
Figure 7-2 shows the building module architecture. This figure shows the building module components, including Portal Server instances, Directory Server, and the Search Engine database.
Figure 7-2    Portal Building Module Architecture
Building Modules and High Availability Scenarios
The following outlines the three high availability scenarios for Sun ONE Portal Server Release 6.0:
- Best effort - The system is available as long as the hardware does not fail and as long as the Portal Server processes can be restarted by the watchdog process.
- No single point of failure - The use of hardware and software replication creates a deployment with no single point of failure (NSPOF). The system is always available, as long as no more than one failure occurs consecutively anywhere in the chain of components. However, in the case of failures, user sessions are lost.
- Transparent failover - The system is always available but in addition to NSPOF, failover to a backup instance occurs transparently to end users. In most cases, users do not even notice they have been redirected to a different node or instance. Sessions are preserved across nodes so that users do not have to reauthenticate. Portal Server services are stateless or use checkpointing mechanisms to rebuild the current execution context up to a certain point.
Possible supported architectures include the following:
- Single node, single Portal Server instance
- Multi-node, multi-instance configurations
- Using Sun Cluster software
- Multi-master Directory Server techniques
This section explains how to implement these architectures, leveraging the building module concept from a high-availability standpoint.
Table 7-2 summarizes these high availability scenarios along with their supporting techniques. In this table, the first column lists the component requirements. The second column defines if this component is necessary for a best effort deployment. The third column defines if this component is necessary for a no single point of failure (NSPOF) deployment. The fourth column defines if this component is necessary for a transparent failover deployment.
Note: Load balancing is not provided out-of-the-box with the Sun ONE Web Server web application container.
Portal Best Effort Scenario
In this scenario, you install Portal Server and Directory Server on a single node that has a secured hardware configuration for continuous availability, such as Sun Fire UltraSPARC® III machines. (Securing a Solaris Operating Environment system requires that changes be made to its default configuration.)
This type of server features full hardware redundancy, including: redundant power supplies, fans, and system controllers; dynamic reconfiguration; CPU hot-plug; online upgrades; and disk racks that can be configured in RAID 0+1 (striping plus mirroring) or RAID 5 using a volume management system, which prevents loss of data in case of a disk crash. Mean time between failure (MTBF) on such servers is high, and they usually run for several months without interruption.
Portal Server is a reliable product provided it is not operated beyond its maximum capacity. In case of a software crash, the web application container is restarted automatically by a watchdog process.
Figure 7-3 shows a small, best effort deployment using the building module architecture.
Figure 7-3    Portal Best Effort Scenario
In this scenario, four CPUs with eight GB of RAM (4x8) is sufficient for one building module. A load balancer is responsible for detecting Portal Server failures and redirecting users' requests to another portal instance. The Identity Server console is outside of the building module so that it can be shared with other resources. (Your actual sizing calculations might result in a different allocation amount.)
This scenario might suffice for task critical requirements. Its major weakness is that a maintenance action necessitating a system shutdown results in service interruption.
Best Effort Scenario and Secure Remote Access
In the case of a software crash, a watchdog process automatically restarts the Secure Remote Access gateway and Netlet proxy. The Rewriter proxy, which runs on the Portal Server node, does not have a watchdog process. For that reason, this scenario is not recommended for the Rewriter proxy.
No Single Point of Failure Scenario
Portal Server natively supports the no single point of failure (NSPOF) scenario. NSPOF is built on top of the Best Effort scenario, and in addition, introduces replication and load balancing. Figure 7-4 shows an NSPOF scenario.
Figure 7-4    No Single Point of Failure Scenario
In Figure 7-4, two Portal Server building modules are implemented. As stated earlier, a building module consists of a directory consumer (here, a replica) for profile reads and a Portal Server instance. As such, at least two building modules are necessary to achieve NSPOF, thereby providing a backup if one of the building modules fails. These building modules consist of eight CPUs by 16 GB RAM (8x16). In addition, the web application container for the Portal Server is Sun ONE Web Server.
Load balancing is responsible for detecting Portal Server failures and redirecting users' requests to a backup building module. Accuracy of failure detection varies among load balancing products. Some products are capable of checking the availability of a system by probing a service involving several functional areas of the server, such as the servlet engine, the JVM, and so on. In particular, most vendor solutions from Resonate, Cisco, Alteon, and others enable you to create arbitrary scripts for server availability. As the load balancer is not part of the Portal Server software, you must acquire it separately from a third-party vendor.
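Such an availability script might, for example, request a page that exercises the servlet engine rather than merely opening a TCP connection. The following is a minimal sketch of that idea; the probe URL, timeout, and success criterion are assumptions you would adapt to your load balancer:

```python
# Minimal "deep" availability probe: fetch a page served by the servlet
# engine and treat an HTTP 200 response as healthy. Anything else -- a
# connection failure, a timeout, or a non-200 status -- marks the instance down.

from urllib.request import urlopen

def portal_alive(url, timeout=5):
    """Return True if the portal instance answers the probe URL with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:          # covers connection refused, DNS failure, timeout
        return False
```

A load balancer would run this probe periodically against each instance and remove instances for which it returns false.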
Redundancy is equally important to the directory master so that profile changes through the administration console or the Desktop, along with consumer replication across building modules, can always be maintained. Portal Server comes with Sun ONE Directory Server, which supports multi-master replication. The NSPOF scenario uses a multi-master configuration. In this configuration, two suppliers can accept updates, synchronize with each other, and update all consumers. The consumers can refer update requests to both masters.
The NSPOF architecture presents several advantages:
- If the node supporting the building module crashes, all user requests are redirected to the backup building module by the load balancer.
- If a Portal Server instance crashes, the load balancer detects the failure and redirects all user requests to the backup building module. The directory replica on the failed module survives but is no longer used.
- If a directory replica crashes, an alternate communication link handled by the Identity Server SDK is used to switch over all LDAP requests to a backup replica.
- Because the supplier-to-consumer replication is maintained by two masters, failure of one of the masters does not interrupt the system. The LDAP SDK used by Identity Server simply switches over LDAP writes to the backup master following the LDAP referrals returned by the current replica. Profile changes initiated from the administration console or the Desktop can then be honored by the backup master.
- With hardware redundancy, a failure in a RAID disk array, CPU, or memory on the master side is recovered transparently, though sometimes with performance degradation.
- Because the transaction log is secured by a RAID system, a master restart always updates the database from the transaction log after the crash, and reinitiates the multi-master replication automatically. For this to work, you need to enable Directory Server Durable Transactions.
- You can deploy more building module pairs sharing the same multi-master infrastructure to cope with scalability requirements; the principle of the architecture does not change.
Configuration and Implementation Notes
In Sun ONE Directory Server 5.1, the smallest unit of replication is a database. This means that you can replicate an entire database, but not a sub-tree within a database. Therefore, when you create your DIT, you must take your replication plans into consideration. In the case of a large user base, you can split up your DIT across organization boundaries under the Identity Server's root entry (default o=isp). See the Sun ONE Directory Server Deployment Guide and the Sun ONE Directory Server Administrator's Guide for information on setting up your directory tree and replication.
The Identity Server SDK LDAP communication links are defined in the serverconfig.xml configuration file. To enable a backup communication link to an alternate replica, modify the <iPlanetDataAccessLayer> clause accordingly:
<Server name="Server1" host="my.sesta1.com" port="389" type="SIMPLE" />
<Server name="Server2" host="my.sesta2.com" port="389" type="SIMPLE" />
Enabling a backup communication link to an alternate directory server for LDAP authentication follows the same principle. You configure it within the Identity Server LDAP module of the authentication service by using the administration console.
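Conceptually, the two <Server> entries give the SDK an ordered list of replicas to try. The following sketch illustrates the try-primary-then-fall-back idea only; the actual Identity Server SDK logic is internal to the product, and the connect function here is a placeholder:

```python
# Conceptual failover across the replica list defined in serverconfig.xml:
# try each server in order, returning the first successful result.

servers = [("my.sesta1.com", 389), ("my.sesta2.com", 389)]

def ldap_search(connect, query):
    """Run `query` against the first replica that accepts a connection.

    `connect(host, port)` must return a callable that executes the query,
    and raise OSError when the replica is unreachable.
    """
    last_error = None
    for host, port in servers:
        try:
            conn = connect(host, port)     # raises OSError if the replica is down
            return conn(query)
        except OSError as err:
            last_error = err               # remember the failure, try the next one
    raise ConnectionError("all replicas unavailable") from last_error
```

In the healthy case only Server1 is ever contacted; Server2 is used solely when the primary link fails.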
No Single Point of Failure Scenario and Secure Remote Access
Secure Remote Access follows the same replication and load balancing pattern as the core portal to achieve no single point of failure (NSPOF). As such, two Secure Remote Access gateways and a pair of proxies are necessary in this scenario. The Secure Remote Access gateway detects a Portal Server instance failure when the instance does not respond to a request after a certain time-out value. When an instance is decommissioned, the Secure Remote Access gateway enables users to reauthenticate against a backup instance, and performs a periodic check for availability until the decommissioned instance is up again.
The NSPOF high availability scenario is suitable to business critical deployments. However, some high availability limitations in this scenario might not fulfill the requirements of a mission critical deployment. Those limitations are:
- User sessions are lost after an instance crash, which means users have to reauthenticate.
- Applications cannot use persistent sessions to store checkpointing data.
- There is a risk of database inconsistency following a directory master crash for an undetermined period of time. Consider the following situation. An administrator changes a role or an organization profile, and then the master that handles the request crashes right after the change is committed to the transaction log, but before it is propagated to the other master. The LDAP write returns okay to the client, yet the change remains unavailable for as long as the crashed master is not restarted. This kind of inconsistency can be problematic depending on the nature of the change, given that the operation did not return an error.
Transparent Failover Scenario
The transparent failover scenario is built on top of the no single point of failure (NSPOF) scenario and goes one step further in high availability features. Figure 7-5 shows a transparent failover scenario. Two building modules are shown, consisting of eight CPUs by 16 GB RAM (8x16). Load balancing is responsible for detecting Portal Server failures and redirecting users' requests to a backup building module.
Figure 7-5    Transparent Failover Scenario
In this figure, transparent failover uses the same replication model as the NSPOF scenario but provides additional high availability features, which make the failover to a backup server transparent to end users. Those high availability features depicted in the figure include:
- Session failover - Identity Server supports session failover and hence Portal Server does as well, on application servers that support HttpSession failover. (Portal Server does not support HttpSession failover for Sun ONE Web Server.) In Figure 7-5, the session repository is part of the application server software, and Portal Server is running in a web container that is part of the application server. See Appendix C "Sun ONE Portal Server and Application Servers" for more information.
With session failover, users do not need to reauthenticate after a crash. In addition, portal applications can rely on session persistence to store context data used by the checkpointing. You configure session failover in the AMConfig.properties file by setting the com.iplanet.am.session.failover.enabled property to true.
This figure shows that if a failure were to occur in Building Module 1, the sessions it had stored in the session repository would be retrieved by Building Module 2.
- Clustering of the master directory server - Clustering the master directory server with Sun Cluster 3.x or another clustering product (for example, Veritas Cluster) is necessary to prevent the profile database inconsistency issue described in the NSPOF scenario.
This figure shows that the availability scenarios build on each other. However, the second Sun ONE Directory Server cluster and the multi-master configuration depicted in this figure are optional, because transparent failover can be handled by a single Sun ONE Directory Server cluster. The configuration shown in the figure illustrates transparent failover of the directory along with database distribution and scalability.
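The session failover switch described above is a one-line change to AMConfig.properties. The following sketch applies it to a local stand-in file; the property name comes from the text, but the real file's location depends on your Identity Server installation, so the path here is illustrative:

```shell
# Enable Identity Server session failover. The path below is a local
# stand-in; adjust it to your installation's AMConfig.properties.
CONFIG=./AMConfig.properties

# Example file standing in for the real configuration:
cat > "$CONFIG" <<'EOF'
com.iplanet.am.session.failover.enabled=false
EOF

# Flip the property to true, leaving any other settings untouched.
sed 's/^com\.iplanet\.am\.session\.failover\.enabled=.*/com.iplanet.am.session.failover.enabled=true/' \
    "$CONFIG" > "$CONFIG.new" && mv "$CONFIG.new" "$CONFIG"

grep 'failover' "$CONFIG"
```

After the change, restart the Portal Server instances so that the new setting takes effect.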
Transparent Failover Scenario and Secure Remote Access
The Netlet proxy cannot support the transparent failover scenario because of a limitation of the TCP protocol. The Netlet proxy tunnels TCP connections, and an open TCP connection cannot be migrated to another server. A Netlet proxy crash drops all outstanding connections, which end users must then reestablish.
With the introduction of Sun Cluster 3.x, the high availability architecture might require two additional asymmetric standby master servers that share the profile database through a dual-ported disk array.
Building Module Constraints
Building modules scale near-linearly up to an arbitrary number. Scalability of building modules is constrained by the number of LDAP writes resulting from profile updates and by the maximum size of the LDAP database.
Because any LDAP write issued against a replica is redirected to the master (through an LDAP referral) and then propagated to all other replicas, the number of write operations remains a scalability constraint. However, the number of concurrent users performing profile updates at any point in time tends to be small.
Because performance tuning of a portal deployment involves moving the LDAP database (db) files to the /tmp directory, which Solaris maps into RAM when possible, the total size of the db files must not be larger than available RAM. Here, available RAM means physical RAM minus the memory required by the JVM and LDAP processes.
Note If the LDAP server machine crashes or restarts with the db files in the /tmp directory, those files will most likely be gone when the server comes back up. Placing them in /tmp improves performance but affects availability.
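The sizing constraint above can be expressed as a quick shell calculation; all figures below are illustrative, not recommendations for any particular deployment:

```shell
# Rough check that the Directory Server db files fit in RAM when moved
# to /tmp (tmpfs). All figures are illustrative, in megabytes.
PHYS_RAM_MB=16384      # physical RAM in the building module
JVM_MB=2048            # memory reserved by the JVM processes
LDAP_PROC_MB=1024      # memory used by the LDAP server processes
DB_FILES_MB=8192       # total size of the LDAP db files

AVAILABLE_MB=$((PHYS_RAM_MB - JVM_MB - LDAP_PROC_MB))
if [ "$DB_FILES_MB" -le "$AVAILABLE_MB" ]; then
    echo "fits: ${DB_FILES_MB} MB <= ${AVAILABLE_MB} MB available"
else
    echo "does not fit: reduce the database size or add RAM"
fi
```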
If the analysis at your specific site indicates that the number of LDAP write operations is indeed a constraint, some of the possible solutions include creating building modules that replicate only a specific branch of the directory and a layer in front that directs incoming requests to the appropriate instance of portal.
Baseline Portal Performance Analysis
In many cases, you can trace portal performance problems to infrastructure issues that are independent of Portal Server. Given that the sizing tool and the building module concept both predict a specific performance level for a given configuration, being able to match the published performance levels adds a great deal of confidence to the success of a portal deployment.
In general, this level of analysis, with a simple out-of-the-box portal, uncovers the types of issues that are almost impossible to resolve at a later stage in the deployment. Therefore, as part of the portal best practices for deployment, run a baseline performance analysis that validates the published data at the earliest possible stage of your portal project.
Trial Project Performance Analysis
Several tools can be used to size a portal deployment, but ultimately each organization is different and has its own constraints. Custom code, LDAP issues, the amount of display profile merging: all these issues can cause performance to deviate from published figures.
In many cases, the scalability impact of custom code can only be understood by placing an appropriate level of load on the customized code. At this point, the deployment should have validated that the out-of-the-box performance can match published scalability and capacity results and therefore any deviation from those results can be attributed to changes in the custom code, as well as any changes in the underlying infrastructure since the baseline analysis was performed.
Keep in mind that not all code in a portal deployment is performance-critical. It is not necessary to measure the performance of every possible user action. However, the critical path generally must perform within acceptable response times.
It is those critical areas that must be measured by a trial project as soon as possible. See "Implementing and Verifying the Portal" for more information on the trial phase of the deployment.
Deploying Your Building Module Solution
This section describes guidelines for deploying your building module solution.
Deployment Guidelines
How you construct your building module affects performance. Consider the following recommendations to deploy your building module properly:
- Deploy a building module on a single machine.
- If you use multiple machines, or if your Portal Server machine runs a large number of instances, use a fast network interconnect. For example, under a high load, a 100 Mbps Ethernet link between a Portal Server machine and a Directory Server machine can become a performance bottleneck, so consider using a 1 Gbps Ethernet link. In general, monitor the throughput of the Portal Server machine; when the throughput exceeds one third of the theoretical maximum, upgrade the Ethernet speed.
- Run Portal Server on top of a dedicated processor set. On servers with more than eight CPUs, create processor sets with either two or four CPUs. For example, if you choose to install two instances of Portal Server on an eight-CPU server, create two four-CPU processor sets. On servers with four CPUs or fewer, Portal Server does not need processor sets.
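The one-third rule of thumb above can be checked with a few lines of shell; the link speed and measured throughput below are illustrative values, not measurements from any particular deployment:

```shell
# Rule of thumb from the text: when sustained throughput exceeds one
# third of the link's theoretical maximum, upgrade the interconnect.
# Figures are illustrative, in Mbps.
LINK_MAX_MBPS=100    # 100 Mbps Ethernet between Portal and Directory Server
OBSERVED_MBPS=40     # measured sustained throughput

THRESHOLD=$((LINK_MAX_MBPS / 3))
if [ "$OBSERVED_MBPS" -gt "$THRESHOLD" ]; then
    echo "upgrade: ${OBSERVED_MBPS} Mbps exceeds one third of ${LINK_MAX_MBPS} Mbps"
else
    echo "link speed is sufficient"
fi
```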
Directory Server Requirements
Identify your Directory Server requirements for your building module deployment. For specific information on Directory Server deployment, see the Sun ONE Directory Server Deployment Guide.
Consider the following Directory Server guidelines when you plan your Portal Server deployment:
- Run Directory Server on top of a dedicated processor set to ensure that it can deliver the level of performance required by Portal Server.
The number of CPUs needed in the Directory Server consumer replica processor set depends on the number of Portal Server instances in the building module, as well as on performance and capacity considerations.
- Dedicate a Directory Server instance for the sole use of the Portal Server instances in a building module. (See Figure 7-2.)
- Keep the directory database indexes and cache entirely in memory to avoid disk latency issues.
- When deploying multiple building modules, use a multi-master configuration to work around bottlenecks caused by the profile updates and replication overhead to the Directory Server supplier.
Search Engine Structure
The Search Engine is a taxonomy and a database service that is similar to many Internet search engines. When you deploy the Search Engine as part of your building module solution, consider the following:
- In each building module, make sure only one Portal Server instance has the Search Engine database containing the resource descriptors (RDs). The remaining Portal Server instances have default, empty Search Engine databases.
- Factors that influence whether to use a building module for the portal Search database include the intensity of search activities in a Portal Server deployment, the range of search hits, and the average number of search hits for all users, in addition to the number of concurrent searches. For example, the load generated on a server by the Search Engine can be both memory and CPU intensive for a large index and heavy query load. However, if the query rate is moderate (less than two per second), and the index is not large (less than 5,000 documents), then the load should be acceptable.
- You can install Search on a machine separate from Portal Server, to keep the main server dedicated to portal activity. When you do so, you use the searchURL property of the Search provider to point to the second machine where Search is installed. The Search instance is a normal portal instance. You install the Search instance just as you do the portal instance, but use it just for Search functionality.
- The size of the Search database dictates whether more than one machine needs to host it, by replicating it across machines or building modules. Consider using high-end disk arrays, such as the T3 disk array with cached RAM (364 MB), for the Search database.
- Use a proxy server to cache search hit results. When doing so, you need to disable document-level security. See the Sun ONE Portal Server 6.0 Administrator's Guide for more information on document-level security.
Designing Portal Use Case Scenarios
Use case scenarios are written scenarios used to both test and present the system's capabilities, and they form an important part of your high-level design. Though you implement use case scenarios toward the end of the project, formulate them early on, once you have established your requirements.
When available, use cases can provide valuable insight into how the system is to be tested. Use cases are beneficial in identifying how you need to design the user interface from a navigational perspective. When designing use cases, compare them to your requirements to get a thorough view of their completeness and how you are to interpret the test results.
Note The goal of use cases is to describe the "what," not the "how" of the portal. Do not try to design your system by using use cases.
Use cases provide a method for organizing your requirements. Instead of a bulleted list of requirements, you organize them in a way that tells a story of how someone can use the system. This provides for greater completeness and consistency, and also gives you a better understanding of the importance of a requirement from a user perspective.
Use cases help to identify and clarify the functional requirements of the portal. Use cases capture all the different ways a portal would be used, including the set of interactions between the user and the portal as well as the services, tasks, and functions the portal is required to perform.
A use case defines a goal-oriented set of interactions between external actors and the portal system. (Actors are parties outside the system that interact with the system, and can be a class of users, roles users can play, or other systems.)
Use case steps are written in an easy-to-understand structured narrative using the vocabulary of the domain.
Use case scenarios are an instance of a use case, representing a single path through the use case. Thus, there may be a scenario for the main flow through the use case and other scenarios for each possible variation of flow through the use case (for example, representing each option).
Elements of Portal Use Cases
When developing use cases for your portal, keep the following elements in mind:
- Priority - Describes the priority, or ranking of the use case. For example, this could range from High to Medium to Low.
- Context of use - Describes the setting or environment in which the use case occurs.
- Scope - Describes the conditions and limits of the use case.
- Primary user - Describes what kind of user this applies to, for example, an end user or an administrator.
- Special requirements - Describes any other conditions that apply.
- Precondition - Describes the prerequisites that must be met for the use case to occur.
- Minimal guarantees - Describes the minimum that must occur if the use case is not successfully completed.
- Success guarantees - Describes what happens if the use case is successfully completed.
- Trigger - Describes the particular item in the system that causes the event to occur.
- Description - Provides a step-by-step account of the use case, from start to finish.
Example Use Case: Authenticate Portal User
Table 7-3 describes a use case for a portal user to authenticate with the portal. This is a two-column table. The first column describes the use case item and the second column provides a description.
Designing Portal Security Strategies
Security is the set of hardware, software, practices, and technologies that protect a server and its users from malicious outsiders. In that regard, security protects against unexpected behavior.
You need to address security globally and include people and processes as well as products and technologies. Unfortunately, too many organizations rely solely on firewall technology as their only security strategy. These organizations do not realize that many attacks come from employees, not outsiders. Therefore, you need to consider additional tools and processes when creating a secure portal environment.
Operating Portal Server in a secure environment involves making certain changes to the Solaris Operating Environment, the gateway and server configuration, the installation of firewalls, and user authentication through Directory Server and SSO through Identity Server. In addition, you can use certificates, SSL encryption, and group and domain access.
Securing the Operating Environment
Reduce the potential risk of security breaches in the operating environment by performing the following steps, often termed "system hardening":
- Minimize the size of the operating environment installation - When installing a Sun server in an environment that is exposed to the Internet, or any untrusted network, reduce the Solaris installation to the minimum number of packages necessary to support the applications to be hosted. Achieving minimization in services, libraries, and applications helps increase security by reducing the number of subsystems that must be maintained.
The Solaris Security Toolkit provides a flexible and extensible mechanism to minimize, harden, and secure Solaris Operating Environment systems. The primary goal behind the development of this toolkit is to simplify and automate the process of securing Solaris systems. For more information see:
The SUNWCreq cluster (Solaris Core) contains the packages needed for Portal Server and Secure Remote Access. In addition, you need Perl 5 (SUNWpl5u) to enable the use of the Directory Server online administration scripts (db2bak.pl, bak2db.pl, and so on). You perform the installation with the Portal Server install script, pssetup, which needs the following packages:
- SUNWadmc (contains showrev)
- SUNWadmfw (contains libraries for showrev)
You can remove these packages from the system after the installation is finished. However, future patches for Directory Server might need to be installed using the pssetup script, necessitating the reinstallation of the removed packages.
- Track and monitor file system changes - On systems that require strong security, a file change control and audit tool is indispensable for tracking changes to files and detecting possible intrusions. You can use a product such as Tripwire for Servers, or the Solaris Fingerprint Database (available from SunSolve Online).
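As a sketch of the minimization prerequisites, the following shell fragment checks the package names given above against an installed-package list. On a live Solaris system the list would come from a command such as `pkginfo`, but here it is stubbed so the sketch stands alone:

```shell
# Verify that the packages named in the text are present before running
# pssetup. INSTALLED is a stub; on a real system it might be built with:
#   INSTALLED=$(pkginfo | awk '{print $2}')
REQUIRED="SUNWadmc SUNWadmfw SUNWpl5u"
INSTALLED="SUNWadmc SUNWadmfw SUNWpl5u SUNWcsr"   # stubbed for illustration

MISSING=""
for pkg in $REQUIRED; do
    case " $INSTALLED " in
        *" $pkg "*) ;;                  # package is present
        *) MISSING="$MISSING $pkg" ;;   # package is not present
    esac
done
if [ -z "$MISSING" ]; then
    echo "all required packages present"
else
    echo "missing:$MISSING"
fi
```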
Using Platform Security
Usually you install Portal Servers in a trusted network. However, even in this secure environment, security of these servers requires special attention.
UNIX User Installation
You can install and configure Portal Server to run under three different UNIX users:
- root - This is the default option. All Portal Server components are installed and configured to run as the system superuser. Some security implications arise from this configuration:
- An application bug can be exploited to gain root access to the system.
- You need root access to modify some of the templates. Because template maintenance is typically delegated to people who are not system administrators, granting them root access poses a potential threat to the system.
- User nobody - You can install Portal Server as the user nobody (UID 60001). This can improve the security of the system, because the user nobody has no privileges and cannot create, read, or modify system files. As a result, an attacker cannot use Portal Server processes running as nobody to gain access to system files and break into the system.
The user nobody does not have a password, which prevents a regular user from becoming nobody. Only the superuser can change users without being prompted for a password. Thus, you still need root access to start and stop Portal Server services.
- Non-root user - You can run Portal Server as a regular UNIX user. The security benefits of a regular user are similar to those provided by the user nobody. A regular UNIX user has additional benefits as this type of user can start, stop, and configure services.
See the Sun ONE Portal Server 6.0 Installation Guide for more information on configuring Portal Server to run as nobody or as a non-root user.
Limiting Access Control
While the traditional UNIX security model is typically viewed as all-or-nothing, alternative tools provide additional flexibility. These tools supply the mechanisms needed for fine-grained access control to individual resources, such as different UNIX commands. For example, this toolset enables Portal Server to run as root while granting certain users and roles the superuser privileges needed to start, stop, and maintain the Portal Server framework.
These tools include:
- Role-Based Access Control (RBAC) - Solaris 8 includes Role-Based Access Control (RBAC), which packages superuser privileges so that they can be assigned to user accounts. RBAC enables separation of powers, controlled delegation of privileged operations to users, and a variable degree of access control.
- Sudo - Sudo is publicly available software that enables a system administrator to give certain users the ability to run some commands as root while logging the commands and their arguments. For more information, see:
Using a Demilitarized Zone (DMZ)
Central to security is the demilitarized zone (DMZ), where the gateway servers typically reside. The outermost firewall permits only SSL traffic from the Internet to the gateways, which then direct traffic to servers on the internal network. For maximum security, install the gateway in the DMZ between the two firewalls.
Note Because Portal Server assigns a special SSL session ID to Netlet traffic, which is not used by any real SSL session, some firewalls will not pass Netlet packets.
Using the Gateway
The Secure Remote Access gateway provides the interface and security barrier between the remote user sessions originating from the Internet and your organization's intranet. The gateway serves two main functions:
- Provides basic authentication services to incoming user sessions, including establishing identity and allowing or denying access to the platform.
- Provides mapping and rewriting services to enable web-based links to the intranet content for users.
For Internet access, use 128-bit SSL to provide the best security and encryption for communication between the user's browser and Portal Server.
Designing Secure Remote Access Deployment Scenarios
The gateway, Netlet, NetFile, Netlet proxy, and Rewriter proxy constitute the major components of Secure Remote Access.
This section lists some of the possible configurations of these components. Choose the right configuration based on your business needs. This section is meant only as a guide. It is not meant to be a complete deployment reference.
Secure Remote Access Deployment Scenario 1
Figure 7-6 shows the simplest configuration possible for Secure Remote Access. The figure shows a client browser running NetFile and Netlet. The gateway is installed on a separate machine in the demilitarized zone (DMZ) between two firewalls. The Portal Server is located on a machine beyond the second firewall in the intranet. The other application hosts that the client accesses are also located beyond the second firewall in the intranet.
The gateway is in the DMZ, with its external port open in the outermost firewall so that the client browser can communicate with the gateway. Through the second firewall, the gateway can communicate directly with internal hosts for HTTP or HTTPS traffic, but this is not recommended for security reasons; instead, use a proxy between the gateway and the internal hosts. For Netlet traffic, the connection is direct from the gateway to the destination host.
Without a proxy, the SSL traffic terminates at the gateway, and the traffic from the gateway to the internal host is unencrypted (unless the internal host is running in HTTPS mode). Any internal host to which the gateway has to initiate a Netlet connection must be directly accessible from the DMZ. This can be a potential security problem, and hence this configuration is recommended only for the simplest of installations.
Figure 7-6    Secure Remote Access Deployment Scenario 1
Secure Remote Access Deployment Scenario 2
Figure 7-7 shows a scenario similar to Scenario 1, except that Netlet is disabled. If your deployment does not need Netlet for securely running applications that communicate with the intranet, use this setup for the performance improvement it provides.
You can extend this configuration and combine it with other deployment scenarios to provide better performance and a scalable solution.
Figure 7-7    Secure Remote Access Deployment Scenario 2
Secure Remote Access Deployment Scenario 3 with Multiple Gateway Instances
Figure 7-8 shows an extension of Scenario 1. Multiple gateway instances run on the same machine. You can start multiple gateway instances with different profiles. Multiple instances of the gateway can also run on multiple machines. See Chapter 2, "Configuring the Gateway," in the Sun ONE Portal Server, Secure Remote Access 6.0 Administrator's Guide for details.
Note Although Figure 7-8 shows a 1-to-1 correspondence between the gateway and the Portal Servers, this need not necessarily be the case in a real deployment. You can have multiple gateway instances, and multiple Portal Server instances, and any gateway can contact any Portal Server depending on the configuration.
The disadvantage to this configuration is that multiple ports need to be opened in the second firewall for each connection request. This could cause potential security problems. Scenario 4 overcomes this problem by using the Netlet and Rewriter proxies.
Figure 7-8    Secure Remote Access Deployment Scenario 3
Secure Remote Access Deployment Scenario 4 with Netlet and Rewriter Proxies
Figure 7-9 shows a configuration that overcomes the security issues of Scenario 3 by having a Netlet proxy and a Rewriter proxy on the intranet.
The gateway no longer needs to contact the application hosts directly; instead, it forwards all Netlet traffic to the Netlet proxy. Because the Netlet proxy is within the intranet, it can contact all the required application hosts directly without opening multiple ports in the second firewall.
Also, to provide end-to-end SSL up to the Portal Server node, you can install a Rewriter proxy on the Portal Server node. This ensures that the traffic between the gateway in the DMZ and the Portal Server node within the intranet is also SSL-encrypted and hence secure.
The traffic between the gateway in the DMZ and the Netlet proxy is encrypted and gets decrypted only at the Netlet proxy, thereby enhancing security.
If the Rewriter proxy is enabled, all traffic is directed through the Rewriter proxy, whether or not the request is for the Portal Server node. This ensures that the traffic from the gateway in the DMZ to the intranet is always encrypted.
Including the Netlet and Rewriter proxies in the configuration reduces the number of ports opened in the second firewall to two.
Because the Netlet proxy, Rewriter proxy, and Portal Server all run on the same node, such a deployment can have performance issues. Scenario 5 overcomes this problem by installing the Netlet proxy on a separate node to reduce the load on the Portal Server node.
Figure 7-9    Secure Remote Access Deployment Scenario 4 with Netlet and Rewriter Proxies
Secure Remote Access Deployment Scenario 5 with Netlet Proxy on an Independent Node
To reduce the load on the Portal Server node while still providing the same level of security at increased performance, install the Netlet proxy on a separate node.
This deployment has an added advantage: you can use the proxy to shield the Portal Server from the DMZ. Only the node that runs the Netlet proxy needs to be directly accessible from the DMZ.
Figure 7-10 shows the Netlet proxy on an independent node. All the Netlet traffic from the gateway is directed to this independent node, which in turn directs the traffic to the required intranet hosts.
You can have multiple instances or installations of the Netlet proxy. You can configure each gateway to try to contact various instances of the Netlet proxy in a round robin manner depending on availability. See Chapter 3, "Configuring the Netlet," in the Sun ONE Portal Server, Secure Remote Access 6.0 Administrator's Guide for details.
Figure 7-10    Secure Remote Access Deployment Scenario 5 with Netlet Proxy on an Independent Node
Designing for Localization
The customizable parts of Portal Server that can be translated to support localization include:
- Template and JSP files
- Resource bundles
- Display profile properties
For advanced language localization, create a well-defined directory structure for template directories. In order to preserve the upgrade path, maintain custom content and code outside of default directories. See the Sun ONE Portal Server 6.0 Developer's Guide for more information on localization.
Specifying the Low-level Architecture Structure
When specifying the low-level architecture for your portal, you need a detailed design for each aspect of the architecture, for example:
- The Portal (Sun ONE) complex of servers - Includes the details on what software will be installed on which machines, the directory structure if you do not accept the default, how you plan on configuring that software, and so forth.
- Low-level networking considerations - Includes the server names, IP addresses, interfaces in each box, how they connect, switches, routers, and so forth.
- Content design and implementation - Includes the detailed layout of the Desktop, including containers and tabs, nested containers, and channels and their attributes. Also includes the design of how each channel will be constructed and the necessary code to implement data feeds or content integration.
- Identity architecture - Includes the detailed design for implementation of identity and directory structures. (See the Sun ONE Directory Server Deployment Guide for more information.)
- Integration design - Includes the details on connectivity to any applications. Document the APIs, data transforms, performance, security, and other issues.
The following sections elaborate on the details of the low-level architecture.
Portal Server Installation Guidelines
Consider these guidelines for your portal installation:
- Review the default installation directories used by Portal Server. See "Sun ONE Portal Server Configuration Files and Directory Structure". Your site might require changing this default directory setup to fit its needs.
- Simplify system administration and monitoring by redirecting all log files to a /logs directory, mounted on a separate file system and partition. Create a subdirectory for each of the different log files, such as /logs/ldap, /logs/portal, and so forth.
- You can install Portal Server on the same machine as Directory Server or on a separate machine. Directory Server can also be an existing installation.
If you install Portal Server and Directory Server separately, you must install Directory Server first.
The machine running Portal Server must be able to access the machine running Directory Server. Any firewalls between the systems must not block connections to the Directory Server port.
The recommended high-level steps to install Directory Server and Portal Server on separate machines are:
- Running pssetup on the Directory Server node
- Choosing to install Directory Server
- Running pssetup on the Portal Server node
- Choosing to install Portal Server and configuring it to use an existing directory
- You must install Portal Server on the same machine as Identity Server. Portal Server can also be installed on an existing installation of Identity Server.
- You cannot install Portal Server on a machine with an existing installation of Sun ONE Web Server. The installation program installs a version of Web Server that is needed for Portal Server. If a web server is already installed, install the Sun ONE Web Server bundled with the Portal Server on a different port.
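The log-redirection guideline above (a /logs file system with one subdirectory per log type) can be sketched as follows; the base directory and the component names beyond ldap and portal are illustrative:

```shell
# Sketch of the recommended log layout: one /logs file system with a
# subdirectory per component. BASE is illustrative -- in production
# this would be /logs, mounted on its own file system and partition.
BASE=./logs
for component in ldap portal gateway webserver; do
    mkdir -p "$BASE/$component"
done
ls "$BASE"
```

Pointing each product's log configuration at its subdirectory then keeps all monitoring in one place.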
Installing Portal Server 3.0 and Portal Server 6.0 on the Same Machine
You can install Sun ONE Portal Server 6.0 on a machine that is running Sun ONE Portal Server 3.0. This configuration is supported. However, you must make sure that none of the ports used by the two products conflict with one another, which requires a non-default installation of one of the two versions.
In addition to the port numbers being different, be aware that both portals consume the same set of system resources (for example, CPU and memory). Thus, if you are testing Sun ONE Portal Server 6.0, first shut down Sun ONE Portal Server 3.0.
Note The package name has changed from SUNWips in the Sun ONE Portal Server 3.0 version to SUNWps in the Sun ONE Portal Server 6.0 version.
Networking Details Design
This section describes some of the networking issues associated with portal design.
Load Balancers
Portal Server supports using a load balancer in core portal deployments. Load balancers are network appliances with secure, real-time, embedded operating systems that intelligently load balance IP traffic across multiple servers. Load balancers optimize the performance of your site by distributing client requests across multiple servers, dramatically reducing the cost of providing large-scale Internet services and accelerating user access to those applications.
Persistent Load Balancing
The Portal Server instance's physical memory space temporarily stores user session information. Each time the load balancer moves a user to a different Portal Server instance, the user session must be transferred, shared, and synchronized in memory between the instances. This transfer and synchronization consumes system resources. To reduce this consumption, and to improve portal availability and performance, use persistent load balancing techniques.
Load balancer products, such as those from Cisco and Alteon, use cookie name and value pairs to achieve a persistent connection to a specific portal instance. However, most load balancers limit the maximum number of cookie name and value pairs that they can support. In a large portal deployment, use a server-based cookie name and value pair load balancing method. This method assigns a cookie name and value pair to each Portal Server instance. Because there are a limited number of Portal Server instances, the load balancer only needs to manage a limited number of cookie name and value pairs. The end result is a load balancer that more effectively manages loads.
Some load balancer products refer to persistent connections as sticky sessions based on cookies. Refer to your load balancer product's documentation for more information.
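As an illustration of the server-based cookie method described above, the following sketch maps a cookie value assigned to each Portal Server instance to a back-end host. The cookie values and host names are hypothetical, and in practice the load balancer itself performs this routing:

```shell
# Illustrative sketch of server-based cookie persistence: each Portal
# Server instance is assigned one fixed cookie value, and the balancer
# routes on that value. Names below are hypothetical.
route_for_cookie() {
    case "$1" in
        instance1) echo portal1.example.com ;;
        instance2) echo portal2.example.com ;;
        *)         echo round-robin ;;   # no cookie yet: pick any instance
    esac
}

route_for_cookie instance2
```

Because there is one cookie value per instance rather than per user, the balancer manages only as many name and value pairs as there are Portal Server instances.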
Network Interface Cards
In addition to considering site failover in your networking design, you need to focus on the network interface cards (NICs) within the Portal Server machines themselves. Use dedicated NICs within the Portal Server machines for the following types of traffic:
- Incoming and outgoing network traffic to and from portal users
- Backup traffic, for backing up the server at regular intervals
- Security access, including Telnet, FTP, and so forth
- Optional: Back-end traffic, including database systems, LDAP requests, messaging and calendar systems, and so forth
In a portal deployment, mirror all server NICs for redundancy.
For example, using the above information, a four-CPU Portal Server production machine would require four NICs plus four mirrored NICs, for a total of eight NICs.
An eight-CPU Portal Server production machine would require ten NICs, as follows:
- One NIC plus a mirrored NIC for incoming traffic from portal users
- One NIC plus a mirrored NIC for outgoing traffic from portal users
- One NIC plus a mirrored NIC for back-end system traffic
- One NIC plus a mirrored NIC for backup traffic
- One NIC plus a mirrored NIC for security access, including Telnet, FTP, and so forth
Content and Implementation Design
The Desktop provides the primary end-user interface for Portal Server and a mechanism for extensible content aggregation through the Provider Application Programming Interface (PAPI). The Desktop includes a variety of providers that enable the container hierarchy and supply the basic building blocks for building some types of channels. For storing content provider and channel data, the Desktop implements a display profile data storage mechanism on top of an Identity Server service.
The various techniques you can use for content aggregation include:
- Creating channels using building block providers
- Creating channels using JSPProvider
- Creating channels using Portal Server tag libraries
- Creating channels using custom building block providers
- Organizing content using container channels
See the Sun ONE Portal Server 6.0 Developer's Guide and Sun ONE Portal Server 6.0 Desktop Customization Guide for more information.
Placement of Static Portal Content
Place your static portal content in the BaseDir/SUNWam/public_html directory, or in a subdirectory under the BaseDir/SUNWam/public_html directory (the document root for the web server). Do not place your content in the BaseDir/SUNWps/web-apps/https-server/portal/ directory, as this is a private directory. Any content here is subject to deletion when the Portal Server web application is redeployed during a patch or other update.
Identity and Directory Structure Design
A major part of implementing your portal involves designing your Directory Information Tree (DIT). The DIT organizes your users, organizations, and suborganizations into a logical or hierarchical structure that enables you to administer the tree efficiently and to assign appropriate access to users, whether they hold particular roles or belong to particular organizations.
The top of the organization tree in Identity Server is called isp by default, but the name can be changed or specified at install time. You can create additional organizations after installation to manage separate enterprises. All created organizations fall beneath the top-level organization, and suborganizations can be nested within them. There is no limit on the depth of the nested structure.
Roles are a new grouping mechanism designed to be more efficient and easier for applications to use. Each role has members, that is, entries that possess the role. As with groups, you can specify role members either explicitly or dynamically.
The roles mechanism automatically generates the nsRole attribute, which contains the DNs of all role definitions in which the entry is a member. Each role contains a privilege or set of privileges that can be granted to one or more users. In Sun ONE Portal Server 6.0, multiple roles can be assigned to a single user.
The privileges for a role are defined in Access Control Instructions (ACIs). Portal Server includes several predefined roles. The Identity Server administration console enables you to edit a role's ACI to assign access privileges within the Directory Information Tree. Built-in examples include SuperAdmin Role and TopLevelHelpDeskAdmin roles. You can create other roles that can be shared across organizations.
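As a purely illustrative sketch (the role name, organization, and granted privileges are hypothetical, and exact attributes vary by Directory Server version), a managed role definition and an ACI granting that role read access might look like:

```ldif
# Hypothetical managed role under the organization o=sesta.com,o=isp
dn: cn=HelpDeskRole,o=sesta.com,o=isp
objectClass: top
objectClass: ldapsubentry
objectClass: nsRoleDefinition
objectClass: nsSimpleRoleDefinition
objectClass: nsManagedRoleDefinition
cn: HelpDeskRole

# ACI on the organization entry granting the role read and search access
dn: o=sesta.com,o=isp
changetype: modify
add: aci
aci: (targetattr="*")(version 3.0; acl "HelpDesk read access";
 allow (read,search) roledn="ldap:///cn=HelpDeskRole,o=sesta.com,o=isp";)
```

Any user entry that is a member of this role would then carry cn=HelpDeskRole,o=sesta.com,o=isp in its computed nsRole attribute.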
See Sun ONE Portal Server 6.0 Administrator's Guide and the Sun ONE Directory Server Deployment Guide for more information on planning your Identity Server and Directory Server structure.
Integration Design
This section provides information on integration areas that you need to account for in your low-level design.
Creating a Custom Identity Server Service
Service Management in Identity Server provides a mechanism for you to define, integrate, and manage groups of attributes as an Identity Server service. Readying a service for management involves:
- Creating an XML service file
- Configuring an LDIF file with any new object classes and importing both the XML service file and the new LDIF schema into Directory Server
- Registering multiple services to organizations or sub-organizations using the Identity Server administration console
- Managing and customizing the attributes (once registered) on a per organization basis
See the Identity Server documentation for more information.
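The XML service file step above can be sketched as follows. This is an illustrative fragment only; the service name and attribute are hypothetical, and the exact element structure is defined by Identity Server's service schema DTD:

```xml
<!-- Hypothetical minimal Identity Server service file -->
<ServicesConfiguration>
    <Service name="SampleProfileService" version="1.0">
        <Schema serviceHierarchy="/DSAMEConfig/SampleProfileService"
                i18nFileName="SampleProfileService"
                i18nKey="sample-profile-service">
            <User>
                <AttributeSchema name="sample-favorite-color"
                                 type="single"
                                 syntax="string"
                                 i18nKey="favorite-color">
                    <DefaultValues>
                        <Value>blue</Value>
                    </DefaultValues>
                </AttributeSchema>
            </User>
        </Schema>
    </Service>
</ServicesConfiguration>
```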
Integrating Applications
Integrating and deploying applications with Portal Server is one of your most important deployment tasks. The application types include:
- Channel - Provides limited content options; is not a "mini-browser"
- Portal application - Launched from a channel in its own browser window; the Portal Server hosts the application; an example is NetMail; created as an Identity Server service; accesses Portal and Identity Server APIs
- Third-party application - Hosted separately from Portal Server, but accessed from Portal Server; URL Rewriter can rewrite application interface; uses Identity Server to enable Single Sign-on
See "Independent Software Vendor Integrations with Sun ONE Portal Server" for more information on third-party applications that have been integrated to work with Portal Server.
Implementing Single Sign-on
Single Sign-on (SSO) to Portal Server is managed by Identity Server. SSO gives a user access to any application whose access policy is managed by Identity Server, provided the policy allows it. The user need not re-authenticate to that application.
Various SSO scenarios include:
- Portal web application - The authentication comes from Identity Server, and the application validates the user credentials with Identity Server
- Standalone web application - Here, the application is hosted on a separate web server, and the Identity Server Web Agent is used for authentication. This does not require application coding. Additionally, you can modify the application to validate against Identity Server directly.
- Standalone Java application - In this scenario, you modify the application to validate user credentials against Identity Server directly.
- Non-Identity Server aware application - This scenario involves implementing a customized SSO workaround for the application. However, this is not a real SSO solution, as the user needs to re-authenticate.
For more information, see the Identity Server documentation.
Integrating Microsoft Exchange
Using the JavaMail API is one of the primary options for integrating the Microsoft Exchange messaging server with Portal Server. The JavaMail API provides a platform-independent and protocol-independent framework for building Java technology-based mail and messaging applications. The JavaMail API is implemented as a Java platform optional package and is also available as part of the Java 2 Platform, Enterprise Edition.
JavaMail provides a common, uniform API for managing mail. It enables service providers to offer a standard interface to their standards-based or proprietary messaging systems using the Java programming language. Using this API, applications can access message stores and compose and send messages.
Desktop Design
The performance of Portal Server itself largely depends upon how fast individual channels perform. In addition, the user experience of the portal is based upon the speed with which the Desktop is displayed. The Desktop can only load as fast as the slowest deployed channel. For example, consider a Desktop composed of ten channels. If nine channels are rendered in one millisecond but the tenth takes three seconds, the Desktop does not appear until that tenth channel is processed by the portal. By making sure that each channel can process a request in the shortest possible time, you provide a better performing Desktop.
Choosing and Implementing the Correct Aggregation Strategy
The options for implementing portal channels for speed and scalability include:
- Keeping processing functions on back-end systems and application servers, not on the portal server. The portal server needs to be optimized for handling user requests, so push as much business logic processing as possible to the back-end systems. Whenever possible, use the portal to deliver customized content to users, not to process it.
- Ensuring that the back-end systems are highly scalable and perform well. The Desktop responds only as fast as the servers from which it obtains the information displayed in the channels.
- Understanding, when designing providers, where data is stored, how the portal gets that data, how the provider gets that data, and what kind of data it is. For example, is the data dynamic and specific to an individual user, so that code is needed to retrieve the customized or personalized data? Or is the data static and shared by a small group of users? Next, you need to understand where the data resides (for example, in an XML file, database, or flat file) and how frequently it is updated. Finally, you need to understand how the business logic is applied when processing the data, so that the provider can deliver a personalized channel to the user.
- Pushing any business logic that the portal must perform into either the getEdit or processEdit method, rather than the getContent method. By moving the business logic out of the getContent method, which is called every time the channel is displayed, you can improve portal performance.
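The last point can be sketched with a simplified, hypothetical channel class (these are not the real PAPI signatures): the expensive work runs in the rarely invoked processEdit-style method and its result is cached, so the frequently invoked getContent-style method stays cheap:

```java
// Illustrative sketch only (hypothetical class, not the real PAPI): keep
// expensive business logic out of the frequently called getContent method.
public class StockChannel {
    private String cachedContent = "<p>No portfolio selected</p>";

    // processEdit-style method: runs only when the user submits the channel's
    // edit page, so the expensive work happens rarely.
    public void processEdit(String portfolio) {
        // Imagine an expensive back-end call here; its result is cached.
        cachedContent = "<p>Portfolio: " + portfolio + "</p>";
    }

    // getContent-style method: called on every Desktop display, so it only
    // returns the precomputed markup.
    public String getContent() {
        return cachedContent;
    }

    public static void main(String[] args) {
        StockChannel channel = new StockChannel();
        System.out.println(channel.getContent()); // <p>No portfolio selected</p>
        channel.processEdit("TECH");
        System.out.println(channel.getContent()); // <p>Portfolio: TECH</p>
    }
}
```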
Working with Providers
Consider the following discussion when planning to deploy providers:
- URLScraperProvider - Typically you use this provider to access dynamic content supplied by a web-based system on another web server. It uses HTTP calls to retrieve the content. This provider places high demands on the back-end system, which must be highly scalable and available. Back-end response times need to be in double-digit milliseconds (hundredths of seconds) for the portal to show high performance. Because it is simple to configure, this provider is very useful for proof-of-concept work in the trial phase of your portal deployment.
URLScraperProvider also performs some level of rewriting every time it retrieves a page. For example, if a channel retrieves a news page that contains a picture that is hosted on another web site, in order for the portal to be able to display that picture, the URL of that picture needs to be rewritten. The portal does not host that picture, so URLScraperProvider needs to rewrite that picture to present it to portal users.
- JSPProvider - Uses JavaServer Pages (JSP) technology. JSPProvider obtains content from one or more JSP files. A JSP file can be a static document (HTML only) or a standard JSP file with HTML and Java code. A JSP can include other JSP files. However, only the topmost JSP file can be configured through the display profile. The topmost JSP files are defined through the contentPage, editPage, and processPage properties.
- XMLProvider - Transforms an XML document into HTML using an XSLT (Extensible Stylesheet Language Transformations) file. You must create the appropriate XSLT file to match the XML document type. XMLProvider is an extension of URLScraperProvider. This provider uses the JAXP 1.1 JAR files that come with Sun ONE Web Server 6.0 SP1.
- LDAP-based provider - This type of provider retrieves information about a user from the user profile for personalization. It remains efficient as long as the number of LDAP attributes retrieved is small. In general, this type of provider is a good performer, second only to the file scraper function within URLScraperProvider.
- Database provider - This type of provider uses a back-end database for its content. It requires that you build database connection pooling and keep queries small (either single queries, or no more than a couple). You might also have to perform extra work in the way of HTML formatting. In general, this type of provider is the worst performer, typically because of large database queries, poor coding, or a lack of indexing on the retrieved data. Additionally, once the data has been retrieved, the portal must perform a large amount of processing to display it in the Desktop. If you use this type of provider, push as much data-processing logic as possible to the database. Also, benchmark your portal performance with and without database channels in the user profile.
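The URL rewriting that URLScraperProvider performs on scraped pages, described under URLScraperProvider above, can be approximated with a short sketch (the regex, class name, and sample site are illustrative, not the product's implementation): relative image URLs are rewritten against the origin site so the images still resolve inside the portal.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of scraped-content rewriting: src attributes that are
// not already absolute URLs are prefixed with the origin site's base URL.
public class ScraperRewriter {
    private static final Pattern SRC =
            Pattern.compile("src=\"(?!https?://)([^\"]+)\"");

    public static String rewrite(String html, String originBase) {
        Matcher m = SRC.matcher(html);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // $1 re-inserts the original relative path after the base URL.
            m.appendReplacement(out, "src=\"" + originBase + "$1\"");
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String page = "<img src=\"/images/logo.gif\">";
        System.out.println(rewrite(page, "http://news.example.com"));
        // <img src="http://news.example.com/images/logo.gif">
    }
}
```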
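At its core, the XMLProvider transformation described above is a JAXP XSLT transformation. The following self-contained sketch (the class name and sample documents are illustrative) shows the same idea using the standard javax.xml.transform API:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Illustrative sketch of what XMLProvider does internally: apply an XSLT
// style sheet to an XML document via JAXP to produce channel HTML.
public class XslChannelDemo {
    public static String transform(String xml, String xslt) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)),
                    new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<news><headline>Portal 6.0 ships</headline></news>";
        String xslt =
            "<xsl:stylesheet version=\"1.0\""
          + " xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
          + "<xsl:output method=\"html\"/>"
          + "<xsl:template match=\"/news\">"
          + "<b><xsl:value-of select=\"headline\"/></b>"
          + "</xsl:template>"
          + "</xsl:stylesheet>";
        System.out.println(transform(xml, xslt));
    }
}
```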
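The connection pooling recommended for database providers can be sketched as follows; the nested Connection class is a stand-in for a real JDBC connection, and the class name is hypothetical. Connections are created once and reused, so each Desktop request avoids the cost of opening a new one:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch of connection pooling for a database-backed channel.
public class ChannelConnectionPool {
    public static class Connection {
        final int id;
        Connection(int id) { this.id = id; }
    }

    private final BlockingQueue<Connection> pool;

    public ChannelConnectionPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(new Connection(i)); // created once, up front
        }
    }

    public Connection borrow() throws InterruptedException {
        return pool.take(); // blocks if all connections are in use
    }

    public void release(Connection c) {
        pool.add(c); // return the connection for reuse
    }

    public static void main(String[] args) throws InterruptedException {
        ChannelConnectionPool pool = new ChannelConnectionPool(1);
        Connection first = pool.borrow();
        // ... run one small, indexed query here, then:
        pool.release(first);
        System.out.println("reused: " + (pool.borrow() == first)); // reused: true
    }
}
```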
Tip In general, use the following guidelines for your channels:
Client Support
Portal Server supports the following browsers as clients:
- Internet Explorer 5.5 and 6.0
- Netscape Communicator 4.7x or higher, and 6.2.1
See the Sun ONE Portal Server 6.0 Release Notes for updates to this list.
Multiple client types, whether based on HTML, WML, or other protocols, can access Identity Server and hence Portal Server. For this to work, Identity Server uses the Client Detection service (client detection API) to detect the client type that is accessing the portal. The client type is then used to select the portal template files, JSP files, and character encoding used for output.
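The client detection flow described above can be sketched as follows (the client type names, template paths, and matching rules are hypothetical, not Identity Server's actual data): a User-Agent header is mapped to a client type, which then selects the template directory.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of client detection: match the User-Agent header
// against ordered rules to pick a client type, then select templates.
public class ClientDetector {
    private static final Map<String, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put("Nokia", "wmlClient");     // hypothetical WML handset rule
        RULES.put("Mozilla", "genericHTML"); // hypothetical HTML browser rule
    }

    public static String detect(String userAgent) {
        for (Map.Entry<String, String> rule : RULES.entrySet()) {
            if (userAgent.contains(rule.getKey())) {
                return rule.getValue();
            }
        }
        return "genericHTML"; // fall back to the default client type
    }

    public static String templateDir(String clientType) {
        return clientType.equals("wmlClient") ? "/desktop/wml" : "/desktop/html";
    }

    public static void main(String[] args) {
        String type = detect("Nokia6210/1.0");
        System.out.println(type + " -> " + templateDir(type));
        // wmlClient -> /desktop/wml
    }
}
```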