Sun Java Enterprise System Deployment Planning Guide

Identifying Performance Bottlenecks

One of the keys to successful deployment design is identifying potential performance bottlenecks and developing a strategy to avoid them. A performance bottleneck occurs when the rate at which data is accessed cannot meet specified system requirements.

Bottlenecks can be categorized according to various classes of hardware, as listed in the following table of data access points within a system. This table also suggests potential remedies for bottlenecks in each hardware class.

Table 5–7 Data Access Points

Hardware Class        Relative Access Speed   Remedies for Performance Improvement
--------------------  ----------------------  ----------------------------------------
Processor             Nanoseconds             Vertical scaling: Add more processing power,
                                              improve processor cache
                                              Horizontal scaling: Add parallel processing
                                              power for load balancing

System memory (RAM)   Microseconds            Dedicate system memory to specific tasks
                                              Vertical scaling: Add additional memory
                                              Horizontal scaling: Create additional instances
                                              for parallel processing and load balancing

Disk read and write   Milliseconds            Optimize disk access with disk arrays (RAID)
                                              Dedicate disk access to specific functions,
                                              such as read only or write only
                                              Cache frequently accessed data in system memory

Network interface     Varies depending on     Increase bandwidth
                      bandwidth and access    Add accelerator hardware when transporting
                      speed of nodes on       secure data
                      the network             Improve performance on nodes within the network
                                              so the data is more readily available


Note –

Table 5–7 lists hardware classes according to relative access speed, implying that slow access points, such as disks, are the most likely sources of bottlenecks. However, processors that are underpowered for large loads are also likely sources of bottlenecks.


You typically begin deployment design with baseline estimates of the processing power required by each component in the deployment and by its dependencies. You then determine how to avoid bottlenecks related to system memory and disk access. Finally, you examine the network interface to identify potential bottlenecks and focus on strategies to overcome them.

Optimizing Disk Access

A critical component of deployment design is the speed of disk access to frequently accessed datasets, such as LDAP directories. Disk access is the slowest form of data access and is a likely source of performance bottlenecks.

One way to optimize disk access is to separate write operations from read operations. Not only are write operations more expensive than read operations, but read operations (lookups in an LDAP directory) also typically occur considerably more frequently than write operations (updates to data in an LDAP directory).
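
For illustration, a Java client built on JNDI can direct the two kinds of traffic to different Directory Server instances. The following is a minimal sketch, not taken from the product documentation: the host names ldap-master.example.com (updates) and ldap-replica.example.com (lookups), the suffix dc=example,dc=com, and the sample entry are assumptions to be replaced with values from your own deployment.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.ModificationItem;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class SplitReadWriteExample {

    // Hypothetical host names; substitute the servers in your deployment.
    private static final String WRITE_MASTER_URL = "ldap://ldap-master.example.com:389";
    private static final String READ_REPLICA_URL = "ldap://ldap-replica.example.com:389";

    private static DirContext connect(String providerUrl) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, providerUrl);
        return new InitialDirContext(env);
    }

    public static void main(String[] args) throws NamingException {
        // Lookups, the frequent and cheaper case, go to the read replica.
        DirContext readCtx = connect(READ_REPLICA_URL);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> results =
                readCtx.search("dc=example,dc=com", "(uid=jdoe)", controls);
        while (results.hasMore()) {
            System.out.println(results.next().getNameInNamespace());
        }

        // Updates, the rarer and more expensive case, go to the write master.
        DirContext writeCtx = connect(WRITE_MASTER_URL);
        Attribute mail = new BasicAttribute("mail", "jdoe@example.com");
        ModificationItem[] mods = {
                new ModificationItem(DirContext.REPLACE_ATTRIBUTE, mail)
        };
        writeCtx.modifyAttributes("uid=jdoe,ou=People,dc=example,dc=com", mods);

        readCtx.close();
        writeCtx.close();
    }
}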

Another way to optimize disk access is to dedicate disks to different types of I/O operations. For example, provide separate disk access for Directory Server logging operations, such as transaction logs and event logs, and for LDAP read and write operations.

Also, consider implementing one or more Directory Server instances dedicated to read and write operations, and using replicated instances distributed to local servers for read and search access. Chaining and linking options are also available to optimize access to directory services.
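
The read-routing side of such a topology can be sketched in a few lines of Java. This is only an illustration under assumed host names (one read-write instance and two local read replicas): lookups and searches rotate round-robin across the local replicas, while all updates are sent to the dedicated read-write instance.

import java.util.Arrays;
import java.util.Hashtable;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class DirectoryAccessRouter {

    // Hypothetical host names; substitute the servers in your deployment.
    private static final String READ_WRITE_URL =
            "ldap://ldap-master.example.com:389";
    private static final List<String> READ_REPLICA_URLS = Arrays.asList(
            "ldap://ldap-replica1.example.com:389",
            "ldap://ldap-replica2.example.com:389");

    private final AtomicInteger nextReplica = new AtomicInteger();

    // Lookups and searches rotate round-robin across the local replicas.
    public DirContext readContext() throws NamingException {
        int i = Math.floorMod(nextReplica.getAndIncrement(),
                READ_REPLICA_URLS.size());
        return connect(READ_REPLICA_URLS.get(i));
    }

    // All updates go to the dedicated read-write instance.
    public DirContext writeContext() throws NamingException {
        return connect(READ_WRITE_URL);
    }

    private DirContext connect(String url) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        return new InitialDirContext(env);
    }
}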

Chapter 6, Tuning System Characteristics and Hardware Sizing, in Sun Java System Directory Server Enterprise Edition 6.0 Deployment Planning Guide discusses various factors in planning for disk access. Topics in this chapter include: