You can significantly improve the performance of Oracle Waveset (Waveset) software across nearly all activities with the proper tuning. In addition to changing settings within the software, you can make performance improvements by tuning the application server, the Java Virtual Machine (JVM), hardware, operating system, and network topology.
Additionally, you can use several tools to diagnose and monitor performance. Several of these tools exist within Waveset, such as trace and method timers. You can also use other Oracle and third-party tools to debug performance issues with Waveset.
This chapter describes tools, methodologies, and references you can use to improve performance and to debug performance-related issues.
The tuning process spans many entities and is specific to your deployment environment. This chapter describes different tuning methods for optimizing Waveset performance, but these methods are only intended as guidelines. You might have to adapt these methods for your deployment environment.
This chapter covers the following topics:
Review all of the information in this section before you start tuning Waveset.
The tuning methods described in this chapter are only provided as guidelines. You might have to modify some of these tunings for your deployment. In addition, be sure to validate tunings before applying changes in a production environment.
Before you can tune Waveset, you must:
Be familiar with tuning application servers
Be familiar with Java 5.0 (required for Oracle Waveset 8.1.1)
Understand performance limitations within your deployment environment
Be able to identify areas needing performance improvements
Understand the checklists provided in this chapter
In addition to the information provided in this chapter, consult the documents and web sites listed in this section for information related to tuning Waveset.
See the following documents for information related to performance tuning.
Table 4–1 Related Documentation

| Document Title | Description |
|---|---|
| Java Tuning White Paper | Contains information, techniques, and pointers related to Java performance tuning. |
| Oracle MetaLink Note 114671.1: Gathering Statistics for the Cost Based Optimizer | Explains how to use system statistics and Oracle’s Cost-Based Optimizer (CBO). Note: This document is available to Oracle MetaLink subscribers. Registration is required. |
| Solaris Dynamic Tracing Guide | Explains how to use DTrace to observe, debug, and tune your system's behavior. |
| Sun Java System Application Server Performance Tuning Guide | Describes how to obtain optimal performance from your Sun Java System Application Server. Download the necessary version of this book from the Oracle documentation web site. |
| | Describes how to tune garbage collection in the JVM. |
| | Explains how to download and use the PrintGCStats script and how to collect statistics to derive optimal JVM tunings. |
| | Describes how to use JConsole to monitor applications that run on the Java platform. |
The following table describes some web sites that you might find useful when trying to tune Waveset performance.
Table 4–2 Useful Web Sites

| Web Site URL | Description |
|---|---|
| | Oracle web site containing diagnostic tools, forums, features and articles, security information, and patch contents. Note: The information on this site is divided into three areas. |
| | Oracle Developer Network (SDN) web site where you can browse forums and post questions. |
| | JRat web site that describes how to use the Java Runtime Analysis Toolkit, an open-source performance profiler for the Java platform. |
| | Oracle’s internal forum site that contains information about tuning Oracle databases. Note: You must be an Oracle MetaLink subscriber to access the information provided on this site. |
| http://performance.netbeans.org/howto/jvmswitches/index.html | NetBeans web site containing information about tuning JVM switches for performance. |
| | Waveset link on Oracle’s Share Space. Note: This space is only available to Oracle employees and Oracle partners, who must have a Share Space ID to access the information on this site. |
| | Waveset FAQ on Oracle’s Share Space. Note: This space is only available to Oracle employees and Oracle partners, who must have a Share Space ID to access the information in this FAQ. |
| | SLAMD Distributed Load Generation Engine web site. |
| http://hub.opensolaris.org/bin/view/Community+Group+dtrace/WebHome | OpenSolaris Community: DTrace web page. |
| | Web sites containing information related to tuning the Solaris OS. |
How well your Waveset solution performs can depend on the following deployment-specific settings:
Resource configuration
How many resources are connecting
What type of resources are connecting
How attributes are mapped on the resources
Exact resource version
Network topology
Number (and distribution) of domain controllers
Number of installed Waveset Gateways
Concurrency settings
Number of concurrent processes (running workflows)
Number of concurrent users
Number of concurrent Waveset administrators
Total number of users under management
When you are trying to debug performance problems, start by analyzing and describing the problem. Ask yourself the following questions:
Where do you see the performance issue? In reconciliation, workflows, custom workflows, GUI page loading, provisioning, Access Review?
Are you CPU bound, memory bound, resource bound, or network bound?
Have you examined your configuration (hardware, network, parameters, and so forth) for problems?
Have you recently changed anything in your deployment environment?
Have you tried profiling troublesome resources natively to see if the problem is on the resource side and not with Waveset?
What size are the views?
Are you provisioning to several resources?
Are resources on slow networks connecting to Waveset?
Are you running additional applications on the server with Waveset?
Do your organizations have a Rule-Driven Members rule?
Have you run a series of thread dumps to see if there is a consistent pattern?
Looking at just a single thread dump can be misleading.
Have you recently turned on tracing?
Have you checked your JVM garbage collection?
Have you added organizations that are adding load to memory management?
This section provides information about tuning your deployment environment, including:
This section describes some tuning suggestions you can use to optimize your Java Platform, Enterprise Edition (Java EE platform) environment.
These tuning suggestions were derived from a series of experiments in which a considerable increase in throughputs was observed for the use cases tested. The increases were attributed to JVM sizing and to switches that affected garbage collector behavior.
For more information about tuning Java, JConsole, or JVM, visit the web sites noted in Table 4–1 and Table 4–2.
The following sections provide information about tuning Java and the JVM in your Java EE environment.
For information, best practices, and examples related to Java performance tuning, see the Java Tuning White Paper at:
http://java.sun.com/performance/reference/whitepapers/tuning.html
The tuning suggestions noted in this section were derived by using the following tools and JVM options, which were configured in the domain.xml file (located in the domain configuration directory, which is typically domain-dir/config ) on a Sun Java System Application Server.
PrintGCStats – A data mining shell script that collects data from verbose:gc logs and displays information such as garbage collection pause times, parameter calculations, and timeline analyses over the application’s runtime by sampling the data at user-specified intervals.
For more information about how to use this script and garbage collection statistics to derive optimal JVM tunings, see the following web site:
http://java.sun.com/developer/technicalArticles/Programming/turbo/#PrintGCStats
PrintGCDetails – A JVM option (-XX:+PrintGCDetails) that produces more verbose garbage collection statistics.
PrintGCTimeStamps – A JVM option (-XX:+PrintGCTimeStamps) that adds time-stamp information to the garbage collection statistics collected with PrintGCDetails.
To help ensure the best JVM performance, verify the following:
Be sure that you are using the required Java version noted in the “Supported Software and Environments” section of the Oracle Waveset 8.1.1 Release Notes to ensure you are using the most current features, bug fixes, and performance enhancements.
Be sure that you are using a current garbage collector.
Customers frequently leave the older, default garbage collection scheme in place when installing an application server. Waveset creates many objects, and running it with an older garbage collector forces the JVM to constantly collect garbage.
If you deployed Waveset on Sun Java System Application Server, you can increase throughput by adding garbage collection elements to the deployed Waveset instance server.xml file.
If you expect a peak load of more than 300 users, try modifying the following settings to increase performance:
For HTTP listeners configured for the deployed Waveset instance, edit the listener definition element in the server.xml file and set the number of acceptor threads to the Number of active CPUs on the host divided by the Number of active HTTP listeners on the host.
For example:
<http-listener id="http-listener-1" address="0.0.0.0" port="80" acceptor-threads="Calculated Acceptor Threads" ...>
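As a rough illustration of the calculation (this is not Waveset or application-server code, and the listener count of 2 is a hypothetical value for your deployment):

```python
import os

def acceptor_threads(active_http_listeners: int) -> int:
    """Suggested acceptor threads per listener: active CPUs / active HTTP listeners."""
    cpus = os.cpu_count() or 1  # number of active CPUs on the host
    # Keep at least one acceptor thread per listener, even on small hosts.
    return max(1, cpus // active_http_listeners)

print(acceptor_threads(2))
```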
Because the static content of most Waveset deployments is not projected to change frequently, you can edit the File Cache settings (on the File Cache Configuration page) for static content. Specify a high number (such as the number of seconds in 24 hours) for the maximum age of content within the file cache before the content is reloaded.
To access the File Cache Configuration page, click the File Caching tab on the web-based Administrative Console for the HTTP server node. (See the latest Sun Java System Web Server Administrator’s Guide for detailed instructions.)
Sun Java System Application Server exposes tunables that affect the size of various thread pools and connection queues that are maintained by the HTTP container.
By default, most of these tunables are set for a concurrent user load of 300 users or less.
The following guidelines are provided to help you tune your application server:
Other than heap size, you can use the default parameter settings for most application servers. You might want to modify the server’s heap size, depending on the release being used.
The “Tuning the Application Server” chapter, in the latest Sun Java System Application Server Performance Tuning Guide, contains information about tuning a Sun Java System Application Server. This document is available at http://docs.sun.com/app/docs.
In addition, if you are using Sun Java System Application Server 8.2 Enterprise Edition, the following changes solve “concurrent mode failures,” and should give you better and more predictable performance:
If you are constantly running old generation collections, review your application’s heap footprint and consider increasing the size. For example:
500 Mbytes is considered a modest size, so increasing this value to 3 Gbytes might improve performance.
With a 2 Gbyte young generation collection, each scavenge promotes about 70 Mbytes. Consider giving this 70 Mbytes at least one more scavenge before promoting.
For example, you might need the following SurvivorRatio:

2 GB × 2 / 70 MB = 4096 / 70 ≈ 58

which corresponds to settings such as:

-XX:SurvivorRatio=32 -XX:MaxTenuringThreshold=1
This ratio prevents premature promotion and the added problem of “nepotism,” which can degrade scavenge performance.
Suppose you specified -XX:CMSInitiatingOccupancyFraction=60, but the CMS collections still start before they reach that threshold. For example:
56402.647: [GC [1 CMS-initial-mark: 265707K(1048576K)] 1729560K(3129600K), 3.4141523 secs]
Try removing -XX:CMSInitiatingOccupancyFraction=60 (using the default value of 69 percent), and add the following line:
-XX:+UseCMSInitiatingOccupancyOnly
If your young generation collection is 2 Gbytes and the old generation collection is 1 Gbyte, this situation might also be causing premature CMS collections. Consider reversing this ratio. Use a 1 Gbyte young generation collection and a 2 Gbyte old generation collection, as follows:
-Xms3G -Xmx3G -Xmn1G
Also, remove -XX:NewRatio. This ratio is redundant when you have explicitly specified young generation and overall heap sizes.
If you are using a 5uXX version of the Java Development Kit (JDK software), and have excessively long “abortable preclean” cycles, you can use -XX:CMSMaxAbortablePrecleanLoops=5 as a temporary workaround.
You might have to adjust this value further.
Add the following line to view more information about garbage collector performance:
-XX:+PrintHeapAtGC
Use this option with caution because it increases how much verbose garbage collection data is produced. Be sure that you have enough disk space on which to save the garbage collector output.
If you are using the Sun Fire T2000 server, data Translation Look-aside Buffer (DTLB) entries can become a scarce resource with large heaps. Using large pages for the Java heap often helps performance. For example:
-XX:+UseLargePages
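Pulling the flags discussed above together, a Sun JVM option set for a 3 GB heap might look like the following sketch. This is illustrative only: the heap and young-generation sizes, and the choice of the CMS collector, are assumptions that you must validate against your own garbage collection logs.

```
-Xms3G -Xmx3G -Xmn1G
-XX:+UseConcMarkSweepGC
-XX:SurvivorRatio=32 -XX:MaxTenuringThreshold=1
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+UseLargePages
```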
If you are tuning Waveset on an IBM WebSphere application server, consider limiting how much memory is allocated for the heap because heap memory can affect the memory used by threads.
If many threads are created simultaneously and the heap size increases, the application’s space limit can be quickly impacted and the following error results:
JVMCI015:OutOfMemoryError
Waveset relies on a repository database to store and manage its identity and configuration data. For this reason, database performance can greatly influence Waveset’s performance.
Detailed information about performance tuning and managing databases is beyond the scope of this document because this information is dataset-specific and vendor-specific. In addition, customer database administrators (DBAs) should already be experts on their own databases.
This section characterizes the Waveset application and provides general information about the nature of Waveset data and its typical usage patterns to help you plan, tune, and manage your databases.
This information is organized into the following sections:
You must understand the Waveset repository database scripts and how Waveset uses that content before you can effectively tune the Waveset repository.
This section provides an overview of the database and the information is organized into the following topics:
The Waveset repository contains three types of tables, and each table has slightly different usage characteristics.
Attribute Tables. These tables enable you to query for predefined single-valued or multi-valued object attributes.
For most object types, stored attributes are hard-coded.
The User and Role object types are exceptions to this rule. The inline attributes that are stored in the object table for User and Role are configurable, so you can configure additional custom attributes as queryable.
When you search for objects based on attribute conditions, Waveset accesses attribute tables in joins with the corresponding object tables. Some form of join (such as a JOIN, an EXISTS predicate, or a SUB-SELECT) occurs for each attribute condition.
The number of rows in an attribute table is proportional to the number of rows in the corresponding object table. The distribution of values might exhibit skew: multi-valued attributes have a row per value, and some objects might have more attributes or more attribute values than others. Typically, there is a many-to-one relation between rows in the attribute table and rows in the object table.
Attribute tables have ATTR in the table name.
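The row-per-value layout and the join pattern described above can be illustrated with a small sqlite3 sketch. The table and column names here (userobj, userattr, and so on) are simplified stand-ins, not the actual Waveset schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE userobj (id TEXT PRIMARY KEY, name TEXT);
    -- One row per attribute value: many-to-one with userobj.
    CREATE TABLE userattr (id TEXT, attrname TEXT, attrval TEXT);
""")
conn.executemany("INSERT INTO userobj VALUES (?, ?)",
                 [("u1", "alice"), ("u2", "bob")])
conn.executemany("INSERT INTO userattr VALUES (?, ?, ?)",
                 [("u1", "department", "sales"),
                  ("u1", "group", "admins"),      # multi-valued: a row per value
                  ("u2", "department", "finance")])

# An attribute condition becomes an EXISTS predicate joining the two tables.
rows = conn.execute("""
    SELECT o.name FROM userobj o
    WHERE EXISTS (SELECT 1 FROM userattr a
                  WHERE a.id = o.id
                    AND a.attrname = 'department' AND a.attrval = 'sales')
""").fetchall()
print(rows)  # [('alice',)]
```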
Change Tables. Waveset uses a change table to track changes made to a corresponding object table. These table sizes are proportional to the rate of object change, but the tables are not expected to grow without bound. Waveset automatically truncates change tables.
Change tables can be highly volatile because the lifetime of a row is relatively short and new rows can be created frequently.
Access to a change table is typically performed by a range scan on the time-stamp field.
Change tables have CHANGE in the table name.
Object Tables. The Waveset repository uses object tables to hold serialized data objects, such as Large Objects (LOBs). Object tables can also hold commonly queried, single-valued object attributes.
For most object types, stored attributes are hard-coded.
The User and Role object types are exceptions to this rule. The inline attributes that are stored in the object table are configurable, and you can configure additional custom attributes as queryable for User and Role.
The number of rows in an object table equals the number of objects being stored. The number of objects stored in each object table depends on which object types are being stored in the table. Some object types are numerous, while other types are few.
Generally, Waveset accesses an object table by object ID or name, though Waveset can also access the table by using one of the attributes stored in the table. Object IDs and names are unique across a single object type, but attribute values are not unique or evenly distributed. Some attributes have many values, while other attributes have relatively few values. In addition, several object types can expose the same attribute. An attribute may have many values for one object type and few values for another object type. The uneven distribution of values might cause an uneven distribution of index pages, which is a condition known as skew.
Object tables are tables that do not have ATTR or CHANGE suffixes in the table name.
Every object table contains an XML column. Except in the LOG table-set, this column stores each serialized object. In the LOG table-set, certain optional attributes are stored in the XML column when these attributes are present, for example, if digital signing is enabled.
You can roughly divide Waveset data into a number of classes that exhibit similar properties with respect to access patterns, cardinality, lifetime, volatility, and so forth. Each of the following classes corresponds to a set of tables in the repository:
User Data. User data consists of user objects.
You can expect this data to grow quite large because there is an object for each managed identity. After an initial population phase, you can expect a proportionally small number of creates because the majority of operations will be updates to existing objects.
User objects are generally long-lived and they are removed at a relatively low rate.
User data is stored in USEROBJ, USERATTR, and USERCHANGE tables.
Role Data. Role data consists of Role objects, including Roles subtypes such as Business Roles, IT Roles, Applications, and Assets.
Role data is similar to organization data, and these objects are relatively static after a customer deploys Waveset.
An exception to the preceding statement is a deployment that is integrated with an external source containing an authoritative set of roles. One integration style might be to feed role changes into Waveset, which causes Waveset Role data to be more volatile.
Generally, the number of role objects is small when compared to the number of identity objects such as users (assuming that multiple users share each role), but this depends on how each enterprise defines its roles.
Role data is stored in ROLEOBJ, ROLEATTR, and ROLECHANGE tables.
Account Data. Account data solely consists of account objects in the Account Index.
As with user data, account data can become rather large, with an object for each known resource account. Account objects are generally long-lived, removed at a relatively low rate, and after initial population, are created infrequently. Unless you frequently add or remove native accounts, or enable native change detection, account object modifications occur infrequently.
Waveset stores account data in ACCOUNT, ACCTATTR, and ACCTCHANGE tables.
Compliance Violation Data. Compliance Violation data contains violation records that indicate when the evaluation of an Audit Policy failed. These violation records exist until the same Audit Policy is evaluated against the same User and the policy passes. Violation records are created, modified, or deleted as part of an Audit Policy Scan or as part of an Access Review.
The number of violation records is proportional to the number of Audit Policies that are used in scans and the number of Users. An installation with 5000 users and 10 Audit Policies might have 500 violation records (5000 x 10 x 0.01), where the 0.01 multiplier depends on how strict the policies are and how often user accounts change.
Waveset stores Compliance Violation records in OBJECT, ATTRIBUTE, and OBJCHANGE tables.
Entitlement Data. Entitlement data predominately consists of user entitlement objects, which are only created if you are doing compliance access reviews.
Entitlement records are created in large batches, modified slowly (days) after initial creation, and are then untouched. These records are deleted after an Access Review is deleted.
Waveset stores entitlement data in ENTITLE, ENTATTR, and ENTCHANGE tables.
Organization Data. This data consists of object group or organization objects.
Object group data is similar to configuration data, and this data is relatively static after being deployed. Generally, the number of objects is small (one for each defined organization) when compared to task objects or to identity objects such as users or accounts; however, the number can become large compared to other configuration objects.
Organization data is stored in ORG, ORGATTR, and ORGCHANGE tables.
Task Data. Task data consists of objects that are related to tasks and workflows, including state and result data.
The data contained in these tables is short-lived compared to other classes because objects are created, modified, and deleted at a high rate. The volume of data in this table is proportional to the amount of activity on the system.
Task data is stored in TASK, TASKATTR, and TASKCHANGE tables.
Configuration Data. Configuration data consists of objects related to Waveset system configuration, such as forms, roles, and rules.
Generally, configuration data is:
Relatively small compared to other classes
Only expected to change during deployment and upgrade, and changes occur in large batches
Not expected to change much after being deployed
Waveset stores configuration data in ATTRIBUTE, OBJCHANGE, and OBJECT tables.
Export Queue Data. If you enable Data Exporting, some records are queued inside Waveset until the export task writes those records to the Data Warehouse. The number of records that are queued is a function of Data Exporting configuration and the export interval for all queued types.
The following data types are queued by default, and all other data types are not:
ResourceAccount
WorkflowActivity
TaskInstance
WorkItem
The number of records in these tables grows until the export task drains the queue. The current table size is visible through a JMX Bean.
Records added to this table are never modified. These records are written during other Waveset activities, such as reconciliation, provisioning, and workflow execution. When the Data Exporter export task runs, the task drains the table.
Waveset stores Export Queue data records in QUEUE, QATTR, and QCHANGE tables.
Log Data. Log data consists of audit and error log objects. Log data is write-once only, so you can create new audit and error log objects, but you cannot modify these objects.
Log data is long-lived and can potentially become very large because you can only purge log data by explicit request. Access to log data frequently relies on attributes that are stored in the object table instead of in the attribute table. Both the distribution of attribute values and queries against the log specifically depend on how you are using Waveset.
For example, the distribution of attribute values in the log tables depends on the following:
What kind of changes are made
Which Waveset interface was used to make the changes
Which types of objects were changed
The pattern of queries against the log table also depends on which Waveset reports, which custom reports, or which external data mining queries a customer runs against the log table.
Waveset stores audit log records in LOG and LOGATTR tables, and error log records in SYSLOG and SLOGATTR tables. This data does not have corresponding change tables.
Waveset generates globally unique identifiers (GUIDs) for objects by using the VMID class provided in the JDK software.
These GUID values have the property that their string representations sort in the order in which the objects were created. For example, when you create new objects with Waveset, the newer objects have object IDs that are greater than those of the older objects. Consequently, when Waveset inserts new objects into the database, the index based on object IDs can encounter contention for the same block or blocks.
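The property described above can be mimicked with a toy generator (this is not Waveset's VMID format): IDs built from a fixed timestamp plus a zero-padded counter sort, as strings, in creation order, which is exactly what concentrates index inserts on the same trailing index blocks.

```python
import itertools
import time

_counter = itertools.count()
_epoch = int(time.time())

def next_object_id() -> str:
    """Toy creation-ordered ID: fixed-width epoch plus fixed-width counter."""
    return f"{_epoch:012d}:{next(_counter):010d}"

ids = [next_object_id() for _ in range(5)]
# String sort order matches creation order, so new rows always land
# at the "right edge" of an index on this column.
print(sorted(ids) == ids)  # True
```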
Generally, Waveset uses prepared statements for activities such as inserting and updating database rows, but does not use prepared statements for queries.
If you are using Oracle, this behavior can create issues with the library cache. In particular, the large number of statement versions can cause contention on the library cache latch.
To address this contention, change the Oracle CURSOR_SHARING parameter value from EXACT to SIMILAR. Changing this value causes Oracle to replace literals in SQL statements with bind variables, thereby reducing the number of versions.
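The effect of bind variables can be sketched in a few lines: queries built with literals produce a distinct SQL text per value (each a separate statement version), while a parameterized query keeps a single SQL text that the server can reuse. The table and column names are hypothetical.

```python
values = ["alice", "bob", "carol"]

# Literal SQL: one distinct statement text per value, which is what
# fills the library cache with statement versions.
literal_sql = {f"SELECT id FROM userobj WHERE name = '{v}'" for v in values}

# Bind variable: a single statement text, reusable for every value.
bound_sql = {"SELECT id FROM userobj WHERE name = ?" for _ in values}

print(len(literal_sql), len(bound_sql))  # 3 1
```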
Because Waveset is a Java application that generally reads and writes character data rather than bytes, it does not restrict which encoding the database uses.
Waveset only requires that the data is sent and returned correctly. For example, the data does not become corrupted when written or reread. Use an encoding that supports multi-byte characters and is appropriate for the customer’s data. Generally, UTF-8 encoding is sufficient, but enterprises with a large number of true multi-byte characters, such as Asian or Arabic, might prefer UTF-16.
Most database administrators prefer to use an encoding that supports multi-byte characters because of the following:
Their deployments often grow to support international characters.
Their end users cut and paste text from Microsoft applications that contains characters which look like ASCII but are actually multi-byte, such as em dashes (—).
This section describes how to configure some commonly configured properties in the Waveset repository.
Do not modify properties in the Waveset repository unless you are very familiar with repository databases and understand the consequences of making these changes.
If you are using a DataSource, set the connectionPoolDisable attribute to true in the RepositoryConfiguration object to disable automatic internal connection pooling in the Waveset repository.
For example, setting <RepositoryConfiguration connectionPoolDisable=’true’> allows you to avoid having two connection pools (one for Waveset and one for your application server).
You can see the current connectionPoolDisable setting in the ObjectRepository JMX MBean.
You can edit the RepositoryConfiguration object to enhance the performance of searches against specific, single-valued attributes. For example, you might edit this object to add an extended attribute, such as employeeID, that is used to search for Users or as a correlation key.
The default RepositoryConfiguration object looks like the following example:
<RepositoryConfiguration ... >
  <TypeDataStore Type="User" ...
      attr1="MemberObjectGroups"
      attr2="lastname"
      attr3="firstname"
      attr4=""
      attr5="">
  </TypeDataStore>
</RepositoryConfiguration>
The ellipses represent XML attributes that are not relevant here.
Each of the attr1, attr2, attr3, attr4, and attr5 XML attributes specifies a single-valued attribute to be copied into the waveset.userobj table. The waveset.userobj table can contain up to five inline attributes. The attribute value named by attr1 in RepositoryConfiguration will be copied into the “attr1” database column in this table.
Inline attributes are stored in the base object table for a Type (rather than as rows in the child attribute table).
Using inline attributes improves the performance of repository queries against those attributes. (Because inline attributes reside in the main “object” table, queries against inline attributes are faster than those against non-inline attributes, which are stored in the child “attribute” table. A query condition against a non-inline attribute requires a “join” to the attribute table.)
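The difference can be sketched with simplified stand-in tables (not the actual Waveset schema): the inline attribute is a column on the object table and needs no join, while the non-inline attribute requires a join to the attribute table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- attr2 here stands in for an inline copy of the lastname attribute.
    CREATE TABLE userobj (id TEXT PRIMARY KEY, attr2 TEXT);
    CREATE TABLE userattr (id TEXT, attrname TEXT, attrval TEXT);
""")
conn.execute("INSERT INTO userobj VALUES ('u1', 'smith')")
conn.execute("INSERT INTO userattr VALUES ('u1', 'employeeID', 'E100')")

# Inline attribute: a direct predicate on the object table, no join.
inline = conn.execute(
    "SELECT id FROM userobj WHERE attr2 = 'smith'").fetchall()

# Non-inline attribute: requires a join to the attribute table.
joined = conn.execute("""
    SELECT o.id FROM userobj o
    JOIN userattr a ON a.id = o.id
    WHERE a.attrname = 'employeeID' AND a.attrval = 'E100'
""").fetchall()
print(inline, joined)  # [('u1',)] [('u1',)]
```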
By default, Waveset uses the MemberObjectGroups, lastname, and firstname inline attributes.
You can add two more attributes to enable faster searching, as long as those attributes are queryable.
For example, if your deployment contains an employeeID extended attribute, adding that attribute inline will improve the performance of repository searches against that attribute.
If you do not need lastname or firstname, you can remove them or replace them with other attributes.
Do not remove MemberObjectGroups. Waveset uses this attribute internally to speed up authorization checks.
Every public repository method acquires a connection when the method begins and then releases that connection when the method exits. This connection is almost always a JDBC connection because the repository almost always accesses a DBMS. The method releases the connection whether it completes successfully or unsuccessfully.
You can configure the number of concurrent connections by modifying the maxConcurrentConnections value in the RepositoryConfiguration object.
The number of concurrent JDBC connections is unlimited by default (maxConcurrentConnections=0). The Waveset repository can request as many connections as necessary to serve each concurrent repository request.
You specify a maximum number of concurrent connections that the repository can use at one time by changing the maxConcurrentConnections value to a positive integer. In effect, this setting limits the number of calls to the repository that can be actively executing at any one time.
If the repository reaches the configured limit, calls to the repository will wait until the number of connections being used by the repository falls back below the limit. The repository might appear to slow down, or even stop, if this is necessary to stay within the specified number of connections.
Any limit on the number of concurrent connections applies whether the repository pools connections or not. In other words, maxConcurrentConnections is unaffected by connectionPoolDisable.
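The limiting behavior can be modeled with a counting semaphore. This is an illustration of the semantics only, not Waveset's implementation:

```python
import threading

class ConnectionLimiter:
    """Blocks callers once `limit` connections are checked out (0 = unlimited)."""
    def __init__(self, limit: int):
        self._sem = threading.Semaphore(limit) if limit > 0 else None

    def acquire(self):
        if self._sem:        # with a limit configured, callers wait here
            self._sem.acquire()

    def release(self):
        if self._sem:
            self._sem.release()

limiter = ConnectionLimiter(limit=2)
limiter.acquire()
limiter.acquire()
# A third caller would now block until one of the two is released.
blocked = not limiter._sem.acquire(blocking=False)
limiter.release()
print(blocked)  # True
```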
To configure the maxConcurrentConnections value, edit the RepositoryConfiguration object from the System Settings debug page (http://host:port/idm/debug/session.jsp) as follows:
Select Configuration from the menu located next to the List Objects button.
Click the List Objects button to view all of the Configuration-type objects.
Locate RepositoryConfiguration and click Edit.
When the Checkout Object: Configuration, #ID#REPOSITORYCONFIGURATION page displays (similar to the following example), locate maxConcurrentConnections and modify the existing value as needed.
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE Configuration PUBLIC 'waveset.dtd' 'waveset.dtd'>
<!-- MemberObjectGroups="#ID#Top" extensionClass="RepositoryConfiguration"
     id="#ID#RepositoryConfiguration" name="RepositoryConfiguration"-->
<Configuration id='#ID#RepositoryConfiguration' name='RepositoryConfiguration'
    lock='Configurator#1251229174812' creator='getRepositoryConfiguration(boolean)'
    createDate='1249052436640' repoMod='1249048790203'
    type='RepositoryConfiguration'>
  <Extension>
    <RepositoryConfiguration
        instanceOf='com.waveset.repository.RepositoryConfiguration'
        lockTimeoutMillis='300000' maxConcurrentConnections='0'
        blockRowsGet='1000' blockRowsList='10000' maxAttrValLength='255'
        maxLogAcctAttrChangesLength='4000' maxLogMessageLength='255'
        maxLogXmlLength='2147483647' maxLogAcctAttrValueLength='128'
        maxLogParmValueLength='128' maxXmlLength='2147483647'
        maxSummaryLength='2048' optimizeReplaceAttributes='true'
        maxInList='1000' maxDelSet='1000' mcDBCall='10' mcDeleteAttrVal='3'
        mcInsertAttrVal='15' mcUpdateAttrVal='20' connectionPoolDisable='false'
        changeScanInterval='5000' changePropagationDelay='3000'
        changeAgeOut='60000'>
      <RelocatedTypes>
        <TypeDataStore typeName='User' deleteDestroyInterval='1'
            attr1='MemberObjectGroups' attr2='lastname' attr3='firstname'>
        </TypeDataStore>
      </RelocatedTypes>
    </RepositoryConfiguration>
  </Extension>
  <MemberObjectGroups>
    <ObjectRef type='ObjectGroup' id='#ID#Top' name='Top'/>
  </MemberObjectGroups>
</Configuration>
Save your changes.
For more information about which object types are stored in each set of tables, see Data Classes.
JDBC supports execution of two types of SQL statements:
Statement
PreparedStatement
Generally, a PreparedStatement is more efficient for the DBMS because the DBMS server can cache and reuse a PreparedStatement for multiple executions, avoiding the cost of SQL parsing and optimizer selections. To get the benefit of this caching, Waveset associates a PreparedStatement with a JDBC connection. When Waveset is going to issue the same SQL statement multiple times using the same connection, it automatically uses a PreparedStatement.
However, most of the JDBC operations performed by Waveset do not share a connection between JDBC calls. Most JDBC operations are the result of the application server processing an HTTP request through a JSP, so that the processing thread acquires a JDBC connection when needed and releases that connection when it is done. Consequently, you must do some additional work to get the benefits of using PreparedStatements.
Many application servers provide DataSource implementations that provide both connection pooling and implicit statement caching. Implicit statement caching is the ability of the DataSource or connection pool to implicitly cache JDBC statements and associate them with the connection.
If you configure Waveset to use an application server DataSource, you must have both of these features to benefit from using Waveset's preferPreparedStatement repository option. If you are not using an application server DataSource, if the DataSource is not configured to use a connection pool, or if the connection pool does not support implicit statement caching, using preferPreparedStatement causes decreased performance because the DBMS cannot reuse the statement.
Waveset code is not structured to associate a statement with a connection, except in a few circumstances, so Waveset depends on the DataSource or connection pool to perform this caching. If you configure the Waveset repository to use a DataSource or connection pool that performs implicit statement caching, and you enable preferPreparedStatement=true in the RepositoryConfiguration configuration object, Waveset can use PreparedStatement for object lock/unlock operations and most get operations, which potentially results in performance improvements. The Waveset ObjectRepository has a JMX MBean that indicates the setting of the preferPreparedStatement switch and the number of Statement/PreparedStatement requests made by the Waveset server.
You can see the current preferPreparedStatement setting in the ObjectRepository JMX MBean.
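To illustrate why preferPreparedStatement depends on the connection pool, the following is a minimal, self-contained sketch (all class names are hypothetical, not Waveset or JDBC API) of how an implicit statement cache works: statements are keyed by their SQL text, so a statement repeated on the same connection is parsed once and reused.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a per-connection implicit statement cache,
// as provided by some DataSource/connection-pool implementations.
public class StatementCacheSketch {
    // Stand-in for a cached PreparedStatement.
    static class CachedStatement {
        final String sql;
        int executions = 0;
        CachedStatement(String sql) { this.sql = sql; }
    }

    private final Map<String, CachedStatement> cache = new HashMap<>();

    // Mirrors the idea behind Connection.prepareStatement(sql):
    // return the cached statement when the SQL text matches.
    CachedStatement prepare(String sql) {
        return cache.computeIfAbsent(sql, CachedStatement::new);
    }

    public static void main(String[] args) {
        StatementCacheSketch pool = new StatementCacheSketch();
        CachedStatement a = pool.prepare("SELECT id FROM obj WHERE name = ?");
        CachedStatement b = pool.prepare("SELECT id FROM obj WHERE name = ?");
        a.executions++;
        b.executions++;
        // Same SQL text -> same cached statement, parsed only once.
        System.out.println(a == b);       // true
        System.out.println(a.executions); // 2
    }
}
```

Without this caching layer, each request prepares a fresh statement, which is why preferPreparedStatement degrades performance when the pool cannot cache.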
This section describes some general guidelines for tuning a repository database:
Update optimizer statistics frequently so that the database has accurate information about the repository tables.
Defragment indexes and tables regularly.
Report top queries.
Preallocate the user table to be sure it is large enough.
Estimate how many users you have and make the table big enough to accommodate the data. If you just allow the table to extend itself, loading users can become a very slow process because the table must periodically stop to double its size, reorganize itself, and reinsert all of the data.
Similarly, preallocate the accounts table to be sure it is large enough.
Estimate how many accounts you have per user and make the table large enough to accommodate the data.
Oracle's Professional Services group and Partners have estimator tools that you can use to estimate user table and accounts table sizes.
Periodically run Waveset's built-in Audit Log Maintenance Task and System Log Maintenance Task to configure log record expirations. Log records can grow without bound, so use these tasks to prevent the repository database from running out of space. For information, see the Oracle Waveset 8.1.1 Business Administrator’s Guide.
Keep in mind that the task tables are volatile: rows are inserted and deleted constantly. DBAs should consider defragmenting the database more frequently, consider what type of storage to use, and leave plenty of free space in the database.
This section describes some vendor-specific guidelines for tuning Oracle and SQL Server repository databases.
Currently, MySQL databases are only supported in development and for demonstrations.
This section describes guidelines for tuning Oracle repository databases:
The Waveset application does not require Oracle database features or options.
If you are using an Oracle repository database and Oracle Waveset Service Provider or Waveset, you might encounter problems with object table fragmentation because Waveset uses LONG, rather than LOB, data types by default. Using LONG data types can result in large amounts of “unallocated” extent space, which cannot be made into usable space.
To mitigate this problem, do the following:
Take EXPORT dumps of the Object table and re-import them to free up unallocated extent space. After importing, you must stop and restart the database.
Use LOB data types and DataDirect Technologies’ Merant drivers, which provide a standard LOB implementation for Oracle.
Use Locally Managed Tablespaces (LMTs), which offer automatic free space management. LMTs are available in Oracle 8.1.5 and later.
Waveset does not require Oracle init.ora parameter settings for SGA sizing, buffer sizing, open cursors, processes, and so forth.
While the Waveset repository is a general-purpose database, it is best described as an object database.
Of the Waveset tables, the TASK table-set comes closest to having transaction-processing characteristics. The LOG and SYSLOG table-sets are also exceptional because these tables do not store serialized objects.
See Repository Table Types and Data Classes for descriptions of the tables, the object types stored in each table, and the general access pattern for each table.
If you have performance issues with the Oracle database, check for issues related to poor query plans being chosen for what Waveset expects to be relatively efficient queries.
For example, the optimizer might choose a full table scan even when a suitable index is available. These issues are often visible in the "SQL ordered by Buffer Gets" section of Automatic Workload Repository (AWR) reports. You can also view issues in the Enterprise Manager tool.
Performance problems typically appear to be the result of bad or missing database table statistics. Addressing this problem improves performance for both the database and Waveset.
The following articles (available from Oracle) are a good source of information about the cost-based optimizer (CBO) in Oracle:
Oracle MetaLink: Note:114671.1: Gathering Statistics for the Cost Based Optimizer
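The statistics-gathering advice above can be applied with the DBMS_STATS package; the following is a sketch in which the schema name WAVESET is an assumption (use your actual repository owner).

```sql
-- Gather fresh optimizer statistics for the repository schema.
-- 'WAVESET' is a hypothetical owner name; CASCADE also gathers
-- statistics on the schema's indexes.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'WAVESET',
    cascade => TRUE);
END;
/
```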
You might also investigate using SQL Profiles, which are another method for choosing the best query plans. You can use the SQL Advisor within Enterprise Manager to create these profiles when you identify poorly performing SQL.
If you detect unexpected growth in the Oracle redo log, you might have workflows that are caught in an infinite loop with a manual action. The loop causes constant updates to the repository, which in turn causes the size of each TaskInstance to grow substantially. The workflow errors are caused by improper handling of WF_ACTION_TIMEOUT and by users closing their browser in the middle of a workflow.
To prevent problematic workflows, preview each manual action before a production launch and verify the following:
Have you set a timeout?
Have you created appropriate transition logic to handle a timeout for the activity with the manual action?
Is the manual action using the exposed variables tag when there is a large amount of data in the TaskInstance?
Frequently, you can significantly improve Waveset performance if you change the CURSOR_SHARING parameter value from EXACT to SIMILAR.
Waveset uses prepared statements for some activities (such as inserting and updating database rows), but does not use these statements for most queries.
When you use Oracle, this behavior can cause issues with the library cache. In particular, the large number of statement versions can create contention on the library cache latch. Changing CURSOR_SHARING to SIMILAR causes Oracle to replace literals in SQL statements with bind variables, which greatly reduces the number of versions.
See Prepared Statements for more information.
Some customers who used an SQL Server 2000 database as a repository reported that as concurrency increased, SQL Server 2000 reported deadlocking problems that were related to SQL Server’s internal use of pessimistic locking (primarily lock escalation).
These deadlock errors display in the following format:
com.waveset.util.IOException: ==> com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 51) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
To prevent or address deadlocking problems, do the following:
Use the SQL Server 2005 database.
Configure the READ_COMMITTED_SNAPSHOT parameter by formatting the command as follows:
ALTER DATABASE waveset SET READ_COMMITTED_SNAPSHOT ON
Enabling the READ_COMMITTED_SNAPSHOT parameter does the following:
Removes contention during the execution of SELECT statements that can cause blocks, which greatly reduces the potential for deadlocks internal to SQL Server.
Prevents uncommitted data from being read and guarantees that SELECT statements receive a consistent view of committed data.
For more information about the READ_COMMITTED_SNAPSHOT parameter, see http://msdn.microsoft.com/en-us/library/ms188277.aspx.
Suggestions for optimizing Waveset’s performance are organized into the following areas:
In general, you can optimize Waveset performance if you do the following:
Turn off tracing (such as Java class, userform, and workflow tracing). Tracing can add substantial overhead.
Run the Waveset built-in Audit Log Maintenance Task and System Log Maintenance Task to configure log record expirations. Log records can grow without bound, so use these tasks to prevent the repository database from running out of space. For information, see the Oracle Waveset 8.1.1 Business Administrator’s Guide.
Check the README file in Waveset updates (formerly called service packs or installation packs) to see if any performance improvements have been made to the product. If so, schedule an upgrade.
Consider the performance impact when fetching data from one or more remote systems, including the Waveset repository.
Increase the number of application server instances running Waveset, either on the same server or by adding servers, and use a load-balancing tool to distribute the requests between instances.
Keep the size of files referenced in a binary attribute as small as possible. Loading extremely large graphics files, for example, can decrease Waveset performance.
Write robust and readable XML that minimizes duplication (for example, by refactoring common logic), uses memory efficiently, and mitigates the impact on overall system performance.
Configure Waveset system monitoring to track events in real time.
You can view these events in dashboard graphs to quickly assess system resources, spot abnormalities, understand historical performance trends (based on the time of day, the day of week, and so forth), and interactively isolate problems before looking at audit logs. Dashboards do not provide as much detail as audit logs, but can provide hints about where to look for problems in the logs.
For more information about dashboards, see Chapter 8, Reporting, in Oracle Waveset 8.1.1 Business Administrator’s Guide.
Because synchronization is a background task, how you configure an Active Sync adapter can affect server performance.
Use the Resources list to manage Active Sync adapters. Choose an Active Sync adapter and access start, stop, and status refresh control actions from the Synchronization section of the Resource Actions list.
To improve Active Sync adapter performance, do the following:
Evaluate and adjust polling intervals based on the type of activity being performed.
The polling interval determines when the Active Sync adapter will start processing new information. For example, if the adapter reads in a large list of users from a database and updates these users in Waveset each time, you could run this process in the early morning every day. Some adapters have a quick search for new items to process and can be set to run every minute.
Edit the synchronization file for the resource to specify the host where the adapters will run.
You can configure Active Sync adapters that require more memory and CPU cycles to run on dedicated servers to help load balance the systems.
If you have the appropriate administrator capability, you can change Active Sync resources to disable, manually start, or automatically start Active Sync adapters.
When you set an adapter to automatic, the adapter restarts when the application server starts. When you start an adapter, it runs immediately and executes at the specified polling interval. When you stop an adapter, it stops the next time the adapter checks for the stop flag.
Adjust the level of detail captured by the synchronization logs.
Synchronization logs capture information about the resource that is currently processing. Each resource has its own log file, path, and log level. The amount of detail captured by the adapter log depends on the specified logging level. You specify these values in the Logging section of the Synchronization Policy for the appropriate user type (Waveset or Service Provider).
Waveset provides two different queries that you can use to determine which Waveset users have Administrative rights. One of these queries can be slow to execute when there are lots of users in the Oracle Waveset repository. Waveset executes this slow query during AdminCache initialization, which occurs during Waveset startup.
Waveset now uses the faster query by default, resulting in much faster server start-up times for installations with large numbers of users in the repository. However, under certain conditions, the faster query might not produce correct results. If you upgraded your installation from version 5.0, some Administrative users might not have the attribute used by the new query set correctly.
Refresh all Administrative users by importing the following XML, which sets the new query attribute properly:
<ImportCommand type='refreshType' targetType='User'>
  <List>
    <AttributeCondition attrName='MemberAdminGroups' operator='isPresent' operand=''/>
    <AttributeCondition attrName='ControlledObjectGroups' operator='isPresent' operand=''/>
    <AttributeCondition attrName='AdminRoles' operator='notPresent' operand=''/>
  </List>
</ImportCommand>
<ImportCommand type='refreshType' targetType='User'>
  <List>
    <AttributeCondition attrName='AdminRoles' operator='isPresent' operand=''/>
  </List>
</ImportCommand>
Alternatively, disable the new query and continue to use the old query by adding the following line to Waveset.properties on each Waveset server: admincache.fastinit=false.
To improve performance during bulk load operations, do the following:
Simplify default workflows to improve processing time (especially for bulk processing actions such as Active Sync, bulk actions, and reconciliation) by removing the callout to the Approval subprocess.
Keep user forms that are assigned to administrators as simple as possible. For example:
When creating a form for data loading, remove any code that is designed to display data.
When using bulk add actions, be sure that your CSV file defines basic attributes such as firstname and lastname. You can then remove these attributes from the administrator user form.
Do not modify the default forms provided with Waveset. Instead, make a copy of the form, give the copy a unique name, and modify the renamed copy. This approach prevents your customized forms from being overwritten during upgrades and product updates.
See Chapter 2, Waveset Forms, in Oracle Waveset 8.1.1 Deployment Reference for more information about creating and editing forms.
Implement the following features in deployment environments where you have NIS (Network Information Service) implemented:
Add an account attribute named user_make_nis to the schema map and use this attribute in your reconciliation or other bulk provisioning workflow. Specifying this attribute causes the system to bypass the step of connecting to the NIS database after each user update on the resource.
To write the changes to the NIS database after provisioning has completed, create a ResourceAction named NIS_password_make in the workflow.
Configurable XML objects offer a broad spectrum of user interface specifications that enable you to define how data is presented to users for different tasks and to automate complex business processes. However, this same flexibility can affect efficiency, performance, and reliability.
This section describes some guidelines for tuning Waveset’s configurable XML objects, which consist of forms, rules, and workflows. The information is organized into the following sections:
You can use Waveset forms to define interfaces to interact with views or variable contexts in an executing task. Forms also provide an execution context for business and transformation logic on a set of data elements. Although you can create very powerful, dynamic forms that perform a variety of tasks, reducing the complexity of forms increases efficiency.
The following sections describe some methods for improving the performance of your customized forms:
When designing new Waveset forms, system integrators can optimize a form’s performance by doing the following:
Performing “expensive” queries only one time, wherever possible. To minimize these queries,
Use <Field> <Default> elements to execute and store query results.
Use field names to reference values in later fields.
For custom tasks
Calculate the value in the task before a ManualAction, then store that value in a task variable.
Use variables.tmpVar to reference variables in the form.
Use <setvar name='tmpVar'/> to clear the variable after a ManualAction.
Using <defvar> for calculations that are performed for the initial display and with each refresh.
To improve the performance of administrator forms, do the following:
Specify TargetResources that only fetch specific resources for editing. (See Tuning Workflows for more information.)
Use cacheList and cacheTimeout caching parameters for objects that change infrequently if you are working with FormUtil.getResourceObjects or FormUtil.listResourceObjects.
Store the results of time-consuming calculations and fetches in <Field> elements and evaluate in the <Default> expression to help ensure that an operation occurs only one time.
Use update.constraints to limit which resources are fetched at runtime (see Dynamic Tabbed User Form in Oracle Waveset 8.1.1 Deployment Reference).
Use background approval (ManualAction with different owners and one-second timeouts) for faster page submissions.
Be aware that Waveset refreshes all fields defined on all panels of a Tab Panel Form when the page reloads, regardless of which panel is selected.
To improve the performance of end-user forms, do the following:
Use TargetResources to limit view checkouts to just those resource accounts of interest, which reduces fetch time for view and the memory consumed by TaskInstance and WorkItems.
Consider using Session.getObject(Type, name) to return a WSUser if just the view properties and attributes of the Waveset user object are of interest (useful for managing multiple deferred task triggers).
Be aware that end-user tasks typically have more WorkItems than Provisioning tasks, so end user tasks are especially susceptible to WorkItem size.
Consider using temporary generic objects for “view” editing that is constructed on view check-out then merged back into a full view for check-in.
Consider using scalable forms instead of the default Create and Edit User interfaces.
When you use the default User forms to edit a user, Waveset fetches the resources owned by that user the moment you start editing the user’s account. In deployment environments where users have accounts on many resources, this potentially time-intensive operation can result in performance degradation.
Some activities performed in forms call resources that are external to Waveset. Accessing these resources can affect Waveset performance, especially if the results contain long lists of values, such as compiling a list of groups or email distribution lists.
To improve performance during these calls, follow the guidelines in “Using a Java Class to Obtain Field Data” in Oracle Waveset 8.1.1 Deployment Reference.
Also, avoid using JavaScript in performance-critical expressions such as <Disable> expressions. Short XPRESS expressions are easier to debug if you use the built-in tracing facilities. Use JavaScript for complex logic in workflow actions.
If a form is slow to display, you can use the debug/Show_Timings.jsp page to determine the problem. Look for calls to FormConverter.convertField(). This method shows how long each field took to compute its value.
You can use the FormConverter JMX MBean to identify specific fields in a form that are slow to compute or render to HTML.
You use Waveset rules to encapsulate constants and XPRESS logic that can be reused in forms, workflows, and other configurable components in the product.
When writing rules, use the following guidelines (as applicable) to obtain optimal performance:
Use static declarations to return a constant value.
Use defvar to hold temporary values, such as incremented counters or values that are referenced only one time.
Use putmap, setlist, or setvar methods for complex or expensive calculations whose value must be returned multiple times. Be sure to eventually set the value to <null>.
You can use the Rule JMX MBean to identify rules that are executing slowly.
You customize Waveset workflows to facilitate and automate complex business processes with various human and electronic touchpoints.
You can use the following methods to improve custom workflow performance:
Simplify default workflows to improve processing time (especially for bulk processing actions such as Active Sync, bulk actions, and reconciliation) by removing the callout to the Approval subprocess.
Ensure that no infinite loops exist in your workflows. In particular, be sure that break flags are updated and properly checked in the loops that exist in approval subprocesses.
Put fetched objects into a variable for use later if you must contact the repository for the same object multiple times.
Using a variable is necessary because Waveset does not cache all objects.
Specify TargetResources options in WorkflowServers checkoutView to restrict the number of resources that are queried for account information.
The following example shows how to restrict the number of resources being queried for account information.
<Argument name='TargetResources'>
  <list>
    <string>resource name[| #]</string>
  </list>
</Argument>
In the preceding example, [| #] is an optional parameter that you can use when more than one account exists on a particular resource. In most cases, the resource name is sufficient.
Clear unnecessary view variables left by forms, especially large maps and lists. For example:
<setvar name='myLargeList'><null/></setvar>
The view is copied multiple times in a TaskInstance object, so large views greatly increase the size of each TaskInstance and corresponding TaskResult.
Use resultLimit (in seconds) in the TaskDefinition, or set this option during task execution to quickly dispose of completed tasks. Large numbers of TaskInstances impact the following:
How taskResults.jsp in footers and some JSP tasks in the Administrator interface are displayed
How JSP tasks are displayed
Querying each TaskInstance for task renaming
Database size
Set the following options as needed:
(Preferred selection) delete — Causes an older TaskInstance of the same name to be deleted before the new task begins execution.
wait — Suspends the new TaskInstance until the older TaskInstance is deleted or expires after reaching its resultLimit.
rename — Inserts a time stamp into the TaskInstance name to avoid naming collisions.
terminate — Terminates each currently executing TaskInstance of the same name, then deletes it before the new task begins execution.
Using the SaveOnlyOnError attribute (see the following description) with the resultLimit attribute can significantly improve performance.
Use the SaveOnlyOnError attribute in TaskDefinitions and TaskInstances to configure a task that is persisted only when errors or warnings occur.
If you set SaveOnlyOnError to true, and the resultLimit value is non-zero, Waveset persists the task if there is an error or a warning. If the resultLimit value is 0, Waveset will not persist the task, even if an error or warning occurs.
If you set SaveOnlyOnError to false, Waveset just considers the resultLimit value. (The default value is false.)
The number and size of WorkItems (indicated by ManualActions in a workflow) can affect memory and system performance significantly. By default, Waveset copies an entire workflow context into a WorkItem, then writes the workflow context back out after submission.
To improve performance for WorkItems and ManualActions do the following:
Reduce the size of WorkItems.
By default, ManualAction creates a WorkItem, then copies each variable in the task context into WorkItem.variables. Limiting task context variables prevents overwrites from parallel approvals.
Use ExposedVariables to limit which variables are copied back into WorkItem. For example:
<ExposedVariables><List><String>user ...
Use EditableVariables to limit the variables assimilated back into the executing task from WorkItem. For example:
<EditableVariables><List><String>user ...
Remember to include an approval flag, a form button value, and the actual approver’s name.
Change the confirmation page and background processing to improve user interface response time.
Create a confirmation ManualAction or background ManualAction, owned by another user such as Configurator.
Set timeout=’-5’ (5 seconds) and ignoreTimeout=’true’ to prevent an error message if a user submits an action after the task is executed and the WorkItems are deleted.
Optimize memory use by setting large attribute values, such as value maps and lists, to null on submission or instantiate them as Activity-scoped variables that quickly pass out of scope.
Shorten the lifetime of finished tasks.
Prevent dead-end tasks by ensuring that each WorkItem specifies a Timeout and that the workflow anticipates a Timeout for each WorkItem.
Consider using the resultLimit and resultOption options in the TaskDefinition to determine how the Scheduler handles a task after the task completes.
Use resultLimit to control how many seconds a task is allowed to live after the task has completed. The default is zero (0), which means that the task instance will be deleted immediately after task completion.
Use resultOption to specify what action to take when repeated instances of a task are started (such as wait, delete, rename, or terminate). The default is delete.
If you want to immediately delete tasks that complete successfully, but you also want to keep tasks containing errors long enough to debug, you can conditionally delete finished tasks.
Set a resultLimit in the TaskDefinition to a sufficient time period to debug issues. You can set resultLimit to zero (or a small value) if no errors are reported at runtime (such as WF_ACTION_ERROR is <null/>) after a WorkflowServices call.
Evaluate and fix poorly scoped variables. Scope variables according to where they are declared, as follows:
Global variables are values that must be used across many activities (such as the case owner, view) and as approval flags in subprocesses.
If a variable is declared as an element of <TaskDefinition>, scope the variable globally. If a variable is declared external, its value is resolved up the call stack.
Activity variables of expensive values (such as those variables that require a resource fetch or that store a large list or map of values) can be referenced in a WorkItem.
If a variable is declared as an element of <Activity>, ensure that the variable is visible to actions and transition elements in Activity.
Beginning with Waveset Version 2005Q3M1 SP1, use <Activity> variable values in Forms, rather than in workflows, to avoid copying values on WorkItem creates.
Activity variables are values used in transition logic.
If a variable is declared as an element of <Action>, the variable should pass out of scope on completion of the action. Action variables are typically used in WorkflowApplication invocations to “change” the names of variables set by the application (such as View -> User).
Do not specify synchronous execution (syncExec='true') for the last page in a wizard workflow.
If set to true, the task will run to completion when the user executes the task. The interface will hang until the task completes or encounters another stopping point (such as another ManualAction).
Remove unnecessary approval checks.
For Active Sync, use a streamlined provisioning task in place of the system-specified provisioning task specified by viewOptions.Process.
Do not modify the provisioning tasks provided with Waveset.
You must create a new task, then identify that task in the form and in the process mappings configuration (unless the task is not listed).
Sometimes automating a process requires a wizard. A wizard is a multi-step GUI operation where each step presents the user with a page used to capture or display data. Waveset provides two techniques for building a wizard:
Form-Based wizards. These wizards use form processing to change what the user sees, allowing sets of fields to be visible or invisible based on data held in the view.
Workflow-Based wizards. These wizards require task execution and suspension to provide page transitions.
A Form-Based wizard is roughly four times more efficient than the best Workflow-Based wizard, and Workflow-Based wizards themselves can vary by up to 10x in performance, depending on how they are constructed. Wizard efficiency is typically not noticeable until many wizards are run concurrently.
Because view processing does not necessarily require object repository access, a single Waveset server can process many Form-Based wizards concurrently without contention. However, Form-Based wizards are limited in what they can do between steps. The only processing these wizards can perform between steps is done by normal form or view processing. Form derivation and expansion, which are done when the view is being refreshed, are more limiting than the processing that is possible in a workflow. If a Form-Based wizard's processing limits preclude its use, you must use a Workflow-Based wizard. However, you should consider using a Form-Based wizard first.
Because Workflow-Based wizards require task execution and suspension to provide page transitions, and because each page in a Workflow-Based wizard corresponds to a workflow ManualAction object, these wizards are less efficient than Form-Based wizards. A problem occurs when the wizard has hundreds of concurrent invocations because all of the task start, suspend, and resume operations must contend for the object repository task table.
The object repository is being accessed synchronously with each HTTP request; consequently, a Workflow-Based wizard does not scale to large numbers of concurrent executions due to repository contention. Each start, suspend, and resume operation involves several reads and writes to the task table in the repository, and under a large concurrent load results in the page-to-page response of the GUI slowing down. Unlike a Form-Based wizard that scales by adding more Waveset servers and balancing the HTTP load between them, Workflow-Based wizards slow down due to repository contention. Adding more Waveset servers actually makes the problem worse because the repository is shared between all servers.
If you must use a Workflow-Based wizard because you need processing between page transitions, consider using transient ManualActions. A ManualAction in a Workflow-Based wizard is the mechanism used to display pages to the user. A five-step wizard typically has at least five ManualActions.
If you construct the wizard so that the user must either complete or abort the entire flow, then you can mark the ManualActions with the transient='true' attribute. Adding this attribute allows Waveset to bypass the repository, keeping the task in memory and sequencing it without accessing the object repository. This construction decreases the load the wizard puts on the repository, allowing it to scale to higher concurrency loads. However, using this attribute has some drawbacks. If you set transient to true (it is false by default), you cannot restart the workflow after the task enters the transient section because the in-memory state of the workflow no longer matches the repository state. Also, each HTTP request for the wizard must use the same server because the true state of the wizard is now kept only in memory on the initiating server. As soon as Waveset encounters a ManualAction with transient=false, Waveset writes the workflow to the repository normally, and normal workflow behavior resumes.
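For example, a wizard page can be marked transient directly on its ManualAction element. The following is a sketch only; the activity, form, and transition names are illustrative, and the surrounding workflow elements are omitted:

```xml
<!-- Sketch: a transient wizard page; all names here are illustrative -->
<Activity id='2' name='page2'>
  <ManualAction id='0' name='page2Action' transient='true'>
    <FormRef>
      <ObjectRef type='UserForm' name='Wizard Page 2 Form'/>
    </FormRef>
  </ManualAction>
  <Transition to='page3'/>
</Activity>
```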
Consider using a Workflow-Based wizard with the following structure:
Begin
  Activity 1
    ManualAction 1 (transient = true)
  Activity 2
    ManualAction 2 (transient = true)
    ManualAction 3 (transient = true)
    ManualAction 4 (transient = true)
    ManualAction 5 (transient = false)
  Activity 3
  ...
When launched, this wizard creates a task that is initially stored in the object repository. All processing between ManualAction 1 and ManualAction 5 is done without any further repository work (the transient section). When Waveset executes ManualAction 5, the task is again stored in the repository (normal behavior), which is a significant performance savings because each normal suspend/resume for a ManualAction does the following:
Creates and stores a WorkItem
Saves the task state
Reads/writes the WorkItem
Locks the task
Resumes the task
Deletes the WorkItem
Each suspend/resume pair (when in wizard mode) results in more than 20 repository read/writes, all on the task table. The state of the task in the repository is running, so if the server crashes or shuts down during the execution of the wizard, the task will be deleted from the repository within a few minutes by another server or when the crashed server restarts, whichever comes first.
If you use a transient ManualAction, you can observe the effects by looking at the JMX TaskInstanceCache and WorkItemCache MBeans. These MBeans show the number of Store (repository write) operations compared to Cache (memory only) operations occurring. Each Cache operation means a Store was avoided, thus reducing object repository contention.
As a database administrator, you should gather statistics frequently to monitor your repository database.
Performance problems are often caused by bad or missing database table statistics. Fixing this problem improves both database and Waveset performance.
See the following Oracle articles for more information:
Also consider using SQL Profiles, which is another method for choosing the best query plans. You can use the SQL Advisor within Enterprise Manager to create these profiles when you identify poorly performing SQL.
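For example, on an Oracle repository you might refresh statistics for the repository schema with the DBMS_STATS package. This is a sketch; the schema name WAVESET is an assumption, so substitute your repository owner:

```sql
-- Gather fresh optimizer statistics for the Waveset repository schema
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'WAVESET',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- also gather index statistics
END;
/
```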
Data Exporter enables you to export new, changed, or deleted Waveset data to an external repository that is suitable for reporting or analytic work. The actual exporting of data is done in batches, where each type of data to be exported is able to specify its own export cycle. The data to be exported comes from the Waveset repository and, depending on the length of the export cycle and the amount of changed data, the volume of exported data can be large.
Some Waveset data types are queued into a special table for later export. Specifically, WorkflowActivity and ResourceAccount data is queued because this data is not persisted otherwise. Any persisted data type can also be queued if the warehouse needs to see all changes to the type, or if the type has a lifecycle that does not correspond to the export cycle, such as TaskInstance and WorkItem data.
To maximize performance, only queue and export the types of data that you require in the warehouse. Data exporting is disabled by default, but if you enable data exporting, it exports all data types. Turn off any data types that you do not need.
When the export task exports data, the task attempts to complete the export as quickly as possible, using multiple threads to achieve as much throughput as possible. Depending on the I/O speed of the Waveset repository and the warehouse, the export task can fully utilize the processors on the Waveset server, which causes any interactive performance to degrade. Ideally, the export should occur on a machine dedicated to that task or at least occur during periods when there is no interactive activity on the machine.
The export task supports the following tuning parameters:
Queue read block size
Queue write block size
Queue drain thread count
The drain thread count has the greatest effect on throughput. If a large number of records are in the queue table, increasing the number of threads (up to 24) tends to increase throughput. However, if the queue is dominated by one type of record, fewer drain threads might actually be faster. The export task attempts to divide the queue table contents into as many sets as there are threads allocated, and to give each thread a set to drain. Note that these threads are in addition to the drain threads that are draining the other repository tables.
You can usually optimize the general XML by using static XMLObject declarations wherever possible. For example, use:
<List> instead of <list>
<String> instead of <s>
<Map><MapEntry ...></Map> instead of <map>
Also, depending on the context, you might have to wrap objects instead of using the <o></o> element.
You can use Waveset dashboard graphs to quickly assess the current system, spot abnormalities, and understand historical trends (such as concurrent users or resource operations over a time period) for Oracle Waveset Service Provider (Service Provider).
Service Provider does not have an Administrator interface. You use the Waveset Administrator interface to perform almost all administrative tasks (such as viewing dashboard graphs).
For more information about tuning Service Provider see Oracle Waveset Service Provider 8.1.1 Deployment.
When you are working with the Waveset Web Interface, you can optimize performance by using the OpenSPML toolkit that is co-packaged with Waveset.
Using the openspml.jar file from the http://openspml.org/ web site might cause memory leaks.
To improve performance during a large, initial user load, follow this procedure:
Disable all Audit Events from the Waveset Administrator interface.
Audit Logging can add several records per operation, making future audit reports perform more slowly.
Choose Configure -> Audit.
On the Audit Configuration page, deselect the Enable auditing box and click Save.
Disable the list cache by shutting down the web server or by changing the ChangeNotifier.updateSearchIntervalCount property (on the debug/Show_WSProp.jsp debug page) to 0.
The list cache keeps lists of users in frequently accessed organizations in memory. To maintain these lists, the list cache searches for and checks all newly created users.
Clear the current list cache on the debug/Clear_List_Cache.jsp page.
Ensure that the workflow being used to process the users does not contain approvals.
Use alternative load methods, which include:
Splitting the load and running the data in zones
Using bulk loads, which are much faster
Loading from a file
Disable Data Exporter for the WorkflowActivity type.
You must determine your memory needs and set values in your application server’s JVM by adding maximum and minimum heap size to the Java command line. For example:
java -Xmx512M -Xms512M
To improve performance do the following:
Set the maximum and minimum heap size values to the same size.
Depending on your specific implementation, you might want to increase these values if you run reconciliation.
For performance tuning purposes, you can also set the following in the Waveset.properties file:
max.post.memory.size value
The max.post.memory.size property specifies the maximum number of bytes that a posted file (for example, one uploaded through an HTML FileSelect control) can contain without being spooled to disk. For cases where you do not have permission to write temporary files, increase max.post.memory.size to avoid spooling to disk. The default value is 8 Kbytes.
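For example, to keep posted files of up to 64 Kbytes in memory (the value shown is illustrative, not a recommendation):

```properties
# Waveset.properties: allow posts up to 64 KB without spooling to disk
max.post.memory.size=65536
```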
For additional information about system requirements, see the Oracle Waveset 8.1.1 Release Notes.
For information about tuning Solaris and Linux operating system kernels, see the “Tuning the Operating System” chapter in the Sun Java System Application Server Enterprise Edition Performance Tuning Guide.
For information about tuning Oracle operating system kernels, see the product documentation provided with your Oracle system.
Each Waveset server captures profiling data by default. You can use this data with the Waveset IDE to diagnose a large range of performance problems. However, capturing and storing this profiling data adds a measurable load to the server, which consumes both memory and CPU. In a stable production environment, disable the profiler in production servers. Enable the profiler only when you are investigating a performance problem.
For information about using the Waveset IDE, see Identity Manager IDE Frequently Asked Questions (FAQ) in Oracle Waveset 8.1.1 Release Notes.
The following example shows how to disable the Profiler from capturing data.
Import the following XML to disable the profiler.
Setting the attribute value to true disables the profiler, and setting it to false enables the profiler.
Use the lh import command to import the following XML.
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE Waveset PUBLIC 'waveset.dtd' 'waveset.dtd'>
<Waveset>
  <ImportCommand class='com.waveset.session.SystemConfigurationUpdater'>
    <Object>
      <Attribute name='server.default.disableProfiling'>
        <Boolean>true</Boolean>
      </Attribute>
    </Object>
  </ImportCommand>
</Waveset>
Restart all Waveset servers.
Network latency tends to be a common cause for performance issues when dealing with view provisioning. Tracing individual resource adapters can help you determine what is causing performance problems.
To improve provisioner performance, do the following:
Set the provisioner.maxThreads parameter in the Waveset.properties file to limit the maximum number of threads that are started to perform parallel resource provisioning operations each time a user is created, modified, or deleted.
The default value is 10, which generally provides optimal performance. Specifying a value greater than 20 significantly degrades the provisioner’s performance.
For example, if a user has 15 resource accounts, a maximum of ten provisioner threads are started to simultaneously perform resource provisioning operations on ten resource accounts. The remaining five resource accounts will not be modified until the first ten threads have completed resource provisioning operations.
In a different example, if you are modifying two users, and each user has two resource accounts, four provisioner threads might be used to update those resources.
Increasing the provisioner.maxThreads value can help throughput when users have many resource accounts. However, specifying a very large value might create many provisioner threads, which can degrade performance on the server as a whole.
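For example, a deployment where most users have around 15 resource accounts might raise the cap slightly. The value shown is illustrative; as noted above, values greater than 20 degrade performance:

```properties
# Waveset.properties: maximum parallel provisioning threads per operation
provisioner.maxThreads=15
```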
Configure quota settings in the Waveset.properties file to control the number of concurrent operations (such as reprovisioning) a user can execute for a specific task. Increasing the number of concurrent actions can help more operations complete faster, but trying to process too many actions at once might cause bottlenecks.
You can create configuration sets on a per-pool basis. For example, if you create configuration A, configuration B, and configuration C, when you create a TaskDefinition (workflow), you can assign a specific pool configuration to the workflow from the configurations that you defined.
The following example shows the quota settings that limit user bob to running one reprovisioning task at a time:
Quota.poolNames=ReProvision,Provision
Quota.pool.ReProvision.defaultLimit=1
Quota.pool.ReProvision.unlimitedItems=Configurator
Quota.pool.ReProvision.items=bob,jan,ted
Quota.pool.ReProvision.item.bob.limit=1
To enforce the task quota, reference poolName in a TaskDefinition. The format is as follows:
<TaskDefinition ... quotaName='{poolName}' ...>
Most users start only one task at a time. For proxy administrators who perform reconciliation or Active Sync tasks, set the task quota higher.
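Building on the quota example above, the following sketch gives a hypothetical proxy administrator (reconAdmin is an assumed name) a higher limit for such tasks:

```properties
# Add the proxy administrator to the pool and raise its per-user limit
Quota.pool.ReProvision.items=bob,jan,ted,reconAdmin
Quota.pool.ReProvision.item.reconAdmin.limit=5
```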
Avoid using the Configurator user for reconciliation and Active Sync tasks. The Configurator has access to unlimited tasks and can monopolize available resources, which adversely affects concurrent processes.
The Reconciler is the Waveset component that performs reconciliation. This section suggests methods for improving Reconciler performance.
In general, you can improve Reconciler performance if you do the following:
Avoid using the Configurator user for reconciliation tasks. The Configurator has access to unlimited tasks and can monopolize available resources, which adversely affects concurrent processes.
Instead, use a streamlined, minimal user for reconciliation and Active Sync tasks. Because the subject executing the task is serialized as part of the task, a minimal user takes less space, or overhead, for each task and update in the repository.
Use information on the Reconciler status page (debug/Show_Reconciler.jsp) to decide which settings to adjust based on queue sizes, available system resources, and performance benchmarks. Be aware that these settings are dependent on the environment.
The Reconciler JMX MBean provides much of the information in Show_Reconciler.jsp, and shows a processing rate estimate.
Use the System Memory Summary page (debug/Show_Memory.jsp) to see how much total and free memory is available. Reconciliation is a memory-intensive function, and you can use this information to determine whether there is sufficient memory allocated to the JVM. You can also use this page to launch garbage collection or to clear unused memory in the JVM for investigating heap usage.
When you assign user forms to proxy administrators who are performing reconciliations, keep the user forms as simple as possible and only use essential fields. Depending on the schema map, including a field that calculates the waveset.organization attribute is generally sufficient.
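A minimal user form for this purpose might compute only the organization. In this sketch, the form name and the derived value are illustrative; in practice, derive the value from attributes on your schema map:

```xml
<!-- Sketch: minimal user form that only calculates the organization -->
<Form name='Minimal Recon User Form'>
  <Field name='waveset.organization'>
    <Derivation>
      <s>Top</s>
    </Derivation>
  </Field>
</Form>
```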
Administrators who need to view or edit the Waveset schema for Users or Roles must be in the IDM Schema Configuration AdminGroup and must have the IDM Schema Configuration capability.
Use per-account workflows judiciously. The reconciliation process does not start provisioning tasks for performance reasons by default.
If you must use a per-account workflow task, edit the reconciliation policy to limit the Reconciler’s automatic responses to events of interest only. (See the Situation area of the Edit Reconciliation Policy page.)
Reconciliation of a resource goes through two phases. In the first phase, Waveset gets a list of all users in its internal repository that are known to have accounts on the resource. This first phase does not involve the physical resource at all, and typically happens very quickly.
The second phase requests a list of all accounts from the resource, and then processes those accounts; potentially linking them to users or even creating new users. Performance of the second phase, indicated as reconciling accounts in the resource status message, is proportional to the speed of the resource and the number of worker threads. You can compensate for a slow resource by adding more worker threads; assuming the resource can handle more concurrent AccountGet operations.
There is a JMX MBean for each resource that shows the average, minimum, and maximum response times for each resource operation. Phase two of reconciliation involves many AccountGet operations, so the average time for each AccountGet strongly influences overall reconciliation performance. To compensate for resources with longer AccountGet times, use more worker threads. However, because the same number of worker threads is used for all resources, setting the maximum worker thread count too high might overwhelm the Waveset object repository on faster resources.
Although the default settings are usually adequate, you can sometimes improve Reconciler performance if you adjust the following settings on the Edit Server Settings page:
Parallel Resource Limit. Specifies the maximum number of resource threads that the Reconciler can process in parallel.
Resource threads allocate work items to worker threads, so if you add additional resource threads, you might also have to increase the maximum number of worker threads.
Minimum Worker Threads. Specifies the number of processing threads that the Reconciler always keeps open.
Maximum Worker Threads. Specifies the maximum number of processing threads that the Reconciler can use. The Reconciler starts only as many threads as the workload requires, which places a limit on that number. Worker threads automatically close if they are idle for a short duration.
During idle times, the threads stop if they have no work to do, but only down to the minimum number of threads specified. As the load increases, the Reconciler adds more threads until the maximum number of threads is reached. The Reconciler never has less than the minimum number of threads or more than the maximum.
Generally, more threads allow more concurrency. However, at some point, too many threads can put too much load on the machine or just do not provide additional benefit. Because the worker threads are typically reading and writing User and Account objects, having too much concurrency might overload the Waveset repository RDBMS.
Recommending generic, optimal settings is not possible because deployments are so different. Reconciler settings must be adjusted differently for each deployment environment.
Perform the following steps to change the Reconciler server settings:
Log into the Administrator interface.
Click the Configure -> Servers -> Reconciler tabs.
When the Edit Server Settings page is displayed, adjust the settings as necessary.
See Editing Default Server Settings for more information.
If you are configuring reconciliation for multiple resources in Waveset, you have several options:
All of the resources on the same server, all at the same time.
This option is the most efficient from the Waveset perspective, but if you have many resources (for example, more than 20), you are likely to experience Java resource issues.
All of the resources on the same server, each at a different time.
This option is easier on Java resource loading, but puts a significant burden on your schedule configuration.
Each resource on a different server, all at the same time.
This option minimizes elapsed time, but increases the number of servers.
An ideal solution does not exist for this configuration because deployments are so different. You might have to mix and match these options to find an acceptable solution for your deployment.
Preparing a usage survey, based on the business reasons behind this functionality, might help you decide how to proceed.
Address these questions:
Why are you reconciling these resources?
Do you have the same goal for each of these resources?
Is each of these resources equally important or critical?
Must all resources be reconciled on the same schedule, or can you spread out the reconciliations?
How often must each resource be reconciled?
Also, remember that the reconciliation server does not have to be one of the pools that handles web traffic. You can add a server that you never interact with directly because this server exists solely for transaction processing. Having a server dedicated to transaction processing might make the first option more attractive for very large systems.
Network latency tends to be a common cause of performance issues during view provisioning. Tracing individual resource adapters can help you determine what is causing performance problems.
You can improve resource query performance if you use FormUtil.getResourceObjects to implement the query.
Use one of the following methods to cache query results:
getResourceObjects(Session session, String objectType, String resID, Map options, String cacheList, String cacheTimeout, String cacheIfExists)
getResourceObjects(String subjectString, String objectType, String resId, Map options, String cacheList, String cacheTimeout, String clearCacheIfExists)
Set cacheTimeout in milliseconds.
Restrict searches to specific searchContext, if applicable.
Return the minimum number of attributes in options.searchAttrsToGet.
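The following form-based sketch shows a cached query using the session variant of getResourceObjects. The resource name, search base, attribute list, cache name, and timeout are illustrative assumptions:

```xml
<invoke name='getResourceObjects' class='com.waveset.ui.FormUtil'>
  <ref>display.session</ref>
  <s>group</s>                    <!-- objectType -->
  <s>Corporate AD</s>             <!-- resource name (assumed) -->
  <map>
    <s>searchContext</s>
    <s>OU=Groups,DC=example,DC=com</s>
    <s>searchAttrsToGet</s>
    <list>
      <s>cn</s>
    </list>
  </map>
  <s>groupCache</s>               <!-- cacheList (assumed cache name) -->
  <s>600000</s>                   <!-- cacheTimeout: 10 minutes, in milliseconds -->
  <s>false</s>                    <!-- cacheIfExists -->
</invoke>
```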
The Scheduler component controls task scheduling in Waveset.
This section suggests methods for improving Scheduler performance.
The following TaskDefinition options determine how the Scheduler handles tasks after they are completed:
resultLimit — Controls how many seconds task results are kept after the task has completed. The default setting varies for different tasks. A setting of zero immediately removes tasks after completion.
resultOption — Controls what action is taken when repeated instances of a task are started. The default setting is delete, which removes previous task instances.
These default settings are designed to optimize memory by shortening the lifetime of finished Scheduler tasks. Unless there is a compelling reason to change these settings, use the defaults.
If you want to immediately delete tasks that completed successfully, but you also want to keep tasks containing errors long enough to debug, you can do the following:
Set the resultLimit to a small value, such as 3600 seconds.
Set the saveOnlyOnError value to true.
With these two settings, Waveset will only store the task results if the task has an error or warning Result Item. This configuration can improve the performance of some tasks by allowing them to bypass being stored in the repository when they complete.
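For example, a task definition might combine the two settings as follows. This is a sketch only; the attribute placement (directly on the TaskDefinition element) and the executor value are assumptions to adapt to your task type:

```xml
<!-- Sketch: discard successful results quickly, keep failures for an hour -->
<TaskDefinition name='Example Task'
                executor='com.waveset.provision.WorkflowExecutor'
                resultLimit='3600'
                saveOnlyOnError='true'>
  <!-- task content omitted -->
</TaskDefinition>
```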
You can sometimes improve Scheduler performance by adjusting the following settings on the Edit Server Settings page:
Maximum Concurrent Tasks. Specifies the maximum number of tasks that the Scheduler can run at one time.
When more tasks are ready to run than the Maximum Concurrent Tasks setting allows, the extra tasks must wait until there is room available or until they are run on another server.
If too many tasks are being swapped out of memory and sharing CPU time, the overhead slows down performance. Alternatively, setting the maximum too low results in idle time. The Scheduler checks for available tasks every minute, so a waiting task waits at least a minute before being run.
The default Maximum Concurrent Tasks setting (100) is usually adequate. You can decide whether to adjust this setting up or down based on which tasks are being run in the deployment and by profiling the runtime behavior after the deployment is otherwise complete.
Before you change this setting, you may want to observe the Scheduler using JMX. The Scheduler.executingTaskCount attribute shows how many tasks the Scheduler is running.
In some cases, you might want to suspend or disable the Scheduler. For example, if you want a server dedicated to handling the End User interface, disabling the Scheduler will prevent tasks from running on that server. The server would only serve the End User interface pages and store launched tasks for other servers to execute.
Task Restrictions. Specifies the set of tasks that can execute on the server.
The Task Restrictions setting can provide a finer granularity of control over what tasks are allowed to run on a server. You can restrict tasks individually or through the server settings.
Recommending generic, optimal settings is not possible because deployments are so different. Scheduler settings must be adjusted differently for each deployment environment.
Log in to the Administrator interface.
Click the Configure -> Servers -> Scheduler tabs.
When the Edit Server Settings page is displayed, adjust the settings as necessary.
See Editing Default Server Settings for more information.
To improve Scheduler performance when the component is under a heavy backlog, you must modify the SystemConfiguration configuration object to enable all optimizations. The Scheduler can pick up any changes to the SystemConfiguration object while the server is running.
The Waveset Scheduler is responsible for executing scheduled tasks, resuming suspended tasks, and cleaning up task results for completed tasks. The Scheduler is single-threaded, meaning it processes its work with a single control thread. However, when the Scheduler starts or resumes a task, it runs that task in a new thread and can have several tasks running at the same time.
You can start tasks directly, without any Scheduler processing. For example, workflow and report tasks are often started as a result of an HTTP request. Starting, suspending, and resuming a task causes a modest amount of object repository processing. The repository can become congested if many concurrent HTTP requests are starting or resuming tasks. Because an application server may service hundreds of HTTP requests simultaneously, it is easy to create a large backlog for the Scheduler.
For example, if hundreds of HTTP requests started workflows that were subsequently suspended, the Scheduler would be responsible for resuming those workflows.
Because the Scheduler has a single control thread, but many different tasks to perform, it might sometimes seem like the Scheduler is not keeping up with one task or another. Waveset uses the sleepingTaskLimit and readyTaskLimit attributes to control how long the Scheduler can spend processing sleeping or ready tasks during each control loop. These limits ensure that a Scheduler, when presented with thousands of ready tasks, does not spend too much time starting these tasks and ignoring sleeping tasks.
When you have multiple Waveset servers, each server typically runs a Scheduler. By default, these Schedulers compete for the same work by polling the object repository. Configuring blockProcessing allows each Scheduler to process a different block of tasks, resulting in somewhat less object repository contention for specific records. The blockProcessing attribute is enabled by default, but all servers process the same slot. To enable cooperative (rather than competitive) task processing, you can assign each server a different slot (starting with 0). When you assign slots (other than 0) to the servers, they segment the tasks into buckets and only process the tasks in their assigned bucket.
The Scheduler has a JMX MBean that is very useful in diagnosing what is perceived as slow Scheduler performance. The ExecuteTime attribute in the MBean is often the key to understanding the Scheduler's performance. ExecuteTime is the time (in milliseconds) it took the Scheduler to start or resume the last task it processed. On a healthy system, this time should be less than 150 milliseconds. When this value starts to get large, the server is having trouble starting tasks, typically because there is congestion on the task table in the object repository, or because of internal synchronization in the Waveset code itself. Viewing the Scheduler's thread stack in the JMX console usually reveals the problem.
The ExecutingTaskCount attribute in the MBean shows how many tasks the Scheduler is currently managing. By default, the task limit is 100, which is almost always sufficient unless the tasks being executed run for a long time without suspending (such as report tasks). The ExecutingTaskCount value does not reflect all tasks running on the server. Remember that HTTP requests can also start tasks, so the total number of tasks running on the server is unknown to the Scheduler.
One of the Scheduler's many jobs is to resume tasks. Because each Waveset server can have a Scheduler running, the Schedulers also periodically look to process work that was being handled by another server that is not currently running. When a server goes into the recovered state, Schedulers on other servers attempt to process or clean up work that was being done on the recovered server. Schedulers go into a recovered state when one server observes another server has not issued a heartbeat within the last five minutes.
This check compares the timestamp of the last heartbeat message in the repository to the current server's clock. If Waveset servers have more than five minutes of system clock skew, the server with the clock that is farthest ahead marks servers with clocks that are behind as recovered. To avoid this situation, keep the system clocks on your Waveset servers synchronized.
If necessary, review Editing Waveset Configuration Objects in Oracle Waveset 8.1.1 Business Administrator’s Guide.
To control the Scheduler's behavior, add one or more of the following attributes to the SystemConfiguration configuration object. Use the serverSettings.default.scheduler.attribute path to create these attributes.
blockProcessing (Boolean). Indicates whether servers should work on independent blocks of tasks or not. Defaults to true, which means Scheduler will work on independent blocks of tasks.
blockProcessing does not apply to task deletion.
blockProcessingSize (Integer). Represents how many tasks a Scheduler will process in its block. Defaults to 50.
slots (List of Strings). Indicates slots of Waveset server instance names. Defaults to an empty list.
If the blockProcessing attribute is true, each Scheduler processes the block at the server's position in this list.
If you do not provide a list of slots, or do not include a server in the list, every Scheduler assumes it has slot 0 and each Scheduler processes the same block of tasks concurrently.
slots do not apply to task deletion.
fastResume (Boolean). Instructs the Scheduler to immediately execute tasks that are sleeping with an expired restart time. Defaults to true.
If true, the Scheduler executes sleeping tasks without first placing them in the ready state, which reduces load on the Waveset repository.
If false, the Scheduler first puts the task in a ready state, and then executes the task on the next Scheduler poll.
threadPriority (Boolean). Indicates whether the Scheduler thread should run at an elevated priority. Defaults to false.
If true, the Scheduler threads run with an elevated priority.
If false, the Scheduler threads run at normal priority. This setting is read only at server startup.
sleepingTaskLimit (Integer). Indicates how many milliseconds the Scheduler can spend processing sleeping tasks before reentering the polling loop. Defaults to 60000 (60 seconds).
The Scheduler is single-threaded and does not perform any other operations when it is processing sleeping tasks. If a large number of tasks are sleeping and ready to run, the Scheduler only processes as many tasks as it can within this time limit.
readyTaskLimit (Integer). Indicates how many milliseconds the Scheduler can spend processing ready tasks before reentering the polling loop. Defaults to 60000 (60 seconds).
The Scheduler is single-threaded and does not perform any other operations when it is processing ready tasks. If a large number of tasks are ready to run, the Scheduler only processes as many tasks as it can within this time limit.
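These attributes can be imported with the lh import command, following the same SystemConfigurationUpdater pattern used earlier to disable the profiler. The values in this sketch are examples only:

```xml
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE Waveset PUBLIC 'waveset.dtd' 'waveset.dtd'>
<Waveset>
  <ImportCommand class='com.waveset.session.SystemConfigurationUpdater'>
    <Object>
      <Attribute name='serverSettings.default.scheduler.blockProcessing'>
        <Boolean>true</Boolean>
      </Attribute>
      <Attribute name='serverSettings.default.scheduler.blockProcessingSize'>
        <Integer>50</Integer>
      </Attribute>
    </Object>
  </ImportCommand>
</Waveset>
```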
Waveset maintains a least recently used (LRU) cache of authenticated sessions for use by authenticated users. By using existing authenticated sessions, you can speed up repository access for objects and actions that require a session.
To optimize the authentication pool size, change the session.userPoolSize value in the Waveset.properties file to the maximum number of expected concurrent user sessions on the server.
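For example, a server expected to carry at most 300 concurrent user sessions might be configured like this in Waveset.properties (only the property name comes from the documentation; the value 300 is illustrative):

```
session.userPoolSize=300
```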
The Oracle Waveset Gateway generates a thread for each connection, and uses a different pool for each unique combination of resource type, Gateway host, and Gateway port. The Gateway checks for idle connections every five minutes. When a connection has been idle for 60 seconds, the Gateway closes and removes that connection from the pool.
When the Gateway receives a request, it does the following:
If there are no idle connections in the corresponding pool, the Gateway creates a new connection.
If an idle connection exists in the pool, the Gateway retrieves and reuses that connection.
You must configure the maximum number of connections on the resource, and you must configure this value identically for all resources of the same type that use the same Gateway. For that resource type, the first connection made to the Gateway on a given host and port uses that resource’s maximum connections value.
When you change the maximum number of connections on a resource, you must stop and restart the server for the change to take effect.
The following example shows how connections, requests, and Gateway threads are related.
If you set the maximum number of connections to 10 on an Active Directory resource, and you are using two Waveset servers, then you can have up to 20 simultaneous connections (10 from each Waveset server) to the Gateway for that Active Directory resource. The Gateway can have 10 simultaneous requests outstanding from each server, and the Gateway processes each request on a different thread. When the number of simultaneous requests exceeds the maximum number of Gateway connections, additional requests are queued until the Gateway completes a request and returns the connection to the pool.
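The acquire, reuse, and reap behavior described above can be sketched as follows. This is a hypothetical model of one pool, not the Gateway's actual code; all names here are invented for illustration:

```python
class GatewayPool:
    """Toy model of one pool, keyed by (resource type, host, port)."""

    IDLE_LIMIT = 60  # seconds an idle connection may sit before reaping

    def __init__(self):
        self._idle = []  # list of (connection, last_used_timestamp)

    def acquire(self):
        # Reuse an idle connection if one exists in the pool...
        if self._idle:
            conn, _ = self._idle.pop()
            return conn
        # ...otherwise create a new connection (and Gateway thread).
        return object()

    def release(self, conn, now):
        self._idle.append((conn, now))

    def reap(self, now):
        # The Gateway checks every five minutes: close and remove
        # connections that have been idle for more than 60 seconds.
        self._idle = [(c, t) for c, t in self._idle
                      if now - t <= self.IDLE_LIMIT]
```

Requests beyond the configured maximum would queue until a connection is released back to the pool, as described in the example above.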
Although the Gateway code is multi-threaded, this characteristic does not address the APIs or services being used by the Gateway. For Active Directory, the Gateway uses the ADSI interface provided by Microsoft. No investigation has been done to determine whether this interface handles Gateway requests in parallel.
Other methods for improving Gateway performance include:
Locating the Gateway near (from a network connectivity perspective) the domain controllers of the managed domain
Increasing the block size on a Gateway resource, which can increase throughput during reconciliation or load operations
Increased throughput has been observed for basic reconciliations that use no custom workflows and perform no attribute-level reconciliation. The Gateway initially consumes more system memory, but this memory is eventually released.
Be aware that there is a diminishing return. At some point, larger block sizes do not result in proportionately increased performance. For example, the following data shows the speed observed for a Load from Resource of 10,000 users from an Active Directory resource. Also, the peak memory usage for the Gateway process during the load is included.
| Block Setting | Users Created Per Hour | Peak Gateway Memory Usage |
|---|---|---|
| 100 | 500 | 20 MB |
| 200 | 250 | 25 MB |
| 500 | 9690 | 60 MB |
| 1000 | 10044 | 92 MB |
For Exchange Server 2007, the Gateway uses the PowerShellExecutor to perform provisioning actions. You can modify the following registry settings to change the behavior of the PowerShellExecutor inside the Gateway.
Both settings can have a large impact on the behavior and memory usage of the Gateway. Changes to these parameters should only be considered after careful testing.
powerShellTimeout
Content. Timeout for PowerShell actions (registry type REG_DWORD)
Default. 60000 ms (1 minute)
When the powerShellTimeout setting times out, any RunSpace actions are interrupted and canceled to prevent runaway actions in the PowerShell environment that cause the Gateway to become unresponsive.
Decreasing the powerShellTimeout setting to a small value can prematurely cancel actions and can prevent the RunSpace initialization from finishing correctly. Observed startup times for the first RunSpace in the pool range from 2 to 5 seconds.
The powerShellTimeout value is read only at startup; you cannot change it without restarting the Gateway.
runSpacePoolSize
Content. Number of RunSpaces in the pool (registry type REG_DWORD)
Default. 5
Minimum. 5
Maximum. 25
The RunSpaces in the pool allow the Gateway to execute PowerShell actions in parallel. A single provisioning action or update of a user in Exchange 2007 can result in multiple PowerShell actions being executed.
A started RunSpace can consume a large amount of memory. The first RunSpace typically consumes approximately 40 MB; subsequent RunSpaces normally use between 10 and 20 MB.
The preceding figures can differ in specific environments and are only given as guidelines, so be careful when changing this value.
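Using the figures above (about 40 MB for the first RunSpace and roughly 10 to 20 MB for each additional one), you can make a rough estimate of the Gateway's RunSpace memory footprint. The 15 MB midpoint used here is an assumption, not a documented value:

```python
def runspace_memory_mb(pool_size, first_mb=40, extra_mb=15):
    # Rough estimate only: first RunSpace ~40 MB, each additional
    # RunSpace ~10-20 MB (15 MB midpoint assumed here).
    return first_mb + max(pool_size - 1, 0) * extra_mb
```

For the default pool of 5 RunSpaces this estimates about 100 MB; the maximum pool of 25 estimates about 400 MB, which is worth weighing before increasing runSpacePoolSize.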
The runSpacePoolSize value is read only at startup; you cannot change the pool size without restarting the Gateway.
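A sketch of what these two REG_DWORD settings might look like in a .reg file. The key path is a placeholder because the actual location depends on your Gateway installation, and both values are illustrative:

```
Windows Registry Editor Version 5.00

; Placeholder path -- substitute your Gateway's actual registry key.
[HKEY_LOCAL_MACHINE\SOFTWARE\<gateway-key>]
"powerShellTimeout"=dword:0001d4c0  ; 120000 ms (2 minutes)
"runSpacePoolSize"=dword:0000000a   ; 10 RunSpaces
```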
The Administrator interface task bar displays links to previously performed provisioning tasks, which causes the interface to render more slowly when there are a large number of tasks.
To improve interface performance, remove the taskResults.jsp link by deleting its entry from the <List> element within <TaskBarPages> in the UserUIConfig object.
The following example shows the <List> entries within <TaskBarPages>:

```xml
<TaskBarPages>
  <List>
    <String>account/list.jsp</String>
    <String>account/find.jsp</String>
    <String>account/dofindexisting.jsp</String>
    <String>account/resourceReprovision.jsp</String>
    <String>task/newresults.jsp</String>
    <String>home/index.jsp</String>
  </List>
</TaskBarPages>
```
This section describes the different Waveset and third-party debugging tools you can use to debug performance issues.
The information is organized into the following sections:
Tracing affects system performance. To help ensure optimal performance, specify the minimum tracing level or turn tracing off after debugging the system.
This section provides instructions for accessing the Waveset Debug pages and describes how to use these pages to identify and debug Waveset performance issues.
See the following sections for information:
Provisioning Threads for Administrator Configurator (Show_Provisioning.jsp)
XML Resource Adapter Caches Flushed and Cleared (Clear_XMLResourceAdapter_Cache.jsp)
You must have the Debug, Security Administrator, or Waveset Administrator capability to access and execute operations from the Waveset Debug pages. The Administrator and Configurator accounts are assigned this capability by default.
If you do not have the Debug capability, an error message results.
Open a browser and log in to the Administrator interface.
Type the following URL:
http://host:port/idm/debug
where:
host is the application server on which you are running Waveset.
port is the number of the TCP port on which the server is listening.
When the System Settings page displays, type the .jsp file name for the debug page you want to open.
For example:
http://host:port/idm/debug/pageName.jsp
Some debugging utilities are not linked from the System Settings page, but you can use them to enhance your ability to gather data for product performance and usability. For a complete list of debug pages, open a command window and list the contents of the idm/debug directory.
Use the Call Timings page to collect and view call timer statistics for different methods. You can use this information to track bottlenecks to specific methods and invoked APIs. You can also use options on the Call Timings page to import or export call timer metrics.
Call timing statistics are only collected while trace is enabled.
Open the Call Timings page, and click Start Timing & Tracing to enable trace and timing.
To stop the timing, click Stop Timing & Tracing or click Stop Timing.
The page re-displays and populates the Show Timings table with a list of methods for which statistics are available and the methods’ aggregate call timer statistics (not broken down by caller).
This table contains the following information:
Method name (Click a method name to see which methods it calls)
Total time
Average time
Minimum time
Maximum time
Total calls
Total errors
To clear the list, click Clear Timing.
You can also use the callTimer command to collect call timer data from the Console. This command is useful when you are debugging performance issues during an upgrade or in other situations where Waveset is not running on the application server.
Use the Edit Trace Configuration page to enable and configure tracing for the Java classes provided with your Waveset installation.
Specifically, you can use this page to configure the following trace settings:
Choose methods, classes, or packages to trace and specify the level of trace you want to capture.
Send trace information to a file or to standard output.
Specify the maximum number of trace files to be stored and the maximum size for each file.
Specify how dates and times are formatted in the trace output file.
Specify the maximum number of methods to be cached.
Indicate how to write data to the trace file.
Write data to the trace file as the data is generated, or queue the data and then write it to the file.
If you are not using a data source, you can use the Host Connection Pool page to view connection pool statistics. These statistics include the pool version, how many connections were created, how many are active, how many connections are in the pool, how many requests were serviced from the pool, and how many connections were destroyed.
You can also use the Host Connection Pool page to view a summary of the connection pools used to manage connections to the Gateway. You can use this information to investigate low-memory conditions.
Use the List Cache Cleared page to clear recently used XML parsers from the cache and to investigate low memory conditions.
Use the Method Timings page to quickly detect and assess hotspots at a method level.
The following information is gathered from Waveset methods and displayed on the Method Timings page:
Method names
How many times the methods were called
How many times the methods exited with an error status
Average time consumed by the methods
Minimum and maximum times consumed by invocations of each method
The Method Timings page also contains a table with the following links. You can click these links to view additional information.
Details. Shows call stack information.
History. Shows a graph of call duration compared with the time of the most recent calls.
History data. Shows a list of the most recent calls, showing what time the call was made and the duration of the call.
To keep stack history and to control its depth, edit Waveset.properties and look at the MethodTimer keys.
Waveset does not keep stack history by default. Keeping stack history has a large negative impact on CPU and memory use.
The Clear ALL option on the Method Timings page clears all results. This option is enabled by default.
Use the Object Size Summary page to detect problematically large objects that can affect your system.
The Object Size Summary page shows information about the size of objects (in characters) stored in the repository. These objects are listed by type, along with the total number of objects of each type, and the objects’ total combined size, average size, maximum size, and minimum size.
Click an entry in the Type column to view additional size information about that object type. For example, click Configuration to view the ID, name, and size of the largest configuration objects in the repository.
You can also access this size information from the Console command line.
Open the console.
At the command prompt, type:
showSizes [type [limit]]
For upgrades, existing objects will report a size of 0 until they have been updated or otherwise refreshed.
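For example, to show size details for Configuration objects, limited to the ten largest (both arguments are illustrative):

```
showSizes Configuration 10
```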
Use the Provisioning Threads for Administrator Configurator page to view a summary of the provisioning threads in use by the system. This summary is a subset of the information available in Show_Threads.jsp.
Looking at just a single thread dump can be misleading.
Use the System Cache Summary page to view information about the following items to help you investigate low-memory conditions:
Administrator-associated object caches
System object cache
User login sessions
XML parser cache
Use the System Memory Summary page to view how much total and free memory you have available in Mbytes. When you are using memory-intensive functionality such as Reconciliation, this information can help you determine whether there is sufficient memory allocated to the JVM.
You can also use this page to launch garbage collection or to clear unused memory in the JVM for investigating heap usage.
The System Properties page provides information about your environment, including software versions, paths, and environment variables.
Use the System Threads page to see which processes are running so you can verify that automated processes (such as reconciliation or Active Sync) are running.
This page includes information about each process's type, name, and priority, whether the process is a daemon, and whether the process is still running.
Looking at just a single thread dump can be misleading.
Use the Session Pool Cleared page to clear all of the cached sessions for users who have recently logged in and to investigate low memory conditions.
Use the Waveset Properties page to view and temporarily edit properties in the Waveset.properties file. You can test different property settings for a particular server on which the Waveset.properties file resides without having to restart the server to pick up the changes. The edited property settings only remain in effect until the next time you restart the server.
Use the XML Resource Adapter Caches Flushed and Cleared page to clear test XML resource adapters from the cache and to investigate low memory conditions.
Waveset provides a Profiler utility to help you troubleshoot performance problems with forms, Java, rules, workflows, and XPRESS in your deployment.
Customized forms, Java, rules, workflows, and XPRESS can cause performance and scale problems. The Profiler profiles how much time is spent in these different areas, enabling you to determine whether these forms, Java, rules, workflows, or XPRESS objects are contributing to performance and scale problems and, if so, which parts of these objects are causing the problems. You must use the Waveset IDE to view the profiled data.
When enabled, the Profiler has both a memory and a performance impact because this feature captures a significant amount of information across a wide range of services. If you are not having performance problems in your production system and you do not need the Profiler data, you can disable the Profiler as described in Disabling the Profiler.
This section explains how to use Waveset’s Profiler and provides a tutorial to help you learn how to troubleshoot performance issues in your deployment.
The information is organized into the following topics:
Waveset Profiler is only supported on version 7.1 Update 1 and later.
This section provides an overview of the Waveset Profiler's features and functionality. The information is organized as follows:
The Profiler provides helpful information, but has a performance cost of its own. If you do not need the Profiler data, you can disable this utility using the instructions provided in Disabling the Profiler.
You can use the Profiler utility to
Create “snapshots” of profiling data.
A snapshot is the cumulative result of profiling since the last time you reset all of your collected profile results.
Display snapshot results in four different data views:
Call Tree view provides a tree table showing the call timing and invocations counts throughout the system.
Hotspots view provides a flattened list of nodes that shows the aggregate call timings regardless of parent.
Back Traces view provides an inverted call stack showing all the call chains from which that node (known as the root node) was called.
Callees view provides an aggregate call tree of the root node, regardless of its parent chain.
Specify what kinds of information to include in your snapshot:
You can include every element of form, workflow, and XPRESS or restrict the content to a set of specific elements.
You can pick specific Java methods and constructors to include or exclude from the instrumentation. Instrumentation of Identity Manager classes and custom classes is supported.
Manage your project snapshots as follows.
Save the snapshot in your project’s nbproject/private/idm-profiler directory or to an arbitrary location outside of your project.
You can view a list of all saved snapshots in the Saved Snapshots section of the IDM Profiler view.
Open snapshots from your project or load them from an arbitrary location outside your project.
Delete snapshots.
Search for specific nodes, by name.
This section describes how the Profiler looks up and manages the source for the following Waveset objects:
Forms, Rules, Workflows, and XPRESS objects: When you take a snapshot with the Profiler, the server evaluates all of the profiling data and discovers on which sources the data depends. The server then fetches all of these sources from the repository and includes them in the snapshot. Consequently, you can be sure that the Waveset objects displayed in the snapshot are accurately reflecting the point at which the snapshot was captured.
This process adds to the size of the snapshot, but the source size is actually a relatively small fraction of the total size. As a result, you can send a snapshot to Sun’s Customer Support without having to send your source files separately.
Java source: When you take a snapshot of Java source, the client downloads the snapshot and then goes through the snapshot to capture all referenced Java sources from the project. When you save the snapshot, the client zips the sources and attaches them to the end of the snapshot.
Then, when you view the snapshot and go to the Java source, the client first checks the content of the snapshot. If the client cannot find the content there, it checks the project’s content. This process allows you to send a snapshot containing profiling data from both your custom Java code and Waveset code.
In a Java source snapshot, do not assume the source is up-to-date with the server or always available.
In Call Tree view or Hotspots view, you can double-click any node that corresponds to a Java method, workflow, form, rule, or XPRESS to view the source for that node.
The following sections contain information to consider when you evaluate results provided by the Profiler:
Self Time Statistics: To compute a root node’s Self Time statistic, the Profiler subtracts the times of all children nodes from the root node’s total time. Consequently, an uninstrumented child node’s time is reflected in the root node’s self time. If a root node has a significant self time, you should certainly investigate why. You might not have the proper methods instrumented and so you are looking in the wrong place. For example, assume method A calls method B.
Method A takes a total time of 10 seconds (where total time includes the call to B) and the call to B takes a total time of 10 seconds.
If both A and B are instrumented, the call stack reflects that information. You will see that A has a self-time of 0 seconds and that B has a self-time of 10 seconds (where 10 seconds was actually spent in B). If, however, B is not instrumented, you only see that the call to A takes 10 seconds and that A’s self-time is 10 seconds. Consequently, you might assume the problem lies directly in A rather than in B.
In particular, you might notice large self times on JSPs during their initial compile. If you reset the collected results and then re-display the page, the self time value will be much less.
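The self-time arithmetic described above can be sketched as a simple subtraction. This illustrates the statistic itself, not the Profiler's code:

```python
def self_time(total_time, instrumented_child_times):
    # Self time = total time minus the time attributed to
    # *instrumented* children. Time spent in uninstrumented
    # children cannot be subtracted, so it inflates self time.
    return total_time - sum(instrumented_child_times)
```

In the A-calls-B example: with B instrumented, A's self time is 10 - 10 = 0 seconds; with B uninstrumented, all 10 seconds appear as A's self time, making A look like the hotspot.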
Constructor Calls: Because there are limitations in the Java instrumentation strategy, initial calls to this() or super() will appear as a sibling to the constructor call, rather than as a child. See the following example:
```java
class A {
    public A() {
        this(0);
    }

    public A(int i) {
    }
}
```

and:

```java
class B {
    public static void test() {
        new A();
    }
}
```

The call tree will look like this:

```
B.test()
  - A.<init>(int)
  - A.<init>()
```

Rather than this:

```
B.test()
  - A.<init>()
    - A.<init>(int)
```
Daemon Threads: Do not be misled by the seemingly large amount of time spent in a number of Waveset’s daemon threads, such as ReconTask.WorkerThread.run() or TaskThread.WorkerThread.run(). Most of this time is spent sleeping, while waiting for events. You must explore these traces to see how much time is actually spent when they are processing an event.
This section describes how to start the Profiler and how to work with various features of the Profiler’s graphical user interface. This information is organized as follows:
Because the Profiler is very memory intensive, you should significantly increase the memory for both your server and the Netbeans Java Virtual Machine (JVM).
To increase your server’s memory,
Open the Netbeans window and select the Runtime tab.
Expand the Servers node, right-click Bundled Tomcat, and select Properties from the menu.
When the Server Manager dialog displays, clear the Enable HTTP Monitor box on the Connection tab.
Select the Platform tab, set VM Options to -Xmx1024M, and then click Close.
To increase the Netbeans JVM memory,
Open the netbeans-installation-dir\etc\netbeans.conf file and locate the following line:
netbeans_default_options="-J-Xms32m -J-Xmx ...
Change the -J-Xmx value to -J-Xmx1024M.
Save, and then close the file.
When you are finished, start the Profiler using the instructions provided in the Starting the Profiler section.
You can use any of the following methods to start the Profiler from the Waveset IDE window:
Click the Start Identity Manager Profiler on Main Project icon located on the menu bar.
Select Window -> IDM Profiler from the menu bar.
The Identity Manager Profiler window appears in the Explorer. From this window, select a Waveset project from the Current Project drop-down menu, and then click the Start Identity Manager Profiler icon located in the Controls section.
The Start Identity Manager Profiler on Main Project icon is enabled when the main Waveset project is version 7.1 Update 1 or later.
Right-click a project in the Projects window, and then select Start Identity Manager Profiler from the pop-up menu.
Select a project in the Projects window, and then select IdM -> Start Identity Manager Profiler from the menu bar.
When you start the Profiler, the Profiler Options dialog displays so you can specify which profiling options you want to use. Instructions for setting these options are provided in Specifying the Profiler Options.
To disable the Waveset Profiler, import the following configuration update:
lh import file
where file contains
```xml
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE Waveset PUBLIC 'waveset.dtd' 'waveset.dtd'>
<Waveset>
  <ImportCommand class='com.waveset.session.SystemConfigurationUpdater'>
    <Object>
      <Attribute name='serverSettings.default.disableProfiling'>
        <Boolean>true</Boolean>
      </Attribute>
    </Object>
  </ImportCommand>
</Waveset>
```
This section describes the features of the Profiler graphical user interface, and how to use these features. The information is organized as follows:
The Profiler Options dialog consists of the following tabs:
Mode
The Mode tab provides the following options:
IDM Objects Only: Select to profile form, rule, workflow, and XPRESS objects. Excludes Java objects from the profile.
Java and IDM Objects: Select to profile form, Java, rule, workflow, and XPRESS objects.
The Java and IDM Objects option is not available if you are using a regular Waveset project with an external Identity Manager instance or using a remote Waveset project.
You cannot change the Mode option while the Profiler is running. You must stop the Profiler to change the option.
IDM Object Filters:
The IDM Object Filters tab provides the following options:
Show IDM Object details
Include Anonymous Sources
Anonymous sources are forms (or portions of a form) that are generated on the fly (such as Login forms and MissingFields forms) and do not correspond to a persistent form that resides in the Waveset repository.
Select this box to include Anonymous sources in the snapshot.
Clear this box to exclude Anonymous sources from the snapshot.
Java Filters
Select the Java Filters tab to
Include or exclude Java filters
Create new filters
Delete existing filters
Restore the default filters
Java filters are expressed as method patterns that include or exclude methods based on their canonical method names, where a canonical method name is:
fully-qualified-class-name.method-name(parameter-type-1, parameter-type-2, ...)
For constructors, method-name is <init>.
Here are a few examples:
To exclude all constructors, enable the Exclude box and add the following filter:
*.<init>(*)
To exclude all constructors with a single org.w3c.dom.Element parameter, enable the Exclude box and add the following filter:
*.<init>(org.w3c.dom.Element)
To exclude all Identity Manager classes, enable the Exclude box and add the following filters:
"com.waveset.*" "com.sun.idm.*"
To instrument your custom code only, disable the Exclude box, remove the initial * include filter, and then add the following filter:
"com.yourcompany.*"
The last two examples are currently equivalent because the filters are applied only to your custom classes and Waveset classes.
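A rough sketch of how such glob patterns can select canonical method names. The Profiler's actual matching rules may differ; this uses Python's standard fnmatch module purely as an analogy:

```python
import fnmatch

def matches(canonical_name, pattern):
    # Case-sensitive glob match; '*' and '?' are wildcards, while
    # '.', '<', '>', '(' and ')' are literal characters.
    return fnmatch.fnmatchcase(canonical_name, pattern)
```

For example, the pattern `*.<init>(*)` matches any constructor, and `com.waveset.*` matches any method in the com.waveset package hierarchy.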
If necessary, you can instrument other jars by modifying the following lines in build.xml as appropriate. For example,
```xml
<instrument todir="${lighthouse-dir-profiler}/WEB-INF"
            verbose="${instrumentor.verbose}"
            includeMethods="${profiler.includes}"
            excludeMethods="${profiler.excludes}">
  <fileset dir="${lighthouse-dir}/WEB-INF">
    <include name="lib/idm*.jar"/>
    <include name="classes/**/*.class"/>
  </fileset>
</instrument>
```
By default, the configuration includes all your custom classes and most Waveset classes. A number of Waveset classes are forcibly excluded because enabling them would break the Profiler.
For example, classes from the workflow, forms, and XPRESS engines are excluded because instrumenting them would produce an unintelligible snapshot when profiling Java and Waveset objects.
Note that Java filters provide much more filtering granularity than IDM Object Filters, but Java instrumentation adds significant overhead to the execution time, which can drastically skew the profiling results. Because Waveset objects are interpreted rather than compiled, their instrumentation overhead is negligible, so there is, for example, no reason to exclude workflow A but include workflow B.
You cannot modify Java filters while the Profiler is running. You must stop the Profiler before changing Java filters.
Miscellaneous
The Miscellaneous tab provides the following options:
Prune snapshot nodes where execution time is 0:
Disable this option (default) if you want the snapshot to include invocation information for all executed entities, even those whose execution time is zero.
It might be useful to have the number of invocations, even for nodes where there is no execution time.
Enable this option to remove these nodes, which allows you to focus on the most relevant profiling data. In addition, enabling this option can provide a large savings in Profiler snapshot size.
Automatically Open Browser Upon Profiler Start:
Enable this option (default) when you launch the Profiler to automatically open a browser that points to the Identity Manager instance being profiled.
Disable this option if you do not want to open a browser.
Include Java Sources in Snapshot:
Enable this option (default) to include Java sources for any Java methods referenced by the profiling data in the snapshot. You should always use this setting for snapshots taken in the field: custom Java code is relatively small, and it is very valuable to have for support.
Disable this option only if you are profiling Waveset and have the complete Waveset source available.
In this situation, you do not want to include the Waveset source because it can create extremely large snapshots. (See How the Profiler Locates and Manages Source for more information.)
Use the options on these tabs to indicate which objects to profile and which elements to display in the profile.
After specifying the Profiler options, click OK to start the Profiler. Depending on your project configuration, the Profiler does one of two things:
If you are using a regular Waveset project with an Embedded Identity Manager Instance, the Profiler performs a full build, deploys into the NetBeans application server, and starts the Profiler.
If you are using a regular Waveset project with an External Identity Manager Instance or the remote Waveset project, the Profiler attaches to the Identity Manager instance configured for the project.
You can select IdM -> Set Identity Manager Instance to control the Identity Manager Instance action for the project.
The IDM Profiler view consists of the following areas:
Current Project Area: Consists of a drop-down menu that lists all of your current projects. Use this menu to select the project you want to profile.
Controls Area: Contains four icons, as described in the following table:
| Name | Purpose |
|---|---|
| Start Waveset Profiler | Starts the Profiler and opens the Profiler Options dialog. |
| Stop Waveset Profiler | Stops the Profiler. |
| Reset Collected Results | Resets all of the profile results you collected to this point. |
| Modify Profiling | Reopens the Profiler Options dialog so you can change any of the settings to modify your current profile results. |
Status Area: Reports whether you are connected to the Host and provides Status information as the Profiler is starting up, running, and stopping.
Profiling Results Area: Contains two icons, which are described in the following table:
| Name | Purpose |
|---|---|
| Start Waveset Profiler | Starts the Profiler and opens the Profiler Options dialog. |
| Reset Collected Results | Resets all of the profile results you collected to this point. |
Saved Snapshots Area: Provides a list of all saved snapshots.
Instructions for saving snapshots are provided in Saving a Snapshot.
In addition, you can use the following buttons to manage these snapshots:
Open: Click to open saved snapshots in the Snapshot View window.
You can also double-click a snapshot in the Saved Snapshots list to open that snapshot.
Delete: Select a snapshot in the Saved Snapshots list, and then click this button to delete the selected snapshot.
Save As: Select a snapshot in the list and then click this button to save that snapshot externally to an arbitrary location.
Load: Click to open a snapshot from an arbitrary location into the Snapshot View window.
When you open a snapshot, the results display in the Snapshot View window, located on the upper right side of Waveset IDE.
A snapshot provides the following views of your data:
Call Tree view: Consists of a tree table showing the call timing and invocation counts throughout your system.
This tree table contains three columns:
Call Tree column: Lists all nodes.
Top-level nodes are one of the following:
Thread.run() methods for various background threads in the system
For example, if you enabled Java profiling, you will see the ReconTask.WorkerThread.run() method.
Request timings
For example, if you viewed the idm/login.jsp URL, you will see a top-level entry for idm/login.jsp. The data displayed in the Time column for this entry represents the total time for that request (or requests). The data displayed in the Invocations column represents the total number of invocations to that page. You can then explore further into that data to see what calls contributed to its time.
The Call Tree also contains Self Time nodes. Self Time values represent how much time was spent in the node itself. (For more information, see Statistics Caveats.)
Time column: Lists the time spent in each node when that node was called from its parent. The percentages are given relative to parent time.
Invocations column: Lists how many times each node was invoked from its parent.
Hotspots view: Provides a flattened list of nodes that shows aggregate call timings regardless of parent.
This view contains the following columns:
Self Time: Lists the total amount of time spent in each node.
Invocations: Lists the total number of times each node was invoked, regardless of parent.
Time: Lists the total amount of time spent in each node and in all of its children.
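The relationship between the Time, Self Time, and Invocations columns can be sketched with a small example. The class and numbers below are hypothetical illustrations, not part of the Profiler API: a node's Self Time is its total Time minus the Time spent in its children.

```java
import java.util.List;

public class NodeTiming {
    // Hypothetical illustration of the Profiler's timing columns:
    // Time      = total time spent in a node and in all of its children
    // Self Time = Time minus the children's combined Time
    static long selfTime(long totalTime, List<Long> childTimes) {
        long children = childTimes.stream().mapToLong(Long::longValue).sum();
        return totalTime - children;
    }

    public static void main(String[] args) {
        // A node with a total Time of 600 ms whose two children account
        // for 250 ms and 150 ms has 200 ms of Self Time.
        long self = selfTime(600, List.of(250L, 150L));
        System.out.println(self); // prints 200
    }
}
```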
Back Traces view: Provides an inverted call stack showing all the call chains from where each node was called.
You can use these statistics to answer the question: How much time would I save if I eliminated this particular call chain from this node?
You can access the Back Traces view from any of the other snapshot views by right-clicking a node (known as the root node) and selecting Show Back Traces from the pop-up menu.
The Time and Invocations data values mean something different in Back Traces view:
Time: The values in this column represent the time spent in the root node when it is called from a given call chain.
Invocations: The values in this column represent how many times the root node was invoked from a given call chain.
Callees view: Provides an aggregate call tree for a node (known as the root node), regardless of its parent chain.
These statistics are helpful if you have a problem area that is called from many places throughout the master call tree and you want to see the overall profile for that node.
You can access the Callees view from any of the other snapshot views by right-clicking a node (known as the root node) and selecting Show Callees from the pop-up menu.
The Time and Invocations data values used in Callees view have the same meaning as those used in Call Tree view.
Right-click any node in Call Tree view or in Hotspots view and a pop-up menu displays with the options described in the following table:
Menu Options |
Description |
---|---|
GoTo Source |
Select this option to view the XML source for a node that corresponds to a Java method, workflow, form, rule, or XPRESS. For detailed information about this view, see How the Profiler Locates and Manages Source. |
Show Back Traces |
Select this option to access the Back Traces view. For detailed information about this view, see Working with the Snapshot View. |
Show Callees |
Select this option to access the Callees view. For detailed information about this view, see Working with the Snapshot View. |
Find In Hotspots |
Select this option to find a node in the Hotspots view. For detailed information about this view, see Working with the Snapshot View. |
List Options -> Sort |
Select this option to sort the Call Tree or Hotspots list by one of its columns.
|
List Options -> Change Visible Columns |
Select this option to change the columns displayed in the Call Tree or Hotspots list. When the Change Visible Columns dialog displays, you can choose which of the available columns (such as Time and Invocations) to display.
|
Use the Search icon, located at the top of the Snapshot View window, to search for nodes by name in the Call Tree view or the Hotspots view.
Alternatively, right-click any node in Call Tree view or Hotspots view and select Find in Call Tree or Find in Hotspots (respectively) from the pop-up menu to search for a node.
The Profiler provides several options for saving a snapshot. See the following table for a description of these options:
Icon |
Name |
Purpose |
---|---|---|
|
Save the Snapshot in the Project icon (located at the top of the Snapshot View window) |
Saves the snapshot in the nbproject/private/idm-profiler directory of your project. Snapshots saved in your project are listed in the Saved Snapshots section of the Profiler view. |
|
Save the Snapshot Externally icon (located at the top of the Snapshot View window) |
Saves a snapshot to an external, arbitrary location. |
|
Save As button (located in the Saved Snapshots area) |
Saves a snapshot to an external, arbitrary location. |
Waveset provides a tutorial (profiler-tutorial.zip) to help you learn how to use the Profiler to troubleshoot forms, Java rules, workflows, and XPRESS.
Use the following steps to complete the tutorial.
Select File -> New Project.
When the New Project wizard displays, specify the following, and then click Next:
Complete the following fields on the Name and Location panel, and then click Next:
Project Name: Enter Idm80 as the project name.
Project Location: Use the default location or specify a different location.
Project Folder: Use the default folder or specify a different folder.
When the Waveset WAR File Location panel displays, enter the location of the Waveset 8.1.1 war file. Typically, unzipping this file creates an idm.war file in the same directory.
Click Next to continue to the Repository Setup panel.
You should not have to change the default settings on this panel; just click Finish. When you see the BUILD SUCCESSFUL message in the Waveset IDE Output window, you can extract the Profiler tutorial files. See Step 2: Unzip the Profiler Tutorial for instructions.
Unzip profiler-tutorial.zip in the project root. The extracted files include:
<project root>/custom/WEB-INF/config/ProfilerTutorial1.xml
<project root>/custom/WEB-INF/config/ProfilerTutorial2.xml
<project root>/src/org/example/ProfilerTutorialExample.java
<project root>/PROFILER_TUTORIAL_README.txt
Start the Profiler. Proceed to Step 3: Start the Profiler.
Use the instructions provided in Before You Begin to increase the memory for your server and Netbeans JVM.
Use any of the methods described in Starting the Profiler to start the Profiler.
When the Profiler Options dialog displays, you can specify profiling options.
Continue to Step 4: Set the Profiler Options
For detailed information about all of the different Profiler options, see Specifying the Profiler Options.
For the purposes of this tutorial, specify the following Profiler options:
On the Mode tab, select Java and IDM Objects to profile form, Java, rule, workflow, and XPRESS objects.
Select the Java Filters tab.
Use the following steps to disable all Waveset Java classes except your custom Java classes (in this case, org.example.ProfilerTutorialExample):
Click OK to run the Profiler.
The Profiler takes a few minutes to complete the first time you run it on a project or if you have recently performed a Clean Project action.
When the Profiler finishes processing, you are prompted to Log In.
Enter the password configurator, select the Remember Password box, and then click OK to continue.
When the Waveset window displays, log in.
Typically, you should log in to Waveset as a different user instead of logging in as configurator again. You are already logged into the Profiler as configurator, and the Waveset session pool only allows one entry per user. Using multiple entries can result in the appearance of a broken session pool and might skew your profiling results for finer-grained performance problems.
However, for this simple example the session pool is of no consequence, so you can log in as configurator/configurator.
In Waveset, select Server Tasks -> Run Tasks, and then click ProfilerTutorialWorkflow1.
The tutorial might take a few moments to respond.
Although you could take a snapshot now, you are going to reset your results, run the workflow again, and then take a snapshot.
It is a best practice to run the Profiler a couple of times before taking a snapshot to be sure all the caches are primed, all the JSPs are compiled, and so forth.
Running the Profiler several times enables you to focus on actual performance problems. The only exception to this practice is if you are having a problem populating the caches themselves.
Return to the IDM Profiler view in the Waveset IDE. Click the Reset Collected Results icon in the Profiling Results section (or in the Controls section) to reset all of the results collected so far.
In Waveset, select Server Tasks -> Run Tasks again, and click ProfilerTutorialWorkflow1.
When the Process Diagram displays, return to the Waveset IDE and click Take Snapshot in the Profiling Results section.
The Waveset IDE downloads your snapshot and displays the results on the right side of the window.
This area is the Call Tree view. At the top of the Call Tree, you should see a /idm/task/taskLaunch.jsp with a time listed in the Time column. The time should indicate that the entire request took six+ seconds.
Expand the /idm/task/taskLaunch.jsp node, and you can see that ProfilerTutorialWorkflow1 took six seconds.
Expand the ProfilerTutorialWorkflow1 node. Note that activity2 took four seconds and activity1 took two seconds.
Expand activity2.
Note that action1 took two seconds and action2 took two seconds.
Expand action1 and note that the <invoke> also took two seconds.
Double-click the <invoke> to open ProfilerTutorialWorkflow1.xml and highlight the following line:
<invoke name='example' class='org.example.ProfilerTutorialExample'/>
You should see that a call to the ProfilerTutorialExample method took two seconds.
You are actually browsing XML source that was captured in the snapshot, rather than source in the project. Snapshots are completely self-contained. (For more information, see How the Profiler Locates and Manages Source.)
Select the CPU:<date><time> tab to return to your snapshot.
Expand the <invoke> node, and note that the Profiler spent two seconds in the Java ProfilerTutorialExample.example() method.
Double-click the method name to open the ProfilerTutorialExample.java source and highlight the following line:
Thread.sleep(2000);
There’s the problem! This method contains a two-second thread sleep.
If you return to the Call Tree, you can see that all of the two-second paths lead to this method. (You should see three paths, for a total of six seconds.)
Select the Hotspots tab (located at the bottom of the Call Tree area) to open the Hotspots view. Notice that ProfilerTutorialExample.example() has a total self time of six seconds.
(For more information about Hotspots, see Working with the Snapshot View.)
Right-click ProfilerTutorialExample.example() and select Show Back Traces from the pop-up menu.
A new Back Traces tab displays at the bottom of the area.
Expand the ProfilerTutorialExample.example() node on the Back Traces tab to see that this method was called from three places, and that the method took two seconds when it was called from each place.
(For more information about Back Traces, see Working with the Snapshot View.)
Click the Save the snapshot in the project icon to save your snapshot and close it.
If you check the Saved Snapshots section on the IDM Profiler tab, you should see your snapshot. (You might have to scroll down.)
Select the saved snapshot, and then click Open to re-open it.
You can use the Save As button to save your snapshots externally and use the Load button to load a snapshot from outside your project.
Close the snapshot again.
The next part of this tutorial illustrates how to profile a workflow ManualAction.
In Waveset, select Server Tasks -> Run Tasks, and then click ProfilerTutorialWorkflow2.
After a few moments, an empty form displays.
Click Save and the process diagram displays.
Select Server Tasks -> Run Tasks again.
Return to the Waveset IDE IDM Profiler view and click the Reset Collected Results icon in the Profiling Results section.
Now click ProfilerTutorialWorkflow2 in Waveset.
When the blank form displays again, click Save.
In the IDM Profiler view, click Take Snapshot.
After a few seconds, a snapshot should display in the Call Tree area. You should see that /idm/task/workItemEdit.jsp took six+ seconds. (This result corresponds to the manual action in the workflow.)
Expand the /idm/task/workItemEdit.jsp node and note that running all Derivations in the ManualAction form took a total of six seconds.
Expand the Derivation, displayNameForm, variables.dummy, and <block> nodes.
You should see that the <block> took six seconds and, of that time, the Profiler spent two seconds in each of the three invokes of the ProfilerTutorialExample.example() method.
You can double-click <block> to view the source.
You can use the following Oracle and third-party tools to identify potential performance bottlenecks:
These tools can be particularly useful if your deployment uses custom Java classes.
The DTrace facility is a dynamic tracing framework for the Solaris 10 operating system that enables you to monitor JVM activity.
DTrace contains more than 30,000 probes and uses integrated user-level and kernel-level tracing to give you a view into your production system. You can also trace arbitrary data and expressions by using the D language, which is similar to C or awk. The DTrace facility also includes special support for monitoring the JVM, and enables you to watch your whole system, even spanning activity outside the JVM.
DTrace is easiest to use with Java 6 because probes are built into the JVM. The facility also works with Java 1.4 and Java 5, but you must download JVM PI or JVM TI agents from the following URL:
https://solaris10-dtrace-vm-agents.dev.java.net/
The following example shows how to write a DTrace script.
#!/usr/sbin/dtrace -Zs

#pragma D option quiet

hotspot$1:::
{
    printf("%s\n", probename);
}
In this example, you would replace $1 with the first argument to the script, which is the PID of the Java process you want to monitor. For example:
# ./all-jvm-probes.d 1234
The following table describes the commands you can use to enable different DTrace probes.
Table 4–3 DTrace Commands
Because DTrace causes additional work in the system, enabling this facility affects system performance. The effect is often negligible, but can become substantial if you enable many probes or probes with costly enablings.
Instructions for minimizing the performance effect of DTrace are provided in the “Performance Considerations” chapter of the Solaris Dynamic Tracing Guide.
For more information about DTrace, see /usr/demo/dtrace and man dtrace.
Waveset enables you to use Java Management Extensions (JMX) to capture and expose operational statistics for certain resource adapter operations. You can use this data for diagnostic and predictive purposes, such as monitoring system health and generating reports.
This statistical data includes the following:
The number of times the action was performed
The minimum, maximum, and average duration of the operations
Objects |
Actions Monitored |
---|---|
For Accounts |
|
For Actions |
Run |
For Other Objects |
|
JMX creates MBeans for each resource adapter, by server, and registers these beans with names that match the following pattern:
serverName=server name, resourceAdapterType=Resource Adapter Type, resourceAdapterName=Resource Adapter Name
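A name following this pattern can be constructed and queried with the standard javax.management.ObjectName API. In the sketch below, the IdM domain prefix and the property values are placeholders for illustration; only the key pattern (serverName, resourceAdapterType, resourceAdapterName) comes from the documentation above.

```java
import javax.management.ObjectName;

public class AdapterNameExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical domain ("IdM") and values; the key names follow
        // the registration pattern described in the text.
        ObjectName name = new ObjectName(
            "IdM:serverName=server1,"
            + "resourceAdapterType=LDAP,"
            + "resourceAdapterName=CorporateDirectory");

        // Individual key properties can be read back from the name.
        System.out.println(name.getKeyProperty("serverName"));
        System.out.println(name.getKeyProperty("resourceAdapterName"));
    }
}
```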
Waveset records statistics for all completed operations, whether they completed successfully or with errors. However, Waveset does not record statistics for incomplete operations, such as any operations that throw exceptions.
You can configure excludes as follows:
From the Administrator interface, select Configure -> Servers.
On the Configure Servers page, perform one of the following tasks:
Click the Edit Default Server Settings button to edit the default server settings.
Click a server link to edit the policy for that server.
Click the JMX tab and enable the JMX Enable Resource Adapter Monitor box to turn on resource monitoring.
To exclude specific resources, add regular expressions to the JMX Resource Adapter Monitor Excludes list.
To exclude monitoring specific actions, add regular expressions to the JMX Resource Adapter Monitor Operation Excludes list.
All excludes use regular expressions. For excluding certain resources, JMX just matches on the resource name. For example, if you have adapters named
resource1
resource2
resource3
resource10
resource11
and you specify the following pattern
.*1$
then JMX excludes resource1 and resource11, because the pattern matches zero or more of any character (.*) followed by a 1 at the end of the name (1$).
For operations, the process is similar. If your operations have the following names, the patterns must match against those names.
ACCOUNT_CREATE
ACCOUNT_UPDATE
ACCOUNT_DELETE
ACCOUNT_GET
ACCOUNT_AUTHENTICATE
OBJECT_CREATE
OBJECT_UPDATE
OBJECT_DELETE
OBJECT_GET
OBJECT_LIST
ACTION_RUN
For example, the ^ACCOUNT.* pattern excludes all operations that start with ACCOUNT, and the following patterns exclude all updates and deletes:
.*UPDATE$
.*DELETE$
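The exclusion matching described above can be sketched with standard Java regular expressions. The helper below is illustrative, not Waveset code, and it assumes the pattern is matched against the full name:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class ExcludeFilter {
    // Returns the names that a single exclude pattern removes.
    static List<String> excluded(String pattern, List<String> names) {
        Pattern p = Pattern.compile(pattern);
        return names.stream()
                    .filter(n -> p.matcher(n).matches())
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> resources = List.of(
            "resource1", "resource2", "resource3", "resource10", "resource11");
        // ".*1$" matches any name ending in 1: resource1 and resource11.
        System.out.println(excluded(".*1$", resources));

        List<String> ops = List.of("ACCOUNT_CREATE", "ACCOUNT_UPDATE",
            "OBJECT_UPDATE", "OBJECT_DELETE", "ACTION_RUN");
        // ".*UPDATE$" excludes updates regardless of object type.
        System.out.println(excluded(".*UPDATE$", ops));
    }
}
```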
For more information about configuring and using JMX, see Configuring JMX Monitoring and The JMX Publisher Type in Oracle Waveset 8.1.1 Business Administrator’s Guide.
Waveset supplies some JMX MBeans that provide information about the following:
Data Exporter
Object Repository
Performance
Reconciliation
Resource Adapters and Connectors
Scheduler
Waveset Server Cluster
There are nine MBeans in the Performance category:
DataExporter
DataQueue
FormConverter
ObjectChangeNotification
Reconcile
Ruler
TaskInstanceCache
ViewMaster
WorkItemCache
These MBeans can capture the same performance data that is captured by the Waveset Profiler, but the MBeans have a lower runtime performance cost.
You enable the Profiler on a per-server basis. When enabled, the Profiler impacts both memory (to capture and hold the performance data) and performance.
Performance MBeans are disabled by default. When enabled, MBeans have a significantly lower operational impact; however, they capture very specific performance data.
When enabled, the FormConverter and ViewMaster MBeans each have a configurable threshold; when processing takes longer than the threshold, a JMX notification is produced. This notification indicates which view or form element was involved and how long it took to process that element. If no JMX notification listeners are registered with the MBean server, the notification is discarded; otherwise, Waveset delivers the notification to the listener.
These MBeans are useful for tracking down performance problems with Waveset's GUI, because the GUI can be customized, and rendering parts of it can involve large amounts of data or significant computation. The FormConverter and ViewMaster MBeans can help you quickly identify whether a performance problem is caused by view processing or by processing a specific form field.
To use the FormConverter or ViewMaster MBeans, perform the following steps from a JMX console:
Enable the MBean by setting the Enabled attribute to true.
Specify an appropriate value (in mSecs) for the Limit attribute.
Subscribe to the MBean for notifications by using the Notifications tab.
After completing these steps, any View or Form processing that takes longer than the configured limit causes a JMX notification to display in the JMX console. The notification specifies the Form, Field, or View ID and how long the operation took.
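Subscribing to such notifications programmatically, rather than through a JMX console, follows the standard JMX listener pattern. The sketch below registers a listener on the platform MemoryMXBean, whose platform implementation is a NotificationEmitter; it stands in for a Waveset MBean here, so treat this purely as an illustration of the subscription mechanics, not of Waveset's MBean names or notification contents.

```java
import java.lang.management.ManagementFactory;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class NotificationSubscription {
    public static void main(String[] args) {
        // The platform MemoryMXBean emits notifications and stands in
        // here for a threshold-driven MBean such as FormConverter.
        NotificationEmitter emitter =
            (NotificationEmitter) ManagementFactory.getMemoryMXBean();

        NotificationListener listener = (Notification n, Object handback) ->
            // A threshold notification would carry an identifier and a
            // duration in its message; here we just print what arrives.
            System.out.println(n.getType() + ": " + n.getMessage());

        // No filter, no handback object: receive every notification.
        emitter.addNotificationListener(listener, null, null);
        System.out.println("listener registered");
    }
}
```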
If the ViewMaster MBean indicates delays in processing a view, check these additional items:
For delays caused by runDerivations or runExpansions, the problem may still be in the form associated with the view.
For delays caused by getView, checkoutView, checkinView, refreshView, or getForm, the problem is probably in the viewer. You must turn on tracing in the viewer to further isolate the problem.
The FormConverter MBean shows processing delays for any field that is rendered to HTML during form processing. Only fields that have a <Display> element are candidates. Common reasons for fields taking a long time to process are:
The field has a lot of data
The field executes XPRESS or script code to compute its value
The FormConverter MBean indicates which Form/Field and how long it took to render. If a form has a lot of fields, the form might display slowly but no single field will exceed the limit.
The Rule MBean emits a notification any time the execution of a rule exceeds the configured limit. Rules can be executed in many places (in tasks, forms, or workflows), so having a single place to capture any rule execution that exceeds a specified time is useful for a high-level performance analysis.
The TaskInstanceCache and WorkItemCache MBeans show which cache operations are used when workflows contain ManualActions that are marked as transient. When a workflow contains a transient ManualAction, changes to the corresponding WorkItem and workflow are made in memory, bypassing the repository but removing the assurance that a workflow will survive a server restart. These MBeans are useful when diagnosing self-service workflow wizard performance.
The ObjectChangeNotification MBean issues a JMX notification whenever delivering an object-change notification to Identity Manager code takes longer than the execLimit value. This MBean provides a useful diagnostic when the server appears to be processing tasks too slowly. If this MBean produces JMX notifications with an execLimit of 50 milliseconds, the server is running slowly, and you should capture both the JMX notifications and JVM thread dumps for analysis.
The Reconcile MBean shows how much data Reconcile has already processed, how much data is still queued, and the current processing rate. This bean is useful when measuring the impact of changing Reconcile processing thread counts. You can use this MBean with resource-specific MBeans to assess Reconciliation tunings. If the Reconcile.processRate is low and the resource account get is high, you can add more Reconciliation worker threads to increase throughput.
The DataExporter and DataQueue MBeans indicate both the size of the internally queued data and the rate at which the queue is being filled. The Data Exporter Queue provides an in-memory queue to buffer data that needs to be written to the repository, allowing separate threads to drain the queue without blocking the code that queued the data.
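The buffering design described here, where producers enqueue data without blocking on repository writes while a separate thread drains the queue, can be sketched with a standard BlockingQueue. This is an illustration of the pattern, not Waveset's implementation; the sentinel record and names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ExportQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        // In-memory queue standing in for the Data Exporter queue.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        List<String> repository = new ArrayList<>();

        // Drainer thread: writes queued records to the "repository"
        // without blocking the code that queued them.
        Thread drainer = new Thread(() -> {
            try {
                for (String record; !(record = queue.take()).equals("EOF"); ) {
                    repository.add(record);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        drainer.start();

        // Producers enqueue and return immediately.
        queue.put("record-1");
        queue.put("record-2");
        queue.put("EOF"); // sentinel to stop the drainer in this sketch
        drainer.join();

        System.out.println(repository); // prints [record-1, record-2]
    }
}
```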
When looking at notifications using JConsole, you can hover the mouse pointer over the message field in the notification to see the details (such as viewId, form, field, and so on).
The Java Monitoring and Management Console (JConsole) is a Java Management Extensions (JMX) technology-compliant graphical management tool that is included with JDK 5 and later releases. JConsole connects to a running JVM and gathers information from the JVM MBeans in the connected JMX agent.
Specifically, you can use JConsole to perform the following tasks:
Detect low memory and deadlocks
JConsole accesses the memory system, memory pool MBeans, and garbage collector MBeans to provide information about memory use, including memory consumption, memory pools, and garbage collection statistics.
Enable or disable garbage collection
Enable or disable verbose tracing
Monitor local and remote applications
Monitor and manage MBeans, including current heap memory use, non-heap memory use, and the number of objects pending finalization
View information about performance, resource consumption, and server statistics
View summary information about the JVM and monitored values, threads running on the application, and loaded classes
View information about operating system resources (Waveset’s platform extension), such as:
CPU process time
How much total and free physical memory is available
The amount of committed virtual memory (how much virtual memory is guaranteed to be available to the running process)
How much total and free swap space is available
The number of open file descriptors (UNIX only)
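Many of the values JConsole displays are also available programmatically through the platform MXBeans in java.lang.management, which can be handy for quick checks without a GUI. A minimal sketch:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmStats {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();

        // Current heap and non-heap use, as shown on JConsole's Memory tab.
        System.out.println("heap used: " + memory.getHeapMemoryUsage().getUsed());
        System.out.println("non-heap used: " + memory.getNonHeapMemoryUsage().getUsed());
        // Live threads and loaded classes, as on the Threads and Classes tabs.
        System.out.println("threads: " + threads.getThreadCount());
        System.out.println("classes: " + classes.getLoadedClassCount());
        // Objects pending finalization, as in the JVM summary.
        System.out.println("pending finalization: "
            + memory.getObjectPendingFinalizationCount());
    }
}
```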
For more information about using JConsole to monitor applications on the Java platform, see the Sun Developer Network (SDN) article titled Using JConsole to Monitor Applications, which is available from the following URL:
http://java.sun.com/developer/technicalArticles/J2SE/jconsole.html
You can use the Java Runtime Analysis Toolkit (JRat), an open-source performance profiler for the Java platform, to identify potential performance bottlenecks, especially if your deployment uses custom Java classes. JRat monitors your application’s execution and persists the application’s performance measurements.
For example, if you have a custom workflow for provisioning, you can use JRat to see which classes are being invoked and how much time is required to run your workflow compared to the default Waveset provisioning workflow.
For more information about JRat, see http://jrat.sourceforge.net.