Agile Product Lifecycle Management Capacity Planning Guide Release 9.3.6 E71149-01
This chapter helps you to plan and gauge server capacity.
To determine application server capacity, you must determine the average transactions per second (TPS) the server can support for a given Agile solution. For each solution, business scenarios were identified that users with different roles perform daily. Based on these scenarios and the user distribution, the workload was designed per solution.
In the first phase, tests were conducted on individual solutions to determine the TPS. A single application server with two dual-core CPUs supported the workload with an average response time of three seconds.
The TPS for the Agile PLM solutions is as follows:
Solution | Transactions Per Second (TPS) |
---|---|
Product Collaboration (PC) | 19 |
Product Quality Management (PQM) | 19 |
Product Portfolio Management (PPM) | 16 |
Product Cost Management (PCM) | 12 |
Product Governance and Compliance (PGC) | 17 |
The table below lists the default heap sizes configured by the Agile PLM installer.
Note: The heap size recommendations below apply to JVMs running the core Agile PLM application. There is no need to alter the heap settings for the Agile PLM File Manager or the WebLogic Admin Server components. |
Recommended Settings (Installer Defaults)
Configuration | Heap Size (min=max) | MaxPermSize | NewSize | MaxNewSize |
---|---|---|---|---|
UNIX/Linux (64-bit JDK 7) | 3072m | 512m | 1300m | 1300m |
Windows (64-bit JDK 7) | 3072m | 512m | 1300m | 1300m |
Note: For the Agile PLM application server component that runs in WebLogic Server, the JVM heap settings are in the setUserOverrides shell script located in the AGILE_HOME/agileDomain/bin directory. On Windows, the heap settings are also present in the service install scripts, and you must uninstall and reinstall the service for a change to take effect. Always back up the original file before making changes. |
Note: The benchmark tests presented below were performed on 64-bit operating systems using 64-bit JVMs with a 3 GB (min=max) heap size. When configuring heap size, larger is not necessarily better; the best heap configuration is arguably the smallest heap that can accomplish the task without heap thrashing. Therefore, for most deployments, a 3 GB (3072m) initial heap size is recommended (assuming a 64-bit JDK). |
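For reference, the installer defaults above translate into JVM arguments along the following lines. This is a sketch only: the `USER_MEM_ARGS` variable name shown is a common WebLogic convention and may differ from what your setUserOverrides script actually uses, so verify against the shipped file, and back up the original before editing.

```shell
# Sketch of the heap-related JVM arguments matching the installer defaults
# above (3072m min=max heap, 512m MaxPermSize, 1300m NewSize/MaxNewSize).
# The exact variable name may differ; check the setUserOverrides script in
# AGILE_HOME/agileDomain/bin before editing, and back up the original first.
USER_MEM_ARGS="-Xms3072m -Xmx3072m -XX:MaxPermSize=512m -XX:NewSize=1300m -XX:MaxNewSize=1300m"
export USER_MEM_ARGS
```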
Agile conducts extensive load tests to determine scalability for individual product components, and for combinations of modules. Agile uses HP Load Runner 9.1 to simulate virtual user load for the benchmark tests.
To determine the required hardware for a given implementation, many factors must be considered. These factors are:
Average user load
Peak user load
User distribution across different modules, if more than one module is implemented
Network configuration
Latency
Bandwidth
The goal of hardware sizing is to balance hardware costs with user response times. To effectively accomplish this, you must accurately estimate and plan for both peak and average system load. Peak load is the load on the system during peak times. For example, users may access the system heavily between 9:00 AM and 10:00 AM, then again between 1:00 PM and 2:00 PM. Average load is determined by calculating load during all periods and averaging it.
If the peak load occurs on a regular basis, such as daily or weekly, it is ideal to configure and tune systems to meet the peak load requirements; users who access the application during non-peak times then experience better response times than peak-time users. If peak load times are infrequent, or do not deviate much from the average load and higher response times during peak usage are acceptable, you can configure and tune the system for the average load. This decreases hardware investment at the cost of higher response times during infrequent periods of high server load.
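The peak/average distinction above can be sketched with a few hourly samples of concurrent users. The sample values below are invented for illustration; the calculation simply mirrors the definitions given: peak is the busiest period, average spans all periods.

```python
# Peak load vs. average load from hourly concurrent-user samples.
# The user counts are hypothetical; note the morning and early-afternoon
# peaks described above (9:00-10:00 AM and 1:00-2:00 PM).
hourly_users = {
    "09:00": 950, "10:00": 900, "11:00": 400, "12:00": 250,
    "13:00": 880, "14:00": 860, "15:00": 420, "16:00": 300,
}

peak_load = max(hourly_users.values())
average_load = sum(hourly_users.values()) / len(hourly_users)

print(f"peak: {peak_load}, average: {average_load:.0f}")
```

Sizing to the peak (950 here) favors response times; sizing to the average (620 here) favors hardware cost.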
Another major factor that must be considered for hardware sizing is the average wait time between actions or clicks for a given user. The average wait time can vary from one second to 15 seconds to several minutes, depending on how the user uses the system. Between transactions, the user spends time analyzing or reading the data received and performing other tasks such as reading email, using the telephone, and chatting with a colleague. All of these actions contribute to the average wait time between actions performed in the Agile system.
The Transaction Processing Performance Council (http://www.tpc.org), which publishes benchmarks for different applications, recommends a wait time of 7 to 70 seconds between subsequent actions. The average wait time must be considered in sizing calculations: the lower the average wait time, the smaller the number of users the server can support.
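As a rough sketch of how wait time interacts with TPS, the concurrent user capacity of a server can be approximated as TPS multiplied by the average wait (think) time between actions. The TPS figures below come from the solution table earlier in this chapter; the linear model itself is a simplification for illustration, not an official sizing formula.

```python
# Approximate concurrent users a server can support: users ~= TPS x think time.
# TPS figures are taken from the solution table above; the linear model
# is a simplifying assumption for illustration only.
TPS = {
    "PC": 19,   # Product Collaboration
    "PQM": 19,  # Product Quality Management
    "PPM": 16,  # Product Portfolio Management
    "PCM": 12,  # Product Cost Management
    "PGC": 17,  # Product Governance and Compliance
}

def approx_users(solution: str, think_time_s: float) -> int:
    """Estimate concurrent users for a solution at a given average think time."""
    return int(TPS[solution] * think_time_s)

# Shorter think times mean fewer supported users for the same TPS;
# 7 and 70 seconds are the bounds of the TPC-recommended range.
print(approx_users("PC", 7))
print(approx_users("PC", 70))
```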
Agile PLM on Oracle Exalogic (64-bit OS with 64-bit JVM)
The following table shows the recommended hardware sizing for Oracle Exalogic based on user load.
Processor Type and Speed | Number of Application Servers (JVMs) | Total Number of Cores (four cores per JVM) | Total Number of Users |
---|---|---|---|
Intel Xeon E5-2690 @ 2.90GHz | 1 | 4 | 400 |
Intel Xeon E5-2690 @ 2.90GHz | 2 | 8 | 720 |
Intel Xeon E5-2690 @ 2.90GHz | Add 1 Server with 4 cores for every 360 users. | | 720+ |
Note: To support multiple application servers, clustering must be implemented, which adds an additional 10% load on each server. |
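The Exalogic scaling rule above (one server for up to 400 users, two for up to 720, then one additional 4-core server per 360 users) can be sketched as a small helper. The step function is an illustrative reading of the table, not an official sizing formula, and it does not model the 10% clustering overhead separately (the table's per-server figures already reflect it).

```python
import math

# Server-count sketch for the Exalogic sizing table above: 400 users on one
# server, 720 on two, then one additional 4-core server per 360 users.
def exalogic_servers(users: int) -> int:
    """Approximate number of 4-core application servers for a given user load."""
    if users <= 400:
        return 1
    if users <= 720:
        return 2
    # Beyond 720 users, add one server per additional 360 users.
    return 2 + math.ceil((users - 720) / 360)

print(exalogic_servers(400))   # 1
print(exalogic_servers(720))   # 2
print(exalogic_servers(1100))  # 4
```

The other platform tables in this chapter follow the same pattern with different per-server capacities (for example, 280/500/250 on Oracle Linux).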
Server Configuration by Processor Type
Processor Type and Speed | Server Model | Server Details |
---|---|---|
Intel Xeon E5-2690 @ 2.90GHz | Oracle Exalogic X3-2 | Intel Xeon Processor, 2 CPU - 16 Cores, 2.9 GHz, 32 GB RAM |
Agile PLM on Oracle Linux (64-bit OS with 64-bit JVM)
The following table shows the recommended hardware sizing for Linux based on user load.
Processor Type and Speed | Number of Application Servers (JVMs) | Total Number of Cores (four cores per JVM) | Total Number of Users |
---|---|---|---|
Intel Xeon E5-2690 @ 2.90GHz | 1 | 4 | 280 |
Intel Xeon E5-2690 @ 2.90GHz | 2 | 8 | 500 |
Intel Xeon E5-2690 @ 2.90GHz | Add 1 Server with 4 cores for every 250 users. | | 500+ |
Note: To support multiple application servers, clustering must be implemented, which adds an additional 10% load on each server. |
Server Configuration by Processor Type
Processor Type and Speed | Server Model | Server Details |
---|---|---|
Intel Xeon E5-2690 @ 2.90GHz | Sun Server X3-2L | Intel Xeon Processor, 2 CPU - 16 Cores, 2.9 GHz, 32 GB RAM |
Agile PLM on Microsoft Windows Server (64-bit OS with 64-bit JVM)
The following table shows the recommended hardware sizing for Windows based on user load.
Processor Type and Speed | Number of Application Servers (JVMs) | Total Number of Cores (four cores per JVM) | Total Number of Users |
---|---|---|---|
Intel Xeon E5-2690 @ 2.90GHz | 1 | 4 | 280 |
Intel Xeon E5-2690 @ 2.90GHz | 2 | 8 | 500 |
Intel Xeon E5-2690 @ 2.90GHz | Add 1 Server with 4 cores for every 250 users. | | 500+ |
Note: To support multiple application servers, clustering must be implemented, which adds an additional 10% load on each server. |
Server Configuration by Processor Type
Processor Type and Speed | Server Model | Server Details |
---|---|---|
Intel Xeon E5-2690 @ 2.90GHz | Sun Server X3-2L | Intel Xeon Processor, 2 CPU - 16 Cores, 2.9 GHz, 32 GB RAM |
Agile PLM on Sun Solaris x86-64 (64-bit OS with 64-bit JVM)
The following table shows the recommended hardware sizing for Solaris based on user load.
Processor Type and Speed | Number of Application Servers (JVMs) | Total Number of Cores (four cores per JVM) | Total Number of Users |
---|---|---|---|
Intel Xeon E5-2690 @ 2.90GHz | 1 | 4 | 280 |
Intel Xeon E5-2690 @ 2.90GHz | 2 | 8 | 500 |
Intel Xeon E5-2690 @ 2.90GHz | Add 1 Server with 4 cores for every 250 users. | | 500+ |
Note: To support multiple application servers, clustering must be implemented, which adds an additional 10% load on each server. |
Server Configuration by Processor Type
Processor Type and Speed | Server Model | Server Details |
---|---|---|
Intel Xeon E5-2690 @ 2.90GHz | Sun Server X3-2L | Intel Xeon Processor, 2 CPU - 16 Cores, 2.9 GHz, 32 GB RAM |
Agile PLM on Sun Solaris SPARC64 (64-bit OS with 64-bit JVM)
The following table shows the recommended hardware sizing for Solaris based on user load.
Processor Type and Speed | Number of Application Servers (JVMs) | Total Number of Cores (four cores per JVM) | Number of Users* |
---|---|---|---|
SPARC T7 processor @ 4.13GHz | 1 | 4 | 280 |
SPARC T7 processor @ 4.13GHz | 2 | 8 | 510 |
SPARC T7 processor @ 4.13GHz | Add 1 Server with 4 cores for every 255 users. | | 510+ |
Note: To support multiple application servers, clustering must be implemented, which adds an additional 10% load on each server.
Note: 4 cores = 32 virtual processors (4 cores x 8 threads per core). |
*Note: These results were achieved using the following JVM parameters: -ms3072M -mx3072M -XX:PermSize=512M -XX:MaxPermSize=512M -XX:NewSize=1300M -XX:MaxNewSize=1300M -XX:+UseCompressedOops -XX:+AlwaysPreTouch -XX:+UseTLAB -XX:+AggressiveOpts -XX:SurvivorRatio=6 -XX:TargetSurvivorRatio=90 -XX:+UseParallelGC -XX:ParallelGCThreads=16 -XX:+UseLargePages -XX:LargePageSizeInBytes=256M |
Server Configuration by Processor Type
Processor Type and Speed | Server Model | Server Details |
---|---|---|
SPARC T7 processor @ 4.13GHz | Oracle SPARC T7-1 | SPARC T7-1 Processor, 2 CPU - 32 Cores (8 threads per core totals to 256 virtual processors), 4.13 GHz, 512 GB RAM |
Agile PLM on IBM AIX (64-bit OS with 64-bit JVM)
The following table shows the recommended hardware sizing for AIX based on user load.
Processor Type and Speed | Number of Application Servers (JVMs) | Total Number of Cores (four cores per JVM) | Total Number of Users |
---|---|---|---|
IBM POWER7 @ 3.86 GHz | 1 | 4 | 320 |
IBM POWER7 @ 3.86 GHz | 2 | 8 | 570 |
IBM POWER7 @ 3.86 GHz | Add 1 Server with 4 cores for every 285 users. | | 570+ |
Note: To support multiple application servers, clustering must be implemented, which adds an additional 10% load on each server. |
Server Configuration by Processor Type
Processor Type and Speed | Server Model | Server Details |
---|---|---|
IBM POWER7 @ 3.86 GHz | IBM Power 780 | 4 CPU - 16 Cores, 32 GB RAM |
For production environments, it is recommended to run the database server on dedicated hardware. Database hardware sizing depends on both concurrent usage and the amount of data or size of the database. The best measure of database size is schema dump file size and estimated monthly incremental increases. Exporting the Agile schema at periodic intervals and analyzing its size helps you determine if a larger database sizing model is needed to better manage database growth, and to minimize ongoing database maintenance and tuning.
For existing Agile customers, obtaining the initial dump file size as a baseline is easy: use the Oracle Export utility to determine the dump file size of the existing database. New customers with no existing database to reference must estimate the database size by monitoring database growth over the first few months of normal operation to predict future disk size needs.
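As one way to obtain that baseline, a schema export can be produced with Oracle Data Pump, the newer alternative to the original Export utility mentioned above. The credentials, schema name, and directory object below are placeholders, not values from this guide; substitute your own.

```shell
# Hypothetical Data Pump export used to measure the Agile schema dump size.
# The SYSTEM credentials, AGILE schema name, and DATA_PUMP_DIR directory
# object are placeholders; adjust them to your environment.
expdp system/password schemas=AGILE directory=DATA_PUMP_DIR \
    dumpfile=agile_baseline.dmp logfile=agile_baseline.log
```

The size of the resulting dump file, tracked at periodic intervals, gives both the baseline and the monthly increment used by the sizing model.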
The following tables show the Agile PLM 9.3.6 Database Sizing Matrix for Oracle.
Database Sizing Model:
Small
Agile DB Configuration | Number of Users | CPU | RAM in GB | Disks |
---|---|---|---|---|
D | 1000 | 8 | 8 | 9 |
C | 500 | 4 | 4 | 4 |
B | 250 | 4 | 2 | 4 |
A | 100 | 2 | 1 | 4 |
Medium
Agile DB Configuration | Number of Users | CPU | RAM in GB | Disks |
---|---|---|---|---|
D | 1000 | 12 | 12 | 9 |
C | 500 | 8 | 8 | 9 |
B | 250 | 4 | 4 | 4 |
A | 100 | 2 | 2 | 4 |
Large
Agile DB Configuration | Number of Users | CPU | RAM in GB | Disks |
---|---|---|---|---|
D | 1000 | 16 | 16 | 13 |
C | 500 | 8 | 8 | 11 |
B | 250 | 4 | 4 | 9 |
Extra Large
Agile DB Configuration | Number of Users | CPU | RAM in GB | Disks |
---|---|---|---|---|
D | 1000 | 24 | 24 | 15 |
C | 500 | 12 | 12 | 13 |
The following table shows the Oracle Database Sizing Model.
Size | Initial Dump File Size (MB) | Monthly Increment (MB) |
---|---|---|
Small | < 1024 | <50 |
Medium (Regular) | < 5120 | <200 |
Large | < 16384 | <400 |
Extra Large | < 38912 | <1000 |
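The sizing-model thresholds in the table above can be expressed as a small classification helper. The fall-through message for dumps beyond the Extra Large threshold is an assumption, since the guide does not define a larger model.

```python
# Map an initial schema dump file size (in MB) to the Oracle database
# sizing model. Thresholds are taken directly from the table above.
def sizing_model(dump_mb: float) -> str:
    """Classify a dump file size into the guide's database sizing models."""
    if dump_mb < 1024:
        return "Small"
    if dump_mb < 5120:
        return "Medium"
    if dump_mb < 16384:
        return "Large"
    if dump_mb < 38912:
        return "Extra Large"
    # Beyond the documented thresholds; the guide defines no larger model.
    return "Beyond Extra Large"

print(sizing_model(800))    # Small
print(sizing_model(6000))   # Large
```

The same threshold style could be applied to the monthly-increment column to cross-check the classification.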
Each database sizing model requires an initial database configuration for deployment. For scalability and concurrency support, you need additional hardware resources, such as CPU, RAM, and number of disks.
The Agile PLM 9.3.6 small database sizing model can be used in a demo or test environment with the minimum hardware requirements. In a production environment, a small database (default) requires the settings of configuration A as an initial configuration. Configurations B, C, and D can be used for scalability and the addition of more concurrent users.
The Agile PLM 9.3.6 medium database sizing model requires configuration A as an initial configuration with additional RAM. Configurations B, C, and D can be used for scalability and the addition of more concurrent users.
The Agile PLM 9.3.6 large database sizing model requires configuration B as an initial configuration. Configurations C and D can be used for scalability and the addition of more concurrent users.
The Agile PLM 9.3.6 extra-large database sizing model requires configuration C as an initial configuration. Configuration D can be used for scalability and the addition of more concurrent users.
As you can see from the sizing tables, the Agile PLM 9.3.6 database, CPU, and memory requirements are roughly the same as in previous versions of Agile. With further improvements in bind variable usage and SQL optimization, memory is primarily consumed by the database cache (buffer), whose size is directly proportional to the amount of data.
For servers running Windows, the minimum recommended CPU is an Intel 2.8 GHz Xeon with 512 KB L2 cache.
It is recommended to start with a 4-disk configuration. The starting disk space requirement, 4 x 18 GB = 72 GB, may seem quite large compared to the size of the initial dump file, but considering the storage needs of Agile PLM 9.3.6 features, including full text search, it may actually be on the low side.
The following table lists recommended hardware resources for different database size models.
Database Size | CPU | RAM | Disks * |
---|---|---|---|
Demo | 1 | 512 MB | 1 |
Small | 2 | 1 GB | 4 |
Medium | 2 | 2 GB | 4 |
Large | 4 | 4 GB | 8 |
Extra-Large | 12 | 8 GB | 12 |
* Each disk has 18 GB of disk space.
While the proper sizing of extents minimizes dynamic extensions in the same segments, disk I/O contention within the same logical tablespace or physical data file can also be harmful.
You can improve disk I/O performance for multiple disk configurations by spreading the I/O burden across multiple disk devices. The following sections describe the use of multiple disks for the Oracle database server. It is always advisable to use more disks.
A one-disk configuration can result in disk I/O contention when the storage device is a single physical disk; as both database size and usage increase, performance can decline significantly. A one-disk configuration is best for demonstration, preproduction, and testing environments, or where the database files are stored on a RAID array, SAN, or other storage subsystem with built-in striping and mirroring. The configuration can be implemented as shown in the table below.
The following table shows a one-disk configuration for OFA implementation.
Disk | Resource |
---|---|
Disk 1 | ORACLE_HOME
SYSTEM TOOL UNDO TEMP USERS INDX AGILE_DATA1 AGILE_DATA2 AGILE_DATA3 AGILE_DATA4 AGILE_DATA5 AGILE_INDX1 AGILE_INDX2 AGILE_INDX3 AGILE_INDX4 AGILE_INDX5 LOG1 LOG2 LOG3 LOG4 |
There is no beneficial gain from OFA for the one-disk configuration from the perspective of disk I/O contention. There should be no significant impact on a current production database if you implement the default Oracle settings with a one-disk configuration.
A two-disk configuration is best for a small database. To eliminate potential I/O contention, the AGILE_DATA and AGILE_INDX data files are placed on separate disks. As usage and database size increase, performance declines.
The following table shows a two-disk configuration for OFA implementation.
Disk | Resource |
---|---|
Disk 1 | ORACLE_HOME
SYSTEM TOOL UNDO AGILE_DATA1 AGILE_DATA2 AGILE_DATA3 AGILE_DATA4 AGILE_DATA5 LOG1 LOG2 |
Disk 2 | TEMP
USERS INDX AGILE_INDX1 AGILE_INDX2 AGILE_INDX3 AGILE_INDX4 AGILE_INDX5 LOG3 LOG4 |
A four-disk configuration is best for an enterprise-level implementation of Agile. A four-disk configuration spreads the various data files, control files, and redo log files across multiple disk devices.
The three control files can be mirrored onto three different disks for best recovery protection.
All potential I/O demand-intensive data files can be distributed onto their own separate disk. Redo log files are completely isolated from the rest of the data files, as the log files can cause significant I/O contention during transactions if they are sharing disks with other data files. The UNDO data file is separated from the schema data files and log files as well, so I/O contention can be minimized.
The Agile schema tablespaces can be isolated from the rest of the SYSTEM, TEMP, TOOL, and UNDO data files.
The four-disk configuration shown in the table below is recommended. For production database sites, the four-disk configuration represents the minimum requirements for an OFA implementation and provides the minimum hardware configuration for performance tuning.
The following table shows a four-disk configuration for OFA implementation.
Disk | Resource |
---|---|
Disk 1 | ORACLE_HOME
SYSTEM TOOL UNDO LOG1/2/3/4 Controlfile01 |
Disk 2 | TEMP
USERS INDX Archive log file Controlfile02 |
Disk 3 | AGILE_DATA1
AGILE_DATA2 AGILE_DATA3 AGILE_DATA4 AGILE_DATA5 Controlfile03 |
Disk 4 | AGILE_INDX1
AGILE_INDX2 AGILE_INDX3 AGILE_INDX4 AGILE_INDX5 |
In addition to the advantages associated with a four-disk configuration, an eight-disk configuration supports an enterprise-level implementation of Agile by further spreading various data files and redo log files across multiple disk devices.
The application schema can gain additional performance, in terms of I/O load spread, by further separating the AGILE_DATA1 through AGILE_DATA4 and AGILE_INDX1 through AGILE_INDX3 data files, because of potential I/O contention between the AGILE_DATA and AGILE_INDX data files. Giving each potentially large data file its own disk spindle helps reduce I/O contention, as physical disk I/O is inevitable due to the sheer size of the data, as shown in the table below.
The following table shows an eight-disk configuration for OFA implementation.
Disk | Resource |
---|---|
Disk 1 | ORACLE_HOME
SYSTEM TOOL UNDO LOG1/2/3/4 Controlfile01 |
Disk 2 | TEMP
USERS INDX Archive log file Controlfile02 |
Disk 3 | AGILE_DATA1
Controlfile03 |
Disk 4 | AGILE_DATA2 |
Disk 5 | AGILE_DATA3 |
Disk 6 | AGILE_DATA4
AGILE_DATA5 |
Disk 7 | AGILE_INDX1
AGILE_INDX2 |
Disk 8 | AGILE_INDX3
AGILE_INDX4 AGILE_INDX5 |
By further separating the AGILE_DATA and AGILE_INDX tablespaces, a twelve-disk configuration can be implemented as shown in the table below. This gives completely independent spindles to AGILE_DATA1 through AGILE_DATA4 and AGILE_INDX1 through AGILE_INDX4.
The following table shows a twelve-disk configuration for OFA implementation.
Disk | Resource |
---|---|
Disk 1 | ORACLE_HOME
SYSTEM TOOL LOG1/2/3/4 Controlfile01 |
Disk 2 | USERS
INDX Archive log file Controlfile02 |
Disk 3 | UNDO
Controlfile03 |
Disk 4 | TEMP |
Disk 5 | AGILE_DATA1 |
Disk 6 | AGILE_DATA2 |
Disk 7 | AGILE_DATA3 |
Disk 8 | AGILE_DATA4
AGILE_DATA5 |
Disk 9 | AGILE_INDX1 |
Disk 10 | AGILE_INDX2 |
Disk 11 | AGILE_INDX3 |
Disk 12 | AGILE_INDX4
AGILE_INDX5 |
A load balancer or proxy web server is deployed to direct requests to the application server(s). When external users need access to Agile, at least one of these is deployed in the DMZ. The load balancer or proxy web server does not need to be installed in the DMZ if Agile is only accessed internally from within the corporate firewall.
Note: Load balancers can be used with the Java client and the Web client. Proxy web servers can only be used with the Web client. |
Much like the application server, the dominant factor in determining hardware sizing for the proxy web servers is concurrent usage. Use the following table to determine the hardware needed for the web server tier.
The following table shows the web server sizing matrix.
Peak Logged In Users | Number of Servers | Number of CPUs | Memory (GB) |
---|---|---|---|
100 | 1 | 1 | 1 |
250 | 1 | 1 | 1 |
500 | 1 | 2 | 2 |
1000 | 2 | 2 | 2 |
The performance of Distributed File Management is a function of how many files are being downloaded or uploaded concurrently, along with how large the files are. A site handling up to 100 logged-in users requires a server with two CPUs of the processor types previously mentioned and 2 GB of RAM. File vault storage size is a function of the expected amount of data to be stored there.
AutoVue for Agile PLM performance is a function of the number of files being viewed concurrently and the average file size being viewed. For the latest requirements, refer to the Oracle AutoVue OTN documentation site.
The Events feature allows custom business logic, which is implemented as Java code or Groovy script, to be invoked as part of PLM actions. This feature is a powerful capability that enables you to do validation, auto-populate attributes, and automate dependent tasks.
Enabling events, processing event subscriptions, and executing the handler with custom logic triggers additional processing by the server as part of the PLM action, which consequently impacts throughput and response times. The actual impact depends on the number of events enabled, the number of handlers registered for the enabled events, and the amount of computing done by the handler.
Internal tests done with a simple handler enabled for every action in the system reduced overall throughput by 2.5% and slowed response times for individual actions by up to 10%. These numbers, however, are for illustration purposes only. In reality, you will most likely enable only a subset of the events, but the handlers are likely to be more compute-intensive than in the test scenario. You should consider the additional computational load from events and adjust hardware sizing accordingly.
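As a back-of-the-envelope illustration of that last point, the measured 2.5% throughput reduction could be folded into the per-server user figures from the sizing tables earlier in this chapter. Applying the penalty uniformly is a simplifying assumption, and real handler cost may be considerably higher.

```python
# Illustrative adjustment of a rated per-server user figure for the Events
# overhead measured in the internal tests above (2.5% throughput reduction).
# Applying a flat penalty is a simplifying assumption; compute-intensive
# handlers can cost much more.
def adjust_for_events(rated_users: int, throughput_penalty: float = 0.025) -> int:
    """Reduce a rated user capacity by a fractional events-overhead penalty."""
    return int(rated_users * (1 - throughput_penalty))

print(adjust_for_events(400))  # e.g. an Exalogic server rated for 400 users
print(adjust_for_events(280))  # e.g. a Linux server rated for 280 users
```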