In broad terms, throughput measures the amount of work performed per unit of time. For Enterprise Server, throughput can be defined as the number of requests processed per minute per server instance. High availability applications also impose throughput requirements on HADB, because they periodically save session state data to it. For HADB, throughput can be defined as the volume of session data stored per minute: the product of the number of HADB requests per minute and the average session size per request.
As described in the next section, Enterprise Server throughput is a function of many factors, including the nature and size of user requests, the number of users, and the performance of Enterprise Server instances and back-end databases. You can estimate throughput on a single machine by benchmarking with simulated workloads.
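As a rough sketch of how such a benchmark result translates into the per-instance figure defined above, the following function divides the total requests completed during a simulated-workload run by the elapsed time and the number of instances exercised. The function name and parameters are illustrative, not part of any Enterprise Server tool:

```python
def throughput_per_instance(total_requests, elapsed_minutes, num_instances):
    """Estimate requests processed per minute per server instance
    from an aggregate benchmark measurement."""
    return total_requests / (elapsed_minutes * num_instances)

# Example: 120,000 requests completed in 10 minutes across 4 instances
print(throughput_per_instance(120_000, 10, 4))  # 3000.0 requests/min/instance
```

The input numbers here are hypothetical; in practice they would come from your load-testing tool's summary report.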
High availability applications incur additional overhead because they periodically save data to HADB. The amount of overhead depends on the amount of data, how frequently it changes, and how often it is saved. The first two factors depend on the application in question; the third is also affected by server settings.
HADB throughput can be defined as the number of HADB requests per minute multiplied by the average amount of data per request. The higher the HADB throughput, the more HADB nodes and the larger the store size required.
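The definition above is a simple product, sketched below with illustrative numbers. The save frequency factor reflects the point made earlier that session data may be saved more or less often than once per request, depending on server settings; the names and figures are assumptions for illustration, not HADB parameters:

```python
def hadb_throughput_kb_per_min(requests_per_minute, avg_session_kb, saves_per_request=1.0):
    """Session data written to HADB per minute, in KB:
    requests/min x average session size x saves per request."""
    return requests_per_minute * avg_session_kb * saves_per_request

# Example: 6,000 requests/min, 8 KB average session, one save per request
print(hadb_throughput_kb_per_min(6000, 8))  # 48000 KB/min written to HADB
```

Repeating the calculation with your application's measured session size and save frequency gives a starting point for deciding how many HADB nodes and how much store capacity to provision.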