Partition Work Managers set thread usage policy among partitions. You can configure them to limit the number of Work Manager threads in each partition and to manage thread usage allocation based on thread usage time for each partition. Regulating relative thread usage provides proper quality of service (QoS) and fairness among partitions that share the same WebLogic Server instance. Without it, an application in one partition could starve other partitions of thread resources, preventing them from functioning properly.
Partition Work Managers provide resource management within partitions. Administrators know about the runtime environment and how it will be shared. They configure Partition Work Managers at the domain level and assign them to partitions as they are created. These predefined Partition Work Managers let administrators standardize Work Manager configuration; for example, all partitions with business-critical applications can reference the business-critical Partition Work Manager.
Administrators might also want to customize the Partition Work Manager for a specific partition, or maybe for every partition. In this scenario, they configure the Partition Work Manager within (embedded in) the partition configuration. There is no need to predefine Partition Work Manager configurations for this use case.
You can define a Partition Work Manager in the domain to use with multiple domain partitions, or you can define Partition Work Manager attributes in the domain partition itself for use in that partition only. If no Partition Work Managers are defined, then default values for Partition Work Manager settings are applied.
Partition Work Managers can be used in more than one domain partition. However, a domain partition can be associated with only one Partition Work Manager.
A partition configuration can include one of the following:
<partition-work-manager-ref> to refer to a Partition Work Manager that is configured at the domain level
<partition-work-manager> to embed the Partition Work Manager settings within the partition configuration
Neither element, in which case the default values for Partition Work Manager settings apply
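As a sketch of the first option, a partition that references a domain-level Partition Work Manager needs only the reference element in its configuration (the partition and Partition Work Manager names here are illustrative):

```xml
<!-- Partition referencing a domain-level Partition Work Manager
     (names are illustrative) -->
<partition>
  <name>examplePartition</name>
  <partition-work-manager-ref>examplePartitionWorkManager</partition-work-manager-ref>
</partition>
```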
Partition Work Managers define a set of policies that limits the usage of threads by Work Managers in partitions only. They do not apply to the domain.
The main steps for configuring a Partition Work Manager are as follows:
A minimum threads constraint guarantees the number of threads that the server allocates to a Work Manager to avoid deadlocks. This could result in a Work Manager receiving more thread use time than its configured fair share, and thus a partition receiving more thread usage time than it should compared to other partitions in the same WebLogic Server instance.
You can optionally provide a limit on the minimum threads constraint value for each partition configured in the WebLogic Server domain. If configured, this limit imposes an upper bound on the minimum threads constraint values configured in a partition. If the sum of the configured values of all minimum threads constraints in a partition exceeds this limit, then WebLogic Server logs a warning message and reduces the number of threads that the thread pool allocates for the constraints. For example, if a partition's limit is 10 and its applications define minimum threads constraints of 8 and 6, the sum (14) exceeds the limit, so a warning is logged and fewer threads are allocated for those constraints.
There is no minimum threads constraint limit set on a partition by default.
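To make the check concrete, the following Python sketch mimics the behavior described above. A cap of 0 means no limit (the default). The proportional reduction is an assumption for illustration only; the documentation does not specify how WebLogic Server redistributes the reduced thread allocation.

```python
# Illustrative sketch only: validate a partition's configured minimum
# threads constraints against its minimum-threads-constraint cap.
# A cap of 0 means no limit (the default). The proportional scaling is an
# assumption; WebLogic Server's actual reduction policy is internal.
def effective_min_threads(constraints, cap):
    total = sum(constraints)
    if cap and total > cap:
        # WebLogic Server would log a warning in this situation.
        print("WARNING: minimum threads constraints (%d) exceed cap (%d)"
              % (total, cap))
        # Scale each constraint down so the sum fits within the cap.
        return [c * cap // total for c in constraints]
    return constraints

print(effective_min_threads([8, 6], 10))  # [5, 4]
print(effective_min_threads([8, 6], 0))   # [8, 6] -- no cap configured
```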
A maximum threads constraint value can be useful to prevent a partition from using more than its fair share of thread resources, especially in unusual situations such as when threads are blocked on I/O, waiting for responses from a remote server that is not responding. Setting a maximum threads constraint in such a scenario would help ensure that some threads would be available for processing requests from other partitions in the WebLogic Server instance.
The partition-shared capacity for Work Managers limits the number of work requests from a partition. This limit includes work requests that are either running or queued waiting for an available thread. When the limit is exceeded, WebLogic Server starts rejecting certain requests submitted from the partition. The value is expressed as a percentage of the capacity of the entire WebLogic Server instance, as configured in the sharedCapacityForWorkManagers option of the OverloadProtectionMBean, which constrains the number of requests in the entire WebLogic Server instance. The partition-shared capacity for Work Managers must be a value between 1 and 100 percent.
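The percentage is applied against the server-wide capacity. As a rough sketch (assuming the documented sharedCapacityForWorkManagers default of 65536), the effective per-partition request limit can be pictured as:

```python
# Sketch: how a partition's shared-capacity percentage translates into a
# request limit. shared_capacity is the server-wide
# sharedCapacityForWorkManagers value (65536 is its documented default);
# percent mirrors the partition's <shared-capacity-percent> setting.
def partition_request_capacity(shared_capacity, percent):
    if not 1 <= percent <= 100:
        raise ValueError("shared capacity percent must be between 1 and 100")
    # Work requests (running or queued) beyond this count are rejected.
    return shared_capacity * percent // 100

print(partition_request_capacity(65536, 50))  # 32768
```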
The following examples show how to define Partition Work Managers using WLST:
The following example creates and configures the domain-level Partition Work Manager, myPartitionWorkManager:
# Create a Partition Work Manager at the domain level
edit()
startEdit()
cd('/')
cmo.createPartitionWorkManager('myPartitionWorkManager')
activate()

# Configure the Partition Work Manager
startEdit()
cd('/PartitionWorkManagers/myPartitionWorkManager')
cmo.setSharedCapacityPercent(50)
cmo.setFairShare(50)
cmo.setMinThreadsConstraintCap(0)
cmo.setMaxThreadsConstraint(-1)
activate()
The following example associates the domain-level Partition Work Manager, myPartitionWorkManager, with the partition, Partition-0:
# Associate a domain-level Partition Work Manager with a partition.
edit()
startEdit()
cd('/Partitions/Partition-0')
cmo.destroyPartitionWorkManager(None)
cmo.setPartitionWorkManagerRef(getMBean('/PartitionWorkManagers/myPartitionWorkManager'))
activate()
The following example defines Partition Work Manager attributes in the domain partition, Partition-1, for use in that partition only:
# Define Partition Work Manager attributes within the partition
edit()
startEdit()
cd('/Partitions/Partition-1')
cmo.createPartitionWorkManager('Partition-1-PartitionWorkManager')
cd('/Partitions/Partition-1/PartitionWorkManager/Partition-1-PartitionWorkManager')
cmo.setSharedCapacityPercent(50)
cmo.setFairShare(50)
cmo.setMinThreadsConstraintCap(0)
cmo.setMaxThreadsConstraint(-1)
activate()
In the resulting config.xml file, notice the Partition Work Manager elements: the reference to the domain-level myPartitionWorkManager in Partition-0, the embedded definition in Partition-1, and the domain-level definition itself:
<partition>
  <name>Partition-0</name>
  <resource-group>
    <name>default</name>
  </resource-group>
  <default-target>VirtualTarget-0</default-target>
  <available-target>VirtualTarget-0</available-target>
  <realm>myrealm</realm>
  <partition-id>318e0d69-a71a-4fa6-bd7e-3d64b85ec2ed</partition-id>
  <system-file-system>
    <root>C:\Oracle\Middleware\Oracle_Home\user_projects\domains\base_domain/partitions/Partition-0/system</root>
    <create-on-demand>true</create-on-demand>
    <preserved>true</preserved>
  </system-file-system>
  <partition-work-manager-ref>myPartitionWorkManager</partition-work-manager-ref>
</partition>
<partition>
  <name>Partition-1</name>
  <resource-group>
    <name>default</name>
  </resource-group>
  <default-target>VirtualTarget-1</default-target>
  <available-target>VirtualTarget-1</available-target>
  <realm>myrealm</realm>
  <partition-id>8b7f6bf7-5440-4edf-819f-3674c630e3f1</partition-id>
  <system-file-system>
    <root>C:\Oracle\Middleware\Oracle_Home\user_projects\domains\base_domain/partitions/Partition-1/system</root>
    <create-on-demand>true</create-on-demand>
    <preserved>true</preserved>
  </system-file-system>
  <partition-work-manager>
    <name>Partition-1-PartitionWorkManager</name>
    <shared-capacity-percent>50</shared-capacity-percent>
    <fair-share>50</fair-share>
    <min-threads-constraint-cap>0</min-threads-constraint-cap>
    <max-threads-constraint>-1</max-threads-constraint>
  </partition-work-manager>
</partition>
<partition-work-manager>
  <name>myPartitionWorkManager</name>
  <shared-capacity-percent>50</shared-capacity-percent>
  <fair-share>50</fair-share>
  <min-threads-constraint-cap>0</min-threads-constraint-cap>
  <max-threads-constraint>-1</max-threads-constraint>
</partition-work-manager>
</domain>