18 Configuring Partition Work Managers

This chapter describes Partition Work Managers and how to configure them. You can use Fusion Middleware Control (FMWC), the WLS Administration Console, WLST, or the REST APIs. The chapter refers to the Fusion Middleware and WebLogic Server documentation sets and online help for additional information as appropriate.

This chapter includes the following sections:

  • Partition Work Managers: Overview

  • Configuring Partition Work Managers: Main Steps

  • Defining Partition Work Managers: WLST Example

  • Configuring Partition Work Managers: Related Tasks and Links

Partition Work Managers: Overview

Partition Work Managers set thread usage policy among partitions. You can configure them to limit the number of Work Manager threads in each partition, as well as to manage thread usage allocation based on thread usage time for each partition. Regulating relative thread usage provides proper quality of service (QoS) and fairness among partitions that share the same WLS instance. Without such regulation, a rogue application in one partition could starve other partitions of thread resources, preventing them from functioning properly.

Partition Work Managers provide resource management within partitions. Administrators know about the runtime environment and how it will be shared. They configure Partition Work Managers at the domain level and assign them to partitions as they are created. These predefined Partition Work Managers let administrators standardize Work Manager configuration; for example, all partitions with business-critical applications can reference the business-critical Partition Work Manager.

Administrators might also want to customize the Partition Work Manager for a specific partition, or even for every partition. In this case, they configure the Partition Work Manager within (embedded in) the partition configuration. There is no need to predefine Partition Work Manager configurations for this use case.

You can define a Partition Work Manager in the domain to use with multiple domain partitions, or you can define Partition Work Manager attributes in the domain partition itself for use in that partition only. If no Partition Work Managers are defined, default values for Partition Work Manager settings are applied.

Partition Work Managers can be used in more than one domain partition. However, a domain partition can be associated with only one Partition Work Manager.

A partition configuration can include one of the following:

  • <partition-work-manager-ref>—to refer to a Partition Work Manager that is configured at the domain level.

  • <partition-work-manager>—to embed the Partition Work Manager settings within the partition configuration.

  • Neither <partition-work-manager> nor <partition-work-manager-ref>—to apply the default values for Partition Work Manager settings.

Configuring Partition Work Managers: Main Steps

Partition Work Managers define a set of policies that limit the usage of threads by Work Managers in partitions only. They do not apply to the domain.

The main steps for configuring a Partition Work Manager are as follows:

  1. Enter a name for the Partition Work Manager.

  2. Optionally, enter a Fair Share value, the desired percentage of thread usage for the partition compared to the thread usage of all partitions. Oracle recommends that the sum of this value for all partitions running in a WLS domain add up to 100. For example, in a domain with two partitions, you might assign fair shares of 60 and 40; under heavy load, Work Managers in those partitions then receive thread-usage time in roughly a 60:40 ratio. By default, a partition has a fair share value of 50.

  3. Optionally, enter a Minimum Threads Constraint Cap, the upper limit on the number of standby threads that can be created to satisfy the minimum threads constraints configured in the partition.

    A minimum threads constraint guarantees the number of threads the server will allocate to a Work Manager to avoid deadlocks. This could result in a Work Manager receiving more thread use time than its configured fair share, and thus, a partition getting more thread usage time than it should compared to other partitions in the same WLS server instance.

    You can optionally provide a limit on the minimum threads constraint value for each partition configured in the WLS domain. If configured, this imposes an upper limit on the minimum threads constraint values configured in a partition. If the sum of the configured values of all minimum threads constraints in a partition exceeds this configured value, a warning message will be logged and WLS will reduce the number of threads the thread pool will allocate for the constraints.

    There is no minimum threads constraint limit set on a partition by default.

  4. Optionally, enter a Maximum Threads Constraint, the maximum number of concurrent requests that the self-tuning thread pool can process for a partition at any given time.

    A maximum threads constraint can be useful to prevent a partition from using more than its fair share of thread resources, especially in abnormal situations such as when threads are blocked on I/O, waiting for responses from a remote server that is not responding. Setting a maximum threads constraint in such a scenario would help ensure that some threads would be available for processing requests from other partitions in the WLS instance.

  5. Optionally, enter a Shared Capacity Constraint, the maximum number of requests from a partition that can be present in the server, expressed as a percentage of the server's total capacity.

    The partition-shared capacity for Work Managers limits the number of work requests from a partition. This limit includes work requests that are either running or queued waiting for an available thread. When the limit is exceeded, WLS starts rejecting certain requests submitted from the partition. The value is expressed as a percentage of the capacity of the entire WLS server instance, as configured in the sharedCapacityForWorkManagers attribute of the OverloadProtectionMBean, which throttles the number of requests in the entire WLS server instance. The partition-shared capacity for Work Managers must be a value between 1 and 100 percent. A sketch of the related server-level setting follows these steps.
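
The following is a minimal sketch of that server-level setting, assuming an existing WLST edit session and a server named myserver (a hypothetical name). Each partition's Shared Capacity Constraint percentage is applied against this server-wide value:

# Sketch: sets the server-wide shared capacity on the OverloadProtectionMBean;
# each partition's shared-capacity percentage is applied against this value
edit()
startEdit()
cd('/Servers/myserver/OverloadProtection/myserver')
cmo.setSharedCapacityForWorkManagers(65536)
activate()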

Defining Partition Work Managers: WLST Example

The following examples show how to define Partition Work Managers using WLST.

Configuring Domain-Level Partition Work Managers: WLST Example

The following example creates and configures the domain-level Partition Work Manager, myPartitionWorkManager.

# Creates a Partition Work Manager at the domain level
edit()
startEdit()
cd('/')
cmo.createPartitionWorkManager('myPartitionWorkManager')
activate()
 
# Configures the Partition Work Manager
startEdit()
cd('/PartitionWorkManagers/myPartitionWorkManager')
cmo.setSharedCapacityPercent(50)    # percentage of the server-wide shared capacity available to the partition
cmo.setFairShare(50)                # relative share of thread-usage time among partitions
cmo.setMinThreadsConstraintCap(0)   # 0 means no cap on minimum threads constraints
cmo.setMaxThreadsConstraint(-1)     # -1 means no maximum threads constraint
activate()
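
To confirm the result, you can list the domain-level Partition Work Managers from the same WLST session; a minimal sketch:

# Lists the Partition Work Managers configured at the domain level
cd('/PartitionWorkManagers')
ls()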

Associating Partition Work Managers With Partitions: WLST Example

Partition Work Managers can be used in more than one domain partition. However, a domain partition can be associated with only one Partition Work Manager. The following example associates the domain-level Partition Work Manager, myPartitionWorkManager with the partition, Partition-0.

# Associate a domain-level Partition Work Manager with a Partition
edit()
startEdit()
cd('/Partitions/Partition-0')
# Remove any embedded Partition Work Manager definition so the domain-level reference takes effect
cmo.destroyPartitionWorkManager(None)
cmo.setPartitionWorkManagerRef(getMBean('/PartitionWorkManagers/myPartitionWorkManager'))
activate()

Defining Partition Work Manager Attributes in a Partition: WLST Example

The following example defines Partition Work Manager attributes in the domain partition, Partition-1, for use in that partition only.

# Defines Partition Work Manager attributes within the partition
edit()
startEdit()
cd('/Partitions/Partition-1')
cmo.createPartitionWorkManager('Partition-1-PartitionWorkManager')
 
cd('/Partitions/Partition-1/PartitionWorkManager/Partition-1-PartitionWorkManager')
cmo.setSharedCapacityPercent(50)
cmo.setFairShare(50)
cmo.setMinThreadsConstraintCap(0)
cmo.setMaxThreadsConstraint(-1)
activate()
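
If a partition should later revert to the default Partition Work Manager settings, or switch to a domain-level reference, you can remove the embedded definition first. A minimal sketch, reusing the destroy call from the association example above:

# Removes the embedded Partition Work Manager from the partition
edit()
startEdit()
cd('/Partitions/Partition-1')
cmo.destroyPartitionWorkManager(None)
activate()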

In the config.xml file, notice that Partition-0 references the domain-level Partition Work Manager through the <partition-work-manager-ref> element, while Partition-1 embeds its own <partition-work-manager> definition:

  <partition>
    <name>Partition-0</name>
    <resource-group>
      <name>default</name>
    </resource-group>
    <default-target>VirtualTarget-0</default-target>
    <available-target>VirtualTarget-0</available-target>
    <realm>myrealm</realm>
    <partition-id>318e0d69-a71a-4fa6-bd7e-3d64b85ec2ed</partition-id>
    <system-file-system>
      <root>C:\Oracle\Middleware\Oracle_Home\user_projects\domains\base_domain/partitions/Partition-0/system</root>
      <create-on-demand>true</create-on-demand>
      <preserved>true</preserved>
    </system-file-system>
    <partition-work-manager-ref>myPartitionWorkManager</partition-work-manager-ref>
  </partition>
  <partition>
    <name>Partition-1</name>
    <resource-group>
      <name>default</name>
    </resource-group>
    <default-target>VirtualTarget-1</default-target>
    <available-target>VirtualTarget-1</available-target>
    <realm>myrealm</realm>
    <partition-id>8b7f6bf7-5440-4edf-819f-3674c630e3f1</partition-id>
    <system-file-system>
      <root>C:\Oracle\Middleware\Oracle_Home\user_projects\domains\base_domain/partitions/Partition-1/system</root>
      <create-on-demand>true</create-on-demand>
      <preserved>true</preserved>
    </system-file-system>
    <partition-work-manager>
      <name>Partition-1-PartitionWorkManager</name>
      <shared-capacity-percent>50</shared-capacity-percent>
      <fair-share>50</fair-share>
      <min-threads-constraint-cap>0</min-threads-constraint-cap>
      <max-threads-constraint>-1</max-threads-constraint>
    </partition-work-manager>
  </partition>
  <partition-work-manager>
    <name>myPartitionWorkManager</name>
    <shared-capacity-percent>50</shared-capacity-percent>
    <fair-share>50</fair-share>
    <min-threads-constraint-cap>0</min-threads-constraint-cap>
    <max-threads-constraint>-1</max-threads-constraint>
  </partition-work-manager>
</domain>

Configuring Partition Work Managers: Related Tasks and Links

For additional information, see the following: