7 Modeling Processes and Tasks

This chapter describes how to model processes, rules, and tasks in an Oracle Communications Order and Service Management (OSM) solution.

Overview of Processes and Tasks

The Process editor in Oracle Communications Design Studio is where you define the flow of tasks for a particular process. Processes have a single entry point and one or more exit points. When you create the process structure, you must place the tasks in the order in which the process is to complete them.

In addition to running tasks and subprocesses, you can control how a process runs; for example, specify to delay processing a task or create multiple possible transitions from one task to another based on task status.

Order processes can contain automated tasks, manual tasks, and task status transitions from one task to another task, as well as other process actions such as task transition delays, joins, redirects, rules, subprocesses, and end process points.

A task is a specific activity that must be carried out to complete the order; for example, if an order needs to verify that an ADSL service was activated, you might model a task named Verify ADSL Service. Tasks can be manual or automated. Manual tasks must be processed by an order manager, using the Task web client. Automated tasks run automatically with no manual intervention.

OSM also provides specialized automated task types called the activation task for communicating with Oracle Communications ASAP and the transformation task for initiating the order transformation manager functionality from within a process flow.

Modeling Processes

The following sections provide information about modeling processes.

About Process Flows

Process flows define the sequence of tasks that the process performs. You can design flows for specific scenarios, including:

  • A flow that ends in a successful process completion (Success) or a process failure (Failure).

  • Flows for various activities, such as Cancel, Next, and Back.

Figure 7-1 shows how flows appear in a process in Design Studio. In this figure, flows are labeled with the task status; for example, route_to_osm.

Figure 7-1 Process Flows in Design Studio

Description of Figure 7-1 follows
Description of "Figure 7-1 Process Flows in Design Studio"

You can control flows in the following ways:

  • You can use an order rule to apply conditions that must be met before the flow can continue.

  • You can ensure that the system verifies that mandatory fields are present when a task completes. (This option is not available for tasks with a Rollback status.)

  • You can specify a reporting status to display in an OSM web client. This status is tracked in the web client's OSM history.

Figure 7-2 shows flow properties in Design Studio.

Adding Process Activities

You use process activities to design how the process runs. Figure 7-3 shows the Activities options in Design Studio. The example process includes a timer delay between the two tasks.

Figure 7-3 Process Activities Options in Design Studio

Description of Figure 7-3 follows
Description of "Figure 7-3 Process Activities Options in Design Studio"

In addition to the tasks and subprocesses that the process runs, you can control the process by using the following:

  • Rules

  • Timer delays

  • Event delays

  • Joins

  • Ends

  • Redirects

Rules evaluate a condition and then specify the next step in the process. For example, a rule task might evaluate the data that describes the geographic region of the order and branch the process appropriately. Rule tasks perform as follows:

  • They typically read and evaluate data to determine what to do.

  • They always evaluate to true or false.

  • They are always run automatically, with no manual involvement.
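The behavior of a rule task can be modeled as a boolean predicate over order data. The following is an illustrative Python sketch, not OSM API code; the field name (region) and branch targets are hypothetical:

```python
# Illustrative model of a rule task: evaluate order data, branch on true/false.
# Field names ("region") and branch targets are hypothetical.

def region_rule(order_data):
    """Rule: evaluates to True for orders in the eastern region."""
    return order_data.get("region") == "east"

def branch(order_data):
    """Rule tasks always evaluate to True or False; the process
    transitions to a different flow for each outcome."""
    if region_rule(order_data):
        return "east_fulfillment_flow"
    return "default_fulfillment_flow"

print(branch({"region": "east"}))   # east_fulfillment_flow
print(branch({"region": "west"}))   # default_fulfillment_flow
```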

Timer delays delay the process until a rule evaluates to true. Timer delays perform as follows:

  • The rule is evaluated at specified timed intervals.

  • The data evaluated in the rule must be data that is included in the order.

  • The rule always evaluates to true or false.

  • The delay is always run automatically, with no manual involvement.

Event delays delay the process until a rule evaluates to true. Event delays perform as follows:

  • The rule is evaluated only when the data specified in the rule changes.

  • The data evaluated in the rule must be data that is included in the order.

  • The rule always evaluates to true or false.

  • The delay is always run by OSM, with no manual involvement.
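The difference between the two delay types is when the rule is evaluated: a timer delay polls the rule on a schedule, while an event delay re-evaluates it only when the data the rule references changes. This illustrative sketch models the event-delay case (the rule and data names are hypothetical; this is not OSM API code):

```python
# Illustrative contrast with a timer delay (rule polled at timed intervals):
# an event delay re-evaluates the rule only when the watched order data
# changes. The rule and field names are hypothetical.

def rule(order_data):
    return order_data.get("port_ready") is True

class EventDelay:
    """Holds the process; re-evaluates the rule only on data change."""
    def __init__(self, rule, order_data):
        self.rule, self.order_data = rule, order_data
        self.released = rule(order_data)

    def on_data_change(self, field, value):
        self.order_data[field] = value
        if self.rule(self.order_data):   # evaluated on change, not on a clock
            self.released = True

delay = EventDelay(rule, {"port_ready": False})
assert not delay.released            # process is held
delay.on_data_change("port_ready", True)
assert delay.released                # process continues once the rule is true
```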

Joins combine a set of flows into a single flow. (Process flows define the sequence of tasks that the process performs. See "About Process Flows" for more information.) The unified flow can join flows based on all transitions completing (select All) or on any one transition completing (select Any). Selecting Any creates one instance of the flow for each incoming transition.
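The two join modes can be modeled as follows; this is an illustrative sketch, not OSM API code, and the transition names are hypothetical:

```python
# Illustrative model of join semantics: "All" releases a single unified
# flow only when every incoming transition completes; "Any" creates one
# downstream flow instance per completed incoming transition.

def join(mode, incoming_transitions):
    if mode == "All":
        # single unified flow, released only when all transitions complete
        if all(t["complete"] for t in incoming_transitions):
            return ["unified_flow"]
        return []
    if mode == "Any":
        # one flow instance per completed incoming transition
        return [f"flow_for_{t['name']}"
                for t in incoming_transitions if t["complete"]]

transitions = [{"name": "a", "complete": True},
               {"name": "b", "complete": False}]
print(join("All", transitions))  # [] -- still waiting on "b"
print(join("Any", transitions))  # ['flow_for_a']
```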

Ends stop the process from continuing.

Redirects redirect the process to another task in the same process or to a different process.

Note:

Timer and event delays are not used during amendment processing.

Configuring Subprocesses

When you model subprocesses, you specify the following properties:

  • If you want the associated tasks to appear in the Process History window in the Task web client.

  • The pivot data element on which OSM spawns individual subprocess instances. For example, if you have a subprocess that creates an email address for every person in a list, you might select the Person data element as the pivot data element, so the subprocess spawns an instance for each person. See "Generating Multiple Task Instances from a Multi-Instance Field" for more information.

  • How to display the associated tasks in the Task web client. For example, you can display them sequentially, sorted, or unsorted.

  • The process to run, based on rules. The rules in an order control how various actions take place; for example, when to trigger a jeopardy notification and how delays in the order process should be handled.

  • How the subprocess handles exceptions. For example, you might have a process called create_vpn. Within that process, there is a subprocess called validate_address. The subprocess validate_address can throw an exception when an address is invalid. Using the exception mapping functionality, you can instruct the parent process and subprocesses to take specific actions when the subprocesses throw exceptions. Exception mapping enables you to indicate whether the parent process create_vpn should terminate all of the invoked instances, terminate only the offending instance, or ignore the exception altogether.
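The three exception-mapping outcomes described above can be sketched as follows; this is an illustrative model with hypothetical policy and instance names, not OSM API code:

```python
# Illustrative model of subprocess exception mapping: when a subprocess
# instance throws an exception, the parent process can terminate all
# invoked instances, terminate only the offending instance, or ignore
# the exception. Policy and instance names are hypothetical.

def handle_exception(policy, instances, offender):
    if policy == "terminate_all":
        return []                                   # all invoked instances stop
    if policy == "terminate_offending":
        return [i for i in instances if i != offender]
    if policy == "ignore":
        return instances                            # processing continues

instances = ["validate_address_1", "validate_address_2", "validate_address_3"]
print(handle_exception("terminate_offending", instances, "validate_address_2"))
# ['validate_address_1', 'validate_address_3']
```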

Understanding Parallel Process Flows

There are two ways to model parallel processes:

  • Subprocesses branching from a task. This allows multiple tasks to run within the same time frame. Parallel flows can be rejoined at an appropriate point if needed. Typically, there are no dependencies defined between parallel flows, but whether these tasks actually run simultaneously depends on the order data, how order tasks are fulfilled, and other factors.

  • Subprocesses running from a pivot data element. Multi-instance subprocesses are subprocesses that can be instantiated multiple times. When a subprocess has a pivot data element defined, multiple instances of the subprocess, running in parallel, are created. For example, if the pivot data element for a subprocess is defined as interested_party, and an order contains three instances of interested_party, each containing a different person's name and contact information, OSM creates three separate instances of the subprocess, one for each set of data.
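The pivot-data-element behavior described in the second bullet can be modeled as follows; this is an illustrative sketch with hypothetical element names, not OSM API code:

```python
# Illustrative model of a pivot data element: one subprocess instance is
# spawned per instance of the pivot element in the order data, and each
# instance receives only its own slice of that data.

order_data = {
    "interested_party": [
        {"name": "Ana",   "phone": "555-0101"},
        {"name": "Bruno", "phone": "555-0102"},
        {"name": "Carla", "phone": "555-0103"},
    ]
}

def spawn_subprocess_instances(order_data, pivot_element):
    return [{"subprocess": "create_email", "data": item}
            for item in order_data[pivot_element]]

instances = spawn_subprocess_instances(order_data, "interested_party")
assert len(instances) == 3                      # one instance per interested_party
assert instances[0]["data"]["name"] == "Ana"    # each gets its own data set
```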

When planning your order specifications, give careful consideration to which data you make available to each parallel process. Excessive and unnecessary data can have negative impacts on performance, and on usability if manual tasks are involved. Also, make sure to flag data as non-significant if the data is not needed for revision orders. By default, OSM assumes that all data is significant.

About Amendments and Multi-Instance Subprocesses

An amendment to an order on which some of the data affecting a multi-instance subprocess has changed can cause all subprocess instances to be redone, instead of only the directly affected instances. This can result in unneeded processing for the subprocess instances with no data changes.

In amendment processing with multi-instance subprocesses, it is important to contain compensation to only the subprocess instances that require compensation. This is achieved by specifying a key. You specify a key in the Key subtab on the Order Template Node editor for the data element specified as the pivot data element of the subprocess in the order template. When a key is specified for a subprocess, OSM maps the revised data to the current data using the key field and redoes only the subprocess that was affected.
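The key-based mapping can be sketched as follows; this is an illustrative model with hypothetical field names, not OSM API code:

```python
# Illustrative model of key-based compensation: revised pivot data is
# mapped to current data by the key field, and only the instances whose
# data actually changed are redone. Field names are hypothetical.

def instances_to_redo(current, revised, key):
    current_by_key = {item[key]: item for item in current}
    return [item[key] for item in revised
            if item != current_by_key.get(item[key])]

current = [{"id": 1, "addr": "10 Main St"}, {"id": 2, "addr": "20 Oak Ave"}]
revised = [{"id": 1, "addr": "10 Main St"}, {"id": 2, "addr": "22 Oak Ave"}]
print(instances_to_redo(current, revised, "id"))  # [2] -- only the changed instance
```

Without the key, OSM cannot correlate revised instances to current ones, and every subprocess instance is a compensation candidate.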

About Order Rules in Processes and Notifications

Order rules control how various actions take place; for example, when to trigger a jeopardy notification and how delays in the order process should be handled. Rules are used in process flow decisions, conditional transitions, subprocess logic, delay activities, jeopardies, and events.

OSM evaluates order rules by comparing data to data, or data to a fixed value. Figure 7-4 shows an order rule in the Design Studio Order editor Rules tab. This rule identifies residential customers in a specific city. This is an example of a rule that might be used to send a fallout notification to a regional fallout manager.

Figure 7-4 Example of an Order Rule Defined in Design Studio

Description of Figure 7-4 follows
Description of "Figure 7-4 Example of an Order Rule Defined in Design Studio"
Modeling Order Rules in Notifications

All jeopardy notifications and most event notifications use order rules to determine if the notification should be triggered. (Event notifications that are used only for running an automation plug-in do not use order rules.)

Figure 7-5 shows an example of a rule defined in Design Studio. This rule finds the city that the customer lives in and the type of account (Business or Residential). When the jeopardy notification uses this rule, the notification is sent only if the order came from a residential customer in Sao Paulo.

You can use rules such as the one shown in Figure 7-5 to route notifications to specific roles. For example, you can combine rules and roles as follows:

Table 7-1 Example Rule and Role Combinations

Notification Type          Triggered By                 Rule Specifies        Sent to Role
Notification_Residential   Expected duration exceeded   Residential account   Residential
Notification_Business      Expected duration exceeded   Business account      Business

In this example, two nearly identical notifications are created, both triggered by the order processing time exceeding the expected duration. If the order is for a residential account, the Notification_Residential notification is triggered and sent to the role that handles residential accounts.

OSM uses a system-based null_rule. This rule always evaluates to true. Therefore, if you do not specify a rule for a notification, the null_rule is used; because it is set to true, the notification is triggered. If you do not specify any conditions to trigger the notification, and the notification uses the null_rule, the notification is triggered every time it is polled.
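The null_rule behavior can be modeled as a default predicate; this is an illustrative sketch (the rule names other than null_rule are hypothetical), not OSM API code:

```python
# Illustrative model of notification rule evaluation: when no rule is
# specified, the system null_rule applies; it always evaluates to True,
# so the notification fires on every poll.

def null_rule(order_data):
    return True

def should_trigger(notification_rule, order_data):
    rule = notification_rule or null_rule   # no rule specified -> null_rule
    return rule(order_data)

residential_rule = lambda d: d.get("account_type") == "Residential"

assert should_trigger(residential_rule, {"account_type": "Business"}) is False
assert should_trigger(None, {"account_type": "Business"}) is True  # null_rule
```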

Note:

The polling interval cannot be changed at run time.

See "About Order Rules in Processes and Notifications" for more information about rules.

Using the System Date in Delays

You can create a rule that uses the system date as part of a condition. For example, you can create a rule used in a delay that delays a task transition until the system date is at least the value of a particular order data element of the dateTime data type. Figure 7-6 shows a rule that triggers when the system date is at least the value of the date when a particular poll is run.

Figure 7-6 Using the System Date in a Rule

Description of Figure 7-6 follows
Description of "Figure 7-6 Using the System Date in a Rule"

See "Adding Process Activities" for more information about delays in process flows.
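A delay rule that uses the system date amounts to a comparison between the current time and a dateTime order field. This is an illustrative Python sketch; the field name next_poll_date is hypothetical, and this is not OSM rule syntax:

```python
# Illustrative model of a delay rule using the system date: the task
# transition is held until the system date is at least the value of a
# dateTime order data element.

from datetime import datetime

def delay_rule(order_data, now=None):
    now = now or datetime.now()
    return now >= order_data["next_poll_date"]   # True releases the delay

order_data = {"next_poll_date": datetime(2024, 6, 1, 9, 0)}
assert delay_rule(order_data, now=datetime(2024, 5, 31, 9, 0)) is False
assert delay_rule(order_data, now=datetime(2024, 6, 1, 9, 0)) is True
```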

Process and Task Design and Data Considerations for Compensation

There are aspects of compensation that you need to consider when you are designing data, tasks, and processes.

Order Perspectives and Data Elements in Compensation

There are some aspects of compensation that you should consider when designing your processes. Compensation takes place using the data in the contemporary order perspective, but must be reconciled with the data in the real-time order perspective. (For more information about the different order perspectives, see "About Order-Level and Task-Level Compensation Analysis.")

The issue relates to data elements that have been added in tasks that are later in the process than the task currently being compensated. The data that has been added is not present in the contemporary order perspective, since it was not present when the task performed its do operation. However, it is present in the real-time order perspective. If the redo operation checks whether the data element exists, it will be checking the contemporary perspective and will not find it. This will cause the redo operation to attempt to add the data element instead of updating it, which will cause problems when the data is reconciled with the real-time order perspective.

To avoid this situation, you should create any needed data elements before executing tasks that may be compensated. If the data is order-level data, you should initialize the data in the creation task for the order. If the data is function-level data, initialize the data needed by the process in a task that is executed early in the process, before tasks that may be compensated.
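The perspective mismatch described above can be sketched as follows; the data names are hypothetical, and this is an illustrative model, not OSM API code:

```python
# Illustrative model of the perspective mismatch: a redo operation that
# checks the contemporary perspective for an element added by a later
# task does not find it, so it attempts an "add" that collides with the
# real-time perspective on reconciliation. Initializing the element in
# an early task (e.g., the creation task) avoids this.

contemporary = {"service_id": "S1"}               # data as of the task's do
realtime = {"service_id": "S1", "port": "P7"}     # includes later tasks' data

def redo_operation(perspective, element):
    # the redo checks the contemporary perspective for the element
    return "update" if element in perspective else "add"

assert redo_operation(contemporary, "port") == "add"    # conflicts on merge
# If the creation task had initialized "port", redo would update instead:
contemporary_initialized = {"service_id": "S1", "port": None}
assert redo_operation(contemporary_initialized, "port") == "update"
```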

Effects of Process Loops on Compensation

When you have loops in your OSM processes that cause your tasks to execute multiple times and the process is compensated, each instance of the task that ran will be compensated. If entire sub-processes are being looped, this can cause a large number of tasks to require compensation.

For example, consider the process in Figure 7-7:

In this very simplified process, Task1 can run multiple times if it fails. In this example, it runs four times: three times exiting with Failure and once with Success, as shown in Figure 7-8.

Figure 7-8 Example of Initial Simple Loop Process Sequence

Description of Figure 7-8 follows
Description of "Figure 7-8 Example of Initial Simple Loop Process Sequence"

If the process needs to be compensated, the task will first be run once in redo mode. If this is successful, it will make the rest of the initial flow obsolete, so the tasks remaining in that flow would be run in undo mode, as shown in Figure 7-9.

Figure 7-9 Example of Compensation of Simple Loop Process

Description of Figure 7-9 follows
Description of "Figure 7-9 Example of Compensation of Simple Loop Process"

Then, in the new branch of the process, Task2 will also be run in amend-do mode.

This example shows that while looping inside a process is supported by OSM, solution designers must carefully consider the implications of such loops when OSM compensates them as a result of an amendment. Most solutions include more complicated loops with more tasks per iteration, so you need to consider the impact that looped processes will have on the performance of your overall solution.
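The compensation cost of a loop grows with the iteration count, because every executed instance of a looped task is compensated. This illustrative arithmetic sketch (numbers are hypothetical) makes the scaling concrete:

```python
# Illustrative count of compensation work created by a loop: every
# executed instance of a looped task must be run in a compensation mode
# (redo or undo), so iterations multiply the number of tasks involved.

def compensation_task_count(iterations, tasks_per_iteration):
    return iterations * tasks_per_iteration

# Task1 ran four times (three failures, one success) before compensation:
assert compensation_task_count(4, 1) == 4
# A looped subprocess with 5 tasks per iteration over 4 iterations:
assert compensation_task_count(4, 5) == 20
```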

Modeling Tasks Entities Common to All Task Types

The following sections provide information about modeling task entities common to all task types.

Modeling Task States

All OSM tasks use states that determine various milestones in the progress of a task. The default task states are:

  • Received: The task has been received in the system and is waiting to be accepted by a user (normally automatic for automated tasks) or assigned to a user (only in manual tasks).

  • Accepted: The task has been accepted by the assigned user (a system user account for automated tasks or an operator's user account for manual tasks). The task is locked so that it cannot be modified or completed by other users.

  • Completed: The task is finished.

  • Assigned: (Manual tasks only) The task has been assigned to a user.

  • Create Activation Work Order Failed: (Activation task only) The task attempted to create a work order in the activation system, but work order creation failed.

These task states are mandatory and cannot be removed, but you can create custom task states.

Task states are important because they often trigger functionality. For example, an automated task's automation plug-ins run only when the task is in the Accepted state. You can configure task-level events to trigger when a task state is reached.

Modeling Task Permissions and Execution Modes

When you model tasks, you can specify which roles can perform which task execution modes (Do, Redo, Undo, Failed-Do, Failed-Redo, and Failed-Undo). For example, you may want to configure a specific role for normal Do, Redo, and Undo execution modes with a second role for fallout management that also operates in fallout execution modes. OSM users that are part of the fallout workgroup can work on failed automated and manual tasks. For more information about task execution modes and change order management, see "About Task Execution Modes".

Figure 7-10 shows roles used in a task specification.

About Normal and Fallout Execution Modes and Task States

OSM provides the following execution mode groups:

  • Normal: Task execution modes that run in normal mode include the Do, Undo, Redo, and Amend-Do modes for normal task processing activities.

  • Fallout: Task execution modes that run in the fallout mode include Do in Fallout, Undo in Fallout, Redo in Fallout, and Amend-Do in Fallout modes for troubleshooting tasks that have failed.

Note:

If an amendment is received while a task is in a fallout execution mode, the following will happen:

  • If the task is not configured to be compensated if it is in progress, the execution mode of the task will not change as a result of the amendment order.

  • If the task is configured to be compensated if it is in progress, and the amendment contains changes to significant data:

    • If the task is still needed after the changes to the order from the amendment are considered, it will transition automatically to (normal) Redo mode.

    • If the task is no longer needed after the changes to the order from the amendment are considered, it will transition automatically to (normal) Undo mode.

    In both of these cases, your automation code (for either Redo or Undo execution mode) should contain a check to see if the task has been in a fallout execution mode, and also whatever code is needed to resolve any actions that have been taken in the fallout execution mode. For example, if your automation for Do in Fallout mode opens a trouble ticket, your Redo automation should check to see whether it needs to close a trouble ticket.

  • If the amendment order contains no changes to significant data, the execution mode of the task will not change as a result of the amendment order.

Figure 7-11 shows how OSM transitions tasks to the fallout execution modes and back to normal execution modes and how these modes relate to task states.

Figure 7-11 Normal and Fallout Execution Mode and Task States

Description of Figure 7-11 follows
Description of "Figure 7-11 Normal and Fallout Execution Mode and Task States"

The following shows how a task in Figure 7-11 progresses through each state in a normal execution mode:

  1. When OSM starts a task, it enters into the Received state in a normal execution mode.

  2. For manual tasks, an operator can optionally assign the task to themselves or have the task assigned to them. When the task is assigned, it enters the Assigned state. Automated tasks do not use this state.

  3. When an operator or the system begins working on the manual or automated task, the task enters into the Accepted state.

  4. While the task is in the Accepted state, the system or the operator can:

    • Move the task to a customer-defined state, such as the Suspended state, for a business reason defined for the task. From the Suspended state, the system or the operator can return the task to the Accepted state or move it to the Assigned state.

    • Move the task to the Completed state by completing the task.

    • Fail the task. A failed task automatically moves to the Received state in a fallout execution mode. You can fail a task in the following ways:

      • Task web client for manual tasks

      • OSM Java API for automated tasks in automation plug-in code.

      • OSM XML API for manual and automated tasks in automation plug-in code.

The following shows how a task in Figure 7-11 progresses through each state in a fallout execution mode:

  1. A task enters a failed execution mode in the Received state from a normal execution mode in the Accepted state.

  2. For manual tasks, an operator must assign the task to themselves or have the task assigned to them. When the task is assigned, it enters the Assigned state. Automated tasks do not use this state.

  3. When an operator or the system begins working on the manual or automated task, the task enters into the Accepted state.

  4. While the failed task is in the Accepted state, the system or the operator can:

    • Move the task to a customer-defined state, such as the Suspended state, for a business reason defined for the task. From the Suspended state, the system or the operator can return the task to the Accepted state or move it to the Assigned state.

    • Move the task to the normal execution mode Completed state to complete the task.

    • Retry the failed task. Retrying a task moves the task back to the normal execution mode to the Received state to retry the task from the beginning. You can retry a failed task in the following ways:

      • Task web client for one task or for all tasks on the order

      • Order Management web client for all failed tasks on a specific order component within an order, for all failed tasks on each order, or for all failed tasks of many orders as a job control order. You cannot retry a specific task type in bulk across multiple orders using a job control order.

      • OSM Java API in automation plug-in code

      • OSM XML API in automation plug-in code

      • OSM Web Service API operation for all failed tasks on a specific order component within an order, for all failed tasks on each order, or for all failed tasks of many orders as a job control order. You cannot retry a specific task type in bulk across multiple orders using a job control order.

    • Resolve the task. Resolving a task moves the task back to the original normal execution mode and state it had been in before failing. You can resolve a failed task in the following ways:

      • Task web client for one task or for all tasks on the order

      • Order Management web client for all failed tasks on a specific order component within an order, for all failed tasks on each order, or for all failed tasks of many orders as a job control order. You cannot resolve a specific task type in bulk across multiple orders using a job control order.

      • OSM Java API in automation plug-in code

      • OSM XML API in automation plug-in code

      • OSM Web Service API operation for all failed tasks on a specific order component within an order, for all failed tasks on each order, or for all failed tasks of many orders as a job control order. You cannot resolve a specific task type in bulk across multiple orders using a job control order.
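The two walkthroughs above can be summarized as a state machine spanning both execution modes. This is an illustrative sketch; the state names follow the text, but the code itself is hypothetical and not OSM API code:

```python
# Illustrative state machine for the normal and fallout walkthroughs.
# Only default states are modeled; each state is a (mode, state) pair.

ALLOWED = {
    ("normal", "Received", "assign"):    ("normal", "Assigned"),   # manual only
    ("normal", "Received", "accept"):    ("normal", "Accepted"),
    ("normal", "Assigned", "accept"):    ("normal", "Accepted"),
    ("normal", "Accepted", "complete"):  ("normal", "Completed"),
    ("normal", "Accepted", "fail"):      ("fallout", "Received"),
    ("fallout", "Received", "assign"):   ("fallout", "Assigned"),
    ("fallout", "Assigned", "accept"):   ("fallout", "Accepted"),
    ("fallout", "Accepted", "complete"): ("normal", "Completed"),
    ("fallout", "Accepted", "retry"):    ("normal", "Received"),   # from start
}

def transition(mode_state, action):
    return ALLOWED[(mode_state[0], mode_state[1], action)]

s = ("normal", "Received")
s = transition(s, "accept")
s = transition(s, "fail")                       # failing the task
assert s == ("fallout", "Received")             # enters fallout mode
s = transition(s, "assign")
s = transition(s, "accept")
assert transition(s, "retry") == ("normal", "Received")
```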

Modeling Task Status Transitions

You model task statuses to define how a task completes and to determine the next task in the process flow. You define the status transitions available to a task on the task editor Statuses tab, and then you apply the status transitions to the process flows you create between tasks.

You can use the default status transitions defined in manual, automated, activation, and transformation tasks or you can create new status transitions that may better describe what is happening during a status transition from one task to another.

The default statuses for a manual task are:

  • Back

  • Cancel

  • Finish

  • Next

The default statuses for automated and transformation tasks are:

  • Failure

  • Success

The default statuses for an activation task are:

  • Success

  • Activation Failed

  • Updated OSM Order Failed

You can also select from the set of additional predefined statuses (Delete, False, Rollback, Submit, Failed, and True), and you can also define your own.

You can also use constraint behaviors with status transitions and manual tasks to better control when an operator can transition from one task to another task. See "Using the Constraint Behavior to Validate Data".

Specifying the Expected Task Duration

You can specify the expected length of time to complete a task. This information can be used to trigger jeopardy notifications and for reporting. See "Modeling Jeopardy and Notifications" for more information. This information is also used by OSM to calculate the order component duration.

You can specify the length of time in weeks, days, hours, minutes, and seconds. The default is one day.

You can also calculate the duration based on your workgroup calendars. If more than one workgroup with different calendars is responsible for the same task, the calculation is based on the first available workgroup that has access to the task. This ensures that a task exceeds its expected duration only based on workgroup calendar time.

For example, there might be a task with an expected duration of two hours, and the workgroup that processes the task works only 9 AM - 5 PM Monday to Friday, as indicated on its workgroup calendar. If such a task is received at 4 PM on Friday, the expected duration expires at 10 AM on Monday, because only two hours of workgroup calendar time have elapsed (4-5 PM Friday and 9-10 AM Monday). This ensures that notifications and jeopardies are triggered appropriately.
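The example above can be reproduced with a simple business-hours calculation; this is an illustrative sketch assuming a fixed 9 AM - 5 PM Monday-Friday calendar (real workgroup calendars also model holidays and per-day schedules):

```python
# Illustrative calculation of when an expected duration expires against
# a 9 AM - 5 PM Monday-Friday workgroup calendar.

from datetime import datetime, timedelta

OPEN, CLOSE = 9, 17   # working hours of the hypothetical calendar

def expiry(received, duration_hours):
    t, remaining = received, timedelta(hours=duration_hours)
    while remaining > timedelta(0):
        if t.weekday() < 5 and OPEN <= t.hour < CLOSE:   # within working hours
            step = min(remaining, t.replace(hour=CLOSE, minute=0) - t)
            t += step
            remaining -= step
        else:   # jump to the next working day's opening time
            t = (t + timedelta(days=1)).replace(hour=OPEN, minute=0)
            while t.weekday() >= 5:
                t += timedelta(days=1)
    return t

# Received 4 PM Friday June 7 2024, 2-hour duration -> 10 AM Monday:
assert expiry(datetime(2024, 6, 7, 16, 0), 2) == datetime(2024, 6, 10, 10, 0)
```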

See OSM Task Web Client User's Guide for more information.

Specifying the Task Priority

Task priority is the same as the order priority unless a priority offset is defined. Priority of orders and their tasks becomes effective when the system is under heavy load, ensuring that high priority orders and tasks are not starved of resources by lower priority orders and tasks.

You define the task priority as an offset from the priority of the order itself. This specifies the priority of the task in relation to other tasks in the order.

For example, if the order is created at priority 6, and this task is assigned a priority offset of -2, then this task would run at priority 4 while tasks in the order with no offset would run at priority 6. Similarly, you could assign a task a priority offset of +2, which would mean that the task would run at a slightly higher priority than other tasks in the order.
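The offset arithmetic above can be sketched as follows; the 0-9 priority range is an assumption made for this illustration:

```python
# Illustrative task priority calculation from the order priority and a
# per-task offset. The valid range is assumed to be 0-9 for this sketch.

def task_priority(order_priority, offset=0):
    return max(0, min(9, order_priority + offset))   # clamp to assumed range

assert task_priority(6, -2) == 4   # runs at lower priority than the order
assert task_priority(6) == 6       # no offset: same as the order
assert task_priority(6, +2) == 8   # slightly higher priority
```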

See "Modeling Order Priority" for more information about order priority.

About Extending Tasks

You can create a new task by extending from an existing task. The new task inherits all of the data, rules, and behaviors of the base task from which it was extended. Changes to the base task are reflected in all tasks that extend from it.

For example, if you have multiple tasks that all require the same data subset, you can create a base task that contains this data, then extend from this task to create as many new tasks as necessary. You can add new data and behaviors to each of the new tasks to create unique task and behavior functionality. Extending tasks can significantly reduce duplication and maintenance.

About Task Types

The following sections provide information about different task types.

Modeling Automated Tasks

You add automated tasks to processes whenever you need a task that can run automation plug-in instances without user intervention. Automated task automation plug-ins can perform various functions, such as connecting to a database to query data, transforming data, or communicating with external fulfillment systems. OSM runs the automation plug-in instances on an automated task whenever the automated task transitions to the Received state in a normal or fallout execution mode (see "About Normal and Fallout Execution Modes and Task States").

An automated task can perform multiple functions, depending on the code you write in its automation plug-ins. Among the many functions you can implement, you must ensure that the automation plug-ins manage task status transitions to complete a task and move the process to the next task (see "Modeling Task Status Transitions"). You can also specify task execution modes that determine which roles (workgroups) can perform the task and in what ways (see "About Normal and Fallout Execution Modes and Task States"). If an automated task does not have any automation plug-ins that can run in fallout execution modes, the automated task runs as a manual task, as long as there are users associated with the roles designated to manage the fallout execution modes (see "Modeling Task Permissions and Execution Modes").

Automated tasks can also trigger jeopardy notifications based on the duration of the task and event notifications based on task state changes (see "Modeling Jeopardy and Notifications").

About Automation Plug-in and Automated Tasks

When you add an automated task to a process, you must associate at least one automation plug-in with the task. To associate an automation plug-in with a task, you open the automated task entity in the Automated Task editor and add the plug-in on the Automation tab. When you deploy your cartridge to the run-time environment and the OSM server detects a task that has an automation plug-in associated with it, the server triggers the plug-in to perform its processing.

An automated task might have only a single automation plug-in associated with it. For example, you might associate a built-in Automator plug-in with the task to interrogate the task data, perform some calculation, update the order data, and transition the task. In this example, as soon as the Automator plug-in has finished processing, it updates the task with an exit status, and the OSM server moves to the next task.

An automated task can have multiple associated automation plug-ins. For example, you might want to associate multiple plug-ins with a task to represent conversations with external systems. You can associate a built-in Sender plug-in to receive the task data and send it to an external system for processing. That external system might send an acknowledgement back to a queue, where a second Automator plug-in, one that is defined as an external event receiver (it receives data from external system queues), consumes the reply and updates the order data with the response. A third Sender plug-in might send the external system a message to begin processing, and a fourth Automator plug-in can receive the "processing complete" message from the external system, update the order, and transition the task.

See "About Automation Plug-ins" for more information.

Completing an Automation Task That Handles Concurrent Status Updates

An automated task can process multiple responses from external systems. For example, an activation task might receive the status for each service on the activation request. The activation task needs this information to determine when the activation has been completed by the external system, at which point the task can transition to the Completed state. There are two ways to determine when processing is complete:

  • The external system can include data that indicates that all of the requests have been completed. Typically, this is a message indicating that the response is the last response, and there will be no further messages.

  • If the external system cannot report that the last request has been processed, the automation task must ensure that a response has been received for each request sent to the external system.

When OSM must determine the last response, there are special considerations for concurrent status updates. If the automated task needs to track the status of all responses, and multiple responses are processed concurrently, the automation receiver instances executing concurrently do not have visibility into status updates from the other receivers. A receiver may never execute with the task data that contains all status updates, and so never encounters a condition where it can complete the task.

This situation can be handled by configuring an automated notification plug-in that monitors the status fields and creates a notification whenever the data changes.

Figure 7-12 Sequence Diagram for Concurrent Status Update Notification Process


The notification plug-in is triggered every time the status field is updated by the automation receiver. The notification plug-in executes in a separate transaction after each receiver update, and can check the status responses to determine if all responses have been received for each action request. When all responses are received, the notification plug-in can generate a message to trigger an automation receiver. This receiver is correlated to the original sender by means of an ID set by the sender specifically for tracking the status updates. The receiver is then run with the task data that contains all of the status responses and it can complete the task.
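The completion check that this notification-driven pattern relies on can be sketched as follows. This is a minimal plain-Java illustration, not OSM API code: it assumes each request sent to the external system is tracked by an ID in the task data, with a null status until a response arrives.

```java
import java.util.HashMap;
import java.util.Map;

public class StatusTracker {
    // Task data snapshot: one entry per request sent to the external system;
    // a null value means no response has been received yet for that request.
    private final Map<String, String> responseStatusById = new HashMap<>();

    void requestSent(String requestId) {
        responseStatusById.put(requestId, null);
    }

    void responseReceived(String requestId, String status) {
        responseStatusById.put(requestId, status);
    }

    // The check performed after each receiver update: only when every
    // request has a recorded status can the task be completed.
    boolean allResponsesReceived() {
        return !responseStatusById.isEmpty()
                && responseStatusById.values().stream().allMatch(s -> s != null);
    }
}
```

In the notification-based design, this check runs in the notification plug-in's own transaction after each status update, so it always sees the accumulated task data rather than a single receiver's partial view.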

Modeling Manual Tasks

You add manual tasks to processes whenever you need a task that requires direct user intervention. Users work with manual tasks in the OSM Task web client whenever a manual task transitions from the received state to the assigned state in a normal or fallout execution mode (see "About Normal and Fallout Execution Modes and Task States"). You assign manual tasks to OSM users in the following ways:

  • Manually: The task appears in the OSM Task web client in the received state and an operator has the responsibility to assign the task to a user.

  • Automatically (pre-defined in Design Studio): You can optionally choose a round robin task assignment algorithm that distributes tasks evenly among all users associated with the role (workgroup) that can work on the task, or a load balancing task assignment algorithm that distributes tasks based on user workload.

  • Automatically (customized task assignment algorithm): You can develop a custom task assignment algorithm using OSM's cartridge management tools. See "Deploying a Custom Task Algorithm using the OSM Cartridge Management Tool".

When an operator is working in a manual task, they must directly update task data in the OSM Task web client. You can add behaviors to manual task data that perform various functions. For example:

  • Performing calculations on numerical task data.

  • Adding constraints on task data fields to validate the data that users enter. You can also use constraints to control whether a user can transition from one task to another.

  • Making a field read-only.

  • Making a field visible to some users only.

See "Modeling Behaviors" for more information about all behavior options that OSM provides.

Manual tasks use task states to manage the progress of the task (see "Modeling Task States") and task status transitions to move from one task to another task (see "Modeling Task Status Transitions"). You can also specify task execution modes that determine which roles (workgroups) can perform the task and in what ways (see "About Normal and Fallout Execution Modes and Task States").

Manual tasks can also trigger jeopardy notifications based on the duration of the task and event notifications based on task state changes (see "Modeling Jeopardy and Notifications").

Manual tasks are often used when initially developing OSM solutions to better understand what needs to happen at various points in an OSM solution. When solution developers have a better understanding of what a task is doing, they can consider transforming the task into an automated task with associated automation plug-ins. In addition, you can insert manual tasks in a process to function as breakpoints for debugging. This allows you to control a process when you test it.

Deploying a Custom Task Algorithm using the OSM Cartridge Management Tool

The OSM Cartridge Management Tool (OSM CMT) is only applicable for traditional OSM deployments. To use a custom task algorithm in OSM cloud native, see "Using a Custom Task Algorithm in OSM Cloud Native".

In addition to the round robin and load balancing task assignment algorithms that OSM provides, you can create a custom task assignment algorithm that assigns tasks based on custom business logic. Before you can use OSM CMT to deploy a custom task assignment algorithm, ensure that:

  • You can access and reference a WebLogic Server and ADF installation home directory from the OSM CMT build files. See OSM Installation Guide for version information.

  • You have downloaded and installed Ant. See OSM Installation Guide for version information.

  • You have installed the SDK Tools and the SDK Samples components using the OSM installer. You do not need to install the other options. See OSM Installation Guide for more information about using the OSM installer.

  • You have created a custom task assignment algorithm. See the SDK/Samples/TaskAssignment/code/CustomizedTaskAssignment.java reference sample for more information about creating a custom task assignment algorithm.

To deploy a custom task algorithm to an OSM server using OSM CMT:

  1. From a Windows command prompt or a UNIX terminal, go to WLS_home/server/lib (where WLS_home is the location of the base directory for the WebLogic Server core files).

  2. Create a WebLogic client wlfullclient.jar file that OSM CMT uses to communicate with the OSM WebLogic server:

    java -jar wljarbuilder.jar
    
  3. Copy the following files required by OSM CMT to the Ant_home/lib folder (where Ant_home is the location of the Ant installation base directory).

    • WLS_home/server/lib/weblogic.jar

    • WLS_home/server/lib/wlfullclient.jar

    • MW_home/modules/com.bea.core.descriptor.wl_1.2.0.0.jar (where MW_home is the location where the Oracle Middleware products were installed.)

    • SDK/deploytool.jar

    • SDK/Automation/automationdeploy_bin/automation_plugins.jar

    • SDK/Automation/automationdeploy_bin/xmlparserv2.jar

    • SDK/Automation/automationdeploy_bin/commons-logging.jar

    • SDK/Automation/automationdeploy_bin/log4j-1.2.13.jar

  4. Set the following environment variables and add them to the command shell's path:

    • ANT_HOME: The base directory of the Ant installation.

    • JAVA_HOME: The base directory of the JDK installation.

    For example, for a UNIX or Linux Bash shell:

    ANT_HOME=/home/user1/Middleware/modules/org.apache.ant_1.7.1
    JAVA_HOME=/usr/local/jdk170_51
    PATH=$ANT_HOME/bin:$JAVA_HOME/bin:$PATH
    export ANT_HOME JAVA_HOME PATH
    

    For example, for a Windows command prompt:

    set ANT_HOME=c:\path\to\oracle\home\Middleware\modules\org.apache.ant_1.7.1
    set JAVA_HOME=c:\path\to\oracle\home\Middleware\jdk170_51
    set PATH=%ANT_HOME%\bin;%JAVA_HOME%\bin;%PATH%
    
  5. Open the SDK/Samples/config/samples.properties file.

  6. Set the following variables:

    • Set osm.root.dir to the OSM installation base directory.

    • Set oracle.home to the Oracle Middleware products base directory.

      For example, for a UNIX or Linux Bash shell:

      /home/oracle/Oracle
      

      For example, for a Windows command prompt:

      C:/Oracle
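For example, the two settings together might look like the following fragment of samples.properties (both paths are placeholders for your own installation directories):

```properties
# samples.properties (example values)
osm.root.dir=/home/oracle/OSM
oracle.home=/home/oracle/Oracle
```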
      
  7. Copy the custom task assignment algorithm file you created to SDK/Samples/TaskAssignment/code.

  8. Open the SDK/Samples/TaskAssignment/code/build.properties file.

  9. Set the following variables:

    • Set weblogic.url to the WebLogic Administration Server URL. The format is:

      t3://ip_address:port
      

      where:

      • ip_address is the IP address for the WebLogic Administration Server.

      • port is the port number for the WebLogic Administration Server.

    • Set weblogic.domain.server to the name of the WebLogic Administration Server.

    • Set weblogic.username to the WebLogic Administration Server user name.

    • Set webLogicLib to the path to the WLS_home/server/lib folder.

    • Set ejbname to the Enterprise Java Bean (EJB) name for the task assignment behavior.

    • Set ejbclass to the class name for the task assignment behavior.

    • Set jndiname to the Java Naming and Directory Interface (JNDI) bind name for task assignment behavior.

    • Set targetfile to the name of the deployment target file, without a suffix such as .ear or .jar.

    Note:

    ejbname, ejbclass, jndiname, and targetfile are preconfigured to deploy the SDK/Samples/TaskAssignment/code/CustomizedTaskAssignment.java sample task assignment algorithm. Replace these default values with those for the custom task assignment algorithm.
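Taken together, the build.properties settings above might look like the following fragment. All values shown are illustrative placeholders; substitute the connection details, class name, and JNDI name for your own environment and algorithm:

```properties
# WebLogic Administration Server connection (example values)
weblogic.url=t3://192.0.2.10:7001
weblogic.domain.server=AdminServer
weblogic.username=weblogic
webLogicLib=/opt/oracle/Middleware/wlserver/server/lib

# Task assignment EJB details (replace with your algorithm's values)
ejbname=MyTaskAssignment
ejbclass=com.example.assignment.MyTaskAssignment
jndiname=example/MyTaskAssignment
targetfile=my_task_assignment
```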

  10. Create and deploy a Design Studio cartridge that includes a manual task that you want to associate with the custom task assignment algorithm. You can associate the custom task assignment algorithm in the Details tab of the manual task using the Assignment Algorithm and JNDI Name fields. See "Task Editor Details Tab" in Modeling OSM Processes for more information.

    Note:

    You can import the sample task assignment cartridge from SDK/Samples/TaskAssignment/data/taskassignment.xml. For more information about importing an OSM model into Design Studio, see "Working with Existing OSM Models" in Modeling OSM Processes.

  11. From the SDK/Samples/TaskAssignment/code directory, at the Windows command prompt or UNIX shell, type:

    ant
    

    The Ant script begins to run.

  12. When the Ant script reaches Input WebLogic Password for user weblogic ..., enter the WebLogic Administration Server password.

    The Ant tool compiles, assembles, and deploys the custom task assignment algorithm to the OSM WebLogic Server.

    Note:

    You can also individually compile, assemble, deploy, or undeploy using the following Ant commands:

    ant compile
    ant assemble
    ant deploy
    ant undeploy
Using a Custom Task Algorithm in OSM Cloud Native

To use a custom task algorithm in OSM cloud native, ensure that you have followed these steps:

  • You have created a custom task assignment algorithm. See the SDK/Samples/TaskAssignment/code/CustomizedTaskAssignment.java reference sample for more information about creating a custom task assignment algorithm.

  • You have deployed the algorithm to WebLogic. Traditional deployment mechanisms do not apply in an OSM cloud native environment; see the "Deploying Entities to an OSM WebLogic Domain" section in OSM Cloud Native Guide.

  • You have created and deployed a Design Studio cartridge that includes a manual task that you want to associate with the custom task assignment algorithm. You can associate the custom task assignment algorithm in the Details tab of the manual task using the Assignment Algorithm and JNDI Name fields. See Design Studio Help for more information.

Note:

You can import the sample task assignment cartridge from SDK/Samples/TaskAssignment/data/taskassignment.xml. For more information about importing an OSM model into Design Studio, see Design Studio Help.

Modeling Transformation Tasks

You can use a transformation task if you want to call the order transformation manager from a process instead of before the orchestration plan is generated. See "Calling the Order Transformation Manager" for more information. The transformation task is much like an automated task, except that it has an appropriate automation plug-in defined for it by default and lets you specify which order transformation manager to call.

Modeling Activation Tasks

Before you can model activation tasks in Design Studio, you must install the Design Studio for Order and Service Management Integration feature. This feature includes the Design Studio for Activation feature for integrating with ASAP and IP Service Activator. To model activation tasks, you must also install the Design Studio for Activation feature. At a high level, an activation task works as follows:

  1. OSM transforms order data into an OSS through Java (OSS/J) message or a web service message and sends it to ASAP or to IP Service Activator. To model this, you configure service action request mapping to map OSM data to ASAP data or to IP Service Activator data. See "About Service Action Request Mapping" for more information.

  2. ASAP or IP Service Activator receives the data, activates the service, and returns a success or failure status to OSM. To allow OSM to handle the returned data, you model service action response mapping. See "About Service Action Response Mapping" for more information.

Other elements specific to activation tasks are:

  • You can configure state and status transitions for completion events and exceptions returned by ASAP or IP Service Activator.

  • You can configure how to handle amendment processing with activation tasks.

  • If you are sending JMS OSS/J messages, Oracle recommends that you configure JMS store and forward (SAF) queues to manage the connection to ASAP or to IP Service Activator.

  • If you are sending web service messages, Oracle recommends that you configure web service SAF queues to manage the connection to ASAP or to IP Service Activator.

About Service Action Request Mapping

You send fulfillment data to ASAP or to IP Service Activator as a service action request. To model a service action request, you map OSM header data (information that applies to the customer or to all order line items on the order) and OSM task data to the following service order activation data:

  • Activation order header: Information that applies to the entire work order.

  • Service action: Information that is required to activate a service.

  • Global parameters: Information that you define once and which applies to multiple service actions.

About Service Action Response Mapping

After ASAP or IP Service Activator activates a service, it returns information to OSM. You create data structures in OSM to contain the response information returned from ASAP or IP Service Activator. For each event and exception returned by ASAP or IP Service Activator, you select the ASAP or IP Service Activator data that you want to retain, and then identify the OSM data structure to which that data is added. When ASAP or IP Service Activator returns an event or exception, OSM updates the order data with the ASAP or IP Service Activator data that you specified.

Tip:

The amount of response data from ASAP or IP Service Activator can be very large, though the data that is needed might be small. Parsing large amounts of ASAP or IP Service Activator response data can affect OSM performance. If you notice a reduction in OSM performance due to large amounts of ASAP or IP Service Activator response data, you can specify a condition on specific parameters to limit the ASAP or IP Service Activator response data.

About Activation Tasks and Amendment Processing

You can configure how to manage an activation task if the associated order undergoes amendment processing. The options are:

  • Intervene manually.

  • Do not perform any revision/amendment.

  • Have OSM redo the activation task, using the previously defined request mapping.

  • Have OSM redo the task, using different request mapping.

About State and Status Transition Mapping for Activation Tasks

You can configure state and status transitions to manage completion events (for example, activation complete) and errors returned by ASAP or returned by IP Service Activator. You can define multiple transitions to model different scenarios for variations in the data received from ASAP or received from IP Service Activator. For example, if an ASAP parameter or IP Service Activator parameter returns the value DSL, you may want the task to transition to a DSL task; when the same parameter returns the value VOIP, you want the task to transition to a different task.

You can define state transitions for user-defined states only; you cannot define transitions for system states, such as Received, Accepted, and Completed. At run time, OSM evaluates the conditions in sequence and stops evaluating when a condition evaluates to true. Completion events and errors must include a default transition in case all specified conditions fail.
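The evaluation order described above can be illustrated with a small sketch (plain Java, not OSM API code; the task and condition names are invented for the example): conditions are checked top to bottom, the first true condition wins, and the default transition is used when every condition fails.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

public class TransitionSketch {
    // Ordered condition -> target task mappings, evaluated top to bottom,
    // plus a default transition used when every condition fails.
    private final Map<Predicate<String>, String> transitions = new LinkedHashMap<>();
    private final String defaultTarget;

    TransitionSketch(String defaultTarget) {
        this.defaultTarget = defaultTarget;
    }

    void addTransition(Predicate<String> condition, String targetTask) {
        transitions.put(condition, targetTask);
    }

    // Evaluation stops at the first condition that evaluates to true.
    String nextTask(String parameterValue) {
        for (Map.Entry<Predicate<String>, String> e : transitions.entrySet()) {
            if (e.getKey().test(parameterValue)) {
                return e.getValue();
            }
        }
        return defaultTarget;
    }
}
```

For instance, a returned parameter value of "DSL" would select a DSL task, "VOIP" a different task, and any other value would fall through to the default (for example, a fallout task).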

About Automation Plug-ins

You use automation plug-ins to implement specific business logic automatically. You can create automation plug-ins to update order data, complete order tasks with appropriate statuses, set process exceptions, react to system notifications and events, send requests to external systems, and process responses from external systems.

There are two basic types of delivered automation plug-ins: Automator and Sender. Each type can be implemented using XSLT or XQuery, and each type can be defined as an internal event receiver (the JMS message that triggers the call to the plug-in is generated by OSM) or as an external event receiver (the JMS message that triggers the call to the plug-in is generated by an external system).

  • Automator plug-ins receive information from OSM or an external system, and then perform some work. Depending on how you configure the plug-in, it can also update the order data.

  • Sender plug-ins receive information from OSM or from an external system. They perform some business logic, and may or may not update an order, depending on your configuration. Additionally, they can produce outgoing JMS or XML messages to an external system. When generating JMS messages, you can configure the message to be sent to a topic or to a queue.

Note:

XQuery automation types cannot be implemented when using releases prior to OSM 7.0.

OSM assigns automated task plug-in instances to the user account specified in the Run As field on the Details subtab of the plug-in Properties subtab. The user account must belong to the OSM_automation WebLogic group. When you install OSM, the OSM installer automatically creates the oms-automation user, which belongs to the OSM_automation group. You can use this user account to run automation plug-in instances or create new accounts for that purpose. You can also use the DEFAULT_AUTOMATION_USER model variable in the Run As field; you define this variable in the Model Variables tab of the Order and Service Management Project editor or in the Model Variables tab of the Environment editor.

The term automation can refer to either of the following:

  • The automation plug-in code that you create and associate with an automation task in Design Studio.

  • The instance of an automation plug-in that the OSM run-time server creates in response to an event that triggers an automation. OSM creates and reuses such instances as required when processing automated tasks. OSM maintains these plug-in instances even if an instance is no longer required, and creates additional plug-in instances only when the current pool of instances is insufficient to handle the number of incoming orders. OSM destroys automation plug-in instances only in the following scenarios:

    • When you shut down the OSM server, OSM destroys all plug-in instances.

    • When you undeploy a cartridge, OSM destroys all plug-in instances associated with the undeployed cartridges.

    • When OSM detects an error condition in an instance, OSM destroys that instance.

See OSM Developer's Guide for detailed information about automated tasks and automation plug-ins.

Specifying Which Data to Provide to Automation Plug-ins

The data that is available to each automation plug-in should be the minimum subset of order data necessary for the plug-in to do its work. You can choose the data to provide to automation plug-ins using the following methods:

  • Use the task data contained in an automation task to specify which data to provide to an automation plug-in.

  • Use query tasks to specify which data to provide to an automation plug-in associated with order notifications, events, and jeopardies. A query task is a manual task that is associated with a role that has permissions to use some or all order data to run an automation plug-in. See "Modeling Query Tasks for Order Automation Plug-ins" for more information.

Modeling Query Tasks for Order Automation Plug-ins

In automated tasks, the data that is available to automation plug-ins associated with the automated task is already defined in the Task Data tab. However, automation plug-ins used with order notifications, events, and jeopardies do not have immediate access to this task data and, as a result, must reference a manual task, called a query task, that defines the task data and behavior data available to the automation plug-in.

You can select any manual task as the query task. You can also create special tasks that are only used as query tasks. Their only function is to specify which data to provide to an automation plug-in.

Figure 7-13 shows the Permissions tab in the Design Studio order editor. The upper screen shows the permissions for the provisioning role, with the provisioning function task as the query task. For the billing role, the billing function task is assigned as the query task.

Figure 7-13 Roles Assigned to Query Tasks


To associate a query task with an automation plug-in, use the Default check box, as shown in Figure 7-13.

Figure 7-14 shows an event notification with an automation plug-in that uses the ProvisioningFunctionTask query task, which is defined as the default query task for the provisioning role. This role must be associated with the OSM user specified in the Run As field that runs the automation plug-in, as shown in the Properties Details tab. For more information about associating roles to OSM users, see the OSM Order Management Web Client User's Guide.

Figure 7-14 Order Event Notification Automation Query Task


About Automation Message Correlation

Automation plug-ins defined as external event receivers are designed to process JMS messages from external systems. Because JMS messages are asynchronous, external event receivers provide a method of correlating responses to previously delivered requests, enabling you to map OSM orders to external system orders.

To correlate responses, the plug-in sets a property on the outbound JMS message; the property name is the value set for the correlation property in the automationmap.xml file, and the property value is determined by your business logic. For example, business logic might dictate that you correlate on a reference number. The external system copies the properties that you defined for the correlation on the request and includes that data in the response.
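Conceptually, the correlation works like the following sketch (a plain-Java illustration, not OSM API code; the reference values are hypothetical): the sender records the outbound correlation value, and the receiver uses the value copied back by the external system to find the originating request.

```java
import java.util.HashMap;
import java.util.Map;

public class CorrelationSketch {
    // Pending requests keyed by the correlation value that the sender placed
    // on the outbound JMS message (the property name itself comes from
    // the automationmap.xml configuration).
    private final Map<String, String> pendingByCorrelationId = new HashMap<>();

    void send(String correlationId, String requestPayload) {
        pendingByCorrelationId.put(correlationId, requestPayload);
    }

    // The external system copies the correlation property onto its response;
    // the receiver uses it to recover the originating request context.
    String correlate(String correlationId) {
        return pendingByCorrelationId.remove(correlationId);
    }
}
```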

You can use the Message Property Selector field to filter messages placed on the queue and determine which automation to run. You define the Message Property Selector value as a Boolean expression: a String with a syntax similar to the WHERE clause of an SQL SELECT statement. For example:

"salary>64000 and dept in ('eng','qa')"

When the condition evaluates to true, the message is picked up and processed by the automation that defined that condition.

In a second example, consider that an external system defines five order types and OSM defines a different automation to process each order type. Each automation defines a different Message Property Selector, such as orderType=1, orderType=2, and so forth. When a message is sent to the queue by the external system, and the message includes the orderType upon which the condition is based, the automation framework evaluates each condition until one evaluates to true. If more than one automation defines the same condition, the first one that evaluates to true is picked up and processed.

Note:

When you define only one automation plug-in external event receiver for each automation task, you are not required to enter a selector in the Message Property Selector field. In this case, automation tasks can share the same JMS queue without a message property selector being set. You must set a message property selector when you do any of the following:

  • Define multiple automation plug-in external event receivers for the same automation task.

  • Use the Legacy build-and-deploy mode to build and deploy cartridges with automation plug-ins.

  • Use the Both (Allow server preference to decide) build-and-deploy mode to build and deploy cartridges with automation plug-ins and configure the OSM server dispatch mode for the Internal mode.

    For information on build-and-deploy modes, see "About Automation Message Correlation" in Modeling OSM Processes.

Example: Modeling a Basic Automator Plug-in for an Automated Task

This example demonstrates how to configure an Automator-type plug-in that receives data from an internal OSM JMS queue and updates order data using an XSLT style sheet. In the example, assume that the XSLT style sheet includes conditional logic to apply a level 1 priority to the order if the order is from a specific customer.

This example demonstrates how to:

  1. Create an automated task and add the relevant task data.

  2. Add an automation plug-in to the automated task.

  3. Configure the automation plug-in properties.

Note:

An automation plug-in exists within the context of a Design Studio cartridge project, order, process, and automated task. For purposes of demonstration, this example assumes the existence of multiple Design Studio entities. For example, it assumes the existence of a cartridge project called DSLCartridge, an order called DSLOrder, a process called DSLProcess, and an XSLT style sheet called check_customer.xslt that populates default values in the order data. It assumes that the Data Dictionary includes the two data nodes, customer_name and order_priority. It also assumes that the new automated task will be added to the DSLProcess entity. The naming conventions used in this example are for illustrative purposes only.

Step 1: Creating the automated task

  1. Select Studio, then New, then Order and Service Management, then Order Management, and then Automated Task.

    The Automated Task wizard appears.

  2. In the Automated Task wizard, enter or select the following values:

    • In the Project field, enter DSLCartridge.

    • In the Order list, select DSLOrder.

    • In the Name field, enter Check_Customer.

  3. Click Finish.

    The new automated task appears in the Automated Task editor.

  4. Click the Task Data tab.

    In this example, you will update the order_priority field with a default value of 1 if the order is from a specific customer.

    Note:

    Normally, the task data includes all of the data that the task requires to complete. To simplify the example, this task includes only the two pertinent fields: customer_name and order_priority. See "Modeling Data for Tasks" for more information.

  5. Right-click in the Task Data area.

    The context menu appears.

  6. Select Select from Data Schema.

    The Select Data Elements dialog box appears.

  7. Select the data nodes customer_name and order_priority.

  8. Click OK.

    The two data nodes appear in the Task Data area.

  9. Click the Permissions tab.

    On the Permissions tab, you can ensure that only the automation role has permissions for automated tasks. See the note in "Modeling Roles and Setting Permissions" for more information.

You are now ready to add a plug-in to the automated task.

Step 2: Adding the automation plug-in to the automated task

  1. In the Automated Task editor, click the Automation tab.

  2. Click Add.

    The Add Automation dialog box opens.

  3. In the Name field, enter Check_Customer.

  4. In the Automation Type field, select XSLT Automator.

  5. Click OK.

    The Check_Customer plug-in appears in the Automation list.

  6. In the Automation list, select the Check_Customer plug-in.

  7. Click Properties.

    The Automation Plug-in Properties tabs appear.

    You are now ready to define the automation plug-in properties.

Step 3: Defining automation plug-in properties

  1. In the Details tab of the Properties view in the Automated Task editor, accept the default value in the EJB Name field.

  2. Ensure that the model variable that appears by default in the Run As field points to a user name set up in the Oracle WebLogic console. When you deploy the cartridge, the user in the Run As field is added automatically to the OSM_automation group.

    For more information about users and groups, see the discussion of setting up security in OSM System Administrator's Guide. For more information about model variables, see the Design Studio Help.

  3. Click the XSLT tab.

    On the XSLT tab, you define where the XSLT style sheet is located and the status to set if the automation fails. In this example, you define a location on your local machine where the XSLT file is stored.

  4. Select Absolute Path.

  5. In the XSLT field, enter the location of the XSLT file.

    For this example, enter C:\oracle\user_projects\domains\osmdomain\xslt\DSLCartridge\1.0.0\check_customer.xslt.

  6. Do one of the following:

    • In the Exit Status on Exception field, select Failure.

      This field represents the exit status that the plug-in should use if it throws an exception. The options available in this field include any status values you assigned to the task. You use this option if you want to transition the task to a fallout task.

    • Click the Details tab and select the Fail Task on Automation Exception check box.

      This check box transitions the task to a fallout execution mode if an exception occurs when running the automation plug-in. Using this option allows you to troubleshoot task failures within the task that generated the failure.

  7. Select Update Order.

    This option ensures that the default values obtained from the XSLT style sheet will be saved to the order data.

  8. Click Save.

    You have completed the basic configuration for an Automator-type plug-in defined as an internal event receiver.

Note:

Successful automation requires a complete automation build file in the cartridge. If no automation build file exists, Quick Fix will generate one.