
Client Application Developer's Guide


Enabling SDO Data Source Updates

This chapter explains how to implement data services that support data source updates. It includes the following topics:

 


Overview

As it does for reading data, Liquid Data gives client applications an easy-to-use, unified interface for updating data. Liquid Data allows client applications to modify, create, and delete data from heterogeneous, distributed data sources as if it were a single entity. The complexity of propagating the changes to the diverse data sources is hidden from the client programmer by the Liquid Data integration layer.

From the data service implementor's point of view, building a library of update-capable data services is made significantly easier by the Liquid Data update framework. For relational sources, Liquid Data propagates changes to the data source automatically. For other sources, the data service implementor can use the Liquid Data update framework artifacts and APIs to quickly implement update-capable services.

Note: This chapter discusses data service design considerations and programming tasks for enabling updates. For instructions on invoking updates from client applications, see Submitting Data Object Changes, in Client Programming with Service Data Objects (SDO).

This chapter covers these areas:

 


How Data Source Updates Work

After operating on a data object (for example, by changing, adding, or deleting values), a client application initiates the update process by calling the submit() operation on the data service. The data graph, which contains the modified data object and a change summary (the list of old values), is passed to the update mediator service. If the data service bound to a changed object is a physical data service, the mediator checks for an update override class for the data service and calls it if present. For relational data sources, the mediator propagates the changes directly to the data source if no update override class is present.

Decomposition

For data objects bound to logical data services, the mediator must first identify where the changed information came from. It does so by analyzing (essentially inverting) the function designated as the decomposition function of the data service bound to the data object. The decomposition function enables the mediator to determine the lineage of data, that is, the physical source for each individual element in the data object. This lineage information is expressed in the form of a decomposition map.

Note: If a function for decomposition is not explicitly specified in a data service, the mediator uses the first read function in the data service as the decomposition function. In most cases any of the read functions will do.

If data comes from more than one source, the incoming SDO object is decomposed into its constituent parts. The physical level data objects corresponding to the changed values in the updated data object are instantiated.

For example, a customersDocument object that is made up of updated customer information from a Customer data service and three updated Order objects from an Orders data service would be decomposed into four objects, as illustrated in Figure 3-1.

Figure 3-1 Update Plan



 

By analyzing the change summary and decomposition map the mediator automatically derives an update plan. The update plan indicates what physical resources will be modified and how. (See Accessing the Decomposition Map on page 3-20 for a description of decomposition maps.) The update plan only has access to the modified objects in a submitted data graph. Unchanged objects do not appear in the plan, and data services for unchanged objects will not be accessed during the update.
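To make the derivation concrete, the following is a minimal, hypothetical sketch of the idea (plain Java maps standing in for SDO objects and the change summary; this is not the actual mediator implementation): only fields whose current value differs from the old value recorded in the change summary enter the plan, so unchanged data never causes a data service access.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical, simplified model of update-plan derivation: compare the
// change summary (old values) against the current values and keep only the
// fields that actually changed. Unchanged data never enters the plan, so
// the data services behind it are not touched.
public class UpdatePlanSketch {

    public static Map<String, String> derivePlan(Map<String, String> oldValues,
                                                 Map<String, String> currentValues) {
        Map<String, String> plan = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : currentValues.entrySet()) {
            String before = oldValues.get(e.getKey());
            if (before != null && !before.equals(e.getValue())) {
                plan.put(e.getKey(), e.getValue()); // changed field -> include
            }
        }
        return plan;
    }

    public static void main(String[] args) {
        Map<String, String> oldValues = new LinkedHashMap<>();
        oldValues.put("CUSTOMERNAME", "Acme");
        oldValues.put("CREDITRATING", "A");
        Map<String, String> currentValues = new LinkedHashMap<>(oldValues);
        currentValues.put("CUSTOMERNAME", "Acme Corp");
        // Only the changed CUSTOMERNAME field ends up in the plan.
        System.out.println(derivePlan(oldValues, currentValues));
    }
}
```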

Update Processing Sequence

Updating a data source involves the following steps:

  1. The client calls submit, passing the data graph with the changed object. The data graph has a change summary detailing the changes to the object.
  2. If present, the update override class associated with the top-level data service is instantiated and its performChange() function is run. (See Update Overrides on page 3-5.) The function can access and modify the update plan and decomposition map, or perform any other custom processing desired. In this case, when finished, the update override class returns control to the mediator. Alternatively, the class could have taken over the remaining processing steps.
  3. The mediator determines what data sources need to be changed (the changed object's lineage) and how to change it.
  4. Finally, the tree of "SDO objects to update" is applied to the respective data sources.
  5. If any are present, update override procedures in the physical data services are called. Note that an update override can exist at each layer of data service composition (and there can be many such levels in the most general case). Thus, a logical data service composed of several layers of services would check for update overrides for each component part.

Figure 3-2 illustrates the steps in the update processing sequence used to update a data source. In this example, the mediator performs the final step of the update, known as data change propagation.

Figure 3-2 Update Processing Sequence



 

Update Overrides

An update override is a Java class associated with a data service. An update override lets you hook custom code into the default update process and is useful for customizing the default behavior, validating data, propagating data changes to a non-relational source, or applying any other processing action desired.

An update plan is generated for any change submitted for a data object bound to a logical data service. However, automatic update propagation will occur only if the data source is relational. If it is not, an update override class must be implemented to propagate changes to the physical data sources. (However, you will get an exception if no update override is available for an updated physical data service that is non-relational.) The override class must implement the UpdateOverride interface and contain a performChange method, which indicates to the mediator whether or not it should resume the normal course of processing after the update to apply any further changes. (For more information about the update override class, see Developing an Update Override Class on page 3-10.)

For logical data services (such as Customers) the update override class is called before the update plan is generated. For physical services, it is called immediately before update propagation. If decomposition yields multiple instances of a changed data object (for example, multiple Orders for a customer), an update override for the Order data service would be called multiple times, once for each changed object.

 


Update Behavior

This section provides additional information regarding the behavior of data source updates. It covers these topics:

Update Order

As previously described (see How Data Source Updates Work), the mediator produces an update tree in the decomposition process. The tree contains a data service object for each changed data source instance. When propagating an update, the mediator walks the update plan and submits the indicated changes to the lower-level data service.

The order of objects in the tree and their hierarchical relationships (that is, container-containment relationships) determines the order in which the changes are applied.

By default, the following order is observed:

Understanding Property Maps

The mediator saves logical foreign-primary key relationship information in a property map. A property map is used to populate foreign key fields when the parent is new and does not yet have a value for its primary key field. A property map ensures that after the primary key for a parent is generated, the generated value is propagated to the foreign key field of the contained element. In other words, the property map identifies a correspondence between data elements at adjacent levels of the decomposition. Figure 3-3 illustrates the decomposition and property mapping for the decomposition of a Customers data service.

Figure 3-3 Decomposition Map



 
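The effect of a property map can be sketched with plain Java collections (a hypothetical simplification; the real mediator works on SDO objects and decomposition metadata): once the parent's generated primary key is known, the mapped foreign-key field of each contained child is filled in before the children are inserted.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of what a property map accomplishes, with plain Java
// maps standing in for SDO objects: once the parent's generated primary key
// is known, the mapped foreign-key field of each contained child is filled
// in before the child rows are inserted.
public class PropertyMapSketch {

    // property map: parent primary-key field -> child foreign-key field
    static final Map<String, String> CUSTOMER_TO_ORDER = Map.of("CUSTOMERID", "CUSTOMERID");

    public static void propagateKeys(Map<String, Object> parent,
                                     List<Map<String, Object>> children) {
        for (Map.Entry<String, String> mapping : CUSTOMER_TO_ORDER.entrySet()) {
            Object generated = parent.get(mapping.getKey());
            for (Map<String, Object> child : children) {
                child.put(mapping.getValue(), generated); // fill the foreign key
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Object> customer = new HashMap<>();
        customer.put("CUSTOMERID", 42); // e.g. an autonumber key just generated
        List<Map<String, Object>> orders = new ArrayList<>();
        orders.add(new HashMap<>());
        propagateKeys(customer, orders);
        System.out.println(orders); // each order now carries the customer's key
    }
}
```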

Multi-Level Data Services

Logical data services can be built upon other logical data services. When functions in the top-level data services are executed, any mid-level logical data services are "folded in" so that the function appears to be written directly against physical data services.

The outcome of the decomposition process differs depending on whether the mid-level data service has an update override class, as follows:

Note that any information not projected in the top-level data service cannot be resupplied to the intermediate data service.

For example, say that a top-level data service provides a list of orders. It gets the information by calling a function in another logical data service that returns all customer information with their orders. When the orders-only function is called, the view is flattened so that only order information is retrieved from the data source.

Transaction Management

Each SDO submit() operates as a transaction. The change log associated with each SDO is unchanged whether or not the submit() succeeds. Additional changes may have occurred after the submit() call, but those changes are kept separately—the changes are not reflected in the values or change summary of the originally submitted SDO.

If the submit() succeeds, the SDO should be re-queried to be sure it matches the current data because side effects of the update may have changed the result of the query. This has the side effect of clearing the change summary as well. If the submit() fails, reinvoking submit() on the data object would cause an attempt at performing the same updates again because the original data object and change summary are still available. If the SDO submit() is not inside a broader containing transaction, the transaction will be committed if the submit() succeeds and rolled back if it fails.

SDO Submit Inside a Containing Transaction

All submits perform immediate updates to data sources. If a data object submit occurs within the context of a broader containing transaction, commits or rollbacks of the containing transaction have no effect on the SDO or its change summary, but they will affect any data source updates that participated in the transaction.

 


When to Customize Updates

You will need to create custom update classes whenever you want to support updates for non-relational sources. For relational sources, you may also want to use custom update classes to apply custom logic to the update process, or if an aspect of the data service design prevents automated updates.

Some examples of when custom updates are required include:

For relational sources, you will need to create custom updates if the XQuery design prevents Liquid Data from being able to perform updates. Factors that might prevent Liquid Data from performing updates include:

 


Developing an Update Override Class

This section describes how to create an update override class. It includes the following topics:

UpdateOverride Interface

As described in the section Update Overrides, a data service needs to specify an update override class to customize the behavior for updates. This class must implement the UpdateOverride public interface shown in Listing 3-1.

To implement the interface, your class must implement the performChange() method defined in the UpdateOverride interface. The method is executed whenever a submit is issued for objects bound to the overridden data service. In cases where the submit is for an array of data service data objects, the array is decomposed into a list of singleton DataService objects. Some of these objects may have been added, deleted, or modified; therefore, the update override might be executed more than once (that is, once per changed object).

The performChange() method takes an object of type DataGraph, which is passed to it by the mediator. This object is the SDO on which your update override class operates. The DataGraph object contains the data object, the changes to the object, and other artifacts, such as metadata.

The performChange() method returns a Boolean value where:

It is a good practice to verify at runtime that the root data object for the data graph being passed in is an instance of the singleton data object bound to the data service for which the update override is written.

Note: The Javadoc for the UpdateOverride class is available at:

http://download.oracle.com/docs/cd/E13190_01/liquiddata/docs85/sdoUpdateJavadoc/index.html

Listing 3-1 UpdateOverride Interface

package com.bea.ld.dsmediator.update;

import commonj.sdo.DataGraph;

public interface UpdateOverride
{
    public boolean performChange(DataGraph sdo);
}

Development Steps

To create an update override class, perform the following steps:

  1. Add a Java class to the Liquid Data project. (If it is not in the project, it should be in the classpath.) You can put the class anywhere in the application folder; however, for simple projects, it might be most convenient to add the class to the same directory as your data services. For larger projects, you may choose to keep the update classes in their own folder.
  2. In the new Java file, implement the UpdateOverride interface. For example, the class signature may be:

     public class OrderDetailUpdate implements UpdateOverride

  3. Import the following packages into the class in which you are implementing the UpdateOverride interface:

     import com.bea.ld.dsmediator.update.UpdateOverride;
     import commonj.sdo.DataGraph;

  4. Add a performChange() method to the class. This public method takes a DataGraph object (containing the modified data object) and returns a boolean value. For example:

     public boolean performChange(DataGraph graph)

  5. In the body of the method, implement your processing logic. You can access the changed object, instantiate other data objects, modify and submit them, or access the mediator context's update plan and decomposition map. For general examples of the types of activities update overrides may implement, see Update Behavior.
  6. Associate the class with a data service by placing a javaUpdate element in the pragma statement of the data service. For example, the OrderDetailUpdate class in step 2 could be referenced from a data service named ApplOrder as follows:

     <javaUpdateExit className="RTLServices.OrderDetailUpdate"/>

While it will make sense in most cases to have a single update class apply for specific data services, you can have multiple data services use a single update override class.

Listing 3-2 shows a simple update override implementation.

Listing 3-2 Sample Update Override

package RTLServices;

import com.bea.ld.dsmediator.update.UpdateOverride;
import commonj.sdo.DataGraph;
import java.math.BigDecimal;
import java.math.BigInteger;
import retailer.ORDERDETAILDocument;
import retailerType.LINEITEMTYPE;
import retailerType.ORDERDETAILTYPE;

public class OrderDetailUpdate implements UpdateOverride
{
    public boolean performChange(DataGraph graph) {
        ORDERDETAILDocument orderDocument =
            (ORDERDETAILDocument) graph.getRootObject();
        ORDERDETAILTYPE order =
            orderDocument.getORDERDETAIL().getORDERDETAILArray(0);
        BigDecimal total = new BigDecimal(0);
        LINEITEMTYPE[] items = order.getLINEITEMArray();
        for (int y = 0; y < items.length; y++) {
            BigDecimal quantity =
                new BigDecimal(Integer.toString(items[y].getQuantity()));
            total = total.add(quantity.multiply(items[y].getPrice()));
        }
        order.setSubTotal(total);
        order.setSalesTax(
            total.multiply(new BigDecimal(".06")).setScale(2, BigDecimal.ROUND_UP));
        order.setHandlingCharge(new BigDecimal(15));
        order.setTotalOrderAmount(
            order.getSubTotal().add(
                order.getSalesTax().add(order.getHandlingCharge())));
        System.out.println(">>> OrderDetail.ds Exit completed");
        return true;
    }
}

In the sample class shown in Listing 3-2, an OrderDetailUpdate class implements the UpdateOverride interface and, as required by the interface, defines a performChange() method. The class illustrates some basic concepts regarding update override classes:

Testing Submit Results

In the Liquid Data development environment view, the test pane lets you try submitting a change to a data service. Whenever you implement a submit-capable data service, you should similarly test your update results to ensure that changes occur as expected.

You can test submits using the Test View in BEA WebLogic Workshop. For information on testing submits, refer to the Data Services Developer's Guide.

While Test View gives you a quick way to test simple update cases in the data services you create, for more substantial testing and troubleshooting you can use an update override class to inspect the decomposition mapping and update plan for the update.

The override class is also the mechanism you can use to extend and override the Mediator's default update processing. You can use it to implement updates for data services that would otherwise not support updates, such as non-relational sources.

See Developing an Update Override Class on page 3-10 for information about override classes.

Understanding Update Override Context

An update override class can programmatically access several update framework artifacts:

The content of the artifacts is determined by the context from which they are accessed:

Figure 3-4 illustrates the context visibility within an update override.

Figure 3-4 Context Visibility in Update Override



 

The performChange() method class can perform gets and sets on the changed SDO, which is passed to the method. Any changes to the SDO values are added to the change summary, just as if the change had occurred in the client application.

Within the performChange() method, you can gain access to the decomposition map and the update plan. You can modify the update plan for a particular submit operation, giving you significant control over how updates are applied to a data source. The following are the types of changes that can be effected by modifying this method:

Although you can access the default decomposition map, you should not modify it. However, you can use access to the decomposition map to understand how decomposition will work, and this could be used to drive your own custom decomposition.

In addition to accessing the decomposition map, you can access the update plan (that is, the tree of changed objects) in the override class. You can modify values in the tree, remove nodes, or rearrange them (to change the order in which they are applied); the update plan tracks such changes automatically in its change list. If you modify the update plan, you must execute the plan within the override if you want the changes to take effect.

Physical Level Update Override Considerations

Considerations for implementing update override classes for physical level data services include the following:

Additional considerations concerning update overrides for relational data services include:

For physical non-relational data services, the following additional considerations apply:

 


Update Programming Patterns

This section contains code samples that illustrate many of the concepts previously discussed.

Override Decomposition and Update

In this pattern, the override function takes over the entire decomposition and update processing for the submitted data object. Typical activities include:

In this case, the function would return false to indicate to the mediator not to attempt to proceed with automated decomposition.

Augment Original Data Object Content

The override function inspects or modifies the object values to be changed and returns control to the mediator. If validating values, it can raise a DataServiceException to signal errors, and roll back the transaction. The function returns true to have the mediator proceed with update propagation using the objects as changed.
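The validation half of this pattern can be sketched generically. The helper below is a hypothetical stand-in for logic that would live inside performChange(); IllegalArgumentException stands in for the DataServiceException a real override would raise to roll back the transaction.

```java
import java.math.BigDecimal;
import java.util.List;

// Generic stand-in for the validate-then-propagate pattern: inspect the
// values about to be written and reject bad data before propagation. A real
// override would do this inside performChange() and raise a
// DataServiceException; IllegalArgumentException stands in for it here.
public class ValidateBeforePropagate {

    public static BigDecimal validatedTotal(List<BigDecimal> lineAmounts) {
        BigDecimal total = BigDecimal.ZERO;
        for (BigDecimal amount : lineAmounts) {
            if (amount.signum() < 0) {
                // Signal an error; the transaction would be rolled back.
                throw new IllegalArgumentException("negative line amount: " + amount);
            }
            total = total.add(amount);
        }
        return total; // validation passed; the override would return true
    }
}
```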

Accessing the Data Service Mediator Context

To get the change plan and decomposition map for an update, you first need to get the data mediator service context.

The context enables you to view the decomposition map, produce an update plan, execute the update plan, and access the container data service instance for the data service object currently being processed.

The following code snippet shows how to get the context:

DataServiceMediatorContext context =
    DataServiceMediatorContext.currentContext();

Accessing the Decomposition Map

Once you have the context (see Accessing the Data Service Mediator Context), you can access the decomposition map with the following method:

DecompositionMapDocument.DecompositionMap dm =
    context.getCurrentDecompositionMap();

If you want the string form, use the toString() method. The returned string contains the XML of the decomposition map, such as the following:

<xml-fragment xmlns:upd="update.dsmediator.ld.bea.com">
  <Binding>
    <DSName>ld:DataServices/CUSTOMERS.ds</DSName>
    <VarName>f1603</VarName>
  </Binding>
  <AttributeLineage>
    <ViewProperty>CUSTOMERID</ViewProperty>
    <SourceProperty>CUSTOMERID</SourceProperty>
    <VarName>f1603</VarName>
  </AttributeLineage>
  <AttributeLineage>
    <ViewProperty>CUSTOMERNAME</ViewProperty>
    <SourceProperty>CUSTOMERNAME</SourceProperty>
    <VarName>f1603</VarName>
  </AttributeLineage>
  <upd:DecompositionMap>
    <Binding>
      <DSName>ld:DataServices/getCustomerCreditRatingResponse.ds</DSName>
      <VarName>getCustomerCreditRating</VarName>
    </Binding>
    <AttributeLineage>
      <ViewProperty>CREDITSCORE</ViewProperty>
      <SourceProperty>getCustomerCreditRatingResult/TotalScore</SourceProperty>
      <VarName>getCustomerCreditRating</VarName>
    </AttributeLineage>
    <AttributeLineage>
      <ViewProperty>CREDITRATING</ViewProperty>
      <SourceProperty>getCustomerCreditRatingResult/OverAllCreditRating</SourceProperty>
      <VarName>getCustomerCreditRating</VarName>
    </AttributeLineage>
  </upd:DecompositionMap>
  <upd:DecompositionMap>
    <Binding>
      <DSName>ld:DataServices/PO_CUSTOMERS.ds</DSName>
      <VarName>f1738</VarName>
    </Binding>
    <Predicate>
      <LeftVarName>f1738</LeftVarName>
      <LeftProperty>CUSTOMERID</LeftProperty>
      <RightVarName>CUSTOMERID</RightVarName>
      <RightProperty>CUSTOMERID</RightProperty>
    </Predicate>
    <AttributeLineage>
      <ViewProperty>ORDERID</ViewProperty>
      <SourceProperty>ORDERID</SourceProperty>
      <VarName>f1738</VarName>
    </AttributeLineage>
    <AttributeLineage>
      <ViewProperty>CUSTOMERID</ViewProperty>
      <SourceProperty>CUSTOMERID</SourceProperty>
      <VarName>f1738</VarName>
    </AttributeLineage>
    <upd:DecompositionMap>
      <Binding>
        <DSName>ld:DataServices/PO_ITEMS.ds</DSName>
        <VarName>f1740</VarName>
      </Binding>
      <Predicate>
        <LeftVarName>f1740</LeftVarName>
        <LeftProperty>ORDERID</LeftProperty>
        <RightVarName>ORDERID</RightVarName>
        <RightProperty>ORDERID</RightProperty>
      </Predicate>
      <AttributeLineage>
        <ViewProperty>ORDERID</ViewProperty>
        <SourceProperty>ORDERID</SourceProperty>
        <VarName>f1740</VarName>
      </AttributeLineage>
      <AttributeLineage>
        <ViewProperty>KEY</ViewProperty>
        <SourceProperty>KEY</SourceProperty>
        <VarName>f1740</VarName>
      </AttributeLineage>
      <AttributeLineage>
        <ViewProperty>ITEMNUMBER</ViewProperty>
        <SourceProperty>ITEMNUMBER</SourceProperty>
        <VarName>f1740</VarName>
      </AttributeLineage>
      <AttributeLineage>
        <ViewProperty>QUANTITY</ViewProperty>
        <SourceProperty>QUANTITY</SourceProperty>
        <VarName>f1740</VarName>
      </AttributeLineage>
    </upd:DecompositionMap>
  </upd:DecompositionMap>
  <ViewName>ld:DataServices/Customer.ds</ViewName>
</xml-fragment>

Customizing an Update Plan

After possibly validating or modifying the values in the submitted data object, the function retrieves the update plan by passing in the current data object to the following function:

	DataServiceMediatorContext.getCurrentUpdatePlan()

The update plan can be augmented in several ways, including:

Once the plan has been modified, it can be executed with the following method:

	DataServiceMediatorContext.executeUpdatePlan()

After executing the update plan, the function should return false so that the mediator does not attempt to apply the update plan again.

The update plan lets you modify the values to be updated to the source. It also lets you modify the update order.

You can walk the update plan to view its contents. To walk the plan, use a method similar to navigateUpdatePlan() shown in Listing 3-3, which is called from a performChange() method (see UpdateOverride Interface on page 3-10) and recursively walks the plan.

Listing 3-3 Walking an Update Plan

public boolean performChange(DataGraph datagraph) {
    UpdatePlan up = DataServiceMediatorContext.currentContext().
        getCurrentUpdatePlan( datagraph );
    navigateUpdatePlan( up.getDataServiceList() );
    return true;
}

private void navigateUpdatePlan( Collection dsCollection ) {
    DataServiceToUpdate ds2u = null;
    for (Iterator it = dsCollection.iterator(); it.hasNext();) {
        ds2u = (DataServiceToUpdate) it.next();

        // print the content of the SDO
        System.out.println( ds2u.getDataGraph() );

        // walk through contained SDO objects
        navigateUpdatePlan( ds2u.getContainedDSToUpdateList() );
    }
}
A sample update plan report would look like the following:

UpdatePlan
    SDOToUpdate
        DSName: ... :PO_CUSTOMERS
        DataGraph: ns3:PO_CUSTOMERS to be added
            CUSTOMERID = 01
            ORDERID = unset
        PropertyMap = null

Now consider an example in which a line item is deleted along with the order that contains it. Given the original data, the following listing illustrates an update plan in which item 1001 will be deleted from Order 100, and then the Order is deleted.

UpdatePlan
    SDOToUpdate
        DSName: ... :PO_CUSTOMERS
        DataGraph: ns3:PO_CUSTOMERS to be deleted
            CUSTOMERID = 01
            ORDERID = 100
        PropertyMap = null

        SDOToUpdate
            DSName: ... :PO_ITEMS
            DataGraph: ns4:PO_ITEMS to be deleted
                ORDERID = 100
                ITEMNUMBER = 1001
            PropertyMap = null

In this case, the execution of the update plan is as follows: before deleting the PO_CUSTOMERS, its contained SDOToUpdates are visited and processed. So the PO_ITEMS is deleted first and then the PO_CUSTOMERS.
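This child-before-container ordering can be sketched as a post-order traversal, with a hypothetical Node type standing in for DataServiceToUpdate:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the delete ordering described above, with a hypothetical Node
// type standing in for DataServiceToUpdate: contained objects are visited
// and deleted before their container, so PO_ITEMS rows are removed before
// the PO_CUSTOMERS row that contains them.
public class DeleteOrderSketch {

    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    public static void deletePostOrder(Node node, List<String> log) {
        for (Node child : node.children) {
            deletePostOrder(child, log); // contained objects first
        }
        log.add("delete " + node.name);  // then the container itself
    }
}
```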

If the contents of the update plan are changed, the new plan can be executed and the update override should then return false, signaling that no further automated processing should occur.

The plan can then be propagated to the data source, as described in Executing an Update Plan.

Executing an Update Plan

After modifying an update plan, you can execute it. Executing the update plan causes the data service mediator to propagate the changes in the update plan to the indicated data sources.

Given a modified update plan named up, the following statement executes it:

	context.executeUpdatePlan(up);

Retrieving the Container of the Current Data Object

On a Data Service that is being processed for an update plan, you can get the container of the SDO being processed. The container must exist in the original changed object tree, as decomposed. If no container exists, null is returned. Consider the following example:

String containerDS = context.getContainerDataServiceName();
DataObject container = context.getContainerSDO();

In this example, if you ask for the container in the update override class for the Orders data service, the Customer data service object for the Order instance being processed is returned. Because the update plan shows only what has changed, the Customer instance may not be present in the plan: if it is, it is returned from the plan; otherwise, it is decomposed from CustOrders (the source SDO) and returned.

Retrieving and Updating Data Through Other Data Services

Other data services may be accessed and updated from an update override. The data service mediator API can be used to access data objects, modify and submit them. Alternatively, the modified data objects can be added to the update plan and updated when the update plan is executed. If the data object is added to the update plan, it will be updated within the current context and its container will be accessible inside its Data Service update override.

If the data service mediator API is used to perform the update, a new data service context is established for that submit, just as if it were being executed from the client. This submit() acts just like a client submit: changes are not reflected in the data object. Instead, the object must be re-fetched to see the changes made by the submit.

Setting the Log Level

Liquid Data utilizes the underlying WebLogic Server for logging. WebLogic logging is based on the JDK 1.4 logging APIs, which are available in the java.util.logging package.

In an update override, you can contribute to the log by acquiring a DataServiceMediatorContext instance, and calling the getLogger() method on the context, as follows:

DataServiceMediatorContext context =
    DataServiceMediatorContext.currentContext();
Logger logger = context.getLogger();

You can then contribute to the log by issuing the appropriate logger call with a specified level.

The log level indicates the severity of the event. When WebLogic Server message catalogs and the NonCatalogLogger generate messages, they convert the message severity to a weblogic.logging.WLLevel object. A WLLevel object can specify any of the following values, from lowest to highest impact:

Development-time logging is written to the following location:

	<bea_home>\user_projects\domains\<domain_name>

Given the specified logging level, the mediator logs the following information:

The following (Listing 3-4) is a sample of a log entry:

Listing 3-4 Sample Log Entry

<Nov 4, 2004 11:50:10 AM PST> <Notice> <LiquidData> <000000> <Demo - begin client submitted DS: ld:DataServices/Customer.ds>
<Nov 4, 2004 11:50:10 AM PST> <Notice> <LiquidData> <000000> <Demo - ld:DataServices/Customer.ds number of execution: 1 total execution time:171>
<Nov 4, 2004 11:50:10 AM PST> <Info> <LiquidData> <000000> <Demo - ld:DataServices/CUSTOMERS.ds number of execution: 1 total execution time:0>
<Nov 4, 2004 11:50:10 AM PST> <Info> <LiquidData> <000000> <Demo - EXECUTING SQL: update WEBLOGIC.CUSTOMERS set CUSTOMERNAME=? where CUSTOMERID=? AND CUSTOMERNAME=? number of execution: 1 total execution time:0>
<Nov 4, 2004 11:50:10 AM PST> <Info> <LiquidData> <000000> <Demo - ld:DataServices/PO_ITEMS.ds number of execution: 3 total execution time:121>
<Nov 4, 2004 11:50:10 AM PST> <Info> <LiquidData> <000000> <Demo - EXECUTING SQL: update WEBLOGIC.PO_ITEMS set ORDERID=? , QUANTITY=? where ITEMNUMBER=? AND ORDERID=? AND QUANTITY=? AND KEY=? number of execution: 3 total execution time:91>
<Nov 4, 2004 11:50:10 AM PST> <Notice> <LiquidData> <000000> <Demo - end client submitted ds: ld:DataServices/Customer.ds Overall execution time: 381>

Configuring Optimistic Locking

Concurrency control helps to prevent data conflicts in systems in which multiple clients access the same data source. What if two clients read the same information (a customer order total, for example) and each attempts to change its value by adding an order? Because the second update does not take the first into account, it can result in invalid data.

Liquid Data uses optimistic locking as its concurrency control policy. With optimistic locking, a database lock is not held for a data record that has been read. Instead, locking occurs only when an update is attempted. At that time, the value of the data when it was read is compared with its current value. If the values differ, the update is not applied and the client is notified.

For each table, you can specify which fields are compared at update time. Note that the primary key columns must always match, and BLOB and floating-point columns might not be compared. The default policy is Projected. The following table describes each option.

Optimistic Locking Update Policy: Effect

Projected

Projected is the default setting. It uses a 1-to-1 mapping of elements in the SDO data graph to fields in the data source to verify that the update can be applied. This is the most complete means of verifying that an update can be completed; however, if many elements are projected, updates take longer because more fields must be verified.

Unprojected

Only fields that have changed in the SDO data graph are used to verify that the data source has not changed.

Selected Fields

Only the fields you select are used to verify that the data source has not changed.

Note: In some instances, Liquid Data may not be able to read data from a database table because another application has locked the table, causing queries issued by Liquid Data to be queued until the application releases the lock. To prevent this, you can set the transaction isolation to read uncommitted in the JDBC connection pool on your WebLogic Server. See Setting the Transaction Isolation Level in the Administration Guide for details on how to set the transaction isolation level.

Handling Foreign and Primary Keys

This section describes how Liquid Data manages relational source updates that affect primary and foreign keys. For inserts into tables with autonumber primary keys, the new primary key value is generated by the RDBMS and returned to the client. Liquid Data also propagates the effects of changes to a primary or foreign key, as described in the following sections.

Returning Computed Primary Keys

If top-level data objects with primary keys that are automatically generated by the RDBMS have been added, the primary key values of the inserted tuples are returned as an array of Java properties (XPath name/value pairs) after a successful update submit. This applies only to the primary keys of top-level data objects; computed primary keys of nested data objects are not returned.

Returning the top-level primary keys of inserted tuples allows the developer to refetch tuples based on their new primary keys if desired.

For example, given an array of Customer objects with a primary key field CustID into which two customers are inserted, the submit would return an array of two properties with the name being CustID, relative to the Customer type, and the value being the new primary key value for each inserted Customer.
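The returned name/value pairs for this example might be consumed as follows. This sketch is self-contained: the KeyProperty class and the submit result are hypothetical stand-ins for the actual mediator API types, used here only to illustrate the shape of the returned data:

```java
/** Stand-in for the name/value pair returned for each inserted tuple. */
class KeyProperty {
    final String name;   // XPath of the key field, relative to the data type
    final Object value;  // generated primary key value

    KeyProperty(String name, Object value) {
        this.name = name;
        this.value = value;
    }
}

public class SubmitKeysDemo {
    /** Pretend two Customer objects were inserted and submit returned their keys. */
    static KeyProperty[] submitResult() {
        return new KeyProperty[] {
            new KeyProperty("CustID", 1001),
            new KeyProperty("CustID", 1002),
        };
    }

    public static void main(String[] args) {
        for (KeyProperty p : submitResult()) {
            // Use each new key to refetch the inserted tuple if desired.
            System.out.println(p.name + " = " + p.value);
        }
    }
}
```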

Managing Key Dependencies

Liquid Data manages primary key dependencies when updates are performed. It identifies primary and foreign keys using predicate statements in the decomposition function. For example, if a query joins data records using a value comparison, such as where customer/id = order/id, the mediator uses the inferred key/foreign-key relationship to coordinate the updates it applies to the data source.

If a predicate dependency exists between two SDOToUpdate instances (data objects in the update plan), and both the container and the contained SDOToUpdate instances are being inserted or modified, a key pair list is identified. The key pair list indicates which values from the container SDO should be copied to the contained SDO after the container SDO has been submitted for update. The list is based on the set of fields in the container and contained SDOs that were required to be equal when the current SDO was constructed, and it identifies only the primary key fields among those predicate fields. The resulting property map identifies only mappings from container primary key fields to contained fields. If the map does not identify the full primary key of the container, no properties are mapped.

A key pair list contains one or more items identifying the node names that are mapped between the container and contained objects. Mapping means that after the container record has been inserted and its autonumber primary key has been generated by the data source, the value of that key property is propagated from the parent to the child.
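The parent-to-child key propagation can be sketched as follows. This is an illustrative, self-contained simulation; the classes and the key-generation scheme are hypothetical and stand in for the mediator's actual update-plan processing:

```java
import java.util.ArrayList;
import java.util.List;

/** Simulates inserting a parent with an autonumber key, then its children. */
public class KeyPropagationDemo {
    private static int nextOrderId = 500;            // stand-in autonumber sequence

    static class Order { int orderId; }              // container SDO
    static class Item {                              // contained SDO with a foreign key
        int orderId;
        String sku;
        Item(String sku) { this.sku = sku; }
    }

    /** Insert the parent first so the generated key exists before the children. */
    static int insertOrder(Order o) {
        o.orderId = nextOrderId++;                   // RDBMS generates the key
        return o.orderId;
    }

    public static void main(String[] args) {
        Order order = new Order();
        List<Item> items = new ArrayList<>();
        items.add(new Item("A-1"));
        items.add(new Item("B-2"));

        int generated = insertOrder(order);
        // Key pair list: Order.orderId -> Item.orderId. Propagate the generated
        // value before the child rows are inserted so the foreign keys match.
        for (Item it : items) {
            it.orderId = generated;
        }
        System.out.println(items.get(0).orderId + " " + items.get(1).orderId);
    }
}
```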

Foreign Keys

When computable from the SDO submit decomposition, foreign key values are set to match the parent key values. Foreign keys are computed when an update plan is produced.

 
