Oracle® Database Lite Developer's Guide
10g (10.3.0)

Part Number B28923-01

3 Synchronization

The Oracle Lite database contains a subset of data stored in the Oracle database. This subset is stored in snapshots in the Oracle Lite database. Unlike a base table, a snapshot keeps track of changes made to it in a change log. Users can make changes in the Oracle Lite database and can synchronize them with the Oracle database.

The following sections describe how synchronization functions between Oracle Database Lite and an Oracle database using the Mobile Server. This chapter discusses how you can programmatically initiate the synchronization from either the client or the server side.

3.1 How Does Synchronization Work?

The following sections describe how synchronization works for Oracle Database Lite:

3.1.1 Synchronization Overview

The full description of how synchronization works is in the "Managing Synchronization" chapter in the Oracle Database Lite Administration and Deployment Guide. Each component and its function is described in the administration guide. The following graphic depicts these components for your reference:

Figure 3-1 Synchronization Architecture


  1. A synchronization is initiated on the Mobile client either by the user or from automatic synchronization. Note that the Mobile client may be a Windows platform client or a PDA.

  2. Mobile client software gathers all of the client changes into a transaction and the Sync Client uploads the transaction to the Sync Server on the Mobile Server.

  3. Sync Server places the transaction into the In-Queue.

    Note:

    When packaging your application, you can specify whether the transaction is to be applied at the same time as the synchronization. If you set this option, then the transaction is immediately applied to the application tables. However, note that this may not be scalable; you should only do this if applying the transaction immediately is important and you have enough resources to handle the load.
  4. Sync Server gathers all transactions destined for the Mobile client from the Out-Queue.

  5. Sync Client downloads all changes for the client Oracle Lite database.

  6. Mobile client applies all changes to the client Oracle Lite database. If this is the first synchronization, the Oracle Lite database is created.

    Note:

    For information on what Oracle Lite database (ODB) files are installed on the client, see Section 2.2, "Synchronizing or Executing Applications on the Mobile Client" in the Oracle Database Lite Administration and Deployment Guide.
  7. The MGP gathers all transactions uploaded by the Mobile clients out of the In-Queue.

  8. The MGP executes the apply phase by applying all transactions for the Mobile clients to their respective application tables in the back-end Oracle database. The MGP commits after processing each publication.

  9. MGP executes the compose phase by gathering the client data into outgoing transactions for Mobile clients.

  10. MGP places the composed data for the Mobile clients into the Out-Queue, where it waits until the next client synchronization, when the Sync Server downloads the updates to the client.
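The queue flow in these steps can be sketched as a minimal in-memory model. All names below are hypothetical stand-ins for illustration, not the Mobile Server implementation; for simplicity, the compose step here echoes the applied transactions straight back out, whereas the real MGP composes server-side changes for each client.

```c
#include <string.h>

/* Illustrative stand-ins for the sync queues; not the real Mobile Server code. */
#define MAX_TXNS 8

typedef struct { char data[32]; } Txn;
typedef struct { Txn txns[MAX_TXNS]; int count; } Queue;

/* Steps 2-3: the Sync Client uploads a transaction into the In-Queue. */
void upload(Queue *in_q, const char *changes) {
    strcpy(in_q->txns[in_q->count++].data, changes);
}

/* Steps 7-10: the MGP drains the In-Queue (apply phase), then composes
 * outgoing transactions into the Out-Queue (compose phase). */
void mgp_cycle(Queue *in_q, Queue *out_q, Queue *app_tables) {
    for (int i = 0; i < in_q->count; i++) {
        app_tables->txns[app_tables->count++] = in_q->txns[i]; /* apply   */
        out_q->txns[out_q->count++] = in_q->txns[i];           /* compose */
    }
    in_q->count = 0;
}

/* Steps 4-6: at the next sync, the client downloads everything waiting
 * in the Out-Queue. */
int download(Queue *out_q) {
    int n = out_q->count;
    out_q->count = 0;
    return n;
}
```

The point of the sketch is the decoupling: the client never writes to application tables directly, and the MGP never talks to a client directly; the two queues are the only interface between them.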

As you read about the tasks associated with synchronization, refer back to this graphic to see which part of the synchronization process is being discussed.

3.1.2 Automatic or Manual Synchronization

In the past, only manual synchronization was available. That is, a client requests a synchronization either through an application program executing an API or by a user pushing the Sync button.

Currently, you can configure synchronization to occur automatically under specific circumstances and conditions. When these conditions are met, Oracle Database Lite performs the synchronization for you without locking your database, so you can continue to work while the synchronization happens in the background. This way, synchronization can happen seamlessly without the client's knowledge.

Note:

Within a publication, you can have one or more publication items. You can define both manual and automatic publication items within the same publication.

For example, you may choose to enable automatic synchronization for the following scenarios:

  • If you have a user who changes data on their handheld device, but does not sync as often as you would prefer.

  • If you have multiple users who all sync at the same time and overload your system.

These are just a few examples of how automatic synchronization can make managing your data easier and more timely, so that data is uploaded at the moment you need it to be.

Note:

When a manual synchronization is requested by the client, ALL publication items are synchronized at that time—including those defined as manual and automatic synchronization. However, if an automatic synchronization is currently executing, the manual synchronization request is delayed until the automatic synchronization completes. You can stop the automatic synchronization to allow the manual synchronization to occur. After the manual synchronization is finished, re-start the automatic synchronization.

The differences between the two types of synchronization are as follows:

Table 3-1 Difference Between Automatic and Manual Synchronization


Initiation

  • Manual: After the snapshot is set up, you initiate synchronization either by running mSync or by invoking one of the synchronization APIs from an application.

  • Automatic: Once automatic synchronization is configured, it happens automatically; there is no synchronization API. Configuration for automatic synchronization can be defined when you create the publication item, the publication, or the platform.

Controlling synchronization

  • Manual: Synchronization occurs exactly when the user or application requests it.

  • Automatic: Synchronization occurs without the user being aware of it occurring. You may have to manage synchronization through the Sync Control API if you have publications that contain both manual and automatic synchronization publication items.


Automatic synchronization is based on a different model than manual synchronization. Automatic synchronization operates on a transactional basis. Thus, when the conditions are correct, any new data transactions are uploaded to the server, in the order of the specified priority for the data. In the manual synchronization model, you can synchronize all data or use the selective sync option, where you can detail only certain portions of the data to be synchronized. The selective sync option is not supported in automatic synchronization, since we are no longer concerned with synchronization of only a subset of data.

For more information on Automatic Synchronization, see Section 3.2, "Automatic Synchronization Overview".

3.1.3 How Updates Are Propagated to the Back-End Database

The synchronization process applies client operations to the tables in the back-end database, as follows:

  1. The operations for each publication item are processed according to table weight. The publication creator assigns the table weight to publication items within a specific publication. This value can be an integer between 1 and 1023. For example, a publication can have more than one publication item of weight "2" which would have INSERT operations performed after those for any publication item of a lower weight within the same publication. You define the order weight for tables when you add a publication item to the publication. See Section 3.4.1.7.2, "Using Table Weight" for more information.

  2. Within each publication item being processed, the SQL operations are processed as follows:

    1. Client INSERT operations are executed first, from lowest to highest table weight order.

    2. Client DELETE operations are executed next, from highest to lowest table weight order.

    3. Client UPDATE operations are executed last, from highest to lowest table weight order.

For details and an example of exactly how the weights and SQL operations are processed, see Section 3.4.1.7.2, "Using Table Weight".
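As a sketch of these ordering rules, the comparator below sorts a mixed batch of client operations the way the list above describes. The type and field names are illustrative stand-ins, not the actual Sync Server code.

```c
#include <stdlib.h>

/* Illustrative sketch of the ordering rules; names are hypothetical. */
typedef enum { OP_INSERT, OP_DELETE, OP_UPDATE } OpType;
typedef struct { OpType type; int weight; } Op;

/* Sort key: all INSERTs first (lowest to highest weight), then DELETEs
 * (highest to lowest weight), then UPDATEs (highest to lowest weight). */
static int op_cmp(const void *a, const void *b) {
    const Op *x = a, *y = b;
    if (x->type != y->type)
        return (int)x->type - (int)y->type;   /* INSERT < DELETE < UPDATE */
    if (x->type == OP_INSERT)
        return x->weight - y->weight;         /* low to high */
    return y->weight - x->weight;             /* high to low */
}

void order_ops(Op *ops, int n) { qsort(ops, n, sizeof(Op), op_cmp); }
```

For a parent/child pair where the parent has weight 1 and the child weight 2, this ordering inserts parents before children and deletes children before parents, which is exactly what referential integrity requires.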

Note:

This order of executing operations can cause constraint violations. See Section 3.11, "Synchronizing With Database Constraints" for more information.

In addition, the order in which SQL statements are executed against the client Oracle Lite database is not the same as how synchronization propagates these modifications. Instead, synchronization captures only the end result of all SQL modifications. For example, suppose the client performs the following operations:

  1. Insert an employee record 4 with name of Joe Judson.

  2. Update employee record 4 with address.

  3. Update employee record 4 with salary.

  4. Update employee record 4 with office number.

  5. Update employee record 4 with work email address.

When synchronization occurs, all modifications are captured and only a single insert is performed on the back-end database. The insert contains the primary key, name, address, salary, office number and email address. Even though the data was created with multiple updates, the Sync Server only takes the final result and makes a single insert.
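A hypothetical sketch of how a change log can coalesce an insert and its follow-up updates into a single record image, as described above. The structure and field names are invented for illustration; they are not the actual change-log format.

```c
#include <string.h>

/* Hypothetical change-log entry: the log keeps only the final image of
 * each record, so an INSERT followed by UPDATEs uploads as one INSERT. */
typedef struct {
    int  pk;
    char op;          /* 'I' insert, 'U' update, 'D' delete */
    char name[32];
    char address[32];
    int  salary;
} LogEntry;

/* Merge a later modification into the existing entry for the same
 * primary key. NULL / 0 arguments mean "field not touched" in this
 * sketch. The op stays 'I' if the record was inserted in this interval,
 * so the server sees a single insert with the final values. */
void log_update(LogEntry *e, const char *address, int salary) {
    if (address) strcpy(e->address, address);
    if (salary)  e->salary = salary;
}
```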

3.2 Automatic Synchronization Overview

Automatic synchronization occurs in the background, so that the user does not have to perform a synchronization; thus, the client appears continually connected to the back-end database without user interaction. All modifications to each record are saved in a log within the client Oracle Lite database. When you request synchronization manually, Oracle Database Lite locks the database while processing your request. With automatic synchronization, however, synchronization can occur while you are performing other tasks in the Oracle Lite database.

When synchronization occurs, all of the modified records stored in the log are uploaded to the server. In addition, any modified records from the server are downloaded into the client Oracle Lite database. This occurs in the same manner as manual synchronization. The only difference is when the synchronization is executed and how the modified records are stored.

The following are details about automatic synchronization:

Table 3-2 Automatic Synchronization

Steps for Automatic Synchronization See the Following for Details

The developer enables the publication item to use automatic synchronization.

Section 3.2.1, "Enable Automatic Synchronization at the Publication Item Level"

The client can disable and enable automatic synchronization through the client Workspace or with the Sync Control API.

Section 3.2.2, "Enable/Disable Automatic Synchronization"


You can configure under what rules the automatic synchronization occurs.

Section 3.2.3, "Define the Rules Under Which the Automatic Synchronization Starts"


The server can notify the client of data waiting for download.

Section 3.19.3, "Selecting How/When to Notify Clients of Composed Data"


The client application can request status of the outcome of an automatic synchronization.

Section 3.2.5, "Notify Application on Completion of Automatic Synchronization Cycle"



The following sections detail how you can configure for automatic synchronization:

3.2.1 Enable Automatic Synchronization at the Publication Item Level

Automatic synchronization is enabled at the publication item level. Only the publication items within a snapshot that are enabled for it use automatic synchronization; all other publication items use manual synchronization. See Section 5.4, "Create a Publication Item" for details of how to enable synchronization in a publication item using MDW, or Section 3.4.1.3, "Create Publication Items" using the API.

Within a publication, you can have one or more publication items. You can define both manual and automatic synchronization publication items within the same publication. However, if you have automatic synchronization enabled, then an automatic sync may be occurring when the client asks for a manual synchronization. In this case, the manual synchronization stops the automatic synchronization so that all snapshots are synchronized. Automatic synchronization is restarted after the manual synchronization completes.

If you want the manual synchronization to occur at that moment, you can stop the automatic synchronization to allow the manual synchronization to occur. After the manual synchronization is finished, re-start the automatic synchronization. You can start and stop automatic synchronization either programmatically or through the client Workspace. See Section 3.2.2, "Enable/Disable Automatic Synchronization" for full details.

3.2.2 Enable/Disable Automatic Synchronization

Automatic synchronization is enabled by default if a publication is enabled for automatic synchronization. However, there may be situations where you want to disable this automatic behavior, as follows:

  • Enable/Disable—If you disable automatic synchronization, then even if you restart the client, automatic synchronization does not occur. Use enable/disable to permanently disable automatic synchronization.

  • Start/Stop—If you stop automatic synchronization, then when you restart the client, automatic synchronization is restarted. Use start/stop to temporarily stop automatic synchronization while a manual synchronization occurs.

If you are using the mSync GUI to initiate a synchronization, the underlying code performs the following for you:

  1. Stops the automatic synchronization with the Sync Control API.

  2. Initiates a manual synchronization with the programmatic API.

  3. Starts the automatic synchronization with the Sync Control API.

However, if you are performing the synchronization programmatically with the doSynchronize method, then you may need to perform the stop/start methods in your application to ensure that automatic synchronization is not executing.

The following control APIs can be used to manage the automatic synchronization or enable/disable automatic synchronization:

3.2.2.1 POLITE.INI Configuration to Enable/Disable Automatic Synchronization

The start and stop methods only control the automatic synchronization temporarily. To fully disable automatic synchronization, so that it is not restarted when a device is powered on, perform one of the following:

3.2.2.2 Overview of the Start/Stop Methods from the Sync Control API

You can stop and start automatic synchronization using the Sync Control API. The stop method takes a single input parameter: a timeout, which is a long that specifies a time in milliseconds to wait for any current activity in the automatic synchronization to complete. You can supply one of the following values for the timeout:

  • BG_STOP_TIMEOUT: A value in seconds that allows the automatic synchronization process to complete before stopping the service. By default, this is set to 5 seconds.

  • BG_KILL_AGENT: A value of -1 that makes the automatic synchronization service stop immediately, even if it is in the middle of a synchronization. If an automatic synchronization is in process, it is terminated. No errors or messages are returned.

  • Any long value in milliseconds: If the automatic synchronization does not stop within the time designated, then the method returns with a BG_ERROR_TIMEOUT error. At this point, you can reissue the stop method to terminate the automatic synchronization immediately by supplying BG_KILL_AGENT or -1 as the input value.

Note:

There is also a GUI for starting and stopping the automatic synchronization process. See Section 5.3.1, "Start, Stop, or Get Status for Automatic Synchronization" in the Oracle Database Lite Administration and Deployment Guide for more details.

3.2.2.3 C/C++ Sync Control APIs to Start/Stop Automatic Synchronization

Use the control APIs for starting and stopping automatic synchronization. These APIs are as follows:

olError olStartSyncAgent();
olError olStopSyncAgent(long timeout);
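A possible calling pattern for these methods is sketched below. The stub implementations and constant values stand in for the Oracle Lite library, which is not available here; only the escalation logic (graceful stop first, then BG_KILL_AGENT on timeout, as described in the previous section) is the point.

```c
/* Stub constants and implementations standing in for the Oracle Lite
 * library; the values and behavior here are assumptions for illustration. */
#define BG_KILL_AGENT    (-1L)
#define BG_ERROR_TIMEOUT (-2)
typedef int olError;

static olError olStopSyncAgent(long timeout) {
    /* Stub: pretend the agent is busy and cannot stop within a finite
     * timeout, but stops immediately when told to kill. */
    return (timeout == BG_KILL_AGENT) ? 0 : BG_ERROR_TIMEOUT;
}
static olError olStartSyncAgent(void) { return 0; }

/* Try a graceful stop first; escalate to BG_KILL_AGENT on timeout.
 * After the manual synchronization completes, the caller would invoke
 * olStartSyncAgent() to restart automatic synchronization. */
olError stop_agent_for_manual_sync(long timeout_ms) {
    olError rc = olStopSyncAgent(timeout_ms);
    if (rc == BG_ERROR_TIMEOUT)
        rc = olStopSyncAgent(BG_KILL_AGENT);
    return rc;
}
```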

3.2.2.4 C# Sync Control APIs to Start/Stop Automatic Synchronization

public class BGSyncControl
{
    public void start();
    public void stop(int timeout);
}

All methods throw an OracleException in case of failure.

3.2.2.5 JAVA Sync Control APIs to Start/Stop Automatic Synchronization

package oracle.lite.msync;
class BGSyncControl {
    public void start() throws SyncException;
    public void stop(long timeout) throws SyncException;
}

3.2.3 Define the Rules Under Which the Automatic Synchronization Starts

You can configure under what circumstances a synchronization should occur, and then Oracle Database Lite performs the synchronization for you automatically. The circumstances under which an automatic synchronization occurs are defined within the synchronization rules, which include the following:

  • Events—An event triggers a synchronization, as follows:

    • Data events: For example, you can specify that a synchronization occurs when there are a certain number of modified records in the client database.

    • System events: For example, you can specify that if the battery drops below a predefined minimum, you want to synchronize before the battery is depleted.

  • Conditions—A condition is an aspect of the client that needs to be present for a synchronization to occur. This includes conditions such as battery life or network availability.

The relationship between events and conditions when evaluating if an automatic synchronization occurs is as follows:

when EVENT and if (CONDITIONS), then SYNC

For example, if the event is that new data has been inserted and the specified condition is that the network must be available, then a synchronization occurs only when the network is available and there is new data.
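The rule evaluation above can be sketched as a simple predicate. The thresholds and field names here are hypothetical, not the actual rule engine's:

```c
/* Minimal sketch of the "when EVENT and if (CONDITIONS), then SYNC"
 * rule; all field names and thresholds are hypothetical. */
typedef struct {
    int new_records;        /* data event: modified records pending   */
    int network_available;  /* condition: network can be reached      */
    int battery_percent;    /* condition: remaining battery, percent  */
} ClientState;

int should_sync(const ClientState *s, int record_threshold, int min_battery) {
    int event      = s->new_records >= record_threshold;   /* when EVENT      */
    int conditions = s->network_available                  /* if (CONDITIONS) */
                  && s->battery_percent >= min_battery;
    return event && conditions;                            /* then SYNC       */
}
```

Note the asymmetry this captures: an event alone is not enough, because the conditions gate it, and conditions alone never trigger a synchronization without an event.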

You can define the rules for automatic synchronization within certain parts of the normal snapshot setup and platform configuration, as follows:

  • Publication level: Within the publication, you specify the rules under which the synchronization occurs for all publication items in that publication.

  • Platform level: Some of the rules are specific to the platform of the client, such as battery life, network bandwidth, and so on. These rules apply to all enabled publication items that exist on this particular platform, such as WinCE.

The following sections detail all of the rules you can configure for automatic synchronization:

3.2.3.1 Configure Publication-Level Automatic Synchronization Rules

Within the publication, you specify the rules under which the synchronization occurs for all publication items in that publication. These rules are defined when you create the publication either using MDW or programmatically with the APIs. To create this through MDW, see Section 5.5, "Define the Rules Under Which the Automatic Synchronization Starts" ; to add publication-level automatic synchronization rules with the API, see Section 3.4.1.4, "Define Publication-Level Automatic Synchronization Rules".

When you are creating the publication, you can define events that cause an automatic synchronization. Although these are defined at the publication level, they apply only to the publication items within this publication that have automatic synchronization enabled.

Table 3-3 describes the publication level events for automatic synchronization. The lowest value that can be provided is 1.

Table 3-3 Automatic Events for the Publication

Events Description

Client commit

Upon commit to the Oracle Lite database, the Mobile client detects the total number of record changes in the automatic synchronization log. If the number of modifications is equal to or greater than your pre-defined number, automatic synchronization occurs. This rule is on by default and is set to start an automatic synchronization if at least one record has changed.

Server MGP compose

If after the MGP compose cycle, the number of modified records for a user is equal to or greater than your pre-defined number, then an automatic synchronization occurs. Thus, if there are a certain number of records contained in an Out Queue destined for a client on the server, these modifications are synchronized to the client.


Note:

If you want to modify the publication-level automatic synchronization rules after you publish the application, you can do so through the Mobile Manager, as follows:
  1. Click Data Synchronization.

  2. Click Repository.

  3. Click Publications.

  4. Select the publication and click Automatic Synchronization Rules.

3.2.3.2 Configure Platform-Level Automatic Synchronization Rules

Some of the rules are specific to the platform of the client, such as battery life, network bandwidth, and so on. These rules apply to all enabled publication items that exist on this particular platform, such as WinCE. You configure these rules through Mobile Manager or MDW. This section describes Mobile Manager.

The platform-level synchronization rules apply to a selected client platform and all publications that exist on that platform. You can specify both platform events and conditions using the Mobile Manager.

To assign platform-level automatic synchronization rules, perform the following in Mobile Manager:

  1. Click Data Synchronization.

  2. Click Platform Settings, which brings up a page with the list of all the platforms that support automatic synchronization.

  3. Click on the desired platform.

  4. You can modify the following for each platform:

3.2.3.2.1 Event Rules for Platforms

Table 3-4 shows the platform events for automatic synchronization.

Table 3-4 Automatic Event Rules for the Client Platform

Event Description

Network bandwidth

If the Mobile client detects that it is connected to a network with a pre-defined minimum bandwidth, then automatic synchronization occurs.

Battery life

If the battery life drops below a pre-defined minimum, then synchronization is automatically triggered.

AC Power

As soon as AC power is detected, then synchronization is automatically triggered.

Time

Synchronize at a specific time or time interval. You can configure an automatic synchronization to occur at a specific time each day or at a regular interval.

  • Select Specify Time if you want to automatically synchronize at a specific hour, such as 8:00 AM, every day.

  • Select Specify Time Interval if you want to synchronize at a specific interval. For example, if you want to synchronize every hour, specify how long to wait between synchronization attempts.


3.2.3.2.2 Condition Rules for Platforms

Table 3-5 shows the platform conditions for automatic synchronization.

Table 3-5 Automatic Condition Rules for Client Platform

Condition Description

Battery level

Specify the minimum battery level required in order for an automatic synchronization to start. The battery level is specified as a percentage.

Network conditions

Network quality can be specified using several properties. This condition enables you to specify a minimum value for the following network properties:

  • Minimum network bandwidth, which is measured in bits per second.

  • Maximum ping delay, which is measured in milliseconds.

  • Data priority, which is either high or regular. You can specify the priority of your data in the table row.

For example, you can define a rule where all high priority data is automatically synchronized at a specified network bandwidth. The ping delay is optional. If not specified, the ping is not calculated.


3.2.3.2.3 Network Configuration for the Client Platform

You can set proxy information for your network provider, if required for accessing the internet.

Note:

If you are not using a proxy, then you do not need to define proxy information on this page.

You could have two types of networks, as follows:

  • Always-on: Define the proxy and port number. Click Apply when finished.

  • Dial-up:

    • Click Add Dial-up Network to add a new entry for dial-up configuration.

    • To edit an existing configuration, select the name of the existing configuration.

    • To delete an existing configuration, select the checkbox next to the desired configuration and click Delete.

If the platform has an always-on network, then this network is always tried first for the connection. If this network is not available, then the dial-up networks are tried in the order specified. You can rearrange the order of the dial-up networks by selecting one of the networks and clicking the up or down button.

3.2.4 Enable the Server to Notify the Client to Initiate a Synchronization to Download Data

If you have designed the compose phase yourself—that is, you do not use the MGP—then you can use the enqueue notification APIs to notify the client when data exists on the server that can be downloaded to the client. You can also use these APIs to manage the automatic synchronization schedule for your clients.

For more information on enqueue notification APIs, see Section 3.19.3, "Selecting How/When to Notify Clients of Composed Data".

3.2.5 Notify Application on Completion of Automatic Synchronization Cycle

You can develop your client application to be notified when an automatic synchronization cycle occurs. The application is notified from the Sync Agent when the automatic synchronization completes as well as when a critical event occurs in the client device. For example, when the device battery runs critically low, Oracle Database Lite can notify the application.

In the client application, create a procedure that executes one of the following message APIs. When your application calls the get message API, it blocks until an event occurs within an automatic synchronization. It returns a structure that describes this event.

The following sections provide implementation details for each development language:

Automatic Synchronization Notification for C/C++ Application

Use the olGetSyncMsg method in your client application to receive the automatic synchronization notification when implementing for C/C++ applications. In order to block for the status, you need to perform the following:

  1. Start the application messaging service with the olStartSyncMsg method, providing a queue handle of type olAppMsgQ. This method starts the messaging service and returns the queue handle in olAppMsgQ.

  2. Execute olGetSyncMsg with the olAppMsgQ queue handle and the defined olSyncMsg structure for the returned automatic synchronization information.

The following provides the method definitions:

typedef void *olAppMsgQ;
/* start application messaging, get queue handle */
olError olStartSyncMsg(olAppMsgQ *q);
/*Provide the queue handle and block to retrieve automatic sync event */
olError olGetSyncMsg(olAppMsgQ q, olSyncMsg *m);

The olGetSyncMsg method blocks until an event occurs; the Sync Agent then fills in the olSyncMsg structure, which you provide as an input parameter, with the information on what happened, as follows:

typedef struct _olSyncMsg {
    ol2B type;
    ol2B id;
    char msg[BG_MAX_MSG];
} olSyncMsg;

See "Input Parameters for Automatic Synchronization Notification" for a description of the input parameters in the structure.

The C/C++ API differs from the Java and C# versions in that it creates a message service with its own message queue. Thus, when finished, you must perform some cleanup to ensure that the message queue handle is released. Use the olStopSyncMsg method to stop the messaging service and release the handle. This must be performed for every message queue that is opened with the olStartSyncMsg method.

olError olStopSyncMsg(olAppMsgQ q);

If you want to force an existing olGetSyncMsg to return, use the olCancelSyncMsg from another thread in the application. This causes the olGetSyncMsg to return with the BG_ERR_APP_MSG_CANCEL error.

olError olCancelSyncMsg(olAppMsgQ q);
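Putting the start/get/stop calls together, the pattern might look like the sketch below. The stub bodies stand in for the Oracle Lite library, which is not available here, and return one canned INFO event so the open/block/cleanup control flow can be shown; the constant values are assumptions.

```c
#include <string.h>

/* Stub declarations standing in for the Oracle Lite message API; the
 * constants and the single canned event are assumptions for illustration. */
#define BG_MAX_MSG 256
typedef int olError;
typedef short ol2B;
typedef void *olAppMsgQ;
typedef struct _olSyncMsg { ol2B type; ol2B id; char msg[BG_MAX_MSG]; } olSyncMsg;

enum { MSG_INFO = 0, SYNC_SUCCEEDED = 1 };   /* illustrative values */

static olError olStartSyncMsg(olAppMsgQ *q) { *q = (olAppMsgQ)1; return 0; }
static olError olGetSyncMsg(olAppMsgQ q, olSyncMsg *m) {
    (void)q;                                 /* stub: one canned INFO event */
    m->type = MSG_INFO;
    m->id   = SYNC_SUCCEEDED;
    strcpy(m->msg, "sync ok");
    return 0;
}
static olError olStopSyncMsg(olAppMsgQ q) { (void)q; return 0; }

/* Typical pattern: open the queue, block for one event, then clean up so
 * the queue handle is released. */
olError wait_for_one_sync_event(olSyncMsg *out) {
    olAppMsgQ q;
    olError rc = olStartSyncMsg(&q);
    if (rc) return rc;
    rc = olGetSyncMsg(q, out);    /* blocks until an event occurs */
    olStopSyncMsg(q);             /* release the queue handle     */
    return rc;
}
```

In a real application this loop would typically run on its own thread, with olCancelSyncMsg called from another thread to force olGetSyncMsg to return during shutdown.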

Automatic Synchronization Notification for C# Application

Use the GetMessage method in your client application to receive the automatic synchronization notification when implementing for C# applications, as follows:

public BGSyncMsg GetMessage();

This method blocks until an event occurs, then the Sync Agent returns the BGSyncMsg class with the information on what happened, as follows:

public class BGSyncMsg
{
 public int Type;
 public int Id;
 public string Msg;
}

See "Input Parameters for Automatic Synchronization Notification" for a description of the input parameters in the class.

Automatic Synchronization Notification for Java Application

Use the getMessage method in your client application to receive the automatic synchronization notification when implementing for Java applications, as follows:

public class BGSyncControl 
{
 public BGSyncMsg getMessage() throws SyncException;
}  

This method blocks until an event occurs, then the Sync Agent returns the BGSyncMsg class with the information on what happened, as follows:

public class BGSyncMsg{
    public int type;
    public int id;
    public String msg;
}  

See "Input Parameters for Automatic Synchronization Notification" for a description of the input parameters in the class.

Input Parameters for Automatic Synchronization Notification

The input parameters in the input structure/class are as follows:

Table 3-6 The Sync Message Variables

Variable Description

Event type

The event can be of three types, each of which indicate the level of severity of this notification:

  • INFO

  • ERROR

  • WARNING

Event identifier for INFO types:

The INFO event identifier describes what occurred, as follows:

  • SYNC_STARTED: The Sync Agent has started the synchronization task.

  • SYNC_SUCCEEDED: Data synchronization completed successfully.

  • APPLY_STARTED: The Sync Agent has started the apply task.

  • APPLY_SUCCEEDED: The apply phase completed successfully.

  • SVR_NOTIF: The Sync Agent has received a server notification. The message contains information about the server notification, such as publication name, number of modified records and the record priority (high priority or normal).

  • NETWORK_CHANGE: The device has moved into a different network.

  • AGENT_STARTED: The Sync Agent started.

  • AGENT_STOPPED: The Sync Agent stopped.

Event identifier for the WARNING type:

The WARNING event identifier describes in more detail what occurred, as follows:

  • BATTERY_LOW: The device's battery is running low.

  • MEMORY_LOW: The device's memory is running low.

Event identifier for the ERROR type:

The ERROR event identifier describes in more detail what occurred, as follows:

  • APPLY_FAILED: The apply failed. In this case, 'message' contains the reason for failure.

  • SYNC_FAILED: Data synchronization failed. In this case, 'message' contains the reason for failure.

  • AGENT_ERROR: An internal error condition occurred. The message contains the actual error message. Examples would be failure to load a rule, failure to process server notification, failure to evaluate system power, and so on. In spite of this error, the Sync Agent continues to execute. Fatal errors are written to the olSyncAgent.err file.

Event Message

String message that expounds on the information provided by the event type and identifier.


3.2.6 Request Status for Automatic Synchronization Cycle

If you want to know at what stage the automatic synchronization cycle is, you can request status from the Sync Agent. In the client application, execute the get status API, which returns immediately with the stage at which the automatic synchronization cycle is executing. This is different from the notification message API, which only returns when an event completes within the synchronization cycle.

The get status API returns a structure that describes this event.

The following sections provide implementation details for each development language:

Retrieving Status for C/C++ Application

Use the olGetSyncStatus method in your C/C++ client application to retrieve status on the automatic synchronization, as follows:

olError olGetSyncStatus(olSyncStatus *s);

The Sync Agent populates the olSyncStatus structure, which you provide as an input parameter, with the status information, as follows:

typedef struct _olSyncStatus {
    char clientId[BG_MAX_USERNAME];
    ol2B syncState;
    ol2B syncProgress;
    char syncStateStr[BG_MAX_STATUS_STR];
    olError lastSyncError;
    ol2B lastSyncType;
    ol8B lastSyncTime;
    ol2B applyState;
    ol2B applyProgress;
    char applyStateStr[BG_MAX_STATUS_STR];
    olError lastApplyError;
    olU2B _reserved;
    ol8B lastApplyTime;
    char networkName[BG_MAX_STATUS_STR];
    ol4B networkSpeed;
    ol4B batteryPower;
} olSyncStatus;

See "Input Parameters for Retrieving Messages" for a description of the input parameters in the structure.

Retrieving Status for C# Application

Use the GetStatus method in your C# client application to retrieve status on the automatic synchronization, as follows:

public BGSyncStatus GetStatus();

This method returns the BGSyncStatus class with the status information on the automatic synchronization, as follows:

public class BGSyncStatus 
{
  public string clientId;
  public short  syncState;
  public string syncStateStr;
  public short syncProgress;
  public short lastSyncError;
  public short lastSyncType;
  public long lastSyncTime;
  public short applyState;
  public string applyStateStr;
  public short applyProgress;
  public short lastApplyError;
  public ushort  _reserved;
  public long  lastApplyTime;
  public string networkName;
  public int networkSpeed;
  public int batteryPower;
}

See "Input Parameters for Retrieving Messages" for a description of the input parameters in the structure.

Retrieving Status for Java Application

Use the getStatus method in your Java client application to retrieve status on the automatic synchronization, as follows:

public BGSyncStatus getStatus() throws SyncException

This method returns the BGSyncStatus class with the status information on the automatic synchronization, as follows:

public class BGSyncStatus 
{
   public String clientId;
   public short  syncState;
   public String syncStateStr;
   public short syncProgress;
   public short lastSyncError;
   public short lastSyncType;
   public Date lastSyncTime;
   
   public short applyState;
   public String applyStateStr;
   public short applyProgress;
   public short lastApplyError;
   public Date  lastApplyTime;
   
   public String networkName;
   public int networkSpeed;
   public int batteryPower;
}

See "Input Parameters for Retrieving Messages" for a description of the input parameters in the structure.
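Once the status object is retrieved, its fields can drive a simple progress display in the client. The following is a minimal sketch: the BGSyncStatus stub and the summarize helper below are hypothetical stand-ins for illustration, and are not part of the Oracle Database Lite API.

```java
// Stub mirroring the BGSyncStatus fields used below; in a real client,
// the class comes from the Oracle Database Lite sync packages.
class BGSyncStatus {
    public String syncStateStr;
    public short syncProgress;   // percentage complete
    public short lastSyncError;  // zero means no error
}

public class StatusDemo {
    // Build a one-line summary from the status fields of interest.
    static String summarize(BGSyncStatus s) {
        if (s.lastSyncError != 0) {
            return "last sync failed with error " + s.lastSyncError;
        }
        return s.syncStateStr + " (" + s.syncProgress + "%)";
    }

    public static void main(String[] args) {
        BGSyncStatus s = new BGSyncStatus();
        s.syncStateStr = "Sending data";
        s.syncProgress = 40;
        s.lastSyncError = 0;
        System.out.println(summarize(s));   // prints: Sending data (40%)
    }
}
```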

Input Parameters for Retrieving Messages

The fields in the status structure/class are as follows:

Table 3-7 Status Class Fields

Field Description

clientId

Username

syncState

A numeric value that denotes the current synchronization stage, such as compose, send, or receive.

syncStateStr

String describing the state, as denoted in the syncState, for the automatic synchronization.

syncProgress

A percentage that indicates the current progress for the automatic synchronization.

lastSyncError

If an error occurred in the last synchronization, this is the error code. If no error, this value is zero.

lastSyncType

The priority of the data for the last synchronization. If 1, high priority data was synchronized; if 0, regular priority data was synchronized.

lastSyncTime

Time of the last automatic synchronization.

applyState

Code that indicates the state for the apply phase.

applyStateStr

String describing the state for the apply phase, as denoted in the applyState variable.

applyProgress

A percentage that indicates the current progress for the apply phase.

lastApplyError

If an error occurred in the last apply phase, this is the error code. If no error, this value is zero.

lastApplyTime

Time of the last apply phase.

networkName

The network name assigned to this network.

networkSpeed

Current bandwidth of the network.

batteryPower

Current battery power percentage.


3.3 What is The Process for Setting Up a User For Synchronization?

Before you can perform the synchronization, the publication must be created, the user created and granted access to the publication, and optionally, the publication packaged with an application and published to the Mobile Server. This is referred to as the publish and subscribe model, which can be implemented in one of two ways: declaratively, using the Mobile Database Workbench, or programmatically, using the Oracle Database Lite APIs.

Once the publication is created and the user is subscribed to it, the user can synchronize.

On the back-end of the synchronization process, you have the option to customize how the apply and compose phase are executed. See Section 3.6, "Customize the Compose Phase Using MyCompose".

3.3.1 Creating a Snapshot Definition Declaratively

Use the Mobile Database Workbench (MDW), a GUI-based tool of Oracle Database Lite—described fully in Chapter 5, "Using Mobile Database Workbench to Create Publications"—to create snapshots declaratively. A graphical tool provides a safer, less error-prone technique for developers to create a Mobile application. Before actual application programming begins, the following steps must be executed:

  1. Verify that the base tables exist on the server database; if not, create the base table.

  2. Use MDW to define an application and the snapshot with the necessary publication and its publication items. See Chapter 5, "Using Mobile Database Workbench to Create Publications" for details.

  3. Use the Packaging Wizard to publish the application to the Mobile Server. This creates the publication items associated with the application. See Chapter 7, "Using the Packaging Wizard" for details.

  4. Use the Mobile Manager to create a subscription for a given user.

  5. Install the application on the development machine.

3.3.1.1 Manage Snapshots

The Mobile Server administrator can manage a snapshot, which is a full set or a subset of rows of a table or view. Create the snapshot by executing a SQL query against the base table. Snapshots are either read-only or updatable.

The following sections describe how to manage snapshots using MDW:

3.3.1.1.1 Read-only Snapshots

Read-only snapshots are used for querying purposes only. The data is downloaded from the Oracle server to the client; no data on the client is ever uploaded to the server. Any data added on the client in a read-only snapshot can be lost, since it is never uploaded to the server. Changes made to the master table in the back-end Oracle database server are replicated to the Mobile client. See Section 5.9.2, "Publication Item Tab Associates Publication Items With the Publication" for instructions on how to define the publication item as read-only.

Note:

A subscription created as complete refresh and read-only is lightweight; thus, to keep the subscription lightweight, the primary keys are not included in the replication. If you want to include primary keys, then create them with the createPublicationItemIndex API.
3.3.1.1.2 Updatable Snapshots

When you define a snapshot as updatable, then the data propagated within a synchronization is bi-directional. That is, any modifications made on the client are uploaded to the server; any modifications made on the back-end Oracle server are downloaded to the client. See Section 5.9.2, "Publication Item Tab Associates Publication Items With the Publication" for instructions on how to define the publication item as updatable.

A snapshot can only be updated when all the base tables that the snapshot is based on have a primary key or virtual primary key. If the base tables do not have a primary key, a snapshot cannot be updated and becomes read-only. Table 3-8 shows each refresh method type and whether it is updatable or read-only depending on primary key or virtual primary key:

Table 3-8 Which Refresh Methods Can Be Updatable or Read-Only


  • Table Uses a Primary Key: Updatable or Read-Only for all three refresh methods (Fast, Complete, and Queue-Based).

  • Table Uses a Virtual Primary Key: Updatable or Read-Only for all three refresh methods (Fast, Complete, and Queue-Based).

  • No Primary Key or Virtual Primary Key Used: Read-Only for Complete and Queue-Based refresh; not applicable for Fast refresh, since all Fast Refresh tables use a primary or virtual primary key.


3.3.1.1.3 Refresh a Snapshot

Your snapshot definition determines whether an updatable snapshot uses the complete or fast refresh method.

  • The complete refresh method recreates the snapshot every time it is refreshed. Note that when it recreates the snapshot, all of the data on the client Oracle Lite database is erased and then the snapshot for this user on the back-end Oracle database is brought down to the client.

  • The fast refresh method refreshes only the modified data within the snapshot definition on both the client and server. In general, the simpler your snapshot definition, the faster it is updated. All fast refresh methods require a primary key or a virtual primary key.

See Section 5.4, "Create a Publication Item" and Section 3.9, "Understanding Your Refresh Options" for more information.

3.3.1.1.4 Snapshot Template Variables

Snapshots are application-based. In some cases, you may want to limit the data that your application downloads for each user by specifying that all of the returned data match a predicate. You can accomplish this by using snapshot templates.

A snapshot template is an SQL query that contains data subsetting parameters. A data subsetting parameter is a colon (:), followed by an identifier name, as follows:

:var1

Note:

If the subsetting parameter is on a CHAR column of a specified length, then you should either preset all characters to spaces before setting the value or pad for the length of the column with spaces after setting the parameter.
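For example, if the parameter binds against a CHAR(10) column, the value can be padded with trailing spaces before it is set. The padChar helper below is a hypothetical illustration of that padding rule, not an Oracle API:

```java
public class CharParam {
    // Pad a parameter value with trailing spaces to the declared CHAR
    // length, so it compares equal to the fixed-length column value.
    static String padChar(String value, int columnLength) {
        StringBuilder sb = new StringBuilder(value);
        while (sb.length() < columnLength) {
            sb.append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A CHAR(10) column stores "KING" as "KING" plus six spaces.
        System.out.println("[" + padChar("KING", 10) + "]");
    }
}
```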

When the Mobile client creates snapshots on the client machine, the Mobile Server replaces the snapshot variables with user-specific values. By specifying different values for different users, you can control the data returned by the query for each user.

You can use MDW to specify a snapshot template variable in the same way that you create a snapshot definition for any platform.

Data subsetting parameters are bind variables, and so should not be enclosed in quotation marks ('). If you want to specify a string as the value of a data subsetting parameter, then include the single quotation marks in the value itself. You can specify the values for the template variables within the Mobile Manager.

The following examples specify a different value for every user. By specifying a different value for every user, the administrator controls the behavior and output of the snapshot template.

select * from emp where deptno = :dno

You define this select statement in your publication item. See Section 5.4.1, "Create SQL Statement for Publication Item" for instructions. Then, modify the user in the Mobile Manager to add the value for :dno. Then, when the user synchronizes, the value defined for the user is replaced in the select script. See Section 5.3, "Managing Application Parameter Input (Data Subsetting)" in the Oracle Database Lite Administration and Deployment Guide for information on how to define the value of the variable. This value can only be defined after the application is published and the user is associated with it.
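Conceptually, the substitution the Mobile Server performs at snapshot creation resembles a textual replacement of each :name parameter with the value assigned to the user. The sketch below illustrates that concept only; it is not the server's actual implementation:

```java
import java.util.Map;

public class TemplateDemo {
    // Replace each :name data subsetting parameter with the value
    // assigned to the synchronizing user.
    static String expand(String template, Map<String, String> userValues) {
        String result = template;
        for (Map.Entry<String, String> e : userValues.entrySet()) {
            result = result.replace(":" + e.getKey(), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String snapshot = "select * from emp where deptno = :dno";
        // Suppose the administrator assigned :dno the value 10 for this user.
        System.out.println(expand(snapshot, Map.of("dno", "10")));
        // prints: select * from emp where deptno = 10
    }
}
```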

Table 3-9 provides a sample set of snapshot query values specified for separate users.

Table 3-9 Snapshot Query Values for Separate Users

User Value Snapshot Query

John

10

select * from emp where deptno = 10

Jane

20

select * from emp where deptno = 20


select * from emp where ename = :ename

Table 3-10 provides another sample snapshot query value.

Table 3-10 Snapshot Query Value for User Names

User Value Snapshot Query

John

'KING'

select * from emp where ename = 'KING'


3.3.2 Creating the Snapshot Definition Programmatically

You can use the Resource Manager or Consolidator Manager APIs to programmatically create the publication items on the Mobile Server. Create publication items from views and customize code to construct snapshots.

Note:

The Consolidator Manager API can only create a publication; a publication created with the Consolidator Manager API cannot be packaged with an application. See Section 3.4, "Creating Publications Using Oracle Database Lite APIs" for information on the Consolidator Manager API. Use the Resource Manager APIs to create the publication, package it with an application, and publish it to the Mobile Server. See the oracle.mobile.admin.ResourceManager Javadoc in the Oracle Database Lite API Specification, which you can access from the ORACLE_HOME/Mobile/index.htm page.

The base tables must exist before the Consolidator Manager API can be invoked. The following steps are required to create a subscription:

  • Create a publication

  • Create a publication item and add it to the publication

  • Create a user

  • Create a subscription for the user based on the publication

The details of how to create a publication are documented in Chapter 5, "Using Mobile Database Workbench to Create Publications". Anything that you can do with the MDW tool, you can also perform programmatically using the Consolidator Manager API. Refer to the Javadoc for the syntax.

3.4 Creating Publications Using Oracle Database Lite APIs

Mobile Server uses a publish and subscribe model to centrally manage data distribution between Oracle database servers and Oracle Database Lite clients. Basic functions, such as creating publication items and publications, can be implemented easily using the Mobile Database Workbench (MDW). See Chapter 5, "Using Mobile Database Workbench to Create Publications" for more information.

These functions can also be performed using the Consolidator Manager or Resource Manager APIs by writing Java programs to customize the functions as needed. Some of the advanced functionality can only be enabled programmatically using the Consolidator Manager or Resource Manager APIs.

The publish and subscribe model can be implemented in one of two ways: declaratively, using MDW and the Packaging Wizard, or programmatically, using the Consolidator Manager and Resource Manager APIs.

3.4.1 Defining a Publication With Java Consolidator Manager APIs

While we recommend that you use MDW (see Chapter 5, "Using Mobile Database Workbench to Create Publications") or the Packaging Wizard (see Chapter 7, "Using the Packaging Wizard") for creating your publications, you can also create them, including the publication items and the user, with the Consolidator Manager API. Choose this option if you are performing more advanced techniques with your publications.

After creating the database tables in the back-end database, create the Resource Manager and Consolidator Manager objects to facilitate the creation of your publication:

  • The Resource Manager object enables you to create users to associate with the subscription.

  • The Consolidator Manager object enables you to create the subscription.

The order of creating the elements in the publication is the same as if you were using MDW. You must create a publication first and then add the publication items and other elements to it. Once the publications are created, subscribe users to them. See the Javadoc for full details on each method. See Chapter 5, "Using Mobile Database Workbench to Create Publications" for more details on the order of creating each element.

Note:

The following sections use the sample11.java sample to demonstrate the Resource Manager and Consolidator Manager methods used to create the publication and the users for the publication. The full source code for this sample can be found in the following directories:

On UNIX: <ORACLE_HOME>/mobile/server/samples

On Windows: <ORACLE_HOME>\Mobile\Server\Samples

  1. Section 3.4.1.1, "Create the Mobile Server User"

  2. Section 3.4.1.2, "Create Publications"

  3. Section 3.4.1.3, "Create Publication Items"

  4. Section 3.4.1.4, "Define Publication-Level Automatic Synchronization Rules"

  5. Section 3.4.1.5, "Data Subsetting: Defining Client Subscription Parameters for Publications"

  6. Section 3.4.1.6, "Create Publication Item Indexes"

  7. Section 3.4.1.7, "Adding Publication Items to Publications"

  8. Section 3.4.1.8, "Creating Client-Side Sequences for the Downloaded Snapshot"

  9. Section 3.4.1.9, "Subscribing Users to a Publication"

  10. Section 3.4.1.10, "Instantiate the Subscription"

  11. Section 3.4.1.11, "Bringing the Data From the Subscription Down to the Client"

  12. Section 3.4.1.12, "Modifying a Publication Item"

  13. Section 3.4.1.13, "Callback Customization for DML Operations"

  14. Section 3.4.1.14, "Restricting Predicate"

Note:

To call the Publish and Subscribe methods, the following JAR files must be specified in your CLASSPATH.
  • <ORACLE_HOME>\jdbc\lib\ojdbc14.jar

  • <ORACLE_HOME>\Mobile\classes\consolidator.jar

  • <ORACLE_HOME>\Mobile\classes\classgen.jar

  • <ORACLE_HOME>\Mobile\classes\servlet.jar

  • <ORACLE_HOME>\Mobile\classes\xmlparserv2.jar

  • <ORACLE_HOME>\Mobile\classes\jssl-1_2.jar

  • <ORACLE_HOME>\Mobile\classes\javax-ssl-1_2.jar

  • <ORACLE_HOME>\Mobile\Server\bin\devmgr.jar

  • <ORACLE_HOME>\Mobile\classes\share.jar

  • <ORACLE_HOME>\Mobile\classes\oracle_ice.jar

  • <ORACLE_HOME>\Mobile\classes\phaos.jar

  • <ORACLE_HOME>\Mobile\classes\jewt4.jar

  • <ORACLE_HOME>\Mobile\classes\jewt4-nls.jar

  • <ORACLE_HOME>\Mobile\classes\wtgpack.jar

  • <ORACLE_HOME>\Mobile\classes\jzlib.jar

  • <ORACLE_HOME>\Mobile\Server\bin\webtogo.jar

3.4.1.1 Create the Mobile Server User

Use the createUser method of the MobileResourceManager object to create the user for the publication.

  1. Create the MobileResourceManager object. A connection is opened to the Mobile Server. Provide the schema name, password, and JDBC URL for the database that contains the schema (the repository).

  2. Create one or more users with the createUser method. Provide the user name, password, the user's real name, and privilege, which can be one of the following: "O" for publishing an application, "U" for connecting to Web-to-Go as a user, or "A" for administering Web-to-Go. If NULL, no privilege is assigned.

    Note:

    Always request a drop user before you execute a create, in case this user already exists.
  3. Commit the transaction, which was opened when you created the MobileResourceManager object, and close the connection.

MobileResourceManager mobileResourceManager =
    new MobileResourceManager(CONS_SCHEMA, DEFAULT_PASSWORD, JDBC_URL);
mobileResourceManager.createUser("S11U1", "manager", "S11U1", "U");
mobileResourceManager.commitTransaction();
mobileResourceManager.closeConnection();

Note:

If you do not want to create any users, you do not need to create the MobileResourceManager object.
3.4.1.1.1 Change Password

You can change passwords for Mobile Server users with the setPassword method, which has the following syntax:

public static void setPassword
    (String userName,
     String newpwd) throws Throwable

Note:

Both usernames and passwords are limited to a maximum of 28 characters.

Execute the setPassword method before you commit the transaction and release the connection. The following example changes the password for the user MOBILE:

mobileResourceManager.setPassword("MOBILE","MOBILENEW");

3.4.1.2 Create Publications

A subscription is a combination of publications and the users who access the information gathered by the publications. Create any publication through the ConsolidatorManager object.

  1. Create the ConsolidatorManager object.

  2. Connect to the database using the openConnection method. Provide the schema name, password, and JDBC URL for the database that contains the schema (the repository).

  3. Create the publication with the createPublication method, which creates an empty publication.

Note:

Always request a drop publication before you execute a create, in case this publication already exists.
ConsolidatorManager consolidatorManager = new ConsolidatorManager();
consolidatorManager.openConnection(CONS_SCHEMA, DEFAULT_PASSWORD, JDBC_URL);
consolidatorManager.createPublication("T_SAMPLE11", 
 Consolidator.OKPI_CREATOR_ID, "OrdersODB.%s", null);

Note:

Special characters including spaces are supported in publication names. The publication name is case-sensitive.

3.4.1.3 Create Publication Items

An empty publication is not useful until a publication item is added to it. Thus, after creating the publication, create the publication item, which defines the snapshot of the base tables that is downloaded for your user.

When you create each publication item, you can specify the following:

  • Automatic or Manual Synchronization: Whether the publication item is to be synchronized automatically or manually.

  • Refresh Mode: The refresh mode of the publication item is specified during creation as either fast refresh, complete refresh, or queue-based.

  • Data-Subsetting Parameters: You can also establish the data-subsetting parameters when creating the publication item, which provides a finer degree of control on the data requirements for a given client.

Publication item names are limited to twenty-six characters and must be unique across all publications. The publication item name is case-sensitive. The following examples create a publication item named P_SAMPLE11-M.
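The naming constraint above can be checked before calling the API. The isValidLength helper below is a hypothetical client-side check for illustration, not part of the Oracle API:

```java
public class PubItemName {
    // Publication item names are limited to 26 characters; no case
    // normalization is applied because the names are case-sensitive.
    static boolean isValidLength(String name) {
        return name != null && !name.isEmpty() && name.length() <= 26;
    }

    public static void main(String[] args) {
        System.out.println(isValidLength("P_SAMPLE11-M"));  // prints: true
    }
}
```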

Note:

Always drop the publication item in case an item with the same name already exists.

The following example uses the createPublicationItem method, which creates a manual synchronization publication item P_SAMPLE11-M based on the ORD_MASTER database table with fast refresh. Use the addPublicationItem method to add this publication item to the publication.

Note:

For full details on the method parameters, see the Javadoc.
consolidatorManager.createPublicationItem("P_SAMPLE11-M", "MASTER", 
    "ORD_MASTER", "F", "SELECT * FROM MASTER.ORD_MASTER", null, null);

When you create a publication item that uses automatic synchronization through the createPublicationItem method, you can also define the following:

  • Automatic Synchronization: Set the publication to use automatic synchronization by setting the isLogBased flag to true.

  • Server-initiated change notifications: If you set the doChangeNtf flag to true, then the Mobile Server sends a notification to the client if any changes are made on the server for this publication item.

  • Set what constraints are replicated to the client: If you set the setDfltColOptions flag to true, then the default values and not null constraints are replicated to the client.

  • Create a client sub-query to return unique client ids in the cl2log_rec_stmt parameter. The client sub-query correlates the primary key of the changed records in the log table with the Consolidator client id. The log table contains the changes for the table and is named clg$<tablename>.

    Notes:

    • If you are creating a fast refresh publication item on a table with a composite primary key, the snapshot query must match the primary key columns in the order that they are present in the table definition. This automatically happens during the column selection when MDW is used or when a SELECT * query is used. Note that the order of the primary key columns in the table definition may be different from those in the primary key constraint definition.

    • A subscription created as complete refresh and read-only is lightweight; thus, to keep the subscription lightweight, the primary keys are not included in the replication. If you want to include a primary key, then you can create it with the createPublicationItemIndex API.

    For example, if the publication item SQL query is as follows:

    SELECT * FROM scott.emp a
       WHERE deptno in
           (select deptno from scott.emp b
            where b.empno = :empno )
     
    

    Assuming that the Consolidator client id is empno and the snapshot table is emp, then the client sub-query queries for data changes in the clg$emp log table as follows:

    SELECT empno as clid$$cs FROM scott.clg$emp
       UNION SELECT empno as clid$$cs FROM scott.emp
             WHERE deptno in (select deptno from scott.clg$dept)
     
    

The following example uses the automatic synchronization version of createPublicationItem method, which uses the PubItemProps class to define all publication item definitions, including automatic synchronization, as follows:

PubItemProps pi_props = new PubItemProps();
pi_props.owner = "MASTER";                  // owner schema
pi_props.store = STORES[i][0];              // store
pi_props.refresh_mode = "F"; //default      // uses fast refresh
pi_props.select_stmt =                  // specify select statement for snapshot
   "SELECT * FROM "+"MASTER"+"."+STORES[i][0]+ " WHERE C1 =:CLIENTID";
pi_props.cl2log_rec_stmt = "SELECT base.C1 FROM "   // client sub-query to 
      + "MASTER"+"."+STORES[i][0] + " base,"        // return unique clientids
      + "MASTER"+".CLG$"+STORES[i][0] + " log"
      + "  WHERE base.ID = log.ID";
// Setting "isLogBased" to True enables automatic sync for this pub item. 
pi_props.isLogBased = true;
// If doChangeNtf is true, automatic publication item sends notifications 
// from server about new/modified records
pi_props.doChangeNtf = true;

cm.createPublicationItem(PUBITEMS[i], pi_props);
cm.addPublicationItem(PUB,PUBITEMS[i],null,null,"S",null,null);
3.4.1.3.1 Defining Publication Items for Updatable Multi-Table Views

Publication items can be defined for both tables and views. When publishing updatable multi-table views, the following restrictions apply:

  • The view must contain a parent table with a primary key defined.

  • INSTEAD OF triggers must be defined for data manipulation language (DML) operations on the view. See Section 3.9, "Understanding Your Refresh Options" for more information.

  • All base tables of the view must be published.

3.4.1.4 Define Publication-Level Automatic Synchronization Rules

Once the publication is created, you can create and add automatic synchronization rules that apply to all enabled publication items in this publication. Perform the following to add a rule to a publication:

  1. The rule is made up of a rule name and a String that contains the rule definition. The rules can be created using the Rules classes and RuleInfo objects.

    1. Define the rule and convert it to a String using the RuleInfo object and the setSyncRuleParams method.

      RuleInfo ri = Rules.RULE_MAX_DB_REC_ri;
      ri.params.put(Rules.PARAM_NREC,"5");
      String ruleText = cm.setSyncRuleParams(ri.type,ri.params);
      
      

      There are RuleInfo objects for all of the main automatic synchronization rules. So, in order to specify a rule, you obtain the appropriate RuleInfo object from the Rules class and then define the variables. Table 3-11, "Automatic Synchronization Rule Info Objects" describes the different types of rules you can specify for triggering automatic synchronization:

      Note:

      See the Javadoc for examples and the parameters that you need to set for each rule.

      Table 3-11 Automatic Synchronization Rule Info Objects

      Rule Info Object Description

      RULE_MAX_DB_REC_ri

      Synchronize if the client database for all publication items on the client contains more than NREC modified records, where NREC is the number of modified records in the client database that triggers an automatic synchronization.

      RULE_NOTIFY_MAX_PUB_REC_ri

      Synchronize if the out queue contains more than NREC modified records, where NREC is the number of modified records in the server database that triggers an automatic synchronization.

      RULE_MAX_PI_REC_ri

      Client automatically synchronizes if the number of modified records for a publication item is greater than NREC.

      RULE_HIGH_BANDWIDTH_ri

      Synchronize when the network bandwidth is greater than <number> bits per second, where <number> is an integer that indicates the bandwidth in bits per second. When the bandwidth reaches this value, the synchronization occurs.

      RULE_LOW_PWR_ri

      Synchronize when the battery level drops to <number>%, where <number> is a percentage. Often you may wish to synchronize before you lose battery power.

      RULE_AC_PWR_ri

      Synchronize when the AC power is detected; that is, when the device is plugged in.

      RULE_MIN_MEM_ri

      Specify the minimum amount of free memory required in order for an automatic synchronization to start. The memory level is specified as a percentage.

      RULE_NET_PRIORITY_ri

      Network conditions can be specified using the following properties: data priority, ping delay and network bandwidth.

      RULE_MIN_PWR_ri

      If the battery life drops below a pre-defined minimum, then synchronization is automatically triggered.

      NET_CONFIG_ri

      Configure network parameters (currently only the network-specific proxy configuration is supported). The configuration rule contains a vector of hashtables, with a hashtable representing the properties of each individual network.

      RULE_TIME_INTERVAL_ri

      Schedule sync at a given time of day with a certain frequency (interval).

      Specify the time (PARAM_START_TIME) for an automatic synchronization to start. The format of the time is a standard time string, H24:MI:SS, for example, 00:00:00 or 23:59:00; the time is GMT. If not set, the synchronization starts when the Sync Agent starts and all other conditions are satisfied. Set the period (PARAM_PERIOD), in seconds, to specify the frequency of scheduled synchronization events.


    2. Define a name for the rule. The name should not be tied to any particular publication, so that you can use the rule for several publications.

  2. Create the rule with the createSyncRule method, which creates the rule with the name, the String containing the rule, and a boolean on whether to replace the rule if it already exists. Once completed, then this rule can be associated with any publication.

    boolean replace = true;
    cm.createSyncRule ( ruleName, ruleText, replace );
    
    
  3. Associate the rule with the desired publication or platform using the addSyncRule method. This method can add any existing rule to a designated publication. To add to a publication, use the publication name as the first parameter, as follows:

    cm.addSyncRule( PUB, ruleName );
    
    

    To add a rule to a client platform—Win32 or WINCE platform—perform the following:

    cm.addSyncRule( Consolidator.DEFAULT_TEMPLATE_WIN32, rulename );
    
    

    Where the platform name is a constant defined in the Consolidator class as either DEFAULT_TEMPLATE_WIN32 or DEFAULT_TEMPLATE_WCE.
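As a worked example of the RULE_TIME_INTERVAL_ri schedule in Table 3-11 (a GMT start time plus a period in seconds), the arithmetic for finding the next scheduled synchronization can be sketched as follows. This only illustrates the scheduling rule; it is not the Sync Agent's implementation:

```java
import java.time.Duration;
import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneOffset;

public class IntervalRule {
    // Given a start time (H24:MI:SS, GMT) and a period in seconds,
    // compute the first scheduled sync at or after 'now'.
    static Instant nextSync(String startTime, long periodSeconds, Instant now) {
        LocalTime t = LocalTime.parse(startTime);
        Instant start = now.atZone(ZoneOffset.UTC).toLocalDate()
                           .atTime(t).toInstant(ZoneOffset.UTC);
        if (!start.isBefore(now)) {
            return start;
        }
        long elapsed = Duration.between(start, now).getSeconds();
        long periods = (elapsed + periodSeconds - 1) / periodSeconds;  // ceiling
        return start.plusSeconds(periods * periodSeconds);
    }

    public static void main(String[] args) {
        // Hourly schedule anchored at 08:00:00 GMT; at 10:30 the next sync is 11:00.
        Instant now = Instant.parse("2024-01-01T10:30:00Z");
        System.out.println(nextSync("08:00:00", 3600, now));
        // prints: 2024-01-01T11:00:00Z
    }
}
```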

You can also perform the following:

3.4.1.4.1 Retrieve All Publications Associated with a Rule

Just as you can with resources, scripts and sequences that are associated with publications, you can retrieve all publications that are associated with a rule with the getPublicationNames method. The following retrieves all publications that are associated with the rule within the ruleName variable. The object type is defined as Consolidator.RULES_OBJECT.

String[] pubs  = cm.getPublicationNames ( ruleName , Consolidator.RULES_OBJECT);
3.4.1.4.2 Retrieve Rule Text

You can retrieve the text of the rule using the getSyncRule method and providing the rule name. This is useful if you are not sure what the rule is and need to discover the text before associating it with another publication.

String retStr = cm.getSyncRule ( ruleName );
3.4.1.4.3 Check if Rule is Modified

You can compare the rule within the repository with a provided string to see if the rule has been modified with the isSyncRuleModified method. A boolean value of true is returned if the provided ruleText is different from what exists in the repository.

boolean ismod = cm.isSyncRuleModified ( ruleName, ruleText );
3.4.1.4.4 Remove Rule

You can remove the association of a rule from a publication by using the removeSyncRule method. You can delete the entire rule from the repository by using the dropSyncRule method. If you drop the rule and it is still associated with one or more publications, the rule is automatically unassociated from these publications.

3.4.1.5 Data Subsetting: Defining Client Subscription Parameters for Publications

Data subsetting is the ability to create specific subsets of data and associate them with a parameter name that can be assigned to a subscribing user. When creating publication items, you can define a parameterized SELECT statement. Subscription parameters must be specified at the time the publication item is created, and are used during synchronization to control the data published to a specific client.

Creating a Data Subset Example

consolidatorManager.createPublicationItem("CORP_DIR1",
   "DIRECTORY1", "ADDRLRL4P", "F",
   "SELECT LastName, FirstName, company, phone1, phone2, phone3, phone4,
    phone5, phone1id, phone2id, phone3id, displayphone, address, city, state,
    zipcode, country, title, custom1, custom2, custom3, note
    FROM directory1.addrlrl4p WHERE company = :COMPANY", null, null);

In this sample statement, the publication item CORP_DIR1 retrieves data from the DIRECTORY1.ADDRLRL4P table and is subset by the variable COMPANY.

Note:

Within the select statement, the parameter name for the data subset must be prefixed with a colon, for example :COMPANY.

When a publication uses data subsetting parameters, set the parameters for each subscription to the publication. For example, in the previous example, the parameter COMPANY was used as an input variable to describe what data is returned to the client. You can set the value for this parameter with the setSubscriptionParameter method. The following example sets the subscription parameter COMPANY for the client DAVIDL in the CORP_DIR1 publication to DAVECO:

consolidatorManager.setSubscriptionParameter("CORP_DIR1", "DAVIDL",
       "COMPANY", "'DAVECO'");

Note:

This method should only be used on publications created using the Consolidator Manager API. To create template variables, a similar technique is possible using MDW.

3.4.1.6 Create Publication Item Indexes

The Mobile Server supports automatic deployment of indexes in Oracle Database Lite on clients. The Mobile Server automatically replicates primary key indexes from the server database. The Consolidator Manager API provides calls to explicitly deploy unique, regular, and primary key indexes to clients as well.

By default, the primary key index of a table is automatically replicated from the server. You can create secondary indexes on a publication item. If you do not want the primary key index, you must explicitly drop it from the publication item.

If you want to create other indexes on any columns in your application tables, then use the createPublicationItemIndex method. The following demonstrates how to set up an index on the NAME field in the publication item P_SAMPLE11-M:

consolidatorManager.createPublicationItemIndex("P_SAMPLE11M-I3",
   "P_SAMPLE11-M", "I", "NAME");

An index can contain more than one column. You can define an index with multiple columns, as follows:

consolidatorManager.createPublicationItemIndex("P_SAMPLE11D-I1", "P_SAMPLE11-D",
   "I", "KEY,NAME");
3.4.1.6.1 Define Client Indexes

Client-side indexes can be defined for existing publication items. There are three types of indexes that can be specified:

  • P - Primary key

  • U - Unique

  • I - Regular

Note:

When an index of type 'U' or 'P' is defined on a publication item, there is no check for duplicate keys on the server. If the same constraints do not exist on the base object of the publication item, synchronization may fail with a duplicate key violation. See the Oracle Database Lite API Specification for more information.

3.4.1.7 Adding Publication Items to Publications

Once you create a publication item, you must associate it with a publication using the addPublicationItem method, as follows:

consolidatorManager.addPublicationItem("T_SAMPLE11", "P_SAMPLE11-M",
   null, null, "S", null, null);

See Section 3.4.1.12, "Modifying a Publication Item" for details on how to change the definition.

3.4.1.7.1 Defining Conflict Rules

When adding a publication item to a publication, the user can specify winning rules to resolve synchronization conflicts in favor of either the client or the server. See Section 3.13, "Resolving Conflict Resolution with Winning Rules" for more information.

3.4.1.7.2 Using Table Weight

Table weight is an integer associated with publication items that determines the order in which the transactions for all publications are processed. For example, if three publication items exist—emp, dept, and mgr—you can define the order in which the transactions associated with each publication item are executed. In this example, assign a table weight of 1 to dept, 2 to mgr, and 3 to emp. In doing this, you ensure that the master table dept is always updated first, followed by mgr, and lastly by emp.

The insert, update, and delete client operations are executed in the following order:

  1. Client INSERT operations are executed first, from lowest to highest table weight order. This ensures that the master table entries are added before the details table entries.

  2. Client DELETE operations are executed next, from highest to lowest table weight order. Processing the delete operations ensures that the details table entries are removed before the master table entries.

  3. Client UPDATE operations are executed last, from highest to lowest table weight order.

In our example with dept, mgr, and emp tables, the execution order would be as follows:

  1. All insert operations for dept are processed.

  2. All insert operations for mgr are processed.

  3. All insert operations for emp are processed.

  4. All delete operations for emp are processed.

  5. All delete operations for mgr are processed.

  6. All delete operations for dept are processed.

  7. All update operations for emp are processed.

  8. All update operations for mgr are processed.

  9. All update operations for dept are processed.

Table weight is applied to publication items within a specific publication; for example, a publication can have more than one publication item of weight 2. In this case, it does not matter which of those publication items is processed first.

Define the order weight for tables when you add a publication item to the publication.
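The ordering rules above can be sketched as follows. This is a minimal illustration of the processing order only, not Mobile Server code; the ClientOp type and the weights shown are hypothetical.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TableWeightOrder {
    // Hypothetical client operation: a DML type plus the table weight
    // of the publication item it belongs to.
    record ClientOp(String dmlType, String table, int weight) {}

    // Rank of each DML phase: INSERT first, then DELETE, then UPDATE.
    static int phase(String dmlType) {
        switch (dmlType) {
            case "INSERT": return 0;
            case "DELETE": return 1;
            default:       return 2; // UPDATE
        }
    }

    // Orders operations the way the text describes: inserts by ascending
    // table weight, deletes and updates by descending table weight.
    static List<ClientOp> executionOrder(List<ClientOp> ops) {
        List<ClientOp> sorted = new ArrayList<>(ops);
        sorted.sort(Comparator
            .comparingInt((ClientOp op) -> phase(op.dmlType()))
            .thenComparingInt(op -> phase(op.dmlType()) == 0
                ? op.weight() : -op.weight()));
        return sorted;
    }

    public static void main(String[] args) {
        List<ClientOp> ops = List.of(
            new ClientOp("UPDATE", "dept", 1),
            new ClientOp("INSERT", "emp", 3),
            new ClientOp("DELETE", "dept", 1),
            new ClientOp("INSERT", "dept", 1),
            new ClientOp("DELETE", "emp", 3));
        for (ClientOp op : executionOrder(ops)) {
            System.out.println(op.dmlType() + " " + op.table());
        }
    }
}
```

Running this prints the dept inserts before the emp inserts, the emp deletes before the dept deletes, and the updates last, matching the nine-step order listed above for dept, mgr, and emp.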

3.4.1.8 Creating Client-Side Sequences for the Downloaded Snapshot

A sequence is a database schema object that generates sequential numbers. After creating a sequence, you can use it to generate unique sequence numbers for transaction processing. These unique integers can include primary key values. If a transaction generates a sequence number, the sequence is incremented immediately whether you commit or roll back the transaction.

If you have more than a single client, you must control which client receives which sequence numbers, so that after synchronization none of the records have duplicate sequence values. Thus, if you have multiple clients, specify a distinct range of numbers for each client, so that no two clients use the same numbers.

  • Specify a range of values for each client. For example, client A is assigned sequence numbers 1 through 100, client B is assigned 101 through 200, and client C is assigned 201 through 300. If a client runs out of sequence numbers, it is assigned another 100 values—the defined window size in this example—during the next synchronization. Because none of the clients is set to generate server-side sequence numbers, the database starts its own sequence at -1 and decrements it for each subsequent value, so that it never collides with the client sequence numbers.

  • Alternatively, you could specify that all clients use only odd numbers and the database uses all even numbers. That is, you could start each client at an odd number and increment by 2 for all of its sequence values. This enables you to avoid having negative numbers for your sequence values. The clients still have a window size, which in this example is 100, but each starts with an odd number within that window and always increments by 2 so that only odd numbers are generated. Thus, client A would still have the window of 1 to 100, but its sequence numbers would be 1, 3, 5, and so on, up to 99.

Thus, for each client that uses sequences, you must define which numbers the client can use through the Consolidator Manager API, which provides methods to create or drop a sequence, add a sequence to or remove it from a publication, modify a sequence, and advance the sequence window for each user.
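The two windowing schemes described above amount to simple arithmetic over a per-client window. The following is an illustration of the numbering logic only; the Consolidator Manager implements this internally, and the class and method names here are hypothetical.

```java
public class SequenceWindow {
    static final int WINDOW_SIZE = 100;

    // Scheme 1: each client owns a contiguous window of values.
    // Client 0 gets 1-100, client 1 gets 101-200, and so on.
    static long rangeValue(int clientIndex, int offset) {
        return (long) clientIndex * WINDOW_SIZE + 1 + offset;
    }

    // Scheme 2: clients generate only odd numbers within their window,
    // leaving the even numbers to the server-side sequence.
    static long oddValue(int clientIndex, int n) {
        return (long) clientIndex * WINDOW_SIZE + 1 + 2L * n;
    }

    public static void main(String[] args) {
        System.out.println(rangeValue(0, 0));  // client A starts at 1
        System.out.println(rangeValue(1, 0));  // client B starts at 101
        System.out.println(oddValue(0, 49));   // client A's last odd value: 99
    }
}
```

Under scheme 2, client A's window is still 1 to 100, but it emits only 1, 3, 5, ..., 99, so the server can use the even numbers without collisions.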

Note:

The sequence name is case-sensitive.

Once you have created the sequence, you place it into the publication with the publication item to which it applies.

Note:

If the sequences do not work properly, check your parent publications. All parent publications must have at least one publication item. If you do not have any publication items for the parent publication, then create a dummy publication item within the parent.

See the Oracle Database Lite API Specification (included on the CD) for a complete listing of the APIs to define and administrate sequences.

3.4.1.8.1 Specifying Sequence Threshold for Window Management

Oracle Database Lite also allows you to set a threshold. If you know that you need a minimum number of sequence values between synchronizations to perform your work, set this number as the threshold. That way, if fewer than this number of values remain in the current window, Oracle Database Lite provides the client with a new window to work from.

For example, suppose a client has a window size of 100 and retrieves the first window of 1-100. If the client has currently used sequence numbers up to 97, then—if no threshold is set—Oracle Database Lite does not provide a new window, since the current window is not exhausted. However, if you state that you need at least 20 values to perform your work, Oracle Database Lite notices that fewer than 20 values remain in the window and assigns the client the next window, which in this case is sequence numbers 101-200.
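The threshold decision in the example above reduces to checking how many values remain in the current window; a minimal sketch of that check (the method and parameter names are hypothetical):

```java
public class SequenceThreshold {
    // Returns true if the client should be advanced to a new window.
    // With no threshold (0), only an exhausted window triggers advancement;
    // with a threshold, having fewer remaining values than the threshold
    // also triggers it.
    static boolean needsNewWindow(long windowEnd, long current, int threshold) {
        long remaining = windowEnd - current;
        return remaining <= 0 || remaining < threshold;
    }

    public static void main(String[] args) {
        // Window 1-100, client currently at 97, as in the example above.
        System.out.println(needsNewWindow(100, 97, 0));   // false: 3 values left
        System.out.println(needsNewWindow(100, 97, 20));  // true: fewer than 20 left
    }
}
```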

3.4.1.8.2 Description of Sequence Support

The following sequence support is available:

  • True sequence support on the client—The Sync Server supports replication of true sequence objects to the client.

  • Clear association with a publication—In a manner similar to publication items, adding sequences to a publication propagates the corresponding sequence objects to all subscribing users. Note that a publication and a sequence have a one-to-many relationship. This means a publication can contain many different sequences, but a single sequence cannot exist in more than one publication.

  • Offline and Online—There are two types of sequences, as follows:

    • Offline: The developer specifies the increment value of the sequence used by the client. The sequence exists solely for the client.

    • Online: An online sequence is designed to support online Web-to-Go applications. This is accomplished by creating the same sequence object on both the server and the client. The paired sequences are incremented by two and started with staggered values; one starts with an even number and one starts with an odd number. By using an odd/even window model such as the one described above, the Consolidator Manager ensures uniqueness—regardless of whether the application is running while connected to the back-end Oracle database or not.

  • Sequence management—Once the sequences have been defined and associated with a publication, the Sync Server manages all aspects of administration for sequences, including allocation of new windows.

3.4.1.9 Subscribing Users to a Publication

Subscribe the users to a publication using the createSubscription function. The following creates a subscription between the S11U1 user and the T_SAMPLE11 publication:

consolidatorManager.createSubscription("T_SAMPLE11", "S11U1");

3.4.1.10 Instantiate the Subscription

After you subscribe a user to a publication, you complete the subscription process by instantiating the subscription, which associates the user with the publication in the back-end database. The next time that the user synchronizes, the data snapshot from the publication is provided to the user.

consolidatorManager.instantiateSubscription("T_SAMPLE11", "S11U1");

//Close the connection.
consolidatorManager.closeConnection();

Note:

If you need to set subscription parameters for data subsetting, this must be completed before instantiating the subscription. See Section 3.4.1.5, "Data Subsetting: Defining Client Subscription Parameters for Publications" for more information.

3.4.1.11 Bringing the Data From the Subscription Down to the Client

You can perform the synchronization and bring down the data from the subscription you just created. The client executes SQL queries against the client ODB to retrieve any information. This subscription is not associated with any application, as it was created using the low-level Consolidator Manager APIs.

3.4.1.12 Modifying a Publication Item

You can add additional columns to existing publication items. These new columns are pushed to all subscribing clients the next time they synchronize. This is accomplished through a complete refresh of all changed publication items.

  • An administrator can add multiple columns, modify the WHERE clause, add new parameters, and change data type.

  • This feature is supported for all Mobile client platforms.

  • The client does not upload snapshot information to the server. This also means that you cannot change snapshots directly on the client database; for example, you cannot alter a table using Mobile SQL.

  • Publication item upgrades will be deferred during high priority synchronizations. This is necessary for low bandwidth networks, such as wireless, because all publication item upgrades require a complete refresh of changed publication items. While the high priority flag is set, high priority clients will continue to receive the old publication item format.

  • The server supports a maximum of two versions of a publication item that has been altered.

To change the definition, use one of the following:

  • If the publication item is read-only, then modify the publication item either with the reCreatePublicationItem method or by dropping and creating the publication item with the dropPublicationItem and createPublicationItem APIs.

  • If the publication item is updatable, then you can use the alterPublicationItem method. This method enables a smooth transition of changing any table structure on both the client and the server for updatable publications.

    If you use the alterPublicationItem method, you must follow it by executing the resetCache method, which is part of the ConsolidatorManager class. The metadata cache should be reset every time a change is made to a publication or its publication items. You can do this either from the Mobile Manager or by executing the resetCache method directly; if you make the change through the Mobile Manager, then the Mobile Manager calls the resetCache method for you.

    You may use the alterPublicationItem method for schema evolution to add columns to an existing publication item. The WHERE clause may also be altered. If additional parameters are added to the WHERE clause, then these parameters must be set before the alter occurs. See the setSubscriptionParams method. However, if you are creating a fast refresh publication item on a table with a composite primary key, the snapshot query must match the primary key columns in the order that they are present in the table definition. This automatically happens during the column selection when MDW is used or when a SELECT * query is used. Note that the order of the primary key columns in the table definition may be different from those in the primary key constraint definition.

    consolidatorManager.alterPublicationItem("P_SAMPLE1", "select * from EMP");
    

    Note:

    If the select statement does not change, then the call to the alterPublicationItem() method has no effect.

    See Section 3.15, "Facilitating Schema Evolution" and the alterPublicationItem method definition in the Oracle Database Lite API Specification for more information.

3.4.1.13 Callback Customization for DML Operations

Once a publication item has been created, a user can use the Consolidator Manager API to specify a customized PL/SQL procedure that is stored in the Mobile Server repository to be called in place of all DML operations for that publication item. There can be only one Mobile DML procedure for each publication item. The procedure should be created as follows:

AnySchema.AnyPackage.AnyName(DML in CHAR(1), COL1 in TYPE, COL2 in TYPE, COLn.., PK1 in TYPE, PK2 in TYPE, PKn..)

The parameters for customizing a DML operation are listed in Table 3-12:

Table 3-12 Mobile DML Operation Parameters

Parameter Description

DML

DML operation for each row. Values can be "D" for DELETE, "I" for INSERT, or "U" for UPDATE.

COL1 ... COLn

List of columns defined in the publication item. The column names must be specified in the same order that they appear in the publication item query. If the publication item was created with "SELECT * FROM exp", the column order must be the same as they appear in the table "exp".

PK1 ... PKn

List of primary key columns. The column names must be specified in the same order that they appear in the base or parent table.


The following example defines a DML procedure for a publication item named exp, whose query is defined as:

select A,B,C from publication_item_exp_table

Assuming A is the primary key column for exp, then your DML procedure would have the following signature:

any_schema.any_package.any_name(DML in CHAR(1), A in TYPE, B in TYPE,
   C in TYPE, A_OLD in TYPE)

During runtime, this procedure is invoked with 'I', 'U', or 'D' as the DML type. For insert and delete operations, A_OLD will be null. In the case of updates, it will be set to the primary key of the row that is being updated. Once the PL/SQL procedure is defined, it can be attached to the publication item through the following API call:

consolidatorManager.addMobileDmlProcedure("PUB_exp", "exp",
   "any_schema.any_package.any_name")

where exp is the publication item name and PUB_exp is the publication name.

Refer to the Oracle Database Lite API Specification for more information.

3.4.1.13.1 DML Procedure Example

The following PL/SQL code defines an actual DML procedure for a publication item in one of the sample publications, based on the ORD_MASTER table described below. The publication item query was defined as:

SELECT * FROM "ord_master"

where ord_master has a single-column primary key on "ID".

ord_master Table

SQL> desc ord_master

Name                                      Null?    Type
----------------------------------------- -------- -------------
ID                                        NOT NULL NUMBER(9)
DDATE                                              DATE
STATUS                                             NUMBER(9)
NAME                                               VARCHAR2(20)
DESCRIPTION                                        VARCHAR2(20)

Code Example

CREATE OR REPLACE  PACKAGE "SAMPLE11"."ORD_UPDATE_PKG"  AS
 procedure  UPDATE_ORD_MASTER(DML CHAR,ID NUMBER,DDATE DATE,STATUS
NUMBER,NAME VARCHAR2,DESCRIPTION VARCHAR2, ID_OLD NUMBER);
END ORD_UPDATE_PKG;
/
CREATE OR REPLACE  PACKAGE BODY "SAMPLE11"."ORD_UPDATE_PKG" as
  procedure  UPDATE_ORD_MASTER(DML CHAR,ID NUMBER,DDATE DATE,STATUS
NUMBER,NAME VARCHAR2,DESCRIPTION VARCHAR2, ID_OLD NUMBER) is
  begin
    if DML = 'U' then
     execute immediate 'update ord_master set id = :id, ddate = :ddate,
status = :status, name = :name, description = '||''''||'from
ord_update_pkg'||''''||' where id = :id_old'
      using id,ddate,status,name,id_old;
    end if;
    if DML = 'I' then
 begin
      execute immediate 'insert into ord_master values(:id, :ddate,
:status, :name, '||''''||'from ord_update_pkg'||''''||')'
        using id,ddate,status,name;
 exception
  when others then
   null;
 end;
    end if;
    if DML = 'D' then
     execute immediate 'delete from ord_master where id = :id'
      using id;
    end if;
  end UPDATE_ORD_MASTER;
end ORD_UPDATE_PKG;
/

The API call to add this DML procedure is as follows:

consolidatorManager.addMobileDmlProcedure("T_SAMPLE11",
   "P_SAMPLE11-M", "SAMPLE11.ORD_UPDATE_PKG.UPDATE_ORD_MASTER")

where T_SAMPLE11 is the publication name and P_SAMPLE11-M is the publication item name.

3.4.1.14 Restricting Predicate

A restricting predicate can be assigned to a publication item as it is added to a publication. The predicate is used to limit the data downloaded to the client. The parameter, which is for advanced use, can be null. When using a restricting predicate, the synchronization uses the high priority replication mode. For more information on restricting predicates, see Section 1.2.5 "Priority-Based Replication" in the Oracle Database Lite Troubleshooting and Tuning Guide.

3.5 Client Device Database DDL Operations

The first time a client synchronizes, Oracle Database Lite automatically creates the Oracle Lite database with the snapshot tables for the user subscriptions on the Mobile client. If you would like to execute additional DDL statements on the database, then add the DDL statements as part of your publication. Oracle Database Lite executes these DDL statements when the user synchronizes.

This is typically used for adding constraints and check values.

For example, you can add a foreign key constraint to a publication item. Suppose Oracle Database Lite created snapshots S1 and S2 during the initial synchronization, where S1 and S2 are defined as follows:

S1 (C1 NUMBER PRIMARY KEY, C2 VARCHAR2(100), C3 NUMBER);
S2 (C1 NUMBER PRIMARY KEY, C2 VARCHAR2(100), C3 NUMBER);

If you would like to create a foreign key constraint between C3 on S2 and the primary key of S1, then add the following DDL statement to your publication item:

ALTER TABLE S2
   ADD CONSTRAINT S2_FK FOREIGN KEY (C3)
   REFERENCES S1 (C1);

Then, Oracle Database Lite executes any DDL statements after the snapshot creation or, if the snapshot has already been created, after the next synchronization.

See the Oracle Database Lite API Specification for more information on these APIs.

3.6 Customize the Compose Phase Using MyCompose

The compose phase takes a query for one or more server-side base tables and puts the generated DML operations for the publication item into the Out Queue to be downloaded into the client. The Consolidator Manager manages all DML operations using the physical DML logs on the server-side base tables. This can be resource intensive if the DML operations are complex—for example, if there are complex data-subsetting queries being used. The tools to customize this process include an extendable MyCompose class with compose methods that can be overridden, and additional Consolidator Manager APIs to register and load the customized class.

When you want to customize the compose phase of the synchronization process, you must perform the following:

  1. Section 3.6.1, "Create a Class That Extends MyCompose to Perform the Compose"

  2. Section 3.6.2, "Implement the Extended MyCompose Methods in the User-Defined Class"

  3. Section 3.6.3, "Use Get Methods to Retrieve Information You Need in the User-Defined Compose Class"

  4. Section 3.6.4, "Register the User-Defined Class With the Publication Item"

3.6.1 Create a Class That Extends MyCompose to Perform the Compose

The MyCompose class is an abstract class, which serves as the super-class for creating a user-written sub-class, as follows:

public class ItemACompose extends oracle.lite.sync.MyCompose
{
 ...
}

All user-written classes—such as ItemACompose—produce publication item DML operations to be sent to a client device by interpreting the base table DML logs. The sub-class is registered with the publication item, and takes over all compose phase operations for that publication item. The sub-class can be registered with more than one publication item—if it is generic—however, internally the Composer makes each instance of the extended class unique within each publication item.

3.6.2 Implement the Extended MyCompose Methods in the User-Defined Class

The MyCompose class includes the following methods—needCompose, doCompose, init, and destroy—which are used to customize the compose phase. One or more of these methods can be overridden in the sub-class to customize compose phase operations. Most users customize the compose phase for a single client. In this case, only implement the doCompose and needCompose methods. The init and destroy methods are only used when a process is performed for all clients, either before or after individual client processing.

The following sections describe how to implement these methods:

3.6.2.1 Implement the needCompose Method

The needCompose method identifies a client that has changes to a specific publication item that are to be downloaded. Use this method as a way to trigger the doCompose method.

public int needCompose(Connection conn,
  String clientid) throws Throwable

The parameters for the needCompose method are listed in Table 3-13:

Table 3-13 needCompose Parameters

Parameter Definition

conn

Database connection to the Mobile Server repository.

clientid

Specifies the client that is connecting to the database.


The following example examines a client base table for changes—in this case, the presence of dirty records. If there are changes, then the method returns MyCompose.YES, which triggers the doCompose method.

public int needCompose(Connection conn, String clientid) throws Throwable{
        boolean baseDirty = false;
        String [][] baseTables = this.getBaseTables();
        for(int i = 0; i < baseTables.length; i++){
            if(this.baseTableDirty(baseTables[i][0], baseTables[i][1])){
                baseDirty = true;
                break;
            }
        }
        if(baseDirty){
            return MyCompose.YES;
        }else{
            return MyCompose.NO;
        }
    }

This sample uses subsidiary methods discussed in Section 3.6.3, "Use Get Methods to Retrieve Information You Need in the User-Defined Compose Class" to check if the publication item has any tables with changes that need to be sent to the client. In this example, the base tables are retrieved and then checked for changed, or dirty, records. If any are found, MyCompose.YES is returned, which triggers the call to the doCompose method.

3.6.2.2 Implement the doCompose Method

The doCompose method populates the DML log table for a specific publication item, which is subscribed to by a client.

public int doCompose(Connection conn,
   String clientid) throws Throwable

The parameters for the doCompose method are listed in Table 3-14:

Table 3-14 doCompose Parameters

Parameter Definition

conn

Database connection to the Mobile Server repository.

clientid

Specifies the client that is connecting to the database.


The following example handles a publication item with only one base table, where each DML operation (insert, update, or delete) on the base table is applied to the publication item. This method is called once for each client subscribed to the publication item.

public int doCompose(Connection conn, String clientid) throws Throwable {
        int rowCount = 0;
        String [][] baseTables = this.getBaseTables();
        String baseTableDMLLogName =
            this.getBaseTableDMLLogName(baseTables[0][0], baseTables[0][1]);
        String baseTablePK =
            this.getBaseTablePK(baseTables[0][0], baseTables[0][1]);
        String pubItemDMLTableName = this.getPubItemDMLTableName();
        String sql = "INSERT INTO " + pubItemDMLTableName
            + " SELECT " + baseTablePK + ", DMLTYPE$$ FROM " +
            baseTableDMLLogName;
        Statement st = conn.createStatement();
        rowCount = st.executeUpdate(sql);
        st.close();
        return rowCount;
    }

This code uses subsidiary methods discussed in Section 3.6.3, "Use Get Methods to Retrieve Information You Need in the User-Defined Compose Class" to create a SQL statement. The MyCompose method retrieves the base table, the base table primary key, the base table DML log name and the publication item DML table name using the appropriate get methods. You can use the table names and other information returned by these methods to create a dynamic SQL statement, which performs an insert into the publication item DML table of the contents of the base table primary key and DML operation from the base table DML log.

3.6.2.3 Implement the init Method

The init method provides the framework for user-created compose preparation processes. The init method is called once for all clients prior to the individual client compose phase. The default implementation has no effect.

public void init(Connection conn)

The parameter for the init method is described in Table 3-15:

Table 3-15 init Parameters

Parameter Definition

conn

Database connection to the Mobile Server repository.


3.6.2.4 Implement the destroy Method

The destroy method provides the framework for compose cleanup processes. The destroy method is called once for all clients after the individual client compose phase. The default implementation has no effect.

public void destroy(Connection conn)

The parameter for the destroy method is described in Table 3-16:

Table 3-16 destroy Parameters

Parameter Definition

conn

Database connection to the Mobile Server repository.


3.6.3 Use Get Methods to Retrieve Information You Need in the User-Defined Compose Class

The following methods return information for use by primary MyCompose methods.

3.6.3.1 Retrieve the Publication Name With the getPublication Method

The getPublication method returns the name of the publication.

public String getPublication()

3.6.3.2 Retrieve the Publication Item Name With the getPublicationItem Method

The getPublicationItem method returns the publication item name.

public String getPublicationItem()

3.6.3.3 Retrieve the DML Table Name With the getPubItemDMLTableName Method

The getPubItemDMLTableName method returns the name of the DML table or DML table view, including schema name, which the doCompose or init methods are supposed to insert into.

public String getPubItemDMLTableName()

You can embed the returned value into dynamic SQL statements. The table or view structure is as follows:

<PubItem PK> DMLTYPE$$

The parameters for getPubItemDMLTableName are listed in Table 3-17:

Table 3-17 getPubItemDMLTableName View Structure Parameters

Parameter Definition

PubItemPK

The value returned by getPubItemPK()

DMLTYPE$$

This can have the values 'I' for insert, 'D' for delete, or 'U' for Update.


3.6.3.4 Retrieve the Primary Key With the getPubItemPK Method

Returns the primary key for the publication item in comma-separated format, in the form of <col1>,<col2>,<col3>.

public String getPubItemPK() throws Throwable

3.6.3.5 Retrieve All Base Tables With the getBaseTables Method

Returns all the base tables for the publication item in an array of two-string arrays. Each two-string array contains the base table schema and name. The parent table is always the first base table returned, in other words, baseTables[0].

public String [][] getBaseTables() throws Throwable

3.6.3.6 Retrieve the Primary Key With the getBaseTablePK Method

Returns the primary key for the listed base table in comma-separated format, in the form of <col1>,<col2>,<col3>.

public String getBaseTablePK (String owner, String baseTable) throws Throwable

The parameters for getBaseTablePK are listed in Table 3-18:

Table 3-18 getBaseTablePK Parameters

Parameter   Definition
owner       The schema name of the base table owner
baseTable   The base table name


3.6.3.7 Discover If Base Table Has Changed With the baseTableDirty Method

Returns a boolean value indicating whether or not the base table has changes to be synchronized.

public boolean baseTableDirty(String owner, String store)

The parameters for baseTableDirty are listed in Table 3-19:

Table 3-19 baseTableDirty Parameters

Parameter   Definition
owner       The schema name of the base table
store       The base table name


3.6.3.8 Retrieve the Name for DML Log Table With the getBaseTableDMLLogName Method

Returns the name for the physical DML log table or DML log table view for a base table.

public String getBaseTableDMLLogName(String owner, String baseTable)

The parameters for getBaseTableDMLLogName are listed in Table 3-20:

Table 3-20 getBaseTableDMLLogName Parameters

Parameter   Definition
owner       The schema name of the base table owner
baseTable   The base table name


You can embed the returned value into dynamic SQL statements. There may be multiple physical logs if the publication item has multiple base tables. The parent base table physical primary key corresponds to the primary key of the publication item. The structure of the log is as follows:

<Base Table PK> DMLTYPE$$

The parameters for getBaseTableDMLLogName view structure are listed in Table 3-21:

Table 3-21 getBaseTableDMLLogName View Structure Parameters

Parameter      Definition
Base Table PK  The primary key of the parent base table
DMLTYPE$$      'I' for insert, 'U' for update, or 'D' for delete


3.6.3.9 Retrieve View of the Map Table With the getMapView Method

Returns a view of the map table which can be used in a dynamic SQL statement and contains a primary key list for each client device. The view can be an inline view.

public String getMapView() throws Throwable

The structure of the map table view is as follows:

CLID$$CS <Pub Item PK> DMLTYPE$$

The parameters of the map table view are listed in Table 3-22:

Table 3-22 getMapView View Structure Parameters

Parameter     Definition
CLID$$CS      The client ID column
Pub Item PK   The primary key columns of the publication item
DMLTYPE$$     'I' for insert, 'U' for update, or 'D' for delete

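The following Java sketch shows one way to combine these values in dynamic SQL: it builds a join predicate from the comma-separated primary-key list returned by getPubItemPK, matching map view columns (alias m) against DML view columns (alias d). The view and column names in main are hypothetical stand-ins for the getMapView and getPubItemPK results:

```java
public class MapViewSql {
    // Builds "m.<col> = d.<col> AND ..." from a comma-separated PK list,
    // the format documented for getPubItemPK().
    static String pkJoin(String pkList) {
        StringBuilder join = new StringBuilder();
        for (String col : pkList.split(",")) {
            if (join.length() > 0) join.append(" AND ");
            join.append("m.").append(col.trim()).append(" = d.").append(col.trim());
        }
        return join.toString();
    }

    public static void main(String[] args) {
        // Hypothetical names; in a MyCompose subclass the view names would
        // come from getMapView() and getPubItemDMLTableName().
        String sql = "SELECT m.CLID$$CS, m.DMLTYPE$$ FROM MAP_VIEW_NAME m, DML_VIEW_NAME d"
                   + " WHERE " + pkJoin("ORD_ID,LINE_NO");
        System.out.println(sql);
    }
}
```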

3.6.4 Register the User-Defined Class With the Publication Item

Once you have created your sub-class, it must be registered with a publication item. The Consolidator Manager API provides two methods, registerMyCompose and deRegisterMyCompose, for adding the sub-class to and removing it from a publication item.

  • The registerMyCompose method registers the sub-class and loads it into the Mobile Server repository, including the class byte code. By loading the code into the repository, the sub-class can be used without having to be loaded at runtime.

  • The deRegisterMyCompose method removes the sub-class from the Mobile Server repository.

3.7 Customize What Occurs Before and After Synchronization Phases

You can customize what happens before and after certain synchronization processes by creating one or more PL/SQL packages. The following sections detail the different options you have for customization:

3.7.1 Customize What Occurs Before and After Every Phase of Each Synchronization

You can customize the MGP phase of the synchronization process through a set of predefined callback methods that add functionality to be executed before or after certain phases of the synchronization process. These callback methods are defined in the CUSTOMIZE PL/SQL package. Note that these callback methods are called before or after the defined phase for every publication item.

Note:

If you want to customize certain activity for only a specific publication item, see Section 3.7.2, "Customize What Occurs Before and After Compose/Apply Phases for a Single Publication Item" for more information.

Manually create this package in the Mobile Server repository. The methods and their respective calling sequence are as follows:

Note:

Some of the procedures in the package are invoked for each client defined in your Mobile Server, such as the BeforeClientCompose and AfterClientCompose methods.

3.7.1.1 NullSync

The NullSync procedure is called at the beginning of every synchronization session. It can be used to determine whether or not a particular user is uploading data.

procedure NullSync (clientid varchar2, isNullSync boolean);

3.7.1.2 BeforeProcessApply

The BeforeProcessApply procedure is called before the entire apply phase of the MGP process.

procedure BeforeProcessApply;

3.7.1.3 AfterProcessApply

The AfterProcessApply procedure is called after the entire apply phase of the MGP process.

procedure AfterProcessApply;

3.7.1.4 BeforeProcessCompose

The BeforeProcessCompose procedure is called before the entire compose phase of the MGP process.

procedure BeforeProcessCompose;

3.7.1.5 AfterProcessCompose

The AfterProcessCompose procedure is called after the entire compose phase of the MGP process.

procedure AfterProcessCompose;

3.7.1.6 BeforeProcessLogs

The BeforeProcessLogs procedure is called before the database log tables (CLG$) are generated for the compose phase of the MGP process. These log tables capture changes for the MGP and should not be confused with the trace logs.

procedure BeforeProcessLogs;

3.7.1.7 AfterProcessLogs

The AfterProcessLogs procedure is called after the database log tables (CLG$) are generated for the compose phase of the MGP process. These log tables capture changes for the MGP and should not be confused with the trace logs.

procedure AfterProcessLogs;

3.7.1.8 BeforeClientCompose

The BeforeClientCompose procedure is called before each user is composed during the compose phase of the MGP process.

procedure BeforeClientCompose (clientid varchar2);

3.7.1.9 AfterClientCompose

The AfterClientCompose procedure is called after each user is composed during the compose phase of the MGP process.

procedure AfterClientCompose (clientid varchar2);

3.7.1.10 Example Using the Customize Package

To use any of the procedures listed above, perform the following:

  • Manually create the CUSTOMIZE package in the Mobile Server schema.

  • Define all of the methods with the following specification:

    create or replace package CUSTOMIZE as 
        procedure NullSync (clientid varchar2, isNullSync boolean); 
        procedure BeforeProcessApply ; 
        procedure AfterProcessApply ; 
        procedure BeforeProcessCompose ; 
        procedure AfterProcessCompose ; 
        procedure BeforeProcessLogs ; 
        procedure AfterProcessLogs ; 
        procedure BeforeClientCompose(clientid varchar2); 
        procedure AfterClientCompose(clientid varchar2); 
        end CUSTOMIZE; 
    

WARNING:

It is the developer's responsibility to ensure that the package is defined properly and that the logic contained does not jeopardize the integrity of the synchronization process.
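As a hedged sketch of a matching package body, the following fragment implements every callback as a no-op, which you can then fill in selectively. It assembles the DDL as a SQL string in the same JDBC style used by the SAMP3_PKG example later in this chapter (in a real deployment the string would be passed to executeUpdate on a java.sql.Statement):

```java
public class CustomizeBody {
    // Assembles a minimal CREATE PACKAGE BODY statement for the CUSTOMIZE
    // package, with every callback defined as a no-op.
    static String bodySql() {
        return "CREATE OR REPLACE PACKAGE BODY CUSTOMIZE as"
             + " procedure NullSync(clientid varchar2, isNullSync boolean) is begin null; end;"
             + " procedure BeforeProcessApply is begin null; end;"
             + " procedure AfterProcessApply is begin null; end;"
             + " procedure BeforeProcessCompose is begin null; end;"
             + " procedure AfterProcessCompose is begin null; end;"
             + " procedure BeforeProcessLogs is begin null; end;"
             + " procedure AfterProcessLogs is begin null; end;"
             + " procedure BeforeClientCompose(clientid varchar2) is begin null; end;"
             + " procedure AfterClientCompose(clientid varchar2) is begin null; end;"
             + " end CUSTOMIZE;";
    }

    public static void main(String[] args) {
        // In a real deployment: stmt.executeUpdate(bodySql());
        System.out.println(bodySql());
    }
}
```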

3.7.1.11 Error Handling For CUSTOMIZE Package

Errors are logged for the CUSTOMIZE package only if logging is enabled for the MGP component at the finest level for all event types. Thus, you should set the logging level to ALL and the event type to ALL.

If any errors occur due to an invalid CUSTOMIZE package, they are logged only on the first MGP cycle after the Mobile Server restarts. On subsequent synchronizations, the errors are not re-written to the logs, since the MGP does not attempt to re-execute the CUSTOMIZE package until the Mobile Server is restarted.

Note:

One requirement is that the CUSTOMIZE package can only be executed as user mobileadmin.

To locate these errors easily within the MGP_<x>.log files, search for the MGP.callBoundCallBack method. Another option is to restart the Mobile Server and check the MGP log right after the next synchronization.

3.7.2 Customize What Occurs Before and After Compose/Apply Phases for a Single Publication Item

When creating publication items, the user can define a customizable PL/SQL package that the MGP calls during the apply and compose phases of the MGP background process for that particular publication item. To customize the compose/apply phases for a publication item, perform the following:

  1. Create the PL/SQL package with the customized before/after procedures.

  2. Register this PL/SQL package with the publication item.

Then when the publication item is being processed, MGP calls the appropriate procedures from your package.

Client data is accumulated in the in queue prior to being processed by the MGP. Once processed by the MGP, data is accumulated in the out queue before being pulled to the client by Mobile Sync.

You can implement the following PL/SQL procedures to incorporate customized code into the MGP process. The clientname and tranid are passed to allow for customization at the user and transaction level.

  • The BeforeApply method is invoked before the client data is applied:

    procedure BeforeApply(clientname varchar2)
    
    
  • The AfterApply method is invoked after all client data is applied.

    procedure AfterApply(clientname varchar2)
    
    
  • The BeforeTranApply method is invoked before the client data with tranid is applied.

    procedure BeforeTranApply(tranid number)
    
    
  • The AfterTranApply method is invoked after all client data with tranid is applied.

    procedure AfterTranApply(tranid number)
    
    
  • The BeforeCompose method is invoked before the out queue is composed.

    procedure BeforeCompose(clientname varchar2)
    
    
  • The AfterCompose method is invoked after the out queue is composed.

    procedure AfterCompose(clientname varchar2)
    
    

The following is a PL/SQL example that creates a callback package and registers it when creating the P_SAMPLE3 publication item. The BeforeApply procedure disables constraints before the apply phase; the AfterApply procedure enables these constraints. Even though you are only creating procedures for the before and after apply phase of the MGP process, you still have to provide empty procedures for the other parts of the MGP process.

  1. Create PL/SQL package declaration with callback owner/schema name of SAMPLE3 and callback package name of SAMP3_PKG.

  2. Create the package definition, with all MGP process procedures with callback owner.callback package name of SAMPLE3.SAMP3_PKG. Provide a null procedure for any procedure you do not want to modify.

  3. Register the package as the callback package for the SAMPLE3 publication item. If you are creating the publication item, provide the callback schema/owner and the callback package names as input parameters to the createPublicationItem method. If you want to add the callback package to an existing publication item, do the following:

    1. Retrieve the template metadata with getTemplateItemMetaData for the publication item.

    2. Modify the attributes that specify the callback owner/schema (cbk_owner) and the callback package (cbk_name).

    3. Register the package by executing the setTemplateItemMetaData method.

// create package declaration
  stmt.executeUpdate("CREATE OR REPLACE PACKAGE SAMPLE3.SAMP3_PKG as"
  + " procedure BeforeCompose(clientname varchar2);"
  + " procedure AfterCompose(clientname varchar2);"
  + " procedure BeforeApply(clientname varchar2);"
  + " procedure AfterApply(clientname varchar2);"
  + " procedure BeforeTranApply(tranid number);"
  + " procedure AfterTranApply(tranid number);"
  + " end;"
  );
// create package definition
  stmt.executeUpdate("CREATE OR REPLACE PACKAGE body SAMPLE3.SAMP3_PKG as"
  + " procedure BeforeTranApply(tranid number) is"
  + " begin"
    + " null;"
  + " end;"
  + " procedure AfterTranApply(tranid number) is"
  + " begin"
    + " null;"
  + " end;"
  + " procedure BeforeCompose(clientname varchar2) is"
  + " begin"
        + "   null;"
  + " end;"
  + " procedure AfterCompose(clientname varchar2) is"
  + " begin"
  + "   null;"
  + " end;"
  + " procedure BeforeApply(clientname varchar2) is"
  + "   cur integer;"
  + "   ign integer;"
  + "   begin"
  + "     cur := dbms_sql.open_cursor;"
  + "     dbms_sql.parse(cur, 'SET CONSTRAINT SAMPLE3.address14_fk DEFERRED', dbms_sql.native);"
  + "     ign := dbms_sql.execute(cur);"
  + "     dbms_sql.close_cursor(cur);"
  + "   end;"
  + " procedure AfterApply(clientname varchar2) is"
  + "   cur integer;"
  + "   ign integer;"
  + "   begin"
  + "     cur := dbms_sql.open_cursor;"
  + "     dbms_sql.parse(cur, 'SET CONSTRAINT SAMPLE3.address14_fk IMMEDIATE', dbms_sql.native);"
  + "     ign := dbms_sql.execute(cur);"
  + "     dbms_sql.close_cursor(cur);"
  + "   end;"
  + " end;"
  );

Then, register the callback package with the createPublicationItem method call, as follows:

// register SAMPLE3.SAMP3_PKG as the callback for MGP processing of 
// P_SAMPLE3 publication item.
 
cm.createPublicationItem("P_SAMPLE3","SAMPLE3","ADDRESS", "F",
    "SELECT * FROM SAMPLE3.ADDRESS", "SAMPLE3", "SAMP3_PKG");

In the previous code example, the following is required:

  • stmt, which is used when creating the package definition, is an instance of java.sql.Statement

  • cm, which is used when registering the callback package, is an instance of oracle.lite.sync.ConsolidatorManager

  • The callback package must have the following procedures defined:

    • BeforeCompose (clientname varchar2);

    • AfterCompose (clientname varchar2);

    • BeforeApply (clientname varchar2);

    • AfterApply (clientname varchar2);

    • BeforeTranApply (tranid number);

    • AfterTranApply (tranid number);

3.8 Initiating Client Synchronization With Synchronization APIs

You can modify the client-side application to start the synchronization programmatically. This section describes how to perform the synchronization upload and download phases for the client using the Synchronization APIs.

Note:

Currently, there are no APIs to perform the upload activity on the UNIX platforms.

To execute the upload portion of synchronization from the client (see steps 1 and 2 in Figure 3-1) from within your C, C++, or Java application, perform the following steps:

  1. Initialize the synchronization parameters.

  2. Set up the transport parameters.

  3. Initialize the synchronization options and environment, such as username, password, and selective synchronization.

  4. Perform the synchronization.

The following sections demonstrate how you can perform these steps in each of the supported programming languages:

3.8.1 Starting Synchronization Upload and Download Phases With C or C++ Applications

You can initiate and monitor synchronization from a C or C++ client application. The synchronization methods for the C/C++ interface are contained in ocapi.h and ocapi.dll, which are located in the <ORACLE_HOME>\Mobile\bin directory. See Section 4.1, "Synchronization APIs For C or C++ Applications" for full details.

3.8.2 Starting Synchronization Upload and Download Phases With Java Applications

You can initiate and monitor synchronization from a Java client application. See Section 4.2, "Synchronization API for Java Applications" for more information.

3.8.3 Starting Synchronization Upload and Download Phases With the ADO.NET Provider

You can initiate and monitor synchronization from an ADO.NET provider application. See Section 13.1.6, "Data Synchronization With the OracleSync Class" for full details.

3.9 Understanding Your Refresh Options

The Mobile Server supports several refresh options. During a fast refresh, incremental changes are synchronized. However, during a complete refresh, all data is refreshed with current data. The refresh mode is established when you create the publication item using the createPublicationItem API call. In order to change the refresh mode, first drop the publication item and recreate it with the appropriate mode.

The following sections describe the refresh options for your publication item, which define how it synchronizes:

3.9.1 Fast Refresh

Publication items are created for fast refresh by default. Under fast refresh, only incremental changes are replicated. The advantages of fast refresh are reduced overhead and increased speed when replicating data stores with large amounts of data where there are limited changes between synchronization sessions.

The Mobile Server performs a fast refresh of a view if the view meets the following criteria:

  • Each of the view base tables must have a primary key.

  • All primary keys from all base tables must be included in the view column list.

  • If the item is a view, and the item predicate involves multiple tables, then all tables contained in the predicate definition must have primary keys and must have corresponding publication items.

The view requires only a unique primary key for the parent table. The primary keys of other tables may be duplicated. For each base table primary key column, you must provide the Mobile Server with a hint about the column name in the view. You can accomplish this by using the primaryKeyHint method of the Consolidator Manager object. See the Javadoc in the Oracle Database Lite API Specification for more information.

3.9.2 Complete Refresh for Views

A complete refresh is simply a complete execution of the snapshot query. When application synchronization performance is slow, tune the snapshot query. Complete refresh is not optimized for performance. Therefore, to improve performance, use the fast refresh option. The Consperf utility analyzes only fast refresh publication items.

Publication items can be created for complete refresh using the C refresh mode in the createPublicationItem API of the Consolidator Manager API. When this mode is specified, client data is completely refreshed with current data from the server after every synchronization. An administrator can also force a complete refresh of an entire publication for a given client through an API call.

See the Javadoc in the Oracle Database Lite API Specification for more information.

The following lists what can cause a complete refresh, ordered from most likely to least likely:

  1. The same Mobile user synchronizing from multiple devices on the same platform, or synchronizing from different platforms when the publications are not platform specific.

  2. Republishing the application.

  3. An unexpected server apply condition, such as constraint violations, unresolved conflicts, and other database exceptions.

  4. Modifying the application, such as changing subsetting parameters or adding/altering publication items. This refresh affects only the modified publication items.

  5. A force refresh requested by the server administrator or by the client.

  6. Restoring an old Oracle Lite database (ODB file).

  7. Two separate applications using the same backend store.

  8. Unexpected client apply conditions, such as a moved or deleted database, database corruption, memory corruption, or other general system failures.

  9. Loss of transaction integrity between the server and client. The server fails post processing after completing the download and disconnects from the client.

  10. Data transport corruptions.

3.9.3 Queue-Based Refresh

You can create your own queues. The Mobile Server uploads and downloads changes from the user, and you perform customized apply/compose modifications to the back-end database with your own implementation. See Section 3.19, "Customizing Replication With Your Own Queues" for more information.

3.9.4 Forced Refresh

This is actually NOT a refresh option; however, it is discussed here to inform you of the consequences of performing a forced refresh. Of all the synchronization options, forced refresh is probably the most misunderstood, and it is commonly confused with complete refresh. This confusion can have serious consequences, including the loss of critical data on the client.

The forced refresh option is an emergency-only synchronization option, used when a client is corrupted or malfunctioning so severely that the decision is made to replace the Mobile client data with a fresh copy of data from the enterprise data store. When this option is selected, any data transactions that have been made on the client are lost.

When a forced refresh is initiated, all data on the client is removed. The client then brings down an accurate copy of the client data from the enterprise database, starting fresh with exactly what is currently stored in the enterprise data store.

3.10 Resuming an Interrupted Synchronization

With client/server networking, communication may be interrupted by unreliable network conditions, physical disconnections, limited transport bandwidth, and so on. To cope with these conditions efficiently, the transport protocol between the client and the server resumes a synchronization session from the last acknowledged byte. For example, if the client starts to upload 10 MB of data and the connection fails after 9 MB has been sent, the client does not resend the 9 MB that was acknowledged, but resumes the synchronization by uploading the remaining 1 MB. The resume feature works the same for both the upload and download phases of the transport.

Configure the resume feature parameters, as follows:

3.10.1 Defining Temporary Storage Location for Client Data

By default, the client data is buffered in memory and a maximum of 16 MB is allocated for the buffering. If more space is needed, new clients are blocked until space is freed. Alternatively, you can configure where the client data is temporarily stored and how much space to allocate with the RESUME_FILE and RESUME_FILE_SIZE parameters in the CONSOLIDATOR section of the webtogo.ora file on the Mobile Server, as follows:

RESUME_FILE=d:\path\file
RESUME_FILE_SIZE =NNN (MB)
 

Setting the RESUME_FILE_SIZE parameter configures the amount of memory allocated for the buffering. Setting RESUME_FILE allows using a disk file instead of RAM, which is more efficient if JDK1.4 or later is installed and memory mapping can be used.

If there are multiple disks available on the Mobile Server host, create one spool file per disk to optimize performance. You can specify several spool files with multiple RESUME_FILE and RESUME_FILE_SIZE parameters, each designated with a unique suffix, as follows: RESUME_FILE_2, RESUME_FILE_SIZE_2.

Normally, 64 KB blocks are used to buffer client data. You can specify the resume block size, in KB, with the RESUME_BLOCKSIZE parameter. If you are using disk files, specify a larger block size to minimize fragmentation.
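For example, a webtogo.ora fragment that spools resume data to two disks with a larger block size might look as follows (the paths and sizes are illustrative):

```
[CONSOLIDATOR]
RESUME_FILE=d:\spool\resume1.dat
RESUME_FILE_SIZE=512
RESUME_FILE_2=e:\spool\resume2.dat
RESUME_FILE_SIZE_2=512
RESUME_BLOCKSIZE=256
```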

3.10.2 Controlling Server Load

If too many clients connect to a Mobile Server at once, it can become overloaded, run out of memory, or have poor performance when responding to the clients. The RESUME_MAXACTIVE parameter controls the maximum number of connections that the Mobile Server handles at a single time. If more clients try to connect, they are queued until existing connections complete. The default is 100 connections.

The RESUME_TIMEOUT parameter indicates how long to keep client data while the client is not connected. The default is 0, which means that resume is disabled and after disconnection, the client data is discarded. A short timeout, such as 15 minutes, is suitable to resume any accidentally dropped connections. A longer timeout may be needed if users explicitly pause and resume synchronization to switch networks or use a dialup connection for another purpose.

The RESUME_MAXCHUNK parameter causes the server to drop the connection after sending the specified data size, in KB. This forces the client to reconnect and inform the server of how much data it already has. The server can then discard all data before that offset. The default value is 1024 KB.

These parameters are all configured within the webtogo.ora file on the Mobile Server.

3.10.3 Client Configuration

Configure the client-side parameters for timeout and maximum data size in the CONSOLIDATOR section of the polite.ini file on the client, as follows:

  • The RESUME_CLIENT_TIMEOUT parameter is the number of seconds that the client should use to timeout network operations. The default is 60 seconds.

  • The RESUME_CLIENT_MAXSEND parameter is the maximum data size, in KB, that the client should send in a single POST request. This is used in cases where there is a proxy with a small limit on the data size in one request. Specifying a reasonable value, such as 256 KB, can also help clients with limited storage space, as they can free the chunks that have already been transmitted and acknowledged. The default is 1024 KB.
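For example, a polite.ini fragment setting both client parameters might look as follows (the values are illustrative):

```
[CONSOLIDATOR]
RESUME_CLIENT_TIMEOUT=120
RESUME_CLIENT_MAXSEND=256
```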

3.11 Synchronizing With Database Constraints

When you have database constraints on your table, you must develop your application in a way that facilitates synchronization of the data while preserving the database constraints. The following sections detail each constraint and the issues you must take into account:

3.11.1 Synchronization And Database Constraints

Oracle Database Lite does not keep a record of the SQL operations executed against the database; instead, only the final changes are saved and synchronized to the back-end database.

For example, if you have a client with a unique key constraint, where the following is executed against the client Oracle Lite database:

  1. Record with primary key of 1 and unique field of ABC is deleted.

  2. Record with primary key of 4 and unique field of ABC is inserted.

When this is synchronized, according to the Section 3.4.1.7.2, "Using Table Weight" discussion, the insert is performed before the delete. This would add a duplicate value for ABC and cause a unique key constraint violation. To avoid this, you should defer all constraint checking until after all transactions are applied. See Section 3.11.3.2, "Defer Constraint Checking Until After All Transactions Are Applied".

Another example of how synchronization captures the end result of all SQL modifications is as follows:

  1. Insert an employee record 4 with name of Joe Judson.

  2. Update employee record 4 with address.

  3. Update employee record 4 with salary.

  4. Update employee record 4 with office number.

  5. Update employee record 4 with work email address.

When synchronization occurs, all modifications are captured and only a single insert is performed on the back-end database. The insert contains the primary key, name, address, salary, office number and email address. Even though the data was created with multiple updates, the Sync Server only takes the final result and makes a single insert.

Note:

If you want these constraints to apply on the Mobile client, see Section 3.11.6, "Generating Constraints on the Mobile Client".

3.11.2 Primary Key is Unique

When you have multiple clients, each updating the same table, you must have a method for guaranteeing that the primary key is unique across all clients. Oracle Database Lite provides you a sequence number that you can use as the primary key, which is guaranteed to be unique across all Oracle Database Lite clients.

For more information on the sequence number, see Section 3.4.1.8, "Creating Client-Side Sequences for the Downloaded Snapshot".

3.11.3 Foreign Key Constraints

A foreign key exists in a details table and points to a row in the master table. Thus, before a client adds a record to the details table, the corresponding master record must already exist.

For example, two tables EMP and DEPT have referential integrity constraints and are an example of a master-detail relationship. The DEPT table is the master table; the EMP table is the details table. The DeptNo field (department number) in the EMP table is a foreign key that points to the DeptNo field in the DEPT table. The DeptNo value for each employee in the EMP table must be a valid DeptNo value in the DEPT table.

When a user adds a new employee, first the employee's department must exist in the DEPT table. If it does not exist, then the user first adds the department in the DEPT table, and then adds a new employee to this department in the EMP table. The transaction first updates DEPT and then updates the EMP table. However, Oracle Database Lite does not store the sequence in which these operations were executed.

Oracle Database Lite does not keep a record of the SQL operations executed against the database; instead, only the final changes are saved and synchronized to the back-end database. In our employee example, when the user replicates with the Mobile Server, the Mobile Server could update the EMP table first. If this occurs, then it attempts to create a new record in EMP with an invalid foreign key value for DeptNo. The Oracle database detects a referential integrity violation, so the Mobile Server rolls back the transaction and places the transaction data in the Mobile Server error queue. In this case, the foreign key constraint violation occurred because the operations within the transaction were performed out of their original sequence.

In order to avoid this violation, you can do one of two things:

3.11.3.1 Set Update Order for Tables With Weights

Set the order in which tables are updated on the back-end Oracle database with weights. To avoid integrity constraint violations with a master-detail relationship, the master table must always be updated first, to guarantee that the master record exists before any records are added to a details table. In our example, you must set the DEPT table with a lower weight than the EMP table to ensure that all records are added to the DEPT table first.

You define the order weight for tables when you add a publication item to the publication. For more information on weights, see Section 3.4.1.7.2, "Using Table Weight".

3.11.3.2 Defer Constraint Checking Until After All Transactions Are Applied

You can use a PL/SQL procedure to avoid foreign key constraint violations based on out-of-sequence operations by using DEFERRABLE constraints in conjunction with the BeforeApply and AfterApply functions. DEFERRABLE constraints can be either INITIALLY IMMEDIATE or INITIALLY DEFERRED. The behavior of DEFERRABLE INITIALLY IMMEDIATE foreign key constraints is identical to regular immediate constraints. They can be applied interchangeably to applications without impacting functionality.

The Mobile Server calls the BeforeApply function before it applies client transactions to the server and calls the AfterApply function after it applies the transactions. Using the BeforeApply function, you can set constraints to DEFERRED to delay referential integrity checks. After the transaction is applied, call the AfterApply function to set constraints to IMMEDIATE. At this point, if a client transaction violates referential integrity, it is rolled back and moved into the error queue.

To prevent foreign key constraint violations using DEFERRABLE constraints:

  1. Drop all foreign key constraints and then recreate them as DEFERRABLE constraints.

  2. Bind user-defined PL/SQL procedures to publications that contain tables with referential integrity constraints.

  3. The PL/SQL procedure should set constraints to DEFERRED in the BeforeApply function and to IMMEDIATE in the AfterApply function, as in the following example featuring the SAMPLE3 schema and a constraint named address14_fk:

procedure BeforeApply(clientname varchar2) is
  cur integer;
begin
  cur := dbms_sql.open_cursor;
  dbms_sql.parse(cur, 'SET CONSTRAINT SAMPLE3.address14_fk DEFERRED', dbms_sql.native);
  dbms_sql.close_cursor(cur);
end;

procedure AfterApply(clientname varchar2) is
  cur integer;
begin
  cur := dbms_sql.open_cursor;
  dbms_sql.parse(cur, 'SET CONSTRAINT SAMPLE3.address14_fk IMMEDIATE', dbms_sql.native);
  dbms_sql.close_cursor(cur);
end;

3.11.4 Unique Key Constraint

A unique key constraint enforces uniqueness of data. However, you may have multiple clients across multiple devices updating the same table. Thus, a record may be unique on a single client, but not across all clients. Enforcing uniqueness is the customer's responsibility and depends on the data.

How do you guarantee that the records added on separate clients are unique? You can use the sequence numbers generated on the client by Oracle Database Lite. See Section 3.4.1.8, "Creating Client-Side Sequences for the Downloaded Snapshot" for more information.
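To illustrate why client-side sequences keep keys unique, consider clients that draw key values from disjoint, non-overlapping ranges. The following sketch is conceptual only; the class and the window scheme are hypothetical illustrations, not the Oracle Database Lite sequence implementation:

```java
// Illustration only: each client draws keys from a disjoint range, so rows
// inserted on different clients can never collide on the key value.
public class SequenceWindow {
    private final long start;   // first value in this client's window
    private final long end;     // exclusive upper bound of the window
    private long next;

    public SequenceWindow(int clientId, long windowSize) {
        this.start = (long) clientId * windowSize;
        this.end = start + windowSize;
        this.next = start;
    }

    public long nextVal() {
        if (next >= end) {
            throw new IllegalStateException("window exhausted; request a new range");
        }
        return next++;
    }

    public static void main(String[] args) {
        SequenceWindow clientA = new SequenceWindow(0, 1000);
        SequenceWindow clientB = new SequenceWindow(1, 1000);
        System.out.println(clientA.nextVal()); // 0
        System.out.println(clientB.nextVal()); // 1000
    }
}
```

Because the two windows never overlap, a key generated on one client cannot duplicate a key generated on another.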

3.11.5 Not Null Constraint

When you have a not null constraint on the client or on the server, you must ensure that this constraint is set on both sides.

  • On the server—Create a NOT NULL constraint on the back-end server table using the Oracle database commands.

  • For the client—Set a column as NOT NULL by executing the setPubItemColOption method in the ConsolidatorManager API. Provide Consolidator.NOT_NULL as the input parameter for nullType. The constraint is then enforced on the table in the client Oracle Lite database.

3.11.6 Generating Constraints on the Mobile Client

The Primary Key, Foreign Key, Not Null and Default Value constraints can be synchronized to the Mobile client; the Unique constraints cannot be synchronized. For foreign key constraints, you decide if you want the foreign key on the Mobile client. That is, when you create a foreign key constraint on a table on the back-end server, you may or may not want this constraint to exist on the Mobile client.

  • Each publication that is defined is specific to a certain usage. For example, if you have a foreign key constraint between two tables, such as department and employee, your publication may only specify that information from the employee table is downloaded. In this situation, you would not want the foreign constraint between the employee and department table to be enforced on the client.

  • If you do have a master-detail relationship or other constraint relationships synchronized down to the client, then you would want to have the constraint generated on the client.

In order to generate the constraints on the Mobile client, perform the following:

  1. Within the process for creating or modifying an existing publication using the APIs, invoke the assignWeights method of the ConsolidatorManager object, which does the following tasks:

    1. Calculates a weight for each of the publication items included in the publication.

    2. Creates a script that, when invoked on the client, generates the constraints on the client. This script is automatically added to the publication.

  2. On the Mobile client, perform a synchronization for the user, which brings down the snapshot and the constraint script. The script is automatically executed on the Mobile client.

Once executed on the client, all constraints on the server for this publication are also enforced on the Mobile client.

3.11.6.1 The assignWeights Method

The assignWeights method automatically calculates weights for all publication items belonging to a publication. If a new publication item is added or if the referential relationships change, call the method again.

The following defines the assignWeights method and its parameters:

public void assignWeights(java.lang.String pub, boolean createScripts)
                   throws ConsolidatorException

Where:

  • pub - Publication name

  • createScripts - If true, creates referential constraints scripts and adds them to the publication to be propagated to subscribed clients.
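The Mobile Server computes the weights internally. As a conceptual illustration only, the following sketch assigns each table a weight of 1 plus the maximum weight of its parent tables, so that master tables always carry lower weights than their details. The class and the parent map are hypothetical and are not part of the Consolidator API:

```java
import java.util.*;

// Sketch: a table's weight is 1 + the maximum weight of its parents, so
// tables referenced by foreign keys always sort before their children.
public class WeightSketch {
    public static int weight(String table, Map<String, List<String>> parents,
                             Map<String, Integer> memo) {
        Integer cached = memo.get(table);
        if (cached != null) return cached;
        int w = 1;
        for (String p : parents.getOrDefault(table, Collections.emptyList())) {
            w = Math.max(w, 1 + weight(p, parents, memo));
        }
        memo.put(table, w);
        return w;
    }

    public static void main(String[] args) {
        // EMPLOYEE references DEPARTMENT, so DEPARTMENT must apply first.
        Map<String, List<String>> parents = new HashMap<>();
        parents.put("EMPLOYEE", Arrays.asList("DEPARTMENT"));
        Map<String, Integer> memo = new HashMap<>();
        System.out.println(weight("DEPARTMENT", parents, memo)); // 1
        System.out.println(weight("EMPLOYEE", parents, memo));   // 2
    }
}
```

Applying tables in ascending weight order guarantees that a parent row exists before any child row that references it.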

3.12 Parent Tables Needed for Updateable Views

For a view to be updatable, it must have a parent table. A parent table can be any one of the view base tables in which a primary key is included in the view column list and is unique in the view row set. If you want to make a view updatable, provide the Mobile Server with the appropriate hint and the view parent table before you create a publication item on the view.

To make publication items based on an updatable view, use the following two mechanisms:

3.12.1 Creating a Parent Hint

Parent table hints define the parent table for a given view. Parent table hints are provided through the parentHint method of the Consolidator Manager object, as follows:

consolidatorManager.parentHint("SAMPLE3","ADDROLRL4P","SAMPLE3","ADDRESS");

See the Javadoc in the Oracle Database Lite API Specification for more information.

3.12.2 INSTEAD OF Triggers

INSTEAD OF triggers are used to execute INSTEAD OF INSERT, INSTEAD OF UPDATE, or INSTEAD OF DELETE commands. INSTEAD OF triggers also map these DML commands into operations that are performed against the view base tables. INSTEAD OF triggers are a function of the Oracle database. See the Oracle database documentation for details on INSTEAD OF triggers.

3.13 Resolving Conflicts with Winning Rules

The Mobile Server uses internal versioning to detect synchronization conflicts. A separate version number is maintained for each client and server record. When the client updates are applied to the server, the Mobile Server checks the version numbers of the client records against the version numbers on the server. If the versions do not match, then the conflict is resolved according to the defined winning rules, such as client wins or server wins, as follows:

The Mobile Server does not automatically resolve synchronization errors. Instead, the Mobile Server rolls back the corresponding transactions, and moves the transaction operations into the Mobile Server error queue. It is up to the administrator to view the error queue and determine if the correct action occurred. If not, the administrator must correct and re-execute the transaction. If it did execute correctly, then purge the transaction from the error queue.
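The winning-rule decision just described can be pictured with a small sketch. This is a conceptual illustration of record versioning and winning rules, not the Mobile Server's internal implementation; the class, method, and version-number scheme are hypothetical:

```java
// Conceptual sketch: each record carries a version number; a client update
// applies cleanly only when it was based on the current server version.
public class ConflictSketch {
    public enum Outcome { APPLIED, CLIENT_WINS, SERVER_WINS }

    public static Outcome apply(long clientBaseVersion, long serverVersion,
                                boolean clientWinsRule) {
        if (clientBaseVersion == serverVersion) {
            return Outcome.APPLIED;            // versions match: no conflict
        }
        // Versions differ: resolve according to the configured winning rule.
        return clientWinsRule ? Outcome.CLIENT_WINS : Outcome.SERVER_WINS;
    }

    public static void main(String[] args) {
        System.out.println(apply(3, 3, false)); // APPLIED
        System.out.println(apply(2, 3, false)); // SERVER_WINS
        System.out.println(apply(2, 3, true));  // CLIENT_WINS
    }
}
```

With a "Server Wins" rule the losing client transaction is the one reported to the error queue, which matches the reporting behavior described in Section 3.13.1.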

One type of error is a synchronization conflict, which is detected in any of the following situations:

All the other errors, including nullity violations and foreign key constraint violations are synchronization errors. See Section 3.11, "Synchronizing With Database Constraints" for more information.

All synchronization errors are placed into the error queue. For each publication item created, a separate and corresponding error queue is created. The purpose of this queue is to store transactions that fail due to unresolved conflicts. The administrator can attempt to resolve the conflicts, either by modifying the error queue data or that of the server, and then attempt to re-apply the transaction.

The administrator can change the transaction operations and re-execute or purge transactions from the error queue from either of the following:

3.13.1 Resolving Errors and Conflicts Using the Error Queue

The error queue stores transactions that fail due to synchronization errors or unresolved conflicts. For unresolved conflicts, only the "Server Wins" conflicts are reported. If you have set your conflict rules to "Client Wins", then these are not reported. The administrator can do one of the following:

  • Attempt to correct the error by modifying the error queue data or that of the server, and re-apply the transaction through the executeTransaction method of the Consolidator Manager object.

  • If a conflict was reported and resolved to your satisfaction, then you can purge the transaction from the error queue with the purgeTransaction method of the Consolidator Manager object. Otherwise, you can override the default conflict resolution by modifying the error queue data and re-apply the transaction.

View the error queue through the Mobile Manager GUI, where you can see what the conflict was. You can fix the problem and reapply the data by modifying the DML operation appropriately and then re-executing. See Section 5.9.4.3 "Viewing Transactions in the Error Queue" in the Oracle Database Lite Administration and Deployment Guide for directions.

3.13.2 Customizing Synchronization Conflict Resolution Outcomes

You can customize synchronization conflict resolution by performing the following:

  1. Configure the winning rule to Client Wins.

  2. Attach BEFORE INSERT, UPDATE, and DELETE triggers to database tables.

  3. Create a custom DML procedure.

The triggers in the database compare old and new row values and resolve client changes, as you specify in the triggers.

3.14 Manipulating Application Tables

If you need to manipulate the application tables to create a secondary index or a virtual primary key, you can use ConsolidatorManager methods to programmatically perform these tasks in your application, as described in the following sections:

3.14.1 Creating Secondary Indexes on Client Device

The first time a client synchronizes, the Mobile Server automatically enables a Mobile client to create the database objects on the client in the form of snapshots. By default, the primary key index of a table is automatically replicated from the server. You can create secondary indexes on a publication item through the Consolidator Manager APIs. See the Oracle Database Lite API Javadoc for specific API information. See Section 3.4.1.6, "Create Publication Item Indexes" for an example.

3.14.2 Virtual Primary Key

You can specify a virtual primary key for publication items where the base object does not have a primary key defined. This is useful if you want to create a fast refresh publication item on a table that does not have a primary key.

A virtual primary key must be unique and not null. You can create a virtual primary key for more than one column, but the API must be called separately for each column that you wish to assign a virtual primary key. The following methods create and drop a virtual primary key.

Use the createVirtualPKColumn method to create a virtual primary key column.

consolidatorManager.createVirtualPKColumn("SAMPLE1", "DEPT", "DEPT_ID"); 

Use the dropVirtualPKColumns method to drop a virtual primary key.

consolidatorManager.dropVirtualPKColumns("SAMPLE1", "DEPT"); 

Note:

See the Javadoc in the Oracle Database Lite API Specification for more information.

3.15 Facilitating Schema Evolution

You can use schema evolution when adding or altering a column in the application tables for updatable publication items. You do not use schema evolution for read-only publication items.

If you do alter the schema, then the client receives a complete refresh on the modified publication item, but not for the entire publication.

Note:

You should stop all synchronization events and MGP activity during a schema evolution.

The following types of schema modifications are supported:

Note:

You cannot modify the definition of any primary key or virtual primary key.

For facilitating schema evolution, perform the following:

  1. If necessary, modify the table in the back-end Oracle database.

  2. Modify the publication item directly on the production Mobile repository through MDW or the alterPublicationItem API. Modifying the SQL query of the publication item causes the schema evolution to occur.

    A schema evolution only occurs if the SQL query is modified. If the SQL query does not change, then the evolution does not occur. If your modification only touched the table, then you must modify the SQL query by adding an additional space to force the schema evolution to occur.

    Note:

    If you decide to republish the application to a different Mobile repository, then update the publication definition in the packaging wizard.

  3. Once you alter the SQL query, then either use Mobile Manager to refresh the metadata cache or restart the Mobile Server. To refresh the metadata cache through Mobile Manager, select Data Synchronization->Administration->Reset Metadata Cache, or execute the resetCache method of the ConsolidatorManager class.

Note:

Use of the high priority flag during synchronization overrides any schema evolution; as a result, the new table definition does not come down to the client.

When you modify the table in the Mobile repository, the client snapshot is no longer valid. Thus, by default, a complete refresh occurs the next time you synchronize, because a new snapshot must be created on the client.

3.16 Set DBA or Operational Privileges for the Mobile Server

You can set either DBA or operational privileges for the Mobile Server with the following Consolidator Manager API:

void setMobilePrivileges( String dba_schema, String dba_pass, int type )
        throws ConsolidatorException

where the input parameters are as follows:

  • dba_schema - The name of a schema with DBA privileges.

  • dba_pass - The password for that schema.

  • type - Either Consolidator.DBA or Consolidator.OPER.

If you specify Consolidator.DBA, then the Mobile Server is granted the DBA privileges that are required for its publish/subscribe functions.

If you specify Consolidator.OPER, then the Mobile Server is granted only the privileges necessary for executing without any schema modifications. The OPER is given DML and SELECT access to publication item base objects, version, log, and error queue tables.

The Mobile Server privileges are modified using the C$MOBILE_PRIVILEGES PL/SQL package, which is created for you automatically after the first time you use the setMobilePrivileges procedure. After the package is created, the Mobile Server privileges can be administered from SQL or from this Java API.

3.17 Create a Synonym for Remote Database Link Support For a Publication Item

Publication items can be defined for database objects existing on remote database instances outside of the Mobile Server repository. Local private synonyms of the remote objects should be created in the Oracle database. To create the Consolidator Manager logging objects, execute the SQL script <ORACLE_HOME>\Mobile\server\admin\consolidator_rmt.sql against the remote schema.

The synonyms should then be published using the createPublicationItem method of the ConsolidatorManager object. If the remote object is a view that needs to be published in updatable mode and/or fast-refresh mode, the remote parent table must also be published locally. Parent hints should be provided for the synonym of the remote view, similar to those used for local updatable and/or fast-refreshable views.

Two additional methods have been created, dependencyHint and removeDependencyHint, to deal with non-apparent dependencies introduced by publication of remote objects.

Remote links to the Oracle database must be established before attempting any remote linking procedures. For details, refer to the Oracle SQL Reference.

Note:

The performance of synchronization from remote databases is subject to network throughput and the performance of remote query processing. Because of this, remote data synchronization is best used for simple views or tables with a limited amount of data.

The following sections describe how to manage remote links:

3.17.1 Publishing Synonyms for the Remote Object Using CreatePublicationItem

The createPublicationItem method creates a new, stand-alone publication item as a remote database object. If the URL string is used, the remote connection is established and closed automatically. If the connection is null or cannot be established, an exception is thrown. The remote connection information is used to create logging objects on the linked database and to extract metadata.

Note:

See the Javadoc in the Oracle Database Lite API Specification for more information.

consolidatorManager.createPublicationItem(
     "jdbc:oracle:oci8:@oracle.world",
     "P_SAMPLE1",
     "SAMPLE1",
     "PAYROLL_SYN",
     "F",
     "SELECT * FROM sample1.PAYROLL_SYN" + " WHERE SALARY >:CAP", null, null);

Note:

Within the select statement, the parameter name for the data subset must be prefixed with a colon, for example :CAP.

3.17.2 Creating or Removing a Dependency Hint

Use the dependencyHint method to create a hint for a non-apparent dependency.

Given the remote view definition:

        create payroll_view as
        select p.pid, e.name
        from payroll p, emp e
        where p.emp_id = e.emp_id;

Execute the following locally:

        create synonym v_payroll_syn for payroll_view@<remote_link_address>;
        create synonym t_emp_syn for emp@<remote_link_address>;

Where <remote_link_address> is the link established on the Oracle database. Use dependencyHint to indicate that the local synonym v_payroll_syn depends on the local synonym t_emp_syn:

consolidatorManager.dependencyHint("SAMPLE1","V_PAYROLL_SYN","SAMPLE1","T_EMP_SYN");

Use the removeDependencyHint method to remove a hint for a non-apparent dependency.

Note:

See the Javadoc in the Oracle Database Lite API Specification for more information.

3.18 Using the Sync Discovery API to Retrieve Statistics

The sync discovery feature is used to request an estimate of the size of the download for a specific client, based on historical data. The following statistics are gathered to maintain the historical data:

The following sections contain methods that can be used to gather statistics:

3.18.1 getDownloadInfo Method

The getDownloadInfo method returns the DownloadInfo object. The DownloadInfo object contains a set of PublicationSize objects and access methods. The PublicationSize objects carry the size information of a publication item. The method Iterator iterator() can then be used to view each PublicationSize object in the DownloadInfo object.

DownloadInfo dl = consolidatorManager.getDownloadInfo("S11U1", true, true);

Note:

See the Javadoc in the Oracle Database Lite API Specification for more information.

3.18.2 DownloadInfo Class Access Methods

The access methods provided by the DownloadInfo class are listed in Table 3-23:

Table 3-23 DownloadInfo Class Access Methods

Method Definition

iterator

Returns an Iterator object so that the user can traverse through all the PublicationSize objects contained inside the DownloadInfo object.

getTotalSize

Returns the size information of all PublicationSize objects in bytes, and by extension, the size of all publication items subscribed to by that user. If no historical information is available for those publication items, the value returned is '-1'.

getPubSize

Returns the size of all publication items that belong to the publication referred to by the string pubName. If no historical information is available for those publication items, the value returned is '-1'.

getPubRecCount

Returns the number of records of all the publication items that belong to the publication referred to by the string pubName that will be synchronized during the next synchronization.

getPubItemSize

Returns the size of the publication item referred to by pubItemName. The following rules apply, in order:

  1. If the publication item is empty, it will return '0'.

  2. If no historical information is available for those publication items, it will return '-1'.

getPubItemRecCount

Returns the number of records of the publication item referred to by pubItemName that will be synchronized in the next synchronization.


Note:

See the Javadoc in the Oracle Database Lite API Specification for more information.

3.18.3 PublicationSize Class

The access methods provided by the PublicationSize class are listed in Table 3-24:

Table 3-24 PublicationSize Class Access Methods

Method Definition

getPubName

Returns the name of the publication containing the publication item.

getPubItemName

Returns the name of the publication item referred to by the PublicationSize object.

getSize

Returns the total size of the publication item referred to by the PublicationSize object.

getNumOfRows

Returns the number of rows of the publication item that will be synchronized in the next synchronization.


Note:

See the Javadoc in the Oracle Database Lite API Specification for more information.

Sample Code

import java.sql.*;
import java.util.Iterator;
import java.util.HashSet;

import oracle.lite.sync.ConsolidatorManager;
import oracle.lite.sync.DownloadInfo;
import oracle.lite.sync.PublicationSize;

public class TestGetDownloadInfo
{
   public static void main(String argv[]) throws Throwable
   {
// Open Consolidator Manager connection
      try
      {
// Create a ConsolidatorManager object
         ConsolidatorManager cm = new ConsolidatorManager ();
// Open a Consolidator Manager connection
         cm.openConnection ("MOBILEADMIN", "MANAGER",
                      "jdbc:oracle:thin:@server:1521:orcl", System.out);
// Call getDownloadInfo
         DownloadInfo dlInfo = cm.getDownloadInfo ("S11U1", true, true);
// Call iterator for the Iterator object, which we use to traverse
// through the set of PublicationSize objects.
         Iterator it = dlInfo.iterator ();
// A temporary holder for the PublicationSize object.
         PublicationSize ps = null;
// Holders for the values read from each PublicationSize object.
         String pubName = null;
         String pubItemName = null;
         long size = 0;
         long numOfRows = 0;
// A holder for the names of all the publications in a HashSet object.
         HashSet pubNames = new HashSet ();
// A holder for the names of all the publication items in a HashSet object.
         HashSet pubItemNames = new HashSet ();
// Traverse through the set.
         while (it.hasNext ())
         {
// Obtain the next PublicationSize object by calling next ().
            ps = (PublicationSize)it.next ();

// Obtain the name of the publication this PublicationSize object is
// associated with by calling getPubName ().
            pubName = ps.getPubName ();
            System.out.println ("Publication: " + pubName);

// Save pubName for later use.
            pubNames.add (pubName);

// Obtain the publication item name by calling getPubItemName ().
            pubItemName = ps.getPubItemName ();
            System.out.println ("Publication Item Name: " + pubItemName);

// Save pubItemName for later use.
            pubItemNames.add (pubItemName);

// Obtain the size of the publication item by calling getSize ().
            size = ps.getSize ();
            System.out.println ("Size of the Publication: " + size);

// Obtain the number of rows by calling getNumOfRows ().
            numOfRows = ps.getNumOfRows ();
            System.out.println ("Number of rows in the Publication: "
                                + numOfRows);
         }

// Obtain the size of all the publications contained in the
// DownloadInfo object.
         long totalSize = dlInfo.getTotalSize ();
         System.out.println ("Total size of all Publications: " + totalSize);

// A temporary holder for the publication size.
         long pubSize = 0;
// A temporary holder for the publication number of rows.
         long pubRecCount = 0;
// A temporary holder for the name of the publication.
         String tmpPubName = null;

// Traverse through the publication names that we saved earlier.
         it = pubNames.iterator ();
         while (it.hasNext ())
         {
// Obtain the saved name.
            tmpPubName = (String) it.next ();
// Obtain the size of the publication.
            pubSize = dlInfo.getPubSize (tmpPubName);
            System.out.println ("Size of " + tmpPubName + ": " + pubSize);
// Obtain the number of rows of the publication.
            pubRecCount = dlInfo.getPubRecCount (tmpPubName);
            System.out.println ("Number of rows in " + tmpPubName + ": "
                                + pubRecCount);
         }

// A temporary holder for the publication item size.
         long pubItemSize = 0;
// A temporary holder for the publication item number of rows.
         long pubItemRecCount = 0;
// A temporary holder for the name of the publication item.
         String tmpPubItemName = null;

// Traverse through the publication item names that we saved earlier.
         it = pubItemNames.iterator ();
         while (it.hasNext ())
         {
// Obtain the saved name.
            tmpPubItemName = (String) it.next ();
// Obtain the size of the publication item.
            pubItemSize = dlInfo.getPubItemSize (tmpPubItemName);
            System.out.println ("Size of " + tmpPubItemName + ": " + pubItemSize);
// Obtain the number of rows of the publication item.
            pubItemRecCount = dlInfo.getPubItemRecCount (tmpPubItemName);
            System.out.println ("Number of rows in " + tmpPubItemName + ": "
                               + pubItemRecCount);
         }
         System.out.println ();

// Close the connection.
         cm.closeConnection ();
      }
      catch (Exception e)
      {
         e.printStackTrace();
      }
   }
}

3.19 Customizing Replication With Your Own Queues

Application developers can manage the replication process programmatically by using queue-based publication items. By default on the server side, the MGP manages both the In Queues and the Out Queues by gathering all updates posted to the In Queue, applying these updates to the relevant tables, and then composing all new updates created on the server that are destined for the client and posting them to the Out Queue. This is described in Section 3.1, "How Does Synchronization Work?".

However, you can bypass the MGP and provide your own solution for the apply and compose phases on the server-side for selected publication items. You may wish to bypass the MGP for the publication item if one or more of the following are true:

Figure 3-2 shows how the Sync Server invokes the UPLOAD_COMPLETE PL/SQL procedure when the client upload is complete and, before it downloads all composed updates to the client, invokes the DOWNLOAD_INIT PL/SQL procedure.

Figure 3-2 Queue-Based Synchronization Architecture

Description of Figure 3-2 follows
Description of "Figure 3-2 Queue-Based Synchronization Architecture"

To bypass the MGP, do the following:

  1. Define your publication item as queue-based or data collection. Then, the MGP is not aware of the queues associated with this publication item. You can do this when creating the publication item either through MDW or Consolidator APIs.

  2. If queue-based, then create a package, either PL/SQL or Java, that implements the queue interface callback methods. This includes the following callback methods:

    • UPLOAD_COMPLETE to process the incoming updates from the client.

    • DOWNLOAD_INIT to complete the compose phase.

    • DOWNLOAD_COMPLETE if you have any processing to perform after the compose phase.

  3. Create the queues. The in queue, CFM$<publication_item_name> is created by default for you. Create the out queue as CTM$<publication_item_name>.

This section describes two methods for customizing the server-side apply/compose phases, as follows:

3.19.1 Customizing Apply/Compose Phase of Synchronization with a Queue-Based Publication Item

Note:

The sample for queue-based publication items is located in <OLITE_HOME>/Mobile/Sdk/samples/Sync/win32/QBasedPI.

When you want to substitute your own logic for the apply/compose phase of the synchronization process, use a queue-based publication item. The following briefly gives an overview of how the process works internally when using a queue-based publication item:

  • When data arrives from the client, it is placed in the publication item In Queues. The Sync Server calls UPLOAD_COMPLETE, after which the data is committed. All records in the current synchronization session are given the same transaction identifier. The Queue Control Table (C$INQ) indicates which publication item In Queues have received new transactions with the unique transaction identifier. Thus, this table shows which queues need processing.

  • If you have a queue-based publication item, you must implement the compose phase, if you have one. The MGP is unaware of queue-based publication items and so will not be able to perform any action for this publication item. When you implement your own compose logic, you decide when and how the compose logic is invoked. For example, you could do the following:

    • You could have a script execute your compose logic at a certain time of the day.

    • You could schedule the compose procedure as a job in the Job Scheduler.

    • You could include the compose logic as part of the DOWNLOAD_INIT function, so that it executes before the client downloads.

    Note:

    If you decide to implement the compose phase independent of the DOWNLOAD_INIT function, then once the compose is finished, you may want the client to receive the data as soon as possible. In this case, invoke the EN_QUEUE_NOTIFICATION function to start an automatic synchronization from the client. For more information on this function, see Section 3.19.3, "Selecting How/When to Notify Clients of Composed Data".

    Before the Sync Server begins the download phase of the synchronization session, it calls DOWNLOAD_INIT. In this procedure, you can customize the compose or develop any pre-download logic for the client. The Sync Server finds a list of the publication items that can be downloaded based on the client's subscription. A list of publication items and their refresh modes ('Y' for complete refresh, 'N' for fast refresh) is inserted into a temporary table (C$PUB_LIST_Q). Items can be deleted or the refresh status can be modified in this table, since the Sync Server refers to C$PUB_LIST_Q to determine the items that are downloaded to the client.

Similar to the In Queue, every record in the Out Queue should have a transaction identifier (TRANID$$) associated with it. The Sync Server passes the last_tran parameter to indicate the last transaction that the client has successfully applied. New Out Queue records that have not yet been downloaded to the client are marked with the value of the curr_tran parameter. The value of curr_tran is always greater than that of last_tran, though not sequential. The Sync Server downloads records from the Out Queues when the value of TRANID$$ is greater than last_tran. When the data is downloaded, the Sync Server calls DOWNLOAD_COMPLETE.
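The TRANID$$ bookkeeping just described can be sketched in a few lines. This is a conceptual illustration only; the class and method are hypothetical and simply show which Out Queue rows are selected for download:

```java
import java.util.*;

// Sketch: Out Queue rows whose TRANID$$ is greater than the client's last
// applied transaction are selected for download; rows at or below last_tran
// have already been applied on the client and are skipped.
public class OutQueueSketch {
    public static List<Long> selectForDownload(List<Long> tranIds, long lastTran) {
        List<Long> out = new ArrayList<>();
        for (long t : tranIds) {
            if (t > lastTran) out.add(t);   // not yet applied on the client
        }
        return out;
    }

    public static void main(String[] args) {
        // The client has applied everything up to transaction 10.
        List<Long> queue = Arrays.asList(8L, 10L, 12L, 15L);
        System.out.println(selectForDownload(queue, 10)); // [12, 15]
    }
}
```

Newly composed rows would be stamped with curr_tran, which is always greater than last_tran, so they are picked up by exactly this comparison on the next download.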

When you decide to use queue-based publication items, you need to do the following:

  1. Create both the In and Out Queues used in the apply and compose phases.

    • You can use the default In Queue, which is named CFM$<publication_item_name>. Alternatively, you can create the queue of this name manually. For example, if you wanted the In Queue to be a view, then you would create the In Queue manually.

    • Create the Out Queue for the compose phase as CTM$<publication_item_name>.

  2. Create the publication item and define it as a queue-based publication item. This can be done either through MDW or the Consolidator APIs.

  3. Create the PL/SQL or Java callback methods for performing the apply and compose phases. Since the MGP has nothing to do with the queues used for these phases, when you are finished processing the data, you must manage the queues by deleting any rows that have completed the necessary processing.

  4. Register the package to be used for all of the queue processing for a particular publication item.

3.19.1.1 Queue Creation

If a queue-based publication item is created, it will always use a queue by the name of CFM$<publication_item_name>. However, if you want to customize how the In Queue is defined—for example, by defining certain rules, making it a view or designating the location of the queue—then you can create your own In Queue. The Out Queue is never defined for you, so you must create an Out Queue named CTM$<publication_item_name> in the Mobile Server repository manually using SQL.

These queues are created based upon the publication item tables. For example, the following table ACTIVESTATEMENT has five columns, as follows:

create table ACTIVESTATEMENT(
        StatementName varchar2(50) primary key,
        TestSuiteName varchar2(50),
        TestCaseName varchar2(50),
        CurrLine varchar2(4000),
        ASOrder integer) nologging;
 

The application stores its data in these five columns. When synchronization occurs, this data must be uploaded and downloaded. However, there is also meta-information necessary for facilitating the synchronization phases. Therefore, the Out Queue that you create contains the meta-information in the CLID$$CS, TRANID$$ and DMLTYPE$$ columns, as well as the columns from the ACTIVESTATEMENT table, as follows:

create table CTM$AUTOTS_PUBITEM(
CLID$$CS VARCHAR2 (30),
StatementName varchar2(50) primary key,
TestSuiteName varchar2(50),
TestCaseName varchar2(50),
CurrLine varchar2(4000),
ASOrder integer,
TRANID$$ NUMBER (10),
DMLTYPE$$ CHAR (1) CHECK (DMLTYPE$$ IN ('I','U','D'))) nologging;

Thus, before you can create the queues, you must already know the structure of the tables for the publication item, as well as the publication item name.

The following shows the structure and creation of the queues:

In queue

All In Queues are named CFM$<name>, where name is the publication item name. Each In Queue contains the application publication item table columns, as well as the fields listed in Table 3-25:

Table 3-25 In Queue Interface Creation Parameters

Parameter Description

CLID$$CS

A unique string identifying the client.

TRANID$$

A unique number identifying the transaction.

SEQNO$$

A unique number for every DML operation per transaction; used in the In Queue (CFM$) only.

DMLTYPE$$

Indicates the type of DML operation:

  • 'I' - Insert

  • 'D' - Delete

  • 'U' - Update


The following designates the structure when creating the In Queue:

create table 'CFM$'+name
(
CLID$$CS   VARCHAR2 (30),
TRANID$$   NUMBER (10),
SEQNO$$    NUMBER (10),
DMLTYPE$$  CHAR (1) CHECK (DMLTYPE$$ IN ('I','U','D')),
<publication item column definitions>
)

Note:

You must declare the columns in the same order as shown above for the In Queue. The ordering is different from that of the Out Queue.

Out queue

All Out Queues are named CTM$<name>, where name is the publication item name. Each Out Queue contains the application publication item table columns, as well as the fields listed in Table 3-26:

Table 3-26 Out Queue Interface Creation Parameters

Parameter Description

CLID$$CS

A unique string identifying the client.

TRANID$$

A unique number identifying the transaction.

DMLTYPE$$

Indicates the type of DML operation:

  • 'I' - Insert

  • 'D' - Delete

  • 'U' - Update


The following designates the structure when creating the Out Queue:

create table 'CTM$'+name
(
CLID$$CS   VARCHAR2 (30),
<publication item column definitions>
TRANID$$   NUMBER (10),
DMLTYPE$$  CHAR (1) CHECK (DMLTYPE$$ IN ('I','U','D'))
)

Note:

You must declare the columns in the same order as shown above for the Out Queue. The ordering is different from that of the In Queue.

Another example of creating an Out Queue is in the FServ example, which uses the default In Queue of CFM$PI_FSERV_TASKS and creates the CTM$PI_FSERV_TASKS Out Queue for the PI_FSERV_TASKS publication item, as follows:

create table CTM$PI_FSERV_TASKS(
                   CLID$$CS     varchar2(30),
                   ID           number,
                   EMP_ID       number,
                   CUST_ID      number,
                   STAT_ID      number,
                   NOTES        varchar2(255),
                   TRANID$$     number(10),
                   DMLTYPE$$    char(1) check(DMLTYPE$$ in ('I','U','D'))
);

Note:

The application publication item table for the FServ example contains columns for ID, EMP_ID, CUST_ID, STAT_ID, and NOTES.

Queue Control Table

The Sync Server automatically creates a queue control table, C$INQ, and a temporary table, C$PUB_LIST_Q. You will process the information in the queue control table in the PL/SQL or Java callout methods to determine which publication items have received new transactions.

The parameters for the queue control table are listed in Table 3-27:

Table 3-27 Queue Control Table Parameters

Parameter Description

CLID$$CS

A unique string identifying the client.

TRANID$$

A unique number identifying the transaction.

STORE

Represents the publication item name in the queue control table.


The control table has the following structure:

C$INQ
(
CLID$$CS   VARCHAR2 (30),
TRANID$$   NUMBER,
STORE      VARCHAR2 (30)
)

Temporary Table

The DOWNLOAD_INIT procedure uses the temporary table C$PUB_LIST_Q for determining which publication items to download in the compose phase.

C$PUB_LIST_Q
(
NAME       VARCHAR2 (30),
COMP_REF   CHAR(1),
CHECK (COMP_REF IN ('Y','N'))
)

The columns of the temporary table are listed in Table 3-28:

Table 3-28 Queue Interface Creation Parameters

Parameter Description

NAME

The publication item name that is to be downloaded from the repository to the Out Queue.

COMP_REF

This value is 'Y' for complete refresh.


3.19.1.2 Queue-Based PL/SQL Procedure for UPLOAD_COMPLETE and DOWNLOAD_INIT Callouts

The queue-based publication callouts are defined together in a single PL/SQL package, which contains both the UPLOAD_COMPLETE and DOWNLOAD_INIT procedures. The signatures for the callout procedures are as follows:

CREATE OR REPLACE PACKAGE CONS_QPKG AS

/*
 * Notifies that the In Queue has a new transaction by providing the client
 * identifier and the transaction identifier.
 */
PROCEDURE UPLOAD_COMPLETE(
     CLIENTID     IN     VARCHAR2, 
     TRAN_ID      IN     NUMBER     -- In Queue TRANID$$
     );

/*
 * Initializes client data for download; provides the compose phase for the
 * client. The input data for this procedure is the client identifier, the
 * last and current transaction markers, and the priority.
 */
PROCEDURE DOWNLOAD_INIT(
     CLIENTID      IN     VARCHAR2, 
     LAST_TRAN     IN     NUMBER,
     CURR_TRAN     IN     NUMBER,
     HIGH_PRTY     IN     VARCHAR2
     );

/*
 * Notifies when all of the client's data has been sent.
 */
PROCEDURE DOWNLOAD_COMPLETE(
     CLIENTID     IN     VARCHAR2
     );

END CONS_QPKG;
/
3.19.1.2.1 In Queue Apply Phase Processing

Within the UPLOAD_COMPLETE procedure, you should develop a method of applying all changes from the client to the correct tables in the repository. The FServ example performs the following:

  1. From the master table C$INQ, locates the rows for the designated client and transaction identifiers that have been marked for update.

  2. Retrieves the application publication item data and the DMLTYPE$$ from the In Queue, based on the client and transaction identifiers.

  3. Performs the insert, update, or delete (determined by the value in DMLTYPE$$) in the application tables in the repository.

  4. After the updates are complete, deletes the rows in C$INQ and the In Queue that were just processed.

PROCEDURE UPLOAD_COMPLETE(CLIENTID IN VARCHAR2, TRAN_ID IN NUMBER) IS
/*create cursors for execution later */
/* PI_CUR locates the rows for the client out of the master table */
CURSOR PI_CUR(C_CLIENTID VARCHAR2, C_TRAN_ID NUMBER ) IS
   SELECT STORE FROM C$INQ 
       WHERE CLID$$CS = C_CLIENTID AND TRANID$$ = C_TRAN_ID FOR UPDATE;
/* TASKS_CUR retrieves the values for the client data to be updated */
/*   from the In Queue */
CURSOR TASKS_CUR(C_CLIENTID varchar2, C_TRAN_ID number ) IS 
   SELECT ID, EMP_ID, STAT_ID, NOTES, DMLTYPE$$ FROM CFM$PI_FSERV_TASKS 
       WHERE CLID$$CS = C_CLIENTID AND TRANID$$ = C_TRAN_ID FOR UPDATE;
/* create variables */ 
TASK_OBJ TASKS_CUR%ROWTYPE;
PI_OBJ PI_CUR%ROWTYPE;
INSERT_NOT_ALLOWED EXCEPTION;
DELETE_NOT_ALLOWED EXCEPTION;
UNKNOWN_DMLTYPE EXCEPTION;

BEGIN
 
   OPEN PI_CUR(CLIENTID, TRAN_ID);
   /* C$INQ is used to find out which publication items have received data 
      from clients. The publication item name is available in the STORE column  
    */
   LOOP
     FETCH PI_CUR INTO PI_OBJ;
     EXIT WHEN PI_CUR%NOTFOUND;
 
   /* Locate the updates for the publication item PI_FSERV_TASKS */
     IF PI_OBJ.STORE = 'PI_FSERV_TASKS' THEN
       OPEN TASKS_CUR(CLIENTID, TRAN_ID);
       LOOP                                    
         /* Process the in queue for PI_FSERV_TASKS */
         FETCH TASKS_CUR INTO TASK_OBJ;
         EXIT WHEN TASKS_CUR%NOTFOUND;
 
         /* Discover the DML command requested. For this publication, only 
             updates are allowed. */
         IF TASK_OBJ.DMLTYPE$$ = 'I' THEN
            RAISE INSERT_NOT_ALLOWED;
         ELSIF TASK_OBJ.DMLTYPE$$ = 'U' THEN
            FSERV_TASKS.UPDATE_TASK(TASK_OBJ.ID, TASK_OBJ.EMP_ID, 
                 TASK_OBJ.STAT_ID, TASK_OBJ.NOTES);
         ELSIF TASK_OBJ.DMLTYPE$$ = 'D' THEN
            RAISE DELETE_NOT_ALLOWED;
         ELSE
            RAISE UNKNOWN_DMLTYPE;
         END IF;

         /* after processing, delete the update request from the in queue */
         DELETE FROM CFM$PI_FSERV_TASKS WHERE CURRENT OF TASKS_CUR;
       END LOOP;
       close TASKS_CUR;   
     END IF;

     /* after completing all updates for the client apply phase, delete from 
        master queue */
     DELETE FROM C$INQ WHERE CURRENT OF PI_CUR;
   END LOOP;
END; 
3.19.1.2.2 Out Queue Compose Phase Processing

Within the DOWNLOAD_INIT procedure, develop a method of composing all changes from the server that are destined for the client from the publication item tables in the repository. The FServ example performs the following:

  1. From the temporary table C$PUB_LIST_Q, discovers the publication items for which data should be downloaded for the user, using the client identifier and the last and current transaction markers.

  2. Retrieves the application publication item data into the Out Queue. This example always uses a complete refresh.

PROCEDURE DOWNLOAD_INIT( CLIENTID IN VARCHAR2, 
                         LAST_TRAN IN NUMBER, 
                         CURR_TRAN IN NUMBER, 
                         HIGH_PRTY IN VARCHAR2 ) IS 
/*create cursor used later in procedure which retrieves the publication name 
  from the temporary table to perform compose phase.*/
CURSOR PI_CUR IS SELECT NAME from C$PUB_LIST_Q;
/*create variables*/
PI_NAME VARCHAR2(50);
STATID_CLOSE NUMBER;

BEGIN 
   
   OPEN PI_CUR;
   /* C$PUB_LIST_Q (the temporary table) is used to find out which pub items 
      have data to download to clients through the publication item out queue. 
      The publication item name is available in the NAME column
    */
   LOOP
     FETCH PI_CUR INTO PI_NAME;
     EXIT WHEN PI_CUR%NOTFOUND;
 
     /* Populate the out queue of pub item PI_FSERV_TASKS with all 
        unclosed tasks for the employee with this CLIENTID using a complete 
        refresh. COMP_REF is always reset to Y since partial refresh has 
        not been implemented.
      */
     /* if the PI_FSERV_TASKS publication item has data ready for the client,
        then perform a complete refresh and place all data in the out queue */
     IF PI_NAME = 'PI_FSERV_TASKS' THEN
        UPDATE C$PUB_LIST_Q SET COMP_REF='Y' where NAME = 'PI_FSERV_TASKS';
        SELECT ID INTO STATID_CLOSE FROM MASTER.TASK_STATUS 
             WHERE DESCRIPTION='CLOSED';
        INSERT INTO CTM$PI_FSERV_TASKS(CLID$$CS, ID, EMP_ID, CUST_ID, 
             STAT_ID, NOTES, TRANID$$, DMLTYPE$$)
             SELECT CLIENTID, a.ID, a.EMP_ID, a.CUST_ID, a.STAT_ID, a.NOTES, 
               CURR_TRAN, 'I' FROM MASTER.TASKS a, MASTER.EMPLOYEES b 
               WHERE a.STAT_ID < STATID_CLOSE AND b.CLIENTID = CLIENTID 
               AND a.EMP_ID = b.ID;
     END IF;
   END LOOP;
END; 

If, however, you want to perform a refresh other than a complete refresh, such as an incremental refresh, then do the following:

  1. Read the value of COMP_REF.

  2. If the value is N, insert only the new data into the Out Queue.

In this situation, the LAST_TRAN parameter becomes useful.
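The decision between a complete and an incremental refresh can be sketched in Python (an illustration of the COMP_REF logic described above; the row structure is hypothetical):

```python
def rows_to_download(server_rows, comp_ref, last_tran):
    """COMP_REF = 'Y': download everything (complete refresh).
       COMP_REF = 'N': download only rows changed after last_tran (incremental)."""
    if comp_ref == 'Y':
        return list(server_rows)
    return [r for r in server_rows if r["TRANID$$"] > last_tran]

rows = [{"ID": 1, "TRANID$$": 5}, {"ID": 2, "TRANID$$": 12}]
assert len(rows_to_download(rows, 'Y', last_tran=8)) == 2   # complete refresh
assert rows_to_download(rows, 'N', last_tran=8) == [{"ID": 2, "TRANID$$": 12}]
```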

3.19.1.3 Create a Publication Item as a Queue

You create the publication item as you would normally, with one change: define the publication item as queue-based. See Section 5.4, "Create a Publication Item" for directions on how to define the publication item as queue-based when using MDW.

If you are using the Consolidator APIs, then the createQueuePublicationItem method creates a publication item in the form of a queue. This API call registers the publication item and creates the CFM$<name> table as an In Queue, if one does not already exist.

Note:

See the Javadoc in the Oracle Database Lite API Specification for more information.

You must provide the Consolidator Manager with the primary key, owner and name of the base table or view in order to create a queue that can be updated or fast-refreshed. If the base table or view has no primary key, one can be specified in the primary key columns parameter. If the primary key columns parameter is null, then the Consolidator Manager uses the primary key of the base table.

3.19.1.4 Register the PL/SQL Package Outside the Repository

Once you finish developing the PL/SQL package, register it using the registerQueuePkg method. This method registers the package separately from the Mobile Server repository, although the package refers to the In Queues, Out Queues, queue control table and temporary table that are defined in the repository.

The following methods register or remove a procedure, or retrieve the procedure name.

  • The registerQueuePkg method registers the string pkg as the current procedure. The following registers the FServ package.

    Note:

    The developer used Consolidator Manager APIs to create the subscription, so this was included in the Java application that created the subscription.
    /* Register the queue package for this publication */
    consolidatorManager.registerQueuePkg(QPKG_NAME, PUB_FSERV);
    
    
  • The getQueuePkg method returns the name of the currently registered procedure.

  • The unRegisterQueuePkg method removes the currently registered procedure.

Note:

See the Javadoc in the Oracle Database Lite API Specification for more information.

3.19.2 Creating Data Collection Queues for Uploading Client Collected Data

If you have an application that only collects data, such as taking inventory or uploading readings from any meter (for example, a parking meter), then you can use data collection queues to improve the performance of uploading the collected data to the server. Because the data flows only from the client to the server, this is the best method for uploading massive amounts of data.

Note:

If you are collecting data on the client, but still need updates from the server, you can use the default method for synchronization or create your own queues. See Section 3.19.1, "Customizing Apply/Compose Phase of Synchronization with a Queue-Based Publication Item" for more information.

The Data Collection Queue is lightweight and simple to create. You can customize whether the data is implicitly applied or not. This queue does not require the MGP to apply the changes. It does not create objects in the application schema or map data.

Data collection queues are also easier to implement than a queue-based publication item. There is no need to create a package with callback methods, as Oracle Database Lite automatically uploads any new data from the client. In addition, you can configure whether any data is to be downloaded and whether the data on the client is erased once it is uploaded to the server.
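The client-side upload behavior can be sketched in Python (a conceptual simulation of the erase-on-upload option described above, not client code; the row contents are hypothetical):

```python
def upload(client_rows, server_table, erase_after_upload):
    """Upload all collected rows to the server; optionally erase the
    client copy once the data has been captured on the server."""
    server_table.extend(client_rows)
    if erase_after_upload:
        client_rows.clear()

client = [{"METER": 7, "READING": 42}]
server = []
upload(client, server, erase_after_upload=True)
assert server == [{"METER": 7, "READING": 42}]
assert client == []   # client data removed once captured on the server
```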

When you create the Data Collection Queue, the following is performed for you:

  • Automatically generates the In Queue, named CFM$<publication_item_name>, when the publication item is created.

  • Optionally, enables the developer to choose automatic removal of client data once it is captured on the server. This is specified when you create the publication item.

  • Optionally, if you need an Out Queue, the developer can either specify the Out Queue or have Oracle Database Lite automatically generate one, named CTM$<publication_item_name>.

3.19.2.1 Creating a Data Collection Queue

When you create a data collection queue, you perform the following:

Note:

All ConsolidatorManager methods are fully documented in the Oracle Database Lite API Javadoc. This section provides context of the order in which to execute these methods.
  1. Create the table(s) for the data that the queue updates on the back-end Oracle database.

  2. Create the data collection queue and its publication item using the ConsolidatorManager createDataCollectionQueue method, where the input parameters are as follows:

    • name—A character string specifying a new publication item name.

    • owner—A string specifying the base schema object owner.

    • store—A string specifying the table name that it is based on.

    • inq_cols—A string specifying the columns in the order in which to replicate them. If null, then it defaults to *, which makes the SQL statement select * from <table>.

    • pk_columns—A string specifying the primary keys.

    • client_data—If true, removes client data from the Mobile device when uploaded to the server.

    • isOutView—If true, then creates out queue as an empty view, otherwise creates out queue as a table.

    The following creates the PI_CUSTOMERS data collection queue:

    cm.createDataCollectionQueue(
            "PI_CUSTOMERS",   /* Publication item name */
            MYSCHEMA,         /* Schema owner */
            "CUSTOMERS",      /* store */
            null,             /* inq_cols: null selects all columns */
            null,             /* pk_columns: null uses the base table primary key */
            true,             /* client_data: remove client data after upload */
            true );           /* isOutView */
    
    
  3. Create the publication that is to be used by the data collection queue. Use the ConsolidatorManager createPublication method. The following creates the PUB_CUSTOMERS publication that is used by the PI_CUSTOMERS data collection queue:

    cm.createPublication("PUB_CUSTOMERS",0, "sales.%s", null);
    
    
  4. Add the publication item created in step 2 to this publication with the ConsolidatorManager addPublicationItem method. The following adds a publication item to the publication:

    cm.addPublicationItem("PUB_CUSTOMERS", "PI_CUSTOMERS", null, null,
            "S", null, null);
    
    
  5. If you want to have data download from the server to the Mobile client, create an Out Queue with a name that consists of CTM$<publication_item_name>. The following replaces the default Out Queue view for CUSTOMERS with a view that selects all customers assigned to the EMP_ID associated with the current sync session.

    stmt.executeUpdate(
        "CREATE OR REPLACE VIEW CTM$" + pubIs[0] +
        " ( CLID$$CS, TRANID$$, DMLTYPE$$," +
        "   CUST_ID, CNAME, CCOMPANY, CPHONE, CCONTACT_DATE )" +
        " AS SELECT CONS_EXT.GET_CURR_CLIENT, 999999999, 'I', cust.*" +
        " FROM CUSTOMERS cust" +
        " WHERE cust.CUST_ID IN (SELECT CUST_ID" +
        "   FROM CUSTOMER_ASSIGNMENT WHERE EMP_ID IN" +
        "   (SELECT EMP_ID FROM SESSION_EMP" +
        "    WHERE SESSION_ID = DBMS_SESSION.UNIQUE_SESSION_ID))"
    );
    

    Note:

    See the Oracle Database Lite samples page for the full data collection queue example from which these snippets were taken. The example demonstrates both a regular queue and a data collection queue.

3.19.3 Selecting How/When to Notify Clients of Composed Data

If you have created your own compose logic, such as in the queue-based publications, then you may want the server to notify the client that there is data to be downloaded. You can take control of starting an automatic synchronization from the server using the enqueue notification APIs.

There are other situations where you may want to control how and when clients are notified of compose data from the synchronization process. For example, if you have so many clients that to notify all of them of the data waiting for them would overload your system, you may want to control the process by notifying clients in batches.

In the normal synchronization process, when the compose phase is completed, all clients that have data in the out queue are notified to download the data. If, for example, you have 2000 clients, having all 2000 clients request a download at the same time could overrun your server and cause a performance issue. In this scenario, you could take control of the notification process and notify 100 clients at a time over the span of a couple of hours. This way, all of the clients receive the data in a timely fashion and your server is not overrun.
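The batching strategy described above can be sketched in Python (a scheduling illustration only; the notify callback stands in for the real enqueue notification call):

```python
import time

def notify_in_batches(clients, batch_size, pause_seconds, notify):
    """Notify clients in fixed-size batches, pausing between batches so the
    server is not overrun by simultaneous download requests."""
    for i in range(0, len(clients), batch_size):
        for client in clients[i:i + batch_size]:
            notify(client)
        if i + batch_size < len(clients):
            time.sleep(pause_seconds)

notified = []
notify_in_batches([f"client{n}" for n in range(250)], 100, 0, notified.append)
assert len(notified) == 250   # every client is eventually notified
```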

You can use the enqueue notification functionality, as follows:

  • If you implement queue-based publications for the compose phase, you can notify the clients with the EN_QUEUE_NOTIFICATION function within the Queue-based DOWNLOAD_INIT function.

  • If you write your own compose function, use the enQueueNotification method to notify the client that there is data to download.

This starts an automatic synchronization process for the intended client.

The enqueue notification APIs enable the server to tell the client that there is data to be downloaded and what type of data is waiting. Notifying the client of what type of data is waiting enables the client to evaluate whether it conforms to any automatic synchronization rules. For example, if the server has 10 records of low priority data, but the client has set the Server MGP Compose rule to only start an automatic synchronization if 20 records of low priority data exist, then the automatic synchronization is not started. So, the notification API input parameters include parameters that enable the server to describe the data that exists on the server.
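Client-side evaluation of such a rule can be sketched as follows (a simplified model; the 20-record threshold and low-priority value mirror the example above):

```python
def should_sync(record_count, priority, rules):
    """Start an automatic synchronization only if the notified record count
    meets the client's threshold for that priority (0 = high, 1 = low)."""
    threshold = rules.get(priority, 0)
    return record_count >= threshold

rules = {1: 20}   # low-priority data must reach 20 records
assert not should_sync(record_count=10, priority=1, rules=rules)
assert should_sync(record_count=25, priority=1, rules=rules)
```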

A notification API is provided for you in both PL/SQL and Java, as follows:

  • Java: the ConsolidatorManager enQueueNotification method

    public long enQueueNotification(java.lang.String clientid,
                                    java.lang.String publication,
                                    java.lang.String pubItems,
                                    int recordCount,
                                    int dataSize,
                                    int priority)
                             throws ConsolidatorException
     
    
  • PL/SQL: the EN_QUEUE_NOTIFICATION function

    FUNCTION EN_QUEUE_NOTIFICATION(
      CLIENTID        IN VARCHAR2,
      PUBLICATION     IN VARCHAR2,
      PUB_ITEMS       IN VARCHAR2,
      RECORD_COUNT    IN NUMBER,
      DATA_SIZE       IN NUMBER,
      PRIORITY        IN NUMBER)
    RETURN NUMBER;
    
    

Where the parameters for the above are as follows:

Table 3-29 Enqueue Notification Parameters

Parameters Description

clientid

Consolidator client id, which is normally the username on the client device. This identifies the client to be notified. If the client does not have any automatic synchronization rules, this is the only required parameter for an automatic synchronization to start.

publication

Name of the publication for which you want notification control. This tells the client for which publication the data is destined.

pubItems

One or more publication items for which you want notification. Separate multiple publication items with a comma. This notifies the clients for which publication items the data applies.

recordCount

This notifies the client how many records exist on the server for the download.

dataSize

Reserved for future expansion.

priority

This notifies the client of the priority of the data that exists on the server. The value is 0 for high and 1 for low.


The enqueue notification API returns a unique notification ID, which can be used to query notification status with the isNotificationSent method (NOTIFICATION_SENT in PL/SQL), which is as follows:

  • JAVA

    public boolean isNotificationSent(long notificationId)
      throws ConsolidatorException
     
    
  • PL/SQL

    FUNCTION NOTIFICATION_SENT(
       NOTIFICATION_ID IN NUMBER)
    RETURN BOOLEAN;
    
    

If the notification has been sent, a boolean value of TRUE is returned.

3.20 Deleting a Client Device

If you want to delete a device, use the delete method from the Device class. To retrieve the Device object, use either the getDevice or getDeviceByName methods, as demonstrated below.

If the device id is available, the following can be directly used:

if (oracle.lite.resource.ResourceManager.getInstance() == null)
    oracle.lite.resource.ResourceManager.initialize(JDBC_URL, USER, PASSWORD);

oracle.lite.resource.Device d =
    oracle.lite.resource.ResourceManager.getInstance().getDevice(deviceId);

d.delete();

If the device id is not available, then you can provide the device name, which is shown in the Mobile Manager UI, to the oracle.lite.resource.User.getDeviceByName(deviceName) method. Once the Device object is retrieved, use its delete method as demonstrated above.

3.21 Synchronization Performance

There are certain optimizations you can do to increase performance. See Section 1.2 "Increasing Synchronization Performance" in the Oracle Database Lite Troubleshooting and Tuning Guide for a full description.

3.22 Troubleshooting Synchronization Errors

The following section can assist you in troubleshooting any synchronization errors:

3.22.1 Foreign Key Constraints in Updatable Publication Items

Replicating tables between Oracle database and clients in updatable mode can result in foreign key constraint violations if the tables have referential integrity constraints. When a foreign key constraint violation occurs, the server rejects the client transaction.

3.22.1.1 Foreign Key Constraint Violation Example

For example, two tables EMP and DEPT have referential integrity constraints. The DeptNo (department number) attribute in the DEPT table is a foreign key in the EMP table. The DeptNo value for each employee in the EMP table must be a valid DeptNo value in the DEPT table.

A Mobile Server user adds a new department to the DEPT table, and then adds a new employee to this department in the EMP table. The transaction first updates DEPT and then updates the EMP table. However, the database application does not store the sequence in which these operations were executed.

When the user replicates with the Mobile Server, the Mobile Server updates the EMP table first. In doing so, it attempts to create a new record in EMP with an invalid foreign key value for DeptNo. Oracle database detects a referential integrity violation. The Mobile Server rolls back the transaction and places the transaction data in the Mobile Server error queue. In this case, the foreign key constraint violation occurred because the operations within the transaction are performed out of their original sequence.

Avoid this violation by setting table weights to each of the tables in the master-detail relationship. See Section 3.22.1.2, "Avoiding Constraint Violations with Table Weights" for more information.

3.22.1.2 Avoiding Constraint Violations with Table Weights

The Mobile Server uses table weights to determine the order in which to apply client operations to master tables. Table weights are expressed as integers and are applied as follows:

  1. Client INSERT operations are executed first, from lowest to highest table weight order.

  2. Client DELETE operations are executed next, from highest to lowest table weight order.

  3. Client UPDATE operations are executed last, from lowest to highest table weight order.

In the example listed in Section 3.22.1.1, "Foreign Key Constraint Violation Example", a constraint violation error could be resolved by assigning DEPT a lower table weight than EMP. For example:

(DEPT weight=1, EMP weight=2)
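The apply ordering can be sketched in Python (an illustrative model of the rules above, not Mobile Server code; the weights match the DEPT/EMP example):

```python
def apply_order(operations, weights):
    """Order client operations as the Mobile Server does:
    inserts by ascending weight, then deletes by descending weight,
    then updates by ascending weight."""
    inserts = sorted((op for op in operations if op[0] == 'I'),
                     key=lambda op: weights[op[1]])
    deletes = sorted((op for op in operations if op[0] == 'D'),
                     key=lambda op: -weights[op[1]])
    updates = sorted((op for op in operations if op[0] == 'U'),
                     key=lambda op: weights[op[1]])
    return inserts + deletes + updates

weights = {'DEPT': 1, 'EMP': 2}
ops = [('I', 'EMP'), ('I', 'DEPT'), ('D', 'DEPT'), ('D', 'EMP')]
# The DEPT insert precedes the EMP insert (so the foreign key exists),
# and the EMP delete precedes the DEPT delete.
assert apply_order(ops, weights) == [('I', 'DEPT'), ('I', 'EMP'),
                                     ('D', 'EMP'), ('D', 'DEPT')]
```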

You define the order weight for tables when you add a publication item to the publication. For more information on setting table weights in the publication item, see Section 3.4.1.7.2, "Using Table Weight".

3.22.1.3 Avoiding Constraint Violations with BeforeApply and AfterApply

You can use a PL/SQL procedure to avoid foreign key constraint violations based on out-of-sequence operations by using DEFERRABLE constraints in conjunction with the BeforeApply and AfterApply functions. See Section 3.11.3.2, "Defer Constraint Checking Until After All Transactions Are Applied" for more information.

3.23 Datatype Conversion Between the Oracle Server and Client Oracle Lite Database

Before you publish your application, you create the tables for your applications in the Oracle database. Thus, when the first synchronization occurs, Oracle Database Lite takes the Oracle database datatypes and converts them to corresponding allowed datatypes in the Oracle Lite database on the client. Table 3-30 lists the Oracle database datatypes in the left column and displays how the datatype can be mapped to the Oracle Lite database datatypes across the top row.

Note:

For Oracle Database Lite Datatypes, see Appendix D, "Oracle Database Lite Datatypes" in the Oracle Database Lite SQL Reference.

Table 3-30 Conversion of Oracle Database Datatypes to Oracle Database Lite Datatypes

Oracle Datatype            Oracle Database Lite Datatype

INTEGER                    Signed 4 byte (INTEGER)
VARCHAR2                   VARCHAR
VARCHAR                    VARCHAR
CHAR                       CHAR
SMALLINT                   Signed 2 byte (SMALLINT)
FLOAT                      FLOAT
DOUBLE PRECISION           DOUBLE
NUMBER                     Signed 2 byte, signed 4 byte, or NUMBER, depending on precision
DATE                       DATE
LONG RAW                   LONG VARBINARY
LONG                       VARCHAR
BLOB                       BLOB
CLOB                       CLOB

Note:

Oracle Database Lite does not support creating publication items for synchronization on a table with object type columns, even if the publication item query does not include any of the object type columns. However, it is possible to define a view which selects only columns of supported data types and then create a publication item using the view definition.

In the Oracle Lite database, signed 1 byte represents TINYINT, signed 2 byte represents SMALLINT, and signed 4 byte represents INTEGER.

For conversion of the NUMBER datatype, if the precision is less than 5, then the number maps to a signed 2 byte (SMALLINT) datatype. If the precision is less than 10, then it maps to a signed 4 byte (INTEGER) datatype. Even though the numbers are not equivalent on the client and the server, valid numbers from the server are guaranteed to transfer to the client, and invalid numbers from the client are rejected by the server.
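The precision rule can be expressed as a small function (an illustration of the mapping described above, not Oracle conversion code):

```python
def lite_type_for_number(precision):
    """Map an Oracle NUMBER precision to an Oracle Lite datatype:
    precision < 5  -> SMALLINT (signed 2 byte),
    precision < 10 -> INTEGER  (signed 4 byte),
    otherwise      -> NUMBER."""
    if precision is not None and precision < 5:
        return "SMALLINT"
    if precision is not None and precision < 10:
        return "INTEGER"
    return "NUMBER"

assert lite_type_for_number(4) == "SMALLINT"
assert lite_type_for_number(9) == "INTEGER"
assert lite_type_for_number(20) == "NUMBER"
```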

While the TIMESTAMP datatype is supported, TIMESTAMP WITH TIME ZONE is not supported for publication items.