1 Improving Performance

Mobile devices do not have the processing power and memory of standard enterprise systems. If the mobile applications and infrastructure are not tuned appropriately, they are of little benefit to the organization.

The following sections describe the methods you can use to manage the performance of Oracle Database Mobile Server:

1.1 Improving Connection Performance

The following methods enable you to streamline the connections between the client and the mobile server, and between the mobile server and the back-end database:

1.1.1 Using Connection Pooling for Applications

Connection pooling enables you to eliminate the time delay in creating and destroying connections for incoming application requests. Instead, enable connection pooling, as shown in Section 3.4, "Managing Application and Connection Properties" in the Oracle Database Mobile Server Administration and Deployment Guide, so that each incoming connection request uses an existing connection from the pool.

1.1.2 Limit Application Connection Requests to the Database

You can limit the number of connections that access the database from each application, as shown in Section 3.4, "Managing Application and Connection Properties" in the Oracle Database Mobile Server Administration and Deployment Guide. Set the maximum database connection limit. Any request for a database connection beyond the limit is refused.

1.2 Increasing Synchronization Performance

The following sections describe how you can improve synchronization performance:

1.2.1 Analyzing Performance of Publications With the Consperf Utility

The Consperf utility profiles your subscriptions and may modify how a publication item is executed if the utility determines that there is a faster option. The Consperf tool evaluates how the SQL within the publication item interacts with the Data Synchronization query templates. The first synchronization is always a complete refresh, which is a direct invocation of the query. On subsequent synchronizations, the query templates determine incremental refreshes. This improves performance because a complete refresh is not performed on every synchronization. However, the interaction between the query templates and your SQL may not be optimal, which is what the Consperf tool discovers. Consperf can either modify the query template or the type of logical delete or insert for you, or you can adjust your SQL to perform better against the templates.

In addition, application developers and administrators use this utility to analyze the performance of subscriptions and identify potential bottlenecks during synchronization.

This tool generates the following two primary analysis reports:

  1. Timing statistics for publication items

  2. Explain plans for publications

The Consperf tool automatically tunes subscription properties if the default templates do not supply the best performing option. You can select a client and choose the desired subscription for performance analysis, and you can change parameter values before analyzing performance. The analysis results, which are timing and execution plan reports, are stored on the server and can be accessed by viewing the same user and subscription.

You can execute the Consperf utility through one of the following locations:

  • Click the Users link under the Consperf section on the Performance tab.

  • Click the Users link from the Repository screen.

Then, perform the following:

  1. Select the User that you want to execute the Consperf tool against and click Subscriptions.

  2. From the subscriptions screen, choose the publication and click Consperf performance analysis. This starts the Consperf analysis.

  3. Click Set consperf parameters and launch the consperf thread, which brings you to a screen where you can configure parameters that affect how the performance analysis is executed. See Section 1.2.1.1, "Deciphering the Performance Evaluation Files" for more information on these parameters and how they affect the performance evaluation output.

  4. Once you have set the configuration for how you want your performance analysis to occur, click OK. The Consperf tool executes and prepares the reports for you, based on your configuration. You are returned to the first Consperf page with the reports listed as hyperlinks under the Last Consperf Run Results section as View Timing File or View Execution Plan File.

1.2.1.1 Deciphering the Performance Evaluation Files

There are two performance evaluations produced by the Consperf utility:

1.2.1.1.1 Timing File

The timing file contains the analysis of how the publication item performs with the data synchronization defaults against how it could perform if other options were chosen. The output of this file shows you the conclusions of the analysis and how the data synchronization defaults could be modified to perform better with your particular publication items.

The first section of the timing file provides information on the configuration with which this analysis was executed. Thus, if you modify the configuration for other analyses, you can compare the files against each other to easily see the differences in the output.

Note:

The results of this analysis may cause the data synchronization engine to modify the type of query template or logical delete/insert/update used with your publication item. To change it back to the defaults, you will have to rerun Consperf with CLEARTUNE set to YES. See Table 1-2 for a full description of parameter settings.

The following example shows that the publication being examined is the T_SAMPLE11 publication. The version of Oracle Database Mobile Server is 11.2.0.0.0 and the user is S11U1. The configuration is set to time out if a query takes longer than 1000 milliseconds, and to change the defaults if the difference between the default and another template is greater than 20 seconds (20000 milliseconds). Changes are authorized only when AUTOTUNE is set to true; if it is set to false, the analysis is provided, but nothing is modified.

VERSION = 11.2.0.0.0 
OPTMIZER_MODE = null 
APPLICATION = null 
PUBLICATION = T_SAMPLE11 
CLIENTID = S11U1 
TIMEOUT = 1000 ms 
TOLERANCE = 20000 ms 
ITERATIONS = 2 
AUTOTUNE_SUPPORT = true 

The next part of the timing file lists the time in milliseconds that each template type takes to complete with each publication item in the publication. There are three templates that data synchronization can use to "wrap" your SQL query. The default query template is SYNC_1. Since the tolerance is set to 20 seconds, if either template SYNC_2 or SYNC_3 performs at least 20 seconds better than SYNC_1, the template type is modified for your publication item. You can set the TOLERANCE level to fewer seconds in the Consperf configuration. See Table 1-2 for a description of TOLERANCE.

Publication Item Name | NS | BS | SYNC_1 | SYNC_2 | SYNC_3 | AS | Total
 ----------------------------------------------------------------------- 
P_SAMPLE11-D          | <3> | <0> | <6>  | 10     | -1000  | <0> | 9 
P_SAMPLE11-M          | <3> | <0> | <5>  | 8      | -1000  | <0> | 8 
  • There are two publication items in the subscription.

  • NS stands for Null Sync. Your application may be issuing a null synchronization. If so, this shows the time in milliseconds that it took to complete. A null synchronization is a tool for determining whether it is the data or the application itself that is causing the performance hit.

  • BS stands for Before Synchronization; AS stands for After Synchronization. You can provide callouts that are executed either before or after each synchronization for this application. This shows the time in milliseconds it takes to perform each task. In this example, there are no before or after synchronization callouts.

  • SYNC_1 is the default template. In combination with these publication items, it executes fastest, at 6 and 5 milliseconds for the two publication items respectively, compared to the other two options, SYNC_2 and SYNC_3. Thus, these publication items continue to use the SYNC_1 template. Note that SYNC_3 has -1000 as its time, which means either that the template was not appropriate to execute or that it timed out.

    • SYNC_1 uses an outer-join for inserts, updates, and deletes

    • SYNC_2 is a simple insert and update

    • SYNC_3 uses the base view for insert and update. The base view is the first table in the select statement; its primary key is used to search for all records in the query.

  • The total is the total number of milliseconds to execute the entire publication item.

The second section shows how the MGP performs with the templates it uses for deletes, inserts, and updates. It evaluates the default against other options, as follows:

  • Logical delete options:

    • MGP template for logical deletes using EXISTS: default for logical delete.

    • MGP template for logical deletes using correlated IN.

    • MGP template for logical deletes using HASH_AJ.

    • MGP template for logical deletes using IN.

  • Logical insert options:

    • MGP template for logical inserts using EXISTS: default for logical insert.

    • MGP template for logical inserts using correlated IN.

    • MGP template for logical inserts using IN.

  • Logical update options

    • MGP template for logical updates using correlated IN: default for logical updates.

    • MGP template for logical updates using EXISTS.

    • MGP template for logical updates using IN.

    • MGP template for logical updates with multiple table dependencies.

For example, the following evaluates how each publication item performs with its logical deletes:

MGP Output... 
Pub Item Name | LDEL_1 | LDEL_2 | LDEL_3 | LDEL_4
P_SAMPLE11-D  | <5>    | 3      | 3      | 3     
P_SAMPLE11-M  | <5>    | 3      | 5      | 4    

LDEL_1 is the default, and even though LDEL_2, LDEL_3, and LDEL_4 are faster, they are not 20 seconds faster, which is the tolerance level. So, the default for deletes is kept the same. If the difference in speed had been greater than the tolerance level, and the AUTOTUNE parameter was set to YES, the Consperf utility would have modified the logical delete method in the repository for this publication item for future synchronizations.

The last section, Subscription Properties, describes the following:

  • Profiled: Has autotune been turned on and Consperf executed previously on this subscription?

  • Base View: True if this publication item uses more than one table.

  • How many records are in the subscription.

  • How many records are dirty.

  • How many records have been flagged as dirty to simulate an actual run. Either MAXLOG records or the number of records in the subscription, whichever is less, is flagged as dirty.

1.2.1.1.2 Configuration for Data Synchronization

Table 1-1 Consperf Parameters for Both Synchronization and MGP Processing

Parameter       | Default Value | Allowed Values                                          | Description
----------------|---------------|---------------------------------------------------------|------------
PUBITEMLIST     | <ALL>         | Pub1, Pub2, and so on                                   | Specifies a comma-separated list of publication items to process. The default is all publication items in the publication.
SKIPPUBITEMLIST | <NONE>        | Pub1, Pub2, and so on                                   | Specifies a comma-separated list of publication items to skip.
OPTIMIZER       | <DB>          | RULE or CHOOSE; otherwise, the current database setting | Specifies the optimizer mode to use within Oracle. The default is the current database setting.
ORDERBYPUBITEM  | NO            | Yes or No                                               | Orders all output by publication item name.


Table 1-2 Consperf Parameters for Synchronization Timing Performance

Parameter   | Default Value | Allowed Values                | Description
------------|---------------|-------------------------------|------------
TIMEOUT     | 10 seconds    | Integer for seconds           | Specifies the query timeout value in seconds. This is the amount of time Consperf will wait before it cancels a query.
UPDATECOUNT | 5             | Integer for number of records | Specifies the number of records to mark as dirty during synchronization.
MAXLOG      | 5000          | Integer for number of records | Specifies the number of records to put in the log table. Simulates the transaction log.
AUTOTUNE    | NO            | Yes or No                     | Enables auto-tune.
CLEARTUNE   | NO            | Yes or No                     | Clears existing auto-tune results.
TOLERANCE   | 20 seconds    | Integer for seconds           | A template must be faster by this number of seconds before it replaces the default template.


1.2.1.1.3 Execution Plan File

To improve performance when accessing data on the local client database, view the execution plan file, which shows the performance of your SQL query execution on the client database. To execute a SQL statement, Oracle might need to perform several steps. Each of these steps either physically retrieves rows of data from the database or prepares them in some way for the user issuing the statement. The combination of the steps Oracle uses to execute a statement is called an execution plan, which includes an access path for each table that the statement accesses and an ordering of the tables (the join order) with the appropriate join method. The execution plan shows you exactly how Oracle Database Mobile Server executes your SQL statement.

The components of an execution plan include the following:

  • An ordering of the tables referenced by the statement.

  • An access method for each table mentioned in the statement.

  • A join method for tables affected by join operations in the statement.

The execution plan file shows how your publication items interact with the different logical delete, insert, and update templates. From this report, you can evaluate your SQL to see if you want to modify it in any way to speed up your query. Set the OPTIMIZER parameter to designate the optimizer mode. If you set this parameter to a mode other than the one the database currently uses, the analysis still shows how the query would execute under that mode. See Table 1-3 for all configuration parameters that relate to this report.
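
Consperf generates the plan report for you, but as a point of reference, the same kind of plan can be produced for any query with the standard Oracle EXPLAIN PLAN facility. The following sketch is illustrative only; the table and predicate are assumptions:

EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);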

Table 1-3 Consperf Parameters for Execution Performance Plan

Parameter   | Default Value | Allowed Values | Description
------------|---------------|----------------|------------
GATHERSTATS | NO            | Yes or No      | Gathers optimizer statistics on all mobile server objects. MGP compose must be disabled while Consperf analyzes objects. Consperf blocks this automatically, but the safest approach is to manually stop the MGP process before running Consperf with the GATHERSTATS option. If Consperf fails while gathering statistics, users must re-run CLEARSTATS before starting the MGP process again.
CLEARSTATS  | NO            | Yes or No      | Removes optimizer statistics on mobile server objects.
SQLTRACE    | NO            | Yes or No      | Enables Oracle SQL trace. TKPROF can be used to analyze the resulting trace file.


1.2.2 Monitoring Synchronization Using SQL Scripts

If, instead of viewing MGP statistics within the Mobile Manager, you would rather execute SQL scripts to monitor mobile application status during synchronization, you may use any of the following SQL scripts to retrieve the desired information.

1.2.2.1 Synchronization Times for All Clients

Using the following script, you can check the latest successful synchronization times for all clients by retrieving such information from the all_clients table.

select client, lastrefresh_starttime, lastrefresh_endtime
from cv$all_clients
order by client
/

1.2.2.2 Failed Transactions for All Clients

Using the following script, you can retrieve a list of failed transactions for all clients from the all_errors table.

select client, transaction_id, item_name, message_text
from cv$all_errors
where message_text is not null
order by client,transaction_id
/

1.2.2.3 Completely Refreshed Publication Items for All Clients

Using the following SQL script, you can retrieve a list of all publication items for all clients which were completely refreshed during the last synchronization process.

select clientid, publication_item
from c$complete_refresh_log
order by clientid, publication_item
/

1.2.2.4 Publications Flagged for Complete Refresh for All Clients

Using the following SQL script, you can retrieve a list of publications for all clients that are flagged for a complete refresh during the next synchronization process.

select clientid, template as publication
from c$all_subscriptions
where crr = 'Y'
/

1.2.2.5 Clients and Publications Where Subscription Parameters Are Not Set

Using the following SQL script, you can retrieve a list of clients and their publications where the subscription parameters have not been set.

select client, name as publication, param_name, param_value
from cv$all_subscription_params
where param_value is null
order by client, name
/

1.2.2.6 Record Counts for Map-based Publication Items by Client

Using the following SQL script, you can retrieve record counts for all client in-queues for map-based publication items, grouped by client.

select clid$$cs as client, count(*) as "RECORD COUNT"
from c$in_messages
group by clid$$cs
/

1.2.2.7 Record Count for Map-based Publication Items by Store

Using the following SQL script, you can retrieve record counts for all client in-queues for map-based publication items, grouped by store.

select clid$$cs as client, tranid$$ as transaction_id, store as item_name,
count(*) as "RECORD COUNT"
from c$in_messages
group by clid$$cs, tranid$$, store
/

1.2.2.8 All Client Sequence Partitions and Sequence Values

Using the following SQL script, you can retrieve a list of all client sequence partitions and current sequence values.

select clientid, name, curr_val, incr
from c$all_sequence_partitions
order by clientid, name
/

1.2.2.9 All Publication Item Indexes

Using the following SQL script, you can retrieve a list of all publication item indexes.

select publication as NAME, publication_item, conflict_rule as "INDEX_TYPE",
columns
from c$all_indexes
order by publication, publication_item
/

1.2.3 Create SQL Scripts With All Dependencies

When you create a SQL script in MDW or with the Consolidator APIs, you should include all dependent DDL statements in the same script, in the order necessary. If you separate dependent DDL statements into separate scripts, Oracle Database Mobile Server may execute them in random order, causing dependency errors and re-execution of each script. See Section 4.7 "Create and Load a Script into the Project" in the Oracle Database Mobile Server Developer's Guide for more information.
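
For example, if a script creates a table and an index on that table, keep both statements in one script, ordered so that the table exists before the index is created. The object names below are illustrative only:

CREATE TABLE proj_task (
  task_id   NUMBER PRIMARY KEY,
  proj_id   NUMBER,
  task_name VARCHAR2(100)
);

CREATE INDEX proj_task_proj_idx ON proj_task (proj_id);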

1.2.4 Configuration Parameters in the MOBILE.ORA that Affect Synchronization Performance

The following parameters in the [CONSOLIDATOR] section of the mobile.ora file are used for tuning synchronization:

  • MAX_THREADS

    The MAX_THREADS parameter is used by the MGP and controls the number of concurrent threads. As a rule, do not set this higher than 1.5 times the number of CPUs on the database machine. For example, if your system has four CPUs, then you should not set it higher than six.

  • RESUME_MAXACTIVE and RESUME_MAX_WAIT

    You can configure for maximum concurrent clients with the RESUME_MAXACTIVE and RESUME_MAX_WAIT parameters. These limit the maximum number of concurrently synchronizing clients to RESUME_MAXACTIVE; additional incoming clients wait up to RESUME_MAX_WAIT minutes before timing out with an error. If you set RESUME_TIMEOUT=0, these parameters limit the maximum concurrent clients without enabling the resume feature.

    For more details on the resume feature, see Section 4.7, "Resuming an Interrupted Synchronization" in the Oracle Database Mobile Server Administration and Deployment Guide.

  • COMPOSE_TIMEOUT

    The COMPOSE_TIMEOUT parameter specifies in seconds the MGP timeout for the compose phase for each user.

  • CONNECTION_POOL

    The CONNECTION_POOL parameter enables pooling of database connections.

For full details on these and more parameters, see Section A.1.5, "CONSOLIDATOR" in the Oracle Database Mobile Server Administration and Deployment Guide.
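
For illustration only, a [CONSOLIDATOR] section that applies several of these settings might look like the following; every value shown is an assumption that you must tune for your own environment:

[CONSOLIDATOR]
MAX_THREADS=6
RESUME_MAXACTIVE=100
RESUME_MAX_WAIT=5
COMPOSE_TIMEOUT=600
CONNECTION_POOL=YES

In this sketch, MAX_THREADS=6 follows the 1.5-times-CPUs rule for a four-CPU database machine.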

Each synchronization request requires a number of system resources, such as creating a database connection, using memory, and so on. If you have too many requests competing for the same resources, then overall performance can be poor. Limiting the number of parallel requests with the MAX_THREADS and MAX_CONCURRENCY parameters improves the average response time.

Set the MAX_THREADS and MAX_CONCURRENCY parameters if you notice that synchronization performance is not linear. For example, if twice the number of parallel requests results in a synchronization time that is five times longer for each client, then you probably have resource contention. The appropriate value depends on your environment and should be determined by trial and error.

1.2.5 Tuning Queries to Manage Synchronization Performance

You can increase synchronization performance by monitoring the performance of the SQL queries in your applications. The following sections provide details on how to tune your queries:

1.2.5.1 Avoid Using Non-Mergeable Views

You should avoid using database query constructs that prevent a view from being mergeable, because publication item queries that use non-mergeable views do not perform well. Examples of such constructs are union, minus, and connect by. For more information on mergeable views, see the Oracle database documentation.

1.2.5.2 Tune Queries With Consperf Utility

Once you have defined your application, use the consperf utility to profile the performance of the publication item queries. The mobile server does not execute your publication item queries directly; instead the query is wrapped into a template query, which is executed by the mobile server. The template query may have an unexpected query execution plan, resulting in poor performance. The consperf utility generates an EXPLAIN PLAN execution plan for those template queries, allowing you to tune your publication item query for best performance. In addition, consperf generates timing information for the execution of all template queries, so that you can identify bottleneck queries. For more information on the consperf utility, see Section 1.2.1, "Analyzing Performance of Publications With the Consperf Utility".

1.2.5.3 Manage the Query Optimizer

You must make sure that the optimizer picks the correct execution path when you are using the cost-based optimizer or when you have set the optimizer settings to choose. The optimizer can pick the correct execution path only when all of the tables are properly analyzed and statistics have been gathered for these tables.
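
For the application tables themselves, statistics are gathered with the standard Oracle DBMS_STATS package. The following call is a generic illustration; the schema name is an assumption:

EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MOBILEADMIN');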

The mobile server uses temporary tables during synchronization. Once a number of users have been created, and they have synchronized with the mobile server, run consperf with the gatherstats option to generate the statistics information for the temporary tables. For more information on the consperf utility, see Section 1.2.1, "Analyzing Performance of Publications With the Consperf Utility".

1.2.6 Synchronization Tablespace Layout

Tablespace layout across multiple disks can improve the performance of the mobile server data synchronization, as it reduces movement of the disk heads and improves I/O response time.

By default, the synchronization tablespace is SYNCSERVER, and is stored in the mobilexx.dbf file in the default location for the database under ORACLE_HOME, where xx is a number between 1 and 25. The tablespace name, file name, and file location for the tablespace are defined in the $ORACLE_HOME/Mobile/Server/admin/consolidator_o8a.sql script file, which is executed during the mobile server installation process. So, if you want to modify the tablespace, perform the following before you install the mobile server; otherwise, the default tablespace is created.

If you want to customize the SYNCSERVER tablespace, for example, by using multiple data files spread across several disks, or by using specific storage parameters, then you can precreate the SYNCSERVER tablespace with the required settings. The installation process automatically detects that the tablespace exists and uses it. Refer to the Oracle Database documentation for full details on how to create a tablespace.
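
For example, the following statement precreates the SYNCSERVER tablespace with two data files on separate disks. The file names, paths, and sizes are illustrative assumptions:

CREATE TABLESPACE syncserver
  DATAFILE '/disk1/oradata/mobile01.dbf' SIZE 500M AUTOEXTEND ON,
           '/disk2/oradata/mobile02.dbf' SIZE 500M AUTOEXTEND ON;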

1.2.7 Shared Maps

It is very common for publications to contain publication items that are used specifically for lookup purposes; that is, publication items that create read-only snapshots. The server may change these snapshots, but the client never updates them directly. Furthermore, many users often share the data in this type of snapshot. For example, there could be a publication item called zip_codes, which is subscribed to by all mobile users.

The main function of Shared Maps is to improve scalability for this type of publication item by allowing users to share record state information and reduce the size of the resulting replication map tables. Non-updatable publication items use shared maps by default.

Note:

Shared Maps can also be used with updatable snapshots if the developer is willing to implement their own conflict detection and resolution logic; however, shared maps are normally used only for non-updatable snapshots.

Shared maps shrink the size of map tables for large lookup publication items and reduce the MGP compose time. Lookup publication items contain read-only data that is not updatable on the clients and that is shared by multiple subscribed clients. When multiple users share the same data, their query subsetting parameters are usually identical.

For example, a query could be the following:

SELECT * FROM EMP WHERE DEPTNO = :dept_id

In the preceding example, all users that share data from the same department have the same value for dept_id. The default sharing method is based on subscription parameter values.

In the following example, the query is:

SELECT * FROM EMP WHERE DEPTNO = ( SELECT DEPTNO FROM 
         EMP WHERE EMPNO = :emp_id )

In this example, users from the same departments still share data. Their subsetting parameters are not equal, because each user has a unique emp_id. To support the sharing of data for these types of queries (as illustrated by the example), a grouping function can be specified. The grouping function returns a unique group id based on the client id.

There is also another possible use for shared maps: shared updatable publication items. However, this type of usage requires implementation of a custom DML procedure that handles conflict resolution.

1.2.7.1 Performance Attributes

The performance of the MGP compose cycle is directly proportional to the following:

NC * NPI 

where:

  • NC = number of clients

  • NPI = number of publication items that must be composed

With shared maps, the length of the MGP cycle is proportional to the following:

NC*(NPI - NSPI)  + NG*NSPI 

where:

  • NSPI = number of shared publication items

  • NG = number of groups

Note:

If NG = NC, then the MGP performance is similar in both cases. However, with fewer groups and more shared publication items, the MGP compose cycle becomes faster. In addition, map storage requirements are governed by these same factors.

1.2.7.2 Shared Map Usage

To set up a publication item to be shared, use the AddPublicationItem API and enable the shared flag. It is also possible to toggle the shared property of a publication item once it is added to the publication with the SetPublicationItemMetadata API. Both the AddPublicationItem API and the SetPublicationItemMetadata API allow users to specify a PL/SQL grouping function. The function signature must be as follows:

(
    CLIENT      IN VARCHAR2,
    PUBLICATION IN VARCHAR2,
    ITEM        IN VARCHAR2
) RETURN VARCHAR2

The returned value must uniquely identify the client's group. For example, if client A belongs to the group GroupA and client B belongs to the group GroupB, the group function F could return:

F ('A','SUBSCRIPTION','PI_NAME') = 'GroupA'
F ('B','SUBSCRIPTION','PI_NAME') = 'GroupB'

The implicit assumption of the grouping function is that all the members of GroupA share the same data, and that all the members of GroupB share the same data. The group function uniquely identifies a group of users with the same data for a particular publication item.

For the query example in Section 1.2.7, "Shared Maps", the grouping function could be:

FUNCTION get_emp_group_id (
    clientid    IN VARCHAR2,
    publication IN VARCHAR2,
    item        IN VARCHAR2
) RETURN VARCHAR2 IS
    group_val_id VARCHAR2(30);
BEGIN
    SELECT deptno INTO group_val_id
      FROM emp WHERE empno = clientid;
    RETURN group_val_id;
END;

Note:

This function assumes that EMPNO is the Consolidator Manager client id. If the group_fnc is not specified, the default grouping is based on subscription parameters.

1.2.7.3 Compatibility and Migration for Shared Maps

If you have been using a version prior to Oracle Database Mobile Server 11g, then you must migrate your existing mobile server schema to use shared maps, as follows:

  1. Run one cycle of MGP.

  2. The clients must sync with the server to get the latest changes prepared by the MGP.

  3. Stop the Web server and MGP, and migrate the server to 11g. The migration automatically sets all non-updatable publication items to shared items. If any shared publication items need to use grouping functions, or if any publication items need to change their sharing attribute, execute custom code that calls the appropriate Consolidator Manager API. See the SetPublicationItemMetadata API in Section 1.2.7.2, "Shared Map Usage".

  4. Call the ShrinkSharedMaps Consolidator Manager API to set the clients to use shared map data and to remove old redundant data from the maps.

  5. Start the Web server and MGP.

1.2.8 Use Map Table Partitions to Streamline Users Who Subscribe to a Large Amount of Data

Sync Server database objects called map tables are used to maintain the state for each mobile client. If there are a large number of clients, and each client subscribes to a large amount of data, the map tables can become very large, creating scalability issues. Using the following APIs, map tables can be partitioned by client id, making them more manageable.

The APIs allow you to create a map table partition, add additional partitions, drop one or all partitions, and merge map table partitions. Map table partitions can be monitored using the ALL_PARTITIONS database catalog view (CV$ALL_PARTITIONS).
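
For example, the following query lists the current partition information; all columns are selected because the exact column list is not reproduced in this chapter:

select *
from cv$all_partitions
/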


Note:

This form of partitioning is not related to the partition functionality provided by Oracle Server, and is used exclusively by Oracle Database Mobile Server.

1.2.8.1 Create a Map Table Partition

Creates a partition for the referenced publication item map table. If there is data in the map table, it is transferred to the partition being created. After the partition has been successfully created, the map table can be truncated to remove redundant data using the SQL command TRUNCATE TABLE.

Note:

Records removed from the server through a truncate command will not be removed from the client unless a complete refresh is triggered. The truncate command is considered a DDL operation. Consequently, the necessary DML triggers do not fire and therefore the operations are not logged for fast refresh.

Syntax

public static void partitionMap
   (String pub_item,
    int num_parts,
    String storage,
    String ind_storage) throws Throwable

The parameters of partitionMap are listed in Table 1-4.

Table 1-4 The partitionMap Parameters

Parameter   | Definition
------------|-----------
pub_item    | The publication item whose map table is being partitioned.
num_parts   | The number of partitions.
storage     | A string specifying the storage parameters. This parameter requires the same syntax as the SQL command CREATE TABLE. See the Oracle SQL Reference for more information.
ind_storage | A string specifying the storage parameters for indexes on the partition. This parameter requires the same syntax as the SQL command CREATE INDEX. See the Oracle SQL Reference for more information.


Example

consolidatorManager.partitionMap("P_SAMPLE1", 5, "tablespace mobileadmin",
 "initrans 10 pctfree 70");

1.2.8.2 Add Map Table Partitions

Adds a partition for the referenced publication item's map table. If there is data in the map table, it is transferred to the partition being created. After the partition has been successfully created, the map table can be truncated to remove redundant data using the SQL command TRUNCATE TABLE.

Note:

Records removed from the server through a truncate command will not be removed from the client unless a complete refresh is triggered. The truncate command is considered a DDL operation. Consequently, the necessary DML triggers do not fire and therefore the operations are not logged for fast refresh.

Syntax

public static void addMapPartitions
   ( String pub_item,
    int num_parts,
    String storage,
    String ind_storage) throws Throwable

The parameters of addMapPartitions are listed in Table 1-5:

Table 1-5 The addMapPartitions Parameters

Parameter   | Definition
------------|-----------
pub_item    | The publication item whose map table is being partitioned.
num_parts   | The number of partitions.
storage     | A string specifying the storage parameters. This parameter requires the same syntax as the SQL command CREATE TABLE.
ind_storage | A string specifying the storage parameters for indexes on the partition. This parameter requires the same syntax as the SQL command CREATE INDEX.


Example

consolidatorManager.addMapPartitions("P_SAMPLE1", 5,
 "tablespace mobileadmin", "initrans 10 pctfree 40");

Note:

Map Partitions are created only for existing users. New users are placed in the original map table.

1.2.8.3 Drop a Map Table Partition

Drops the named partition. In the following example, the partition parameter is the name of the partition. Partition names must be retrieved by querying the ALL_PARTITIONS table view, CV$ALL_PARTITIONS, since partitions are named by Data Synchronization.

Syntax

public static void dropMapPartition( String partition) throws Throwable

Example

consolidatorManager.dropMapPartition("MAP101_1"); 

1.2.8.4 Drop All Map Table Partitions

Drops all partitions of the map table for the named publication item.

Syntax

public static void dropAllMapPartitions( String pub_item) throws Throwable

Example

consolidatorManager.dropAllMapPartitions("P_SAMPLE1");

1.2.8.5 Merge Map Table Partitions

Merges the data from one partition into another. Partition names must be retrieved by querying the ALL_PARTITIONS table view CV$ALL_PARTITIONS, since partitions are named by Data Synchronization.

Syntax

public static void mergeMapPartitions
   ( String from_partition,
    String to_partition) throws Throwable

Example

consolidatorManager.mergeMapPartitions("MAP101_1", "MAP101_2"); 

1.2.9 Configuring Back-End Oracle Database to Enhance Synchronization Performance

You can configure the Oracle database to enhance your mobile server synchronization performance, as follows:

1.2.9.1 Physically Separate Map Tables and Map Indexes

During synchronization, map tables are used extensively. Map tables are internal tables, with table names that use the following pattern: CMP$pub_item_name. Each map table has four separate indexes. By default, both the map tables and their indexes are created in the default tablespace SYNCSERVER.

You can improve performance if you move the map table indexes to a different disk than the map table itself. Create a separate tablespace (for example, MAPINDEXES) on a different disk and manually move all indexes. Because the process of moving the indexes requires you to drop and re-create them, you should move the indexes before many users have synchronized. Otherwise, recreating the indexes on the map tables may be very time consuming, because map tables grow with the number of users who have synchronized.

To move the indexes on a map table, do the following:

  1. Identify all indexes on the map table (CMP$pub_item_name). There are three or four indexes. Move all of them.

  2. For each index, record the type of index and column lists.

  3. If the index is a primary key index, then remove the primary key constraint on the map table.

  4. Drop the index.

  5. Recreate the index using the same name, type and column list. Use the storage clause in the create index statement to specify the new tablespace. You may also specify different storage parameters. Refer to the Oracle database documentation for more information on how to create indexes and storage clause parameters.

Note:

Repeat steps 3 through 5 for all other indexes on the map table.
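
The following sketch illustrates steps 4 and 5 for a single index, together with the creation of the separate index tablespace. The tablespace path, index name, and column list are assumptions for illustration; use the names and column lists you recorded in step 2:

-- Create the destination tablespace on a separate disk (illustrative path).
CREATE TABLESPACE mapindexes
  DATAFILE '/disk2/oradata/mapindexes01.dbf' SIZE 200M;

-- Drop the existing map table index, then re-create it in the new tablespace.
DROP INDEX cmp$p_sample1_n1;
CREATE INDEX cmp$p_sample1_n1 ON cmp$p_sample1 (clid$$cs)
  TABLESPACE mapindexes;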

1.2.9.2 Database Parameter Tuning

Tuning the database for the mobile server is similar to any Oracle database tuning required for query-intensive applications. Configure the SGA to be as large as possible on your system to maximize caching capabilities and avoid I/O wherever possible.

Tune your Oracle database with the following database parameters:

  • db_block_buffers

  • sort_area_size

  • log_buffers

Refer to the Oracle database tuning guide for more information on database tuning.

1.2.10 Priority-Based Replication

With priority-based replication, you can specify what rows in the snapshot are synchronized by setting the priority.

There are two steps to enable this feature. These are described in the following sections:

  1. Section 1.2.10.1, "Create Restricting Predicate in Publication Item"

  2. Section 1.2.10.2, "Set Priority Flag in Mobile Sync API Before Initiating Synchronization"

1.2.10.1 Create Restricting Predicate in Publication Item

Create a Restricting Predicate expression in the publication item that is to be restricted when priority is requested. A Restricting Predicate is a conditional expression added to the SQL statement in the snapshot. It limits what records are sent to the client. Create the restricting predicate with either the addPublicationItem method or the MDW equivalent.

Note:

You can only use fast refresh with a high priority restricting predicate. If you use any other type of refresh, the high priority restricting predicate is ignored.

High-priority replication combined with the selective synchronization feature provides a mechanism for minimizing the synchronization payload to the most relevant data.

For example, if you created a snapshot with the following statement:

select * from projects where urgency_level in (1,2,3,4)

You can set the Restricting Predicate with the addPublicationItem method. For example, you could create a Restricting Predicate with urgency_level as follows:

consolidatorManager.addPublicationItem("T_SAMPLE11", "P_SAMPLE11-M",  
        null, null, "S", "urgency_level = 1", null);

This specifies what the priority synchronization should do when requested. To request the priority synchronization, set the high priority flag in the Mobile Sync APIs, as described in Section 1.2.10.2, "Set Priority Flag in Mobile Sync API Before Initiating Synchronization".

1.2.10.2 Set Priority Flag in Mobile Sync API Before Initiating Synchronization

Once the publication item has the Restricting Predicate set, set the priority flag to 1 in the Mobile Sync API whenever you want the priority restriction to apply to a synchronization.

In each of the Mobile Sync APIs, there is a priority flag in either the environment or options. Setting this flag to 1 means that only high priority tables are synchronized. The default for the priority flag is 0, which specifies that all tables synchronize.

Note:

The Mobile Sync API is documented in Chapter 3, "Managing Synchronization on the Mobile Client" in the Oracle Database Mobile Server Developer's Guide.

In our previous example, when you set the priority flag to 0 (the default), all projects with urgency_level 1,2,3,4 are replicated. However, if you set the priority flag to 1, then the Restricting Predicate is enabled for the synchronization.

When the high priority flag is set, the MGP appends the Restricting Predicate (with AND) to the snapshot definitions when composing data for the client. In this example, the high priority statement would be transformed as follows:

SELECT * FROM projects where urgency_level in (1,2,3,4) AND urgency_level = 1;

In this case, only projects with urgency_level =1 are replicated to the client.

1.2.11 Caching Publication Item Queries

This feature allows complex publication item queries to be cached. It applies to queries that cannot be optimized by the Oracle query engine. By caching the query in a temporary table, the Sync Server template can join to the snapshot more efficiently.

Storing the data in a temporary table does add overhead to the MGP operation, so the decision to use it should be made only after first attempting to optimize the publication item query to perform well inside the Sync Server template. If the query cannot be optimized in this way, use the caching method.

The following example is a template used by the MGP during the compose phase to identify client records that are no longer valid, and should be deleted from the client:

UPDATE pub_item_map map
SET delete = true
WHERE client = <clientid>
AND NOT EXISTS (SELECT 'EXISTS' FROM
    (<publication item query>) snapshot
     WHERE map.pk = snapshot.pk);

In this example, when <publication item query> becomes too complex, because it contains multiple nested subqueries, unions, virtual columns, connect by clauses, or other complex constructs, the query optimizer is unable to determine an acceptable plan. This can have a significant impact on performance during the MGP compose phase. Storing the publication item query in a temporary table, using the publication item query caching feature, flattens the query structure and enables the template to join to it effectively.

1.2.11.1 Enabling Publication Item Query Caching

The following API enables publication item query caching.

Syntax

public static void enablePubItemQueryCache(String name) 
      throws Throwable

The parameters for enablePublItemQueryCache are listed in Table 1-6:

Table 1-6 The enablePubItemQueryCache Parameters

Parameter | Description
----------|------------
name      | A string specifying the name of the publication item.


Example

consolidatorManager.enablePubItemQueryCache("P_SAMPLE1");

If you are using an input string from the input parameter argv array, cast it to a String, as follows:

consolidatorManager.enablePubItemQueryCache( (String) argv[0]);

1.2.11.2 Disabling Publication Item Query Caching

The following API disables publication item query caching.

Syntax

public static void disablePubItemQueryCache(String name) 
      throws Throwable

The name parameter for disablePubItemQueryCache is listed in Table 1-7:

Table 1-7 The disablePubItemQueryCache Parameters

Parameter | Description
----------|------------
name      | A string specifying the name of the publication item.


Example

consolidatorManager.disablePubItemQueryCache("P_SAMPLE1");

1.2.12 Architecture Design of Mobile Server and Oracle Database for Synchronization Performance

It is recommended that you execute the mobile server and the Oracle database on separate machines. If possible, use multi-CPU machines for both the mobile server and the Oracle database.

1.2.13 Designing Application Tables and Indexes for Synchronization Performance

Your clients may perform a large number of insert and delete operations on snapshots, and then synchronize their data changes with the mobile server. If this is the case, then consider placing the application tables and the indexes on those tables on separate disks.
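
For example, you might keep the indexes for a heavily modified application table in a dedicated tablespace on another disk. All object names here are illustrative assumptions:

CREATE INDEX app_orders_date_idx ON app_orders (order_date)
  TABLESPACE app_indexes;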

1.3 Integrating Oracle Database Mobile Server With the Oracle Real Application Clusters

Oracle Real Application Clusters (Oracle RAC) enables a single database to run across multiple clustered nodes in a grid, pooling the processing resources of several machines. Oracle RAC enables you to cluster Oracle databases where the combined processing power of the multiple servers provides greater throughput and scalability than is available from a single server.

Oracle RAC is a unique technology that provides high availability and scalability for all application types. The Oracle RAC infrastructure is also a key component for implementing the Oracle enterprise grid computing architecture. Having multiple instances access a single database prevents the server from being a single point of failure. Oracle RAC enables you to combine smaller commodity servers into a cluster to create scalable environments that support mission critical business applications.

Note:

For full details on Oracle RAC capabilities, limitations, installation and configuration requirements, see the Oracle Database documentation.

You can use Oracle Database Mobile Server with a back-end Oracle RAC database. The mobile server repository is installed in the Oracle RAC database. Then, you can install and configure multiple mobile servers to access the repository in the Oracle RAC database to increase performance and availability.

Note:

Section 4.3.3, "Providing High Availability with a Farm of Mobile Servers" in the Oracle Database Mobile Server Installation Guide describes how you can install multiple mobile servers interacting with a single Oracle RAC database with a load balancer managing the incoming load.

Applications deployed on Oracle Database Mobile Server in an Oracle RAC configuration operate without any code modifications. The only impact is that during installation and configuration of Oracle Database Mobile Server and some of its tools, you need to provide an Oracle RAC URL instead of the regular JDBC URL.

The JDBC URL for an Oracle RAC database can have more than one address in it for multiple Oracle databases in the cluster and follows this URL structure:

jdbc:oracle:thin:@(DESCRIPTION=
 (ADDRESS_LIST=
   (ADDRESS=(PROTOCOL=TCP)(HOST=PRIMARY_NODE_HOSTNAME)(PORT=1521))
   (ADDRESS=(PROTOCOL=TCP)(HOST=SECONDARY_NODE_HOSTNAME)(PORT=1521))
 )
 (CONNECT_DATA=(SERVICE_NAME=DATABASE_SERVICENAME)))
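
As an illustration, the following standalone Java snippet opens a JDBC connection using an Oracle RAC URL of this form; the host names, service name, and credentials are placeholders, not values from your installation:

import java.sql.Connection;
import java.sql.DriverManager;

public class RacConnectExample {
    public static void main(String[] args) throws Exception {
        // Oracle RAC URL with two cluster node addresses (placeholders).
        String url = "jdbc:oracle:thin:@(DESCRIPTION="
            + "(ADDRESS_LIST="
            + "(ADDRESS=(PROTOCOL=TCP)(HOST=node1.example.com)(PORT=1521))"
            + "(ADDRESS=(PROTOCOL=TCP)(HOST=node2.example.com)(PORT=1521)))"
            + "(CONNECT_DATA=(SERVICE_NAME=orcl.example.com)))";
        // The thin driver tries the second address if the first node
        // is unreachable.
        try (Connection conn =
                DriverManager.getConnection(url, "mobileadmin", "password")) {
            System.out.println("Connected to: " + conn.getMetaData().getURL());
        }
    }
}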

If you are using an Oracle RAC database, you will be asked to supply the JDBC URL for the Oracle RAC database anytime you need to connect to the back-end Oracle database. This includes the following:

  • During the mobile server repository installation—which uses the Repository Wizard tool.

  • In the Mobile Database Workbench tool when connecting to the database.

  • In the Packaging Wizard tool when publishing an application to the database.

  • When providing the JDBC URL for the Mobile Manager, MSRDT and WSH tools.

1.4 Maximizing JVM Performance By Managing Java Memory

You can maximize your JVM performance by modifying the amount of memory used by the three areas of Java memory. This is fully described in Section 2.4.1, "Troubleshooting An Out of Memory Error".
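
As a general illustration, JVM memory areas are sized with standard java command-line options; the values and class name below are placeholders, and the -XX:MaxPermSize option applies only to older JVMs. The settings appropriate for your installation are described in the referenced section:

java -Xms256m -Xmx1024m -XX:MaxPermSize=256m MyServerMainClass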