The following sections describe the methods by which you can manage the performance of your use of Oracle Lite Database:
Section 1.3, "Determining Performance of Client SQL Queries With the EXPLAIN PLAN"
Section 1.4, "Optimizing Application SQL Queries Against Oracle Lite Database"
Section 1.5, "Maximizing JVM Performance By Managing Java Memory"
The following methods enable you to streamline the connections between the client, the Mobile Server, and the back-end database:
Connection pooling enables you to eliminate the time delay in creating and destroying connections for incoming application requests. Instead, enable connection pooling, as shown in Section 3.4, "Manage Application Properties or Users" in the Oracle Database Lite Administration and Deployment Guide, so that each incoming connection request uses an existing connection from the pool.
You can limit the number of connections that access the database from each application, as shown in Section 3.4, "Manage Application Properties or Users" in the Oracle Database Lite Administration and Deployment Guide. Set the maximum database connection limit. Any request for a database connection beyond the limit is refused.
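The effect of a maximum connection limit can be sketched in isolation. The following is a hypothetical illustration in Python, not Mobile Server code: requests up to the limit obtain a connection, and any request beyond the limit is refused rather than queued.

```python
# Hypothetical sketch (not Mobile Server code) of a bounded connection
# pool: up to max_connections requests hold a connection at once, and
# any request beyond the limit is refused rather than queued.
import threading

class BoundedPool:
    def __init__(self, max_connections):
        self._slots = threading.Semaphore(max_connections)

    def acquire(self):
        # Non-blocking acquire: refusing instead of waiting mirrors the
        # "request beyond the limit is refused" behavior described above.
        if not self._slots.acquire(blocking=False):
            raise RuntimeError("connection limit reached")
        return object()  # stand-in for a real database connection

    def release(self):
        self._slots.release()

pool = BoundedPool(max_connections=2)
first, second = pool.acquire(), pool.acquire()
try:
    pool.acquire()          # third request exceeds the limit
except RuntimeError as exc:
    print(exc)              # connection limit reached
```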
The following sections describe how you can manipulate the synchronization performance:
Section 1.2.1, "Analyzing Performance of Publications With the Consperf Utility"
Section 1.2.2, "Monitoring Synchronization Using SQL Scripts"
Section 1.2.4, "Configuration Parameters in the WEBTOGO.ORA that Affect Synchronization Performance"
Section 1.2.5, "Tuning Queries to Manage Synchronization Performance"
Section 1.2.9, "Configuring Back-End Oracle Database to Enhance Synchronization Performance"
Section 1.2.13, "Synchronization Disk Needs May Impose on WinCE Platform Available Space"
Section 1.2.14, "Designing Application Tables and Indexes for Synchronization Performance"
The Consperf utility profiles your subscriptions and may modify how the publication item is executed if the utility determines that there is a better-performing option. The Consperf tool evaluates how the SQL within the publication item interacts with our Data Synchronization query templates. The first synchronization is always a complete refresh, which is a direct invocation of the query. On subsequent synchronizations, the query templates determine incremental refreshes. This improves performance, because a complete refresh is not performed each time you synchronize. However, the interaction of our query templates and your SQL may not be optimal, which the Consperf tool discovers. Either we modify the query template or the type of logical delete or insert for you, or you can adjust your SQL to perform better against our templates.
In addition, application developers and administrators use this utility to analyze the performance of subscriptions and identify potential bottlenecks during synchronization.
This tool generates the following two primary analysis reports:
Timing statistics for publication items
Explain plans for publications
The Consperf tool automatically tunes subscription properties, if the default templates do not supply the highest performing option. You can select a client and choose the desired subscription for performance analysis. Users can change parameter values before analyzing performance. The analysis results, which are timing and execution plan reports, are stored on the server and can be accessed by viewing the same user and subscription.
You can execute the Consperf utility through one of the following locations:
Click the Users link under the Consperf section on the Performance tab.
Click the Users link from the Repository screen.
Then, perform the following:
Select the User that you want to execute the Consperf tool against and click Subscriptions.
From the subscriptions screen, choose the publication and click Consperf performance analysis. This starts the Consperf analysis.
Click Set consperf parameters and launch the consperf thread, which brings you to a screen where you can configure parameters that affect how the performance analysis is executed. See Section 1.2.1.1, "Deciphering the Performance Evaluation Files" for more information on these parameters and how they affect the performance evaluation output.
Once you have set the configuration for how you want your performance analysis to occur, click OK. The Consperf tool executes and prepares the reports for you, based on your configuration. You are returned to the first Consperf page with the reports listed as hyperlinks under the Last Consperf Run Results section as View Timing File or View Execution Plan File.
There are two performance evaluations that come out of the Consperf utility:
Timing File
Execution Plan File
Timing File
The timing file contains the analysis of how the publication item performs with the data synchronization defaults against how it could perform if other options were chosen. The output of this file shows you the conclusions of the analysis and how the data synchronization defaults could be modified to perform better with your particular publication items.
The first section of the timing file provides you information on the configuration with which this analysis was executed. Thus, if you modify the configuration for other analysis, you can go back and compare each file to each other to easily see the differences in the output.
Note: The results of this analysis may cause the data synchronization engine to modify the type of query template or logical delete/insert/update used with your publication item. To change it back to the defaults, you must rerun Consperf with CLEARTUNE set to YES. See Table 1-2 for a full description of parameter settings.
The following example examines the T_SAMPLE11 publication. The version of Oracle Database Lite is 10.0.0.0.0 and the user is S11U1. The configuration is set to time out if a query takes longer than 1000 milliseconds, and to change the defaults if the difference between the default and another template is greater than 20 seconds (20000 milliseconds). The changes are authorized only when AUTOTUNE is set to true. If set to false, the analysis is provided, but nothing is modified.
VERSION = 10.0.0.0.0
OPTMIZER_MODE = null
APPLICATION = null
PUBLICATION = T_SAMPLE11
CLIENTID = S11U1
TIMEOUT = 1000 ms
TOLERANCE = 20000 ms
ITERATIONS = 2
AUTOTUNE_SUPPORT = true
The next part of the Timing File lists the time in milliseconds that each template type takes to complete with each publication item in the publication. There are three templates that data synchronization can use to "wrap" your SQL query. The default query template is SYNC_1. Since the tolerance is set to 20 seconds, if either template SYNC_2 or SYNC_3 performs at least 20 seconds better than SYNC_1, the template type is modified for your publication item. You can set the TOLERANCE level to fewer seconds in the Consperf configuration. See Table 1-2 for a description of TOLERANCE.
Publication Item Name | NS  | BS  | SYNC_1 | SYNC_2 | SYNC_3 | AS  | Total
--------------------------------------------------------------------------
P_SAMPLE11-D          | <3> | <0> | <6>    | 10     | -1000  | <0> | 9
P_SAMPLE11-M          | <3> | <0> | <5>    | 8      | -1000  | <0> | 8
There are two publication items in the subscription.
NS stands for Null Sync. Your application may be issuing a null synchronization; if so, this shows the time in milliseconds that it took to complete. The null synchronization is a tool to see whether it is the data or the application itself that is causing the performance hit.
BS stands for Before Synchronization; AS stands for After Synchronization. You can provide callouts that are executed either before or after each synchronization for this application. This shows the time in milliseconds it takes to perform each task. In this example, there are no before or after synchronization callouts.
SYNC_1 is the default template. In combination with the publication items, it still executes the fastest, at 6 and 5 milliseconds for the two publication items respectively, compared with the other two options, SYNC_2 and SYNC_3. Thus, these publication items continue to use the SYNC_1 template. Note that SYNC_3 has -1000 as its time, which means either that the template was not appropriate to execute or that it timed out.
SYNC_1 uses an outer join for inserts, updates, and deletes.
SYNC_2 is a simple insert and update.
SYNC_3 uses the base view for insert and update. The base view is the first table in the select statement, as it holds the primary key used to search for all records in the query.
The Total column shows the total number of milliseconds to execute the entire publication item.
The second section is how the MGP performs with the templates it uses for deletes and inserts. It evaluates the default against other options, as follows:
Logical delete options:
MGP template for logical deletes using EXISTS: default for logical delete
MGP template for logical deletes using correlated IN
MGP template for logical deletes using HASH_AJ
MGP template for logical deletes using IN
Logical insert options:
MGP template for logical inserts using EXISTS: default for logical insert
MGP template for logical inserts using correlated IN
MGP template for logical inserts using IN
Logical update options
MGP template for logical updates using correlated IN: default for logical updates
MGP template for logical updates using EXISTS
MGP template for logical updates using IN
MGP template for logical updates with multiple table dependencies
For example, the following evaluates how each publication item performs with its logical deletes:
MGP Output...
Pub Item Name | LDEL_1 | LDEL_2 | LDEL_3 | LDEL_4
P_SAMPLE11-D  | <5>    | 3      | 3      | 3
P_SAMPLE11-M  | <5>    | 3      | 5      | 4
LDEL_1 is the default. Even though LDEL_2, LDEL_3, and LDEL_4 are faster, they are not 20 seconds faster, which is the tolerance level, so the default for deletes is kept the same. If the difference in speed had been greater than the tolerance level, and the AUTOTUNE parameter was set to yes, the Consperf utility would have modified the logical delete method in the repository for the publication item in the future.
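The tolerance rule can be sketched as a small function. This is an illustrative model only, not Consperf source code; the function name and dictionary layout are assumptions.

```python
# Sketch of the Consperf tolerance rule (illustrative only, not the
# actual Consperf implementation). Times are in milliseconds; the
# TOLERANCE default of 20 seconds is 20000 ms.
def choose_template(timings, default, tolerance_ms=20000):
    """A candidate template replaces the default only when it is
    faster than the default by more than tolerance_ms."""
    best = min(timings, key=timings.get)
    if best != default and timings[default] - timings[best] > tolerance_ms:
        return best
    return default

# Values from the MGP output above for P_SAMPLE11-D: LDEL_2/3/4 are
# faster than LDEL_1, but not 20000 ms faster, so the default is kept.
print(choose_template({"LDEL_1": 5, "LDEL_2": 3, "LDEL_3": 3, "LDEL_4": 3},
                      "LDEL_1"))  # LDEL_1
```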
The last section, Subscription Properties, describes the following:
Profiled: Has autotune been turned on and Consperf executed previously on this subscription?
Base View: True if this publication item uses more than one table.
How many records are in the subscription?
How many records are dirty?
How many records have been flagged as dirty to simulate an actual run? Up to the number of records in the subscription or MAXLOG are flagged as dirty, whichever is fewer.
Configuration for Data Synchronization
Table 1-1 Consperf Parameters for Both Synchronization and MGP Processing
Parameter | Default Value | Allowed Values | Description
---|---|---|---
PUBITEMLIST | <ALL> | Pub1, Pub2, and so on | Specifies a comma-separated list of publication items to process. The default is all publication items in the publication.
SKIPPUBITEMLIST | <NONE> | Pub1, Pub2, and so on | Specifies a comma-separated list of publication items to skip.
OPTIMIZER | <DB> | RULE or CHOOSE; otherwise, uses the current database setting | Specifies the optimizer mode to use within Oracle. The default is the current database setting.
ORDERBYPUBITEM | NO | Yes or No | Orders all output by publication item name.
Table 1-2 Consperf Parameters for Synchronization Timing Performance
Parameter | Default Value | Allowed Values | Description
---|---|---|---
TIMEOUT | 10 seconds | Integer (seconds) | Specifies the query timeout value in seconds. This is the amount of time Consperf waits before it cancels a query.
UPDATECOUNT | 5 | Integer (number of records) | Specifies the number of records to mark as dirty during synchronization.
MAXLOG | 5000 | Integer (number of records) | Specifies the number of records to put in the log table. Simulates the transaction log.
AUTOTUNE | NO | Yes or No | Enables auto-tune.
CLEARTUNE | NO | Yes or No | Clears existing auto-tune results.
TOLERANCE | 20 seconds | Integer (seconds) | A template must be faster by this number of seconds before it replaces the default template.
The execution plan file shows how your publication items interact with the different logical delete, insert, and update templates. From this report, you can evaluate your SQL to see whether you want to modify it to speed up your query. Set the optimizer parameter to designate how the database is organized. If you set this parameter to a setting that the database does not use, Consperf still acts as if the database were set that way, to show you how it would execute. See Table 1-3 for all configuration parameters that relate to this report.
Table 1-3 Consperf Parameters for Execution Performance Plan
Parameter | Default Value | Allowed Values | Description
---|---|---|---
GATHERSTATS | NO | Yes or No | Gathers optimizer statistics on all Mobile Server objects. MGP compose MUST be disabled while Consperf analyzes objects. Consperf blocks this automatically, but the safest approach is to manually stop the MGP process before running Consperf with the GATHERSTATS option. If Consperf fails while gathering statistics, users must re-run CLEARSTATS before starting the MGP process again.
CLEARSTATS | NO | Yes or No | Removes optimizer statistics on Mobile Server objects.
SQLTRACE | NO | Yes or No | Enables Oracle SQL trace. TKPROF can be used to analyze the resulting trace file.
If, instead of viewing MGP statistics within the Mobile Manager, you would rather execute SQL scripts to monitor Mobile application status during synchronization, you may use any of the following SQL scripts to retrieve the desired information.
Section 1.2.2.3, "Completely Refreshed Publication Items for all Clients"
Section 1.2.2.4, "Publications Flagged for Complete Refresh for All Clients"
Section 1.2.2.5, "Clients and Publication where Subscription Parameters are Not Set"
Section 1.2.2.6, "Record Counts for Map-based Publication Item by Client"
Section 1.2.2.7, "Record Count for Map-based Publication Items by Store"
Section 1.2.2.8, "All Client Sequence Partitions and Sequence Values"
Using the following script, you can check the latest successful synchronization times for all clients by retrieving such information from the all_clients
table.
select client, lastrefresh_starttime, lastrefresh_endtime
from cv$all_clients
order by client
/
Using the following script, you can retrieve a list of failed transactions for all clients from the all_errors
table.
select client, transaction_id, item_name, message_text
from cv$all_errors
where message_text is not null
order by client, transaction_id
/
Using the following SQL script, you can retrieve a list of all publication items for all clients which were completely refreshed during the last synchronization process.
select clientid, publication_item
from c$complete_refresh_log
order by clientid, publication_item
/
Using the following SQL script, you can retrieve a list of publications for all clients that are flagged for a complete refresh during the next synchronization process.
select clientid, template as publication
from c$all_subscriptions
where crr = 'Y'
/
Using the following SQL script, you can retrieve a list of clients and their publications where the subscription parameters have not been set.
select client, name as publication, param_name, param_value
from cv$all_subscription_params
where param_value is null
order by client, name
/
Using the following script, you can retrieve record counts for all client in-queues for map-based publication items, grouped by client.
select clid$$cs as client, count(*) as "RECORD COUNT"
from c$in_messages
group by clid$$cs
/
Using the following SQL script, you can retrieve record counts for all client in-queues for map-based publication items, grouped by store.
select clid$$cs as client, tranid$$ as transaction_id, store as item_name,
       count(*) as "RECORD COUNT"
from c$in_messages
group by clid$$cs, tranid$$, store
/
Using the following SQL script, you can retrieve a list of all client sequence partitions and current sequence values.
select clientid, name, curr_val, incr
from c$all_sequence_partitions
order by clientid, name
/
Using the following SQL script, you can retrieve a list of all publication item indexes.
select publication as NAME, publication_item, conflict_rule as "INDEX_TYPE",
       columns
from c$all_indexes
order by publication, publication_item
/
When you create a SQL script in MDW or with the Consolidator APIs, you should include all dependent DDL statements in the same script, in the order necessary. If you separate dependent DDL statements into separate scripts, Oracle Database Lite may execute them in an arbitrary order, causing dependency errors and re-execution of each script. See Section 5.7 "Create and Load a SQL Script" in the Oracle Database Lite Developer's Guide for more information.
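For example, a single script can keep dependent DDL statements together, in dependency order. The table and index names below are hypothetical:

```sql
-- The index depends on the table, so both belong in one script,
-- with the CREATE TABLE first.
CREATE TABLE orders (
  order_id  NUMBER PRIMARY KEY,
  clientid  VARCHAR2(30)
);
CREATE INDEX orders_clientid_idx ON orders (clientid);
```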
The following parameters in the [CONSOLIDATOR]
section of the webtogo.ora
file are used for tuning synchronization:
The MAX_THREADS parameter is used by the MGP and controls the number of concurrent threads. As a rule, do not set this higher than 1.5 times the number of CPUs on the database machine. For example, if your system has four CPUs, then you should not set it higher than six.
The MAX_CONCURRENT
parameter controls how many users can synchronize in parallel. Once the maximum number of concurrent synchronization requests is reached, additional requests block until one or more synchronization requests completes. If you do not set this parameter, then there is no maximum.
The CONNECTION_TIMEOUT
parameter specifies in minutes the JDBC connection timeout for the synchronization session.
The COMPOSE_TIMEOUT
parameter specifies in seconds the MGP timeout for the compose phase for each user.
The CONNECTION_POOL
parameter enables pooling of database connections.
The MAX_THREADS
parameter sets the maximum number of threads spawned within the MGP process.
For full details on these and more parameters, see Section A.6, "CONSOLIDATOR" in the Oracle Database Lite Administration and Deployment Guide.
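For instance, a [CONSOLIDATOR] section combining the parameters above might look like the following. The values are illustrative assumptions only, including the YES value for CONNECTION_POOL; see Section A.6, "CONSOLIDATOR" in the Oracle Database Lite Administration and Deployment Guide for the authoritative syntax and defaults.

```ini
[CONSOLIDATOR]
# Example values only; tune for your environment.
MAX_THREADS=6          # no more than 1.5 x CPUs (here, a 4-CPU machine)
MAX_CONCURRENT=25      # cap on parallel synchronization requests
CONNECTION_TIMEOUT=10  # JDBC connection timeout, in minutes
COMPOSE_TIMEOUT=600    # MGP compose timeout per user, in seconds
CONNECTION_POOL=YES    # enable database connection pooling
```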
Each synchronization request requires a number of system resources, such as creating a database connection, using memory, and so on. If you have too many requests competing for the same resources, then the overall performance can be poor. Limiting the number of parallel requests with the MAX_THREADS and MAX_CONCURRENT parameters improves the average response time.
Set the MAX_THREADS and MAX_CONCURRENT parameters if you notice that the synchronization performance is not linear. For example, if twice the number of parallel requests results in a synchronization time that is five times longer for each client, then you probably have resource contention. The correct value depends on your environment and should be determined on a trial-and-error basis.
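The 1.5 × CPU rule of thumb for MAX_THREADS can be written down directly. This is a sketch; rounding down for odd CPU counts is an assumption, not a documented rule.

```python
# Rule-of-thumb sketch: MAX_THREADS should not exceed 1.5 times the
# number of CPUs on the database machine (truncation is an assumption).
def max_threads_upper_bound(num_cpus):
    return int(num_cpus * 1.5)

print(max_threads_upper_bound(4))  # 6, matching the four-CPU example
```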
You can increase synchronization performance by monitoring the performance of the SQL queries in your applications. The following sections provide details on how to tune your queries:
You should avoid using database query constructs that prevent a view from being mergeable, as publication item queries that use non-mergeable views do not perform well. Examples of such constructs are UNION, MINUS, and CONNECT BY. For more information on mergeable views, see the Oracle database documentation.
Once you have defined your application, use the consperf
utility to profile the performance of the publication item queries. Mobile Server does not execute your publication item queries directly; instead the query is wrapped into a template query, which is executed by Mobile Server. The template query may have an unexpected query execution plan, resulting in poor performance. The consperf
utility generates an EXPLAIN PLAN execution plan for those template queries, allowing you to tune your publication item query for best performance. In addition, consperf generates timing information for the execution of all template queries, so that you can identify bottleneck queries. For more information on the consperf utility, see Section 1.2.1, "Analyzing Performance of Publications With the Consperf Utility".
You must make sure that the optimizer picks the correct execution path when you either are using the cost-based optimizer or you have set the optimizer settings to choose
. The optimizer can pick the correct execution path only when all of the tables are properly analyzed and statistics have been gathered for these tables.
The Mobile Server uses temporary tables during synchronization. Once a number of users have been created, and they have synchronized with Mobile Server, run consperf with the gatherstats
option to generate the statistics information for the temporary tables. For more information on the consperf utility, see Section 1.2.1, "Analyzing Performance of Publications With the Consperf Utility".
Tablespace layout across multiple disks can improve the performance of Mobile Server data synchronization, as it reduces movement of the disk heads and improves I/O response time.
By default, the synchronization tablespace is SYNCSERVER, and is stored in the mobilexx.dbf file in the default location for the database instance under ORACLE_HOME, where xx is a number between 1 and 25. The tablespace name, filename, and file location for the tablespace are defined in the $ORACLE_HOME/Mobile/Server/admin/consolidator_o8a.sql script file, which is executed during the Mobile Server installation process. So, if you want to modify the tablespace, perform the following BEFORE you install the Mobile Server; otherwise, the default tablespace is created.
If you want to customize the SYNCSERVER
tablespace, for example, by using multiple data files spread across several disks, or by using specific storage parameters, then you can precreate the SYNCSERVER
tablespace with the required settings. The installation process automatically detects that the tablespace exists and uses it. Refer to the Oracle Database documentation for full details on how to create a tablespace.
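For example, the tablespace could be precreated with data files spread across two disks. The file paths and sizes below are illustrative assumptions; refer to the Oracle Database documentation for the full CREATE TABLESPACE syntax.

```sql
-- Precreate SYNCSERVER before installing the Mobile Server;
-- the installer detects the existing tablespace and uses it.
CREATE TABLESPACE SYNCSERVER
  DATAFILE '/disk1/oradata/mobile01.dbf' SIZE 500M,
           '/disk2/oradata/mobile02.dbf' SIZE 500M;
```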
It is very common for publications to contain publication items that are used specifically for lookup purposes. That is, a publication item that creates a read-only snapshot. The server may change these snapshots, but the client would never update them directly. Furthermore, many users often share the data in this type of snapshot. For example, there could be a publication item called zip_codes
, which is subscribed to by all Mobile users.
The main function of Shared Maps is to improve scalability for this type of publication item by allowing users to share record state information and reduce the size of the resulting replication map tables. By default, a non-updatable publication item uses shared maps.
Note: Shared Maps can also be used with updatable snapshots if the developer is willing to implement their own conflict detection and resolution logic; however, shared maps are normally used only for non-updatable snapshots.
Shared maps shrink the size of map tables for large lookup publication items and reduce the MGP compose time. Lookup publication items contain read-only data that is not updatable on the clients and that is shared by multiple subscribed clients. When multiple users share the same data, their query subsetting parameters are usually identical.
For example, a query could be the following:
SELECT * FROM EMP WHERE DEPTNO = :dept_id
In the preceding example, all users that share data from the same department have the same value for dept_id
. The default sharing method is based on subscription parameter values.
In the following example, the query is:
SELECT * FROM EMP WHERE DEPTNO = ( SELECT DEPTNO FROM EMP WHERE EMPNO = :emp_id )
In this example, users from the same departments still share data. Their subsetting parameters are not equal, because each user has a unique emp_id
. To support the sharing of data for these types of queries (as illustrated by the example), a grouping function can be specified. The grouping function returns a unique group id
based on the client id
.
There is also another possible use for shared maps: shared updatable publication items. However, this type of usage requires implementation of a custom DML procedure that handles conflict resolution.
The performance of the MGP compose cycle is directly proportional to the following:
NC * NPI
where:
NC
= number of clients
NPI
= number of publication items that must be composed
With shared maps, the length of the MGP cycle is proportional to the following:
NC*(NPI - NSPI) + NG*NSPI
where:
NSPI
= number of shared publication items
NG
= number of groups
Note: If NG = NC, then the MGP performance is similar in both cases. However, with fewer groups and more shared publication items, the MGP compose cycle becomes faster. In addition, map storage requirements are governed by these same factors.
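The savings implied by the two formulas can be checked with a quick sketch. The client and item counts below are invented purely for illustration.

```python
# Sketch comparing MGP compose cost with and without shared maps.
# Units are abstract "compose operations", proportional to cycle length.
def cost_unshared(nc, npi):
    # cost proportional to NC * NPI
    return nc * npi

def cost_shared(nc, npi, nspi, ng):
    # cost proportional to NC * (NPI - NSPI) + NG * NSPI
    return nc * (npi - nspi) + ng * nspi

nc, npi, nspi, ng = 1000, 10, 8, 20
print(cost_unshared(nc, npi))          # 10000
print(cost_shared(nc, npi, nspi, ng))  # 2160: a far shorter compose cycle
```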
To set up a publication item to be shared, use the AddPublicationItem
API and enable the shared flag. It is also possible to toggle the shared property of a publication item once it is added to the publication with the SetPublicationItemMetadata
API. Both the AddPublicationItem
API and the SetPublicationItemMetadata
API allow users to specify a PL/SQL grouping function. The function signature must be as follows:
( CLIENT in VARCHAR2, PUBLICATION in VARCHAR2, ITEM in VARCHAR2 ) return VARCHAR2.
The returned value must uniquely identify the client's group. For example, if client A belongs to the group GroupA and client B belongs to the group GroupB, the group function F could return:
F ('A','SUBSCRIPTION','PI_NAME') = 'GroupA'
F ('B','SUBSCRIPTION','PI_NAME') = 'GroupB'
The implicit assumption of the grouping function is that all the members of the GroupA group share the same data, and that all the members of the GroupB group share the same data. The group function uniquely identifies a group of users with the same data for a particular publication item.
For the query example in Section 1.2.7, "Shared Maps", the grouping function could be:
Function get_emp_group_id (
  clientid    in varchar2,
  publication in varchar2,
  item        in varchar2 )
return varchar2 is
  group_val_id varchar2(30);
begin
  select DEPTNO into group_val_id from EMP where EMPNO = clientid;
  return group_val_id;
end;
Note: This function assumes that EMPNO is the Consolidator Manager client id. If the group_fnc is not specified, the default grouping is based on subscription parameters.
If you have been using a version prior to Oracle Database Lite 10g, then you must migrate your existing Mobile Server schema with shared maps, as follows:
Run one cycle of MGP.
The clients must sync with the server to get the latest changes prepared by the MGP.
Stop the Web server and MGP to migrate the server to 10g. This automatically sets all the nonupdatable publication items to shared items. If any shared publication items need to use grouping functions or any publication items need to change their sharing attribute, execute custom code that calls the appropriate Consolidator Manager API. See the SetPublicationItemMetadata
API in Section 1.2.7.2, "Shared Map Usage".
The ShrinkSharedMaps
Consolidator Manager API must be called to set the clients to use shared map data and remove old redundant data from the maps.
Start the Web server and MGP.
Sync Server database objects called map tables are used to maintain the state for each Mobile client. If there are a large number of clients, and each client subscribes to a large amount of data, the map tables can become very large, creating scalability issues. Using the following APIs, map tables can be partitioned by client id, making them more manageable.
The API allows you to create a map table partition, add additional partitions, drop one or all partitions, and merge map table partitions. Map table partitions can be monitored using the ALL_PARTITIONS
database catalog view.
Note: This form of partitioning is not related to the partition functionality provided by Oracle Server, and is used exclusively by Oracle Database Lite 10g.
Creates a partition for the referenced publication item map table. If there is data in the map table, it is transferred to the partition being created. After the partition has been successfully created, the map table can be truncated to remove redundant data using the SQL command TRUNCATE TABLE.
Syntax
public static void partitionMap (String pub_item, int num_parts, String storage, String ind_storage) throws Throwable
The parameters of partitionMap
are listed in Table 1-4.
Table 1-4 The partitionMap Parameters
Parameter | Definition
---|---
pub_item | The publication item whose map table is being partitioned.
num_parts | The number of partitions.
storage | A string specifying the storage parameters, using standard SQL storage-clause syntax.
ind_storage | A string specifying the storage parameters for indexes on the partition, using standard SQL storage-clause syntax.
Example
consolidatorManager.partitionMap("P_SAMPLE1", 5, "tablespace mobileadmin", "initrans 10 pctfree 70");
Adds a partition for the referenced publication item's map table. If there is data in the map table, it is transferred to the partition being created. After the partition has been successfully created, the map table can be truncated to remove redundant data using the SQL command TRUNCATE TABLE.
Syntax
public static void addMapPartitions ( String pub_item, int num_parts, String storage, String ind_storage) throws Throwable
The parameters of addMapPartitions
are listed in Table 1-5:
Table 1-5 The addMapPartitions Parameters
Parameter | Definition
---|---
pub_item | The publication item whose map table is being partitioned.
num_parts | The number of partitions.
storage | A string specifying the storage parameters, using standard SQL storage-clause syntax.
ind_storage | A string specifying the storage parameters for indexes on the partition, using standard SQL storage-clause syntax.
Example
consolidatorManager.addMapPartitions("P_SAMPLE1", 5, "tablespace mobileadmin", "initrans 10 pctfree 40");
Note: Map partitions are created only for existing users. New users are placed in the original map table.
Drops the named partition. In the following example, the partition parameter is the name of the partition. Partition names must be retrieved by querying the ALL_PARTITIONS catalog view, CV$ALL_PARTITIONS, since partitions are named by Data Synchronization.
Syntax
public static void dropMapPartition( String partition) throws Throwable
Example
consolidatorManager.dropMapPartition("MAP101_1");