Oracle TopLink Developer's Guide
10g Release 3 (10.1.3)
B13593-01

Query Optimization

TopLink provides an extensive query API for reading, writing, and updating data. This section describes ways of optimizing query performance in various circumstances.

Before optimizing queries, consider the optimization suggestions in "Data Access Optimization".

This section includes information on the following:

Parameterized SQL and Prepared Statement Caching

These features let you cache and reuse a query's pre-parsed database statement when the query is re-executed.

For more information, see "Parameterized SQL (Binding) and Prepared Statement Caching".

Named Queries

Whenever possible, use named queries in your application. Named queries help you avoid duplication, are easy to maintain and reuse, and easily add complex query behavior to the application. Using named queries also allows for the query to be prepared once, and for the SQL generation to be cached.

For more information, see "Named Queries".
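The "prepared once, cached" benefit can be pictured with a minimal sketch (plain Java, not the TopLink named-query API; NamedQueryRegistry and its methods are illustrative names): a query definition is registered under a name, its SQL is generated on first execution, and subsequent executions reuse the cached result.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only, not the TopLink API: a registry that stores a
// query definition under a name and generates its SQL once, on first use.
public class NamedQueryRegistry {
    private final Map<String, String> definitions = new HashMap<>();
    private final Map<String, String> generatedSql = new HashMap<>();
    private int prepareCount = 0;

    // Register a query definition under a name; SQL generation is deferred.
    public void addQuery(String name, String definition) {
        definitions.put(name, definition);
    }

    // Execute by name: the SQL is generated on first use, then cached,
    // so repeated executions skip the expensive preparation step.
    public String execute(String name) {
        String sql = generatedSql.get(name);
        if (sql == null) {
            prepareCount++; // the costly step happens only once per name
            sql = "SQL[" + definitions.get(name) + "]";
            generatedSql.put(name, sql);
        }
        return sql;
    }

    public int getPrepareCount() {
        return prepareCount;
    }
}
```

However many times a named query is executed, the preparation cost is paid only on the first execution.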

Batch and Join Reading

To optimize database read operations, TopLink supports both batch and join reading. When you use these techniques, you dramatically decrease the number of times you access the database during a read operation, especially when your result set contains a large number of objects.

For more information, see "Using Batch Reading" and "Using Join Reading".

Partial Object Queries and Fetch Groups

Partial object queries let you retrieve partially populated objects from the database rather than complete objects.

For CMP applications, you can use fetch groups to accomplish the same performance optimization.

For more information about partial object reading, see "Partial Object Queries".

For more information about fetch groups, see "Fetch Groups".

JDBC Fetch Size

The JDBC fetch size gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.

For large queries that return a large number of objects, you can configure the row fetch size used in the query to improve performance by reducing the number of database hits required to satisfy the selection criteria.

Most JDBC drivers default to a fetch size of 10, so if you are reading 1000 objects, increasing the fetch size to 256 can significantly reduce the time required to fetch the query's results. The optimal fetch size is not always obvious. Usually, a fetch size of one half or one quarter of the total expected result size is optimal. If you are unsure of the result set size, note that setting the fetch size too large or too small can decrease performance.

Set the query fetch size with ReadQuery method setFetchSize as Example 11-2 shows. Alternatively, you can use ReadQuery method setMaxRows to set the limit for the maximum number of rows that any ResultSet can contain.

Example 11-2 JDBC Driver Fetch Size

ReadAllQuery query = new ReadAllQuery();
query.setReferenceClass(Employee.class);
query.setSelectionCriteria(new ExpressionBuilder().get("id").greaterThan(100));

// Set the JDBC fetch size
query.setFetchSize(50);

// Configure the query to return results as a ScrollableCursor
query.useScrollableCursor();

// Execute the query
ScrollableCursor cursor = (ScrollableCursor) session.executeQuery(query);

// Iterate over the results
while (cursor.hasNext()) {
    System.out.println(cursor.next().toString());
}
cursor.close();

In this example, when you execute the query, the JDBC driver retrieves the first 50 rows from the database (or all rows, if fewer than 50 rows satisfy the selection criteria). As you iterate over the first 50 rows, each time you call cursor.next(), the JDBC driver returns a row from local memory; it does not need to retrieve the row from the database. When you try to access the fifty-first row (assuming more than 50 rows satisfy the selection criteria), the JDBC driver again goes to the database and retrieves another 50 rows. In this way, 100 rows are returned with only two database hits.

If you specify a value of zero, then the hint is ignored: the JDBC driver returns one row at a time. The default value is zero.
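The round-trip arithmetic behind these numbers can be sketched in plain Java (a model of the behavior described above, not a real driver):

```java
public class FetchSizeMath {
    // Database round trips needed to iterate over totalRows when the driver
    // fetches fetchSize rows per trip. A fetch size of zero is modeled as the
    // worst case described above: one row per trip.
    public static int roundTrips(int totalRows, int fetchSize) {
        if (fetchSize <= 0) {
            return totalRows;
        }
        return (totalRows + fetchSize - 1) / fetchSize; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(100, 50));   // the two hits in the example above
        System.out.println(roundTrips(1000, 10));  // a typical driver default
        System.out.println(roundTrips(1000, 256)); // a larger fetch size
    }
}
```

Reading 1000 rows at the default fetch size of 10 costs 100 round trips; at a fetch size of 256 it costs only 4.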

Cursored Streams and Scrollable Cursors

You can configure a query to retrieve data from the database using a cursored Java stream or scrollable cursor. This lets you view a result set in manageable increments rather than as a complete collection. This is useful when you have a large result set. You can further tune performance by configuring the JDBC driver fetch size used (see "JDBC Fetch Size").

For more information about scrollable cursors, see "Handling Cursor and Stream Query Results".
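The increment-at-a-time idea can be sketched with a plain-Java helper (illustrative only; readNext is not the TopLink CursoredStream API): each call pulls at most one page of results, so the full result set is never held in memory at once.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IncrementalReader {
    // Pull up to pageSize elements from the cursor. Returns an empty list
    // once the cursor is exhausted, which ends the caller's loop.
    public static <T> List<T> readNext(Iterator<T> cursor, int pageSize) {
        List<T> page = new ArrayList<>();
        while (page.size() < pageSize && cursor.hasNext()) {
            page.add(cursor.next());
        }
        return page;
    }
}
```

A caller loops, processing and discarding each page before requesting the next, which keeps memory use bounded by the page size rather than the result size.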

Read Optimization Examples

TopLink provides the read optimization features listed in Table 11-12.

This section includes the following read optimization examples:

Table 11-12 Read Optimization Features

Feature / Function / Performance Technique

Unit of Work

Tracks object changes within the Unit of Work.

To minimize the amount of tracking required, registers only those objects that will change.

For more information, see "Understanding TopLink Transactions".

Indirection

Uses indirection objects to defer the loading and processing of relationships.

Provides a major performance benefit. It allows database access to be optimized and allows TopLink to internally make several optimizations in caching and unit of work.

Soft cache, weak identity map

Offers client-side caching for objects read from database, and drops objects from the cache when memory becomes low.

Reduces database calls and improves memory performance.

For more information, see "Cache Type and Object Identity".

Weak identity map

Offers client-side caching for objects.

Reduces database access and maintains a cache of all referenced objects.

For more information, see "Cache Type and Object Identity".

Batch reading and joining

Reduces database access by batching many queries into a single query that reads more data.

Dramatically reduces the number of database accesses required to perform a read query.

For more information, see "Using Batch Reading" and "Using Join Reading".

Partial object reading and fetch groups

Allows reading of a subset of the object's attributes from a query's result set.

Reduces the amount of data read from the database.

For more information, see "Partial Object Queries".

For more information about fetch groups, see "Fetch Groups".

Report query

Similar to partial object reading, but returns only the data instead of the objects.

Supports complex reporting functions such as aggregation and group-by functions. Also lets you compute complex results on the database, instead of reading the objects into the application and computing the results locally.

For more information, see "Report Query".

JDBC fetch size and ReadQuery maximum rows

Reduces the number of database hits required to return all the rows that satisfy selection criteria.

For more information, see "JDBC Fetch Size".

Cursors

Lets you view a large result set in manageable increments rather than as a complete collection.

For more information, see "Cursored Streams and Scrollable Cursors".

Inheritance views

Allows a view to be used for queries against an inheritance superclass that can read all of its subclasses in a single query, instead of multiple queries.

For more information, see "Reading Case 5: Inheritance Views".


Reading Case 1: Displaying Names in a List

An application may ask the user to choose an element from a list. Because the list displays only a subset of the information contained in the objects, it is not necessary to query for all information for objects from the database.

TopLink features that optimize these types of operations include partial object reading, report queries, and (for CMP applications) fetch groups.

These features let you query only the information required to display the list. The user can then select an object from the list.

Partial Object Reading

Partial object reading is a query designed to extract only the required information from a selected record in a database, rather than all the information the record contains. Because partial object reading does not fully populate objects, you can neither cache nor edit partially read objects.

For more information about partial object queries, see "Partial Object Queries".

In Example 11-3, the query builds complete employee objects, even though the list displays only employee last names. With no optimization, the query reads all the employee data.

Example 11-3 No Optimization

/* Read all the employees from the database, ask the user to choose one and return it. This must read in all the information for all the employees.*/
List list;

// Fetch data from database and add to list box
Vector employees = (Vector) session.readAllObjects(Employee.class);
list.addAll(employees);

// Display list box
....

// Get selected employee from list
Employee selectedEmployee = (Employee) list.getSelectedItem();

return selectedEmployee;

Example 11-4 demonstrates the use of partial object reading. It reads only the last name and primary key for the employee data. This reduces the amount of data read from the database.

Example 11-4 Optimization Through Partial Object Reading

/* Read all the employees from the database, ask the user to choose one and return it. This uses partial object reading to read just the last names of the employees. Since TopLink automatically includes the primary key of the object, the full object can easily be read for editing. */
List list;

// Fetch data from database and add to list box
ReadAllQuery query = new ReadAllQuery(Employee.class);
query.addPartialAttribute("lastName");

// The next line avoids a query exception
query.dontMaintainCache();
Vector employees = (Vector) session.executeQuery(query);
list.addAll(employees);

// Display list box
....

// Get selected employee from list
Employee selectedEmployee = (Employee)session.readObject(list.getSelectedItem());
return selectedEmployee;

Report Query

Report query lets you retrieve data from a set of objects and their related objects. Report query supports database reporting functions and features.

For more information, see "Report Query Results".

Example 11-5 demonstrates the use of report query to read only the last name of the employees. This reduces the amount of data read from the database compared to the code in Example 11-3, and avoids instantiating employee instances.

Example 11-5 Optimization Through Report Query

/* Read all the employees from the database, ask the user to choose one and return it. This uses the report query to read just the last name of the employees. It then uses the primary key stored in the report query result to read the real object.*/
List list;

// Fetch data from database and add to list box
ExpressionBuilder builder = new ExpressionBuilder();
ReportQuery query = new ReportQuery(Employee.class, builder);
query.addAttribute("lastName");
query.retrievePrimaryKeys();
Vector reportRows = (Vector) session.executeQuery(query);
list.addAll(reportRows);

// Display list box
....

// Get selected employee from list
ReportQueryResult result = (ReportQueryResult) list.getSelectedItem();
Employee selectedEmployee = (Employee) result.readObject(Employee.class, session);

Although the differences between the unoptimized example (Example 11-3) and the report query optimization in Example 11-5 appear to be minor, report queries offer a substantial performance improvement.

Fetch Groups

Fetch groups, applicable only to CMP projects, are similar to partial object reading, but do allow caching of the objects read. For objects with many attributes or with reference attributes to complex graphs (or both), you can define a fetch group that determines which attributes are returned when an object is read. Because TopLink automatically executes additional queries when a get method is called for an attribute that is not in the fetch group, ensure that the unfetched data is not required: refetching data can become a performance issue.

For more information about querying with fetch groups, see "Using Queries with Fetch Groups".

Example 11-6 demonstrates the use of a static fetch group.

Example 11-6 Configuring a Query with a FetchGroup Using the FetchGroupManager

// Create static fetch group at the descriptor level
FetchGroup group = new FetchGroup("nameOnly");
group.addAttribute("firstName");
group.addAttribute("lastName");
descriptor.getFetchGroupManager().addFetchGroup(group);

// Use static fetch group at query level
ReadAllQuery query = new ReadAllQuery(Employee.class);
query.setFetchGroupName("nameOnly");

// Only Employee attributes firstName and lastName are fetched.
// If you call the Employee get method for any other attribute, TopLink executes
// another query to retrieve all unfetched attribute values. Thereafter, calling that get
// method will return the value directly from the object

Reading Case 2: Batch Reading Objects

The way your application reads data from the database affects performance. For example, reading a collection of rows from the database is significantly faster than reading each row individually.

A common performance challenge is to read a collection of objects that have a one-to-one reference to another object. This typically requires one read operation to read in the source rows, and one call for each target row in the one-to-one relationship.

To reduce the number of read operations required, use join and batch reading. Example 11-7 illustrates the unoptimized code required to retrieve a collection of objects with a one-to-one reference to another object. Example 11-8 and Example 11-9 illustrate the use of joins and batch reading to improve efficiency.

Example 11-7 No Optimization

/* Read all the employees, and collect their addresses' cities. This takes
   N + 1 queries if not optimized.
*/

// Read all the employees from the database. This requires 1 SQL call
Vector employees = session.readAllObjects(Employee.class,
    new ExpressionBuilder().get("lastName").equal("Smith"));

// SQL: Select * from Employee where l_name = 'Smith'

// Iterate over employees and get their addresses.
// This requires N SQL calls
Enumeration employeeEnum = employees.elements();
Vector cities = new Vector();
while (employeeEnum.hasMoreElements()) {
    Employee employee = (Employee) employeeEnum.nextElement();
    cities.addElement(employee.getAddress().getCity());
    // SQL: Select * from Address where address_id = 123, etc.
}

Example 11-8 Optimization Through Joining

/* Read all the employees, and collect their addresses' cities. Although the
   code is almost identical, because joining optimization is used it takes
   only 1 query.
*/

// Read all the employees from the database, using joining.
// This requires 1 SQL call
ReadAllQuery query = new ReadAllQuery();
query.setReferenceClass(Employee.class);
query.setSelectionCriteria(new ExpressionBuilder().get("lastName").equal("Smith"));
query.addJoinedAttribute("address");
Vector employees = (Vector) session.executeQuery(query);

/* SQL: Select E.*, A.* from Employee E, Address A where E.l_name = 'Smith'
   and E.address_id = A.address_id

   Iterate over employees and get their addresses.
   The previous SQL already read all the addresses, so no SQL is required.
*/
Enumeration employeeEnum = employees.elements();
Vector cities = new Vector();
while (employeeEnum.hasMoreElements()) {
    Employee employee = (Employee) employeeEnum.nextElement();
    cities.addElement(employee.getAddress().getCity());
}

Example 11-9 Optimization Through Batch Reading

/* Read all the employees, and collect their addresses' cities. Although the
   code is almost identical, because batch reading optimization is used it
   takes only 2 queries.
*/

// Read all the employees from the database, using batch reading.
// This requires 1 SQL call; note that only the employees are read
ReadAllQuery query = new ReadAllQuery();
query.setReferenceClass(Employee.class);
query.setSelectionCriteria(new ExpressionBuilder().get("lastName").equal("Smith"));
query.addBatchReadAttribute("address");
Vector employees = (Vector) session.executeQuery(query);

// SQL: Select * from Employee where l_name = 'Smith'

// Iterate over employees and get their addresses.
// The first address accessed causes all the addresses to be read in a single SQL call
Enumeration employeeEnum = employees.elements();
Vector cities = new Vector();
while (employeeEnum.hasMoreElements()) {
    Employee employee = (Employee) employeeEnum.nextElement();
    cities.addElement(employee.getAddress().getCity());
    // SQL: Select distinct A.* from Employee E, Address A
    //      where E.l_name = 'Smith' and E.address_id = A.address_id
}

Because the optimized approaches (Example 11-8 and Example 11-9) access the database only once or twice, they are significantly faster than the approach illustrated in Example 11-7.

Joins offer a significant performance increase under most circumstances. Batch reading offers a further performance advantage in that it allows for delayed loading through value holders, and has much better performance where the target objects are shared.

For example, if employees in Example 11-7, Example 11-8, and Example 11-9 are at the same address, batch reading reads much less data than joining, because batch reading uses a SQL DISTINCT call to filter duplicate data.

Batch reading is available for one-to-one, one-to-many, many-to-many, direct collection, direct map and aggregate collection mappings. Joining is only available for one-to-one and one-to-many mappings. Note that one-to-many joining will return a large amount of duplicate data and so is normally less efficient than batch reading.
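The shared-target effect can be made concrete with some simple counting (a sketch of row volumes, not TopLink internals): a one-to-one join returns one copy of the address data per employee row, while batch reading's second, DISTINCT query returns each referenced address only once.

```java
import java.util.HashSet;
import java.util.Set;

public class RowCounts {
    // addressIds[i] is the address referenced by employee i.

    // Joining: the joined SQL returns the address columns on every employee
    // row, so shared addresses are transferred once per referencing employee.
    public static int joinedAddressRows(int[] addressIds) {
        return addressIds.length;
    }

    // Batch reading: the second query selects DISTINCT addresses, so each
    // shared address is transferred only once.
    public static int batchAddressRows(int[] addressIds) {
        Set<Integer> distinct = new HashSet<>();
        for (int id : addressIds) {
            distinct.add(id);
        }
        return distinct.size();
    }
}
```

For example, for 100 employees that share 10 addresses, the join transfers 100 copies of address data, while batch reading transfers only 10.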


WARNING:

Allowing an unverified SQL string to be passed into methods (for example: readAllObjects(Class class, String sql) method) makes your application vulnerable to SQL injection attacks.


Reading Case 3: Using Complex Custom SQL Queries

TopLink provides a high-level query mechanism. However, if your application requires a complex query, a direct SQL or stored procedure call may be the best solution.

For more information about executing SQL calls, see "SQLCall".

Reading Case 4: Using View Objects

Some application operations require information from several objects rather than from just one. This can be difficult to implement, and resource-intensive. Example 11-10 illustrates unoptimized code that reads information from several objects.

Example 11-10 No Optimization

/* Gather the information to report on an employee and return the summary of the 
   information. In this situation, a hash table is used to hold the report 
   information. Notice that this reads a lot of objects from the database, but 
   uses very little of the information contained in the objects. This may take 5 
   queries and read in a large number of objects.
*/

public Hashtable reportOnEmployee(String employeeName)
{
    Vector projects, associations;
    Hashtable report = new Hashtable();
    // Retrieve employee from database
    Employee employee = (Employee) session.readObject(Employee.class,
        new ExpressionBuilder().get("lastName").equal(employeeName));
    // Get all the projects affiliated with the employee
    projects = session.readAllObjects(Project.class, "SELECT P.* FROM PROJECT P, "
        + "EMPLOYEE E WHERE P.MEMBER_ID = E.EMP_ID AND E.L_NAME = " + employeeName);
    // Get all the associations affiliated with the employee
    associations = session.readAllObjects(Association.class, "SELECT A.* "
        + "FROM ASSOC A, EMPLOYEE E WHERE A.MEMBER_ID = E.EMP_ID AND E.L_NAME = "
        + employeeName);
    report.put("firstName", employee.getFirstName());
    report.put("lastName", employee.getLastName());
    report.put("manager", employee.getManager());
    report.put("city", employee.getAddress().getCity());
    report.put("projects", projects);
    report.put("associations", associations);
    return report;
}

To improve application performance in these situations, define a new read-only object to encapsulate this information, and map it to a view on the database. To set the object to be read-only, use the addDefaultReadOnlyClass API in the oracle.toplink.sessions.Project class.

Example 11-11 Optimization Through View Object

CREATE VIEW EMPLOYEE_VIEW AS (SELECT E.F_NAME AS F_NAME, E.L_NAME AS L_NAME,
E.EMP_ID AS EMP_ID, M.L_NAME AS MANAGER_NAME, A.CITY AS CITY
FROM EMPLOYEE E, EMPLOYEE M, ADDRESS A
WHERE E.MANAGER_ID = M.EMP_ID
AND E.ADDRESS_ID = A.ADDRESS_ID)

Define a descriptor for the EmployeeReport class:

  • Define the descriptor as usual, but specify the tableName as EMPLOYEE_VIEW.

  • Map only the attributes required for the report. In the case of the numberOfProjects and associations, use a transformation mapping to retrieve the required data.

You can now query the report from the database in the same way as any other object enabled by TopLink.

Example 11-12 View the Report from Example 11-11

/* Return the report for the employee */
public EmployeeReport reportOnEmployee(String employeeName) 
{
    EmployeeReport report;
    report = (EmployeeReport) session.readObject(EmployeeReport.class,
        new ExpressionBuilder().get("lastName").equal(employeeName));
    return report;
}

WARNING:

Allowing an unverified SQL string to be passed into methods (for example: readAllObjects(Class class, String sql) and readObject(Class class, String sql) method) makes your application vulnerable to SQL injection attacks.


Reading Case 5: Inheritance Views

If you have an inheritance hierarchy that spans multiple tables and frequently query for the root class, consider defining an inheritance all-subclasses view. This allows a view to be used for queries against an inheritance superclass that can read all of its subclasses in a single query instead of multiple queries.

For more information about inheritance, see "Descriptors and Inheritance".

For more information about querying on inheritance, see "Querying on an Inheritance Hierarchy".

Write Optimization Examples

TopLink provides the write optimization features listed in Table 11-13.

This section includes the following write optimization examples:

Table 11-13 Write Optimization Features

Feature / Effect on Performance

Unit of Work

Improves performance by updating only the changed fields and objects.

Minimizes the amount of tracking required (which can be expensive) by registering only those objects that will change.

For more information, see "Understanding TopLink Transactions".

Note: The Unit of Work supports marking classes as read-only (see "Configuring Read-Only Descriptors" and "Declaring Read-Only Classes"). This avoids tracking of objects that do not change.

Batch writing

Lets you group all insert, update, and delete commands from a transaction into a single database call. This dramatically reduces the number of calls to the database (see "Cursors" and "Batch Writing and Parameterized SQL").

Parameterized SQL

Improves performance for frequently executed SQL statements (see "Parameterized SQL and Prepared Statement Caching").

Sequence number preallocation

Dramatically improves insert performance. (see "Sequence Number Preallocation").

Multiprocessing

Splitting a batch job across threads lets you synchronize reads from a cursored stream and use parallel Units of Work for performance improvements even on a single machine (see "Multiprocessing").

Does-exist alternatives

In certain situations, you can avoid the does-exist call issued when an object is written by checking the cache for existence, or by assuming the existence of the object (see "Configuring Existence Checking at the Project Level", "Configuring Cache Existence Checking at the Descriptor Level", and "Using Registration and Existence Checking").


Writing Case: Batch Writes

The most common write performance problem occurs when a batch job inserts a large volume of data into the database. For example, consider a batch job that loads a large amount of data from one database, and then migrates the data into another. The objects involved:

  • Are simple individual objects with no relationships

  • Use generated sequence numbers as their primary key

  • Have an address that also uses a sequence number

The batch job loads 10,000 employee records from the first database and inserts them into the target database. With no optimization, the batch job reads all the records from the source database, acquires a Unit of Work from the target database, registers all objects, and commits the Unit of Work.

Example 11-13 No Optimization

/* Read all the employees, acquire a Unit of Work, and register them */

// Read all the employees from the database. This requires 1 SQL call, but will be very memory intensive as 10,000 objects will be read
Vector employees = sourceSession.readAllObjects(Employee.class);

//SQL: Select * from Employee

// Acquire a Unit of Work and register the employees
UnitOfWork uow = targetSession.acquireUnitOfWork();
uow.registerAllObjects(employees);
uow.commit();

// SQL: Begin transaction
// SQL: Update Sequence set count = count + 1 where name = 'EMP'
// SQL: Select count from Sequence
// SQL: ... repeat this 10,000 times + 10,000 times for the addresses ...
// SQL: Commit transaction
// SQL: Begin transaction
// SQL: Insert into Address (...) values (...)
// SQL: ... repeat this 10,000 times
// SQL: Insert into Employee (...) values (...)
// SQL: ... repeat this 10,000 times
// SQL: Commit transaction

This batch job performs poorly, because it requires 60,000 SQL executions. It also reads huge amounts of data into memory, which can raise memory performance issues. TopLink offers several optimization features to improve the performance of this batch job.

To improve this operation, do the following:

Cursors

To optimize the query in Example 11-13, use a cursored stream to read the employees from the source database. You can also employ a weak identity map instead of a hard or soft cache identity map in both the source and target sessions.

To address the potential for memory problems, use the releasePrevious method after each read to stream the cursor in groups of 100. Register each batch of 100 employees in a new Unit of Work and commit them.

Although this does not reduce the amount of executed SQL, it does address potential out-of-memory issues. When your system runs out of memory, the result is performance degradation that increases over time, and excessive disk activity caused by memory swapping on disk.

For more information, see "Cursored Streams and Scrollable Cursors".

Batch Writing and Parameterized SQL

Batch writing lets you combine a group of SQL statements into a single statement and send it to the database as a single database execution. This feature reduces the communication time between the application and the server, and substantially improves performance.

You can enable batch writing alone (dynamic batch writing) using Login method useBatchWriting. If you add batch writing to Example 11-13, you execute each batch of 100 employees as a single SQL execution. This reduces the number of SQL executions from 20,200 to 300.

You can also enable batch writing together with parameterized SQL (parameterized batch writing) and prepared statement caching. Parameterized SQL avoids the prepare component of each SQL execution, which improves write performance. With parameterized batch writing you would get one statement per batch for Employee and one for Address: this reduces the number of SQL executions from 20,200 to 400. Although this is more than dynamic batch writing alone, parameterized batch writing also avoids all parsing, so it is much more efficient overall.

Although parameterized SQL avoids the prepare component of SQL execution, it does not reduce the number of executions. Because of this, parameterized SQL alone may not offer as big a gain as batch writing. However, if your database does not support batch writing, parameterized SQL will improve performance. If you add parameterized SQL in Example 11-13, you must still execute 20,200 SQL statements, but parameterized SQL reduces the number of SQL PREPAREs to 4.

For more information, see "Batch Writing".
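The execution counts quoted in this section follow from simple arithmetic. The sketch below reproduces them under the scenario's assumptions (10,000 employees plus 10,000 addresses, sequence preallocation of 200, and commits of 100 employees, that is, 200 inserts, per Unit of Work):

```java
public class ExecutionCounts {
    static final int OBJECTS = 20000;      // 10,000 employees + 10,000 addresses
    static final int SEQUENCE_CALLS = 200; // 100 allocations of 200 numbers, 2 SQL calls each
    static final int COMMITS = 100;        // 10,000 employees committed in batches of 100

    // No batching: every insert is its own SQL execution.
    public static int unbatched() {
        return OBJECTS + SEQUENCE_CALLS;
    }

    // Dynamic batch writing: all inserts in a commit collapse into one batch.
    public static int dynamicBatch() {
        return COMMITS + SEQUENCE_CALLS;
    }

    // Parameterized batch writing: one statement per table per commit.
    public static int parameterizedBatch() {
        return COMMITS * 2 + SEQUENCE_CALLS;
    }
}
```

These methods return the 20,200, 300, and 400 figures discussed above.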

Sequence Number Preallocation

SQL select calls are more resource-intensive than SQL modify calls, so you can realize large performance gains by reducing the number of select calls you issue. The code in Example 11-13 uses select calls to acquire sequence numbers. You can substantially improve performance by using sequence number preallocation.

In TopLink, you can configure the sequence preallocation size on the login object (the default size is 50). Example 11-13 uses a preallocation size of 1 to demonstrate this point. If you stream the data in batches of 100 as suggested in "Cursors", set the sequence preallocation size to 100. Because employees and addresses in the example both use sequence numbering, you further improve performance by letting them share the same sequence. If you set the preallocation size to 200, this reduces the number of SQL executions from 60,000 to 20,200.

For more information about sequencing preallocation, see "Sequencing and Preallocation Size".
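The 60,000-to-20,200 reduction follows from the allocation arithmetic, sketched here (assuming each sequence allocation costs two SQL calls, an update and a select, as in the SQL trace of Example 11-13):

```java
public class SequenceMath {
    // Total SQL executions to insert objectCount objects: one insert per
    // object plus two sequence calls per allocation of preallocationSize
    // sequence numbers.
    public static int totalSql(int objectCount, int preallocationSize) {
        int allocations = (objectCount + preallocationSize - 1) / preallocationSize;
        return objectCount + allocations * 2;
    }

    public static void main(String[] args) {
        System.out.println(totalSql(20000, 1));   // no preallocation
        System.out.println(totalSql(20000, 200)); // shared sequence, size 200
    }
}
```

With no preallocation the 20,000 inserts cost 40,000 extra sequence calls; with a shared preallocation size of 200 they cost only 200.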

Multiprocessing

You can use multiple processes or multiple machines to split the batch job into several smaller jobs. In this example, splitting the batch job across threads enables you to synchronize reads from the cursored stream, and use parallel Units of Work on a single machine.

This leads to a performance increase, even if the machine has only a single processor, because it takes advantage of the wait times inherent in SQL execution. While one thread waits for a response from the server, another thread uses the waiting cycles to process its own database operation.
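The idea can be sketched with java.util.concurrent (illustrative only, not TopLink API; each submitted task stands in for registering and committing one batch in its own Unit of Work):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelBatches {
    // Submit each batch to a worker thread. While one worker waits on the
    // database, another can use the idle cycles to process its own batch.
    public static int processAll(List<List<String>> batches, int threads)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger processed = new AtomicInteger();
        for (List<String> batch : batches) {
            pool.submit(() -> processed.addAndGet(batch.size()));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return processed.get();
    }
}
```

In a real job, the task body would acquire a Unit of Work against the target session, register the batch, and commit; the thread pool bounds how many commits are in flight at once.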

Example 11-14 illustrates the optimized code for this example. Note that it does not illustrate multiprocessing.

Example 11-14 Fully Optimized

/* Read each batch of employees, acquire a Unit of Work, and register them */
targetSession.getLogin().useBatchWriting();
targetSession.getLogin().setSequencePreallocationSize(200);
targetSession.getLogin().bindAllParameters();
targetSession.getLogin().cacheAllStatements();
targetSession.getLogin().setMaxBatchWritingSize(200);

// Read all the employees from the database into a stream. This requires 1 SQL call, but none of the rows will be fetched.
ReadAllQuery query = new ReadAllQuery();
query.setReferenceClass(Employee.class);
query.useCursoredStream();
CursoredStream stream;
stream = (CursoredStream) sourceSession.executeQuery(query);
//SQL: Select * from Employee. Process each batch
while (! stream.atEnd()) {
    Vector employees = stream.read(100);
    // Acquire a Unit of Work to register the employees
    UnitOfWork uow = targetSession.acquireUnitOfWork();
    uow.registerAllObjects(employees);
    uow.commit();
}
//SQL: Begin transaction
//SQL: Update Sequence set count = count + 200 where name = 'SEQ'
//SQL: Select count from Sequence where name = 'SEQ'
//SQL: Commit transaction
//SQL: Begin transaction
//BEGIN BATCH SQL: Insert into Address (...) values (...)
//... repeat this 100 times
//Insert into Employee (...) values (...)
//... repeat this 100 times
//END BATCH SQL:
//SQL: Commit transaction