Optimizing General Ledger Performance

This chapter provides an overview of optimal PeopleSoft Enterprise General Ledger performance and discusses how to:

Understanding Optimal General Ledger Performance

Many functions in PeopleSoft Enterprise General Ledger are run as background processes. If more than one process tries to use the same database table at the same time, contention can occur for the same row of data. This can cause rollbacks. Data retrieval can be slowed by queries on non-indexed data. This chapter presents some strategies for avoiding system slowdowns.

Prerequisites

Begin enhancing the performance of your online pages, Application Engine, SQRs, and COBOL background processes only after you:

Using Non-Shared Tables

This section discusses how to:

Pages Used to Set Up GL Non-Shared Tables

Page Name: Shared Table Statistics
Definition Name: TEMP_TBL_STATS
Navigation: General Ledger, Monitor Background Process, Shared Table Statistics, Shared Table Statistics
Usage: Access a log recording each time a process used the shared base working table because no non-shared table was available.

Page Name: Non-Shared Table Maintenance
Definition Name: NONSHARED_TBL_ASGN
Navigation: General Ledger, Monitor Background Process, Non-Shared Table Maintenance, Non-Shared Table Maintenance
Usage: Add, delete, or modify non-shared tables.

Page Name: Define Batch Truncate Threshold
Definition Name: DELETE_LIMIT_ASGN
Navigation: General Ledger, Monitor Background Process, Batch Truncate Threshold, Define Batch Truncate Threshold
Usage: Define the threshold at which batch processes truncate, rather than delete from, non-shared tables.

Using PeopleTools Temporary Tables

PeopleTools provides a feature that allows applications to dedicate a specific instance of a PeopleTools temporary table to each GL Application Engine program run. This concept is similar to the GL non-shared table design in that it drastically reduces the risk of table contention. The following General Ledger Application Engine processes use PeopleTools temporary tables:

See Enterprise PeopleTools 8.5x PeopleBook: PeopleSoft Application Engine, "Using Temporary Tables"

Using GL Non-Shared Tables

Application Engine and COBOL SQL processes use GL non-shared tables to increase performance.

The ledger record (PS_LEDGER) and the journal line record (PS_JRNL_LN), which is processed together with the journal header record (PS_JRNL_HEADER), are the most heavily accessed records. While these records are the focus of this chapter, you can apply the same techniques to any record.

Several GL Application Engine and COBOL SQL processes use base working tables to process large volumes of temporary data. Base working tables are shared working temporary tables that can be used by more than one program to process temporary data. You can potentially hamper performance if you run concurrent processes that use the same base working table.

To increase performance, these processes use GL non-shared tables in place of the shared working tables:

GL non-shared tables are named by appending the non-shared table version number (TEMP_TBL_NUM from table TEMP_TBL_ASGNM) to the base working table name (TEMP_TBL_NAME). Only the process that reserves the non-shared table can process transactions against that table.

The General Ledger Application Engine and COBOL SQL processes use the table TEMP_TBL_ASGNM to identify and reserve a non-shared table. The following table describes the fields in TEMP_TBL_ASGNM:

TEMP_TBL_NAME: Base working table name.

TEMP_TBL_NUM: Non-shared table version number.

IN_USE_SW: Indicator that specifies whether the non-shared table is in use.

PROCESS_INSTANCE: The process instance of the process that has the table reserved.

Process for Assigning GL Non-Shared Tables

The process first determines whether a non-shared table exists by matching the base working table name (shared table) of the process to a base working table name in TEMP_TBL_ASGNM that has the IN_USE_SW indicator set to N:
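In SQL terms, the assignment logic can be sketched as follows. This is a simplified illustration, not the delivered code; the physical record name PS_TEMP_TBL_ASGNM, the base working table name JRNL_TMP, and the process instance number are assumptions for the example.

```sql
-- Simplified sketch of non-shared table assignment (not delivered PeopleSoft code).
-- 1. Look for a free non-shared instance of the base working table:
SELECT MIN(TEMP_TBL_NUM)
  FROM PS_TEMP_TBL_ASGNM
 WHERE TEMP_TBL_NAME = 'JRNL_TMP'   -- hypothetical base working table
   AND IN_USE_SW = 'N';

-- 2. Reserve the instance that was found (here, instance 1) for this run:
UPDATE PS_TEMP_TBL_ASGNM
   SET IN_USE_SW = 'Y',
       PROCESS_INSTANCE = 12345     -- hypothetical process instance
 WHERE TEMP_TBL_NAME = 'JRNL_TMP'
   AND TEMP_TBL_NUM = 1
   AND IN_USE_SW = 'N';
```

The process then works against the non-shared table JRNL_TMP1 (base name plus version number). If no instance is free, the process falls back to the shared base working table and logs the event in TEMP_TBL_STATS.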

Setting Up GL Non-Shared Tables

The TEMP_TBL_ASGNM table is delivered with four non-shared tables for each base working table. If these do not meet your processing requirements, use the GL Non-Shared Table Maintenance page to add more non-shared table entries to TEMP_TBL_ASGNM.

Adding a Non-Shared Table

To add a non-shared table:

  1. Determine if you need to create a non-shared table entry in TEMP_TBL_ASGNM.

    On the Shared Table Statistics page, check the TEMP_TBL_STATS table to determine if the usage of the base working table justifies creating additional non-shared tables.

  2. Create the GL non-shared table in PeopleSoft Application Designer.

    Save the record definition from an existing non-shared table or from the base working table. Create the non-shared table from an existing non-shared table rather than from the base working table; non-shared tables usually have fewer fields in their index structure than the base working table.

  3. Create a GL non-shared table entry in TEMP_TBL_ASGNM using the GL Non-Shared Tables page.

  4. Add a non-shared table for a custom base working table.

    If you have created your own base working tables, add non-shared table entries for them (repeat steps 1 through 3). Note that most non-shared tables do not require any unique indexes.

Viewing Shared Table Statistics

Access the Shared Table Statistics page.

This log helps you to determine whether you need to create additional non-shared table entries in TEMP_TBL_ASGNM.

Note. The process inserts a log in TEMP_TBL_STATS only if non-shared table entries exist for the base working table in TEMP_TBL_ASGNM.

Record (Table) Name

Enter the name of the shared table whose statistics you want to view.

Non-Shared Tables

Displays the number of non-shared tables currently created.

Table Name

Displays the base working table name.

Process Instance

Displays the process that attempted to reserve a non-shared table.

Job ID

Displays the Job ID of the process that attempted to reserve a non-shared table.

Date Assigned

Displays the run date of the process that attempted to reserve a non-shared table.

Adding Instances of General Ledger Non-Shared Tables

Access the Non-Shared Table Maintenance page.

You add instances of non-shared tables to the TEMP_TBL_ASGNM table so that the non-shared table is available for use by the batch processes.

To add a new instance of a GL non-shared table, enter its name in the Table Name field and click the Search button. All instances of that table are displayed. Click the Add icon to add a new instance.

Table Name

Enter or select the name of the table that you want to maintain. Leave this field blank to view all tables available for maintenance.

Process Instance

Enter or select a specific process instance to maintain. Leave this field blank to view all process instances available for maintenance.

Reset

Click to reset the In Use field to No, which removes the check mark from the check box. Reset all tables regularly, but make sure that no one is using the system when you do.

If a process aborts, the In Use flag remains set to Yes. To free the table for other processes, you must reset the flag to No.

Warning! If you reset a process instance while it is running, it can cause data integrity problems.

Reset does not clear the contents from a table. However, the table is cleared the next time it is assigned.
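In effect, Reset clears the in-use flag for the selected instance, along the lines of the following sketch. The SQL and the names in it are hypothetical; always use the page rather than updating the table directly.

```sql
-- Hypothetical sketch of what Reset does for one non-shared table instance.
UPDATE PS_TEMP_TBL_ASGNM
   SET IN_USE_SW = 'N'
 WHERE TEMP_TBL_NAME = 'JRNL_TMP'
   AND TEMP_TBL_NUM = 1;
```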

Table Name

Displays the name of a non-shared table. You can select another name.

Number

Displays the instance number of the non-shared table. You can change this number.

In Use

Indicates whether the non-shared table is being used in a process. Click Reset to remove the check mark and change the selection to N.

Note. See the previous warning before you select Reset.

Process Instance

Instance number of the batch process that has reserved this non-shared table.

Note. If you are using your own customized base working table, make sure that the table name does not exceed 12 characters. The total length of a table name cannot exceed 15 characters, so you must reserve three characters for the non-shared table instance number.

The following table identifies some of the General Ledger COBOL processes that use the General Ledger Non-Shared tables. The menu paths identify where the base working tables are assigned to the processes that support the use of non-shared tables, as well as the temp tables whose base table names are fixed.

In this table, if the temporary table is specified in the ledger template or some definition pages, the record name is listed in the Dynamic Working Tables column below; otherwise the record name is listed under the Fixed Working Tables column:

Process: Closing
Menu Path: General Ledger, Ledgers, Templates, Record Definitions
Dynamic Working Tables: Closing Tmp; Closing RE Tmp; Closing Account Tmp; Closing Journal Header Tmp; Closing Journal Line Tmp; Closing Journal Line Tmp2
Fixed Working Tables: (none)

Process: Ledger Load
Menu Path: General Ledger, Ledgers, Templates, Record Definitions
Dynamic Working Tables: Led Load Temp Record; Led Load Temp Record 2
Fixed Working Tables: (none)

Process: Multicurrency
Menu Path: General Ledger, Ledgers, Templates, Record Definitions
Dynamic Working Tables: MultiCurrency Tmp; MultiCurrency Tmp1
Fixed Working Tables: TREE_SELnn_R

Process: Post Journals
Menu Path: General Ledger, Ledgers, Templates, Record Definitions
Dynamic Working Tables: Ledger Tmp; Ledger Tmp2; Journal Line Tmp
Fixed Working Tables: GL_OI_TMP; JRNL_HDR_SEL; JRNL_HDR_TMP; JRNL_VAT_TMP; JRNL_XRF_TMP

Process: Open Item Reconciliation
Menu Path: General Ledger, Ledgers, Templates, Record Definitions
Dynamic Working Tables: Journal Line Tmp
Fixed Working Tables: GL_OI_TMP

Process: Close Budget
Menu Path: N/A
Dynamic Working Tables: N/A
Fixed Working Tables: CFV_SEL; KK_RSCFV_SEL; TREE_SELnn_R

Using Indexes

This section discusses how to:

Depending on the database platform and the volume and distribution of data, the correct index can speed processing greatly. As the volume of data in your ledger and journal tables grows, periodically review the plan and indexes to ensure that they remain efficient.

The indexes delivered with the demonstration database were created for small amounts of data and are specific to the delivered sample ChartFields—these might be inappropriate for your situation. Develop indexes specific to your own data and configuration.

Indexes are sets of information used to access data in a record. They are stored separately from the records but updated simultaneously when records are updated, inserted, or deleted. Each of your records should have one unique index. This provides a valuable edit function by preventing duplicate rows of data from being added to the database. Add more indexes to improve performance when locating and accessing the data.

When you create a record in PeopleSoft Application Designer, the system automatically creates a unique index with the same name as the record. So, journal line record (PS_JRNL_LN) is created with a unique index, usually named PS_JRNL_LN. This unique index includes all the keys that were identified on the record. The combination of these keys should identify one unique row of information.
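For example, the unique index generated for the journal line record resembles the following DDL. This is a sketch for illustration; Application Designer generates the actual statement, and storage clauses vary by platform.

```sql
-- Application Designer generates this index when the record is built.
CREATE UNIQUE INDEX PS_JRNL_LN ON PS_JRNL_LN
  (BUSINESS_UNIT, JOURNAL_ID, JOURNAL_DATE,
   UNPOST_SEQ, JOURNAL_LINE, LEDGER);
```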

PeopleSoft Application Designer sometimes creates additional indexes when you add a record. These are generated from fields identified as alternate search keys on the record itself and are not included in the unique index mentioned above. For example, the GL_ACCOUNT_TBL has alternate search keys designated on the Account Type and Description fields. When the table is created in Application Designer, two additional indexes (PS0GL_ACCOUNT_TBL and PS1GL_ACCOUNT_TBL) are created, one for each of these fields. These additional indexes are termed duplicate indexes because they may not point to unique rows of data.

General Ledger includes system tables that enable you to view the created indexes and their columns.

Identifying Appropriate Indexes

Here we discuss the system analysis you need to do before you implement special indexes.

Now that you know the indexes that General Ledger creates, determine whether the delivered indexes are suitable or you need additional indexes. Changes to the ChartFields, changes in configuration, and differences in data content all affect the indexes and their effectiveness.

Capturing SQL Statements

First, determine the indexes currently used by your system. Do this by capturing the SQL statements executed by the system, then running them in isolation to determine the database access path used to retrieve the data. For either real-time online access or batch processes, you can identify the SQL statements that access the ledger and journal line tables and whose performance might be a concern. Refer to PeopleTools PeopleBooks for information about turning on the SQL trace for online and batch processes.

See Enterprise PeopleTools PeopleBook: PeopleSoft Process Scheduler

See Enterprise PeopleTools PeopleBook: System and Server Administration

Establishing a Baseline

Next, determine the efficiency of your current indexes; you need to establish a method for measuring progress as you make changes. A baseline timing is generally used for comparison when trying out different indexes. Time either the individual SQL statements or the entire process, so long as you have some way of determining progress as you proceed with the tuning.

Determining Indexes Used

You have a list of processes that access the primary records. You now need to determine which indexes each process currently uses. In other words, you need to determine the database access path that the system takes when the statement is actually executed. Because the database access path might change according to differing volumes of data, it is important to execute the plan on approximately the same amount of data that the table contains in a production environment. It might be appropriate to take a copy of your production database specifically for the purpose of tuning the indexes. Generally, when obtaining plan information, you are not actually executing the statements; check your database administrator documentation to be sure this is the case before executing any statements in your production environment.

Each platform has a process for determining the database access path that the engine uses to access data for the SQL statement. Below we present a brief outline of the DB2 approach.

Note. Refer to your database administration documentation for your platform and consult with your database administrator.

If your system is on DB2, create a PLAN_TABLE if your database does not already have one. A sample CREATE statement is in your DB2 Performance Tuning documentation.

Include the SQL statement in the following and execute it:

DELETE FROM PLAN_TABLE WHERE QUERYNO=nnn;
EXPLAIN PLAN SET QUERYNO=nnn FOR statement;

In this statement, nnn is a number you assign to this statement.

Retrieve the plan from PLAN_TABLE with the following SELECT:

SELECT QBLOCKNO, PLANNO, TNAME, ACCESSNAME, METHOD, ACCESSTYPE,
       MATCHCOLS, INDEXONLY, PREFETCH, SORTC_GROUPBY
  FROM PLAN_TABLE
 WHERE QUERYNO=nnn
 ORDER BY QBLOCKNO, PLANNO;

The table contains other plan information, but these are the most pertinent columns for your purposes.

Note. If your system is running Oracle, consult with your database administrator to create a plan_table if your database does not already have one.
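On Oracle, the equivalent steps typically look like the following sketch. The sample statement is illustrative only; confirm the details with your database administrator (the plan table can be created with the utlxplan.sql script shipped with the database).

```sql
-- Explain an illustrative statement against the ledger table:
EXPLAIN PLAN SET STATEMENT_ID = 'GL1' FOR
  SELECT SUM(POSTED_TOTAL_AMT)
    FROM PS_LEDGER
   WHERE BUSINESS_UNIT = 'US001';

-- Retrieve the access path:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'GL1'));
```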

Selecting Indexes

Before determining whether the index is appropriate, you need to know how the database engine selects indexes. In general, consider these basic rules:

Making the Rules Practical

To put these rules to practical use, you need to identify SQL statements that performed badly, and examine each "where" clause in those SQL statements. What you are trying to obtain from each SQL statement are the columns, accessed in the "where" clause, that you believe are acceptable to the system for index selection. In other words, you get to count all the columns that use an equal value, but none of the columns using "between" logic, and so on. To logically view this huge amount of information, it is best to make a chart with the columns in the "where" clause across the top and the origin of the SQL statement down the left side. As you go through each SQL statement, place an X in the box under the column that the index is likely to use. Create a chart for each record that concerns you.

Viewing a Sample "Where" Clause Chart

The following summarizes a sample "where" clause chart for the ledger record, with SQL statements generated from the demo database. The chart's columns are BU, LED, ACCT, DEPT, STAT, FY, PERIOD, and CURR. Its rows, one per SQL statement, cover the online processes (Budget Detail, Budget Copy, Budget Spread, Ledger Inquiry, InterUnit Inquiry, TimeSpan Inquiry, and Journal Entry), the reports (GLS7004, GLS7005, GLS1003, GLS7002, and GLS7003), and the batch processes (Posting, Closing, Summary Ledger, Consolidate, and Currency). An X marks each column that the statement's "where" clause references in a form usable for index selection.

Armed with these charts and the rules of indexing, now work to create indexes that access the records more quickly. Looking across the columns used in ledger "where" clauses, assess the viability of each column.

Business unit is included in every "where" clause, but in the demo database there are only 79 of them. One of these, US001, is used much more frequently than the others, so the cardinality is relatively low. Because it is always used, you will probably include it in indexes.

The ledger column is also included in each clause, but the cardinality is low (three ledgers are used in the LEDGER table, and one is used the majority of the time).

Account is used in a good percentage of the "where" clauses and is required in most of the online inquiry transactions. The cardinality is also high (735 unique values of account in the ledger table in the demo database), so this is a good possibility in an index.

Other ChartFields, including DEPTID, PRODUCT, and PROJECT, are lumped together because the demo database does not require them and accepts a wildcard in their place on the inquiry pages. This wildcard generates a "like" SQL statement that works well if you supply the field with a value; it is less efficient if the field is left as a wildcard ('%'). If you have ChartFields that you always enter, you should include these in the index in the same way the account field is included. You might also want to consider making any "always enter" fields required on the inquiry pages to make the select statements more efficient.

Fiscal year is included on nearly every "where" clause. At present the cardinality is relatively low (three or four distinct values); however, expect it to increase as time goes by. Accounting period is used in a good number of "where" clauses, again with limited cardinality.

Currency code is included in many of the "where" clauses. There are many values in the currency code record, but in practice the vast majority of transactions in the ledger record have a currency code of USD, so the cardinality of this field is also relatively low. Therefore, this column might not be included in most indexes.
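Taken together, this analysis points to leading an index with the always-entered, higher-cardinality columns and omitting low-cardinality columns such as currency code. For example (hypothetical index name; adjust the column list to your own ChartField configuration and data):

```sql
CREATE INDEX PSXLEDGER ON PS_LEDGER
  (ACCOUNT, FISCAL_YEAR, BUSINESS_UNIT, LEDGER);
```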

Hints for Indexing

The following hints can help you create better indexes:

Once you find the best combination for the SQL statements under review, run through all the processes again. Sometimes one new index can cause changes in the indexes used by other processes. Often the change is good, but sometimes it is worse, and more evaluation is required.

Using Indexes in the Demo System

PeopleSoft software provides the sample database with the indexes listed below. The columns in the indexes have been noted, followed by a brief explanation of why each index is included and how it affects performance. These indexes are included in PeopleSoft Application Designer and are created when indexes are created for the record.

Note. It might be beneficial to drop these indexes before performing a large load or update. Loading data into a table means that each index on the table must also be updated, which can amount to considerable overhead when many rows are inserted. Drop the indexes, load the data or run the background process, and then run the script again to re-create the indexes on the tables.
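The drop-and-reload pattern in the note can be sketched as follows. The index shown is one of the delivered demo indexes; substitute your own secondary indexes, and keep the unique index in place unless your database administrator advises otherwise.

```sql
DROP INDEX PSALEDGER;

-- ...run the bulk load or background process here...

CREATE INDEX PSALEDGER ON PS_LEDGER
  (ACCOUNT, FISCAL_YEAR, BUSINESS_UNIT, LEDGER);
```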

PS_LEDGER: (All Platforms)

PS_LEDGER: BUSINESS_UNIT, LEDGER, ACCOUNT, ALTACCT, DEPTID, OPERATING_UNIT, PRODUCT, FUND_CODE, CLASS_FLD, PROGRAM_CODE, BUDGET_REF, AFFILIATE, AFFILIATE_INTRA1, AFFILIATE_INTRA2, PROJECT_ID, BOOK_CODE, GL_ADJUST_TYPE, CURRENCY_CD, STATISTICS_CODE, FISCAL_YEAR, ACCOUNTING_PERIOD

PSCLEDGER: FISCAL_YEAR, LEDGER, BUSINESS_UNIT, ACCOUNT

PSFLEDGER: ACCOUNT, ACCOUNTING_PERIOD, PRODUCT, FISCAL_YEAR

PS_LEDGER

This is the original, unique index from PeopleSoft Application Designer, which was left in place as the unique index. Change this index to reflect your own ChartField configuration. Because it is generated by Application Designer, making your record changes to the LEDGER record should produce the correct index for you.

Some platforms (namely Microsoft SQL Server, DB2/UNIX, and Informix) have a 16-column limit for indexes, and this index already has 21 columns, so it is not supported on those platforms. PeopleSoft software resolves this by creating a functional index behind the scenes, based on a field that is the concatenation of all the key fields.

While no PeopleSoft processes rely on the presence of the unique index on a table, your database manager should carefully consider any decision not to have one. A unique index is a data safeguard enforced by your database engine. It guards against duplicate data resulting from a process that does not work correctly or from invalid data in an SQL script.

Note. If you are a DB2 customer and you want to partition the dataset based on fiscal year, you might want to delete this index and replace it with a unique index that has Fiscal Year as the leading field.

See Enterprise PeopleTools PeopleBook: PeopleSoft Application Designer Developer's Guide, "Understanding Functional Indexes"

See Enterprise PeopleTools PeopleBook: PeopleSoft Application Designer Developer's Guide, "Planning Records, Control Tables, and TableSets"

PSCLEDGER (All Platforms)

Experienced General Ledger customers recognize this index. This is the most efficient index for PS/nVision reporting and also helps out the Closing and Summary ledger COBOL processes. Note that it is similar to the PSBLEDGER index except that it starts with Fiscal Year (so those processes that do have an equality for accounting period choose the "B" version over this one). The index also includes the Account field, thereby adding efficiency for any "where" clause selecting specific account values. This is an index you probably have to modify to reflect your own ChartField configuration. The last fields of this index should include the ChartFields with the highest cardinality that are usually entered and used in "where" clauses. Avoid adding all your ChartFields because that would create a great deal of overhead when any of the ChartFields are added or changed in the ledger. It is usually best to include the minimum number of fields to do the job in an index.

Note. If you are doing PS/nVision reporting on your summary ledger, be sure to create this index for the summary ledger record. See the notes under the PSBLEDGER index for additional thoughts on this topic.

PS_LEDGER (All Platforms Except Oracle)

PSALEDGER: ACCOUNT, FISCAL_YEAR, BUSINESS_UNIT, LEDGER

PSBLEDGER: ACCOUNTING_PERIOD, FISCAL_YEAR, LEDGER, BUSINESS_UNIT

PSALEDGER

The leading field on this index is the Account field. This index helps speed performance on all processes that access the ledger using a specific account selection. This includes the Trial Balance and General Ledger Activity SQRs, as well as the COBOL processes of Closing and Consolidations. Online, this index helps in the budget entry process and the ledger inquiry pages. You need to modify this index for your own ChartField configuration. The leading fields on the index should be the ChartFields that are always entered and have the greatest cardinality. Because the Fiscal Year, Business Unit, and Ledger fields are consistently requested with the Account field, they are also on the index and should be appropriate on your version of the index.

PSBLEDGER

This index begins with the Accounting Period field and is called into use when the accounting period and fiscal year are specified without specific ChartField references. This enhances performance in the Closing Trial Balance SQR, Closing and Summary Ledger processes, and the online budget copy process. You should be able to use this index as is without modifications.

Note. The summary ledgers delivered with the demo database (PS_S_LEDGER_ACCTS, S_LEDGER_ACTDEP, and PS_S_LEDGER_SUM) realized performance gains in the summary ledger background process when this index was created for them. The "B" type of index might be beneficial for your own summary ledgers. Because this index is similar to the PSCLEDGER index, you might want to do some timings and analysis before deciding if the additional indexes on your own summary ledgers are worth the additional overhead during inserts and updates.

PS_LEDGER: (Oracle Only)

PSDLEDGER

PSELEDGER

BUSINESS_UNIT

FISCAL_YEAR

LEDGER

ACCOUNTING_PERIOD

FISCAL_YEAR

BUSINESS_UNIT

ACCOUNTING_PERIOD

LEDGER

CURRENCY_CD

ACCOUNT

STATISTICS_CODE

PRODUCT

ACCOUNT

PROJECT_ID

 

AFFILIATE

 

CURRENCY_CD

PSDLEDGER (Oracle Only)

This index is used in the same way the PSALEDGER index is used on the other platforms—to optimize performance in online processes, SQRs, and COBOL processes when the entire ledger key is not specified. Specifically, the Trial Balance and General Ledger Activity SQRs, the Closing and Consolidations COBOL processes, and the budget and ledger inquiry online pages use it. The index leads off with the common fields of Business Unit and Ledger and includes more of the "where" clause columns than its PSALEDGER counterpart. As in the PSALEDGER version, when building this index on your production system you should change the Account field in the demo database to be the ChartFields you always enter that have the highest cardinality.

PSELEDGER (Oracle Only)

The Closing Trial Balance SQR and the closing COBOL process primarily use this index. The summary ledger COBOL process also favors it. If you do not run either of those processes (or run them infrequently), you might not need this index. To modify the index, replace the demo ChartFields with your own. The leading ChartFields should be those you always enter that have the highest cardinality.

PSWLEDGER (SQL Server; DB2/UDB for Linux, UNIX, and Windows; Informix)

The PSW<record name> index is used to build the search index for database platforms that have the 16-column limit on indexes. This index is not a unique index; instead, the concatenated functional index plays the role of the unique index.

This index is created with the first 16 key fields of the functional index. Customize the index to your needs as follows: the first five columns of the PSW<record name> index should be BUSINESS_UNIT, LEDGER, FISCAL_YEAR, ACCOUNTING_PERIOD, and ACCOUNT; the subsequent 11 columns should consist of the other ChartFields that are always entered and have the greatest cardinality. This index is crucial to the performance of the Post Journals and Post Budget Journals processes.
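A customized PSWLEDGER might therefore look like the following sketch. The first five columns follow the recommendation above; the remaining columns are illustrative, so substitute the ChartFields that you always enter and that have the greatest cardinality, up to the 16-column limit.

```sql
CREATE INDEX PSWLEDGER ON PS_LEDGER
  (BUSINESS_UNIT, LEDGER, FISCAL_YEAR, ACCOUNTING_PERIOD, ACCOUNT,
   ALTACCT, DEPTID, OPERATING_UNIT, PRODUCT, FUND_CODE, CLASS_FLD,
   PROGRAM_CODE, BUDGET_REF, PROJECT_ID, BOOK_CODE, CURRENCY_CD);
```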

PS_JRNL_HEADER (All Platforms)

The same analysis processes were applied to the JRNL_LN and JRNL_HEADER records, and the following indexes are delivered with your demo database as a result of this study:

PS_JRNL_HEADER: BUSINESS_UNIT, JOURNAL_ID, JOURNAL_DATE, UNPOST_SEQ

PSCJRNL_HEADER: PROCESS_INSTANCE, JRNL_HDR_STATUS, JRNL_PROCESS_REQST

PSDJRNL_HEADER: JOURNAL_ID, JOURNAL_DATE, BUSINESS_UNIT_IU, UNPOST_SEQ

PS_JRNL_HEADER (All Platforms — Continued)

PSGJRNL_HEADER: PROCESS_INSTANCE, SYSTEM_SOURCE

PSHJRNL_HEADER: PROC_PART_ID, BUSINESS_UNIT, LEDGER_GROUP, JRNL_PROCESS_REQST, JOURNAL_ID

PSIJRNL_HEADER: BUSINESS_UNIT, PROCESS_INSTANCE, LEDGER_GROUP, JOURNAL_ID, JOURNAL_DATE, UNPOST_SEQ

PS_JRNL_HEADER

This is the unique index created by PeopleSoft Application Designer. It is used each time the journal is referenced by the key values. This includes the OpenItem and SJE Status SQRs, posting and journal edit processes, and online journal inquiry and entry processes. You should not need to change this index for any ChartField configuration.

PSCJRNL_HEADER

Because the leading field on this index is PROCESS_INSTANCE, the index helped speed processing in the journal posting and journal edit jobs where the statements select from the journal header based on PROCESS_INSTANCE. If you run those jobs, you want this index. You should not need to change it for any ChartField configuration.

PSDJRNL_HEADER

This index is used by the online system to obtain the journal headers and journal lines for InterUnit subjournals.

PS_JRNL_HEADER (All Platforms Except Oracle)

PSAJRNL_HEADER: ACCOUNTING_PERIOD, SOURCE, FISCAL_YEAR, BUSINESS_UNIT, LEDGER_GROUP

PSBJRNL_HEADER: JRNL_PROCESS_REQST, BUSINESS_UNIT, LEDGER_GROUP

PSAJRNL_HEADER

This index is selected when the accounting period, source, and fiscal year are specified in the "where" clause. The sequence of the columns was chosen by cardinality for the demo database (source has seven unique values and accounting period has 12). You need to analyze your own system to determine which column should come first—source or accounting period. Choose the column with the greatest cardinality (unique values). This index helps speed processing in the Trial Balance and General Ledger Activity SQRs and in the online inquiry and journal unpost functions.

PSBJRNL_HEADER

This index helped the Posting process by indexing on the JRNL_PROCESS_REQST field. You should not need to change this index for ChartField configuration.

PS_JRNL_HEADER (Oracle Only)

PSEJRNL_HEADER: BUSINESS_UNIT, LEDGER_GROUP, JRNL_HDR_STATUS, FISCAL_YEAR, ACCOUNTING_PERIOD

PSFJRNL_HEADER: JOURNAL_ID, SOURCE, JRNL_HDR_STATUS, BUSINESS_UNIT, LEDGER_GROUP

PSEJRNL_HEADER (Oracle Only)

This index assists the processing of the SJE Status SQR and the Journal Post and Allocations processes. No modifications should be necessary for ChartField changes.

PSFJRNL_HEADER (Oracle Only)

The only process that uses this index is the Allocations process. If you do not run that job, you should not need the index.

PS_JRNL_LN (All Platforms)

PS_JRNL_LN: BUSINESS_UNIT, JOURNAL_ID, JOURNAL_DATE, UNPOST_SEQ, JOURNAL_LINE, LEDGER

PSDJRNL_LN: PROCESS_INSTANCE, BUSINESS_UNIT, ACCOUNT

PSFJRNL_LN: JOURNAL_DATE, BUSINESS_UNIT, UNPOST_SEQ, JOURNAL_ID, JRNL_LN_SOURCE

PS_JRNL_LN

This is the unique index created by PeopleSoft Application Designer. It matches the keys on the parent record (PS_JRNL_HEADER) with the addition of the Journal Line field. This is used in processing when you update the journal line in the Posting and Journal Edit programs, and online Posting and Journal Update processes.

PSDJRNL_LN

This index assists the batch processes in accessing the journal line information. It is used in the Journal Edit and Journal Combo Edit processes, and should not require modification for ChartField changes. If the index is not chosen by the optimizer, update statistics for the table with histogram information.

For Oracle, run the following command: Analyze table PS_JRNL_LN compute statistics for columns (process_instance).

For DB2/Unix, run the following command: Runstats on table <owner>.PS_JRNL_LN with distribution and indexes all.

PSFJRNL_LN

This index is used by the GLAJES SQR report, Posting, Consolidations, Currency Translation, and Allocations processes, as well as the online inquiry and posting pages.

PS_JRNL_LN (All Platforms Except Oracle)

PSAJRNL_LN columns: ACCOUNT, BUSINESS_UNIT, CURRENCY_CD

PSBJRNL_LN columns: JOURNAL_ID, JOURNAL_DATE, BUSINESS_UNIT, UNPOST_SEQ

PSAJRNL_LN

This index facilitates queries that look for a specific match on the Account ChartField. It includes the additional fields of Business Unit and Currency Code because those fields are consistently included in the "where" clauses of these statements. It enhances performance in the GLAOITEM SQR, the Journal Posting COBOL process, and the online Journal Inquiry and Unposting processes. If you change your ChartFields, modify this index to include the ChartField that is always entered and has the greatest cardinality.

PSBJRNL_LN

The items in this index are similar to the unique index except that the sequence is different and the JOURNAL_LINE field is left off. The JOURNAL_LINE field was eliminated because it is almost never referenced in "where" clauses. The sequence was changed because JOURNAL_ID is a high-cardinality field and is frequently referenced in select statements. In the select statements that specify JOURNAL_ID, the BUSINESS_UNIT, JOURNAL_DATE, and UNPOST_SEQ fields are also referenced; so these were included on this index in order of cardinality.

This index is frequently used in the GLALEDGD, GLALEDGS, and GLAJES SQRs, as well as in the Posting, Consolidations, Currency Translation, and Journal Edit background processes. Online, it is used on the inquiry and posting pages. This index should not require changes for ChartField configuration.

Note. Consider adding the JOURNAL_LINE and LEDGER to the end of the PSBJRNL_LN index and making it the unique index (eliminating the existing PS_JRNL_LN index). This is an option because the order of the index columns can differ from PeopleSoft Application Designer field sequence.
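As a sketch of that option, the following uses Python's sqlite3 module as a stand-in for your platform (the row values are hypothetical, and only the key columns are shown). Because the reordered column list still covers the full logical key, uniqueness is preserved:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""CREATE TABLE PS_JRNL_LN
               (BUSINESS_UNIT TEXT, JOURNAL_ID TEXT, JOURNAL_DATE TEXT,
                UNPOST_SEQ INTEGER, JOURNAL_LINE INTEGER, LEDGER TEXT)""")

# Reordered unique index standing in for the delivered PS_JRNL_LN key index:
# JOURNAL_ID leads (high cardinality); JOURNAL_LINE and LEDGER close the key.
cur.execute("""CREATE UNIQUE INDEX PSBJRNL_LN ON PS_JRNL_LN
               (JOURNAL_ID, JOURNAL_DATE, BUSINESS_UNIT, UNPOST_SEQ,
                JOURNAL_LINE, LEDGER)""")

row = ("US001", "JRNL0001", "2024-01-31", 0, 1, "ACTUALS")
cur.execute("INSERT INTO PS_JRNL_LN VALUES (?, ?, ?, ?, ?, ?)", row)
try:
    cur.execute("INSERT INTO PS_JRNL_LN VALUES (?, ?, ?, ?, ?, ?)", row)
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)
```

The duplicate insert is rejected, showing that the reordered index still enforces the row-level uniqueness the delivered index provided.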

PS_JRNL_LN (Oracle Only)

PSEJRNL_LN columns: BUSINESS_UNIT, CURRENCY_CD, ACCOUNT

PSEJRNL_LN (Oracle Only)

This index is comparable to the PSAJRNL_LN index, except that the sequence is different. Consistent with the cost-based optimizer approach, the more common fields (Business Unit and Currency Code) are included at the beginning of the index. The SQR processes of Trial Balance, General Ledger Activity, and OpenItem Status use this index.

Optimizing Indexes

This section discusses how to optimize indexes.

Updating Statistics

Once the indexes are created, you must tell the system to use them by "updating statistics." The specifics vary by platform (described in the following chart). After you create new indexes, or after you insert or delete substantial numbers of rows in existing tables, run the update statistics procedure. This procedure updates the system tables with index and table information that helps the optimizer choose the most efficient index for an operation. If this procedure is not performed regularly, your system performance can suffer.

Platform

Method to Update Statistics

DB2

Batch RUNSTATS process, by Table Space.

ORACLE

ANALYZE TABLE xxx COMPUTE STATISTICS

ANALYZE INDEX xxx COMPUTE STATISTICS

For large tables, it is faster to replace COMPUTE with ESTIMATE. Determine through testing whether estimated statistics yield optimum access plans.

Table Seeding

Indexes are delivered for several temporary tables. These include tables such as LEDGER_TMP, which hold data only for the duration of a specific process. Because these tables are generally empty, running update statistics on an empty table causes the optimizer to assume that the table is always empty and to favor a full-table scan instead of using the index. For this reason, a script is available to seed these temporary tables with rows of data for the purpose of updating the statistics on the index.
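The effect of seeding can be illustrated with Python's sqlite3 module as a stand-in for your platform's update statistics procedure (the table, index, and row values below are simplified stand-ins, not the delivered definitions). Statistics gathered against the seeded table describe a populated table rather than an empty one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Simplified stand-in for a temporary working table such as LEDGER_TMP.
cur.execute("CREATE TABLE LEDGER_TMP (PROCESS_INSTANCE INTEGER, ACCOUNT TEXT)")
cur.execute("CREATE INDEX LEDGER_TMP_IDX ON LEDGER_TMP (PROCESS_INSTANCE)")

# Seed the table so the statistics describe a populated table.
seed = [(i % 4, "ACCT%04d" % i) for i in range(100)]
cur.executemany("INSERT INTO LEDGER_TMP VALUES (?, ?)", seed)

# Update optimizer statistics on the seeded table.
cur.execute("ANALYZE")

# sqlite_stat1 now records 100 rows for the index, not an empty table,
# so the optimizer no longer assumes a full-table scan is cheapest.
(stat,) = cur.execute(
    "SELECT stat FROM sqlite_stat1 WHERE idx = 'LEDGER_TMP_IDX'"
).fetchone()
row_count = int(stat.split()[0])
print(row_count)
```

On Oracle or DB2 the same idea applies: seed, then run the platform's statistics command so the recorded row counts steer the optimizer toward the index.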

The script is delivered in your SQL subdirectory and is called SEEDGL, with the SQL extension appropriate to your database platform. The script inserts 101 rows of data into the temporary tables and runs the update statistics command on each seeded table. The tables that are seeded are PSTREESELECT05, PSTREESELECT06, PSTREESELECT08, and PSTREESELECT10, corresponding to the lengths of the ChartFields delivered with the demo system.

If rows currently exist in your PSTREESELECTxx tables, do not delete this data. The system populates these rows when you execute a PS/nVision report. The rows correspond to entries in a control table named PSTREESELCTL; if you remove them, the next PS/nVision report you execute returns incorrect data or no data.

Review each script before running to ensure that the key values loaded do not conflict with any that would be used by the existing system, and to determine if changes are needed for any specific modifications you might have done. When running the scripts in your production environment, be sure that you seed the PSTREESELECT tables that correspond to the field length of your ChartFields; these are the PSTREESELECT tables that are used in your environment.

Note. If you are a DB2 customer, you can maximize the benefits of seeding these tables by seeding them with the cardinality that is correct for your particular environment. A script named SEEDMULT.DMS is an example of seeding the PSTREESELECT06 table with correct cardinality. Use this version, rather than the above versions, when seeding the PSTREESELECT tables.

Table seeding is also needed for temporary tables that are reserved for Application Engine processes called from PeopleCode. These processes contain logic to update statistics on the temporary tables after each insert. However, each %UpdateStats requires an explicit commit, and because a commit is not allowed within an Application Engine program called from PeopleCode, the %UpdateStats is not executed. To ensure that SQL statements referring to these temporary tables run well, try one of two approaches:

1. Leave the temporary tables with no statistics; that is, never update statistics on these tables. If you have already updated statistics on these tables while they were empty, drop and re-create them. For Oracle, you can run the analyze table command with the option to delete the statistics. Then test the process to verify that the performance is acceptable to you.

2. If having no statistics on the temporary table does not yield the required performance (as in the case of the PS_COMB_EXP_TAOx tables in the Journal Combination Edit process), seed the table: run an Application Engine trace at level 131, extract all of the "insert into PS_COMB_EXP_TAOx" SQL statements, and run them outside the program to seed the table. Then run the update statistics command on the seeded table.

Physical Performance Considerations

Even the best index planning and execution cannot overcome performance problems caused by disk and index fragmentation. Fragmentation happens over time as records are deleted, updated, and added to the database. When new records are inserted, the system places the new information wherever space is available, not necessarily in the same location as the rest of the physical table. As records are deleted, gaps may be left in the physical space that the records occupied, and these gaps may or may not be filled in with new information. As the physical locations of these records become more spread out, the system must work harder to find the specific record you requested, and response time suffers. Both indexes and tables can become fragmented and hamper performance, so it is important to take the steps outlined in your database administration documentation to eliminate database fragmentation.

Using Partition IDs

When you have a large volume of transactions to process in a limited run-time, use partition IDs to enable the journal posting processes (GLPPPOST) to run in parallel. Partition IDs enable you to group your transaction data into mutually exclusive sets, ending contention between processes for the same row of data during posting. This eliminates possible delays due to halted processes and database rollbacks.

Only after your system has been in production for several months is it practical to implement partition IDs. Reevaluate your partition scheme at regular intervals to ensure that your system stays within the limits of your batch run-time window.

It is crucial that you understand your data structure before attempting to use partition IDs. Analyze the volume of your transactions by business unit and ChartField, and divide them into transaction groups with roughly equal numbers of transactions. Using your analysis, create a business unit/ChartField combination scheme to classify the transactions into mutually exclusive data sets. Then, create a partition ID for each business unit/ChartField combination.
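One simple way to build such a scheme is a greedy assignment: sort the business unit/ChartField groups by transaction volume and repeatedly place the largest remaining group into the lightest partition so far. The sketch below is illustrative only; the business units, ChartField values, and counts are invented:

```python
# Hypothetical per-(business unit, fund) transaction counts.
groups = {
    ("US001", "F100"): 52000, ("US001", "F200"): 18000,
    ("US002", "F100"): 31000, ("US002", "F300"): 9000,
    ("EU001", "F100"): 27000, ("EU001", "F200"): 14000,
}

def assign_partitions(counts, n_partitions):
    """Greedily spread groups across partitions with roughly equal volume."""
    loads = [0] * n_partitions
    assignment = {}
    for key, cnt in sorted(counts.items(), key=lambda kv: -kv[1]):
        p = loads.index(min(loads))        # lightest partition so far
        assignment[key] = p                # each group lands in exactly one partition
        loads[p] += cnt
    return assignment, loads

assignment, loads = assign_partitions(groups, 3)
print(loads)
```

Because each business unit/ChartField group is assigned to exactly one partition, the resulting data sets are mutually exclusive, which is what lets the posting processes run in parallel without row contention.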

You might want to monitor the duration of batch processing to ensure your partition IDs remain effective over time. This can provide advance warning of changes in your system's batch processing requirements, as well as changes in your business.

To use partition IDs, first, define them using the Partition ChartField Entry Process Partition page. Then, specify your partition IDs in the request pages for the Journal Edit (GL_JEDIT) and Journal Post (GLPPPOST) processes.

See Also

Defining Process Partitions

Using ADB Incremental Calculations

The ADB Calculation process (GL_ADB_CALCX) supports these optimizing features:

The process also enables you to specify a calendar ID for the target ledger. For example, you can specify a monthly calendar to store the MTD balances.

See Also

Understanding Average Balance Calculation

Updating Summary Ledgers Incrementally

Summary ledgers can be updated incrementally in either of these ways:

The objects and fields needed to support this feature are the following:

The incremental summary ledger update process flow is as follows:

  1. Summary Ledger process (initial summary ledger creation):

  2. The Journal Post process:

  3. Summary Ledger process (incremental update):

Note. If the incremental update is initiated from the Summary Ledger Process Request page, the Request Type must be Increment.

See Also

Combining Accounts Using Summary Ledgers

Optimizing PS/nVision Performance

This section discusses:

PS/nVision is a complex tool, capable of producing a great variety of reports from a variety of database tables. The SQL statements it generates are not necessarily complex, but they are very sensitive to the performance of the underlying database, especially in the following areas:

Unlike traditional background reporting tools, PS/nVision supports interactive, focused reporting with a probing or querying approach to accessing the database. PS/nVision queries tend to be more numerous than those of traditional report writers, but they are also more focused on the specific data that you want to see.

Tree Joins

PS/nVision relates tree node criteria to data tables by joining the data table to a tree selector table. This selector table contains a row for every detail range defined for the tree in the Tree Manager, and is keyed by PROCESS_INSTANCE (a system-generated constant number for all the rows representing a particular tree) and tree node number. Because some database platforms only join tables efficiently if the field sizes match, the system uses up to 30 selector tables, one for each supported ChartField length. Each selector table has RANGE_FROM_nn and RANGE_TO_nn columns matching the corresponding ChartField size.

The following code is a typical SELECT for selection via nodes on a single tree:

SELECT L.TREE_NODE_NUM, SUM(POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT06 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=1991
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 9
AND A.ACCOUNT>=L.RANGE_FROM_06
AND A.ACCOUNT<=L.RANGE_TO_06
AND L.PROCESS_INSTANCE=198
AND (L.TREE_NODE_NUM BETWEEN 16 AND 30
OR L.TREE_NODE_NUM BETWEEN 35 AND 40)
GROUP BY TREE_NODE_NUM

The range join between A.ACCOUNT and the selector table's RANGE_FROM_06 and RANGE_TO_06 columns, together with the PROCESS_INSTANCE and TREE_NODE_NUM criteria, accomplishes the tree criteria selection. The GROUP BY clause returns an answer row for each node that has a detail range attached to it; these node numbers are used to post amounts from the answer set into the appropriate rows of the report.
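The selection can be reproduced in miniature with Python's sqlite3 module as a stand-in for your database platform (the accounts, amounts, ranges, and node numbers below are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE PS_LEDGER (ACCOUNT TEXT, POSTED_TOTAL_AMT REAL)")
cur.execute("""CREATE TABLE PSTREESELECT06
               (PROCESS_INSTANCE INTEGER, TREE_NODE_NUM INTEGER,
                RANGE_FROM_06 TEXT, RANGE_TO_06 TEXT)""")

# Ledger rows: account, amount (hypothetical data).
cur.executemany("INSERT INTO PS_LEDGER VALUES (?, ?)",
                [("100001", 50.0), ("100002", 25.0), ("200001", 10.0)])

# Selector rows: one row per detail range attached to a tree node.
cur.executemany("INSERT INTO PSTREESELECT06 VALUES (?, ?, ?, ?)",
                [(198, 16, "100000", "199999"),   # node 16 covers 1xxxxx
                 (198, 36, "200000", "299999")])  # node 36 covers 2xxxxx

# The range join maps each ledger row to its tree node and sums by node.
result = cur.execute("""
    SELECT L.TREE_NODE_NUM, SUM(A.POSTED_TOTAL_AMT)
    FROM PS_LEDGER A, PSTREESELECT06 L
    WHERE A.ACCOUNT >= L.RANGE_FROM_06 AND A.ACCOUNT <= L.RANGE_TO_06
      AND L.PROCESS_INSTANCE = 198
      AND (L.TREE_NODE_NUM BETWEEN 16 AND 30
           OR L.TREE_NODE_NUM BETWEEN 35 AND 40)
    GROUP BY L.TREE_NODE_NUM
""").fetchall()
print(result)
```

Each answer row carries the node number whose detail range matched, which is how the amounts are posted back into the correct rows of the report.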

Combination Rules

PS/nVision endeavors to retrieve the data for each report instance with as few SELECTs as possible. It examines all row criteria to determine which can be combined, and does the same for column criteria. It then builds a Select statement to retrieve each intersection of a combined group of rows with a combined group of columns. You should understand the following built-in rules when designing indexes:

Capturing SQL

To examine the SQL produced by PS/nVision, capture the statements in one of two ways: