Oracle® Communications Billing and Revenue Management System Administrator's Guide
Release 7.5

E16719-13

9 System Administration Utilities and Scripts

This chapter provides reference information for Oracle Communications Billing and Revenue Management (BRM) system administration utilities.

load_pin_event_record_map

Use this utility to load the event record file (BRM_Home/sys/data/config/pin_event_record_map) into the BRM database.

The event record file contains a list of event types to exclude from being recorded in the database. For more information, see "Managing Database Usage".

Note:

You cannot load separate /config/event_record_map objects for each brand. All brands use the same object.

Important:

If events of a type specified in the file already exist in the database when you load the event record file, those events remain in the database.

After running this utility, you must stop and restart the Connection Manager (CM).

Location

BRM_Home/bin

Syntax

load_pin_event_record_map [-d] [-v] | [-r] pin_event_record_map

Parameters

-d

Log debugging information.

-v

Verbose. Displays detailed information as the event record map is created.

-r

Returns a list of the event types in the pin_event_record_map file that are configured not to be recorded.

Important:

This option must be used by itself and does not require the file name.
pin_event_record_map

The file containing the event types to exclude from the database.

To exclude an event type from recording, set its flag value to 0 in the file. To temporarily record an event type, change its flag value to 1 and reload the file.

The following example shows the event record file format where the event type is followed by its flag value:

/event/session : 1
/event/customer/nameinfo : 0
/event/billing/deal/purchase : 0
/event/billing/product/action/purchase : 0
/event/billing/product/action/modify : 0
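The format above is simple enough to process with a short script. The following Python sketch (a hypothetical helper, not part of BRM) parses lines in this event_type : flag format and lists the event types excluded from recording:

```python
def parse_event_record_map(lines):
    """Return the event types whose flag is 0 (excluded from recording).
    Illustrative parser for the "event_type : flag" format; not part of BRM."""
    excluded = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # Split "event_type : flag" on the first colon.
        event_type, _, flag = line.partition(":")
        if flag.strip() == "0":
            excluded.append(event_type.strip())
    return excluded
```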

Important:

The record file includes one default event type, /event/session, which is set to be recorded. Never exclude this event type, or any event type that is updated more than once during the same session, from recording. If such an event type is configured not to be recorded, an error occurs when the system tries to update the same event during the same session. To eliminate the error, remove the event type causing the error from the record map file.

Results

The load_pin_event_record_map utility notifies you only if it encounters errors. Look in the default.pinlog file for errors. This file is either in the directory from which the utility was started or in a directory specified in the utility configuration file.

partition_utils

Use this utility to customize and maintain BRM database partitions.

Note:

Before you use the partition_utils utility, configure the database connection parameters in the partition_utils.values file in BRM_Home/apps/partition_utils. See "Configuring a Database Connection".

Important:

After you start the utility, do not interrupt it. It might take several minutes to complete, depending on the size of your database.

Location

BRM_Home/bin

Syntax for Adding Partitions

partition_utils    -o add -t realtime|delayed -s start_date 
                   -u month|week|day -q quantity 
                   [-c storable_class] [-w width] [-f] [-p] [-h]

Parameters for Adding Partitions

-o add

Adds partitions.

-t realtime|delayed

Adds real-time or delayed-event partitions. Only event tables may be set to delayed.

Note:

Conversion Manager does not support the migration of data to the EVENT_T tables.
-s start_date

Specifies the starting date for the new partitions. The format is MMDDYYYY.

start_date must be the day after tomorrow or later; you cannot create partitions starting on the current day or the next day. For example, if the current date is January 1, 2007, the earliest start date for the new partitions is January 3, 2007 (specified as 01032007).
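The earliest valid start date can therefore be computed from the current date. A minimal Python sketch of this rule (the function name is illustrative, not part of the utility):

```python
from datetime import date, timedelta

def earliest_start_date(today):
    """Earliest valid -s start_date in MMDDYYYY format: the day after
    tomorrow relative to 'today'. Illustrative helper, not part of
    partition_utils."""
    return (today + timedelta(days=2)).strftime("%m%d%Y")
```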

-u month|week|day

Specifies the time unit for the partitions.

-q quantity

Specifies the number of partitions to add.

If a partition with a future date already exists in the table, running this utility to add partitions can create more partitions than the specified quantity.

For example, suppose you request one partition starting February 1, 2009, and the table already contains the P_R_02282009 partition. The P_R_02282009 partition is split into two partitions, P_R_02012009 and P_R_02282009.

-c storable_class

Specifies the class of objects to be stored in the partition. The default storable base class for partition_utils is the /event base class. See "partition_utils".

Note:

If you want to specify a non-event class, partitioning must have been enabled for that class before pin_setup was run during the BRM installation process. For more information, see "Editing the pin_setup.values File to Enable Partitioning for Non-Event Tables" in BRM Installation Guide.
-w width

Specifies the number of units in a partition (for example, 3).

-f

Forces the creation of partitions even if the start date falls within the current partition. If this parameter is used on an existing partition, that partition is split into two: one containing the old data and one beginning at the start date given in the command.

Caution:

Before forcing partitions:

For real-time partitions, stop all BRM processes.

For delayed-event partitions, stop all delayed-event loading by stopping Pipeline Manager.

Note:

The -f parameter works differently when you remove partitions. In that case, it forces the removal of objects associated with open items.
-p

Writes an SQL statement of the operation to the partition_utils.log file but does not perform any action on the database.

-h

Displays the syntax and parameters for this utility.

Syntax for Purging Partitions

partition_utils -o purge -e end_date [-t realtime|delayed] [-p] [-h]

Parameters for Purging Partitions

-o purge

Purges event objects without removing partitions.

-e end_date

Specifies the cutoff date for purging objects. Objects that are older than that date and associated with closed items are purged; no objects associated with active items are purged.

-t realtime|delayed

Purges real-time or delayed-event partitions. The default is to purge both real-time and delayed-event partitions.

-p

Writes a report of purgeable objects to the partition_utils.log file but does not perform any action on the database.

-h

Displays the syntax and parameters for this utility.

Syntax for Removing Partitions

partition_utils    -o remove -s start_date -e end_date [-c storable_class] 
                   [-t realtime|delayed] 
                   [-f] [-p] [-h]

Parameters for Removing Partitions

-o remove

Removes partitions.

-s start_date

Specifies the start of the date range for the objects you want to remove. The format is MMDDYYYY.

If the specified dates do not match the partition boundaries, only objects in partitions that are completely within the date range are removed. See "About Purging Event Objects".

-e end_date

Specifies the end of the date range for the objects you want to remove. The format is MMDDYYYY.

By default, you can use this operation to remove only those partitions that are older than 45 days. You can change this limitation by editing the partition_utils.values file in BRM_Home/apps/partition_utils. See "Customizing Partition Limitations".

-c storable_class

Specifies the partition you are removing by base class. The default is /event.

Note:

BRM does not support the purging of non-event classes. The only way to purge them is to remove the partitions.

To remove a non-event table partition, you must use the -f parameter. Operations using this option cannot be undone and will remove objects that are being used. Use with caution.

-t realtime|delayed

Removes real-time or delayed-event partitions. The default is to remove both real-time and delayed-event partitions.

-f

Forces the removal of partitions that contain objects that are associated with an open item. By default, these partitions are not removed.

Caution:

Use this option carefully!

Note:

The -f parameter works differently when you add partitions. In that case, it forces the splitting of partitions even when they fall within the date range of the current partition.
-p

Writes an SQL statement of the operation to the partition_utils.log file but does not perform any action on the database. See "Running the partition_utils Utility in Test Mode".

-h

Displays the syntax and parameters for this utility.

Syntax for Enabling Delayed Partitions

partition_utils -o enable -t delayed -c storable_class [-p] [-h]

Parameters for Enabling Delayed Partitions

-o enable -t delayed

Enables delayed-event partitions.

-c storable_class

Specifies the event classes for which you want partitioning.

You can enable partitioning for all subclasses of an event by using the percent sign (%) as a wildcard.
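The percent sign behaves like the SQL LIKE wildcard. The following Python sketch (an illustration, not BRM code) shows how such a pattern selects an event class and its subclasses:

```python
import re

def matches_class_pattern(pattern, storable_class):
    """True if storable_class matches pattern, where '%' (as in SQL LIKE)
    matches any sequence of characters. Illustrative helper only."""
    # Escape literal segments, then rejoin with '.*' in place of each '%'.
    parts = [re.escape(p) for p in pattern.split("%")]
    return re.fullmatch(".*".join(parts), storable_class) is not None
```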

-p

Writes an SQL statement of the operation to the partition_utils.log file but does not perform any action on the database. See "Running the partition_utils Utility in Test Mode".

-h

Displays the syntax and parameters for this utility.

Syntax for Updating Partitions

partition_utils -o update [-c storable_class] [-p] [-h]

Parameters for Updating Partitions

-o update

Aligns partitions across all object tables for a single base class. All real-time and delayed-event partitions get the same real-time partitioning scheme as their base table (EVENT_T for /event base class tables, ITEM_T for /item base class tables, etc.).

-c storable_class

Specifies the class of objects to be updated. The default storable base class is /event.

-p

Writes an SQL statement of the operation to the partition_utils.log file, but does not perform any action on the database. See "Running the partition_utils Utility in Test Mode".

-h

Displays the syntax and parameters for this utility.

Syntax for Restarting a partition_utils Job

partition_utils -o restart [-b] [-h]

Parameters for Restarting a partition_utils Job

-o restart

Re-executes the previous operation that was unsuccessful due to an error or abnormal termination.

-b

Skips re-executing the previous operation but clears its status.

-h

Displays the syntax and parameters for this utility.

Syntax for Finding the Maximum POID for a Date

partition_utils -o maxpoid -s date -t realtime|delayed [-p] [-h]

Parameters for Finding the Maximum POID for a Date

-o maxpoid

Returns the maximum POID for the given date.

-s date

Specifies the date that determines the maximum POID. The format is MMDDYYYY.

-t realtime|delayed

Gets the maximum POID in only real-time or only delayed-event partitions.

-p

Writes an SQL statement of the operation to the partition_utils.log file but does not perform any action on the database. See "Running the partition_utils Utility in Test Mode".

-h

Displays the syntax and parameters for this utility.

Results

If the utility does not notify you that it was successful, look in the partition_utils.log file to find any errors. This file is either in the directory from which the utility was started or in a directory specified in the utility configuration file. The partition_utils.log file includes SQL statements if you use the -p parameter.

partitioning.pl

Use this script to manage all the nonpartitioning-to-partitioning upgrade tasks. Run it from a UNIX prompt.

For more information, see "Changing from a Nonpartitioned to a Partitioned Database".

Important:

The partitioning.pl script needs the partition.cfg configuration file in the directory from which you run the utility.

Location

BRM_Home/bin

Syntax

perl partitioning.pl [-c | -n | -a | -h]

Parameters

-c

Creates the database objects required for the upgrade, including the following:

  • The UPG_LOG_T table that logs all the information about the upgrade.

  • The pin_upg_common package that contains all the common routines for the upgrade.

-n

Displays the event tables that will be partitioned during the upgrade.

Tables selected for partitioning are listed in the TABLES_TOBE_PARTITIONED_T table, which is created during the upgrade process. This table contains two columns:

  • table_name: The name of the table to be partitioned.

  • partition_exchanged: The value of the exchanged partition. This value is used by the upgrade scripts to perform the table partitioning.

For example:

tables_tobe_partitioned
table_name                             varchar2(30)
partition_exchanged                    number(38)

Use the INSERT statement to partition tables and use 0 for the partition_exchanged value. For example, to insert MY_CUSTOM_EVENT_TABLE, execute this SQL statement:

INSERT INTO TABLES_TOBE_PARTITIONED_T (table_name, partition_exchanged)
VALUES ('MY_CUSTOM_EVENT_TABLE',0); COMMIT;

Note:

To prevent a listed table from being partitioned, use the SQL DELETE statement to delete its name from TABLES_TOBE_PARTITIONED_T.
-a

Partitions the tables listed in the TABLES_TOBE_PARTITIONED_T table.

To partition additional event tables, insert their names into TABLES_TOBE_PARTITIONED_T.

-h

Displays the syntax and parameters for this utility.

Results

The utility does not notify you if it was successful. Look in the UPG_LOG_T table to find any errors.

pin_clean_asos

Use this utility to delete closed /active_session objects from Oracle In-Memory Database (IMDB) Cache.

This utility compares an object's expiration time to the current time to determine if the object needs to be deleted and deletes the objects from one IMDB Cache at a time.

Note:

The default pin.conf file for this utility includes the entry - pin_mta multi_db. This utility does not use this entry.

In a multischema environment, you must run this utility separately for each IMDB Cache node. You can create a script that calls the utility with connection parameters to connect to the desired node.

For more information about deleting expired objects from IMDB Cache, see "Purging Old Call Data from Memory" in BRM Telco Integration.

Important:

To connect to the BRM database, the pin_clean_asos utility needs a pin.conf configuration file in the directory from which you run the utility.

Location

BRM_Home/bin

Syntax

pin_clean_asos -object "object_type" [-expiration_time number_of_hours] [-help]

Parameters

-object "object_type"

Specifies the type of /active_session object to delete. For example, to delete /active_session/telco/gsm objects from IMDB Cache:

pin_clean_asos -object "/active_session/telco/gsm"
-expiration_time number_of_hours

Sets the expiration time (in hours) for /active_session objects. The utility compares the expiration time with an object's end time (PIN_FLD_END_T) and deletes objects that are older than the number of hours you specify. For example, if you specify 2 as the value and you run the utility at 10 a.m., the utility deletes objects that were closed on or before 8 a.m.

If not specified, the expiration time defaults to 0. This results in the removal of all /active_session objects that have an end time less than the current time.
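The deletion rule can be summarized as: an object is deleted when its end time is at or before the current time minus the expiration time. A Python sketch of this comparison (illustrative only, not BRM code):

```python
from datetime import datetime, timedelta

def is_expired(end_time, now, expiration_hours=0):
    """True if a closed session's end time (PIN_FLD_END_T) is at or before
    'now' minus expiration_hours. With the default of 0, any object whose
    end time is in the past qualifies. Illustrative helper only."""
    return end_time <= now - timedelta(hours=expiration_hours)
```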

-help

Displays the syntax and parameters for this utility.

Results

If the utility does not notify you that it was successful, look in the default.pinlog file to find any errors. This file is either in the directory from which the utility was started or in a directory specified in the utility configuration file.

pin_clean_rsvns

Use this utility to free space in Oracle IMDB Cache or the database by deleting objects that remain because of a session's abnormal termination, such as a missed stop accounting or cancel authorization request.

Note:

This utility is installed with Resource Reservation Manager. See "About Resource Reservation Manager" in BRM Configuring and Collecting Payments.

This utility performs the following functions:

  • Releases expired reservations.

  • Rates active sessions that are in a started or an updated state and updates balances in the database.

    Note:

    You can optionally rate active sessions in a created state.
  • Deletes expired active session objects.

This utility compares an object's expiration time with the current time to determine if the object needs to be deleted and deletes objects from one IMDB Cache at a time.

Note:

This utility requires a pin.conf file to provide entries for connecting to the CM and DM, login name and password, log file information, and performance tuning. The values for these entries are obtained from the information you provided during installation or the entries are set with a default value.

The default pin.conf file for this utility includes the entry - pin_mta multi_db. This utility does not use this entry.

In a multischema environment, you must run this utility separately for each IMDB Cache node. You can create a script to call the utility with connection parameters to connect to the desired node.

Location

BRM_Home/bin

Syntax

pin_clean_rsvns       [-help] [-object object_type] [-account] 
                      [-expiration_time number_of_hours] [-cause user_defined] 
                      [-bytes_uplink volume] [-bytes_downlink volume]

Parameters

-help

Displays the syntax and parameters for this utility.

-object object_type

Specifies the object storage location. In an IMDB Cache DM environment, object_type can only be 1. Use -object 1 to delete expired objects in IMDB Cache.

-account

Calls STOP_ACCOUNTING to delete the active session objects in both created and update states, and starts rating the session.

Use this parameter to rate active sessions if your network supports only the create state, and not start and update states, for sessions. For example, networks using the Message Based Interface (MBI) protocol send only authorization and reauthorization requests, so the active session objects remain in a created state even during usage.

Note:

When you run the utility without this parameter, it calls STOP_ACCOUNTING to delete the active session objects in the update state.
-expiration_time number_of_hours

Sets back the expiration time for the objects, in hours.

The default is the current time when the utility runs. The utility deletes objects that are in an expired state up to the time you specify. For example, if you specify 2 as the value and you run the utility at 10 a.m., the utility deletes objects that expired by 8 a.m.

-cause user_defined

Specifies the reason for releasing reservations and terminating the session.

You can define any value for the cause, and the value is stored in the event object. You can define different rate plans depending on the reason for the session termination.

Note:

You use the rate plan selector to specify the rate plan for calculating the charges for the event.
-bytes_uplink volume

If the /active_session object's PIN_FLD_BYTES_UPLINK field is set to 0 or is not set, this parameter populates the field with the specified volume.

-bytes_downlink volume

If the /active_session object's PIN_FLD_BYTES_DOWNLINK field is set to 0 or is not set, this parameter populates the field with the specified volume.

Results

The utility releases any expired reservations and deletes any expired session objects. If there are any session objects in a started or updated state, the utility rates the objects:

  • For duration RUMs, it calculates the duration to rate by using the end time specified in the session object, the expiration time in the reservation object, the pin_virtual_time, or the current system time.

  • For volume RUMs, it calculates the volume to rate by using the volume in the session object or the volume passed in at the command line.

pin_ctl

Use this utility to start and stop BRM components.

Important:

To connect to the BRM database and configure the different processes, the pin_ctl utility needs a configuration file in the directory from which you run the utility. This configuration file must be called pin_ctl.conf, which is different from most BRM configuration file names.

For information on setting up and configuring the processes that pin_ctl controls, see "Configuring the pin_ctl Utility".

For general information on creating configuration files, see "Creating Configuration Files for BRM Utilities".

Syntax

pin_ctl action component [-c file_name] [-collectdata] [-debug] [-i]

Parameters

action

Specifies the type of action to be executed. See "action Parameter".

For example, to start the CM, use the following command:

pin_ctl start cm
component

Specifies the process on which the action is performed. See "component Parameter".

-c file_name

Specifies a configuration file to use instead of the default. Use this parameter to run different configurations of the same system.

-collectdata

Gets diagnostic data when starting, stopping, or checking the status of a component.

-debug

Displays debugging information.

-i

Interaction mode. Use this parameter to allow the utility to stop and ask if you want to proceed. This is especially useful when running stop, halt, and clear.

action Parameter

clear

Deletes log entries associated with the component (not the file).

Note:

The log file is not deleted, just the entries.
cstart

Clears the component logs and, if the component is not running, starts the component.

If the component is already running, the command clears the log file; the component continues running.

Note:

You are not prompted to clear logs.
halt

Searches for the specified component and runs the kill -9 command.

restart

Stops the component, waits for completion, then restarts the component.

snmpget action

Gets an SNMP value.

To use this command, you must add snmpget actions by editing the pin_ctl.conf file. See "Customizing snmpset and snmpget Actions".

snmpset action

Sets an SNMP value:

  • addServerInstance. Rebalances DM connection load when you restart a failed DM in the DM pool or when you add a new DM to the DM pool.

  • refreshConnections. Rebalances DM connections when you restart a failed DM in the pool.

You can add snmpset actions by editing the pin_ctl.conf file. See "Customizing snmpset and snmpget Actions".

start

Starts the component if it is not running.

If you specify all for component, it starts the components specified in the pin_ctl.conf file. For information, see "Customizing the Components Included in 'all'".

status

Returns the status of component: Running or NotRunning.

stop

Stops the component if it is running.

If you specify all for component, it stops the components specified in the pin_ctl.conf file. For information, see "Customizing the Components Included in 'all'".

component Parameter

You can perform an action on any of the following components:

all

Applies the action to a customizable set of components. By default, the components are:

  • Oracle Data Manager (DM)

  • Email DM

  • Connection Manager

  • CM Master Process

  • Invoice formatter

To customize the set of components, see "Customizing the Components Included in 'all'".

answer

Paymentech answer simulator

batch_controller

Batch Controller

bre

Pipeline Manager

bre_tt

Pipeline Manager with IMDB Cache

cm

Connection Manager

cm_proxy

CM Proxy

cmmp

Connection Manager Master Process

dm_eai

Enterprise Application Interface (EAI) DM

dm_email

Email DM

dm_fusa

Paymentech DM

dm_ifw_sync

Account Synchronization DM (Oracle AQ)

dm_invoice

Invoice DM

dm_ldap

LDAP DM

dm_oracle

Oracle DM

dm_tt

Oracle In-Memory Database (IMDB) Cache DM

dm_vertex

Vertex tax calculation DM

eai_js

EAI Java Server

formatter

Invoice formatter

infmgr

System Manager

nmgr

Node Manager

rtp

Real-time pipeline

pin_db_alert.pl

Use this utility to monitor the following database key performance indicators (KPIs) in Oracle databases:

  • Age of event and audit tables

  • Size of audit tables

  • Invalid or missing procedures, triggers, or indexes

You configure the pin_db_alert.pl utility to alert you when one of these components has returned a certain status.

For more information, see "Using the pin_db_alert Utility to Monitor Key Performance Indicators".

Important:

To connect to the BRM database, the pin_db_alert.pl utility needs a configuration file in the directory from which you run the utility.

Location

BRM_Home/diagnostics/pin_db_alert

Syntax

pin_db_alert.pl 

Parameters

This utility has no parameters.

pin_purge

Use this utility to delete old bills, items, journals, and expired account sub-balances from Oracle IMDB Cache.

This utility compares an object's expiration time to the current time to determine if the object needs to be deleted.

This utility deletes objects from one Oracle IMDB Cache or logical partition at a time. If the system has multiple logical partitions, you must run this utility for each logical partition. In a high-availability configuration, you must run the utility for each high-availability node. You can create a script to call the utility with relevant connection parameters to connect to the desired Oracle IMDB Cache nodes.

Important:

Before running this utility, unset the ORACLE_HOME environment variable.

Note:

This utility requires a pin.conf file in the directory from which you run the utility with entries for connecting to the CM and IMDB Cache DM, login name and password to connect to the data store, batch size for deleting, and log file information. The values for these entries are obtained from the information you provided during installation or the entries are set with a default value. The pin.conf file for this utility is installed in the BRM_Home/apps/pin_subscription directory.

Location

BRM_Home/bin

Syntax

pin_purge [-l username/password@DatabaseAlias] -c {bill|item|subbalance} 
          {-n number_of_days | -d MM/DD/YYYY} [-help]

Parameters

-l username/password@DatabaseAlias

Specifies how to connect to the database. If you omit this option, the utility uses the information provided in the pin.conf file to establish the connection.

-c {bill | item | subbalance}

Specifies the object to be deleted.

  • -c bill deletes bills that have no pending or open items, a due amount of zero, and a bill status of closed (that is, the bill is finalized).

    Deleting bill objects does not also delete the associated bill items. The items must be deleted separately by using the -c item parameter.

    Note:

    The bill objects are deleted from the Oracle IMDB Cache only.
  • -c item deletes billable and special items (such as payments, adjustments, and disputes) in closed status and all journals related to these items.

    Note:

    The item and journal objects are deleted from Oracle IMDB Cache only.
  • -c subbalance deletes account sub-balances.

    Note:

    Account sub-balances are deleted from Oracle IMDB Cache. The objects are eventually deleted from the BRM database when updates from Oracle IMDB Cache are propagated to the BRM database.
-n number_of_days

Specifies the number of days prior to which bills, items, or sub-balances are deleted. For example, specify 90 to delete bills, items, or sub-balances older than 90 days.

-d MM/DD/YYYY

Specifies the date prior to which bills, items, or sub-balances are deleted. For example, specify 03/01/2009 to delete bills, items, or sub-balances older than March 1, 2009.
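The -n and -d parameters express the same cutoff: -n computes the cutoff date by subtracting the number of days from the current date. A Python sketch of the conversion (illustrative, not part of pin_purge):

```python
from datetime import date, timedelta

def cutoff_from_days(today, number_of_days):
    """The -d date (MM/DD/YYYY format) equivalent to -n number_of_days,
    relative to 'today'. Illustrative helper, not part of pin_purge."""
    return (today - timedelta(days=number_of_days)).strftime("%m/%d/%Y")
```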

-help

Displays the syntax and parameters for this utility.

Results

If the utility does not notify you that it was successful, look in the pin_purge.pinlog file to find any errors. This file is either in the directory from which the utility was started or in a directory specified in the utility configuration file.

pin_sub_balance_cleanup

Use this utility to purge expired account sub-balances from the BRM database.

Caution:

When you delete sub-balances from the database, events that impacted those sub-balances cannot be rerated. Ensure you no longer need the expired sub-balances before deleting them.

For more information, see "About Purging Account Sub-Balances".

To connect to the BRM database, the pin_sub_balance_cleanup utility needs a configuration file in the directory from which you run the utility.

Important:

You must run the pin_sub_balance_cleanup utility from the BRM_Home/apps/pin_subscription directory. This directory contains the pin.conf file that has the parameters required for this utility.

Location

BRM_Home/bin

Syntax

pin_sub_balance_cleanup -n number_of_days | -d date

Parameters

-n number_of_days

The number of days prior to which sub-balances are deleted. For example, specify 60 to delete expired sub-balances older than 60 days.

-d date

The date prior to which sub-balances are deleted. Use the format MM/DD/YYYY. For example, specify 06/30/2003 to delete expired sub-balances older than June 30, 2003.

Results

The utility does not notify you that it was successful. To check for errors, look in the utility log file (BRM_Home/apps/pin_subscription/pin_subscription.pinlog).

pin_tt_schema_gen

Use this utility to connect to the BRM database and generate SQL scripts to create and initialize the BRM cache groups in Oracle IMDB Cache. These SQL scripts contain all the required cache group definitions. You can run these SQL scripts against the appropriate IMDB Cache nodes to load the cache groups with the required schema and data. See "Generating the BRM Cache Group Schema".

Note:

The pin_tt_schema_gen utility has been certified for Perl 5.8.0 and Oracle 11g.

Before running this utility, you must configure the database connection parameters in one of the following files in BRM_Home/bin.

  • pin_tt_schema_gen.values, which generates scripts for the default cache groups.

  • A custom configuration values file, which generates scripts for custom cache groups.

See "Configuring the pin_tt_schema_gen.values File" for more information.

Important:

After you start this utility, do not interrupt it. It might take several minutes to complete, depending on the size of your database.

Note:

When you run this utility, the following warning messages are logged in the pin_schema_gen.log file. These warnings are reported for storable classes that do not have associated tables. You can ignore these warning messages.

'/reservation/active' mentioned in array local_tables_class_def does not have any table.

'/active_session/telco/gprs/master' mentioned in array local_tables_class_def does not have any table.

'/active_session/telco/gprs/subsession' mentioned in array local_tables_class_def does not have any table.

Requirements

The pin_tt_schema_gen utility requires the following versions of the Perl modules:

  • DBI version 1.605

  • DBD-Oracle version 1.16

  • Bit-Vector version 7.1

Location

BRM_Home/bin

Syntax

pin_tt_schema_gen [-t] [-l] [-d] [-o] [-a] [-h] [-f configuration_values_file]

Parameters

-t

Generates the tt_schema.sql script file, which you can run against the appropriate IMDB Cache node to create the cache groups and transient table schema.

-l

Generates the tt_load.sql script file, which you can run against the appropriate IMDB Cache node to load data from the BRM database.

-d

Generates the tt_drop.sql script file, which you can run against the appropriate IMDB Cache node to drop the cache groups.

-o

Updates the BRM database with unique indexes and not-null constraints.

-a

Runs the -t, -l, -d, and -o parameters.

-h

Displays the syntax and parameters for this utility.

-f configuration_values_file

Specifies the name of the configuration values file for the utility to use. The default is pin_tt_schema_gen.values. You can provide a different configuration values file.

Results

If the utility does not notify you that the files were generated, look in the pin_tt_schema_gen.log file to find any errors. This file is either in the directory from which the utility was started or in a directory specified in the utility configuration file.

pin_virtual_gen

Use the pin_virtual_gen utility to convert /event classes in the BRM schema to use virtual columns. After you run the utility, the poid_type columns of event tables in the BRM database are virtual-column enabled.

For more information, see "Generating Virtual Columns on Event Tables".

To connect to the BRM database and to specify logging information, the pin_virtual_gen utility uses the Infranet.properties file in the directory from which you run the utility.

Specify the log level by setting the infranet.log.level property in the Infranet.properties file. The default is 1. Valid values are 1, 2, and 3. Regardless of the log level set, status messages are printed to stdout and to the log file. Errors are logged and printed to stderr.
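For example, a minimal Infranet.properties fragment that sets the log level to 2; only the infranet.log.level property is documented here, and any other entries in the file (such as connection settings) are omitted from this sketch:

```
# Log level 2: log warnings in addition to errors;
# status messages always go to stdout and the log file
infranet.log.level=2
```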

Location

BRM_Home/apps/pin_virtual_columns

Syntax

pin_virtual_gen {-gentasks | -readtasks} {create | pre_export | post_export | verify_types | create_types} [-execute] | -showtasks [minID maxID] | -help

Parameters

-gentasks create [-execute]

Generates the tasks and stores them in the database. With -execute, the tasks are also run after they are saved to the database.

-readtasks create [-execute]

Reads previously stored tasks from the database. With -execute, the tasks are also run after they are read from the database.

-gentasks create

Creates virtual columns and supporting columns in the BRM database.

Use this option in conjunction with the -showtasks option if you want to display the tasks for creating the virtual columns before executing them. The following example shows how to create the tasks for creating virtual columns, display them, and then execute them:

pin_virtual_gen -gentasks create
pin_virtual_gen -showtasks
pin_virtual_gen -readtasks create -execute
-showtasks [minID maxID]

Reads corresponding tasks from the database and displays task details.

minID and maxID, which are optional, restrict the display to tasks within an ID range. The command shows tasks that have an ID greater than minID and less than maxID.

All tasks are displayed when an ID range is not provided.

-gentasks pre_export [-execute]

Removes virtual columns temporarily.

pin_virtual_gen -gentasks pre_export -execute
-gentasks post_export [-execute]

Restores virtual columns that were temporarily removed.

pin_virtual_gen -gentasks post_export -execute
-gentasks verify_types [-execute]

Verifies whether storable class type names exist in the data dictionary of the BRM database schema.

pin_virtual_gen -gentasks verify_types -execute
-gentasks create_types [-execute]

Creates the names of custom storable class types and stores them in the data dictionary of the BRM database schema.

pin_virtual_gen -gentasks create_types -execute
-help

Displays the syntax and parameters for this utility.

Results

The pin_virtual_gen utility notifies you when it runs successfully. Otherwise, look in the vcol.pinlog file for errors. This file is either in the directory from which the utility was started or in a directory specified in the Infranet.properties file.

purge_audit_tables.pl

Use this script to archive unneeded shadow objects in audit tables. Use it to:

  • Generate audit table reports

  • Create history tables

  • Move audit tables to history tables

  • Archive audit tables

  • Restore archived audit tables

The different versions of shadow objects that are valid for archiving are moved to the history tables so they can be accessed for future reference. In addition, when versions of an object are removed from audit tables, all subclasses of the shadow objects are also removed automatically by the script.

Note:

The purge_audit_tables.pl script does not delete objects from the database; it only purges the object rows stored in a table.

Important:

To connect to the BRM database, the purge_audit_tables.pl script needs a configuration file in the directory from which you run the utility.

For more information, see "Archiving Audit Data" in BRM Developer's Guide.

For information on audit trails and shadow objects, see "About Tracking Changes to Object Fields" in BRM Developer's Guide.

Location

BRM_Home/sys/archive/oracle

Syntax

purge_audit_tables.pl  report -t objects -d date -l login/pswd@connection|
                       create -t objects -l login/pswd@connection|
                       archivedirect -t objects -d date -c commit_size -l login/pswd@connection|
                       archiveindirect -t objects -d date -c commit_size -l login/pswd@connection|
                       renametohist -t objects -l login/pswd@connection|
                       updfromhist -t objects -d date -c commit_size -l login/pswd@connection|
                       help

The following purging actions are supported:

report Syntax

Generates audit table reports.

The utility generates a file named purge_tables.report, which provides information about the tables for the specified objects, including the number of rows in each table that are eligible for purging, and whether history tables exist for them. You create a report to determine which mode of archiving to use for the specified object: archivedirect or archiveindirect.

purge_audit_tables.pl  report -t objects -d date -l login/pswd@connection 
-t objects

Specifies a comma-separated list of shadow objects on which to report.

Shadow objects use an au prefix. For example, a change to a field marked for auditing in the /profile object results in the /au_profile shadow object.

Note:

Do not specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting object is reported on when you list the /au_profile object.
-d date

Specifies the cutoff date for purging data.

This date determines which versions of the audit object are eligible for purging. If a version of an object is valid at the cutoff date, and there is at least one older version of the same object, the valid object is kept and all older versions are marked for purging and moved to the history tables.

The format is YYYY:MM:DD.

-l login/pswd@connection

Specifies your standard Pipeline Manager user name and database password. This parameter is required to connect to the database.

Sample report command

perl purge_audit_tables.pl report -t au_account -d 2005:02:23 -l pin/p1N@subdb
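The cutoff rule described for the -d parameter can be sketched as follows. This is an illustrative Python sketch, not the script's actual logic; select_versions_to_purge is a hypothetical helper:

```python
from datetime import date

def select_versions_to_purge(versions, cutoff):
    """Hypothetical helper illustrating the -d cutoff rule: the version
    of an object valid at the cutoff date is kept, and all older
    versions of the same object are marked for purging."""
    # versions: (effective_date, label) pairs, sorted oldest to newest
    at_or_before = [v for v in versions if v[0] <= cutoff]
    if len(at_or_before) < 2:
        return []  # no older version exists, so nothing is purged
    keep = at_or_before[-1]  # the version valid at the cutoff date
    return [v for v in at_or_before if v is not keep]

versions = [(date(2004, 1, 1), "v1"),
            (date(2004, 6, 1), "v2"),
            (date(2005, 6, 1), "v3")]
purged = select_versions_to_purge(versions, date(2005, 2, 23))
# v2 is valid at the 2005:02:23 cutoff, so only the older v1 is
# purged; v3 postdates the cutoff and is kept
```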

create Syntax

Creates empty history tables for the specified objects and their child objects.

purge_audit_tables.pl create -t objects -l login/pswd@connection 
-t objects

Specifies a comma-separated list of objects for which to create history tables. History table names are prefixed with H_, as shown in Table 9-1.

Table 9-1 Audit and History Tables Format

/service Object Audit Tables    /service Object History Tables
AU_SERVICE_T                    H_SERVICE_T
AU_SERVICE_ALIAS_T              H_SERVICE_ALIAS_T
AU_SERVICE_EXTRACTING_T         H_SERVICE_EXTRACTING_T


Important:

Do not specify the child objects for a table; they are handled automatically by the script.
-l login/pswd@connection

Specifies your standard Pipeline Manager user name and database password. This parameter is required to connect to the database.

Sample create command

purge_audit_tables.pl create -t au_account -l pin/p1N@subdb
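The AU_-to-H_ naming convention shown in Table 9-1 can be expressed as a small sketch; history_table_name is a hypothetical helper for illustration, not part of the script:

```python
def history_table_name(audit_table):
    # Hypothetical helper: map an AU_-prefixed audit table name to
    # its H_-prefixed history table name, per Table 9-1.
    assert audit_table.startswith("AU_")
    return "H_" + audit_table[len("AU_"):]

print(history_table_name("AU_SERVICE_ALIAS_T"))  # H_SERVICE_ALIAS_T
```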

archivedirect Syntax

Archives audit tables for the specified objects and their child objects by copying the data directly to the history tables and then removing it from the audit tables.

purge_audit_tables.pl archivedirect -t objects -d date -c commit_size -l login/pswd@connection 
-t objects

Specifies a comma-separated list of objects to archive audit tables for.

Shadow objects use an au prefix. For example, a change to a field marked for auditing in the /profile object results in the /au_profile shadow object.

Note:

Do not specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting object is reported on when you list the /au_profile object.
-d date

Specifies the cutoff date for purging data.

This date determines which versions of the audit object are eligible for purging. If a version of an object is valid at the cutoff date, and there is at least one older version of the same object, the valid object is kept and all older versions are marked for purging and moved to the history tables.

The format is YYYY:MM:DD.

-c commit_size

Specifies the number of rows to save to the database at one time.

-l login/pswd@connection

Specifies your standard Pipeline Manager user name and database password. This parameter is required to connect to the database.

Sample archivedirect command

purge_audit_tables.pl archivedirect -t au_account -d 2005:03:29 -c 1000 -l pin/p1N@subdb
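The effect of -c can be illustrated with a small batching sketch, assuming (as the parameter description states) that the script saves rows to the database in groups of commit_size; the batches helper below is hypothetical:

```python
def batches(rows, commit_size):
    # Yield rows in groups of commit_size; the script saves each
    # group to the database before starting the next (sketch only).
    for i in range(0, len(rows), commit_size):
        yield rows[i:i + commit_size]

sizes = [len(b) for b in batches(list(range(2500)), 1000)]
# With -c 1000 and 2,500 eligible rows, the data is saved in
# batches of 1000, 1000, and 500 rows
```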

archiveindirect Syntax

Archives audit tables for the specified objects and their child objects by copying the data first to temporary tables, then to the history tables. If successful, the old audit table data is removed.

Important:

Do not delete the temporary tables if the data was not copied successfully to the history tables. If errors occurred when the data was moved from the main audit tables to the temporary tables, manually transfer the data back to the main audit tables. Then, delete the temporary tables and run the script again.
purge_audit_tables.pl archiveindirect -t objects -d date -c commit_size -l login/pswd@connection 
-t objects

Specifies a comma-separated list of objects to archive audit tables for.

Shadow objects use an au prefix. For example, a change to a field marked for auditing in the /profile object results in the /au_profile shadow object.

Note:

Do not specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting object is reported on when you list the /au_profile object.
-d date

Specifies the cutoff date for purging data.

This date determines which versions of the audit object are eligible for purging. If a version of an object is valid at the cutoff date, and there is at least one older version of the same object, the valid object is kept and all older versions are marked for purging and moved to the history tables.

The format is YYYY:MM:DD.

-c commit_size

Specifies the number of rows to save to the database at one time.

-l login/pswd@connection

Specifies your standard Pipeline Manager user name and database password. This parameter is required to connect to the database.

Sample archiveindirect command

purge_audit_tables.pl archiveindirect -t au_account -d 2005:02:23 -c 1000 -l pin/p1N@subdb

renametohist Syntax

Renames the specified audit tables to their corresponding history tables and recreates the audit tables without any indexes. This option also creates the script files used to create, rename, rebuild, and drop indexes that were in the audit tables. You can run the following scripts manually when necessary.

  • create_index_script.sql

  • rename_index_script.sql

  • rebuild_index_script.sql

  • drop_index_script.sql

purge_audit_tables.pl  renametohist -t objects -l login/pswd@connection 
-t objects

Specifies a comma-separated list of objects for which to rename audit tables to history tables and recreate empty audit tables.

Note:

You do not need to specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting child object is reported on if you list the /au_profile object.
-l login/pswd@connection

Specifies your standard Pipeline Manager user name and database password. This parameter is required to connect to the database.

Sample renametohist command

purge_audit_tables.pl  renametohist -t au_account -l pin/p1N@subdb

updfromhist Syntax

Retrieves the data for a given object and its child objects from the history tables and transfers it back to the audit tables.

purge_audit_tables.pl updfromhist -t objects -d date -c commit_size -l login/pswd@connection 
-t objects

Specifies a comma-separated list of shadow objects to retrieve the data from the history tables and update in the corresponding audit tables.

Shadow objects use an au prefix. For example, a change to a field marked for auditing in the /profile object results in the /au_profile shadow object.

Note:

Do not specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting object is reported on when you list the /au_profile object.
-d date

Specifies the cutoff date for retrieving data.

The format is YYYY:MM:DD.

-c commit_size

Specifies the number of rows to save to the database at one time.

-l login/pswd@connection

Specifies your standard Pipeline Manager user name and database password. This parameter is required to connect to the database.

Sample updfromhist command

purge_audit_tables.pl  updfromhist -t au_account -d 2005:03:29 -c 1000 -l pin/p1N@subdb

help Syntax

Displays the syntax for the purge_audit_tables.pl utility.

Sample help command

purge_audit_tables.pl help

Results

The utility notifies you only if it encounters errors.