This chapter provides reference information for Oracle Communications Billing and Revenue Management (BRM) system administration utilities.
Use this utility to load the event record file (BRM_home/sys/data/config/pin_event_record_map) into the BRM database.
The event record file contains a list of event types to exclude from being recorded in the database. For more information, see "Managing Database Usage".
Note:
You cannot load separate /config/event_record_map objects for each brand. All brands use the same object.
Important:
At the time you load the event record file, if any events of a type specified in the file already exist in the database, these events remain in the database.
After running this utility, you must stop and restart the Connection Manager (CM).
Logs debugging information.
Displays detailed information as the event record map is created.
Returns a list of the events in the pin_event_record_map file configured to not be recorded.
Important:
This option must be used by itself and does not require the file name.
The file containing the event types to exclude from the database.
To exclude an event type from recording, set its flag value to 0 in the file. To temporarily record an event type, change the event's flag value to 1 and reload the file.
The following example shows the event record file format where the event type is followed by its flag value:
/event/session : 1
/event/customer/nameinfo : 0
/event/billing/deal/purchase : 0
/event/billing/product/action/purchase : 0
/event/billing/product/action/modify : 0
Important:
The record file includes one default event type, /event/session, set to be recorded. Never exclude from recording this event type, or any event type that is updated more than once during the same session. If such events are configured to not record, an error occurs when the system tries to update the same event during the same session. To eliminate the error, remove the event causing the error from the record map file.
Use this utility to do the following:
Add partitions
Purge objects without removing partitions
Remove partitions
Enable delayed partitions
Update partitions
Restart a partition_utils job
Find the maximum POID for a date
Note:
To use this utility to add a partition for a storable class other than event, bill, invoice, item, journal, newsfeed, sepa, or user activity, you must enable partitioning for that storable class. See "Enabling Different Classes for Partitioning during Installation" in BRM Installation Guide and "Converting Nonpartitioned Classes to Partitioned Classes" in BRM System Administrator's Guide.
Before you use this utility, configure the database connection parameters in the BRM_home/apps/partition_utils/partition_utils.values file. See "Configuring a Database Connection".
Important:
After you start this utility, do not interrupt it. It might take several minutes to complete an operation, depending on the size of your database.
For more information, see the following topics:
partition_utils -o add -t realtime|delayed -s start_date -u month|week|day -q quantity [-c storable_class] [-w width] [-f] [-p] [-b] [-h]
Parameters for Adding Partitions
Adds partitions.
Adds real-time or delayed-event partitions. Only event tables can have delayed-event partitions.
Note:
Conversion Manager does not support the migration of data to the EVENT_T tables.
Specifies the starting date for the new partitions. The format is MMDDYYYY.
start_date must be the day after tomorrow or later; you cannot create partitions starting on the current day or the next day. For example, if the current date is January 1, the earliest start date for the new partition is January 3 (for example, 01032016).
Specifies the time unit for the partitions.
Specifies the number of partitions to add.
If a partition with a future date already exists in the table, this adds more partitions than the specified quantity.
For example, if you create one partition with a February 1 start date and the table already contains a P_R_02282016 partition, the P_R_02282016 partition is split into two partitions: P_R_02012016 and P_R_02282016.
Specifies the class of objects to be stored in the partition. The default is /event.
Note:
To add a partition for a storable class other than an event, bill, invoice, item, or journal, partitioning must have been enabled for that storable class. See "Enabling Different Classes for Partitioning during Installation" in BRM Installation Guide and "Converting Nonpartitioned Classes to Partitioned Classes" in BRM System Administrator's Guide.
Specifies the number of units in a partition (for example, 3).
Forces the creation of a partition when start_date falls within the time period of the current partition. The existing partition is split in two: one new partition containing objects created before the specified start date and the other new partition containing objects created on or after the specified start date.
Caution:
Before forcing partitions:
For real-time partitions, stop all BRM server processes.
For delayed-event partitions, stop all delayed-event loading by stopping Pipeline Manager and the Rated Event (RE) Loader utility (pin_rel).
Note:
The -f parameter works differently when you remove partitions. In that case, it forces the removal of objects associated with open items.
Writes an SQL statement of the operation to the partition_utils.log file without performing any action on the database. See "Running the partition_utils Utility in Test Mode".
Creates backdated partitions. For more information, see "Creating Partitions for Your Legacy Data" in BRM Managing Customers.
Displays the syntax and parameters for this utility.
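For example, a command similar to the following (the start date and quantity shown are illustrative) adds three monthly real-time partitions to the event tables, starting March 1, 2016:
partition_utils -o add -t realtime -s 03012016 -u month -q 3 -c /event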
Syntax for Purging Objects without Removing Partitions
partition_utils -o purge [-s start_date] -e end_date -t [realtime|delayed] [-p] [-h]
Only event, item, bill, invoice, journal, newsfeed, and sepa objects can be purged without removing their partitions.
To purge other types of objects, see "Syntax for Removing Partitions".
Parameters for Purging Objects without Removing Partitions
Purges event, item, bill, invoice, journal, newsfeed, and sepa objects without removing their partitions. The event objects must be associated with closed items. To enable this utility to purge event objects associated with open items, see "Enabling Open Items to Be Purged".
Specifies the start of the date range containing the objects you want to purge. The date is inclusive. The format is MMDDYYYY. If start_date is not specified, all objects created on or before end_date are purged.
Specifies the end of the date range containing the objects you want to purge. The date is inclusive. The format is MMDDYYYY.
Note:
If the specified start and end dates do not match the partition boundaries, only objects in partitions that are completely within the date range are purged.
Purges real-time objects, delayed-event objects, or both. To purge both, include -t without specifying realtime or delayed. If -t is omitted entirely, an error occurs.
Writes an SQL statement that shows the partitions that will be removed to the partition_utils.log file without performing any action on the database. See "Running the partition_utils Utility in Test Mode".
Displays the syntax and parameters for this utility.
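For example, a command similar to the following (the dates shown are illustrative) runs in test mode and writes to partition_utils.log the SQL for purging real-time objects created in January 2016:
partition_utils -o purge -s 01012016 -e 01312016 -t realtime -p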
Syntax for Removing Partitions
partition_utils -o remove -s start_date -e end_date [-c storable_class] [-t realtime|delayed] [-f] [-p] [-h]
The only way to purge objects other than bill, event, invoice, item, journal, newsfeed, and sepa from your database is to remove their partitions by using the -f parameter.
Note:
To purge bill, event, invoice, item, journal, newsfeed, and sepa objects that meet the purging criteria, see "Syntax for Purging Objects without Removing Partitions".
To purge bill, event, invoice, item, journal, newsfeed, and sepa objects that do not meet the purging criteria, use the -f parameter, which removes the partitions that contain them.
For information about purging criteria, see "Objects Purged by Default".
Caution:
Operations using the -f parameter cannot be undone and will remove objects that are being used. Use with caution.
Parameters for Removing Partitions
Removes partitions.
Specifies the start of the date range for the objects you want to remove. The format is MMDDYYYY.
By default, start_date must be at least 45 days ago. You can change this limitation by editing the BRM_home/apps/partition_utils/partition_utils.values file. See "Customizing Partition Limitations".
If the specified dates do not match the partition boundaries, only objects in partitions that are completely within the date range are removed. See "About Purging Database Objects".
Specifies the end of the date range for the objects you want to remove. The format is MMDDYYYY.
By default, end_date must be at least 45 days ago. You can change this limitation by editing the BRM_home/apps/partition_utils/partition_utils.values file. See "Customizing Partition Limitations".
Specifies the partition to remove by base storable class. The default is /event.
When you remove a partition, it removes partitions in all the partitioned tables for the specified base storable class and its subclasses.
Removes real-time or delayed-event partitions. The default is to remove both real-time and delayed-event partitions.
Forces the removal of partitions whether or not the objects in the partitions satisfy the purging criteria. For information about purging criteria, see "Objects Purged by Default".
Caution:
Operations using the -f parameter cannot be undone and will remove objects that are being used. Use with caution.
Note:
The -f parameter works differently when you add partitions. In that case, it forces the splitting of partitions even when they fall within the time period of the current partition.
Writes an SQL statement that shows the partitions that will be removed to the partition_utils.log file without performing any action on the database. See "Running the partition_utils Utility in Test Mode".
Displays the syntax and parameters for this utility.
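For example, a command similar to the following (the dates shown are illustrative) removes the delayed-event partitions that hold /event objects created in the first half of 2015:
partition_utils -o remove -s 01012015 -e 06302015 -c /event -t delayed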
Syntax for Enabling Delayed Partitions
partition_utils -o enable -t delayed -c storable_class [-p] [-h]
Parameters for Enabling Delayed Partitions
Enables delayed-event partitions.
Specifies the event storable class for which you want to add delayed-event partitions. Delayed-event partitions cannot be used for non-event storable classes.
To add delayed-event partitions for all subclasses of an event, use the percent sign (%) as a wildcard (for example, -c /event/session/%).
Writes an SQL statement of the operation to the partition_utils.log file without performing any action on the database. See "Running the partition_utils Utility in Test Mode".
Displays the syntax and parameters for this utility.
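For example, a command similar to the following enables delayed-event partitions for all subclasses of /event/session:
partition_utils -o enable -t delayed -c /event/session/%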
Parameters for Updating Partitions
Aligns partitions across all object tables for a single base storable class. All real-time and delayed-event partitions get the same real-time partitioning scheme as their base table (EVENT_T for event base storable class tables, ITEM_T for item base storable class tables, and so on).
Specifies the class of objects to be updated. The default is /event.
Writes an SQL statement of the operation to the partition_utils.log file without performing any action on the database. See "Running the partition_utils Utility in Test Mode".
Displays the syntax and parameters for this utility.
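As an illustrative sketch only, assuming the update operation takes -o update with an optional -c storable class in the same pattern as the other operations, a command to align the /item partitions might look like this:
partition_utils -o update -c /item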
Parameters for Restarting a partition_utils Job
Restarts the previous operation that was unsuccessful due to an error or abnormal termination.
Bypasses running the previous operation but cleans the status of it.
Displays the syntax and parameters for this utility.
Syntax for Finding the Maximum POID for a Date
partition_utils -o maxpoid -s date -t realtime|delayed [-p] [-h]
Parameters for Finding the Maximum POID for a Date
Returns the maximum POID for the specified date.
Specifies the date for which the maximum POID is to be found. The format is MMDDYYYY.
Gets the maximum POID in only real-time partitions or only delayed-event partitions.
Writes an SQL statement of the operation to the partition_utils.log file without performing any action on the database. See "Running the partition_utils Utility in Test Mode".
Displays the syntax and parameters for this utility.
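For example, a command similar to the following (the date shown is illustrative) returns the maximum POID in the real-time partitions for January 1, 2016:
partition_utils -o maxpoid -s 01012016 -t realtime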
If the utility does not notify you that it was successful, look in the partition_utils.log file to find any errors. This file is either in the directory from which the utility was started or in a directory specified in the utility configuration file. The partition_utils.log file includes SQL statements if you use the -p parameter.
Use this script to manage all the nonpartitioning-to-partitioning upgrade tasks. Run it from a UNIX prompt.
For more information, see "Converting Nonpartitioned Classes to Partitioned Classes".
Important:
This script needs the partition.cfg configuration file in the directory from which you run the utility.
Creates the database objects required for the upgrade, including the following:
The UPG_LOG_T table that logs all the information about the upgrade
The pin_upg_common package that contains all the common routines for the upgrade
Displays the event tables that will be partitioned during the upgrade.
Tables selected for partitioning are listed in the TABLES_TOBE_PARTITIONED_T table, which is created during the upgrade process. This table contains two columns:
table_name: The name of the table to be partitioned.
partition_exchanged: The value of the exchanged partition. This value is used by the upgrade scripts to perform the table partitioning.
Use the SQL INSERT statement to add tables to the list, setting partition_exchanged to 0. For example, to add MY_CUSTOM_EVENT_TABLE, run the following SQL statements:
INSERT INTO TABLES_TOBE_PARTITIONED_T (table_name, partition_exchanged) VALUES ('MY_CUSTOM_EVENT_TABLE', 0);
COMMIT;
Note:
To prevent a listed table from being partitioned, use the SQL DELETE statement to delete its name from TABLES_TOBE_PARTITIONED_T.
Partitions the tables listed in the TABLES_TOBE_PARTITIONED_T table.
To partition additional tables, see "Converting Additional Nonpartitioned Classes to Partitioned Classes".
Displays the syntax and parameters for this utility.
Use this utility to delete closed /active_session objects from Oracle In-Memory Database (IMDB) Cache.
This utility compares an object's expiration time to the current time to determine if the object must be deleted and deletes the objects from one IMDB Cache at a time.
Note:
The default pin.conf file for this utility includes the entry - pin_mta multi_db. This utility does not use this entry.
In a multischema environment, you must run this utility separately for each IMDB Cache node. You can create a script that calls the utility with connection parameters to connect to the desired node.
For more information about deleting expired objects from IMDB Cache, see "Purging Old Call Data from Memory" in BRM Telco Integration.
Important:
To connect to the BRM database, this utility needs a pin.conf configuration file in the directory from which you run the utility.
Specifies the type of /active_session object to delete. For example, to delete /active_session/telco/gsm objects from IMDB Cache:
pin_clean_asos -object "/active_session/telco/gsm"
Sets the expiration time (in hours) for /active_session objects. This utility compares the expiration time with an object's end time (PIN_FLD_END_T) and deletes objects that are older than the number of hours you specify. For example, if you specify 2 as the value and you run the utility at 10 a.m., the utility deletes objects that were closed on or before 8 a.m.
If not specified, the expiration time defaults to 0. This results in the removal of all /active_session objects that have an end time less than the current time.
Displays the syntax and parameters for this utility.
Use this utility to free space in Oracle In-Memory Database (IMDB) Cache or the database by deleting objects that remain because of a session's abnormal termination, such as a missed stop accounting or cancel authorization request.
Note:
This utility is installed with Resource Reservation Manager. See "About Resource Reservation Manager" in BRM Configuring and Collecting Payments.
This utility performs the following functions:
Releases expired reservations
Note:
When searching for reservation objects, this utility releases only expired reservations that are in reserved (active) status.
Rates active sessions that are in a started or an updated state and updates balances in the database
Note:
You can optionally rate active sessions in a created state.
Deletes expired active-session objects
This utility compares an object's expiration time to the current time to determine if the object must be deleted and deletes objects from one IMDB Cache at a time.
Note:
This utility requires a pin.conf file to provide entries for connecting to the CM and DM, login name and password, log file information, and performance tuning. The values for these entries are obtained from the information you provided during installation, or the entries are set with a default value.
The default pin.conf file for this utility includes the entry - pin_mta multi_db. This utility does not use this entry.
In a multischema environment, you must run this utility separately for each IMDB Cache node. You can create a script to call the utility with connection parameters to connect to the desired node.
pin_clean_rsvns [-help] [-object object_type] [-account] [-expiration_time number_of_hours] [-cause user_defined] [-bytes_uplink volume] [-bytes_downlink volume]
Displays the syntax and parameters for this utility.
Specifies the object storage location. In an IMDB Cache DM environment, object_type can only be 1.
Calls STOP_ACCOUNTING to delete the active-session objects in both created and update states, and starts rating the session.
Use this parameter to rate active sessions if your network supports only the create state, and not start and update states, for sessions. For example, networks using the Message Based Interface (MBI) protocol send only authorization and reauthorization requests, so the active-session objects remain in a created state even during usage.
Note:
When you run this utility without -account, it calls STOP_ACCOUNTING to delete the active-session objects in the update state.
Sets back the expiration time for the objects, in hours.
The default is the current time when the utility runs. This utility deletes objects that are in an expired state up to the time you specify. For example, if you specify 2 as the value and you run the utility at 10 a.m., the utility deletes objects that expired by 8 a.m.
Specifies the reason for releasing reservations and terminating the session.
You can define any value for the cause, and the value is stored in the event object. You can define different rate plans depending on the reason for the session termination.
Note:
You use the rate plan selector to specify the rate plan for calculating the charges for the event.
When the /active_session object's PIN_FLD_BYTES_UPLINK field is set to 0 or not specified, this parameter populates the field with the specified volume.
When the /active_session object's PIN_FLD_BYTES_DOWNLINK field is set to 0 or not specified, this parameter populates the field with the specified volume.
The utility releases any expired reservations and deletes any expired session objects. If there are any session objects in a started or updated state, the utility rates the objects:
For duration RUMs, it calculates the duration to rate by using the end time specified in the session object, the expiration time in the reservation object, the pin_virtual_time, or the current system time.
For volume RUMs, it calculates the volume to rate by using the volume in the session object or the volume passed in at the command line.
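For example, a command similar to the following (the values shown are illustrative) releases expired reservations, rates active sessions that remain in a created state, and deletes session objects that expired two or more hours ago:
pin_clean_rsvns -account -expiration_time 2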
Use this utility to close open item objects processed in past billing cycles. Run it from a UNIX prompt.
This utility calls the BRM_home/apps/partition_utils/sql_utils/oracle/pin_close_items.plb stored procedure.
Before you use this utility, configure the database connection parameters in the partition_utils.values file in BRM_home/apps/partition_utils. See "Configuring a Database Connection".
For more information, see "Closing Open Item Objects Processed in Past Billing Cycles".
Important:
This utility needs the partition_utils.values configuration file in the directory from which you run the utility.
Use this utility to start and stop BRM components.
Important:
To connect to the BRM database and configure the processes, this utility requires a configuration file in the directory from which you run the utility. This configuration file must be called pin_ctl.conf, which is different from most BRM configuration file names.
For information on setting up and configuring the processes that this utility controls, see "Configuring the pin_ctl Utility".
For general information on creating configuration files, see "Creating Configuration Files for BRM Utilities".
Specifies the type of action to be executed. See "action Parameter".
For example, to start the CM, use the following command:
pin_ctl start cm
Specifies the process on which the action is performed. See "component Parameter".
Specifies a configuration file to use instead of the default. Use this parameter to run different configurations of the same system.
Gets diagnostic data when starting, stopping, or checking the status of a component.
Displays debugging information.
Enables the utility to ask whether to proceed. This is especially useful when running stop, halt, and clear.
Deletes log entries associated with the component (not the file).
Note:
The log file is not deleted, just the entries.
Clears the component logs and, if the component is not running, starts the component.
If the component is already running, the command clears the log file; the component continues running.
Note:
You are not prompted to clear logs.
Searches for the specified component and runs the kill -9 command.
Stops the component, waits for completion, then restarts the component.
Gets an SNMP value.
To use this parameter, you must add snmpget actions by editing the pin_ctl.conf file. See "Customizing snmpset and snmpget Actions".
Sets an SNMP value:
addServerInstance. Rebalances DM connection load when you restart a failed DM in the DM pool or when you add a new DM to the DM pool.
refreshConnections. Rebalances DM connections when you restart a failed DM in the pool.
You can add snmpset actions by editing the pin_ctl.conf file. See "Customizing snmpset and snmpget Actions".
Starts the component if it is not running.
If you specify all for component, it starts the components specified in the pin_ctl.conf file. For information, see "Customizing the Components Included in "all"".
Returns the status of component as Running or NotRunning.
Stops the component if it is running.
If you specify all for component, it stops the components specified in the pin_ctl.conf file. For information, see "Customizing the Components Included in "all"".
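For example, commands similar to the following check the status of the CM and then stop and restart it (cm is the component name used in the earlier example):
pin_ctl status cm
pin_ctl restart cm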
You can perform an action on any of the following components:
Applies action to the components specified in the pin_ctl.conf file. By default, the components are:
Oracle Data Manager (DM)
Email DM
Connection Manager
CM Master Process
Invoice formatter
You can modify the list of components specified in the pin_ctl.conf file. See "Customizing the Components Included in "all"".
Paymentech answer simulator
Batch Controller
Pipeline Manager
Pipeline Manager with IMDB Cache
Connection Manager
CM Proxy
Connection Manager Master Process
Enterprise Application Interface (EAI) DM
Email DM
Paymentech DM
Account Synchronization DM (Oracle AQ)
Invoice DM
LDAP DM
Oracle DM
Oracle In-Memory Database (IMDB) Cache DM
Vertex tax calculation DM
EAI Java Server
Invoice formatter
System Manager
Node Manager
Real-time pipeline
Use this utility to monitor the following database key performance indicators (KPIs) in Oracle databases:
Age of event and audit tables
Size of audit tables
Invalid or missing procedures, triggers, or indexes
You configure this utility to alert you when one of these components has returned a certain status.
For more information, see "Using the pin_db_alert Utility to Monitor Key Performance Indicators".
Important:
To connect to the BRM database, this utility needs a configuration file in the directory from which you run the utility.Use this utility to delete old bills, items, journals, and expired account subbalances from Oracle In-Memory Database (IMDB) Cache.
This utility compares an object's expiration time to the current time to determine if the object must be deleted.
This utility deletes objects from one Oracle IMDB Cache or logical partition at a time. If the system has multiple logical partitions, you must run this utility for each logical partition. In a high-availability configuration, you must run this utility for each high-availability node. You can create a script to call this utility with relevant connection parameters to connect to the desired Oracle IMDB Cache nodes.
Important:
Before running this utility, unset the ORACLE_HOME environment variable.Note:
This utility requires a pin.conf file in the directory from which you run the utility, with entries for connecting to the CM and IMDB Cache DM, the login name and password to connect to the data store, the batch size for deleting, and log file information. The values for these entries are obtained from the information you provided during installation, or the entries are set with a default value. The pin.conf file for this utility is installed in the BRM_home/apps/pin_subscription directory.
pin_purge [-l username/password@DatabaseAlias] -c {bill|item|subbalance} {-n number_of_days | -d date} [-help]
Specifies how to connect to the database. If you omit this option, the utility uses the information provided in the pin.conf file to establish the connection.
Specifies the object to be deleted.
-c bill deletes bills, and all related items and journals, for bills that have no pending or open items, a due amount of zero, and a status of closed (the bill is finalized).
Deleting bill objects does not also delete the associated bill items. The items must be deleted separately using the -c item parameter.
Note:
The bill objects are deleted from Oracle IMDB Cache only.
-c item deletes billable and special items (such as payments, adjustments, and disputes) in closed status and all journals related to these items.
Note:
The item and journal objects are deleted from Oracle IMDB Cache only.
-c subbalance deletes account subbalances.
Note:
Account subbalances are deleted from Oracle IMDB Cache. The objects are eventually deleted from the BRM database when updates from Oracle IMDB Cache are propagated to the BRM database.
Specifies the number of days before which bills, items, or subbalances are deleted. For example, specify 90 to delete bills, items, or subbalances older than 90 days.
Specifies the date before which bills, items, or subbalances are deleted. Use the format MM/DD/YYYY. For example, specify 03/01/2009 to delete bills, items, or subbalances older than March 1, 2009.
Displays the syntax and parameters for this utility.
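For example, a command similar to the following (the number of days shown is illustrative) deletes eligible bill objects older than 90 days from Oracle IMDB Cache:
pin_purge -c bill -n 90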
Use this utility to purge expired account subbalances from the BRM database.
Caution:
When you delete subbalances from the database, events that impacted those subbalances cannot be rerated. Ensure you no longer need the expired subbalances before deleting them.
For more information, see "About Purging Account Subbalances".
To connect to the BRM database, this utility needs a configuration file in the directory from which you run the utility.
Important:
You must run this utility from the BRM_home/apps/pin_subscription directory. This directory contains the pin.conf file that has the parameters required for this utility.
Specifies the number of days before which subbalances are deleted. For example, specify 60 to delete expired subbalances older than 60 days.
The date before which subbalances are deleted. Use the format MM/DD/YYYY. For example, specify 06/30/2003 to delete expired subbalances older than June 30, 2003.
Use this utility to connect to the BRM database and generate SQL scripts to create and initialize the BRM cache groups in Oracle IMDB Cache. These SQL scripts contain all the required cache group definitions. You can run these SQL scripts against the appropriate IMDB Cache nodes to load the cache groups with the required schema and data. See "Generating the BRM Cache Group Schema".
This utility requires the following versions of the Perl modules:
DBI version 1.605
DBD-Oracle version 1.16
Bit-Vector version 7.1
Note:
This utility has been certified for Perl 5.8.0 and Oracle 11g.
Before running this utility, configure the database connection parameters in one of the following files in BRM_home/bin:
pin_tt_schema_gen.values, which generates scripts for the default cache groups
A custom configuration values file, which generates scripts for custom cache groups
See "Configuring the pin_tt_schema_gen.values File" for more information.
Important:
After you start this utility, do not interrupt it. It might take several minutes to complete an operation, depending on the size of your database.
Note:
When you run this utility, the following warning messages are logged in the pin_schema_gen.log file. These warnings are reported for storable classes that do not have associated tables. You can ignore these warning messages.
'/reservation/active' mentioned in array local_tables_class_def does not have any table.
'/active_session/telco/gprs/master' mentioned in array local_tables_class_def does not have any table.
'/active_session/telco/gprs/subsession' mentioned in array local_tables_class_def does not have any table.
Generates the tt_schema.sql script file, which you can run against the appropriate IMDB Cache node to create the cache groups and transient table schema.
Generates the tt_load.sql script file, which you can run against the appropriate IMDB Cache node to load data from the BRM database.
Generates the tt_drop.sql script file, which you can run against the appropriate IMDB Cache node to drop the cache groups.
Updates the BRM database with unique indexes and not-null constraints.
Runs the -t, -l, -d, and -o parameters.
Displays the syntax and parameters for this utility.
Specifies the name of the configuration values file to be used by the utility. The default is pin_tt_schema_gen.values. You can provide another configuration values file.
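As an illustrative sketch only, assuming the -t parameter corresponds to generating tt_schema.sql as described above, the following command generates the schema script using the default pin_tt_schema_gen.values file:
pin_tt_schema_gen -t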
Use this utility to convert event storable classes in the BRM schema to use virtual columns. After you run this utility, the poid_type columns of event tables in the BRM database are virtual-column enabled.
For more information, see "Generating Virtual Columns on Event Tables".
To connect to the BRM database and to specify logging information, this utility uses the Infranet.properties file in the directory from which you run the utility.
Specify the log level by setting the infranet.log.level property in the Infranet.properties file. The default is 1. Valid values are 1, 2, and 3. Regardless of the log level set, status messages are printed to stdout and to the log file. Errors are logged and printed to stderr.
pin_virtual_gen -gentasks create|pre_export|post_export|verify_types|create_types [-execute] -readtasks create|pre_export|post_export|verify_types|create_types [-execute] -showtasks [minID maxID] -help
Generates tasks and stores them in the database.
Executes tasks after saving to the database.
Reads previously stored tasks from the database.
Executes tasks after reading from the database.
Creates virtual columns and supporting columns in the BRM database.
Use this with -showtasks to display the tasks that will be executed for creating the virtual columns before executing them. The following example shows how to create the tasks for creating virtual columns, display them, and then execute them:
pin_virtual_gen -gentasks create
pin_virtual_gen -showtasks
pin_virtual_gen -readtasks create -execute
Reads corresponding tasks from the database and displays task details.
minID and maxID specify an ID range of tasks to show. The command shows tasks that have an ID greater than minID and less than maxID.
All tasks are displayed when an ID range is not provided.
Removes virtual columns temporarily.
Restores virtual columns that were temporarily removed.
Verifies whether storable class type names exist in the data dictionary of the BRM database schema.
Creates the names of custom storable class types and stores them in the data dictionary of the BRM database schema.
Reads and then executes the previously stored pre_export tasks in the database.
Reads and then executes the previously stored post_export tasks in the database.
Reads and then executes the previously stored verify_types tasks in the database.
Reads and then executes the previously stored create_types tasks in the database.
Displays the syntax and parameters for this utility.
Use this script to archive unneeded shadow objects in audit tables. Use it to:
Generate audit table reports
Create history tables
Move audit tables to history tables
Archive audit tables
Restore archived audit tables
The different versions of shadow objects that are valid for archiving are moved to the history tables so they can be accessed for future reference. In addition, when versions of an object are removed from audit tables, all subclasses of the shadow objects are also removed automatically by the script.
Note:
This script does not delete objects from the database; it only purges the object rows stored in a table.
Important:
To connect to the BRM database, this script needs a configuration file in the directory from which you run the script.
For more information, see "Archiving Audit Data" in BRM Developer's Guide.
For information on audit trails and shadow objects, see "About Tracking Changes to Object Fields" in BRM Developer's Guide.
purge_audit_tables.pl report -t objects -d date -l login/pswd@connection |
create -t objects -l login/pswd@connection |
archivedirect -t objects -d date -c commit_size -l login/pswd@connection |
archiveindirect -t objects -d date -c commit_size -l login/pswd@connection |
renametohist -t objects -l login/pswd@connection |
updfromhist -t objects -d date -c commit_size -l login/pswd@connection |
help
The following purging actions are supported:
Syntax for Generating Audit Table Reports
This generates a file named purge_tables.report, which provides information about the tables for the specified objects, including the number of rows in each table that are eligible for purging, and whether history tables exist for them. You create a report to determine which mode of archiving to use for the specified object: archivedirect or archiveindirect.
purge_audit_tables.pl report -t objects -d date -l login/pswd@connection
Parameters for Generating Audit Table Reports
Specifies a comma-separated list of shadow objects on which to report.
Shadow objects use an au prefix. For example, a change to a field marked for auditing in the /profile object results in the /au_profile shadow object.
Note:
Do not specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting object is reported on when you list the /au_profile object.
Specifies the cutoff date for purging data.
This date determines which versions of the audit object are eligible for purging. If a version of an object is valid at the cutoff date, and there is at least one older version of the same object, the valid object is kept and all older versions are marked for purging and moved to the history tables.
The format is YYYY:MM:DD.
Specifies your standard Pipeline Manager user name and database password.
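For example, a command similar to the following (the object, cutoff date, and login values shown are illustrative) reports on the /au_profile audit tables with a cutoff date of January 1, 2016:
purge_audit_tables.pl report -t /au_profile -d 2016:01:01 -l pin/password@pindb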
Syntax for Creating History Tables
Creates empty history tables for the specified objects and their child objects.
purge_audit_tables.pl create -t objects -l login/pswd@connection
Specifies a comma-separated list of objects for which to create history tables. History tables are prepended by H_ as shown in Table 10-1.
Table 10-1 Audit and History Tables Format
| /service Object Audit Tables | /service Object History Tables |
|---|---|
| AU_SERVICE_T | H_SERVICE_T |
| AU_SERVICE_ALIAS_T | H_SERVICE_ALIAS_T |
| AU_SERVICE_EXTRACTING_T | H_SERVICE_EXTRACTING_T |
Important:
Do not specify the child objects for a table; they are handled automatically by the script.
Specifies your standard Pipeline Manager user name and database password.
Archives audit tables for the specified objects and their child objects by copying the data directly to the history tables and then removing it from the audit tables.
purge_audit_tables.pl archivedirect -t objects -d date -c commit_size -l login/pswd@connection
Specifies a comma-separated list of objects to archive audit tables for.
Shadow objects use an au prefix. For example, a change to a field marked for auditing in the /profile object results in the /au_profile shadow object.
Note:
Do not specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting object is reported on when you list the /au_profile object.
Specifies the cutoff date for purging data.
This date determines which versions of the audit object are eligible for purging. If a version of an object is valid at the cutoff date, and there is at least one older version of the same object, the valid object is kept and all older versions are marked for purging and moved to the history tables.
The format is YYYY:MM:DD.
Specifies the number of rows to save to the database simultaneously.
Specifies your standard Pipeline Manager user name and database password.
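For example, a command similar to the following (the object, cutoff date, commit size, and login values shown are illustrative) archives the /au_profile audit tables directly to their history tables, committing 1000 rows at a time:
purge_audit_tables.pl archivedirect -t /au_profile -d 2016:01:01 -c 1000 -l pin/password@pindb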
Archives audit tables for the specified objects and their child objects by copying the data first to temporary tables, then to the history tables. If successful, the old audit table data is removed.
Important:
Do not delete the temporary tables if the data was not copied successfully to the history tables. Errors might have occurred when the data was moved to the temporary tables from the main tables; therefore, manually transfer the data back to the main audit tables. Then, delete the temporary tables and run the script again.
purge_audit_tables.pl archiveindirect -t objects -d date -c commit_size -l login/pswd@connection
Specifies a comma-separated list of objects to archive audit tables for.
Shadow objects use an au prefix. For example, a change to a field marked for auditing in the /profile object results in the /au_profile shadow object.
Note:
Do not specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting object is reported on when you list the /au_profile object.
Specifies the cutoff date for purging data.
This date determines which versions of the audit object are eligible for purging. If a version of an object is valid at the cutoff date, and there is at least one older version of the same object, the valid object is kept and all older versions are marked for purging and moved to the history tables.
The format is YYYY:MM:DD.
Specifies the number of rows to save to the database simultaneously.
Specifies your standard Pipeline Manager user name and database password.
Renames the specified audit tables to their corresponding history tables and recreates the audit tables without any indexes. This option also creates the script files used to create, rename, rebuild, and drop indexes that were in the audit tables. You can run the following scripts manually when necessary.
- create_index_script.sql
- rename_index_script.sql
- rebuild_index_script.sql
- drop_index_script.sql
purge_audit_tables.pl renametohist -t objects -l login/pswd@connection
Specifies a comma-separated list of objects for which to rename audit tables to history tables and recreate empty audit tables.
Note:
You do not need to specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting child object is reported on if you list the /au_profile object.
Specifies your standard Pipeline Manager user name and database password.
Retrieves the data for a given object and its child objects from the history tables and transfers it back to the audit tables.
purge_audit_tables.pl updfromhist -t objects -d date -c commit_size -l login/pswd@connection
Specifies a comma-separated list of shadow objects to retrieve the data from the history tables and update in the corresponding audit tables.
Shadow objects use an au prefix. For example, a change to a field marked for auditing in the /profile object results in the /au_profile shadow object.
Note:
Do not specify child objects for an object; they are included automatically by the script. For example, the /au_profile/serv_extracting object is reported on when you list the /au_profile object.
Specifies the cutoff date for retrieving data.
The format is YYYY:MM:DD.
Specifies the number of rows to save to the database simultaneously.
Specifies your standard Pipeline Manager user name and database password.