5 Managing, Backing Up, and Restoring Services Gatekeeper

This chapter describes managing Oracle Communications Services Gatekeeper to prevent server failure, and backing up and restoring critical application files and data.

About Managing, Backing Up, and Restoring Services Gatekeeper

Services Gatekeeper uses Oracle WebLogic Server as its application server and Oracle Enterprise Database, Oracle MySQL Database, or JavaDB (Derby) as its storage repository. Managing, backing up, and restoring Services Gatekeeper in your environment follows the general guidelines and best practices for ensuring that Oracle WebLogic Server and the databases can recover from server failure or other disasters.

Configuring Services Gatekeeper to Avoid Server Failure

See "Avoiding and Recovering From Server Failure" in Fusion Middleware Managing Server Startup and Shutdown for Oracle WebLogic Server for information about configuring the WebLogic Server component of your Services Gatekeeper installation to avoid failures and to recover from a failure if one occurs.

Backing Up an Oracle Enterprise Database

If you store your Services Gatekeeper data in an Oracle Enterprise Database, see the section on backup and recovery in the Oracle Database Documentation Library for more information about backing up and restoring Services Gatekeeper data.

Backing Up a MySQL Database

If you store your Services Gatekeeper data in a MySQL database, see "MySQL Backup and Recovery" here:

http://dev.mysql.com/doc/refman/5.7/en/backup-and-recovery.html

for more information about backing up and restoring Services Gatekeeper data.

Backing Up a JavaDB Database

JavaDB offers two methods of backing up a database:

  • Offline backup - Stop the database and use operating system commands to simply copy the database directory. The database is unavailable during this process.

  • Online backup - Use the JavaDB backup commands while the database is running. The database is read-only during this process.

For details, see "Backing up and restoring databases" on the Apache web site:

https://db.apache.org/derby/docs/10.1/adminguide/cadminhubbkup98797.html
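
For example, you can trigger an online backup from SQL by calling Derby's built-in SYSCS_UTIL.SYSCS_BACKUP_DATABASE system procedure. The backup directory shown here is hypothetical; substitute a path appropriate for your environment:

-- Online backup of the connected JavaDB (Derby) database. Derby copies
-- the database to the specified directory; the database remains available
-- (read-only) while the backup runs.
CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE('/var/backups/gatekeeper_db');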

Cleaning Services Gatekeeper Database Tables

This section lists the database tables that you clean periodically and provides an example script.

About Cleaning Database Tables

Table 5-1 lists the database tables that you must periodically clean to prevent them from growing too large and adversely affecting performance. See your database documentation for instructions on deleting rows from these tables.

Table 5-1 Database Table Cleaning Intervals

Type of Database Table   Table                           Recommended Cleaning Interval
----------------------   -----------------------------   ---------------------------------------------------------------------
Services Gatekeeper      SLEE_ALARM                      Every two months
Services Gatekeeper      SLEE_CHARGING                   See "About Cleaning the SLEE_CHARGING Table" for details.
Services Gatekeeper      SLEE_STATISTICS_DATA            Every month
EDR Analytics            TRANSACTION_HISTORY             Daily if used. See "About Cleaning EDR Analytics Tables" for details.
EDR Analytics            TRANSACTION_PARAMETER_HISTORY   Daily if used. See "About Cleaning EDR Analytics Tables" for details.
EDR Analytics            TRANSACTION_INVALID_HISTORY     Daily if used. See "About Cleaning EDR Analytics Tables" for details.
EDR Analytics            TRANS_PARA_INVALID_HISTORY      Daily if used. See "About Cleaning EDR Analytics Tables" for details.
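
As an illustration only, a periodic cleaning job for the SLEE_ALARM table might look like the following Oracle SQL sketch. It assumes, as in Example 5-1, that the table's TIMESTAMP column holds milliseconds since January 1, 1970 (UTC); verify the actual column names and retention requirements for your installation first:

-- Sketch only: purge SLEE_ALARM rows older than roughly two months (60 days),
-- assuming TIMESTAMP stores milliseconds since the epoch (UTC).
delete from slee_alarm
where timestamp < ( (cast(sys_extract_utc(systimestamp) as date) - date '1970-01-01')
                    * 86400 * 1000 ) - (60 * 86400 * 1000);
commit;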


About Cleaning EDR Analytics Tables

Clean the EDR analytics tables listed in Table 5-1 if you use the EDR analytics feature and have EdrAnalyticMBean.StoreHistory set to true. The cleaning interval depends on the amount of traffic stored in your database. However, these tables tend to grow quickly and usually require daily maintenance.

About Cleaning the SLEE_CHARGING Table

The SLEE_CHARGING table cleaning interval depends on the amount of charging traffic, so it is difficult to specify a recommended interval. There is usually one row in this table per plug-in transaction. A row can contain up to 300 bytes of data.

Operators export charging data from the SLEE_CHARGING table to a file either by using a scheduled script or by integrating with a billing system such as Oracle Communications Billing and Revenue Management.

Charging data is contained in CDRs, which are a type of event data record (EDR). CDRs are collected by EDR listeners. You can also listen for alarms by setting up an SNMP/EDR listener. If you do so, disable CDR and alarm storage in the database by setting the StoreAlarms or StoreCDRs attributes to false for the EDR Service.

Note:

Oracle recommends storing alarms for some period for troubleshooting purposes.

See the StoreCDRs and StoreAlarms fields of the EdrServiceMBean in the "All Classes" section of Services Gatekeeper OAM Java API Reference for details.

Creating a Script to Clean SLEE_CHARGING

If your Services Gatekeeper implementation has high traffic, such that the SLEE_CHARGING table is updated with millions of records per day, Oracle recommends that you create a script to clean it and run the script every two to three hours. Ideally, the script copies all data that arrived after the last cleaning to another table and then deletes everything currently in the SLEE_CHARGING table. Running the script does not require a server restart.

To clean the SLEE_CHARGING table, create a shell script or a Perl/SQL combination that performs the following steps:

  1. Set up a repository for the data in the SLEE_CHARGING table. To do so, create a table called SLEE_CHARGING_BACKUP.

  2. Create a SQL query that:

    a. Selects all data in the SLEE_CHARGING table that is more than 15 minutes old.

      For example, if you run the script every 3 hours, the script captures 2 hours and 45 minutes worth of data.

    b. Inserts the selected rows into the SLEE_CHARGING_BACKUP table.

    c. Deletes the same rows from the SLEE_CHARGING table.

Steps a and b can easily be combined into one SQL query, for example:

insert into slee_charging_backup ( ...all columns... )
select ...all columns...
from slee_charging
where stored_ts < ( ...cutoff timestamp in milliseconds... )
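
The following PL/SQL sketch puts these steps together. It is illustrative only: it assumes that STORED_TS holds milliseconds since January 1, 1970 (UTC), and that SLEE_CHARGING_BACKUP was created with the same column layout as SLEE_CHARGING (for example, with create table slee_charging_backup as select * from slee_charging where 1 = 0):

declare
  cutoff_ms number;
begin
  -- Cutoff of 15 minutes ago, expressed as milliseconds since 1970-01-01 (UTC).
  cutoff_ms := ( (cast(sys_extract_utc(systimestamp) as date) - date '1970-01-01')
                 * 86400 * 1000 ) - (15 * 60 * 1000);

  -- Archive rows older than the cutoff...
  insert into slee_charging_backup
    select * from slee_charging where stored_ts < cutoff_ms;

  -- ...then delete the same rows from the live table.
  delete from slee_charging where stored_ts < cutoff_ms;

  commit;
end;
/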

Archive the data from the SLEE_STATISTICS_DATA table into a new table called SLEE_STATISTICS_DATA_TMP1, as shown in Example 5-1:

Example 5-1 Archiving Data from the SLEE_STATISTICS_DATA Table

-- Create an index-organized archive table and populate it in one step.
-- The final column (datetime) converts the epoch-millisecond TIMESTAMP
-- value to an Oracle DATE.
create table slee_statistics_data_tmp1 (
  slee_name,
  timestamp,
  type_id,
  transactions,
  source_entity,
  source_app_id,
  datetime,
  primary key ( timestamp, type_id )
)
organization index
nocompress
as select slee_name, timestamp, type_id, transactions, source_entity, source_app_id,
   to_date(to_char(timestamp '1970-01-01 00:00:00.00' + numtodsinterval(timestamp/1000, 'SECOND'),
   'DD-MON-YYYY HH24:MI:SS'), 'DD-MON-YYYY HH24:MI:SS') from slee_statistics_data;
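
After verifying the archived copy, you can remove the archived rows from the live table. A minimal sketch, assuming that the (TIMESTAMP, TYPE_ID) pair identifies a row, as in the primary key above:

-- Delete rows from SLEE_STATISTICS_DATA that were archived above.
delete from slee_statistics_data s
where exists (
  select 1
  from slee_statistics_data_tmp1 t
  where t.timestamp = s.timestamp
    and t.type_id = s.type_id
);
commit;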