iPlanet Trustbase Transaction Manager 2.2.1 Installation and Configuration Guide



Chapter 10   Backup


This chapter covers:

  • What data needs backup?

  • What data is Read-Only?

  • What Tables are used for frequently written data?

  • Archiving for Raw Data and Init Table

  • What happens when certificates expire?

  • How to do Disaster Recovery?


What data needs backup?

The database tables employed by the iPlanet Trustbase Transaction Manager fall into two groups:

  • Those that are to all intents and purposes read-only (e.g. configuration information)

  • Those that are used to store large volumes of frequently written data (e.g. raw log)

The next sections describe the function and composition of the tables used, so that the database administrator can more accurately devise an archiving strategy.


What data is Read-Only?

The following tables largely hold static, read-only data. They are not expected to grow large, so archiving should not be required and all data can be kept online. However, these tables should be backed up after initial configuration, and after any subsequent certificate or configuration change, for disaster recovery purposes.

  • Tables comprising the certificate store. All these tables are modified only when items are added to the store, which is an infrequent occurrence.

    • ATTRIBUTE_KEY_ATTRS
      Used to store the attributes associated with key pairs.

    • ATTRIBUTE_NAME_ATTRS
      Used to store the attributes associated with certificates.

    • CERT_TABLE
      Used to store X509 certificates.

    • KEY_TABLE
      Used to store cryptographic key pairs.

    • REVOCATION_ATTRS
      Used to store the attributes associated with Certificate Revocation Lists (CRLs).

    • REVOCATION_SERIAL_NUM
      Used to index the serial numbers of certificates revoked in CRLs.

    • REVOCATION_TABLE
      Used to store encoded X509 CRLs.

    • SALT_TABLE
      Holds the salt value for the password-based authentication used in the database. This table never exceeds one entry.

  • Identrus

    • CERT_DATA
      All unique certificates from Identrus messaging.

    • BILL_DATA
      Billing records for processed Identrus messaging.

  • Other tables

    • AUTHORISATION
      Maps role names to service names.

    • CERTIFICATEAUTHENTICATION
      Maps certificate details to roles.

    • CONFIG
      Used to store configuration data for the system. Each system element has one row in this table, so the table only grows when new system elements are added.

    • ROLES
      Stores information about roles.

    • USERNAMEPASSWORDAUTH
      Maps username/password combinations to roles.

  • The initialisation files *.properties and proxi.ini, found in /opt/Trustbase/TTM/<machine-name>.

  • When using an HSM such as nCipher, the module keys. The nCipher "Security World" allows module keys to be backed up; consult the KeySafe User Guide. Backup requires the Administrator Card Set.
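As a sketch of the initial backup described above — assuming an Oracle database, its exp export utility, and placeholder credentials, hostname and backup directory (none of which come from this guide) — a script along these lines could collect the static tables and initialisation files into one backup set. The commands are echoed rather than executed, as a dry run:

```shell
#!/bin/sh
# Sketch only: gather the static (read-only) TTM tables and initialisation
# files into one backup set after a configuration or certificate change.
# "ttmuser/ttmpass", "myhost" and BACKUP_DIR are placeholders.
STATIC_TABLES="ATTRIBUTE_KEY_ATTRS,ATTRIBUTE_NAME_ATTRS,CERT_TABLE,KEY_TABLE,\
REVOCATION_ATTRS,REVOCATION_SERIAL_NUM,REVOCATION_TABLE,SALT_TABLE,\
CERT_DATA,BILL_DATA,AUTHORISATION,CERTIFICATEAUTHENTICATION,CONFIG,\
ROLES,USERNAMEPASSWORDAUTH"
BACKUP_DIR=/var/backup/ttm/$(date +%Y%m%d)

# Echo (rather than run) the Oracle export of the static tables ...
echo "exp ttmuser/ttmpass tables=$STATIC_TABLES file=$BACKUP_DIR/ttm_static.dmp"

# ... and the copy of the initialisation files into the same backup set.
echo "cp /opt/Trustbase/TTM/myhost/*.properties /opt/Trustbase/TTM/myhost/proxi.ini $BACKUP_DIR/"
```

Keeping the *.properties files and proxi.ini in the same dated directory as the database dump means a single backup set captures the entire static configuration.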


What Tables are used for frequently written data?

The following tables are written frequently and can be expected to grow rapidly, so it will be necessary to archive data from them as storage limits are approached. These logs contain important information whose loss must be avoided; a regular back-up process should be in force for the following tables:

  • Frequently Written Data

    • AUDITDATA
      Stores audit log data.

    • ERROR
      Stores error log data.

    • RAW_DATA
      Stores all the raw message data entering and leaving the system. This table is used for non-repudiation purposes, and is referenced by the INIT_TABLE described below. Because of this link, the archiving procedure for both tables should follow that described in Archiving for Raw Data and Init Table, later in this chapter.

      SQL> desc raw_data;
      Name                            Null?    Type
      ------------------------------- -------- ----
      SESSIONID                       NOT NULL NUMBER(38)
      LOGCONNECTIONID                 NOT NULL NUMBER(38)
      RECORDID                        NOT NULL NUMBER(38)
      MSGGRPID                                 VARCHAR2(120)
      MSGID                                    VARCHAR2(120)
      DOCTYPE                         NOT NULL VARCHAR2(120)
      RECORDMARKER                    NOT NULL VARCHAR2(240)
      CONNECTIONID                    NOT NULL VARCHAR2(100)
      PROTOCOLTYPE                    NOT NULL VARCHAR2(10)
      INPUT                           NOT NULL NUMBER(38)
      TIMESTAMP                       NOT NULL NUMBER(38)
      RAWDATA                         NOT NULL LONG
      DIGESTOFRECORD                           RAW(2000)
      SIGNEDDIGESTOFCALCULATION                RAW(2000)

    • INIT_TABLE
      Stores information relating to the integrity of the raw log tables.

      SQL> desc init_table;
      Name                            Null?    Type
      ------------------------------- -------- ----
      SESSIONID                       NOT NULL NUMBER(38)
      TIMESTAMP                       NOT NULL NUMBER(38)
      N_CONNECTIONS                   NOT NULL NUMBER(38)
      SIGDATA                         NOT NULL RAW(2000)
      SERVERCERTISSUERDN                       VARCHAR2(2000)
      SERVERCERTSERIALNUMBER                   VARCHAR2(100)
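Because archiving is driven by how fast these tables grow, a database administrator might periodically count their rows. The hypothetical helper below (not part of the product) merely prints the row-count SQL, which could then be fed to sqlplus or any other client:

```shell
#!/bin/sh
# Sketch only: print row-count queries for the frequently written tables so
# a DBA can run them periodically and track growth over time. The table
# names come from this chapter; everything else is generic.
GROWTH_SQL=$(for TABLE in AUDITDATA ERROR RAW_DATA INIT_TABLE; do
  echo "SELECT '$TABLE' AS table_name, COUNT(*) AS row_count FROM $TABLE;"
done)
echo "$GROWTH_SQL"
```

Recording these counts over time shows the growth rate, which is what determines how often the archiving procedure in the next section needs to run.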


Archiving for Raw Data and Init Table

For Identrus, the init_table and raw_data tables are interlinked. A raw_data record holds the actual message data along with a digitally signed timestamp. An init_table record points to the beginning of a set of raw_data records. There are two circumstances under which archiving may take place:

  1. The TTM instance is not running. In this case, the data in both the raw_data and init_table tables may be archived using the prevailing archive strategy. Note that both tables should be archived together, to ensure that verification of the archived data operates correctly. When the system is restarted, a new initialisation record is created in the init_table, and new raw_data entries are cryptographically chained from it.

  2. The TTM instance is running. In this case, the raw log verification utility should be run in order to provide a list of the log end-points.

The script can be run as follows:

$ cd <TrustbaseInstallDir>/TTM/Scripts
$ ./runcheckintegrity

The following output appears:

Trustbase Raw Log Verification Utility
Checking all sessions
Checked chain 0 from session 4,160 with 82 records, ending at 26/02/01 19:41. Endpoint in chain is 81
Checked chain 1 from session 4,160 with 81 records, ending at 26/02/01 19:41. Endpoint in chain is 80
Checked chain 2 from session 4,160 with 81 records, ending at 26/02/01 19:41. Endpoint in chain is 80
Checked chain 3 from session 4,160 with 75 records, ending at 26/02/01 19:41. Endpoint in chain is 74
Checked chain 4 from session 4,160 with 87 records, ending at 26/02/01 19:41. Endpoint in chain is 86
Checked chain 5 from session 4,160 with 85 records, ending at 26/02/01 19:41. Endpoint in chain is 84
Checked chain 6 from session 4,160 with 81 records, ending at 26/02/01 19:41. Endpoint in chain is 80
Checked chain 7 from session 4,160 with 77 records, ending at 26/02/01 19:41. Endpoint in chain is 76
Checked chain 8 from session 4,160 with 82 records, ending at 26/02/01 19:41. Endpoint in chain is 81
Checked chain 9 from session 4,160 with 79 records, ending at 26/02/01 19:41. Endpoint in chain is 78
Checked session 4,160 with total 810 records over 10 connections. Started at 26/02/01 19:39 Ended at 26/02/01 19:41



Note Before running the script, check the entries [LogManager/MessageLoggerStore] and [LogManager/MessageLogger] in tbase.properties. The database connection string, user, password, and driver settings are read from this file, as are the signing certificate for the raw log records and the signature algorithm.



The following is an example <install_directory>/TTM/Scripts/runcheckintegrity script:

#!/bin/sh
# Pick up the Trustbase classpath, then run the verification utility,
# forwarding any command-line switches (such as -session) to it.
. ./setcp
CLASSPATH=$TBASE_INSTALL:$CLASSPATH
cd $TBASE_INSTALL
exec java uk.co.jcp.tbaseimpl.log.raw.tools.VerUtil "$@"

Each session holds a collection of chains, each with a start point and an end point. All data up to these end points may now be archived. Once an archive has taken place, future runs of the raw log verification utility will report data written subsequently as "orphan" data, but will still verify and report on it. Output will be similar to the following:

$ ./runcheckintegrity

The following output appears:

Trustbase Raw Log Verification Utility
Checking all sessions
No init records were found
Orphan chain from session 4,160, Connection 0, startpoint 10, endpoint 81 is valid
Orphan chain from session 4,160, Connection 1, startpoint 10, endpoint 80 is valid
Orphan chain from session 4,160, Connection 2, startpoint 10, endpoint 80 is valid
Orphan chain from session 4,160, Connection 3, startpoint 10, endpoint 74 is valid
Orphan chain from session 4,160, Connection 4, startpoint 10, endpoint 86 is valid
Orphan chain from session 4,160, Connection 5, startpoint 10, endpoint 84 is valid
Orphan chain from session 4,160, Connection 6, startpoint 10, endpoint 80 is valid
Orphan chain from session 4,160, Connection 7, startpoint 10, endpoint 76 is valid
Orphan chain from session 4,160, Connection 8, startpoint 10, endpoint 81 is valid
Orphan chain from session 4,160, Connection 9, startpoint 10, endpoint 76 is valid
Checked ORPHAN session 4,160 with total 710 records over 10 connections. Started at 26/02/01 19:40 Ended at 26/02/01 19:41

RecordIds are declared valid when the signature on the record matches the certificates held in the local database. If the recorded end points tally with the start points of the orphan data, the integrity of the log may be inferred: provided each orphan chain is stated as valid and its startpoint does not exceed the endpoint recorded by the previous verification, integrity can be assumed. If this is not the case, then either the archiving procedure has been mismanaged or somebody has deleted or tampered with some records. Under such circumstances you will need to go back and check the integrity of the archives. The second verification illustrated above shows only 710 records compared with 810 in the first, meaning that 100 records were archived between the two verification checks: all records within chains whose startpoints were less than 10.
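The tally described above reduces to a simple numeric check per connection. In this hypothetical sketch (the numbers are taken from the sample output earlier in this chapter; the check_connection helper is illustrative, not a product tool), a connection passes when the orphan chain's startpoint does not exceed the endpoint recorded by the previous verification, i.e. nothing has vanished beyond the deliberately archived records:

```shell
#!/bin/sh
# Sketch only: per-connection integrity tally. "prev_endpoint" is the
# endpoint reported before archiving; "orphan_start" is the startpoint of
# the orphan chain reported afterwards. Values mirror connection 0 in the
# sample output above.
check_connection() {
  prev_endpoint=$1
  orphan_start=$2
  if [ "$orphan_start" -le "$prev_endpoint" ]; then
    echo "OK: records $orphan_start..$prev_endpoint remain verifiable"
  else
    echo "SUSPECT: records between $prev_endpoint and $orphan_start unaccounted for"
  fi
}

check_connection 81 10    # archived records 0..9; chain 10..81 still valid
```

A SUSPECT result for any connection is the cue, per the text above, to go back and verify the archives themselves.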

By default the tool checks the integrity of all records in the raw log. An individual session can be checked with the command-line switch:

-session <Session Id>

For example:

./runcheckintegrity -session 12344


What happens when certificates expire?



In the event of any certificate expiring, a complete new set of Transaction Co-ordinator certificates must be generated. Before doing this, it is a good idea to archive the contents of the logs; this ensures the archived logs are signed by only one set of certificates. This can be done as part of the general backup.



Note Please refer to the section on Certificate Management. The procedure for obtaining new certificates is identical to the procedure that you used to obtain them the first time.




How to do Disaster Recovery?



In the event of hardware or disk failure it will be necessary to perform a disaster recovery. By ensuring the following contents are intact, through restoration from backup, an iPlanet Trustbase Transaction Manager can continue its operation:

  • nCipher Users only. nCipher "Security World" needs to be restored according to the KeySafe User Guide using the Administrator Card Set and the nCipher backup data.

  • Reinstall iPlanet Trustbase Transaction Manager, Application Server and database.

  • Reinstate the *.properties files and proxi.ini found in /opt/Trustbase/TTM/<machine-name>.

  • Reinstate the database from the backup of the tables created under the user specified in the SQL script in your Installation Guide. If necessary, consult your Database Administrator.
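Assuming the database was exported with the Oracle exp utility as part of the regular backup (the credentials, hostname, paths and dump file name below are placeholders, not values from this guide), the restore steps might be sketched as follows, again echoing the commands rather than executing them:

```shell
#!/bin/sh
# Sketch only: mirror of the recovery checklist above. Assumes an earlier
# exp-style dump; "ttmuser/ttmpass", "myhost" and the paths are placeholders.
BACKUP_DIR=/var/backup/ttm/20010418

# Re-import the backed-up tables into the freshly installed database ...
RESTORE_DB="imp ttmuser/ttmpass file=$BACKUP_DIR/ttm_backup.dmp full=y"

# ... and put the initialisation files back where TTM expects them.
RESTORE_FILES="cp $BACKUP_DIR/*.properties $BACKUP_DIR/proxi.ini /opt/Trustbase/TTM/myhost/"

echo "$RESTORE_DB"
echo "$RESTORE_FILES"
```

The import must run as the same database user that the installation SQL script created, so that the restored tables land in the schema TTM expects.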



    Note Refer to the installation worksheet for information about the setup of iPlanet Trustbase Transaction Manager's application server and database.




Copyright © 2001 Sun Microsystems, Inc. Some preexisting portions Copyright © 2001 Netscape Communications Corp. All rights reserved.

Last Updated April 18, 2001