24 Adding New Client Applications

This section describes how to add new client applications and how those applications function within the Oracle Communications Billing and Revenue Management (BRM) system. It includes general information about writing client applications in all supported languages and specific information about writing applications in C. The section also contains links to sample applications and their corresponding makefiles.

To create a custom application that communicates with BRM, you need to understand the BRM components and concepts described in this section.

About Adding New Client Applications

Client applications can be virtually any type of program, including GUI-based tools, Web-based tools, network-enabled applications, batch jobs, cron jobs, and so on. You can write custom application programs to create, manipulate, delete, and display custom storable objects, which in turn implement a business policy.

The base BRM system comes with a set of client applications that manipulate storable objects within the database. You may, however, require custom applications. The most common custom application captures external events of some type. These events are then submitted to BRM, which stores the event details and assesses charges for the events if necessary.

For example, a Web hosting application needs to track the number of bytes downloaded and disk space used. This requires an application to analyze disk-space usage and send it to real-time rating at the end of each month. This also requires a new event type and new policy code that picks the rate category or quantity for the event.

Web servers do not provide an Application Programming Interface (API) to display the web usage automatically, so you might want to write an application that looks at the Web log files and generates events in BRM. Because of the potential number of records in the Web logs, you can aggregate the data externally, and then send the summary to BRM as events daily or monthly.

BRM includes a set of client libraries that make it easier to create custom applications. Existing BRM applications use these client libraries. These libraries are supported on Solaris, AIX, and Linux. For more information about the libraries, see "About the BRM Client Access Libraries".

Ways to Add Custom Applications

You can use any of the options described in the following sections when you add custom applications to BRM.

Using Existing System Opcodes

You use the base opcodes to create, read, write, delete, and search storable objects. The base opcodes also provide programmatic access to transaction commands. More complex opcodes are implemented by the Facilities Modules (FMs). You use the policy opcodes to implement business decisions. These higher-level opcodes are translated by the FMs to base opcodes and sent to Data Managers (DMs) for processing.

For more information on the different categories of opcodes and how they are used, see "Understanding the PCM API and the PIN Library".

In a client application, you use opcodes to record events such as purchasing a product, changing a credit limit, creating an account or service, changing customer information, charging for use of a resource, verifying a password, and looking up names and addresses.

In your application, you can call any of the BRM opcodes without changing the BRM system. You need to determine when to use a particular system opcode, what constitutes an event, and then call the appropriate opcode in your application.

Your application must include the header files corresponding to the opcodes you use. The FM opcode header files already include the header file for the base opcodes, so you do not need to include it separately unless your application uses only base opcodes.

For a list of opcodes and their corresponding header files, see "Header Files".
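For illustration, the following minimal sketch (not one of the BRM sample applications) shows how an application might call the base opcode PCM_OP_READ_OBJ to read an account object. The POID values and the surrounding function are placeholders; the calls shown are standard PCM API calls, but check the exact signatures in pcm.h for your release.

#include "pcm.h"        /* base opcodes and PCM macros */
#include "pin_errs.h"   /* error codes */

void read_account(pcm_context_t *ctxp, pin_errbuf_t *ebufp)
{
    pin_flist_t *in_flistp = NULL;
    pin_flist_t *ret_flistp = NULL;
    poid_t      *pdp = NULL;

    PIN_ERRBUF_CLEAR(ebufp);

    /* Build the input flist: the POID of the object to read.
       Database 0.0.0.1 and id 12345 are placeholder values. */
    in_flistp = PIN_FLIST_CREATE(ebufp);
    pdp = PIN_POID_CREATE(1, "/account", 12345, ebufp);
    PIN_FLIST_FLD_PUT(in_flistp, PIN_FLD_POID, (void *)pdp, ebufp);

    /* Call the base opcode. */
    PCM_OP(ctxp, PCM_OP_READ_OBJ, 0, in_flistp, &ret_flistp, ebufp);

    if (PIN_ERRBUF_IS_ERR(ebufp)) {
        /* handle the error */
    }

    /* Use ret_flistp, then destroy both flists. */
    PIN_FLIST_DESTROY_EX(&in_flistp, NULL);
    PIN_FLIST_DESTROY_EX(&ret_flistp, NULL);
}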

Using Custom Opcodes

If the system opcodes do not provide the required functionality, you can create custom opcodes. However, you must implement your new opcodes in a custom FM and configure the custom FM with each Connection Manager (CM).

For the application to communicate with the custom FM, the application and the FM have to agree on the opcodes to pass and the contents of their input and output flists, error buffers, and flags. The custom FM translates the new opcodes into base opcodes. Then, it can send the opcodes directly to a DM, or it can call other opcodes implemented by the standard FMs.

For information on how to create a custom FM and link it to a CM, see "Writing a Custom Facilities Module".

You call custom opcodes in the same way as system opcodes, using the same PCM_OP() API. Therefore, you must supply all the parameters, just as you would for a system opcode.

Create a header file in which you define your custom opcodes. Include the header file in both the application and the custom FM source files. Compile the application and the custom FM with the new header file.

For examples of including opcodes in the header file, see the header files (.h) in the BRM_Home/include directory, where BRM_Home is the directory in which you installed BRM components.

Tip:

Defining your custom opcodes in a separate .h file helps you avoid updating FM header files when you upgrade to a new release of BRM.
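For example, a custom opcode header might look like the following sketch. The names and numbers are hypothetical; the number range available for custom opcodes depends on your BRM release, so check the opcode header files in BRM_Home/include before choosing values.

/* my_fm_ops.h - custom opcode definitions (hypothetical example).
   Include this file in both the client application and the custom FM. */
#ifndef MY_FM_OPS_H
#define MY_FM_OPS_H

/* Base number for this custom FM's opcodes; choose a value in the
   range reserved for custom opcodes in your BRM release. */
#define MY_FM_OP_BASE                10000

#define PCM_OP_MY_FM_GET_WEB_USAGE   (MY_FM_OP_BASE + 1)
#define PCM_OP_MY_FM_SET_WEB_USAGE   (MY_FM_OP_BASE + 2)

#endif /* MY_FM_OPS_H */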

Using a Custom Data Manager (DM)

Your application can communicate with a custom DM if the custom DM and the application agree on the semantics of the input and return flists.

You attach fields to the input flist that have meaning to the custom DM. These new fields act as opcodes to the custom DM. The return flist is the mirror image of the input flist in the sense that the application and custom storage manager agree on the meaning of the field-value pairs returned by the storage manager. The storage manager is responsible for setting error buffer values.

For information on creating and implementing a custom DM, see "Writing a Custom Data Manager".

You create new fields with the PIN_MAKE_FLD macro. For field value ranges and examples of PIN_MAKE_FLD, see pcm.h in the BRM_Home/include directory.

Tip:

To avoid updating the Portal .h files every time you upgrade BRM, define your custom fields in a separate header file and include that file in your application.
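For example, a custom field header might look like the following sketch. The field names and numbers are hypothetical; the valid number range for custom fields is documented in pcm.h.

/* my_fields.h - custom field definitions (hypothetical example). */
#ifndef MY_FIELDS_H
#define MY_FIELDS_H

#include "pcm.h"   /* PIN_MAKE_FLD and the PIN_FLDT_* type constants */

/* PIN_MAKE_FLD(type, number): the number must fall in the range
   reserved for custom fields (see pcm.h in BRM_Home/include). */
#define PIN_FLD_BYTES_DOWNLOADED   PIN_MAKE_FLD(PIN_FLDT_INT, 10001)
#define PIN_FLD_DISK_SPACE_USED    PIN_MAKE_FLD(PIN_FLDT_DECIMAL, 10002)

#endif /* MY_FIELDS_H */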

Using Transactions

Transactions enable an application to perform operations on multiple objects as if they were executed simultaneously. This guarantees the integrity of the data when related changes need to be made to a set of storable objects.

In your application, you can call PCM_OP_TRANS_OPEN before calling an opcode and PCM_OP_TRANS_ABORT or PCM_OP_TRANS_COMMIT after the opcode calls.

By default, all opcodes except the password opcodes are automatically wrapped in a transaction if no transaction is open when they are called.

Only one transaction at a time can be opened on a PCM context. A transaction is opened on a specific database schema specified by the POID database number in the input flist. All operations performed in an open transaction must manipulate data within the same database.

Any changes made within an open transaction can be aborted at any time and all changes are completely erased. These actions abort an open transaction:

  • You use the PCM_OP_TRANS_ABORT opcode.

  • The application exits or closes the PCM context.

  • A system error occurs and connectivity is lost between the application and the database.

The system associates the transaction with the context argument used by most of the PCM library macros. If the context pointer you pass in has an outstanding transaction, that transaction is used automatically.

Keeping a transaction open for a long time can affect performance because the system maintains a frozen view of the data while changes are made by other applications. It is recommended that you do not leave transactions open while long-latency tasks, such as prompting a user for input, are performed.

In general, any PCM opcode can be executed within an open transaction, and its effect follows the transactional rules. However, some Facilities Module opcodes that interface to legacy systems or external systems do not follow the transactional rules (that is, they can't be undone). Opcodes with this limitation must check for an open transaction and return an error if an application attempts to run the opcode within the open transaction.
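The following minimal sketch (not one of the BRM sample applications) shows the typical open, commit, and abort pattern. It assumes that the transaction flags described below are passed in the flags argument of PCM_OP() and that the input flist carries a POID whose only significant part is the database number.

#include "pcm.h"
#include "pin_errs.h"

void do_work_in_transaction(pcm_context_t *ctxp, pin_errbuf_t *ebufp)
{
    pin_flist_t *t_flistp = NULL;   /* input flist for the transaction opcodes */
    pin_flist_t *r_flistp = NULL;   /* return flist */
    poid_t      *db_pdp = NULL;

    PIN_ERRBUF_CLEAR(ebufp);

    /* The POID identifies the schema; only the database number
       (0.0.0.1 here) matters. */
    t_flistp = PIN_FLIST_CREATE(ebufp);
    db_pdp = PIN_POID_CREATE(1, "/account", -1, ebufp);
    PIN_FLIST_FLD_PUT(t_flistp, PIN_FLD_POID, (void *)db_pdp, ebufp);

    /* Open a read-write transaction. */
    PCM_OP(ctxp, PCM_OP_TRANS_OPEN, PCM_TRANS_OPEN_READWRITE,
           t_flistp, &r_flistp, ebufp);
    PIN_FLIST_DESTROY_EX(&r_flistp, NULL);

    /* ... call opcodes that read and modify objects in this schema ... */

    if (PIN_ERRBUF_IS_ERR(ebufp)) {
        /* Discard all changes made within the transaction. */
        PCM_OP(ctxp, PCM_OP_TRANS_ABORT, 0, t_flistp, &r_flistp, ebufp);
    } else {
        /* Make the changes permanent and visible to other applications. */
        PCM_OP(ctxp, PCM_OP_TRANS_COMMIT, 0, t_flistp, &r_flistp, ebufp);
    }

    PIN_FLIST_DESTROY_EX(&r_flistp, NULL);
    PIN_FLIST_DESTROY_EX(&t_flistp, NULL);
}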

Types of Transactions

When you use the PCM_OP_TRANS_OPEN opcode to open a transaction, you can use the following flags to open different types of transactions:

Note:

For J2EE-compliant applications, use JCA Resource Adapter to open extended architecture (XA) transactions through the XAResource interface. For more information, see "About JCA Resource Adapter Transaction Management" in BRM JCA Resource Adapter.

Read-Only Transactions

Use the PCM_TRANS_OPEN_READONLY flag to open a read-only transaction.

Use this type if operations will not change any data in the transaction.

From the application's point of view, a read-only transaction freezes the data in the database. The application does not see any changes to data made by other applications while the transaction is open. This allows data to be examined in a series of operations without being changed in mid-process.

Read-only transactions are more efficient and should be used when possible. Any number of read-only transactions can be open against a database at once.

Read-Write Transactions

Use the PCM_TRANS_OPEN_READWRITE flag to open a read-write transaction.

A read-write transaction freezes the data in the database from the application's point of view, and allows changes to be made to the data set. These changes are not seen by any other application until the transaction is committed. This allows the effects of a series of operations performed on storable objects to occur simultaneously when the transaction is committed.

Any number of read-write transactions can be open against a database at once.

Transaction with a Locked Storable Object

Use the PCM_TRANS_OPEN_LOCK_OBJ flag to open a transaction and lock a storable object as part of the transaction.

A lock-object transaction is useful when two applications must synchronize the operations they perform on the same storable object. Lock-object transactions are the same as read-write transactions, with the addition of the storable object lock. If you use a lock-object transaction, you must specify the PCM_TRANS_OPEN_READWRITE flag.

If an application tries to open a lock-object transaction on a storable object that is already locked by another application, it will be held off until the application that currently holds the object finishes its transaction and unlocks the storable object.

Transaction with a Locked Default Balance Group

Use the PCM_TRANS_OPEN_LOCK_DEFAULT flag to open a transaction and lock the default balance group object only as part of the transaction.

Most opcode transactions lock the account object, if one is used, at the beginning of the transaction. This provides reliable data consistency, but in systems that use account hierarchies it can also cause a great deal of serialization, which decreases system throughput. You can use the PCM_TRANS_OPEN_LOCK_DEFAULT flag to open a transaction that locks only the default balance group for the account instead of all the account objects in the hierarchy. See "Locking Specific Objects".

If you use a lock default balance group transaction, you must specify the PCM_TRANS_OPEN_READWRITE flag and not specify the PCM_TRANS_OPEN_LOCK_OBJ flag.

If an application tries to open a transaction on a balance group that is already locked by another application, it will be held off until the application that currently holds the object finishes its transaction and unlocks the storable object.
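Continuing the earlier transaction sketch, the flag combinations for the two lock variants would look like this (again assuming the flags are passed in the flags argument of PCM_OP()):

/* Lock a specific storable object for the duration of the transaction. */
PCM_OP(ctxp, PCM_OP_TRANS_OPEN,
       PCM_TRANS_OPEN_READWRITE | PCM_TRANS_OPEN_LOCK_OBJ,
       t_flistp, &r_flistp, ebufp);

/* Lock only the account's default balance group; do not combine
   this flag with PCM_TRANS_OPEN_LOCK_OBJ. */
PCM_OP(ctxp, PCM_OP_TRANS_OPEN,
       PCM_TRANS_OPEN_READWRITE | PCM_TRANS_OPEN_LOCK_DEFAULT,
       t_flistp, &r_flistp, ebufp);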

About Committing Transactions

Changes made within an open transaction are not permanent or visible to other applications until the transaction has been successfully committed.

Committing a transaction has these effects:

  • The transaction is closed and all data changes made within the open transaction take effect in the data set. The changes become visible to all other applications (subject to their open transactions).

  • The application's view of the data set is no longer frozen in time, so changes made by other applications are now visible to the application.

  • If a storable object was locked, it is unlocked.

  • The application is free to open another transaction. Subsequent operations on the PCM context are unrelated to the closed transaction.

Note:

For J2EE-compliant applications, use JCA Resource Adapter to commit XA transactions through the XAResource interface. The adapter supports both single-phase and two-phase commits. For more information, see "About JCA Resource Adapter Transaction Management" in BRM JCA Resource Adapter.

About Cancelling Transactions

Cancelling a transaction has the following effects:

  • All data changes made within the open transaction are discarded, so no data is changed by operations related to the transaction.

  • If a storable object was locked, it is unlocked.

  • The transaction is closed, and subsequent operations on the PCM context are unrelated to the closed transaction. The application is free to open another transaction.

  • The application's view of the data set is no longer frozen in time, so changes made by other applications are visible to the application.

Note:

For J2EE-compliant applications, use JCA Resource Adapter to roll back XA transactions through the XAResource interface. For more information, see "About JCA Resource Adapter Transaction Management" in BRM JCA Resource Adapter.

About the Transaction Base Opcodes

Use the following opcodes to manage transactions:

  • To open transactions, use PCM_OP_TRANS_OPEN.

  • To commit transactions, use PCM_OP_TRANS_COMMIT.

  • To cancel transactions, use PCM_OP_TRANS_ABORT.

Customizing How to Open Transactions

To customize how to open transactions, use PCM_OP_TRANS_POL_OPEN.

This opcode receives the same input flist that PCM_OP_TRANS_OPEN does. Its return flist becomes the transaction ID flist; it can contain whatever fields you want to put on it. This flist is then also the input to PCM_OP_TRANS_POL_COMMIT and PCM_OP_TRANS_POL_ABORT. The return flists from those opcodes are ignored.

Customizing the Verification Process for Committing a Transaction Opcode

To customize how to verify the readiness of an external system to commit a transaction opcode, use PCM_OP_TRANS_POL_PREP_COMMIT.

This opcode provides BRM with preparatory notice of a pending commit process for transaction policies working with an external system. This is its overall process:

  1. Open a transaction in each system.

  2. Do the work authorized by the transaction.

  3. Verify that the external system will be able to commit the transaction.

  4. Commit the transaction in BRM.

  5. Commit the transaction in the external system.

PCM_OP_TRANS_POL_PREP_COMMIT verifies that the external system will be able to commit the transaction. If PCM_OP_TRANS_POL_PREP_COMMIT succeeds, the CM calls PCM_OP_TRANS_COMMIT, and upon successful completion of that opcode it calls PCM_OP_TRANS_POL_COMMIT.

If PCM_OP_TRANS_POL_PREP_COMMIT fails, the CM automatically aborts the transaction using PCM_OP_TRANS_ABORT and PCM_OP_TRANS_POL_ABORT.

Customizing How to Commit a Transaction

To customize how to commit a transaction, use PCM_OP_TRANS_POL_COMMIT.

The return flist from PCM_OP_TRANS_POL_OPEN becomes the transaction ID flist; it can contain whatever fields you want to put on it. This flist is then also the input to PCM_OP_TRANS_POL_COMMIT. The return flist from this opcode is ignored.

Customizing How to Cancel Transactions

To customize how to cancel transactions, use PCM_OP_TRANS_POL_ABORT.

The return flist from PCM_OP_TRANS_POL_OPEN becomes the transaction ID flist; it can contain whatever fields you want to put on it. This flist is then also the input to PCM_OP_TRANS_POL_ABORT. The return flist from this opcode is ignored.

Implementing Timeout for Requests in Your Application

You can specify a timeout value for each connection to the CM. This allows you to set different timeout values for different operations. For example, you can set different timeout values for authorization and stop-accounting requests, or you can dynamically increase or decrease the timeout value for different operations based on the system load.

To specify a timeout value in milliseconds for a connection, pass the PIN_FLD_TIMEOUT_IN_MS field in the input flist to the PCM_CONTEXT_OPEN function. The PIN_FLD_TIMEOUT_IN_MS value takes effect only after the client connects to the server; it does not cause a timeout to be reported if the client cannot connect to the server at all.

The timeout value you specify applies to all the opcodes called during that open session and overrides the value in the client configuration file. You must ensure that your client application handles the timeout, closes its connection to the CM by calling PCM_CONTEXT_CLOSE, and cleans up the transaction context.

Note:

When the timeout happens, the CM does not provide any feedback about the success or failure of the request it received. When the CM detects the closed connection, it rolls back the ongoing transaction and shuts down.
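A minimal sketch of passing the timeout, assuming PIN_FLD_TIMEOUT_IN_MS is set as an integer field on the normal PCM_CONTEXT_OPEN() input flist (the other connection fields are omitted here; see the sample applications for a complete connection flist):

#include "pcm.h"
#include "pin_errs.h"

void open_connection_with_timeout(void)
{
    pin_flist_t   *c_flistp = NULL;
    pcm_context_t *ctxp = NULL;
    pin_errbuf_t   ebuf;
    int32          timeout_ms = 30000;   /* 30-second timeout (example value) */

    PIN_ERRBUF_CLEAR(&ebuf);

    c_flistp = PIN_FLIST_CREATE(&ebuf);
    /* ... add the usual connection fields (login, password, and so on) ... */

    /* Per-connection timeout in milliseconds; overrides the value in the
       client configuration file for all opcodes called on this connection. */
    PIN_FLIST_FLD_SET(c_flistp, PIN_FLD_TIMEOUT_IN_MS, (void *)&timeout_ms, &ebuf);

    PCM_CONTEXT_OPEN(&ctxp, c_flistp, &ebuf);
    if (PIN_ERRBUF_IS_ERR(&ebuf)) {
        /* handle the timeout or connection error, then clean up */
        PCM_CONTEXT_CLOSE(ctxp, 0, &ebuf);
    }
    PIN_FLIST_DESTROY_EX(&c_flistp, NULL);
}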

Configuring Your Custom Application

You must set up the following applications to access the same database schema:

  • The client application

  • At least one CM

  • At least one DM

The client application makes the connection by using the BRM database number in all three configuration files: the application's, the CM's, and the DM's. The database number is arbitrary, but it is determined before the system is installed. After the system is installed, you cannot change this number because it is encoded in every storable object in the database schema.

The client application must use POIDs with the correct database number. The system routes storable object requests on the basis of POIDs, which include the database number.

In your custom application, use the database number returned by PCM_CONNECT(). If you are using PCM_CONTEXT_OPEN(), call PCM_GET_USERID() and then PIN_POID_GET_DB() on the POID.

Caution:

Do not get the database number or the userid POID from the configuration file by calling pin_conf() in your application.
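As a sketch of the recommended approach (assumed forms; the exact macros and signatures are defined in pcm.h for your release):

#include "pcm.h"
#include "pin_errs.h"

void open_brm_connection(void)
{
    pcm_context_t *ctxp = NULL;
    int64          database = 0;
    pin_errbuf_t   ebuf;
    poid_t        *pdp = NULL;

    PIN_ERRBUF_CLEAR(&ebuf);

    /* PCM_CONNECT logs in using pin.conf and returns the database number. */
    PCM_CONNECT(&ctxp, &database, &ebuf);

    /* Use the returned database number when creating POIDs for new objects. */
    pdp = PIN_POID_CREATE(database, "/account", -1, &ebuf);

    /* If you open the connection with PCM_CONTEXT_OPEN() instead, derive the
       database number from the userid POID, for example:
           database = PIN_POID_GET_DB(PCM_GET_USERID(ctxp));
       (assumed form of PCM_GET_USERID; check pcm.h in your release). */
}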

Creating a Client Application in C

You write client applications by using the PCM opcodes, which send flists to and receive flists from the BRM database. Each opcode has a corresponding input and return flist.

Flists are used to hold return values for two important reasons:

  • The macro call itself does not return a value.

  • An flist can contain an arbitrary number of fields and values, which is frequently not known in advance.

Your custom applications must include the header files that correspond to the opcodes you use. The header file for base opcodes needs to be included explicitly only if you use base opcodes exclusively, which is unlikely. For information on header files and a list of opcodes and their corresponding header files, see "Header Files".

You use the PCM_OP() macro to pass PCM opcodes and flists to BRM. The system returns an flist. You create the input flist for the call to PCM_OP(), and the routine returns the results in an flist. You use the return flists and then destroy them.

The following pseudo-code shows the format of most client programs:

#include "pcm.h"
/* header file corresponding to the FM opcode you're using */
#include "ops/file.h" 
#include "pin_errs.h"
    main()
     .
     .
     .
         /* open a database context */
         PCM_CONTEXT_OPEN()
  
        /* clear error buffer */
         PIN_ERRBUF_CLEAR(&ebuf);
  
        /* send opcode to system, based on user activity or application
            function. */
  
PCM_OP(input_flist, opcode, return_flist, &ebuf)
  
        /* check for errors */
         if (PIN_ERRBUF_IS_ERR(&ebuf)) {
             /* handle error */
         } else {
             /* ok - no errors */
         }
         .
         .
         .
  
        /* close database context */
         PCM_CONTEXT_CLOSE()
     .
     .
     .
     exit(0);

Compiling and Linking Your Programs

You do not have to follow any special precompilation or other steps to compile and link applications. Both static and dynamic versions of BRM libraries are provided.

UNIX client libraries are multi-thread safe.

To compile and link your application:

  1. Compile using the include files in the BRM_SDK_Home/include directory.

  2. Link to the libraries in BRM_SDK_Home/lib.

    See the sample applications and their makefiles for more information.

Table 24-1 lists the supported compilers:

Table 24-1 Supported Compilers

Operating System: Solaris
Compiler: Standard SUNPro C compiler, default mode (-Xa). BRM is compiled with -xcg92.
Note: gcc is not supported; if you do use gcc, the -munaligned-doubles option is required for proper linking.

Operating System: Linux
Compiler: gcc compiler

Operating System: AIX
Compiler: xlc 8


Guidelines for Developing Applications in C on UNIX Platforms

Follow these guidelines to develop custom applications in C on Solaris, Linux, and AIX:

  • Include the appropriate library file at link time:

    • libportal.so for Solaris and Linux

    • libportal.a for AIX

  • Add BRM_SDK_Home/include to the list of include file directories to search.

  • In the preprocessor directives, define the following symbol:

    PIN_USE_ANSI_HDRS

Using the Sample Applications

BRM SDK includes sample applications and code as well as source code for policy FMs that you can refer to for coding examples.

Sample Applications

BRM SDK includes sample applications and code in C, C++, Java, and Perl. For a complete list of the sample applications, see "Sample Applications" in BRM Developer's Reference.

Before you write your program, try compiling and linking copies of these programs to familiarize yourself with the system. These programs are located in BRM_SDK_Home/source/samples. This directory also includes a sample application configuration file.

Caution:

Do not run the sample programs on a production system. Some programs fill the database with test storable objects. Remove the test storable objects before building your production system.

Policy FM Source Files

BRM SDK includes the source code for all the policy opcodes. You can refer to them for BRM coding examples. You can find the Customer Policy FM opcode source files in BRM_SDK_Home/source/sys. Each policy FM has its own directory containing the source files for the included opcodes as well as a makefile and other support files.

For more information, see the transaction handling information on each opcode reference page, "About Transaction Usage", and "Context Management Opcodes".

Adding Branding to Your Application

Brand Manager allows a single hosting BRM site to support multiple virtual BRM sites. Each virtual Internet service provider (ISP) is known as a brand and requires a secured view of its own data. At the same time, the hosting site maintains administrative oversight of the entire system, which may include many independent BRM sites.

The Brand as an Independent Virtual Workspace

Two kinds of accounts exist in a branded service management setting: standard accounts and brand accounts. The brand account is a special entity that serves as a complete miniature BRM system. Each brand accesses the database according to its own rules. To effectively implement brands, "data scoping" is required.

Data Scoping

Data scoping is the ability to restrict database access so that only authorized users can access data and services. The concept of an account is essential to effective data scoping. Most objects are account-centric. For example, services and profiles are designed to be associated with an account. Events and pricing objects also "belong" to an account. Therefore, to implement data scoping in a branded environment, all objects must be associated with an account.

Pointing a Base Level Object to a Brand Account

To enforce scoping restrictions, the system associates all objects with an account. This association is made through the PIN_FLD_ACCOUNT_OBJ reference in every base-level object.

! Account object to which data belongs
field PIN_FLD_ACCOUNT_OBJ (
type = PIN_FLDT_POID,
perms = MW,

Defining a Brand Account

There are two account types: standard and brand. The following tag specifies the account type:

! Brand object or normal object
field PIN_FLD_ACCOUNT_TYPE (
type = PIN_FLDT_ENUM
  

This flag is set when the account is created and cannot be modified later. The flag can have one of these values:

  • For brand accounts, PIN_ACCOUNT_TYPE_BRAND

  • For standard accounts, PIN_ACCOUNT_TYPE_STANDARD

Providing Access to Brands

In a branded environment, it's often necessary for an object to have distinct read and write access rules. For example, all members of a brand account may have read access to a pricing object, while write access may be more restricted. To meet this requirement, the following fields, which define read and write access rules for the object, have been added to each object. These fields are controlled by the system and filled in by the Data Manager with values specified in the data dictionary.

  • PIN_FLD_READ_ACCESS: Specifies who has permission to read data

  • PIN_FLD_WRITE_ACCESS: Specifies who has permission to modify data

Branding Access Permissions

Table 24-2 defines the permissions that can be assigned for read and write access:

Table 24-2 Branding Access Permissions

Permission: Global
Description: Any user can read or write the object. Many /config objects are globally readable.

Permission: Brand
Description: Any user with brand access can read or write the object. For example, pricing objects, such as products and deals, are brand-readable.

Permission: Self
Description: Only the brand owner can read or write the data. Pricing objects are self-writable and brand-specific to prevent unauthorized changes to the price list.

Permission: Ancestral Lineage
Description: The brand owner or any superior billing group leader can read or write the data. For example, /device objects have Ancestral Lineage write permissions.

Permission: Brand Lineage
Description: The owner, or any superior billing group leader within the brand, can read or write the data. Used with many objects, including services, profiles, and events, to ensure brand segregation and data privacy.


Providing Access to Pricing Information

All pricing-related objects available for system use are associated at the brand level. Therefore, pricing FM opcodes look only for pricing objects associated with the brand.

Read Access to Pricing Data

Each brand in a BRM system maintains its own pricing data. Because everyone who accesses the brand should be able to read pricing information, objects related to pricing have read access set to the Brand Group Scope level. Thus, objects of the following types all have PIN_FLD_READ_ACCESS = B (brand): /product, /rate, /fold, /plan_list.

Write Access to Pricing Data

Since the ability to modify pricing information is generally much more restricted, write access to pricing objects is set locally. This ensures that only the owner of the pricing object can modify pricing data.

Billing Groups

Billing groups act as containers for grouping accounts. A billing group leader has access to all account data owned by the billing group, as well as information owned by any of the billing group members. Thus, a group leader can look down an entire tree of billing groups and their associated accounts within the brand. In this way, billing groups are used as the primary mechanism for categorizing scoping classes.

Providing Billing Group Access with an Access Control List (ACL)

A new group object, /group/acl, has been created to define access for the billing group. The group acl includes a list of group members along with a pointer to the brand account or accounts to which group members have access.

Managing Custom Applications

A host ISP may manage many brand accounts. Each brand may in turn have access to a variety of multi-brand applications. BRM's client interface simplifies an administrator's task. Once a user enters an authorized login and password, BRM displays a list of available brands. A user can then set the current connection scope by selecting a brand to customize.

Creating Brands Programmatically

Use the PCM_OP_CUST_COMMIT_CUSTOMER opcode to create a new brand account. This opcode creates and initializes /account and /service storable objects. When the account is created, the account type flag must be set:

! Brand object or normal object
field PIN_FLD_ACCOUNT_TYPE (
type = PIN_FLDT_ENUM
  

This flag is set when the account is created, and cannot be modified later. The flag can have one of two values:

  • For brand accounts: PIN_ACCOUNT_TYPE_BRAND

  • For standard accounts: PIN_ACCOUNT_TYPE_STANDARD

Set the flag to PIN_ACCOUNT_TYPE_BRAND to specify this account as a brand.
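As a sketch (the exact position of the field on the opcode's input flist depends on the PCM_OP_CUST_COMMIT_CUSTOMER flist specification; see the opcode reference), setting the flag might look like this:

/* Mark the account being created as a brand account. */
int32 acct_type = PIN_ACCOUNT_TYPE_BRAND;

PIN_FLIST_FLD_SET(in_flistp, PIN_FLD_ACCOUNT_TYPE, (void *)&acct_type, ebufp);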

Writing Brand-Aware Applications

Applications can be customized to meet the requirements of a brand. To do this, the administrator must first retrieve a list of available brands and then set the connection scope to the specific brand to be customized.

Displaying a List of User Authorized Brands

Use the PCM_OP_PERM_GET_CREDENTIALS opcode to display a list of the brands which have access to an application. Usually this opcode is called at startup time. After the user enters an authorized login and password, the system displays a list of brands which have access to the application along with the currently active brand and billing group.

Setting the Current Connection Scope

Because each brand represents an independent virtual workspace with its own data and access rules, a user must select a single brand (or billing subgroup) with which to work. Use the PCM_OP_PERM_SET_CREDENTIALS opcode to set the current connection scope.

About Adding Multischema Support to Your Applications

BRM supports the ability to add multiple database schemas for the purpose of scaling your BRM system. This section explains the programming considerations of creating an application to work with a BRM system using multiple schemas.

For instructions on installing a multischema system, see "Installing a Multischema System" in BRM Installation Guide.

For instructions on maintaining a multischema system, see "Managing a Multischema System" in BRM System Administrator's Guide.

This section contains information you need to know before you design a new application or enhance an existing program to take advantage of the BRM multischema feature.

About Working with Multiple Schemas

Generally, making applications work with multiple database schemas is not all that different from making them work with a single schema.

Accounts are distributed across schemas, but applications log in to the correct schema for an account based on the login name and service type. When an application logs in to BRM, it gets the schema context for the account it logged in as. An event for the login session for that application is created in the schema that hosts the account.

After the application has logged in, it has access to the entire BRM database for reads and writes on all classes that are modifiable. In most cases, after an account context is established, all subsequent operations for the account are performed in the single schema where the context was opened.

Creating Accounts

The PCM_OP_CUST_COMMIT_CUSTOMER opcode has been enhanced to work with multiple database schemas. Use that opcode to create accounts just as you would for a single-schema system. The opcode uses the /config/distribution object created by using the load_config_dist utility to determine which schema your account is created in.

You can specify which schema new accounts should be created in by editing the config_dist.conf configuration file. For more information, see "Setting Database Priorities" in BRM System Administrator's Guide.

Important:

Billing groups, including all accounts with the same brand, must reside in the same schema.

Maintaining Transactional Integrity

Important:

Remember: after you find the account whose data you want to modify, confine as many operations as possible to that schema.

Although an application can connect to multiple database schemas and manipulate data in any schema, a transaction can only manipulate data in a single schema. To perform a transaction on more than one schema, you must close the existing transaction, open a context to the other schema, and open another transaction. An application that needs to perform the same operation on all accounts (such as billing or invoicing) should be run as a separate instance in each schema.

You must use the database number returned by PCM_CONNECT or PCM_CONTEXT_OPEN for all transactions within the context you open. These calls take an account's user name and return the database number for that account. To preserve transactional integrity, avoid opening contexts to multiple schemas whenever possible.

The exception to this rule is the rare occasion when you need to access information in any of the pricing storable classes. Embedded in these classes is the account information (including database number) of the person who changed that information. All account references are exact references. Dealing with this information can require you to switch contexts to another schema with a new call to PCM_CONNECT or PCM_CONTEXT_OPEN.
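A sketch of that schema-switch pattern follows; open_context_to_db() is a hypothetical helper standing in for a PCM_CONNECT or PCM_CONTEXT_OPEN call configured for the target schema, and the other variable names continue the earlier transaction sketch:

/* The account reference embedded in a pricing object may point to
   another schema. Compare its database number with the current one. */
int64 target_db = PIN_POID_GET_DB(account_pdp);

if (target_db != current_db) {
    /* Close the open transaction and context before switching. */
    PCM_OP(ctxp, PCM_OP_TRANS_COMMIT, 0, t_flistp, &r_flistp, &ebuf);
    PCM_CONTEXT_CLOSE(ctxp, 0, &ebuf);

    /* Hypothetical helper: opens a context to the target schema. */
    ctxp = open_context_to_db(target_db, &ebuf);
}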

Searching for Accounts across Database Schemas

This section describes how to search for accounts across database schemas.

Searching for a Single Account

You can use the following opcodes to find a single account:

  • Use the PCM_OP_ACT_FIND opcode to find an account based on the login and service type. This opcode finds and returns the account POID (including the correct database number) of a single account.

  • Use the PCM_OP_GLOBAL_SEARCH opcode to find an account based on other account attributes. This opcode returns any fields that you specify on the input flist.

Searching for Multiple Accounts (Global Search)

Use the PCM_OP_GLOBAL_SEARCH opcodes to find and return the Portal object IDs (POIDs) of multiple accounts across multiple database schemas at the same time. The global search opcodes can also be used to search for a set of objects that reside in multiple database schemas (for example, all events from a particular day). See "Searching for Objects in the BRM Database" for a complete discussion of searching for accounts across multiple database schemas.

Important:

Remember to use nonglobal searches for better performance whenever possible. After you get the results of a global search, you can improve your application's overall performance by dividing the database read and write operations among database schemas.

Finding How Many Database Schemas You Have with testnap

Use the testnap utility to find the number of database schemas connected to your BRM system. The following example starts testnap and displays the contents of an flist stored in buffer 1. This flist is designed to match all root accounts. The flist is then passed to PCM_OP_GLOBAL_SEARCH (opcode number 25), which searches all database schemas for their root accounts. Each database schema has only one root account (in the /service class), so the result of this search is a list of the database schemas currently connected to your BRM system. In this example, there are two: 0.0.0.1 and 0.0.0.2.

testnap
input flist:
  
d 1
0 PIN_FLD_POID           POID [0] 0.0.0.1 /search -1 0
0 PIN_FLD_FLAGS           INT [0] 0
0 PIN_FLD_TEMPLATE        STR [0] "select X from /service 
where F1 like V1 and F2 = V2 "
0 PIN_FLD_ARGS          ARRAY [1]
1     PIN_FLD_LOGIN           STR [0] "root.0.0.0%"
0 PIN_FLD_ARGS          ARRAY [2]
1     PIN_FLD_POID           POID [0] 0.0.0.0 /service/pcm_client -1 0
0 PIN_FLD_RESULTS       ARRAY [0]
1     PIN_FLD_POID           POID [0] NULL
1 PIN_FLD_LOGIN           STR [0] ""
  
result:
  
XOP PCM_OP_GLOBAL_SEARCH 0 1
XOP: opcode 25, flags 0
# number of field entries allocated 3, used 3
0 PIN_FLD_POID           POID [0] 0.0.0.1 /search -1 0
0 PIN_FLD_RESULTS       ARRAY [0] allocated 2, used 2
1     PIN_FLD_POID           POID [0] 0.0.0.2 /service/pcm_client 1 1
1     PIN_FLD_LOGIN           STR [0] "root.0.0.0.2"
0 PIN_FLD_RESULTS       ARRAY [1] allocated 2, used 2
1     PIN_FLD_POID           POID [0] 0.0.0.1 /service/pcm_client 1 1
1     PIN_FLD_LOGIN            STR [0] "root.0.0.0.1"

Bill Numbering

Applications must not hard-code bill numbers. Bill numbers are now tied to the database schema in which they were created, and BRM relies on this numbering scheme. The /data/sequence class still tracks bill numbers to ensure that they are unique; it has been enhanced to make sure that bill numbers are unique across database schemas.

About Adding Virtual Column Support to Your Applications

This section explains the programming considerations of creating an application to work with BRM virtual columns and applies to custom applications that interact with the BRM database directly. For information about using virtual columns in the BRM database, see the discussion on generating virtual columns in BRM System Administrator's Guide.

Caution:

  • Always use the BRM API to manipulate data. Changing data in the database without using the API can corrupt the data.

  • Do not use SQL commands to change data in the database. Always use the API.

Custom applications can perform read operations on virtual columns but cannot perform update or insert operations. The values of virtual columns are computed dynamically, and attempts to modify them directly result in an error.

BRM creates virtual columns for the POID field_name_type columns on event tables in the BRM database. If your custom applications must update or insert data in these physical columns after they have been converted to virtual columns, you must make your applications interact with each virtual column's supporting column instead.

Each BRM virtual column is associated with a supporting column that stores the storable class ID. The supporting columns can be modified; they are named field_name_type_id, whereas the corresponding virtual columns are named field_name_type.

The following examples demonstrate how custom applications can perform update and insert operations on the supporting columns of physical columns that have become virtual-column enabled.

Note:

The get_object_id function shown in the examples is available in the PIN_VIRTUAL_COLUMNS package.

Consider a table event_t with virtual column session_obj_type. The virtual column has a session_obj_type_id supporting column, which stores the ID corresponding to the type value of the virtual column.

  • Update operation example

    Any custom application or PL/SQL code updating the column session_obj_type using SQL

    update event_t set session_obj_type = '/service/telco';
    

    will have to be modified to

    update event_t set session_obj_type_id = pin_virtual_columns.get_object_id('/service/telco');
    
  • Insert operation example

    Any custom application or PL/SQL code inserting values into the column poid_type using SQL

    insert into event_t (poid_type) values ('/event');
    

    will have to be modified to

    insert into event_t (poid_type_id) values (pin_virtual_columns.get_object_id('/event'));