Oracle® Communications Order and Service Management Installation Guide
Release 7.2.2


4 Installing and Configuring the Oracle Database

This chapter provides information about the installation and configuration of Oracle Database that is specific to Oracle Communications Order and Service Management (OSM). For complete installation instructions and general information about installing and configuring the Oracle Database, see the Oracle Database documentation.

Database Information You Should Record

Some of the information that you set when installing the Oracle database is needed during the OSM installation. Record the following information and provide it to the OSM installer:

  • Database Instance Host

  • Database Instance Port

  • Database Instance SID

  • Database Service Name

  • Primary Schema Username/Password

  • Rule Engine Schema Username/Password

  • Database Admin Username/Password

  • Information about all tablespaces created for OSM

Downloading and Installing the Oracle Database

For information on installing Oracle database, see the Oracle database installation documentation.

OSM requires Oracle Database Release 11.2.0.3. To obtain this release, install Oracle Database server patch set 10404530 from My Oracle Support. This patch set contains the complete installation package for Oracle Database 11.2.0.3, so it is not necessary to install a base release of the database and then upgrade it to 11.2.0.3. However, you must still install any individual patches mentioned in the software requirements section.

So that you can set all of the options in the database, you may prefer not to create a database during the database software installation. It may be more convenient to run Oracle Database Configuration Assistant after installing the database software.

Downloading and Installing Oracle Database Patches

See "Software Requirements" for information about the patches needed for the database for your platform. Ensure that all of the patches have been downloaded from My Oracle Support and installed before installing OSM.

Creating and Configuring the Oracle Database


This section describes how to create and configure an Oracle Database for OSM, and provides installation and configuration guidelines for improving OSM performance.


Some of the procedures in this section must be performed by an Oracle Database Administrator (DBA). The following roles and permissions are required for the account used by the DBA:
  • Connect, resource with admin option

  • Execute on dbms_lock with grant option

  • Execute on dbms_redefinition with grant option

  • Select on dba_jobs with grant option

  • exp_full_database, imp_full_database with admin option

  • Create table with admin option

  • Create materialized view with admin option

  • Query rewrite with admin option

  • Select on v_$parameter with grant option

  • sysdba
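The roles and grants above could be assigned with SQL along these lines. This is a sketch only: the account name osm_dba and its password are placeholders for illustration, not names from this guide.

```sql
-- Hypothetical DBA account; replace the name and password with your own.
CREATE USER osm_dba IDENTIFIED BY password;

GRANT CONNECT, RESOURCE TO osm_dba WITH ADMIN OPTION;
GRANT EXECUTE ON dbms_lock TO osm_dba WITH GRANT OPTION;
GRANT EXECUTE ON dbms_redefinition TO osm_dba WITH GRANT OPTION;
GRANT SELECT ON dba_jobs TO osm_dba WITH GRANT OPTION;
GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO osm_dba WITH ADMIN OPTION;
GRANT CREATE TABLE TO osm_dba WITH ADMIN OPTION;
GRANT CREATE MATERIALIZED VIEW TO osm_dba WITH ADMIN OPTION;
GRANT QUERY REWRITE TO osm_dba WITH ADMIN OPTION;
GRANT SELECT ON v_$parameter TO osm_dba WITH GRANT OPTION;
GRANT SYSDBA TO osm_dba;
```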

The hardware used for the Oracle server that runs the OSM database schema, and the configuration of that server, have the largest impact on performance.

Database Configuration Considerations for All Systems

This section contains information and settings for use when creating any instance of the Oracle Database to be used by OSM.

If you are creating the database from the Database Configuration Assistant, Oracle recommends that you use the "Custom Database" template.

The following optional components are not required by OSM. However, you can install them if your organization, or other applications that will be using the database, requires them.

  • Oracle Text

  • Oracle OLAP

  • Oracle Spatial

  • Oracle Label Security

  • Enterprise Manager Repository

  • Oracle Warehouse Builder

  • Oracle Database Vault

Memory Settings for the OSM Database

Allocate as much memory to the database as your system configuration allows.

Oracle strongly recommends the use of Automatic Memory Management (AMM) to manage the memory on your system in both development and production environments. AMM enables Oracle Database to automatically manage and tune the instance memory. On most platforms, you must set the target memory size initialization parameter (memory_target) and optionally a maximum memory size initialization parameter (memory_max_target). The total memory that the instance uses remains relatively constant based on the value of memory_target, and the instance automatically distributes memory between the system global area (SGA) and the instance program global area (instance PGA).
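As a sketch, AMM can be enabled with initialization parameters along the following lines. The sizes shown are placeholders for illustration, not recommendations from this guide.

```sql
-- Enable Automatic Memory Management (values are illustrative only).
ALTER SYSTEM SET memory_max_target = 8G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 6G SCOPE = SPFILE;

-- Zero the manual SGA/PGA targets so AMM manages the split itself.
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE = SPFILE;

-- Restart the instance for SPFILE-only changes to take effect.
```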

Character Sets

Oracle recommends using the AL32UTF8 character set for the OSM database instance. However, if OSM is the only application that will be using the database instance, you can use the default character set for your location.

The national character set can be left to the default value for your location.

Database Parameters

Table 4-1 shows the parameters that should be set for all databases.

Table 4-1 Suggested Oracle Database Parameters for All Systems

Parameter    Value    Description



This parameter value is required for the OSM installer to work. If you do not want to have this parameter set permanently in your database, you can change it back to the default (FALSE) after you have run the OSM installer.


db_block_size

8192 (default)

For large commercial application packages on raw disks, this value should be 8 KB. It is optimized for an environment with both Online Transaction Processing (OLTP) and Decision Support Systems (DSS). If you expect only OLTP operation, that is, no reporting or large queries, you can reduce this value to 4 KB.


processes

150 (default)

Sets the total number of processes that can be connected simultaneously to the Oracle server.



Caution! This parameter is deprecated in Oracle Database 11gR2 and will generate warnings and alert logs if set. Oracle recommends that you do not specify this parameter. If you do use it, set it to FALSE (the default value).

Configuring Time Zone Settings in the Database

The database server running OSM must not use Daylight Saving Time (DST); otherwise, date and schedule calculations will be incorrect while DST is in effect. To avoid this problem, set the database server to Greenwich Mean Time (GMT).
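One way to do this, assuming a Linux or Solaris host where the database processes inherit the TZ variable from the shell that starts them, is to export a GMT time zone in the profile of the user that starts the database and listener:

```shell
# Run the Oracle instance and listener under GMT so no DST shift occurs.
# (Assumption: the database inherits TZ from the starting shell.)
export TZ=GMT
echo "Time zone now: $(date +%Z)"
```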

Database Configuration Considerations for Development Instances

The information in this section applies to all non-production database instances.

Starting with Oracle Database 11g Release 2, the initial extent size for partitioned tables is 8 MB. If you have many sub-partitions, a large amount of space can be allocated: for example, a table with 64 sub-partitions is allocated an initial 512 MB (64 × 8 MB). Although this is not an issue for production environments, it can become an issue in development or testing environments with limited storage capacity.

To minimize the database space consumed by OSM:

  • Set the deferred_segment_creation initialization parameter to TRUE (the default value).

  • Install OSM with no partitions, or a small number of partitions (for example, 4).

  • Use a tablespace with uniform extent allocation and a small extent size, such as 64 KB.
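The first and third of these settings might be applied as follows. This is a sketch: the tablespace name, datafile path, and sizes are illustrative, not values from this guide.

```sql
-- Defer segment creation until the first row is inserted (TRUE is the default).
ALTER SYSTEM SET deferred_segment_creation = TRUE SCOPE = BOTH;

-- Small uniform extents keep the initial allocation low in dev/test instances.
CREATE TABLESPACE osm_dev_data
  DATAFILE '/u01/oradata/OSMDEV/osm_dev_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL
  UNIFORM SIZE 64K;
```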

Database Parameters

If you are going to have many (for example, more than ten) instances of OSM accessing the same database instance, reduce the value of the job_queue_processes parameter to a value between 100 and 500.
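For example, to apply a value within the suggested range (100 is only a sample value):

```sql
-- Reduce job queue processes when many OSM instances share one database.
ALTER SYSTEM SET job_queue_processes = 100 SCOPE = BOTH;
```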


Tablespaces

For non-production environments, one permanent tablespace is sufficient for the OSM data, and OSM can use the default temporary tablespace for the database instance.

Database Configuration Considerations for Production Instances

This section contains information that applies to production, performance testing, and some production staging OSM database instances.

Sizing the OSM Database Schema

The sizing of production systems is multi-dimensional and dependent on many variables that vary greatly from customer to customer, such as daily transaction volume and amount of historical data to be maintained.

For more information on sizing of your production system, contact Oracle.

OSM allows you to add partitions to store order data as needed. As orders enter the system and consume the available storage, you can add partitions on new tablespaces to your environment; the space for these additional partitions does not need to be calculated at installation time. Orders can also be purged from the system based on the partition they are in.

See "Hardware Sizing Guidelines" for additional information on Oracle Database disk space usage.

Database Parameters

This section outlines suggested Relational Database Management System (RDBMS) server configurations for OSM.


This section provides suggested values for use with OSM in a production system. The suggested values are guidelines only. The values you use will depend on your system type and actual processing requirements.

You must configure the parameters shown in Table 4-2.

Table 4-2 Suggested Oracle Database Parameters for Production Systems

Parameter    Value    Description



open_cursors

Maximum number of open cursors a session can have at once.



job_queue_processes

Specifies the number of processes that can be created for the execution of jobs.


(SPARC T3 and T4 only)


If you are using a SPARC T3 or T4, the default value of this parameter can cause severe performance degradation.

For performance reasons, Oracle recommends that you do not use the Shared Server configuration (called Multi-Threaded Server, or MTS, in previous versions of Oracle Database) for production systems running OSM. OSM implements its own connection multiplexing.
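On an instance where Shared Server has been enabled, it can be switched off so that all sessions use dedicated server processes. This is a sketch of one way to do it:

```sql
-- Disable shared server processes so clients get dedicated server processes.
ALTER SYSTEM SET shared_servers = 0 SCOPE = BOTH;
```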


Tablespaces

For production instances, create a minimum of two tablespaces: one permanent and one temporary. For performance reasons, and to facilitate backup and recovery, do not share the OSM permanent tablespaces with other applications.

OSM data is placed in the permanent tablespace(s) and the temporary tablespace is used by the Oracle database as a workspace while processing OSM commands. The OSM data can be placed in one tablespace for a minimum installation, but OSM performs better when data is distributed across multiple tablespaces. You can use up to five tablespaces when initially installing the system.

It is possible to spread the OSM database over more than five tablespaces by altering the database installation and upgrade scripts. Only an experienced Oracle DBA should do this. For more information, contact Oracle.

In a high-throughput system, each tablespace should be created on a different physical disk. This limits disk contention and I/O bottlenecks, improving performance. Place the Oracle redo log files on a separate, dedicated physical disk with no other load on it.

In a production system, a RAID device should be used for physical storage. In this case, there is no advantage to placing tablespaces on different physical RAID drives as long as space is available.

The following is a tablespace creation script sample for a small installation model.

create tablespace data
datafile '/u01/oradata/OSM/data_064K01_01.dbf' size 100M
extent management local 
uniform size 64K;

create tablespace large_data
datafile '/u01/oradata/OSM/data_001M01_01.dbf' size 2200M
extent management local
uniform size 1M;

create tablespace index_ts
datafile '/u01/oradata/OSM/index_064K01_01.dbf' size 100M
extent management local 
uniform size 64K;

create tablespace large_index
datafile '/u01/oradata/OSM/index_001M01_01.dbf' size 1600M
extent management local
uniform size 1M;


If you are using Chinese UTF8 characters, the Block Size for the tablespaces used by the OSM database must be configured for 8K at database instance creation.
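Because the standard block size is fixed when the instance is created, it must appear in the initialization parameters before CREATE DATABASE is run. A minimal sketch of the relevant parameter file entry:

```sql
-- In the init.ora/SPFILE used when creating the instance (value in bytes).
db_block_size = 8192
```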

Configuring Oracle Database with Clusterware

Oracle Clusterware is software that, when installed on servers running the same operating system, enables the servers to be bound together to operate as if they are one server, and manages the availability of applications and Oracle databases.

With Oracle Clusterware, you can provide a cold cluster failover (also known as cold standby) to protect an Oracle instance from a system or server failure. The basic function of a cold cluster failover is to monitor a database instance running on a server and, if a failure is detected, to restart the instance on a spare server. Network addresses are failed over to the backup node. Clients on the network experience a period of lockout while the failover occurs and are then served by the other database instance after the instance has started.

Setting Up the Database and Clusterware for Cold Standby

A typical cold standby configuration consists of Oracle Clusterware and two Oracle single-instance databases running on separate physical servers with shared disk storage. In this configuration, when node A fails, the database, listener, and ASM instance automatically fail over to node B.

The following procedure describes how to set up Oracle Database and Clusterware to provide cold failover for a single-instance, noncluster, Oracle database.

  1. Install Oracle Clusterware 11gR2 on two physical machines.

    • For Linux, see Grid Infrastructure Installation Guide for Linux.

    • For Solaris, see Grid Infrastructure Installation Guide for Solaris Operating System.

    • For AIX, see Grid Infrastructure Installation Guide for AIX.

  2. Install a clustered ASM (Automatic Storage Management) home and instance.


    Oracle recommends the use of ASM to optimize storage performance and usage, and to tolerate storage failures. You may instead need to use other storage systems, such as Oracle Automatic Storage Management Cluster File System (Oracle ACFS), or storage on raw devices. For more information, see the respective product documentation.
  3. Create an ASM disk group on shared storage which is accessible to both nodes A and B.

  4. Install a software-only version of Oracle Database 11gR2 into a new Oracle home on both nodes.

  5. Use Database Configuration Assistant (DBCA) to create a single-instance database on node A, storing its data files on the ASM disk group created in step 3.

  6. Create a new virtual IP address on node A.

  7. Implement the scripts provided in the Oracle white paper Using Oracle Clusterware to Protect A Single Instance Oracle Database. The scripts must be run on both nodes to protect the single-instance database, virtual IP address, and listener.

  8. Start the database, listener, and ASM instance on node A using Oracle Clusterware.

  9. Start the ASM instance on node B.

You have now configured your database instances for cold cluster failover. Once you have installed OSM, you can configure a WebLogic instance to automatically restart when the database fails. See "Configuring WebLogic for Cold Cluster Failover" for more information.

Configuring Oracle Database with Real Application Clusters (RAC)

Oracle RAC offers a higher level of availability and performance than cold cluster failover. OSM supports Oracle RAC in both active-passive (warm standby) and active-active (load balancing) modes. The data files that comprise the database must reside on shared storage that is accessible to all servers in the database cluster. If a node in the cluster fails, Oracle Database continues to run on the surviving nodes.

OSM supports Oracle RAC through WebLogic multi data sources. For each WebLogic managed server, one database instance is configured as the primary instance (in the WebLogic multi data source) and the other as the secondary instance. WebLogic monitors the status of the primary instance and fails over to the secondary instance in the event of failure.
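As an illustration of this idea, each data source inside the WebLogic multi data source pins one RAC instance through its connect descriptor. All host, service, and instance names below are placeholders, not values from this guide.

```text
jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=nodeA-vip.example.com)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=osm.example.com)(INSTANCE_NAME=osm1)))
```

A second data source with the same structure would point at node B's virtual IP and instance, and the multi data source fails over between them.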

Setting Up the Database and Clusterware for Oracle RAC

The following procedure describes how to set up Oracle Database and Clusterware to support Oracle RAC.

  1. Install Oracle Clusterware 11gR2 on nodes A and B:

    • For Linux, see Grid Infrastructure Installation Guide for Linux.

    • For Solaris, see Grid Infrastructure Installation Guide for Solaris Operating System.

    • For AIX, see Grid Infrastructure Installation Guide for AIX.

  2. Install Oracle Database with Real Application Clusters on both nodes.

    For Linux, Solaris, and AIX, see Real Application Clusters Installation Guide for Linux and UNIX.

  3. Create an ASM instance and disk group with shared storage accessible to both nodes.

  4. Use Database Configuration Assistant (DBCA) to create an Oracle RAC database on both nodes, using the ASM disk group created in the previous step as storage.

  5. Create two virtual IP addresses for the database instances, and one Single Client Access Name (SCAN) for the database cluster.

    In an Oracle RAC environment, remote listeners are typically used. In 11gR2, when you create the database, the remote listener is created as a SCAN listener. The SCAN resolves to multiple IP addresses in the cluster.

  6. Start the database and listeners on both nodes.

You have now configured your database instances to support Oracle RAC. When you run the OSM installer, you have the option to create the WebLogic multi data source and data sources required for this configuration. See Chapter 6, "Installing OSM in Interactive Mode" for more information.