This chapter provides information about the installation and configuration of an Oracle Real Application Clusters (Oracle RAC) database that is specific to Oracle Communications Order and Service Management (OSM). For complete installation instructions and general information about installing and configuring the Oracle Database, see the Oracle Database documentation.
Some of the information that you set when installing the Oracle Database will be needed during the OSM installation. Record the following information and provide it to the OSM installer:
Oracle RAC database instance hosts
Oracle RAC database instance ports
Oracle RAC database instance SIDs
Oracle RAC database service name
Database administrator user name and password
Information about all tablespaces created for OSM
This section describes how to create and configure an Oracle database for OSM. It also gives installation and configuration guidelines to improve OSM performance. Although these guidelines can help improve OSM performance, the hardware and configuration used for Oracle Server running the OSM database schema have the largest impact on performance.
You can install OSM using Oracle Database Release 12c pluggable databases (PDB) within a multitenant container database (CDB) or using a non-container database.
Oracle Real Application Clusters (Oracle RAC) offers a higher level of availability and performance. For OSM production environments, Oracle recommends Oracle RAC in active-active mode. The data files that comprise the database must reside on shared storage that is accessible to all servers in the database cluster. If a node in the cluster fails, Oracle Database continues to run on the surviving nodes.
OSM supports Oracle RAC through WebLogic multi data sources. For each WebLogic managed server, one database instance is configured as the primary instance (in the WebLogic multi data source) and the others as secondary instances. WebLogic monitors the status of the primary instance and fails over to a secondary instance in the event of failure.
The following procedure describes how to set up Oracle Database and Clusterware to support Oracle RAC.
Install Oracle Clusterware on all nodes:
For Linux, see Oracle Grid Infrastructure Installation Guide for Linux.
For Solaris, see Oracle Grid Infrastructure Installation Guide for Solaris Operating System.
For AIX, see Oracle Grid Infrastructure Installation Guide for AIX.
Install Oracle Database with Real Application Clusters on all nodes.
For Linux and UNIX, see Oracle Real Application Clusters Installation Guide for Linux and UNIX.
Create an ASM instance and disk group with shared storage accessible to all nodes.
Use Database Configuration Assistant (DBCA) to create an Oracle RAC database on all nodes, using the ASM disk group created in the previous step as storage.
Create virtual IP addresses for the database instances, and one Single Client Access Name (SCAN) for the database cluster.
In an Oracle RAC environment, remote listeners are typically used. When you create the database, the remote listener is created as a SCAN listener. The SCAN resolves to multiple IP addresses in the cluster.
Start the database and listeners on all nodes.
Download and install database patches. See "Software Requirements" for information about the patches needed for the database for your platform. Ensure that all of the patches have been downloaded from Oracle support and installed before installing OSM.
For OSM, you must also grant an additional privilege to the database administrator user that you are going to use during the OSM installation. To do this, log in to SQL*Plus as sysdba and run the following command:
grant create any context to sysuser with admin option;
where sysuser is the user with sysdba privileges that you intend to use during the OSM installation.
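To confirm that the grant succeeded, you can query the data dictionary. This check is a suggestion and is not part of the documented procedure; replace SYSUSER with your actual user name.

```sql
-- Verify the privilege; user names are stored in uppercase in the
-- data dictionary. SYSUSER is a placeholder for your actual user.
SELECT grantee, privilege, admin_option
  FROM dba_sys_privs
 WHERE grantee = 'SYSUSER'
   AND privilege = 'CREATE ANY CONTEXT';
```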
You have now configured your database instances to support Oracle RAC and OSM. When you run the OSM installer, you have the option to create the WebLogic multi data source and data sources required for this configuration. See "Performing an Interactive Installation of OSM" for more information.
Allocate as much memory as possible to the database. Use the memory guidelines provided in "Planning the Physical Architecture," then confirm your memory requirements with performance testing and tuning.
Oracle recommends the use of Automatic Shared Memory Management (ASMM) to manage the memory on your system in production environments. ASMM is a better option than Automatic Memory Management (AMM) because AMM can cause performance problems.
If you are using Linux for your database server, and your database uses approximately 32 GB of memory or more, consider implementing Linux HugePages for improved memory performance. Linux HugePages is incompatible with AMM. For more information about HugePages, consult your operating system documentation.
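As a rough illustration of HugePages sizing (not taken from this guide), the number of huge pages must be large enough to hold the entire SGA. The SGA size and the 2 MB page size below are assumptions; replace them with your system's actual values.

```shell
# Estimate vm.nr_hugepages for a hypothetical 48 GB SGA, assuming the
# common 2 MB huge page size (verify with: grep Hugepagesize /proc/meminfo).
SGA_MB=49152   # assumed SGA target in MB (48 GB)
PAGE_MB=2      # assumed huge page size in MB
NR_HUGEPAGES=$(( (SGA_MB + PAGE_MB - 1) / PAGE_MB ))   # round up
echo "vm.nr_hugepages = ${NR_HUGEPAGES}"
# prints: vm.nr_hugepages = 24576
```

The resulting vm.nr_hugepages value would then be set in /etc/sysctl.conf.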
Oracle recommends using the AL32UTF8 character set for the OSM database instance. However, if OSM is the only application that will be using the database instance, you can use the default character set for your location.
The national character set can be left to the default value for your location.
This section outlines suggested Relational Database Management System (RDBMS) server configurations for OSM.
Note: This section provides suggested values for use with OSM in a production system. The suggested values are guidelines only. The values you use will depend on your system type and actual processing requirements.
Table 4-1 shows the parameters that should be set for all databases.
You can set this parameter to TRUE before running the OSM installer if you plan to use sys as the database administrator user during the OSM installation. If you do, consider setting the parameter back to its default of FALSE after the OSM installation is complete.
If you do not set this parameter to TRUE, you can append as sysdba to the user name when entering the database administrator credentials in the OSM installer.
You may find it more convenient to set this parameter (rather than appending as sysdba) if you are installing multiple instances of OSM in the same database or if you expect to run the installer multiple times.
For large commercial application packages on raw disks, this value should be 8 KB. It is optimized for an environment with both Online Transaction Processing (OLTP) and Decision Support Systems (DSS). If you expect only OLTP operation, that is, no reporting or large queries, you can reduce this value to 4 KB.
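Because DB_BLOCK_SIZE cannot be changed after the database is created, you may want to confirm it on an existing instance. The following query is a suggested check, not part of the documented procedure.

```sql
-- Returns the database block size in bytes (8192 for 8 KB blocks).
SELECT value FROM v$parameter WHERE name = 'db_block_size';
```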
Sets the total number of processes that can connect simultaneously to the Oracle server.
Caution! This parameter is deprecated; setting it generates warnings in the alert log. Oracle recommends that you do not specify this parameter. If you do use it, set it to FALSE (the default value).
Specifies the maximum number of open cursors a session can have at once.
Specifies the number of processes that can be created for the execution of jobs.
(SPARC T3 and T4 only)
If you are using a SPARC T3 or T4, the default value of this parameter can cause severe performance degradation.
This parameter must be set to FORCE to avoid library cache lock, which occurs in scenarios where a large number of statements in the shared pool differ only in the values of their literals and database response time is poor due to a very high number of library cache misses (for example, because of hard parses and library cache latch contention).
This value should be equal to the WebLogic Server domain Java Transaction API (JTA) timeout setting plus 2. You set the JTA timeout when you create the WebLogic Server cluster. For more information, see "Preventing Connection Timeout when Using a Remote Database."
Setting this value above the JTA timeout avoids situations where a database lock timeout expires before the WebLogic Server JTA or JDBC XA transaction timeout, which can result in ORA-02049 errors ("timeout: distributed transaction waiting for lock").
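For example, assuming the WebLogic default JTA timeout of 30 seconds (confirm your domain's actual setting), the parameter would be set to 32:

```sql
-- 30 (assumed JTA timeout) + 2 = 32. DISTRIBUTED_LOCK_TIMEOUT is a
-- static parameter, so the change requires an instance restart.
ALTER SYSTEM SET DISTRIBUTED_LOCK_TIMEOUT=32 SCOPE=SPFILE;
```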
Oracle recommends that you do not use the Shared Server configuration (called Multi-Threaded Server, or MTS in previous versions of Oracle Database) for production systems running OSM, for performance reasons. OSM implements its own connection multiplexing.
The database server running OSM must not use Daylight Saving Time (DST); otherwise, date and schedule calculations will be incorrect during Daylight Saving Time. You can avoid this problem in the following ways:
Set the time zone of the database to UTC (Coordinated Universal Time, formerly Greenwich Mean Time)
Set the time zone of the database as an offset to UTC, in the format +/-hh:mm.
Ensure that the operating system time zone setting of the user starting the database processes is set using the considerations above.
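For example, the database time zone can be set to UTC as follows. This is a sketch; note that the statement fails if any TIMESTAMP WITH LOCAL TIME ZONE column already contains data.

```sql
-- Set the database time zone to UTC.
-- Verify afterward with: SELECT DBTIMEZONE FROM dual;
ALTER DATABASE SET TIME_ZONE = '+00:00';
```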
When a machine on which OSM is running shuts down abruptly, OSM orders may become stuck because of inactive database sessions that may not be cleaned up for an extended period of time. While these sessions persist, the locks associated with them are not released.
To avoid this problem, do the following:
Configure your database machine so that these inactive database sessions are automatically killed after 10 minutes. Do the following:
Add enable=broken in tnsnames.ora.
Add SQLNET.EXPIRE_TIME=10 in sqlnet.ora.
On a Linux system, as root, configure TCP keepalive as follows:
sysctl -w net.ipv4.tcp_keepalive_time=600
sysctl -w net.ipv4.tcp_keepalive_intvl=60
Note: This change can be made permanent by adding the following lines to /etc/sysctl.conf:
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
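The Oracle Net changes in the first step might look like the following fragments; the OSMDB alias, host, and service names are hypothetical placeholders.

```
# sqlnet.ora (on the database server): clean up dead sessions after 10 minutes
SQLNET.EXPIRE_TIME=10

# tnsnames.ora: enable=broken belongs inside the DESCRIPTION of the alias
OSMDB =
  (DESCRIPTION =
    (enable=broken)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = osm.example.com))
  )
```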
This section contains information and settings for use when creating any instance of the Oracle database to be used by OSM, including production and development systems.
If you are creating the database from the Database Configuration Assistant, Oracle recommends that you use the "Custom Database" template.
The OSM installer creates the following schemas:
The core schema, which contains order cartridge metadata, order data, configuration, and other data.
The rule engine schema, which contains logic for rule processing.
The reporting schema, which is used for reporting.
The sizing of production systems is multi-dimensional and dependent on many variables that vary greatly from customer to customer, such as daily transaction volume and amount of historical data to be maintained.
For help determining the sizing of your production system, contact Oracle Support.
OSM allows you to add additional partitions to store order data as needed. This means that as orders are entered into the system and the available storage is used, additional partitions on new tablespaces can be added to your environment. The space needed for these additional partitions does not need to be calculated at installation time. Orders can also be purged from the system based on the partition they are in.
For the initial sizing details about the overall Oracle Database disk space requirements, see "Planning the Physical Architecture." For more information about sizing the core schema, see the discussion on managing the OSM database schema in the OSM System Administrator's Guide.
The OSM installer prompts you to connect to a database using a user account with the sysdba privilege. You then select the following permanent database tablespaces:
The default tablespace for all OSM schemas.
Model Data and Model Indexes tablespaces: Used mainly for cartridge metadata and configuration data.
Order Data and Order Indexes tablespaces: Used for order data and auxiliary order-related tables.
For production instances, create a minimum of two tablespaces: one permanent and one temporary. For performance reasons and to facilitate backup and recovery, you should not share the permanent tablespaces for OSM with other applications. You should put model data and indexes on different tablespaces than order data and indexes.
For a production environment, you must partition your schema when running the OSM installer. If you choose the same tablespace for order data and order indexes, the OSM installer creates local index partitions with tablespace DEFAULT, which means that local index partitions are stored in the same tablespace as table partitions.
You can also create new table partitions in different tablespaces for increased administration and availability, for example on a rotation basis. If a tablespace is damaged, the impact and restoration effort could be limited to one or just a few partitions. See the discussion in the OSM System Administrator's Guide on adding partitions online or offline for more information.
Oracle recommends the following:
Create tablespaces dedicated to OSM, so that OSM performance and availability are not affected by other applications, for example due to I/O contention or if a tablespace must be taken offline. Store the datafiles of these tablespaces on different disk drives to reduce I/O contention with other applications.
Create locally managed tablespaces with automatic segment space management by specifying EXTENT MANAGEMENT LOCAL and SEGMENT SPACE MANAGEMENT AUTO in the CREATE TABLESPACE statement. Both options are the default for permanent tablespaces because they enhance performance and manageability.
Configure automatic database extent management by using the AUTOALLOCATE clause of the CREATE TABLESPACE statement. This is the default. Do not use uniform extents in the OSM database; if you do, the OSM installation fails.
If you use smallfile tablespaces, do not create hundreds of small datafiles. These files must be checkpointed, resulting in unnecessary processing. Note that Oracle Database places a limit on the number of blocks per datafile depending on the platform. The typical limit is 2^22 - 1 (4194303) blocks, which limits the datafile size to 32 GB for 8 KB blocks.
Additional considerations if you use bigfile tablespaces:
If data is stored in bigfile tablespaces instead of traditional tablespaces, the performance of database opens, checkpoints, and DBWR processes should improve. However, increasing the datafile size might increase time to restore a corrupted file or create a new datafile. You can mitigate the risk of corruption by using multiple tablespaces for partitions, for example on a rotating basis.
Bigfile tablespaces are intended to be used with Automatic Storage Management (Oracle ASM) or other logical volume managers that support striping or RAID, and dynamically extensible logical volumes.
Avoid creating bigfile tablespaces on a system that does not support striping because of negative implications for parallel query execution and RMAN backup parallelization.
Using bigfile tablespaces on platforms that do not support large file sizes is not recommended and can limit tablespace capacity.
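The recommendations above can be sketched in a single CREATE TABLESPACE statement; the tablespace name, datafile path, and size below are illustrative only.

```sql
-- Locally managed tablespace with autoallocated extents and automatic
-- segment space management (these clauses are the defaults, shown explicitly).
CREATE TABLESPACE osm_order_data
  DATAFILE '/u02/oradata/osmdb/osm_order_data01.dbf' SIZE 10G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
```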
For more information about managing tablespaces, see Oracle Database Administration Guide.
OSM data is placed in the permanent tablespaces; the temporary tablespace is used by the Oracle database as a workspace while processing OSM commands. The OSM data can be placed in one tablespace for a minimum installation, but OSM performs better when data is distributed across multiple tablespaces. You can use up to five tablespaces when initially installing the system.
It is possible to spread the OSM database over more than five tablespaces by altering the database installation and upgrade scripts. This must only be completed by an experienced Oracle DBA. For more information, contact Oracle.
In a high-throughput system, each tablespace should be created on a different physical disk. This limits disk contention and I/O bottlenecks, improving performance. Oracle recommends placing the Oracle redo log files on a separate, dedicated physical disk with no other load.
In a production system, a RAID device should be used for physical storage. In this case, there is no advantage to placing tablespaces on different physical RAID drives as long as space is available.
You can create tablespaces in either a traditional database instance or in a pluggable database instance.
The following is a sample bigfile tablespace creation script for a small installation model on an ASM disk group called +DATA.
create bigfile tablespace model_data datafile '+DATA' size 100M;
create bigfile tablespace model_index datafile '+DATA' size 100M;
create bigfile tablespace order_data datafile '+DATA' size 200G;
create bigfile tablespace order_index datafile '+DATA' size 200G;
Note: If you are using Chinese UTF8 characters, the block size for the tablespaces used by the OSM database must be configured as 8 KB at database instance creation.