|Oracle® Communications Data Model Implementation and Operations Guide
This chapter provides general information about customizing the physical model of Oracle Communications Data Model, and more detailed information about customizing the foundation layer of the physical model.
See also: Chapter 3, "Access Layer Customization"
The default physical model of the Oracle Communications Data Model shares characteristics of a multischema "traditional" data warehouse, as described in "Layers in a "Traditional" Data Warehouse", but defines all data structures in a single schema as described in "Layers in the Default Oracle Communications Data Model Warehouse".
Staging layer. This layer is used when moving data from the operational systems and other data sources into the data warehouse itself. It consists of temporary loading structures and rejected data. Having a staging layer enables the speedy extraction, transformation, and loading (ETL) of data from your operational systems into the data warehouse without disturbing the business users. It is in this layer that much of the complex data transformation and data quality processing occurs. The most basic approach for the design of the staging layer is a schema identical to the one that exists in the source operational system.
Note: In some implementations this layer is not necessary, because all data transformation processing is done as needed, as data is extracted from the source system, before it is inserted directly into the foundation layer.
Foundation or integration layer. This layer is traditionally implemented as a Third Normal Form (3NF) schema. A 3NF schema is a neutral schema design, independent of any application, and typically has many tables. It preserves a detailed record of each transaction without any data redundancy and allows for rich encoding of attributes and all relationships between data elements. Users typically require a solid understanding of the data to navigate the more elaborate structure reliably. In this layer the data begins to take shape, and it is not uncommon for some end-user applications to access data from this layer, especially if the data is time sensitive, because data becomes available here before it is transformed into the access and performance layer.
The Oracle Communications Data Model warehouse environment also consists of three layers. However, as indicated by the dotted line in Figure 2-1, "Layers of an Oracle Communications Data Model Warehouse", in the Oracle Communications Data Model the definitions of the foundation and access layers are combined in a single schema.
The layers in the Oracle Communications Data Model warehouse are:
Staging layer. As in a "traditional" data warehouse environment, an Oracle Communications Data Model warehouse environment can have a staging layer. Because the definition of this layer varies by customer, a definition of this area is not provided as part of Oracle Communications Data Model.
Note:If you are using an application adapter for Oracle Communications Data Model to populate the foundation layer of Oracle Communications Data Model, then that adapter defines and populates an Oracle Communications Data Model staging layer.
Foundation and Access layers. The physical objects for these layers are defined in a single schema, the ocdm_sys schema.

Foundation layer. The foundation layer of the Oracle Communications Data Model is defined by base tables that present the data in 3NF (that is, tables that have the DWB_ prefix). This layer also includes reference, lookup, and control tables defined in the ocdm_sys schema (that is, tables identified by the prefixes shown in Table 2-1).

Access layer. The access layer of Oracle Communications Data Model is defined by derived and aggregate tables (defined with the DWD_ and DWA_ prefixes), cubes (defined with a CB$ prefix), views (defined with the DWV_ prefix), and cube views (defined with a _VIEW suffix). These structures provide summarized or "flattened" perspectives of the data in the foundation layer.

This layer also contains the results of the data mining models, which are stored in derived (DWD_) tables. The access layer also includes the tables with prefixes as shown in Table 2-1.

See: Oracle Communications Data Model Reference for detailed information on these objects.
The starting point for the Oracle Communications Data Model physical data model is the 3NF logical data model. The physical data model mirrors the logical model as much as possible (although some changes in the structure of the tables or columns may be necessary), and defines database objects (such as tables, cubes, and views).
To customize the default physical model of the Oracle Communications Data Model, take the following steps:
Answer the questions outlined in "Questions to Answer Before You Customize the Physical Model".
Familiarize yourself with the characteristics of the logical and physical model of Oracle Communications Data Model as outlined in "Characteristics of the Default Physical Model" and presented in detail in Oracle Communications Data Model Reference.
Modify the foundation level of your physical model of Oracle Communications Data Model, as needed. See "Common Change Scenarios" for a discussion of when customization might be necessary.
When defining physical structures:
Keep the foundation layer in 3NF form.
Use the information presented in "General Recommendations When Designing Physical Structures" to guide you when designing the physical objects.
Follow the conventions used when creating the default physical model of Oracle Communications Data Model as outlined in "Conventions When Customizing the Physical Model".
Tip: Package the changes you make to the physical data model as a patch to the ocdm_sys schema.
Modify the access layer of your physical model of Oracle Communications Data Model as discussed in Chapter 3, "Access Layer Customization".
Modify existing Intra-ETL packages, or create new ones, to feed the changes you make in the access layer, as discussed in Chapter 4, "ETL Implementation and Customization".
When designing the physical model remember that the logical data model is not one-to-one with the physical data model. Consider the load, query, and maintenance requirements when you convert the logical data model into the physical layer. For example, answer the following questions before you design the physical data model:
Do you need the physical data model to cover the full scope of the logical data model, or only part of the scope?
"Common Change Scenarios" provides an overview discussion of making physical data model changes when your business needs do not result in a logical model that is the same as the Oracle Communications Data Model logical model.
What is the result of the source data profile?
What is the data load frequency for each table?
How many large tables are there, and which tables are they?
How will the tables and columns be accessed? What are the common joins?
What is your data backup strategy?
When naming the physical objects follow the naming guidelines for naming objects within an Oracle Database schema. For example:
Table and column names must start with a letter, can be no longer than 30 characters, cannot contain spaces or certain special characters such as "!", and cannot be reserved words.
Table names must be unique within a schema; this namespace is shared with views and synonyms.
Column names must be unique within a table.
Although it is common to use abbreviations in the physical modeling stage, as much as possible, use names for the physical objects that correspond to the names of the entities in the logical model. Use consistent abbreviations to avoid programmer and user confusion.
When naming columns, use short names if possible. Short column names reduce the time required for SQL command parsing.
The ocdm_sys schema delivered with Oracle Communications Data Model uses the prefixes and suffixes shown in Table 2-1 to identify object types.
|Prefix or Suffix|Used for Name of These Objects|
|_VIEW|A relational view of an OLAP cube, dimension, or hierarchy.|
|CB$|Customized OLAP cubes.|
| |Created when OLAP cubes are built. Used to store logs and results.|
| |Created when the mining models are trained. Used to store trained model and logs.|
| |Materialized view created for performance reasons (that is, not an aggregate table or an OLAP cube).|
| |Created when the mining models are trained. Used to store trained model and logs.|
|DWA_|Aggregate tables, which are materialized views.|
|DWB_|Base transaction data (3NF) tables.|
|DWD_|Derived tables, including data mining result tables.|
| |Reference data tables.|
|DWV_|Relational view of time dimension.|
See:Oracle Communications Data Model Reference for detailed information about the objects in the default Oracle Communications Data Model.
A domain is the set of values allowed for a column. The domain can be enforced by a foreign key, check constraints, or the application on top of the database. Define the standards for each domain across the model, such as:
Date and time types and formats, such as 'YYYY-MM-DD'. For example, be aware that most date columns (abbreviation DT) in Oracle Communications Data Model may contain the time, such as EVT_STRT_DT, so there is no separate column for the time component.
Numeric values in different situations. For example, which columns are of type NUMBER(16,0), as opposed to the types used for monetary-like columns.
Character string lengths in different situations. For example, all Code columns are VARCHAR2(120), and Name (NAME) and Description (DSCR) columns are respectively 500 and 1000 characters long (with some exceptions); indicator columns have a standard length of their own.
Coded value definitions, such as key or description. For example, the data type used for all "Key" columns.
The first step in customizing the physical model of Oracle Communications Data Model is customizing the foundation layer of the physical data model. Since, as mentioned in "Layers in the Default Oracle Communications Data Model Warehouse", the foundation layer of the physical model mirrors the 3NF logical model of Oracle Communications Data Model, you might choose to customize the foundation layer to reflect differences between your logical model needs and the default logical model of Oracle Communications Data Model. Additionally, you might need to customize the physical objects in the foundation layer to improve performance (for example, you might choose to compress some foundation layer tables).
When making changes to the foundation layer, keep the following points in mind:
When changing the foundation layer objects to reflect your logical model design, make as few changes as possible. "Common Change Scenarios" outlines the most common customization changes you will make in this regard.
When defining new foundation layer objects or when redesigning existing foundation layer objects for improved performance, follow the "General Recommendations When Designing Physical Structures" and "Conventions When Customizing the Physical Model".
Remember that changes to the foundation layer objects can also impact the access layer objects.
Note: Approach any attempt to change the Oracle Communications Data Model with caution. The foundation layer of the physical model of the Oracle Communications Data Model has (at its core) a set of generic structures that allow it to be flexible and extensible. You may temporarily disable foreign keys and constraints if required, but do not forget to re-enable them when the entities are back in use. Before making extensive additions, deletions, or changes, ensure that you understand the full range of capabilities of Oracle Communications Data Model and that you cannot handle your requirements using the default objects in the foundation layer.
Before making changes, review Appendix C, "Product Assumptions" in Oracle Communications Data Model Reference. For more information, see Oracle Communications Data Model Reference.
Additions to Existing Structures
If you identify business areas or processes that are not supported in the default foundation layer of the physical data model of Oracle Communications Data Model, add new tables and columns.
Carefully study the default foundation layer of the physical data model of Oracle Communications Data Model (and the underlying logical data model) to avoid building redundant structures when making additions. If these additions add high value to your business, communicate them back to the Oracle Communications Data Model Development Team for possible inclusion in future releases of Oracle Communications Data Model.
Deletions of Existing Structures
If there are areas of the model that cannot be matched to any of the business requirements of your legacy systems, it is safer to keep these structures and not populate that part of the warehouse.
Deleting a table in the foundation layer of the physical data model can destroy relationships needed in other parts of the model or by applications based on it. Some tables may not be needed during the initial implementation, but you may want to use these structures at a later time. If this is a possibility, keeping the structures now saves rework later. If tables are deleted, perform a thorough analysis to identify all relationships originating from that entity.
Changes to Existing Structures
In some situations some structures in the foundation layer of the physical data model of the Oracle Communications Data Model may not exactly match the corresponding structures that you use.
Before implementing changes, identify the impact that the changes would have on the database design of Oracle Communications Data Model. Also identify the impact on any applications based on the new design.
The ocdm_sys schema delivered with Oracle Communications Data Model was designed and defined following best practices for data access and performance. Continue to use these practices when you add new physical objects. This section provides information about how decisions about the following physical design aspects were made for the default Oracle Communications Data Model:
If possible, define tablespaces so that they represent logical business units.
Use ultra-large data files for a significant improvement in very large Oracle Communications Data Model warehouses.
For tables that do not have partitions (that is, lookup tables and reference tables), you can change the existing tablespace for a table.
By default, Oracle Communications Data Model defines the partitioned tables using interval partitioning, which means that partitions are created only when new data arrives.
Consequently, for Oracle Communications Data Model tables that have partitions (that is, base, derived, and aggregate tables), for new interval partitions to be generated in new tablespaces rather than the current ones, issue the following statement.
ALTER TABLE table_name MODIFY DEFAULT ATTRIBUTES TABLESPACE new_tablespace_name;
When new data is inserted into the table specified by table_name, a new partition is automatically created in the tablespace specified by new_tablespace_name.
For tables that have partitions (that is, base, derived, and aggregate tables), you can specify that new interval partitions be generated into new tablespaces.
For Oracle Communications Data Model tables that do not have partitions (that is, lookup tables and reference tables), to change the existing tablespace for a table, issue the following statement.
ALTER TABLE table_name MOVE TABLESPACE new_tablespace_name;
A key decision that you must make is whether to compress your data. Using table compression reduces disk and memory usage, often resulting in better scale-up performance for read-only operations. Table compression can also speed up query execution by minimizing the number of round trips required to retrieve data from the disks. Compressing data, however, imposes a performance penalty on the load speed of the data.
In general, choose to compress the data. The overall performance gain typically outweighs the cost of compression.
If you decide to use compression, consider sorting your data before loading it to achieve the best possible compression rate. The easiest way to sort incoming data is to load it using an ORDER BY clause on either a CTAS ("Create Table As Select") or IAS ("Insert As Select") statement. Specify ORDER BY on a NOT NULL column (ideally non-numeric) that has many distinct values (1,000 to 10,000).
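As a sketch of this loading pattern, the following CTAS sorts the incoming rows while creating a compressed table. The table and column names (STG_CALL_EVT, CALL_TYP_CD, and so on) are illustrative only and are not part of Oracle Communications Data Model.

```sql
-- Create a compressed table from staged data, sorting on a NOT NULL,
-- non-numeric column with many distinct values to maximize the
-- compression ratio. All names here are hypothetical.
CREATE TABLE DWB_CALL_EVT_SORTED
COMPRESS
AS
SELECT *
FROM   STG_CALL_EVT
ORDER  BY CALL_TYP_CD;
```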
With standard compression, Oracle Database compresses data by eliminating duplicate values in a database block. Standard compression only works for direct path operations (CTAS or IAS). If the data is modified using any kind of conventional DML operation (for example, UPDATE), the data within that database block is uncompressed to make the modifications and is written back to disk uncompressed.
By using a compression algorithm specifically designed for relational data, Oracle Database can compress data effectively and in such a way that Oracle Database incurs virtually no performance penalty for SQL queries accessing compressed tables.
Oracle Communications Data Model leverages the compress feature for all base, derived, and aggregate tables, which reduces the amount of data being stored, reduces memory usage (more data per memory block), and increases query performance.
You can specify table compression by using the COMPRESS clause of the CREATE TABLE statement, or you can enable compression for an existing table by using an ALTER TABLE statement, as shown below.
alter table <tablename> move compress;
To create a compressed table, specify the COMPRESS keyword in the CREATE TABLE statement, as in the following example.

CREATE TABLE CWB_ACCS_MTHD_PORT_HIST (
  NP_RQST_HDR_CD   VARCHAR2(30),
  ACCS_MTHD_KEY    NUMBER(30) NOT NULL ENABLE,
  EXTRNL_OPRTR_KEY NUMBER(30),
  ...
)
TABLESPACE TBS_BASE
COMPRESS;
OLTP compression is a component of the Advanced Compression option. With OLTP compression, just like standard compression, Oracle Database compresses data by eliminating duplicate values in a database block. But unlike standard compression, OLTP compression allows data to remain compressed during all types of data manipulation operations, including conventional DML such as INSERT and UPDATE.
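A sketch of enabling OLTP compression, assuming the Advanced Compression option is licensed; the table names are hypothetical, not Oracle Communications Data Model objects.

```sql
-- Rebuild an existing table with OLTP compression (hypothetical name).
ALTER TABLE DWB_ACCS_MTHD MOVE COMPRESS FOR OLTP;

-- Or declare it when creating a new table.
CREATE TABLE DWB_ACCS_MTHD_HIST (
  ACCS_MTHD_KEY NUMBER(30) NOT NULL,
  STTS_CD       VARCHAR2(30)
)
COMPRESS FOR OLTP;
```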
See:Oracle Database Administrator's Guide for more information on OLTP table compression features.
Oracle by Example:For more information on Oracle Advanced Compression, see the "Using Table Compression to Save Storage Costs" OBE tutorial.
To access the tutorial, open the Oracle Learning Library in your browser by following the instructions in "Oracle Technology Network"; and, then, search for the tutorial by name.
Hybrid Columnar Compression is available with some storage formats and achieves its compression using a logical construct called the compression unit, which is used to store a set of hybrid columnar-compressed rows. When data is loaded, a set of rows is pivoted into a columnar representation and compressed. After the column data for a set of rows has been compressed, it is fit into the compression unit. If conventional DML is issued against a table with Hybrid Columnar Compression, the necessary data is uncompressed to do the modification and then written back to disk using a block-level compression algorithm.
Tip:If your data set is frequently modified using conventional DML, then the use of Hybrid Columnar Compression is not recommended; instead, the use of OLTP compression is recommended.
Hybrid Columnar Compression provides different levels of compression, focusing on query performance or compression ratio respectively. With Hybrid Columnar Compression optimized for query, lighter compression algorithms are applied to the data to achieve good compression with little to no performance impact. Compression optimized for archive, however, tries to maximize the compression on disk, irrespective of its potential impact on query performance.
See also:The discussion on Hybrid Columnar Compression in Oracle Database Concepts.
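As an illustration (not an Oracle Communications Data Model default), the two Hybrid Columnar Compression levels might be declared as follows, assuming the table resides on storage that supports the feature; the table name is hypothetical.

```sql
-- Compression optimized for query performance (hypothetical table).
CREATE TABLE DWB_CALL_EVT_HIST (
  EVT_KEY     NUMBER(30) NOT NULL,
  EVT_STRT_DT DATE
)
COMPRESS FOR QUERY HIGH;

-- Compression optimized for storage, for rarely accessed data.
ALTER TABLE DWB_CALL_EVT_HIST MOVE COMPRESS FOR ARCHIVE LOW;
```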
Subtypes inherit all supertype attributes
Subtypes have attributes that are different from other subtypes
Create separate tables for the supertype and all subtype entities, for the following reasons:
Data integrity can be enforced at the database level (using NOT NULL column constraints).
Relationships can be accurately modeled and enforced, including those that apply to only one subtype.
Physical model closely resembles the logical data model.
It is easier to correlate the logical data model with the physical data model and support the logical data model enhancements and changes.
Physical data model reflects true business rules (for example, when some attributes or relationships are mandatory for only one subtype).
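A minimal sketch of this approach, using a hypothetical ACCOUNT supertype with a prepaid-account subtype (these names are illustrative, not Oracle Communications Data Model entities):

```sql
-- Supertype table: attributes common to all accounts.
CREATE TABLE DWB_ACCT (
  ACCT_KEY    NUMBER(16,0) NOT NULL PRIMARY KEY,
  ACCT_OPN_DT DATE         NOT NULL
);

-- Subtype table: shares the supertype key and adds a column that is
-- mandatory for this subtype only, enforced with NOT NULL.
CREATE TABLE DWB_PRPD_ACCT (
  ACCT_KEY      NUMBER(16,0) NOT NULL PRIMARY KEY
                REFERENCES DWB_ACCT (ACCT_KEY),
  RCHRG_LMT_AMT NUMBER(16,2) NOT NULL
);
```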
The surrogate key method for primary key construction involves taking the natural key components from the source systems and mapping them through a process of assigning a unique key value to each unique combination of natural key components (including source system identifier). The resulting primary key value is completely non-intelligent and is typically a numeric data type for maximum performance and storage efficiency.
The advantages of surrogate keys are:
They ensure uniqueness and support even data distribution.
They are independent of the source systems.
They use the numeric data type, which is the most performant data type for primary keys and joins.
The disadvantages of surrogate keys are:
They must be allocated during ETL.
Re-processing and data quality correction can become complex and expensive.
They are not used in queries, which can have a performance impact.
Operational business intelligence requires natural keys to join back to the operational systems.
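The allocation step during ETL can be sketched as follows, with a sequence generating the non-intelligent key and the natural key components retained for lineage (all names are hypothetical):

```sql
-- Sequence that supplies surrogate key values.
CREATE SEQUENCE CUST_KEY_SEQ;

-- Assign a surrogate key to each incoming natural key combination,
-- keeping the source system identifier and source code for traceability.
INSERT INTO DWB_CUST (CUST_KEY, SRC_SYS_ID, SRC_CUST_CD)
SELECT CUST_KEY_SEQ.NEXTVAL, s.SRC_SYS_ID, s.CUST_CD
FROM   STG_CUST s;
```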
PRIMARY KEY constraints are usually defined on the surrogate key column to ensure uniqueness of the record identifiers. In general, it is recommended that you specify the ENFORCED ENABLED RELY mode.
UNIQUE constraints ensure that a given column (or set of columns) is unique. For slowly changing dimensions, it is recommended that you add a unique constraint on the Business Key and the Effective From Date columns to allow tracking multiple versions (based on surrogate key) of the same Business Key record.
NOT NULL constraints ensure that no null values are allowed. For query rewrite scenarios, it is recommended that you have an inline explicit NOT NULL constraint on the primary key column in addition to the primary key constraint.
FOREIGN KEY constraints ensure that the relationships between tables are honored by the data. Usually in data warehousing environments, the foreign key constraint is present in RELY DISABLE NOVALIDATE mode.
The Oracle Database uses constraints when optimizing SQL queries. Although constraints can be useful in many aspects of query optimization, constraints are particularly important for query rewrite of materialized views. Under some specific circumstances, constraints need space in the database; this space takes the form of the underlying unique index.
Unlike data in many relational database environments, data in a data warehouse is typically added or modified under controlled circumstances during the extraction, transformation, and loading (ETL) process, therefore, most foreign key columns in Oracle Communications Data Model are nullable.
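The constraint modes described above might be declared as in the following sketch (table and constraint names are hypothetical):

```sql
-- Enforced primary key on the surrogate key, marked RELY so that
-- query rewrite can trust it.
ALTER TABLE DWB_CUST
  ADD CONSTRAINT CUST_PK PRIMARY KEY (CUST_KEY) RELY;

-- Foreign key declared for the optimizer but not enforced at load
-- time, as is common in data warehouses.
ALTER TABLE DWB_CALL_EVT
  ADD CONSTRAINT CALL_EVT_CUST_FK
  FOREIGN KEY (CUST_KEY) REFERENCES DWB_CUST (CUST_KEY)
  RELY DISABLE NOVALIDATE;
```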
Bitmap indexes are optimized index structures for set-oriented operations. Additionally, they are necessary for some optimized data access methods such as star transformations. Bitmap indexes are typically only a fraction of the size of the indexed data in the table.
B-tree indexes are most effective for high-cardinality data: that is, for data with many possible values, such as phone_number. However, fully indexing a large table with a traditional B-tree index can be prohibitively expensive in terms of disk space because the indexes can be several times larger than the data in the table. B-tree indexes can be stored in a compressed manner to enable huge space savings, storing more keys in each index block, which also leads to less I/O and better performance.
Make the majority of the indexes in your customized Oracle Communications Data Model bitmap indexes.
Use B-tree indexes only for unique columns or other columns with very high cardinalities (that is, columns that are almost unique). Store the B-tree indexes in a compressed manner.
Partition the indexes. Indexes are just like tables in that you can partition them, although the partitioning strategy is not dependent upon the table structure. Partitioning indexes makes it easier to manage the data warehouse during refresh and improves query performance.
Typically, specify the index on a partitioned table as local. Bitmap indexes on partitioned tables must always be local. B-tree indexes on partitioned tables can be global or local. However, in a data warehouse environment, local indexes are more common than global indexes. Use global indexes only when there is a specific requirement which cannot be met by local indexes (for example, a unique index on a non-partitioning key, or a performance requirement).
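For example, the recommendations above might translate into statements like these, against a hypothetical partitioned table (the index, table, and column names are illustrative only):

```sql
-- Local bitmap index on a low-cardinality column.
CREATE BITMAP INDEX CALL_EVT_TYP_BIX
  ON DWB_CALL_EVT (CALL_TYP_CD)
  LOCAL;

-- Local, compressed B-tree index on a high-cardinality column.
CREATE INDEX CALL_EVT_PHN_IX
  ON DWB_CALL_EVT (PHONE_NUMBER)
  COMPRESS 1
  LOCAL;
```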
Partitioning allows a table, index or index-organized table to be subdivided into smaller pieces. Each piece of the database object is called a partition. Each partition has its own name, and may optionally have its own storage characteristics. From the perspective of a database administrator, a partitioned object has multiple pieces that can be managed either collectively or individually. This gives the administrator considerable flexibility in managing partitioned objects. However, from the perspective of the application, a partitioned table is identical to a nonpartitioned table. No modifications are necessary when accessing a partitioned table using SQL DML commands.
As discussed in the following topics, partitioning can provide tremendous benefits to a wide variety of applications by improving manageability, availability, and performance:
Oracle by Example:To understand the various partitioning techniques in Oracle Database, see the "Manipulating Partitions in Oracle Database 11g" OBE tutorial.
To access the tutorial, open the Oracle Learning Library in your browser by following the instructions in "Oracle Technology Network"; and, then, search for the tutorial by name.
See also:"Indexes and Partitioned Indexes in the Oracle Communications Data Model", "Choosing a Cube Partitioning Strategy for Oracle Communications Data Model", and "Partitioning and Materialized Views".
Consider the case where two years' worth of sales data, or 100 terabytes (TB), is stored in a table. At the end of each day a new batch of data must be loaded into the table, and the oldest day's worth of data must be removed. If the Sales table is range partitioned by day, then the new data can be loaded using a partition exchange load. This is a sub-second operation that has little or no impact on end-user queries.
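A partition exchange load can be sketched as follows; the staging and target table names are hypothetical, and the FOR clause addresses the daily partition by a date value:

```sql
-- Load and prepare one day's data in a standalone table.
CREATE TABLE SALES_STAGE AS
  SELECT * FROM STG_SALES
  WHERE  SALES_DT = DATE '2012-01-31';

-- Swap it into the partitioned table: a metadata-only operation.
ALTER TABLE SALES
  EXCHANGE PARTITION FOR (DATE '2012-01-31')
  WITH TABLE SALES_STAGE
  INCLUDING INDEXES WITHOUT VALIDATION;
```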
Oracle Communications Data Model uses interval partitioning, an extension of range partitioning: you provide only the upper limit of the first partition and the partition interval, and subsequent partitions are created automatically as data arrives. The (hidden) assumption is that the data flow is more or less similar over the various intervals.
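A sketch of an interval-partitioned table (hypothetical names): only the first partition's upper limit and the interval are declared, and daily partitions then appear automatically as rows arrive.

```sql
CREATE TABLE SALES (
  SALES_DT DATE NOT NULL,
  AMT      NUMBER(16,2)
)
PARTITION BY RANGE (SALES_DT)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(
  -- Only the first partition is declared explicitly.
  PARTITION p_first VALUES LESS THAN (DATE '2012-01-01')
);
```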
Range partitioning also helps ensure that only the data necessary to answer a query is scanned. Assuming that the business users predominantly access the sales data on a weekly basis (for example, total sales per week), range partitioning this table by day ensures that the data is accessed in the most efficient manner, as only seven partitions must be scanned to answer the business users' query instead of the entire table. The ability to avoid scanning irrelevant partitions is known as partition pruning.
A major performance benefit of hash partitioning is partition-wise joins. Partition-wise joins reduce query response time by minimizing the amount of data exchanged among parallel execution servers when joins execute in parallel. This significantly reduces response time and improves both CPU and memory resource usage. In a clustered data warehouse, this significantly reduces response times by limiting the data traffic over the interconnect (IPC), which is the key to achieving good scalability for massive join operations. Partition-wise joins can be full or partial, depending on the partitioning scheme of the tables to be joined.
Figure 2-2 shows how a full partition-wise join divides a join between two large tables into multiple smaller joins. Each smaller join performs a join on a pair of partitions, one from each of the tables being joined. For the optimizer to choose the full partition-wise join method, both tables must be equipartitioned on their join keys; that is, they must be partitioned on the same column with the same partitioning method. Parallel execution of a full partition-wise join is similar to its serial execution, except that instead of joining one partition pair at a time, multiple partition pairs are joined in parallel by multiple parallel query servers. The number of partitions joined in parallel is determined by the Degree of Parallelism (DOP).
To ensure that the data is evenly distributed among the hash partitions, it is highly recommended that the number of hash partitions be a power of 2 (for example, 2, 4, 8, and so on). A good rule of thumb to follow when deciding the number of hash partitions a table should have is 2 × the number of CPUs, rounded up to the nearest power of 2.
If your system has 12 CPUs, then 32 would be a good number of hash partitions. On a clustered system the same rules apply: if you have 3 nodes each with 4 CPUs, then 32 would still be a good number of hash partitions. However, ensure that each hash partition is at least 16 MB; many small partitions do not have efficient scan rates with parallel query. Consequently, if using the number of CPUs makes the hash partitions too small, use the number of Oracle RAC nodes in the environment (rounded up to the nearest power of 2) instead.
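Combining this advice with the range partitioning discussed earlier, a range-hash composite table might be declared as follows (hypothetical names; the 32 subpartitions assume roughly 16 CPUs):

```sql
CREATE TABLE SALES_BY_CUST (
  SALES_DT DATE   NOT NULL,
  CUST_ID  NUMBER NOT NULL,
  AMT      NUMBER(16,2)
)
PARTITION BY RANGE (SALES_DT)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
-- Hash on the join key, with a power-of-2 subpartition count,
-- to enable partition-wise joins.
SUBPARTITION BY HASH (CUST_ID) SUBPARTITIONS 32
(PARTITION p_first VALUES LESS THAN (DATE '2012-01-01'));
```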
Parallel Execution enables a database task to be parallelized or divided into smaller units of work, thus allowing multiple processes to work concurrently. By using parallelism, a terabyte of data can be scanned and processed in minutes or less, not hours or days.
Figure 2-3 illustrates the parallel execution of a full partition-wise join between two tables, Sales and Customers. Both tables have the same degree of parallelism and the same number of partitions. They are range partitioned on a date field and subpartitioned by hash on the cust_id field. As illustrated in the figure, each partition pair is read from the database and joined directly.
No data redistribution is necessary, thus minimizing IPC communication, especially across nodes. The figure below shows the execution plan you would see for this join.
To ensure that you get optimal performance when executing a partition-wise join in parallel, specify a number of partitions in each of the tables that is larger than the degree of parallelism used for the join. If there are more partitions than parallel servers, each parallel server is given one pair of partitions to join; when the parallel server completes that join, it requests another pair of partitions to join. This process repeats until all pairs have been processed. This method enables the load to be balanced dynamically (for example, 128 partitions with a degree of parallelism of 32).
What happens if only one table that you are joining is partitioned? In this case the optimizer could pick a partial partition-wise join. Unlike full partition-wise joins, partial partition-wise joins can be applied if only one table is partitioned on the join key. Hence, partial partition-wise joins are more common than full partition-wise joins. To execute a partial partition-wise join, Oracle Database dynamically repartitions the other table based on the partitioning strategy of the partitioned table.
After the other table is repartitioned, the execution is similar to a full partition-wise join. The redistribution operation involves exchanging rows between parallel execution servers. This operation leads to interconnect traffic in Oracle RAC environments, since data must be repartitioned across node boundaries.
Figure 2-4 illustrates a partial partition-wise join. It uses the same example as in Figure 2-3, except that the Customers table is not partitioned. Before the join operation is executed, the rows from the Customers table are dynamically redistributed on the join key.
Parallel query is the most commonly used parallel execution feature in Oracle Database. Parallel execution can significantly reduce the elapsed time for large queries. To enable parallelization for an entire session, execute the following statement.
alter session enable parallel query;
Note: It is usually suggested to enable parallelism at the session level rather than at the system level.
Data Manipulation Language (DML) operations such as INSERT, UPDATE, and DELETE can be parallelized by Oracle Database. Parallel execution can speed up large DML operations and is particularly advantageous in data warehousing environments. To enable parallelization of DML statements, execute the following statement.
alter session enable parallel dml;
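A typical parallel DML flow is sketched below, with a hypothetical table; note that the session must commit before the modified table can be queried again.

```sql
ALTER SESSION ENABLE PARALLEL DML;

-- Purge old data in parallel across multiple partitions
-- (DWB_CALL_EVT is a hypothetical partitioned table).
DELETE /*+ PARALLEL(e, 8) */ FROM DWB_CALL_EVT e
WHERE  e.EVT_STRT_DT < DATE '2010-01-01';

COMMIT;
```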
When you issue a DML statement such as an INSERT, UPDATE, or DELETE, Oracle Database applies a set of rules to determine whether that statement can be parallelized. The rules vary depending on whether the statement is a DML INSERT statement or a DML UPDATE or DELETE statement.
The following rules apply when determining how to parallelize DML UPDATE and DELETE statements:
Oracle Database can parallelize UPDATE and DELETE statements on partitioned tables, but only when multiple partitions are involved.
You cannot parallelize UPDATE or DELETE operations on a nonpartitioned table, or when such operations affect only a single partition.
The following rules apply when determining how to parallelize DML INSERT statements:
INSERT statements using a VALUES clause cannot be parallelized.
Oracle Database can parallelize only INSERT . . . SELECT . . . FROM statements.
The setting of parallelism for a table influences the optimizer. Consequently, when using parallel query, also enable parallelism at the table level by issuing the following statement.
alter table <table_name> parallel 32;