3.4 Store Oracle Tablespaces in HDFS

You can store Oracle read-only tablespaces on HDFS and use Big Data SQL Smart Scan to off-load query processing of data stored in that tablespace to the Hadoop cluster. Big Data SQL Smart Scan performs data-local processing, filtering query results on the Hadoop cluster before the data is returned to Oracle Database. In most circumstances, this is a significant performance optimization. In addition to Smart Scan, querying tablespaces in HDFS also leverages native Oracle Database access structures and performance features, including indexes, Hybrid Columnar Compression, Partition Pruning, and Oracle Database In-Memory.

Tables, partitions, and data in tablespaces in HDFS retain their original Oracle Database internal format. This is not a data dump. Unlike other means of accessing data in Hadoop (or other NoSQL systems), you do not need to create an Oracle external table. After copying the corresponding Oracle tablespaces to HDFS, you refer to the original Oracle table to access the data.
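For example, if the tablespace containing partitions of a table named movie_fact has been moved to HDFS (the table name is taken from the worked example later in this section), an ordinary query against the original table continues to work unchanged:

SQL> SELECT avg(rating) FROM movie_fact;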

Permanent online, read-only, and offline tablespaces (including ASM tablespaces) are eligible for the move to HDFS.

Note:

Because tablespaces stored in HDFS cannot be altered, offline tablespaces must remain offline. For offline tablespaces, then, what this feature provides is a hard backup into HDFS.
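To identify candidate tablespaces, you can query the data dictionary and then apply the eligibility rules above to the results. A minimal sketch:

SQL> SELECT tablespace_name, status, contents FROM dba_tablespaces WHERE contents = 'PERMANENT';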

If you want to use Oracle SQL Developer to perform the operations in this section, confirm that you can access the Oracle Database server from your on-premises location. This typically requires a VPN connection.

3.4.1 Advantages and Limitations of Tablespaces in HDFS

The following are some reasons to store Oracle Database tablespaces in HDFS.

  • Because the data remains in Oracle Database internal format, I/O requires no resource-intensive datatype conversions.

  • All Oracle Database performance optimizations such as indexing, Hybrid Columnar Compression, Partition Pruning, and Oracle Database In-Memory can be applied.

  • Oracle user-based security is maintained. Other Oracle Database security features such as Oracle Data Redaction and ASO transparent encryption remain in force if enabled. In HDFS, tablespaces can be stored in zones under HDFS Transparent Encryption.

  • Query processing can be off-loaded. Oracle Big Data SQL Smart Scan is applied to Oracle Database tablespaces in HDFS. Typically, Smart Scan can provide a significant performance boost for queries. With Smart Scan, much of the query processing workload is off-loaded to the Oracle Big Data SQL server cells on the Hadoop cluster where the tablespaces reside. Smart Scan then performs predicate filtering in-place on the Hadoop nodes to eliminate irrelevant data so that only data that meets the query conditions is returned to the database tier for processing. Data movement and network traffic are reduced to the degree that Smart Scan predicate filtering can distill the dataset before returning it to the database.

  • For each table in the tablespace, there is only a single object to manage: the Oracle-internal table itself. To be accessible to Oracle Database, data stored in other file formats typically used in HDFS requires an overlay of an external table and a view.

  • As is always the case with Oracle internal partitioning, partitioned tables and indexes can have partitions in different tablespaces, some of which may reside in Exadata, ZFSSA, or other storage devices. This feature adds HDFS as another storage option.

There are some constraints on using Oracle tablespaces in HDFS. As is the case with all data stored in HDFS, Oracle Database tables, partitions, and data stored in HDFS are immutable. Updates are done by deleting and replacing the data. This form of storage is best suited to off-loading tables and partitions for archival purposes. Also, with the exception of OD4H, data in Oracle tablespaces in HDFS is not accessible to other tools in the Hadoop environment, such as Spark, Oracle Big Data Discovery, and Oracle Big Data Spatial and Graph.

3.4.2 About Tablespaces in HDFS and Data Encryption

Oracle Database tablespaces in HDFS can work with ASO (Oracle Advanced Security) transparent data encryption as well as with HDFS Transparent Encryption.

Tablespaces With Oracle Database ASO Encryption

In Oracle Database, ASO transparent encryption may be enabled for a tablespace or objects within the tablespace. This encryption is retained if the tablespace is subsequently moved to HDFS. For queries against this data, the CELL_OFFLOAD_DECRYPTION setting determines whether Oracle Big Data SQL or Oracle Database decrypts the data.

  • If CELL_OFFLOAD_DECRYPTION = TRUE, then the encryption keys are sent to the Oracle Big Data SQL server cells in Hadoop and the data is decrypted at the cells.

  • If CELL_OFFLOAD_DECRYPTION = FALSE, encryption keys are not sent to the cells and therefore the cells cannot perform TDE decryption. The data is returned to Oracle Database for decryption.

The default value is TRUE.
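You can change the setting with ALTER SYSTEM. For example, to keep the encryption keys on the database side, you could turn the setting off, as in this minimal sketch:

SQL> ALTER SYSTEM SET CELL_OFFLOAD_DECRYPTION = FALSE;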

Note:

In cases where CELL_OFFLOAD_DECRYPTION is set to FALSE, Smart Scan cannot read the encrypted data and is unable to provide the performance boost that results from the Hadoop-side filtering of the query result set. TDE Column Encryption prevents Smart Scan processing of the encrypted columns only. TDE Tablespace Encryption prevents Smart Scan processing of the entire tablespace.

Tablespaces in HDFS Transparent Encryption Zones

You can move Oracle Database tablespaces into zones under HDFS Transparent Encryption with no impact on query access or on the ability of Smart Scan to filter data.
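Creating the encryption zone is a Hadoop-side administrative task performed before the datafiles are copied in. The following is a minimal sketch; the key name tbskey and the (empty) target directory are assumptions for illustration:

$ hdfs crypto -createZone -keyName tbskey -path /user/oracle/tablespaces
$ hdfs crypto -listZones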

3.4.3 Moving Tablespaces to HDFS

Oracle Big Data SQL provides two options for moving tablespaces from Oracle Database to the HDFS file system in Hadoop.

  • Using bds-copy-tbs-to-hdfs

    The script bds-copy-tbs-to-hdfs.sh lets you select a preexisting tablespace in Oracle Database. The script automates the move of the selected tablespace to HDFS and performs the necessary SQL ALTER operations and datafile permission changes for you. The DataNode where the tablespace is relocated is predetermined by the script. The script uses FUSE-DFS to move the datafiles from Oracle Database to the HDFS file system in the Hadoop cluster.

    You can find bds-copy-tbs-to-hdfs.sh in the cluster installation directory, $ORACLE_HOME/BDSJaguar-3.2.0/<string identifier for the cluster>.

  • Manually Moving Tablespaces to HDFS

    As an alternative to bds-copy-tbs-to-hdfs.sh, you can manually perform the steps to move the tablespaces to HDFS. You can either move an existing tablespace or create a new tablespace and selectively add the tables and partitions that you want to off-load. In this case, you can set up either FUSE-DFS or an HDFS NFS gateway service to move the datafiles to HDFS.

The scripted method is more convenient. The manual method is somewhat more flexible. Both are supported.

Before You Start:

As cited in the Prerequisites section of the installation guide, both methods require that the following RPMs are pre-installed:
  • fuse

  • fuse-libs

# yum -y install fuse fuse-libs

These RPMs are available in the Oracle public yum repository.
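You can verify that both packages are present before you start:

# rpm -q fuse fuse-libs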

3.4.3.1 Using bds-copy-tbs-to-hdfs

On the Oracle Database server, you can use the script bds-copy-tbs-to-hdfs.sh to select and move Oracle tablespaces to HDFS. This script is in the bds-database-install directory that you extracted from the database installation bundle when you installed Oracle Big Data SQL.

Syntax

bds-copy-tbs-to-hdfs.sh syntax is as follows:

bds-copy-tbs-to-hdfs.sh
bds-copy-tbs-to-hdfs.sh --install
bds-copy-tbs-to-hdfs.sh --uninstall
bds-copy-tbs-to-hdfs.sh --force-uninstall-script
bds-copy-tbs-to-hdfs.sh --tablespace=<tablespace name> [--pdb=<pluggable database name>]
bds-copy-tbs-to-hdfs.sh --list=<tablespace name> [--pdb=<pluggable database name>]
bds-copy-tbs-to-hdfs.sh --show=<tablespace name> [--pdb=<pluggable database name>]

Additional command line parameters are described in the table below.

Table 3-3 bds-copy-tbs-to-hdfs.sh Parameter Options

Parameter List Description
No parameters Returns the FUSE-DFS status.
--install Installs the FUSE-DFS service. No action is taken if the service is already installed.
--uninstall Uninstalls the FUSE-DFS service and removes the mountpoint.
--grid-home Specifies the Oracle Grid home directory.
--base-mountpoint By default, the mountpoint is under /mnt. However, on some systems access to this directory is restricted. This parameter lets you specify an alternate location.
--aux-run-mode Because Oracle Big Data SQL is installed on the database side as a regular user (not a superuser), tasks that must be done as root and/or the Grid user require the installer to spawn shells to run other scripts under those accounts while bds-copy-tbs-to-hdfs.sh is paused. The --aux-run-mode parameter specifies a mode for running these auxiliary scripts.

--aux-run-mode=<mode>

Mode options are:

  • session — through a spawned session.

  • su — as a substitute user.

  • sudo — through sudo.

  • ssh — through secure shell.

--force-uninstall-script This option creates a secondary script that runs as root and forces the FUSE-DFS uninstall.

Caution:

Limit use of this option to system recovery, an attempt to end a system hang, or other situations that may require removal of the FUSE-DFS service. Forcing the uninstall could potentially leave the database in an unstable state. The customer assumes responsibility for this choice. Warning messages are displayed to remind you of the risk if you use this option.
--tablespace=<tablespace name> [--pdb=<pluggable database name>] Moves the named tablespace in the named PDB to storage in HDFS on the Hadoop cluster. If there are no PDBs, then the --pdb argument is discarded.
--list=<tablespace name> [--pdb=<pluggable database name>] Lists tablespaces whose name equals or includes the name provided. The --pdb parameter is an optional scope. --list=* returns all tablespaces. --pdb=* returns matches for the tablespace name within all PDBs.
--show=<tablespace name> [--pdb=<pluggable database name>] Shows tablespaces whose name equals or includes the name provided and that have already been moved to HDFS. The --pdb parameter is an optional scope. --show=* returns all tablespaces. --pdb=* returns matches for the tablespace name within all PDBs.

Usage

Use bds-copy-tbs-to-hdfs.sh to move a tablespace to HDFS as follows.

  1. Log on as the oracle Linux user and cd to the bds-database-install directory where the database bundle was extracted. Find bds-copy-tbs-to-hdfs.sh in this directory.

  2. Check that FUSE-DFS is installed.

    $ ./bds-copy-tbs-to-hdfs.sh
  3. Install the FUSE-DFS service (if it was not found in the previous check). This command also starts the FUSE-DFS service.

    $ ./bds-copy-tbs-to-hdfs.sh --install

    If this script does not find the mount point, it launches a secondary script. Run this script as root when prompted. It will set up the HDFS mount. You can run the secondary script in a separate session and then return to this session if you prefer.

    For RAC Databases: Install FUSE-DFS on All Nodes:

    On a RAC database, the script prompts you to install FUSE-DFS on the other nodes of the database.

  4. List the eligible tablespaces in a selected PDB or all PDBs. You can skip this step if you already know the tablespace name and location.

    $ ./bds-copy-tbs-to-hdfs.sh --list=mytablespace --pdb=pdb1
  5. Select a tablespace from the list and then, as oracle, run bds-copy-tbs-to-hdfs.sh again, but this time pass in the --tablespace parameter (and the --pdb parameter if applicable). The script moves the tablespace to the HDFS file system.

    $ ./bds-copy-tbs-to-hdfs.sh --tablespace=mytablespace --pdb=pdb1

    This command automatically makes the tablespace eligible for Smart Scan in HDFS. It does this in SQL by adding the “hdfs:” prefix to the datafile name in the tablespace definition. The rename changes the pointer in the database control file. It does not change the physical file name.
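    You can confirm the rename by checking the datafile path now recorded in the control file. A minimal sketch; the tablespace name is illustrative:

    SQL> SELECT file_name FROM dba_data_files WHERE tablespace_name = 'MYTABLESPACE';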

Tip:

If the datafiles are stored in ASM, the extraction is made using RMAN. At this time, RMAN does not support a direct copy from ASM into HDFS, so a direct copy attempt results in an error.

As a workaround, you can use the --staging-dir parameter, which enables you to do a two-stage copy: first to a file system directory and then into HDFS. The file system directory specified by --staging-dir must have sufficient space for the ASM datafile.
$ ./bds-copy-tbs-to-hdfs.sh --tablespace=mytablespace --pdb=pdb1 --staging-dir=/home/user
For non-ASM datafiles, --staging-dir is ignored.

The tablespace should be back online and ready for access when you have completed this procedure.
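As a quick check, the moved tablespace should now report a READ ONLY status (the tablespace name here is illustrative):

SQL> SELECT tablespace_name, status FROM dba_tablespaces WHERE tablespace_name = 'MYTABLESPACE';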

3.4.3.2 Manually Moving Tablespaces to HDFS

As an alternative to bds-copy-tbs-to-hdfs.sh, you can use the following manual steps to move Oracle tablespaces to HDFS.

Note:

In the case of an ASM tablespace, you must first use RMAN or ASMCMD to copy the tablespace to the filesystem.

Oracle Big Data SQL includes FUSE-DFS and these instructions use it to connect to the HDFS file system. You could use an HDFS NFS gateway service instead. The documentation for your Hadoop distribution should provide the instructions for that method.

Perform all of these steps on the Oracle Database server. Run all Linux shell commands as root. For SQL commands, log on to the Oracle Database as the oracle user.

  1. If FUSE-DFS is not installed or is not started, run bds-copy-tbs-to-hdfs.sh --install . This script will install FUSE-DFS (if it’s not already installed) and then start it.

    The script will automatically create the mount point /mnt/fuse-<clustername>-hdfs.

    Note:

    The script bds-copy-tbs-to-hdfs.sh is compatible with FUSE-DFS 2.8 only.
  2. In SQL, use CREATE TABLESPACE to create the tablespace. Store it in a local .dbf file. After this file is populated, you will move it to the Hadoop cluster. A single bigfile tablespace is recommended.

    For example:
    SQL> CREATE TABLESPACE movie_cold_hdfs DATAFILE '/u01/app/oracle/oradata/cdb/orcl/movie_cold_hdfs1.dbf' SIZE 100M reuse AUTOEXTEND ON nologging;
    
  3. Use ALTER TABLE with the MOVE clause to move objects in the tablespace.

    For example:
    SQL> ALTER TABLE movie_fact MOVE PARTITION 2010_JAN TABLESPACE movie_cold_hdfs ONLINE UPDATE INDEXES;
    You should check the current status of the objects to confirm the change. In this case, check which tablespace the partition belongs to.
    SQL> SELECT table_name, partition_name, tablespace_name FROM user_tab_partitions WHERE table_name='MOVIE_FACT';
  4. Make the tablespace read only and take it offline.

    SQL> ALTER TABLESPACE movie_cold_hdfs READ ONLY;
    SQL> ALTER TABLESPACE movie_cold_hdfs OFFLINE;
  5. Copy the datafile to HDFS and then change the file permissions to read only.

    hadoop fs -put /u01/app/oracle/oradata/cdb/orcl/movie_cold_hdfs1.dbf /user/oracle/tablespaces/
    hadoop fs -chmod 440 /user/oracle/tablespaces/movie_cold_hdfs1.dbf
    

    As a general security practice for Oracle Big Data SQL, apply appropriate HDFS file permissions to prevent unauthorized read/write access.

    You may need to source $ORACLE_HOME/bigdatasql/hadoop_<clustername>.env before running hadoop fs commands.

    As an alternative, you could use the Linux cp command to copy the files to the FUSE-DFS mount point, as shown in the sketch below.
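    For example, a minimal sketch of the cp alternative, assuming the FUSE-DFS mount point created by the install script in step 1:

    $ cp /u01/app/oracle/oradata/cdb/orcl/movie_cold_hdfs1.dbf /mnt/fuse-<clustername>-hdfs/user/oracle/tablespaces/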

  6. Rename the datafiles, using ALTER TABLESPACE with the RENAME DATAFILE clause.

    Important:

    Note the “hdfs:” prefix to the file path in the SQL example below. This is the keyword that tells Smart Scan that it should scan the file. Smart Scan also requires that the file is read only. The cluster name is optional.

    Also, before running the SQL statement below, the directory $ORACLE_HOME/dbs/hdfs:<clustername>/user/oracle/tablespaces should include the soft link movie_cold_hdfs1.dbf, pointing to /mnt/fuse-<clustername>-hdfs/user/oracle/tablespaces/movie_cold_hdfs1.dbf.
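    A minimal sketch of creating that directory and soft link, assuming the FUSE-DFS mount point from step 1:

    $ mkdir -p "$ORACLE_HOME/dbs/hdfs:<clustername>/user/oracle/tablespaces"
    $ ln -s /mnt/fuse-<clustername>-hdfs/user/oracle/tablespaces/movie_cold_hdfs1.dbf "$ORACLE_HOME/dbs/hdfs:<clustername>/user/oracle/tablespaces/"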

    SQL> ALTER TABLESPACE movie_cold_hdfs RENAME DATAFILE '/u01/app/oracle/oradata/cdb/orcl/movie_cold_hdfs1.dbf' TO 'hdfs:<clustername>/user/oracle/tablespaces/movie_cold_hdfs1.dbf';

    When you rename the datafile, only the pointer in the database control file changes. This procedure does not physically rename the datafile.

    The tablespace must exist on a single cluster. If there are multiple datafiles, these must point to the same cluster.

  7. Bring the tablespace back online and test it.
    SQL> ALTER TABLESPACE movie_cold_hdfs ONLINE;
    SQL> SELECT avg(rating) FROM movie_fact;
    

Below is the complete code example. In this case we move three partitions from local Oracle Database storage to the tablespace in HDFS.

-- Mount HDFS via FUSE-DFS (see step 1), then review the existing tablespaces
select * from dba_tablespaces;

CREATE TABLESPACE movie_cold_hdfs DATAFILE '/u01/app/oracle/oradata/cdb/orcl/movie_cold_hdfs1.dbf' SIZE 100M reuse AUTOEXTEND ON nologging;

ALTER TABLE movie_fact 
MOVE PARTITION 2010_JAN TABLESPACE movie_cold_hdfs ONLINE UPDATE INDEXES;
ALTER TABLE movie_fact 
MOVE PARTITION 2010_FEB TABLESPACE movie_cold_hdfs ONLINE UPDATE INDEXES;
ALTER TABLE movie_fact 
MOVE PARTITION 2010_MAR TABLESPACE movie_cold_hdfs ONLINE UPDATE INDEXES;

-- Check for the changes 
SELECT table_name, partition_name, tablespace_name FROM user_tab_partitions WHERE table_name='MOVIE_FACT';

ALTER TABLESPACE movie_cold_hdfs READ ONLY;
ALTER TABLESPACE movie_cold_hdfs OFFLINE;

hadoop fs -put /u01/app/oracle/oradata/cdb/orcl/movie_cold_hdfs1.dbf /user/oracle/tablespaces/
hadoop fs -chmod 440 /user/oracle/tablespaces/movie_cold_hdfs1.dbf

ALTER TABLESPACE movie_cold_hdfs RENAME DATAFILE '/u01/app/oracle/oradata/cdb/orcl/movie_cold_hdfs1.dbf' TO 'hdfs:hadoop_cl_1/user/oracle/tablespaces/movie_cold_hdfs1.dbf';
ALTER TABLESPACE movie_cold_hdfs ONLINE;

-- Test
select avg(rating) from movie_fact;

3.4.4 Smart Scan for Tablespaces in HDFS

Smart Scan is an Oracle performance optimization that moves processing to the location where the data resides. In Big Data SQL, Smart Scan searches for datafiles whose path includes the “hdfs:” prefix. This prefix is the key that indicates the datafile is eligible for scanning.

After you have moved your tablespace data to HDFS and prefixed the datafile path with the "hdfs:" tag, queries that access the data in these files leverage Big Data SQL Smart Scan by default. All of the Big Data SQL Smart Scan performance optimizations apply, greatly reducing the amount of data that moves from the storage tier to the database tier. These performance optimizations include the following (a verification sketch follows the list):

  • The massively parallel processing power of the Hadoop cluster is employed to filter data at its source.

  • Storage Indexes can be leveraged to reduce the amount of data that is scanned.

  • Data mining scoring can be off-loaded.

  • Encrypted data scans can be off-loaded.
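To gauge whether off-load is occurring for your session, you can check the external table (XT) cell statistics, as in this minimal sketch (statistic names can vary by release):

SQL> SELECT n.name, s.value FROM v$statname n, v$mystat s WHERE n.statistic# = s.statistic# AND n.name LIKE 'cell XT%';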

Disabling or Enabling Smart Scan

The initialization parameter _CELL_OFFLOAD_HYBRID_PROCESSING determines whether Smart Scan for HDFS is enabled or disabled. It is enabled by default.

To disable Smart Scan for tablespaces in HDFS do the following.

  1. Set the parameter to FALSE in the init.ora file or in a server parameter file:

     _CELL_OFFLOAD_HYBRID_PROCESSING=FALSE 

    The underscore prefix is required in this parameter name.

  2. Restart the Oracle Database instance.

You can also make this change dynamically using the ALTER SYSTEM directive in SQL. This does not require a restart.

SQL> alter system set "_cell_offload_hybrid_processing"=false;

One reason to turn off Smart Scan is if you need to move the Oracle tablespace datafiles out of HDFS and back to their original locations.

You can re-enable Smart Scan by resetting _CELL_OFFLOAD_HYBRID_PROCESSING to TRUE.
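For example, to re-enable it dynamically (the double quotes are required because the parameter name begins with an underscore):

SQL> alter system set "_cell_offload_hybrid_processing"=true;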

Note:

When _CELL_OFFLOAD_HYBRID_PROCESSING is set to FALSE, Smart Scan is disabled for Oracle tablespaces residing in HDFS.