Oracle® Healthcare Foundation Installation Guide
Release 7.1

E79438-01

2 Oracle Healthcare Foundation Data Model

This chapter describes the steps required to install the Oracle Healthcare Foundation (OHF) Data Model.

2.1 Supported Upgrade Paths

The following are the supported upgrade paths:

  • Healthcare Data Warehouse Foundation (HDWF) 6.1 with Oracle Healthcare Analytics Data Integration (OHADI) 3.1 installed to OHF 7.1

  • OHF 7.0.1 to OHF 7.1

2.2 Prerequisite Checklist

  • Install Oracle Database 12.1.0.2 Enterprise Edition.
  • Create a container database with a single pluggable database instance with the help of a Database Administrator.
  • Install a Java Virtual Machine on the database server.
  • Set the NLS_LENGTH_SEMANTICS parameter to either CHAR or BYTE based on your requirements.

For Oracle Healthcare Foundation Globalization Support information, see the Oracle Database Globalization Support Guide and set your database character set accordingly.

  • Enable database connection through SERVICE_NAME. Make sure that you are connecting to the pluggable database instance.

Verify the database connectivity using the following command:

SQL>connect <username>@<hostname>:<port number>/<service name>
  • Make sure that the SQL*Plus utility is available on the installation server.
  • Make sure that the password expiry notification message is not displayed for the SYS and SYSTEM schemas.
  • To upgrade to Oracle Healthcare Foundation 7.1, back up the data model schemas of the existing Healthcare Data Warehouse Foundation (HDWF) environment.
  • To create an encrypted tablespace during a fresh installation, set up your environment for tablespace encryption. For more information, see the Oracle Database Advanced Security Guide 12.1. The installer only creates the tablespace and does not handle prerequisites such as setting up the keystore and opening it. The installer checks whether the keystore is open only if the SYS and SYSTEM passwords are provided during installation.
  • Set the GLOBAL_NAMES database initialization parameter to false.
  • For remote installation, make sure the following are available on the client machine:
  • Oracle Linux 6.5 or 6.6 (64-bit) OS

  • Oracle Database 12.1.0.2.0 client

  • To upgrade, execute the following query as the SYSTEM user to validate whether the data model user has OHF_APPLICATION_ROLE as the default role:

select grantee, DEFAULT_ROLE from dba_role_privs where granted_role='OHF_APPLICATION_ROLE' AND DEFAULT_ROLE='NO';

If this is not a default role, execute the following command as the SYSTEM user to set it as a default role:

alter user <data model user> default role all;
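
A minimal SQL*Plus verification sketch for the parameter and keystore checks above, assuming you connect as the SYSTEM user to the pluggable database:

SHOW PARAMETER NLS_LENGTH_SEMANTICS
SHOW PARAMETER GLOBAL_NAMES

-- Applies only if you plan to create encrypted tablespaces; the keystore
-- status should be OPEN.
SELECT wrl_type, status FROM v$encryption_wallet;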


2.3 Guidelines

  • If you are installing OHF on an Exadata database machine, see the Exadata Machine and Exadata Implementation sections in the Oracle Healthcare Foundation Programmer's Guide.

  • Decide the table compression strategy based on the number of updates to the tables. If the number of updates is high, select the Advanced Compression option instead of HCC (QUERY HIGH).

  • For a fresh installation on an Exadata environment, selecting the compression type as HCC (QUERY HIGH) will slow down the first few incremental loads.

  • Make sure that you have an appropriate database license for the compression methods you choose.
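
After installation, you can confirm which compression type was actually applied to a schema's tables; a sketch, assuming you connect as a DBA user and substitute the schema name in uppercase:

SELECT compression, compress_for, COUNT(*) AS table_count
FROM dba_tables
WHERE owner = '<DATA WAREHOUSE SCHEMA>'
GROUP BY compression, compress_for;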

2.4 Installation Log Files

While installing the Data Model, the installer generates the following log files:

Table 2-1 Installation Log Files

File Name Description

installActions<timestamp>.log

Records the actions of the installer and can be used to diagnose installer issues.

oraInstall<timestamp>.out

Records the output of the SQL scripts run by the installer. Database objects are installed using the Python framework.

oraInstall<timestamp>.err

Records the errors from the SQL scripts run by the installer.

You can ignore the following error:

java.io.FileNotFoundException: /<INSTALL_HOME>/inventory/Components21/
oracle.hsgbu.hc.datamodel/7.1.0.0.0/context.xml

The log files are time stamped and each installation session creates a new log file.

On a Linux machine, the log files are located at $ORACLE_BASE/oraInventory/logs. For example, /u01/app/oraInventory/logs.

<INSTALL_HOME>/dm/install.err

Contains any SQL errors. Database objects are installed using the Python framework and the error logging is redirected to this location.

<INSTALL_HOME>/reports/dm_install<timestamp>.html

Contains the installation summary for the Data Model installation.


When reporting any problems that occur during installation, make sure that you include all the above log files. Check which components were installed successfully using the following query, connecting to the enterprise schema:

SELECT * FROM W_EHA_PRODUCT_VERSION;

Include the result of this query when reporting problems to Oracle Support.

2.5 Installing the Oracle Healthcare Foundation Data Model

To install the OHF Data Model:

  1. Extract the contents of the OHF media pack to your system.

  2. Navigate to the <media_pack_location>/ folder.

  3. Unzip the OHF_V71_Linux-x64.zip file where you want to launch the installer using the following command:

    unzip -a OHF_V71_Linux-x64.zip

  4. Navigate to the Disk1/install folder.

  5. Change the protection on files as follows:

    chmod 755 *

  6. Start the Oracle Universal Installer (OUI) using the following command:

    • If the database server (Exadata or non-Exadata) is on the machine where the installer is running, execute:

      sh runInstaller.sh -local

    • If the database server is on a different machine, execute:

      sh runInstaller.sh -local remote_installation=true

    where the -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.

    The Welcome screen appears.

  7. Click Next. The Select a Product to Install screen appears.

  8. Select the Oracle Healthcare Foundation Data Model 7.1.0.0.0 option and click Next.

    The Specify Home Details screen appears.

  9. Enter the installation home name and location and click Next.

    The Verify Installation Prerequisites screen appears.

  10. Verify if all the prerequisites are met before proceeding and click Next.

    The Oracle Client Home Configuration screen appears.

  11. Specify the Oracle client home path and click Next. The installer validates this path.

    The Database Configuration screen appears.

  12. Enter values for the following fields:

    • Hostname - By default, the system host name appears

    • Port - By default, the port number is 1521. You can edit this field if required.

    • Service name

    • System user password

    • Sys user password

  13. Click Next. The Table Compression screen is displayed.

  14. Select the compression option for the following schemas from the corresponding drop-down list:

    • Interface tables schema: No Compression (default) and Advanced Compression for non-Exadata; Hybrid Columnar Compression (default) and No Compression for Exadata.

    • Data warehouse schema: No Compression (default) and Advanced Compression for non-Exadata; No Compression (default), Advanced Compression (preferred if updates are high), and Hybrid Columnar Compression for Exadata.

    • Common data mart schema: No Compression (default) and Advanced Compression for non-Exadata; No Compression (default) and Advanced Compression for Exadata.

    • Cohort data mart schema: No Compression (default) and Advanced Compression for non-Exadata; No Compression (default) and Advanced Compression for Exadata.

    • Omics Data Bank schema: No Compression (default) and Advanced Compression for non-Exadata; Hybrid Columnar Compression (default) for Exadata.

  15. Click Next. The Data Model Configuration screen appears.

  16. Enter values for the following fields:

    • Interface tables schema name

    • Interface tables schema password

    • Data warehouse schema name

    • Data warehouse schema password

    • Common data mart schema name

    • Common data mart schema password

    • Omics data bank schema name

    • Omics data bank schema password

  17. Click Next. The Data Model Configuration screen appears.

  18. Enter values for the following fields:

    • Cohort data mart schema name

    • Cohort data mart password

    • Enterprise schema name

    • Enterprise schema password

    • Job engine schema name

    • Job engine schema password

    • Services schema name

    • Services schema password

  19. Click Next in the Data Model Configuration Verification screen. The Tablespace Data File Location screen appears.

  20. Specify the location of the tablespace data files. This is the directory on the database server where the data files are created during the installation.

    For example,

    Unix: /u01/oradata/dbname

    ASM: +DATA_EX02/hashas01/datafile

    Note:

    • The ASM location must always start with +.

    • The path should not end with /.

    • The installer does not validate the data file location. If the database server is on a remote machine, make sure that the location physically exists; otherwise, the installer will fail.

  21. Click Next. The Tablespace Details screen appears.

  22. The installer creates the following tablespaces for a fresh installation. Make sure that these tablespaces do not exist in the database (a verification sketch follows the table below).

    Enter a tablespace name for each schema. You can edit the default tablespace name, initial size, max size, and tablespace encryption for these schemas.

    Tablespace Details Tablespace Name
    Default tablespace used for Interface tables (hdi) schema hdi_ts
    Default tablespace used for Data Warehouse (hdm) schema hdm_ts
    Default tablespace used for Common Data Mart (hcd) schema hcd_ts
    Default tablespace for Omics Data Bank (odb) schema odb_data_ts
    Used for indexes of the Omics Data Bank (odb) schema odb_index_ts
    Used for LOB columns in the Omics Data Bank (odb) schema odb_lob_ts
    Default tablespace used for Cohort Data Mart (cdm) schema cdm_data_ts
    Used for indexes of the Cohort Data Mart (cdm) schema cdm_index_ts
    Default tablespace used for Enterprise (ent) schema ent_ts
    Default tablespace for Job Engine (job) schema job_data_ts
    Used for indexes of the Job Engine (job) schema job_index_ts
    Database File System (DBFS) store is created as part of the Job Engine installation job_store_ts
    Name of the tablespace to store LOB data in the DBFS store created for the JOB ENGINE schema user job_lob_ts
    Name of the tablespace to be used for the DBFS store created for the JOB_ENGINE schema job_tbs_ts
    Default tablespace used for services schema (svc) svc_ts
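
    A verification sketch for the names above, assuming you connect as a DBA user; the query must return no rows for the names you intend to use (extend the IN list to cover all of them):

    SELECT tablespace_name FROM dba_tablespaces
    WHERE tablespace_name IN ('HDI_TS', 'HDM_TS', 'HCD_TS', 'ODB_DATA_TS');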

  23. Click Next. The Temporary Tablespace Details screen appears.

    The installer creates the following temporary tablespaces for a fresh installation. Make sure that these tablespaces do not exist in the database.

    You can edit the temporary tablespace name, initial size, and max size for these schemas.

    Temporary Tablespace Details Temporary Tablespace
    Interface tables schema (hdi) hdi_temp
    Data warehouse schema (hdm) hdm_temp
    Common data mart schema (hcd) hcd_temp
    Omics Data bank schema (odb) odb_temp
    Cohort data mart schema (cdm) cdm_temp
    Enterprise schema (ent) ent_temp
    Job Engine schema (job) job_temp
    Services schema (svc) svc_temp

  24. Click Next. The Omics Data Bank and Cohort Data Mart Parameters screen appears.

  25. Enter values for the following fields:

    • Result Partition - Used to partition result tables in the ODB schema. The available options are:

      • GENE (Default)

      • STUDY

    • Promoter Offset - Numerical value to specify a portion of the gene used as a promoter in the ODB schema. The default value is 200.

    • Flanking Offset - Numerical value to specify the region before and after a gene to link results to a gene in the ODB schema. The default value is 200.

    • Max Parallel Degree - An option to specify the maximum degree of parallelism to be set on tables or used in the SQL statements for the CDM or ODB schema. It is dependent on the machine configuration of the database server. The default value is 2.

    • Job Store Name

      Note:

      If you are using the same database instance for multiple environments, enter a unique job store name for each job schema.
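
      To list the job store names already created in the database by previous installations, you can run the same query used in the upgrade flow (Section 2.6), connecting as the SYS user:

      SELECT case when substr(store_name,1,3) ='FS_' then substr(store_name,4) else null end as store_name FROM TABLE(dbms_dbfs_content.liststores);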
  26. Click Next. The Summary screen appears.

  27. Click Install.

    After the installation is complete, the End of Installation screen appears.

  28. Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.

  29. Review the generated installation log files for errors. For details on troubleshooting, see Section 2.10.

  30. Contact Oracle support, if necessary, to resolve any errors.

2.6 Upgrading Healthcare Data Warehouse Foundation to Oracle Healthcare Foundation

For the list of supported upgrade paths, see Section 2.1.

If you are upgrading from Healthcare Data Warehouse Foundation (HDWF) 6.1 to Oracle Healthcare Foundation, you must install Oracle Healthcare Analytics Data Integration (OHADI) 3.1.

Note:

After upgrading, follow Section 2.7 for data migration from HDWF to OHF.
  1. To upgrade from HDWF 6.1 to OHF, upgrade the HDWF 6.1 database instance to the Oracle Database 12.1.0.2 Enterprise Edition.

  2. Extract the contents of the OHF media pack to your system.

  3. Navigate to the <media_pack_location>/ folder.

  4. Unzip the OHF_V71_Linux-x64.zip file where you want to launch the installer using the following command:

    unzip -a OHF_V71_Linux-x64.zip

  5. Navigate to the Disk1/install folder.

  6. Change the protection on files as follows:

    chmod 755 *

  7. Make sure that all existing user schemas are disconnected from the database.

  8. Start the Oracle Universal Installer (OUI) using the following command:

    • If the database server (Exadata or non-Exadata) is on the machine where the installer is running, execute:

      sh runInstaller.sh -local

    • If the database server is on a different machine, execute:

      sh runInstaller.sh -local remote_installation=true

    where the -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.

    The Welcome screen appears.

  9. Click Next. The Select a Product to Install screen appears.

  10. Select the Oracle Healthcare Foundation Data Model 7.1.0.0.0 option and click Next.

    The Specify Home Details screen appears.

  11. Enter the installation home name and location, and click Next.

    The Verify Installation Prerequisites screen appears.

  12. Verify if all the prerequisites are met before proceeding and click Next.

    The Oracle Client Home Configuration screen appears.

  13. Specify the Oracle client home path and click Next. The installer validates the path.

    The Database Configuration screen appears.

  14. Enter values for the following fields:

    • Hostname - By default, the system host name appears

    • Port - By default, the port number is 1521. You can edit this field if required.

    • Service name

    • System user password

    • Sys user password

  15. Click Next. The Table Compression screen is displayed.

  16. Select the compression option for the following schemas from the corresponding drop-down list:

    • Interface tables schema: No Compression (default) and Advanced Compression for non-Exadata; Hybrid Columnar Compression (default) and No Compression for Exadata.

    • Data warehouse schema: No Compression (default) and Advanced Compression for non-Exadata; No Compression (default), Advanced Compression (preferred if updates are high), and Hybrid Columnar Compression for Exadata.

    • Common data mart schema: No Compression (default) and Advanced Compression for non-Exadata; No Compression (default) and Advanced Compression for Exadata.

    • Cohort data mart schema: No Compression (default) and Advanced Compression for non-Exadata; No Compression (default) and Advanced Compression for Exadata.

    • Omics Data Bank schema: No Compression (default) and Advanced Compression for non-Exadata; Hybrid Columnar Compression (default) for Exadata.

  17. Click Next. The Data Model Configuration screen appears.

  18. Enter the existing schema details (HDWF 6.1) for the following fields:

    • Interface tables schema name

    • Interface tables schema password

    • Data warehouse schema name

    • Data warehouse schema password

    • Common data mart schema name

    • Common data mart schema password

    • Omics data bank schema name - Enter a new schema name

    • Omics data bank schema password

  19. Click Next. The Data Model Configuration screen appears.

  20. Enter values for the following fields:

    • Cohort data mart schema name

    • Cohort data mart password

    • Enterprise schema name

    • Enterprise schema password

    • Job engine schema name

    • Job engine schema password

    • Services schema name

    • Services schema password

  21. Click Next. The Data Model Configuration Verification screen appears.

  22. Click Next. The Tablespace Data File Location screen appears.

  23. Specify the location of the tablespace data files. This is the directory on the database server where the data files are created during the installation.

    For example,

    Unix: /u01/oradata/dbname

    ASM: +DATA_EX02/hashas01/datafile

    Note:

    • The ASM location must always start with +.

    • The path should not end with /.

    • The installer does not validate the data file location. If the database server is on a remote machine, make sure that the location physically exists; otherwise, the installer will fail.

  24. Click Next. The Tablespace Details screen appears.

  25. The installer creates the following default tablespaces if they do not exist. It also creates a tablespace for the Services schema.

    You can edit the default tablespace name, initial size, max size, and tablespace encryption for these schemas. Enter a tablespace name that does not exist in the database.

    Tablespace Details Tablespace Name
    Default tablespace for Omics Data Bank (odb) schema odb_data_ts
    Used for indexes of the Omics Data Bank (odb) schema odb_index_ts
    Used for LOB columns in the Omics Data Bank (odb) schema odb_lob_ts
    Default tablespace used for Cohort Data Mart (cdm) schema cdm_data_ts
    Used for indexes of the Cohort Data Mart (cdm) schema cdm_index_ts
    Default tablespace used for Enterprise (ent) schema ent_ts
    Default tablespace for Job Engine (job) schema job_data_ts
    Used for indexes of the Job Engine (job) schema job_index_ts
    A Database File System (DBFS) store is created as part of the Job Engine installation job_store_ts
    Name of the tablespace to store LOB data in the DBFS store created for the JOB ENGINE schema user job_lob_ts
    Name of the tablespace to be used for the DBFS store created for the JOB_ENGINE schema job_tbs_ts
    Default tablespace used for services schema (svc) svc_ts

  26. Click Next. The Temporary Tablespace Details screen appears.

    The installer creates the following temporary tablespaces if they do not exist. It also creates a temporary tablespace for the Services schema.

    You can edit the temporary tablespace name, initial size, and max size for these schemas. Enter a tablespace name that does not exist in the database.

    Temporary Tablespace Details Temporary Tablespace
    Temporary tablespace used for Omics Data bank schema (odb) odb_temp
    Temporary tablespace used for Cohort data mart schema (cdm) cdm_temp
    Temporary tablespace used for Enterprise schema (ent) ent_temp
    Temporary tablespace used for Job Engine schema (job) job_temp
    Temporary tablespace used for Services schema (svc) svc_temp

  27. Click Next. The Omics Data Bank and Cohort Data Mart Parameters screen appears.

  28. Enter the values in the following fields:

    Note:

    Make sure that you enter the same values used for the OHF 7.0.1 installation for Result Partition, Promoter Offset, Flanking Offset, and Job Store Name.
    • Result Partition - Used to partition result tables in the ODB schema. The available options are:

      • GENE (Default)

      • STUDY

    • Promoter Offset - Numerical value to specify a portion of the gene used as a promoter in the ODB schema. The default value is 200.

    • Flanking Offset - Numerical value to specify the region before and after a gene to link results to a gene in the ODB schema. The default value is 200.

    • Max Parallel Degree - An option to specify the maximum degree of parallelism to be set on tables or used in the SQL statements for the CDM or ODB schema. It is dependent on the machine configuration of the database server. The default value is 2.

    • Job Store Name

      Execute the following query, connecting as the SYS user, to find all store names in the database (created by all previous installations):

      SELECT case when substr(store_name,1,3) ='FS_' then substr(store_name,4) else null end as store_name FROM TABLE(dbms_dbfs_content.liststores);

      Note:

      If you are using the same database instance for multiple environments, enter a unique job store name for each job schema.
  29. Click Next. The Tablespace Parameters screen appears.

  30. Enter values for the following fields from the existing CDM, ODB, and Job engine schemas (a query sketch to identify these tablespaces follows this list):

    Note:

    Make sure that these tablespaces exist before the installation; they are the tablespaces that were used when these schemas were created.
    • cdm_index_ts_name

    • odb_index_ts_name

    • odb_lob_ts_name

    • job_index_ts_name

    • job_lob_ts_name

    • job_tbs_ts_name

    • job_store_ts_name
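
    A sketch to identify the tablespaces currently used by an existing schema, assuming you connect as a DBA user and substitute the schema name in uppercase:

    SELECT DISTINCT tablespace_name, segment_type
    FROM dba_segments
    WHERE owner = '<EXISTING SCHEMA NAME>';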

  31. Click Next. The Summary screen appears.

  32. Click Install.

  33. After the installation is complete, the End of Installation screen appears.

  34. Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.

  35. Review the generated installation log files for errors. For details on troubleshooting, see Section 2.10.

  36. Contact Oracle support, if necessary, to resolve any errors.

2.7 Data Migration

To upgrade from a previous version of the product, follow this migration process.

2.7.1 Executing Data Migration Script

Data migration for the interface tables and data warehouse schemas does not occur as part of the installation. Review the data migration scripts in <INSTALL_HOME>/dm/hdi_install/post_ddl_upgrade.sql and <INSTALL_HOME>/dm/hdm_install/post_ddl_upgrade.sql (note that there are references to other scripts within the main scripts). You can modify the scripts before executing them.

Note:

Executing the migration scripts is mandatory and must be completed before the ETL execution.

2.7.2 HDWF 6.1 to OHF 7.0.1 Data Migration Summary

  • Specimen.Intervention ID/VN is deprecated and data is migrated to Intervention Specimen.

  • Specimen Processing Procedure.Specimen Processing Procedure Subtype is added as a mandatory attribute to Specimen Processing Procedure. This is initialized with the appropriate seed data coded value 'SPP_SUBTYP_GEN' during the migration.

  • Schedule Block.Schedule Block Subtype is added as a mandatory attribute to Schedule Block. This is initialized with the appropriate seed data coded value 'SCH_BLCK_SUBTYP_GEN' during the migration.

  • Schedule.Service Provider is deprecated and data is transferred to Schedule Service Provider.

  • Observation.Specimen ID/VN is deprecated and data is transferred to Observation Specimen. On the interface tables schema, data is transferred from Observation, Assessment, and Image Instance to Observation Specimen.

  • Data is migrated from Encounter Service Provider to Encounter.Responsible Service Provider for Encounter Service Provider Relationship Type="Responsible Service Provider". If the same encounter is associated to multiple service providers for a given value of Encounter and Relationship Type="Responsible Service Provider", the service provider that was inserted last will be migrated to Encounter.

  • The integration ID of the parent table is used wherever a new table is being populated based on an existing table.

  • The migration scripts use the Oracle-provided seed data values given below. If you have used a different set of seed data values in Encounter Service Provider to represent a Responsible Service Provider relationship, change it in the migration script for both the Interface Tables and Data Warehouse schemas, and run post_ddl_upgrade.sql.

  • The following seed data values are used for filtering Encounter Service Provider.Encounter Service Provider Relationship Type in the migration scripts:

    • HDI_ENC_SVCPRV.ENC_SVCPRV_RLTYP_CD='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV' (CD_NM='Responsible Service Provider' AND INTEGRATION_ID='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV~EHA_CUSTOM_CD_SYS~1.0' AND SYS_INT_ID='EHA_CUSTOM_CD_SYS~1.0' AND SYS_DAT_SRC_NUM_ID=1 AND CREATED_BY_USER_ID='EHA_USER' AND CREATED_BY_USER_DS_NUM_ID=1)

    • On the Data Warehouse schema, it corresponds to HDM_ENC_SVCPRV.ENC_SVCPRV_RLSHPTYP_ID/VN resolving to the following code:

      CD='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV' AND CD_NM='Responsible Service Provider' AND INTEGRATION_ID='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV~EHA_CUSTOM_CD_SYS~1.0'

    The values mentioned above are presented in a logical way and are not the exact physical representation of the values present in the migration scripts. See the SQL files <INSTALL_HOME>/dm/hdi_install/pkb/hdi_upgrade_7_0.pkb and <INSTALL_HOME>/dm/hdm_install/pkb/hdm_upgrade_7_0.pkb for the exact seed data values.

  • Key steps related to data migration are logged into the HDI_RSLT_LOG and HDM_RSLT_LOG tables in the respective schemas.

2.7.3 OHF 7.0.1 to 7.1 Data Migration Summary

  • Bill Line Item.Bill Diagnosis ID/VN is deprecated and the corresponding data is migrated to Bill Diagnosis as new records.

  • Claim Line Item.Claim Diagnosis ID/VN is deprecated and the corresponding data is migrated to Claim Diagnosis as new records.

    The values mentioned above are presented in a logical way and are not the exact physical representation of the values in the migration scripts. For exact details, see the SQL files:

    <INSTALL_HOME>/dm/hdi_install/pkb/hdi_upgrade_7_1.pkb and <INSTALL_HOME>/dm/hdm_install/pkb/hdm_upgrade_7_1.pkb.

  • HDM_INTVN_SPCMN.INTVN_SPCMN_RLSHPTYP_ID/VN is populated with the value -1 corresponding to the default value of CODEID_NAV configured in the HMC schema. If you have modified the seed data, change the script <INSTALL_HOME>/dm/hdm_install/post_ddl_upgrade.sql accordingly.

  • Review the integration ID pattern for records populated using the migration scripts and change it accordingly if needed.

  • It is assumed that the time stamps for SRC_CHANGED_ON_DT match from the parent to the child reference on the interface schema. For example, the value of HDI_ENC_SVCPRV.ENC_SRC_CHANGED_ON_DT should be the same as that of the parent, that is, HDI_ENC.SRC_CHANGED_ON_DT, for the same record referred to in both tables, down to the second. If it is not the same, change the migration script accordingly.

2.7.4 Executing Schema Migration Scripts After Modifications

2.7.4.1 Interface Tables Schema Migration

  1. Open the command prompt and navigate to the <INSTALL_HOME>/dm/hdi_install/ folder.

  2. Connect to the interface tables schema through the command prompt using the SQL*Plus utility.

  3. Execute the script using the command '@post_ddl_upgrade.sql'.

2.7.4.2 Data Warehouse Schema Migration

  1. Open the command prompt and navigate to the <INSTALL_HOME>/dm/hdm_install/ folder.

  2. Connect to the data warehouse schema through the command prompt using the SQL*Plus utility.

  3. Execute the script using the command '@post_ddl_upgrade.sql'.
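
The session is the same for both schemas; a sketch, using the placeholder connection values from the prerequisite checklist in Section 2.2:

SQL> connect <schema name>@<hostname>:<port number>/<service name>
SQL> @post_ddl_upgrade.sql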

2.8 Running the Installer on Pre-created Schemas

  1. Extract the contents of the OHF media pack to your system.

  2. Navigate to the <media_pack_location>/ folder.

  3. Unzip the OHF_V71_Linux-x64.zip file where you want to launch the installer using the following command:

    unzip -a OHF_V71_Linux-x64.zip

  4. Create the following default tablespaces and provide an appropriate tablespace name. The table provides an example tablespace name. A creation sketch for the default and temporary tablespaces follows Table 2-3.

    Table 2-2 Default Tablespaces

    Tablespace Details Tablespace Name BIG File Tablespace

    Default tablespace used for Interface tables (hdi) schema

    hdi_ts

    Yes

    Default tablespace used for Data Warehouse (hdm) schema

    hdm_ts

    Yes

    Default tablespace used for Common Data Mart (hcd) schema

    hcd_ts

    Yes

    Default tablespace for Omics Data Bank (odb) schema

    odb_data_ts

    Yes

    Used for indexes of the Omics Data Bank (odb) schema

    odb_index_ts

    Yes

    Used for LOB columns in the Omics Data Bank (odb) schema

    odb_lob_ts

    Yes

    Default tablespace used for Cohort Data Mart (cdm) schema

    cdm_data_ts

    Yes

    Used for indexes of the Cohort Data Mart (cdm) schema

    cdm_index_ts

    Yes

    Default tablespace used for Enterprise (ent) schema

    ent_ts

    No

    Default tablespace for Job Engine (job) schema

    job_data_ts

    No

    Used for indexes of the Job Engine (job) schema

    job_index_ts

    No

    Database File System (DBFS) store created as part of the Job Engine installation

    job_store_ts

    No

    Name of the tablespace to store LOB data in the DBFS store created for the JOB ENGINE schema user

    job_lob_ts

    No

    Name of the tablespace to be used for the DBFS store created for the JOB_ENGINE schema

    job_tbs_ts

    No

    Default tablespace used for services (svc) schema

    svc_ts

    No


  5. Create the following temporary tablespaces. The table provides an example tablespace name.

    Table 2-3 Temporary Tablespaces

    Tablespace Details Tablespace Name BIG File Tablespace

    Temporary tablespace used for Interface tables (hdi) schema

    hdi_temp

    Yes

    Temporary tablespace used for Data warehouse (hdm) schema

    hdm_temp

    Yes

    Temporary tablespace used for Common data mart (hcd) schema

    hcd_temp

    Yes

    Temporary tablespace used for Omics Data bank (odb) schema

    odb_temp

    Yes

    Temporary tablespace used for Cohort data mart (cdm) schema

    cdm_temp

    Yes

    Temporary tablespace used for Enterprise (ent) schema

    ent_temp

    No

    Temporary tablespace used for Job Engine (job) schema

    job_temp

    No

    Temporary tablespace used for Services (svc) schema

    svc_temp

    No
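
    A creation sketch for one default tablespace and one temporary tablespace, assuming placeholder data file paths and sizes; include the BIGFILE keyword only where Table 2-2 or Table 2-3 says Yes:

    CREATE BIGFILE TABLESPACE hdi_ts
      DATAFILE '/u01/oradata/dbname/hdi_ts01.dbf'
      SIZE 1G AUTOEXTEND ON MAXSIZE UNLIMITED;

    CREATE BIGFILE TEMPORARY TABLESPACE hdi_temp
      TEMPFILE '/u01/oradata/dbname/hdi_temp01.dbf'
      SIZE 512M AUTOEXTEND ON MAXSIZE UNLIMITED;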


  6. Navigate to the directory <media_pack_location>/Disk1/stage/Components/oracle.hsgbu.hc.datamodel/7.1.0.0.0/1/DataFiles/Expanded/filegroup1.

  7. Unzip the master_install.zip file.

  8. Connect as the Oracle SYS user and create the database roles by executing the roles.sql script from the extracted content.

  9. Create the following user schemas with an appropriate default tablespace and temporary tablespace (a creation sketch follows this list).

    • Interface layer schema (hdi)

    • Data warehouse schema (hdm)

    • Common data mart schema (hcd)

    • Omics data bank schema (odb)

    • Cohort data mart (cdm)

    • Enterprise schema (ent)

    • Job engine schema (job)

    • Services schema (svc)
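
    A minimal creation sketch for one schema, assuming placeholder names and a placeholder password; repeat for each schema with the tablespaces created in steps 4 and 5:

    CREATE USER <odb schema> IDENTIFIED BY <password>
      DEFAULT TABLESPACE odb_data_ts
      TEMPORARY TABLESPACE odb_temp;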

  10. Execute the following scripts/commands as a DBA user. When prompted, enter the respective schema names.

    • @hdi_install/grant_schema_priv.sql

    • @hdm_install/grant_schema_priv.sql

    • @hcd_install/grant_schema_priv.sql

    • @odb_install/grant_schema_priv.sql

      Execute the following commands:

      ALTER USER <odb> QUOTA UNLIMITED ON <odb_index_ts>;

      ALTER USER <odb> QUOTA UNLIMITED ON <odb_lob_ts>;

    • @cdm_install/grant_schema_priv.sql

      Execute the following command:

      ALTER USER <cdm> QUOTA UNLIMITED ON <cdm_index_ts>;

    • @enterprise_install/grant_schema_priv.sql

    • @job_install/grant_schema_priv.sql

      Execute the following commands:

      ALTER USER <job> QUOTA UNLIMITED ON <job_index_ts>;

      ALTER USER <job> QUOTA UNLIMITED ON <job_store_ts>;

      ALTER USER <job> QUOTA UNLIMITED ON <job_lob_ts>;

      ALTER USER <job> QUOTA UNLIMITED ON <job_tbs_ts>;

    • @svc_install/grant_schema_priv.sql

  11. Make sure that all the pre-created user sessions are disconnected from the database.

  12. Navigate to the <media_pack_location>/Disk1/install folder.

  13. Change the protection on files as follows:

    chmod 755 *

  14. Start the Oracle Universal Installer (OUI) using the following command:

    • If the database server is on the machine where the installer is running and is an Exadata instance, execute:

      sh runInstaller.sh -local dba_tasks=false db_platform=exadata

    • If the database server is on the machine where the installer is running and is a non-Exadata instance, execute:

      sh runInstaller.sh -local dba_tasks=false db_platform=default

    • If the database server is on a different machine and is an Exadata instance, execute:

      sh runInstaller.sh -local remote_installation=true dba_tasks=false db_platform=exadata

    • If the database server is on a different machine and is a non-Exadata instance, execute:

      sh runInstaller.sh -local dba_tasks=false db_platform=default remote_installation=true

    where the -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.

    The Welcome screen appears.

  15. Click Next. The Select a Product to Install screen appears.

  16. Select the Oracle Healthcare Foundation Data Model 7.1.0.0.0 option and click Next.

    The Specify Home Details screen appears.

  17. Enter the installation home name and location.

  18. Click Next. The Verify Installation Prerequisites screen appears.

  19. Verify if all the prerequisites are met before proceeding.

  20. Click Next. The Oracle Client Home Configuration screen appears.

  21. Specify the Oracle client home path. The installer validates this path.

  22. Click Next. The Database Configuration screen appears.

  23. Enter values for the following fields:

    • Hostname - By default, the system host name appears

    • Port - By default, the port number is 1521. You can edit this field if required.

    • Service name

    Note:

    The installer does not validate the database connectivity. Make sure that you can connect to the database from SQL*Plus.
  24. Click Next. The Table Compression screen is displayed.

  25. Select the compression option for the following schemas from the corresponding drop-down list:

    • Interface tables schema: No Compression (default) and Advanced Compression for non-Exadata; Hybrid Columnar Compression (default) and No Compression for Exadata.

    • Data warehouse schema: No Compression (default) and Advanced Compression for non-Exadata; No Compression (default), Advanced Compression (preferred if updates are high), and Hybrid Columnar Compression for Exadata.

    • Common data mart schema: No Compression (default) and Advanced Compression for non-Exadata; No Compression (default) and Advanced Compression for Exadata.

    • Cohort data mart schema: No Compression (default) and Advanced Compression for non-Exadata; No Compression (default) and Advanced Compression for Exadata.

    • Omics Data Bank schema: No Compression (default) and Advanced Compression for non-Exadata; Hybrid Columnar Compression (default) for Exadata.

  26. Click Next. The Data Model Configuration screen appears.

  27. Enter the respective pre-created schema details for the following:

    • Interface tables schema name

    • Interface tables schema password

    • Data warehouse schema name

    • Data warehouse schema password

    • Common data mart schema name

    • Common data mart schema password

    • Omics data bank schema name

    • Omics data bank schema password

  28. Click Next. The Data Model Configuration screen appears.

  29. Enter the pre-created schema details for the following:

    • Cohort data mart schema name

    • Cohort data mart password

    • Enterprise schema name

    • Enterprise schema password

    • Job engine schema name

    • Job engine schema password

    • Services schema name

    • Services schema password

  30. Click Next in the Data Model Configuration Verification, Tablespace Details, and Temporary Tablespace Details screens.

    The Omics Data Bank and Cohort Data Mart Parameters screen appears.

  31. Enter the values in the following fields:

    • Result Partition - Used to partition result tables in the ODB schema. The available options are:

      • GENE (Default)

      • STUDY

    • Promoter Offset - Numerical value to specify a portion of the gene used as a promoter in the ODB schema. The default value is 200.

    • Flanking Offset - Numerical value to specify the region before and after a gene to link results to a gene in the ODB schema. The default value is 200.

    • Max Parallel Degree - An option to specify the maximum degree of parallelism to be set on tables or used in the SQL statements for the CDM or ODB schema. It is dependent on the machine configuration of the database server. The default value is 2.

    • Job Store Name

      Execute the following query, connecting as the SYS user, to find all store names in the database (created by all previous installations):

      SELECT case when substr(store_name,1,3) ='FS_' then substr(store_name,4) else null end as store_name FROM TABLE(dbms_dbfs_content.liststores);

  32. Click Next. The Tablespace Parameters screen appears.

  33. Enter values for the following fields from the existing CDM, ODB, and Job engine schemas:

    Note:

    Make sure that these tablespaces exist before the installation; they are the tablespaces that were used when these schemas were created. You can identify them with the query sketch shown after step 30 in Section 2.6.
    • cdm_index_ts_name

    • odb_index_ts_name

    • odb_lob_ts_name

    • job_index_ts_name

    • job_lob_ts_name

    • job_tbs_ts_name

    • job_store_ts_name

  34. Click Next. The Summary screen appears.

  35. Click Install.

  36. After the installation is complete, the End of Installation screen appears.

  37. Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.

  38. Review the generated installation log files for errors. For details on troubleshooting, see Section 2.10.

  39. Contact Oracle support, if necessary, to resolve any errors.

2.9 Revoking Privileges

After the Cohort Data Mart (CDM) and Omics Data Bank (ODB) data model installation, connect as a database user with the DBA privilege and execute the following statements to revoke privileges:

REVOKE CREATE ANY DIRECTORY FROM <odb_user>;

REVOKE CREATE ANY CONTEXT FROM <cdm_user>;

where,

<odb_user> is the ODB schema name.

<cdm_user> is the CDM schema name.

2.10 Troubleshooting

If the installation fails, review the log files for errors. For the log file details, see Section 2.4.

Fix the issues and rerun the installer from the installation home location.

2.10.1 Seed Data Troubleshooting

Seed data scripts are executed as part of the installation process for both HDI schema and HDM schema. The seed data that is loaded in each schema is identical although the format is specific to each of the two schemas.

2.10.1.1 Error Logging and Debugging

Errors that occur while populating seed data using the seed data load procedures are logged in the HDM_X_SEED_DATA_ERR_LOG table for the HDM schema and the HDI_X_SEED_DATA_ERR_LOG table for the HDI schema.

The following are the key columns in the error log table (HDM_X_SEED_DATA_ERR_LOG or HDI_X_SEED_DATA_ERR_LOG):

  • ERR_ID - Unique identifier for a record in the error log table.

  • ERR_TBL_NM - Specifies the table name for which the error record was logged. For example, an error in the HDM code repository will have the value HDM_CD_REPOSITORY in this column.

  • ERR_TYP - Indicates the type of error that is logged. There are five types of errors that can be logged. For the different types of errors, see Section 2.10.1.2.

  • ERR_DESC - A short description of the error.

  • ERR_INT_ID - Indicates the integration ID of the record for which the error was logged. This column along with the error table name (ERR_TBL_NM) can be used to debug the error. The Integration ID for tables that are populated with seed data has the following pattern:

    Table 2-4 Integration ID Pattern for Tables Populated with Seed Data

    Table Name                   Columns to be Concatenated to Generate the Integration ID
    HDM_USER                     LOGIN
    HDM_CD_SYS                   SYS_CD~SYS_VERSION
    HDM_CD_REPOSITORY            CD~SYS_CD~SYS_VERSION
    HDM_CD_TYP                   TYP_CD
    HDM_CD_REPOSITORY_CD_TYP     CD~SYS_CD~SYS_VERSION~TYP_CD
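
    For example, applying the HDM_CD_REPOSITORY pattern (CD~SYS_CD~SYS_VERSION) to the Responsible Service Provider seed data value shown in Section 2.7.2 gives the integration ID 'ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV~EHA_CUSTOM_CD_SYS~1.0', so an error logged for that record can be located with:

    select * from HDM_CD_REPOSITORY where INTEGRATION_ID = 'ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV~EHA_CUSTOM_CD_SYS~1.0';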


2.10.1.2 Seed Data Error Types

This section describes the different error types that can be logged and the resolution:

Note:

In Table 2-5, non-EHA user refers to individuals using the application.

Table 2-5 Seed Data Error Types

Each entry below lists the error type and its description, followed by the identification steps and resolution for the HDM schema and then for the HDI schema.

EHA_WARNING: Code name exists

Code name: <CODE NAME> exists. New Oracle seed data record inserted with same code name.

Search Code Repository for the record with the code name specified in the Error Description.

select * from HDM_CD_REPOSITORY where CD_NM = '<CODE NAME SPECIFIED IN THE ERROR STATEMENT>';

For example,

SQL> select * from HDM_CD_REPOSITORY where CD_NM = 'Patient Withdrew';

Search Code Repository for the record with the code name as specified in the Error Description.

select * from HDI_CD_REPOSITORY where CD_NM = '<CODE NAME SPECIFIED IN THE ERROR STATEMENT>';

For example,

SQL> select * from HDI_CD_REPOSITORY where CD_NM = 'Patient Withdrew';

Resolution

After identifying the code name, select one of the following options:

  • Delete the conflicting seed data record inserted by the Oracle seed data procedure (identified by the EHA user) and use your own seed data.

  • Delete the data record that you have inserted which caused the code clash and instead use the Oracle seed data record.

Resolution

After identifying the code name, select one of the following options:

  • Delete the conflicting seed data record inserted by the Oracle seed data procedure (identified by the EHA User) and use your own seed data.

  • Delete the data record that you have inserted which caused the code clash and instead use the Oracle seed data record.

EHA_ERROR: Insert failed: Creation of version failed

Non-EHA user has versioned a record. Unable to create a new version of the record.

Navigate to the table identified in HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.

select * from <HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> where INTEGRATION_ID = '<HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID>' and CURRENT_FLG='Y';

For example,

SQL> select * from HDM_CD_REPOSITORY where INTEGRATION_ID = 'SPCMN_TYP_CD_IVF~EHA_CUSTOM_CD_SYS~1.0' and CURRENT_FLG='Y';

Navigate to the table identified in HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.

select * from <HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> t1 where INT_ID = <HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID> and SRC_CHANGED_ON_DT = (select max(SRC_CHANGED_ON_DT) from <HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> t2 where t1.int_id = t2.int_id);

For example,

SQL> select * from HDI_CD_REPOSITORY cr1 where INT_ID = 'SPCMN_TYP_CD_IVF~EHA_CUSTOM_CD_SYS~1.0' and SRC_CHANGED_ON_DT = (select max(SRC_CHANGED_ON_DT) from HDI_CD_REPOSITORY cr2 where cr1.int_id = cr2.int_id);

Resolution

This error occurs when Oracle provided seed data was previously updated and versioned by a non-EHA user. This indicates that you have taken "ownership" of the data and any subsequent updates through the Oracle seed data process are rejected with this error message. Ignore the Oracle provided seed data.

Resolution

This error occurs when Oracle provided seed data was previously updated and versioned by a non-EHA user. This indicates that you have taken "ownership" of the data and any subsequent updates through the Oracle seed data process are rejected with this error message. Ignore the Oracle provided seed data.

EHA_ERROR: Update failed

Non-EHA user has changed the record. Unable to update the record.

Navigate to the table identified in HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.

select * from <HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> where INTEGRATION_ID = '<HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID>' and CURRENT_FLG='Y';

For example,

SQL> select * from HDM_CD_REPOSITORY where INTEGRATION_ID = 'SPCMN_TYP_CD_IVF~EHA_CUSTOM_CD_SYS~1.0' and CURRENT_FLG='Y';

This error will not occur in HDI as no updates are supported in HDI.

Resolution

This error occurs when Oracle provided seed data has previously been updated by a non-EHA user. This indicates that you have taken "ownership" of the data and any subsequent updates through the Oracle seed data process are rejected with this error message. Ignore the Oracle provided seed data.

EHA_ERROR: Insert failed: Duplicate integration ID

Non-EHA user has created a record with the same integration ID. Unable to create a new record.

Navigate to the table identified in HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.

select * from <HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> where INTEGRATION_ID = <HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID> and CURRENT_FLG='Y';

For example,

SQL> select * from HDM_CD_REPOSITORY where INTEGRATION_ID = 'SPCMN_TYP_CD_IVF~EHA_CUSTOM_CD_SYS~1.0' and CURRENT_FLG='Y';

Navigate to the table identified in HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.

select * from <HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> t1 where INT_ID = <HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID> and SRC_CHANGED_ON_DT = (select max(SRC_CHANGED_ON_DT) from <HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> t2 where t1.int_id = t2.int_id);

For example,

SQL> select * from HDI_CD_REPOSITORY cr1 where INT_ID = 'SPCMN_TYP_CD_IVF~EHA_CUSTOM_CD_SYS~1.0' and SRC_CHANGED_ON_DT = (select max(SRC_CHANGED_ON_DT) from HDI_CD_REPOSITORY cr2 where cr1.int_id = cr2.int_id);

Resolution

After examining the conflicting data, select one of the following options:

  • To insert Oracle seed data, modify the integration ID of the conflicting record that you have inserted and run the seed data procedure again.

  • If the Oracle supplied seed data is not required, ignore the Oracle provided seed data.

Resolution

After examining the conflicting data, you can select one of the following options:

  • To insert Oracle seed data, modify the integration ID of the conflicting record that you have inserted and run the seed data procedure again.

  • If the Oracle supplied seed data is not required, ignore the Oracle provided seed data.

EHA_ERROR: PL/SQL_ERROR

Any other PL/SQL errors encountered when inserting seed data; this applies to both the HDM and HDI schemas.


2.11 Uninstalling the Oracle Healthcare Foundation Data Model

Note:

Oracle recommends that you uninstall the Data Model only from the development environment.
  1. Execute the following commands to drop the user schemas, connecting as the SYSTEM user. Replace the user schema names and tablespace names with the values provided during the installation.

    drop user <hdi> cascade;
    drop user <hdm> cascade;
    drop user <hcd> cascade;
    drop user <odb> cascade;
    drop user <cdm> cascade;
    drop user <ent> cascade;
    drop user <job> cascade;
    drop user <svc> cascade;
    drop tablespace <hdi_temp> INCLUDING CONTENTS and datafiles;
    drop tablespace <hdm_temp> INCLUDING CONTENTS and datafiles;
    drop tablespace <hcd_temp> INCLUDING CONTENTS and datafiles;
    drop tablespace <odb_temp> INCLUDING CONTENTS and datafiles;
    drop tablespace <cdm_temp> INCLUDING CONTENTS and datafiles;
    drop tablespace <ent_temp> INCLUDING CONTENTS and datafiles;
    drop tablespace <job_temp> INCLUDING CONTENTS and datafiles;
    drop tablespace <svc_temp> INCLUDING CONTENTS and datafiles;
    drop tablespace <hdi_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <hdm_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <hcd_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <odb_data_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <odb_index_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <odb_lob_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <cdm_data_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <cdm_index_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <ent_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <job_data_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <job_index_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <job_store_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <job_lob_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <job_tbs_ts> INCLUDING CONTENTS and datafiles;
    drop tablespace <svc_ts> INCLUDING CONTENTS and datafiles;
    
  2. Connect as the SYS user, verify the records in the following tables, and delete the entries for the <job> schema, that is, the rows where STORE_OWNER=<job schema>.

    For example,

    delete from DBFS_SFS$_FS cascade;
    delete from DBFS$_MOUNTS cascade;
    delete from DBFS$_STORES cascade;
    delete from DBFS_SFS$_VOL cascade;
    delete from DBFS_SFS$_TAB cascade;
    commit;