Oracle® Healthcare Foundation Installation Guide Release 7.0.1 E70997-03 |
This chapter explains how to install the Oracle Healthcare Foundation Data Model. It contains the following sections:
Section 2.4, "Installing the Oracle Healthcare Foundation Data Model in Typical Mode"
Section 2.5, "Installing the Oracle Healthcare Foundation Data Model in Custom Mode"
Section 2.6, "Upgrading Healthcare Data Warehouse Foundation to Oracle Healthcare Foundation"
Section 2.11, "Uninstalling the Oracle Healthcare Foundation Data Model"
The following are the prerequisites to install the Oracle Healthcare Foundation Data Model:
Oracle Database 12.1.0.2 Enterprise Edition
The SQL*Plus utility is available on the installation server.
A container database with a single pluggable database instance, created with the help of a database administrator.
A Java Virtual Machine is installed on the database server.
Make sure that the password expiry notification message is not displayed for the SYS and SYSTEM schemas.
The NLS_LENGTH_SEMANTICS parameter is set to either CHAR or BYTE, based on your requirements.
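As a quick sanity check (not performed by the installer), you can confirm the current setting from SQL*Plus before installing; the ALTER SYSTEM statement is an example only and should be agreed with your DBA:

```sql
-- Check the current NLS_LENGTH_SEMANTICS value (run as a DBA user)
SHOW PARAMETER NLS_LENGTH_SEMANTICS

-- Example only: set it instance-wide to CHAR; new sessions pick up the change
ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = 'CHAR' SCOPE = BOTH;
```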
For Oracle Healthcare Foundation Globalization Support information, see Oracle Database Globalization Support Guide and set your database character set accordingly.
Database connection through SERVICE_NAME is enabled. Make sure that you are connecting to the pluggable database instance.
You can verify the database connectivity using the following command:
SQL> connect <username>@"(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=<hostname>)(PORT=<port number>))(CONNECT_DATA=(SERVICE_NAME=<service_name>)))"
To upgrade to Oracle Healthcare Foundation 7.0.1, back up existing Oracle Healthcare Data Warehouse Foundation (HDWF) 6.1 and Oracle Healthcare Analytics Data Integration (OHADI) 3.1 environment.
To create an encrypted tablespace during a fresh installation, set up your environment for tablespace encryption. For more information, see the Oracle Database Advanced Security Guide 12.1. The installer only creates the tablespace; it does not handle prerequisites such as setting up the keystore and opening it. The installer checks whether the keystore is open only if the SYS and SYSTEM passwords are provided during installation.
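The keystore prerequisites the installer does not perform can be sketched as follows, using the Oracle Database 12.1 ADMINISTER KEY MANAGEMENT syntax; the wallet path and password are placeholders to adapt to your environment:

```sql
-- Example keystore setup (12.1 syntax; path and password are placeholders)
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/orcl/wallet'
  IDENTIFIED BY "keystore_password";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN
  IDENTIFIED BY "keystore_password";
ADMINISTER KEY MANAGEMENT SET KEY
  IDENTIFIED BY "keystore_password" WITH BACKUP;
```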
Set the GLOBAL_NAMES database initialization parameter to FALSE.
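For example (a sketch; run as a DBA user), you can verify and set the parameter from SQL*Plus:

```sql
-- Verify the current value
SHOW PARAMETER GLOBAL_NAMES

-- Set it for the running instance and the spfile
ALTER SYSTEM SET GLOBAL_NAMES = FALSE SCOPE = BOTH;
```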
For remote installation, make sure the following are available on the client machine:
Oracle Linux 6.5 or 6.6 (64-bit) OS
Oracle Database 12.1.0.2.0 client
If you are installing Oracle Healthcare Foundation on an Exadata database machine, see the Exadata Machine and Exadata Implementation sections in the Oracle Healthcare Foundation Programmer's Guide.
Decide the table compression strategy based on the number of updates to the tables. If the number of updates is high, select the Advanced Compression option instead of HCC (QUERY HIGH).
For a fresh installation on an Exadata environment, selecting HCC (QUERY HIGH) as the compression type slows down the first few incremental loads.
Make sure that you have an appropriate database license for the compression methods you choose.
While installing Oracle Healthcare Foundation Data Model, the installer generates the following log files:
installActions<timestamp>.log - Records the actions of the installer and can be used to diagnose installer issues.
oraInstall<timestamp>.out - Records the output of the SQL scripts run by the installer. Database objects are installed using the Python framework.
oraInstall<timestamp>.err - Records the errors from the SQL scripts run by the installer.
You can ignore the following error:
java.io.FileNotFoundException: /<INSTALL_HOME>/inventory/Components21/
oracle.hsgbu.hc.datamodel/7.0.1.0.0/context.xml
The log files are time stamped and each installation session creates a new log file.
On a Linux machine, the log files are located at $ORACLE_BASE/oraInventory/logs. For example, /u01/app/oraInventory/logs.
<INSTALL_HOME>/dm/install.err - Contains any SQL errors. Database objects are installed using the Python framework, and error logging is redirected to this location.
<INSTALL_HOME>/reports/dm_custom_install<timestamp>.html - Contains the installation summary for the Data Model custom installation.
<INSTALL_HOME>/reports/dm_typical_install<timestamp>.html - Contains the installation summary for the Data Model typical installation.
While reporting any problems that occur during installation, make sure that you include all of the above log files. Check which components were installed successfully by connecting to the enterprise schema and running the following query:
SELECT * FROM W_EHA_PRODUCT_VERSION;
Include the result of the query when reporting any problems to Oracle Support.
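To capture the query output for a support request, one option (a sketch, assuming SQL*Plus) is to spool it to a file:

```sql
-- Run from SQL*Plus while connected to the enterprise schema
SPOOL product_version.txt
SELECT * FROM W_EHA_PRODUCT_VERSION;
SPOOL OFF
```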
The installer creates the following schemas with default names:
Description | Schema Name
---|---
Interface layer schema | hdi
Data warehouse schema | hdm
Common data mart schema | hcd
Omics data bank schema | odb
Cohort data mart schema | cdm
Enterprise schema | ent
Job engine schema | job
Services schema | svc
Note:
If the above schemas already exist in the database, the installer updates the existing schemas.

To install the Oracle Healthcare Foundation Data Model in typical mode, perform the following steps:
Extract the contents of the Oracle Healthcare Foundation media pack to your system.
Navigate to the <media_pack_location>/ folder.
Unzip the OHF_V701_Linux-x64.zip file where you want to launch the installer using the following command:
unzip -a OHF_V701_Linux-x64.zip
Navigate to the Disk1/install folder.
Change the protection on files as follows:
chmod 755 *
Start the Oracle Universal Installer (OUI) using the following command:
If the database server (Exadata or non-Exadata) is on the machine where the installer is running, execute:
sh runInstaller.sh -local
If the database server is on a different machine, execute:
sh runInstaller.sh -local remote_installation=true
The -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.
The Welcome screen appears.
Click Next. The Select a Product to Install screen appears.
Select the Oracle Healthcare Foundation Data Model 7.0.1.0.0 option and click Next.
The Oracle Healthcare Foundation Data Model 7.0.1.0.0 screen appears.
Select the Typical option and click Next.
The Specify Home Details screen appears.
Enter the installation home name and location.
Click Next. The Verify Installation Prerequisites screen appears.
Verify that all the prerequisites are met before proceeding.
Click Next. The Oracle Client Home Configuration screen appears.
Specify the Oracle client home path. The installer validates this path.
Click Next. The Database Configuration screen appears.
Enter values for the following fields:
Hostname - By default, the system host name appears
Port - By default, the port number is 1521. You can edit this field if required.
Service name
System user password
Sys user password
Click Next. The Data Model Configuration screen appears.
Enter values for the following fields:
Interface tables schema password (hdi)
Data warehouse schema password (hdm)
Common data mart password (hcd)
Omics data bank schema password (odb)
Cohort data mart password (cdm)
Enterprise schema password (ent)
Job engine schema password (job)
Services schema password (svc)
Click Next. The Data Model Configuration Verification screen appears.
Click Next. The Tablespace Data File Location screen appears.
Specify the location of the tablespace data files. This is the directory on the database server where data files are created during installation.
For example,
Unix: /u01/oradata/dbname
ASM: +DATA_EX02/hashas01/datafile
Note:
ASM location must always start with +.
Path should not end with /.
The installer does not validate the data file location. If the database server is on a remote machine, make sure the location physically exists; otherwise, the installer fails.
Click Next. The Summary screen appears.
Click Install.
After the installation is complete, the End of Installation screen appears.
Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.
See the log file located at <oraInventory>/logs/installActions<current_timestamp>.log.
Review the generated installation log files for errors. For details, see Section 2.10.
Contact Oracle support, if necessary, to resolve any errors.
To install the Oracle Healthcare Foundation Data Model in custom mode, perform the following steps:
Extract the contents of the Oracle Healthcare Foundation media pack to your system.
Navigate to the <media_pack_location>/ folder.
Unzip the OHF_V701_Linux-x64.zip file where you want to launch the installer using the following command:
unzip -a OHF_V701_Linux-x64.zip
Navigate to the Disk1/install folder.
Change the protection on files as follows:
chmod 755 *
Start the Oracle Universal Installer (OUI) using the following command:
If the database server (Exadata or non-Exadata) is on the machine where the installer is running, execute:
sh runInstaller.sh -local
If the database server is on a different machine, execute:
sh runInstaller.sh -local remote_installation=true
The -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.
The Welcome screen appears.
Click Next. The Select a Product to Install screen appears.
Select the Oracle Healthcare Foundation Data Model 7.0.1.0.0 option and click Next.
The Oracle Healthcare Foundation Data Model 7.0.1.0.0 screen appears.
Select the Custom (Recommended) option and click Next.
The Specify Home Details screen appears.
Enter the installation home name and location.
Click Next. The Verify Installation Prerequisites screen appears.
Verify that all the prerequisites are met before proceeding.
Click Next. The Oracle Client Home Configuration screen appears.
Specify the Oracle client home path. The installer validates this path.
Click Next. The Database Configuration screen appears.
Enter values for the following fields:
Hostname - By default, the system host name appears
Port - By default, the port number is 1521. You can edit this field if required.
Service name
System user password
Sys user password
Click Next. The Table Compression screen is displayed.
Select the compression option for the following schemas from the corresponding drop-down list:
Schema Name | Compression Options for Non-Exadata | Compression Options for Exadata
---|---|---
Interface tables schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default); No Compression
Data warehouse schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression (preferred if updates are high); Hybrid Columnar Compression
Common data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression
Cohort data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression
Omics Data Bank schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default)
Click Next. The Data Model Configuration screen appears.
Enter values for the following fields:
Interface tables schema name
Interface tables schema password
Data warehouse schema name
Data warehouse schema password
Common data mart schema name
Common data mart schema password
Omics data bank schema name
Omics data bank schema password
Click Next. The Data Model Configuration screen appears.
Enter values for the following fields:
Cohort data mart schema name
Cohort data mart password
Enterprise schema name
Enterprise schema password
Job engine schema name
Job engine schema password
Services schema name
Services schema password
Click Next. The Data Model Configuration Verification screen appears.
Click Next. The Tablespace Data File Location screen appears.
Specify the location of the tablespace data files. This is the directory on the database server where the data files are created during installation.
For example,
Unix: /u01/oradata/dbname
ASM: +DATA_EX02/hashas01/datafile
Note:
ASM location must always start with +.
Path should not end with /.
The installer does not validate the data file location. If the database server is on a remote machine, make sure the location physically exists; otherwise, the installer fails.
Click Next. The Tablespace Details screen appears.
The installer creates the following tablespaces for fresh schemas. Make sure that these tablespaces do not exist in the database.
Enter a tablespace name for each schema. You can edit the default tablespace name, initial size, max size, and tablespace encryption for these schemas.
Tablespace Details | Tablespace Name |
---|---|
Default tablespace used for Interface tables (hdi) schema | hdi_ts |
Default tablespace used for Data Warehouse (hdm) schema | hdm_ts |
Default tablespace used for Common Data Mart (hcd) schema | hcd_ts |
Default tablespace for Omics Data Bank (odb) schema | odb_data_ts |
Used for indexes of the Omics Data Bank (odb) schema | odb_index_ts |
Used for LOB columns in the Omics Data Bank (odb) schema | odb_lob_ts |
Default tablespace used for Cohort Data Mart (cdm) schema | cdm_data_ts |
Used for indexes of the Cohort Data Mart (cdm) schema | cdm_index_ts |
Default tablespace used for Enterprise (ent) schema | ent_ts |
Default tablespace for Job Engine (job) schema | job_data_ts |
Used for indexes of the Job Engine (job) schema | job_index_ts |
Used for the Database File System (DBFS) store created as part of the Job Engine installation | job_store_ts |
Used to store LOB data in the DBFS store created for the Job Engine (job) schema | job_lob_ts |
Used for the DBFS store created for the Job Engine (job) schema | job_tbs_ts |
Default tablespace used for services schema (svc) | svc_ts |
Click Next. The Temporary Tablespace Details screen appears.
The installer creates the following temporary tablespaces for fresh schemas. Make sure that these tablespaces do not exist in the database.
You can edit the temporary tablespace name, initial size, and max size for these schemas.
Temporary Tablespace Details | Temporary Tablespace |
---|---|
Interface tables schema (hdi) | hdi_temp |
Data warehouse schema (hdm) | hdm_temp |
Common data mart schema (hcd) | hcd_temp |
Omics Data bank schema (odb) | odb_temp |
Cohort data mart schema (cdm) | cdm_temp |
Enterprise schema (ent) | ent_temp |
Job Engine schema (job) | job_temp |
Services schema (svc) | svc_temp |
Click Next. The Omics Data Bank and Cohort Data Mart Parameters screen appears.
Enter values for the following fields:
Result Partition - Used to partition result tables in the ODB schema. The available options are:
GENE (Default)
STUDY
Promoter Offset - Numerical value to specify a portion of the gene used as a promoter in the ODB schema. The default value is 200.
Flanking Offset - Numerical value to specify the region before and after a gene to link results to a gene in the ODB schema. The default value is 200.
Max Parallel Degree - An option to specify the maximum degree of parallelism to be set on tables or used in the SQL statements for the CDM or ODB schema. It is dependent on the machine configuration of the database server. The default value is 2.
Click Next. The Summary screen appears.
Click Install.
After the installation is complete, the End of Installation screen appears.
Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.
Review the generated installation log files for errors. For details on troubleshooting, see Section 2.10.
Contact Oracle support, if necessary, to resolve any errors.
Before upgrading Healthcare Data Warehouse Foundation (HDWF) 6.1 to Oracle Healthcare Foundation, you must have installed Oracle Healthcare Analytics Data Integration (OHADI) 3.1.
Note:
After upgrading, follow Section 2.7 for data migration from HDWF 6.1 to Oracle Healthcare Foundation 7.0.1.

Upgrade the HDWF 6.1 database instance to Oracle Database 12.1.0.2 Enterprise Edition.
If you are upgrading an Exadata HDWF 6.1 installation, make sure that you drop the non-unique FK indexes on the Data Warehouse schema using the OHADI_FK_INDEX_DROP_DDL.sql script, if this was not done as part of the initial data load. You can find this script in the OHADI 3.1 package.
Extract the contents of the Oracle Healthcare Foundation media pack to your system.
Navigate to the <media_pack_location>/ folder.
Unzip the OHF_V701_Linux-x64.zip file where you want to launch the installer using the following command:
unzip -a OHF_V701_Linux-x64.zip
Navigate to the Disk1/install folder.
Change the protection on files as follows:
chmod 755 *
Make sure that all existing user schemas are disconnected from the database.
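To find sessions that are still connected, you can query V$SESSION as a DBA user; the schema names below are the defaults and are only an example:

```sql
-- List sessions connected as the existing schema users (default names shown)
SELECT sid, serial#, username, status
FROM   v$session
WHERE  username IN ('HDI', 'HDM', 'HCD', 'ODB', 'CDM', 'ENT', 'JOB', 'SVC');

-- If required, disconnect a session using the values returned above
-- ALTER SYSTEM KILL SESSION '<sid>,<serial#>' IMMEDIATE;
```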
Start the Oracle Universal Installer (OUI) using the following command:
If the database server (Exadata or non-Exadata) is on the machine where the installer is running, execute:
sh runInstaller.sh -local
If the database server is on a different machine, execute:
sh runInstaller.sh -local remote_installation=true
The -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.
The Welcome screen appears.
Click Next. The Select a Product to Install screen appears.
Select the Oracle Healthcare Foundation Data Model 7.0.1.0.0 option and click Next.
The Oracle Healthcare Foundation Data Model 7.0.1.0.0 screen appears.
Select the Custom (Recommended) option and click Next.
The Specify Home Details screen appears.
Enter the installation home name and location.
Click Next. The Verify Installation Prerequisites screen appears.
Verify that all the prerequisites are met before proceeding.
Click Next. The Oracle Client Home Configuration screen appears.
Specify the Oracle client home path. The installer validates the path.
Click Next. The Database Configuration screen appears.
Enter values for the following fields:
Hostname - By default, the system host name appears
Port - By default, the port number is 1521. You can edit this field if required.
Service name
System user password
Sys user password
Click Next. The Table Compression screen is displayed.
Select the compression option for the following schemas from the corresponding drop-down list:
Schema Name | Compression Options for Non-Exadata | Compression Options for Exadata
---|---|---
Interface tables schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default); No Compression
Data warehouse schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression (preferred if updates are high); Hybrid Columnar Compression
Common data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression
Cohort data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression
Omics Data Bank schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default)
Click Next. The Data Model Configuration screen appears.
Enter the existing schema details (HDWF 6.1) for the following fields:
Interface tables schema name
Interface tables schema password
Data warehouse schema name
Data warehouse schema password
Common data mart schema name
Common data mart schema password
Omics data bank schema name - Enter a new schema name
Omics data bank schema password
Click Next. The Data Model Configuration screen appears.
Enter values for the following fields:
Cohort data mart schema name
Cohort data mart password
Enterprise schema name
Enterprise schema password
Job engine schema name
Job engine schema password
Services schema name
Services schema password
Click Next. The Data Model Configuration Verification screen appears.
Click Next. The Tablespace Data File Location screen appears.
Specify the location of the tablespace data files. This is the directory on the database server where the data files are created during the installation.
For example,
Unix: /u01/oradata/dbname
ASM: +DATA_EX02/hashas01/datafile
Note:
ASM location must always start with +.
Path should not end with /.
The installer does not validate the data file location. If the database server is on a remote machine, make sure the location physically exists; otherwise, the installer fails.
Click Next. The Tablespace Details screen appears.
The installer creates the following default tablespaces if they do not exist. It also creates a tablespace for the Services schema.
You can edit the default tablespace name, initial size, max size, and tablespace encryption for these schemas. Enter a tablespace name that does not exist in the database.
Tablespace Details | Tablespace Name |
---|---|
Default tablespace for Omics Data Bank (odb) schema | odb_data_ts |
Used for indexes of the Omics Data Bank (odb) schema | odb_index_ts |
Used for LOB columns in the Omics Data Bank (odb) schema | odb_lob_ts |
Default tablespace used for Cohort Data Mart (cdm) schema | cdm_data_ts |
Used for indexes of the Cohort Data Mart (cdm) schema | cdm_index_ts |
Default tablespace used for Enterprise (ent) schema | ent_ts |
Default tablespace for Job Engine (job) schema | job_data_ts |
Used for indexes of the Job Engine (job) schema | job_index_ts |
Used for the Database File System (DBFS) store created as part of the Job Engine installation | job_store_ts |
Used to store LOB data in the DBFS store created for the Job Engine (job) schema | job_lob_ts |
Used for the DBFS store created for the Job Engine (job) schema | job_tbs_ts |
Default tablespace used for services schema (svc) | svc_ts |
Click Next. The Temporary Tablespace Details screen appears.
The installer creates the following temporary tablespaces if they do not exist. It also creates a temporary tablespace for the Services schema.
You can edit the temporary tablespace name, initial size, and max size for these schemas. Enter a tablespace name that does not exist in the database.
Temporary Tablespace Details | Temporary Tablespace |
---|---|
Temporary tablespace used for Omics Data bank schema (odb) | odb_temp |
Temporary tablespace used for Cohort data mart schema (cdm) | cdm_temp |
Temporary tablespace used for Enterprise schema (ent) | ent_temp |
Temporary tablespace used for Job Engine schema (job) | job_temp |
Temporary tablespace used for Services schema (svc) | svc_temp |
Click Next. The Omics Data Bank and Cohort Data Mart Parameters screen appears.
Enter the values in the following fields:
Result Partition - Used to partition result tables in the ODB schema. The available options are:
GENE (Default)
STUDY
Promoter Offset - Numerical value to specify a portion of the gene used as a promoter in the ODB schema. The default value is 200.
Flanking Offset - Numerical value to specify the region before and after a gene to link results to a gene in the ODB schema. The default value is 200.
Max Parallel Degree - An option to specify the maximum degree of parallelism to be set on tables or used in the SQL statements for the CDM or ODB schema. It is dependent on the machine configuration of the database server. The default value is 2.
Click Next. The Tablespace Parameters screen appears.
Enter values for the following fields from the existing CDM, ODB, and Job engine schemas:
Note:
Make sure that these tablespaces exist before the installation and that they were used to create these schemas.

cdm_index_ts_name
odb_index_ts_name
odb_lob_ts_name
job_index_ts_name
job_lob_ts_name
job_tbs_ts_name
job_store_ts_name
Click Next. The Summary screen appears.
Click Install.
After the installation is complete, the End of Installation screen appears.
Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.
Review the generated installation log files for errors. For details on troubleshooting, see Section 2.10.
Contact Oracle support, if necessary, to resolve any errors.
To upgrade from a previous version of the product, follow this migration process.
Data migration for the interface tables and data warehouse schemas is not performed by the installer. Review the data migration scripts in <INSTALL_HOME>/dm/hdi_install/post_ddl_upgrade.sql and <INSTALL_HOME>/dm/hdm_install/post_ddl_upgrade.sql (note that there are references to other scripts within the main scripts). You can modify the scripts before executing them.
Note:
Migration script execution is mandatory and must be completed before ETL execution begins.

Data migration summary:
Specimen.Intervention ID/VN is deprecated and data is migrated to Intervention Specimen.
Specimen Processing Procedure.Specimen Processing Procedure Subtype is added as a mandatory attribute to Specimen Processing Procedure. This is initialized with the appropriate seed data coded value 'SPP_SUBTYP_GEN' during the migration.
Schedule Block.Schedule Block Subtype is added as a mandatory attribute to Schedule Block. This is initialized with the appropriate seed data coded value 'SCH_BLCK_SUBTYP_GEN' during the migration.
Schedule.Service Provider is deprecated and data is transferred to Schedule Service Provider.
Observation.Specimen ID/VN is deprecated and data is transferred to Observation Specimen. On the interface tables schema, data is transferred from Observation, Assessment, and Image Instance to Observation Specimen.
Migrate data from Encounter Service Provider to Encounter.Responsible Service Provider for Encounter Service Provider Relationship Type="Responsible Service Provider". If the same encounter is associated to multiple service providers for a given value of Encounter and Relationship Type="Responsible Service Provider", the service provider that was inserted last will be migrated to Encounter.
The integration ID of the parent table is used wherever a new table is being populated based on an existing table.
The migration scripts use the Oracle-provided seed data values given below. If you have used a different set of seed data values in Encounter Service Provider to represent a Responsible Service Provider relationship, change it in the migration script for both the Interface Tables and Data Warehouse schemas, and run post_ddl_upgrade.sql.
The following seed data values are used for filtering Encounter Service Provider.Encounter Service Provider Relationship Type in the migration scripts:
HDI_ENC_SVCPRV.ENC_SVCPRV_RLTYP_CD='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV' (CD_NM='Responsible Service Provider' AND INTEGRATION_ID='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV~EHA_CUSTOM_CD_SYS~1.0' AND SYS_INT_ID='EHA_CUSTOM_CD_SYS~1.0' AND SYS_DAT_SRC_NUM_ID=1 AND CREATED_BY_USER_ID='EHA_USER' AND CREATED_BY_USER_DS_NUM_ID=1)
On the Data Warehouse schema, it corresponds to HDM_ENC_SVCPRV.ENC_SVCPRV_RLSHPTYP_ID/VN resolving to the following code:
CD='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV' AND CD_NM='Responsible Service Provider' AND INTEGRATION_ID='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV~EHA_CUSTOM_CD_SYS~1.0'
The values mentioned above are presented in a logical way and are not the exact physical representation of the values in the migration scripts. See the SQL files <INSTALL_HOME>/dm/hdi_install/pkb/hdi_upgrade_7_0.pkb and <INSTALL_HOME>/dm/hdm_install/pkb/hdm_upgrade_7_0.pkb for the exact seed data values.
HDM_INTVN_SPCMN.INTVN_SPCMN_RLSHPTYP_ID/VN is populated with the value -1, corresponding to the default value of CODEID_NAV configured in the HMC schema. If you have modified the seed data, change the script <INSTALL_HOME>/dm/hdm_install/post_ddl_upgrade.sql accordingly.
It is assumed that the time stamps for SRC_CHANGED_ON_DT match from the parent to the child reference on the interface schema. For example, the value of HDI_ENC_SVCPRV.ENC_SRC_CHANGED_ON_DT should be the same as that of the parent, HDI_ENC.SRC_CHANGED_ON_DT, for the same record in both tables, down to the second. If it is not the same, change the migration script accordingly.
Key steps related to data migration are logged into the HDI_RSLT_LOG and HDM_RSLT_LOG tables in the respective schemas.
The following are the steps to execute the schema migration scripts after modifications:
Interface tables schema migration:
Open the command prompt and navigate to the <INSTALL_HOME>/dm/hdi_install/ folder.
Connect to the interface tables schema through the command prompt using the SQL*Plus utility.
Execute the script using the command @post_ddl_upgrade.sql.
Data warehouse schema migration:
Open the command prompt and navigate to the <INSTALL_HOME>/dm/hdm_install/ folder.
Connect to the data warehouse schema through the command prompt using the SQL*Plus utility.
Execute the script using the command @post_ddl_upgrade.sql.
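The two migrations above can be sketched as the following SQL*Plus invocations; <INSTALL_HOME>, the schema names, and the connect strings are placeholders for your environment:

```shell
# Interface tables schema migration (placeholders: adjust to your environment)
cd <INSTALL_HOME>/dm/hdi_install/
sqlplus <hdi_schema>@<service_name> @post_ddl_upgrade.sql

# Data warehouse schema migration
cd <INSTALL_HOME>/dm/hdm_install/
sqlplus <hdm_schema>@<service_name> @post_ddl_upgrade.sql
```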
Extract the contents of the Oracle Healthcare Foundation media pack to your system.
Navigate to the <media_pack_location>/ folder.
Unzip the OHF_V701_Linux-x64.zip file where you want to launch the installer using the following command:
unzip -a OHF_V701_Linux-x64.zip
Create the following default tablespaces, providing appropriate tablespace names. The table provides example tablespace names.

Tablespace Details | Tablespace Name | BIG File Tablespace
---|---|---
Default tablespace used for Interface tables (hdi) schema | hdi_ts | Yes
Default tablespace used for Data Warehouse (hdm) schema | hdm_ts | Yes
Default tablespace used for Common Data Mart (hcd) schema | hcd_ts | Yes
Default tablespace for Omics Data Bank (odb) schema | odb_data_ts | Yes
Used for indexes of the Omics Data Bank (odb) schema | odb_index_ts | Yes
Used for LOB columns in the Omics Data Bank (odb) schema | odb_lob_ts | Yes
Default tablespace used for Cohort Data Mart (cdm) schema | cdm_data_ts | Yes
Used for indexes of the Cohort Data Mart (cdm) schema | cdm_index_ts | Yes
Default tablespace used for Enterprise (ent) schema | ent_ts | No
Default tablespace for Job Engine (job) schema | job_data_ts | No
Used for indexes of the Job Engine (job) schema | job_index_ts | No
Used for the Database File System (DBFS) store created as part of the Job Engine installation | job_store_ts | No
Used to store LOB data in the DBFS store created for the Job Engine (job) schema | job_lob_ts | No
Used for the DBFS store created for the Job Engine (job) schema | job_tbs_ts | No
Default tablespace used for Services (svc) schema | svc_ts | No
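As an example of creating the tablespaces above (a sketch; data file paths and sizes are placeholders to agree with your DBA):

```sql
-- BIGFILE tablespace example (rows marked Yes above)
CREATE BIGFILE TABLESPACE hdi_ts
  DATAFILE '/u01/oradata/dbname/hdi_ts01.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;

-- Smallfile tablespace example (rows marked No above)
CREATE TABLESPACE ent_ts
  DATAFILE '/u01/oradata/dbname/ent_ts01.dbf'
  SIZE 100M AUTOEXTEND ON NEXT 10M;
```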
Create the following temporary tablespaces. The table provides an example tablespace name.
Table 2-3 Temporary Tablespaces
Tablespace Details | Tablespace Name | BIG File Tablespace
---|---|---
Temporary tablespace used for Interface tables (hdi) schema | hdi_temp | Yes
Temporary tablespace used for Data Warehouse (hdm) schema | hdm_temp | Yes
Temporary tablespace used for Common Data Mart (hcd) schema | hcd_temp | Yes
Temporary tablespace used for Omics Data Bank (odb) schema | odb_temp | Yes
Temporary tablespace used for Cohort Data Mart (cdm) schema | cdm_temp | Yes
Temporary tablespace used for Enterprise (ent) schema | ent_temp | No
Temporary tablespace used for Job Engine (job) schema | job_temp | No
Temporary tablespace used for Services (svc) schema | svc_temp | No
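A temporary tablespace from the table above can be created along these lines (a sketch; the temp file path and sizes are examples):

```sql
-- BIGFILE temporary tablespace example
CREATE BIGFILE TEMPORARY TABLESPACE hdi_temp
  TEMPFILE '/u01/oradata/dbname/hdi_temp01.dbf'
  SIZE 500M AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED;
```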
Navigate to the directory <media_pack_location>/Disk1/stage/Components/oracle.hsgbu.hc.datamodel/7.0.1.0.0/1/DataFiles/Expanded/filegroup1.
Unzip the master_install.zip file.
Connect to Oracle SYS from the extracted content and create database roles by executing the roles.sql script.
Create the following user schemas with an appropriate default tablespace and temporary tablespace.
Interface layer schema (hdi)
Data warehouse schema (hdm)
Common data mart schema (hcd)
Omics data bank schema (odb)
Cohort data mart (cdm)
Enterprise schema (ent)
Job engine schema (job)
Services schema (svc)
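Each schema can be created along the following lines (a sketch; the password placeholder and the tablespace names are examples matching the tables above):

```sql
-- Example: interface layer schema with its default and temporary tablespaces
CREATE USER hdi IDENTIFIED BY "<password>"
  DEFAULT TABLESPACE hdi_ts
  TEMPORARY TABLESPACE hdi_temp
  QUOTA UNLIMITED ON hdi_ts;
```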
Execute the following scripts/commands as a DBA user. When prompted, enter the respective schema names.
@hdi_install/grant_schema_priv.sql
@hdm_install/grant_schema_priv.sql
@hcd_install/grant_schema_priv.sql
@odb_install/grant_schema_priv.sql
Execute the following commands:
> ALTER USER <odb> QUOTA UNLIMITED ON <odb_index_ts>;
> ALTER USER <odb> QUOTA UNLIMITED ON <odb_lob_ts>;
@cdm_install/grant_schema_priv.sql
Execute the following command:
> ALTER USER <cdm> QUOTA UNLIMITED ON <cdm_index_ts>;
@enterprise_install/grant_schema_priv.sql
@job_install/grant_schema_priv.sql
Execute the following commands:
> ALTER USER <job> QUOTA UNLIMITED ON <job_index_ts>;
> ALTER USER <job> QUOTA UNLIMITED ON <job_store_ts>;
> ALTER USER <job> QUOTA UNLIMITED ON <job_lob_ts>;
> ALTER USER <job> QUOTA UNLIMITED ON <job_tbs_ts>;
@svc_install/grant_schema_priv.sql
Make sure that all the pre-created user sessions are disconnected from the database.
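To confirm that no sessions remain for the pre-created schemas, a DBA user can run a query along these lines (the schema names in the IN list are placeholders for the names you created):

```sql
-- Any rows returned indicate sessions that must be disconnected before installation.
SELECT username, sid, serial#, status
  FROM v$session
 WHERE username IN ('HDI', 'HDM', 'HCD', 'ODB', 'CDM', 'ENT', 'JOB', 'SVC');
```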
Navigate to the <media_pack_location>/Disk1/install folder.
Change the protection on files as follows:
chmod 755 *
Start the Oracle Universal Installer (OUI) using the command appropriate for your environment:
If the database server is on the machine where the installer is running and is an Exadata instance, execute:
sh runInstaller.sh -local dba_tasks=false db_platform=exadata
If the database server is on the machine where the installer is running and is a non-Exadata instance, execute:
sh runInstaller.sh -local dba_tasks=false db_platform=default
If the database server is on a different machine and is an Exadata instance, execute:
sh runInstaller.sh -local remote_installation=true dba_tasks=false db_platform=exadata
If the database server is on a different machine and is a non-Exadata instance, execute:
sh runInstaller.sh -local dba_tasks=false db_platform=default remote_installation=true
where the -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.
The Welcome screen appears.
Click Next. The Select a Product to Install screen appears.
Select the Oracle Healthcare Foundation Data Model 7.0.1.0.0 option, and click Next.
The Oracle Healthcare Foundation Data Model 7.0.1.0.0 screen appears.
Select the Custom (Recommended) option and click Next.
The Specify Home Details screen appears.
Enter the installation home name and location.
Click Next. The Verify Installation Prerequisites screen appears.
Verify that all prerequisites are met before proceeding.
Click Next. The Oracle Client Home Configuration screen appears.
Specify the Oracle client home path. The installer validates this path.
Click Next. The Database Configuration screen appears.
Enter values for the following fields:
Hostname - By default, the system host name appears.
Port - By default, the port number is 1521. You can edit this field if required.
Service name
Note:
The installer does not validate database connectivity. Make sure that you can connect to the database from SQL*Plus.
Click Next. The Table Compression screen appears.
Select the compression option for the following schemas from the corresponding drop-down list:
Schema Name | Compression Options for Non-Exadata | Compression Options for Exadata |
---|---|---|
Interface tables schema | No Compression (default), Advanced Compression | Hybrid Columnar Compression (default), No Compression |
Data warehouse schema | No Compression (default), Advanced Compression | No Compression (default), Advanced Compression (preferred if updates are high), Hybrid Columnar Compression |
Common data mart schema | No Compression (default), Advanced Compression | No Compression (default), Advanced Compression |
Cohort data mart schema | No Compression (default), Advanced Compression | No Compression (default), Advanced Compression |
Omics Data Bank schema | No Compression (default), Advanced Compression | Hybrid Columnar Compression (default) |
Click Next. The Data Model Configuration screen appears.
Enter the respective pre-created schema details for the following:
Interface tables schema name
Interface tables schema password
Data warehouse schema name
Data warehouse schema password
Common data mart schema name
Common data mart schema password
Omics data bank schema name
Omics data bank schema password
Click Next. The Data Model Configuration screen appears.
Enter the pre-created schema details for the following:
Cohort data mart schema name
Cohort data mart password
Enterprise schema name
Enterprise schema password
Job engine schema name
Job engine schema password
Services schema name
Services schema password
Click Next. The Data Model Configuration Verification screen appears.
Click Next. The Tablespace Details screen appears.
Click Next. The Temporary Tablespace Details screen appears.
Click Next. The Omics Data Bank and Cohort Data Mart Parameters screen appears.
Enter the values in the following fields:
Result Partition - Used to partition result tables in the ODB schema. The available options are:
GENE (Default)
STUDY
Promoter Offset - Numerical value to specify a portion of the gene used as a promoter in the ODB schema. The default value is 200.
Flanking Offset - Numerical value to specify the region before and after a gene to link results to a gene in the ODB schema. The default value is 200.
Max Parallel Degree - An option to specify the maximum degree of parallelism to be set on tables or used in the SQL statements for the CDM or ODB schema. It is dependent on the machine configuration of the database server. The default value is 2.
Click Next. The Tablespace Parameters screen appears.
Enter values for the following fields from the existing CDM, ODB, and Job engine schemas:
Note:
Make sure that these tablespaces exist before the installation and were used to create these schemas.
cdm_index_ts_name
odb_index_ts_name
odb_lob_ts_name
job_index_ts_name
job_lob_ts_name
job_tbs_ts_name
job_store_ts_name
Click Next. The Summary screen appears.
Click Install.
After the installation is complete, the End of Installation screen appears.
Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.
Review the generated installation log files for errors. For details on troubleshooting, see Section 2.10.
Contact Oracle support, if necessary, to resolve any errors.
After the CDM and ODB data model installation, connect as a database user with the DBA privilege and execute the following statements to revoke privileges:
REVOKE CREATE ANY DIRECTORY FROM <odb_user>;
REVOKE CREATE ANY CONTEXT FROM <cdm_user>;
where <odb_user> is the ODB schema name and <cdm_user> is the CDM schema name.
If the installation fails, open the log file for errors. For the log file details, see Section 2.3.
You must fix the issues and rerun the installer from the installation home location.
Seed data scripts are executed as part of the installation process for both the HDI and HDM schemas. The seed data loaded into each schema is identical, although the format is specific to each schema.
Errors that occur while populating seed data using the seed data load procedures are logged in the HDM_X_SEED_DATA_ERR_LOG table for the HDM schema and the HDI_X_SEED_DATA_ERR_LOG table for the HDI schema.
The following are the key columns in the error log table (HDM_X_SEED_DATA_ERR_LOG or HDI_X_SEED_DATA_ERR_LOG):
ERR_ID - Unique identifier for a record in the error log table.
ERR_TBL_NM - Specifies the table name for which the error record was logged. For example, an error in the HDM code repository will have the value HDM_CD_REPOSITORY in this column.
ERR_TYP - Indicates the type of error that is logged. There are five types of errors that can be logged. For the different types of errors, see Section 2.10.1.2.
ERR_DESC - A short description of the error.
ERR_INT_ID - Indicates the integration ID of the record for which the error was logged. This column along with the error table name (ERR_TBL_NM) can be used to debug the error. The Integration ID for tables that are populated with seed data has the following pattern:
The following section describes the different error types that can be logged and the approach to understand and resolve them:
Note:
In the following sections, "non-EHA user" refers to individuals using the application.
Error Type - EHA_ERROR: Insert failed: Creation of version failed
Error Type - EHA_ERROR: Insert failed: Duplicate integration ID
Error Description
Code name: <CODE NAME> exists. New Oracle seed data record inserted with same code name.
Resolution for HDM Identification
Search Code Repository for the record having the code name specified in the Error Description.
select * from HDM_CD_REPOSITORY where CD_NM = '<CODE NAME SPECIFIED IN THE ERROR STATEMENT>';
For example,
SQL> select * from HDM_CD_REPOSITORY where CD_NM = 'Patient Withdrew';
Resolution
After identifying the code name, you can select one of the following options:
Delete the conflicting seed data record inserted by the Oracle seed data procedure (identified by the EHA user) and use your own seed data.
Delete the data record that you have inserted which caused the code clash and instead use the Oracle seed data record.
Resolution for HDI Identification
Search Code Repository for the record having the code name as specified in the Error Description.
select * from HDI_CD_REPOSITORY where CD_NM = '<CODE NAME SPECIFIED IN THE ERROR STATEMENT>';
For example,
SQL> select * from HDI_CD_REPOSITORY where CD_NM = 'Patient Withdrew';
Resolution
After identifying the code name, you can select one of the following options:
Delete the conflicting seed data record inserted by the Oracle seed data procedure (identified by the EHA User) and use your own seed data.
Delete the data record that you have inserted which caused the code clash and instead use the Oracle seed data record.
Error Description
Non-EHA user has versioned a record. Unable to create a new version of the record.
Resolution for HDM Identification
Navigate to the table identified in HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.
select * from <HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> where INTEGRATION_ID = '<HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID>' and CURRENT_FLG='Y';
For example,
SQL> select * from HDM_CD_REPOSITORY where INTEGRATION_ID ='SPCMN_TYP_CD_IVFEHA_CUSTOM_CD_SYS1.0' and CURRENT_FLG='Y';
Resolution
This error occurs when Oracle provided seed data was previously updated and versioned by a non-EHA user. This indicates that you have taken "ownership" of the data and any subsequent updates through the Oracle seed data process are rejected with this error message. The normal resolution is to ignore the Oracle provided seed data.
Resolution for HDI Identification
Navigate to the table identified in HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.
select * from <HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> t1 where INT_ID = <HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID> and SRC_CHANGED_ON_DT = (select max(SRC_CHANGED_ON_DT) from <HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> t2 where t1.int_id = t2.int_id);
For example,
SQL> select * from HDI_CD_REPOSITORY cr1 where INT_ID = 'SPCMN_TYP_CD_IVFEHA_CUSTOM_CD_SYS1.0' and SRC_CHANGED_ON_DT = (select max(SRC_CHANGED_ON_DT) from HDI_CD_REPOSITORY cr2 where cr1.int_id = cr2.int_id);
Resolution
This error occurs when Oracle provided seed data was previously updated and versioned by a non-EHA user. This indicates that you have taken "ownership" of the data and any subsequent updates through the Oracle seed data process are rejected with this error message. The normal resolution is to ignore the Oracle provided seed data.
Error Description
Non-EHA user has changed the record. Unable to update the record.
Resolution for HDM Identification
Navigate to the table identified in HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.
select * from <HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> where INTEGRATION_ID = '<HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID>' and CURRENT_FLG='Y';
For example,
SQL> select * from HDM_CD_REPOSITORY where INTEGRATION_ID = 'SPCMN_TYP_CD_IVFEHA_CUSTOM_CD_SYS1.0' and CURRENT_FLG='Y';
Resolution
This error occurs when Oracle provided seed data has previously been updated by a non-EHA user. This indicates that you have taken "ownership" of the data and any subsequent updates via the Oracle seed data process are rejected with this error message. The normal resolution is to ignore the Oracle provided seed data.
Resolution for HDI
This error will not occur in HDI as no updates are supported in HDI.
Error Description
Non-EHA user has created a record with the same integration ID. Unable to create a new record.
Resolution for HDM Identification
Navigate to the table identified in HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.
select * from <HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> where INTEGRATION_ID = <HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID> and CURRENT_FLG='Y';
For example,
SQL> select * from HDM_CD_REPOSITORY where INTEGRATION_ID ='SPCMN_TYP_CD_IVFEHA_CUSTOM_CD_SYS1.0' and CURRENT_FLG='Y';
Resolution
After examining the conflicting data, you can select one of the following options:
To insert Oracle seed data, modify the integration ID of the conflicting record that you have inserted and run the seed data procedure again.
If the Oracle supplied seed data is not required then ignore the Oracle provided seed data.
Resolution for HDI Identification
Navigate to the table identified in HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record.
select * from <HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> t1 where INT_ID = <HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID> and SRC_CHANGED_ON_DT = (select max(SRC_CHANGED_ON_DT) from <HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM> t2 where t1.int_id = t2.int_id);
For example,
SQL> select * from HDI_CD_REPOSITORY cr1 where INT_ID ='SPCMN_TYP_CD_IVFEHA_CUSTOM_CD_SYS1.0' and SRC_CHANGED_ON_DT = (select max(SRC_CHANGED_ON_DT) from HDI_CD_REPOSITORY cr2 where cr1.int_id = cr2.int_id);
Resolution
After examining the conflicting data, you can select one of the following options:
To insert Oracle seed data, modify the integration ID of the conflicting record that you have inserted and run the seed data procedure again.
If the Oracle supplied seed data is not required then ignore the Oracle provided seed data.
Resolution for HDM
Other PL/SQL errors that are encountered when inserting seed data.
Resolution for HDI
Other PL/SQL errors that are encountered when inserting seed data.
Note:
Oracle recommends that you uninstall the Data Model only from the development environment.
Execute the following commands to drop the user schemas by connecting as the system user. If you installed the Data Model in custom mode, replace the user schema names and tablespace names with the values provided during the installation.
drop user <hdi> cascade;
drop user <hdm> cascade;
drop user <hcd> cascade;
drop user <odb> cascade;
drop user <cdm> cascade;
drop user <ent> cascade;
drop user <job> cascade;
drop user <svc> cascade;
drop tablespace <hdi_temp> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <hdm_temp> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <hcd_temp> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <odb_temp> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <cdm_temp> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <ent_temp> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <job_temp> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <svc_temp> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <hdi_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <hdm_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <hcd_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <odb_data_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <odb_index_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <odb_lob_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <cdm_data_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <cdm_index_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <ent_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <job_data_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <job_index_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <job_store_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <job_lob_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <job_tbs_ts> INCLUDING CONTENTS AND DATAFILES;
drop tablespace <svc_ts> INCLUDING CONTENTS AND DATAFILES;
Connect as the SYS user, verify the records in the following tables, and delete the entries for the <job> schema (that is, rows where STORE_OWNER = <job schema>).
For example,
delete from DBFS_SFS$_FS cascade;
delete from DBFS$_MOUNTS cascade;
delete from DBFS$_STORES cascade;
delete from DBFS_SFS$_VOL cascade;
delete from DBFS_SFS$_TAB cascade;
commit;
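Before running the deletes, the Job Engine rows can be inspected with a query such as the following (a sketch based on the STORE_OWNER filter mentioned above; <JOB> is a placeholder for the Job Engine schema name in upper case). Because the delete statements shown above remove all rows, restrict them to the Job Engine entries if these registry tables also hold stores owned by other schemas.

```sql
-- Confirm which DBFS store rows belong to the Job Engine schema before deleting.
SELECT * FROM DBFS$_STORES WHERE STORE_OWNER = '<JOB>';
```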