Oracle® Healthcare Foundation Installation Guide Release 7.1 E79438-01
This chapter describes the steps required to install the OHF Data Model. It contains the following sections:
Section 2.5, "Installing the Oracle Healthcare Foundation Data Model"
Section 2.6, "Upgrading Healthcare Data Warehouse Foundation to Oracle Healthcare Foundation"
Section 2.11, "Uninstalling the Oracle Healthcare Foundation Data Model"
The following are the supported upgrade paths:
Healthcare Data Warehouse Foundation (HDWF) 6.1 with Oracle Healthcare Analytics Data Integration (OHADI) 3.1 installed to OHF 7.1
OHF 7.0.1 to OHF 7.1
- Install Oracle Database 12.1.0.2 Enterprise Edition.
- Create a container database with a single pluggable database instance, with the help of a Database Administrator.
- Install a Java Virtual Machine on the database server.
- Set the NLS_LENGTH_SEMANTICS parameter to either CHAR or BYTE based on your requirements. For Oracle Healthcare Foundation globalization support information, see the Oracle Database Globalization Support Guide and set your database character set accordingly.
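For example, a sketch for reviewing and setting the parameter from SQL*Plus (choose CHAR or BYTE per your requirements):

    SHOW PARAMETER NLS_LENGTH_SEMANTICS
    ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = 'CHAR' SCOPE = BOTH;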
- Enable database connection through SERVICE_NAME and make sure that you are connecting to the pluggable database instance. Verify the database connectivity using the following command:

    SQL> connect <username>@<hostname>:<port number>/<service name>
- Make sure the SQL*Plus utility is available on the installation server.
- Make sure that the password expiry notification message does not display for the SYS and SYSTEM schemas.
- To upgrade to Oracle Healthcare Foundation 7.1, back up the data model schemas of the existing Healthcare Data Warehouse Foundation (HDWF) environment.
- To create an encrypted tablespace during a fresh installation, set up your environment for tablespace encryption. For more information, see the Oracle Database Advanced Security Guide 12.1. The installer only creates the tablespace and does not handle prerequisites such as setting up the keystore and opening it. The installer checks whether the keystore is open only if the SYS and SYSTEM passwords are provided during installation.
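A minimal sketch for verifying the keystore from SQL*Plus before running the installer, assuming a software keystore configured per the Advanced Security Guide:

    -- Check the keystore status; the expected value is OPEN.
    SELECT wrl_type, status FROM v$encryption_wallet;
    -- If it is closed, open it with your keystore password:
    ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "<keystore_password>";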
- Set the GLOBAL_NAMES database initialization parameter to false.
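For example (a sketch):

    SHOW PARAMETER GLOBAL_NAMES
    ALTER SYSTEM SET GLOBAL_NAMES = FALSE SCOPE = BOTH;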
- For remote installation, make sure the following are available on the client machine:
- To upgrade, execute the following query as a SYSTEM user to validate whether the data model user has OHF_APPLICATION_ROLE as the default role. If it is not a default role, execute the following command as a SYSTEM user to set it as a default role:
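The guide's exact statements are not reproduced above; a sketch using the standard DBA_ROLE_PRIVS view (the <dm_user> placeholder is illustrative):

    -- Validate: DEFAULT_ROLE should be YES for OHF_APPLICATION_ROLE.
    SELECT granted_role, default_role
      FROM dba_role_privs
     WHERE grantee = UPPER('<dm_user>')
       AND granted_role = 'OHF_APPLICATION_ROLE';
    -- If needed, set default roles (DEFAULT ROLE ALL preserves other granted roles):
    ALTER USER <dm_user> DEFAULT ROLE ALL;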
If you are installing OHF on an Exadata database machine, see the Exadata Machine and Exadata Implementation sections in the Oracle Healthcare Foundation Programmer's Guide.
Decide your table compression strategy based on the number of updates to the tables. If the number of updates is high, select the Advanced Compression option instead of HCC (QUERY HIGH).
For a fresh installation on an Exadata environment, selecting the compression type HCC (QUERY HIGH) will slow down the first few incremental loads.
Make sure that you have an appropriate database license for the compression methods you choose.
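After installation, you can confirm which compression was actually applied to a schema's tables; a sketch using the DBA_TABLES dictionary view:

    -- COMPRESS_FOR shows, for example, ADVANCED or QUERY HIGH.
    SELECT table_name, compression, compress_for
      FROM dba_tables
     WHERE owner = UPPER('<hdm_user>')
       AND compression = 'ENABLED';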
While installing the Data Model, the installer generates the following log files:
Table 2-1 Installation Log Files
File Name | Description |
---|---|
installActions<timestamp>.log | Records the actions of the installer and can be used to diagnose issues with the installer. |
oraInstall<timestamp>.out | Records the output of the SQL scripts run by the installer. Database objects are installed using the Python framework. |
oraInstall<timestamp>.err | Records the errors from the SQL scripts run by the installer. You can ignore the following error: |
<INSTALL_HOME>/dm/install.err | Contains any SQL errors. Database objects are installed using the Python framework and error logging is redirected to this location. |
<INSTALL_HOME>/reports/dm_install<timestamp>.html | Contains the installation summary for the Data Model installation. |

The log files are time stamped and each installation session creates a new log file. On a Linux machine, the log files are located in $ORACLE_BASE/oraInventory/logs. For example, /u01/app/oraInventory/logs.
When reporting any problems that occur during installation, include all of the above log files. Check which components were installed successfully by connecting to the enterprise schema and running the following query:
SELECT * FROM W_EHA_PRODUCT_VERSION;
Include the result of this query when reporting problems to Oracle Support.
To install the OHF Data Model:
Extract the contents of the OHF media pack to your system.
Navigate to the <media_pack_location>/ folder.
Unzip the OHF_V71_Linux-x64.zip file where you want to launch the installer using the following command:
unzip -a OHF_V71_Linux-x64.zip
Navigate to the Disk1/install folder.
Change the protection on files as follows:
chmod 755 *
Start the Oracle Universal Installer (OUI) using the following command:
If the database server (Exadata or non-Exadata) is on the machine where the installer is running, execute:
sh runInstaller.sh -local
If the database server is on a different machine, execute:
sh runInstaller.sh -local remote_installation=true
where the -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.
The Welcome screen appears.
Click Next. The Select a Product to Install screen appears.
Select the Oracle Healthcare Foundation Data Model 7.1.0.0.0 option and click Next.
The Specify Home Details screen appears.
Enter the installation home name and location and click Next.
The Verify Installation Prerequisites screen appears.
Verify if all the prerequisites are met before proceeding and click Next.
The Oracle Client Home Configuration screen appears.
Specify the Oracle client home path and click Next. The installer validates this path.
The Database Configuration screen appears.
Enter values for the following fields:
Hostname - By default, the system host name appears
Port - By default, the port number is 1521. You can edit this field if required.
Service name
System user password
Sys user password
Click Next. The Table Compression screen is displayed.
Select the compression option for the following schemas from the corresponding drop-down list:
Schema Name | Compression Options for Non-Exadata | Compression Options for Exadata |
---|---|---|
Interface tables schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default); No Compression |
Data warehouse schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression (preferred if updates are high); Hybrid Columnar Compression |
Common data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression |
Cohort data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression |
Omics Data Bank schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default) |
Click Next. The Data Model Configuration screen appears.
Enter values for the following fields:
Interface tables schema name
Interface tables schema password
Data warehouse schema name
Data warehouse schema password
Common data mart schema name
Common data mart schema password
Omics data bank schema name
Omics data bank schema password
Click Next. The Data Model Configuration screen appears.
Enter values for the following fields:
Cohort data mart schema name
Cohort data mart password
Enterprise schema name
Enterprise schema password
Job engine schema name
Job engine schema password
Services schema name
Services schema password
Click Next in the Data Model Configuration Verification screen. The Tablespace Data File Location screen appears.
Specify the location of the Tablespace data files. This is the directory on the database server where the data files are created during the installation.
For example,
Unix: /u01/oradata/dbname
ASM: +DATA_EX02/hashas01/datafile
Note:
ASM location must always start with +.
Path should not end with /.
The installer does not validate the data file location. Make sure the location physically exists if the database server is on a remote server. Otherwise, the installer fails.
Click Next. The Tablespace Details screen appears.
The installer creates the following tablespaces for a fresh schema. Make sure that these tablespaces do not exist in the database; a quick existence check follows the table below.
Enter a tablespace name for each schema. You can edit the default tablespace name, initial size, max size, and tablespace encryption for these schemas.
Tablespace Details | Tablespace Name |
---|---|
Default tablespace used for Interface tables (hdi) schema | hdi_ts |
Default tablespace used for Data Warehouse (hdm) schema | hdm_ts |
Default tablespace used for Common Data Mart (hcd) schema | hcd_ts |
Default tablespace for Omics Data Bank (odb) schema | odb_data_ts |
Used for indexes of the Omics Data Bank (odb) schema | odb_index_ts |
Used for LOB columns in the Omics Data Bank (odb) schema | odb_lob_ts |
Default tablespace used for Cohort Data Mart (cdm) schema | cdm_data_ts |
Used for indexes of the Cohort Data Mart (cdm) schema | cdm_index_ts |
Default tablespace used for Enterprise (ent) schema | ent_ts |
Default tablespace for Job Engine (job) schema | job_data_ts |
Used for indexes of the Job Engine (job) schema | job_index_ts |
Database File System (DBFS) store is created as part of the Job Engine installation | job_store_ts |
Name of the tablespace to store LOB data in the DBFS store created for the JOB ENGINE schema user | job_lob_ts |
Name of the tablespace to be used for the DBFS store created for the JOB_ENGINE schema | job_tbs_ts |
Default tablespace used for services schema (svc) | svc_ts |
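A sketch for the existence check mentioned above; any rows returned indicate a name clash (extend the IN list with all of your chosen names):

    SELECT tablespace_name
      FROM dba_tablespaces
     WHERE tablespace_name IN ('HDI_TS', 'HDM_TS', 'HCD_TS', 'ODB_DATA_TS', 'CDM_DATA_TS');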
Click Next. The Temporary Tablespace Details screen appears.
The installer creates the following temporary tablespaces for a fresh schema. Make sure that these tablespaces do not exist in the database.
You can edit the temporary tablespace name, initial size, and max size for these schemas.
Temporary Tablespace Details | Temporary Tablespace |
---|---|
Interface tables schema (hdi) | hdi_temp |
Data warehouse schema (hdm) | hdm_temp |
Common data mart schema (hcd) | hcd_temp |
Omics Data bank schema (odb) | odb_temp |
Cohort data mart schema (cdm) | cdm_temp |
Enterprise schema (ent) | ent_temp |
Job Engine schema (job) | job_temp |
Services schema (svc) | svc_temp |
Click Next. The Omics Data Bank and Cohort Data Mart Parameters screen appears.
Enter values for the following fields:
Result Partition - Used to partition result tables in the ODB schema. The available options are:
GENE (Default)
STUDY
Promoter Offset - Numerical value to specify a portion of the gene used as a promoter in the ODB schema. The default value is 200.
Flanking Offset - Numerical value to specify the region before and after a gene to link results to a gene in the ODB schema. The default value is 200.
Max Parallel Degree - An option to specify the maximum degree of parallelism to be set on tables or used in the SQL statements for the CDM or ODB schema. It is dependent on the machine configuration of the database server. The default value is 2.
Job Store Name
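To choose a unique Job Store Name, you can list the stores already created in the instance; this is the same query used in the upgrade flow later in this chapter (connect as SYS):

    SELECT CASE WHEN SUBSTR(store_name, 1, 3) = 'FS_'
                THEN SUBSTR(store_name, 4) END AS store_name
      FROM TABLE(dbms_dbfs_content.liststores);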
Note:
If you are using the same database instance for multiple environments, enter a unique job store name for each job schema.
Click Next. The Summary screen appears.
Click Install.
After the installation is complete, the End of Installation screen appears.
Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.
Review the generated installation log files for errors. For details on troubleshooting, see Section 2.10.
Contact Oracle support, if necessary, to resolve any errors.
For the list of supported upgrade paths, see Section 2.1.
If you are upgrading from Healthcare Data Warehouse Foundation (HDWF) 6.1 to Oracle Healthcare Foundation, you must install Oracle Healthcare Analytics Data Integration (OHADI) 3.1.
Note:
After upgrading, follow Section 2.7 for data migration from HDWF to OHF.
To upgrade from HDWF 6.1 to OHF, upgrade the HDWF 6.1 database instance to Oracle Database 12.1.0.2 Enterprise Edition.
Extract the contents of the OHF media pack to your system.
Navigate to the <media_pack_location>/ folder.
Unzip the OHF_V71_Linux-x64.zip file where you want to launch the installer using the following command:
unzip -a OHF_V71_Linux-x64.zip
Navigate to the Disk1/install folder.
Change the protection on files as follows:
chmod 755 *
Make sure that all existing user schemas are disconnected from the database.
Start the Oracle Universal Installer (OUI) using the following command:
If the database server (Exadata or non-Exadata) is on the machine where the installer is running, execute:
sh runInstaller.sh -local
If the database server is on a different machine, execute:
sh runInstaller.sh -local remote_installation=true
where the -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.
The Welcome screen appears.
Click Next. The Select a Product to Install screen appears.
Select the Oracle Healthcare Foundation Data Model 7.1.0.0.0 option and click Next.
The Specify Home Details screen appears.
Enter the installation home name and location, and click Next.
The Verify Installation Prerequisites screen appears.
Verify if all the prerequisites are met before proceeding and click Next.
The Oracle Client Home Configuration screen appears.
Specify the Oracle client home path and click Next. The installer validates the path.
The Database Configuration screen appears.
Enter values for the following fields:
Hostname - By default, the system host name appears
Port - By default, the port number is 1521. You can edit this field if required.
Service name
System user password
Sys user password
Click Next. The Table Compression screen is displayed.
Select the compression option for the following schemas from the corresponding drop-down list:
Schema Name | Compression Options for Non-Exadata | Compression Options for Exadata |
---|---|---|
Interface tables schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default); No Compression |
Data warehouse schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression (preferred if updates are high); Hybrid Columnar Compression |
Common data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression |
Cohort data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression |
Omics Data Bank schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default) |
Click Next. The Data Model Configuration screen appears.
Enter the existing schema details (HDWF 6.1) for the following fields:
Interface tables schema name
Interface tables schema password
Data warehouse schema name
Data warehouse schema password
Common data mart schema name
Common data mart schema password
Omics data bank schema name - Enter a new schema name
Omics data bank schema password
Click Next. The Data Model Configuration screen appears.
Enter values for the following fields:
Cohort data mart schema name
Cohort data mart password
Enterprise schema name
Enterprise schema password
Job engine schema name
Job engine schema password
Services schema name
Services schema password
Click Next in the Data Model Configuration Verification screen.
Click Next. The Tablespace Data File Location screen appears.
Specify the location of the tablespace data files. This is the directory on the database server where the data files are created during the installation.
For example,
Unix: /u01/oradata/dbname
ASM: +DATA_EX02/hashas01/datafile
Note:
ASM location must always start with +.
Path should not end with /.
The installer does not validate the data file location. Make sure the location physically exists if the database server is on a remote server. Otherwise, the installer fails.
Click Next. The Tablespace Details screen appears.
The installer creates the following default tablespaces if they do not exist. It also creates a tablespace for the Services schema.
You can edit the default tablespace name, initial size, max size, and tablespace encryption for these schemas. Enter a tablespace name that does not exist in the database.
Tablespace Details | Tablespace Name |
---|---|
Default tablespace for Omics Data Bank (odb) schema | odb_data_ts |
Used for indexes of the Omics Data Bank (odb) schema | odb_index_ts |
Used for LOB columns in the Omics Data Bank (odb) schema | odb_lob_ts |
Default tablespace used for Cohort Data Mart (cdm) schema | cdm_data_ts |
Used for indexes of the Cohort Data Mart (cdm) schema | cdm_index_ts |
Default tablespace used for Enterprise (ent) schema | ent_ts |
Default tablespace for Job Engine (job) schema | job_data_ts |
Used for indexes of the Job Engine (job) schema | job_index_ts |
A Database File System (DBFS) store is created as part of the Job Engine installation | job_store_ts |
Name of the tablespace to store LOB data in the DBFS store created for the JOB ENGINE schema user | job_lob_ts |
Name of the tablespace to be used for the DBFS store created for the JOB_ENGINE schema | job_tbs_ts |
Default tablespace used for services schema (svc) | svc_ts |
Click Next. The Temporary Tablespace Details screen appears.
The installer creates the following temporary tablespaces if they do not exist. It also creates a temporary tablespace for the Services schema.
You can edit the temporary tablespace name, initial size, and max size for these schemas. Enter a tablespace name that does not exist in the database.
Temporary Tablespace Details | Temporary Tablespace |
---|---|
Temporary tablespace used for Omics Data bank schema (odb) | odb_temp |
Temporary tablespace used for Cohort data mart schema (cdm) | cdm_temp |
Temporary tablespace used for Enterprise schema (ent) | ent_temp |
Temporary tablespace used for Job Engine schema (job) | job_temp |
Temporary tablespace used for Services schema (svc) | svc_temp |
Click Next. The Omics Data Bank and Cohort Data Mart Parameters screen appears.
Enter the values in the following fields:
Note:
Make sure that you enter the same values used for the OHF 7.0.1 installation for Result Partition, Promoter Offset, Flanking Offset, and Job Store Name.
Result Partition - Used to partition result tables in the ODB schema. The available options are:
GENE (Default)
STUDY
Promoter Offset - Numerical value to specify a portion of the gene used as a promoter in the ODB schema. The default value is 200.
Flanking Offset - Numerical value to specify the region before and after a gene to link results to a gene in the ODB schema. The default value is 200.
Max Parallel Degree - An option to specify the maximum degree of parallelism to be set on tables or used in the SQL statements for the CDM or ODB schema. It is dependent on the machine configuration of the database server. The default value is 2.
Job Store Name
Execute the following query, connected as the SYS user, to find all store names in the database (created by all previous installations):
SELECT case when substr(store_name,1,3) ='FS_' then substr(store_name,4) else null end as store_name FROM TABLE(dbms_dbfs_content.liststores);
Note:
If you are using the same database instance for multiple environments, enter a unique job store name for each job schema.
Click Next. The Tablespace Parameters screen appears.
Enter values for the following fields from the existing CDM, ODB, and Job engine schemas:
Note:
Make sure that these tablespaces exist before the installation; they are the tablespaces that were used to create these schemas.
cdm_index_ts_name
odb_index_ts_name
odb_lob_ts_name
job_index_ts_name
job_lob_ts_name
job_tbs_ts_name
job_store_ts_name
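A sketch for confirming these tablespaces exist before entering them (the placeholders are illustrative):

    SELECT tablespace_name
      FROM dba_tablespaces
     WHERE tablespace_name IN (UPPER('<cdm_index_ts_name>'), UPPER('<odb_index_ts_name>'),
                               UPPER('<job_index_ts_name>'), UPPER('<job_lob_ts_name>'));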
Click Next. The Summary screen appears.
Click Install.
After the installation is complete, the End of Installation screen appears.
Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.
Review the generated installation log files for errors. For details on troubleshooting, see Section 2.10.
Contact Oracle support, if necessary, to resolve any errors.
To upgrade from a previous version of the product, follow this migration process.
Data migration for the interface tables and data warehouse schemas does not occur as part of the installer. Review the data migration scripts in <INSTALL_HOME>/dm/hdi_install/post_ddl_upgrade.sql and <INSTALL_HOME>/dm/hdm_install/post_ddl_upgrade.sql (note that there are references to other scripts within the main script). You can modify the scripts before executing them.
Note:
Migration script execution is mandatory and must be completed before the ETL execution.
Specimen.Intervention ID/VN is deprecated and data is migrated to Intervention Specimen.
Specimen Processing Procedure.Specimen Processing Procedure Subtype is added as a mandatory attribute to Specimen Processing Procedure. It is initialized with the appropriate seed data coded value 'SPP_SUBTYP_GEN' during the migration.
Schedule Block.Schedule Block Subtype is added as a mandatory attribute to Schedule Block. It is initialized with the appropriate seed data coded value 'SCH_BLCK_SUBTYP_GEN' during the migration.
Schedule.Service Provider is deprecated and data is transferred to Schedule Service Provider.
Observation.Specimen ID/VN is deprecated and data is transferred to Observation Specimen. On the interface tables schema, data is transferred from Observation, Assessment, and Image Instance to Observation Specimen.
Data is migrated from Encounter Service Provider to Encounter.Responsible Service Provider for Encounter Service Provider Relationship Type="Responsible Service Provider". If the same encounter is associated to multiple service providers for a given value of Encounter and Relationship Type="Responsible Service Provider", the service provider that was inserted last will be migrated to Encounter.
The integration ID of the parent table is used wherever a new table is being populated based on an existing table.
The migration scripts use the Oracle-provided seed data values given below. If you have used a different set of seed data values in Encounter Service Provider to represent a Responsible Service Provider relationship, change it in the migration script for both the Interface Tables and Data Warehouse schemas, and run post_ddl_upgrade.sql.
The following seed data values are used for filtering Encounter Service Provider.Encounter Service Provider Relationship Type in the migration scripts:
HDI_ENC_SVCPRV.ENC_SVCPRV_RLTYP_CD='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV' (CD_NM='Responsible Service Provider' AND INTEGRATION_ID='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV~EHA_CUSTOM_CD_SYS~1.0' AND SYS_INT_ID='EHA_CUSTOM_CD_SYS~1.0' AND SYS_DAT_SRC_NUM_ID=1 AND CREATED_BY_USER_ID='EHA_USER' AND CREATED_BY_USER_DS_NUM_ID=1)
On the Data Warehouse schema, it corresponds to HDM_ENC_SVCPRV.ENC_SVCPRV_RLSHPTYP_ID/VN resolving to the following code:
CD='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV' AND CD_NM='Responsible Service Provider' AND INTEGRATION_ID='ENC_SVCPRV_RLSHPTYP_RSPNSBL_SVCPRV~EHA_CUSTOM_CD_SYS~1.0'
The values mentioned above are presented logically and are not the exact physical representation of the values in the migration scripts. See the SQL files <INSTALL_HOME>/dm/hdi_install/pkb/hdi_upgrade_7_0.pkb and <INSTALL_HOME>/dm/hdm_install/pkb/hdm_upgrade_7_0.pkb for the exact seed data values.
Key steps related to data migration are logged into the HDI_RSLT_LOG and HDM_RSLT_LOG tables in the respective schemas.
Bill Line Item.Bill Diagnosis ID/VN is deprecated and the corresponding data is migrated to Bill Diagnosis as new records.
Claim Line Item.Claim Diagnosis ID/VN is deprecated and the corresponding data is migrated to Claim Diagnosis as new records.
The values mentioned above are presented logically and are not the exact physical representation of the values in the migration scripts. For exact details, see the SQL files:
<INSTALL_HOME>/dm/hdi_install/pkb/hdi_upgrade_7_1.pkb and <INSTALL_HOME>/dm/hdm_install/pkb/hdm_upgrade_7_1.pkb.
HDM_INTVN_SPCMN.INTVN_SPCMN_RLSHPTYP_ID/VN is populated with the value -1 corresponding to the default value of CODEID_NAV configured in the HMC schema. If you have modified the seed data, change the script <INSTALL_HOME>/dm/hdm_install/post_ddl_upgrade.sql accordingly.
Review the integration ID pattern for records populated using the migration scripts and change it accordingly if needed.
It is assumed that the time stamps for SRC_CHANGED_ON_DT match from the parent to the child reference on the interface schema. For example, the value of HDI_ENC_SVCPRV.ENC_SRC_CHANGED_ON_DT should be the same as that of the parent, that is, HDI_ENC.SRC_CHANGED_ON_DT, for the same record referred to in both tables, down to the second. If they do not match, change the migration script accordingly.
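A sketch of such a mismatch check; the join columns shown are illustrative assumptions, so verify them against your HDWF 6.1 schema before use:

    -- Rows returned indicate parent/child SRC_CHANGED_ON_DT mismatches to
    -- reconcile before running the migration scripts.
    SELECT c.integration_id
      FROM hdi_enc_svcprv c
      JOIN hdi_enc p ON p.integration_id = c.enc_int_id  -- assumed key columns
     WHERE c.enc_src_changed_on_dt <> p.src_changed_on_dt;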
Extract the contents of the OHF media pack to your system.
Navigate to the <media_pack_location>/ folder.
Unzip the OHF_V71_Linux-x64.zip file where you want to launch the installer using the following command:
unzip -a OHF_V71_Linux-x64.zip
Create the following default tablespaces and provide an appropriate tablespace name. The table provides an example tablespace name.
Table 2-2 Tablespaces

Tablespace Details | Tablespace Name | BIG File Tablespace |
---|---|---|
Default tablespace used for Interface tables (hdi) schema | hdi_ts | Yes |
Default tablespace used for Data Warehouse (hdm) schema | hdm_ts | Yes |
Default tablespace used for Common Data Mart (hcd) schema | hcd_ts | Yes |
Default tablespace for Omics Data Bank (odb) schema | odb_data_ts | Yes |
Used for indexes of the Omics Data Bank (odb) schema | odb_index_ts | Yes |
Used for LOB columns in the Omics Data Bank (odb) schema | odb_lob_ts | Yes |
Default tablespace used for Cohort Data Mart (cdm) schema | cdm_data_ts | Yes |
Used for indexes of the Cohort Data Mart (cdm) schema | cdm_index_ts | Yes |
Default tablespace used for Enterprise (ent) schema | ent_ts | No |
Default tablespace for Job Engine (job) schema | job_data_ts | No |
Used for indexes of the Job Engine (job) schema | job_index_ts | No |
Database File System (DBFS) store created as part of the Job Engine installation | job_store_ts | No |
Stores LOB data in the DBFS store created for the JOB_ENGINE schema user | job_lob_ts | No |
Used for the DBFS store created for the JOB_ENGINE schema | job_tbs_ts | No |
Default tablespace used for Services (svc) schema | svc_ts | No |
Create the following temporary tablespaces. The table provides an example tablespace name. A creation sketch follows the table.
Table 2-3 Temporary Tablespaces
Tablespace Details | Tablespace Name | BIG File Tablespace |
---|---|---|
Temporary tablespace used for Interface tables (hdi) schema | hdi_temp | Yes |
Temporary tablespace used for Data Warehouse (hdm) schema | hdm_temp | Yes |
Temporary tablespace used for Common Data Mart (hcd) schema | hcd_temp | Yes |
Temporary tablespace used for Omics Data Bank (odb) schema | odb_temp | Yes |
Temporary tablespace used for Cohort Data Mart (cdm) schema | cdm_temp | Yes |
Temporary tablespace used for Enterprise (ent) schema | ent_temp | No |
Temporary tablespace used for Job Engine (job) schema | job_temp | No |
Temporary tablespace used for Services (svc) schema | svc_temp | No |
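A minimal creation sketch, assuming the example names above; the file paths and sizes are illustrative only:

    CREATE BIGFILE TABLESPACE hdi_ts
      DATAFILE '/u01/oradata/dbname/hdi_ts01.dbf' SIZE 1G AUTOEXTEND ON;
    CREATE BIGFILE TEMPORARY TABLESPACE hdi_temp
      TEMPFILE '/u01/oradata/dbname/hdi_temp01.dbf' SIZE 512M AUTOEXTEND ON;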
Navigate to the directory <media_pack_location>/Disk1/stage/Components/oracle.hsgbu.hc.datamodel/7.1.0.0.0/1/DataFiles/Expanded/filegroup1.
Unzip the master_install.zip file.
Connect as Oracle SYS from the extracted content location and create the database roles by executing the roles.sql script.
Create the following user schemas with an appropriate default tablespace and temporary tablespace (a sketch follows the list):
Interface layer schema (hdi)
Data warehouse schema (hdm)
Common data mart schema (hcd)
Omics data bank schema (odb)
Cohort data mart (cdm)
Enterprise schema (ent)
Job engine schema (job)
Services schema (svc)
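A minimal sketch for one schema, assuming the example tablespace names from the tables above; repeat for each of the eight schemas with your own names and passwords:

    CREATE USER hdi IDENTIFIED BY "<password>"
      DEFAULT TABLESPACE hdi_ts
      TEMPORARY TABLESPACE hdi_temp
      QUOTA UNLIMITED ON hdi_ts;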
Execute the following scripts/commands as a DBA user. When prompted, enter the respective schema names.
@hdi_install/grant_schema_priv.sql
@hdm_install/grant_schema_priv.sql
@hcd_install/grant_schema_priv.sql
@odb_install/grant_schema_priv.sql
Execute the following commands:
> ALTER USER <odb> QUOTA UNLIMITED on <odb_index_ts>
> ALTER USER <odb> QUOTA UNLIMITED on <odb_lob_ts>
@cdm_install/grant_schema_priv.sql
Execute the following command:
> ALTER USER <cdm> QUOTA UNLIMITED on <cdm_index_ts>
@enterprise_install/grant_schema_priv.sql
@job_install/grant_schema_priv.sql
Execute the following commands:
> ALTER USER <job> QUOTA UNLIMITED on <job_index_ts>
> ALTER USER <job> QUOTA UNLIMITED on <job_store_ts>
> ALTER USER <job> QUOTA UNLIMITED on <job_lob_ts>
> ALTER USER <job> QUOTA UNLIMITED on <job_tbs_ts>
@svc_install/grant_schema_priv.sql
Make sure that all the pre-created user sessions are disconnected from the database.
Navigate to the <media_pack_location>/Disk1/install folder.
Change the protection on files as follows:
chmod 755 *
Start the Oracle Universal Installer (OUI) using the following command:
If the database server is on the machine where the installer is running and is an Exadata instance, execute:
sh runInstaller.sh -local dba_tasks=false db_platform=exadata
If the database server is on the machine where the installer is running and is a non-Exadata instance, execute:
sh runInstaller.sh -local dba_tasks=false db_platform=default
If the database server is on a different machine and is an Exadata instance, execute:
sh runInstaller.sh -local remote_installation=true dba_tasks=false db_platform=exadata
If the database server is on a different machine and is a non-Exadata instance, execute:
sh runInstaller.sh -local dba_tasks=false db_platform=default remote_installation=true
where the -local option installs on the local node irrespective of the cluster nodes specified on the installer machine.
The Welcome screen appears.
Click Next. The Select a Product to Install screen appears.
Select the Oracle Healthcare Foundation Data Model 7.1.0.0.0 option and click Next.
The Specify Home Details screen appears.
Enter the installation home name and location.
Click Next. The Verify Installation Prerequisites screen appears.
Verify if all the prerequisites are met before proceeding.
Click Next. The Oracle Client Home Configuration screen appears.
Specify the Oracle client home path. The installer validates this path.
Click Next. The Database Configuration screen appears.
Enter values for the following fields:
Hostname - By default, the system host name appears
Port - By default, the port number is 1521. You can edit this field if required.
Service name
Note:
The installer does not validate the database connectivity. Make sure that you can connect to the database from SQL*Plus.
Click Next. The Table Compression screen appears.
Select the compression option for the following schemas from the corresponding drop-down list:
Schema Name | Compression Options for Non-Exadata | Compression Options for Exadata |
---|---|---|
Interface tables schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default); No Compression |
Data warehouse schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression (preferred if updates are high); Hybrid Columnar Compression |
Common data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression |
Cohort data mart schema | No Compression (default); Advanced Compression | No Compression (default); Advanced Compression |
Omics Data Bank schema | No Compression (default); Advanced Compression | Hybrid Columnar Compression (default) |
Click Next. The Data Model Configuration screen appears.
Enter the respective pre-created schema details for the following:
Interface tables schema name
Interface tables schema password
Data warehouse schema name
Data warehouse schema password
Common data mart schema name
Common data mart schema password
Omics data bank schema name
Omics data bank schema password
Click Next. The Data Model Configuration screen appears.
Enter the pre-created schema details for the following:
Cohort data mart schema name
Cohort data mart password
Enterprise schema name
Enterprise schema password
Job engine schema name
Job engine schema password
Services schema name
Services schema password
Click Next in the Data Model Configuration Verification, Tablespace Details, and Temporary Tablespace Details screens.
The Omics Data Bank and Cohort Data Mart Parameters screen appears.
Enter the values in the following fields:
Result Partition - Used to partition result tables in the ODB schema. The available options are:
GENE (Default)
STUDY
Promoter Offset - Numerical value to specify a portion of the gene used as a promoter in the ODB schema. The default value is 200.
Flanking Offset - Numerical value to specify the region before and after a gene to link results to a gene in the ODB schema. The default value is 200.
Max Parallel Degree - An option to specify the maximum degree of parallelism to be set on tables or used in the SQL statements for the CDM or ODB schema. It is dependent on the machine configuration of the database server. The default value is 2.
Job Store Name
Execute the following query, connected as the SYS user, to find all store names in the database (created by all previous installations):
SELECT case when substr(store_name,1,3) ='FS_' then substr(store_name,4) else null end as store_name FROM TABLE(dbms_dbfs_content.liststores);
Click Next. The Tablespace Parameters screen appears.
Enter values for the following fields from the existing CDM, ODB, and Job engine schemas:
Note:
Make sure that these tablespaces exist before the installation; they are the tablespaces that were used to create these schemas.
cdm_index_ts_name
odb_index_ts_name
odb_lob_ts_name
job_index_ts_name
job_lob_ts_name
job_tbs_ts_name
job_store_ts_name
Click Next. The Summary screen appears.
Click Install.
After the installation is complete, the End of Installation screen appears.
Click Exit after reviewing the installation information. At the confirmation prompt, click Yes to exit the installer.
Review the generated installation log files for errors. For details on troubleshooting, see Section 2.10.
Contact Oracle support, if necessary, to resolve any errors.
After the Cohort Data Mart (CDM) and Omics Data Bank (ODB) data model installation, connect as a database user with the DBA privilege and execute the following statements to revoke privileges:
REVOKE CREATE ANY DIRECTORY FROM <odb_user>;
REVOKE CREATE ANY CONTEXT FROM <cdm_user>;
where <odb_user> is the ODB schema name and <cdm_user> is the CDM schema name.
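A sketch to confirm that the revokes took effect; no rows are expected:

    SELECT grantee, privilege
      FROM dba_sys_privs
     WHERE (grantee = UPPER('<odb_user>') AND privilege = 'CREATE ANY DIRECTORY')
        OR (grantee = UPPER('<cdm_user>') AND privilege = 'CREATE ANY CONTEXT');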
If the installation fails, open the log file for errors. For the log file details, see Section 2.4.
You must fix the issues and rerun the installer from the installation home location.
Seed data scripts are executed as part of the installation process for both the HDI and HDM schemas. The seed data loaded in each schema is identical, although the format is specific to each of the two schemas.
Errors that occur while populating seed data using the seed data load procedures are logged in the HDM_X_SEED_DATA_ERR_LOG table for the HDM schema and the HDI_X_SEED_DATA_ERR_LOG table for the HDI schema.
The following are the key columns in the error log table (HDM_X_SEED_DATA_ERR_LOG or HDI_X_SEED_DATA_ERR_LOG):
ERR_ID - Unique identifier for a record in the error log table.
ERR_TBL_NM - Specifies the table name for which the error record was logged. For example, an error in the HDM code repository will have the value HDM_CD_REPOSITORY in this column.
ERR_TYP - Indicates the type of error that is logged. There are five types of errors that can be logged. For the different types of errors, see Section 2.10.1.2.
ERR_DESC - A short description of the error.
ERR_INT_ID - Indicates the integration ID of the record for which the error was logged. This column along with the error table name (ERR_TBL_NM) can be used to debug the error. The Integration ID for tables that are populated with seed data has the following pattern:
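To review the logged errors after installation, a minimal sketch against the HDM error log (the HDI log is queried the same way):

    SELECT err_id, err_typ, err_tbl_nm, err_desc, err_int_id
      FROM hdm_x_seed_data_err_log
     ORDER BY err_id;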
This section describes the different error types that can be logged and the resolution:
Note:
In Table 2-5, non-EHA user refers to individuals using the application.
Table 2-5 Seed Data Error Types
Error Type | Description | Identification and Resolution for HDM | Identification and Resolution for HDI |
---|---|---|---|
EHA_WARNING: Code name exists | Code name: <CODE NAME> exists. A new Oracle seed data record is inserted with the same code name. | Identification: Search the Code Repository for the record with the code name specified in the error description. Resolution: After identifying the code name, select one of the following options: | Identification: Search the Code Repository for the record with the code name specified in the error description. Resolution: After identifying the code name, select one of the following options: |
EHA_ERROR: Insert failed: Creation of version failed | A non-EHA user has versioned a record. Unable to create a new version of the record. | Identification: Navigate to the table identified in HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record. Resolution: This error occurs when Oracle-provided seed data was previously updated and versioned by a non-EHA user. This indicates that you have taken "ownership" of the data, and any subsequent updates through the Oracle seed data process are rejected with this error message. Ignore the Oracle-provided seed data. | Identification: Navigate to the table identified in HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record. Resolution: Same as for HDM. |
EHA_ERROR: Update failed | A non-EHA user has changed the record. Unable to update the record. | Identification: Navigate to the table identified in HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record. Resolution: This error occurs when Oracle-provided seed data has previously been updated by a non-EHA user. This indicates that you have taken "ownership" of the data, and any subsequent updates through the Oracle seed data process are rejected with this error message. Ignore the Oracle-provided seed data. | This error does not occur in HDI as no updates are supported in HDI. |
EHA_ERROR: Insert failed: Duplicate integration ID | A non-EHA user has created a record with the same integration ID. Unable to create a new record. | Identification: Navigate to the table identified in HDM_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDM_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record. Resolution: After examining the conflicting data, select one of the following options: | Identification: Navigate to the table identified in HDI_X_SEED_DATA_ERR_LOG.ERR_TBL_NM and use HDI_X_SEED_DATA_ERR_LOG.ERR_INT_ID to identify the error record. Resolution: After examining the conflicting data, select one of the following options: |
EHA_ERROR: PL/SQL_ERROR | - | Other PL/SQL errors that are encountered when inserting seed data. | Other PL/SQL errors that are encountered when inserting seed data. |
Note:
Oracle recommends that you uninstall the Data Model only from a development environment.
Execute the following commands, connected as the SYSTEM user, to drop the user schemas and tablespaces. Replace the user schema names and tablespace names with the values provided during the installation.
drop user <hdi> cascade;
drop user <hdm> cascade;
drop user <hcd> cascade;
drop user <odb> cascade;
drop user <cdm> cascade;
drop user <ent> cascade;
drop user <job> cascade;
drop user <svc> cascade;
drop tablespace <hdi_temp> INCLUDING CONTENTS and datafiles;
drop tablespace <hdm_temp> INCLUDING CONTENTS and datafiles;
drop tablespace <hcd_temp> INCLUDING CONTENTS and datafiles;
drop tablespace <odb_temp> INCLUDING CONTENTS and datafiles;
drop tablespace <cdm_temp> INCLUDING CONTENTS and datafiles;
drop tablespace <ent_temp> INCLUDING CONTENTS and datafiles;
drop tablespace <job_temp> INCLUDING CONTENTS and datafiles;
drop tablespace <svc_temp> INCLUDING CONTENTS and datafiles;
drop tablespace <hdi_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <hdm_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <hcd_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <odb_data_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <odb_index_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <odb_lob_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <cdm_data_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <cdm_index_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <ent_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <job_data_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <job_index_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <job_store_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <job_lob_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <job_tbs_ts> INCLUDING CONTENTS and datafiles;
drop tablespace <svc_ts> INCLUDING CONTENTS and datafiles;
Connect as the SYS user, verify the following table records, and delete the entries for the <job> schema (rows where STORE_OWNER = <job schema>).
For example,
delete from DBFS_SFS$_FS cascade;
delete from DBFS$_MOUNTS cascade;
delete from DBFS$_STORES cascade;
delete from DBFS_SFS$_VOL cascade;
delete from DBFS_SFS$_TAB cascade;
commit;