Configuring Data Transfer and PHI Logs Export
This section describes how to configure data transfer and PHI logs export for On-Premises and Software as a Service (SaaS) customers.
For On-Premises Customers
Configure Database Directories and Properties
- Create two shared directories on the database host at suitable paths: one for storing data transfer files and the other for storing PHI logs. Ensure these directories are accessible to both WebLogic and the database.
- Create the database directories using the following SQL commands:
  NOTE: Replace the directory paths to suit your environment.
  create directory OHI_DATA_TRANSFER_DIRECTORY as 'DATA_TRANSFER_DIRECTORY_PATH';
  create directory OHI_PHI_LOGS_DIRECTORY as 'PHI_LOGS_DIRECTORY_PATH';
- Grant read and write access on both directories to the application owner schema by running the following SQL commands (verification queries for the directories and grants follow this list):
  grant read, write on directory OHI_DATA_TRANSFER_DIRECTORY to ohi_{app_name}_owner;
  grant read, write on directory OHI_PHI_LOGS_DIRECTORY to ohi_{app_name}_owner;
- Configure the following system properties in the application’s properties file:
  ohi.data.transfer.target.db.directory.name=OHI_DATA_TRANSFER_DIRECTORY
  ohi.phi.logs.data.transfer.target.db.directory.name=OHI_PHI_LOGS_DIRECTORY
- Create an automated job (for example, a cron job) to regularly purge older files from the database directories to avoid filling up disk space; a sketch of one approach follows this list.
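To confirm the setup so far, you can query the data dictionary. This is an optional sketch that assumes the directory names used in this section and access to the DBA_TAB_PRIVS view.

  -- Confirm both database directories exist and point at the intended paths:
  select directory_name, directory_path
  from   all_directories
  where  directory_name in ('OHI_DATA_TRANSFER_DIRECTORY', 'OHI_PHI_LOGS_DIRECTORY');

  -- Confirm the application owner schema holds READ and WRITE on both directories
  -- (directory grants are listed in DBA_TAB_PRIVS):
  select grantee, table_name, privilege
  from   dba_tab_privs
  where  table_name in ('OHI_DATA_TRANSFER_DIRECTORY', 'OHI_PHI_LOGS_DIRECTORY');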
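The purge mechanism itself is not prescribed. The sketch below shows one possible approach, a daily DBMS_SCHEDULER external job that runs a cleanup script; the job name, script path, and the 30-day retention mentioned in the comments are assumptions. A plain cron entry on the database host works just as well, and external jobs may require OS credentials depending on your database configuration.

  -- Illustrative sketch: a daily external job that runs a purge script.
  -- The script (hypothetical path below) could, for example, run:
  --   find DATA_TRANSFER_DIRECTORY_PATH PHI_LOGS_DIRECTORY_PATH -type f -mtime +30 -delete
  begin
    dbms_scheduler.create_job (
      job_name        => 'OHI_TRANSFER_DIR_PURGE_JOB',                  -- assumed name
      job_type        => 'EXECUTABLE',
      job_action      => '/u01/app/ohi/scripts/purge_transfer_dirs.sh', -- assumed path
      start_date      => systimestamp,
      repeat_interval => 'freq=daily; byhour=2',                        -- 02:00 daily
      enabled         => true
    );
  end;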
Configure Database Scheduler Job
Create a database scheduler job for processing data transfer IP requests using the following SQL. Update the application owner schema in the job_action, and the value in repeat_interval (the following example processes the request every five seconds), to suit your requirements.
begin
  dbms_scheduler.create_job (
    job_name        => 'OPERATIONAL_RPT_PKG_PROCESS_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin ohi_{app_name}_owner.operational_rpt_pkg.process; end;',
    start_date      => systimestamp,
    repeat_interval => 'freq=secondly; interval=5',
    enabled         => true
  );
end;
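Once the job is created, you can optionally confirm that it is enabled and scheduled. A minimal check, run as the schema that owns the job:

  -- Confirm the job exists, is enabled, and has a next run scheduled:
  select job_name, state, enabled, last_start_date, next_run_date
  from   user_scheduler_jobs
  where  job_name = 'OPERATIONAL_RPT_PKG_PROCESS_JOB';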
For SaaS Customers
Configure Object Storage Buckets and Properties
- Create two object storage buckets in OCI: one for storing PHI logs and the other for storing non-PHI logs, such as data files generated by the data transfer IP.
- Configure retention policies for the buckets as described in Configuring Retention Strategy.
- Provide the location of the OCI config file (for the non-PHI data bucket), the OCI region where the Object Storage buckets are located, and the deployment type in the properties file. Update the values of OCI_CONFIG_FILE_PATH and OCI_OBJECT_STORAGE_REGION to suit your environment. Use the value from the Region Identifier column in the OCI Documentation as the value for OCI_OBJECT_STORAGE_REGION.
  ohi.object.storage.NON_PHI_DATA.config.file.location=OCI_CONFIG_FILE_PATH
  ohi.oci.region=OCI_OBJECT_STORAGE_REGION
  ohi.deployment.type=cloud
  ohi.ui.iframe.allowlist.url=https://objectstorage.<OCI_OBJECT_STORAGE_REGION>.oraclecloud.com
- Create DBMS cloud credentials using the following SQL:
  begin
    dbms_cloud.create_credential(
      credential_name => 'NON_PHI_DATA',
      username        => 'username@oracle.com', -- replace with a valid email
      password        => 'secret'               -- replace with an OCI auth token
    );
  end;
  and
  begin
    dbms_cloud.create_credential(
      credential_name => 'PHI_DATA',
      username        => 'username@oracle.com', -- replace with a valid email
      password        => 'secret'               -- replace with an OCI auth token
    );
  end;
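To confirm the credentials were created, and optionally that they can reach a bucket, checks along the following lines can be used. The first query reads the USER_CREDENTIALS view; the second is a smoke test using DBMS_CLOUD.LIST_OBJECTS, in which the region, namespace, and bucket placeholders must be replaced with your values.

  -- Confirm both credentials exist in the owning schema:
  select credential_name, username, enabled
  from   user_credentials
  where  credential_name in ('NON_PHI_DATA', 'PHI_DATA');

  -- Optional smoke test: list objects in the non-PHI bucket (replace placeholders):
  select object_name
  from   dbms_cloud.list_objects(
           'NON_PHI_DATA',
           'https://objectstorage.<OCI_OBJECT_STORAGE_REGION>.oraclecloud.com/n/<BUCKET_NAMESPACE>/b/<BUCKET_NAME>/o/');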
Configure Object Storage Use Cases and Database Scheduler Job
- For the two object storage buckets created above, create corresponding object storage use cases in the OHI database, using the generic API or SQL; a query to verify the result is shown after this list.
  - Generic API
    Post to api/generic/objectstorageconfigurationfileusecases with the following request bodies. Replace the namespace and bucket name according to your bucket details.
    {
      "useCaseName": "PHI_DATA",
      "namespace": "{BUCKET_NAMESPACE}",
      "bucketName": "{BUCKET_NAME}"
    }
    and
    {
      "useCaseName": "NON_PHI_DATA",
      "namespace": "{BUCKET_NAMESPACE}",
      "bucketName": "{BUCKET_NAME}"
    }
  - SQL
    Run the following insert statements:
    NOTE: Replace the namespace and bucket name according to your bucket details.
    insert into ohi_object_storage_usecases (
      id, usecase_name, namespace, bucket_name, subtype,
      created_by, last_updated_by, creation_date, last_updated_date,
      object_version_number
    ) values (
      ohi_object_storage_usecases_s1.nextval, 'NON_PHI_DATA',
      '{BUCKET_NAMESPACE}', '{BUCKET_NAME}', 'CF',
      -1, -1, current_timestamp, current_timestamp, 1
    );
    insert into ohi_object_storage_cf_usecases (id)
    values (ohi_object_storage_usecases_s1.currval);
    commit;
    and
    insert into ohi_object_storage_usecases (
      id, usecase_name, namespace, bucket_name, subtype,
      created_by, last_updated_by, creation_date, last_updated_date,
      object_version_number
    ) values (
      ohi_object_storage_usecases_s1.nextval, 'PHI_DATA',
      '{BUCKET_NAMESPACE}', '{BUCKET_NAME}', 'CF',
      -1, -1, current_timestamp, current_timestamp, 1
    );
    insert into ohi_object_storage_cf_usecases (id)
    values (ohi_object_storage_usecases_s1.currval);
    commit;
- Create a database scheduler job for processing data transfer IP requests using the following SQL. Update the application owner schema (ohi_{app_name}_owner) and the session time_zone in the job_action based on the application deployment and the customer’s time zone. If both the middle tier and the database run in the same time zone, setting the time_zone at the session level is not required. A query for checking recent job runs is shown after this list.
  begin
    dbms_scheduler.create_job (
      job_name        => 'OPERATIONAL_RPT_PKG_PROCESS_JOB',
      job_type        => 'PLSQL_BLOCK',
      job_action      => q'[begin
                              execute immediate 'alter session set time_zone = ''Australia/Melbourne''';
                              ohi_{app_name}_owner.operational_rpt_pkg.process;
                            end;]',
      start_date      => systimestamp,
      repeat_interval => 'freq=secondly; interval=5',
      enabled         => true
    );
  end;
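Whichever method you used to create the use cases, the rows can be verified with a query like the following (column names as in the insert statements above):

  -- Confirm both use cases are registered:
  select usecase_name, namespace, bucket_name
  from   ohi_object_storage_usecases
  where  usecase_name in ('NON_PHI_DATA', 'PHI_DATA');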
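To verify that the scheduler job is firing, and to spot failures, recent run history can be inspected; a minimal sketch:

  -- Inspect the most recent runs of the job, including any errors:
  select log_date, status, errors
  from   user_scheduler_job_run_details
  where  job_name = 'OPERATIONAL_RPT_PKG_PROCESS_JOB'
  order  by log_date desc
  fetch  first 10 rows only;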