Dialog Reference

Provisioning a User for BI Cloud Connector Console Access

To provision access to the BI Cloud Connector for a user, use the Security Console to create an administrative role that inherits BICC privileges from existing roles and assign the user to that role.

To provision a user:
  1. In Fusion, navigate to the Security Console in the Navigator.

  2. In the Security Console, create a BIACM_ADMIN role.

    1. Click Create Role.

    2. In the Basic Information page, enter the following values and click Next.

      Role Name BIACM_ADMIN
      Role Code BIACM_ADMIN
      Role Category BI - Abstract Roles

    3. Click the Add icon in the Role Hierarchy list.

    4. In the Add Role Membership dialog box, search for ESS.

    5. In the search results, confirm that the ESS Administrator role appears, then click Add Role Membership.

    6. Search for ORA_ASM_APPLICATION_IMPLEMENTATION_ADMIN_ABSTRACT and click Add Role Membership.

    7. Close the Add Role Membership dialog box.

    8. Click Next.

    9. In the Users page, click Add User.

    10. In the Add User dialog box, search for the name of the user you want to assign access to, then click Add User to Role.

    11. Close the Add User dialog box.

    12. Click Next.

    13. Click Save and Close.

Provision a User to Access BI Cloud Connector Content in Universal Content Management

To provision access to the BI Cloud Connector content in Universal Content Management (UCM), use the Security Console to create an administrative role and assign a user to that role.

To provision an administrator:
  1. In Fusion, navigate to the Security Console in the Navigator.

  2. In the Security Console, create a BICC_UCM_CONTENT_ADMIN role.

    1. Click Create Role.

    2. In the Basic Information page, enter the following values and click Next.

      Role Name BICC_UCM_CONTENT_ADMIN
      Role Code BICC_UCM_CONTENT_ADMIN
      Role Category BI - Abstract Roles

    3. Click the Add icon in the Role Hierarchy list. In the Add Role Membership dialog box, search for OBIA_EXTRACTTRANSFORMLOAD_RWD and click Add Role Membership.

    4. Close the Add Role Membership dialog box.

    5. In the Users page, click Add User.

    6. In the Add User dialog box, search for the name of the user you want to assign access to, then click Add User to Role.

    7. Close the Add User dialog box.

    8. Click Next.

    9. Click Save and Close.

BI Cloud Connector Enabled Data Stores Page

Use BI Cloud Connector to extract Business Intelligence data from a Fusion Applications Cloud data source into an Oracle Storage Service or UCM server. For instructions on loading data, refer to the Business Intelligence documentation for your product. If you’re using BI Cloud Connector with Oracle BI Applications, before you start, refer to the Fusion Applications compatibility matrix for BI Cloud Connector to ensure that your product version is supported.

How to Use the Cloud Extract Configuration and Execution Tool

When you log in, use the Enabled Data Stores dialog to search the View Objects (VOs) that are enabled for extract, by Offering. To view a list of enabled data stores for an offering, click the Manage Offerings and Data Stores button in the panel tab, select the Manage Offerings and Data Stores link, select an offering, then use the Data Store for Offering list to view the data stores, their last extract dates, and other properties.

To extract Business Intelligence data from a Fusion Applications Cloud data source, perform the following tasks in the order they appear in the panel tab:

  • Click the Configure External Storage button and select the Configure External Storage link to specify the storage area into which you want to load the data.

  • Click the Manage Extract Schedules button and select the Manage Extract Schedules link to create a schedule for one-time or recurring data extraction and to monitor the last scheduled run and verify completion.

How to Review View Object to Database Lineage Mappings

To review the mappings between BI VOs and database tables and columns, review the documents and spreadsheets available on Oracle Cloud Customer Connect.

Configure Offerings to Extract

Click the Manage Offerings and Data Stores button in the panel tab and select the Manage Offerings and Data Stores link to open the Offerings dialog box, where you can select offerings that you want to extract, specify VOs from which to extract data, and set up once-only or regular data extracts.

Offerings

Field Name or Option How to Use
Offerings list View the offerings that are available for extraction. Click an offering to view and configure its data stores.
Search Enter an offering name and click Search to locate it in the list.
Actions > Create Offering Create a new offering and specify its VOs.
Actions > Reset Last Extract Date Specify the last extract date from which extraction should begin for incremental loads.
Actions > Configure Flex Label Languages Specify a language for flexfield labels.
Actions > Extract Preferences Specify extract parameters, including: job timeout; CSV file size to split files by; retry parameters in case of intermittent BI Server connection or query failures; and extract schedule email notification frequency and recipients.
List View View the Offerings list as a list, with an Actions icon for each offering.
Grid View View the Offerings list as a grid, with an Actions icon for each offering.
Offering Action > Delete Delete the currently selected offering and its corresponding VO association. Available only for user-defined offerings.
Offering Action > Edit Change the Offering Name and VO association of the currently selected offering.
Offering Action > Reset to Shipped Content Reset the offering to shipped content, removing any changes made.
Offering Action > Reset to Full Extract Reset the last extract date so that a full data load is performed for the offering, instead of an incremental load. You typically use this option if your business requirements have changed or when fact data has been corrupted.

Click the Manage Offerings and Data Stores button in the panel tab and select the Export Customization link to collect modification information from the source environment and export it as compressed CSV files. Select the Import Customization link to apply modifications to the destination environment from exported compressed CSV files. In the Import Customization dialog box, click Browse and specify exported customization files, then click Import.

Data Store for Offering: Offering name

Click an offering in the Offerings list to open the Data Store for Offering: Offering Name page, where you can specify the View Objects (VOs) from which to extract data.

Field Name or Option How to Use
Data Stores list View the data areas that are available for extraction for the offering you clicked.
View > Columns Select columns to be displayed in the Data Stores list.
View > Detach Pop out the section of the dialog box so you can see more data.
View > Reorder Columns Change the display order of the columns in the Data Stores list.
View > Query By Example Filter the displayed results by entering the first few letters of a name.
Add Specify a new Data Store for an offering. For example, you might want to add a view object (VO) for extraction. To add a VO, in the wizard’s Datastore Details page, provide the VO name, then specify whether you want to disable the effective date filter, which allows extraction of all historical records, if required. Enter any required query filter, using column references in the format __DATASTORE__.<BI VO Column Name>. In the wizard’s Select Columns page, select the column types for the select query from the Column Filter drop-down list, then deselect the columns you don’t want included in the SELECT list. If the VO is defined as Effective Date Disabled, you can select the Natural Key option for a Primary Key Column to define a natural key.
Remove Delete the currently selected Data Store.
Refresh Refresh the Data Stores list.
Query by Example Filter the displayed results by entering the first few letters of a name.
Detach Pop out the section of the dialog box so you can see more data.
Actions > Reset to Full Extract Reset the last extract date so that a full data load is performed at the next load for the data store/VO, instead of an incremental load. You typically use this option if your business requirements have changed or if fact data has been corrupted.
Actions > Reset to Shipped Content Reset the VO to shipped content, removing any changes made.
Actions > Export Metadata Definition Export metadata definition for the VO.
Actions > Export UI Label Export user interface labels for the VO. A zip file is generated with files for each configured language.
Actions > Test Data Store Test extract from the selected Data Store.
Actions > Advanced Extract Configuration Specify initial extract date and chunking for creation date and primary key for full loads.
Last Extract Date View the date and time when the Data Store was last extracted.

Click the Configure Cloud Extract button in the panel tab and select the Review Cloud Extract Configuration link to return to the Enabled Data Stores dialog box.

Perform Advanced Extract Configuration

Select Actions > Advanced Extract Configuration in the Data Store for Offering page of the Offering dialog to open the Advanced Extract Configuration For: Data store name dialog, where you can set advanced extract configuration for a selected data store. For full extracts, you can enable chunking by creation date or by primary key.

Filter and/or Chunk By Creation Date Columns

Field Name or Option How to Use
Column list View the columns that are available for designation as creation date. Select the Is Creation Date option for the appropriate column.
Initial Extract Date Optionally, specify the initial date from which the full extract should be performed. This option requires that the Is Creation Date option is selected for the column or columns that represent the creation date.
Support chunking Optionally, select this option to chunk the extract by creation date, extracting data in date-range batches (chunks) of a specified number of days. This option requires that the Is Creation Date option is selected for the column or columns that represent the creation date.
Number of Days If you have selected to support chunking by creation date, specify the number of days, for example 365, by which to chunk extracts.
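As a sketch of what the settings above imply, chunking by creation date can be pictured as partitioning a full extract into fixed-length date windows starting at the initial extract date. The function below is an illustrative assumption about how such batching works, not BICC's implementation:

```python
from datetime import date, timedelta

# Illustrative sketch: partition [initial_extract_date, end] into
# inclusive date windows of at most `days` days each, mirroring the
# Initial Extract Date and Number of Days fields. Not BICC internals.
def creation_date_chunks(initial_extract_date: date, end: date, days: int):
    """Yield inclusive (start, end) windows of at most `days` days."""
    start = initial_extract_date
    while start <= end:
        window_end = min(start + timedelta(days=days - 1), end)
        yield (start, window_end)
        start = window_end + timedelta(days=1)

# A two-year backlog with Number of Days = 365 becomes three batches.
chunks = list(creation_date_chunks(date(2020, 1, 1), date(2021, 12, 31), 365))
```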

Chunk By Primary Key Column

Field Name or Option How to Use
Support chunking Support chunking by numeric primary key. This option requires a single numeric primary key column for the data store.
Number of Rows Specify a number of rows to chunk extracts by.
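Similarly, chunking by a numeric primary key can be read as splitting the key space into fixed-width ranges of Number of Rows keys each. The inclusive-range arithmetic below is an illustrative assumption, not BICC's implementation:

```python
# Illustrative sketch: split a numeric primary-key range into inclusive
# (low, high) batches of at most `rows` keys, mirroring the Number of
# Rows field. Not BICC internals.
def primary_key_chunks(min_pk: int, max_pk: int, rows: int):
    """Yield inclusive (low, high) key ranges covering at most `rows` keys."""
    low = min_pk
    while low <= max_pk:
        high = min(low + rows - 1, max_pk)
        yield (low, high)
        low = high + 1

print(list(primary_key_chunks(1, 2500, 1000)))
# [(1, 1000), (1001, 2000), (2001, 2500)]
```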

Create and Manage an Offering

Click Actions > Create Offering in the Offerings dialog to open the Manage Offering dialog, where you can specify a new offering and associate a data store.

Manage Offering

Field Name How to Use
Offering Code Enter a code for the offering.
Offering Name Enter a name for the offering. This is the name that will appear in the list of Business Intelligence Applications Offerings in the Configure Cloud Extract dialog.
Offering Description Optionally, enter a description.

Associate Data Store

In the Associate Data Store section of the dialog, filter for the data store, select it, click the Move selected items to other list button to add the VO, then click Save.

Add a Datastore for an Offering

Click the Add button in the Data Store for Offering dialog to open the Define Datastore dialog box, where you can specify a new data store for the selected offering.

Datastore Details Page

  1. Enter the following information and click Next.

    Field Name How to Use
    Data Store Key Enter the VO name for the data store.
    Disable Effective date filter Enable this option if there is a requirement to extract all historical records. Note that the VO isn't validated as being effective dated, so set this option only after confirming the nature of the data.
    Extract Data Store Metadata Enable this option to generate an mdcsv file with the data extract.
    Query Filter Enter the query filter in the Oracle BI Enterprise Edition select_physical supported format. All column references should follow the format __DATASTORE__.<BI VO Column Name>, that is, two underscores, DATASTORE, two underscores, then a dot and the BI VO column name. For example, __DATASTORE__.ViewApplicationId=0, where ViewApplicationId is the column name in the BI VO and of data type NUMBER.
  2. In the Associate Offerings section of the page, select the names of the offerings you want to associate with the datastore and click the Move selected items to other list button to add them, then click Save.

  3. Click Next to navigate to the Select Columns page. The column definitions are fetched from the BI repository. By default, the table shows the date type columns so you can select which of these columns should be included in the incremental filter query.
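To illustrate the documented column-reference format, this hypothetical helper assembles a filter clause. The helper name and the identifier check are assumptions for the sketch; only the __DATASTORE__ prefix, the dot separator, and the ViewApplicationId example come from the documentation above:

```python
# Hypothetical helper for building a query filter in the documented
# __DATASTORE__.<BI VO Column Name> format. Function name and validation
# are illustrative assumptions, not part of BICC.
def datastore_filter(column: str, op: str, value) -> str:
    """Build a clause such as __DATASTORE__.ViewApplicationId=0."""
    if not column.isidentifier():
        raise ValueError(f"unexpected BI VO column name: {column!r}")
    return f"__DATASTORE__.{column}{op}{value}"

print(datastore_filter("ViewApplicationId", "=", 0))
# __DATASTORE__.ViewApplicationId=0
```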

Select Columns Page

By default, the Column Name table shows the Date Type column so you can select which of these columns is included in the incremental filter query. Click the Column Filter drop-down list to switch from the default filter to Primary Key Columns or All Columns.

By default, all of the columns are selected for query. In the columns list, deselect the Used in Select list option for any columns you don’t want included.

If the VO is defined as Effective Date Disabled, you can view the Primary Key Columns and select the Natural Key option for a column to define a natural key.

For custom VOs, primary keys are retrieved from the repository (RPD). You can override these and set your own by selecting the Primary Key option for a column. To reset primary keys to those defined in the repository, click the Retrieve PK button.

Reset Last Extract Date For All Enabled Data Stores

Click Actions > Reset Last Extract Date in the Offerings dialog to open the Reset Last Extract Date For All Enabled Data Stores dialog.

Specify the last extract date from which extraction should begin for incremental loads for the selected Offering. You typically use this option if your business requirements have changed or when fact data has been corrupted. Click OK to reset.

Configure Flexfield Label Languages

Click Actions > Configure Flex Label Languages in the Offerings dialog to open the Configure Flex Label Languages dialog, in which you can specify a language for flex labels.

In the Flex Label Languages list, select the language you want, click the Move selected items to other list button to add it to the selected list, then click Save and Close. To suppress extraction of flexfield labels, select the Suppress Flex Label Extract option.

Configure Extract Preferences

Click Actions > Extract Preferences to open the Extract Preferences dialog, where you can specify preferences for extracts.

Language

In the Preferred Extract Language field, select the language in which you prefer extracts to be made.

By default, a BICC incremental job adds a look-back time frame as defined in the Prune Time setting. Because extraction runs against the live applications database rather than a snapshot, a look-back (prune) time is a best practice to ensure dependency synchronization across objects. The default works best for extracts that recur daily or less frequently. Adjust the prune time when extracts are scheduled more often, or when the downstream system can handle objects extracted in any order.
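The look-back behavior described above can be sketched as follows: an incremental extract starts from the last extract date minus the prune window, so rows committed late are not missed. The function name and the minutes unit are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Illustrative sketch of prune-time look-back: the incremental window
# opens prune_minutes before the last extract date. Not BICC internals.
def incremental_window_start(last_extract: datetime, prune_minutes: int) -> datetime:
    return last_extract - timedelta(minutes=prune_minutes)

# With a 24-hour prune time, a job whose last extract date is June 1
# re-reads everything committed since May 31.
start = incremental_window_start(datetime(2024, 6, 1), prune_minutes=1440)
```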

Job Settings

In the Timeout in Hours field, enter the number of hours before a job times out. The default is 10. By default, a job fails on timeout. Deselect Timeout Force Fail if you prefer that timed-out jobs aren't marked as failed.
Note: If timed out jobs are not failed, all data files for all data stores that were successful before the timeout are uploaded to external storage.

File Parameters

In the Compression type field, select the type of compression you’d like to use. In the Split file size (GB) field, specify the file size by which extracted CSV files are divided for a single VO. The default is 1 GB. You can set the file size from one to five GB.

In the Uploaded file expiry (Days) field, enter the number of days you’d like the extract files to persist.

File Parameters settings can be overridden at the schedule level when creating schedules and assigning external storages to them.
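The Split file size setting can be pictured as grouping extracted rows into parts under a byte budget. The sketch below is an illustrative assumption (working in bytes rather than GB, and per encoded row), not BICC's own splitter:

```python
# Illustrative sketch: group encoded CSV rows into parts whose total
# size stays under max_bytes, analogous to the Split file size (GB)
# setting. Not BICC internals.
def split_rows(rows, max_bytes):
    """Group rows into parts; each part's encoded size stays <= max_bytes."""
    parts, current, size = [], [], 0
    for row in rows:
        row_bytes = len(row.encode("utf-8")) + 1  # +1 for the newline
        if current and size + row_bytes > max_bytes:
            parts.append(current)
            current, size = [], 0
        current.append(row)
        size += row_bytes
    if current:
        parts.append(current)
    return parts

# Ten 6-byte rows under an 18-byte budget yield parts of 3, 3, 3, 1 rows.
parts = split_rows(["a,b,c"] * 10, max_bytes=18)
```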

Extract Mode

Select the Bypass OTBI Metadata for all data stores option to bypass OTBI metadata.

External Storage

Select the Upload to Multiple External Storage option to increase the number of external storages that can be selected in the External Storage list when creating or editing schedules. The default is one storage, and the maximum is two.

Retry Parameters

During extraction, connections to the BI Server or queries may fail, causing retries. In the Analytic server connection retry limit field, specify the number of connection attempts made to complete the extraction. In the Analytic server query retry limit field, specify the number of times a query is resubmitted.

The Analytic server connection retry limit setting can be overridden at the schedule level when creating schedules and assigning external storages to them.

Global Extract Schedule Notification

To send notifications when scheduled extract events occur, select the notifications you want upon extract start, success, or failure. In the Mail To Addresses, enter email addresses, separated by commas, to which you want notifications sent.

Global Extract Schedule Notification settings can be overridden at the schedule level when creating schedules and assigning external storages to them.

Configure Where to Load Data

Click the Configure External Storage button in the panel tab and select the Configure External Storage link to open the Configure External Storage dialog box, where you can specify storage areas into which Cloud data is loaded. For example, to load into one or more Oracle Cloud Storage Service instances, select the Storage Service Connection Configuration tab. Select the OCI Object Storage Connection tab to configure Cloud Infrastructure connections, and select the UCM Connection Configuration tab to configure Universal Content Management Server connections.

Storage Type — Cloud Storage Service

Specify the connection details for one or more Oracle Storage Service instances by clicking the name of a provisioned connection or clicking Add to create a new connection. Click Delete to delete an existing connection.

Storage areas are associated with extract schedules. You can have as many Cloud Storage Service containers as are required by your Cloud application integrations, but each should be used for its own requirements, so that there's no overlap between them. Runtime metadata, including the last extract date, is managed across all storage locations, so configuring the same data store in multiple schedules with different external storage configurations results in loss of incremental data.

Use the following fields in the Storage Service Connection page to specify the connection details:

Field Name How to Use
Name Specify a name for the connection.
OAC External Storage Select this option if the connection is to an Oracle Analytics Cloud (OAC) storage service. You can specify only one OAC storage service connection. Note that encryption is disabled for OAC storage.
Protocol Specify http for non-SSL, or https for SSL connection.
Host Specify the Host name for the Oracle Storage Service. For example, mystorage.storage.oraclecloud.com.
Port Specify the port number (optional).
User Name Specify the user that is provisioned to load data. The user should have privileges to upload files in the container specified. User credentials will be stored in the Weblogic credential store under oracle.apps.security/FUSION_APPS_OBIA_STORAGESERV_USER-KEY.
Password Specify the password for the user specified in the User Name field.
Service Name Specify the service name of the Oracle Cloud Storage Service. For example, gse-otbie1.
Container Specify the name of the container that is allocated to upload extracted files.
Data Encryption — Support Encryption If you want to encrypt communication, select this option and use the Import Certificate option below to specify the encryption keys.
Import Certificate Click Browse and navigate to and select the location of the key file, or type the literal path location and file name.
Download Folder Shows the directory in the domain server where the Batch Extract file is downloaded to reset extract dates before the extraction process.
Upload Folder Shows the directory in the domain server where files are temporarily extracted by the cloud extractor before uploading to the storage service.

Storage Type — OCI Object Storage Connection

Use the following fields in the Configure External Storage page to specify the connection details:

Field Name How to Use
Name Specify a name for the connection.
Host Specify the host name. Host information is available in the Object Storage Service API in the OCI API Documentation.
Tenancy OCID Specify the Tenancy OCID. To obtain the Tenancy and User OCIDs, refer to Where to Get the Tenancy's OCID and User's OCID.
User OCID Specify User OCID.
Namespace Specify the namespace. Namespace is obtained in the OCI Console.
Bucket Specify the bucket into which extracts are uploaded. Bucket names are obtained in the OCI Console.
Generate API Signing Key Generate the required API signing key for OCI. The fingerprint is displayed after generation.
Export Public Key Export the public key for upload to OCI.
Test Connection Test the connection.

Storage Type — UCM

Review the connection details for Universal Content Management (UCM) using the following fields:

Field Name How to Use
Protocol Specify http for non-SSL, or https for SSL. If you select https here, you must also enable HTTPS on the UCM server, using the UCM Server Console.
Host Shows the host name for the UCM Server. For example, myserver.company.com
Port Specify the port number of the UCM Server (optional). For example, 7012.
Download Folder Shows the directory in the domain server where the Batch Extract file is downloaded to reset extract dates before the extraction process.
Upload Folder Shows the directory in the domain server where files are temporarily extracted by the cloud extractor before uploading to UCM.

Preview a Data Store

Click a data store link in the Data Store for Offering dialog to open the Data Store Preview dialog, where you can preview a selected data store’s columns and enable and disable the data store and its effective date filter.

Field Name How to Use
Data Store Displays the data store VO name of the selected data store.
Enabled Specify whether the data store is enabled for the offering.
Disable Effective date filter Specify whether to disable the effective date filter so that a full extract is performed on the data store.
Query Filter View or edit the effective date filter for the data store.
Last Extract Date View the date of the last extract.
Data Store Columns list View the columns in the data store. Includes columns indicating whether each is used in the incremental filter for incremental extracts, appears in the Select list for the data store, or is a primary key.

Specify When to Extract Data

Click the Manage Extract Schedules button in the panel tab and select the Manage Extract Schedules link to open the Manage Extract Schedules dialog, where you can set up a once-only or regular data extract of Business Intelligence data from an Oracle Applications Cloud data source. For example, you might want to extract data from your Cloud data source once per day at midnight. You can also monitor an extract here.

Field Name How to Use
Schedules This list shows schedules submitted in the last 24 hours. Use the Add option to set up once-only or regular data extract. Use the Edit option to update the details of the currently selected schedule. Use the Delete Schedule option to delete the currently selected schedule.

It’s recommended that you periodically purge the Schedules list, as not all completed schedules are shown. To do this, use the Delete Inactive Schedules option.

Schedule Requests This list shows the details of data extract processes for the schedule that is currently selected in the Schedules list above. A new row is created in the table every time a Cloud extract request is processed. Use the Delete option to delete the details of the currently selected request. Deleting a scheduled job from this list doesn't remove the BI Cloud data that the job extracted and loaded.

Monitor a Cloud Extract

In the Schedules dialog, click Actions and select the option for the last run corresponding to the job type, Cloud Data Extract or Deleted Record Extract. Each job type displays in its own dialog, which lists the last extract status of each VO, SUCCESS or FAILURE for each data store, along with error messages for any failures. The ESS Request Id column displays the job for which the VO extraction last ran.

Scheduled jobs also write logs that can be used to review issues causing errors or shared with Oracle Support to resolve a service request. To download logs, click Help and select Download Logs.

For scheduled jobs, whether successful or not, an extraction status file in JSON format is uploaded to external storage. The files have a default expiration date, and have the following file name format, depending on job type:
  • Cloud Data Extracts: EXTRACT_STATUS_DATA_SCHEDULE_<Schedule Id>_REQUEST_<request_id>.JSON

  • Deleted Record Extracts: EXTRACT_STATUS_DELETED_SCHEDULE_<Schedule Id>_REQUEST_<request_id>.JSON

Column Content
Name VO name
status VO extract status
errorMessage If the extract failed, the resulting error message
runDate Run date of the extract of the VO
queryDurationInSec Time the query took to run, in seconds
extractDurationInSec Time the extract of the query results took, in seconds
uploadDurationInSec Time the upload to external storage took, in seconds
totalDurationInSec Total duration of the job
rowCount Number of rows extracted
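As an illustration of consuming these status files, the sketch below matches the documented file-name pattern and pulls out failed VOs. The exact field casing and status values in real files are assumptions to verify against an actual extract:

```python
import json
import re

# Illustrative sketch of working with extraction status files. The
# file-name pattern and per-VO fields mirror the tables above; field
# casing and status values are assumptions, not verified BICC output.
STATUS_FILE = re.compile(
    r"EXTRACT_STATUS_(DATA|DELETED)_SCHEDULE_(?P<schedule>\w+)_REQUEST_(?P<request>\w+)\.JSON$"
)

def failed_vos(status_json: str):
    """Return (name, errorMessage) pairs for VOs whose extract failed."""
    return [
        (e["name"], e.get("errorMessage", ""))
        for e in json.loads(status_json)
        if e.get("status") != "SUCCESS"
    ]

# Hypothetical sample data; VO names are placeholders.
sample = json.dumps([
    {"name": "ExampleVO1", "status": "SUCCESS", "rowCount": 120},
    {"name": "ExampleVO2", "status": "FAILURE", "errorMessage": "query timeout"},
])
print(failed_vos(sample))
# [('ExampleVO2', 'query timeout')]
```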

Create a New or Edit an Existing Data Extract Schedule

In the Manage Extract Schedules dialog, click Add or Edit to create a new Cloud data extract or edit an existing one. For example, you might want to extract data from your Cloud data source once per day at midnight. For a once-only data extract, use the Simple option in the Recurrence drop-down list. You can only create a schedule if there isn't already an active schedule for the selected job type (for example, Cloud Data Extract, Deleted Record Extract, or Data and Delete Record Extract). Click Next to specify the data stores to extract for an offering in the Data Store List page.

Schedule Details

Field Name or Option How to Use
Job Type

To extract data, select Application Data Extract. To sync the Cloud system to your source data, select Active Primary Key Extract, which extracts primary key values to identify deleted records. To combine both Cloud Data Extract and Delete Record Extract into one job, select Application Data and Active Primary Key Extract.

To purge expired files from Universal Content Management (UCM), select Delete Expired UCM Files. When a file is extracted and uploaded to UCM, it has a default expiration of 90 days. Expired files are soft deleted from UCM and may eventually require purging to preserve space. The job deletes only expired files from the OBIAImport security group.

To delete all of the files uploaded to one or more selected external storages, select Delete Files in Storage. For a storage service, this job deletes all files from the container associated with the external storage. For UCM, it deletes all files uploaded to the OBIAImport security group.

Name Specify a short name to identify the schedule in the Schedules list.
Description Specify a brief description to identify the schedule, which is only displayed on the Edit Schedule dialog.
Global Data Store List Accept the default of No to select data stores for extraction. Select Yes to use the Global Data Store.
Recurrence Specify how often you want the extract to be performed. To create a once-only data extract, select Simple.
Hourly Interval Specify the number of hours to perform hourly interval extracts by (if you select Hourly in the Recurrence list).
Date and Time Specify the date and time to perform a once-only extract (if you select Simple in the Recurrence list).
Time Specify the time to start an extract, in the format HH:MM:SS AM|PM. For example, 3:00:00 AM.
Day For weekly schedules, select the check box next to each day on which you want to extract data. For Monthly or Yearly extracts, select the day of the month on which you want to extract data.
Month For Yearly (annual) schedules, select the month in which you want to extract data.

Data Store List

Field Name or Option How to Use
Offering Select an offering to extract.
Data Store List Lists the data stores for a selected offering.
Enabled for Extract Select to enable a data store for extract.
Query By Example Filter the displayed results by entering the first few letters of a name.
Detach Pop out the section of the dialog box so you can see more data.

External Storage

For seamless integration, you can configure a schedule to use application-specific storage service containers, allowing you to schedule extracts for multiple integrations.

Note: To optimize the extraction flow and force reuse of the extracted data files across integrations, it’s preferable that separate storage containers be used when there is no overlap on the data stores required for each integration. Runtime metadata, including the last extract date, is managed across all storage locations, so configuring the same data store in multiple schedules with different external storage configurations results in loss of incremental data. If an overlap on the data stores is required, you can enable advanced extract options for extract jobs to upload the same data files to two separate external storage locations. To do this, select the Upload to Multiple External Storage option in the Extract Preferences dialog box.

Field Name or Option How to Use
External Storage

For extracts, select an external storage to upload the extract to. By default, you can select one. If the Upload to Multiple External Storage option is selected in your extract preferences, you can select two external storages if an overlap of the data stores is required.

If you’re deleting expired UCM files, the UCM external storage is selected. If you’re deleting files in storage, select one or more from the list of external storages to delete.

Notification Select one of the following options: Use Global Extract Notification to use the global settings defined in the Extract Preferences dialog box; Define Notification to set notifications for the schedule and override the global settings; or None.
Notify On If you have selected Define Notification for the schedule, select the notifications you want upon extract start, success, or failure.
Mail to Addresses Enter email addresses, separated by commas, to which you want notifications sent.
Use Global File Parameters Select Yes to use the global file parameter settings defined in the Extract Preferences dialog box. Select No to set parameters for the schedule and override the global settings.
Compression Type Select the type of compression you’d like to use for the schedule.
Split file size (GB) Specify the file size by which extracted CSV files are divided for a single VO for the schedule. The default is 1 GB. You can set the file size from one to five GB.
Uploaded file expiry (Days) Enter the number of days you’d like the extract files to persist for the scheduled extract.

View Last Run Status for a Data Extract

In the Manage Extract Schedules dialog, click Actions and select a last run status for a run type to view logging and status for each VO in the last extraction job, indicated by the ESS Request Id. Click Detach to expand the dialog to full size.

Statuses

The status for each data store is displayed in the Status column. In the event of an error, the error message is displayed in the Message column. Status includes:

  • ERROR: Extract failed with the error message displayed in the Message column.

  • EXTRACT_SUCCESS: Extract ran successfully.

  • UPLOAD_SUCCESS: Upload to external storage ran successfully.

View Last Run Status for a Deleted Record Extract

In the Manage Extract Schedules dialog, click Actions and select Last Run Status for Active Primary Key Extract to open the Last Run Status for Active Primary Key Extract dialog, which provides logging and status for each VO in the last extraction job, indicated by the ESS Request Id. Click Detach to expand the dialog to full size.

Statuses

The status for each data store is displayed in the Status column. In the event of an error, the error message is displayed in the Message column. Status includes:

  • ERROR: Extract failed with the error message displayed in the Message column.

  • EXTRACT_SUCCESS: Extract ran successfully.

  • UPLOAD_SUCCESS: Upload to external storage ran successfully.

Add a Job

You can create jobs, specify and manage their data stores, and schedule and run them using refresh metadata at the job level, isolating them from global refresh dates. Click the Manage Jobs button and select the Manage Jobs link to open the Manage Jobs dialog box, where you can specify a new job and manage its refresh metadata.

Field Name How to Use
Search Use the Job list to filter for specific jobs and the Submission Time After field to narrow the submission time window displayed in the Schedules and Schedule Requests lists.
Schedules list This list shows currently defined schedules. Use the Add option to set up once-only or regular data extract. Use the Edit option to update the details of the currently selected schedule. Use the Delete option to delete the currently selected schedule.
Schedule Requests This list shows the details of data extract processes for the Schedule that is currently selected in the Schedules list above. A new row is created in the table every time a Cloud extract request is processed. Use the Delete option to delete the details of the currently selected request. Deleting a scheduled job from this list doesn't remove the BI Cloud data that has already been extracted and loaded by that job.

Manage Jobs

Field Name or Option How to Use
Jobs list View the extract jobs that are available for extraction. Click a job to view the job definition and configure its data stores.
View > Columns Select columns to be displayed in the Jobs list.
View > Detach Pop out the section of the dialog box so you can see more data.
View > Reorder Columns Change the display order of the columns in the Jobs list.
View > Query By Example Filter the displayed results by entering the first few letters of a name.
Add

Specify a new job definition. Specify a name, description, and the data stores for the job. The data store metadata definitions are copied to the new job definition.

If the data stores have been modified, the modified versions of the metadata are copied.

Delete Delete the currently selected job.
Refresh Refreshes the Jobs list.
Query by Example Filter the displayed results by entering the first few letters of a name.
Detach Pop out the section of the dialog box so you can see more data.
Search Enter an offering name and click Search to locate it in the list.
Actions > Copy Copy the selected job definition. Schedules for the job aren’t copied.
Actions > Reset to Full Extract Reset the last extract date so that a full data load is performed at the next load for all data stores/VOs selected for the job, instead of an incremental load.
Actions > Reset Last Extract Date

Specify the last extract date from which extraction should begin for incremental loads.

Actions > Manage Initial Extract Date Specify the initial extract date for all the data stores in the job that have at least one creation date column.
Actions > Manage Batch Mode Preferences Specify whether to run the job in batch mode. By default, all data stores defined in a job definition have the Silent Error flag enabled. To turn off the Silent Error flag for all job data stores, set the job to batch mode.
Actions > Manage Extract Mode Set the extract mode to OTBI Metadata Dependent Mode for all data stores supporting metadata dependent extracts. By default, job data stores are set to OTBI Metadata Independent Mode, which allows them to be extracted without a dependency on OTBI BIVO metadata in the BI repository (RPD).

OTBI Metadata Independent Mode

By default, all data stores for a job created using the Manage Jobs dialog box have the extract mode set to OTBI Metadata Independent mode, allowing them to be extracted without a dependency on OTBI BIVO metadata in the BI repository (RPD).

The mode for all BIVOs in a job can be managed either at the job or the BIVO level. All BIVOs default to OTBI Metadata Independent Mode when using the Manage Jobs feature. To opt existing BIVOs in to OTBI Metadata Independent Mode, it's recommended that you deselect the OTBI Metadata Dependent option for the job or data store in the Manage Jobs dialog box.

Export and Import Jobs

You can export and import job definitions and any associated job schedules. Definitions are exported in JSON with naming standard exportJob_<Year>-<Month>-<Day>_<Hour>_<Minute>_<Second>.zip.

To export jobs, click the Manage Jobs button and select the Export Jobs link to open the Export Jobs dialog box, where you can specify whether to include active schedules in the export. To import jobs and any associated schedules, click the Manage Jobs button and select the Import Jobs link to open the Import Jobs dialog box, where you can browse for an exported JSON file and import it.
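As a sketch, the export time can be parsed out of an exported archive's name so that downloaded archives can be sorted chronologically. The regular expression below is an assumption based on the naming standard stated above (numeric year, month, day, hour, minute, and second fields):

```python
import re
from datetime import datetime

# Assumed pattern for the export naming standard:
# exportJob_<Year>-<Month>-<Day>_<Hour>_<Minute>_<Second>.zip
EXPORT_NAME = re.compile(
    r"^exportJob_(\d{4})-(\d{1,2})-(\d{1,2})_(\d{1,2})_(\d{1,2})_(\d{1,2})\.zip$"
)

def export_timestamp(filename: str) -> datetime:
    """Return the export time encoded in an exported job archive name."""
    m = EXPORT_NAME.match(filename)
    if m is None:
        raise ValueError(f"not an export archive: {filename}")
    return datetime(*map(int, m.groups()))

print(export_timestamp("exportJob_2023-06-15_10_55_14.zip"))
```

Sorting a directory listing with `key=export_timestamp` then yields archives in export order.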

Manage Job Data Stores

Click a job in the Jobs list to open the Job Details: Job Name page, where you can view and specify data stores from which to extract data.

Job Details: Job Name

Field Name or Option How to Use
Data Stores list View the data stores that are available for extraction for the job you clicked.
View > Columns Select columns to be displayed in the Data Stores list.
View > Detach Pop out the section of the dialog box so you can see more data.
View > Reorder Columns Change the display order of the columns in the Data Stores list.
View > Query By Example Filter the displayed results by entering the first few letters of a name.
Query by Example Filter the displayed results by entering the first few letters of a name.
Detach Pop out the section of the dialog box so you can see more data.
Actions > Compare Shipped Metadata Compare shipped data store metadata with modified metadata side-by-side.
Actions > Reset to Full Extract Reset the last extract date so that a full data load is performed at the next load for the data store/VO, instead of an incremental load.
Actions > Reset to Shipped Content Reset the VO to shipped content, removing any changes made.
Actions > Export Metadata Definition Export metadata definition for the VO.
Actions > Export UI Label Export user interface labels for the VO. A zip file is generated with files for each configured language.
Actions > Test Data Store Test extract from the selected Data Store.

Preview Job Data Stores

Click a data store in the Job Details: Job Name dialog box to open it in the Preview Data Store dialog box.

Preview Data Store

Field Name or Option How to Use
Data Store Columns list View the columns in the data store.
Customize Customize the columns, including their extract mode.
Done Close the dialog box and return to the job details.
View > Columns Select columns to be displayed in the Data Store Columns list.
View > Detach Pop out the section of the dialog box so you can see more data.
View > Reorder Columns Change the display order of the columns in the Data Store Columns list.
View > Query By Example Filter the displayed results by entering the first few letters of a name.
Query by Example Filter the displayed results by entering the first few letters of a name.
Detach Pop out the section of the dialog box so you can see more data.

Customize Job Data Stores

Click Customize in the Preview Data Store dialog box to open the Customize Job Data Store for job: Job Name dialog box. All modifications for a data store are stored for the current job.

Customize Data Store for Job

Field Name or Option How to Use
Disable Effective Date Filter Enable this option if there is a requirement to extract all historical records. Note that the VO isn't validated as being effective dated, so set this option only after confirming the nature of the data.
Extract Data Store Metadata Enable this option to extract data store metadata with the job, for example attribute definitions.
Silent Error Default is to use silent error. Deselect to allow errors.
Use OTBI metadata dependent extract Select to use OTBI Metadata Dependent Mode for the data store. The default, OTBI Metadata Independent Mode, allows extraction without a dependency for BIVO metadata in the BI repository (RPD).
Use UNION ALL for incremental extract The default incremental extract strategy is to use OR. Select this option to use UNION ALL.
Query Filter Specify a query to filter the data.
Support Chunking Chunk data for large data VOs, splitting the output into chunks by initial extract date, creation date, or primary key. Set an Initial Extract date to filter data on that date for data store columns defined as Creation Date, and select the preferred chunking option in the Support Chunking list.

Add a Job Schedule

Click the Manage Jobs button and select the Manage Job Schedules link to open the Manage Job Schedules dialog box, where you can manage job level schedules.

Manage Job Schedules

Field Name or Option How to Use
Schedules list This list shows currently defined schedules. Use the Add option to set up once-only or regular data extract. Select a schedule to view or update its details. Click the Cancel Schedule button to cancel a scheduled extract, and the Delete Schedule button to delete the currently selected schedule.
Schedule Requests list This list shows the details of data extract processes for the Schedule that is currently selected in the Schedules list above. A new row is created in the table every time a Cloud extract request is processed. Use the Cancel option to cancel a scheduled run or delete the details of the currently selected request. If you delete a scheduled job from this list, the data that has been extracted and loaded by that job isn't removed.

In the Manage Job Schedules dialog box, click Add to create a new Cloud data extract or click a schedule to preview an existing one and edit it if it’s a recurring schedule.

Field Name or Option How to Use
Job Specify the job you’re creating the schedule for.
Name Specify a short name to identify the schedule in the Schedules list.
Job Type

To extract data, select Application Data Extract. To sync the job data stores to your source data, select Active Primary Key Extract, which extracts primary key values to identify deleted records. To combine both Application Data Extract and Active Primary Key Extract into one job, select Application Data and Active Primary Key Extract.

Description Specify a brief description to identify the schedule.
External Storage Select the external storage for the schedule.
Recurrence Specify how often you want the extract to be performed. To create a once-only data extract, select Simple.
Hourly Interval Specify the number of hours to perform hourly interval extracts by (if you select Hourly in the Recurrence list).
Date and Time Specify the date and time to perform a once-only extract (if you select Simple in the Recurrence list).
Time Specify the time to start an extract, in the format HH:MM:SS AM|PM. For example, 3:00:00 AM.
Day For weekly schedules, select the check box next to each day on which you want to extract data. For Monthly or Yearly extracts, select the day of the month on which you want to extract data.
Month For Yearly (annual) schedules, select the month in which you want to extract data.
Notification Select one of the following options: Use Global Extract Notification to use the global settings defined in the Extract Preferences dialog box; Define Notification to set notifications for the schedule and override the global settings; or None.
Notify On If you have selected Define Notification for the schedule, select the notifications you want upon extract start, success, or failure.
Mail to Addresses Enter email addresses, separated by commas, to which you want notifications sent.
Use Global File Parameters Select Yes to use the global file parameter settings defined in the Extract Preferences dialog box. Select No to set parameters for the schedule and override the global settings.
Compression Type Select the type of compression you’d like to use for the schedule.
Split file size (GB) Specify the file size by which extracted CSV files are divided for a single VO for the schedule. The default is 1 GB. You can set the file size from one to five GB.
Uploaded file expiry (Days) Enter the number of days you’d like the extract files to persist for the scheduled extract.

Monitor Extracts

In the Schedules list, click Actions and select the option for the last run corresponding to the job type, Application Data Extract or Active Primary Key Extract. Each job type displays in its own dialog box, which lists the last extract status of each VO with status of SUCCESS or FAILURE for each data store and error messages in the case of failures. The ESS Request Id column displays the job for which the VO extraction last ran. You can also select a job in the Job list to filter the results.

Manage Files in External Storage for Custom Warehouse Integration

During extract, view object (VO) data in compressed files is uploaded to external storage with a manifest file that lists the files from the current batch. Use the information in the manifest file to process data. For a custom warehouse implementation, you must manage the manifest file and its content.

Data Uploaded to External Storage

The following files are uploaded as compressed files with .zip extensions with the file name format of file_<vonames in lower case and '.' replaced with _>-batch*.zip:

  • Comma-separated value (.csv) files: VO data, uploaded as compressed files.

  • Metadata comma-separated value (.mdcsv) files: metadata files with details about columns and data type definitions for Data Stores (BIVOs).

  • Primary Key comma-separated value (.pecsv) files: data files with primary key column values used to identify deleted records in the warehouse.
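Because the upload name flattens the VO name (lowercased, with each '.' replaced by '_'), a small parser can recover the flattened form. This is a sketch; note that the transformation is lossy, so only the flattened VO name is recoverable, not the original dotted one:

```python
import re

# Assumed upload name format from the text above:
# file_<vo name, lowercased, '.' replaced with '_'>-batch<suffix>.zip
ZIP_NAME = re.compile(r"^file_(?P<vo>[a-z0-9_]+)-batch(?P<suffix>.+)\.zip$")

def flattened_vo_name(zip_name: str) -> str:
    """Return the flattened VO name encoded in an uploaded zip file name."""
    m = ZIP_NAME.match(zip_name)
    if m is None:
        raise ValueError(f"unexpected file name: {zip_name}")
    return m.group("vo")

name = "file_fscmtopmodelam_analyticsserviceam_currenciestlpvo-batch1209716923-20150615_105514.zip"
print(flattened_vo_name(name))
```

After unzipping, the contained files can then be routed to handlers by extension (.csv, .mdcsv, or .pecsv).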

The uploaded files are detailed in a manifest file, whose name format depends on the configured storage area. Universal Content Management (UCM) manifest files are named MANIFEST.MF. Cloud Storage Service manifest files have a file name format of MANIFEST-[TIMESTAMP].MF.

Note: To support parsing of the comma-separated value files, column values are wrapped in double quotes. A double quote within a column value is escaped using two consecutive double quotes. Because of this, a custom delimiter isn't required. Decimal floating point numbers may have rounding errors due to representational limitations of binary floating point formats in BICC. For example, a decimal number such as 1.365 may be represented as 1.364999999999999 when converted to DOUBLE type.
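The quoting convention described in the note is standard CSV, so most CSV libraries parse it without extra configuration. A minimal Python sketch, using a hypothetical sample row; parsing decimal columns from the string (rather than through a binary float) preserves the exact value:

```python
import csv
import io
from decimal import Decimal

# Hypothetical sample row in the quoting style described above:
# values wrapped in double quotes, embedded quotes doubled.
sample = '"1001","Widget ""A""","1.365"\n'
row = next(csv.reader(io.StringIO(sample)))
print(row)  # ['1001', 'Widget "A"', '1.365']

# Constructing Decimal from the string keeps the exact decimal value;
# routing it through float() introduces the binary rounding error.
exact = Decimal(row[2])
rounded = Decimal(float(row[2]))
print(exact == Decimal("1.365"))    # True
print(rounded == Decimal("1.365"))  # False
```

This is why downstream loaders should map such columns to a decimal type from the raw string rather than a binary DOUBLE.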

Manifest File Formats and Content

The first line of a manifest file describes the source version. In UCM MANIFEST.MF files, the body of the file contains information about each of the uploaded files in the format vo_name;ucm_document_id;md5_check_sum_value. For example, in the following sample line from a UCM manifest file, crmanalyticsam_partiesanalyticsam_customer is the VO name, 9526 is the UCM document ID of the uploaded file, and b2af2bf486366e2c2cb7598849f0df2e is the checksum value.

crmanalyticsam_partiesanalyticsam_customer;9526;b2af2bf486366e2c2cb7598849f0df2e

In Cloud Storage Service MANIFEST-[TIMESTAMP].MF files, the body of the file contains information about each of the uploaded files in the format extract_uploaded_filename;md5_check_sum_value. For example, in the following sample line from a Storage Service manifest file, file_fscmtopmodelam_analyticsserviceam_currenciestlpvo-batch1209716923-20150615_105514.zip is the uploaded file name, and fa981be0caf70a9a52df3aceb9998cc9 is the checksum value.

file_fscmtopmodelam_analyticsserviceam_currenciestlpvo-batch1209716923-20150615_105514.zip;fa981be0caf70a9a52df3aceb9998cc9
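Both manifest flavors use semicolon-separated fields, so a single parser can handle either body line. A sketch, assuming the line has already been read past the version header on the first line:

```python
# UCM MANIFEST.MF data lines: vo_name;ucm_document_id;md5_check_sum_value
# Cloud Storage Service MANIFEST-[TIMESTAMP].MF data lines:
#     extract_uploaded_filename;md5_check_sum_value
def parse_manifest_line(line: str) -> dict:
    """Split one manifest body line into named fields."""
    parts = line.strip().split(";")
    if len(parts) == 3:  # UCM format
        return {"vo_name": parts[0], "ucm_document_id": parts[1], "md5": parts[2]}
    if len(parts) == 2:  # Cloud Storage Service format
        return {"file_name": parts[0], "md5": parts[1]}
    raise ValueError(f"unexpected manifest line: {line!r}")

entry = parse_manifest_line(
    "crmanalyticsam_partiesanalyticsam_customer;9526;b2af2bf486366e2c2cb7598849f0df2e"
)
print(entry["ucm_document_id"])  # 9526
```

The md5 field can then be compared against a checksum of the downloaded file to verify its content.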

Downloading and Processing Content from UCM

To download extracted content from UCM, search for DOCTITLE MANIFEST.MF and sort by DOCDATE in DESC order. This returns all of the UCM manifest files in descending date order. Download each MANIFEST file using its docid. Parse the lines in the manifest file to download data files using their respective ucm_document_ids. You can use the md5_check_sum_value to verify downloaded file content. After downloading the files, unzip them and process them based on their file extension, for example by .csv, .mdcsv, or .pecsv.

Once the data files are processed, rename the corresponding MANIFEST.MF file in UCM by adding a timestamp prefix in the format [TIMESTAMP]_MANIFEST.MF so that it’s not reused in the next download from UCM. Expire the manifest file and all the processed files after 30 days so that UCM storage doesn’t run out of space.
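The download-verify-unzip-rename loop above can be sketched as follows. Here `download_by_docid` and `rename_in_ucm` are hypothetical stand-ins for whatever UCM client calls you use; only the checksum verification and unzip steps rely on real library behavior:

```python
import hashlib
import zipfile
from datetime import datetime

def process_manifest(manifest_text, download_by_docid, rename_in_ucm, manifest_docid):
    """Process one UCM MANIFEST.MF: fetch, verify, and unzip each data file,
    then rename the manifest so it isn't reused in the next download."""
    lines = manifest_text.splitlines()
    # The first line describes the source version; data lines follow.
    for line in lines[1:]:
        vo_name, docid, expected_md5 = line.strip().split(";")
        local_zip = download_by_docid(docid)           # hypothetical UCM call
        with open(local_zip, "rb") as f:
            actual_md5 = hashlib.md5(f.read()).hexdigest()
        if actual_md5 != expected_md5:
            raise IOError(f"checksum mismatch for {vo_name}")
        with zipfile.ZipFile(local_zip) as zf:
            zf.extractall()  # then process by extension: .csv, .mdcsv, .pecsv
    # Prefix the manifest name with a timestamp so it isn't picked up again.
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")
    rename_in_ucm(manifest_docid, f"{stamp}_MANIFEST.MF")  # hypothetical UCM call
```

Expiring the renamed manifest and the processed files after 30 days, as described above, keeps UCM storage from filling up.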

Downloading and Processing Content from Cloud Storage Service

To download extracted content from Cloud Storage Service, search for MANIFEST- and sort by filename. This provides all of the manifest files in order by date. Download each manifest file and parse the lines in the manifest file to download data files using their respective file names. You can use the md5_check_sum_value to verify downloaded file content. After downloading the files, unzip them and process them based on their file extension, for example by .csv, .mdcsv, or .pecsv.

Once the data files are processed, rename the corresponding manifest file in Storage Service by adding a timestamp prefix in the format [TIMESTAMP]_MANIFEST so that it’s not reused in the next download. Expire the manifest file and all the processed files after 30 days so that storage doesn’t run out of space.

BI Cloud Connector Preferences

Set preferences for the BI Cloud Connector, including regional settings, display language, and accessibility options.

Preference Description
Regional Select the regional options, which include indicating the country, date format, time format, number format, currency, and time zone.
Language Select the display language for the BI Cloud Connector Console.
Accessibility Select accessibility options, such as use of a screen reader, high color contrast, and font size.

To set Regional and Language preferences, click the Preferences button in the panel tab and select the Regional link or Language link. To save your changes, click Save. To set Accessibility preferences, click the Accessibility button, make changes to your settings, and click Apply.