12 Run and Manage Jobs Using the Web Interface

The Jobs page in the Essbase web interface is a centralized place from which to execute operations over the entire Essbase platform.

Essbase administrators, or users with execute permission on particular applications, can use the Jobs page to quickly execute jobs such as clearing and loading data, importing and exporting applications, and running calculations, among other tasks.

The Jobs page is convenient for one-time execution of administrative tasks, but it is not a replacement for scripted administration of Essbase platform jobs. MaxL, CLI, REST, and API programs are the most efficient way to schedule jobs for production activities and life cycle maintenance.

View Job Status and Details

Users have access to job listings based on their assigned user role. For example, if you have the Service Administrator role, you can see all jobs; if you have the User role, you can see only the jobs you ran.

Because Essbase jobs run in the background, you must refresh the Jobs page to view their status.

The job listing shows all the jobs for all the applications provisioned to the logged-in user. You can scroll down to see the history of all the jobs that you ran.

  1. On the Applications page, click Jobs.
  2. Click Refresh to refresh once, or click Auto Refresh to refresh the jobs every few seconds. In Cube Designer, job status refreshes automatically.

You can also view details for an individual job: click the Actions menu to the right of the job listing, and select Job Details to see the job's inputs and outputs.

You can kill jobs on the Console page, on the Sessions tab:
  1. On the Console page, click Sessions.
  2. Select the user, and the application and cube in which the job is running.
  3. Select Kill All.

    This kills all jobs in the application and cube, started by the selected user.
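
For scripted administration, the same operation is available in MaxL. A minimal sketch, assuming an application named Sample and a user named jsmith (both hypothetical; substitute your own, and verify the session-specification grammar against the MaxL reference for your Essbase version):

```maxl
/* Kill all running requests in the Sample application */
alter system kill request on application Sample;

/* Kill all requests started by a particular user */
alter system kill request by user 'jsmith';
```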

Execute Jobs

You can execute numerous types of jobs from the Jobs page. For each, you choose an option from the New Job drop-down list, and then provide the necessary information.

By default, you can execute up to 10 jobs concurrently; this limit is configurable.


Build Aggregations

Building aggregations requires Database Access permission.

Aggregations apply to aggregate storage cubes. Aggregations are intermediate stored consolidations, called aggregate views. Aggregate views store upper-level intersections, which improve query performance by avoiding dynamic aggregation of the most commonly queried intersections in the cube. The term aggregation refers both to the aggregation process and to the set of values stored as a result of that process.

When you build an aggregation, Essbase selects aggregate views to be rolled up, aggregates them based on the outline hierarchy, and stores the cell values in the selected views. If an aggregation includes aggregate cells dependent on level 0 values that are changed through a data load, the higher-level values are automatically updated at the end of the data load process.


Image of the build aggregations dialog box, in Jobs.

To build aggregations:

  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Build Aggregations.
  3. For Application, choose an application.
  4. For Database, choose a cube.
  5. Optionally, enter a non-zero value for Ratio To Stop.

    Leaving Ratio To Stop at zero (the default) means there is no stopping ratio set.

    Consider this option if there is no known common type of query executed by your cube’s users, and you want to improve performance by limiting the cube’s growth. Essbase aggregates the selected views, except that the growth of the aggregated cube must not exceed the given ratio. For example, if the size of a cube is 1 GB, a stopping ratio of 1.2 means that the data added by the aggregation cannot exceed 20% of 1 GB (0.2 GB), for a maximum total size of 1.2 GB.

  6. Check or clear the box for Based on Query Data.

    If you check the box for Based on Query Data, Essbase aggregates a selection of views that is defined based on analysis of user querying patterns. This is a good approach if similar types of queries are typically executed by the users of your cube.

    This check box has no effect unless you have first enabled query tracking. For general information about query tracking, see Selecting Views Based on Usage.

    After you’ve enabled query tracking, allow sufficient time to collect user data-retrieval patterns before running this job. A good approach is to prepare a set of your most important and longest-running queries, enable query tracking, run the prepared queries, and then run this job to build an aggregation based on the tracked queries.

    While query tracking is enabled, the cost of retrieving cells is recorded for every level combination. This recording continues until the application is shut down or until you turn off query tracking (using the MaxL statement alter database <dbs-name> disable query_tracking).

  7. Select whether to enable alternate rollups.

    Consider checking this box if your cube implements alternate hierarchies for shared members or attributes, and you want to include them in the aggregation.

See Aggregating an Aggregate Storage Database.
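
The equivalent job can be scripted with the MaxL execute aggregate process statement. A minimal sketch, assuming an aggregate storage cube named ASOSamp.Basic (hypothetical; substitute your own) and the options described above; check the exact clause combinations against the MaxL reference for your version:

```maxl
/* Default view selection, no stopping ratio */
execute aggregate process on database ASOSamp.Basic;

/* Stop adding views once the cube grows beyond 1.2 times its pre-aggregation size */
execute aggregate process on database ASOSamp.Basic
    stopping when total_size exceeds 1.2;

/* Select views based on tracked query patterns (requires query tracking) */
execute aggregate process on database ASOSamp.Basic
    based on query_data;
```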

Clear Aggregations

Clears aggregations. Requires Database Update permission.

  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Clear Aggregations.
  3. For Application, choose an application.
  4. For Database, choose a cube.
  5. Click OK.
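
In MaxL, the same job is a one-line statement; this sketch assumes an aggregate storage cube named ASOSamp.Basic (substitute your own):

```maxl
/* Remove all aggregate views from the cube, leaving level 0 data intact */
alter database ASOSamp.Basic clear aggregates;
```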

Export to Table Format

If you have at least the Database Update application permission, you can export a cube from the Essbase web interface to Excel, in tabular format.

This exported tabular data is organized into columns with headers that Essbase can use to deploy a new multidimensional cube. See Export a Cube to Tabular Data.

To export a cube in tabular format:

  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Export to Table Format.
  3. For Application, select an application.
  4. Choose whether to export dynamic blocks.

    If you choose Export Dynamic Blocks, cells for dynamic members in the dense dimensions are exported.

Run Calculation

Requires at least Database Update permission, as well as provisioned access to the calculation script.

Prerequisite: upload the script, as a .csc file, to the cube directory. See Work with Files and Artifacts.

To run a calculation:

  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Run Calculation.
  3. For Application, choose an application.
  4. For Database, choose a cube.
  5. Select a calculation script.
  6. Click OK.

See Calculate Cubes.
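
The same job can be scripted in MaxL. A minimal sketch, assuming the cube Sample.Basic and an uploaded calculation script named calcall.csc (both hypothetical):

```maxl
/* Run the calculation script calcall.csc against Sample.Basic */
execute calculation 'Sample'.'Basic'.'calcall';
```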

Build Dimension

Building dimensions is the process of loading dimensions and members to a cube outline using a data source and a rule file. To run a dimension build job, you must have at least Database Manager permission.


Image of the Build Dimensions dialog box, in Jobs.

This procedure covers how to build dimensions using the File load type. SQL and Datasource types are also available. For information on loading different sources of data, see Define Rules that Query External Sources.

To build a dimension:

  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Build Dimension.
  3. For Application, choose an application.
  4. For Database, choose a cube.
  5. Click the Actions menu to the right of the Script field and select a rule file.
  6. Select the File load type.
  7. Click the Actions menu to the right of the Data File field to select a data file.
  8. Choose a restructure option.
    • Preserve All Data: Preserves all existing data.
    • Preserve No Data: Discards existing data (valid for block storage and aggregate storage cubes).
    • Preserve Leaf Level Data: Preserves data in existing level 0 blocks (block storage only). If you select this option, all upper-level blocks are deleted before the cube is restructured. After restructure, only data in level 0 blocks remains.
    • Preserve Input Data: Preserves existing input-level blocks (block storage only).
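
The steps above correspond to the MaxL import database ... dimensions statement. A sketch, assuming a cube Sample.Basic with uploaded files dims.txt and a rule file named dimbuild (all hypothetical; clause order may vary by version, so verify against the MaxL reference):

```maxl
/* Build dimensions from a data file through a rule file, preserving all data */
import database Sample.Basic dimensions
    from server data_file 'dims.txt'
    using server rules_file 'dimbuild'
    preserve all data
    on error write to 'dimbuild_errors.txt';
```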

Clear Data

Changes the values of all cells containing data to #Missing. Requires at least Database Update permission.

  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Clear Data.
  3. For Application, choose an application.
  4. For Database, choose a cube.
  5. Click OK.
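
In MaxL, the equivalent is a single statement; Sample.Basic is a placeholder for your own application and cube:

```maxl
/* Set every cell in Sample.Basic to #Missing; the outline is unchanged */
alter database Sample.Basic reset data;
```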

Export Data

Exports data to a text file. Requires at least Database Manager permission.


Image of the Export Data dialog box, in Jobs.

To export data:
  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Export Data.
  3. For Application, choose an application.
  4. For Database, choose a cube.
  5. For Data Level, select a data level.

    You can choose from All Data, Level 0 Data, or Input Data.

To download the exported data file:
  1. On the Applications page, click Jobs.
  2. Select the Actions menu to the right of the export job.
  3. Select Job Details.
  4. To view the data file, click the Output Path link. To download the file, click the download icon.

    The exported data file is stored in the database folder in the catalog.
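
The export job maps to the MaxL export database statement. A sketch, assuming the cube Sample.Basic and hypothetical output file names, with one statement per Data Level option:

```maxl
/* All Data */
export database Sample.Basic data to data_file 'exp_all.txt';

/* Level 0 Data */
export database Sample.Basic level0 data to data_file 'exp_lev0.txt';

/* Input Data */
export database Sample.Basic input data to data_file 'exp_input.txt';
```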

Export Excel

Exports a cube to an Excel application workbook. Requires at least Database Manager permission.


Image of the Export Excel dialog box, in Jobs.

To export to Excel:
  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Export Excel.
  3. For Application, choose an application.
  4. For Database, choose a cube.
  5. Select a build method.

    See Understanding Build Methods.

  6. Choose whether to export data. This option adds a data worksheet to the application workbook.
  7. Choose whether to export scripts. This option adds Calc and MDX sheets to the application workbook if calculation scripts and MDX scripts exist in the cube.
  8. Click OK.

Export LCM

Backs up cube artifacts to a Lifecycle Management (LCM) .zip file.

Requires at least the User role with the Application Manager application permission, or you must be the power user who created the application.


Image of the Export LCM dialog box, in Jobs.

To back up cube artifacts to a .zip file:

  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Export LCM.
  3. Select an application in the Application selector, or click All Applications to export all applications to the ZIP file.
  4. Enter a name for the .zip file. If no location is specified, it is saved in <Application Directory>/catalog/users/<user_name>.
  5. Optionally, select any of the following backup-related actions:
    • Skip Data - Excludes data from the backup.
    • Include Server Level Artifacts - Include globally defined connections and Datasources as part of the export.
    • Generate Artifact List - Generate a text file containing a complete list of the exported artifacts. You can use this text file to manage the import of artifacts. For example, you can rearrange the order of artifacts in the list to control the order in which they are imported. You can skip importing some artifacts by removing or commenting out items in the list.

Notes

By default, the ZIP file is stored in the Essbase server file catalog, in the user directory of the user who exported it.

Lifecycle Management (LCM) import operations (and Migration Utility import) are not supported for migration of federated partitions. Federated partitions (applicable only for deployments on OCI) must be recreated manually on the target.

See also: LcmExport: Back Up Cube Files.

Import LCM

Imports cube artifacts from a Lifecycle Management (LCM) ZIP file that was created using the Export LCM job (or the LcmExport: Back Up Cube Files CLI command).

Requires at least the User role with the Application Manager application permission, or you must be the power user who created the application.


Image of the Import LCM dialog box, in Jobs.

To restore cube artifacts from a Lifecycle Management (LCM) ZIP file:

  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Import LCM.
  3. Select the LCM export ZIP file.
  4. Enter the target application name.
  5. Select or clear Server Artifacts.

    If server-level artifacts were included in the LCM export, you can check this box to also include server-level artifacts on LCM import.

  6. Select or clear Overwrite.

    Choosing to overwrite causes recreation of the target application. The import operation deletes and recreates the application, importing only the artifacts present in the ZIP file. If overwrite is not selected, and the specified application name is the same as an existing application, the Import LCM job fails.

  7. Select whether to use verbose descriptions.

    Choosing Verbose enables extended descriptions.

  8. Click OK.

Notes

To check the job status, click the Actions menu to the right of the job and select Job Details.

After the LCM import completes, you may need to take further action to restore migrated connections to external sources. To do this, open the connection and enter the password.

Lifecycle Management (LCM) import operations (and Migration Utility import) are not supported for migration of federated partitions. Federated partitions must be recreated manually on the target.

Rollback from a patch to a version older than the one used to configure the Essbase instance is not supported. In this scenario, importing applications from LCM in the Essbase web interface can fail after the rollback.

See also: LcmImport: Restore Cube Files.

Load Data

Loading data is the process of inserting values into a cube, using a data source and a rule file. To run this job, you must have at least Database Manager permission.

Select a workflow:

Load Data From File

This procedure covers how to load data using the File load type.

To load data from a file:

  1. On the Applications page, click Jobs.

  2. From the New Job menu, select Load Data.

  3. For the Load Type, select File.


    Image of the Load Data dialog box, in Jobs, with Load Type of File selected, and a rule file and data file selected.

  4. Click Select Files from Catalog.

  5. Browse to the data files you want to load, select them and click Select.

  6. If you are using rule files, select the Add file option immediately below each of the data files you selected, browse to the rule file you want to use for that data file, select it, and click Select.

  7. Select Abort on Error if you want the data load to stop when it encounters an error.

  8. Click OK.

  9. To check the job status, click the Actions menu to the right of the job and select Job Details. If you are loading more than one data file, Job Details includes information about each individual data load.
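
The file-based load above can be scripted with MaxL. A minimal sketch, assuming the cube Sample.Basic with an uploaded data file data.txt and a rule file named loadrule (all hypothetical):

```maxl
/* Load a data file through a rule file, stopping at the first error */
import database Sample.Basic data
    from server data_file 'data.txt'
    using server rules_file 'loadrule'
    on error abort;
```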

Load Data From SQL Source

This procedure covers how to load data using the SQL load type. Use this type if the load rule itself queries an external source of data. To learn how to set up rules to access external sources of data, see Define Rules that Query External Sources.

  1. On the Applications page, click Jobs.

  2. From the New Job menu, select Load Data.

  3. For the Load Type, select SQL.

  4. For Script, browse the catalog and select the rule file.

  5. Do one of the following:

    • If your load rule's connectivity to the external database is based on configured ODBC drivers or a connection string, enter the User Name and Password of a user authorized to access the external database.

    • If your load rule's connectivity to the external database is based on a global or application-level connection saved in Essbase, click Use Connection Credentials and select the named connection.

      Application-level connections are prefixed with the application name; for example, SAMPLE.OracleDB.


      Image of the Load Data dialog box in Jobs, with Load Type of SQL selected for application Sample and database Basic. The rule file is entered for the Script field, with the path of /applications/Sample/Basic/Data.rul. The Use Connection Credentials option is checked. The connection being used is named SAMPLE.OracleDB.

  6. Select Abort on Error if you want the data load to stop when it encounters an error.

  7. Click OK.

  8. To check the job status, click the Actions menu to the right of the job and select Job Details. If you are loading more than one data file, Job Details includes information about each individual data load.
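
The SQL-based load can likewise be scripted in MaxL, with the external database credentials supplied in the connect clause. A sketch, assuming the cube Sample.Basic, a rule file named loadsql, and hypothetical credentials:

```maxl
/* Load from the SQL source named in the rule file, supplying credentials */
import database Sample.Basic data
    connect as 'sql_user' identified by 'sql_password'
    using server rules_file 'loadsql'
    on error abort;
```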

Load Data From Datasource

This procedure covers how to load data using the Datasource load type. This procedure assumes that your load rule's SQL Properties are pointing to a Datasource defined in Essbase, as illustrated in Access External Data Using a Connection and Datasource.

  1. On the Applications page, click Jobs.

  2. From the New Job menu, select Load Data.

  3. For the Load Type, select Datasource.

  4. For Script, browse the catalog and select the rule file.


    Image of the Load Data dialog box in Jobs, with Load Type of Datasource selected for application Sample and database Basic. The rule file is entered for the Script field, with the path of /applications/Sample/Basic/Data.rul.

  5. Select Abort on Error if you want the data load to stop when it encounters an error.

  6. Click OK.

  7. To check the job status, click the Actions menu to the right of the job and select Job Details. If you are loading more than one data file, Job Details includes information about each individual data load.

Run MDX

Requires at least Database Access permission.

To run an MDX script:

  1. On the Applications page, click Jobs.
  2. From the New Job menu, select Run MDX.
  3. For Application, choose an application.
  4. For Database, choose a cube.
  5. Select an MDX script.
  6. Click OK.

See Run MDX Scripts.
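
MDX queries can also be run directly in the MaxL shell, which passes SELECT statements through as MDX. A minimal sketch against the hypothetical Sample.Basic cube:

```maxl
/* An MDX query typed directly in the MaxL shell */
SELECT
    {[Year].Children} ON COLUMNS,
    {[Product].Children} ON ROWS
FROM Sample.Basic;
```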