Import Data (Aggregate Storage)
A separate topic covers the block storage (non-aggregate) version of this statement.
Import data from text or spreadsheet data files, with or without a rules file.
Minimum permission required: Write.
Syntax

[Syntax diagram: imp_data_as.gif]
Use import data in the following ways to load data into an aggregate storage database:
Keywords
- import database <dbs-name> data from...
  Specify whether the data import is from a local or server file, and what type of file to import data from.
- ...using ... rules_file
  Import data into the database using a specified rules file.
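As a sketch of the rules-file grammar, the following statement loads a server data file through a server-side rules file; the file names data.txt and myrules are hypothetical, and the error handling mirrors the example later in this topic:

```
import database ASOsamp.Sample data
  from server data_file 'data.txt'
  using server rules_file 'myrules'
  on error abort;
```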
- ...<data error spec> (on error...)
  Required. Tells Essbase what to do if an error occurs during the data load: abort the operation, or write or append to a specified error log.
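For instance, rather than aborting, the error specification can direct Essbase to append load errors to a log file and continue; the file names below are hypothetical:

```
import database ASOsamp.Sample data
  from server data_file 'data.txt'
  on error append to 'dataerrors.txt';
```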
- ...<data record spec> from data_string
  Load a single data record into the selected database. The string following data_string must be a contiguous line, without newline characters.
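A single-record load might look like the following sketch; the member combination and value inside the data string are hypothetical, and the string itself must stay on one line:

```
import database ASOsamp.Sample data from data_string '"Curr Year" "Jan" "Sales" 100' on error abort;
```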
- ...<SQL connect spec> (connect as...)
  If you are importing data from an SQL source, provide your SQL user name and password. You must always use a rules file when you load SQL data sources.
When loading SQL data into aggregate storage databases, you can use up to eight rules files to load data in parallel by using the multiple rules_file grammar with the grammar specified in <buffer-block-spec>. Essbase initializes multiple temporary aggregate storage data load buffers (one for each rules file) and, when the data is fully loaded into the buffers, commits the contents of all buffers into the database in one operation.
Each rules file must use the same authentication information (SQL user name and password).
In the following example, SQL data is loaded from two rules files (rule1.rul and rule2.rul):

import database ASOsamp.Sample data connect as TBC identified by 'password' using multiple rules_file 'rule1','rule2' to load_buffer_block starting with buffer_id 100 on error write to "error.txt";

In specifying the list of rules files, use a comma-separated string of rules file names (excluding the .rul extension). Each rules file name must not exceed eight bytes, and the rules files must reside on Essbase Server.

In initializing a data load buffer for each rules file, Essbase uses the starting data load buffer ID you specify for the first rules file in the list (for example, ID 100 for rule1) and increments the ID number by one for each subsequent data load buffer (for example, ID 101 for rule2).
The ODBC driver you are using must be configured for parallel SQL connections.
Note:
Performing multiple SQL data loads in parallel to aggregate storage databases is different from using the to load_buffer with buffer_id grammar to load data into a buffer, and then using the from load_buffer with buffer_id grammar to explicitly commit the buffer contents to the database.
- ...to load_buffer with buffer_id
  If you are importing data from multiple data files to an aggregate storage database, you can import to a buffer first, in order to make the data import operation more efficient.
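A typical buffer-based workflow initializes a buffer, imports one or more files into it, and then commits the buffer in a single operation. The sketch below assumes the standard alter database ... initialize load_buffer grammar and hypothetical file names file1.txt and file2.txt:

```
alter database ASOsamp.Sample initialize load_buffer with buffer_id 1;
import database ASOsamp.Sample data from server data_file 'file1.txt' to load_buffer with buffer_id 1 on error abort;
import database ASOsamp.Sample data from server data_file 'file2.txt' to load_buffer with buffer_id 1 on error abort;
import database ASOsamp.Sample data from load_buffer with buffer_id 1;
```

The final statement commits the accumulated contents of buffer 1 to the database, as described under from load_buffer with buffer_id below.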
- ...from load_buffer with buffer_id
  If you are importing data from multiple data files to an aggregate storage database, you can import from a data load buffer in order to make the data import operation more efficient.
- ...from load_buffer with buffer_id...values
  Specify whether you want to add to existing values, subtract from existing values, or override existing values when committing the contents of the specified data load buffer to the database.
- ...from load_buffer with buffer_id...create slice
  Commit the contents of the specified data load buffer to the database by creating a new data slice.
- ...from load_buffer with buffer_id override all data
  Remove the current contents of the database and replace the database with the contents of the specified data load buffer.
- ...from load_buffer with buffer_id override incremental data
  Remove the current contents of all incremental data slices in the database and create a new data slice with the contents of the specified data load buffer. The new data is created with the data load property "add values" (aggregate_sum). If there are duplicate cells between the new data and the primary slice, their values are added together when you query for them.
Notes
- This statement requires the database to be started.
- When using the import statement, you must specify what should happen in case of an error.
- To import from an SQL data source, you must connect as the relational user name and use a rules file.
Examples
import database ASOsamp.Sample data from server data_file '/ASOsamp/Sample/expsamp.txt' on error abort;
Loads data into the ASOsamp.Sample database.
import database ASOsamp.Sample data from load_buffer with buffer_id 1;
Commits the contents of a specified data load buffer to the ASOsamp.Sample database.
import database ASOsamp.Sample data from load_buffer with buffer_id 1, 2;
Commits the contents of multiple data load buffers (buffer_id 1 and buffer_id 2) to the ASOsamp.Sample database.
import database ASOsamp.Sample data from load_buffer with buffer_id 1 add values;
Commits the contents of a specified data load buffer to the ASOsamp.Sample database by adding values.
import database ASOsamp.Sample data from load_buffer with buffer_id 1 override values create slice;
Commits the contents of the specified data load buffer into a new data slice in the ASOsamp.Sample database.
import database ASOsamp.Sample data from load_buffer with buffer_id 1 override all data;
Replaces the contents of the ASOsamp.Sample database with the contents of the specified data load buffer.
import database ASOsamp.Sample data from load_buffer with buffer_id 1 override incremental data;
Replaces the contents of all incremental data slices in the ASOsamp.Sample database by creating a new data slice with the contents of the specified data load buffer. The new data is created with the data load property "add values" (aggregate_sum). If there are duplicate cells between the new data and the primary slice, their values are added together when you query for them.