Process Large Data Sets Asynchronously with Different Bulk Operations

The Salesforce Bulk API enables you to process large data sets asynchronously with different bulk operations. For every bulk operation, the Salesforce application creates a job that is processed in batches.

A job contains one or more batches, and each batch is processed independently. A batch is a nonempty CSV, XML, or JSON file that is limited to 10,000 records and must be less than 8 MB in size. Because batches are processed in parallel, no execution order is guaranteed. A batch can contain a maximum of 10,000,000 characters for all of its data. Within a batch, a record can contain a maximum of 5,000 fields and a maximum of 400,000 characters across all of its fields, and each field can contain a maximum of 32,000 characters.
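These limits are easy to check before you submit a file. The following Python sketch is purely illustrative (the function name and messages are not part of the Salesforce API or Oracle Integration); it validates one CSV batch against the limits listed above:

```python
import csv
import io

# Salesforce Bulk API batch limits, as described above
MAX_RECORDS_PER_BATCH = 10_000
MAX_BATCH_BYTES = 8 * 1024 * 1024      # batch file must be under 8 MB
MAX_BATCH_CHARS = 10_000_000           # all data in one batch
MAX_FIELDS_PER_RECORD = 5_000
MAX_CHARS_PER_RECORD = 400_000         # all fields of one record combined
MAX_CHARS_PER_FIELD = 32_000

def validate_csv_batch(data: str) -> list[str]:
    """Return a list of limit violations for one CSV batch (empty if valid)."""
    problems = []
    if not data:
        problems.append("batch is empty")
        return problems
    if len(data.encode("utf-8")) >= MAX_BATCH_BYTES:
        problems.append("batch file is 8 MB or larger")
    if len(data) > MAX_BATCH_CHARS:
        problems.append("batch exceeds 10,000,000 characters")
    rows = list(csv.reader(io.StringIO(data)))
    records = rows[1:]  # first row is the CSV header
    if len(records) > MAX_RECORDS_PER_BATCH:
        problems.append("batch exceeds 10,000 records")
    for i, rec in enumerate(records, start=1):
        if len(rec) > MAX_FIELDS_PER_RECORD:
            problems.append(f"record {i} has more than 5,000 fields")
        if sum(len(f) for f in rec) > MAX_CHARS_PER_RECORD:
            problems.append(f"record {i} exceeds 400,000 characters")
        if any(len(f) > MAX_CHARS_PER_FIELD for f in rec):
            problems.append(f"record {i} has a field over 32,000 characters")
    return problems
```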

This use case describes how to configure the Salesforce Adapter to create a large number of account records in Salesforce Cloud.

To perform this operation, you create FTP Adapter and Salesforce Adapter connections in Oracle Integration.

In this use case, a CSV file is used as input, but you can also use files in other formats. The Salesforce Adapter transforms the file contents into a Salesforce-recognizable format.
(See the bulk_operations.png illustration.)

  1. Create a scheduled orchestration integration.
  2. Drag an FTP Adapter into the integration canvas.
  3. Configure the FTP Adapter as follows:
    1. On the Operations page, select Download File from the Select Operation list.
    2. Select ASCII from the Select a Transfer Mode list.
    3. Provide the input directory, file name, and download directory.
    4. Review your selections on the Summary page.
  4. Drag a stage file action into the integration canvas below the FTP Adapter. The stage file action helps the Salesforce Adapter fetch data in the form of segments (a single segment contains 200 records).
    1. On the Basic Info page, provide a name.
    2. On the Configure Operations page, select Read Files in Segments from the Choose Stage File Operation field.
    3. Specify the XPath for the file name in the Specify the File Name field.
    4. In the Specify the Directory to read from field, specify the directory to which the file was downloaded in Oracle Integration using FTP.
    5. On the Schema Options page, select Create a new schema from a CSV file in the Do you want to create a new schema or select an existing one list.
    6. On the Format Definition page, click Choose File and upload the sample CSV file in the Select a New Delimited Data File field.
    7. Review your selections on the Summary page.
  5. Drag a Salesforce Adapter inside the stage file action and configure it with the required BULK operation.
    1. On the Basic Info page, provide a name.
    2. Select standard applications delivered by Salesforce.com.
    3. On the Action page, select Perform Bulk Data Operations.
    4. On the Operations page, select the required operation (for this example, Create) in the Select an Operation field.
    5. Select the required business objects (for this example, Account) in the Select Business Objects field.
    6. Review your selections on the Summary page.
  6. In the mapper, map the stage file action elements to the Salesforce input payload.
  7. Drag a Salesforce Adapter below the stage file action and configure it with the Final Batch operation.
    1. On the Basic Info page, provide a name.
    2. On the Action page, select Perform Bulk Data Operations.
    3. On the Operations page, select Final Batch in the Select an Operation field.

      Note:

      You cannot select the object on which to perform the operation because this Salesforce Adapter connection performs the activity for the operation that was configured for the Salesforce endpoint inside the stage file action.

      (See the bulk_operation.png illustration.)
    4. Review your selections on the Summary page.
  8. Drag an FTP Adapter connection below the Salesforce Adapter and configure the FTP endpoint to write the Salesforce final batch response to a file for future use.
    1. On the Basic Info page, provide a name.
    2. On the Operations page, select Write File from the Select Operation list.
    3. Select ASCII from the Select a Transfer Mode list.
    4. Specify the output directory, file name pattern, and download directory.
    5. On the Schema page, select XML schema (XSD) document (to describe the XML message) from the Which one of the following choices would be used to describe the structure of the file contents list.
    6. On the File Contents-Definition page, click Choose File and upload the schema source file in the Select a New File field.
    7. Review your selections on the Summary page.
  9. In the mapper, map the Salesforce final batch response to the FTP write request.

    The completed integration looks as follows. (See the bulk_mapper2.png illustration.)
  10. Specify the tracking variable.
    1. Click Tracking.
    2. Drag the required variable to use for the tracking reference (for this example, startTime is selected).
  11. Activate the integration.
    After successful activation, you can submit the integration and monitor it at runtime in Oracle Integration. Once you receive the reference bulk job ID in the final batch response, you can also get the job status using the Get Status for all Batches operation available under the Bulk Operation type.
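The stage file action configured in step 4 hands the Salesforce endpoint 200 records at a time. As a conceptual sketch of that segmentation (hypothetical Python, not the adapter's internal implementation):

```python
def segments(records, size=200):
    """Yield consecutive segments of at most `size` records, mirroring how
    the stage file action feeds the Salesforce endpoint 200 records at a
    time until the staged file is exhausted."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

# Example: 450 records produce segments of 200, 200, and 50 records.
```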

    Salesforce provides documentation on bulk operations. See https://resources.docs.salesforce.com/sfdc/pdf/api_asynch.pdf.
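If you want to query the job status outside Oracle Integration, the Bulk API documented above exposes a REST resource for jobs (GET /services/async/{version}/job/{jobId}, with the session ID in the X-SFDC-Session header). The following Python sketch builds, but does not send, such a request; the instance host, API version, job ID, and session token below are placeholder values, not real credentials:

```python
from urllib.request import Request

def job_status_request(instance: str, api_version: str,
                       job_id: str, session_id: str) -> Request:
    """Build (but do not send) a Bulk API job-status request: a GET on
    /services/async/{version}/job/{jobId} that carries the session ID
    in the X-SFDC-Session header."""
    url = f"https://{instance}/services/async/{api_version}/job/{job_id}"
    return Request(url, headers={"X-SFDC-Session": session_id}, method="GET")

# Placeholder values for illustration only.
req = job_status_request("na1.salesforce.com", "52.0",
                         "750x000000000001", "00Dx0...sessionToken")
```

Sending the request (for example with `urllib.request.urlopen`) returns the job's XML status document, which includes the counts of processed and failed batches.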