Process Large Data Sets Asynchronously with Different Bulk 2.0 Data Operations

The Salesforce Bulk API enables you to process large data sets asynchronously with different bulk operations. For every bulk operation, the Salesforce application creates a job that is processed in batches. A job contains one or more batches, and each batch, represented as a nonempty CSV file, is processed independently.
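Conceptually, the adapter drives the standard Bulk API 2.0 ingest job lifecycle on your behalf. The following is a minimal sketch of the underlying REST calls, assuming a hypothetical instance URL, access token, API version, and CSV file name; the adapter performs the equivalent of these steps internally, so you never issue them yourself.

```python
import requests

# Hypothetical connection values (assumptions, not part of this use case's config).
INSTANCE = "https://yourInstance.my.salesforce.com"
API = "v58.0"
HEADERS = {"Authorization": "Bearer <access_token>", "Content-Type": "application/json"}

# 1. Create an ingest job: Salesforce creates one job per bulk operation.
job = requests.post(
    f"{INSTANCE}/services/data/{API}/jobs/ingest",
    headers=HEADERS,
    json={"object": "Account", "operation": "insert", "contentType": "CSV"},
).json()

# 2. Upload the CSV content; Salesforce splits it into batches for processing.
with open("accounts.csv", "rb") as f:
    requests.put(
        f"{INSTANCE}/services/data/{API}/jobs/ingest/{job['id']}/batches",
        headers={"Authorization": HEADERS["Authorization"], "Content-Type": "text/csv"},
        data=f,
    )

# 3. Signal that the upload is complete so Salesforce starts processing the batches.
requests.patch(
    f"{INSTANCE}/services/data/{API}/jobs/ingest/{job['id']}",
    headers=HEADERS,
    json={"state": "UploadComplete"},
)
```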

This use case describes how to configure the Salesforce Adapter to create a large number of account records in Salesforce Cloud.

To perform this operation, you create FTP Adapter and Salesforce Adapter connections in Oracle Integration.

In this use case, a CSV file is used as input. However, you can also use files in other formats. The Salesforce Adapter transforms the file contents into a format that Salesforce recognizes.
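For example, the input CSV for creating account records might look like the following sample. The column names are hypothetical; they simply match the source fields mapped later in this use case (a name, a phone number, and an external ID).

```
Name,Phone,External Id
Acme Corporation,415-555-0100,ACME-001
Globex Inc,650-555-0199,GLOBEX-002
Initech LLC,212-555-0123,INITECH-003
```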


The following diagram shows the flow of this use case: a CSV file (the starting point) is downloaded to Oracle Integration, a stage file action reads the file in segments, the Salesforce Adapter with Bulk 2.0 operations inside that action sends the file in segments to the Salesforce application, and a final invoke performs the Salesforce Final Job operation.

  1. Create a scheduled orchestration integration.
  2. Drag an FTP Adapter into the integration canvas.
  3. Configure the FTP Adapter as follows:
    1. On the Basic Info page, provide a name.
    2. On the Operations page, select Download File from the Select Operation list.
    3. Select Binary from the Select a Transfer Mode list.
    4. Provide the input directory, file name, and download directory.
    5. On the Summary page, review your selections.
  4. Drag a stage file action into the integration canvas below the FTP Adapter. The stage file action lets the Salesforce Adapter fetch the data in segments; a single segment contains 200 records (a sketch of this segmenting appears after these substeps).
    1. On the Basic Info page, provide a name.
    2. On the Configure Operations page, select Read File in Segments from the Choose Stage File Operation field.
    3. Specify the XPath expression for the file name in the Specify the File Name field.
    4. Specify the directory name in the Specify the Directory to read from field (the directory to which the file was downloaded in Oracle Integration using the FTP Adapter).
    5. On the Schema Options page, select Yes in the Do you want to specify the structure for the contents of the file field and select Sample delimited document (e.g. CSV) in the Which one of the following choices would be used to describe the structure of the file contents field.
    6. On the Format Definition page, click Choose File and upload the sample CSV file in the Select a New Delimited Data File field.
    7. Enter the record name and record set name.
    8. On the Summary page, review your selections.
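    To make the segmenting concrete: Read File in Segments hands the scope inside the stage file action roughly 200 records at a time. The following Python sketch is illustrative only (the stage file action does this for you) and assumes a CSV file with a header row:

    ```python
    import csv
    from itertools import islice

    SEGMENT_SIZE = 200  # records per segment, matching the stage file action

    def read_in_segments(path):
        """Yield lists of up to SEGMENT_SIZE records from a CSV file with a header."""
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            while True:
                segment = list(islice(reader, SEGMENT_SIZE))
                if not segment:
                    break
                yield segment

    # Each segment corresponds to one chunk that the Salesforce Adapter uploads.
    for segment in read_in_segments("accounts.csv"):
        print(f"processing a segment of {len(segment)} records")
    ```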
  5. Drag a Salesforce Adapter inside the stage file action and configure it with the required bulk operation.
    1. On the Basic Info page, provide a name.
    2. On the Action page, select Perform Bulk Data Operations.
    3. On the Operations page, select Bulk v2.0, and then select the required operation (for this example, Create) in the Select an Operation field.
    4. Select the required business objects (for this example, Account) in the Select Business Objects field.
    5. On the Summary page, review your selections.
  6. In the mapper, map the stage file action elements to the Salesforce input payload.

    The mapper shows the source Record element mapped to the target Account element, with the source Name, Phone, and External Id elements mapped to the target Account Name, Account Phone, and Account_Ext_Id fields.

  7. Drag a Salesforce Adapter below the stage file action and configure it with the Final Job operation.
    1. On the Basic Info page, provide a name.
    2. On the Action page, select Perform Bulk Data Operations.
    3. On the Operations page, select Bulk v2.0, and then select Final Job in the Select an Operation field.

      Note:

      You cannot select the object on which to perform the operation because this Salesforce Adapter invocation completes the activity for the operation configured on the Salesforce endpoint inside the stage file action. The sketch after these substeps shows what this step amounts to at the REST level.
    4. Review your selections on the Summary page.
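    At the REST level, the Final Job operation corresponds to closing the ingest job that the earlier invoke created and reading its final status. A minimal sketch, reusing the hypothetical connection values from the first sketch:

    ```python
    import requests

    def finalize_job(instance, api, headers, job_id):
        """Close the ingest job, then read its final status."""
        # Mark the upload complete; no more segments will be added to this job.
        requests.patch(
            f"{instance}/services/data/{api}/jobs/ingest/{job_id}",
            headers=headers,
            json={"state": "UploadComplete"},
        )
        # Read the job details; in practice you would poll until the state
        # reaches JobComplete or Failed.
        return requests.get(
            f"{instance}/services/data/{api}/jobs/ingest/{job_id}",
            headers=headers,
        ).json()  # includes id, state, numberRecordsProcessed, numberRecordsFailed
    ```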
  8. Drag an FTP Adapter connection below the Salesforce Adapter and configure the FTP endpoint to write the Salesforce final batch response to a file for future use.
    1. On the Basic Info page, provide a name.
    2. On the Operations page, select Write File from the Select Operation list.
    3. Select Binary from the Select a Transfer Mode list.
    4. Specify the output directory and the file name pattern.
    5. On the Schema page, select XML schema (XSD) document (to describe the XML message) from the Which one of the following choices would be used to describe the structure of the file contents list.
    6. On the File Contents - Definition page, click Choose File and upload the schema source file in the Select a New File field (a sample schema follows these substeps).
    7. Review your selections on the Summary page.
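    The schema you upload must describe the structure of the response you map to the write operation. The following XSD is purely illustrative; the element names are hypothetical stand-ins for whichever final batch response fields you actually map:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               targetNamespace="http://example.com/bulkresponse"
               xmlns="http://example.com/bulkresponse"
               elementFormDefault="qualified">
      <!-- Hypothetical wrapper for the final batch response written to file. -->
      <xs:element name="BulkJobResult">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="jobId" type="xs:string"/>
            <xs:element name="state" type="xs:string"/>
            <xs:element name="numberRecordsProcessed" type="xs:integer"/>
            <xs:element name="numberRecordsFailed" type="xs:integer"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    ```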
  9. In the mapper, map the Salesforce final batch response to the FTP write request.
    The completed integration looks as follows.


    The integration shows the Schedule, Map, Invoke, Stage File, Map, Invoke, Map, Invoke, and end icons.

  10. Specify the tracking variable.
    1. Click the Business Identifiers icon.
    2. Drag the required variable to use for the tracking reference (for this example, startTime is selected).
  11. Activate the integration.
    After successful activation, you can submit the integration and monitor its execution at runtime in Oracle Integration. For every bulk operation, the Salesforce application creates a job ID. Once you receive the reference bulk job ID in the final batch response, you can also get the details of the records using the Get Successful Records, Get Failed Records, and Get Unprocessed Records operations available under Bulk v2.0. See Get Successful/Failed/Unprocessed Records Using the Bulk 2.0 Operations.
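    For reference, those three operations correspond to standard Bulk API 2.0 result endpoints keyed by the job ID. A minimal sketch, again reusing the hypothetical connection values from the earlier sketches:

    ```python
    import requests

    def get_job_results(instance, api, headers, job_id):
        """Return the successful, failed, and unprocessed records as CSV text."""
        base = f"{instance}/services/data/{api}/jobs/ingest/{job_id}"
        return {
            "successful": requests.get(f"{base}/successfulResults", headers=headers).text,
            "failed": requests.get(f"{base}/failedResults", headers=headers).text,
            "unprocessed": requests.get(f"{base}/unprocessedrecords", headers=headers).text,
        }
    ```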