Transformation Dynamic Logic

Data Transformation is a key piece of functionality offered by Oracle Insurance Gateway. The foundation for this functionality is provided by the Oracle Health Insurance Groovy-based dynamic logic subsystem. The goal of this section is to describe the specifics of dynamic logic based transformation in Oracle Insurance Gateway, including examples.

Input, Transform, Output

A transformation process essentially involves the above sequence of steps.

First, input needs to be obtained. Whenever a transformation is configured as part of an integration flow, a specific type of input - either a raw String or a Data File Set - is typically considered the primary source of input. That said, from a (transformation) dynamic logic it is possible to obtain additional input sources. For example, input can be obtained from the other elements that are bound to the dynamic logic context (the dynamic logic bindings). Yet another way is to use one of the available predefined dynamic logic methods (e.g. Search Library, External Call-Out). Regardless of where input originates, it subsequently needs to be read and interpreted through dynamic logic in order for the core transformation to be possible. The section "Assistance for Reading" lists the ways in which Oracle Insurance Gateway provides additional assistance for this read/interpretation phase.

Next, after the input(s) have been obtained and the appropriate way to read and interpret them has been selected and coded for, the actual transformation needs to happen. Transformation is the process of converting data from one format or structure into another. For instance, it might be required to transform a CSV input source to XML, or XML to JSON. From a structural perspective a transformation might consist of splitting, combining, or concatenating specific elements. Virtually any imaginable kind of data transformation can be coded for in dynamic logic.

Lastly, the transformation result needs to be written somewhere. The transformer allows conversion of the payload type. For example, if the source is a raw payload, the transformed output may be written to either a raw payload destination or a data file destination.

If the transformation logic creates data files and also returns a raw payload, the system sets the subsequent Exchange Step payload type to Raw and adds the returned value as the raw payload. In this case the data files that were created are only available as bind variables to subsequent dynamic logic.
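
To illustrate, the sketch below writes the transformation result to a data file destination rather than returning a raw string. This is only a minimal sketch: the outputWriters binding and the getDataFileOutputWriter method are described under "Assistance for Writing" below, and the data file name used here is purely illustrative.

// Minimal sketch: write the transformation result to a data file destination.
// The data file name "result.json" is an illustrative value.
def writer = outputWriters
             .getDataFileOutputWriter("result.json", "application/json")
             .jsonWriter()

writer.beginObject()
writer.name("status").value("transformed")
writer.endObject()

// Returning a raw String from the logic as well would set the next Exchange Step
// payload type to Raw; the data file written above would then only be available
// as a bind variable to subsequent dynamic logic.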

Assistance for Reading

Whenever the system invokes a transformer, it makes a fileReader binding available.

fileReader Binding

The fileReader binding enables a dynamic logic to read any data file that is in scope for the transformation. What is in scope is largely defined by what else is provided in the dynamic logic binding context. Typically, though, it is possible to get to (and read) a data file that has been read or produced in exchange steps that have executed up to the point where this transformation runs (see also "DataFiles"). Through the fileReader, the payload transformer can perform transformations of data that originates from a data file. It is important to note that a fileReader does not have a 1:1 relationship with a particular data file. Rather, a fileReader provides capabilities to obtain a particular view of a data file.

The views supported by the fileReader are:

  • json

  • xml

  • csv

  • text (lines)

The table below describes these views in a bit more detail.

Type | Acquired Through                          | Description
-----|-------------------------------------------|------------
json | fileReader.jsonReader(DataFile)           | com.google.gson.stream.JsonReader
xml  | fileReader.xmlReader(DataFile)            | Additional methods: read() - reads one XML element (by the given XML element name) at a time, parses it and returns the result in an object which supports GPath expressions; readAsString() - reads one XML element (by the given XML element name) at a time, parses it and returns the result as a String
csv  | fileReader.csvParser(DataFile, CSVFormat) | Allows detailed format specification through CSVFormat
text | fileReader.lineReader(DataFile)           | Effectively allows any type of data file to be read
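
As a small illustration of the text (lines) view, the sketch below reads a data file line by line. It assumes the object returned by lineReader exposes the standard java.io.BufferedReader readLine() method; the other views are demonstrated in "Examples Per Format" below.

// Minimal sketch of the text (lines) view; assumes lineReader returns a reader
// that supports the standard java.io.BufferedReader readLine() method.
trigger.dataFiles.each { df ->
    def reader = fileReader.lineReader(df)
    def line
    while ((line = reader.readLine()) != null) {
        // process one line of the data file
        exchangeStep.addLogEntry("read line: ${line}")
    }
}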

DataFiles

Although not a binding, the keyword "dataFiles" plays an important role from a reading perspective. Effectively it represents the key to the collection of data files that are in context at (or from) a particular exchange step. Information about steps (including payload) is contained in a context object that represents the exchange’s runtime. For instance, the initial exchange step is called trigger. In case the payload that initiated the exchange was a data file set, the data files that were uploaded can be accessed in dynamic logic through: trigger.dataFiles. Now suppose a step was configured with the name "extract" - in that case any data files that were the result of this step are subsequently accessible through: extract.dataFiles.

Example

   import org.apache.commons.csv.*

   // read the dataFiles produced by the "extract" step
   extract.dataFiles.each { df ->

       // CSV format capable of parsing the ";" delimited input
       char delimiter = ';'
       def csvParser = fileReader.csvParser(df, CSVFormat.DEFAULT
                                 .withFirstRecordAsHeader()
                                 .withIgnoreHeaderCase()
                                 .withTrim()
                                 .withDelimiter(delimiter))

       // processing csvParser information
   }

Assistance for Writing

If the destination for the result of the transformation is a raw payload type, the result is expected to be a raw string that is returned by the dynamic logic. For transformation to one or more data files additional assistance is provided by Oracle Insurance Gateway.

From a technical perspective, it is still possible to leverage specific readers (views) and writers in dynamic logic transformations where the final result is a raw string. For instance, it is perfectly valid to construct a CSVParser (and CSVPrinter - see below) to read or write raw strings. These are however atypical use cases, because the reader and writer facilities are effectively there for data files.
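
The sketch below gives a minimal impression of such an atypical use. The rawInput String is a hard-coded placeholder standing in for a raw payload, not an actual binding; the transformed result is returned as the raw string output of the logic.

import org.apache.commons.csv.*

// Placeholder standing in for a raw String payload (not an actual binding).
def rawInput = "name;email\nJohn Williams;John.Williams@someserver.com"

// Parse the raw String with a CSVParser ...
def parsed = CSVParser.parse(rawInput,
             CSVFormat.DEFAULT.withDelimiter(';' as char).withFirstRecordAsHeader())

// ... and print the transformed records to a CSVPrinter backed by a StringBuilder.
def out = new StringBuilder()
def printer = new CSVPrinter(out, CSVFormat.DEFAULT.withHeader("Name_Email"))
parsed.each { record ->
    printer.printRecord("${record.get('name')} (${record.get('email')})")
}
printer.flush()

// return the raw string result of the transformation
return out.toString()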

In addition to the fileReader, an outputWriters binding is made available to the dynamic logic context for data file transformers.

OutputWriters Binding

An outputWriter writes data to a destination data file from within a dynamic logic. You obtain an outputWriter by calling the outputWriters.getDataFileOutputWriter() method. The first request for an outputWriter creates a data file set. Obtaining multiple outputWriter instances from the outputWriters binding results in multiple data files; the data files created by a particular dynamic logic all reside in the same data file set. In contrast to the fileReader, an outputWriter has a 1:1 relationship with its destination data file.

For obtaining an outputWriter the following methods are available through the outputWriters binding:

  • outputWriters.getDataFileOutputWriter(): creates a new data file in the destination data file set with a generated, unique name and no mime type, and returns a writer to it.

  • outputWriters.getDataFileOutputWriter(dataFileName, mimeType): creates a new data file in the destination data file set with the given name and mime type, and returns a writer to it. Note that the name of the data file needs to be unique within the destination data file set and the mime type needs to be an existing mime type; an exception is thrown if these requirements are not met.

By default, an outputWriter writes any kind of textual data from a dynamic logic. For cases where the output format is known, the outputWriter offers format-specific methods that control how the data file is written. Together these options allow maximum flexibility in writing (see also "StreamingMarkupBuilder").

The table below describes these views in a bit more detail.

Type | Acquired Through                   | Description
-----|------------------------------------|------------
json | outputWriter.jsonWriter()          | com.google.gson.stream.JsonWriter
xml  | outputWriter.xmlWriter()           | javax.xml.stream.XMLStreamWriter
csv  | outputWriter.csvPrinter(CSVFormat) | Allows detailed format specification through CSVFormat
text | outputWriter.lineWriter()          | java.io.BufferedWriter

The following example shows how to obtain an outputWriter for writing JSON to a data file:

def anotherWriter = outputWriters
                    .getDataFileOutputWriter("DF_" + exchangeStep.getId(), "application/json")
                    .jsonWriter()

The system makes sure that all writers are properly closed upon exiting the dynamic logic.
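
The other writer views follow the same pattern. For instance, the sketch below uses the text view, which returns a java.io.BufferedWriter; the data file name is illustrative and "text/plain" is assumed to be an available mime type.

// Write arbitrary text lines to a new data file through the text view.
// The data file name is illustrative; "text/plain" is assumed to be an available mime type.
def lineWriter = outputWriters
                 .getDataFileOutputWriter("DF_NOTES_" + exchangeStep.getId(), "text/plain")
                 .lineWriter()

lineWriter.write("first line of output")
lineWriter.newLine()
lineWriter.write("second line of output")
// no explicit close is required; the system closes all writers when the logic exits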

StreamingMarkupBuilder

A special mention needs to be made of the StreamingMarkupBuilder. It leverages the fact that the underlying writer for the transformation destination data file is exposed through an outputWriter. The StreamingMarkupBuilder is a Groovy class for creating XML and can be used in transformation dynamic logic, as the extensive example below shows. XML output can equally be created through an XMLStreamWriter (acquired through outputWriters.getDataFileOutputWriter().xmlWriter()). However, the StreamingMarkupBuilder typically results in cleaner dynamic logic.

Example

Update provider information in Claims by reading a ";" delimited file.

import org.apache.commons.csv.*
import javax.ws.rs.core.Response
import javax.ws.rs.client.Entity
import groovy.json.JsonSlurper
import groovy.json.JsonOutput
import groovy.xml.StreamingMarkupBuilder

def shouldInvokeWorkflow = false

// Note: this assumes a single datafile in a set
trigger.dataFiles.each { df ->

    // define the CSV structure that is capable of parsing the csv input
    char delimiter = ';'
    def csvParser = fileReader.csvParser(df, CSVFormat.DEFAULT
                              .withFirstRecordAsHeader()
                              .withIgnoreHeaderCase()
                              .withTrim()
                              .withDelimiter(delimiter))

    // this closure defines the XML payload
    def individualProvidersXML = { xml ->
        individualProviders {
            csvParser.each { csvRecord ->
                def validationResult = validate(csvRecord)
                if (!validationResult) {
                    shouldInvokeWorkflow = true
                    return
                }

                // variables for reuse
                incomingProviderCode = csvRecord["Provider No"] //here the record value corresponding to column header "Provider No" is being read.
                startDate = formatDate(csvRecord["Commencement Date"])
                endDate = formatDate(csvRecord["Suspended date"])


                def jsonPayload = JsonOutput.toJson([
                  resource: [q: "code.eq(" + incomingProviderCode + ")"],
                      resourceRepresentation: [expand: "all"]
                ])

                // get existing subscription records

                Response providerResponse = webTarget("claimsDestination").path("generic")
                  .path("individualproviders/search")
                  .request()
                  .accept("application/json")
                  .buildPost(Entity.json(jsonPayload))
                  .invoke()

                def existingSubscriptions = []
                if (providerResponse.status == 200) {
                    existProvider = new JsonSlurper().parseText(providerResponse.readEntity(String.class))
                    existingSubscriptions = existProvider.items[0].subscriptions
                }

                // xml element for the individual provider
                individualProvider(
                    code                   : incomingProviderCode,
                    flexCodeDefinitionCode : "US_PROVIDER",
                    startDate              : startDate,
                    endDate                : endDate,
                    titleCode              : csvRecord["Title"],
                    name                   : csvRecord["Surname"],
                    firstName              : csvRecord["First Name"],
                    outputLanguageCode     : "en",
                    nameFormatCode         : "NAFMDFLTPROV",
                    phoneNumberBusiness    : csvRecord["Phone Number"]
                ) {
                    renderingAddressList {
                        renderingAddress ( startDate : startDate) {
                            serviceAddress (
                                street              : csvRecord["Address Line 1"]?.toUpperCase(),
                                city                : csvRecord["City/Suburb"],
                                postalCode          : csvRecord["Postcode"],
                                countryCode         : "US",
                                countryRegionCode   : "RI"
                            )
                        }
                    }
                    subscriptions {
                        existingSubscriptions.each { s ->
                            record(commencementDate : s["commencementDate"],
                                   suspendedDate    : s["suspendedDate"],
                                   expelledDate     : s["expelledDate"]
                                  )
                        }
                        record(commencementDate     : startDate,
                               suspendedDate        : endDate,
                               expelledDate         : formatDate(csvRecord["Expelled Date"])
                              )
                    }
                  }
            }
        }
    }

    def builder = new StreamingMarkupBuilder()
    builder.encoding = "UTF-8"
    outputWriters.getDataFileOutputWriter() << builder.bind(individualProvidersXML)

    if (shouldInvokeWorkflow) {
        parameters.put("shouldInvokeWorkflow", true)
    }
}

// formats the dateString to a regular date, or returns null/empty string
String formatDate(String dateString) {
   if( dateString?.trim() ) {
      return Date.parse("dd-MM-yyyy", dateString).format("yyyy-MM-dd")
   } else {
      return dateString
   }
}

// validate an individual csvRecord, as some elements are required
boolean validate(csvRecord) {
   def validationResult = true

   if( !csvRecord["Provider No"]?.trim() ) {
       exchangeStep.addLogEntry("Provider code is mandatory")
       validationResult = false
   }

   if( !csvRecord["Commencement Date"]?.trim() ) {
       exchangeStep.addLogEntry("Start Date is mandatory")
       validationResult = false
   }

   return validationResult
}

Examples Per Format

This section shows examples of how the reader and writer assistants can be used in transformation dynamic logic. The examples show a use case where a particular format needs a structural alteration - for example, concatenating the two fields name and email into one field name_email.

JSON

Transforming from:

{"items":[{ "name":"John Williams", "email":"John.Williams@someserver.com", "country":"US" }] }

Transforming to:

[{ "No":0, "Name_Email":"John Williams (John.Williams@someserver.com)", "Country":"US" }]

import com.google.gson.*

def jsonWriter = outputWriters.getDataFileOutputWriter().jsonWriter()

trigger.dataFiles.each { df ->

    def gson = new GsonBuilder().create()
    def jsonReader = fileReader.jsonReader(df)

    jsonReader.beginObject()
    jsonReader.nextName()
    jsonReader.beginArray()
    jsonWriter.beginArray()

    def index = 0
    while( jsonReader.hasNext() ) {
        def person = gson.fromJson(jsonReader, Map.class)

        jsonWriter.beginObject()
        jsonWriter.name("No").value(index++)
        jsonWriter.name("Name_Email").value("${person["name"]} (${person["email"]})")
        jsonWriter.name("Country").value(person["country"])
        jsonWriter.endObject()
    }

    jsonWriter.endArray()
}

JSON, Multiple OutputWriters

Using multiple output writers results in multiple data files.

Transforming from:

{"items":[{ "name":"John Williams", "email":"John.Williams@someserver.com", "country":"US" }] }

Transforming to (two data files):

[{ "Name_Email":"John Williams (John.Williams@someserver.com)" }]

[{ "No":0, "Name_Email":"John Williams (John.Williams@someserver.com)", "Country":"US" }]

import com.google.gson.*

def jsonWriter = outputWriters.getDataFileOutputWriter().jsonWriter()
def condensedWriter = outputWriters
                     .getDataFileOutputWriter("condensed", "application/json")
                     .jsonWriter()

trigger.dataFiles.each { df ->

    def gson = new GsonBuilder().create()
    def jsonReader = fileReader.jsonReader(df)

    jsonReader.beginObject()
    jsonReader.nextName()
    jsonReader.beginArray()

    jsonWriter.beginArray()
    condensedWriter.beginArray()

    def index = 0
    while( jsonReader.hasNext() ) {
        def person = gson.fromJson(jsonReader, Map.class)

        jsonWriter.beginObject()
        jsonWriter.name("No").value(index++)
        jsonWriter.name("Name_Email").value("${person["name"]} (${person["email"]})")
        jsonWriter.name("Country").value(person["country"])
        jsonWriter.endObject()

        condensedWriter.beginObject()
        condensedWriter.name("Name_Email").value("${person["name"]} (${person["email"]})")
        condensedWriter.endObject()
    }

    jsonWriter.endArray()
    condensedWriter.endArray()
}

XML

Transforming from:

<person country="US" email="John.Williams@someserver.com" name="John Williams"></person>

Transforming to:

<people><peopleInDataFile><person No="0" Name_Email="John Williams (John.Williams@someserver.com)" Country="US"></person></peopleInDataFile></people>

def writer = outputWriters.getDataFileOutputWriter().xmlWriter()

writer.writeStartElement("people")
trigger.dataFiles.each { df ->

    writer.writeStartElement("peopleInDataFile")

    def reader = fileReader.xmlReader(df)

    def index = 0
    def inPerson = false
    while( reader.hasNext() ) {
        int eventType = reader.next()

        switch( eventType ) {
            case reader.START_ELEMENT:
                if( reader.localName == "person" ) {
                    inPerson = true
                    writer.writeStartElement("person")
                    writer.writeAttribute("No", "${index++}")
                    writer.writeAttribute("Name_Email",
                       "${reader.getAttributeValue(null, "name")} (${reader.getAttributeValue(null, "email")})")
                    writer.writeAttribute("Country", reader.getAttributeValue(null, "country"))
                }
                break
            case reader.END_ELEMENT:
                if( inPerson ) {
                    inPerson = false
                    writer.writeEndElement()
                }
                break
        }
    }

    writer.writeEndElement()
}

writer.writeEndElement()

Delimiter Separated Files

Transforming from:

name,email,country
John Williams,John.Williams@someserver.com,US

Transforming to:

No,Name_Email,Country
0,John Williams (John.Williams@someserver.com),US

import org.apache.commons.csv.*

def index = 0
def csvPrinter = outputWriters.getDataFileOutputWriter().csvPrinter(
                 CSVFormat.DEFAULT.withHeader("No", "Name_Email", "Country"))

trigger.dataFiles.each { df ->

    def csvParser = fileReader.csvParser(df,
                    CSVFormat.DEFAULT.withFirstRecordAsHeader()
                                     .withIgnoreHeaderCase()
                                     .withTrim())

    csvParser.each { csvRecord ->
        csvPrinter.printRecord(index++,
           "${csvRecord["name"]} (${csvRecord["email"]})", csvRecord["country"])
    }
}