Oracle® Healthcare Master Person Index Analyzing and Cleansing Data User's Guide
This chapter provides an overview of the Data Profiler and the Data Cleanser, including the profiling and cleansing process. It also provides information about the required format for flat data files.
Data analysis and cleansing are essential first steps towards managing the quality of data in a master person index system. Performing these processes early in the project helps ensure the success of the project and can eliminate surprises down the road. Oracle Healthcare Master Person Index provides the tools you need to analyze, profile, cleanse, and standardize legacy data before loading it into a master person index database. The Data Profiler and Data Cleanser are generated from a master person index application, and they use the object definition and configuration of the master person index application to validate, transform, and standardize data.
The following sections provide information about the Data Profiler and Data Cleanser as well as the overall process of analyzing and cleansing data.
The Data Profiler analyzes the frequency of data values and patterns in your existing data based on predefined rules and rules you define. Rules are defined using a Rules Definition Language (RDL) in XML format. The RDL provides a flexible and extensible framework that makes defining rules straightforward. Use the Data Profiler to perform an initial analysis of existing data to determine which fields contain invalid or default values, invalid formats, incorrect dates, and so on. This analysis highlights values that need to be validated or modified during the cleansing process. For example, if you find several dates in an incorrect format, you can reformat them during cleansing. You might also find cases where the postal code extension is appended to the postal code, making the field value nine characters long. You can truncate those fields to five characters during the cleansing phase, leaving only the base postal code.
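The postal code example above can be sketched in code. The following is a minimal illustration of that kind of cleansing transformation (it is not RDL syntax, and the function name is hypothetical):

```python
# Illustrative sketch only (not the RDL rule syntax): truncate a
# nine-character postal code field to its five-character base code.
def truncate_postal_code(value: str) -> str:
    """Keep only the five-digit base code, dropping any +4 extension."""
    digits = value.replace("-", "")
    if len(digits) == 9 and digits.isdigit():
        return digits[:5]
    return value  # leave anything else for validation to catch

print(truncate_postal_code("098761075"))   # nine characters, truncated
print(truncate_postal_code("09876-1075"))  # hyphenated extension, truncated
print(truncate_postal_code("09876"))       # already valid, unchanged
```

In the actual tool, a rule of this kind would be expressed in the RDL rather than in Java or Python code.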
The final data analysis is performed after the data is cleansed using the Data Cleanser. Use this analysis to verify the blocking definitions for the blocking query used in the master person index match process. This analysis indicates whether the data blocks defined for the query are too broad or too narrow in scope, either of which would result in an unreliable set of records being returned for a match search. The analysis can also show how reliably each field indicates a match between two records, which in turn indicates how much relative weight should be given to each field in the match string.
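The idea behind verifying blocking definitions can be sketched as follows: group records by the candidate blocking key and inspect the bucket sizes. The records and field names here are hypothetical, not the product's internal representation:

```python
from collections import Counter

def block_sizes(records, key_fields):
    """Count how many records fall into each blocking bucket."""
    return Counter(tuple(r[f] for f in key_fields) for r in records)

# Hypothetical cleansed records; in practice these come from the flat file.
records = [
    {"last": "CAHILL", "zip": "09876"},
    {"last": "CAHILL", "zip": "09876"},
    {"last": "SMITH",  "zip": "09876"},
    {"last": "SMITH",  "zip": "12345"},
]

# Very large buckets suggest the block definition is too broad (too many
# candidates per search); mostly singleton buckets may be too narrow
# (true matches never end up in the same bucket).
print(block_sizes(records, ("last", "zip")))
```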
The Data Profiler performs three types of analysis and outputs a report for each field being profiled. A simple frequency analysis provides a count of each value in the specified fields. You can specify a sort order and minimum frequency to display in the reports. A constrained frequency analysis provides a count of each value in the specified fields based on the validation rules you define. You can validate against a range of values, field length, or pattern to define the rules for the validation. A pattern frequency analysis provides a count of the patterns found in the specified fields. You can specify a sort order and the number of frequencies to display on the report.
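The three analysis types can be illustrated with a short sketch. The sample values and the digit/letter masking convention below are assumptions for illustration, not the Data Profiler's output format:

```python
from collections import Counter

# Hypothetical sample values for a date-of-birth field.
values = ["08/18/1977", "1977-08-18", "08/18/1977", "UNKNOWN"]

# Simple frequency analysis: a count of each distinct value.
simple = Counter(values)

# Pattern frequency analysis: map digits to '9' and letters to 'X' so
# that values sharing the same layout collapse to a single pattern.
def pattern(value: str) -> str:
    return "".join("9" if c.isdigit() else "X" if c.isalpha() else c
                   for c in value)

patterns = Counter(pattern(v) for v in values)

print(simple.most_common())    # e.g. 08/18/1977 occurs twice
print(patterns.most_common())  # 99/99/9999 dominates; outliers stand out
```

A constrained frequency analysis would apply the same counting only to values that pass (or fail) a validation rule, such as a range, length, or pattern check.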
The Data Cleanser validates and modifies data based on predefined rules and rules you define. Rules are defined using the same Rules Definition Language (RDL) as the Data Profiler, which provides a flexible framework for defining cleansing rules. The Data Cleanser not only validates and transforms data based on the rules you define, but it also parses, normalizes, and phonetically encodes data using the standardization configuration in the mefa.xml file of the master person index project. You can define rules that validate or transform data during the cleansing process, and you can include conditional rules and operators. If you need to perform custom processing, you can define Java classes to extend the functionality of the rules.
The output of the Data Cleanser is two flat files: one file contains the records that passed all validations and were successfully transformed and standardized, and the other contains all records that failed validation or could not be transformed correctly. The bad data file also provides the reason each record failed, so you can easily determine how to fix the data. This is an iterative process, and you might run the Data Cleanser several times to make sure all data is processed correctly. The final run of the Data Cleanser should produce only a good data file, with no records failing the process.
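The good-file/bad-file split described above amounts to the following pattern. The validation checks and field names here are placeholders for illustration; the real rules are defined in the RDL:

```python
def validate(record):
    """Return None if the record is valid, else a failure reason."""
    if not record.get("SystemCode"):
        return "missing SystemCode"
    if not record.get("LocalID"):
        return "missing LocalID"
    return None

def cleanse(records):
    """Split records into a good list and a bad list with reasons."""
    good, bad = [], []
    for rec in records:
        reason = validate(rec)
        if reason is None:
            good.append(rec)
        else:
            bad.append({**rec, "reason": reason})
    return good, bad

good, bad = cleanse([
    {"SystemCode": "ORACLE", "LocalID": "00160419"},
    {"SystemCode": "", "LocalID": "00160420"},
])
print(len(good), len(bad), bad[0]["reason"])
```

The recorded reason is what lets you fix the source data and rerun the cleanser until the bad file is empty.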
The final result of the Data Cleanser is a file that contains records that conform to the master person index object definition and that no longer contain invalid or default values. The fields are formatted correctly and any fields that are defined for standardization in the master person index application are standardized in the file. This is the file to load into the master person index database using the Initial Bulk Match and Load tool (for more information, see Oracle Healthcare Master Person Index Loading the Initial Data Set).
The general process of cleansing data begins with analysis, followed by cleansing, followed by additional analysis. By the time you are ready to load your existing data into the master person index database, you want it to be of the best possible quality. To ensure this, you might need to repeat some of the profiling and cleansing steps to be sure all errant data is caught and fixed. The steps below provide a general overview of the analysis and cleansing process.
Make sure the master person index application is configured and generated.
Extract the data to a flat file.
Generate the Data Profiler and Data Cleanser from the master person index application.
Determine the fields to use for the initial analysis.
Define the rules for the initial analysis and perform the initial analysis. This results in a set of reports for you to review to determine the cleansing rules to define.
Define the rules for the cleansing process and cleanse the data. This is an iterative process, and you might find that you need to define additional rules once the initial cleansing process is complete.
Analyze the blocking and matching fields to determine how to best configure the match process for the master person index application.
After you configure the master person index, you can load the data into the master person index database using the Initial Bulk Match and Load tool (see Oracle Healthcare Master Person Index Loading the Initial Data Set for more information).
Both the Data Profiler and the Data Cleanser are designed to read data from a flat file. You can extract your data to a flat file using the extractor of your choice. The data must be in a flat file format the Data Profiler and Data Cleanser can read; if your data is in a different format, you can define a custom data reader to read the flat file into the Data Profiler and Data Cleanser. The analysis tools can read a flat file in the following format without any additional configuration:
GID is a global ID computed by the data analysis. This field can be empty in the flat file.
SystemCode is the processing code for the system from which the record originated.
LocalID is the object's local ID in the given system.
UpdateDate is the most recent update date for the record. This field can be empty.
UserID is the logon ID of the user who last updated the record. This field can be empty.
ObjectFields includes the objects and fields that are defined in object.xml. Be sure to include every field defined in the object structure in the order they are defined (include standardized, phonetic, and parsed fields). Child object types are delimited by a pound sign (#) and multiple child objects of one type are delimited by a dollar sign ($).
Below is an example of a valid input record based on the standard master person index Person template, which includes alias, address, and phone objects. Note the empty fields after the first and last names, where the Data Cleanser will insert the phonetic and standardized data, and the empty fields after the street address, where the Data Cleanser will insert the parsed street address components.
28|ORACLE|00160419|11/14/1999 08:41:10|GSMYTHE|P|ELIZABETH|||ANN|WARREN||||MRS|554-44-55555|08/18/1977|Y|F|M|W|13|BAP|ENG|STEVE|ANN|MARCH|GEORGE|CAHILL|SHEFFIELD|CT|USA|E|Y||C4411444|CA|07/21/2018||ENG|USA#$BETH||CAHILL$LIZ|ANN|CAHILL#$H|1519 BOARDWALK||||||Unit 5|SHEFFIELD|CT|09876|1075|CAPE BURR|USA$W|12500 EAST RIVER ST.||||||Suite 1310|CAPE BURR|CT|09877||CAPE BURR|USA#$CH|9895557848|$CB|9895551500|19
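A reader for this layout can be sketched as follows, under the assumption (from the format description above) that pipe characters delimit fields, a pound sign (#) delimits child object types, and a dollar sign ($) delimits instances within a type. The dictionary keys are illustrative, not a defined API:

```python
# Sketch of a reader for the flat-file layout described above.
def parse_record(line: str):
    header, *child_types = line.split("#")
    gid, system_code, local_id, update_date, user_id, *parent_fields = \
        header.split("|")
    return {
        "GID": gid,
        "SystemCode": system_code,
        "LocalID": local_id,
        "UpdateDate": update_date,
        "UserID": user_id,
        "parent_fields": parent_fields,
        # Each child type holds zero or more '$'-delimited instances,
        # and each instance is itself a list of '|'-delimited fields.
        "children": [[inst.split("|") for inst in ct.split("$") if inst]
                     for ct in child_types],
    }

# A shortened version of the sample record (parent fields, then aliases).
rec = parse_record("28|ORACLE|00160419|11/14/1999 08:41:10|GSMYTHE"
                   "|P|ELIZABETH#$BETH||CAHILL$LIZ|ANN|CAHILL")
print(rec["SystemCode"], rec["LocalID"], len(rec["children"][0]))
```

A production reader would map the positional parent and child fields onto the field names defined in object.xml.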