Oracle® Healthcare Master Person Index Installation Guide
Release 2.0.3

E25243-03

1 Introducing the Oracle Healthcare Master Person Index

Oracle Healthcare Master Person Index (OHMPI) 2.0 is the release of the single person view application acquired from Sun Microsystems. The product has been developed over many years and is an established person identity resolution solution with an extensive customer base in the healthcare segment. It provides a flexible framework for you to design and create custom single-view applications, or master person indexes, which cleanse, match, and cross-reference healthcare objects across an enterprise. A master person index that contains the most current and accurate data about each healthcare object ensures that unified, trusted data is available to all systems in the enterprise.

Before installing OHMPI, it is highly recommended that you review the information in Chapter 1, "Security Configuration Issues," in the Oracle Healthcare Master Person Index Configuration Guide.

What's Included With the OHMPI Installer

The OHMPI Installer includes the following software:

Features of OHMPI

Oracle Healthcare Master Person Index includes the following features:

  • Design-time Wizard

  • Data Quality and Load Tools

  • Master Index Data Manager (MIDM)

  • Integrating the Healthcare Enterprise

Design-time Wizard

OHMPI provides a wizard that takes you through all the steps of creating a master person index application. Using the wizard, you can define a custom master person index with a data structure, processing logic, and matching and standardization logic that are completely geared to the type of data you are indexing. OHMPI provides a graphical editor so you can further customize the business logic, including matching, standardization, queries, match weight thresholds, and so on.

Data Quality and Load Tools

By default, a master person index application uses the OHMPI Match Engine and the OHMPI Standardization Engine to standardize and match incoming data. Additional tools are generated directly from the master person index application and use the object structure defined for the master person index. These tools include the Data Profiler, Data Cleanser, and the Initial Bulk Match and Load (IBML) Tool.

  • OHMPI Standardization Engine

    The OHMPI Standardization Engine is built on a highly configurable and extensible framework to enable standardization of multiple types of data originating in various languages and countries. It performs parsing, normalization, and phonetic encoding of the data being sent to the master person index or being loaded in bulk to the master person index database. Parsing is the process of separating a field into individual components, such as separating a street address into a street name, house number, street type, and street direction. Normalization changes a field value to its common form, such as changing a nickname like Bob to its standard version, Robert. Phonetic encoding allows queries to account for spelling and input errors. The standardization process cleanses the data prior to matching, providing data to the match engine in a common form to help produce a more accurate match weight. (A brief sketch illustrating standardization and matching follows this list.)

  • OHMPI Match Engine

    The OHMPI Match Engine provides the basis for deduplication with its record matching capabilities. The OHMPI Match Engine compares the match fields in two records and calculates a match weight for each match field. It then totals the weights for all match fields to provide a composite match weight between records. This weight indicates how likely it is that two records represent the same entity. The OHMPI Match Engine is a high-performance engine, using proven algorithms and methodologies based on research at the U.S. Census Bureau. The engine is built on an extensible and configurable framework, allowing you to customize existing comparison functions and to create and plug in custom functions.

  • Data Profiler

    When you gather data from various sources, the quality of the data sets is unknown. You need a tool to analyze, or profile, legacy data in order to determine how it needs to be cleansed before it is loaded into a master person index database. The Data Profiler uses a subset of the Data Cleanser rules to analyze the frequency of data values and patterns in bulk data, and it performs a variety of frequency analyses. You can profile data prior to cleansing in order to determine how to define cleansing rules, and you can profile data after cleansing in order to fine-tune query blocking definitions, standardization rules, and matching rules. (A brief frequency-analysis sketch also follows this list.)

  • Data Cleanser

    Once you know the quality of the data to be loaded to a master person index database, you can clean up data anomalies and errors as well as standardize and validate the data. The Data Cleanser validates, standardizes, and transforms bulk data prior to loading the initial data set into a master person index database. The rules for the cleansing process are highly customizable and can easily be configured for specific data requirements. Any records that fail validation or are rejected can be fixed and put through the cleanser again. The output of the Data Cleanser is a file that can be used by the Data Profiler for analysis and by the Initial Bulk Match and Load Tool. Standardizing data using the Data Cleanser aids the matching process.

  • Initial Bulk Match and Load Tool (IBML Tool)

    Before your master data management (MDM) solution can begin to cleanse data in real time, you need to seed the master person index database with the data that currently exists in the systems that will share information with the master person index. The IBML Tool can match bulk data outside of the master person index environment and then load the matched data into the master person index database, greatly reducing the amount of time it would normally take to match and load bulk data. The tool is highly scalable and can handle very large volumes of data when used in a distributed computing environment. The IBML Tool loads a complete image of processed data, including potential duplicate flags, assumed matches, and transaction information.
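
To make the standardization and matching steps described above more concrete, the following minimal Java sketch shows the general technique: a nickname is normalized to its common form, names are phonetically encoded with a simplified Soundex, and per-field agreement or disagreement weights are totaled into a composite match weight. The nickname table, the Soundex variant, and the fixed weights are illustrative assumptions only; they do not represent the OHMPI Standardization Engine or OHMPI Match Engine APIs, which use configurable standardization rules, comparison functions, and probabilistic weights.

import java.util.HashMap;
import java.util.Map;

/**
 * Simplified illustration of standardization (normalization and phonetic
 * encoding) feeding a composite match-weight calculation. This is not the
 * OHMPI API; the nickname table, Soundex variant, and weights are placeholders.
 */
public class MatchSketch {

    // Normalization: map a nickname to its common form (for example, Bob -> Robert).
    private static final Map<String, String> NICKNAMES = new HashMap<>();
    static {
        NICKNAMES.put("BOB", "ROBERT");
        NICKNAMES.put("LIZ", "ELIZABETH");
    }

    static String normalize(String name) {
        String upper = name.trim().toUpperCase();
        return NICKNAMES.getOrDefault(upper, upper);
    }

    // Phonetic encoding: a basic Soundex so spelling variants compare as equal.
    static String soundex(String s) {
        String word = s.toUpperCase().replaceAll("[^A-Z]", "");
        if (word.isEmpty()) return "";
        String codes = "01230120022455012623010202"; // digit class for A..Z
        StringBuilder out = new StringBuilder().append(word.charAt(0));
        char prev = codes.charAt(word.charAt(0) - 'A');
        for (int i = 1; i < word.length() && out.length() < 4; i++) {
            char code = codes.charAt(word.charAt(i) - 'A');
            if (code != '0' && code != prev) out.append(code);
            prev = code;
        }
        while (out.length() < 4) out.append('0');
        return out.toString();
    }

    // Matching: compare each field, assign an agreement or disagreement
    // weight, and total the field weights into a composite match weight.
    static double compositeWeight(String[] a, String[] b) {
        // Field order: first name, last name, date of birth.
        double[] agree = {4.0, 3.0, 2.5};
        double[] disagree = {-2.0, -3.0, -1.5};
        double total = 0.0;
        for (int i = 0; i < a.length; i++) {
            boolean fieldMatches;
            if (i < 2) {
                // Name fields: compare standardized, phonetically encoded values.
                fieldMatches = soundex(normalize(a[i])).equals(soundex(normalize(b[i])));
            } else {
                // Date of birth: exact comparison.
                fieldMatches = a[i].equals(b[i]);
            }
            total += fieldMatches ? agree[i] : disagree[i];
        }
        return total;
    }

    public static void main(String[] args) {
        String[] recordA = {"Bob", "Smith", "19700101"};
        String[] recordB = {"Robert", "Smyth", "19700101"};
        // Standardization makes the two records comparable before matching.
        System.out.println("Composite match weight: " + compositeWeight(recordA, recordB));
    }
}

In this sketch, Bob Smith and Robert Smyth standardize to the same phonetic codes and share a date of birth, so the composite weight is high. In a real master person index, a weight above the configured match threshold would be treated as an assumed match, and intermediate weights would be flagged as potential duplicates.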
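
In the same spirit, the frequency analyses performed by the Data Profiler can be pictured as simple counting over a field. The sketch below illustrates the general technique rather than the Data Profiler itself: it tallies how often each raw value occurs and how often each character pattern occurs (digits masked as 9, letters as A), the kind of summary that helps you define cleansing rules and query blocking definitions. The sample telephone values are invented for the example.

import java.util.Map;
import java.util.TreeMap;

/**
 * Illustrative frequency and pattern profiling of a single field.
 * This is a sketch of the general technique, not the OHMPI Data Profiler.
 */
public class ProfileSketch {

    // Mask a value into a pattern: digits become 9, letters become A.
    static String pattern(String value) {
        return value.replaceAll("[0-9]", "9").replaceAll("[A-Za-z]", "A");
    }

    public static void main(String[] args) {
        String[] phoneField = {"555-1234", "5551234", "555-1234", "(555) 9876", "N/A"};

        Map<String, Integer> valueCounts = new TreeMap<>();
        Map<String, Integer> patternCounts = new TreeMap<>();
        for (String value : phoneField) {
            valueCounts.merge(value, 1, Integer::sum);
            patternCounts.merge(pattern(value), 1, Integer::sum);
        }

        // Value frequencies expose duplicates and placeholder values such as "N/A";
        // pattern frequencies expose inconsistent formatting that needs standardizing.
        System.out.println("Value frequencies:   " + valueCounts);
        System.out.println("Pattern frequencies: " + patternCounts);
    }
}

For the sample values, the pattern counts immediately show the mix of formats (999-9999, 9999999, and (999) 9999) and the placeholder N/A that cleansing rules would need to handle.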

Master Index Data Manager (MIDM)

The Master Index Data Manager is your primary tool to view and maintain the data stored in a master person index database and cross-referenced by a master person index application. The web-based interface allows you to access, monitor, and maintain the data stored by the master person index applications you create using OHMPI. The MIDM provides the ability to search for, add, update, deactivate, merge, unmerge, and compare object profiles. It also enables you to view and correct potential duplicate profiles, view transaction histories, view an audit log, and print reports.

Integrating the Healthcare Enterprise

Integrating the Healthcare Enterprise (IHE) has created a number of standards and profiles that help create, process, and manage electronic health records in secure patient cross-referencing applications. These profiles work in conjunction with native Health Level 7 (HL7) v2 and v3 messaging and transport standards, which define how the information is packaged and shared between systems. OHMPI incorporates a number of the IHE profiles (listed below) because they increase the efficiency of sharing trusted cross-references of healthcare person entities. Several of these profiles use HL7 v2 and v3 encoding standards to integrate healthcare networks.

With OHMPI Release 2.0.3, you can create an IHE project that contains a pre-configured master person index project. See "IHE-MPI Projects" in Oracle Healthcare Master Person Index Working With IHE Profiles.

  • Patient Identifier Cross Referencing (PIX) allows cross-referencing of patient identifiers across a network of healthcare sites. (An illustrative PIX query message appears after this list.)

  • Patient Demographics Query (PDQ) lets applications query for and retrieve patient demographic information.

  • Audit Record Repository (ARR) includes an audit server and an audit repository. It also supports ATNA (see below).

  • Audit Trail and Node Authentication (ATNA) uses certificates for node authentication and transmits and receives audit events through a secure repository to maintain patient confidentiality. It is built on top of Security Audit and Access Accountability Message XML Data Definitions for Healthcare Applications, the Syslog Protocol, Transmission of Syslog Messages over Transport Layer Security (TLS), and Transmission of Syslog Messages over User Datagram Protocol (UDP).

  • Consistent Time (CT) synchronizes time stamps and system clocks on computers functioning within a healthcare network.

  • Patient Identity Management (PIM), under Patient Administration Management (PAM), creates a patient record, updates the record, links the record to another patient record, and, if the records represent the same patient, merges them. Linked records can be unlinked if they do not represent the same patient.

  • Patient Encounter Management (PEM) is part of Patient Administration Management (PAM), which specifies two transactions: Patient Encounter Management and the previously implemented Patient Identity Feed. For this release, PEM supports the following:

    • Basic mandatory subset, which creates, updates, or closes inpatient and outpatient encounters. It also updates patient information and merges patient records.

    • Inpatient/outpatient encounter management option, which includes pre-admitting an inpatient and changing a patient's classification.

  • Patient Demographics Visit Query (PDVQ) uses user-defined search criteria and visit criteria to give multiple distributed applications the ability to query a patient information server for a list of patients.

  • Patient Identifier Cross-Reference and Patient Demographics Query for HL7v3 (PIX/PDQ v3) leverages HL7 version 3 to extend the capability of these profiles.
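
As an illustration of how these profiles exchange information over HL7 v2, the following sketch assembles the general shape of a PIX query (IHE transaction ITI-9, message type QBP^Q23), in which a system asks the patient identifier cross-reference manager for the identifiers that correspond to a known patient ID. This is a hedged example, not a message generated or required by OHMPI: the application and facility names, the patient identifier, and the assigning-authority OID are invented placeholders, and the exact segments and fields to populate are defined by the IHE technical framework and your OHMPI IHE project configuration.

/**
 * Illustrative shape of an HL7 v2 PIX query (IHE ITI-9, QBP^Q23).
 * All application names, identifiers, and the OID below are placeholders;
 * consult the IHE technical framework and your OHMPI configuration for
 * the fields actually required in your environment.
 */
public class PixQuerySketch {
    public static void main(String[] args) {
        String cr = "\r"; // HL7 v2 segments are separated by carriage returns
        String message =
            // Message header: sending/receiving applications, timestamp, message type
            "MSH|^~\\&|EMR_APP|CLINIC_A|PIX_MANAGER|HOSPITAL_B|20240101120000||"
                + "QBP^Q23^QBP_Q21|MSG0001|P|2.5" + cr
            // Query parameter definition: query name, query tag, and the patient
            // identifier (with its assigning authority) to cross-reference
            + "QPD|IHE PIX Query|QRY0001|12345^^^CLINIC_A&1.2.3.4.5&ISO" + cr
            // Response control parameter: immediate response requested
            + "RCP|I" + cr;
        System.out.print(message);
    }
}

The cross-reference manager typically answers such a query with an RSP^K23 response listing the identifiers it holds for that patient in other identifier domains.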