Oracle Java CAPS Master Data Management Suite Primer

About Master Data Management

In today's business environment, it is increasingly difficult to access current, accurate, and complete information about the people or entities whose records are stored across an organization. As organizations merge and grow, information about the same entity is dispersed across multiple disparate systems and databases, often in several versions of varying quality. The information becomes fragmented, duplicated, unreliable, and hard to locate. As soon as data about the same entities is stored in multiple departments, locations, and applications, the need for a single source of authoritative, reliable, and sustainable data becomes apparent.

Master Data Management (MDM) creates and maintains a source of enterprise data that identifies and stores the single best information about each entity across an organization in a secure environment. MDM is the framework of processes and technologies used to cleanse records of inconsistent data, analyze the state of the data, remove duplicate records, flag potential duplicates for review, and keep the data continuously clean. Implementing an MDM initiative produces a complete, consolidated view of the entities about which information is stored, such as customers, patients, vendors, and inventory. The single best view of the data that the MDM solution produces is referred to as reference data.
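
To make the single best view concrete, the following minimal Java sketch consolidates duplicate records from two source systems into one reference record. It is an illustration only, not Java CAPS code: the CustomerRecord type, the source-system names, and the quality scores are all invented for the example.

import java.util.Comparator;
import java.util.List;

// Hypothetical illustration only -- not a Java CAPS API. It shows the
// concept of consolidating duplicates into a single best view record
// (the reference data described above).
public class SingleBestView {

    record CustomerRecord(String sourceSystem, String name,
                          String phone, int qualityScore) {}

    // Survivorship, radically simplified: keep the values from the
    // highest-quality source record and relabel the result as hub data.
    static CustomerRecord consolidate(List<CustomerRecord> duplicates) {
        CustomerRecord best = duplicates.stream()
                .max(Comparator.comparingInt(CustomerRecord::qualityScore))
                .orElseThrow();
        return new CustomerRecord("MDM-HUB", best.name(),
                best.phone(), best.qualityScore());
    }

    public static void main(String[] args) {
        List<CustomerRecord> duplicates = List.of(
                new CustomerRecord("CRM",     "J. Smith",   "555-0100", 60),
                new CustomerRecord("BILLING", "John Smith", "555-0100", 85));
        // Prints the consolidated record, sourced from the BILLING entry.
        System.out.println(consolidate(duplicates));
    }
}

A production hub applies survivorship rules field by field rather than taking one record wholesale, but the outcome is the same: one authoritative record that every downstream system can share.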

Core features of an MDM solution include data profiling, data stewardship, standardization, matching, and deduplication. Together these features identify and rectify data anomalies from the outset and provide continuous cleansing as new data is added.
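
The standardization and matching steps can be pictured with another hypothetical sketch, again not the actual engines described later in this primer. Two raw values are normalized to a common form, and a deliberately naive similarity score is then compared against a threshold to decide whether the pair should be flagged as a potential duplicate.

import java.util.Locale;

// Hypothetical sketch only -- not the Master Index Match Engine. It shows
// the standardize-then-match flow: normalize both values, then score their
// similarity and compare the score against a duplicate threshold.
public class MatchSketch {

    // Standardization: trim, collapse internal whitespace, upper-case.
    static String standardize(String raw) {
        return raw.trim().replaceAll("\\s+", " ").toUpperCase(Locale.ROOT);
    }

    // Naive positional character overlap in [0, 1]; a real match engine
    // computes weighted, field-level probabilistic scores instead.
    static double similarity(String a, String b) {
        int matches = 0;
        for (int i = 0; i < Math.min(a.length(), b.length()); i++) {
            if (a.charAt(i) == b.charAt(i)) matches++;
        }
        return (double) matches / Math.max(a.length(), b.length());
    }

    public static void main(String[] args) {
        String a = standardize("  john   SMITH ");
        String b = standardize("John Smith");
        double score = similarity(a, b);
        // Scores above the threshold mark the pair as a potential duplicate.
        System.out.printf("score=%.2f potentialDuplicate=%b%n",
                score, score > 0.8);
    }
}

Because both raw strings standardize to the same form, the pair scores 1.00 and is flagged as a potential duplicate; without the standardization step, differences in case and spacing alone would have hidden the match.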