Oracle Java CAPS Master Data Management Suite Primer
About the Oracle Java CAPS Master Data Management Suite
Java CAPS MDM Suite Architecture
Master Data Management Components
Java CAPS Data Quality and Load Tools
Java CAPS MDM Integration and Infrastructure Components
Oracle Java CAPS Enterprise Service Bus
Oracle Java CAPS Business Process Manager
Oracle Java System Access Manager
Oracle Directory Server Enterprise Edition
Oracle Java System Portal Server
NetBeans Integrated Development Environment (IDE)
Java CAPS Master Data Management Process
About the Standardization and Matching Process
Java CAPS Master Index Overview
Java CAPS Master Index Features
Java CAPS Master Index Architecture
Master Index Design and Development Phase
Data Monitoring and Maintenance
Java CAPS Data Integrator Overview
Java CAPS Data Integrator Features
Java CAPS Data Integrator Architecture
Java CAPS Data Integrator Development Phase
Java CAPS Data Integrator Runtime Phase
Java CAPS Data Quality and Load Tools
Master Index Standardization Engine
Master Index Standardization Engine Configuration
Master Index Standardization Engine Features
Master Index Match Engine Matching Weight Formulation
Master Index Match Engine Features
Data Cleanser and Data Profiler
Data Cleanser and Data Profiler Features
Initial Bulk Match and Load Tool
Initial Bulk Match and Load Process Overview
In today's business environment, it is becoming increasingly difficult to access current, accurate, and complete information about the people or entities for which information is stored across an organization. As organizations merge and grow, information about the same entity is dispersed across multiple disparate systems and databases, and there might be several different versions of the information of varying quality. Information becomes fragmented, duplicated, unreliable, and hard to locate. A single source of authoritative, reliable, and sustainable data is needed. As soon as data about the same entities begins to be stored in multiple departments, locations, and applications, the need for this single source becomes apparent.
Master Data Management (MDM) creates and maintains a source of enterprise data that identifies and stores the single best information about each entity across an organization in a secure environment. MDM is the framework of processes and technologies used to cleanse records of inconsistent data, analyze the state of the data, remove duplicate records, flag potential duplicates for review, and maintain a system of continuous cleansing. Implementing an MDM initiative produces a complete and consolidated view of the entities about which information is stored, such as customers, patients, vendors, inventory, and so on. The MDM solution produces a single best view of the data, referred to as reference data.
Core features of an MDM solution include data profiling, stewardship, standardization, matching, and deduplication. This combination cleanses data from the very beginning, identifying and rectifying data anomalies from the start and providing a system of continuous cleansing as new data is added.
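The standardize-then-match-then-deduplicate cycle described above can be sketched in miniature. The following Python example is not Java CAPS code; the record fields, the list of stripped titles, and the 0.9 threshold are illustrative assumptions. It standardizes name values, then greedily clusters records whose standardized names are similar enough to be treated as duplicates:

```python
# Minimal sketch of MDM-style deduplication: standardize, match, cluster.
# This is a stand-in for illustration, not the Master Index match engine,
# which weights many fields and uses configurable matching algorithms.
from difflib import SequenceMatcher

# Hypothetical sample records with inconsistent formatting.
records = [
    {"id": 1, "name": "Dr. John  Smith", "city": "Springfield"},
    {"id": 2, "name": "john smith", "city": "Springfield"},
    {"id": 3, "name": "Jane Doe", "city": "Shelbyville"},
]

def standardize(name: str) -> str:
    """Normalize case, strip a few common titles, collapse whitespace."""
    tokens = name.upper().replace(".", "").split()
    titles = {"DR", "MR", "MRS", "MS"}
    return " ".join(t for t in tokens if t not in titles)

def match_score(a: dict, b: dict) -> float:
    """Similarity of the standardized names, from 0.0 to 1.0."""
    return SequenceMatcher(None, standardize(a["name"]),
                           standardize(b["name"])).ratio()

THRESHOLD = 0.9  # assumed cutoff; real engines tune this per deployment

def deduplicate(records):
    """Greedy clustering: each record joins the first cluster it matches."""
    clusters = []
    for rec in records:
        for cluster in clusters:
            if match_score(rec, cluster[0]) >= THRESHOLD:
                cluster.append(rec)  # treated as a duplicate
                break
        else:
            clusters.append([rec])  # no match: start a new cluster
    return clusters

clusters = deduplicate(records)
print(len(clusters))  # records 1 and 2 both standardize to "JOHN SMITH"
```

Records 1 and 2 collapse into one cluster because standardization removes the title, casing, and spacing differences before matching, leaving two surviving "single best view" candidates out of three input records.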