Oracle® Data Mining Concepts 10g Release 2 (10.2) B14339-01


This chapter describes unsupervised models. These models do not predict a target value, but focus on the intrinsic structure, relations, and interconnectedness of the data. Unsupervised models are sometimes called descriptive models. Oracle Data Mining supports the following unsupervised functions: clustering, association, and feature extraction.
Clustering is useful for exploring data. If there are many cases and no obvious groupings, clustering data mining algorithms can be used to find natural groupings.
Clustering analysis identifies clusters embedded in the data. A cluster is a collection of data objects that are similar in some sense to one another. A good clustering method produces high-quality clusters to ensure that the inter-cluster similarity is low and the intra-cluster similarity is high; in other words, members of a cluster are more like each other than they are like members of a different cluster.
Clustering can also serve as a useful data-preprocessing step to identify homogeneous groups on which to build supervised models. Clustering models are different from supervised models in that the outcome of the process is not guided by a known result, that is, there is no target attribute. Supervised models predict values for a target attribute, and an error rate between the target and predicted values can be calculated to guide model building. Clustering models, on the other hand, are built using optimization criteria that favor high intra-cluster and low inter-cluster similarity. The model can then be used to assign cluster identifiers to data points.
In ODM a cluster is characterized by its centroid, attribute histograms, and the cluster's place in the model's hierarchical tree. ODM performs hierarchical clustering using an enhanced version of the k-means algorithm and the orthogonal partitioning clustering algorithm, an Oracle proprietary algorithm. The clusters discovered by these algorithms are then used to create rules that capture the main characteristics of the data assigned to each cluster. The rules represent the bounding boxes that envelop the data in the clusters discovered by the clustering algorithm. The antecedent of each rule describes the clustering bounding box. The consequent encodes the cluster ID for the cluster described by the rule. For example, for a data set with two attributes, AGE and HEIGHT, the following rule represents most of the data assigned to cluster 10:
If AGE >= 25 and AGE <= 40 and HEIGHT >= 5.0ft and HEIGHT <= 5.5ft then CLUSTER = 10
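As a sketch of how such a rule is applied, the antecedent can be evaluated as a simple interval predicate over the attributes. The function below is purely illustrative and is not part of ODM:

```python
# Hypothetical sketch: evaluating the example cluster rule's antecedent.
# The attribute names and bounds come from the rule above.
def matches_cluster_10(age, height):
    """Return True if a case falls inside cluster 10's bounding box."""
    return 25 <= age <= 40 and 5.0 <= height <= 5.5

print(matches_cluster_10(30, 5.2))  # inside the bounding box
print(matches_cluster_10(50, 5.2))  # outside the AGE interval
```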
The clusters are also used to generate a Bayesian probability model which is used during scoring for assigning data points to clusters.
Oracle Data Mining supports the following algorithms for clustering:
For a brief comparison of the two clustering algorithms, see Table 4-1.
The k-Means algorithm is a distance-based clustering algorithm that partitions the data into a predetermined number of clusters (provided there are enough distinct cases). Distance-based algorithms rely on a distance metric (function) to measure the similarity between data points. The distance metric is either Euclidean, Cosine, or Fast Cosine distance. Data points are assigned to the nearest cluster according to the distance metric used.
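The assignment step described above can be sketched as follows. This is an illustrative simplification, not ODM's implementation; the Fast Cosine variant is omitted:

```python
import math

# Distance-based assignment: each point goes to the centroid that
# minimizes the chosen distance metric.
def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def cosine_distance(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return 1.0 - dot / norm

def assign(points, centroids, dist=euclidean):
    """Return the index of the nearest centroid for each point."""
    return [min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
            for p in points]

centroids = [(0.0, 0.0), (10.0, 10.0)]
print(assign([(1.0, 1.0), (9.0, 11.0)], centroids))  # [0, 1]
```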
ODM implements an enhanced version of the k-means algorithm with the following features:
The algorithm builds models in a hierarchical manner. The algorithm builds a model top-down using binary splits and refinement of all nodes at the end. In this sense, the algorithm is similar to the bisecting k-means algorithm. The centroids of the inner nodes in the hierarchy are updated to reflect changes as the tree evolves. The whole tree is returned.
The algorithm grows the tree one node at a time (unbalanced approach). Based on a user setting available in either of the programming interfaces, the node with the largest variance is split to increase the size of the tree until the desired number of clusters is reached.
The algorithm provides probabilistic scoring and assignment of data to clusters.
The algorithm returns, for each cluster, a centroid (cluster prototype), histograms (one for each attribute), and a rule describing the hyperbox that encloses the majority of the data assigned to the cluster. The centroid reports the mode for categorical attributes or the mean and variance for numerical attributes.
This approach to k-means avoids the need for building multiple k-means models and provides clustering results that are consistently superior to those of traditional k-means.
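The top-down, unbalanced growth described above can be sketched roughly as follows. This is an assumed simplification using one-dimensional data, a median split, and no refinement step; the real algorithm works on multiple attributes and updates centroids as the tree evolves:

```python
import statistics

# Grow a flat set of leaf clusters by repeatedly bisecting the leaf with
# the largest variance, mimicking the unbalanced tree growth described
# above. Assumes enough distinct cases for the requested cluster count.
def bisecting_split(data, n_clusters):
    leaves = [sorted(data)]
    while len(leaves) < n_clusters:
        # Pick the leaf with the largest variance to split next.
        widest = max(leaves,
                     key=lambda leaf: statistics.pvariance(leaf)
                     if len(leaf) > 1 else 0.0)
        leaves.remove(widest)
        mid = len(widest) // 2          # simple median split
        leaves.extend([widest[:mid], widest[mid:]])
    return leaves

clusters = bisecting_split([1, 2, 3, 50, 51, 52, 100, 101], 3)
print(sorted(len(c) for c in clusters))
```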
The ODM implementation of k-Means supports both categorical and numerical data.
For numerical attributes, data normalization is recommended.
For the k-Means algorithm, NULL values indicate sparse data. Missing values are not automatically handled. If the data is not sparse and the values are indeed missing at random, you should perform missing data imputation (that is, perform some kind of missing values treatment) and substitute a non-NULL value for each NULL value. One simple way to treat missing values is to use the mean for numerical attributes and the mode for categorical attributes. If you do not treat missing values, the algorithm will not handle the data correctly.
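The simple treatment suggested above (mean for numerical attributes, mode for categorical attributes) might be sketched as follows; the `impute` helper is hypothetical, not an ODM function:

```python
from statistics import mean, mode

# Replace missing (None) values with the column mean for numerical data
# or the column mode for categorical data.
def impute(column, numeric=True):
    present = [v for v in column if v is not None]
    fill = mean(present) if numeric else mode(present)
    return [fill if v is None else v for v in column]

print(impute([10, None, 20, 30], numeric=True))      # mean of 10, 20, 30
print(impute(["a", "b", None, "a"], numeric=False))  # mode is "a"
```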
The clusters discovered by enhanced k-Means are used to generate a Bayesian probability model that is then used during scoring (model apply) for assigning data points to clusters. The k-means algorithm can be interpreted as a mixture model in which the mixture components are spherical multivariate normal distributions with the same variance for all components.
The O-Cluster algorithm supports Orthogonal Partitioning Clustering; O-Cluster creates a hierarchical grid-based clustering model, that is, it creates axis-parallel (orthogonal) partitions in the input attribute space. The algorithm operates recursively. The resulting hierarchical structure represents an irregular grid that tessellates the attribute space into clusters. The resulting clusters define dense areas in the attribute space. The clusters are described by intervals along the attribute axes and the corresponding centroids and histograms. A parameter called sensitivity defines a baseline density level. Only areas with peak density above this baseline level can be identified as clusters.
The k-Means algorithm tessellates the space even when natural clusters may not exist. For example, if there is a region of uniform density, k-Means tessellates it into n clusters (where n is specified by the user). O-Cluster separates areas of high density by placing cutting planes through areas of low density. O-Cluster needs multimodal histograms (peaks and valleys). If an area has projections with uniform or monotonically changing density, O-Cluster does not partition it.
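The contrast above can be illustrated with a toy valley-finding routine. This is an assumed simplification of the density-based idea, not the O-Cluster algorithm:

```python
# Given a histogram of bin counts along one attribute, place a cutting
# plane at an interior valley: a bin whose count is a local minimum
# between two higher-density regions. A uniform or monotonic histogram
# yields no valley, so no partition is made.
def find_valley(counts):
    """Return the index of the deepest interior local minimum, or None."""
    valleys = [i for i in range(1, len(counts) - 1)
               if counts[i] < counts[i - 1] and counts[i] < counts[i + 1]]
    return min(valleys, key=lambda i: counts[i]) if valleys else None

print(find_valley([8, 9, 2, 7, 8]))  # bimodal: cut at the valley, index 2
print(find_valley([5, 5, 5, 5, 5]))  # uniform density: no cut (None)
```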
O-Cluster does not necessarily use all the input data when it builds a model. It reads the data in batches (the default batch size is 50,000). It reads another batch only if it believes, based on statistical tests, that clusters may still exist that it has not yet uncovered.
Because O-Cluster may stop the model build before it reads all of the data, it is highly recommended that the data be randomized.
The use of ODM's equi-width binning transformation with automated estimation of the required number of bins is highly recommended.
The clusters discovered by O-Cluster are used to generate a Bayesian probability model that is then used during scoring (model apply) for assigning data points to clusters. The generated probability model is a mixture model where the mixture components are represented by a product of independent normal distributions for numerical attributes and multinomial distributions for categorical attributes.
The presence of outliers can significantly impact either type of clustering model. Use a clipping transformation before you bin or normalize the table to avoid the problems caused by outliers.
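A clipping transformation of the kind recommended above can be sketched as follows; the `clip` helper and bounds are hypothetical:

```python
# Clipping (winsorizing): values outside the chosen bounds are replaced
# by the bounds, limiting the influence of outliers before binning or
# normalization.
def clip(values, lower, upper):
    return [min(max(v, lower), upper) for v in values]

incomes = [30_000, 45_000, 52_000, 2_000_000]  # one extreme outlier
print(clip(incomes, 20_000, 100_000))          # outlier pulled to 100000
```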
The main characteristics of the enhanced k-Means and O-Cluster algorithms are summarized in Table 4-1.
Table 4-1 Clustering Algorithms Compared

Feature                           Enhanced k-Means                   O-Cluster
Clustering methodology            Distance-based                     Grid-based
Number of cases                   Handles data sets of any size      More appropriate for data sets with
                                                                     more than 500 cases; handles large
                                                                     tables via active sampling
Number of attributes              More appropriate for data sets     More appropriate for data sets with
                                  with a low number of attributes    a high number of attributes
Number of clusters                User-specified                     Automatically determined
Hierarchical clustering           Yes                                Yes
Probabilistic cluster assignment  Yes                                Yes
Recommended data preparation      Normalization                      Equi-width binning after clipping
An Association model is often used for market basket analysis, which attempts to discover relationships or correlations in a set of items. Market basket analysis is widely used in data analysis for direct marketing, catalog design, and other business decision-making processes. A typical association rule of this kind asserts that, for example, "70% of the people who buy spaghetti, wine, and sauce also buy garlic bread."
Association models capture the co-occurrence of items or events in large volumes of customer transaction data. Because of progress in barcode technology, it is now possible for retail organizations to collect and store massive amounts of sales data. Association models were initially defined for such sales data, although they are applicable in many other domains as well. Finding association rules is valuable for cross-marketing and mail-order promotions, but there are other applications as well: catalog design, add-on sales, store layout, customer segmentation, web page personalization, and target marketing.
Traditionally, association models are used to discover business trends by analyzing customer transactions. However, they can also be used effectively to predict Web page accesses for personalization. For example, assume that after mining the Web access log, Company X discovered an association rule "A and B implies C," with 80% confidence, where A, B, and C are Web page accesses. If a user has visited pages A and B, there is an 80% chance that he/she will visit page C in the same session. Page C may or may not have a direct link from A or B. This information can be used to create a dynamic link to page C from pages A or B so that the user can "click through" to page C directly. This kind of information is particularly valuable for a Web server supporting an e-commerce site to link the different product pages dynamically, based on the customer interaction.
There are several properties of association models that can be calculated. ODM calculates the following two properties related to rules:
Support: Support of a rule is a measure of how frequently the items involved in it occur together. Using probability notation, support (A implies B) = P(A, B).
Confidence: Confidence of a rule is the conditional probability of B given A; confidence (A implies B) = P(B given A).
These statistical measures can be used to rank the rules and hence the predictions.
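The two measures can be computed directly from a set of transactions; the basket data below is invented for illustration:

```python
# Support: fraction of transactions containing all the rule's items.
def support(transactions, items):
    return sum(items <= t for t in transactions) / len(transactions)

# Confidence: support of the full rule divided by support of the antecedent.
def confidence(transactions, antecedent, consequent):
    return (support(transactions, antecedent | consequent)
            / support(transactions, antecedent))

transactions = [
    {"spaghetti", "wine", "sauce", "garlic bread"},
    {"spaghetti", "wine", "sauce"},
    {"spaghetti", "sauce", "garlic bread"},
    {"wine"},
]
a, b = {"spaghetti", "sauce"}, {"garlic bread"}
print(support(transactions, a | b))    # 2 of 4 baskets -> 0.5
print(confidence(transactions, a, b))  # 0.5 / 0.75 -> about 0.67
```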
ODM uses the Apriori algorithm for association models.
Association models are designed to use sparse data. Sparse data is data for which only a small fraction of the attributes are nonzero or non-null in any given row. Examples of sparse data include market basket and text mining data. For example, in a market basket problem, there might be 1,000 products in the company's catalog, and the average size of a basket (the collection of items that a customer purchases in a typical transaction) might be 20 products. In this example, a transaction/case/record has on average 20 out of 1,000 attributes that are not null. This implies that the fraction of nonzero attributes in the table (the density) is 20/1000, or 2%. This density is typical for market basket and text processing problems. Data that has a significantly higher density can require extremely large amounts of temporary space to build associations.
Association models treat NULL values as sparse data. The algorithm does not handle missing values. If the data is not sparse and the NULL values are indeed missing at random, you should perform missing data imputation (that is, treat the missing values) and substitute non-null values for the NULL values.
The presence of outliers, when external equal-width binning is used, makes most of the data concentrate in a few bins (a single bin in extreme cases). As a result, the ability of the model to detect differences in numerical attributes may be significantly lessened. For example, a numerical attribute such as income may have all the data belonging to a single bin except for one entry (the outlier) that belongs to a different bin. As a result, there will be no rules reflecting different levels of income; all rules containing income will reflect only the range of the single bin, which is essentially the income range for the whole population. In cases like this, use a clipping transformation to handle outliers.
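The effect described above is easy to reproduce with a small equal-width binning sketch; the `equal_width_bin` helper and income figures are invented:

```python
# With equal-width bins, a single extreme outlier stretches the bin width
# so that almost all values land in one bin. Assumes max > min.
def equal_width_bin(values, n_bins):
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

incomes = [30_000, 45_000, 52_000, 61_000, 2_000_000]
print(equal_width_bin(incomes, 5))  # [0, 0, 0, 0, 4]: outlier dominates
```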
The Apriori algorithm works by iteratively enumerating itemsets of increasing length, subject to the minimum support threshold. Because state-of-the-art algorithms for associations work by iterative enumeration, association rules algorithms do not handle the following cases efficiently:
Finding associations involving rare events
Finding associations in data sets that are dense and have a large number of attributes
Association mining discovers patterns whose frequency is above the minimum support threshold. Therefore, to find associations involving rare events, the algorithm must run with very low minimum support values. However, doing so can cause the number of enumerated itemsets to explode, especially in cases with a large number of items, which can increase the execution time significantly.
Therefore, association rule mining is not recommended for finding associations involving rare events in problem domains with a large number of items.
One option is to use classification models in such problem domains.
Associations are calculated using the Apriori algorithm. The association mining problem can be decomposed into two subproblems:
Find all combinations of items, called frequent itemsets, whose support is greater than the specified minimum support.
Use the frequent itemsets to generate the desired rules. ODM Association supports single consequent rules only (for example, "ABC implies D").
The number of frequent itemsets is controlled by the minimum support parameter. The number of rules generated is controlled by the number of frequent itemsets and by the confidence parameter. If the confidence parameter is set too high, there may be frequent itemsets in the association model but no rules.
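The two subproblems can be sketched end to end in a minimal Apriori implementation. This is an assumed simplification, not ODM's implementation; the transaction data is invented:

```python
# Subproblem 1: enumerate itemsets of increasing length, keeping only
# those whose support meets min_support.
def apriori(transactions, min_support):
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent, size = {}, 1
    current = [frozenset([i]) for i in items]
    while current:
        counted = {c: sum(c <= t for t in transactions) / n for c in current}
        kept = {c: s for c, s in counted.items() if s >= min_support}
        frequent.update(kept)
        size += 1
        # Candidates of the next length are unions of surviving itemsets.
        current = list({a | b for a in kept for b in kept
                        if len(a | b) == size})
    return frequent

# Subproblem 2: generate single-consequent rules that meet min_confidence.
def rules(frequent, min_confidence):
    out = []
    for itemset, s in frequent.items():
        if len(itemset) < 2:
            continue
        for consequent in itemset:
            antecedent = itemset - {consequent}
            conf = s / frequent[antecedent]
            if conf >= min_confidence:
                out.append((set(antecedent), consequent, conf))
    return out

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
freq = apriori(transactions, min_support=0.5)
print(rules(freq, min_confidence=0.6))
```

Note that raising `min_confidence` above 2/3 here would leave frequent itemsets but no rules, which is the situation described above.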
Feature Extraction creates a set of features based on the original data. A feature is a combination of attributes that is of special interest and captures important characteristics of the data. It becomes a new attribute. Typically, there are far fewer features than there are original attributes.
Some applications of feature extraction are latent semantic analysis, data compression, data decomposition and projection, and pattern recognition. Feature extraction can also be used to enhance the speed and effectiveness of supervised learning.
For example, feature extraction can be used to extract the themes of a document collection, where documents are represented by a set of key words and their frequencies. Each theme (feature) is represented by a combination of keywords. The documents in the collection can then be expressed in terms of the discovered themes.
ODM uses the Non-Negative Matrix Factorization (NMF) algorithm for feature extraction. NMF is described in the paper "Learning the Parts of Objects by Non-Negative Matrix Factorization" by D. D. Lee and H. S. Seung in Nature (401, pages 788-791, 1999). NMF is a feature extraction algorithm that decomposes multivariate data by creating a user-defined number of features, which results in a reduced representation of the original data. NMF decomposes a data matrix V into the product of two lower-rank matrices W and H so that V is approximately equal to W times H. NMF uses an iterative procedure to modify the initial values of W and H so that the product approaches V. The procedure terminates when the approximation error converges or when the specified number of iterations is reached. Each feature is a linear combination of the original attribute set; the coefficients of these linear combinations are non-negative. During model apply, an NMF model maps the original data into the new set of attributes (features) discovered by the model.
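The iterative procedure described above can be sketched with the standard Lee-Seung multiplicative update rules. This is an assumed illustration using plain Python lists, not ODM's implementation:

```python
import random

# Factor V (m x n) into non-negative W (m x k) and H (k x n) so that
# W @ H approaches V; the multiplicative updates keep all entries >= 0.
def nmf(V, k, iterations=500, eps=1e-9):
    m, n = len(V), len(V[0])
    rng = random.Random(0)
    W = [[rng.random() for _ in range(k)] for _ in range(m)]
    H = [[rng.random() for _ in range(n)] for _ in range(k)]

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iterations):
        # H <- H * (W^T V) / (W^T W H)
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        # W <- W * (V H^T) / (W H H^T)
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W, H = nmf(V, k=2)
approx = [[sum(W[i][t] * H[t][j] for t in range(2)) for j in range(2)]
          for i in range(3)]
err = sum((V[i][j] - approx[i][j]) ** 2 for i in range(3) for j in range(2))
print(round(err, 4))  # squared approximation error, close to zero
```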
Text mining involves extracting information from unstructured data. Typically, text data is high-dimensional and sparse. Unsupervised algorithms like Principal Components Analysis (PCA), Singular Value Decomposition (SVD), and NMF involve factoring the document-term matrix based on different constraints. One widely used approach for text mining is latent semantic analysis. NMF focuses on reducing dimensionality. By comparing the vectors for two adjoining segments of text in a high-dimensional semantic space, NMF provides a characterization of the degree of semantic relatedness between the segments. NMF is less complex than PCA and can be applied to sparse data. NMF-based latent semantic analysis is an attractive alternative to SVD approaches due to the additive non-negative nature of the solution and the reduced computational complexity and resource requirements.
The presence of outliers can significantly impact NMF models. Use a clipping transformation before you bin or normalize the table to avoid the problems caused by outliers for this algorithm.
NULL values are treated as missing, not as indicators of sparse data.
NMF may benefit from normalization.