Learn how to use the enhanced k-Means clustering algorithm that Oracle Data Mining supports.
Distance-based algorithms rely on a distance function to measure the similarity between cases. Cases are assigned to the nearest cluster according to the distance function used.
Oracle Data Mining implements an enhanced version of the k-Means algorithm with the following features:
Distance function: The algorithm supports Euclidean and Cosine distance functions. The default is Euclidean.
Hierarchical model build: The algorithm builds a model in a top-down hierarchical manner, using binary splits and refinement of all nodes at the end. In this sense, the algorithm is similar to the bisecting k-Means algorithm. The centroids of the inner nodes in the hierarchy are updated to reflect changes as the tree evolves. The whole tree is returned.
Tree growth: The algorithm uses a specified split criterion to grow the tree one node at a time until a specified maximum number of clusters is reached, or until the number of distinct cases is reached. The split criterion may be the variance or the cluster size. By default the split criterion is the variance.
Cluster properties: For each cluster, the algorithm returns the centroid, a histogram for each attribute, and a rule describing the hyperbox that encloses the majority of the data assigned to the cluster. The centroid reports the mode for categorical attributes and the mean and variance for numerical attributes.
This approach to k-Means avoids the need for building multiple k-Means models and provides clustering results that are consistently superior to the traditional k-Means.
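The two supported distance functions, and the nearest-cluster assignment they drive, can be sketched as follows. This is an illustrative sketch of the underlying math, not Oracle's implementation; the function names are hypothetical.

```python
# Illustrative sketch (not Oracle code): the Euclidean and Cosine distance
# functions the enhanced k-Means algorithm supports, and how a case is
# assigned to its nearest centroid.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine(a, b):
    # Cosine distance = 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def nearest_cluster(case, centroids, distance=euclidean):
    # A case belongs to the cluster whose centroid is closest to it.
    return min(range(len(centroids)), key=lambda i: distance(case, centroids[i]))

centroids = [(0.0, 0.0), (10.0, 10.0)]
print(nearest_cluster((1.0, 2.0), centroids))  # 0: closer to (0, 0)
```

Euclidean distance is the default; Cosine distance is useful when the direction of the attribute vector matters more than its magnitude.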
The centroid represents the most typical case in a cluster. For example, in a data set of customer ages and incomes, the centroid of each cluster would be a customer of average age and average income in that cluster. The centroid is a prototype. It does not necessarily describe any given case assigned to the cluster.
The attribute values for the centroid are the mean of the numerical attributes and the mode of the categorical attributes.
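The mean/mode rule for centroids can be sketched in a few lines. The sample records and attribute names here are made up for illustration.

```python
# Hedged sketch: a centroid reports the mean of each numerical attribute
# and the mode of each categorical attribute. The data is hypothetical.
from statistics import mean, mode

cluster = [
    {"age": 30, "income": 50000, "region": "east"},
    {"age": 40, "income": 70000, "region": "east"},
    {"age": 35, "income": 60000, "region": "west"},
]

centroid = {
    "age": mean(r["age"] for r in cluster),        # numerical -> mean
    "income": mean(r["income"] for r in cluster),  # numerical -> mean
    "region": mode(r["region"] for r in cluster),  # categorical -> mode
}
print(centroid)  # {'age': 35, 'income': 60000, 'region': 'east'}
```

Note that no actual customer in the cluster needs to match the centroid exactly; it is a prototype, as described above.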
The Oracle Data Mining enhanced k-Means algorithm supports several build-time settings. All the settings have default values. There is no reason to override the defaults unless you want to influence the behavior of the algorithm in some specific way.
You can configure k-Means by specifying the following settings:
Number of clusters
Growth factor for memory allocated to hold clusters
Distance function. The default distance function is Euclidean.
Split criterion. The default criterion is the variance.
Number of iterations for building the cluster tree.
The fraction of attribute values that must be non-null in order for an attribute to be included in the rule description for a cluster.
Number of histogram bins. The bin boundaries for each attribute are computed globally on the entire training data set. The binning method is equi-width. All attributes have the same number of bins with the exception of attributes with a single value that have only one bin.
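The equi-width binning described above can be sketched as follows. This is a minimal illustration of the binning method, not Oracle's implementation; the function name and sample values are hypothetical.

```python
# Illustrative sketch of equi-width binning: boundaries are computed
# globally over the training data, so an attribute's range is divided
# into bins of equal width. A single-valued attribute gets one bin.
def equiwidth_bins(values, n_bins):
    lo, hi = min(values), max(values)
    if lo == hi:                 # single distinct value: only one bin
        return [lo, hi]
    width = (hi - lo) / n_bins
    return [lo + i * width for i in range(n_bins + 1)]

ages = [18, 25, 31, 44, 52, 67, 70]
print(equiwidth_bins(ages, 4))  # [18.0, 31.0, 44.0, 57.0, 70.0]
```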
"Algorithm Settings: k-Means" in Oracle Database PL/SQL Packages and Types Reference
Normalization is typically required by the k-Means algorithm. Automatic Data Preparation performs outlier-sensitive normalization for k-Means. If you do not use ADP, you must normalize numeric attributes before creating or applying the model.
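One common linear normalization for numeric attributes is min-max scaling, which maps each attribute into the range [0, 1]. A minimal sketch, assuming min-max scaling as the normalization technique (see "Linear Normalization" in the PL/SQL reference for the supported methods):

```python
# Minimal sketch of min-max (linear) normalization, one way to normalize
# numeric attributes before building a k-Means model when ADP is not used.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if lo == hi:                       # constant attribute: map to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

incomes = [20000, 50000, 80000]
print(min_max_normalize(incomes))  # [0.0, 0.5, 1.0]
```

Without normalization, attributes with large ranges (such as income) would dominate the distance computation over attributes with small ranges (such as age).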
When there are missing values in columns with simple data types (not nested), k-Means interprets them as missing at random. The algorithm replaces missing categorical values with the mode and missing numerical values with the mean.
When there are missing values in nested columns, k-Means interprets them as sparse. The algorithm replaces sparse numerical data with zeros and sparse categorical data with zero vectors.
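The two missing-value treatments can be sketched as follows. The function names and sample data are hypothetical; this only illustrates the behavior described above.

```python
# Hedged sketch of the missing-value treatments described above:
# simple columns -> mean (numeric) or mode (categorical) imputation;
# nested sparse numeric data -> absent entries treated as zeros.
from statistics import mean, mode

def impute_simple(values, categorical=False):
    present = [v for v in values if v is not None]
    fill = mode(present) if categorical else mean(present)
    return [fill if v is None else v for v in values]

print(impute_simple([10, None, 20]))                           # [10, 15, 20]
print(impute_simple(["a", "b", None, "a"], categorical=True))  # ['a', 'b', 'a', 'a']

def densify_sparse(sparse, length):
    # Sparse nested numeric data: missing positions become zeros.
    return [sparse.get(i, 0.0) for i in range(length)]

print(densify_sparse({1: 3.5}, 4))  # [0.0, 3.5, 0.0, 0.0]
```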
"Linear Normalization" in Oracle Database PL/SQL Packages and Types Reference
"Preparing the Data" in Oracle Data Mining User’s Guide
"Transforming the Data" in Oracle Data Mining User’s Guide