17 Expectation Maximization
Learn how to use the Expectation Maximization clustering algorithm.
Related Topics
 Clustering Algorithms
 Expectation Maximization for Anomaly Detection
 DBMS_DATA_MINING - Model Settings
 DBMS_DATA_MINING - Algorithm Settings: Expectation Maximization
 Automatic Data Preparation
 Model Detail Views for Expectation Maximization
 OML4SQL Examples
 OML4R Expectation Maximization Example
 OML4R GitHub Examples
17.1 About Expectation Maximization
Expectation Maximization (EM) estimates mixture models for a variety of applications, enhancing clustering and anomaly detection.
Oracle Machine Learning uses EM to implement a distribution-based clustering algorithm (EM Clustering) and a distribution-based anomaly detection algorithm (EM Anomaly).
17.1.1 Expectation Step and Maximization Step
EM iteratively computes and maximizes likelihood to improve model accuracy, ensuring reliable clustering results.
Expectation maximization is an iterative method. It starts with an initial parameter guess. The parameter values are used to compute the likelihood of the current model. This is the Expectation step. The parameter values are then recomputed to maximize the likelihood. This is the Maximization step. The new parameter estimates are used to compute a new expectation and then they are optimized again to maximize the likelihood. This iterative process continues until model convergence.
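The two steps can be illustrated with a minimal sketch for a one-dimensional mixture of two Gaussians. This is a generic illustration of the EM technique, not OML4SQL code; the function name, initialization, and data are assumptions made for the example.

```python
import math

def em_1d(xs, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture (illustrative only)."""
    # Initial parameter guesses: mixture weights, means, and variances
    w = [0.5, 0.5]
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    for _ in range(iters):
        # Expectation step: compute each component's responsibility
        # (posterior probability) for every record under the current model
        resp = []
        for x in xs:
            dens = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # Maximization step: re-estimate parameters to maximize the
        # expected log likelihood given the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-6
    return w, mu, var

# Two well-separated groups of points; EM recovers means near 1.0 and 5.0
w, mu, var = em_1d([0.9, 1.0, 1.1, 4.9, 5.0, 5.1])
```

In a production implementation, the loop would stop on a convergence test (for example, a small change in log likelihood) rather than a fixed iteration count.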
17.1.2 Probability Density Estimation
You can compute reliable cluster assignment using probability density.
In density estimation, the goal is to construct a density function that captures how a given population is distributed. In probability density estimation, the density estimate is based on observed data that represents a sample of the population. Areas of high data density in the model correspond to the peaks of the underlying distribution.
Density-based clustering is conceptually different from distance-based clustering (for example, k-Means), where the emphasis is on minimizing intra-cluster distances and maximizing inter-cluster distances. Due to its probabilistic nature, density-based clustering can compute reliable probabilities for cluster assignment. It can also handle missing values automatically.
A distribution-based anomaly detection algorithm identifies an object as an outlier if its probability density is lower than the density of other data records in a data set. The EM Anomaly algorithm can capture the underlying data distribution and thus flag records that do not fit the learned data distribution well.
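The idea of flagging low-density records can be sketched as follows. The fitted mixture parameters, data values, and threshold rule here are all invented for illustration; they do not reflect the OML4SQL internals.

```python
import math

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_density(x, weights, mus, variances):
    """Density of x under a weighted Gaussian mixture."""
    return sum(w * gaussian_pdf(x, m, v)
               for w, m, v in zip(weights, mus, variances))

# A fitted two-component model (parameters made up for illustration)
weights, mus, variances = [0.5, 0.5], [1.0, 5.0], [0.04, 0.04]

data = [0.9, 1.1, 4.9, 5.1, 9.0]   # 9.0 fits neither component
dens = [mixture_density(x, weights, mus, variances) for x in data]

# Flag the lowest-density tail of the records as outliers
threshold = sorted(dens)[int(0.05 * len(dens))]
outliers = [x for x, d in zip(data, dens) if d <= threshold]
```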
17.2 Algorithm Enhancements
Expectation Maximization (EM) is enhanced to resolve some challenges in its standard form.
Although EM is well established as a distribution-based algorithm, it presents some challenges in its standard form. The Oracle Machine Learning for SQL implementation includes significant enhancements, such as scalable processing of large volumes of data and automatic parameter initialization. The strategies that OML4SQL uses to address the inherent limitations of EM Clustering and EM Anomaly are described in the sections that follow.
Note:
The EM abbreviation is used here to refer to the general EM technique for probability density estimation that is common to both EM Clustering and EM Anomaly.
Limitations of Standard Expectation Maximization:

Scalability: EM has linear scalability with the number of records and attributes. The number of iterations to convergence tends to increase with growing data size (both rows and columns). EM convergence can be slow for complex problems and can place a significant load on computational resources.

High dimensionality: EM has limited capacity for modeling high dimensional (wide) data. The presence of many attributes slows down model convergence, and the algorithm becomes less able to distinguish between meaningful attributes and noise. The algorithm is thus compromised in its ability to find correlations.

Number of components: EM typically requires the user to specify the number of components. In most cases, this is not information that the user can know in advance.

Parameter initialization: The choice of appropriate initial parameter values can have a significant effect on the quality of the model. Initialization strategies that have been used for EM have generally been computationally expensive.

From components to clusters: In an EM Clustering model, components are often treated as clusters. This approach can be misleading because cohesive clusters are often modeled by multiple components; in particular, clusters with complex shapes need to be modeled by multiple components. To accomplish this, the Oracle Machine Learning for SQL implementation of EM Clustering creates a component hierarchy based on the overlap of the distributions of the individual components, using agglomerative hierarchical clustering, and produces an assignment of the model components to high-level clusters.

Anomaly detection: In EM Anomaly detection, an anomaly probability is used to classify whether an object is normal or anomalous. The EM algorithm estimates the probability density of a data record, which is then mapped to an anomaly probability.
17.2.1 Scalability
Expectation Maximization (EM) uses database parallel processing to achieve excellent scalability, efficiently handling large data sets.
The OML4SQL implementation of Expectation Maximization uses database parallel processing to achieve excellent scalability. EM computations naturally lend themselves to row parallel processing, and the partial results are easily aggregated. The parallel implementation efficiently distributes the computationally intensive work across secondary processes and then combines the partial results to produce the final solution.
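The claim that partial results are easily aggregated can be made concrete with a sketch: each partition of rows contributes only small sums of Expectation-step statistics, and the Maximization step runs on their totals. The function names, data, and two-partition setup are assumptions for illustration; a real implementation would run the partitions on separate workers.

```python
import math

def partial_stats(chunk, w, mu, var):
    """E-step sufficient statistics (responsibility sums) for one partition."""
    n = [0.0, 0.0]   # per-component responsibility totals
    sx = [0.0, 0.0]  # per-component weighted sums of x
    for x in chunk:
        dens = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
        s = sum(dens)
        for k in range(2):
            r = dens[k] / s   # responsibility of component k for row x
            n[k] += r
            sx[k] += r * x
    return n, sx

# Two "parallel" partitions of the rows; only the small summaries
# (not the raw rows) need to be combined afterward
w, mu, var = [0.5, 0.5], [1.0, 5.0], [1.0, 1.0]
parts = [partial_stats(chunk, w, mu, var)
         for chunk in ([0.9, 1.0, 1.1], [4.9, 5.0, 5.1])]

# Aggregation: the partial sums simply add; the M-step uses the totals
n = [sum(p[0][k] for p in parts) for k in range(2)]
sx = [sum(p[1][k] for p in parts) for k in range(2)]
new_mu = [sx[k] / n[k] for k in range(2)]
```

Variance and weight updates aggregate the same way, from additional per-partition sums.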
17.2.2 High Dimensionality
Learn how Expectation Maximization processes high-dimensional data.
The Oracle Machine Learning for SQL implementation of Expectation Maximization (EM) can efficiently process high-dimensional data with thousands of attributes. This is achieved through a twofold process:

The data space of single-column (not nested) attributes is analyzed for pairwise correlations. Only attributes that are significantly correlated with other attributes are included in the EM mixture model. The algorithm can also be configured to restrict the dimensionality to the M most correlated attributes.

High-dimensional (nested) numerical data that measures events of similar type is projected into a set of low-dimensional features that are modeled by EM. Some examples of high-dimensional numerical data are: text, recommendations, gene expressions, and market basket data.
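The first step, pairwise correlation filtering, can be sketched as follows. The Pearson-correlation criterion, cutoff value, and column data are assumptions for illustration; the actual OML4SQL significance test is not specified here.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Columns 'a' and 'b' move together; 'noise' is unrelated to both
cols = {
    "a":     [1.0, 2.0, 3.0, 4.0, 5.0],
    "b":     [2.1, 3.9, 6.2, 8.0, 9.9],
    "noise": [5.0, 1.0, 4.0, 2.0, 3.0],
}

threshold = 0.9  # hypothetical significance cutoff
# Keep only attributes significantly correlated with at least one other
kept = sorted(
    name for name in cols
    if any(other != name
           and abs(pearson(cols[name], cols[other])) >= threshold
           for other in cols)
)
```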
17.2.3 Number of Components
EM automatically determines the optimal number of components, improving model accuracy and avoiding overfitting.
Typical implementations of Expectation Maximization (EM) require the user to specify the number of model components. This is problematic because users do not generally know the correct number of components. Choosing too many or too few components can lead to overfitting or underfitting, respectively.
When model search is enabled, the number of EM components is automatically determined. The algorithm uses a held-aside sample to determine the correct number of components, except in the case of very small data sets, when Bayesian Information Criterion (BIC) regularization is used.
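The BIC-based choice can be sketched on a toy data set: fit candidate models with different component counts, then prefer the one with the lowest BIC, which trades off likelihood against parameter count. The fitting shortcut (splitting the data into its two obvious halves) and all values below are assumptions for illustration only.

```python
import math

def gauss_ll(xs, mu, var):
    """Log likelihood of data under a single Gaussian."""
    return sum(-0.5 * math.log(2 * math.pi * var)
               - (x - mu) ** 2 / (2 * var) for x in xs)

def bic(loglik, n_params, n):
    # BIC penalizes likelihood by model size: lower is better
    return n_params * math.log(n) - 2.0 * loglik

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
n = len(data)

# Candidate 1: a single Gaussian (2 parameters: mean, variance)
mu = sum(data) / n
var = sum((x - mu) ** 2 for x in data) / n
ll1 = gauss_ll(data, mu, var)

# Candidate 2: two equal-weight Gaussians, one per obvious half
# (5 parameters: two means, two variances, one free weight)
def fit(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

(m1, v1), (m2, v2) = fit(data[:3]), fit(data[3:])
ll2 = sum(math.log(0.5 * math.exp(gauss_ll([x], m1, v1)) +
                   0.5 * math.exp(gauss_ll([x], m2, v2))) for x in data)

# The bimodal data is much better explained by two components
best = min((bic(ll1, 2, n), 1), (bic(ll2, 5, n), 2))[1]
```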
17.2.4 Parameter Initialization
Choosing appropriate initial parameter values can have a significant effect on the quality of the solution.
Expectation maximization (EM) is not guaranteed to converge to the global maximum of the likelihood function; it may instead converge to a local maximum. Therefore, different initial parameter values can lead to different model parameters and different model quality.
In the process of model search, the EM model is grown independently. As new components are added, their parameters are initialized to areas with poor distribution fit.
17.2.5 From Components to Clusters
Expectation Maximization produces an assignment of model components to high-level clusters.
Expectation Maximization (EM) model components are often treated as clusters. However, this approach can be misleading. Cohesive clusters are often modeled by multiple components. The shape of the probability density function used in EM effectively predetermines the shape of the identified clusters. For example, Gaussian density functions can identify single-peak symmetric clusters. Clusters of more complex shape need to be modeled by multiple components.
Ideally, high-density areas of arbitrary shape must be interpreted as single clusters. To accomplish this, the Oracle Machine Learning for SQL implementation of EM builds a component hierarchy that is based on the overlap of the individual components' distributions. OML4SQL EM uses agglomerative hierarchical clustering. Component distribution overlap is measured using the Bhattacharyya distance function. Choosing an appropriate cutoff level in the hierarchy automatically determines the number of high-level clusters.
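For one-dimensional Gaussian components, the Bhattacharyya distance has a closed form, shown in the sketch below. The components, cutoff value, and merge decision are assumptions for illustration; the actual hierarchy construction in OML4SQL involves more than a single pairwise test.

```python
import math

def bhattacharyya_1d(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussian components."""
    # First term grows with the separation of the means;
    # second term grows with the mismatch of the variances
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_var = 0.5 * math.log((var1 + var2)
                              / (2.0 * math.sqrt(var1 * var2)))
    return term_mean + term_var

# Heavily overlapping components are candidates for the same cluster;
# distant components are not
d_close = bhattacharyya_1d(1.0, 1.0, 1.2, 1.0)
d_far = bhattacharyya_1d(1.0, 1.0, 8.0, 1.0)

cutoff = 0.5  # hypothetical cutoff level in the hierarchy
same_cluster = d_close < cutoff
```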
The OML4SQL implementation of EM produces an assignment of the model components to high-level clusters. Statistics like means, variances, modes, histograms, and rules additionally describe the high-level clusters. The algorithm can be configured to produce clustering assignments either at the component level or at the cluster level.
17.2.6 Expectation Maximization for Anomaly Detection
EM identifies anomalies based on probability density, ensuring accurate anomaly detection for better data integrity.
An object is identified as an outlier in an EM Anomaly model if its anomaly probability is greater than 0.5. A label of 1 denotes normal, while a label of 0 denotes anomaly. The EM technique models the underlying data distribution of a data set, and the probability density of a data record is translated into an anomaly probability.
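One simple way to picture the mapping from density to a 0/1 label is to threshold the lowest-density fraction of the training data at the configured outlier rate, so that those records receive an anomaly probability above 0.5. This sketch is a loose analogy, not the OML4SQL translation; the data, rate, and quantile rule are all assumptions.

```python
# Densities of five records under a fitted model (made-up values);
# the last record fits the learned distribution poorly
densities = [4.8, 4.9, 5.0, 4.7, 0.001]
outlier_rate = 0.2  # analogous to the EMCS_OUTLIER_RATE setting

# Choose a density cutoff so that roughly outlier_rate of the records
# fall at or below it
cut = sorted(densities)[max(0, int(outlier_rate * len(densities)) - 1)]

# Records at or below the cutoff get an anomaly probability above 0.5,
# hence label 0 (anomaly); all others get label 1 (normal)
labels = [0 if d <= cut else 1 for d in densities]
```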
The following example shows a code snippet for anomaly detection using the Expectation Maximization algorithm. Specify the EMCS_OUTLIER_RATE setting to capture the desired rate of outliers in the training data set.
-- Set the outlier rate in the settings table (the default is 0.05)
BEGIN DBMS_DATA_MINING.DROP_MODEL('CUSTOMERS360MODEL_AD');
EXCEPTION WHEN OTHERS THEN NULL; END;
/
DECLARE
  v_setlst DBMS_DATA_MINING.SETTING_LIST;
BEGIN
  v_setlst('ALGO_NAME')         := 'ALGO_EXPECTATION_MAXIMIZATION';
  v_setlst('PREP_AUTO')         := 'ON';
  v_setlst('EMCS_OUTLIER_RATE') := '0.1';

  DBMS_DATA_MINING.CREATE_MODEL2(
    MODEL_NAME          => 'CUSTOMERS360MODEL_AD',
    MINING_FUNCTION     => 'CLASSIFICATION',
    DATA_QUERY          => 'SELECT * FROM CUSTOMERS360_V',
    CASE_ID_COLUMN_NAME => 'CUST_ID',
    SET_LIST            => v_setlst,
    TARGET_COLUMN_NAME  => NULL); -- a NULL target indicates anomaly detection
END;
/
To view the complete example, see https://github.com/oracle-samples/oracle-db-examples/blob/main/machine-learning/sql/23ai/oml4sql-anomaly-detection-em.sql.
17.3 Configuring the Algorithm
Configure Expectation Maximization (EM).
In Oracle Machine Learning for SQL, EM can effectively model very large data sets (both rows and columns) without requiring the user to supply initialization parameters or specify the number of model components. While the algorithm offers reasonable defaults, it also offers flexibility.
The following list describes some of the configurable aspects of EM:

Whether or not independent non-nested column attributes are included in the model. For EM Clustering, this is system-determined by default. For EM Anomaly, extreme values in each column attribute can indicate a potential outlier, even when the attribute itself has low dependency on other columns. Therefore, by default the algorithm disables attribute removal for EM Anomaly.

Whether to use Bernoulli or Gaussian distribution for numerical attributes. By default, the algorithm chooses the most appropriate distribution, and individual attributes may use different distributions. When the distribution is user-specified, it is used for all numerical attributes.

Whether the convergence criterion is based on a held-aside data set or on the Bayesian Information Criterion (BIC). The convergence criterion is system-determined by default.

The percentage improvement in the value of the log likelihood function that is required to add a new component to the model. The default percentage is 0.001.

For EM Clustering, whether to define clusters as individual components or groups of components. Clusters are associated to groups of components by default.

The maximum number of components in the model. If model search is enabled, the algorithm determines the number of components based on improvements in the likelihood function or based on regularization (BIC), up to the specified maximum.

For EM Clustering, whether the linkage function for the agglomerative clustering step uses the nearest distance within the branch (single linkage), the average distance within the branch (average linkage), or the maximum distance within the branch (complete linkage). By default, the algorithm uses single linkage.

For EM Anomaly, whether to specify the percentage of the data that is expected to be anomalous. If it is known in advance that the number of "suspicious" cases is a certain percentage of the data, then the outlier rate can be set to that percentage. The algorithm's default value is 0.05.
See Also:
DBMS_DATA_MINING - Algorithm Settings: Expectation Maximization for a listing and explanation of the available model settings.
Note:
The term hyperparameter is also used interchangeably with model setting.
17.4 Data Preparation for Expectation Maximization
Learn how to prepare data for Expectation Maximization (EM).
If you use Automatic Data Preparation (ADP), you do not need to specify additional data preparation for Expectation Maximization. ADP normalizes numerical attributes (in non-nested columns) when they are modeled with Gaussian distributions. ADP applies a top-N binning transformation to categorical attributes.
Missing value treatment is not needed because Oracle Machine Learning for SQL algorithms handle missing values automatically. The EM algorithm replaces missing values with the mean in single-column numerical attributes that are modeled with Gaussian distributions. In other single-column attributes (categorical attributes and numerical attributes modeled with Bernoulli distributions), NULLs are not replaced; they are treated as a distinct value with its own frequency count. In nested columns, missing values are treated as zeros.
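The three missing-value rules can be illustrated with a short sketch. The function names and the `"<NULL>"` marker are invented for the example; this shows the behavior described above, not the OML4SQL internals.

```python
def impute_gaussian(col):
    """Numerical attribute with Gaussian treatment: None becomes the mean."""
    present = [x for x in col if x is not None]
    mean = sum(present) / len(present)
    return [mean if x is None else x for x in col]

def impute_categorical(col):
    """Categorical (or Bernoulli-modeled) attribute: NULL stays a distinct
    value with its own frequency count, here represented by a marker."""
    return ["<NULL>" if x is None else x for x in col]

def impute_nested(col):
    """Nested numerical attribute: missing entries are treated as zeros."""
    return [0.0 if x is None else x for x in col]

nums = impute_gaussian([1.0, None, 3.0])
cats = impute_categorical(["a", None, "b"])
```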