27 Non-Negative Matrix Factorization
Learn how to use Non-Negative Matrix Factorization (NMF), an unsupervised algorithm, for feature extraction.
About NMF
Non-Negative Matrix Factorization is useful when there are many attributes and the attributes are ambiguous or have weak predictability. By combining attributes, NMF can produce meaningful patterns, topics, or themes. NMF is a feature extraction algorithm.
Tuning the NMF Algorithm
Learn about configuring parameters for Non-Negative Matrix Factorization (NMF).
Data Preparation for NMF
You can use Automatic Data Preparation (ADP) or supply your own transformations, such as binning or normalization, to prepare the data for Non-Negative Matrix Factorization (NMF).
Related Topics
See Also:
Paper "Learning the Parts of Objects by Non-Negative Matrix Factorization" by D. D. Lee and H. S. Seung in Nature (401, pages 788-791, 1999)
Parent topic: Algorithms
27.1 About NMF
Non-Negative Matrix Factorization is useful when there are many attributes and the attributes are ambiguous or have weak predictability. By combining attributes, NMF can produce meaningful patterns, topics, or themes. NMF is a feature extraction algorithm.
Each feature created by NMF is a linear combination of the original attribute set. Each feature has a set of coefficients, which are a measure of the weight of each attribute on the feature. There is a separate coefficient for each numerical attribute and for each distinct value of each categorical attribute. The coefficients are all nonnegative.
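As a small numeric illustration of this idea (the coefficient and attribute values below are hypothetical, not Oracle's internal representation), a record's value for one NMF feature is a nonnegative weighted sum of its attribute values:

```python
import numpy as np

# Hypothetical coefficients for one NMF feature: one nonnegative weight
# per numerical attribute.
coefficients = np.array([0.8, 0.0, 0.3])   # e.g. for age, income, tenure

# A record's (normalized) attribute values.
record = np.array([0.5, 0.9, 0.2])

# The feature value is a linear combination of the original attributes.
feature_value = float(coefficients @ record)   # 0.8*0.5 + 0.0*0.9 + 0.3*0.2
```

An attribute with coefficient 0 contributes nothing to the feature, and because all coefficients are nonnegative, each attribute can only add to a feature value, never subtract from it.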
Matrix Factorization
Non-Negative Matrix Factorization uses techniques from multivariate analysis and linear algebra. It decomposes the data as a matrix M into the product of two lower rank matrices W and H. The submatrix W contains the NMF basis; the submatrix H contains the associated coefficients (weights).
Scoring with NMF
Non-Negative Matrix Factorization (NMF) can be used as a preprocessing step for dimensionality reduction in classification, regression, clustering, and other machine learning tasks. Scoring an NMF model produces data projections in the new feature space. The magnitude of a projection indicates how strongly a record maps to a feature.
Text Analysis with NMF
NMF analyzes text effectively by introducing context through combining attributes, enhancing explanatory power.
Parent topic: Non-Negative Matrix Factorization
27.1.1 Matrix Factorization
Non-Negative Matrix Factorization uses techniques from multivariate analysis and linear algebra. It decomposes the data as a matrix M into the product of two lower rank matrices W and H. The submatrix W contains the NMF basis; the submatrix H contains the associated coefficients (weights).
The algorithm iteratively modifies the values of W and H so that their product approaches M. The technique preserves much of the structure of the original data and guarantees that both basis and weights are nonnegative. The algorithm terminates when the approximation error converges or a specified number of iterations is reached.
The NMF algorithm must be initialized with a seed to indicate the starting point for the iterations. Because of the high dimensionality of the processing space and the fact that there is no global minimization algorithm, the appropriate initialization can be critical in obtaining meaningful results. Oracle Machine Learning for SQL uses a random seed that initializes the values of W and H based on a uniform distribution. This approach works well in most cases.
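The iterative scheme described above can be sketched with the classic multiplicative updates from the Lee and Seung paper cited earlier. This is an illustrative NumPy sketch, not Oracle's implementation; the iteration count and tolerance values are arbitrary choices for the example:

```python
import numpy as np

def nmf(M, k, iterations=200, tol=1e-4, seed=1):
    """Decompose a nonnegative matrix M (m x n) into W (m x k) and H (k x n)
    so that W @ H approximates M, using multiplicative updates."""
    eps = 1e-9                              # guards against division by zero
    rng = np.random.default_rng(seed)
    m, n = M.shape
    # Random seed initialization: W and H drawn from a uniform distribution.
    W = rng.uniform(size=(m, k))
    H = rng.uniform(size=(k, n))
    prev_err = np.inf
    for _ in range(iterations):
        # Multiplicative updates keep every entry of W and H nonnegative.
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
        err = np.linalg.norm(M - W @ H)
        # Terminate when the approximation error converges.
        if prev_err - err < tol:
            break
        prev_err = err
    return W, H

M = np.abs(np.random.default_rng(0).normal(size=(6, 4)))
W, H = nmf(M, k=2)
```

Because each update multiplies by a nonnegative ratio, a nonnegative starting point stays nonnegative throughout, which is why the random uniform initialization matters: it fixes the starting point of the iterations, and different seeds can lead to different local solutions.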
Parent topic: About NMF
27.1.2 Scoring with NMF
Non-Negative Matrix Factorization (NMF) can be used as a preprocessing step for dimensionality reduction in classification, regression, clustering, and other machine learning tasks. Scoring an NMF model produces data projections in the new feature space. The magnitude of a projection indicates how strongly a record maps to a feature.
The SQL scoring functions for feature extraction support NMF models. When the functions are invoked with the analytical syntax, the functions build and apply a transient NMF model. The feature extraction functions are FEATURE_DETAILS, FEATURE_ID, FEATURE_SET, and FEATURE_VALUE.
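The mechanics of scoring can be sketched outside SQL as well. In this illustrative NumPy sketch (not Oracle's scoring code), the basis W is held fixed and a nonnegative projection h is found for a new record, which is the sense in which scoring "projects" data into the feature space:

```python
import numpy as np

def score(x, W, iterations=200):
    # Project a record x (length-n attribute vector) onto a fixed
    # nonnegative basis W (n x k): find a nonnegative h that makes
    # W @ h approximate x, via multiplicative updates.
    eps = 1e-9
    h = np.full(W.shape[1], 0.5)
    for _ in range(iterations):
        h *= (W.T @ x) / (W.T @ W @ h + eps)   # update keeps h nonnegative
    return h

rng = np.random.default_rng(0)
W = rng.uniform(size=(6, 3))      # basis learned when the model was built
x = rng.uniform(size=6)           # a new record to score
projection = score(x, W)

# The largest component is the feature the record maps to most strongly,
# analogous to the information FEATURE_ID reports.
best_feature = int(projection.argmax()) + 1
```

The magnitudes in `projection` play the role of the per-feature values that FEATURE_VALUE returns: the larger the component, the more strongly the record maps to that feature.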
Related Topics
Parent topic: About NMF
27.1.3 Text Analysis with NMF
NMF analyzes text effectively by introducing context through combining attributes, enhancing explanatory power.
NMF is especially well-suited for analyzing text. In a text document, the same word can occur in different places with different meanings. For example, "hike" can be applied to the outdoors or to interest rates. By combining attributes, NMF introduces context, which is essential for explanatory power:

"hike" + "mountain" -> "outdoor sports"

"hike" + "interest" -> "interest rates"
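This effect can be seen on a toy term-document matrix. The sketch below (illustrative only; the corpus, counts, and iteration scheme are made up for the example) factors term counts into two features, and the co-occurrence pattern tends to pull "mountain"/"trail" and "interest"/"rate" onto different features, with "hike" shared between contexts:

```python
import numpy as np

# Tiny term-document count matrix (rows = terms, columns = documents).
# Documents 1-2 are about hiking outdoors; documents 3-4 about rate hikes.
terms = ["hike", "mountain", "trail", "interest", "rate"]
M = np.array([[2, 1, 2, 1],    # hike (occurs in both contexts)
              [3, 2, 0, 0],    # mountain
              [1, 2, 0, 0],    # trail
              [0, 0, 3, 2],    # interest
              [0, 0, 2, 3]],   # rate
             dtype=float)

# Factor into 2 features with multiplicative updates (illustrative).
eps = 1e-9
rng = np.random.default_rng(1)
W = rng.uniform(size=(5, 2))   # term-to-feature basis
H = rng.uniform(size=(2, 4))   # feature-to-document weights
for _ in range(300):
    H *= (W.T @ M) / (W.T @ W @ H + eps)
    W *= (M @ H.T) / (W @ H @ H.T + eps)

# Inspect the top terms per feature; the two contexts typically separate.
for f in range(2):
    top = np.argsort(W[:, f])[::-1][:3]
    print([terms[i] for i in top])
```

Each feature groups the terms that co-occur, which is exactly the "context" the combined attributes provide.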
Related Topics
Parent topic: About NMF
27.2 Tuning the NMF Algorithm
Learn about configuring parameters for Non-Negative Matrix Factorization (NMF).
Oracle Machine Learning for SQL supports five configurable parameters for NMF. All of them have default values that are appropriate for most applications of the algorithm. The NMF settings are:

Number of features. By default, the number of features is determined by the algorithm.

Convergence tolerance. The default is .05.

Number of iterations. The default is 50.

Random seed. The default is 1.

Nonnegative scoring. You can specify whether negative numbers are allowed in scoring results. By default, they are allowed.
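For readers who know scikit-learn, the first four settings have rough analogues there. The sketch below is an analogy for illustration only; the parameter names belong to scikit-learn's `NMF`, not to Oracle's setting names, and the values simply mirror the defaults listed above:

```python
import numpy as np
from sklearn.decomposition import NMF

# Rough scikit-learn analogues of the NMF settings listed above.
model = NMF(
    n_components=3,    # number of features
    tol=0.05,          # convergence tolerance
    max_iter=50,       # number of iterations
    random_state=1,    # random seed for initialization
    init="random",     # random initialization of W and H
)
X = np.random.default_rng(1).uniform(size=(30, 8))
W = model.fit_transform(X)   # per-record projections into feature space
H = model.components_        # per-feature attribute coefficients
```

As with the Oracle defaults, these values are reasonable starting points; raising the iteration count or tightening the tolerance trades build time for a closer factorization.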
See Also:
DBMS_DATA_MINING — Algorithm Settings: Non-Negative Matrix Factorization for a listing and explanation of the available model settings.
Note:
The term hyperparameter is also used interchangeably with model setting.
Parent topic: Non-Negative Matrix Factorization
27.3 Data Preparation for NMF
You can use Automatic Data Preparation (ADP) or supply your own transformations, such as binning or normalization, to prepare the data for Non-Negative Matrix Factorization (NMF).
ADP normalizes numerical attributes for NMF.
When there are missing values in columns with simple data types (not nested), NMF interprets them as missing at random. The algorithm replaces missing categorical values with the mode and missing numerical values with the mean.
When there are missing values in nested columns, NMF interprets them as sparse. The algorithm replaces sparse numerical data with zeros and sparse categorical data with zero vectors.
If you choose to manage your own data preparation, keep in mind that outliers can significantly impact NMF. Use a clipping transformation before binning or normalizing. NMF typically benefits from normalization. However, outliers combined with min-max normalization cause poor matrix factorization. To improve the matrix factorization, you need to decrease the error tolerance, which in turn leads to longer build times.
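The clip-then-normalize advice can be sketched in NumPy. The percentile bounds below are an illustrative choice, not an Oracle default:

```python
import numpy as np

# An attribute with one extreme outlier.
x = np.concatenate([np.linspace(1.0, 10.0, 50), [1000.0]])

# Without clipping, min-max normalization squashes the normal values
# toward 0: the outlier alone defines the upper end of the range.
squashed = (x - x.min()) / (x.max() - x.min())

# Clipping transformation first: winsorize to the 5th/95th percentiles
# (an illustrative choice), then apply min-max normalization.
lo, hi = np.percentile(x, [5, 95])
clipped = np.clip(x, lo, hi)
normalized = (clipped - clipped.min()) / (clipped.max() - clipped.min())
```

After clipping, the bulk of the values spread across the full [0, 1] range instead of collapsing near 0, which is the behavior min-max normalization needs to feed a well-conditioned factorization.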
Related Topics
Parent topic: Non-Negative Matrix Factorization