17.6 Using the Unsupervised Anomaly Detection GraphWise Algorithm (Vertex Embeddings and Anomaly Scores)
UnsupervisedAnomalyDetectionGraphWise is an inductive vertex representation learning algorithm that can leverage vertex feature information. It can be applied to a wide variety of tasks, including the unsupervised learning of vertex embeddings for vertex classification.
UnsupervisedAnomalyDetectionGraphWise is based on Deep Anomaly Detection on Attributed Networks (Dominant) by Ding, Kaize, et al.
Model Structure
A UnsupervisedAnomalyDetectionGraphWise model consists of graph convolutional layers followed by an embedding layer, which defaults to a DGI layer.
The forward pass through a convolutional layer for a vertex proceeds as follows:
- A set of neighbors of the vertex is sampled.
- The previous layer representations of the neighbors are mean-aggregated, and the aggregated features are concatenated with the previous layer representation of the vertex.
- This concatenated vector is multiplied with weights, and a bias vector is added.
- The result is normalized so that the layer output has unit norm.
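The forward pass described above can be sketched in numpy. This is an illustrative sketch only, not the PGX implementation; the dimensions, weights, and sampled neighbors are hypothetical values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 sampled neighbors, previous-layer representation
# of size 4, layer output of size 2.
h_v = rng.normal(size=4)               # previous-layer representation of the vertex
h_neighbors = rng.normal(size=(3, 4))  # representations of the sampled neighbors

# Mean-aggregate the neighbors and concatenate with the vertex representation.
aggregated = h_neighbors.mean(axis=0)
concat = np.concatenate([h_v, aggregated])  # shape (8,)

# Multiply with weights and add a bias vector (both learned in the real model).
W = rng.normal(size=(8, 2))
b = rng.normal(size=2)
out = concat @ W + b

# Normalize so that the layer output has unit norm.
out = out / np.linalg.norm(out)
```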
The DGI layer consists of three parts that enable unsupervised learning using the embeddings produced by the convolution layers.
- Corruption function: Shuffles the node features while preserving the graph structure to produce negative embedding samples using the convolution layers.
- Readout function: Sigmoid activated mean of embeddings, used as summary of a graph.
- Discriminator: Measures the similarity of positive (unshuffled) embeddings with the summary as well as the similarity of negative samples with the summary from which the loss function is computed.
Since none of these components has mutable hyperparameters, the default DGI layer is always used and cannot be adjusted.
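The three DGI components can be sketched as follows. This is a hedged illustration of the mechanism, not PGX code: the embeddings are random stand-ins, the negative samples are simulated by a simple permutation (in the real model, corruption shuffles the node features and re-runs the convolution layers), and the discriminator's bilinear weight matrix would be learned.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Positive embeddings from the convolution layers (hypothetical values),
# and negative embeddings standing in for the corrupted samples.
pos = rng.normal(size=(5, 4))      # 5 vertices, embedding size 4
neg = pos[rng.permutation(5)]      # stand-in for corruption-derived embeddings

# Readout: sigmoid-activated mean of the embeddings, used as the graph summary.
summary = sigmoid(pos.mean(axis=0))

# Discriminator: scores each embedding's similarity to the summary
# (here a bilinear form; W is random for illustration).
W = rng.normal(size=(4, 4))
pos_scores = sigmoid(pos @ (W @ summary))
neg_scores = sigmoid(neg @ (W @ summary))

# Binary cross-entropy: positive samples should score near 1, negative near 0.
eps = 1e-9
loss = -(np.log(pos_scores + eps).mean() + np.log(1 - neg_scores + eps).mean())
```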
The second embedding layer available is the Dominant Layer, based on Deep Anomaly Detection on Attributed Networks (Dominant) by Ding, Kaize, et al.
Dominant is a model that detects anomalies based on both the vertex features and the neighborhood structure. It uses GCNs in an autoencoder setting to reconstruct the vertex features, and reconstructs the adjacency structure from the dot products of the embeddings.
The loss function is computed from the feature reconstruction loss and the structure reconstruction loss. The importance given to features or to the structure can be tuned with the alpha hyperparameter.
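A minimal sketch of how the two reconstruction errors combine under alpha, with all inputs as hypothetical random stand-ins (in the real model, the feature and structure reconstructions come from the trained GCN decoder, not random data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical inputs: original features X, adjacency A, vertex embeddings Z.
X = rng.normal(size=(5, 3))
A = (rng.random(size=(5, 5)) < 0.4).astype(float)
Z = rng.normal(size=(5, 2))

X_hat = rng.normal(size=(5, 3))  # stand-in feature reconstruction
A_hat = Z @ Z.T                  # structure reconstruction from embedding dot products

# Per-vertex reconstruction errors.
feat_err = np.linalg.norm(X - X_hat, axis=1)
struct_err = np.linalg.norm(A - A_hat, axis=1)

# alpha tunes the importance of features versus structure; the combined
# per-vertex error serves as an anomaly score, and the loss is its mean.
alpha = 0.5
anomaly_score = alpha * feat_err + (1 - alpha) * struct_err
loss = anomaly_score.mean()
```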
The following describes the usage of the main functionalities of the implementation of Dominant in PGX:
- Loading a Graph
- Building a Minimal Unsupervised Anomaly Detection GraphWise Model
- Advanced Hyperparameter Customization
- Building an Unsupervised Anomaly Detection GraphWise Model Using Partitioned Graphs
- Training an Unsupervised Anomaly Detection GraphWise Model
- Getting the Loss Value for an Unsupervised Anomaly Detection GraphWise Model
- Inferring Embeddings for an Unsupervised Anomaly Detection GraphWise Model
- Inferring Anomalies
- Storing an Unsupervised Anomaly Detection GraphWise Model
- Loading a Pre-Trained Unsupervised Anomaly Detection GraphWise Model
- Destroying an Unsupervised Anomaly Detection GraphWise Model
Parent topic: Using the Machine Learning Library (PgxML) for Graphs