17.6 Using the Unsupervised Anomaly Detection GraphWise Algorithm (Vertex Embeddings and Anomaly Scores)

UnsupervisedAnomalyDetectionGraphWise is an inductive vertex representation learning algorithm that can leverage vertex feature information. It can be applied to a wide variety of tasks, including learning vertex embeddings in an unsupervised manner for vertex classification.

UnsupervisedAnomalyDetectionGraphWise is based on Deep Anomaly Detection on Attributed Networks (Dominant) by Ding, Kaize, et al.

Model Structure

A UnsupervisedAnomalyDetectionGraphWise model consists of graph convolutional layers followed by an embedding layer, which defaults to a DGI layer.

The forward pass through a convolutional layer for a vertex proceeds as follows:

  1. A set of neighbors of the vertex is sampled.
  2. The previous layer representations of the neighbors are mean-aggregated, and the aggregated features are concatenated with the previous layer representation of the vertex.
  3. This concatenated vector is multiplied with weights, and a bias vector is added.
  4. The result is normalized such that the layer output has unit norm.

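The steps above can be sketched as follows. This is an illustrative numpy implementation for a single vertex, not the PGX internals; the function name, the weight shapes, and the dictionary representation of the previous layer are assumptions made for the example.

```python
import numpy as np

def conv_layer_forward(vertex, sampled_neighbors, h_prev, W, b):
    """Illustrative forward pass of one convolutional layer for one vertex.

    vertex            : id of the target vertex
    sampled_neighbors : ids of the sampled neighbors (step 1, assumed given)
    h_prev            : dict mapping vertex id -> previous-layer representation
    W, b              : layer weight matrix and bias vector
    """
    # Step 2: mean-aggregate the neighbors' previous-layer representations ...
    aggregated = np.mean([h_prev[n] for n in sampled_neighbors], axis=0)
    # ... and concatenate them with the vertex's own previous-layer representation
    concat = np.concatenate([h_prev[vertex], aggregated])
    # Step 3: multiply with the layer weights and add the bias vector
    out = W @ concat + b
    # Step 4: normalize so that the layer output has unit norm
    return out / np.linalg.norm(out)
```
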
The DGI Layer consists of three parts that enable unsupervised learning using the embeddings produced by the convolution layers:

  1. Corruption function: Shuffles the node features while preserving the graph structure; passing the corrupted features through the convolution layers produces negative embedding samples.
  2. Readout function: Sigmoid activated mean of embeddings, used as summary of a graph.
  3. Discriminator: Measures the similarity of positive (unshuffled) embeddings with the summary as well as the similarity of negative samples with the summary from which the loss function is computed.

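The three parts can be sketched as follows. This follows the standard DGI formulation (a bilinear discriminator and a binary cross-entropy loss); it is a conceptual numpy sketch, and the exact internals of the PGX layer may differ.

```python
import numpy as np

def corruption(X, rng):
    # Shuffle the rows (node features); the graph structure is left untouched
    return X[rng.permutation(X.shape[0])]

def readout(H):
    # Sigmoid-activated mean of the embeddings: a summary vector for the graph
    return 1.0 / (1.0 + np.exp(-H.mean(axis=0)))

def discriminator(h, s, W):
    # Bilinear similarity score between an embedding h and the summary s
    return 1.0 / (1.0 + np.exp(-(h @ W @ s)))

def dgi_loss(H_pos, H_neg, s, W):
    # Binary cross-entropy: positive (unshuffled) embeddings should score high
    # against the summary, negative (corrupted) embeddings should score low
    pos = np.array([discriminator(h, s, W) for h in H_pos])
    neg = np.array([discriminator(h, s, W) for h in H_neg])
    return -(np.log(pos).mean() + np.log(1.0 - neg).mean())
```
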
Since none of these parts contains tunable hyperparameters, the default DGI layer is always used and cannot be adjusted.

The second embedding layer available is the Dominant Layer, based on Deep Anomaly Detection on Attributed Networks (Dominant) by Ding, Kaize, et al.

Dominant is a model that detects anomalies based on both the vertex features and the neighborhood structure. It uses GCNs in an autoencoder setting to reconstruct the vertex features, and reconstructs the adjacency matrix from the dot products of the vertex embeddings.

The loss function is computed from the feature reconstruction loss and the structure reconstruction loss. The importance given to features or to the structure can be tuned with the alpha hyperparameter.

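As a conceptual sketch, the structure decoder and the combined loss can be written as follows in numpy. Which of the two terms alpha weights, and the exact norms used, are assumptions of this example and may differ from the PGX implementation; the per-vertex score also illustrates how the reconstruction errors double as anomaly scores.

```python
import numpy as np

def dominant_scores(X, X_hat, Z, A, alpha):
    """Per-vertex anomaly scores and training loss (illustrative sketch).

    X, X_hat : original and reconstructed feature matrices
    Z        : vertex embeddings from the GCN encoder
    A        : adjacency matrix
    alpha    : balance between feature and structure reconstruction (assumed
               here to weight the feature term)
    """
    # Structure decoder: reconstruct the adjacency matrix from the
    # sigmoid-activated dot products of the embeddings
    A_hat = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
    # Per-vertex reconstruction errors for features and structure
    feature_err = np.linalg.norm(X - X_hat, axis=1)
    structure_err = np.linalg.norm(A - A_hat, axis=1)
    # The per-vertex score serves as the anomaly score; the training loss is
    # its mean over all vertices
    scores = alpha * feature_err + (1.0 - alpha) * structure_err
    return scores, scores.mean()
```
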
The following describes the usage of the main functionality of the Dominant implementation in PGX: