17.6 Using the Unsupervised Anomaly Detection GraphWise Algorithm (Vertex Embeddings and Anomaly Scores)

UnsupervisedAnomalyDetectionGraphWise is an inductive vertex representation learning and anomaly detection algorithm that can leverage vertex and edge feature information. Although it can be applied to a wide variety of tasks, it is particularly suitable for unsupervised learning of vertex embeddings for anomaly detection. After training this model, it is possible to infer anomaly scores or labels for unseen vertices.

UnsupervisedAnomalyDetectionGraphWise is based on Deep Anomaly Detection on Attributed Networks (Dominant) by Ding, Kaize, et al.

Model Structure

A UnsupervisedAnomalyDetectionGraphWise model consists of graph convolutional layers followed by an embedding layer. There are two types of embedding layers: the DGI layer and the Dominant layer. Both perform inductive vertex representation learning, but they use different loss functions. The embedding layer defaults to the DGI layer.

The forward pass through a convolutional layer for a vertex proceeds as follows:

  1. A set of neighbors of the vertex is sampled.
  2. The previous layer representations of the neighbors are mean-aggregated, and the aggregated features are concatenated with the previous layer representation of the vertex.
  3. This concatenated vector is multiplied with weights, and a bias vector is added.
  4. The result is normalized so that the layer output has unit norm.
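The four steps above can be sketched for a single vertex in NumPy as follows. This is an illustrative sketch only, not the PGX implementation; the function name, shapes, and the assumption that neighbors have already been sampled are all assumptions of the example.

```python
import numpy as np

def conv_layer_forward(h_v, h_neighbors, W, b):
    """Illustrative forward pass of one convolutional layer for one vertex.

    h_v         : previous-layer representation of the vertex, shape (d,)
    h_neighbors : previous-layer representations of the sampled neighbors, shape (k, d)
    W           : weight matrix, shape (2*d, d_out)
    b           : bias vector, shape (d_out,)
    """
    # Steps 1-2: mean-aggregate the sampled neighbors and concatenate
    # the result with the vertex's own previous-layer representation.
    agg = h_neighbors.mean(axis=0)
    z = np.concatenate([h_v, agg])
    # Step 3: multiply with the weights and add the bias vector.
    out = z @ W + b
    # Step 4: normalize so that the layer output has unit norm.
    return out / np.linalg.norm(out)

rng = np.random.default_rng(0)
h_v = rng.normal(size=4)            # vertex representation (d = 4)
h_nbrs = rng.normal(size=(3, 4))    # 3 sampled neighbors
W = rng.normal(size=(8, 4))         # 2*d inputs after concatenation
b = rng.normal(size=4)
out = conv_layer_forward(h_v, h_nbrs, W, b)  # unit-norm output vector
```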

The DGI layer, which is based on Deep Graph Infomax (DGI) by Velickovic et al., consists of three parts that enable unsupervised learning using the embeddings produced by the convolutional layers.

  1. Corruption function: Shuffles the node features while preserving the graph structure to produce negative embedding samples using the convolution layers.
  2. Readout function: Sigmoid-activated mean of the embeddings, used as a summary of the graph.
  3. Discriminator: Measures the similarity of the positive (unshuffled) embeddings with the summary, as well as the similarity of the negative samples with the summary, from which the loss function is computed.

Since none of these parts has tunable hyperparameters, the default DGI layer is always used and cannot be adjusted.
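The three DGI components can be sketched in NumPy as below. This is a rough illustration of the idea, not the PGX internals: the bilinear discriminator weights M, the binary cross-entropy loss form, and the use of raw embeddings are all assumptions of the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def corruption(features, rng):
    # Shuffle the node features across vertices; the graph structure
    # itself is left untouched, producing negative samples.
    return features[rng.permutation(len(features))]

def readout(embeddings):
    # Sigmoid-activated mean of the embeddings: a summary vector of the graph.
    return sigmoid(embeddings.mean(axis=0))

def discriminator(embedding, summary, M):
    # Bilinear similarity score between one embedding and the graph summary.
    return sigmoid(embedding @ M @ summary)

def dgi_loss(pos_emb, neg_emb, summary, M, eps=1e-9):
    # Binary cross-entropy: positive (unshuffled) embeddings should score
    # high against the summary, corrupted negatives should score low.
    pos = np.array([discriminator(e, summary, M) for e in pos_emb])
    neg = np.array([discriminator(e, summary, M) for e in neg_emb])
    return -(np.log(pos + eps).mean() + np.log(1.0 - neg + eps).mean())

rng = np.random.default_rng(1)
pos_emb = rng.normal(size=(5, 3))      # embeddings of 5 vertices
neg_emb = corruption(pos_emb, rng)     # negative samples via shuffling
summary = readout(pos_emb)
M = np.eye(3)                          # illustrative discriminator weights
loss = dgi_loss(pos_emb, neg_emb, summary, M)
```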

The Dominant layer enables unsupervised learning using a deep autoencoder. It uses graph convolutional networks (GCNs) to reconstruct the vertex features in an autoencoder setting, together with the reconstructed structure, which is estimated from the dot products of the embeddings.

The loss function is computed from the feature reconstruction loss and the structure reconstruction loss. The importance given to features or to the structure can be tuned with the alpha hyperparameter.

The Dominant layer is based on Deep Anomaly Detection on Attributed Networks (Dominant) by Ding, Kaize, et al.

The following describes how to use the main functionality of the Dominant implementation in PGX. The example demonstrates a scenario in which fraudulent vertices are detected based on their features.