
Applied-Graph-Neural-Network

Extracting embeddings from a Graph Neural Network

Notebook
Cora embedding

Attention in Graph Neural Networks (GNNs)

Attention introduces weights that quantify how relevant the information from neighboring nodes is. In a GNN, the representation of a node is a combination of its own information and the information from its neighbors. Attention weights scale up information from relevant neighbors and scale down information from irrelevant ones.

To be more concrete, the representations of a node i and each of its neighbors j are first projected with a learnable weight matrix. Each projected pair is then scored with a learnable attention vector a and passed through a LeakyReLU activation; this is akin to computing a similarity between the node and its neighbor. The resulting scores are softmaxed over all of the node's neighbors to normalize them. The normalized weights multiply the neighbor representations, and the weighted representations are summed to form the node's new representation. The mechanism is summarized in the picture below.
[Figure: Graph Attention Network (GAT) attention mechanism]
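The computation above can be sketched in a few lines of PyTorch. The layer below is a minimal, single-head illustration; the class name, dimensions, and the dense adjacency-matrix input are assumptions for the example, not code from this repository:

```python
import torch
import torch.nn.functional as F

# Minimal single-head GAT-style attention layer (illustrative sketch only).
class GATLayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = torch.nn.Linear(in_dim, out_dim, bias=False)   # shared projection
        self.a = torch.nn.Parameter(torch.empty(2 * out_dim))   # attention vector
        torch.nn.init.normal_(self.a, std=0.1)

    def forward(self, x, adj):
        # x: [N, in_dim] node features, adj: [N, N] adjacency (with self-loops)
        h = self.W(x)                                            # project node features
        N = h.size(0)
        # score every (i, j) pair: a^T [h_i || h_j], followed by LeakyReLU
        pairs = torch.cat(
            [h.unsqueeze(1).expand(N, N, -1), h.unsqueeze(0).expand(N, N, -1)], dim=-1
        )
        scores = F.leaky_relu(pairs @ self.a)                    # [N, N] raw scores
        # keep only actual neighbors, then softmax over each node's neighborhood
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=1)                     # normalized attention
        # weighted sum of neighbor representations gives the new node representation
        return alpha @ h
```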

In multi-head attention, multiple attention matrices are calculated. The motivation is that each attention head can learn a different aspect of the neighborhood. At the end, the outputs of all heads are either averaged or concatenated. When the attention distribution is nearly uniform over all neighbors, applying attention adds little value.
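Assuming PyTorch Geometric is available, its GATConv layer exposes both behaviors through the heads and concat arguments; the node counts and feature sizes below are made up for illustration:

```python
import torch
from torch_geometric.nn import GATConv  # assumes PyTorch Geometric is installed

x = torch.randn(4, 16)                          # 4 nodes, 16 input features
edge_index = torch.tensor([[0, 1, 2, 3],        # source nodes
                           [1, 0, 3, 2]])       # target nodes

# 8 heads concatenated: output has 8 * 8 = 64 features per node
concat_layer = GATConv(16, 8, heads=8, concat=True)
print(concat_layer(x, edge_index).shape)        # torch.Size([4, 64])

# 8 heads averaged: output keeps 8 features per node
avg_layer = GATConv(16, 8, heads=8, concat=False)
print(avg_layer(x, edge_index).shape)           # torch.Size([4, 8])
```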

GNN use cases

Particle tracking

Dealing with the class imbalance problem

  • Weight the minority class higher in the loss function, e.g., weight of a class =
    number of records / (number of classes * number of records of that class); see the sketch below.

Adding such a term to the loss function helps the model make fewer mistakes on the minority class.
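A minimal sketch of this weighting scheme in PyTorch; the labels and class count are made-up example data:

```python
import torch

labels = torch.tensor([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])   # imbalanced: 8 vs 2 records
num_classes = 2
counts = torch.bincount(labels, minlength=num_classes).float()

# weight_c = number_of_records / (number_of_classes * number_of_records_of_class_c)
weights = len(labels) / (num_classes * counts)           # tensor([0.6250, 2.5000])

# Pass the per-class weights to the loss so minority-class errors cost more
criterion = torch.nn.CrossEntropyLoss(weight=weights)
```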
