Graph Neural Network

Representative methods

Tutorial slides from AAAI 2019

  • Graph Convolutional Network (GCN) [6]
  • Graph Attention Network [7]
  • GraphSAGE (SAmple and aggreGatE) [8]
  • Transformer [9]: the Transformer can be viewed as a type of GNN operating on a fully connected graph.
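The core of GCN [6] is a single propagation rule, H' = σ(D̂^(-1/2) (A+I) D̂^(-1/2) H W). A minimal NumPy sketch, where the weight matrix `W` stands in for a learned layer and the toy path graph is purely illustrative:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step [6]: ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) weight matrix (random here, learned in practice).
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # degrees incl. self-loops
    D_inv_sqrt = np.diag(d ** -0.5)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# Toy 3-node path graph, 2-d input features, 4-d output
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 4)
```

GAT [7] and GraphSAGE [8] change only the aggregation step: attention-weighted neighbors instead of the fixed normalized adjacency, or sampled-neighbor aggregation for inductive settings.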

Avoiding oversmoothing to go deeper

  • Initial residual and identity mapping (GCNII) [12]

  • GCN meets personalized PageRank (APPNP) [13]
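Both remedies keep a fraction of the initial signal at every propagation step. A minimal NumPy sketch of APPNP-style propagation [13], where `alpha` is the teleport probability back to the initial features (the graph and features are illustrative):

```python
import numpy as np

def appnp_propagate(A, H, alpha=0.1, K=10):
    """APPNP-style propagation [13]: iterate a personalized-PageRank
    smoothing Z <- (1-alpha) * A_norm @ Z + alpha * H. The teleport term
    alpha * H retains the initial signal, counteracting oversmoothing
    even for large K."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))   # symmetric normalization
    Z = H.copy()
    for _ in range(K):
        Z = (1 - alpha) * (A_norm @ Z) + alpha * H
    return Z

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
Z = appnp_propagate(A, H)
print(Z.shape)  # (3, 2)
```

GCNII [12] applies the same idea inside each layer (initial residual) and additionally mixes an identity mapping into the weight matrix.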

Graph similarity/matching

  • A survey on graph similarity learning [4]

Graph transformation

  • Graph pooling/unpooling (Graph U-Nets) [5]
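The gPool operation in Graph U-Nets [5] scores nodes by projecting their features onto a learnable vector, keeps the top-k nodes, and gates their features by the sigmoid of the scores. A simplified NumPy sketch (the projection vector `p` and the toy graph are illustrative stand-ins for learned quantities):

```python
import numpy as np

def topk_pool(A, H, p, k):
    """Simplified gPool from Graph U-Nets [5]: project features onto p,
    keep the k highest-scoring nodes, gate their features by the sigmoid
    of the scores, and return the induced subgraph."""
    scores = H @ p / np.linalg.norm(p)          # projection scores y
    idx = np.sort(np.argsort(scores)[-k:])      # top-k nodes, orig. order
    gate = 1.0 / (1.0 + np.exp(-scores[idx]))   # sigmoid gating
    H_pool = H[idx] * gate[:, None]             # gated feature rows
    A_pool = A[np.ix_(idx, idx)]                # induced adjacency
    return A_pool, H_pool, idx

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.randn(4, 3)
p = np.random.randn(3)
A_pool, H_pool, idx = topk_pool(A, H, p, k=2)
print(A_pool.shape, H_pool.shape)  # (2, 2) (2, 3)
```

The unpooling (gUnpool) step is the inverse: the pooled features are scattered back to their recorded indices `idx` in a zero matrix of the original size.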

Dynamic graphs

  • Pointer Graph Network [11]

Applications

  • GNN for zero-shot learning [1][2]: treat each category as a graph node

  • GNN for multi-view learning [3]: treat each view as a graph node

  • GNN for clustering [10]

Reference:

  1. Wang, Xiaolong, Yufei Ye, and Abhinav Gupta. “Zero-shot recognition via semantic embeddings and knowledge graphs.” CVPR, 2018.

  2. Lee, Chung-Wei, et al. “Multi-label zero-shot learning with structured knowledge graphs.” CVPR, 2018.

  3. Wang, Dongang, et al. “Dividing and aggregating network for multi-view action recognition.” ECCV, 2018.

  4. Ma, Guixiang, et al. “Deep Graph Similarity Learning: A Survey.” arXiv preprint arXiv:1912.11615 (2019).

  5. Gao, Hongyang, and Shuiwang Ji. “Graph U-Nets.” arXiv preprint arXiv:1905.05178 (2019).

  6. Kipf, Thomas N., and Max Welling. “Semi-supervised classification with graph convolutional networks.” arXiv preprint arXiv:1609.02907 (2016).

  7. Veličković, Petar, et al. “Graph attention networks.” arXiv preprint arXiv:1710.10903 (2017).

  8. Hamilton, Will, Zhitao Ying, and Jure Leskovec. “Inductive representation learning on large graphs.” NeurIPS, 2017.

  9. Vaswani, Ashish, et al. “Attention is all you need.” NeurIPS, 2017.

  10. Bo, Deyu, et al. “Structural Deep Clustering Network.” Proceedings of The Web Conference (WWW), 2020.

  11. Veličković, Petar, et al. “Pointer Graph Networks.” arXiv preprint arXiv:2006.06380 (2020).

  12. Chen, Ming, et al. “Simple and deep graph convolutional networks.” arXiv preprint arXiv:2007.02133 (2020).

  13. Klicpera, Johannes, Aleksandar Bojchevski, and Stephan Günnemann. “Predict then propagate: Graph neural networks meet personalized pagerank.” arXiv preprint arXiv:1810.05997 (2018).

  14. Dosovitskiy, Alexey, et al. “An image is worth 16x16 words: Transformers for image recognition at scale.” arXiv preprint arXiv:2010.11929 (2020).

  15. Carion, Nicolas, et al. “End-to-End Object Detection with Transformers.” arXiv preprint arXiv:2005.12872 (2020).

  16. Chen, Hanting, et al. “Pre-Trained Image Processing Transformer.” arXiv preprint arXiv:2012.00364 (2020).

  17. Chefer, Hila, Shir Gur, and Lior Wolf. “Transformer Interpretability Beyond Attention Visualization.” arXiv preprint arXiv:2012.09838 (2020).