Representation Learning on Networks

[Figure: a social network of users A–H, each labeled Class +1 or Class −1; every user is mapped to a vector such as 0.8 0.2 0.3 … 0.0 0.0]

How to automate the representation of each user?
Outline
• Representation Learning on Networks
• Revisiting Graph Neural Networks
• Applications
• Conclusion and Q&A
Review: Representation Learning for Networks

Representation learning (graph embedding) maps each node to a d-dimensional vector, with d << |V|, e.g., 0.8 0.2 0.3 … 0.0 0.0. Users with the same label (label1 vs. label2) should lie closer to each other in the d-dimensional space than users with different labels, e.g., for node classification.
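To make the node-classification setup above concrete, here is a minimal sketch of the standard evaluation protocol: fit a linear classifier on top of pre-computed node embeddings. The embedding matrix, labels, and dimensionality are placeholders, not values from the slides.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical setup: |V| = 1000 nodes, each already embedded into d = 128 dimensions
num_nodes, d = 1000, 128
embeddings = np.random.randn(num_nodes, d)        # stand-in for learned embeddings
labels = np.random.randint(0, 2, size=num_nodes)  # stand-in for label1 / label2

# Train a linear classifier on the embeddings and report held-out accuracy
X_train, X_test, y_train, y_test = train_test_split(embeddings, labels, test_size=0.5)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("node classification accuracy:", clf.score(X_test, y_test))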
DeepWalk [1]

[Figure: an example graph with nodes v1–v6; one example random-walk path: v4 → v3 → v1 → v5 → v6]

DeepWalk samples truncated random walks from the graph, treats each walk as a sentence, and trains SkipGram with hierarchical softmax over the node "vocabulary":

Pr(u_k | Φ(v_j)) = ∏_{l=1}^{⌈log|V|⌉} Pr(b_l | Φ(v_j)),

where u_k is identified with the path (b_1, …, b_{⌈log|V|⌉}) in a binary tree over the nodes and Pr(b_l | Φ(v_j)) = 1 / (1 + e^{−Φ(v_j)·Ψ(b_l)}).

1. B. Perozzi, R. Al-Rfou, and S. Skiena. 2014. DeepWalk: Online learning of social representations. KDD, 701–710.
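The following is a minimal sketch of the DeepWalk pipeline using networkx and gensim, not the authors' released code; the graph, walk length, number of walks per node, and embedding dimension are illustrative choices.

import random
import networkx as nx
from gensim.models import Word2Vec

def random_walk(G, start, walk_length):
    """Sample one truncated random walk starting from `start`."""
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = list(G.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(v) for v in walk]  # gensim expects string tokens

G = nx.karate_club_graph()                       # small example graph
walks = [random_walk(G, v, walk_length=40)       # several walks per node
         for _ in range(10) for v in G.nodes()]

# Treat each walk as a "sentence"; sg=1, hs=1 gives SkipGram with hierarchical softmax
model = Word2Vec(sentences=walks, vector_size=64, window=5,
                 min_count=0, sg=1, hs=1, workers=4)
embedding_of_node_0 = model.wv["0"]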
Later…
• LINE [1]: explicitly preserves both first-order and second-order proximities.
• PTE [2]: learns heterogeneous text network embeddings in a semi-supervised manner, using labeled text to guide the embedding.
• Node2vec [3]: uses a biased random walk, controlled by a return parameter p and an in-out parameter q, to better explore a node's neighborhood (see the sketch after the references).

1. J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. 2015. LINE: Large-scale information network embedding. WWW, 1067–1077.
2. J. Tang, M. Qu, and Q. Mei. 2015. PTE: Predictive text embedding through large-scale heterogeneous text networks. KDD, 1165–1174.
3. A. Grover and J. Leskovec. 2016. node2vec: Scalable feature learning for networks. KDD, 855–864.
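To illustrate node2vec's biased walk, here is a simplified sketch of its second-order transition rule (weights 1/p for returning, 1 for staying within the previous node's neighborhood, 1/q for moving outward). It is not the authors' implementation (no alias sampling), and the graph and p, q values are illustrative.

import random
import networkx as nx

def biased_walk(G, start, walk_length, p=1.0, q=1.0):
    """node2vec-style second-order random walk with unnormalized transition weights."""
    walk = [start]
    while len(walk) < walk_length:
        cur = walk[-1]
        neighbors = list(G.neighbors(cur))
        if not neighbors:
            break
        if len(walk) == 1:
            walk.append(random.choice(neighbors))
            continue
        prev = walk[-2]
        weights = []
        for x in neighbors:
            if x == prev:                  # returning to the previous node
                weights.append(1.0 / p)
            elif G.has_edge(x, prev):      # staying close to prev (BFS-like)
                weights.append(1.0)
            else:                          # moving outward (DFS-like)
                weights.append(1.0 / q)
        walk.append(random.choices(neighbors, weights=weights, k=1)[0])
    return walk

G = nx.karate_club_graph()
print(biased_walk(G, start=0, walk_length=10, p=0.25, q=4.0))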