TY - JOUR
T1 - Multiple Kernel Representation Learning on Networks
AU - Celikkanat, Abdulkadir
AU - Shen, Yanning
AU - Malliaros, Fragkiskos D.
N1 - Publisher Copyright:
© 1989-2012 IEEE.
PY - 2022/5/4
Y1 - 2022/5/4
N2 - Learning representations of nodes in a low-dimensional space is a crucial task with numerous interesting applications in network analysis, including link prediction, node classification, and visualization. Two popular approaches for this problem are matrix factorization and random walk-based models. In this paper, we aim to bring together the best of both worlds for learning node representations. In particular, we propose a weighted matrix factorization model that encodes random walk-based information about the nodes of the network. The benefit of this formulation is that it enables us to utilize kernel functions without explicitly computing the exact proximity matrix, enhancing the expressiveness of existing matrix decomposition methods with kernels while alleviating their computational complexity. We further extend the approach with a multiple kernel learning formulation that provides the flexibility of learning the kernel as a linear combination of a dictionary of kernels in a data-driven fashion. An empirical evaluation on real-world networks shows that the proposed model outperforms baseline node embedding algorithms in downstream machine learning tasks.
KW - Graph representation learning
KW - kernel methods
KW - link prediction
KW - node classification
KW - node embeddings
UR - http://www.scopus.com/inward/record.url?scp=85129422012&partnerID=8YFLogxK
DO - 10.1109/TKDE.2022.3172048
M3 - Journal article
AN - SCOPUS:85129422012
SN - 1041-4347
VL - 35
SP - 6113
EP - 6125
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 6
ER -