Link Membership Inference Attacks against Unsupervised Graph Representation Learning
Unsupervised graph representation learning (UGRL) has advanced significantly in recent years. UGRL represents the nodes of large graphs as low-dimensional vectors, commonly referred to as embeddings. These embeddings can be publicly released or shared with third parties for downstream analytics. However, adversaries can deduce sensitive structural information about the target graph from its embeddings using various privacy inference attacks. This paper investigates the privacy vulnerabilities of UGRL models through the lens of the {\em link membership inference attack} (LMIA). Specifically, an LMIA adversary aims to infer whether two given nodes are connected in the target graph from the node embeddings generated by a UGRL model. To this end, we propose two attacks that leverage the properties of node embeddings and various forms of adversary knowledge for inference. Through experiments on four state-of-the-art UGRL models and five real-world graph datasets, we demonstrate the effectiveness of both attacks. Furthermore, we conduct a comprehensive analysis of how varying degrees of structural information preserved in the embeddings affect LMIA performance. To harden UGRL models against LMIA, we design a family of defense mechanisms that perturb the least significant dimensions of the embeddings. Our experimental results show that these defenses achieve a favorable balance between defense effectiveness and embedding quality.
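To make the attack surface concrete, the sketch below illustrates the general idea behind an LMIA in a minimal form. The abstract does not specify the paper's actual attack constructions, so this is an assumption-laden illustration: a similarity-thresholding adversary who holds only the released embedding matrix, exploiting the tendency of UGRL models to place connected nodes close together. The function names (`cosine_similarity`, `infer_link`) and the threshold calibration are hypothetical.

\begin{verbatim}
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def infer_link(embeddings: np.ndarray, i: int, j: int,
               threshold: float = 0.8) -> bool:
    """Predict whether nodes i and j are connected in the target graph.

    `embeddings` is the (num_nodes x dim) matrix released by the UGRL
    model. In practice the threshold would be calibrated using the
    adversary's background knowledge (e.g., a shadow graph); a fixed
    value is assumed here purely for illustration.
    """
    return cosine_similarity(embeddings[i], embeddings[j]) >= threshold
\end{verbatim}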
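Similarly, the defense described in the abstract perturbs only the least significant dimensions of the embeddings. The sketch below is one plausible instantiation, not the paper's mechanism: it is assumed here that per-dimension variance across nodes serves as the significance measure, and Gaussian noise is injected into the $k$ lowest-variance dimensions so that embedding quality on the remaining dimensions is largely preserved.

\begin{verbatim}
import numpy as np

def perturb_least_significant(embeddings: np.ndarray, k: int,
                              scale: float, rng=None) -> np.ndarray:
    """Add Gaussian noise to the k least significant dimensions.

    Significance is approximated by per-dimension variance across all
    nodes (an assumption; the paper may define it differently).
    """
    rng = np.random.default_rng() if rng is None else rng
    variances = embeddings.var(axis=0)     # significance proxy (assumed)
    least = np.argsort(variances)[:k]      # indices of the k smallest
    noisy = embeddings.copy()
    noisy[:, least] += rng.normal(0.0, scale,
                                  size=(embeddings.shape[0], k))
    return noisy
\end{verbatim}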