Attacking Graph-Based Classification without Changing Existing Connections
In recent years, with the rapid development of machine learning across many domains, a growing body of work has shown that machine learning models are vulnerable to adversarial attacks. However, most existing research on adversarial machine learning has studied non-graph data, such as images and text. Although some previous work on graph data has shown that adversaries can make graph-based classification methods unreliable by perturbing the features or adjacency matrices of existing nodes, such attacks are often impractical in real-world applications. For example, in a real social network, an attacker cannot force two benign users to change (e.g., remove) the connection between them, so these attacks cannot be launched. In this paper, we propose a novel attack on collective classification methods that adds fake nodes to an existing graph. Our attack is more realistic and practical than the attacks mentioned above: in a real social network, an attacker only needs to create some fake accounts and connect them to existing users, without modifying the connections among existing users. We formulate the new attack as an optimization problem and use a gradient-based method to generate the edges of the newly added fake nodes. Our extensive experiments show that the attack not only lets the new fake nodes evade detection, but also makes the detector misclassify most of the target nodes. The proposed attack is highly effective and achieves up to 100% False Negative Rates (FNRs) on both the new node set and the target node set.
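To illustrate the optimization-based formulation described in the abstract, the sketch below shows one plausible way to set up such an attack: the edges from injected fake nodes are relaxed to continuous weights and optimized by gradient ascent so that target nodes are misclassified, then discretized under an edge budget. This is not the authors' implementation; the surrogate model (a one-layer GCN standing in for the paper's collective classifier), the loss, the edge budget k, and all names are illustrative assumptions.

```python
# Hedged sketch: gradient-based edge generation for injected fake nodes.
# Only the fake nodes' edges are optimized; the existing adjacency is frozen,
# matching the paper's constraint of not changing existing connections.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy setup (all sizes are placeholders).
n, d, c = 30, 8, 2                      # existing nodes, feature dim, classes
n_fake = 3                              # fake nodes the attacker injects
A = (torch.rand(n, n) < 0.1).float()
A = ((A + A.t()) > 0).float()           # symmetric existing adjacency (frozen)
X = torch.randn(n, d)
y = torch.randint(0, c, (n,))
targets = torch.arange(5)               # nodes the attacker wants misclassified

# Surrogate classifier: a one-layer GCN with (assumed pre-trained) weights W.
W = torch.randn(d, c)

def gcn_logits(A_full, X_full):
    A_hat = A_full + torch.eye(A_full.size(0))   # add self-loops
    D = torch.diag(A_hat.sum(1).pow(-0.5))       # symmetric normalization
    return D @ A_hat @ D @ X_full @ W

# Attack variables: ONLY the new edges (fake-to-existing and fake-to-fake),
# relaxed to soft weights in [0, 1] via a sigmoid so they are differentiable.
X_fake = torch.randn(n_fake, d)                              # fake features (fixed here)
theta = torch.zeros(n_fake, n + n_fake, requires_grad=True)  # soft edge logits

opt = torch.optim.Adam([theta], lr=0.1)
for step in range(200):
    s = torch.sigmoid(theta)
    # Assemble the full adjacency: existing block is untouched,
    # only the new rows/columns depend on theta.
    top = torch.cat([A, s[:, :n].t()], dim=1)
    bot = torch.cat([s[:, :n], s[:, n:]], dim=1)
    A_full = torch.cat([top, bot], dim=0)
    logits = gcn_logits(A_full, torch.cat([X, X_fake], dim=0))
    # Maximize the classification loss on the target nodes
    # (minimizing its negative) so the surrogate flips their labels.
    loss = -F.cross_entropy(logits[targets], y[targets])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Discretize: keep the k strongest edges per fake node (budget k is assumed).
k = 4
edges = torch.topk(torch.sigmoid(theta).detach(), k, dim=1).indices
print("edges chosen for each fake node:", edges.tolist())
```

In this sketch the continuous relaxation is what makes the combinatorial edge-selection problem amenable to gradient methods; the final top-k projection is one simple way to return to discrete edges, though the paper may use a different discretization or budget scheme.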