Better Together: Attaining the Triad of Byzantine-robust Federated Learning via Local Update Amplification
Manipulation of local training data and local updates, i.e., the Byzantine poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Many Byzantine-robust aggregation algorithms (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants at the central aggregator. However, they largely suffer from model quality degradation due to the over-removal of local updates, and/or from inefficiency caused by the expensive analysis of high-dimensional local updates.
In this work, we propose AgrAmplifier, which aims to simultaneously attain the triad of robustness, fidelity, and efficiency for FL. AgrAmplifier amplifies the "morality" of local updates to render their maliciousness or benignness clearly distinguishable: it re-organizes each local update into patches and extracts the most activated features from each patch. This strategy effectively enhances the robustness of the aggregator, and it also retains high fidelity, as the amplified updates become more resistant to local translations. Furthermore, the significant dimension reduction in the feature space greatly benefits the efficiency of the aggregation.
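As a rough illustration (not the authors' implementation), the amplification step can be viewed as a max-pooling-style reduction over fixed-size patches of a flattened update. In the sketch below, the patch size and the magnitude-based, sign-preserving selection rule are illustrative assumptions:

```python
import numpy as np

def amplify(update: np.ndarray, patch_size: int = 64) -> np.ndarray:
    """Compress a flattened local update by keeping only the most
    activated (largest-magnitude) entry of each fixed-size patch.
    The patch size is an assumed hyperparameter for illustration."""
    # Pad so the update length is a multiple of the patch size.
    pad = (-len(update)) % patch_size
    patches = np.pad(update, (0, pad)).reshape(-1, patch_size)
    # For each patch, keep the entry with the largest magnitude,
    # preserving its sign (a max-pooling-style reduction).
    idx = np.argmax(np.abs(patches), axis=1)
    return patches[np.arange(len(patches)), idx]
```

Keeping the signed, largest-magnitude entry per patch both shrinks the representation and exaggerates each update's dominant directions, which is what makes malicious and benign updates easier to separate downstream.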
AgrAmplifier is compatible with any existing Byzantine-robust mechanism. In this paper, we integrate it with three mainstream ones, i.e., distance-based, prediction-based, and trust bootstrapping-based mechanisms. Our extensive evaluation against five representative poisoning attacks on five datasets across diverse domains demonstrates consistent enhancement for all of them, with average gains of 57.47%, 30.40%, and 10.68% in robustness, fidelity, and efficiency, respectively.
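To illustrate how amplified updates could plug into a distance-based mechanism, the following hypothetical sketch scores each client by the distance of its amplified update to the coordinate-wise median of all amplified updates, then averages the original updates of the closest clients. The function names, the median reference point, and the top-k selection rule are assumptions for illustration, not the paper's exact algorithm:

```python
def distance_based_aggregate(updates: list, keep: int) -> np.ndarray:
    """Hypothetical distance-based AGR on amplified updates: score each
    client in the low-dimensional amplified space, then aggregate the
    original full-dimensional updates of the `keep` closest clients."""
    amplified = np.stack([amplify(u) for u in updates])
    median = np.median(amplified, axis=0)
    dists = np.linalg.norm(amplified - median, axis=1)
    selected = np.argsort(dists)[:keep]
    # Average the original updates of the presumed-benign clients.
    return np.mean(np.stack([updates[i] for i in selected]), axis=0)
```

Because the distance computation runs in the amplified (much lower-dimensional) space while aggregation still uses the full updates, this illustrates how the dimension reduction can cut filtering cost without discarding model information.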