ABFL: A Blockchain-enabled Robust Framework for Secure and Trustworthy Federated Learning
The rapid growth of large pre-trained models in recent years has increased the demand for effective, secure, and privacy-preserving machine learning methods. In large-scale AI applications, federated learning (FL) has emerged as a cutting-edge approach to addressing privacy and data-silo issues. However, FL systems are vulnerable to poisoning attacks, and their centralized master-slave architectures suffer from limitations in reliability, fairness, and security. To address these challenges, we propose ABFL, a secure and efficient decentralized FL framework. The framework tightly integrates FL with blockchain technology to strengthen data ownership guarantees and to mitigate the negative impact of malicious nodes on the global model. Using historical data stored on the blockchain, ABFL predicts model updates and identifies malicious nodes by verifying the consistency of their submissions. In addition, we present a novel agent consensus mechanism that reduces the cost of model cross-validation and improves consensus efficiency. Comprehensive experiments on three benchmark datasets demonstrate that ABFL is robust against various sophisticated poisoning attacks while maintaining high model performance and increasing consensus efficiency.
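To make the consistency-based detection idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes a node's expected update can be predicted from its on-chain update history (here, a simple moving average) and that a node is flagged when its submitted update diverges from the prediction (here, by cosine similarity below an illustrative threshold). The prediction rule, threshold, and function names are assumptions for illustration only.

import numpy as np

def predict_update(history, window=3):
    # Predict the next update as the mean of the last `window` historical updates
    # retrieved from the blockchain (illustrative prediction rule).
    return np.mean(history[-window:], axis=0)

def is_suspicious(predicted, submitted, threshold=0.5):
    # Flag the submitted update if its cosine similarity to the prediction is too low.
    cos = np.dot(predicted, submitted) / (
        np.linalg.norm(predicted) * np.linalg.norm(submitted) + 1e-12
    )
    return cos < threshold

# Example: a node whose past updates are consistent suddenly submits a sign-flipped update.
history = [np.array([1.0, 0.9, 1.1]),
           np.array([0.95, 1.0, 1.05]),
           np.array([1.05, 0.9, 1.0])]
predicted = predict_update(history)
print(is_suspicious(predicted, np.array([1.0, 0.95, 1.0])))    # False: consistent with history
print(is_suspicious(predicted, np.array([-1.0, -0.9, -1.1])))  # True: likely poisoned update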