This repository contains the code and experiments for the manuscript:
Ditto: Fair and Robust Federated Learning Through Personalization
Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
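At its core, Ditto learns a personalized model v_k on each device alongside the global model w, by minimizing F_k(v_k) + (λ/2)‖v_k − w‖² locally. Below is a minimal, illustrative PyTorch sketch of one such personalization step; the function and variable names (`ditto_personal_step`, `lam`, etc.) are our own and are not taken from this codebase.

```python
import torch

def ditto_personal_step(personal_model, global_model, batch, loss_fn, lr, lam):
    """One Ditto personalization step on a single device (illustrative sketch).

    Takes one SGD step on F_k(v) + (lam / 2) * ||v - w||^2, where v is the
    personalized model and w is the (frozen) global model.
    """
    x, y = batch
    loss = loss_fn(personal_model(x), y)
    grads = torch.autograd.grad(loss, list(personal_model.parameters()))
    with torch.no_grad():
        for v, g, w in zip(personal_model.parameters(),
                           grads,
                           global_model.parameters()):
            # v <- v - lr * (grad F_k(v) + lam * (v - w))
            v -= lr * (g + lam * (v - w))
    return loss.item()
```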
This PyTorch implementation is based on code from the Simplicial-FL repository (Laguel et al., 2021).
We also provide a TensorFlow implementation.
Install the dependencies:
pip3 install -r requirements.txt
(A subset of) Options in models/run.sh:
- `dataset`: chosen from `[so]`, where `so` is short for StackOverflow.
- `aggregation`: chosen from `['mean', 'median', 'krum']` (see the sketch after this list).
- `attack`: chosen from `['label_poison', 'random', 'model_replacement']`.
- `num_mali_devices`: the number of malicious devices.
- `personalized`: whether to train personalized models.
- `clipping`: whether to clip the model updates while training the global model.
- `k_aggregator`: whether to run k-loss/k-norm.
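To make the `aggregation` options concrete, here is an illustrative PyTorch sketch of the three rules under our own naming (`aggregate`, `num_malicious`); it is not the exact code used in this repository.

```python
import torch

def aggregate(updates, rule="mean", num_malicious=0):
    """Aggregate a list of flattened model updates (one tensor per device).

    'mean' and 'median' are coordinate-wise; 'krum' (Blanchard et al., 2017)
    selects the single update closest, in summed squared L2 distance, to its
    n - num_malicious - 2 nearest neighbors.
    """
    stacked = torch.stack(updates)          # shape: (n_devices, n_params)
    if rule == "mean":
        return stacked.mean(dim=0)
    if rule == "median":
        return stacked.median(dim=0).values
    if rule == "krum":
        n = stacked.shape[0]
        dists = torch.cdist(stacked, stacked) ** 2   # pairwise squared L2
        k = max(n - num_malicious - 2, 1)
        # For each device, sum distances to its k closest peers
        # (column 0 of the sorted row is the zero distance to itself).
        sorted_dists, _ = dists.sort(dim=1)
        scores = sorted_dists[:, 1:k + 1].sum(dim=1)
        return stacked[scores.argmin()]
    raise ValueError(f"unknown aggregation rule: {rule}")
```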
