This repository contains the source code for the paper "FedGA: Federated Learning with Gradient Alignment for Error Asymmetry Mitigation". Please refer to the PDF file for the convergence analysis and proofs.
All the experiments were conducted on a Linux machine with the following specifications:
- OS: Red Hat Enterprise Linux 8.5 (Ootpa)
- CPU: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
The code was developed using the following versions of the software:
- Python: 3.11.9
The following packages are required to run the code:
- datasets 2.21.0
- python-box 7.0
- scipy 1.12.0
- torchvision 0.15.2
- torchmetrics 0.10.1
- wandb 0.17.7
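Since dependencies are synced with uv, the packages above would typically be declared in a pyproject.toml. A minimal sketch follows (the project name and Python pin are assumptions; only the versions listed above are taken from this README):

```toml
[project]
name = "fedga"                 # hypothetical project name
requires-python = ">=3.11"
dependencies = [
    "datasets==2.21.0",
    "python-box==7.0",
    "scipy==1.12.0",
    "torchvision==0.15.2",
    "torchmetrics==0.10.1",
    "wandb==0.17.7",
]
```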
Always cd to the root directory of this project before running the experiments.
Sync the dependencies with uv:
uv sync
Activate the virtual environment before running the experiments:
source .venv/bin/activate
To run a short example, use the following command:
python fedlearn.py
To run a specific experiment, e.g. on the SVHN dataset with Dirichlet parameter 0.1 using the FedGA algorithm, supply the configuration as command-line arguments:
python fedlearn.py DATA.NAME SVHN DATA.IB.ALPHA 0.1 FL.ALG FedGA
Alternatively, make a copy of the default config file config_default, modify it, and run:
python fedlearn.py --cfg configs/your_config.yaml
To reproduce the results in the paper, run the following commands:
python fedlearnParallel.py --cfg configs/config_MNIST.yaml
python fedlearnParallel.py --cfg configs/config_SVHN.yaml
python fedlearnParallel.py --cfg configs/config_CIFAR10.yaml
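For orientation, the dotted override keys above (e.g. DATA.IB.ALPHA) imply a nested config structure. The sketch below is hypothetical: it uses a plain dict in place of the project's python-box config and invented default values, but illustrates how KEY VALUE pairs such as DATA.NAME SVHN map onto that nesting:

```python
def apply_overrides(cfg: dict, pairs: list[str]) -> dict:
    """Apply command-line KEY VALUE pairs (e.g. DATA.NAME SVHN) to a nested config."""
    for key, value in zip(pairs[::2], pairs[1::2]):
        node = cfg
        *parents, leaf = key.split(".")
        for p in parents:                     # walk down the nesting implied by the dots
            node = node[p]
        node[leaf] = type(node[leaf])(value)  # cast to the default value's type
    return cfg

# Hypothetical defaults standing in for configs/config_default (values invented)
defaults = {"DATA": {"NAME": "MNIST", "IB": {"ALPHA": 1.0}}, "FL": {"ALG": "FedAvg"}}
cfg = apply_overrides(defaults, ["DATA.NAME", "SVHN", "DATA.IB.ALPHA", "0.1", "FL.ALG", "FedGA"])
print(cfg)  # {'DATA': {'NAME': 'SVHN', 'IB': {'ALPHA': 0.1}}, 'FL': {'ALG': 'FedGA'}}
```

Note that casting to the default's type keeps numeric options (like ALPHA) numeric even though command-line arguments arrive as strings.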