[project] [arxiv] Wenxuan Wang, Chenglei Wang, Huihui Qi, Menghao Ye, Xuelin Qian*, Peng Wang, Yanning Zhang
When confronted with continually emerging new adversarial examples in complex and long-term multimedia applications, existing adversarial training methods struggle to adapt to iteratively updated attack methods. In contrast, our SSEAT model achieves sustained defense improvements by continuously absorbing new adversarial knowledge.
Create and activate the conda environment SSEAT from our requirements.yaml:
conda env create -f requirements.yaml
conda activate SSEAT
Please download the CIFAR-10 and CIFAR-100 datasets from CIFAR, and place the adversarial samples generated by each adversarial attack algorithm into the /dataset folder:
/dataset
┣ 📂 CIFAR10
┃ ┗ 📂 data
┃   ┣ 📜 FGSM.pth
┃   ┣ 📜 PGD.pth
┃   ┣ 📜 SIM.pth
┃   ┣ 📜 DIM.pth
┃   ┗ 📜 VNIM.pth
┃
┗ 📂 CIFAR100
  ┗ 📂 data
    ┣ 📜 FGSM.pth
    ┣ 📜 PGD.pth
    ┣ 📜 SIM.pth
    ┣ 📜 DIM.pth
    ┗ 📜 VNIM.pth
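If you need to produce the adversarial splits yourself, the minimal sketch below shows one way to do it with the torchattacks library on CIFAR-10. The source model (a ResNet-18), the FGSM attack settings, and the saved .pth format (a dict of adversarial images and labels) are assumptions for illustration and may differ from what this repository's data loader actually expects.

```python
# Hypothetical sketch: generate adversarial CIFAR-10 samples with torchattacks
# and save them in the layout shown above. The .pth format is an assumption.
import os
import torch
import torchattacks
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Any trained CIFAR-10 classifier can serve as the source model; ResNet-18 is assumed here.
model = models.resnet18(num_classes=10).to(device).eval()

testset = datasets.CIFAR10(root="./data", train=False, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(testset, batch_size=256, shuffle=False)

# FGSM is one of the attacks listed above; the other attacks follow the same pattern.
attack = torchattacks.FGSM(model, eps=8 / 255)

adv_images, labels = [], []
for x, y in loader:
    adv_images.append(attack(x.to(device), y.to(device)).cpu())
    labels.append(y)

os.makedirs("dataset/CIFAR10/data", exist_ok=True)
torch.save({"images": torch.cat(adv_images), "labels": torch.cat(labels)},
           "dataset/CIFAR10/data/FGSM.pth")
```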
You can set the hyperparameters in run.sh, then start training:
bash run.sh
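To inspect a generated split or evaluate a trained checkpoint on it, a loader along these lines works, assuming the same dict layout ("images", "labels") as in the sketch above:

```python
# Hypothetical sketch: load one saved adversarial split into a DataLoader.
import torch
from torch.utils.data import DataLoader, TensorDataset

blob = torch.load("dataset/CIFAR10/data/FGSM.pth")
adv_set = TensorDataset(blob["images"], blob["labels"])
adv_loader = DataLoader(adv_set, batch_size=256, shuffle=False)
```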
@inproceedings{wang2024sustainable,
title={Sustainable Self-evolution Adversarial Training},
author={Wang, Wenxuan and Wang, Chenglei and Qi, Huihui and Ye, Menghao and Qian, Xuelin and Wang, Peng and Zhang, Yanning},
booktitle={Proceedings of the 32nd ACM International Conference on Multimedia},
pages={9799--9808},
year={2024}
}

