Code for the paper: Robustness Meets Fairness: Investigating Adversarial Attack Effects on Alleviating Model Bias
The code has not yet been cleaned or fully structured. The experiments live in three folders: app_reviews, fake_news, and news_sentiments. We adopted open-source packages such as TextAttack and AIF360 in this work. Please install all dependencies before running the experiment code.
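As a rough guide, both packages are available on PyPI (pip install textattack aif360). The sketch below is not part of this repository; it only illustrates how the two packages are commonly combined, with TextAttack generating adversarial examples against a text classifier and AIF360 measuring group fairness. The model checkpoint, example text, toy dataframe, and "gender" protected attribute are all illustrative assumptions, not values from this repo or the paper.

```python
# Minimal sketch (assumptions noted above): TextAttack for adversarial
# examples, AIF360 for a group-fairness metric.
import pandas as pd
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import Dataset
from textattack.models.wrappers import HuggingFaceModelWrapper
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Wrap a HuggingFace sentiment classifier for TextAttack
# (this checkpoint is an assumption, not the paper's model).
name = "textattack/bert-base-uncased-SST-2"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler attack recipe and attack one toy example.
attack = TextFoolerJin2019.build(wrapper)
attacker = Attacker(attack, Dataset([("I really enjoyed this app.", 1)]),
                    AttackArgs(num_examples=1))
attacker.attack_dataset()

# Measure statistical parity on a toy labeled dataframe with AIF360
# ("gender" as the protected attribute is purely illustrative).
df = pd.DataFrame({"label": [1, 0, 1, 0], "gender": [1, 1, 0, 0]})
bld = BinaryLabelDataset(df=df, label_names=["label"],
                         protected_attribute_names=["gender"])
metric = BinaryLabelDatasetMetric(bld,
                                  privileged_groups=[{"gender": 1}],
                                  unprivileged_groups=[{"gender": 0}])
print("Statistical parity difference:",
      metric.statistical_parity_difference())
```

For the actual experiments, run the scripts inside each of the three folders listed above after the dependencies are installed.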