Implementation of state-of-the-art adversarial attacks (L_inf PGD, Carlini-Wagner) against a CNN trained on CIFAR-10, together with personal research and proposals for new adversarial attacks.
- Explore the importance of the adversarial robustness of a classifier, especially a neural network (e.g. a CNN) trained on images from CIFAR-10.
- Contribute to this subject by exploring some personal ideas for relevant attacks on a CNN, aiming to reduce its accuracy.
- L_inf PGD
- Personal attacks (targeted attacks inspired by PGD, and a locally unconstrained "corner attack")
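The core of L_inf PGD is a loop of gradient-sign ascent steps on the loss, each followed by a projection back into the L_inf ball around the clean input. The sketch below illustrates this on a hypothetical toy model (a binary logistic classifier on flat inputs in [0, 1] with hand-derived gradients, not the repo's trained CNN), using only NumPy; the weights, step size `alpha`, budget `eps`, and step count are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: a binary logistic classifier on flat "images"
# in [0, 1]. Weights w and bias b are placeholders, not a trained CNN.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1

def logits(x):
    return x @ w + b

def loss(x, y):
    # Binary cross-entropy for a label y in {0, 1}.
    p = 1.0 / (1.0 + np.exp(-logits(x)))
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def loss_grad_x(x, y):
    # For this linear model, d(BCE)/dx = (sigmoid(z) - y) * w.
    p = 1.0 / (1.0 + np.exp(-logits(x)))
    return (p - y) * w

def pgd_linf(x0, y, eps=0.1, alpha=0.02, steps=20):
    """Untargeted L_inf PGD: ascend the loss, project into the eps-ball."""
    x = x0 + rng.uniform(-eps, eps, size=x0.shape)  # random start
    for _ in range(steps):
        x = x + alpha * np.sign(loss_grad_x(x, y))  # gradient-sign step
        x = np.clip(x, x0 - eps, x0 + eps)          # project to L_inf ball
        x = np.clip(x, 0.0, 1.0)                    # keep valid pixel range
    return x

x0 = rng.uniform(0, 1, size=16)
y = 1
x_adv = pgd_linf(x0, y)
# The perturbation stays within the eps budget while the loss increases.
```

A targeted variant only changes the inner step: descend the loss of the target label instead of ascending the loss of the true label.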
- "Towards Evaluating the Robustness of Neural Networks" : https://arxiv.org/pdf/1608.04644.pdf
- "Towards Deep Learning Models Resistant to Adversarial Attacks" : https://arxiv.org/pdf/1706.06083.pdf