
dacostaHugo/Adversarial_attacks


Adversarial attacks

Implementation of state-of-the-art adversarial attacks (L_inf PGD, Carlini-Wagner) against a CNN trained on CIFAR-10, together with personal research and proposals for new adversarial attacks.

Goals of this project:

  • Discover the importance of the adversarial robustness of a classifier, especially a neural network (e.g. a CNN) trained on images from CIFAR-10.
  • Contribute to this subject by exploring some personal ideas for relevant attacks on a CNN, in order to reduce its accuracy.

Adversarial attacks presented:

  • L_inf PGD
  • Personal attacks (targeted attacks inspired by PGD, and a locally unconstrained "corner attack")
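
To make the L_inf PGD attack above concrete, here is a minimal numpy sketch of the projected-gradient-descent loop against a toy logistic-regression classifier. This is an illustrative assumption, not the repository's code: the repo targets a CNN on CIFAR-10, and the model, weights, and function names below (`pgd_linf`, `grad_fn`, `w`) are invented for the example.

```python
import numpy as np

def pgd_linf(x, y, grad_fn, eps=0.03, alpha=0.01, steps=10):
    """L_inf PGD: repeatedly step along the sign of the loss gradient,
    then project back into the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

# Toy model (illustrative): logistic regression with fixed weights w,
# loss(x, y) = -log sigmoid(y * w.x) for a label y in {-1, +1}.
w = np.array([1.0, -2.0, 0.5])

def grad_fn(x, y):
    # Gradient of the logistic loss w.r.t. the input x:
    # dL/dx = -y * sigmoid(-y * w.x) * w
    s = 1.0 / (1.0 + np.exp(y * (w @ x)))
    return -y * s * w

x = np.array([0.5, 0.5, 0.5])          # clean input
x_adv = pgd_linf(x, +1, grad_fn, eps=0.1, alpha=0.05, steps=20)
```

With these settings the perturbation stays within the eps-ball (max |x_adv - x| <= 0.1) while the loss on the true label increases; in the repo's setting the analytic `grad_fn` would be replaced by backpropagation through the CNN.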

