{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,31]],"date-time":"2025-03-31T22:47:27Z","timestamp":1743461247596},"reference-count":40,"publisher":"PeerJ","license":[{"start":{"date-parts":[[2021,9,13]],"date-time":"2021-09-13T00:00:00Z","timestamp":1631491200000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61572034"],"award-info":[{"award-number":["61572034"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100018530","name":"Major Science and Technology Projects in Anhui Province","doi-asserted-by":"crossref","award":["18030901025"],"award-info":[{"award-number":["18030901025"]}],"id":[{"id":"10.13039\/501100018530","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Anhui Province University Natural Science Fund","award":["KJ2019A0109"],"award-info":[{"award-number":["KJ2019A0109"]}]},{"name":"Natural Science Foundation of Anhui Province of China","award":["2008085MF220"],"award-info":[{"award-number":["2008085MF220"]}]},{"name":"Science and Technology Project of Wuhu City in 2020","award":["2020yf48"],"award-info":[{"award-number":["2020yf48"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],
"abstract":"<jats:p>Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require the query authority of the target during their work. In a more practical situation, the attacker will be easily detected because of too many queries, and this problem is especially obvious under the black-box setting. To solve the problem, we propose the Attack Without a Target Model (AWTM). Our algorithm does not specify any target model in generating adversarial examples, so it does not need to query the target. Experimental results show that it achieved a maximum attack success rate of 81.78% in the MNIST data set and 87.99% in the CIFAR-10 data set. In addition, it has a low time cost because it is a GAN-based method.<\/jats:p>",
"DOI":"10.7717\/peerj-cs.702","type":"journal-article","created":{"date-parts":[[2021,9,13]],"date-time":"2021-09-13T11:03:42Z","timestamp":1631531022000},"page":"e702","source":"Crossref","is-referenced-by-count":4,"title":["Generating adversarial examples without specifying a target model"],"prefix":"10.7717","volume":"7",
"author":[{"given":"Gaoming","family":"Yang","sequence":"first","affiliation":[{"name":"School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China"}]},{"given":"Mingwei","family":"Li","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China"}]},{"given":"Xianjing","family":"Fang","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China"}]},{"given":"Ji","family":"Zhang","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Computing, University of Southern Queensland, Queensland, Australia"}]},{"given":"Xingzhu","family":"Liang","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China"}]}],"member":"4443","published-online":{"date-parts":[[2021,9,13]]},
"reference":[{"key":"10.7717\/peerj-cs.702\/ref-1","first-page":"1","article-title":"BI-GRU capsule networks for student answers assessment","author":"Ait-Khayi","year":"2019"},{"key":"10.7717\/peerj-cs.702\/ref-2","first-page":"214","article-title":"Wasserstein generative adversarial networks","author":"Arjovsky","year":"2017"},{"key":"10.7717\/peerj-cs.702\/ref-3","first-page":"1","article-title":"Using deep learning to recommend discussion threads to users in an online forum","author":"Buhagiar","year":"2018"},{"key":"10.7717\/peerj-cs.702\/ref-4","first-page":"39","article-title":"Towards evaluating the robustness of neural networks","author":"Carlini","year":"2017"},{"key":"10.7717\/peerj-cs.702\/ref-5","first-page":"1","article-title":"Audio adversarial examples: targeted attacks on speech-to-text","author":"Carlini","year":"2018"},{"key":"10.7717\/peerj-cs.702\/ref-6","first-page":"7","article-title":"Wide & deep learning for recommender systems","author":"Cheng","year":"2016"},{"key":"10.7717\/peerj-cs.702\/ref-7","first-page":"321","article-title":"Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks","author":"Demontis","year":"2019"},{"key":"10.7717\/peerj-cs.702\/ref-8","first-page":"4690","article-title":"Arcface: additive angular margin loss for deep face recognition","author":"Deng","year":"2019"},{"key":"10.7717\/peerj-cs.702\/ref-9","first-page":"9185","article-title":"Boosting adversarial attacks with momentum","author":"Dong","year":"2018"},{"issue":"2","key":"10.7717\/peerj-cs.702\/ref-10","doi-asserted-by":"publisher","first-page":"654","DOI":"10.1016\/j.ejor.2017.11.054","article-title":"Deep learning with long short-term memory networks for financial market predictions","volume":"270","author":"Fischer","year":"2018","journal-title":"European Journal of Operational Research"},{"key":"10.7717\/peerj-cs.702\/ref-11","first-page":"2672","article-title":"Generative adversarial nets","author":"Goodfellow","year":"2014"},{"key":"10.7717\/peerj-cs.702\/ref-12","first-page":"1","article-title":"Explaining and harnessing adversarial examples","author":"Goodfellow","year":"2014"},{"key":"10.7717\/peerj-cs.702\/ref-13","article-title":"Advbox: a toolbox to generate adversarial examples that fool neural networks","author":"Goodman","year":"2020"},{"issue":"5786","key":"10.7717\/peerj-cs.702\/ref-14","doi-asserted-by":"publisher","first-page":"504","DOI":"10.1126\/science.1127647","article-title":"Reducing the dimensionality of data with neural networks","volume":"313","author":"Hinton","year":"2006","journal-title":"Science"},{"key":"10.7717\/peerj-cs.702\/ref-15","first-page":"2021","article-title":"Adversarial examples for evaluating reading comprehension systems","author":"Jia","year":"2017"},{"key":"10.7717\/peerj-cs.702\/ref-16","first-page":"99","article-title":"Adversarial examples in the physical world","author":"Kurakin","year":"2016"},{"key":"10.7717\/peerj-cs.702\/ref-17","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR42600.2020.00044","article-title":"Projection & probability-driven black-box attack","author":"Li","year":"2020a"},{"key":"10.7717\/peerj-cs.702\/ref-18","first-page":"1","article-title":"Universal adversarial perturbations generative network for speaker recognition","author":"Li","year":"2020b"},{"key":"10.7717\/peerj-cs.702\/ref-19","first-page":"4208","article-title":"Deep text classification can be fooled","author":"Liang","year":"2018"},{"key":"10.7717\/peerj-cs.702\/ref-20","first-page":"1","article-title":"Towards deep learning models resistant to adversarial attacks","author":"Madry","year":"2017"},{"key":"10.7717\/peerj-cs.702\/ref-21","first-page":"1765","article-title":"Universal adversarial perturbations","author":"Moosavi-Dezfooli","year":"2017"},{"key":"10.7717\/peerj-cs.702\/ref-22","first-page":"2574","article-title":"Deepfool: a simple and accurate method to fool deep neural networks","author":"Moosavi-Dezfooli","year":"2016"},{"key":"10.7717\/peerj-cs.702\/ref-23","first-page":"742","article-title":"NAG: network for adversary generation","author":"Mopuri","year":"2018"},{"key":"10.7717\/peerj-cs.702\/ref-24","first-page":"427","article-title":"Deep neural networks are easily fooled: high confidence predictions for unrecognizable images","author":"Nguyen","year":"2015"},{"key":"10.7717\/peerj-cs.702\/ref-25","first-page":"2642","article-title":"Conditional image synthesis with auxiliary classifier gans","author":"Odena","year":"2017"},{"key":"10.7717\/peerj-cs.702\/ref-26","article-title":"Transferability in machine learning: from phenomena to black-box attacks using adversarial samples","author":"Papernot","year":"2016"},{"key":"10.7717\/peerj-cs.702\/ref-27","first-page":"506","article-title":"Practical black-box attacks against machine learning","author":"Papernot","year":"2017"},{"key":"10.7717\/peerj-cs.702\/ref-28","first-page":"372","article-title":"The limitations of deep learning in adversarial settings","author":"Papernot","year":"2016"},{"key":"10.7717\/peerj-cs.702\/ref-29","first-page":"1528","article-title":"Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition","author":"Sharif","year":"2016"},{"key":"10.7717\/peerj-cs.702\/ref-30","first-page":"8312","article-title":"Constructing unrestricted adversarial examples with generative models","volume":"31","author":"Song","year":"2018","journal-title":"Advances in Neural Information Processing Systems"},{"issue":"5","key":"10.7717\/peerj-cs.702\/ref-31","doi-asserted-by":"publisher","first-page":"828","DOI":"10.1109\/TEVC.2019.2890858","article-title":"One pixel attack for fooling deep neural networks","volume":"23","author":"Su","year":"2019","journal-title":"IEEE Transactions on Evolutionary Computation"},{"key":"10.7717\/peerj-cs.702\/ref-32","first-page":"1","article-title":"Intriguing properties of neural networks","author":"Szegedy","year":"2013"},{"key":"10.7717\/peerj-cs.702\/ref-33","first-page":"1","article-title":"Adversarial images for variational autoencoders","author":"Tabacof","year":"2016"},{"key":"10.7717\/peerj-cs.702\/ref-34","article-title":"Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples","author":"Tuna","year":"2021"},{"key":"10.7717\/peerj-cs.702\/ref-35","first-page":"1","article-title":"Automated risk identification using NLP in cloud based development environments","volume":"3","author":"Vijayakumar","year":"2019","journal-title":"Journal of Ambient Intelligence and Humanized Computing"},{"key":"10.7717\/peerj-cs.702\/ref-36","first-page":"3905","article-title":"Generating adversarial examples with adversarial networks","author":"Xiao","year":"2018"},{"key":"10.7717\/peerj-cs.702\/ref-37","first-page":"2730","article-title":"Improving transferability of adversarial examples with input diversity","author":"Xie","year":"2019"},{"key":"10.7717\/peerj-cs.702\/ref-38","first-page":"1","article-title":"Generating natural adversarial examples","author":"Zhao","year":"2017"},{"key":"10.7717\/peerj-cs.702\/ref-39","doi-asserted-by":"publisher","first-page":"1452","DOI":"10.1109\/TIFS.2020.3036801","article-title":"Towards transferable adversarial attack against deep face recognition","volume":"16","author":"Zhong","year":"2020","journal-title":"IEEE Transactions on Information Forensics and Security"},{"key":"10.7717\/peerj-cs.702\/ref-40","first-page":"2847","article-title":"Adversarial attacks on neural networks for graph data","author":"Z\u00fcgner","year":"2018"}],
"container-title":["PeerJ Computer Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/peerj.com\/articles\/cs-702.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/peerj.com\/articles\/cs-702.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/peerj.com\/articles\/cs-702.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/peerj.com\/articles\/cs-702.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,9]],"date-time":"2023-01-09T02:15:57Z","timestamp":1673230557000},"score":1,"resource":{"primary":{"URL":"https:\/\/peerj.com\/articles\/cs-702"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,9,13]]},"references-count":40,"alternative-id":["10.7717\/peerj-cs.702"],"URL":"https:\/\/doi.org\/10.7717\/peerj-cs.702","archive":["CLOCKSS","LOCKSS","Portico"],"relation":{},"ISSN":["2376-5992"],"issn-type":[{"value":"2376-5992","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,9,13]]},"article-number":"e702"}}