{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T01:07:44Z","timestamp":1776128864914,"version":"3.50.1"},"reference-count":78,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2022,2,7]],"date-time":"2022-02-07T00:00:00Z","timestamp":1644192000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Digital Threats"],"published-print":{"date-parts":[[2022,9,30]]},"abstract":"<jats:p>\n            Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before. Manipulated videos can fuel disinformation and reduce trust in media. Therefore detection of fake videos has garnered immense interest in academia and industry. Recently developed Deepfake detection methods rely on\n            <jats:bold>Deep Neural Networks (DNNs)<\/jats:bold>\n            to distinguish AI-generated fake videos from real videos. In this work, we demonstrate that it is possible to bypass such detectors by adversarially modifying fake videos synthesized using existing Deepfake generation methods. We further demonstrate that our adversarial perturbations are robust to image and video compression codecs, making them a real-world threat. We present pipelines in both white-box and black-box attack scenarios that can fool DNN-based Deepfake detectors into classifying fake videos as real. 
Finally, we study the extent to which adversarial perturbations transfer across different Deepfake detectors and create more accessible attacks using universal adversarial perturbations that pose a very feasible attack scenario since they can be easily shared amongst attackers.\n            <jats:xref ref-type=\"fn\">\n              <jats:sup>1<\/jats:sup>\n            <\/jats:xref>\n          <\/jats:p>","DOI":"10.1145\/3464307","type":"journal-article","created":{"date-parts":[[2021,5,21]],"date-time":"2021-05-21T13:20:08Z","timestamp":1621603208000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":24,"title":["Exposing Vulnerabilities of Deepfake Detection Systems with Robust Attacks"],"prefix":"10.1145","volume":"3","author":[{"given":"Shehzeen","family":"Hussain","sequence":"first","affiliation":[{"name":"University of California, San Diego, California, USA"}]},{"given":"Paarth","family":"Neekhara","sequence":"additional","affiliation":[{"name":"University of California, San Diego, California, USA"}]},{"given":"Brian","family":"Dolhansky","sequence":"additional","affiliation":[{"name":"Facebook AI, Seattle, Washington, USA"}]},{"given":"Joanna","family":"Bitton","sequence":"additional","affiliation":[{"name":"Facebook AI, Seattle, Washington, USA"}]},{"given":"Cristian Canton","family":"Ferrer","sequence":"additional","affiliation":[{"name":"Facebook AI, Seattle, Washington, USA"}]},{"given":"Julian","family":"McAuley","sequence":"additional","affiliation":[{"name":"University of California, San Diego, La Jolla, California, USA"}]},{"given":"Farinaz","family":"Koushanfar","sequence":"additional","affiliation":[{"name":"University of California, San Diego, La Jolla, California, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,2,7]]},"reference":[{"key":"e_1_3_2_2_2","volume-title":"2018 IEEE International Workshop on Information Forensics and Security 
(WIFS)","author":"Afchar Darius","year":"2018","unstructured":"Darius Afchar, Vincent Nozick, Junichi Yamagishi, and Isao Echizen. 2018. MesoNet: A compact facial video forgery detection network. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE."},{"key":"e_1_3_2_3_2","volume-title":"The IEEE International Conference on Computer Vision (ICCV) Workshops","author":"Amerini Irene","year":"2019","unstructured":"Irene Amerini, Leonardo Galteri, Roberto Caldelli, and Alberto Del Bimbo. 2019. Deepfake video detection through optical flow based CNN. In The IEEE International Conference on Computer Vision (ICCV) Workshops."},{"key":"e_1_3_2_4_2","article-title":"Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420 (2018).","journal-title":"arXiv preprint arXiv:1802.00420"},{"key":"e_1_3_2_5_2","volume-title":"Proceedings of the 35th International Conference on Machine Learning","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. 2018. Synthesizing robust adversarial examples. In Proceedings of the 35th International Conference on Machine Learning."},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.532"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2019.2895466"},{"key":"e_1_3_2_8_2","doi-asserted-by":"crossref","first-page":"962","DOI":"10.23919\/EUSIPCO.2018.8553305","volume-title":"2018 26th European Signal Processing Conference (EUSIPCO)","author":"Barni Mauro","year":"2018","unstructured":"Mauro Barni, Matthew C. Stamm, and Benedetta Tondi. 2018. Adversarial multimedia forensics: Overview and challenges ahead. 
In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 962\u2013966."},{"key":"e_1_3_2_9_2","first-page":"7345","volume-title":"International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"Behjati Melika","year":"2019","unstructured":"Melika Behjati, Seyed-Mohsen Moosavi-Dezfooli, Mahdieh Soleymani Baghshah, and Pascal Frossard. 2019. Universal adversarial attacks on text classifiers. In International Conference on Acoustics, Speech and Signal Processing (ICASSP). 7345\u20137349."},{"key":"e_1_3_2_10_2","volume-title":"International Conference on Learning Representations","author":"Belinkov Yonatan","year":"2018","unstructured":"Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations."},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4614-0757-7_12"},{"key":"e_1_3_2_12_2","doi-asserted-by":"crossref","unstructured":"R. Bohme and M. Kirchner. 2013. Digital Image Forensics: There is More to a Picture Than Meets the Eye chapter Counter-forensics: Attacking Image Forensics.","DOI":"10.1007\/978-1-4614-0757-7_12"},{"key":"e_1_3_2_13_2","article-title":"Video face manipulation detection through ensemble of CNNs","author":"Bonettini Nicolo","year":"2020","unstructured":"Nicolo Bonettini, Edoardo Daniele Cannas, Sara Mandelli, Luca Bondi, Paolo Bestagini, and Stefano Tubaro. 2020. Video face manipulation detection through ensemble of CNNs. arXiv preprint arXiv:2004.07676 (2020).","journal-title":"arXiv preprint arXiv:2004.07676"},{"key":"e_1_3_2_14_2","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1109\/SP.2017.49","volume-title":"2017 IEEE Symposium on Security and Privacy (sp)","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (sp). 
IEEE, 39\u201357."},{"key":"e_1_3_2_15_2","volume-title":"2018 IEEE Security and Privacy Workshops (SPW)","author":"Carlini Nicholas","year":"2018","unstructured":"Nicholas Carlini and David Wagner. 2018. Audio adversarial examples: Targeted attacks on speech-to-text. In 2018 IEEE Security and Privacy Workshops (SPW). IEEE."},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.195"},{"key":"e_1_3_2_17_2","article-title":"Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression","author":"Das Nilaksh","year":"2017","unstructured":"Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E. Kounavis, and Duen Horng Chau. 2017. Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. arXiv preprint arXiv:1705.02900 (2017).","journal-title":"arXiv preprint arXiv:1705.02900"},{"key":"e_1_3_2_18_2","unstructured":"Azat Davletshin. 2020. https:\/\/github.com\/NTech-Lab\/deepfake-detection-challenge."},{"key":"e_1_3_2_19_2","unstructured":"DeepFakes. 2017. https:\/\/github.com\/deepfakes\/faceswap."},{"key":"e_1_3_2_20_2","volume-title":"Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Deng Jiankang","unstructured":"Jiankang Deng, Jia Guo, Evangelos Ververas, Irene Kotsia, and Stefanos Zafeiriou. [n.d.]. RetinaFace: Single-shot multi-level face localisation in the wild. In Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_3_2_21_2","article-title":"The deepfake detection challenge (DFDC) dataset","author":"Dolhansky Brian","year":"2020","unstructured":"Brian Dolhansky, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cristian Canton Ferrer. 2020. The deepfake detection challenge (DFDC) dataset. 
arXiv preprint arXiv:2006.07397 (2020).","journal-title":"arXiv preprint arXiv:2006.07397"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00957"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00444"},{"key":"e_1_3_2_24_2","volume-title":"IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"3","author":"Duarte Amanda","year":"2019","unstructured":"Amanda Duarte, Francisco Roldan, Miquel Tubau, Janna Escur, Santiago Pascual, Amaia Salvador, Eva Mohedano, Kevin McGuinness, Jordi Torres, and Xavier Giro-i Nieto. 2019. Wav2Pix: Speech-conditioned face generation using generative adversarial networks. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. 3."},{"key":"e_1_3_2_25_2","article-title":"A study of the effect of JPG compression on adversarial images","author":"Dziugaite Gintare Karolina","year":"2016","unstructured":"Gintare Karolina Dziugaite, Zoubin Ghahramani, and Daniel M. Roy. 2016. A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853 (2016).","journal-title":"arXiv preprint arXiv:1608.00853"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-2006"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.5555\/3036073"},{"key":"e_1_3_2_28_2","article-title":"Explaining and harnessing adversarial examples","author":"Goodfellow Ian J.","year":"2015","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. Stat (2015).","journal-title":"Stat"},{"key":"e_1_3_2_29_2","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops","author":"Guarnera Luca","year":"2020","unstructured":"Luca Guarnera, Oliver Giudice, and Sebastiano Battiato. 2020. Deepfake detection by analyzing convolutional traces. 
In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops."},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/AVSS.2018.8639163"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/3393880"},{"key":"e_1_3_2_32_2","article-title":"Countering adversarial images using input transformations","author":"Guo Chuan","year":"2017","unstructured":"Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens Van Der Maaten. 2017. Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117 (2017).","journal-title":"arXiv preprint arXiv:1711.00117"},{"key":"e_1_3_2_33_2","unstructured":"Cui Hao. 2020. https:\/\/github.com\/cuihaoleo\/kaggle-dfdc."},{"key":"e_1_3_2_34_2","article-title":"Adversarial deepfakes: Evaluating vulnerability of deepfake detectors to adversarial examples","author":"Hussain Shehzeen","year":"2021","unstructured":"Shehzeen Hussain, Paarth Neekhara, Malhar Jere, Farinaz Koushanfar, and Julian McAuley. 2021. Adversarial deepfakes: Evaluating vulnerability of deepfake detectors to adversarial examples. WACV (2021).","journal-title":"WACV"},{"key":"e_1_3_2_35_2","first-page":"2137","volume-title":"International Conference on Machine Learning","author":"Ilyas Andrew","year":"2018","unstructured":"Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. 2018. Black-box adversarial attacks with limited queries and information. In International Conference on Machine Learning. 2137\u20132146."},{"key":"e_1_3_2_36_2","volume-title":"SBP-BRiMS","author":"Jin Z.","year":"2017","unstructured":"Z. Jin, J. Cao, Han Guo, Yongdong Zhang, Y. Wang, and Jiebo Luo. 2017. Detection and analysis of 2016 US presidential election related rumors on Twitter. 
In SBP-BRiMS."},{"key":"e_1_3_2_37_2","first-page":"4401","volume-title":"Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Karras Tero","year":"2019","unstructured":"Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In Conference on Computer Vision and Pattern Recognition (CVPR). 4401\u20134410."},{"key":"e_1_3_2_38_2","unstructured":"Marek Kowalski. 2018. FaceSwap https:\/\/github.com\/MarekKowalski\/FaceSwap\/."},{"key":"e_1_3_2_39_2","article-title":"Adversarial examples in the physical world","author":"Kurakin Alexey","year":"2016","unstructured":"Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016).","journal-title":"arXiv preprint arXiv:1607.02533"},{"key":"e_1_3_2_40_2","volume-title":"Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Li Jian","unstructured":"Jian Li, Yabiao Wang, Changan Wang, Ying Tai, Jianjun Qian, Jian Yang, Chengjie Wang, Ji-Lin Li, and Feiyue Huang. [n.d.]. DSFD: Dual shot face detector. In Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00505"},{"key":"e_1_3_2_42_2","first-page":"1","volume-title":"2018 IEEE International Workshop on Information Forensics and Security (WIFS)","author":"Li Yuezun","year":"2018","unstructured":"Yuezun Li, Ming-Ching Chang, and Siwei Lyu. 2018. In ictu oculi: Exposing AI created fake videos by detecting eye blinking. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, 1\u20137."},{"key":"e_1_3_2_43_2","first-page":"46","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops","author":"Li Yuezun","year":"2019","unstructured":"Yuezun Li and Siwei Lyu. 2019. Exposing DeepFake videos by detecting face warping artifacts. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 46\u201352."},{"key":"e_1_3_2_44_2","article-title":"Delving into transferable adversarial examples and black-box attacks","author":"Liu Yanpei","year":"2016","unstructured":"Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2016. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770 (2016).","journal-title":"arXiv preprint arXiv:1611.02770"},{"key":"e_1_3_2_45_2","volume-title":"The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Moosavi-Dezfooli Seyed-Mohsen","year":"2017","unstructured":"Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal adversarial perturbations. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_3_2_46_2","article-title":"Fast feature fool: A data independent approach to universal adversarial perturbations","author":"Mopuri Konda Reddy","year":"2017","unstructured":"Konda Reddy Mopuri, Utsav Garg, and R. Venkatesh Babu. 2017. Fast feature fool: A data independent approach to universal adversarial perturbations. arXiv preprint arXiv:1707.05572 (2017).","journal-title":"arXiv preprint arXiv:1707.05572"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1525"},{"key":"e_1_3_2_48_2","volume-title":"Proc. Interspeech 2019","author":"Neekhara Paarth","year":"2019","unstructured":"Paarth Neekhara, Shehzeen Hussain, Prakhar Pandey, Shlomo Dubnov, Julian McAuley, and Farinaz Koushanfar. 2019. Universal adversarial perturbations for speech recognition systems. In Proc. Interspeech 2019."},{"key":"e_1_3_2_49_2","unstructured":"CBS News. 2019. Doctored Nancy Pelosi video highlights threat of \u201cdeepfake\u201d tech. 
https:\/\/www.cbsnews.com\/news\/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25\/."},{"key":"e_1_3_2_50_2","first-page":"7184","volume-title":"Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Nirkin Yuval","year":"2019","unstructured":"Yuval Nirkin, Yosi Keller, and Tal Hassner. 2019. FSGAN: Subject agnostic face swapping and reenactment. In Conference on Computer Vision and Pattern Recognition (CVPR). 7184\u20137193."},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_2_52_2","volume-title":"2016 IEEE European Symposium on Security and Privacy (EuroS&P)","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE."},{"key":"e_1_3_2_53_2","doi-asserted-by":"crossref","first-page":"582","DOI":"10.1109\/SP.2016.41","volume-title":"2016 IEEE Symposium on Security and Privacy (SP)","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP). IEEE, 582\u2013597."},{"key":"e_1_3_2_54_2","series-title":"Proceedings of the 36th International Conference on Machine Learning","first-page":"5231","volume":"97","author":"Qin Yao","year":"2019","unstructured":"Yao Qin, Nicholas Carlini, Garrison Cottrell, Ian Goodfellow, and Colin Raffel. 2019. Imperceptible, robust, and targeted adversarial examples for automatic speech recognition. In Proceedings of the 36th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). 
PMLR, Long Beach, California, USA, 5231\u20135240. http:\/\/proceedings.mlr.press\/v97\/qin19a.html."},{"key":"e_1_3_2_55_2","doi-asserted-by":"crossref","first-page":"1822","DOI":"10.1109\/CVPRW.2017.228","volume-title":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","author":"Raghavendra Ramachandra","year":"2017","unstructured":"Ramachandra Raghavendra, Kiran B. Raja, Sushma Venkatesh, and Christoph Busch. 2017. Transferable deep-CNN features for detecting digital and print-scanned morphed face images. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 1822\u20131830."},{"key":"e_1_3_2_56_2","first-page":"1","volume-title":"2017 IEEE Workshop on Information Forensics and Security (WIFS)","author":"Rahmouni Nicolas","year":"2017","unstructured":"Nicolas Rahmouni, Vincent Nozick, Junichi Yamagishi, and Isao Echizen. 2017. Distinguishing computer graphics from natural images using convolution neural networks. In 2017 IEEE Workshop on Information Forensics and Security (WIFS). IEEE, 1\u20136."},{"key":"e_1_3_2_57_2","volume-title":"The IEEE International Conference on Computer Vision (ICCV)","author":"Rossler Andreas","year":"2019","unstructured":"Andreas Rossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Niessner. 2019. FaceForensics++: Learning to detect manipulated facial images. In The IEEE International Conference on Computer Vision (ICCV)."},{"key":"e_1_3_2_58_2","first-page":"1","article-title":"Recurrent convolutional strategies for face manipulation detection in videos","volume":"3","author":"Sabir Ekraam","year":"2019","unstructured":"Ekraam Sabir, Jiaxin Cheng, Ayush Jaiswal, Wael AbdAlmageed, Iacopo Masi, and Prem Natarajan. 2019. Recurrent convolutional strategies for face manipulation detection in videos. 
Interfaces (GUI) 3 (2019), 1.","journal-title":"Interfaces (GUI)"},{"key":"e_1_3_2_59_2","unstructured":"Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. 2017. Evolution Strategies as a Scalable Alternative to Reinforcement Learning. arxiv:1703.03864 http:\/\/arxiv.org\/abs\/1703.03864."},{"key":"e_1_3_2_60_2","unstructured":"Selim Seferbekov. 2020. https:\/\/github.com\/selimsef\/dfdc_deepfake_challenge."},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00668"},{"key":"e_1_3_2_62_2","volume-title":"12th USENIX Workshop on Offensive Technologies (WOOT\u201918)","author":"Song Dawn","year":"2018","unstructured":"Dawn Song, Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tram\u00e8r, Atul Prakash, and Tadayoshi Kohno. 2018. Physical adversarial examples for object detectors. In 12th USENIX Workshop on Offensive Technologies (WOOT\u201918). USENIX Association."},{"key":"e_1_3_2_63_2","doi-asserted-by":"crossref","DOI":"10.1145\/3072959.3073640","article-title":"Synthesizing Obama: Learning lip sync from audio","author":"Suwajanakorn Supasorn","year":"2017","unstructured":"Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman. 2017. Synthesizing Obama: Learning lip sync from audio. ACM Transactions on Graphics (TOG\u201917).","journal-title":"ACM Transactions on Graphics (TOG\u201917)"},{"key":"e_1_3_2_64_2","volume-title":"International Conference on Learning Representations","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations."},{"key":"e_1_3_2_65_2","first-page":"6105","volume-title":"International Conference on Machine Learning","author":"Tan Mingxing","year":"2019","unstructured":"Mingxing Tan and Quoc Le. 2019. 
EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning. 6105\u20136114."},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323035"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.262"},{"key":"e_1_3_2_68_2","article-title":"Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news","author":"Vaccari Cristian","year":"2020","unstructured":"Cristian Vaccari and Andrew Chadwick. 2020. Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media+ Society (2020).","journal-title":"Social Media+ Society"},{"key":"e_1_3_2_69_2","article-title":"Media forensics and Deepfakes: An overview","author":"Verdoliva Luisa","year":"2020","unstructured":"Luisa Verdoliva. 2020. Media forensics and Deepfakes: An overview. arXiv preprint arXiv:2001.06564 (2020).","journal-title":"arXiv preprint arXiv:2001.06564"},{"key":"e_1_3_2_70_2","first-page":"1","article-title":"Realistic speech-driven facial animation with GANs","author":"Vougioukas Konstantinos","year":"2019","unstructured":"Konstantinos Vougioukas, Stavros Petridis, and Maja Pantic. 2019. Realistic speech-driven facial animation with GANs. International Journal of Computer Vision (2019), 1\u201316.","journal-title":"International Journal of Computer Vision"},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2007.902661"},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.5555\/2627435.2638566"},{"key":"e_1_3_2_73_2","volume-title":"International Conference on Learning Representations","author":"Xie Cihang","year":"2018","unstructured":"Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. 2018. Mitigating adversarial effects through randomization. 
In International Conference on Learning Representations."},{"key":"e_1_3_2_74_2","first-page":"2725","article-title":"Improving transferability of adversarial examples with input diversity","author":"Xie Cihang","year":"2019","unstructured":"Cihang Xie, Zhishuai Zhang, Jianyu Wang, Yuyin Zhou, Zhou Ren, and A. Yuille. 2019. Improving transferability of adversarial examples with input diversity. 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201919), 2725\u20132734.","journal-title":"2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201919)"},{"key":"e_1_3_2_75_2","doi-asserted-by":"crossref","first-page":"8261","DOI":"10.1109\/ICASSP.2019.8683164","volume-title":"ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"Yang Xin","year":"2019","unstructured":"Xin Yang, Yuezun Li, and Siwei Lyu. 2019. Exposing deep fakes using inconsistent head poses. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 8261\u20138265."},{"key":"e_1_3_2_76_2","first-page":"9459","volume-title":"International Conference on Computer Vision (ICCV)","author":"Zakharov Egor","year":"2019","unstructured":"Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, and Victor Lempitsky. 2019. Few-shot adversarial learning of realistic neural talking head models. In International Conference on Computer Vision (ICCV). 9459\u20139468."},{"key":"e_1_3_2_77_2","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2016.2603342"},{"key":"e_1_3_2_78_2","doi-asserted-by":"crossref","first-page":"1831","DOI":"10.1109\/CVPRW.2017.229","volume-title":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","author":"Zhou Peng","year":"2017","unstructured":"Peng Zhou, Xintong Han, Vlad I Morariu, and Larry S. Davis. 2017. Two-stream neural networks for tampered face detection. 
In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 1831\u20131839."},{"key":"e_1_3_2_79_2","volume-title":"ECCV","author":"Zhou Wen","year":"2018","unstructured":"Wen Zhou, X. Hou, Y. Chen, Mengyun Tang, Xiangqi Huang, X. Gan, and Yong Yang. 2018. Transferable adversarial perturbations. In ECCV."}],"container-title":["Digital Threats: Research and Practice"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3464307","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3464307","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:12:16Z","timestamp":1750191136000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3464307"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,2,7]]},"references-count":78,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,9,30]]}},"alternative-id":["10.1145\/3464307"],"URL":"https:\/\/doi.org\/10.1145\/3464307","relation":{},"ISSN":["2692-1626","2576-5337"],"issn-type":[{"value":"2692-1626","type":"print"},{"value":"2576-5337","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,2,7]]},"assertion":[{"value":"2020-11-30","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-04-29","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-02-07","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}