{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,28]],"date-time":"2026-04-28T01:06:34Z","timestamp":1777338394833,"version":"3.51.4"},"reference-count":259,"publisher":"Association for Computing Machinery (ACM)","issue":"5","funder":[{"name":"ARC Discovery Early Career Researcher Award","award":["DE200101465"],"award-info":[{"award-number":["DE200101465"]}]},{"name":"ARC DP Project","award":["DP240101108"],"award-info":[{"award-number":["DP240101108"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Intell. Syst. Technol."],"published-print":{"date-parts":[[2025,10,31]]},"abstract":"<jats:p>\n                    Today, computer systems hold large amounts of personal data. Yet while such an abundance of data allows breakthroughs in AI, and especially machine learning, its existence can be a threat to user privacy, and it can weaken the bonds of trust between humans and AI. Recent regulations now require that, on request, private information about a user must be removed both from computer systems and from machine learning models\u2014this legislation is more colloquially called \u201cthe right to be forgotten.\u201d While removing data from back-end databases should be straightforward, it is not sufficient in the AI context as machine learning models often \u201cremember\u201d the old data. Contemporary adversarial attacks on trained models have proven that we can learn whether an instance or an attribute belonged to the training data. This phenomenon calls for a new paradigm, namely\n                    <jats:italic toggle=\"yes\">machine unlearning<\/jats:italic>\n                    , to make machine learning models forget about particular data. It turns out that recent works on machine unlearning have not been able to completely solve the problem due to the lack of common frameworks and resources. 
Therefore, this article aspires to present a comprehensive examination of machine unlearning\u2019s concepts, designs, methods, and applications. Specifically, as a category collection of cutting-edge studies, the intention behind this article is to serve as a comprehensive resource for researchers and practitioners seeking an introduction to machine unlearning and its formulations, design criteria, removal requests, algorithms, and applications. In addition, we aim to highlight the key findings, current trends, and new research areas that have not yet featured the use of machine unlearning but could benefit greatly from it. We hope that this survey serves as a valuable resource for machine learning researchers and those seeking to innovate privacy technologies. Our resources are publicly available at\n                    <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/tamlhp\/awesome-machine-unlearning\">https:\/\/github.com\/tamlhp\/awesome-machine-unlearning<\/jats:ext-link>\n                    .\n                  <\/jats:p>","DOI":"10.1145\/3749987","type":"journal-article","created":{"date-parts":[[2025,7,22]],"date-time":"2025-07-22T22:20:28Z","timestamp":1753222828000},"page":"1-46","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":56,"title":["A Survey of Machine Unlearning"],"prefix":"10.1145","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2586-7757","authenticated-orcid":false,"given":"Thanh Tam","family":"Nguyen","sequence":"first","affiliation":[{"name":"Griffith University - Gold Coast Campus, Southport, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2027-5362","authenticated-orcid":false,"given":"Thanh Trung","family":"Huynh","sequence":"additional","affiliation":[{"name":"VinUniversity, Hanoi, 
Vietnam"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0707-5016","authenticated-orcid":false,"given":"Zhao","family":"Ren","sequence":"additional","affiliation":[{"name":"University of Bremen, Bremen, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6547-7641","authenticated-orcid":false,"given":"Phi Le","family":"Nguyen","sequence":"additional","affiliation":[{"name":"Hanoi University of Science and Technology, Hanoi, Vietnam"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6718-7584","authenticated-orcid":false,"given":"Alan Wee-Chung","family":"Liew","sequence":"additional","affiliation":[{"name":"Griffith University - Gold Coast Campus, Southport, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1395-261X","authenticated-orcid":false,"given":"Hongzhi","family":"Yin","sequence":"additional","affiliation":[{"name":"The University of Queensland, Brisbane, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9687-1315","authenticated-orcid":false,"given":"Quoc Viet Hung","family":"Nguyen","sequence":"additional","affiliation":[{"name":"Griffith University - Gold Coast Campus, Southport, Australia"}]}],"member":"320","published-online":{"date-parts":[[2025,9,18]]},"reference":[{"key":"e_1_3_2_2_2","first-page":"308","article-title":"Deep learning with differential privacy","author":"Abadi Martin","year":"2016","unstructured":"Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In SIGSAC, 308\u2013318.","journal-title":"SIGSAC"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAI.2024.3465441"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2021.3090019"},{"key":"e_1_3_2_5_2","first-page":"1","article-title":"Influence functions in deep learning are fragile","author":"Basu Samyadeep","year":"2021","unstructured":"Samyadeep Basu, Phil Pope, and Soheil Feizi. 2021. Influence functions in deep learning are fragile. 
In ICLR, 1\u201322.","journal-title":"ICLR"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-022-06178-9"},{"key":"e_1_3_2_7_2","unstructured":"Alexander Becker and Thomas Liebig. 2022. Evaluating Machine Unlearning via Epistemic Uncertainty. arxiv:2208.10836. Retrieved from https:\/\/arxiv.org\/abs\/2208.10836"},{"key":"e_1_3_2_8_2","first-page":"1063","article-title":"A multi-batch L-BFGS method for machine learning","author":"Berahas Albert S.","year":"2016","unstructured":"Albert S. Berahas, Jorge Nocedal, and Martin Tak\u00e1\u010d. 2016. A multi-batch L-BFGS method for machine learning. In NIPS, 1063\u20131071.","journal-title":"NIPS"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1145\/2090236.2090263"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10462-024-11078-6"},{"key":"e_1_3_2_11_2","first-page":"620","article-title":"A progressive batching L-BFGS method for machine learning","author":"Bollapragada Raghu","year":"2018","unstructured":"Raghu Bollapragada, Jorge Nocedal, Dheevatsa Mudigere, Hao-Jun Shi, and Ping Tak Peter Tang. 2018. A progressive batching L-BFGS method for machine learning. In ICML, 620\u2013629.","journal-title":"ICML"},{"key":"e_1_3_2_12_2","doi-asserted-by":"crossref","unstructured":"Jacopo Bonato Marco Cotogni and Luigi Sabetta. 2024. Is retain set all you need in machine unlearning? Restoring performance of unlearned models with out-of-distribution images. arXiv:2404.12922. Retrieved from https:\/\/arxiv.org\/abs\/2404.12922","DOI":"10.1007\/978-3-031-73232-4_1"},{"key":"e_1_3_2_13_2","first-page":"141","article-title":"Machine unlearning","author":"Bourtoule Lucas","year":"2021","unstructured":"Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2021. Machine unlearning. 
In SP, 141\u2013159.","journal-title":"SP"},{"key":"e_1_3_2_14_2","first-page":"1092","article-title":"Machine unlearning for random forests","author":"Brophy Jonathan","year":"2021","unstructured":"Jonathan Brophy and Daniel Lowd. 2021. Machine unlearning for random forests. In ICML, 1092\u20131104.","journal-title":"ICML"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2015.35"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3196494.3196517"},{"key":"e_1_3_2_17_2","first-page":"1","article-title":"Machine unlearning method based on projection residual","author":"Cao Zihao","year":"2022","unstructured":"Zihao Cao, Jianzong Wang, Shijing Si, Zhangcheng Huang, and Jing Xiao. 2022. Machine unlearning method based on projection residual. In DSAA, 1\u20138.","journal-title":"DSAA"},{"key":"e_1_3_2_18_2","first-page":"388","article-title":"Incremental and decremental support vector machine learning","author":"Cauwenberghs Gert","year":"2000","unstructured":"Gert Cauwenberghs and Tomaso Poggio. 2000. Incremental and decremental support vector machine learning. In NIPS, 388\u2013394.","journal-title":"NIPS"},{"key":"e_1_3_2_19_2","first-page":"4003","article-title":"Example-based explanations with adversarial attacks for respiratory sound analysis","author":"Chang Yi","year":"2022","unstructured":"Yi Chang, Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl, and Bj\u00f6rn W. Schuller. 2022. Example-based explanations with adversarial attacks for respiratory sound analysis. In INTERSPEECH, 4003\u20134007.","journal-title":"INTERSPEECH"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3641289"},{"issue":"3","key":"e_1_3_2_21_2","first-page":"1069","article-title":"Differentially private empirical risk minimization","volume":"12","author":"Chaudhuri Kamalika","year":"2011","unstructured":"Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. 2011. Differentially private empirical risk minimization. 
The Journal of Machine Learning Research 12, 3 (2011), 1069\u20131109.","journal-title":"The Journal of Machine Learning Research"},{"key":"e_1_3_2_22_2","first-page":"4241","article-title":"Fast federated machine unlearning with nonlinear functional theory","author":"Che Tianshi","year":"2023","unstructured":"Tianshi Che, Yang Zhou, Zijie Zhang, Lingjuan Lyu, Ji Liu, Da Yan, Dejing Dou, and Jun Huan. 2023. Fast federated machine unlearning with nonlinear functional theory. In ICML, 4241\u20134268.","journal-title":"ICML"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1002\/aaai.12209"},{"key":"e_1_3_2_24_2","first-page":"2768","article-title":"Recommendation unlearning","author":"Chen Chong","year":"2022","unstructured":"Chong Chen, Fei Sun, Min Zhang, and Bolin Ding. 2022. Recommendation unlearning. In WWW, 2768\u20132777.","journal-title":"WWW"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1017\/ATSIP.2020.13"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.emnlp-main.738"},{"key":"e_1_3_2_27_2","unstructured":"Kongyang Chen Yao Huang and Yiwen Wang. 2021. Machine unlearning via GAN. arXiv:2111.11869. Retrieved from https:\/\/arxiv.org\/abs\/2111.11869"},{"key":"e_1_3_2_28_2","unstructured":"Min Chen Zhikun Zhang Tianhao Wang Michael Backes Mathias Humbert and Yang Zhang. 2021. Graph unlearning. arXiv:2103.14991. Retrieved from https:\/\/arxiv.org\/abs\/2103.14991"},{"key":"e_1_3_2_29_2","first-page":"896","article-title":"When machine unlearning jeopardizes privacy","author":"Chen Min","year":"2021","unstructured":"Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, and Yang Zhang. 2021. When machine unlearning jeopardizes privacy. 
In SIGSAC, 896\u2013911.","journal-title":"SIGSAC"},{"key":"e_1_3_2_30_2","first-page":"499","article-title":"Graph unlearning","author":"Chen Min","year":"2022","unstructured":"Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, and Yang Zhang. 2022. Graph unlearning. In CCS, 499\u2013513.","journal-title":"CCS"},{"key":"e_1_3_2_31_2","first-page":"14516","article-title":"Fast model debias with machine unlearning","author":"Chen Ruizhe","year":"2024","unstructured":"Ruizhe Chen, Jianfei Yang, Huimin Xiong, Jianhong Bai, Tianxiang Hu, Jin Hao, Yang Feng, Joey Tianyi Zhou, Jian Wu, and Zuozhu Liu. 2024. Fast model debias with machine unlearning. In NIPS, 14516\u201314539.","journal-title":"NIPS"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10586-018-1772-4"},{"key":"e_1_3_2_33_2","unstructured":"Jiali Cheng and Hadi Amiri. 2023. Multimodal machine unlearning. arXiv:2311.12047. Retrieved from https:\/\/arxiv.org\/abs\/2311.12047"},{"key":"e_1_3_2_34_2","unstructured":"Jiali Cheng and Hadi Amiri. 2024. Mu-bench: A multitask multimodal benchmark for machine unlearning. arXiv:2406.14796. Retrieved from https:\/\/arxiv.org\/abs\/2406.14796"},{"key":"e_1_3_2_35_2","first-page":"1","article-title":"GNNDelete: A general strategy for unlearning in graph neural networks","author":"Cheng Jiali","year":"2023","unstructured":"Jiali Cheng, George Dasoulas, Huan He, Chirag Agarwal, and Marinka Zitnik. 2023. GNNDelete: A general strategy for unlearning in graph neural networks. In ICLR, 1\u201323.","journal-title":"ICLR"},{"key":"e_1_3_2_36_2","unstructured":"Eli Chien Chao Pan and Olgica Milenkovic. 2022. Certified graph unlearning. arXiv:2206.09140. Retrieved from https:\/\/arxiv.org\/abs\/2206.09140"},{"key":"e_1_3_2_37_2","unstructured":"Vikram S. Chundawat Ayush K. Tarun Murari Mandal and Mohan Kankanhalli. 2022. Zero-shot machine unlearning. arXiv:2201.05629. 
Retrieved from https:\/\/arxiv.org\/abs\/2201.05629"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i6.25879"},{"key":"e_1_3_2_39_2","unstructured":"Weilin Cong and Mehrdad Mahdavi. 2022. GRAPHEDITOR: An Efficient Graph Representation Learning and Unlearning Approach. Retrieved from https:\/\/congweilin.github.io\/CongWeilin.io\/"},{"key":"e_1_3_2_40_2","unstructured":"Weilin Cong and Mehrdad Mahdavi. 2022. Privacy Matters! Efficient Graph Representation Unlearning with Data Removal Guarantee. Retrieved from https:\/\/congweilin.github.io\/CongWeilin.io\/"},{"key":"e_1_3_2_41_2","first-page":"6674","article-title":"Efficiently forgetting what you have learned in graph representation learning via projection","author":"Cong Weilin","year":"2023","unstructured":"Weilin Cong and Mehrdad Mahdavi. 2023. Efficiently forgetting what you have learned in graph representation learning via projection. In AISTATS, 6674\u20136703.","journal-title":"AISTATS"},{"key":"e_1_3_2_42_2","first-page":"8493","article-title":"Knowledge neurons in pretrained transformers","author":"Dai Damai","year":"2022","unstructured":"Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In ACL, 8493\u20138502.","journal-title":"ACL"},{"key":"e_1_3_2_43_2","first-page":"403","article-title":"Right to be forgotten in the age of machine learning","author":"Dang Quang-Vinh","year":"2021","unstructured":"Quang-Vinh Dang. 2021. Right to be forgotten in the age of machine learning. In ICADS, 403\u2013411.","journal-title":"ICADS"},{"key":"e_1_3_2_44_2","first-page":"248","article-title":"Imagenet: A large-scale hierarchical image database","author":"Deng Jia","year":"2009","unstructured":"Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. 
In CVPR, 248\u2013255.","journal-title":"CVPR"},{"key":"e_1_3_2_45_2","first-page":"369","article-title":"Unlearning scanner bias for MRI harmonisation","author":"Dinsdale Nicola K.","year":"2020","unstructured":"Nicola K. Dinsdale, Mark Jenkinson, and Ana I. L. Namburete. 2020. Unlearning scanner bias for MRI harmonisation. In MICCAI, 369\u2013378.","journal-title":"MICCAI"},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neuroimage.2020.117689"},{"key":"e_1_3_2_47_2","first-page":"1283","article-title":"Lifelong anomaly detection through unlearning","author":"Du Min","year":"2019","unstructured":"Min Du, Zhi Chen, Chang Liu, Rajvardhan Oak, and Dawn Song. 2019. Lifelong anomaly detection through unlearning. In SIGSAC, 1283\u20131297.","journal-title":"SIGSAC"},{"key":"e_1_3_2_48_2","first-page":"358","article-title":"Decremental learning algorithms for nonlinear Langrangian and least squares support vector machines","author":"Duan Hua","year":"2007","unstructured":"Hua Duan, Hua Li, Guoping He, and Qingtian Zeng. 2007. Decremental learning algorithms for nonlinear Langrangian and least squares support vector machines. In OSB, 358\u2013366.","journal-title":"OSB"},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","DOI":"10.2478\/jaiscr-2020-0002"},{"key":"e_1_3_2_50_2","first-page":"17108","article-title":"Safe: Machine unlearning with shard graphs","author":"Dukler Yonatan","year":"2023","unstructured":"Yonatan Dukler, Benjamin Bowman, Alessandro Achille, Aditya Golatkar, Ashwin Swaminathan, and Stefano Soatto. 2023. Safe: Machine unlearning with shard graphs. In ICCV, 17108\u201317118.","journal-title":"ICCV"},{"key":"e_1_3_2_51_2","first-page":"1","article-title":"Differential privacy: A survey of results","author":"Dwork Cynthia","year":"2008","unstructured":"Cynthia Dwork. 2008. Differential privacy: A survey of results. 
In TAMC, 1\u201319.","journal-title":"TAMC"},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.1561\/0400000042"},{"key":"e_1_3_2_53_2","unstructured":"Thorsten Eisenhofer Doreen Riepel Varun Chandrasekaran Esha Ghosh Olga Ohrimenko and Nicolas Papernot. 2022. Verifiable and provably secure machine unlearning. arXiv:2210.09126. Retrieved from https:\/\/arxiv.org\/abs\/2210.09126"},{"key":"e_1_3_2_54_2","unstructured":"Ronen Eldan and Mark Russinovich. 2023. Who\u2019s Harry Potter? Approximate unlearning in LLMs. arXiv:2310.02238. Retrieved from https:\/\/arxiv.org\/abs\/2310.02238"},{"key":"e_1_3_2_55_2","doi-asserted-by":"crossref","unstructured":"Daniel L. Felps Amelia D. Schwickerath Joyce D. Williams Trung N. Vuong Alan Briggs Matthew Hunt Evan Sakmar David D. Saranchak and Tyler Shumaker. 2020. Class clown: Data redaction in machine unlearning at enterprise scale. arXiv:2012.04699. Retrieved from https:\/\/arxiv.org\/abs\/2012.04699","DOI":"10.5220\/0010419600070014"},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1007\/s12599-020-00650-3"},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-023-00644-5"},{"key":"e_1_3_2_58_2","first-page":"3457","article-title":"SIFU: Sequential informed federated unlearning for efficient and provable client unlearning in federated optimization","author":"Fraboni Yann","year":"2024","unstructured":"Yann Fraboni, Martin Van Waerebeke, Kevin Scaman, Richard Vidal, Laetitia Kameni, and Marco Lorenzi. 2024. SIFU: Sequential informed federated unlearning for efficient and provable client unlearning in federated optimization. 
In AISTATS, 3457\u20133465.","journal-title":"AISTATS"},{"key":"e_1_3_2_59_2","first-page":"17","article-title":"Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing","author":"Fredrikson Matthew","year":"2014","unstructured":"Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. 2014. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In USENIX Security, 17\u201332.","journal-title":"USENIX Security"},{"key":"e_1_3_2_60_2","first-page":"1","article-title":"Knowledge removal in sampling-based Bayesian inference","author":"Fu Shaopeng","year":"2022","unstructured":"Shaopeng Fu, Fengxiang He, and Dacheng Tao. 2022. Knowledge removal in sampling-based Bayesian inference. In ICLR, 1\u201322.","journal-title":"ICLR"},{"key":"e_1_3_2_61_2","unstructured":"Shaopeng Fu Fengxiang He Yue Xu and Dacheng Tao. 2021. Bayesian inference forgetting. arXiv:2101.06417. Retrieved from https:\/\/arxiv.org\/abs\/2101.06417"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/3477495.3531820"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.56553\/popets-2022-0079"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2024.3382321"},{"key":"e_1_3_2_65_2","first-page":"373","article-title":"Formalizing data deletion in the context of the right to be forgotten","author":"Garg Sanjam","year":"2020","unstructured":"Sanjam Garg, Shafi Goldwasser, and Prashant Nalini Vasudevan. 2020. Formalizing data deletion in the context of the right to be forgotten. EUROCRYPT, 373\u2013402.","journal-title":"EUROCRYPT"},{"key":"e_1_3_2_66_2","first-page":"1","article-title":"Combining neural networks with personalized PageRank for classification on graphs","author":"Gasteiger Johannes","year":"2019","unstructured":"Johannes Gasteiger, Aleksandar Bojchevski, and Stephan G\u00fcnnemann. 2019. 
Combining neural networks with personalized PageRank for classification on graphs. In ICLR, 1\u201314.","journal-title":"ICLR"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-006-6226-1"},{"key":"e_1_3_2_68_2","first-page":"3518","article-title":"Making ai forget you: Data deletion in machine learning","author":"Ginart Antonio","year":"2019","unstructured":"Antonio Ginart, Melody Guan, Gregory Valiant, and James Y. Zou. 2019. Making ai forget you: Data deletion in machine learning. In NIPS, 3518\u20133531.","journal-title":"NIPS"},{"key":"e_1_3_2_69_2","unstructured":"Shashwat Goel Ameya Prabhu and Ponnurangam Kumaraguru. 2022. Evaluating inexact unlearning requires revisiting forgetting. arXiv:2201.06640. Retrieved from https:\/\/arxiv.org\/abs\/2201.06640"},{"key":"e_1_3_2_70_2","first-page":"792","article-title":"Mixed-privacy forgetting in deep networks","author":"Golatkar Aditya","year":"2021","unstructured":"Aditya Golatkar, Alessandro Achille, Avinash Ravichandran, Marzia Polito, and Stefano Soatto. 2021. Mixed-privacy forgetting in deep networks. In CVPR, 792\u2013801.","journal-title":"CVPR"},{"key":"e_1_3_2_71_2","first-page":"9304","article-title":"Eternal sunshine of the spotless net: Selective forgetting in deep networks","author":"Golatkar Aditya","year":"2020","unstructured":"Aditya Golatkar, Alessandro Achille, and Stefano Soatto. 2020. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In CVPR, 9304\u20139312.","journal-title":"CVPR"},{"key":"e_1_3_2_72_2","first-page":"383","article-title":"Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations","author":"Golatkar Aditya","year":"2020","unstructured":"Aditya Golatkar, Alessandro Achille, and Stefano Soatto. 2020. Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations. 
In ECCV, 383\u2013398.","journal-title":"ECCV"},{"key":"e_1_3_2_73_2","first-page":"1","article-title":"Forget-svgd: Particle-based Bayesian federated unlearning","author":"Gong Jinu","year":"2022","unstructured":"Jinu Gong, Joonhyuk Kang, Osvaldo Simeone, and Rahif Kassab. 2022. Forget-svgd: Particle-based Bayesian federated unlearning. In DSLW, 1\u20136.","journal-title":"DSLW"},{"key":"e_1_3_2_74_2","first-page":"247","article-title":"Revisiting machine learning training process for enhanced data privacy","author":"Goyal Adit","year":"2021","unstructured":"Adit Goyal, Vikas Hassija, and Victor Hugo C. de Albuquerque. 2021. Revisiting machine learning training process for enhanced data privacy. In IC3, 247\u2013251.","journal-title":"IC3"},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i13.17371"},{"key":"e_1_3_2_76_2","first-page":"3832","article-title":"Certified data removal from machine learning models","author":"Guo Chuan","year":"2020","unstructured":"Chuan Guo, Tom Goldstein, Awni Y. Hannun, and Laurens van der Maaten. 2020. Certified data removal from machine learning models. In ICML, 3832\u20133842.","journal-title":"ICML"},{"issue":"4","key":"e_1_3_2_77_2","first-page":"1","article-title":"A survey of learning causality with data: Problems and methods","volume":"53","author":"Guo Ruocheng","year":"2020","unstructured":"Ruocheng Guo, Lu Cheng, Jundong Li, P. Richard Hahn, and Huan Liu. 2020. A survey of learning causality with data: Problems and methods. ACM Computing Surveys 53, 4 (2020), 1\u201337.","journal-title":"ACM Computing Surveys"},{"key":"e_1_3_2_78_2","unstructured":"Tao Guo Song Guo Jiewei Zhang Wenchao Xu and Junxiao Wang. 2022. Efficient attribute unlearning: Towards selective removal of input attributes from feature representations. arXiv:2202.13295. 
Retrieved from https:\/\/arxiv.org\/abs\/2202.13295"},{"key":"e_1_3_2_79_2","first-page":"2289","article-title":"Fast: Adopting federated unlearning to eliminating malicious terminals at server side","author":"Guo Xintong","year":"2023","unstructured":"Xintong Guo, Pengfei Wang, Sen Qiu, Wei Song, Qiang Zhang, Xiaopeng Wei, and Dongsheng Zhou. 2023. Fast: Adopting federated unlearning to eliminating malicious terminals at server side. IEEE Transactions on Network Science and Engineering 11, 2 (2023), 2289\u20132302.","journal-title":"IEEE Transactions on Network Science and Engineering"},{"key":"e_1_3_2_80_2","first-page":"708","article-title":"Verifying in the dark: Verifiable machine unlearning by using invisible backdoor triggers","author":"Guo Yu","year":"2023","unstructured":"Yu Guo, Yu Zhao, Saihui Hou, Cong Wang, and Xiaohua Jia. 2023. Verifying in the dark: Verifiable machine unlearning by using invisible backdoor triggers. IEEE Transactions on Information Forensics and Security 19, 1 (2023), 708\u2013721.","journal-title":"IEEE Transactions on Information Forensics and Security"},{"key":"e_1_3_2_81_2","first-page":"16319","article-title":"Adaptive machine unlearning","author":"Gupta Varun","year":"2021","unstructured":"Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Chris Waites. 2021. Adaptive machine unlearning. In NIPS, 16319\u201316330.","journal-title":"NIPS"},{"key":"e_1_3_2_82_2","unstructured":"Anisa Halimi Swanand Kadhe Ambrish Rawat and Nathalie Baracaldo. 2022. Federated unlearning: How to efficiently erase a client in FL? arXiv:2207.05521. Retrieved from https:\/\/arxiv.org\/abs\/2207.05521"},{"key":"e_1_3_2_83_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-01588-5"},{"key":"e_1_3_2_84_2","first-page":"9452","article-title":"Learning parameter distributions to detect concept drift in data streams","author":"Haug Johannes","year":"2021","unstructured":"Johannes Haug and Gjergji Kasneci. 2021. 
Learning parameter distributions to detect concept drift in data streams. In ICPR, 9452\u20139459.","journal-title":"ICPR"},{"key":"e_1_3_2_85_2","first-page":"20","article-title":"A decision-making process to implement the \u2018right to Be forgotten\u2019 in machine learning","author":"Hawkins Katie","year":"2023","unstructured":"Katie Hawkins, Nora Alhuwaish, Sana Belguith, Asma Vranaki, and Andrew Charlesworth. 2023. A decision-making process to implement the \u2018right to Be forgotten\u2019 in machine learning. In Annual Privacy Forum, 20\u201338.","journal-title":"Annual Privacy Forum"},{"key":"e_1_3_2_86_2","unstructured":"Yingzhe He Guozhu Meng Kai Chen Jinwen He and Xingbo Hu. 2021. Deepobliviate: A powerful charm for erasing data residual memory in deep neural networks. arXiv:2105.06209. Retrieved from https:\/\/arxiv.org\/abs\/2105.06209"},{"key":"e_1_3_2_87_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v38i16.29784"},{"key":"e_1_3_2_88_2","first-page":"3957","article-title":"Distilling causal effect of data in class-incremental learning","author":"Hu Xinting","year":"2021","unstructured":"Xinting Hu, Kaihua Tang, Chunyan Miao, Xian-Sheng Hua, and Hanwang Zhang. 2021. Distilling causal effect of data in class-incremental learning. In CVPR, 3957\u20133966.","journal-title":"CVPR"},{"key":"e_1_3_2_89_2","first-page":"1","article-title":"Unlearnable examples: Making personal data unexploitable","author":"Huang Hanxun","year":"2021","unstructured":"Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, and Yisen Wang. 2021. Unlearnable examples: Making personal data unexploitable. In ICLR, 1\u201317.","journal-title":"ICLR"},{"key":"e_1_3_2_90_2","first-page":"793","article-title":"EMA: Auditing data removal from trained models","author":"Huang Yangsibo","year":"2021","unstructured":"Yangsibo Huang, Xiaoxiao Li, and Kai Li. 2021. EMA: Auditing data removal from trained models. 
In MICCAI, 793\u2013803.","journal-title":"MICCAI"},{"key":"e_1_3_2_91_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-021-05946-3"},{"key":"e_1_3_2_92_2","first-page":"55","article-title":"Fast-fedul: A training-free federated unlearning with provable skew resilience","author":"Huynh Thanh Trung","year":"2024","unstructured":"Thanh Trung Huynh, Trong Bang Nguyen, Phi Le Nguyen, Thanh Tam Nguyen, Matthias Weidlich, Quoc Viet Hung Nguyen, and Karl Aberer. 2024. Fast-fedul: A training-free federated unlearning with provable skew resilience. In ECML-PKDD, 55\u201372.","journal-title":"ECML-PKDD"},{"key":"e_1_3_2_93_2","first-page":"2008","article-title":"Approximate data deletion from machine learning models","author":"Izzo Zachary","year":"2021","unstructured":"Zachary Izzo, Mary Anne Smart, Kamalika Chaudhuri, and James Zou. 2021. Approximate data deletion from machine learning models. In AISTAT, 2008\u20132016.","journal-title":"AISTAT"},{"key":"e_1_3_2_94_2","unstructured":"Matthew Jagielski Om Thakkar Florian Tram\u00e8r Daphne Ippolito Katherine Lee Nicholas Carlini Eric Wallace Shuang Song Abhradeep Thakurta Nicolas Papernot et al. 2022. Measuring forgetting of memorized training examples. arXiv:2207.00099. Retrieved from https:\/\/arxiv.org\/abs\/2207.00099"},{"key":"e_1_3_2_95_2","first-page":"25","article-title":"Machine unlearning: An overview of the paradigm shift in the evolution of AI","author":"Jaman Layan","year":"2024","unstructured":"Layan Jaman, Reem Alsharabi, and Passent M. ElKafrawy. 2024. Machine unlearning: An overview of the paradigm shift in the evolution of AI. In L&T, 25\u201329.","journal-title":"L&T"},{"key":"e_1_3_2_96_2","unstructured":"Joel Jang Dongkeun Yoon Sohee Yang Sungmin Cha Moontae Lee Lajanugen Logeswaran and Minjoon Seo. 2022. Knowledge unlearning for mitigating privacy risks in language models. arXiv:2210.01504. 
Retrieved from https:\/\/arxiv.org\/abs\/2210.01504"},{"key":"e_1_3_2_97_2","first-page":"1039","article-title":"Proof-of-learning: Definitions and practice","author":"Jia Hengrui","year":"2021","unstructured":"Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, and Nicolas Papernot. 2021. Proof-of-learning: Definitions and practice. In SP, 1039\u20131056.","journal-title":"SP"},{"key":"e_1_3_2_98_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2025.3533907"},{"key":"e_1_3_2_99_2","first-page":"1","article-title":"A unified PAC-Bayesian framework for machine unlearning via information risk minimization","author":"Jose Sharu Theresa","year":"2021","unstructured":"Sharu Theresa Jose and Osvaldo Simeone. 2021. A unified PAC-Bayesian framework for machine unlearning via information risk minimization. In MLSP, 1\u20136.","journal-title":"MLSP"},{"key":"e_1_3_2_100_2","first-page":"1","article-title":"FairSISA: Ensemble post-processing to improve fairness of unlearning in LLMs","author":"Kadhe Swanand","year":"2023","unstructured":"Swanand Kadhe, Anisa Halimi, Ambrish Rawat, and Nathalie Baracaldo. 2023. FairSISA: Ensemble post-processing to improve fairness of unlearning in LLMs. In SoLaR, 1\u201312.","journal-title":"SoLaR"},{"key":"e_1_3_2_101_2","first-page":"907","article-title":"Multiple incremental decremental learning of support vector machines","author":"Karasuyama Masayuki","year":"2009","unstructured":"Masayuki Karasuyama and Ichiro Takeuchi. 2009. Multiple incremental decremental learning of support vector machines. 
In NIPS, 907\u2013915.","journal-title":"NIPS"},{"key":"e_1_3_2_102_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNN.2010.2048039"},{"key":"e_1_3_2_103_2","first-page":"4360","article-title":"Preserving privacy through dememorization: An unlearning technique for mitigating memorization risks in language models","author":"Kassem Aly","year":"2023","unstructured":"Aly Kassem, Omar Mahmoud, and Sherif Saad. 2023. Preserving privacy through dememorization: An unlearning technique for mitigating memorization risks in language models. In EMNLP, 4360\u20134379.","journal-title":"EMNLP"},{"key":"e_1_3_2_104_2","doi-asserted-by":"publisher","DOI":"10.1145\/293347.293351"},{"key":"e_1_3_2_105_2","first-page":"19757","article-title":"Knowledge-adaptation priors","author":"Khan Mohammad Emtiyaz","year":"2021","unstructured":"Mohammad Emtiyaz Khan and Siddharth Swaroop. 2021. Knowledge-adaptation priors. In NIPS, 19757\u201319770.","journal-title":"NIPS"},{"key":"e_1_3_2_106_2","doi-asserted-by":"publisher","DOI":"10.1145\/3633518"},{"key":"e_1_3_2_107_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v38i19.30118"},{"key":"e_1_3_2_108_2","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1611835114"},{"key":"e_1_3_2_109_2","first-page":"1885","article-title":"Understanding black-box predictions via influence functions","author":"Wei Koh Pang","year":"2017","unstructured":"Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. 
In ICML, 1885\u20131894.","journal-title":"ICML"},{"key":"e_1_3_2_110_2","doi-asserted-by":"publisher","DOI":"10.1214\/aoms\/1177729694"},{"key":"e_1_3_2_111_2","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2024.3396524"},{"key":"e_1_3_2_112_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2024.102684"},{"key":"e_1_3_2_113_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cagd.2018.10.005"},{"key":"e_1_3_2_114_2","article-title":"Machine unlearning for image-to-image generative models","author":"Li Guihong","year":"2024","unstructured":"Guihong Li, Hsiang Hsu, Chun-Fu Chen, and Radu Marculescu. 2024. Machine unlearning for image-to-image generative models. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_2_115_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v38i12.29273"},{"key":"e_1_3_2_116_2","first-page":"984","article-title":"Making users indistinguishable: Attribute-wise unlearning in recommender systems","author":"Li Yuyuan","year":"2023","unstructured":"Yuyuan Li, Chaochao Chen, Xiaolin Zheng, Yizhao Zhang, Zhongxuan Han, Dan Meng, and Jun Wang. 2023. Making users indistinguishable: Attribute-wise unlearning in recommender systems. In MM, 984\u2013994.","journal-title":"MM"},{"key":"e_1_3_2_117_2","first-page":"217","article-title":"Online forgetting process for linear regression models","author":"Li Yuantong","year":"2021","unstructured":"Yuantong Li, Chi-Hua Wang, and Guang Cheng. 2021. Online forgetting process for linear regression models. In AISTAT, 217\u2013225.","journal-title":"AISTAT"},{"key":"e_1_3_2_118_2","first-page":"20147","article-title":"Erm-ktp: Knowledge-level machine unlearning via knowledge transfer","author":"Lin Shen","year":"2023","unstructured":"Shen Lin, Xiaoyu Zhang, Chenyang Chen, Xiaofeng Chen, and Willy Susilo. 2023. Erm-ktp: Knowledge-level machine unlearning via knowledge transfer. In CVPR, 20147\u201320155.","journal-title":"CVPR"},{"key":"e_1_3_2_119_2","unstructured":"Bo Liu Qiang Liu and Peter Stone. 
2022. Continual learning and private unlearning. arXiv:2203.12817. Retrieved from https:\/\/arxiv.org\/abs\/2203.12817"},{"key":"e_1_3_2_120_2","unstructured":"Gaoyang Liu Xiaoqiang Ma Yang Yang Chen Wang and Jiangchuan Liu. 2020. Federated unlearning. arXiv:2012.13891. Retrieved from https:\/\/arxiv.org\/abs\/2012.13891"},{"key":"e_1_3_2_121_2","first-page":"1","article-title":"Federaser: Enabling efficient client-level data removal from federated learning models","author":"Liu Gaoyang","year":"2021","unstructured":"Gaoyang Liu, Xiaoqiang Ma, Yang Yang, Chen Wang, and Jiangchuan Liu. 2021. Federaser: Enabling efficient client-level data removal from federated learning models. In IWQOS, 1\u201310.","journal-title":"IWQOS"},{"key":"e_1_3_2_122_2","unstructured":"Hengzhu Liu Ping Xiong Tianqing Zhu and Philip S. Yu. 2024. A survey on machine unlearning: Techniques and new emerged privacy risks. arXiv:2406.06186. Retrieved from https:\/\/arxiv.org\/abs\/2406.06186"},{"key":"e_1_3_2_123_2","unstructured":"Sijia Liu Yuanshun Yao Jinghan Jia Stephen Casper Nathalie Baracaldo Peter Hase Yuguang Yao Chris Yuhao Liu Xiaojun Xu Hang Li et al. 2024. Rethinking machine unlearning for large language models. arXiv:2402.08787. Retrieved from https:\/\/arxiv.org\/abs\/2402.08787"},{"key":"e_1_3_2_124_2","unstructured":"Wenyan Liu Juncheng Wan Xiaoling Wang Weinan Zhang Dell Zhang and Hang Li. 2022. Forgetting fast in recommender systems. arXiv:2208.06875. Retrieved from https:\/\/arxiv.org\/abs\/2208.06875"},{"key":"e_1_3_2_125_2","first-page":"95","article-title":"Have you forgotten? A method to assess if machine learning models have forgotten data","author":"Liu Xiao","year":"2020","unstructured":"Xiao Liu and Sotirios A. Tsaftaris. 2020. Have you forgotten? A method to assess if machine learning models have forgotten data. 
In MICCAI, 95\u2013105.","journal-title":"MICCAI"},{"key":"e_1_3_2_126_2","doi-asserted-by":"crossref","unstructured":"Yang Liu Mingyuan Fan Cen Chen Ximeng Liu Zhuo Ma Li Wang and Jianfeng Ma. 2022. Backdoor defense with machine unlearning. arXiv:2201.09538. Retrieved from https:\/\/arxiv.org\/abs\/2201.09538","DOI":"10.1109\/INFOCOM48880.2022.9796974"},{"key":"e_1_3_2_127_2","unstructured":"Yang Liu Zhuo Ma Ximeng Liu Jian Liu Zhongyuan Jiang Jianfeng Ma Philip Yu and Kui Ren. 2020. Learn to forget: Machine unlearning via neuron masking. arXiv:2003.10933. Retrieved from https:\/\/arxiv.org\/abs\/2003.10933"},{"key":"e_1_3_2_128_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2021.3104842"},{"key":"e_1_3_2_129_2","first-page":"1749","article-title":"The right to be forgotten in federated learning: An efficient realization with rapid retraining","author":"Liu Yi","year":"2022","unstructured":"Yi Liu, Lei Xu, Xingliang Yuan, Cong Wang, and Bo Li. 2022. The right to be forgotten in federated learning: An efficient realization with rapid retraining. In INFOCOM, 1749\u20131758.","journal-title":"INFOCOM"},{"key":"e_1_3_2_130_2","doi-asserted-by":"publisher","DOI":"10.1145\/3679014"},{"key":"e_1_3_2_131_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-020-02036-0"},{"key":"e_1_3_2_132_2","first-page":"27591","article-title":"Quark: Controllable text generation with reinforced unlearning","author":"Lu Ximing","year":"2022","unstructured":"Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. 2022. Quark: Controllable text generation with reinforced unlearning. In NeurIPS, 27591\u201327609.","journal-title":"NeurIPS"},{"key":"e_1_3_2_133_2","unstructured":"Ananth Mahadevan and Michael Mathioudakis. 2021. Certifiable machine unlearning for linear models. arXiv:2106.15093.
Retrieved from https:\/\/arxiv.org\/abs\/2106.15093"},{"key":"e_1_3_2_134_2","doi-asserted-by":"publisher","DOI":"10.3390\/make4030028"},{"key":"e_1_3_2_135_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.clsr.2013.03.010"},{"key":"e_1_3_2_136_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i7.20736"},{"key":"e_1_3_2_137_2","doi-asserted-by":"publisher","DOI":"10.5555\/3455716.3455862"},{"key":"e_1_3_2_138_2","first-page":"471","article-title":"Deep face recognition: A survey","author":"Masi Iacopo","year":"2018","unstructured":"Iacopo Masi, Yue Wu, Tal Hassner, and Prem Natarajan. 2018. Deep face recognition: A survey. In SIBGRAPI, 471\u2013478.","journal-title":"SIBGRAPI"},{"key":"e_1_3_2_139_2","first-page":"1273","article-title":"Communication-efficient learning of deep networks from decentralized data","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In AISTAT, 1273\u20131282.","journal-title":"AISTAT"},{"key":"e_1_3_2_140_2","doi-asserted-by":"publisher","DOI":"10.1145\/3457607"},{"key":"e_1_3_2_141_2","first-page":"10422","article-title":"Deep unlearning via randomized conditionally independent hessians","author":"Mehta Ronak","year":"2022","unstructured":"Ronak Mehta, Sourav Pal, Vikas Singh, and Sathya N. Ravi. 2022. Deep unlearning via randomized conditionally independent hessians. In CVPR, 10422\u201310431.","journal-title":"CVPR"},{"key":"e_1_3_2_142_2","unstructured":"Salvatore Mercuri Raad Khraishi Ramin Okhrati Devesh Batra Conor Hamill Taha Ghasempour and Andrew Nowlan. 2022. An introduction to machine unlearning. arXiv:2209.00939. 
Retrieved from https:\/\/arxiv.org\/abs\/2209.00939"},{"key":"e_1_3_2_143_2","first-page":"1","article-title":"Zero-shot knowledge transfer via adversarial belief matching","author":"Micaelli Paul","year":"2019","unstructured":"Paul Micaelli and Amos Storkey. 2019. Zero-shot knowledge transfer via adversarial belief matching. In NIPS, 1\u201311.","journal-title":"NIPS"},{"key":"e_1_3_2_144_2","first-page":"20673","article-title":"Learning from failure: De-biasing classifier from biased classifier","author":"Nam Junhyun","year":"2020","unstructured":"Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. 2020. Learning from failure: De-biasing classifier from biased classifier. In NIPS, 20673\u201320684.","journal-title":"NIPS"},{"key":"e_1_3_2_145_2","first-page":"931","article-title":"Descent-to-delete: Gradient-based methods for machine unlearning","author":"Neel Seth","year":"2021","unstructured":"Seth Neel, Aaron Roth, and Saeed Sharifi-Malvajerdi. 2021. Descent-to-delete: Gradient-based methods for machine unlearning. In Algorithmic Learning Theory, 931\u2013962.","journal-title":"Algorithmic Learning Theory"},{"key":"e_1_3_2_146_2","first-page":"16025","article-title":"Variational Bayesian unlearning","author":"Nguyen Quoc Phong","year":"2020","unstructured":"Quoc Phong Nguyen, Bryan Kian Hsiang Low, and Patrick Jaillet. 2020. Variational Bayesian unlearning. In NIPS, 16025\u201316036.","journal-title":"NIPS"},{"key":"e_1_3_2_147_2","first-page":"351","article-title":"Markov chain Monte Carlo-based machine unlearning: Unlearning what needs to be forgotten","author":"Nguyen Quoc Phong","year":"2022","unstructured":"Quoc Phong Nguyen, Ryutaro Oikawa, Dinil Mon Divakaran, Mun Choon Chan, and Bryan Kian Hsiang Low. 2022. Markov chain Monte Carlo-based machine unlearning: Unlearning what needs to be forgotten.
In ASIACCS, 351\u2013363.","journal-title":"ASIACCS"},{"key":"e_1_3_2_148_2","volume-title":"Debunking Misinformation on the Web: Detection, Validation, and Visualisation","author":"Nguyen Thanh Tam","year":"2019","unstructured":"Thanh Tam Nguyen. 2019. Debunking Misinformation on the Web: Detection, Validation, and Visualisation. PhD Dissertation. EPFL, Switzerland."},{"key":"e_1_3_2_149_2","unstructured":"Thanh Tam Nguyen Thanh Trung Huynh Phi Le Nguyen Alan Wee-Chung Liew Hongzhi Yin and Quoc Viet Hung Nguyen. 2022. A survey of machine unlearning. arXiv:2209.02299. Retrieved from https:\/\/arxiv.org\/abs\/2209.02299"},{"key":"e_1_3_2_150_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2021.04.018"},{"key":"e_1_3_2_151_2","first-page":"3736","article-title":"Fair machine unlearning: Data removal while mitigating disparities","author":"Oesterling Alex","year":"2024","unstructured":"Alex Oesterling, Jiaqi Ma, Flavio Calmon, and Himabindu Lakkaraju. 2024. Fair machine unlearning: Data removal while mitigating disparities. In AISTATS, 3736\u20133744.","journal-title":"AISTATS"},{"key":"e_1_3_2_152_2","first-page":"716","article-title":"Unlearning graph classifiers with limited data resources","author":"Pan Chao","year":"2023","unstructured":"Chao Pan, Eli Chien, and Olgica Milenkovic. 2023. Unlearning graph classifiers with limited data resources. In WWW, 716\u2013726.","journal-title":"WWW"},{"key":"e_1_3_2_153_2","unstructured":"Subhodip Panda and Prathosh A. P. 2023. FAST: Feature aware similarity thresholding for weak unlearning in black-box generative models. arXiv:2312.14895. Retrieved from https:\/\/arxiv.org\/abs\/2312.14895"},{"key":"e_1_3_2_154_2","first-page":"68","article-title":"The California consumer privacy act: Towards a European-style privacy regime in the United States","volume":"23","author":"Pardau Stuart L.","year":"2018","unstructured":"Stuart L. Pardau. 2018.
The California consumer privacy act: Towards a European-style privacy regime in the United States. Journal of Technology Law and Policy 23 (2018), 68.","journal-title":"Journal of Technology Law and Policy"},{"key":"e_1_3_2_155_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2019.01.012"},{"key":"e_1_3_2_156_2","unstructured":"Nishchal Parne Kyathi Puppaala Nithish Bhupathi and Ripon Patgiri. 2021. An investigation on learning polluting and unlearning the spam emails for lifelong learning. arXiv:2111.14609. Retrieved from https:\/\/arxiv.org\/abs\/2111.14609"},{"key":"e_1_3_2_157_2","unstructured":"Martin Pawelczyk Seth Neel and Himabindu Lakkaraju. 2023. In-context unlearning: Language models as few shot unlearners. arXiv:2310.07579. Retrieved from https:\/\/arxiv.org\/abs\/2310.07579"},{"key":"e_1_3_2_158_2","first-page":"234","article-title":"Uncertainty in neural networks: Approximately Bayesian ensembling","author":"Pearce Tim","year":"2020","unstructured":"Tim Pearce, Felix Leibfried, and Alexandra Brintrup. 2020. Uncertainty in neural networks: Approximately Bayesian ensembling. In AISTATS, 234\u2013244.","journal-title":"AISTATS"},{"key":"e_1_3_2_159_2","first-page":"1","article-title":"SSSE: Efficiently erasing samples from trained machine learning models","author":"Peste Alexandra","year":"2021","unstructured":"Alexandra Peste, Dan Alistarh, and Christoph H. Lampert. 2021. SSSE: Efficiently erasing samples from trained machine learning models. In NeurIPS 2021 Workshop Privacy in Machine Learning, 1\u20136.","journal-title":"NeurIPS 2021 Workshop Privacy in Machine Learning"},{"key":"e_1_3_2_160_2","unstructured":"Nicholas Pochinkov and Nandi Schoots. 2024. Dissecting language models: machine unlearning via selective pruning. arXiv:2403.01267. 
Retrieved from https:\/\/arxiv.org\/abs\/2403.01267"},{"key":"e_1_3_2_161_2","first-page":"1932","article-title":"Towards understanding and enhancing robustness of deep learning models against malicious unlearning attacks","author":"Qian Wei","year":"2023","unstructured":"Wei Qian, Chenxu Zhao, Wei Le, Meiyi Ma, and Mengdi Huai. 2023. Towards understanding and enhancing robustness of deep learning models against malicious unlearning attacks. In KDD, 1932\u20131942.","journal-title":"KDD"},{"key":"e_1_3_2_162_2","first-page":"5559","article-title":"FedCIO: Efficient exact federated unlearning with clustering, isolation, and one-shot aggregation","author":"Qiu Hongyu","year":"2023","unstructured":"Hongyu Qiu, Yongwei Wang, Yonghui Xu, Lizhen Cui, and Zhiqi Shen. 2023. FedCIO: Efficient exact federated unlearning with clustering, isolation, and one-shot aggregation. In BigData, 5559\u20135568.","journal-title":"BigData"},{"key":"e_1_3_2_163_2","doi-asserted-by":"publisher","DOI":"10.1109\/MC.2023.3333319"},{"key":"e_1_3_2_164_2","first-page":"9301","article-title":"Fair attribute classification through latent space de-biasing","author":"Ramaswamy Vikram V.","year":"2021","unstructured":"Vikram V. Ramaswamy, Sunnie S. Y. Kim, and Olga Russakovsky. 2021. Fair attribute classification through latent space de-biasing. In CVPR, 9301\u20139310.","journal-title":"CVPR"},{"key":"e_1_3_2_165_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eng.2019.12.012"},{"key":"e_1_3_2_166_2","first-page":"7184","article-title":"Generating and protecting against adversarial attacks for deep speech-based emotion recognition models","author":"Ren Zhao","year":"2020","unstructured":"Zhao Ren, Alice Baird, Jing Han, Zixing Zhang, and Bj\u00f6rn Schuller. 2020. Generating and protecting against adversarial attacks for deep speech-based emotion recognition models. 
In ICASSP, 7184\u20137188.","journal-title":"ICASSP"},{"key":"e_1_3_2_167_2","first-page":"496","article-title":"Enhancing transferability of black-box adversarial attacks via lifelong learning for speech emotion recognition models","author":"Ren Zhao","year":"2020","unstructured":"Zhao Ren, Jing Han, Nicholas Cummins, and Bj\u00f6rn W. Schuller. 2020. Enhancing transferability of black-box adversarial attacks via lifelong learning for speech emotion recognition models. In INTERSPEECH, 496\u2013500.","journal-title":"INTERSPEECH"},{"key":"e_1_3_2_168_2","first-page":"9087","article-title":"Prototype learning for interpretable respiratory sound analysis","author":"Ren Zhao","year":"2022","unstructured":"Zhao Ren, Thanh Tam Nguyen, and Wolfgang Nejdl. 2022. Prototype learning for interpretable respiratory sound analysis. In ICASSP, 9087\u20139091.","journal-title":"ICASSP"},{"key":"e_1_3_2_169_2","first-page":"209","article-title":"Incremental and decremental learning for linear support vector machines","author":"Romero Enrique","year":"2007","unstructured":"Enrique Romero, Ignacio Barrio, and Llu\u00eds Belanche. 2007. Incremental and decremental learning for linear support vector machines. In ICANN, 209\u2013218.","journal-title":"ICANN"},{"key":"e_1_3_2_170_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2884905"},{"key":"e_1_3_2_171_2","first-page":"1291","article-title":"Updates-leak: Data set inference and reconstruction attacks in online learning","author":"Salem Ahmed","year":"2020","unstructured":"Ahmed Salem, Apratim Bhattacharya, Michael Backes, Mario Fritz, and Yang Zhang. 2020. Updates-leak: Data set inference and reconstruction attacks in online learning.
In USENIX Security, 1291\u20131308.","journal-title":"USENIX Security"},{"key":"e_1_3_2_172_2","first-page":"1","article-title":"ML-leaks: Model and data independent membership inference attacks and defenses on machine learning models","author":"Salem Ahmed","year":"2019","unstructured":"Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, and Michael Backes. 2019. ML-leaks: Model and data independent membership inference attacks and defenses on machine learning models. In NDSS, 1\u201315.","journal-title":"NDSS"},{"key":"e_1_3_2_173_2","doi-asserted-by":"publisher","DOI":"10.1088\/1742-6596\/1477\/2\/022012"},{"key":"e_1_3_2_174_2","first-page":"5531","article-title":"Fedaux: Leveraging unlabeled auxiliary data in federated learning","author":"Sattler Felix","year":"2021","unstructured":"Felix Sattler, Tim Korjakow, Roman Rischke, and Wojciech Samek. 2021. Fedaux: Leveraging unlabeled auxiliary data in federated learning. IEEE Transactions on Neural Networks and Learning Systems 34, 9 (2021), 5531\u20135543.","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},{"key":"e_1_3_2_175_2","first-page":"1","article-title":"\u201cAmnesia\u201d\u2014A selection of machine learning models that can forget user data very fast","author":"Schelter Sebastian","year":"2020","unstructured":"Sebastian Schelter. 2020. \u201cAmnesia\u201d\u2014A selection of machine learning models that can forget user data very fast. In CIDR, 1\u20139.","journal-title":"CIDR"},{"key":"e_1_3_2_176_2","first-page":"1545","article-title":"Hedgecut: Maintaining randomised trees for low-latency machine unlearning","author":"Schelter Sebastian","year":"2021","unstructured":"Sebastian Schelter, Stefan Grafberger, and Ted Dunning. 2021. Hedgecut: Maintaining randomised trees for low-latency machine unlearning. 
In SIGMOD, 1545\u20131557.","journal-title":"SIGMOD"},{"key":"e_1_3_2_177_2","first-page":"18075","article-title":"Remember what you want to forget: Algorithms for machine unlearning","author":"Sekhari Ayush","year":"2021","unstructured":"Ayush Sekhari, Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh. 2021. Remember what you want to forget: Algorithms for machine unlearning. In NIPS, 18075\u201318086.","journal-title":"NIPS"},{"key":"e_1_3_2_178_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2024.3382726"},{"key":"e_1_3_2_179_2","unstructured":"Thanveer Shaik Xiaohui Tao Haoran Xie Lin Li Xiaofeng Zhu and Qing Li. 2023. Exploring the landscape of machine unlearning: A comprehensive survey and taxonomy. arXiv:2305.06360. Retrieved from https:\/\/arxiv.org\/abs\/2305.06360"},{"key":"e_1_3_2_180_2","first-page":"11676","article-title":"Exploring the landscape of machine unlearning: A comprehensive survey and taxonomy","author":"Shaik Thanveer","year":"2024","unstructured":"Thanveer Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Xiaofeng Zhu, and Qing Li. 2024. Exploring the landscape of machine unlearning: A comprehensive survey and taxonomy. IEEE Transactions on Neural Networks and Learning Systems 36, 7 (2024), 11676\u201311696.","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},{"key":"e_1_3_2_181_2","first-page":"1","article-title":"Protecting personal privacy against unauthorized deep learning models","author":"Shan S.","year":"2020","unstructured":"S. Shan, E. Wenger, J. Zhang, H. Li, H. Zheng, and B. Y. Zhao. 2020. Protecting personal privacy against unauthorized deep learning models. In USENIX Security, 1\u201316.","journal-title":"USENIX Security"},{"key":"e_1_3_2_182_2","unstructured":"Jiaqi Shao Tao Lin Xuanyu Cao and Bing Luo. 2024. Federated unlearning: A perspective of stability and fairness. arXiv:2402.01276. 
Retrieved from https:\/\/arxiv.org\/abs\/2402.01276"},{"key":"e_1_3_2_183_2","unstructured":"Weijia Shi Jaechan Lee Yangsibo Huang Sadhika Malladi Jieyu Zhao Ari Holtzman Daogao Liu Luke Zettlemoyer Noah A. Smith and Chiyuan Zhang. 2024. Muse: Machine unlearning six-way evaluation for language models. arXiv:2407.06460. Retrieved from https:\/\/arxiv.org\/abs\/2407.06460"},{"key":"e_1_3_2_184_2","first-page":"6","article-title":"Learning with selective forgetting","author":"Shibata Takashi","year":"2021","unstructured":"Takashi Shibata, Go Irie, Daiki Ikami, and Yu Mitsuzumi. 2021. Learning with selective forgetting. In IJCAI, 6.","journal-title":"IJCAI"},{"key":"e_1_3_2_185_2","first-page":"72","article-title":"Making machine learning forget","author":"Shintre Saurabh","year":"2019","unstructured":"Saurabh Shintre, Kevin A. Roundy, and Jasjeet Dhaliwal. 2019. Making machine learning forget. In Annual Privacy Forum, 72\u201383.","journal-title":"Annual Privacy Forum"},{"key":"e_1_3_2_186_2","first-page":"3","article-title":"Membership inference attacks against machine learning models","author":"Shokri Reza","year":"2017","unstructured":"Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In SP, 3\u201318.","journal-title":"SP"},{"key":"e_1_3_2_187_2","unstructured":"Ravid Shwartz-Ziv and Naftali Tishby. 2017. Opening the black box of deep neural networks via information. arXiv:1703.00810. Retrieved from https:\/\/arxiv.org\/abs\/1703.00810"},{"issue":"4","key":"e_1_3_2_188_2","first-page":"111","article-title":"Data leakage detection using cloud computing","volume":"6","author":"Singh Abhijeet","year":"2017","unstructured":"Abhijeet Singh and Abhineet Anand. 2017. Data leakage detection using cloud computing. 
International Journal of Engineering and Computer Science 6, 4 (2017), 111\u2013115.","journal-title":"International Journal of Engineering and Computer Science"},{"key":"e_1_3_2_189_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i11.21500"},{"key":"e_1_3_2_190_2","unstructured":"Yash Sinha Murari Mandal and Mohan Kankanhalli. 2024. Multi-modal recommendation unlearning. arXiv:2405.15328. Retrieved from https:\/\/arxiv.org\/abs\/2405.15328"},{"key":"e_1_3_2_191_2","unstructured":"David Marco Sommer Liwei Song Sameer Wagh and Prateek Mittal. 2020. Towards probabilistic verification of machine unlearning. arXiv:2003.04247. Retrieved from https:\/\/arxiv.org\/abs\/2003.04247"},{"key":"e_1_3_2_192_2","doi-asserted-by":"publisher","DOI":"10.56553\/popets-2022-0072"},{"key":"e_1_3_2_193_2","first-page":"241","article-title":"Machine unlearning: Its need and implementation strategies","author":"Tahiliani Aman","year":"2021","unstructured":"Aman Tahiliani, Vikas Hassija, Vinay Chamola, and Mohsen Guizani. 2021. Machine unlearning: Its need and implementation strategies. In IC3, 241\u2013246.","journal-title":"IC3"},{"key":"e_1_3_2_194_2","first-page":"2850","article-title":"Retaining data from streams of social platforms with minimal regret","author":"Tam Nguyen Thanh","year":"2017","unstructured":"Nguyen Thanh Tam, Matthias Weidlich, Duong Chi Thang, Hongzhi Yin, and Nguyen Quoc Viet Hung. 2017. Retaining data from streams of social platforms with minimal regret. In IJCAI, 2850\u20132856.","journal-title":"IJCAI"},{"key":"e_1_3_2_195_2","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-020-00349-y"},{"key":"e_1_3_2_196_2","doi-asserted-by":"publisher","DOI":"10.14778\/3641204.3641220"},{"key":"e_1_3_2_197_2","unstructured":"Ayush K. Tarun Vikram S. Chundawat Murari Mandal and Mohan Kankanhalli. 2021. Fast yet effective machine unlearning. arXiv:2111.08947. 
Retrieved from https:\/\/arxiv.org\/abs\/2111.08947"},{"key":"e_1_3_2_198_2","first-page":"303","article-title":"Unrolling SGD: Understanding factors influencing machine unlearning","author":"Thudi Anvith","year":"2022","unstructured":"Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, and Nicolas Papernot. 2022. Unrolling SGD: Understanding factors influencing machine unlearning. In EuroS&P, 303\u2013319.","journal-title":"EuroS&P"},{"key":"e_1_3_2_199_2","first-page":"4007","article-title":"On the necessity of auditable algorithmic definitions for machine unlearning","author":"Thudi Anvith","year":"2022","unstructured":"Anvith Thudi, Hengrui Jia, Ilia Shumailov, and Nicolas Papernot. 2022. On the necessity of auditable algorithmic definitions for machine unlearning. In USENIX Security, 4007\u20134022.","journal-title":"USENIX Security"},{"key":"e_1_3_2_200_2","unstructured":"Anvith Thudi Ilia Shumailov Franziska Boenisch and Nicolas Papernot. 2022. Bounding membership inference. arXiv:2202.12232. Retrieved from https:\/\/arxiv.org\/abs\/2202.12232"},{"key":"e_1_3_2_201_2","unstructured":"Naftali Tishby Fernando C. Pereira and William Bialek. 2000. The information bottleneck method. arXiv:physics\/0004057. Retrieved from https:\/\/arxiv.org\/abs\/physics\/0004057"},{"key":"e_1_3_2_202_2","first-page":"1","article-title":"Deep learning and the information bottleneck principle","author":"Tishby Naftali","year":"2015","unstructured":"Naftali Tishby and Noga Zaslavsky. 2015. Deep learning and the information bottleneck principle. In ITW, 1\u20135.","journal-title":"ITW"},{"key":"e_1_3_2_203_2","unstructured":"Piyush Tiwary Atri Guha Subhodip Panda and Prathosh A. P. 2023. Adapt then unlearn: Exploiting parameter space semantics for unlearning in generative adversarial networks. arXiv:2309.14054.
Retrieved from https:\/\/arxiv.org\/abs\/2309.14054"},{"key":"e_1_3_2_204_2","first-page":"343","article-title":"Incremental and decremental training for linear classification","author":"Tsai Cheng-Hao","year":"2014","unstructured":"Cheng-Hao Tsai, Chieh-Yen Lin, and Chih-Jen Lin. 2014. Incremental and decremental training for linear classification. In KDD, 343\u2013352.","journal-title":"KDD"},{"key":"e_1_3_2_205_2","first-page":"386","article-title":"Multicategory incremental proximal support vector classifiers","author":"Tveit Amund","year":"2003","unstructured":"Amund Tveit and Magnus Lie Hetland. 2003. Multicategory incremental proximal support vector classifiers. In KES, 386\u2013392.","journal-title":"KES"},{"key":"e_1_3_2_206_2","first-page":"422","article-title":"Incremental and decremental proximal support vector classification using decay coefficients","author":"Tveit Amund","year":"2003","unstructured":"Amund Tveit, Magnus Lie Hetland, and H\u00e5vard Engum. 2003. Incremental and decremental proximal support vector classification using decay coefficients. In DaWaK, 422\u2013429.","journal-title":"DaWaK"},{"key":"e_1_3_2_207_2","first-page":"4126","volume-title":"Conference on Learning Theory","author":"Ullah Enayat","year":"2021","unstructured":"Enayat Ullah, Tung Mai, Anup Rao, Ryan A. Rossi, and Raman Arora. 2021. Machine unlearning via algorithmic stability. In Conference on Learning Theory, 4126\u20134142."},{"issue":"2133","key":"e_1_3_2_208_2","article-title":"Algorithms that remember: Model inversion attacks and data protection law","volume":"376","author":"Veale Michael","year":"2018","unstructured":"Michael Veale, Reuben Binns, and Lilian Edwards. 2018. Algorithms that remember: Model inversion attacks and data protection law.
Philosophical Transactions of the Royal Society A 376, 2133 (2018), 20180083.","journal-title":"Philosophical Transactions of the Royal Society A"},{"key":"e_1_3_2_209_2","first-page":"1","article-title":"Total variation distance and the distribution of relative information","author":"Verd\u00fa Sergio","year":"2014","unstructured":"Sergio Verd\u00fa. 2014. Total variation distance and the distribution of relative information. In ITA, 1\u20133.","journal-title":"ITA"},{"key":"e_1_3_2_210_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.clsr.2017.08.007"},{"key":"e_1_3_2_211_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-57959-7"},{"key":"e_1_3_2_212_2","first-page":"707","article-title":"Neural cleanse: Identifying and mitigating backdoor attacks in neural networks","author":"Wang Bolun","year":"2019","unstructured":"Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y. Zhao. 2019. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In SP, 707\u2013723.","journal-title":"SP"},{"key":"e_1_3_2_213_2","unstructured":"Benjamin Longxiang Wang and Sebastian Schelter. 2022. Efficiently maintaining next basket recommendations under additions and deletions of baskets and items. arXiv:2201.13313. Retrieved from https:\/\/arxiv.org\/abs\/2201.13313"},{"key":"e_1_3_2_214_2","first-page":"3205","article-title":"Inductive graph unlearning","author":"Wang Cheng-Long","year":"2023","unstructured":"Cheng-Long Wang, Mengdi Huai, and Di Wang. 2023. Inductive graph unlearning. In USENIX, 3205\u20133222.","journal-title":"USENIX"},{"key":"e_1_3_2_215_2","doi-asserted-by":"crossref","unstructured":"Hangyu Wang Jianghao Lin Bo Chen Yang Yang Ruiming Tang Weinan Zhang and Yong Yu. 2024. Towards efficient and effective unlearning of large language models for recommendation. arXiv:2403.03536. 
Retrieved from https:\/\/arxiv.org\/abs\/2403.03536","DOI":"10.1007\/s11704-024-40044-2"},{"key":"e_1_3_2_216_2","first-page":"622","article-title":"Federated unlearning via class-discriminative pruning","author":"Wang Junxiao","year":"2022","unstructured":"Junxiao Wang, Song Guo, Xin Xie, and Heng Qi. 2022. Federated unlearning via class-discriminative pruning. In WWW, 622\u2013632.","journal-title":"WWW"},{"key":"e_1_3_2_217_2","first-page":"13264","article-title":"KGA: A general machine unlearning framework based on knowledge gap alignment","author":"Wang Lingzhi","year":"2023","unstructured":"Lingzhi Wang, Tong Chen, Wei Yuan, Xingshan Zeng, Kam-Fai Wong, and Hongzhi Yin. 2023. KGA: A general machine unlearning framework based on knowledge gap alignment. In ACL, 13264\u201313276.","journal-title":"ACL"},{"key":"e_1_3_2_218_2","unstructured":"Qizhou Wang Bo Han Puning Yang Jianing Zhu Tongliang Liu and Masashi Sugiyama. 2024. Unlearning with control: Assessing real-world utility for large language model unlearning. arXiv:2406.09179. Retrieved from https:\/\/arxiv.org\/abs\/2406.09179"},{"key":"e_1_3_2_219_2","first-page":"534","article-title":"Learning your identity and disease from research papers: Information leaks in genome wide association study","author":"Wang Rui","year":"2009","unstructured":"Rui Wang, Yong Fuga Li, XiaoFeng Wang, Haixu Tang, and Xiaoyong Zhou. 2009. Learning your identity and disease from research papers: Information leaks in genome wide association study. In CCS, 534\u2013544.","journal-title":"CCS"},{"key":"e_1_3_2_220_2","first-page":"567","article-title":"Bfu: Bayesian federated unlearning with parameter self-sharing","author":"Wang Weiqi","year":"2023","unstructured":"Weiqi Wang, Zhiyi Tian, Chenhan Zhang, An Liu, and Shui Yu. 2023. Bfu: Bayesian federated unlearning with parameter self-sharing. 
In ASIACCS, 567\u2013578.","journal-title":"ASIACCS"},{"key":"e_1_3_2_221_2","first-page":"8919","article-title":"Towards fairness in visual recognition: Effective strategies for bias mitigation","author":"Wang Zeyu","year":"2020","unstructured":"Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. 2020. Towards fairness in visual recognition: Effective strategies for bias mitigation. In CVPR, 8919\u20138928.","journal-title":"CVPR"},{"key":"e_1_3_2_222_2","first-page":"1464","article-title":"A comprehensive survey of forgetting in deep learning beyond continual learning","author":"Wang Zhenyi","year":"2024","unstructured":"Zhenyi Wang, Enneng Yang, Li Shen, and Heng Huang. 2024. A comprehensive survey of forgetting in deep learning beyond continual learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 47, 3 (2024), 1464\u20131483.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"e_1_3_2_223_2","unstructured":"Alexander Warnecke Lukas Pirch Christian Wressnegger and Konrad Rieck. 2021. Machine unlearning of features and labels. arXiv:2108.11577. Retrieved from https:\/\/arxiv.org\/abs\/2108.11577"},{"key":"e_1_3_2_224_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2024.3358993"},{"key":"e_1_3_2_225_2","unstructured":"Chen Wu Sencun Zhu and Prasenjit Mitra. 2022. Federated unlearning with knowledge distillation. arXiv:2201.09441. Retrieved from https:\/\/arxiv.org\/abs\/2201.09441"},{"key":"e_1_3_2_226_2","first-page":"1","article-title":"Unlearning backdoor attacks in federated learning","author":"Wu Chen","year":"2024","unstructured":"Chen Wu, Sencun Zhu, Prasenjit Mitra, and Wei Wang. 2024. Unlearning backdoor attacks in federated learning. 
In CNS, 1\u20139.","journal-title":"CNS"},{"key":"e_1_3_2_227_2","first-page":"6861","article-title":"Simplifying graph convolutional networks","author":"Wu Felix","year":"2019","unstructured":"Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. 2019. Simplifying graph convolutional networks. In ICML, 6861\u20136871.","journal-title":"ICML"},{"key":"e_1_3_2_228_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i8.20846"},{"key":"e_1_3_2_229_2","doi-asserted-by":"publisher","DOI":"10.1109\/MNET.001.2200198"},{"key":"e_1_3_2_230_2","first-page":"10355","article-title":"Deltagrad: Rapid retraining of machine learning models","author":"Wu Yinjun","year":"2020","unstructured":"Yinjun Wu, Edgar Dobriban, and Susan B. Davidson. 2020. Deltagrad: Rapid retraining of machine learning models. In ICML, 10355\u201310366.","journal-title":"ICML"},{"key":"e_1_3_2_231_2","first-page":"447","article-title":"Priu: A provenance-based approach for incrementally updating regression models","author":"Wu Yinjun","year":"2020","unstructured":"Yinjun Wu, Val Tannen, and Susan B. Davidson. 2020. Priu: A provenance-based approach for incrementally updating regression models. In SIGMOD, 447\u2013462.","journal-title":"SIGMOD"},{"issue":"2","key":"e_1_3_2_232_2","first-page":"1","article-title":"Deltaboost: Gradient boosting decision trees with efficient machine unlearning","volume":"1","author":"Wu Zhaomin","year":"2023","unstructured":"Zhaomin Wu, Junhui Zhu, Qinbin Li, and Bingsheng He. 2023. Deltaboost: Gradient boosting decision trees with efficient machine unlearning. In SIGMOD 1, 2 (2023), 1\u201326.","journal-title":"SIGMOD"},{"key":"e_1_3_2_233_2","first-page":"1439","article-title":"Exact-Fun: An exact and efficient federated unlearning approach","author":"Xiong Zuobin","year":"2023","unstructured":"Zuobin Xiong, Wei Li, Yingshu Li, and Zhipeng Cai. 2023. Exact-Fun: An exact and efficient federated unlearning approach. 
In ICDM, 1439\u20131444.","journal-title":"ICDM"},{"key":"e_1_3_2_234_2","doi-asserted-by":"publisher","DOI":"10.1145\/3603620"},{"key":"e_1_3_2_235_2","unstructured":"Heng Xu Tianqing Zhu Lefeng Zhang Wanlei Zhou and Wei Zhao. 2024. Towards efficient target-level machine unlearning based on essential graph. arXiv:2406.10954. Retrieved from https:\/\/arxiv.org\/abs\/2406.10954"},{"key":"e_1_3_2_236_2","doi-asserted-by":"publisher","DOI":"10.1109\/TETCI.2024.3379240"},{"key":"e_1_3_2_237_2","unstructured":"Tomoya Yamashita Masanori Yamada and Takashi Shibata. 2023. One-shot machine unlearning with Mnemonic code. arXiv:2306.05670. Retrieved from https:\/\/arxiv.org\/abs\/2306.05670"},{"key":"e_1_3_2_238_2","first-page":"19","article-title":"ARCANE: An efficient architecture for exact machine unlearning","author":"Yan Haonan","year":"2022","unstructured":"Haonan Yan, Xiaoguang Li, Ziyao Guo, Hui Li, Fenghua Li, and Xiaodong Lin. 2022. ARCANE: An efficient architecture for exact machine unlearning. In IJCAI, 19.","journal-title":"IJCAI"},{"key":"e_1_3_2_239_2","unstructured":"Youngsik Yoon Jinhwan Nam Hyojeong Yun Dongwoo Kim and Jungseul Ok. 2022. Few-shot unlearning by model inversion. arXiv:2205.15567. Retrieved from https:\/\/arxiv.org\/abs\/2205.15567"},{"key":"e_1_3_2_240_2","first-page":"6032","article-title":"Unlearning bias in language models by partitioning gradients","author":"Yu Charles","year":"2023","unstructured":"Charles Yu, Sullam Jeoung, Anish Kasi, Pengfei Yu, and Heng Ji. 2023. Unlearning bias in language models by partitioning gradients. In ACL, 6032\u20136048.","journal-title":"ACL"},{"key":"e_1_3_2_241_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i12.17284"},{"key":"e_1_3_2_242_2","unstructured":"Fisher Yu Ari Seff Yinda Zhang Shuran Song Thomas Funkhouser and Jianxiong Xiao. 2015. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv:1506.03365. 
Retrieved from https:\/\/arxiv.org\/abs\/1506.03365"},{"key":"e_1_3_2_243_2","first-page":"393","article-title":"Federated unlearning for on-device recommendation","author":"Yuan Wei","year":"2023","unstructured":"Wei Yuan, Hongzhi Yin, Fangzhao Wu, Shijie Zhang, Tieke He, and Hao Wang. 2023. Federated unlearning for on-device recommendation. In WSDM, 393\u2013401.","journal-title":"WSDM"},{"key":"e_1_3_2_244_2","first-page":"363","article-title":"Analyzing information leakage of updates to natural language models","author":"Zanella-B\u00e9guelin Santiago","year":"2020","unstructured":"Santiago Zanella-B\u00e9guelin, Lukas Wutschitz, Shruti Tople, Victor R\u00fchle, Andrew Paverd, Olga Ohrimenko, Boris K\u00f6pf, and Marc Brockschmidt. 2020. Analyzing information leakage of updates to natural language models. In SIGSAC, 363\u2013375.","journal-title":"SIGSAC"},{"key":"e_1_3_2_245_2","unstructured":"Yingyan Zeng Tianhao Wang Si Chen Hoang Anh Just Ran Jin and Ruoxi Jia. 2021. Learning to refit for convex learning problems. arXiv:2111.12545. Retrieved from https:\/\/arxiv.org\/abs\/2111.12545"},{"key":"e_1_3_2_246_2","doi-asserted-by":"publisher","DOI":"10.1007\/s43681-023-00398-y"},{"key":"e_1_3_2_247_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2020.3003660"},{"key":"e_1_3_2_248_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2023.3297905"},{"key":"e_1_3_2_249_2","first-page":"237","article-title":"Machine unlearning for image retrieval: A generative scrubbing approach","author":"Zhang Peng-Fei","year":"2022","unstructured":"Peng-Fei Zhang, Guangdong Bai, Zi Huang, and Xin-Shun Xu. 2022. Machine unlearning for image retrieval: A generative scrubbing approach. 
In MM, 237\u2013245.","journal-title":"MM"},{"key":"e_1_3_2_250_2","first-page":"250","article-title":"Machine unlearning methodology based on stochastic teacher network","author":"Zhang Xulong","year":"2023","unstructured":"Xulong Zhang, Jianzong Wang, Ning Cheng, Yifu Sun, Chuanyao Zhang, and Jing Xiao. 2023. Machine unlearning methodology based on stochastic teacher network. In ADMA, 250\u2013261.","journal-title":"ADMA"},{"key":"e_1_3_2_251_2","unstructured":"Yimeng Zhang Xin Chen Jinghan Jia Yihua Zhang Chongyu Fan Jiancheng Liu Mingyi Hong Ke Ding and Sijia Liu. 2024. Defensive unlearning with adversarial training for robust concept erasure in diffusion models. arXiv:2405.15234. Retrieved from https:\/\/arxiv.org\/abs\/2405.15234"},{"key":"e_1_3_2_252_2","unstructured":"Yihua Zhang Yimeng Zhang Yuguang Yao Jinghan Jia Jiancheng Liu Xiaoming Liu and Sijia Liu. 2024. Unlearncanvas: A stylized image dataset to benchmark machine unlearning for diffusion models. arXiv:2402.11846. Retrieved from https:\/\/arxiv.org\/abs\/2402.11846"},{"key":"e_1_3_2_253_2","first-page":"13433","article-title":"Prompt certified machine unlearning with randomized gradient smoothing and quantization","author":"Zhang Zijie","year":"2022","unstructured":"Zijie Zhang, Yang Zhou, Xin Zhao, Tianshi Che, and Lingjuan Lyu. 2022. Prompt certified machine unlearning with randomized gradient smoothing and quantization. In NIPS, 13433\u201313455.","journal-title":"NIPS"},{"key":"e_1_3_2_254_2","doi-asserted-by":"publisher","DOI":"10.1145\/3639372"},{"key":"e_1_3_2_255_2","article-title":"Federated unlearning with momentum degradation","author":"Zhao Yian","year":"2023","unstructured":"Yian Zhao, Pengfei Wang, Heng Qi, Jianguo Huang, Zongzheng Wei, and Qiang Zhang. 2023. Federated unlearning with momentum degradation. 
IEEE Internet of Things Journal (2023).","journal-title":"IEEE Internet of Things Journal"},{"key":"e_1_3_2_256_2","first-page":"36","article-title":"Federated unlearning for medical image analysis","author":"Zhong Yuyao","year":"2024","unstructured":"Yuyao Zhong. 2024. Federated unlearning for medical image analysis. In SPRA, 36\u201343.","journal-title":"SPRA"},{"key":"e_1_3_2_257_2","unstructured":"Jianing Zhu Bo Han Jiangchao Yao Jianliang Xu Gang Niu and Masashi Sugiyama. 2024. Decoupling the class label and the target concept in machine unlearning. arXiv:2406.08288. Retrieved from https:\/\/arxiv.org\/abs\/2406.08288"},{"key":"e_1_3_2_258_2","first-page":"2444","volume-title":"WWW","author":"Zhu Xiangrong","year":"2023","unstructured":"Xiangrong Zhu, Guangyao Li, and Wei Hu. 2023. Heterogeneous federated knowledge graph embedding learning and unlearning. In WWW, 2444\u20132454."},{"key":"e_1_3_2_259_2","doi-asserted-by":"publisher","unstructured":"James Zou and Londa Schiebinger. 2018. AI can be sexist and racist\u2014It\u2019s time to make it fair. Nature 559 (2018) 324\u2013326. 
DOI: 10.1038\/d41586-018-05707-8","DOI":"10.1038\/d41586-018-05707-8"},{"key":"e_1_3_2_260_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2025.3528551"}],"container-title":["ACM Transactions on Intelligent Systems and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3749987","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,2]],"date-time":"2026-03-02T18:46:43Z","timestamp":1772477203000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3749987"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,18]]},"references-count":259,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2025,10,31]]}},"alternative-id":["10.1145\/3749987"],"URL":"https:\/\/doi.org\/10.1145\/3749987","relation":{},"ISSN":["2157-6904","2157-6912"],"issn-type":[{"value":"2157-6904","type":"print"},{"value":"2157-6912","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,9,18]]},"assertion":[{"value":"2024-09-27","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-06-20","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-09-18","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}