{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T21:20:53Z","timestamp":1764969653555,"version":"3.46.0"},"reference-count":78,"publisher":"Association for Computing Machinery (ACM)","issue":"6","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:p>We seek to answer the question: what can a motion-blurred image reveal about a scene's past, present, and future? Although motion blur obscures image details and degrades visual quality, it also encodes information about scene and camera motion during an exposure. Previous techniques leverage this information to estimate a sharp image from an input blurry one, or to predict a sequence of video frames showing what might have occurred at the moment of image capture. However, they rely on handcrafted priors or network architectures to resolve ambiguities in this inverse problem, and do not incorporate image and video priors on large-scale datasets. As such, existing methods struggle to reproduce complex scene dynamics and do not attempt to recover what occurred before or after an image was taken. Here, we introduce a new technique that repurposes a pre-trained video diffusion model trained on internet-scale datasets to recover videos revealing complex scene dynamics during the moment of capture and what might have occurred immediately into the past or future. Our approach is robust and versatile; it outperforms previous methods for this task, generalizes to challenging in-the-wild images, and supports downstream tasks such as recovering camera trajectories, object motion, and dynamic 3D scene structure. 
Code and data are available at blur2vid.github.io<\/jats:p>","DOI":"10.1145\/3763306","type":"journal-article","created":{"date-parts":[[2025,12,4]],"date-time":"2025-12-04T17:15:39Z","timestamp":1764868539000},"page":"1-15","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Generating the Past, Present and Future from a Motion-Blurred Image"],"prefix":"10.1145","volume":"44","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2679-2881","authenticated-orcid":false,"given":"SaiKiran","family":"Tedla","sequence":"first","affiliation":[{"name":"York University, Toronto, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-1533-6167","authenticated-orcid":false,"given":"Kelly","family":"Zhu","sequence":"additional","affiliation":[{"name":"University of Toronto, Toronto, Canada"},{"name":"Vector Institute, Toronto, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4830-4786","authenticated-orcid":false,"given":"Trevor","family":"Canham","sequence":"additional","affiliation":[{"name":"York University, Toronto, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9717-7181","authenticated-orcid":false,"given":"Felix","family":"Taubner","sequence":"additional","affiliation":[{"name":"University of Toronto, Toronto, Canada"},{"name":"Vector Institute, Toronto, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9840-0795","authenticated-orcid":false,"given":"Michael S.","family":"Brown","sequence":"additional","affiliation":[{"name":"York University, Toronto, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5165-902X","authenticated-orcid":false,"given":"Kiriakos N.","family":"Kutulakos","sequence":"additional","affiliation":[{"name":"University of Toronto, Toronto, Canada"},{"name":"Vector Institute, Toronto, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6999-3958","authenticated-orcid":false,"given":"David B.","family":"Lindell","sequence":"additional","affiliation":[{"name":"University of 
Toronto, Toronto, Canada"},{"name":"Vector Institute, Toronto, Canada"}]}],"member":"320","published-online":{"date-parts":[[2025,12,4]]},"reference":[{"key":"e_1_2_2_1_1","unstructured":"Andreas Blattmann Tim Dockhorn Sumith Kulal Daniel Mendelevitch Maciej Kilian Dominik Lorenz Yam Levi Zion English Vikram Voleti Adam Letts et al. 2023a. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127 (2023)."},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.02161"},{"key":"e_1_2_2_3_1","first-page":"8","article-title":"Video generation models as world simulators","volume":"1","author":"Brooks Tim","year":"2024","unstructured":"Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, et al. 2024. Video generation models as world simulators. OpenAI Blog 1 (2024), 8.","journal-title":"OpenAI Blog"},{"key":"e_1_2_2_4_1","volume-title":"Coyo-700M: Image-text pair dataset. arXiv preprint arXiv:2303.03378","author":"Byeon M","year":"2022","unstructured":"M Byeon, B Park, H Kim, S Lee, W Baek, and S Coyo Kim. 2022. Coyo-700M: Image-text pair dataset. arXiv preprint arXiv:2303.03378 (2022)."},{"key":"e_1_2_2_5_1","volume-title":"Proc. CVPR.","author":"Chen Jiaben","year":"2024","unstructured":"Jiaben Chen and Huaizu Jiang. 2024. SportsSloMo: A new benchmark and baselines for human-centric video frame interpolation. In Proc. CVPR."},{"key":"e_1_2_2_6_1","volume-title":"Diffusion Image Prior. arXiv preprint arXiv:2503.21410","author":"Chihaoui Hamadi","year":"2025","unstructured":"Hamadi Chihaoui and Paolo Favaro. 2025. Diffusion Image Prior. arXiv preprint arXiv:2503.21410 (2025)."},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/1661412.1618491"},{"key":"e_1_2_2_8_1","volume-title":"Proc. 
ICLR.","author":"Chung Hyungjin","year":"2023","unstructured":"Hyungjin Chung, Jeongsol Kim, Michael McCann, Marc Klasky, and Jong Chul Ye. 2023. Diffusion Posterior Sampling for General Noisy Inverse Problems. In Proc. ICLR."},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/258734.258884"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/1179352.1141956"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/358669.358692"},{"volume-title":"Multiple view geometry in computer vision","author":"Hartley Richard","key":"e_1_2_2_12_1","unstructured":"Richard Hartley and Andrew Zisserman. 2003. Multiple view geometry in computer vision. Cambridge University Press."},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2011.6126276"},{"key":"e_1_2_2_14_1","volume-title":"Proc. NeurIPS.","author":"Ho Jonathan","year":"2020","unstructured":"Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising Diffusion Probabilistic Models. In Proc. NeurIPS."},{"key":"e_1_2_2_15_1","volume-title":"Proc. NeurIPS Workshop on Deep Generative Models and Downstream Applications.","author":"Ho Jonathan","year":"2021","unstructured":"Jonathan Ho and Tim Salimans. 2021. Classifier-Free Diffusion Guidance. In Proc. NeurIPS Workshop on Deep Generative Models and Downstream Applications."},{"key":"e_1_2_2_16_1","volume-title":"Proc. NeurIPS.","author":"Ho Jonathan","year":"2022","unstructured":"Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. 2022. Video diffusion models. In Proc. NeurIPS."},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00575"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00663"},{"key":"e_1_2_2_19_1","volume-title":"Proc. ECCV.","author":"Karaev Nikita","year":"2024","unstructured":"Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. 
2024. Cotracker: It is better to track together. In Proc. ECCV."},{"key":"e_1_2_2_20_1","volume-title":"Proc. NeurIPS.","author":"Kawar Bahjat","year":"2022","unstructured":"Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. 2022. Denoising diffusion restoration models. Proc. NeurIPS."},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2011.5995521"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/79.489268"},{"key":"e_1_2_2_23_1","volume-title":"VISION-XL: High Definition Video Inverse Problem Solver using Latent Image Diffusion Models. arXiv preprint arXiv:2412.00156","author":"Kwon Taesung","year":"2024","unstructured":"Taesung Kwon and Jong Chul Ye. 2024. VISION-XL: High Definition Video Inverse Problem Solver using Latent Image Diffusion Models. arXiv preprint arXiv:2412.00156 (2024)."},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206815"},{"key":"e_1_2_2_25_1","volume-title":"Affine-modeled video extraction from a single motion blurred image. arXiv preprint arXiv:2104.03777","author":"Li Daoyu","year":"2021","unstructured":"Daoyu Li, Liheng Bian, and Jun Zhang. 2021. Affine-modeled video extraction from a single motion blurred image. arXiv preprint arXiv:2104.03777 (2021)."},{"key":"e_1_2_2_26_1","volume-title":"Sora Generates Videos with Stunning Geometrical Consistency. arXiv preprint arXiv:2402.17403","author":"Li Xuanyi","year":"2024","unstructured":"Xuanyi Li, Daquan Zhou, Chenxu Zhang, Shaodong Wei, Qibin Hou, and Ming-Ming Cheng. 2024b. Sora Generates Videos with Stunning Geometrical Consistency. arXiv preprint arXiv:2402.17403 (2024)."},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52734.2025.00981"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.02279"},{"key":"e_1_2_2_29_1","volume-title":"Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models. 
arXiv preprint arxiv:2402.17177","author":"Liu Yixin","year":"2024","unstructured":"Yixin Liu, Kai Zhang, Yuan Li, Zhiling Yan, Chujie Gao, Ruoxi Chen, Zhengqing Yuan, Yue Huang, Hanchi Sun, Jianfeng Gao, Lifang He, and Lichao Sun. 2024. Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models. arXiv preprint arxiv:2402.17177 (2024)."},{"key":"e_1_2_2_30_1","volume-title":"Proc. ICLR.","author":"Loshchilov Ilya","year":"2019","unstructured":"Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. In Proc. ICLR."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1023\/B:VISI.0000029664.99615.94"},{"key":"e_1_2_2_32_1","volume-title":"Proc. ICLR.","author":"Lu Haoyu","year":"2024","unstructured":"Haoyu Lu, Guoxing Yang, Nanyi Fei, Yuqi Huo, Zhiwu Lu, Ping Luo, and Mingyu Ding. 2024. VDT: General-purpose Video Diffusion Transformers via Mask Modeling. In Proc. ICLR."},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10578-9_51"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3503250"},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2019.00251"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.35"},{"key":"e_1_2_2_37_1","volume-title":"Contiguous loss for motion-based, non-aligned image deblurring. Symmetry","author":"Niu Wenjia","year":"2021","unstructured":"Wenjia Niu, Kewen Xia, and Yongke Pan. 2021. Contiguous loss for motion-based, non-aligned image deblurring. Symmetry (2021)."},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-66709-6_6"},{"key":"e_1_2_2_39_1","volume-title":"Image Motion Blur Removal in the Temporal Dimension with Video Diffusion Models. arXiv preprint arXiv:2501.12604","author":"Pang Wang","year":"2025","unstructured":"Wang Pang, Zhihao Zhan, Xiang Zhu, and Yechao Bai. 2025. 
Image Motion Blur Removal in the Temporal Dimension with Video Diffusion Models. arXiv preprint arXiv:2501.12604 (2025)."},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00387"},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2015.2477819"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00949"},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00699"},{"key":"e_1_2_2_44_1","volume-title":"Proc. NeurIPS.","author":"Rout Litu","year":"2023","unstructured":"Litu Rout, Negin Raoof, Giannis Daras, Constantine Caramanis, Alex Dimakis, and Sanjay Shakkottai. 2023. Solving linear inverse problems provably via posterior sampling with latent diffusion models. Proc. NeurIPS."},{"key":"e_1_2_2_45_1","unstructured":"Inc. Runway AI. 2025. Runway Gen-4. https:\/\/runwayml.com\/research\/introducing-runway-gen-4. Accessed: 2025-05-17."},{"key":"e_1_2_2_46_1","volume-title":"Proc. NeurIPS.","author":"Saharia Chitwan","year":"2022","unstructured":"Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. In Proc. NeurIPS."},{"key":"e_1_2_2_47_1","volume-title":"Proc. NeurIPS.","author":"Schuhmann Christoph","year":"2022","unstructured":"Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. Proc. NeurIPS."},{"key":"e_1_2_2_48_1","doi-asserted-by":"crossref","unstructured":"Qi Shan Jiaya Jia and Aseem Agarwala. 2008. High-quality motion deblurring from a single image. ACM Trans. Graph. 
(2008).","DOI":"10.1145\/1399504.1360672"},{"key":"e_1_2_2_49_1","volume-title":"Learning temporally consistent video depth from video diffusion priors. arXiv preprint arXiv:2406.01493","author":"Shao Jiahao","year":"2024","unstructured":"Jiahao Shao, Yuanbo Yang, Hongyu Zhou, Youmin Zhang, Yujun Shen, Vitor Guizilini, Yue Wang, Matteo Poggi, and Yiyi Liao. 2024. Learning temporally consistent video depth from video diffusion priors. arXiv preprint arXiv:2406.01493 (2024)."},{"key":"e_1_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCE56470.2023.10043423"},{"key":"e_1_2_2_51_1","volume-title":"Proc. NeurIPS.","author":"Siarohin Aliaksandr","year":"2019","unstructured":"Aliaksandr Siarohin, St\u00e9phane Lathuili\u00e8re, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. 2019. First order motion model for image animation. Proc. NeurIPS."},{"key":"e_1_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201333"},{"key":"e_1_2_2_53_1","volume-title":"Proc. NeurIPS.","author":"Sohn Kihyuk","year":"2015","unstructured":"Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Proc. NeurIPS."},{"key":"e_1_2_2_54_1","volume-title":"Proc. ICLR.","author":"Song Jiaming","year":"2023","unstructured":"Jiaming Song, Arash Vahdat, Morteza Mardani, and Jan Kautz. 2023. Pseudoinverse-guided diffusion models for inverse problems. In Proc. ICLR."},{"key":"e_1_2_2_55_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2023.127063"},{"key":"e_1_2_2_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.33"},{"key":"e_1_2_2_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298677"},{"key":"e_1_2_2_58_1","volume-title":"Proc. ICLR.","author":"Sun Jingxiang","year":"2024","unstructured":"Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, and Liu Yebin. 2024. DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior. In Proc. 
ICLR."},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.00123"},{"key":"e_1_2_2_60_1","volume-title":"Proc. CVPR.","author":"Taubner Felix","year":"2025","unstructured":"Felix Taubner, Ruihang Zhang, Mathieu Tuli, and David B. Lindell. 2025. CAP4D: Creating Animatable 4D Portrait Avatars with Morphable Multi-View Diffusion Models. In Proc. CVPR."},{"key":"e_1_2_2_61_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58536-5_24"},{"key":"e_1_2_2_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3446791"},{"key":"e_1_2_2_63_1","volume-title":"Karol Kurach, Rapha\u00ebl Marinier, Marcin Michalski, and Sylvain Gelly.","author":"Unterthiner Thomas","year":"2019","unstructured":"Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Rapha\u00ebl Marinier, Marcin Michalski, and Sylvain Gelly. 2019. FVD: A new metric for video generation. (2019)."},{"key":"e_1_2_2_64_1","volume-title":"Proc. NeurIPS.","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NeurIPS."},{"key":"e_1_2_2_65_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.18"},{"key":"e_1_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.01813"},{"key":"e_1_2_2_67_1","volume-title":"Survey of Video Diffusion Models: Foundations, Implementations, and Applications. arXiv preprint arXiv:2504.16081","author":"Wang Yimu","year":"2025","unstructured":"Yimu Wang, Xuye Liu, Wei Pang, Li Ma, Shuai Yuan, Paul Debevec, and Ning Yu. 2025. Survey of Video Diffusion Models: Foundations, Implementations, and Applications. arXiv preprint arXiv:2504.16081 (2025)."},{"key":"e_1_2_2_68_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2003.819861"},{"key":"e_1_2_2_69_1","volume-title":"Single image super-resolution with denoising diffusion GANS. 
Scientific Reports 14, 4272","author":"Xiao Heng","year":"2024","unstructured":"Heng Xiao, Xin Wang, Jun Wang, Jing-Ye Cai, Jian-Hua Deng, Jing-Ke Yan, and Yi-Dong Tang. 2024b. Single image super-resolution with denoising diffusion GANS. Scientific Reports 14, 4272 (2024)."},{"key":"e_1_2_2_70_1","volume-title":"Proc. ICLR.","author":"Xiao Jie","year":"2024","unstructured":"Jie Xiao, Ruili Feng, Han Zhang, Zhiheng Liu, Zhantao Yang, Yurui Zhu, Xueyang Fu, Kai Zhu, Yu Liu, and Zheng-Jun Zha. 2024a. Dreamclean: Restoring clean image using deep diffusion prior. In Proc. ICLR."},{"key":"e_1_2_2_71_1","doi-asserted-by":"publisher","DOI":"10.1145\/3614425"},{"key":"e_1_2_2_72_1","volume-title":"Proc. ICLR.","author":"Yang Zhuoyi","year":"2025","unstructured":"Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. 2025. CogVideoX: Text-to-video diffusion models with an expert transformer. In Proc. ICLR."},{"key":"e_1_2_2_73_1","doi-asserted-by":"publisher","DOI":"10.1145\/3394171.3413929"},{"key":"e_1_2_2_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00068"},{"key":"e_1_2_2_75_1","volume-title":"Exposure trajectory recovery from motion blur","author":"Zhang Youjian","year":"2021","unstructured":"Youjian Zhang, Chaoyue Wang, Stephen J Maybank, and Dacheng Tao. 2021. Exposure trajectory recovery from motion blur. IEEE Trans. Pattern Anal. Mach. Intell. (2021)."},{"key":"e_1_2_2_76_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00553"},{"key":"e_1_2_2_77_1","volume-title":"Proc. ECCV.","author":"Zhong Zhihang","year":"2024","unstructured":"Zhihang Zhong, Gurunandan Krishnan, Xiao Sun, Yu Qiao, Sizhuo Ma, and Jian Wang. 2024. Clearer frames, anytime: Resolving velocity ambiguity in video frame interpolation. In Proc. 
ECCV."},{"key":"e_1_2_2_78_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-19800-7_35"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3763306","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T21:15:51Z","timestamp":1764969351000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3763306"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12]]},"references-count":78,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["10.1145\/3763306"],"URL":"https:\/\/doi.org\/10.1145\/3763306","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2025,12]]},"assertion":[{"value":"2025-05-24","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-08-09","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-12-04","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}