{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T16:35:01Z","timestamp":1773246901825,"version":"3.50.1"},"reference-count":51,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2023,5,17]],"date-time":"2023-05-17T00:00:00Z","timestamp":1684281600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Des. Autom. Electron. Syst."],"published-print":{"date-parts":[[2023,7,31]]},"abstract":"<jats:p>\n            The objective of a leakage recovery step is to make use of positive slack and reduce power by performing appropriate standard-cell swaps such as threshold-voltage (\n            <jats:italic>\n              V\n              <jats:sub>th<\/jats:sub>\n            <\/jats:italic>\n            ) or channel-length reassignments. The resulting engineering change order netlist needs to be timing clean. Because this recovery step is performed several times in a physical design flow and involves long runtimes and high tool-license usage, previous works have proposed graph neural network\u2013based frameworks that restrict feature aggregation to three-hop neighborhoods and do not fully consider the directed nature of netlist graphs. As a result, the intermediate node embeddings do not capture the complete structure of the timing graph. In this article, we propose\n            <jats:italic>DAGSizer<\/jats:italic>\n            , a framework that exploits the\n            <jats:italic>directed acyclic<\/jats:italic>\n            nature of timing graphs to predict cell reassignments in the discrete\n            <jats:italic>gate sizing<\/jats:italic>\n            task. 
Our DAGSizer (Sizer for DAGs) framework is based on a node ordering-aware recurrent message-passing scheme for generating the latent node embeddings. The generated node embeddings absorb the complete information from the fanin cone (predecessors) of the node. To capture the fanout information into the node embeddings, we enable a bidirectional message-passing mechanism. The concatenated latent node embeddings from the forward and reverse graphs are then translated to nodewise delta-delay predictions using a\n            <jats:italic>teacher sampling<\/jats:italic>\n            mechanism. With eight possible cell assignments, the experimental results demonstrate that our model can accurately estimate design-level leakage recovery with an absolute relative error \u03b5\n            <jats:sub>\n              <jats:italic>model<\/jats:italic>\n            <\/jats:sub>\n            under 5.4%. Compared to our previous work, GRA-LPO, we also demonstrate a significant improvement in the model mean squared error.\n          <\/jats:p>","DOI":"10.1145\/3577019","type":"journal-article","created":{"date-parts":[[2022,12,16]],"date-time":"2022-12-16T14:38:25Z","timestamp":1671201505000},"page":"1-31","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["DAGSizer: A Directed Graph Convolutional Network Approach to Discrete Gate Sizing of VLSI Graphs"],"prefix":"10.1145","volume":"28","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9865-8390","authenticated-orcid":false,"given":"Chung-Kuan","family":"Cheng","sequence":"first","affiliation":[{"name":"University of California, San Diego, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8548-4539","authenticated-orcid":false,"given":"Chester","family":"Holtz","sequence":"additional","affiliation":[{"name":"University of California, San Diego, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4490-5018","authenticated-orcid":false,"given":"Andrew 
B.","family":"Kahng","sequence":"additional","affiliation":[{"name":"University of California, San Diego, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0965-7247","authenticated-orcid":false,"given":"Bill","family":"Lin","sequence":"additional","affiliation":[{"name":"University of California, San Diego, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3061-9392","authenticated-orcid":false,"given":"Uday","family":"Mallappa","sequence":"additional","affiliation":[{"name":"University of California, San Diego, USA"}]}],"member":"320","published-online":{"date-parts":[[2023,5,17]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"OpenCores. Retrieved from https:\/\/opencores.org."},{"key":"e_1_3_2_3_2","unstructured":"IWLS. 2005. IWLS 2005 Benchmarks. Retrieved from https:\/\/iwls.org\/iwls2005\/benchmarks.html."},{"key":"e_1_3_2_4_2","unstructured":"S. Bao. 2010. Optimizing Leakage Power Using Machine Learning. Retrieved from http:\/\/cs229.stanford.edu\/proj2010\/Bao_OptimizingLeakagePowerUsingMachineLearning.pdf."},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.5555\/2969239.2969370"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/EDAC.1990.136648"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/43.771182"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/LPE.2005.195505"},{"key":"e_1_3_2_9_2","unstructured":"KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv:1409.1259. 
Retrieved from http:\/\/arxiv.org\/abs\/1409.1259."},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1145\/1120725.1120881"},{"key":"e_1_3_2_11_2","volume-title":"Proceedings of the Workshop on Deep Learning at the Conference and Workshop on Neural Information Processing Systems (NIPS\u201914)","author":"Chung Junyoung","year":"2014","unstructured":"Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Proceedings of the Workshop on Deep Learning at the Conference and Workshop on Neural Information Processing Systems (NIPS\u201914)."},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/IWSOC.2005.23"},{"key":"e_1_3_2_13_2","volume-title":"Advances in Neural Information Processing Systems","author":"Duvenaud David K.","year":"2015","unstructured":"David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P. Adams. 2015. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (Eds.), Vol. 28. Curran Associates, Inc."},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.vlsi.2019.01.008"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2019.2935053"},{"key":"e_1_3_2_16_2","first-page":"326","volume-title":"Proceedings of the IEEE International Conference on Computer-Aided Design (ICCAD","author":"Fishburn John P.","year":"1985","unstructured":"John P. Fishburn and Alfred E. Dunlop. 1985. TILOS: A posynomial programming approach to transistor sizing. In Proceedings of the IEEE International Conference on Computer-Aided Design (ICCAD\u201985). 
326\u2013328."},{"key":"e_1_3_2_17_2","volume-title":"Advances in Neural Information Processing Systems","author":"Hamilton Will","year":"2017","unstructured":"Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/DATE.2009.5090777"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/2429384.2429428"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1145\/1278480.1278690"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/1960397.1960436"},{"key":"e_1_3_2_23_2","first-page":"5308","article-title":"Structural-RNN: Deep learning on spatio-temporal graphs","author":"Jain Ashesh","year":"2016","unstructured":"Ashesh Jain, Amir Roshan Zamir, Silvio Savarese, and Ashutosh Saxena. 2016. Structural-RNN: Deep learning on spatio-temporal graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR\u201916) (2016), 5308\u20135317.","journal-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR\u201916)"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISQED.2009.4810282"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1137\/S1064827595287997"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00110"},{"key":"e_1_3_2_27_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201917)","author":"Kipf Thomas N.","year":"2017","unstructured":"Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. 
In Proceedings of the International Conference on Learning Representations (ICLR\u201917)."},{"key":"e_1_3_2_28_2","unstructured":"A. Kl\u00f6ckner. 2022. PyMetis: A Python Wrapper for METIS. Retrieved from https:\/\/github.com\/inducer\/pymetis."},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3386263.3406916"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/2429384.2429427"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2009.2035575"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1145\/2647956"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3400302.3415711"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394885.3431574"},{"key":"e_1_3_2_35_2","first-page":"265","volume-title":"Proceedings of Machine Learning and Systems","volume":"4","author":"Mostafa Hesham","year":"2022","unstructured":"Hesham Mostafa. 2022. Sequential aggregation and rematerialization: Distributed full-batch training of graph neural networks on large graphs. In Proceedings of Machine Learning and Systems, D. Marculescu, Y. Chi, and C. Wu (Eds.), Vol. 4. 
265\u2013275."},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1145\/3489517.3530645"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/43.503929"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/43.766722"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/2160916.2160950"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCAD.2011.6105409"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/DATE.2012.6176440"},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/DATE.2011.5763293"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCAS.2013.6572398"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/2744769.2744885"},{"key":"e_1_3_2_45_2","first-page":"3620","article-title":"DAG-recurrent neural networks for scene labeling","author":"Shuai Bing","year":"2016","unstructured":"Bing Shuai, Zhen Zuo, Bing Wang, and G. Wang. 2016. DAG-recurrent neural networks for scene labeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR\u201916). 3620\u20133629.","journal-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR\u201916)"},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/996566.996777"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/P15-1150"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCAD.2002.1167564"},{"key":"e_1_3_2_49_2","volume-title":"International Conference on Learning Representations","author":"Thost Veronika","year":"2021","unstructured":"Veronika Thost and Jie Chen. 2021. Directed acyclic graph neural networks. 
In International Conference on Learning Representations."},{"key":"e_1_3_2_50_2","volume-title":"International Conference on Learning Representations","author":"Veli\u010dkovi\u0107 Petar","year":"2018","unstructured":"Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations."},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/ASP-DAC52403.2022.9712486"},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2009.2028682"}],"container-title":["ACM Transactions on Design Automation of Electronic Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3577019","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3577019","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:51:11Z","timestamp":1750182671000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3577019"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,17]]},"references-count":51,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,7,31]]}},"alternative-id":["10.1145\/3577019"],"URL":"https:\/\/doi.org\/10.1145\/3577019","relation":{},"ISSN":["1084-4309","1557-7309"],"issn-type":[{"value":"1084-4309","type":"print"},{"value":"1557-7309","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5,17]]},"assertion":[{"value":"2022-05-03","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-12-10","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2023-05-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}