{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,13]],"date-time":"2026-03-13T14:56:26Z","timestamp":1773413786077,"version":"3.50.1"},"reference-count":31,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2022,9,1]],"date-time":"2022-09-01T00:00:00Z","timestamp":1661990400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Archit. Code Optim."],"published-print":{"date-parts":[[2022,9,30]]},"abstract":"<jats:p>\n            Deep learning is highly pervasive in today's data-intensive era. In particular, convolutional neural networks (CNNs) are being widely adopted in a variety of fields for superior accuracy. However, computing deep CNNs on traditional CPUs and GPUs brings several performance and energy pitfalls. Several novel approaches based on ASIC, FPGA, and resistive-memory devices have been recently demonstrated with promising results. Most of them target only the inference (testing) phase of deep learning. There have been very limited attempts to design a full-fledged deep learning accelerator capable of both training and inference. It is due to the highly compute- and memory-intensive nature of the training phase. In this article, we propose\n            <jats:italic>LiteCON<\/jats:italic>\n            , a novel analog photonics CNN accelerator.\n            <jats:italic>LiteCON<\/jats:italic>\n            uses silicon microdisk-based convolution, memristor-based memory, and dense-wavelength-division-multiplexing for energy-efficient and ultrafast deep learning. We evaluate\n            <jats:italic>LiteCON<\/jats:italic>\n            using a commercial CAD framework (IPKISS) on deep learning benchmark models including LeNet and VGG-Net. 
Compared to the state of the art,\n            <jats:italic>LiteCON<\/jats:italic>\n            improves the CNN throughput, energy efficiency, and computational efficiency by up to 32\u00d7, 37\u00d7, and 5\u00d7, respectively, with trivial accuracy degradation.\n          <\/jats:p>","DOI":"10.1145\/3531226","type":"journal-article","created":{"date-parts":[[2022,6,28]],"date-time":"2022-06-28T13:15:24Z","timestamp":1656422124000},"page":"1-22","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":7,"title":["<i>LiteCON<\/i>\n            : An All-photonic Neuromorphic Accelerator for Energy-efficient Deep Learning"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3802-381X","authenticated-orcid":false,"given":"Dharanidhar","family":"Dang","sequence":"first","affiliation":[{"name":"University of California, San Diego, CA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0965-7247","authenticated-orcid":false,"given":"Bill","family":"Lin","sequence":"additional","affiliation":[{"name":"University of California, San Diego, CA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2329-8228","authenticated-orcid":false,"given":"Debashis","family":"Sahoo","sequence":"additional","affiliation":[{"name":"University of California, San Diego, CA"}]}],"member":"320","published-online":{"date-parts":[[2022,9]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-019-52580-0"},{"key":"e_1_3_1_3_2","volume-title":"Proceedings of the International Conference on Neural Information Processing Systems (NIPS\u201912)","author":"Krizhevsky Alex","year":"2012","unstructured":"Alex Krizhevsky, I. Sutskever, and G. E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. 
In Proceedings of the International Conference on Neural Information Processing Systems (NIPS\u201912)."},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2014.58"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/2966986.2967011"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001139"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2017.55"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2016.00333"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1145\/2897937.2898010"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNN.2011.2161771"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1038\/nphoton.2017.93"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/HiPC.2017.00022"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-017-07754-z"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1126\/science.aat8084"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSTQE.2018.2836955"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1038\/srep20215"},{"key":"e_1_3_1_17_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201915)","author":"Simonyan K.","year":"2015","unstructured":"K. Simonyan and A. Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR\u201915)."},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-015-0816-y"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_3_1_20_2","unstructured":"IPKISS-Photonic Framework. 2018. 
Retrieved from www.lucedaphotonics.com."},{"key":"e_1_3_1_21_2","first-page":"265","volume-title":"Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI\u201916)","author":"Abadi Martin","year":"2016","unstructured":"Martin Abadi et al. 2016. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI\u201916). USENIX Association, 265\u2013283."},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1155\/2020\/6661022"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41566-020-00754-y"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/DAC18072.2020.9218560"},{"key":"e_1_3_1_25_2","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR\u201915)","author":"Szegedy Christian","unstructured":"Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR\u201915)."},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295349"},{"key":"e_1_3_1_27_2","first-page":"4171","volume-title":"Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT\u201919)","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT\u201919). 
4171\u20134186."},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2019.01.012"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.23919\/DATE48585.2020.9116494"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCAS.2011.5937494"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSTQE.2020.2982990"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/SOCC.2018.8618542"}],"container-title":["ACM Transactions on Architecture and Code Optimization"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3531226","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3531226","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:31:31Z","timestamp":1750188691000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3531226"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9]]},"references-count":31,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,9,30]]}},"alternative-id":["10.1145\/3531226"],"URL":"https:\/\/doi.org\/10.1145\/3531226","relation":{},"ISSN":["1544-3566","1544-3973"],"issn-type":[{"value":"1544-3566","type":"print"},{"value":"1544-3973","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,9]]},"assertion":[{"value":"2021-07-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-04-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-09-01","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}