{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,7]],"date-time":"2026-02-07T09:57:28Z","timestamp":1770458248574,"version":"3.49.0"},"reference-count":51,"publisher":"Institution of Engineering and Technology (IET)","issue":"4","license":[{"start":{"date-parts":[[2021,1,5]],"date-time":"2021-01-05T00:00:00Z","timestamp":1609804800000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["ietresearch.onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["IET Image Processing"],"published-print":{"date-parts":[[2021,3]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>In the professional field of interior designing, sketch colouring is often a time\u2010consuming and vapidity task. The traditional neural network does not handle the semantic relationship of sketch lines well, and the colouring effect is unsatisfactory. This paper proposes visual\u2010attention generative adversarial network (VAGAN), which enhances the processing effect of edge semantics, strengthens the network to line edge recognition ability, as well as reduces colour overflow and improved model colouring result. In addition, a two\u2010stage training mode is used to simplify the training of rare samples. The simple line draft input into the trained VAGAN, output natural, realistic colour pictures. 
The experimental results show that, compared with existing methods, the proposed method handles sketch colourisation better and generates stable, reliable images.<\/jats:p>","DOI":"10.1049\/ipr2.12080","type":"journal-article","created":{"date-parts":[[2021,1,6]],"date-time":"2021-01-06T23:23:28Z","timestamp":1609975408000},"page":"997-1007","update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Visual\u2010attention GAN for interior sketch colourisation"],"prefix":"10.1049","volume":"15","author":[{"given":"Xinrong","family":"Li","sequence":"first","affiliation":[{"name":"School of Computer Science and Engineering Central South University  Changsha 410000 China"}]},{"given":"Hong","family":"Li","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering Central South University  Changsha 410000 China"}]},{"given":"Chiyu","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering Central South University  Changsha 410000 China"}]},{"given":"Xun","family":"Hu","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering Central South University  Changsha 410000 China"}]},{"given":"Wei","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Architecture Hunan University  Changsha 410000 China"}]}],"member":"265","published-online":{"date-parts":[[2021,1,5]]},"reference":[{"key":"e_1_2_8_2_1","doi-asserted-by":"crossref","unstructured":"Sangkloy P. et\u00a0al.:Scribbler: Controlling deep image synthesis with sketch and color.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.5400\u20135409(2017)","DOI":"10.1109\/CVPR.2017.723"},{"key":"e_1_2_8_3_1","doi-asserted-by":"crossref","unstructured":"Xian W. 
et\u00a0al.:Texturegan: Controlling deep image synthesis with texture patches.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 6(1) pp.8456\u20138465(2016)","DOI":"10.1109\/CVPR.2018.00882"},{"key":"e_1_2_8_4_1","doi-asserted-by":"crossref","unstructured":"Gatys L.A. et\u00a0al.:Image style transfer using convolutional neural networks.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.2414\u20132423(2016)","DOI":"10.1109\/CVPR.2016.265"},{"key":"e_1_2_8_5_1","unstructured":"Ulyanov D. et\u00a0al.:Texture networks: Feed\u2010forward synthesis of textures and stylized images. Proceedings of the 33rd International Conference on Machine Learning pp.1349\u20131357(2016)"},{"key":"e_1_2_8_6_1","unstructured":"Mirza M. Osindero S. Conditional generative adversarial nets. arXiv:1411.1784 (2014).https:\/\/arxiv.org\/pdf\/1610.07629.pdf"},{"key":"e_1_2_8_7_1","doi-asserted-by":"crossref","unstructured":"Isola P. et\u00a0al.:Image\u2010to\u2010image translation with conditional adversarial networks.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.1125\u20131134(2017)","DOI":"10.1109\/CVPR.2017.632"},{"key":"e_1_2_8_8_1","doi-asserted-by":"crossref","unstructured":"Zhu J.\u2010Y. et\u00a0al.:Unpaired image\u2010to\u2010image translation using cycle\u2010consistent adversarial networks.Proceedings of the IEEE International Conference on Computer Vision pp.2223\u20132232(2017)","DOI":"10.1109\/ICCV.2017.244"},{"key":"e_1_2_8_9_1","doi-asserted-by":"crossref","unstructured":"Johnson J. et\u00a0al.: Perceptual losses for real\u2010time style transfer and super\u2010resolution.European Conference on Computer Vision pp.694\u2013711(2016)","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"e_1_2_8_10_1","unstructured":"Dumoulin V. et\u00a0al.:A learned representation for artistic style. 
arXiv preprint arXiv:1610.07629 (2016).https:\/\/arxiv.org\/pdf\/1610.07629.pdf"},{"key":"e_1_2_8_11_1","doi-asserted-by":"crossref","unstructured":"Dosovitskiy A. et\u00a0al.:Learning to generate chairs with convolutional neural networks.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.1538\u20131546(2014)","DOI":"10.1109\/CVPR.2015.7298761"},{"key":"e_1_2_8_12_1","doi-asserted-by":"crossref","unstructured":"Li C. Wand M.: Combining Markov random fields and convolutional neural networks for image synthesis.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.2479\u20132486(2016)","DOI":"10.1109\/CVPR.2016.272"},{"key":"e_1_2_8_13_1","doi-asserted-by":"crossref","unstructured":"Li C. Wand M.: Precomputed real\u2010time texture synthesis with Markovian generative adversarial networks.European Conference on Computer Vision pp.702\u2013716(2016)","DOI":"10.1007\/978-3-319-46487-9_43"},{"key":"e_1_2_8_14_1","unstructured":"Chen T.Q. Schmidt M.:Fast patch\u2010based style transfer of arbitrary style. arXiv:1612.04337 (2016).https:\/\/arxiv.org\/pdf\/1612.04337.pdf"},{"key":"e_1_2_8_15_1","doi-asserted-by":"crossref","unstructured":"Ashikhmin M. Synthesizing natural textures.Proceedings of the 2001 Symposium on Interactive 3D Graphics pp.217\u2013226(2001)","DOI":"10.1145\/364338.364405"},{"key":"e_1_2_8_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/38.946629"},{"key":"e_1_2_8_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/566654.566576"},{"key":"e_1_2_8_18_1","unstructured":"Sermanet P. et\u00a0al.:Convolutional neural networks applied to house numbers digit classification. arXiv:1204.3968 (2012).https:\/\/arxiv.org\/pdf\/1204.3968.pdf"},{"key":"e_1_2_8_19_1","unstructured":"Gregor K. et\u00a0al.:Draw: A recurrent neural network for image generation. Computer Science pp.1462\u20131471 (2015)"},{"key":"e_1_2_8_20_1","doi-asserted-by":"crossref","unstructured":"Cheng Z. 
et\u00a0al.:Deep colorization.Proceedings of the IEEE International Conference on Computer Vision pp.415\u2013423(2016)","DOI":"10.1109\/ICCV.2015.55"},{"key":"e_1_2_8_21_1","unstructured":"Simonyan K. Zisserman A. Very deep convolutional networks for large\u2010scale image recognition. arXiv:1409.1556 (2014).https:\/\/arxiv.org\/pdf\/1409.1556.pdf"},{"key":"e_1_2_8_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925974"},{"key":"e_1_2_8_23_1","doi-asserted-by":"crossref","unstructured":"Yao Y. et\u00a0al.:Attention\u2010aware multi\u2010stroke style transfer.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.1467\u20131475(2019)","DOI":"10.1109\/CVPR.2019.00156"},{"key":"e_1_2_8_24_1","doi-asserted-by":"crossref","unstructured":"Park D.Y. Lee K.H. Arbitrary style transfer with style\u2010attentional networks.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.5880\u20135888(2019)","DOI":"10.1109\/CVPR.2019.00603"},{"key":"e_1_2_8_25_1","unstructured":"Goodfellow I. et\u00a0al.: Generative adversarial nets.International Conference on Neural Information Processing Systems pp.2672\u20132680(2014)"},{"key":"e_1_2_8_26_1","unstructured":"Radford A. et\u00a0al.:Unsupervised representation learning with deep convolutional generative adversarial networks. International Conference on Image and Graphics pp.97\u2013108(2017)"},{"key":"e_1_2_8_27_1","doi-asserted-by":"crossref","unstructured":"Hensman P. 
Aizawa K.:cGAN\u2010based manga colorization using a single training image.14th IAPR International Conference on Document Analysis and Recognition pp.72\u201377(2017)","DOI":"10.1109\/ICDAR.2017.295"},{"key":"e_1_2_8_28_1","first-page":"1","article-title":"Comicolorization: Semi\u2010automatic manga colorization","volume":"12","author":"Furusawa C.","year":"2017","journal-title":"SIGGRAPH Asia 2017 Technical Briefs"},{"key":"e_1_2_8_29_1","first-page":"261:1\u2013261:14","article-title":"Two\u2010stage sketch colorization","volume":"37","author":"Zhang L.","year":"2018","journal-title":"ACM Trans. Graphics"},{"key":"e_1_2_8_30_1","doi-asserted-by":"crossref","unstructured":"Tang H. et\u00a0al.:Multi\u2010channel attention selection gan with cascaded semantic guidance for cross\u2010view image translation.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.2417\u20132426(2019)","DOI":"10.1109\/CVPR.2019.00252"},{"key":"e_1_2_8_31_1","doi-asserted-by":"crossref","unstructured":"Sudhakaran S. et\u00a0al.:Lsta: Long short\u2010term attention for egocentric action recognition.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.9954\u20139963(2018)","DOI":"10.1109\/CVPR.2019.01019"},{"key":"e_1_2_8_32_1","doi-asserted-by":"crossref","unstructured":"Zhao T. Wu X.:Pyramid feature attention network for saliency detection.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.3085\u20133094(2019)","DOI":"10.1109\/CVPR.2019.00320"},{"key":"e_1_2_8_33_1","doi-asserted-by":"crossref","unstructured":"Wang W. et\u00a0al.:Salient object detection with pyramid attention and salient edges.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.1448\u20131457(2019)","DOI":"10.1109\/CVPR.2019.00154"},{"key":"e_1_2_8_34_1","doi-asserted-by":"crossref","unstructured":"Fan D.\u2010P. 
et\u00a0al.:Shifting more attention to video salient object detection.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.8554\u20138564(2019)","DOI":"10.1109\/CVPR.2019.00875"},{"key":"e_1_2_8_35_1","doi-asserted-by":"crossref","unstructured":"Guo H. et\u00a0al.:Visual attention consistency under image transforms for multi\u2010label image classification.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.729\u2013739(2019)","DOI":"10.1109\/CVPR.2019.00082"},{"key":"e_1_2_8_36_1","doi-asserted-by":"crossref","unstructured":"Fu J. et\u00a0al.:Dual attention network for scene segmentation.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.3146\u20133154(2019)","DOI":"10.1109\/CVPR.2019.00326"},{"key":"e_1_2_8_37_1","doi-asserted-by":"crossref","unstructured":"Li Y. et\u00a0al.:Attention\u2010guided unified network for panoptic segmentation.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.7026\u20137035(2019)","DOI":"10.1109\/CVPR.2019.00719"},{"key":"e_1_2_8_38_1","doi-asserted-by":"crossref","unstructured":"Riaz Muhammad U. et\u00a0al.:Learning deep sketch abstraction.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.8014\u20138023(2018)","DOI":"10.1109\/CVPR.2018.00836"},{"key":"e_1_2_8_39_1","doi-asserted-by":"crossref","unstructured":"Shen Y. et\u00a0al.:Zero\u2010shot sketch\u2010image hashing.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.3598\u20133607(2018)","DOI":"10.1109\/CVPR.2018.00379"},{"key":"e_1_2_8_40_1","doi-asserted-by":"crossref","unstructured":"Hu C. et\u00a0al.:Sketch\u2010a\u2010classifier: Sketch\u2010based photo classifier generation.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.9136\u20139144(2018)","DOI":"10.1109\/CVPR.2018.00952"},{"key":"e_1_2_8_41_1","doi-asserted-by":"crossref","unstructured":"Dutta A. 
Akata Z.:Semantically tied paired cycle consistency for zero\u2010shot sketch\u2010based image retrieval.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.5089\u20135098(2019)","DOI":"10.1109\/CVPR.2019.00523"},{"key":"e_1_2_8_42_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-019-04242-5"},{"key":"e_1_2_8_43_1","doi-asserted-by":"crossref","unstructured":"Xu P. et\u00a0al.:Sketchmate: Deep hashing for million\u2010scale human sketch retrieval.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.8090\u20138098(2018)","DOI":"10.1109\/CVPR.2018.00844"},{"key":"e_1_2_8_44_1","doi-asserted-by":"crossref","unstructured":"Yi R. et\u00a0al.:APDrawingGAN: Generating artistic portrait drawings from face photos with hierarchical GANs.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.10743\u201310752(2019)","DOI":"10.1109\/CVPR.2019.01100"},{"key":"e_1_2_8_45_1","doi-asserted-by":"crossref","unstructured":"Chen W. Hays J.:Sketchygan: Towards diverse and realistic sketch to image synthesis.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.9416\u20139425(2018)","DOI":"10.1109\/CVPR.2018.00981"},{"key":"e_1_2_8_46_1","doi-asserted-by":"crossref","unstructured":"Ronneberger O. et\u00a0al.: Convolutional networks for biomedical image segmentation.International Conference on Medical Image Computing and Computer\u2010Assisted Intervention. Springer Cham pp.234\u2013241(2015)","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"e_1_2_8_47_1","unstructured":"Ng A.Y.:Feature selection L 1 vs. 
L 2 regularization and rotational invariance.Proceedings of the 21st International Conference on Machine Learning pp.78\u201379(2014)"},{"key":"e_1_2_8_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2009.191"},{"key":"e_1_2_8_49_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0031-3203(00)00023-6"},{"key":"e_1_2_8_50_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4842-2766-4_12"},{"key":"e_1_2_8_51_1","unstructured":"Kingma D.P. Ba J.:Adam: A method for stochastic optimization. arXiv:1412.6980 (2014).https:\/\/arxiv.org\/pdf\/1412.6980.pdf"},{"key":"e_1_2_8_52_1","doi-asserted-by":"crossref","unstructured":"Cordts M. et\u00a0al.:The cityscapes dataset for semantic urban scene understanding.Proceedings of the IEEE Conference on Computer Vision and Pattern Recognitionpp.3213\u20133223(2016)","DOI":"10.1109\/CVPR.2016.350"}],"container-title":["IET Image Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1049\/ipr2.12080","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/full-xml\/10.1049\/ipr2.12080","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/pdf\/10.1049\/ipr2.12080","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,28]],"date-time":"2025-10-28T06:15:40Z","timestamp":1761632140000},"score":1,"resource":{"primary":{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/10.1049\/ipr2.12080"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,1,5]]},"references-count":51,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2021,3]]}},"alternative-id":["10.1049\/ipr2.12080"],"URL":"https:\/\/doi.org\/10.1049\/ipr2.12080","archive":["Portico"],"relation":{},"ISSN":["1751-9659","1
751-9667"],"issn-type":[{"value":"1751-9659","type":"print"},{"value":"1751-9667","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,1,5]]},"assertion":[{"value":"2019-07-02","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-09-14","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-01-05","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}