{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T23:35:54Z","timestamp":1761176154014,"version":"build-2065373602"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643686318","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T00:00:00Z","timestamp":1761004800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,10,21]]},"abstract":"<jats:p>Multi-task learning has demonstrated remarkable success across a broad spectrum of natural language processing tasks, particularly with neural network-based methods. Despite these advances, a fundamental gap remains in explaining the relationship between task-relatedness and model effectiveness. To address this issue, we propose a novel approach for implicitly modeling task-relatedness by leveraging a quantum physical mathematical framework. In this paper, we introduce a complex-valued neural network designed to encapsulate and analyze task-relatedness. Within this framework, sentences originating from diverse tasks are encoded as mixed quantum systems, represented on a meticulously defined Semantic Hilbert Space. This allows the network to interpret inter-task relationships through the explicit physical semantics of well-constrained components grounded in quantum probability theory. By adhering to these rigorous principles, our model not only establishes a robust method for quantifying task-relatedness but also fosters a deeper, self-explanatory understanding of the underlying processes. To validate the efficacy of our approach, we conducted extensive experiments across five benchmark text classification tasks. The results demonstrate both the superior performance and the interpretability of the proposed model, highlighting its potential as a self-explanatory system for multi-task learning in NLP.<\/jats:p>","DOI":"10.3233\/faia250938","type":"book-chapter","created":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:46:33Z","timestamp":1761126393000},"source":"Crossref","is-referenced-by-count":0,"title":["An Interpretable Quantum-Inspired Model for Multi-Task Natural Language Understanding"],"prefix":"10.3233","author":[{"given":"Peng","family":"Lu","sequence":"first","affiliation":[{"name":"Universit\u00e9 de Montr\u00e9al"}]},{"given":"Jerry","family":"Huang","sequence":"additional","affiliation":[{"name":"Universit\u00e9 de Montr\u00e9al"},{"name":"Mila - Quebec AI Institute"}]},{"given":"Xinyu","family":"Wang","sequence":"additional","affiliation":[{"name":"McGill University"}]},{"given":"Philippe","family":"Langlais","sequence":"additional","affiliation":[{"name":"Universit\u00e9 de Montr\u00e9al"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2025"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA250938","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:46:33Z","timestamp":1761126393000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA250938"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,21]]},"ISBN":["9781643686318"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia250938","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,21]]}}}