Abstract
Visual question answering aims to answer a natural language question about a given image. Existing graph-based methods focus only on the relations between objects in an image and neglect the importance of the syntactic dependency relations between words in a question. To simultaneously capture the relations between objects in an image and the syntactic dependency relations between words in a question, we propose a novel dual-channel graph convolutional network (DC-GCN) that better combines visual and textual information. The DC-GCN model consists of three parts: an I-GCN module to capture the relations between objects in an image, a Q-GCN module to capture the syntactic dependency relations between words in a question, and an attention alignment module to align image representations and question representations. Experimental results show that our model achieves performance comparable to state-of-the-art approaches.
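Both graph channels described above rest on a standard graph-convolution step: each node (an object region in the I-GCN, a question word in the Q-GCN) aggregates its neighbours over an adjacency structure and is then projected by a learned weight matrix. The sketch below is a minimal plain-Python illustration of one such layer, not the authors' implementation; the toy dependency graph, feature vectors, and identity weight matrix are illustrative assumptions.

```python
def matmul(a, b):
    """Plain-Python matrix multiply (zip(*b) iterates columns of b)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def gcn_layer(adj, feats, weight):
    """One GCN layer: H' = ReLU(D^-1 (A + I) H W).

    Mean-aggregates each node's neighbourhood (with a self-loop so a node
    keeps its own features), then applies a linear projection and ReLU.
    """
    n = len(adj)
    # Add self-loops.
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    # Row-normalise so aggregation is a neighbourhood mean.
    norm = [[v / sum(row) for v in row] for row in a_hat]
    h = matmul(matmul(norm, feats), weight)
    return [[max(0.0, v) for v in row] for row in h]

# Toy example: three question words with dependency arcs 0-1 and 1-2,
# e.g. the parse edges of a short question (hypothetical numbers).
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [[1.0, 0.0],
         [0.0, 1.0],
         [1.0, 1.0]]
weight = [[1.0, 0.0],
          [0.0, 1.0]]  # identity projection, for clarity
out = gcn_layer(adj, feats, weight)
# Node 0 becomes the mean of nodes 0 and 1: [0.5, 0.5]
```

Stacking such layers lets information flow along longer dependency paths in the question, mirroring how the I-GCN propagates features along object-relation edges in the image.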
Original language | English
---|---
Title of host publication | Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)
Publisher | Association for Computational Linguistics (ACL)
Pages | 7166-7176
Number of pages | 11
Publication status | Published - 5 Jul 2020