TIARA: Multi-grained Retrieval for Robust Question Answering over Large Knowledge Bases
Published in EMNLP, 2022
[PAPER]
Pre-trained language models (PLMs) have shown their effectiveness in multiple scenarios. However, question answering over knowledge bases (KBQA) remains challenging, especially regarding coverage and generalization settings. This is due to two main factors: i) understanding the semantics of both questions and relevant knowledge from the KB; ii) generating executable logical forms with both semantic and syntactic correctness. In this paper, we present a new KBQA model, TIARA, which addresses these issues by applying multi-grained retrieval to help the PLM focus on the most relevant KB contexts, viz., entities, exemplary logical forms, and schema items. Moreover, constrained decoding is used to control the output space and reduce generation errors. Experiments over important benchmarks demonstrate the effectiveness of our approach. TIARA outperforms previous state-of-the-art approaches, including those using PLMs or oracle entity annotations, by at least 4.1 and 1.1 F1 points on GrailQA and WebQuestionsSP, respectively. Specifically, on GrailQA, TIARA outperforms previous models in all categories, with an improvement of 4.7 F1 points in zero-shot generalization.
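The constrained-decoding idea can be illustrated with a minimal sketch (not the authors' implementation): Hugging Face Transformers' `prefix_allowed_tokens_fn` hook restricts each beam-search step to tokens that keep the partial output syntactically valid. The `allowed_next_tokens` function and its toy operator list below are hypothetical stand-ins for TIARA's actual S-expression grammar and retrieved schema items.

```python
# Sketch of grammar-constrained decoding with Hugging Face Transformers.
# Assumptions: a seq2seq PLM (t5-base here) fine-tuned to emit logical forms;
# the toy grammar rule is hypothetical, not TIARA's real checker.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def allowed_next_tokens(batch_id, input_ids):
    # Decode the partial hypothesis and decide which tokens may follow.
    prefix = tokenizer.decode(input_ids, skip_special_tokens=True)
    # Hypothetical rule: right after an opening parenthesis, only allow
    # operator keywords (a real system would also allow retrieved schema items).
    if prefix.rstrip().endswith("("):
        candidates = ["AND", "JOIN", "ARGMAX", "COUNT"]
        ids = []
        for word in candidates:
            ids.extend(tokenizer.encode(word, add_special_tokens=False))
        return ids
    # Otherwise fall back to the full vocabulary.
    return list(range(len(tokenizer)))

question = "translate question to s-expression: what team does messi play for"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=4,
    max_new_tokens=64,
    prefix_allowed_tokens_fn=allowed_next_tokens,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At each step the hook prunes the beam to grammatically admissible continuations, which is the mechanism the paper uses to reduce syntactic generation errors; TIARA's actual constraints are defined over its S-expression grammar and the multi-grained retrieved contexts rather than this toy rule.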
Cite
@article{shu2022tiara,
  title={TIARA: Multi-grained Retrieval for Robust Question Answering over Large Knowledge Bases},
  author={Shu, Yiheng and Yu, Zhiwei and Li, Yuhan and Karlsson, B{\"o}rje F and Ma, Tingting and Qu, Yuzhong and Lin, Chin-Yew},
  journal={arXiv preprint arXiv:2210.12925},
  year={2022}
}