Abstract—Abduction is inference to the best explanation.
While abduction has long been considered a promising
framework for natural language processing (NLP), its
computational complexity has hindered its application to
practical NLP problems. In this paper, we propose a method
that pre-estimates the semantic relatedness between predicates
and uses that information to boost the efficiency of first-order
abductive reasoning. The proposed method uses the estimated
semantic relatedness as follows: (i) to block inferences leading
to explanations that are semantically irrelevant to the
observations, and (ii) to cluster semantically related
observations so as to split the task of abduction into a set of
mutually independent subproblems that can be solved in parallel.
Our experiment with a large-scale knowledge base on a real-life
NLP task shows that the proposed method drastically reduces
the size of the search space and significantly improves the
computational efficiency of first-order abductive reasoning
compared with a state-of-the-art system.
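To make the two uses of pre-estimated relatedness concrete, the sketch below illustrates them in Python. It is a minimal illustration under assumed interfaces, not the system described in the paper: the relatedness table, the threshold theta, and all function names are hypothetical stand-ins for the pre-computed predicate-relatedness information.

    from itertools import combinations

    # Hypothetical pre-computed table mapping predicate pairs to a
    # relatedness score in [0, 1]; a stand-in for the pre-estimation step.
    def relatedness(p, q, table):
        return table.get((p, q), table.get((q, p), 0.0))

    # Use (i): block an inference step whose hypothesized predicate is
    # semantically unrelated to every observed predicate.
    def is_relevant(hypothesized, observations, table, theta=0.3):
        return any(relatedness(hypothesized, o, table) >= theta
                   for o in observations)

    # Use (ii): cluster observations into connected components under the
    # "related enough" relation; each component is an independent
    # subproblem of abduction that can be solved in parallel.
    def cluster_observations(observations, table, theta=0.3):
        parent = {o: o for o in observations}

        def find(x):  # union-find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for a, b in combinations(observations, 2):
            if relatedness(a, b, table) >= theta:
                parent[find(a)] = find(b)
        clusters = {}
        for o in observations:
            clusters.setdefault(find(o), []).append(o)
        return list(clusters.values())

In this sketch, is_relevant would gate each candidate backward-chaining step, and each cluster returned by cluster_observations would be handed to the abductive reasoner as a separate problem instance.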
Index Terms—Natural language processing, logical inference,
abduction.
Kazeto Yamamoto, Naoya Inoue, and Kentaro Inui are with Tohoku
University, Japan (e-mail: {kazeto,naoya-i,inui}@cl.ecei.tohoku.ac.jp).
Yuki Arase is with Osaka University, Japan (e-mail:
arase@ist.osaka-u.ac.jp).
Jun’ichi Tsujii is with Microsoft Research Asia, China (e-mail:
jtsujii@microsoft.com).