ChainsofReasoning

Code for the paper "Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks".

Abstract

Our goal is to combine the rich multi-step inference of symbolic logical reasoning with the generalization capabilities of neural networks. We are particularly interested in complex reasoning about entities and relations in text and large-scale knowledge bases (KBs). This paper proposes three significant modeling advances: (1) we learn to jointly reason about relations, entities, and entity-types; (2) we use neural attention modeling to incorporate multiple paths; (3) we learn to share strength in a single RNN that represents logical composition across all relations. On a large-scale Freebase+ClueWeb prediction task, we achieve a 25% error reduction, and a 53% error reduction on sparse relations due to shared strength. On chains of reasoning in WordNet we reduce error in mean quantile by 84% versus previous state-of-the-art.

Dependencies

Instructions for running the code

Data

Get the data from here. (Note: this might change soon, as I will release an updated version of the dataset.)

To convert the data into the format the models expect, run:

cd data
/bin/bash make_data_format.sh <path_to_input_data> <output_dir>

For example, you can run:

cd data
/bin/bash make_data_format.sh examples/data_small_input examples/data_small_output
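If the conversion succeeds, the formatted files are written to the output directory passed as the second argument; assuming the example paths above, a quick sanity check is:

ls examples/data_small_output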

Model

To start training, first check out run_scripts/config.sh, which defines all the hyperparameters and other inputs to the network. After specifying the model parameters, start training by running:

cd run_scripts
/bin/bash train.sh ./config.sh
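The actual parameter names are defined in run_scripts/config.sh itself; as a rough illustration only, a bash config of this kind typically exports values along these lines (every variable name below is hypothetical, not the repository's real setting):

# Hypothetical sketch of a config.sh; see run_scripts/config.sh for the real names.
export DATA_DIR=../data/examples/data_small_output  # formatted data from make_data_format.sh (assumed name)
export RNN_HIDDEN_SIZE=250                          # dimensionality of the recurrent state (assumed name)
export LEARNING_RATE=0.001                          # optimizer step size (assumed name)
export BATCH_SIZE=32                                # paths per minibatch (assumed name)
export NUM_EPOCHS=30                                # passes over the training data (assumed name)

Since train.sh takes the config file as its argument, edits to the config take effect the next time you launch training.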

Path Query Experiment on WordNet (Sec 5.5 of the paper)

Check out the instructions in wordnet_experiment/README.md.

Contact

Feel free to email me at rajarshi@cs.umass.edu with any questions.