Embedding Symbolic Knowledge into Deep Networks, Yaqi Xie★, Ziwei Xu, Mohan Kankanhalli, Kuldeep S. Meel, and Harold Soh★, Neural Information Processing Systems (NeurIPS), 2019
Links: Paper | GitHub

In this work, we aim to leverage prior symbolic knowledge to improve the performance of deep models. We propose a graph embedding network that projects propositional formulae (and assignments) onto a manifold via an augmented Graph Convolutional Network (GCN). To generate semantically faithful embeddings, we develop techniques that recognize node heterogeneity, together with a semantic regularization scheme that incorporates structural constraints into the embedding. Experiments show that our approach improves the performance of models trained to perform entailment checking and visual relation prediction. Interestingly, we observe a connection between the tractability of the propositional theory representation and the ease of embedding. Future exploration of this connection may elucidate the relationship between knowledge compilation and vector representation learning.
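To make the core idea concrete, here is a minimal sketch (not the paper's actual model) of embedding a propositional formula with a GCN: the formula (A ∧ B) ∨ ¬C is turned into a graph over operator and variable nodes, node types are exposed through one-hot features (a crude stand-in for the heterogeneity handling above), and one standard GCN layer produces node embeddings that are pooled into a formula embedding. All dimensions and the 8-dimensional embedding size are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: embed (A AND B) OR (NOT C) as a graph.
# Node 0: OR, 1: AND, 2: NOT, 3: A, 4: B, 5: C
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]

n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0              # undirected message passing

A_hat = A + np.eye(n)                    # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt # symmetric normalization

# One-hot features by node type (OR, AND, NOT, variable) -- a simple
# way to expose node heterogeneity to the network.
types = [0, 1, 2, 3, 3, 3]
X = np.eye(4)[types]                     # shape (6, 4)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))              # illustrative 8-dim embedding

H = np.maximum(A_norm @ X @ W, 0.0)      # GCN layer: ReLU(A_norm X W)
formula_embedding = H.mean(axis=0)       # pool node embeddings
print(formula_embedding.shape)           # (8,)
```

In the paper itself the formula graph, the heterogeneity-aware layers, and the semantic regularizer are considerably richer than this single untrained layer; the sketch only shows how a formula becomes a graph that a GCN can consume.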


You can find our paper here, and check out our repository on GitHub.


Please consider citing our paper if you build upon our results and ideas.

Yaqi Xie★, Ziwei Xu, Mohan Kankanhalli, Kuldeep S. Meel, and Harold Soh★, “Embedding Symbolic Knowledge into Deep Networks”, Neural Information Processing Systems (NeurIPS), 2019

@inproceedings{xie2019embedding,
  title={Embedding Symbolic Knowledge into Deep Networks},
  author={Xie, Yaqi and Xu, Ziwei and Meel, Kuldeep S. and Kankanhalli, Mohan and Soh, Harold},
  booktitle={Advances in Neural Information Processing Systems},
  year={2019}
}


If you have questions or comments, please contact Yaqi.


This work was supported in part by an MOE Tier 1 Grant to Harold Soh and by the National Research Foundation Singapore under its AI Singapore Programme [AISG-RP-2018-005]. It was also supported by the National Research Foundation, Prime Minister’s Office, Singapore under its Strategic Capability Research Centres Funding Initiative.

Written by

Yaqi Xie

I love Machine Learning!