Embedding Symbolic Temporal Knowledge into Deep Sequential Models
We embed symbolic knowledge expressed as linear temporal logic (LTL) formulae and use these embeddings to guide the training of deep sequential models.
Bridging the gap between symbolic and connectionist paradigms via Graph Neural Network embeddings
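To make the idea concrete, here is a minimal PyTorch sketch of one plausible training setup: an LTL formula is treated as a parse graph and embedded with a small message-passing GNN, a trajectory is embedded with an LSTM, and a similarity-based regularizer nudges the sequence model toward the symbolic knowledge. The formula G(a -> F b), the FormulaGNN and TrajectoryLSTM modules, the cosine-similarity logic loss, and the 0.1 weight are all illustrative assumptions, not the paper's actual architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy parse graph for the LTL formula G(a -> F b):
# nodes are operators/propositions, edges connect parents and children.
VOCAB = ["G", "->", "a", "F", "b"]
x = torch.eye(len(VOCAB))                        # one-hot node features
edge_index = torch.tensor([[0, 1], [1, 0],       # G  <-> ->
                           [1, 2], [2, 1],       # -> <-> a
                           [1, 3], [3, 1],       # -> <-> F
                           [3, 4], [4, 3]])      # F  <-> b

class FormulaGNN(nn.Module):
    """Sum-aggregation message passing over the LTL parse graph."""
    def __init__(self, in_dim, hid_dim, rounds=3):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.msg = nn.Linear(hid_dim, hid_dim)
        self.rounds = rounds

    def forward(self, x, edge_index):
        h = torch.relu(self.proj(x))
        src, dst = edge_index[:, 0], edge_index[:, 1]
        for _ in range(self.rounds):
            agg = torch.zeros_like(h).index_add_(0, dst, self.msg(h[src]))
            h = torch.relu(h + agg)              # residual node update
        return h.mean(dim=0)                     # pooled formula embedding

class TrajectoryLSTM(nn.Module):
    """Encodes an observation sequence into the same embedding space."""
    def __init__(self, obs_dim, hid_dim):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, 2)        # hypothetical 2-class task

    def forward(self, seq):
        out, _ = self.lstm(seq)
        z = out[:, -1]                           # last hidden state as embedding
        return z, self.head(z)

gnn = FormulaGNN(len(VOCAB), 32)
enc = TrajectoryLSTM(obs_dim=8, hid_dim=32)
opt = torch.optim.Adam(list(gnn.parameters()) + list(enc.parameters()), lr=1e-3)

seq = torch.randn(4, 10, 8)                      # fake batch: 4 sequences, length 10
labels = torch.randint(0, 2, (4,))

opt.zero_grad()
phi = gnn(x, edge_index)                         # formula embedding
z, logits = enc(seq)                             # trajectory embedding + prediction

task_loss = F.cross_entropy(logits, labels)
# Logic regularizer: pull trajectory embeddings toward the formula embedding.
logic_loss = (1 - F.cosine_similarity(z, phi.expand_as(z))).mean()
loss = task_loss + 0.1 * logic_loss              # 0.1 is an arbitrary weight
loss.backward()
opt.step()
```

The details of the actual method differ, but the sketch shows the overall pattern: the GNN turns a symbolic LTL formula into a differentiable vector that acts as a prior, and an auxiliary loss couples that vector to the deep sequential model during training.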
Yaqi Xie was awarded the Research Achievement Award for his NeurIPS 2019 paper, "Embedding Symbolic Knowledge into Deep Networks."
Leveraging prior symbolic knowledge to improve the performance of deep models.
We present results from a human-subject study designed to explore two facets of human mental models of robots (inferred capability and intention) and their relationship to overall trust and eventual decisions.