CLeAR

Blog

Welcome to our blog! Below, you’ll find accessible versions of our research. These blog entries are written by CLeAR members.



Enhancing DNN with Symbolic Knowledge (ICRA 2021)

In two recent papers, we propose to view symbolic knowledge as another form of data. The big question is: how can we represent this knowledge and train models to be consistent with it? We propose a graph embedding network framework that projects logic formulae (and assignments) onto a manifold via an augmented Graph Neural Network (GNN). These embeddings feed a logic loss that guides deep models during training. Experiments show that our approach improves the performance of deep models.
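To make the idea concrete, here is a minimal, hypothetical sketch of such a logic loss in PyTorch: a formula and the model's predicted assignment are each mapped to an embedding, and the model is penalized when its prediction's embedding lies far from the formula's. The embedder modules, the cosine distance, and all names are illustrative stand-ins for the augmented GNN described in the papers, not the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LogicLoss(nn.Module):
    """Penalize predictions whose embedding is far from the formula's embedding."""
    def __init__(self, formula_embedder: nn.Module, assignment_embedder: nn.Module):
        super().__init__()
        self.formula_embedder = formula_embedder      # stand-in for the augmented GNN
        self.assignment_embedder = assignment_embedder

    def forward(self, formula_graph, predicted_assignment):
        z_formula = self.formula_embedder(formula_graph)           # (batch, d)
        z_assign = self.assignment_embedder(predicted_assignment)  # (batch, d)
        # Cosine distance on the embedding manifold (an illustrative choice).
        return 1.0 - F.cosine_similarity(z_formula, z_assign, dim=-1).mean()

if __name__ == "__main__":
    # Toy stand-ins: in the papers these embedders are graph neural networks over formula graphs.
    d = 32
    loss_fn = LogicLoss(nn.Linear(10, d), nn.Linear(10, d))
    logic_loss = loss_fn(torch.randn(4, 10), torch.randn(4, 10))
    # Training would combine this with the usual task loss:
    #   loss = task_loss + lambda_logic * logic_loss
    print(logic_loss.item())
```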
[Click to read more]



Discriminator Gradient $f$low (ICLR 2021)

In this post, we describe a new technique for improving the quality of samples from deep generative models. We introduce Discriminator Gradient $f$low (DG$f$low), which significantly improves generated samples using the gradient flow of entropy-regularized $f$-divergences between the generated and real data distributions.
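As a rough illustration (not the paper's implementation), the refinement can be pictured as noisy gradient steps on a discriminator's log density-ratio estimate; in the KL case the drift of the flow reduces to the discriminator gradient. The generator `G`, discriminator `D`, step size, noise scale, and number of steps below are all assumptions for the sketch.

```python
import torch

def refine_samples(G, D, z, n_steps=25, step_size=0.01, noise_scale=0.01):
    """Noisy gradient-ascent refinement of generated samples, guided by a discriminator.

    Assumes D(x) approximates the log density ratio log p_data(x) / p_g(x).
    """
    x = G(z).detach()
    for _ in range(n_steps):
        x = x.clone().requires_grad_(True)
        logit = D(x).sum()                       # log density-ratio estimate
        grad = torch.autograd.grad(logit, x)[0]  # drift term of the flow (KL case)
        with torch.no_grad():
            x = x + step_size * grad + noise_scale * torch.randn_like(x)
    return x.detach()
```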
[Click to read more]



Visual-tactile Sensing (RSS 2020)

Many everyday tasks require multiple sensory modalities to perform successfully. For example, when picking up a carton, humans use vision to locate it and can infer from a simple grasp how much liquid it contains. This inference is performed robustly on a power-efficient neural substrate: compared to current artificial systems, human brains require far less energy. In this work, we take inspiration from biological systems, which are asynchronous and event-driven. We contribute an event-driven visual-tactile perception system comprising NeuTouch, a biologically-inspired tactile sensor, and VT-SNN, a spiking neural network for multi-modal spike-based perception.
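The sketch below is a deliberately simplified, non-spiking stand-in for the fusion idea: separate encoders for binned visual and tactile event streams whose features are concatenated for a joint prediction. The actual VT-SNN uses spiking neurons and event-driven sensing; the layer choices, class names, and dimensions here are purely illustrative.

```python
import torch
import torch.nn as nn

class VisualTactileFusion(nn.Module):
    """Toy multi-modal classifier over binned event counts (non-spiking illustration)."""
    def __init__(self, vis_dim, tac_dim, hidden=64, n_classes=2):
        super().__init__()
        self.vision_enc = nn.Sequential(nn.Linear(vis_dim, hidden), nn.ReLU())
        self.touch_enc = nn.Sequential(nn.Linear(tac_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, vis_events, tac_events):
        # vis_events / tac_events: binned event counts, shape (batch, dim)
        fused = torch.cat([self.vision_enc(vis_events), self.touch_enc(tac_events)], dim=-1)
        return self.head(fused)

model = VisualTactileFusion(vis_dim=1024, tac_dim=39)   # dimensions are illustrative
logits = model(torch.rand(8, 1024), torch.rand(8, 39))
```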
[Click to read more]



Multi-Task Trust in Human-Robot Interaction (IJRR 2020)

If you knew a robot could navigate around obstacles, would you trust it to follow a person across a room? Or maybe not? We find that human trust in robots varies across task contexts in a structured manner. Based on this observation, we model trust as a latent function that depends on a “perceptual task space” and develop state-of-the-art Bayesian, Neural, and Hybrid trust models. Our work enables robots to predict and reason about human trust across tasks and potentially other contexts.
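As a toy illustration of the "trust as a latent function over a task space" idea, the snippet below fits a Gaussian Process to a handful of made-up trust ratings over hypothetical task features and predicts trust on an unseen task. The features, kernel, and data are invented for illustration; the paper's Bayesian, neural, and hybrid models are considerably richer.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Each row: hypothetical task features (e.g., dexterity required, navigation difficulty).
tasks_seen = np.array([[0.2, 0.9], [0.8, 0.1], [0.5, 0.5]])
trust_seen = np.array([0.8, 0.4, 0.6])   # observed trust ratings in [0, 1]

# Latent trust function modeled as a GP over the task space.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2).fit(tasks_seen, trust_seen)

task_new = np.array([[0.3, 0.8]])        # an unseen task
mean, std = gp.predict(task_new, return_std=True)
print(f"predicted trust {mean[0]:.2f} +/- {std[0]:.2f}")
```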
[Click to read more]



Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization (NeurIPS 2020)

Inverse Reinforcement Learning (IRL) is an ill-posed problem, as there typically exist many solutions (reward functions) for which a given behavior is optimal. Given the ill-posed nature of IRL, we adopt the perspective that an IRL algorithm should characterize the space of solutions rather than output a single answer. However, blindly exploring the reward function space in search of optimal solutions is inefficient. We turn to Bayesian Optimization and describe a new kernel for efficient exploration of the reward space.
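A minimal sketch of the overall loop, under the assumption of a parameterized reward and a placeholder IRL objective: a GP surrogate models how well each reward explains the demonstrations, and an acquisition function chooses the next reward parameters to evaluate. The Matern kernel, the toy objective, and the UCB acquisition below are generic stand-ins, not the specialized reward-space kernel introduced in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def irl_objective(theta):
    # Placeholder: a real IRL objective would solve the MDP for reward_theta
    # and score the expert demonstrations under the resulting policy.
    return -np.sum((theta - 0.3) ** 2)

rng = np.random.default_rng(0)
thetas = rng.uniform(-1, 1, size=(5, 2))            # initial reward parameters
scores = np.array([irl_objective(t) for t in thetas])

for _ in range(20):
    # GP surrogate over reward parameters, refit at each iteration.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(thetas, scores)
    candidates = rng.uniform(-1, 1, size=(256, 2))
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 1.0 * std                           # upper-confidence-bound acquisition
    nxt = candidates[np.argmax(ucb)]
    thetas = np.vstack([thetas, nxt])
    scores = np.append(scores, irl_objective(nxt))

print("best reward parameters found:", thetas[np.argmax(scores)])
```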
[Click to read more]