CLeAR Lab

Collaborative, Learning, and Adaptive Robots Lab at NUS.
We Develop Physical and Social Skills for Robots.

Enhancing Deep Learning with Symbolic Knowledge

Bridging the gap between symbolic and connectionist paradigms via Graph Neural Network embeddings

Discriminator Gradient Flow

Using Gradient Flows to Refine Samples from Deep Generative Models
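For intuition, here is a minimal sketch of the idea, assuming a binary discriminator that outputs probabilities: samples are nudged along the gradient of the estimated log density ratio log(D(x)/(1-D(x))), with a little noise. The `discriminator` module and all hyperparameters are illustrative placeholders, not the paper's implementation.

```python
import torch

def refine_samples(x, discriminator, n_steps=25, step_size=0.01, noise_scale=0.01):
    """Illustrative sketch: move samples x along the gradient of the
    discriminator's estimated log density ratio, log(D(x) / (1 - D(x))),
    plus a small amount of noise."""
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        d = discriminator(x).clamp(1e-6, 1 - 1e-6)   # probabilities in (0, 1)
        log_ratio = torch.log(d) - torch.log1p(-d)   # log(D / (1 - D))
        grad = torch.autograd.grad(log_ratio.sum(), x)[0]
        x = x + step_size * grad + noise_scale * torch.randn_like(x)
    return x.detach()
```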

Event-Driven Visual-Tactile Sensing and Learning

Accurate, Fast, and Low-Power Multi-Sensory Perception via Neuromorphic Sensing and Learning

Multi-Task Trust in Human-Robot Interaction

Human Trust in Robots across Task Contexts

IEEE T-AFFC Best Paper Award!

Our joint work with Desmond Ong, Jamil Zaki, and Noah Goodman on Applying Probabilistic Programming to Affective Computing is one of 5 Best Papers (out of 82 ...

Kaiqi wins the 2022 Research Achievement Award.

Kaiqi Chen is awarded the Research Achievement Award for his RSS 2022 paper on Differentiable Social Projection for Human-Robot Communication.

Harold wins NUS ATEA 2022.

Harold has been awarded the University Annual Teaching Excellence Award (2022). He also won the Faculty Teaching Excellence Award and is now on Faculty H...

CLeAR graduates its first PhD Student.

Abdul Fatir Ansari successfully defended his thesis and is now Dr. Ansari. Congratulations, Fatir!

Fairness meets CMDPs

This paper proposes SCALES, a general framework that translates well-established fairness principles into a common representation based on constrained Markov decision processes (CMDPs).

The Dynamics of Q-learning in Population Games

We develop an accurate, physics-inspired model of how a population of Q-learning agents adapts as its members interact.
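As a toy illustration (not the paper's model), the sketch below simulates independent, stateless Q-learners repeatedly matched in a symmetric two-action game with softmax action selection; the payoff matrix and hyperparameters are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric 2x2 payoff matrix (placeholder values).
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])
N_AGENTS, ALPHA, TEMP, STEPS = 200, 0.1, 0.5, 5000

Q = np.zeros((N_AGENTS, 2))  # per-agent Q-values over the two actions

def sample_actions(Q):
    """Softmax (Boltzmann) action selection for every agent at once."""
    logits = Q / TEMP
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return (rng.random(len(Q)) < p[:, 1]).astype(int)

for _ in range(STEPS):
    a = sample_actions(Q)
    opponents = rng.permutation(N_AGENTS)   # each agent faces a random opponent
    r = PAYOFF[a, a[opponents]]             # payoff given both players' actions
    idx = np.arange(N_AGENTS)
    Q[idx, a] += ALPHA * (r - Q[idx, a])    # stateless Q-learning update

print("fraction playing action 1:", sample_actions(Q).mean())
```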

MIRROR: Differentiable Deep Social Projection for Assistive Human-Robot Communication

Inspired by Social Projection Theory, we use the robot's self-model to model humans efficiently.

Deep Explicit Duration Switching Models for Time Series

We propose a deep switching state space model that can capture both state-dependent and time-dependent switching patterns in time series data.

Extended Tactile Perception

We show that robots can extend their perception through grasped tools/objects via dynamic tactile sensing.

Embedding Symbolic Temporal Knowledge into Deep Sequential Models

We embed symbolic knowledge expressed as linear temporal logic (LTL) formulae and use these embeddings to guide the training of deep sequential models.
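As a rough, hypothetical sketch of one ingredient (not the paper's GNN architecture), an LTL formula's syntax tree can be mapped to a vector by recursively composing learned operator layers over atomic-proposition embeddings:

```python
import torch
import torch.nn as nn

# Hypothetical recursive encoder for LTL syntax trees. A formula is a nested
# tuple: ("prop", index) for atomic propositions, or (operator, child, ...).
OPS = {"not": 1, "next": 1, "eventually": 1, "always": 1,
       "and": 2, "or": 2, "until": 2}
DIM, N_PROPS = 32, 4

prop_emb = nn.Embedding(N_PROPS, DIM)
op_layers = nn.ModuleDict({op: nn.Linear(DIM * arity, DIM)
                           for op, arity in OPS.items()})

def embed(formula):
    if formula[0] == "prop":
        return prop_emb(torch.tensor(formula[1]))
    children = [embed(child) for child in formula[1:]]
    return torch.tanh(op_layers[formula[0]](torch.cat(children)))

# Embed "eventually (p0 and p1)".
z = embed(("eventually", ("and", ("prop", 0), ("prop", 1))))
print(z.shape)  # torch.Size([32])
```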

Learning Robust Latent Representation for Reinforcement Learning with Multi-Modal Observations

We construct a shared latent space from different sensory modalities via contrastive learning.
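A minimal InfoNCE-style sketch of this idea, assuming paired observations from two modalities; the encoders, dimensions, and temperature below are hypothetical placeholders rather than the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoders mapping each modality into a shared latent space.
enc_vision = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
enc_touch = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 32))

def contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE: matched pairs (z_a[i], z_b[i]) are positives; all other
    pairings within the batch serve as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature   # cosine similarities, scaled
    targets = torch.arange(z_a.size(0))
    return F.cross_entropy(logits, targets)

# Toy batch of paired multi-modal observations (random placeholders).
vision, touch = torch.randn(16, 128), torch.randn(16, 24)
loss = contrastive_loss(enc_vision(vision), enc_touch(touch))
loss.backward()
```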