The Dynamics of Q-learning in Population Games
We develop an accurate physics-inspired model of how a population of Q-learning agents adapts as its members interact.
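The paper's model itself is not reproduced here; as a hedged illustration of the setting it studies, the toy simulation below (all parameters and the coordination-game payoffs are hypothetical) has a population of Boltzmann Q-learners that are randomly matched each round in a symmetric two-action game and update independent Q-tables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric 2-action coordination game:
# PAYOFF[a, b] is the payoff for playing action a against action b.
PAYOFF = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

N, ALPHA, TAU, STEPS = 200, 0.1, 0.5, 2000
Q = np.zeros((N, 2))  # one independent Q-table per agent

def softmax(q, tau):
    z = np.exp((q - q.max(axis=-1, keepdims=True)) / tau)
    return z / z.sum(axis=-1, keepdims=True)

for _ in range(STEPS):
    probs = softmax(Q, TAU)                       # Boltzmann action selection
    actions = (rng.random(N) < probs[:, 1]).astype(int)
    opponents = rng.integers(0, N, size=N)        # random pairing each round
    rewards = PAYOFF[actions, actions[opponents]]
    # Standard stateless Q-learning update toward the realized payoff.
    Q[np.arange(N), actions] += ALPHA * (rewards - Q[np.arange(N), actions])

share_a1 = softmax(Q, TAU)[:, 1].mean()
print(f"mean probability of action 1: {share_a1:.3f}")
```

Tracking `share_a1` over time gives the population-level action distribution whose dynamics physics-inspired models of this kind aim to describe.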
We examine the problem of observed adversaries for deep policies, where observations of other agents can hamper robot performance.
This paper proposes SCALES, a general framework that translates well-established fairness principles into a common representation based on CMDPs.
We present a new method for training GANs via characteristic functions.
We address the problem of intention and capability calibration in human-robot collaboration with a decision-theoretic approach.
We study and model how trust and attention affect how humans interact with autonomous vehicles.
Training robots that can interactively assist humans with private information
We present results from a human-subject study designed to explore two facets of human mental models of robots, inferred capability and intention, and their relationship to overall trust and eventual decisions.
Robot decision-making with Human Trust.
Using Gradient Flows to Refine Samples from Deep Generative Models
We develop a family of stable continuous-time neural state-space models.
We introduce a novel framework centered around the Probable Object Location (POLo) score, which allows the agent to make data-driven decisions for efficient object search.
Transfer source policies to a target reinforcement learning task with safety constraints using Successor Features.
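The safety-constraint handling is the paper's contribution and is not sketched here; as a minimal illustration of the standard successor-features transfer machinery this builds on (all arrays below are hypothetical toy data), source policies are evaluated on a new task via their successor features and combined with Generalized Policy Improvement:

```python
import numpy as np

# Hypothetical tabular setting: 4 source policies, 3 states, 2 actions,
# 2-D reward features phi. psi[i, s, a] stores the successor features of
# source policy i: the expected discounted sum of phi from (s, a).
rng = np.random.default_rng(3)
psi = rng.uniform(size=(4, 3, 2, 2))

# The target task is defined by a reward-weight vector w, with r = phi . w.
w_target = np.array([0.2, 0.8])

def gpi_policy(psi, w):
    """Generalized Policy Improvement: evaluate each source policy on the
    target task via Q_i(s, a) = psi_i(s, a) . w, then act greedily with
    respect to the max over source policies."""
    q = psi @ w                 # (n_policies, S, A)
    q_max = q.max(axis=0)       # (S, A): best source-policy value per (s, a)
    return q_max.argmax(axis=1), q_max

actions, q_max = gpi_policy(psi, w_target)
print("greedy GPI actions per state:", actions)
```

By construction `q_max` dominates every individual source policy's value on the target task, which is what makes the transferred policy no worse than the best source policy at each state.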
We embed symbolic knowledge expressed as linear temporal logic (LTL) and use these embeddings to guide the training of deep sequential models.
We construct a shared latent space from different sensory modalities via contrastive learning.
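A minimal sketch of the generic contrastive (InfoNCE-style) objective that such a shared latent space can be trained with, assuming toy linear encoders and synthetic paired data rather than the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy paired data from two "modalities" (e.g. vision and touch) that share
# a common 2-D latent factor plus small modality-specific noise.
B, D_LAT = 8, 2
z = rng.normal(size=(B, D_LAT))
x_a = z @ rng.normal(size=(D_LAT, 5)) + 0.01 * rng.normal(size=(B, 5))
x_b = z @ rng.normal(size=(D_LAT, 7)) + 0.01 * rng.normal(size=(B, 7))

# Hypothetical linear encoders mapping each modality into the shared space.
W_a = rng.normal(size=(5, D_LAT))
W_b = rng.normal(size=(7, D_LAT))

def l2_normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def info_nce(e_a, e_b, temperature=0.1):
    """InfoNCE over rows: for each i, the matched pair (i, i) is the
    positive and every other column in the batch is a negative."""
    logits = e_a @ e_b.T / temperature            # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

e_a, e_b = l2_normalize(x_a @ W_a), l2_normalize(x_b @ W_b)
loss = info_nce(e_a, e_b)
print(f"InfoNCE loss: {loss:.3f}")
```

Minimizing this loss with respect to the encoder weights pulls paired embeddings together and pushes unpaired ones apart, yielding the shared latent space.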
Bridging the gap between symbolic and connectionist paradigms via Graph Neural Network embeddings
We show that the iCub robot classifies surface textures with high accuracy using both sliding and touch movements under loose constraints.
Human Trust in Robots across Task Contexts
We construct a decomposed latent state-space model for perspective-taking in human-robot interaction.
We propose a sampling-based grasp synthesis framework, GraspFlow, to generate context-specific grasps. We test GraspFlow in a real-world table-top grasping task.
We show that robots can extend their perception through grasped tools/objects via dynamic tactile sensing.
We propose a Spiking Graph Neural Network to take into account taxel geometry.
We show that robots achieve fast classification of textures through neural encoding and Spiking Neural Networks.
We show that a single unconditional diffusion model performs competitively in out-of-distribution detection tasks by measuring the rate-of-change and curvature of diffusion paths connecting data samples to the standard normal distribution.
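The score in the paper uses paths produced by a trained diffusion model, which is not reproduced here; as an assumption-laden illustration of the geometric quantities being measured, the snippet below computes finite-difference speed and curvature statistics along a discretized path (a straight line, whose curvature should be near zero):

```python
import numpy as np

def path_statistics(path, dt):
    """Finite-difference mean speed and curvature along a discretized
    path x(t) of shape (T, D) sampled at uniform spacing dt."""
    vel = np.diff(path, axis=0) / dt      # (T-1, D) rate of change
    acc = np.diff(vel, axis=0) / dt       # (T-2, D) curvature proxy
    speed = np.linalg.norm(vel, axis=1)
    curvature = np.linalg.norm(acc, axis=1)
    return speed.mean(), curvature.mean()

# Illustrative straight-line "path" from a stand-in data point toward a
# standard normal sample: constant velocity, zero curvature.
rng = np.random.default_rng(2)
x0 = rng.normal(size=4) + 5.0     # stand-in "data" point
x1 = rng.normal(size=4)           # stand-in N(0, I) endpoint
ts = np.linspace(0.0, 1.0, 50)[:, None]
straight = (1 - ts) * x0 + ts * x1

mean_speed, mean_curv = path_statistics(straight, dt=ts[1, 0] - ts[0, 0])
print(mean_speed, mean_curv)
```

With a real diffusion model, paths from out-of-distribution inputs would tend to exhibit atypical speed and curvature statistics, which is what the detection score exploits.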
We apply techniques from continual learning to the problem of selective forgetting in deep generative models. Our method, dubbed Selective Amnesia, allows users to remap undesired concepts to user-defined ones.
We propose a deep switching state space model that can capture both state-dependent and time-dependent switching patterns in time series data.
Leveraging prior symbolic knowledge to improve the performance of deep models.
We develop a safe planning method for trajectory generation by sampling from a diffusion model under different LTLf constraints.
We introduce PhysiCLeaR, an annotated dataset of everyday objects and tactile readings collected from a Gelsight Mini sensor, as well as Octopi, a system that leverages both tactile representation learning and large vision-language models to perform physical reasoning and inference, given tactile videos of multiple objects.
We develop interpolant policies that leverage informative source distributions for imitation learning.
We introduce Arena 3.0, the third iteration of the Arena platform for developing, training, and benchmarking social navigation approaches!
We propose an optimization-based grasp synthesis framework, GRaCE, to generate context-specific grasps in complex scenarios. We test GRaCE in simulation and in real-world grasping tasks.
Inspired by Social Projection Theory, we use the robot's self model to efficiently model humans.
Accurate, Fast, and Low-powered Multi-Sensory Perception via Neuromorphic Sensing and Learning
Using Bayesian Optimization to address the ill-posed nature of Inverse Reinforcement Learning
We propose a probabilistic programming approach to affective computing.
Kaiqi Chen is awarded the Research Achievement Award for his RSS 2022 paper on Differentiable Social Projection for Human Robot Communication.
Harold has been awarded the University Annual Teaching Excellence Award (2022). Harold also won the Faculty Teaching Excellence Award and is now on Faculty H...
Sreejith Balakrishnan is awarded the Research Achievement Award for his NeurIPS 2020 paper on efficiently exploring reward functions in Inverse Reinforcement...
We have won the Best Paper Award at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021 in October. IROS is a premier flagshi...
Yaqi Xie is awarded the Research Achievement Award for her NeurIPS 2019 paper on embedding symbolic knowledge into deep networks.
Sreejith Balakrishnan is awarded the 2019 Honor List of Student Tutors for his contribution as a tutor to the module “CS4246: AI Planning and Decision Making”.
We extend gradient flow methods to a variety of high-quality image synthesis tasks using a novel density ratio learning method.
We present an approach to generate new items for groups of users based on their interaction history.
Harold gave a well-received Keynote at ICRA’24 on our group’s work on generative modeling and robotics! He spoke about two of our recent works on new kinds o...
We study a model of multi-population learning with heterogeneous beliefs.
We explore the potential of LLMs to act as zero-shot human models for HRI. We contribute an empirical study and case studies on a simulated table-clearing task and a new robot utensil-passing experiment.
We contribute an empirical study into the effectiveness of LLMs, specifically GPT-3.5 variants, for the task of natural language goal translation to PDDL.
All four of our submissions (3 papers, 1 demo) were accepted to R:SS 2024! A fantastic accomplishment by our CLeAR members Kaiqi, Samson, Tasbolat, Linh, Kel...
Both our submitted papers were accepted to NeurIPS this year! Come join us in New Orleans! Congrats to Alvin and Shuyue. Find out more about the papers below...
CLeAR had three papers accepted to IROS this year! Come join us in Detroit! Congrats to Tasbolat, Bowen, and Kaiqi. Find out more about the papers below: ...
Congratulations Sreejith on successfully defending his thesis! Sreejith’s work was on value alignment in human-centric AI/robots. Check out his NeurIPS pap...
Harold has been awarded an Early Career Spotlight at the Robotics: Science and Systems 2023. The Early Career Spotlight “* acknowledges the outstanding acc...
Xie Yaqi successfully defended her thesis and is now Dr. Xie. Congratulations Yaqi! You can find out more about Yaqi’s work on embedding symbolic knowledg...
Our joint work with Desmond Ong, Jamil Zaki and Noah Goodman on Applying Probabilistic Programming to Affective Computing is one of 5 Best Papers (out of 82 ...
Abdul Fatir Ansari successfully defended his thesis and is now Dr. Ansari. Congratulations Fatir!
Harold was awarded the university level and faculty teaching awards for the year 20/21!
Harold was awarded the faculty teaching award for the year 19/20!
Harold was awarded the faculty teaching award for the year 18/19!
Harold was awarded the faculty teaching award for the year 17/18! A first time win.
We examine how recent advances in psychometrics, trustworthy systems, deep learning, and related fields can help address challenges that arise with respect to trust in real-world human-robot interactions.
Arena 3.0: Advancing Social Navigation in Collaborative and Highly Dynamic Environments
Large Language Models as Zero-Shot Human Models for Human-Robot Interaction
Translating Natural Language to Planning Goals with Large-Language Models
Fairness meets CMDPs
MIRROR for Assistive Human-Robot Communication
Getting to Know One Another: Calibrating Intent, Capabilities and Trust for Human-Robot Collaboration
Modeling the Interplay of Trust and Attention in HRI: an Autonomous Vehicle Study
Multi-Task Trust in Human-Robot Interaction
Applying Probabilistic Programming to Affective Computing
Robot Capability and Intention in Trust-based Decisions across Tasks
Planning with Trust for Human-Robot Collaboration