MIRROR: Differentiable Deep Social Projection for Assistive Human-Robot Communication
Inspired by Social Projection Theory, we use the robot's self model to efficiently model humans.
Trust is crucial in shaping human interactions, both with one another and with robots. We are currently exploring how humans develop trust in robots and how predictive models of trust can enable better human-robot collaboration. Below, you’ll find some of our recently published papers on how humans trust robots in uncertain scenarios, computational trust modeling, and the use of trust models in robot decision-making.
We examine how recent advances in psychometrics, trustworthy systems, and deep learning can help address challenges related to trust in real-world human-robot interactions.
We address the problem of intention and capability calibration in human-robot collaboration using a decision-theoretic approach.
We study and model how trust and attention shape the way humans interact with autonomous vehicles.
Human Trust in Robots across Task Contexts
We present results from a human-subject study designed to explore two facets of human mental models of robots (inferred capability and intention) and their relationship to overall trust and eventual decisions.