We examine how recent advances in psychometrics, trustworthy systems, and deep learning can help address challenges related to trust in real-world human-robot interaction.
Collaborative Learning and Adaptive Robots (CLeAR) Lab.
Trust is crucial in shaping human interactions, both with one another and with robots. We are currently exploring how humans develop trust in robots and how predictive models of trust can enable better human-robot collaboration. Below, you'll find some of our recently published papers on understanding how humans trust robots in uncertain scenarios, on computational trust modeling, and on using trust models in robot decision-making.
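To give a flavor of what computational trust modeling can look like, here is a minimal illustrative sketch (not the lab's actual model): trust in a robot's capability is represented as a Beta distribution over its task-success probability and updated after each observed outcome. The `BetaTrustModel` class and the example outcomes are hypothetical, chosen only to demonstrate the idea.

```python
class BetaTrustModel:
    """Illustrative trust model: trust = expected success probability of the robot."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Uniform Beta(1, 1) prior: no initial evidence about the robot's capability.
        self.alpha = alpha  # pseudo-count of observed successes
        self.beta = beta    # pseudo-count of observed failures

    def update(self, success: bool) -> None:
        # Bayesian update: each observed outcome adds one pseudo-count.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)


model = BetaTrustModel()
for outcome in [True, True, False, True]:  # hypothetical task outcomes
    model.update(outcome)
print(round(model.trust, 3))  # posterior mean after 3 successes and 1 failure: 0.667
```

A model like this could then inform robot decision-making, for example by having the robot offer explanations or ask for help when estimated trust falls below a threshold.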
Getting to Know One Another: Calibrating Intent, Capabilities and Trust for Human-Robot Collaboration
We address the problem of intent and capability calibration in human-robot collaboration using a decision-theoretic approach.
We study and model how trust and attention affect how humans interact with autonomous vehicles.
We present results from a human-subject study designed to explore two facets of human mental models of robots, inferred capability and inferred intention, and their relationship to overall trust and eventual decisions.