Although providing appropriate assistance is a commonplace task for humans, it remains a significant challenge for artificial intelligence. We would like our artificial agents to help us in effective and human-interpretable ways, but current AI systems lack a good understanding of human users. Unfortunately, good human models are difficult to develop. Current theory-driven approaches in the cognitive sciences focus on causal explanations of human behaviour and face difficulties scaling beyond simple scenarios. Data-driven deep learning, on the other hand, can learn complex, high-performance models, but typically requires large amounts of data to generalize well.
We intend to build novel hybrid techniques that learn flexible deep models while leveraging prior knowledge that is easy to specify and interpret. We envision that our methods will yield accurate human models from less data than existing approaches require, and we plan to demonstrate that the use of human models leads to better assistive AI.