GeNIE for In-the-Wild Navigation
We present a robust navigation framework designed for global deployment that achieved first place in the Earth Rover Challenge at ICRA 2025.
Collaborative, Learning, and Adaptive Robots Lab at NUS.
We Develop Physical and Social Intelligence for Trustworthy Robots.
We introduce PhysiCLeAR, an annotated dataset of everyday objects with tactile readings collected from a GelSight Mini sensor, together with Octopi, a system that combines tactile representation learning with large vision-language models to perform physical reasoning and inference given tactile videos of multiple objects.
We develop interpolant policies that leverage informative source distributions for imitation learning.
We propose GRaCE, an optimization-based grasp synthesis framework for generating context-specific grasps in complex scenarios. We evaluate GRaCE both in simulation and on real-world grasping tasks.
We are excited to share that two papers from CLeAR were accepted to R:SS 2026! More information about the papers is coming soon, but here’s a snapshot: ...
Ce Hao has graduated and is now Dr. Hao. Congratulations Ce!
All three of our submissions were accepted to ICLR 2026! A fantastic accomplishment by our CLeAR members and collaborators. Here’s a snapshot: Know ...
Multiple Postdoc and Research Assistant positions are open.
We introduce Arena 5.0, the fifth iteration of the Arena platform, featuring photorealistic simulation via NVIDIA Isaac Gym, comprehensive benchmarking, and customizable scenario generation!
We introduce the fourth iteration of the Arena platform - Arena 4.0, a platform to develop, train, and benchmark social navigation approaches with generative model–based environment generation!
DISCO uses vision-language models to guide diffusion policies with optimized keyframe inpainting, enabling superior zero-shot open-vocabulary robotic manipulation.
We propose CHD, a unified diffusion-based framework that tightly couples sub-goal and trajectory generation to improve long-horizon task planning.
We present VLA-Touch, a framework that augments VLA models with dual-level tactile feedback for contact-rich manipulation.
DOPPLER is a new framework that combines diffusion models and hierarchical reinforcement learning, enabling robots to plan and replan complex, long-horizon tasks from offline data with robust real-world performance.