Kaiqu Liang

I am a second-year PhD student in Computer Science at Princeton University, advised by Jaime Fernández Fisac.

Previously, I completed my MPhil in Machine Learning at the University of Cambridge, advised by Samuel Albanie and Bill Byrne. I did my undergraduate studies at the University of Toronto, where I was advised by Roger Grosse and Sven Dickinson. During my undergraduate years, I was also a student researcher at the Vector Institute, advised by Roger Grosse.

Email  /  Google Scholar  /  Twitter  /  GitHub  /  LinkedIn


Research

I work on AI safety, focusing on human-AI alignment and uncertainty in foundation models. Previously, I did research on video retrieval and out-of-distribution generalization.

Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity
Kaiqu Liang, Zixu Zhang, Jaime Fernández Fisac
Neural Information Processing Systems (NeurIPS), 2024
paper / code / website

We proposed introspective planning, a systematic method for guiding LLMs to form uncertainty-aware plans for robotic task execution.

Who Plays First? Optimizing the Order of Play in Stackelberg Games with Many Robots
Haimin Hu, Gabriele Dragotto, Zixu Zhang, Kaiqu Liang, Bartolomeo Stellato, Jaime Fernández Fisac
Robotics: Science and Systems (RSS), 2024

We introduced Branch and Play (B&P), an algorithm that effectively resolves multi-agent spatial navigation problems by determining the optimal order of play.

Simple Baselines for Interactive Video Retrieval with Questions and Answers
Kaiqu Liang, Samuel Albanie
International Conference on Computer Vision (ICCV), 2023
paper / code

We proposed several simple yet effective baselines for interactive video retrieval via question-answering.

Path Independent Equilibrium Models Can Better Exploit Test-Time Computation
Cem Anil*, Ashwini Pokle*, Kaiqu Liang*, Johannes Treutlein, Yuhuai Wu, Shaojie Bai, Zico Kolter, Roger Grosse
Neural Information Processing Systems (NeurIPS), 2022

We demonstrated that equilibrium models generalize better on harder instances because of their path independence, highlighting this property's importance for model performance and scalability.

Out-of-Distribution Generalization with Deep Equilibrium Models
Kaiqu Liang*, Cem Anil*, Yuhuai Wu, Roger Grosse
ICML Workshop on Uncertainty and Robustness in Deep Learning, 2021

We demonstrated that Deep Equilibrium (DEQ) models generalize better than their fixed-depth counterparts under distribution shifts, and discussed why.


Education

Princeton University, USA
Ph.D. in Computer Science • Aug. 2022 to present
University of Cambridge, UK
MPhil in Machine Learning and Machine Intelligence • Oct. 2021 to Aug. 2022
University of Toronto, Canada
Honours Bachelor of Science • Sep. 2017 to May 2021
Computer Science Specialist & Statistics Major & Mathematics Minor

Teaching

Teaching Assistant • ECE346/COS348/MAE346: Intelligent Robotic Systems • Princeton University
Teaching Assistant • COS 350: Ethics of Computing • Princeton University
Teaching Assistant • CSC165: Mathematical Expression and Reasoning for Computer Science • University of Toronto

Reviewer services

Neural Information Processing Systems (NeurIPS)
International Conference on Machine Learning (ICML)
European Conference on Computer Vision (ECCV)

Website template from Jon Barron.