About Me

I am a fourth-year (final-year) PhD student in the Department of Computer Science at the University of Illinois Urbana-Champaign, working with Prof. Kevin C.C. Chang. I have also spent time at Google DeepMind, NVIDIA, Amazon, and an LLM startup.

I am enthusiastic about pushing the boundaries of foundation models. I have explored pretraining, post-training, and prompting, primarily focusing on Knowledge, Reasoning & Ethics in Large Language Models: 1) Knowledge: factuality, retrieval augmentation; 2) Reasoning: complex reasoning, self-correction/self-improvement; 3) Ethics: privacy leakage analysis, citation/attribution.

News
[12/2023] Serving as an Area Chair for ACL 2024 and NAACL 2024 (Action Editor for ARR).
[10/2023] New critique paper: Large Language Models Cannot Self-Correct Reasoning Yet [pdf]
[08/2023] We introduce RAVEN: In-Context Learning with Retrieval Augmented Encoder-Decoder Language Models [pdf]
[07/2023] New position paper: Citation: A Key to Building Responsible and Accountable Large Language Models [pdf]
[05/2023] Two new preprints analyzing privacy leakage risks in LLMs and ChatGPT. [Quantifying Association] [Multi-step Jailbreaking Privacy Attacks]
[12/2022] New survey: Towards Reasoning in Large Language Models [pdf] [paperlist]
[05/2022] Are Large Pre-Trained Language Models Leaking Your Personal Information? [pdf] [media coverage]