Research
My research focuses on the science of large language models (LLMs), particularly on understanding their behavior during both training and inference through methods that probe their internal mechanisms. I aim to understand how LLMs work internally and to leverage that understanding to develop technical methods that enhance their performance.
|
News
12/2025, We will present our work at Unireps@NeurIPS 2025!
11/2025, Check out our paper on customizing LLMs' memory by controlling their hidden states.
11/2025, Passed my comprehensive exam!
06/2025, Started my internship at NEC Laboratories America in Princeton, NJ.
05/2025, One paper accepted to Findings of ACL 2025.
09/2024, One paper accepted to NeurIPS 2024.
09/2023, Started my Ph.D. journey at Dartmouth!
|
Publications
- Representation Interventions Enable Lifelong Unstructured Knowledge Control
Xuyuan Liu,
Zhengzhang Chen,
Xinshuai Dong,
Yanchi Liu,
Xujiang Zhao,
Shengyu Chen,
Haoyu Wang,
Yujun Yan,
Haifeng Chen
Preprint  
- Spectral Insights into Data-Oblivious Critical Layers in Large Language Models
Xuyuan Liu,
Lei Hsiung,
Yaoqing Yang,
Yujun Yan
Findings of ACL 2025   also in Unireps@NeurIPS 2025
Paper     Project Page
- Exploring Consistency in Graph Representations: from Graph Kernels to Graph Neural Networks
Xuyuan Liu,
Yinghao Cai,
Qihui Yang,
Yujun Yan
NeurIPS 2024  
Paper    Poster
- TreeMAN: Tree-enhanced Multimodal Attention Network for ICD Coding
Zichen Liu,
Xuyuan Liu,
Yanlong Wen,
Guoqing Zhao,
Fen Xia,
Xiaojie Yuan
COLING 2022   (Oral)
Paper
|
Honors
2023, Dartmouth Fellowship
2022, Academic Excellence Scholarship
2022, Scientific Research Innovation Scholarship
|
Services
Conference Reviewer: NeurIPS 2024, 2025; ICLR 2025; ICML 2025
|
Latest Update: 12/1/2025
|
Templates from Jon Barron.
|
|