Research
My research focuses on the science of language models, especially on understanding their behavior during both the training and inference stages through methods such as latent-space analysis. I aim to understand how LLMs work and to use that understanding to develop technical methods that improve their performance.
|
News
06/2025, Started my internship at NEC Laboratories America in Princeton, NJ
05/2025, One paper accepted to Findings of ACL 2025
09/2024, One paper accepted to NeurIPS 2024
09/2023, Started my Ph.D. journey at Dartmouth!
|
Publications
- Spectral Insights into Data-Oblivious Critical Layers in Large Language Models
Xuyuan Liu,
Lei Hsiung,
Yaoqing Yang,
Yujun Yan
Findings of ACL 2025
Paper     Project Page
- Exploring Consistency in Graph Representations: from Graph Kernels to Graph Neural Networks
Xuyuan Liu,
Yinghao Cai,
Qihui Yang,
Yujun Yan
NeurIPS 2024  
Paper    Poster
- TreeMAN: Tree-enhanced Multimodal Attention Network for ICD Coding
Zichen Liu,
Xuyuan Liu,
Yanlong Wen,
Guoqing Zhao,
Fen Xia,
Xiaojie Yuan
COLING 2022 (Oral)
Paper
|
Honors
2023, Dartmouth Fellowship
2022, Academic Excellence Scholarship
2022, Scientific Research Innovation Scholarship
|
Services
Conference Reviewer: NeurIPS 2024, NeurIPS 2025, ICLR 2025, ICML 2025
|
Latest Update: 9/26/2024
|
Template from Jon Barron.