Yuetai Li

PhD Student
University of Washington

About Me

I am a second-year PhD student in the Network Security Lab at the University of Washington, advised by Prof. Radha Poovendran. I am fortunate to work closely with Dr. Xiang Yue at CMU LTI, Zhangchen Xu, Prof. Luyao Niu, and Prof. Bill Yuchen Lin at UW.

Prior to UW, I obtained my Bachelor's degree in Communication Engineering at the University of Glasgow (UofG), where I was advised by Prof. Lei Zhang. I am also fortunate to have worked with Prof. Jon Crowcroft at the University of Cambridge.

I will be interning at Microsoft Research (Redmond) in Summer 2025.

I am open to collaborating and discussing interesting ideas! Email yuetaili@uw.edu if you would like to share opportunities, propose collaborations, or just chat~

Research Interests

My primary interests lie broadly in LLM Reasoning, Synthetic Datasets, and Trustworthy AI. I am particularly interested in: (1) understanding the reasoning capabilities of LLMs through rigorous analysis, and (2) investigating synthetic datasets optimized for effective model learning.

Reasoning and Synthetic Datasets

We observed that during RL training of the DeepSeek-R1-1.5B model, 76.7% of AIME problems were solved correctly at some intermediate checkpoint, yet only 30% remained correct in the final model: many problems answered correctly during training are lost by the final checkpoint. We term this phenomenon Temporal Forgetting. Motivated by this, we proposed Temporal Sampling, which exploits training dynamics as a source of answer diversity by distributing inference samples across multiple distinct checkpoints from the training trajectory, rather than relying solely on the single final checkpoint.
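As a rough illustration, here is a minimal sketch of checkpoint-level sampling with majority voting; the round-robin allocation, the generation helper, and the voting scheme are hypothetical placeholders rather than the exact procedure from our paper.

```python
from collections import Counter

def temporal_sampling(prompt, checkpoints, num_samples, generate_fn):
    """Distribute `num_samples` generations across several training
    checkpoints (round-robin) instead of sampling only from the final one.

    checkpoints: list of model handles/paths from the training trajectory
    generate_fn: hypothetical helper that samples one answer from a checkpoint
    """
    answers = []
    for i in range(num_samples):
        ckpt = checkpoints[i % len(checkpoints)]  # rotate over the trajectory
        answers.append(generate_fn(ckpt, prompt))
    # Aggregate the diverse answers, e.g. by simple majority vote.
    return Counter(answers).most_common(1)[0][0]
```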
Our study reveals that over 38% of model responses suffer from false negatives during answer verification in RL training of LLMs, severely impairing training efficiency. We propose TinyV, a lightweight LLM-based verifier that augments existing rule-based methods to provide more accurate reward estimates.
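Conceptually, such a hybrid verifier can be sketched as a cheap rule-based check with an LLM fallback that catches false negatives; the function names and judge prompt below are illustrative assumptions, not TinyV's actual implementation.

```python
def rule_based_match(gold, response):
    # Minimal placeholder: whitespace/case-normalized string equality.
    return gold.strip().lower() == response.strip().lower()

def verify_answer(question, gold, response, llm_judge):
    """Hybrid verification sketch: trust the cheap rule-based check when it
    passes, and only query a lightweight LLM verifier when it fails, since
    rule-based string matching produces many false negatives."""
    if rule_based_match(gold, response):
        return True
    # Fallback: ask a small LLM whether the answers are semantically equivalent.
    prompt = (f"Question: {question}\nGold answer: {gold}\n"
              f"Model answer: {response}\nAre they equivalent? Yes or No.")
    return llm_judge(prompt).strip().lower().startswith("yes")
```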
We revealed that small models do not consistently benefit from long CoT reasoning or from distillation from larger teachers. Instead, they perform better on shorter, simpler reasoning chains that align with their intrinsic learning capacity. We term this phenomenon the Small Model Learnability Gap.
We investigated how long CoT reasoning impacts safety and found that it does not necessarily enhance safety. We introduced SafeChain, a dataset designed to improve safety alignment while preserving reasoning capabilities.
ICLR BiAlign Workshop (Oral) 🏆 Best Honorable Mention
We maintain the official Magpie Hugging Face repository and have released open-source synthetic datasets generated from open-source LLMs. Our aligned MagpieLM models are still SOTA small language models for chat.
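The released datasets and models can be pulled directly from the Hugging Face Hub. The repository IDs below are assumptions; check the Magpie-Align organization page for the current names and dataset fields.

```python
from datasets import load_dataset
from transformers import pipeline

# Hypothetical repo IDs; see the Magpie-Align org on Hugging Face for exact names.
data = load_dataset("Magpie-Align/Magpie-Pro-300K-Filtered", split="train")
print(data[0]["instruction"])  # field names may differ across releases

chat = pipeline("text-generation", model="Magpie-Align/MagpieLM-8B-Chat-v0.1")
```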

Trustworthy AI

We developed CleanGen, a novel decoding algorithm that defends against various generative backdoor attacks, including advertisement injection, code injection, and malicious content generation.
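One way to picture this family of defenses is reference-guided decoding: tokens to which the (possibly backdoored) target model assigns disproportionately higher probability than a clean reference model are treated as suspicious and replaced. The sketch below illustrates that general idea only; the threshold, helper callables, and replacement rule are hypothetical and not CleanGen's exact algorithm.

```python
def reference_guided_decode(prompt_tokens, target_lm, ref_lm, alpha=4.0, max_len=256):
    """Illustrative backdoor-robust decoding sketch: accept the target
    model's token only if a clean reference model does not find it wildly
    improbable; otherwise fall back to the reference model's choice.

    target_lm / ref_lm: hypothetical callables that map the tokens generated
    so far to a dict of next-token probabilities.
    """
    tokens = list(prompt_tokens)
    for _ in range(max_len):
        p_target = target_lm(tokens)
        cand = max(p_target, key=p_target.get)  # target model's greedy choice
        p_ref = ref_lm(tokens)
        # Suspicion score: how much more the target likes this token than the
        # clean reference does. Large ratios may indicate a backdoor payload.
        if p_target[cand] / max(p_ref.get(cand, 1e-9), 1e-9) > alpha:
            cand = max(p_ref, key=p_ref.get)  # substitute the reference choice
        tokens.append(cand)
        if cand == "<eos>":
            break
    return tokens
```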