Yuetai Li

PhD Student
University of Washington

About Me

I am a second-year PhD student in the Network Security Lab at the University of Washington, advised by Prof. Radha Poovendran. I am fortunate to work closely with Dr. Xiang Yue at CMU LTI, and with Zhangchen Xu, Prof. Luyao Niu, and Prof. Bill Yuchen Lin at UW.

Prior to UW, I obtained my Bachelor's degree in Communication Engineering from the University of Glasgow (UofG) and the University of Electronic Science and Technology of China (UESTC). During my undergrad, I was advised by Prof. Lei Zhang at UofG. I am also fortunate to work with Prof. Jon Crowcroft at the University of Cambridge.

I will be interning at Microsoft Research (Redmond) in Summer 2025.

I am open to collaboration and to discussing interesting ideas! Email yuetaili@uw.edu if you would like to share opportunities, explore collaborations, or just chat~

Research Interests

My primary interests lie broadly in LLM Reasoning, Synthetic Data, and Trustworthy AI. I am particularly interested in: (1) understanding the reasoning capabilities of LLMs through rigorous analysis, and (2) investigating synthetic datasets optimized for effective model learning.

Reasoning and Synthetic Data

We revealed that small models do not consistently benefit from long CoT or from distillation from large teachers. Instead, they perform better on shorter, simpler reasoning chains that better align with their intrinsic learning capacity. We term this phenomenon the Small Model Learnability Gap.
Small Model Learnability
We find that models trained with RL on math tasks generalize well to non-reasoning domains such as alignment, while SFT-trained models lose this capacity. Latent-space representation and token-space distribution shift analyses reveal that SFT induces substantial representation and output drift, while RL preserves general-domain structure. Finally, we show that the sampling policy is key to generalization: off-policy RL on reasoning tasks compromises non-reasoning performance, while on-policy SFT generalizes well.
Math Transferability
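To make the token-space shift analysis concrete, here is a minimal sketch of measuring per-token distribution drift between a base model and a fine-tuned model via KL divergence. The model names, prompt, and metric choice are illustrative assumptions, not the exact setup used in the paper, and the sketch assumes the fine-tuned checkpoint shares the base model's tokenizer.

```python
# Minimal sketch: quantify token-space distribution shift between a base model
# and a fine-tuned (SFT or RL) model on a non-reasoning prompt.
# Model names below are placeholders, not the paper's exact checkpoints.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "Qwen/Qwen2.5-1.5B-Instruct"      # assumed base model
tuned_name = "path/to/your-finetuned-model"   # assumed fine-tuned checkpoint

tok = AutoTokenizer.from_pretrained(base_name)
base = AutoModelForCausalLM.from_pretrained(base_name).eval()
tuned = AutoModelForCausalLM.from_pretrained(tuned_name).eval()

prompt = "Explain why honesty matters when answering user questions."
ids = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logp_base = F.log_softmax(base(**ids).logits, dim=-1)   # [1, T, V]
    logp_tuned = F.log_softmax(tuned(**ids).logits, dim=-1)

# KL(tuned || base) averaged over positions: larger values indicate greater
# drift of the tuned model's output distribution on general-domain text.
kl = (logp_tuned.exp() * (logp_tuned - logp_base)).sum(-1).mean()
print(f"Mean per-token KL(tuned || base): {kl.item():.4f}")
```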
We observed that during RL training of the Deepseek-R1-1.5B model, 76.7% of AIME problems were solved correctly at some intermediate checkpoint, yet only 30% remained correct in the final model; many problems answered correctly during training were ultimately wrong in the final checkpoint. We term this phenomenon Temporal Forgetting. Inspired by this, we proposed Temporal Sampling, which treats training dynamics as a source of answer diversity by distributing inference samples across multiple distinct checkpoints from the training trajectory, rather than relying solely on the single final checkpoint.
Temporal Sampling Overview
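A minimal sketch of the idea, with checkpoint paths, the prompt, the sampling settings, and the crude answer-extraction step all as placeholder assumptions: the inference budget is split across several checkpoints from one training run instead of being spent entirely on the final one, and the sampled answers are then aggregated, e.g. by majority vote.

```python
# Minimal sketch of Temporal Sampling: spread the sampling budget over several
# training checkpoints rather than only the final one, then aggregate answers.
from collections import Counter
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoints = [            # assumed intermediate checkpoints from one RL run
    "run/checkpoint-200",
    "run/checkpoint-400",
    "run/checkpoint-600",  # final checkpoint
]
budget = 12                # total samples, split evenly across checkpoints
prompt = "Solve: what is the remainder when 7^2024 is divided by 5?"

answers = []
for ckpt in checkpoints:
    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForCausalLM.from_pretrained(ckpt).eval()
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.8,
        max_new_tokens=512,
        num_return_sequences=budget // len(checkpoints),
    )
    for seq in out:
        text = tok.decode(seq[inputs["input_ids"].shape[1]:], skip_special_tokens=True)
        lines = text.strip().splitlines() or [""]
        answers.append(lines[-1])   # crude answer extraction for illustration

# Aggregate across checkpoints, e.g. by majority vote over extracted answers.
print(Counter(answers).most_common(1))
```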
We investigated how long-CoT reasoning impacts safety and revealed that long CoT does not necessarily enhance safety. We introduced SafeChain, a dataset designed to improve safety alignment while preserving reasoning capabilities.
ICLR BiAlign Workshop (Oral) 🏆 Best Honorable Mention
SafeChain
Our study reveals that over 38% of model responses are affected by false negatives in answer verification during RL training of LLMs, severely impairing training efficiency. We propose TinyV, a lightweight LLM-based verifier that augments existing rule-based methods to provide more accurate reward estimates.
TinyV Overview
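The sketch below illustrates the augmentation pattern described above: a rule-based check runs first, and a small LLM verifier is consulted only when the rule-based check rejects, which is where false negatives concentrate. The judge model name, prompt format, and normalization are assumptions for illustration, not the exact TinyV setup.

```python
# Minimal sketch: augment a rule-based verifier with a lightweight LLM verifier
# to recover false negatives when computing RL rewards.
import re
from transformers import pipeline

judge = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")  # placeholder verifier LLM

def rule_based_match(prediction: str, gold: str) -> bool:
    """Naive exact-match check after light normalization."""
    norm = lambda s: re.sub(r"\s+", "", s).lower()
    return norm(prediction) == norm(gold)

def llm_verify(question: str, prediction: str, gold: str) -> bool:
    """Ask a small LLM whether the prediction is equivalent to the gold answer."""
    prompt = (
        f"Question: {question}\nGold answer: {gold}\nModel answer: {prediction}\n"
        "Are the two answers equivalent? Reply Yes or No."
    )
    reply = judge(prompt, max_new_tokens=5)[0]["generated_text"][len(prompt):]
    return "yes" in reply.lower()

def reward(question: str, prediction: str, gold: str) -> float:
    # Fast path: rule-based check. Fall back to the LLM verifier only when the
    # rule-based check rejects, where most false negatives occur.
    if rule_based_match(prediction, gold):
        return 1.0
    return 1.0 if llm_verify(question, prediction, gold) else 0.0

print(reward("What is 1/2 + 1/2?", "one", "1"))  # rule check fails; LLM may recover it
```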
We designed a four-stage pipeline to generate VisualSphinx, a dataset of 660K visual logic puzzles for RL training of multimodal reasoning models. In Step 1, we collect 4K seed puzzles with explanations and abstract them into structured rule descriptions using LLMs. In Step 2, we apply a rule-level genetic algorithm to cross over, mutate, and diversify the seed rules, scaling them to 40K high-quality rules. In Step 3, each rule is paired with a rendering style and used to generate five correct and three incorrect images via LLM-generated Python scripts; the fifth correct image is designated as the answer option, while the three rule-breaking images serve as distractors. After deduplication, we obtain 110K image groups. In Step 4, we assemble puzzles from each group using three complementary strategies: default assembly, shuffled answer variants, and expanded distractor sets.
VisualSphinx
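As a rough illustration of the rule-level genetic algorithm in Step 2, the sketch below uses an LLM as the crossover and mutation operator over textual rule descriptions. The model name, prompts, seed rules, and loop size are illustrative assumptions; the real pipeline additionally filters and deduplicates rules before scaling to 40K.

```python
# Minimal sketch: a rule-level genetic algorithm where an LLM performs
# crossover and mutation over textual puzzle-rule descriptions.
import random
from transformers import pipeline

llm = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")  # placeholder

def llm_complete(prompt: str) -> str:
    out = llm(prompt, max_new_tokens=128, do_sample=True, temperature=0.9)
    return out[0]["generated_text"][len(prompt):].strip()

def crossover(rule_a: str, rule_b: str) -> str:
    return llm_complete(
        "Combine the two visual-puzzle rules below into one new, coherent rule.\n"
        f"Rule A: {rule_a}\nRule B: {rule_b}\nNew rule:"
    )

def mutate(rule: str) -> str:
    return llm_complete(f"Rewrite this visual-puzzle rule with one new twist.\nRule: {rule}\nNew rule:")

seed_rules = [
    "Each panel rotates the shape 90 degrees clockwise.",
    "The number of dots increases by one in every panel.",
]
population = list(seed_rules)
for _ in range(3):                  # a few generations for illustration
    a, b = random.sample(population, 2)
    child = mutate(crossover(a, b))
    population.append(child)        # the real pipeline would filter/deduplicate here
print(population[-1])
```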

Trustworthy AI

We propose CleanGen, a novel decoding algorithm that defends against various generative backdoor attacks, including advertisement injection, code injection, and malicious content generation.
CleanGen