We reveal that small models do not consistently benefit from long CoT or distillation from large teachers. Instead, they learn more effectively from shorter, simpler reasoning chains that better align with their intrinsic learning capacity. We term this phenomenon the Small Model Learnability Gap.
We find that models trained with RL on math tasks generalize well to non-reasoning domains such as alignment, whereas SFT models lose this capability. Latent-space representation and token-space distribution-shift analyses reveal that SFT induces substantial representation and output drift, while RL preserves general-domain structure. Finally, we show that the sampling policy is key to generalization: off-policy RL on reasoning tasks compromises non-reasoning performance, while on-policy SFT generalizes well.
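A minimal sketch of this kind of drift analysis, assuming Hugging Face `transformers`-style causal LMs that share a tokenizer; the concrete metrics here (cosine similarity of last-layer hidden states for latent-space drift, per-token KL divergence of next-token distributions for token-space drift) are illustrative choices, not necessarily the exact analysis used in the study.

```python
# Illustrative drift analysis (metric choices are assumptions, not the paper's
# exact method): compare a base model against a fine-tuned model by
# (1) cosine similarity of last-layer hidden states -> latent-space drift
# (2) per-token KL divergence of next-token logits  -> token-space drift
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def drift_metrics(base_name, tuned_name, prompts, device="cuda"):
    tok = AutoTokenizer.from_pretrained(base_name)  # assumes a shared tokenizer
    base = AutoModelForCausalLM.from_pretrained(base_name).to(device).eval()
    tuned = AutoModelForCausalLM.from_pretrained(tuned_name).to(device).eval()

    cos_sims, kls = [], []
    for prompt in prompts:
        ids = tok(prompt, return_tensors="pt").to(device)
        out_b = base(**ids, output_hidden_states=True)
        out_t = tuned(**ids, output_hidden_states=True)

        # Latent-space drift: how similarly do the two models represent the same tokens?
        h_b = out_b.hidden_states[-1]  # (1, seq_len, hidden_dim)
        h_t = out_t.hidden_states[-1]
        cos_sims.append(F.cosine_similarity(h_b, h_t, dim=-1).mean().item())

        # Token-space drift: mean per-token KL(base || tuned) over the vocabulary.
        logp_b = F.log_softmax(out_b.logits[0], dim=-1)  # (seq_len, vocab)
        logp_t = F.log_softmax(out_t.logits[0], dim=-1)
        kls.append(F.kl_div(logp_t, logp_b, log_target=True,
                            reduction="batchmean").item())

    return sum(cos_sims) / len(cos_sims), sum(kls) / len(kls)
```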
We observed that during RL training of the Deepseek-R1-1.5B model, 76.7% of AIME problems were solved correctly at some intermediate checkpoint, yet only 30% remained correct in the final model: many problems answered correctly during training end up wrong at the final checkpoint. We term this phenomenon Temporal Forgetting. Inspired by this, we propose Temporal Sampling, which exploits training dynamics as a source of answer diversity by distributing inference samples across multiple distinct checkpoints from the training trajectory rather than relying solely on the final checkpoint.
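A minimal sketch of Temporal Sampling under these assumptions: several checkpoints saved along the RL trajectory are available, and `generate_answer` is a hypothetical helper that draws one sampled response from a given checkpoint (plug in your own inference backend).

```python
# Temporal Sampling sketch: spread the inference budget round-robin across
# checkpoints from the training trajectory instead of sampling everything
# from the final checkpoint. `generate_answer` is a hypothetical helper.
from itertools import cycle

def temporal_sampling(problem, checkpoint_paths, num_samples, generate_answer):
    """Return `num_samples` responses for `problem`, cycling over checkpoints."""
    answers = []
    for ckpt in cycle(checkpoint_paths):
        if len(answers) >= num_samples:
            break
        answers.append(generate_answer(ckpt, problem))
    return answers

# Example: 8 samples spread over the last three saved checkpoints, then
# aggregated (e.g., majority voting) so answers "forgotten" by the final
# model can still be recovered.
# answers = temporal_sampling(problem, ["step-600", "step-800", "step-1000"],
#                             num_samples=8, generate_answer=generate_answer)
```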
We investigated how long CoT impacts safety and found that it does not necessarily enhance safety. We introduced SafeChain, a dataset designed to improve safety alignment while preserving reasoning capabilities.
ICLR BiAlign Workshop (Oral)
🏆 Best Honorable Mention
Our study reveals that over 38% of model responses are affected by false negatives in answer verification during RL training of LLMs, severely impairing training efficiency. We propose TinyV, a lightweight LLM-based verifier that augments existing rule-based methods to provide more accurate reward estimates.
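A minimal sketch of how such a verifier can augment rule-based checking (function names are hypothetical, not TinyV's actual API): the rule-based check runs first, and only rejected responses are escalated to the lightweight LLM verifier, so false negatives can be recovered without re-judging every response.

```python
# Hypothetical verification flow (names are not TinyV's actual API): escalate
# only rule-based rejections to the LLM verifier to recover false negatives.
def compute_reward(question, ground_truth, model_answer,
                   rule_based_check, llm_verifier):
    """Return 1.0 if the answer is judged correct, else 0.0 (binary RL reward)."""
    if rule_based_check(ground_truth, model_answer):
        return 1.0  # rule-based match: accept immediately, no LLM call needed
    # A rule-based rejection may be a false negative (e.g., an equivalent but
    # differently formatted answer), so re-check it with the LLM verifier.
    if llm_verifier(question, ground_truth, model_answer):
        return 1.0
    return 0.0
```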
A four-stage pipeline generates VisualSphinx, a dataset of 660K visual logic puzzles for RL training of multimodal reasoning models. In Step 1, we collect 4K seed puzzles with explanations and abstract them into structured rule descriptions using LLMs. In Step 2, we apply a rule-level genetic algorithm to cross over, mutate, and diversify the seed rules, scaling them to 40K high-quality rules. In Step 3, each rule is paired with a rendering style and used to generate five correct and three incorrect images via LLM-generated Python scripts; the fifth correct image is designated as the answer option, while the three rule-breaking images serve as distractors. After deduplication, we obtain 110K image groups. In Step 4, we assemble puzzles from each group using three complementary strategies: default assembly, shuffled answer variants, and expanded distractor sets.
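A minimal sketch of the Step 4 assembly, assuming each deduplicated image group is a dict with five rule-following images under `correct` and three rule-breaking images under `incorrect`; the data layout and function names are illustrative, not the released pipeline's code.

```python
# Illustrative Step-4 assembly (data layout is an assumption): the first four
# correct images form the visible sequence, the fifth is the ground-truth
# continuation, and the three rule-breaking images serve as distractors.
import random

def assemble_puzzle(group, rng=random):
    """Default assembly: build one multiple-choice puzzle from an image group."""
    sequence = group["correct"][:4]        # images shown to the model, in order
    answer = group["correct"][4]           # the image that continues the rule
    options = [answer] + list(group["incorrect"][:3])
    rng.shuffle(options)
    return {
        "sequence": sequence,
        "options": options,
        "answer_index": options.index(answer),
    }

def shuffled_variants(group, num_variants=3, seed=0):
    """Shuffled-answer-variant strategy: same puzzle, different option orders."""
    rng = random.Random(seed)
    return [assemble_puzzle(group, rng=rng) for _ in range(num_variants)]
```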