[Paper] Reuse your FLOPs: Scaling RL on Hard Problems by Conditioning on Very Off-Policy Prefixes

Published: January 26, 2026 at 01:57 PM EST
2 min read
Source: arXiv - 2601.18795v1

Overview

Typical reinforcement learning (RL) methods for LLM reasoning waste compute on hard problems, where correct on-policy traces are rare, policy gradients vanish, and learning stalls. To bootstrap more efficient RL, we consider reusing old sampling FLOPs (from prior inference or RL training) in the form of off-policy traces. Standard off-policy methods supervise against off-policy data, causing instabilities during RL optimization. We introduce PrefixRL, which conditions on the prefix of a successful off-policy trace and runs on-policy RL to complete it, side-stepping off-policy instabilities. PrefixRL boosts the learning signal on hard problems by using the off-policy prefix length to modulate problem difficulty.

We prove that the PrefixRL objective is not only consistent with the standard RL objective but also more sample efficient. Empirically, we discover back-generalization: training only on prefixed problems generalizes to out-of-distribution unprefixed performance, with learned strategies often differing from those in the prefix.

In our experiments, we source the off-policy traces by rejection sampling with the base model, creating a self-improvement loop. On hard reasoning problems, PrefixRL reaches the same training reward 2× faster than the strongest baseline (SFT on off-policy data followed by RL), even after accounting for the compute spent on the initial rejection sampling, and increases the final reward. The gains transfer to held-out benchmarks, and PrefixRL remains effective when off-policy traces are derived from a different model family, validating its flexibility in practical settings.
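As a rough illustration of the core idea, the sketch below shows a single PrefixRL step under stated assumptions: the policy, verifier, and update routine (policy_generate, reward, policy_update) are hypothetical placeholders, not the paper's implementation. The structural point is that the policy is conditioned on a prefix cut from a successful off-policy trace, the completion is generated on-policy, and only that completion is optimized.

```python
import random

# Hypothetical stand-ins; none of these names come from the paper's code.
def policy_generate(prompt_tokens, prefix_tokens):
    """Placeholder: on-policy completion conditioned on problem + prefix."""
    return prefix_tokens + ["<completion>"]

def reward(problem, trace):
    """Placeholder verifier: 1.0 if the trace solves the problem, else 0.0."""
    return float("<completion>" in trace)

def policy_update(problem, prefix, completion, r):
    """Placeholder: policy-gradient step on the on-policy completion only."""
    pass

def prefixrl_step(problem, successful_trace, prefix_fraction):
    # Condition on a prefix of a successful off-policy trace ...
    cut = int(len(successful_trace) * prefix_fraction)
    prefix = successful_trace[:cut]
    # ... then complete it on-policy and score the full trace.
    completion = policy_generate(problem, prefix)
    r = reward(problem, completion)
    # Gradients flow only through the on-policy completion,
    # side-stepping supervision against off-policy data.
    policy_update(problem, prefix, completion, r)
    return r

# The prefix fraction is the difficulty knob: longer prefixes make hard
# problems tractable. How it is scheduled is a design choice; the paper's
# back-generalization result suggests prefixed-only training still transfers
# to the unprefixed problem.
problem = "prove the inequality ..."
trace = ["step1", "step2", "step3", "step4"]
print(prefixrl_step(problem, trace, prefix_fraction=random.choice([0.25, 0.5, 0.75])))
```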

Key Contributions

  • PrefixRL: condition on the prefix of a successful off-policy trace and run on-policy RL to complete it, reusing prior sampling FLOPs while side-stepping off-policy instabilities.
  • Difficulty modulation: the off-policy prefix length controls effective problem difficulty, restoring the learning signal on hard problems.
  • Theory: the PrefixRL objective is consistent with the standard RL objective and provably more sample efficient.
  • Back-generalization: training only on prefixed problems generalizes to out-of-distribution unprefixed performance, often with strategies that differ from those in the prefix.
  • Efficiency: reaches the same training reward 2× faster than SFT-then-RL on hard reasoning problems, even after accounting for rejection-sampling compute, and stays effective with traces from a different model family.

Methodology

PrefixRL starts from successful off-policy traces, obtained here by rejection sampling with the base model so that the method forms a self-improvement loop. The policy is conditioned on a prefix of each trace and standard on-policy RL completes the solution; only the on-policy completion is optimized, avoiding the instabilities of supervising directly against off-policy data, and the prefix length serves as a knob for problem difficulty. Please refer to the full paper for the detailed methodology and proofs.
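A minimal sketch of how the off-policy traces could be sourced by rejection sampling with the base model, assuming hypothetical base_generate and verify functions (neither name is from the paper): successful traces are kept and later supply the prefixes that PrefixRL conditions on.

```python
from typing import Callable, Dict, List

def rejection_sample_traces(problems: List[str],
                            base_generate: Callable[[str], List[str]],
                            verify: Callable[[str, List[str]], bool],
                            samples_per_problem: int = 8) -> Dict[str, List[str]]:
    """Collect successful base-model traces; these become the off-policy
    prefixes for PrefixRL. All names here are illustrative placeholders."""
    successes: Dict[str, List[str]] = {}
    for problem in problems:
        for _ in range(samples_per_problem):
            trace = base_generate(problem)   # sample from the base model
            if verify(problem, trace):       # keep only verified-correct traces
                successes[problem] = trace
                break
    return successes

# Toy stand-ins so the sketch runs end to end.
problems = ["p1", "p2"]
base_generate = lambda p: [p, "reasoning", "answer"]
verify = lambda p, t: t[-1] == "answer"
print(rejection_sample_traces(problems, base_generate, verify))
```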

Practical Implications

PrefixRL lets practitioners reuse sampling FLOPs from prior inference or RL runs to make RL on hard reasoning problems more compute-efficient, and it remains effective when the off-policy traces come from a different model family, making it practical for bootstrapping self-improvement loops on top of existing trace collections.

Authors

  • Amrith Setlur
  • Zijian Wang
  • Andrew Cohen
  • Paria Rashidinejad
  • Sang Michael Xie

Paper Information

  • arXiv ID: 2601.18795v1
  • Categories: cs.LG, cs.AI, cs.CL
  • Published: January 26, 2026
  • PDF: Download PDF