[Paper] A Unified Understanding of Offline Data Selection and Online Self-refining Generation for Post-training LLMs

Published: November 25, 2025 at 11:48 PM EST
1 min read
Source: arXiv

Abstract

Offline data selection and online self‑refining generation, both of which enhance data quality, are crucial steps in adapting large language models (LLMs) to specific downstream tasks. We tackle both from an optimization perspective. Specifically, offline data selection is cast as bilevel data selection with respect to a validation dataset, and online self‑refining generation is treated as a model adaptation step that selects the model trained on current responses that best fits the validation data. Our framework offers a unified understanding of offline data selection and self‑refining generation by assigning a learned data weight, explicitly or implicitly, to each question and response. For the first time, we theoretically establish the effectiveness of the bilevel data selection framework and demonstrate its performance gains over unfiltered direct mixing baselines. By combining offline data with validation‑weighted online generations, our method enhances fine‑tuning performance. Experiments on quality enhancement and safety‑aware LLM fine‑tuning validate its effectiveness.
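The core idea of bilevel data selection, learning a weight for each training example so that training on the weighted set minimizes loss on a clean validation set, can be illustrated with a toy sketch. This is not the paper's algorithm: the scalar linear model, softmax-parameterized weights, and one-step-unrolled hypergradient for the outer update are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: true model is y = 2x; half of the training set is corrupted.
x_tr = rng.normal(size=20)
y_tr = 2 * x_tr
y_tr[10:] = rng.normal(size=10) * 5          # corrupted labels
x_val = rng.normal(size=5)
y_val = 2 * x_val                             # clean validation set

theta = 0.0                                   # model parameter (scalar)
alpha = np.zeros(20)                          # logits of per-example data weights
lr_theta, lr_alpha = 0.05, 0.5

for _ in range(500):
    lam = np.exp(alpha) / np.exp(alpha).sum()               # softmax data weights
    # Inner step: one gradient step on the weighted training loss.
    g_theta = np.sum(lam * 2 * (theta * x_tr - y_tr) * x_tr)
    theta_new = theta - lr_theta * g_theta
    # Outer step: one-step-unrolled hypergradient of the validation loss
    # with respect to the data weights (then chain through the softmax).
    g_val = np.mean(2 * (theta_new * x_val - y_val) * x_val)
    d_theta_d_lam = -lr_theta * 2 * (theta * x_tr - y_tr) * x_tr
    g_lam = g_val * d_theta_d_lam
    g_alpha = lam * (g_lam - np.sum(lam * g_lam))           # softmax Jacobian
    alpha -= lr_alpha * g_alpha
    theta = theta_new

lam = np.exp(alpha) / np.exp(alpha).sum()
val_loss = float(np.mean((theta * x_val - y_val) ** 2))
print("theta:", theta, "val loss:", val_loss)
```

Examples whose training gradient aligns with the validation gradient are upweighted, which is the implicit selection mechanism the abstract describes; the paper's full method applies this at the level of LLM fine-tuning data and generated responses.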

Subjects

  • Machine Learning (cs.LG)
  • Computation and Language (cs.CL)
  • Optimization and Control (math.OC)

Citation

arXiv: 2511.21056 (cs.LG)
DOI: https://doi.org/10.48550/arXiv.2511.21056

Submission History

  • v1, Wed, 26 Nov 2025 04:48:33 UTC (5,430 KB) – submitted by Quan Xiao.