[Paper] Optimal Splitting of Language Models from Mixtures to Specialized Domains

Published: March 19, 2026 at 01:07 PM EDT
2 min read
Source: arXiv


Overview

Language models achieve impressive performance on a variety of knowledge, language, and reasoning tasks due to the scale and diversity of available pretraining data. The standard training recipe is a two-stage paradigm: pretraining on the full corpus, followed by specialization on a high-quality, specialized subset of that corpus. In the multi-domain setting, this involves continued pretraining of multiple models, one per specialized domain, referred to as split model training. We propose a method for pretraining multiple models independently over a general pretraining corpus and determining the optimal compute allocation between pretraining and continued pretraining using scaling laws. Our approach accurately predicts the loss of a model of size N trained with D pretraining and D' specialization tokens, and extrapolates to larger model sizes and token counts. Applied to language model training, it consistently improves performance on commonsense knowledge and reasoning benchmarks across model sizes and compute budgets.
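The idea of predicting specialized-domain loss from (N, D, D') and then searching for the best token split can be sketched as follows. The functional form and all constants below are illustrative assumptions (a Chinchilla-style sum of power laws with an added specialization term), not the paper's fitted law:

```python
# Hypothetical two-stage scaling law: predict specialized-domain loss from
# model size N, pretraining tokens D, and specialization tokens D'.
# The additive power-law form and every constant are assumptions for
# illustration; the paper fits its own law to measured losses.

def predicted_loss(N, D, D_spec,
                   E=1.7,                 # irreducible loss (assumed)
                   A=400.0, alpha=0.34,   # model-size term (assumed)
                   B=600.0, beta=0.28,    # pretraining-data term (assumed)
                   C=150.0, gamma=0.25):  # specialization-data term (assumed)
    """L(N, D, D') = E + A/N^alpha + B/D^beta + C/D'^gamma (illustrative)."""
    return E + A / N**alpha + B / D**beta + C / D_spec**gamma

def best_split(total_tokens, N, fractions=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Grid-search the pretraining fraction of a fixed token budget that
    minimizes the predicted specialized-domain loss."""
    return min(fractions,
               key=lambda f: predicted_loss(N, f * total_tokens,
                                            (1 - f) * total_tokens))
```

For example, `best_split(1e11, 1e9)` would return the pretraining fraction that the (assumed) law predicts is best for a 1B-parameter model with a 100B-token budget; in practice the law's coefficients are fit on small-scale runs and then extrapolated.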

Key Contributions

This paper presents the following contributions:

  • A split model training recipe: pretrain multiple models independently on a general corpus, then continue pretraining each on a specialized domain.
  • A scaling law that accurately predicts the loss of a model of size N trained with D pretraining and D' specialization tokens, and extrapolates to larger model sizes and token counts.
  • A method for determining the optimal compute allocation between pretraining and continued pretraining from this scaling law.
  • Consistent gains on commonsense knowledge and reasoning benchmarks across model sizes and compute budgets.

Methodology

The authors fit a scaling law for the loss of a model of size N trained with D pretraining tokens and D' specialization tokens, then use it to choose the compute split between general pretraining and domain-specific continued pretraining. Please refer to the full paper for the detailed methodology.

Practical Implications

This research shows how to allocate a fixed training budget between general pretraining and domain specialization, making multi-domain language model training more compute-efficient at a given budget.

Authors

  • Skyler Seto
  • Pierre Ablin
  • Anastasiia Filippova
  • Jiayuan Ye
  • Louis Bethune
  • Angelos Katharopoulos
  • David Grangier

Paper Information

  • arXiv ID: 2603.19149v1
  • Categories: cs.CL, cs.LG
  • Published: March 19, 2026