[Paper] Revisiting Generalization Across Difficulty Levels: It's Not So Easy

Published: November 26, 2025 at 01:59 PM EST
1 min read
Source: arXiv - 2511.21692v1

Overview

We investigate how well large language models (LLMs) generalize across different task difficulties, a key question for effective data curation and evaluation. Existing research is mixed on whether training on easier or harder data leads to better results, and on whether any resulting gains appear on easier or harder test data.

We address this question by conducting a systematic evaluation of LLMs’ generalization across models, datasets, and fine‑grained groups of example difficulty. We rank examples in six datasets using the outputs of thousands of different LLMs and Item Response Theory (IRT), a well‑established method for estimating difficulty in educational testing. Unlike prior work, our difficulty ratings are therefore determined solely by the abilities of many different LLMs, excluding human opinions of difficulty.
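
For readers less familiar with IRT, a common parameterization is the two‑parameter logistic (2PL) model sketched below; this summary does not state which IRT variant the paper fits, so the exact form here is an illustrative assumption rather than the authors' specification.

```latex
% Two-parameter logistic (2PL) IRT model (illustrative; the paper's exact
% variant is not specified in this summary). Here y_{ij} = 1 if LLM j
% answers item i correctly, \theta_j is the model's latent ability,
% b_i is the item's difficulty, and a_i its discrimination.
P(y_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\!\left(-a_i\,(\theta_j - b_i)\right)}
```

Under a model of this kind, fitting the parameters to the correctness matrix of many LLMs yields a difficulty estimate b_i for every example, so examples can be ranked by difficulty without any human judgments.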

With a more objective, larger‑scale, and finer‑grained analysis, we show that cross‑difficulty generalization is often limited: training on either easy or hard data does not yield consistent improvements across the full range of difficulties. These results highlight the importance of including a range of difficulties in both the training and evaluation data for LLMs, and show that taking shortcuts with respect to difficulty is risky.

Authors

  • Yeganeh Kordi
  • Nihal V. Nayak
  • Max Zuo
  • Ilana Nguyen
  • Stephen H. Bach

Categories

  • cs.CL
  • cs.AI

Paper Information

  • arXiv ID: 2511.21692v1
  • Published: November 27, 2025
  • PDF: Download PDF