The Appeal and Reality of Recycling LoRAs with Adaptive Merging

Published: February 25, 2026

Source: Hacker News

Abstract

The widespread availability of fine‑tuned LoRA modules for open pre‑trained models has led to an interest in methods that can adaptively merge LoRAs to improve performance. These methods typically include some way of selecting LoRAs from a pool and tuning merging coefficients based on a task‑specific dataset.
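To make the setup concrete, here is a minimal sketch of coefficient-weighted LoRA merging. The dimensions, the softmax normalization of coefficients, and the `merge` helper are illustrative assumptions, not the paper's exact method; in practice the coefficient logits would be tuned on the task-specific dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank, n_loras = 8, 6, 2, 3  # toy dimensions (hypothetical)

# Pool of LoRA factor pairs; each LoRA contributes a low-rank update B @ A.
pool = [(rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in)))
        for _ in range(n_loras)]

def merge(pool, logits):
    """Merge LoRA deltas with softmax-normalized coefficients (assumed scheme)."""
    coeffs = np.exp(logits - logits.max())
    coeffs /= coeffs.sum()
    return sum(c * B @ A for c, (B, A) in zip(coeffs, pool))

logits = np.zeros(n_loras)   # equal weights; would be tuned on task data
delta_W = merge(pool, logits)
print(delta_W.shape)         # one merged update of the base weight matrix
```

With equal logits the merge reduces to a plain average of the individual deltas; tuning the logits lets the method up- or down-weight LoRAs in the pool.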

While adaptive merging methods have demonstrated improvements in some settings, no past work has attempted to recycle LoRAs found “in the wild” on model repositories like the Hugging Face Hub. To address this gap, we consider recycling from a pool of nearly 1,000 user‑contributed LoRAs fine‑tuned from the Llama 3.1 8B‑Instruct language model.

Our empirical study includes a range of adaptive and non‑adaptive merging methods, as well as a new method designed via a wide search over the methodological design space. We demonstrate that adaptive merging methods can improve performance over the base model but provide limited benefit over training a new LoRA on the same data used to set merging coefficients.

We additionally find that the specific choice of LoRAs to merge has little importance, and that using LoRAs with randomly initialized parameter values yields similar performance. This raises the possibility that adaptive merging from recycled LoRAs primarily works via some kind of regularization effect, rather than by enabling positive cross‑task transfer.

To better understand why past work has proven successful, we confirm that positive transfer is indeed possible when there are highly relevant LoRAs in the pool. We release the model checkpoints and code online.
