[Paper] From Prompt to Protocol: Fast Charging Batteries with Large Language Models

Published: January 14, 2026 at 11:58 AM EST
4 min read
Source: arXiv (2601.09626v1)

Overview

The paper presents a novel way to design fast‑charging protocols for lithium‑ion batteries by harnessing large language models (LLMs). Instead of manually crafting or exhaustively searching for charging curves, the authors let an LLM generate candidate protocols (as code or mathematical functions) and then evaluate them in a closed‑loop, gradient‑free optimization pipeline. The approach yields measurable gains in battery health while keeping the number of expensive simulations or experiments low.

Key Contributions

  • LLM‑driven protocol synthesis: Introduces two methods—Prompt‑to‑Optimizer (P2O) and Prompt‑to‑Protocol (P2P)—that let a language model write executable charging‑policy code or explicit current‑vs‑time functions.
  • Gradient‑free closed‑loop optimization: Couples the LLM’s suggestions with an inner training/evaluation loop, avoiding the need for differentiable models of battery dynamics.
  • Empirical superiority: Shows that P2O outperforms Bayesian optimization, evolutionary algorithms, and random search, all operating on fixed neural‑network architectures, on benchmark charging tasks.
  • Real‑world impact: Demonstrates a ~4.2 % improvement in state‑of‑health (SOH) over a strong multi‑step constant‑current baseline, using the same evaluation budget as traditional methods.
  • Flexibility & constraints: Highlights how natural‑language prompts can embed domain constraints (e.g., safety limits, hardware capabilities) directly into the search space.
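As a concrete illustration of the last point, a prompt with embedded safety limits might be assembled as below. The function name, limit values, and wording are illustrative assumptions, not taken from the paper:

```python
# Sketch of embedding domain constraints into an LLM prompt, in the spirit
# of the paper's P2P protocol synthesis. All names and limit values here
# are hypothetical.

def build_protocol_prompt(max_current_a, max_voltage_v, max_temp_c, charge_time_s):
    """Assemble a natural-language prompt asking the LLM to write an
    explicit current-vs-time function subject to hardware/safety limits."""
    return (
        "Write a Python function protocol(t) returning the charging current "
        f"in amperes at time t (seconds), for a charge lasting {charge_time_s} s.\n"
        "Constraints:\n"
        f"- Current must never exceed {max_current_a} A.\n"
        f"- Terminal voltage must stay below {max_voltage_v} V.\n"
        f"- Cell temperature must stay below {max_temp_c} C.\n"
        "Use only a handful of scalar parameters so an outer optimizer can tune them."
    )

prompt = build_protocol_prompt(6.0, 4.2, 45.0, 1800)
```

Because the constraints live in plain text, swapping in different hardware limits is a one-line change rather than a redesign of the search space.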

Methodology

  1. Problem framing – Battery charging is modeled as a black‑box function: given a charging protocol (current as a function of time), the simulator returns a health metric (SOH). The function is expensive to evaluate and non‑differentiable.
  2. Prompt‑to‑Optimizer (P2O)
    • The LLM receives a textual prompt describing the desired protocol structure (e.g., “small neural network that maps time to current”).
    • It generates Python code for a lightweight neural network (typically a few dense layers).
    • An inner optimization loop trains this network on a small set of simulated charging cycles, adjusting its weights to maximize SOH.
    • The trained network becomes a candidate protocol; its performance is logged, and the best candidates are fed back into the next prompting round.
  3. Prompt‑to‑Protocol (P2P)
    • The LLM is asked to write an explicit analytical function (e.g., a piecewise linear or polynomial expression) with a handful of scalar parameters.
    • A simple optimizer (grid search or CMA‑ES) tweaks those scalars, evaluating the resulting protocol each time.
  4. Closed‑loop iteration – After each batch of evaluations, the system updates the prompt (e.g., “try a deeper network” or “add a plateau”) based on observed performance, letting the LLM explore new functional forms.
  5. Baselines – The authors compare against Bayesian optimization, evolutionary strategies, and random search that all operate on a fixed‑shape neural network architecture.
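The closed loop above can be sketched in miniature. In this toy version of the P2P path, a two‑step analytical protocol with three scalar parameters is tuned by random search against a stand‑in SOH score; the protocol form, the degradation surrogate, and all numeric values are assumptions for illustration, where the paper would use an expensive battery simulator and a stronger optimizer such as CMA‑ES or LLM‑guided re‑prompting:

```python
import math
import random

def make_protocol(i_high, i_low, t_switch):
    """Two-step protocol: high current, then step down to a lower current."""
    def protocol(t):
        return i_high if t < t_switch else i_low
    return protocol

def soh_surrogate(protocol, horizon=1800.0, dt=10.0):
    """Hypothetical black-box health metric: rewards delivered charge and
    penalizes time spent above 4 A (a crude degradation proxy)."""
    charge, stress, t = 0.0, 0.0, 0.0
    while t < horizon:
        i = protocol(t)
        charge += i * dt
        stress += max(0.0, i - 4.0) ** 2 * dt
        t += dt
    return charge - 0.5 * stress

def random_search(n_evals=200, seed=0):
    """Gradient-free outer loop: sample scalar parameters, keep the best."""
    rng = random.Random(seed)
    best_params, best_score = None, -math.inf
    for _ in range(n_evals):
        params = (rng.uniform(3.0, 6.0),      # i_high [A]
                  rng.uniform(1.0, 3.0),      # i_low [A]
                  rng.uniform(300.0, 1500.0)) # t_switch [s]
        score = soh_surrogate(make_protocol(*params))
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The key structural point matches the paper: only black‑box evaluations of the protocol are needed, so nothing in the loop requires the battery model to be differentiable.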

Results & Findings

| Method | Evaluation budget | SOH improvement vs. baseline |
| --- | --- | --- |
| Multi‑step constant current (state of the art) | n/a | 0 % (reference) |
| Random search (fixed NN) | Same as P2O/P2P | +1.8 % |
| Bayesian optimization (fixed NN) | Same | +2.3 % |
| Evolutionary algorithm (fixed NN) | Same | +2.6 % |
| P2O (LLM‑generated NN) | Same | +4.2 % |
| P2P (LLM‑written function) | Same | +4.2 % |
  • P2O discovered neural‑network architectures that were more expressive than those explored by the baselines, leading to the highest SOH gain.
  • P2P matched P2O’s performance while using a simpler functional form, showing that LLMs can directly propose effective analytical protocols without a training loop.
  • Both methods required roughly the same number of costly battery simulations as the traditional baselines, showing that the approach is budget‑efficient.

Practical Implications

  • Accelerated R&D: Battery manufacturers can plug an LLM into their simulation pipelines to generate novel charging curves, reducing the time spent on manual trial‑and‑error.
  • Customizable safety constraints: Engineers can embed hardware limits, temperature caps, or regulatory rules directly into the prompt, ensuring generated protocols are compliant from the start.
  • Cross‑domain applicability: The same prompt‑to‑optimizer pattern can be reused for other expensive, black‑box control problems (e.g., power‑grid dispatch, HVAC scheduling, or autonomous vehicle motion planning).
  • Toolchain integration: Because the LLM outputs executable code (Python/NumPy), it can be dropped into existing simulation frameworks (e.g., PyBaMM) with minimal friction.
  • Cost savings: Fewer physical experiments are needed to reach a high‑performing protocol, translating into lower material and labor expenses for prototype testing.
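One practical guardrail when dropping LLM‑generated protocols into a pipeline is a post‑hoc safety check before any simulator or hardware run. The sketch below is an assumed addition, not something the paper specifies; the limit values and sampling grid are illustrative:

```python
# Illustrative post-hoc safety check for an LLM-generated protocol.
# Limits and grid spacing are assumptions for the sketch.

def validate_protocol(protocol, max_current_a=6.0, horizon_s=1800.0, dt=1.0):
    """Sample the candidate current profile on a time grid and reject it if
    any point exceeds the current limit or is negative (i.e., discharging)."""
    t = 0.0
    while t <= horizon_s:
        i = protocol(t)
        if i < 0.0 or i > max_current_a:
            return False
        t += dt
    return True

# A compliant constant-current candidate passes; an over-limit one does not.
assert validate_protocol(lambda t: 4.0)
assert not validate_protocol(lambda t: 8.0)
```

Constraints stated in the prompt make violations rarer, but an explicit check like this keeps a malformed generation from ever reaching an expensive evaluation.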

Limitations & Future Work

  • Simulation fidelity: The study relies on high‑quality battery simulators; real‑world hardware validation is still required to confirm transferability.
  • Prompt engineering overhead: Crafting effective prompts and interpreting LLM outputs can be non‑trivial, especially for teams without NLP expertise.
  • Scalability of inner training: While the neural networks are small, training them repeatedly may become a bottleneck for larger search spaces or more complex battery chemistries.
  • Generalization: The methods were evaluated on a specific fast‑charging scenario; extending to other chemistries, temperature regimes, or long‑term aging models remains an open question.
  • Future directions suggested by the authors include: integrating uncertainty quantification into the LLM‑generated protocols, coupling with active learning to decide which simulations to run next, and exploring multimodal prompts (e.g., combining textual constraints with sketch‑based waveform hints).

Authors

  • Ge Lei
  • Ferran Brosa Planella
  • Sterling G. Baird
  • Samuel J. Cooper

Paper Information

  • arXiv ID: 2601.09626v1
  • Categories: cs.LG, cs.AI, eess.SY
  • Published: January 14, 2026