The future of AI-powered software optimization (and how it can help your team)

Published: December 12, 2025 at 03:43 PM EST
4 min read

Source: GitHub Blog

Making the case for Continuous Efficiency

We believe that once it’s ready for broader adoption, Continuous Efficiency will have a significant positive impact on developers, businesses, and sustainability.

For developers

Digital sustainability and green software are intrinsically aligned with “efficiency,” which is at the core of software engineering. Many developers would benefit from more performant software, better standardization of code, change‑quality assurance, and more.

For businesses

Building for sustainability has measurable business value, including:

  • Reduced power and resource consumption
  • Increased efficiency
  • Better code quality
  • Improved user experience
  • Lower costs

Despite this, sustainability rarely makes it onto roadmaps, priority lists, or backlogs. Imagine a world where the codebase could continuously improve itself…

A graphic showing Continuous Efficiency = Green Software + Continuous AI

Continuous Efficiency means effortless, incremental, validated improvements that make codebases more efficient. It’s an emergent practice based on a set of tools and techniques that we are starting to develop and hope the developer community will expand on.

The practice sits at the intersection of Continuous AI and Green Software.

  • Continuous AI – AI‑enriched automation for software collaboration, exploring LLM‑powered automation in platform‑based development and CI/CD workflows.
  • Green Software – Software designed to be more energy‑efficient and have a lower environmental impact, resulting in cheaper, more performant, and more resilient applications.

Continuous Efficiency in (GitHub) Action(s)

While Continuous Efficiency is a generally applicable concept, we have been building implementations on specific GitHub platform infrastructure called Agentic Workflows. It’s publicly available and open source, currently in “research demonstrator” status (an experimental prototype, pre‑release, and subject to change and errors). Agentic Workflows is a framework for exploring proactive, automated, event‑driven agentic behaviors in GitHub repositories, running safely in GitHub Actions.

Our work in this space has focused on two areas:

Implementing rules and standards

With modern LLMs and agentic workflows, we can express engineering standards and code‑quality guidelines directly in natural language and apply them at scale.

Key advantages over traditional linting and static analysis:

  • Declarative, intent‑based rule authoring – describe intent in natural language; the model interprets and implements it.
  • Semantic generalizability – a single high‑level rule applies across diverse code patterns, languages, and architectures.
  • Intelligent remediation – issues are resolved through agentic, platform‑integrated actions (e.g., opening pull requests, adding comments, suggesting edits).
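For example, a green‑software rule of this kind can be phrased as plain intent rather than as a lint configuration. The wording below is our own illustration, not a rule shipped with the demonstrator:

```markdown
Find regular expression literals that are constructed inside frequently
executed functions or loops. For each one, suggest hoisting it to module
scope, and open a review comment that explains the expected efficiency
benefit and links to the relevant green-software guidance.
```

A traditional linter would need a dedicated rule (and often per‑language plumbing) for each pattern like this; here the same sentence applies wherever the model recognizes the intent.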

Case study: Codebase reviews – Green software rules implementation

We partnered with the resolve project to scan its codebase with a set of green‑software rules. The agent proposed a set of improvements; one merged pull request hoisted RegExp literals out of hot functions, yielding a small performance gain. With 500M+ monthly npm downloads, even modest improvements scale significantly.
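The merged change itself isn’t reproduced here, but the pattern is easy to sketch: in JavaScript and TypeScript, a RegExp literal inside a hot function produces a new RegExp object on every call, whereas a hoisted one is created once. The function and pattern below are hypothetical, for illustration only:

```typescript
// Before: the literal is evaluated on every call, allocating a fresh
// RegExp object each time this hot function runs.
function isScopedPackageBefore(id: string): boolean {
  return /^@[^/]+\/[^/]+/.test(id);
}

// After: the RegExp is hoisted to module scope and created exactly once.
const SCOPED_PACKAGE_RE = /^@[^/]+\/[^/]+/;

function isScopedPackageAfter(id: string): boolean {
  return SCOPED_PACKAGE_RE.test(id);
}
```

One caveat the agent (or its human reviewer) has to watch for: regexes with the g or y flag carry lastIndex state, so sharing a single hoisted instance can change behavior; the flag‑free case above is safe.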

Case study: Implementing standards – Web sustainability guidelines (WSG)

The W3C WSG provides guidance for more sustainable web products. We translated the Web Development section into 20 agentic workflows, enabling AI to apply the guidelines automatically.

Running these workflows on several GitHub and Microsoft web properties uncovered opportunities such as deferred loading, native browser feature usage, and adoption of the latest language standards.
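To make “deferred loading” concrete: it typically means fetching below‑the‑fold images and rarely used modules only when needed, using features the browser already ships. This is a generic sketch of that category of change (the selector, module path, and renderCharts function are made up for illustration), not a change taken from those repositories:

```typescript
// Native lazy loading: the browser defers fetching off-screen images,
// with no IntersectionObserver boilerplate required.
for (const img of document.querySelectorAll<HTMLImageElement>("img[data-defer]")) {
  img.loading = "lazy";
}

// Deferred module loading: the charting code is only downloaded and
// parsed when the user actually opens the reports view.
async function openReports(): Promise<void> {
  const { renderCharts } = await import("./charts.js");
  renderCharts(document.getElementById("reports")!);
}
```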

Heterogeneous performance improvement

Performance engineering is difficult because software is heterogeneous—different languages, architectures, and performance bottlenecks (algorithmic, caching, network, etc.). While expert engineers excel at navigating this complexity, the industry needs scalable tooling.

We’re exploring a “generic agent” that can assess any software repository and make demonstrable performance improvements. This involves a semi‑automatic workflow where an agent:

  1. Discovers how to build, benchmark, and measure the project (“fit‑to‑repo”).
  2. Researches relevant performance tools and runs micro‑benchmarks.
  3. Proposes targeted code changes under human guidance.

Early results vary, but some pilots show that guided automation can meaningfully improve performance at scale.
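To ground step 2 of the workflow above: the micro‑benchmarks the agent sets up can start out as simple timed loops before graduating to the project’s own benchmark suite. A minimal sketch using Node’s built‑in perf_hooks, where the baseline and candidate functions are placeholders:

```typescript
import { performance } from "node:perf_hooks";

// Placeholders: in practice these are the current hot function and the
// agent's proposed variant of it.
const baseline = (s: string) => s.split(",").map((part) => part.trim());
const candidate = (s: string) => s.split(", ");

function bench(label: string, fn: (s: string) => unknown, iterations = 100_000): void {
  const input = "a, b, c, d, e";
  // Warm-up so the JIT has a chance to optimize before we measure.
  for (let i = 0; i < 1_000; i++) fn(input);
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn(input);
  console.log(`${label}: ${(performance.now() - start).toFixed(2)} ms / ${iterations} runs`);
}

bench("baseline", baseline);
bench("candidate", candidate);
```

Ad‑hoc loops like this are noisy, which is why measured changes are ultimately validated against the project’s own build and benchmark setup discovered in step 1.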

Case study: Daily Perf Improver

The Daily Perf Improver is a three‑phase workflow intended for small daily sprints:

  1. Research and plan improvements.
  2. Infer build and benchmark procedures.
  3. Iteratively propose measured optimizations.

In a recent pilot on FSharp.Control.AsyncSeq, the workflow produced multiple accepted pull requests, including a fix for a rediscovered performance bug and microbenchmark‑driven optimizations.


How to build and run agentic workflows

GitHub agentic workflows let you write automation in natural language (Markdown) instead of traditional YAML or scripts. You author a workflow in a .md file that begins with a description of the desired behavior, followed by structured sections that the platform interprets and executes.
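As a rough sketch, a minimal workflow of this kind might read as follows. The section names and scheduling details are our own assumptions for illustration; the Agentic Workflows research demonstrator defines the actual schema it interprets.

```markdown
# Hoist regular expressions out of hot functions

Scan the repository for regular expression literals constructed inside
frequently executed functions or loops. Hoist each one to module scope,
confirm the project still builds and its tests pass, and open a small
pull request explaining the expected efficiency gain.

## When to run

Every weekday morning, and whenever a maintainer labels an issue
`continuous-efficiency`.

## Guardrails

Never push to the default branch directly; only open pull requests, and
keep each one small enough to review in a few minutes.
```

The platform turns a description like this into an event‑driven job that runs in GitHub Actions, as described above.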
