[Paper] Neural Architecture Search for Quantum Autoencoders

Published: November 24, 2025 at 10:55 AM EST
4 min read
Source: arXiv - 2511.19246v1

Overview

The paper introduces a Neural Architecture Search (NAS) framework that automatically designs quantum autoencoders—variational quantum circuits that compress and reconstruct data. By marrying a genetic algorithm with quantum‑classical hybrid training, the authors show how to discover high‑performing circuit topologies without manual trial‑and‑error, paving the way for practical quantum‑enhanced feature extraction on near‑term hardware.

Key Contributions

  • Genetic‑algorithm‑driven NAS for variational quantum circuits, specifically targeting autoencoder architectures.
  • A search space definition that encodes gate types, connectivity, and layer depth, enabling systematic exploration of quantum circuit designs.
  • Hybrid training loop that evaluates each candidate circuit on a classical loss (reconstruction error) while updating quantum parameters via gradient‑free optimization.
  • Empirical validation on real‑world image datasets (e.g., MNIST‑style data) demonstrating compression ratios and reconstruction quality comparable to hand‑crafted quantum autoencoders.
  • An open‑source prototype implementation that can be adapted to different quantum hardware constraints (gate sets, qubit counts, noise models).

Methodology

  1. Search Space Construction

    • Each individual in the genetic population encodes a variational quantum circuit (VQC): a list of layers, each specifying a gate (e.g., RX, RY, CNOT) and the qubits it acts on (one possible genome encoding is sketched after this list).
    • The space is deliberately hardware‑aware: only gates supported by the target device are allowed, and connectivity respects the device’s coupling map.
  2. Genetic Algorithm Loop

    • Initialization: Randomly generate a population of candidate circuits.
    • Evaluation: For each circuit, run a hybrid training routine:
      • Encode classical input vectors into quantum states (amplitude or angle encoding).
      • Apply the VQC, measure the reduced subsystem, and decode back to classical space.
      • Compute reconstruction loss (e.g., mean‑squared error).
    • Selection & Crossover: Keep the best‑performing circuits and recombine their “genomes” to produce offspring (the full loop is sketched after this list).
    • Mutation: Randomly alter gates, connections, or layer counts to maintain diversity and avoid local minima.
  3. Hybrid Optimization

    • Circuit parameters (rotation angles) are tuned with a gradient‑free optimizer (e.g., COBYLA) inside each evaluation, ensuring the fitness reflects both architecture and parameter quality (a minimal fitness sketch follows this list).
  4. Stopping Criteria

    • The algorithm halts after a fixed number of generations or when improvement plateaus, returning the top‑ranked quantum autoencoder.
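
The sketches below illustrate the three moving parts in plain Python. First, one plausible genome encoding for step 1; the `GateGene`/`CircuitGenome` names, the gate pool, and the coupling‑map handling are illustrative assumptions, not the paper's exact data structures.

```python
from dataclasses import dataclass, field
import random

GATE_POOL = ["rx", "ry", "cnot"]  # restricted to the target device's native gate set

@dataclass
class GateGene:
    name: str        # gate type: "rx", "ry", or "cnot"
    qubits: tuple    # qubit indices the gate acts on

@dataclass
class CircuitGenome:
    n_qubits: int
    layers: list = field(default_factory=list)  # list of layers, each a list of GateGene

def random_genome(n_qubits, max_layers, coupling_map):
    """Sample a random individual; two-qubit gates are only placed on
    edges of the device coupling map (pairs of connected qubits)."""
    genome = CircuitGenome(n_qubits)
    for _ in range(random.randint(1, max_layers)):
        layer = []
        for q in range(n_qubits):
            name = random.choice(GATE_POOL)
            if name == "cnot":
                edges = [e for e in coupling_map if e[0] == q]
                if edges:
                    layer.append(GateGene("cnot", random.choice(edges)))
                    continue
                name = random.choice(["rx", "ry"])  # no valid edge: fall back to a rotation
            layer.append(GateGene(name, (q,)))
        genome.layers.append(layer)
    return genome
```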
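
Next, a minimal sketch of the hybrid fitness evaluation (steps 2 and 3), using Qiskit statevectors and SciPy's gradient‑free COBYLA as named above. The angle encoding and the trash‑qubit overlap loss (a standard quantum‑autoencoder surrogate for reconstruction error) are assumptions standing in for the paper's exact encoding and MSE loss.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace
from scipy.optimize import minimize

def build_circuit(genome, params, x):
    """Angle-encode the input vector, then apply the evolved layers
    with trainable rotation angles drawn from `params`."""
    qc = QuantumCircuit(genome.n_qubits)
    for q in range(genome.n_qubits):
        qc.ry(float(x[q]), q)               # angle encoding of the classical input
    it = iter(params)
    for layer in genome.layers:
        for gene in layer:
            if gene.name == "rx":
                qc.rx(next(it), gene.qubits[0])
            elif gene.name == "ry":
                qc.ry(next(it), gene.qubits[0])
            else:                            # "cnot"
                qc.cx(gene.qubits[0], gene.qubits[1])
    return qc

def n_params(genome):
    return sum(g.name != "cnot" for layer in genome.layers for g in layer)

def fitness(genome, data, n_latent, maxiter=100):
    """Inner hybrid loop: COBYLA tunes the angles so the returned score
    reflects both the architecture and its best parameters."""
    latent = list(range(n_latent))           # qubits kept as the compressed code

    def loss(params):
        total = 0.0
        for x in data:
            state = Statevector.from_instruction(build_circuit(genome, params, x))
            rho_trash = partial_trace(state, latent)   # reduced "trash" subsystem
            total += 1.0 - rho_trash.data[0, 0].real   # 1 - overlap with |0...0>
        return total / len(data)

    theta0 = np.random.uniform(0, 2 * np.pi, max(n_params(genome), 1))
    res = minimize(loss, theta0, method="COBYLA", options={"maxiter": maxiter})
    return -res.fun                           # higher fitness = lower loss
```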
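
Finally, a sketch of the outer evolutionary loop with the fixed‑generation stopping rule of step 4. Truncation selection, single‑cut layer crossover, and gate‑resampling mutation are assumed operator choices; the population size and generation count mirror the experiments reported below.

```python
import copy

def crossover(a, b):
    """Splice two parents' layer lists at random cut points."""
    child = CircuitGenome(a.n_qubits)
    cut_a = random.randint(1, len(a.layers))
    cut_b = random.randint(0, len(b.layers) - 1)
    child.layers = copy.deepcopy(a.layers[:cut_a] + b.layers[cut_b:])
    return child

def mutate(genome, coupling_map, rate=0.1):
    """Resample a fraction of gates to maintain population diversity."""
    for layer in genome.layers:
        for i, gene in enumerate(layer):
            if random.random() < rate:
                q = gene.qubits[0]
                edges = [e for e in coupling_map if e[0] == q]
                name = random.choice(GATE_POOL if edges else ["rx", "ry"])
                qubits = random.choice(edges) if name == "cnot" else (q,)
                layer[i] = GateGene(name, qubits)
    return genome

def evolve(data, n_qubits, n_latent, coupling_map,
           pop_size=20, generations=30, n_elite=4):
    """Outer GA loop: evaluate, keep the elite, breed and mutate the rest."""
    pop = [random_genome(n_qubits, 6, coupling_map) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(g, data, n_latent), reverse=True)
        elite = scored[:n_elite]                                # selection
        children = [mutate(crossover(*random.sample(elite, 2)), coupling_map)
                    for _ in range(pop_size - n_elite)]         # crossover + mutation
        pop = elite + children
    return max(pop, key=lambda g: fitness(g, data, n_latent))
```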

Results & Findings

  • Compression Performance: The GA‑discovered autoencoders achieved ≈ 85 % reconstruction fidelity while compressing 8‑qubit inputs down to 3‑qubit latent spaces, matching manually designed baselines.
  • Search Efficiency: Across 30 generations with a population of 20, the method converged to a near‑optimal architecture in ≈ 2 hours on a simulated noisy quantum device (IBM Qiskit Aer).
  • Robustness to Noise: The architectures that emerged favored shallower circuits and gate patterns less susceptible to depolarizing noise, indicating that the GA implicitly learned hardware‑friendly designs.
  • Generalization: When transferred to a different dataset (handwritten digits vs. Fashion‑MNIST), the same search pipeline produced distinct but equally effective circuits, showing adaptability to varied data distributions.

Practical Implications

  • Accelerated Prototyping: Developers can plug in their own datasets and hardware constraints, letting the GA handle the tedious circuit‑design phase—much like AutoML does for classical models.
  • Hardware‑Tailored Solutions: Because the search respects device coupling maps and native gate sets, the resulting autoencoders are ready to run on current superconducting or trapped‑ion quantum processors without additional transpilation overhead.
  • Hybrid Feature Extraction: Quantum autoencoders can serve as front‑ends for downstream quantum machine‑learning pipelines (e.g., quantum classifiers), potentially reducing qubit requirements and circuit depth for larger tasks.
  • Noise‑Aware Design: The evolutionary pressure toward noise‑resilient structures offers a systematic way to mitigate errors without resorting to full error‑correction, a crucial advantage for NISQ‑era applications.
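
As a hypothetical end‑to‑end usage of the sketches above: supply your own (rescaled) data and a device coupling map, then let the search run. The linear coupling map and random toy data below are purely illustrative assumptions, and a full run at these settings can take hours, as noted in the Results.

```python
import numpy as np

n_qubits, n_latent = 8, 3
# assumed linear nearest-neighbor coupling map, both directions
coupling_map = [(q, q + 1) for q in range(n_qubits - 1)] + \
               [(q + 1, q) for q in range(n_qubits - 1)]

# toy stand-in for preprocessed image features, scaled to [0, pi]
# to match the angle encoding used in the fitness sketch
data = np.random.uniform(0, np.pi, size=(16, n_qubits))

best = evolve(data, n_qubits, n_latent, coupling_map,
              pop_size=20, generations=30)
print(f"best architecture found: {len(best.layers)} layers")
```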

Limitations & Future Work

  • Scalability: The current search is limited to ≤ 8 qubits and modest layer counts; scaling to larger registers will demand more efficient encodings or surrogate fitness models.
  • Evaluation Cost: Each candidate requires a full hybrid training run, which can be time‑consuming on real quantum hardware; future work could integrate meta‑learning or performance predictors to prune the search space early.
  • Benchmark Diversity: Experiments focus on image data; extending to time‑series, graph, or quantum‑state datasets would test the generality of the approach.
  • Hybrid Optimization Strategies: Combining gradient‑based parameter updates (when possible) with the GA could speed convergence and improve final fidelity.

Bottom line: By automating the discovery of quantum autoencoder circuits, this research brings us a step closer to practical quantum‑enhanced data compression, offering a toolset that developers can adapt to the quirks of today’s noisy quantum machines.

Authors

  • Hibah Agha
  • Samuel Yen‑Chi Chen
  • Huan‑Hsin Tseng
  • Shinjae Yoo

Paper Information

  • arXiv ID: 2511.19246v1
  • Categories: quant-ph, cs.AI, cs.LG, cs.NE
  • Published: November 24, 2025
  • PDF: https://arxiv.org/pdf/2511.19246v1