[Paper] A Novel Approach to Explainable AI with Quantized Active Ingredients in Decision Making

Published: January 13, 2026 at 12:06 PM EST
3 min read
Source: arXiv - 2601.08733v1

Overview

A recent study explores how quantum‑enhanced machine learning can make AI decisions more transparent. By comparing a Quantum Boltzmann Machine (QBM) with a classical counterpart on a simplified MNIST task, the authors demonstrate that hybrid quantum‑classical models can boost both prediction accuracy and the clarity of feature‑importance explanations.

Key Contributions

  • Hybrid QBM Architecture: Introduces a quantum‑classical Boltzmann machine that leverages entangling layers to learn richer latent representations.
  • Side‑by‑side Benchmark: Provides a systematic comparison between the QBM and a classical Boltzmann Machine (CBM) on the same pre‑processed dataset.
  • Dual Explainability Pipeline: Uses gradient‑based saliency maps for the QBM and SHAP values for the CBM, enabling a direct assessment of feature attribution quality.
  • Quantitative Explainability Metric: Measures the concentration of attributions via entropy, showing that the QBM yields more focused “active ingredient” explanations.
  • Empirical Evidence: Reports a roughly 30‑point accuracy gain (83.5 % vs. 54 %) and lower attribution entropy (1.27 vs. 1.39) for the quantum model.

Methodology

  1. Data Preparation

    • The MNIST digit images are binarised (pixel values → {0,1}) and reduced to a low‑dimensional space using Principal Component Analysis (PCA), which keeps the problem tractable for near‑term quantum hardware (see the first sketch after this list).
  2. Model Construction

    • Quantum Boltzmann Machine (QBM): Implemented as a hybrid circuit where a parameterised quantum layer (strongly entangling gates) is followed by a classical energy‑based model. Training uses a quantum‑aware version of contrastive divergence (an illustrative circuit sketch covering steps 2–3 follows this list).
    • Classical Boltzmann Machine (CBM): A standard energy‑based network trained with conventional contrastive divergence.
  3. Explainability Techniques

    • QBM: Gradient‑based saliency maps are computed by back‑propagating through the quantum circuit to highlight input pixels that most affect the output energy.
    • CBM: SHAP (Shapley Additive exPlanations) values are derived to attribute each pixel’s contribution to the final prediction.
  4. Evaluation

    • Classification accuracy on a held‑out test split.
    • Entropy of the attribution distribution (lower entropy → more concentrated, i.e., a clearer “active ingredient”); a minimal computation of this metric is sketched below.
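
As a concrete illustration of step 1, the sketch below binarises pixels and projects them with PCA using scikit-learn. The dataset loader, threshold, and component count are illustrative assumptions rather than the authors' exact settings (the paper uses MNIST; `load_digits` is a small stand-in).

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Small 8x8 digit set as a stand-in for MNIST (pixel values 0-16).
digits = load_digits()
X, y = digits.data, digits.target

# Binarise: pixels above half the maximum intensity become 1, the rest 0.
X_bin = (X > X.max() / 2).astype(np.float32)

# Project onto a handful of principal components so the feature count
# fits a small near-term quantum register.
n_components = 4  # illustrative choice; the paper's value may differ
X_reduced = PCA(n_components=n_components).fit_transform(X_bin)

print(X_reduced.shape)  # (n_samples, 4)
```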
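
For steps 2–3, a parameterised strongly entangling block with a scalar read-out, differentiated with respect to its inputs to obtain a saliency vector, might look as follows in PennyLane. The qubit count, embedding, observable, and the treatment of the read-out as an energy-like score are assumptions for illustration; the paper's quantum-aware contrastive-divergence training and full hybrid architecture are not reproduced here.

```python
import pennylane as qml
from pennylane import numpy as pnp

n_qubits = 4   # one qubit per reduced feature (illustrative)
n_layers = 2   # depth of the entangling block (illustrative)

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_score(inputs, weights):
    # Encode the PCA features as rotation angles.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Parameterised strongly entangling layers, as named in the paper.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Scalar read-out, treated here as an energy-like score for a classical head.
    return qml.expval(qml.PauliZ(0))

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = pnp.random.uniform(0, pnp.pi, size=shape, requires_grad=True)
features = pnp.array([0.1, 0.7, 0.3, 0.9], requires_grad=True)

# Gradient-based saliency: how strongly each input feature moves the score.
saliency = pnp.abs(qml.grad(quantum_score, argnum=0)(features, weights))
print(saliency)
```

On the classical side, the SHAP attributions mentioned in step 3 would typically come from the `shap` library applied to the CBM's prediction function; that standard path is not sketched here.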
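
The attribution-entropy metric in step 4 can be computed by normalising the absolute attributions into a probability distribution and taking its Shannon entropy; lower values indicate that the explanation concentrates on fewer features. The helper below is a minimal sketch of such a metric, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def attribution_entropy(attributions: np.ndarray) -> float:
    """Shannon entropy (in nats) of a normalised attribution vector.

    Lower entropy means the attribution mass sits on fewer features,
    i.e. a more concentrated "active ingredient" explanation.
    """
    a = np.abs(np.asarray(attributions, dtype=np.float64)).ravel()
    p = a / a.sum()
    p = p[p > 0]  # ignore zero-mass entries
    return float(-(p * np.log(p)).sum())

# Example: a concentrated map scores lower than a diffuse one.
print(attribution_entropy([0.9, 0.05, 0.03, 0.02]))   # ~0.43
print(attribution_entropy([0.25, 0.25, 0.25, 0.25]))  # ~1.39
```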

Results & Findings

| Metric | QBM | CBM |
| --- | --- | --- |
| Test Accuracy | 83.5 % | 54 % |
| Attribution Entropy | 1.27 (more concentrated) | 1.39 (more diffuse) |

The quantum‑enhanced model not only classifies digits more reliably but also produces sharper explanations—its saliency maps focus on fewer, more decisive pixels, indicating a clearer understanding of what drives each decision.

Practical Implications

  • Trustworthy AI Services: For fintech, health‑tech, or any high‑stakes SaaS, integrating quantum‑aware components could provide regulators and users with stronger evidence of why a model made a particular call.
  • Feature‑Engineering Efficiency: Concentrated attributions help data scientists pinpoint the most informative features, reducing the time spent on manual feature selection.
  • Hybrid Deployment Strategies: The study shows a viable pathway to embed quantum circuits as “explainability boosters” within existing classical pipelines, without requiring a full‑scale quantum computer.
  • Competitive Edge: Early adopters can differentiate their AI products by offering quantifiable interpretability metrics alongside performance numbers.

Limitations & Future Work

  • Scalability: Experiments are limited to a heavily reduced MNIST subset; real‑world datasets with higher dimensionality may strain current quantum hardware.
  • Hardware Noise: The QBM’s performance depends on the fidelity of near‑term quantum processors; noise mitigation strategies were not explored in depth.
  • Generalisation to Other Architectures: The study focuses on Boltzmann machines; extending the approach to transformers, GNNs, or reinforcement learners remains an open question.
  • Explainability Benchmarks: Entropy is a useful proxy, but richer human‑subject studies are needed to confirm that the explanations are genuinely more useful to end users.

Future research will likely address larger, more complex datasets, integrate error‑corrected quantum devices, and broaden the explainability toolkit to cover a wider array of AI models.

Authors

  • A. M. A. S. D. Alagiyawanna
  • Asoka Karunananda
  • Thushari Silva
  • A. Mahasinghe

Paper Information

  • arXiv ID: 2601.08733v1
  • Categories: cs.LG, quant-ph
  • Published: January 13, 2026