[Paper] Fusion2Print: Deep Flash-Non-Flash Fusion for Contactless Fingerprint Matching

Published: January 5, 2026 at 01:09 PM EST
4 min read
Source: arXiv


Overview

The paper Fusion2Print (F2P) introduces a novel way to improve contactless fingerprint recognition by capturing both flash and non‑flash images of the same finger and intelligently fusing them. By leveraging the complementary strengths of each capture mode, the authors dramatically boost ridge clarity and matching accuracy, bringing contactless systems closer to the reliability of traditional contact‑based scanners.

Key Contributions

  • Paired flash‑non‑flash dataset (FNF Database): First publicly described collection of synchronized flash and non‑flash contactless fingerprint images.
  • Signal‑level flash subtraction: Manual isolation of ridge‑preserving information from flash images to guide the fusion process.
  • Lightweight attention‑based fusion network: Dynamically weights informative channels from each modality while suppressing noise and specular highlights.
  • U‑Net enhancement module: Refines the fused output into a high‑contrast grayscale image optimized for downstream matching.
  • Cross‑domain embedding model: Generates a unified fingerprint representation that works for contactless, flash‑non‑flash fused, and traditional contact‑based prints, enabling seamless verification across devices.
  • State‑of‑the‑art performance: Achieves AUC = 0.999 and EER = 1.12 %, substantially better than leading baselines such as Verifinger and DeepPrint operating on a single capture mode.

Methodology

  1. Data Capture – For each finger, the system records two images in rapid succession: one with a built‑in flash (high ridge detail but noisy) and one without flash (cleaner background but lower contrast).
  2. Manual Flash‑Non‑Flash Subtraction – The authors compute a difference image to highlight the ridge‑specific signal contributed by the flash, providing a ground‑truth cue for the network.
  3. Attention‑Based Fusion Network – A compact CNN with channel‑wise attention learns to emphasize the ridge‑rich flash features while retaining the low‑noise background from the non‑flash shot.
  4. U‑Net Enhancement – The fused feature map passes through a U‑Net‑style encoder‑decoder that sharpens ridges and normalizes illumination, outputting a single grayscale fingerprint image.
  5. Embedding Model – A deep Siamese/Triplet network is trained on the enhanced images together with conventional contact‑based prints, forcing both domains into a shared embedding space. During verification, a simple cosine similarity score decides a match.
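The subtraction in step 2 amounts to a clipped difference image. Below is a minimal NumPy sketch, assuming aligned grayscale captures normalized to [0, 1]; the function name and the re-normalization are illustrative, and the paper's exact preprocessing may differ:

```python
import numpy as np

def flash_difference(flash: np.ndarray, no_flash: np.ndarray) -> np.ndarray:
    """Highlight ridge detail contributed by the flash capture.

    Both inputs are aligned grayscale images scaled to [0, 1].
    The difference is clipped to non-negative values and re-normalized
    so it can serve as a guidance cue for the fusion network.
    """
    diff = np.clip(flash - no_flash, 0.0, None)
    peak = diff.max()
    return diff / peak if peak > 0 else diff

# Toy 2x2 example: the flash capture adds contrast on the ridge pixels
# (first column) but matches the non-flash capture elsewhere.
flash = np.array([[0.9, 0.2], [0.8, 0.1]])
no_flash = np.array([[0.5, 0.2], [0.4, 0.1]])
cue = flash_difference(flash, no_flash)
```

The resulting cue is bright exactly where the flash contributed ridge signal, which is what makes it useful as a supervision target for the fusion stage.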

All components are designed to be lightweight enough for on‑device inference on modern smartphones or edge AI modules.
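Step 3's channel-wise attention can be illustrated with a squeeze-and-excitation-style weighting in NumPy. This is a hand-rolled sketch with illustrative names, not the paper's trained network; a real model would predict the attention weights rather than derive them directly from channel statistics:

```python
import numpy as np

def channel_attention_fuse(features: np.ndarray) -> np.ndarray:
    """Fuse stacked flash/non-flash feature channels with soft attention.

    `features` has shape (C, H, W): channels from both capture modes
    stacked along the first axis. Each channel is weighted by a softmax
    over its global average activation, so strongly responding channels
    dominate while flat, uninformative ones are suppressed.
    """
    pooled = features.mean(axis=(1, 2))             # squeeze: (C,)
    weights = np.exp(pooled - pooled.max())
    weights /= weights.sum()                        # softmax over channels
    return (weights[:, None, None] * features).sum(axis=0)  # (H, W)

# Two 4x4 channels: a high-activation "flash ridge" channel and a flat one.
flash_ch = np.full((4, 4), 0.8)
noflash_ch = np.full((4, 4), 0.2)
fused = channel_attention_fuse(np.stack([flash_ch, noflash_ch]))
```

In this toy case the fused output sits closer to the flash channel's value, reflecting the higher weight assigned to the more active channel.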

Results & Findings

| Metric | Single‑Capture Baseline | Fusion2Print (F2P) |
| --- | --- | --- |
| AUC (Area Under Curve) | 0.96 – 0.98 | 0.999 |
| EER (Equal Error Rate) | 3.4 % – 5.1 % | 1.12 % |
| Verification speed (per pair) | ~30 ms | ~45 ms (including fusion) |
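The EER reported above is the operating point at which the false accept rate equals the false reject rate. A short NumPy sketch of estimating it from genuine and impostor similarity scores (the scores below are toy values, not the paper's data):

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Estimate the EER by sweeping thresholds over all observed scores.

    FAR(t): fraction of impostor scores accepted at threshold t.
    FRR(t): fraction of genuine scores rejected at threshold t.
    The EER is read off where |FAR - FRR| is smallest.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))
    return float((far[idx] + frr[idx]) / 2)

genuine = np.array([0.9, 0.85, 0.8, 0.75, 0.3])    # one hard genuine pair
impostor = np.array([0.2, 0.25, 0.35, 0.1, 0.15])
eer = equal_error_rate(genuine, impostor)
```

With these toy scores the genuine and impostor distributions overlap slightly, giving a nonzero EER, much as a single hard capture drags down the single-mode baselines.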
  • Ridge Clarity: Visual inspection shows a 2–3× increase in ridge‑to‑valley contrast after fusion.
  • Robustness to Lighting: The model maintains high accuracy across varied ambient illumination, thanks to the complementary flash information.
  • Cross‑Domain Compatibility: Embeddings from F2P match both contactless and traditional contact prints without needing separate classifiers.
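Because verification reduces to a cosine-similarity test in the shared embedding space, the decision step can be sketched as follows. The embedding network itself is omitted, and the vectors and the 0.5 threshold here are illustrative, not values from the paper:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray,
           threshold: float = 0.5) -> bool:
    """Accept the match if the embeddings are similar enough."""
    return cosine_similarity(probe, enrolled) >= threshold

# Toy embeddings: a contactless capture vs. its enrolled contact print.
enrolled = np.array([0.6, 0.8, 0.0])
same_finger = np.array([0.58, 0.81, 0.05])   # nearly parallel vector
other_finger = np.array([-0.7, 0.1, 0.7])    # nearly orthogonal vector
```

The appeal of a shared embedding space is visible even in this sketch: the same similarity test serves contactless, fused, and contact-based prints, with no per-domain classifier.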

Practical Implications

  • Hygienic Authentication: Deployable in high‑traffic venues (airports, hospitals, workplaces) where touching a scanner is undesirable.
  • Smartphone Integration: Modern phones already have flash LEDs and high‑resolution cameras; F2P can be implemented as a software update, requiring only a brief double‑capture UI flow.
  • Reduced Hardware Costs: Eliminates the need for specialized contactless fingerprint sensors; a commodity camera plus flash suffices.
  • Legacy System Compatibility: Because the embedding space aligns with existing contact‑based databases, organizations can adopt contactless capture without overhauling their back‑end verification pipelines.
  • Enhanced Security: Higher ridge fidelity reduces false accept rates, making contactless solutions viable for banking, access control, and identity verification.

Limitations & Future Work

  • Capture Time: Requiring two rapid shots adds a fraction of a second to the user experience; future work could explore single‑shot hardware (e.g., dual‑tone illumination) or faster sensor pipelines.
  • Dataset Diversity: The FNF Database, while extensive, is limited to controlled indoor lighting and a specific demographic; broader outdoor and multi‑ethnic data would validate generalization.
  • Real‑World Deployment: The paper does not address variations such as motion blur, occlusions (e.g., rings, nails), or extreme skin conditions—areas ripe for further robustness studies.
  • Model Compression: Although lightweight, the fusion network plus U‑Net may still be heavy for low‑end IoT devices; pruning or quantization techniques could be explored.

Overall, Fusion2Print demonstrates that a clever combination of flash and non‑flash imaging, backed by attention‑driven deep learning, can close the performance gap between contactless and contact‑based fingerprint systems—opening the door for safer, more convenient biometric authentication in everyday tech.

Authors

  • Roja Sahoo
  • Anoop Namboodiri

Paper Information

  • arXiv ID: 2601.02318v1
  • Categories: cs.CV
  • Published: January 5, 2026