[Paper] Towards Explainable Federated Learning: Understanding the Impact of Differential Privacy

Published: February 10, 2026 at 01:58 PM EST
5 min read
Source: arXiv (2602.10100v1)

Overview

The paper introduces FEXT‑DP, a federated learning framework that builds on decision‑tree models, adds differential‑privacy guarantees, and strives to keep the resulting models interpretable. By marrying federated learning, differential privacy, and explainable AI, the authors aim to show that privacy‑preserving distributed training can still produce models that developers can understand and trust.

Key Contributions

  • Federated Explainable Trees (FEXT): A novel FL architecture that trains decision‑tree ensembles across multiple clients without sharing raw data.
  • Differential‑Privacy Integration (DP): A mechanism to inject calibrated noise into tree‑building statistics, providing formal privacy guarantees for each participant.
  • Explainability‑Privacy Trade‑off Analysis: Empirical study quantifying how DP noise degrades common interpretability metrics (e.g., feature importance stability, tree depth).
  • Performance Gains: Demonstrated faster convergence (fewer communication rounds) and lower mean‑squared error (MSE) compared with baseline federated neural‑network approaches.
  • Open‑source Prototype: The authors release a lightweight Python implementation compatible with popular FL toolkits (e.g., Flower, PySyft).

Methodology

  1. Model Choice – Decision Trees: Trees are inherently transparent (splits, feature importance, path explanations). The authors use CART‑style binary trees as the base learner.
  2. Federated Training Loop
    • Each client locally builds a partial tree using its private data.
    • Clients compute split statistics (e.g., Gini impurity reductions) and send noisy aggregates to a central server.
    • The server selects the best global split, updates the shared tree structure, and broadcasts it back.
    • The process repeats until a stopping criterion (max depth or convergence) is met.
  3. Differential Privacy Layer
    • Laplace or Gaussian noise (depending on the privacy budget ε) is added to the split statistics before transmission.
    • The privacy budget is split across training rounds using standard composition theorems.
  4. Explainability Evaluation
    • Feature‑importance rankings are compared between the non‑DP baseline and DP‑protected models.
    • Tree depth, number of leaves, and path length distributions are measured as proxies for model complexity.
  5. Benchmarking
    • Experiments run on synthetic regression data and two real‑world datasets (UCI Housing, a medical sensor dataset).
    • Baselines include federated neural networks with DP and a centralized (non‑federated) decision‑tree model.
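The training loop and DP layer above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's exact protocol: the sensitivity value, the toy per-client statistics, and the even per-round budget split are all assumptions made here for clarity.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def client_split_stats(gini_reductions, eps_round, sensitivity=1.0):
    """Each client adds calibrated Laplace noise to its per-candidate
    Gini impurity reductions before sending them to the server."""
    scale = sensitivity / eps_round
    return [g + laplace_noise(scale) for g in gini_reductions]

def server_select_split(all_noisy_stats):
    """The server sums noisy stats per candidate split and picks the best."""
    totals = [sum(col) for col in zip(*all_noisy_stats)]
    return max(range(len(totals)), key=totals.__getitem__)

# Total budget split evenly across rounds (basic sequential composition).
EPS_TOTAL, ROUNDS = 1.0, 8
eps_round = EPS_TOTAL / ROUNDS

random.seed(0)
clients = [[0.10, 0.40, 0.25], [0.12, 0.38, 0.20]]  # per-client, per-split stats
noisy = [client_split_stats(c, eps_round) for c in clients]
best = server_select_split(noisy)  # index of the chosen global split
```

In a real run the server would then broadcast the updated tree structure back to the clients and repeat until the depth or convergence criterion is met.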

Results & Findings

| Metric | Centralized Tree | Federated Tree (no DP) | FEXT‑DP (ε = 1.0) |
|---|---|---|---|
| Rounds to converge | – (single node) | 12 | 8 |
| Test MSE | 0.84 | 0.88 | 0.91 |
| Avg. tree depth | 7.2 | 7.0 | 6.5 |
| Feature‑importance stability (Spearman ρ) | 1.00 | 0.96 | 0.84 |
  • Faster convergence: Adding DP noise actually smooths the split statistics, allowing the server to pick more decisive splits earlier, cutting the number of communication rounds.
  • Slight MSE increase: The privacy noise introduces a modest error penalty, but FEXT‑DP remains competitive with non‑DP federated baselines.
  • Explainability impact: While DP reduces the depth of the final tree (making it simpler), it also perturbs feature‑importance rankings, lowering their stability. The authors quantify this trade‑off and suggest ε ≥ 1.0 as a sweet spot for many practical scenarios.
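The stability metric in the table can be reproduced as Spearman's ρ between the baseline and DP feature-importance rankings. A self-contained sketch (the importance vectors below are made-up examples, not values from the paper):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for tie-free score vectors:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__, reverse=True)
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

baseline_importance = [0.40, 0.30, 0.20, 0.10]  # non-DP model
dp_importance       = [0.35, 0.33, 0.12, 0.20]  # DP-perturbed model
rho = spearman_rho(baseline_importance, dp_importance)  # 0.8: two features swapped
```

A ρ of 1.0 means DP noise left the importance ranking untouched; values below ~0.9, as in the FEXT‑DP column, indicate that some features have traded places.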

Practical Implications

  • Edge‑Device Deployments: Developers can now train lightweight, interpretable models on smartphones, IoT sensors, or medical devices without ever moving raw data off‑device.
  • Regulatory Compliance: The DP guarantees help meet GDPR, HIPAA, or CCPA requirements, while the tree‑based explanations satisfy emerging “right‑to‑explain” mandates.
  • Faster Federated Pipelines: Fewer communication rounds translate to lower bandwidth costs and reduced battery drain—critical for constrained networks.
  • Debugging & Auditing: Feature‑importance vectors and decision paths can be inspected post‑training, enabling root‑cause analysis of model failures, something rarely possible with federated deep nets.
  • Integration Path: Because the prototype builds on standard FL APIs, teams can swap a neural‑network client for a FEXT‑DP client with minimal code changes, gaining interpretability “for free”.
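The client swap described above might look like the following. The class and method names here are hypothetical, chosen to mirror a typical FL client contract; they are not the released prototype's API.

```python
import math
import random

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

class FextDpClient:
    """Hypothetical drop-in FL client: where a neural-network client
    returns gradients from fit(), this one returns DP-noised split
    statistics computed on its private local data."""

    def __init__(self, local_data, eps_round, sensitivity=1.0):
        self.local_data = local_data
        self.scale = sensitivity / eps_round

    def fit(self, candidate_splits):
        stats = [self._gini_reduction(f, t) for f, t in candidate_splits]
        return [g + laplace(self.scale) for g in stats]

    def _gini_reduction(self, feature, threshold):
        # Toy proxy for impurity reduction: balanced splits score highest.
        left = sum(1 for row in self.local_data if row[feature] <= threshold)
        frac = left / len(self.local_data)
        return 1.0 - 2.0 * abs(0.5 - frac)

random.seed(1)
client = FextDpClient([(0.2, 1.0), (0.7, 0.0), (0.9, 1.0)], eps_round=0.5)
noisy_stats = client.fit([(0, 0.5), (0, 0.8)])
```

Because only `fit()` changes shape (split statistics instead of gradients), the surrounding orchestration code, client registration, round scheduling, and aggregation hooks, can stay largely as-is.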

Limitations & Future Work

  • Privacy‑Explainability Trade‑off: Stronger DP (smaller ε) degrades interpretability; finding optimal ε values per domain remains an open problem.
  • Scalability to High‑Dimensional Data: Decision trees struggle when the feature space exceeds a few hundred dimensions; the authors plan to explore hybrid models (e.g., tree‑based feature selection followed by federated linear models).
  • Non‑IID Data: Experiments used mildly heterogeneous client data; extreme non‑IID scenarios (e.g., medical centers with vastly different patient populations) may affect split quality.
  • Robustness to Attacks: While DP mitigates membership inference, the paper does not evaluate robustness against model‑poisoning or backdoor attacks in the federated setting.

Future work will address these gaps through (1) adaptive noise allocation across rounds, (2) hierarchical tree ensembles for high‑dimensional workloads, and (3) combined defenses against poisoning and privacy attacks.


Bottom line: FEXT‑DP demonstrates that you don’t have to choose between privacy, performance, and interpretability. With a modest privacy budget, developers can train fast, accurate, and explainable models across distributed data sources—opening the door to trustworthy AI in regulated, edge‑centric environments.

Authors

  • Júlio Oliveira
  • Rodrigo Ferreira
  • André Riker
  • Glaucio H. S. Carvalho
  • Eirini Eleni Tsilopoulou

Paper Information

  • arXiv ID: 2602.10100v1
  • Categories: cs.LG, cs.CR
  • Published: February 10, 2026