More Than Just Labels: Building a Skin Lesion Classifier with ResNet-50 and Explainable AI (Grad-CAM)

Published: February 4, 2026 at 07:50 PM EST
4 min read
Source: Dev.to

Beck_Moulton

Introduction

Have you ever looked at a medical AI demo and wondered, “Sure, it says it’s a rash, but what exactly is the model looking at?” In the world of Computer Vision and Medical AI, the Black Box problem isn’t just a technical hurdle—it’s a trust hurdle.

When building a skin‑lesion screening tool, a simple percentage score isn’t enough. To make a truly useful tool, we need Explainable AI (XAI). In this guide we’ll:

  • Build a deep‑learning pipeline with PyTorch and FastAI to classify skin conditions.
  • Implement Grad‑CAM to generate heatmaps that highlight which features (texture, colour, borders) influenced the model’s decision.

Whether you’re interested in Deep Learning, health‑tech, or just want to make your models more transparent, this walkthrough covers the full engineering implementation.

Architecture

Our system follows a classic client‑server model with an added Explainability Layer. We use a fine‑tuned ResNet‑50 for inference and a React Native frontend for the user experience.

graph TD
    A[User Takes Photo] --> B(React Native App)
    B --> C{FastAPI Backend}
    C --> D[ResNet‑50 Classifier]
    D --> E[Inference Result]
    D --> F[Grad‑CAM Hook]
    F --> G[Heatmap Generation]
    G --> H[Overlay Image]
    H --> I[Result + Visualization]
    E --> I
    I --> B
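The diagram’s final step (Result + Visualization returned to the app) can be sketched as a small response‑assembly helper. This is illustrative only: `build_response` and the payload field names are assumptions, not a fixed API from the post.

```python
import base64


def build_response(prediction: str, confidence: float, overlay_png: bytes) -> dict:
    """Assemble the JSON payload the mobile app consumes: the class
    label, its probability, and the Grad-CAM overlay encoded as
    base64 so it can travel inside the JSON body."""
    return {
        "prediction": prediction,
        "confidence": round(confidence, 4),
        "heatmap_png_base64": base64.b64encode(overlay_png).decode("ascii"),
    }


# Hypothetical result for a single inference
resp = build_response("Melanocytic nevi", 0.92314, b"\x89PNG")
```

In a FastAPI backend, a route handler would run inference, render the heatmap overlay to PNG bytes, and return this dictionary as the response body.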

Prerequisites

Requirement        Details
FastAI / PyTorch   Model training and fine‑tuning
Grad‑CAM           Extract gradients from the final convolutional layer
React Native       Cross‑platform mobile interface
Dataset            e.g., HAM10000 (“Human Against Machine with 10000 training images”)

Step 1: Fine‑Tuning ResNet‑50 with FastAI

FastAI makes transfer learning incredibly efficient. Below is a minimal script to fine‑tune a pre‑trained ResNet‑50 on skin‑lesion images.

from fastai.vision.all import *

# Load data – assumes images are organized in folders by label
path = Path('./skin_lesion_data')
dls = ImageDataLoaders.from_folder(
    path,
    valid_pct=0.2,
    item_tfms=Resize(224)
)

# Initialize Learner with ResNet‑50
learn = vision_learner(dls, resnet50, metrics=accuracy)

# Fine-tune: one epoch with the body frozen, then 5 unfrozen epochs
learn.fine_tune(5, base_lr=3e-3)

# Export for production
learn.export('skin_classifier.pkl')
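A note on serving: a pretrained `vision_learner` normalizes inputs with ImageNet statistics as part of its transform pipeline. If you later serve the exported model outside FastAI (e.g., a raw PyTorch or ONNX runtime), you need to replicate that preprocessing yourself. A numpy sketch of the equivalent step, assuming standard ImageNet statistics:

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)


def preprocess(img_uint8: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 image into a 1x3xHxW float array
    normalized with ImageNet statistics, the input a pretrained
    ResNet-50 backbone expects."""
    x = img_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD     # channel-wise normalize
    return x.transpose(2, 0, 1)[None, ...]     # HWC -> NCHW with batch dim


# A flat gray 224x224 image as a stand-in for a resized lesion photo
batch = preprocess(np.full((224, 224, 3), 128, dtype=np.uint8))
```

Within FastAI itself, `load_learner('skin_classifier.pkl')` plus `learn.predict(img)` handles all of this for you.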

Step 2: Implementing Grad‑CAM for Explainability

Grad‑CAM (Gradient‑weighted Class Activation Mapping) uses the gradients flowing into the final convolutional layer to produce a localization map.

import torch
import numpy as np

class GradCAM:
    def __init__(self, model, target_layer):
        self.model = model
        self.target_layer = target_layer
        self.gradients = None
        self.activations = None

        # Register hooks
        self.target_layer.register_forward_hook(self._save_activation)
        self.target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, input, output):
        self.activations = output

    def _save_gradient(self, module, grad_input, grad_output):
        self.gradients = grad_output[0]

    def generate_heatmap(self, input_tensor, category_idx):
        # Forward pass
        output = self.model(input_tensor)
        self.model.zero_grad()

        # Backward pass for the specific category
        loss = output[0, category_idx]
        loss.backward()

        # Weight the activations by the gradients
        weights = torch.mean(self.gradients, dim=(2, 3), keepdim=True)
        heatmap = torch.sum(weights * self.activations, dim=1).squeeze()

        # ReLU, then normalize to [0, 1]
        heatmap = np.maximum(heatmap.detach().cpu().numpy(), 0)
        heatmap /= np.max(heatmap) + 1e-8  # epsilon guards against all-zero maps
        return heatmap

Note: Integrate this into your API to return both a diagnosis (e.g., “Melanocytic nevi”) and a visual heatmap image.
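To sanity‑check the math inside `generate_heatmap`, the channel‑weighting, ReLU, and normalization steps can be reproduced with numpy on toy arrays standing in for the hooked activations and gradients. (For a torchvision ResNet‑50, the target layer passed to `GradCAM` would typically be `model.layer4[-1]`.)

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for the hook outputs: 1 image, 4 channels, 7x7 spatial map
activations = rng.random((1, 4, 7, 7)).astype(np.float32)
gradients = rng.standard_normal((1, 4, 7, 7)).astype(np.float32)

# Global-average-pool the gradients: one importance weight per channel
weights = gradients.mean(axis=(2, 3), keepdims=True)  # shape (1, 4, 1, 1)

# Weighted sum over channels, then ReLU and max-normalize to [0, 1]
heatmap = np.maximum((weights * activations).sum(axis=1)[0], 0)
heatmap /= heatmap.max() + 1e-8
```

The resulting 7×7 map is what gets upsampled to the input resolution and blended over the original photo as the overlay.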

Scaling to Production

While a prototype is a great start, deploying medical‑grade AI requires:

  • Rigorous testing & validation
  • Data‑privacy compliance (HIPAA, GDPR, etc.)
  • Robust MLOps pipelines

For production‑ready examples, advanced computer‑vision patterns, and deep dives into AI safety, check out the technical resources at WellAlly Tech Blog.

Step 3: Frontend Visualization (React Native)

On the mobile side we display the original image and allow the user (or clinician) to toggle the heatmap overlay.

import React, { useState } from 'react';
import { View, Image, Button, Text } from 'react-native';

const ResultScreen = ({ route }) => {
  const { originalUri, heatmapUri, prediction } = route.params;
  const [showHeatmap, setShowHeatmap] = useState(false);

  return (
    <View>
      <Text>Result: {prediction}</Text>
      <Image source={{ uri: showHeatmap ? heatmapUri : originalUri }} style={{ width: 300, height: 300 }} />
      <Button
        title={showHeatmap ? "Hide Heatmap" : "Show Heatmap"}
        onPress={() => setShowHeatmap(!showHeatmap)}
      />
    </View>
  );
};

export default ResultScreen;

Alternatively, the toggle state can live in its own small component, driven by a Switch:

import React, { useState } from "react";
import { View, Switch, Text } from "react-native";

const HeatmapToggle = () => {
  const [showHeatmap, setShowHeatmap] = useState(false);

  return (
    <View>
      <Text>Show Grad‑CAM Heatmap</Text>
      <Switch
        value={showHeatmap}
        onValueChange={() => setShowHeatmap(!showHeatmap)}
      />
    </View>
  );
};

export default HeatmapToggle;


Conclusion: Bridging the Gap

By combining FastAI for rapid development and Grad‑CAM for transparency, we transform a simple classifier into a powerful diagnostic aid. This setup doesn’t just provide an answer; it provides a reason.

Key Takeaways

  • Transfer Learning (ResNet‑50) saves weeks of training time.
  • Interpretability is non‑negotiable in high‑stakes fields like medicine.
  • Hybrid Stacks (Python backend + React Native frontend) give the best developer experience for AI products.

What are your thoughts on AI interpretability? Would you trust an AI more if it showed you its “thought process” via heatmaps? Let’s discuss in the comments below!

If you enjoyed this tutorial, don’t forget to ❤️ and 🔖! For more advanced AI implementation strategies, visit the WellAlly Blog.
