Shamba-MedCare Prompt Engineering

Published: December 1, 2025 at 04:30 PM EST
4 min read
Source: Dev.to

Introduction

I am building a simple plant disease diagnosis solution using AI, inspired by my farming background and advancements in intelligent technological tools.

You can check out the Shamba‑MedCare App here. Apologies: while it’s in testing, you’ll have to use your own API keys until the public launch. The keys are stored in the browser’s local storage, so they stay private.

Shamba‑MedCare screenshot

For context, whenever I mention an LLM (Large Language Model), I’m mostly referring to Claude. That said, the approach is model‑agnostic, so this solution can be adapted to any LLM.

I played around with several prompts to nail down the best results. Here’s how my prompt engineering evolved while building Shamba‑MedCare:

Prompt evolution

My first prompt to LLM Vision was embarrassingly naive:

“What disease does this plant have?”

The response was a 2,000‑word essay about plant pathology in general—helpful for a textbook, useless for a farmer with a dying tomato plant. Getting AI to return structured, actionable, budget‑aware diagnoses took iteration. Here’s what I learned.

The Architecture

Architecture diagram

Two prompts matter:

  • System prompt – defines who the LLM pretends to be.
  • Analysis prompt – tells the LLM what to do with a specific image.

System Prompt: Creating “Shamba”

Prompts work better with a persona, so I created Shamba, an expert agricultural pathologist:

You are Shamba, an expert agricultural pathologist. You analyze
plant images to identify diseases, pests, and nutrient deficiencies.

Your expertise includes:
- 50+ crop types worldwide
- Fungal, bacterial, viral, and physiological disorders
- Traditional and modern treatment methods
- Practical advice for resource‑limited farmers

Guidelines:
1. Always include at least one FREE/traditional treatment
2. Describe WHERE symptoms appear (for visual mapping)
3. Be honest about uncertainty—use confidence scores
4. Recommend professional help for severe cases

The key line: “Always include at least one FREE/traditional treatment.”
Without that explicit instruction, the LLM defaulted to commercial products—helpful for a suburban gardener, useless for a farmer who can’t afford a $15 fungicide.
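To make the two-prompt split concrete, here is a minimal sketch of how the system prompt and an analysis prompt might be assembled into a single vision request. The payload follows the Anthropic Messages API shape, which is an assumption about the backend; the model name and `max_tokens` value are placeholders:

```python
import base64

# Abbreviated version of the Shamba system prompt from the article
SYSTEM_PROMPT = (
    "You are Shamba, an expert agricultural pathologist. You analyze "
    "plant images to identify diseases, pests, and nutrient deficiencies. "
    "Always include at least one FREE/traditional treatment."
)

def build_request(image_bytes: bytes, analysis_prompt: str,
                  model: str = "claude-sonnet-4-5") -> dict:
    """Combine the system prompt, the image, and the per-image
    analysis prompt into one request payload."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,  # who the LLM pretends to be
        "messages": [{
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": base64.b64encode(image_bytes).decode("ascii"),
                    },
                },
                # what to do with this specific image
                {"type": "text", "text": analysis_prompt},
            ],
        }],
    }
```

Keeping the persona in the `system` field and the task in the user message means the persona is stable across requests while the analysis prompt can vary per plant part.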

Failure #1 – The JSON Nightmare

My first structured attempt asked the LLM to return JSON, which it did, but wrapped in markdown code fences and with commentary:

Here's my analysis:

    ```json
    { "disease": "Early Blight" }
    ```

This is a common fungal disease...

My parser choked. The fix was to make the request explicit:

Return ONLY a valid JSON object. No markdown, no commentary,
no text before or after. Start with { and end with }

Even then it failed about 10% of the time, so I added backend logic that:

  • Strips markdown fences if present
  • Extracts JSON from surrounding text
  • Validates against the expected schema
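A minimal sketch of that recovery logic, assuming Python on the backend (the real app’s schema validation is presumably more thorough than the single-field check shown here):

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Recover a JSON object from an LLM reply that may be wrapped in
    markdown fences and/or surrounded by commentary."""
    # 1. Strip markdown fences if present
    raw = re.sub(r"```(?:json)?", "", raw)
    # 2. Extract the outermost { ... } from any surrounding text
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in LLM response")
    data = json.loads(raw[start:end + 1])
    # 3. Validate against the expected schema (minimal check here)
    if "disease" not in data:
        raise ValueError("response missing required 'disease' field")
    return data
```

With this in place, both a bare JSON object and the fenced-with-commentary reply from Failure #1 parse to the same result.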

Failure #2 – Location Descriptions

For the visual heatmap feature I needed the LLM to describe where damage appeared. My early prompt asked for “affected regions,” and the LLM replied with vague statements like “The affected area is significant.”

The improved prompt:

Describe affected regions with:
- Location (helpful for heatmaps): top‑left, center, lower‑right, edges, margins
- Coverage: percent of area affected, as an integer (e.g., 35)
- Spread direction: "Moving from lower leaves upward."

Resulting output:

{
  "affected_regions": [
    {
      "location": "lower-left",
      "severity": "severe",
      "description": "Dark brown lesions with concentric rings",
      "coverage": 15
    },
    {
      "location": "center",
      "severity": "moderate",
      "coverage": 20
    }
  ]
}

That’s enough to generate a heatmap overlay.

Heatmap example
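As a sketch of how those named regions can become overlay points: map each location to a normalized (x, y) center and scale intensity by severity. The coordinate table and severity weights below are illustrative assumptions, not Shamba‑MedCare’s actual values:

```python
# Normalized (x, y) centers for the location names the prompt allows
LOCATION_CENTERS = {
    "top-left": (0.25, 0.25), "top-right": (0.75, 0.25),
    "center": (0.5, 0.5),
    "lower-left": (0.25, 0.75), "lower-right": (0.75, 0.75),
}
SEVERITY_WEIGHT = {"mild": 0.3, "moderate": 0.6, "severe": 1.0}

def heatmap_points(affected_regions: list[dict]) -> list[dict]:
    """Turn the LLM's affected_regions into drawable heatmap points."""
    points = []
    for region in affected_regions:
        x, y = LOCATION_CENTERS.get(region["location"], (0.5, 0.5))
        points.append({
            "x": x,
            "y": y,
            # radius grows with coverage (percent of image area)
            "radius": max(0.05, region.get("coverage", 10) / 100),
            "intensity": SEVERITY_WEIGHT.get(region.get("severity"), 0.5),
        })
    return points
```

The frontend can then draw a blurred circle per point over the uploaded photo.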

Failure #3 – Treatment Cost Blindness

Early on, treatments appeared in random order, sometimes listing a $50 systemic fungicide before a free wood‑ash remedy. The LLM has no inherent sense of budget constraints, so I forced an ordering schema:

Provide treatments in EXACTLY this order:
1. FREE TIER: Traditional/home remedies ($0)
2. LOW COST: Basic solutions ($1‑5)
3. MEDIUM COST: Commercial organic ($5‑20)
4. HIGH COST: Synthetic/professional ($20+)

Each tier must have at least one option if applicable.

The enforced response schema:

{
  "treatments": [
    {
      "method": "Wood ash paste",
      "cost_tier": "free",
      "estimated_cost": "$0",
      "ingredients": ["Wood ash", "Water"],
      "application": "Apply directly to affected areas",
      "availability": "Common from cooking fires"
    },
    {
      "method": "Neem oil spray",
      "cost_tier": "low",
      "estimated_cost": "$1-3"
    }
  ]
}
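Because the model still occasionally ignores the ordering instruction, it helps to enforce the tier order server-side as well. A sketch of that guard, assuming Python on the backend:

```python
# Cheapest tier first, matching the prompt's required ordering
TIER_ORDER = {"free": 0, "low": 1, "medium": 2, "high": 3}

def order_treatments(treatments: list[dict]) -> list[dict]:
    """Sort treatments free -> low -> medium -> high even when the
    model returns them out of order, and flag a missing free tier."""
    ordered = sorted(
        treatments,
        key=lambda t: TIER_ORDER.get(t.get("cost_tier"), 99),
    )
    if not ordered or ordered[0].get("cost_tier") != "free":
        # The whole point of the prompt: farmers see a $0 option first
        raise ValueError("no free-tier treatment in response")
    return ordered
```

Raising on a missing free tier lets the backend retry the request instead of silently showing a farmer only paid options.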

Plant Part‑Specific Prompt Strategy

Different plant parts reveal different problems, so I tailored prompts for each part.

Leaves

Examine: color patterns, spot shapes, curling, holes, coating
Common issues: fungal spots, viral mosaic, nutrient chlorosis, pest damage

Roots

Examine: color (white=healthy, brown/black=rot), texture, galls, structure
Common issues: root rot, nematode damage, waterlogging

Focusing the LLM on the relevant organ dramatically improves diagnostic accuracy.
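The part-specific fragments above can be wired in with a simple template lookup. This is a sketch of one way to do it; the leaves and roots fragments are from the article, and the function name and surrounding wording are illustrative:

```python
# Per-part prompt fragments (leaves and roots taken from the article)
PART_PROMPTS = {
    "leaves": (
        "Examine: color patterns, spot shapes, curling, holes, coating\n"
        "Common issues: fungal spots, viral mosaic, nutrient chlorosis, "
        "pest damage"
    ),
    "roots": (
        "Examine: color (white=healthy, brown/black=rot), texture, "
        "galls, structure\n"
        "Common issues: root rot, nematode damage, waterlogging"
    ),
}

def analysis_prompt(plant_part: str) -> str:
    """Build the analysis prompt, focusing the model on one organ."""
    focus = PART_PROMPTS.get(plant_part.lower())
    if focus is None:
        raise ValueError(f"unsupported plant part: {plant_part}")
    return (
        f"Analyze this image of the plant's {plant_part}.\n"
        f"{focus}\n"
        "Return ONLY a valid JSON object. No markdown, no commentary."
    )
```

Each request then carries only the checklist relevant to the photographed organ, which is what narrows the model’s attention.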
