Update of “Fun project of the week, Mermaid flowcharts generator!” — V2 and more…

Published: December 7, 2025 at 04:29 AM EST
5 min read
Source: Dev.to

Introduction

Following a previous post on generating Mermaid charts with Ollama and local LLMs (because I get really bored with sites that ask me to subscribe and pay a fee), I decided to enhance and update my application, for two essential reasons:

  • My first app hard‑coded which LLM to use.
  • The application was not iterative.

Essential Enhancements

The core of the enhancement involves abstracting the LLM choice and creating a clean, repeatable workflow.

Dynamic LLM Selection (Addressing Hardcoded Info)

Instead of having a single hard‑coded model, the application should dynamically discover and utilize any model available via your local Ollama instance.

  • Implement Model Discovery – Send a request to Ollama’s /api/tags endpoint (http://localhost:11434/api/tags). This returns a JSON list of all locally installed models.
  • Create a Selection Interface – CLI – Present the discovered list of models with numbered indices and prompt the user to choose one by number.
  • Create a Selection Interface – GUI – Use a dropdown or radio‑button group (e.g., Streamlit) populated by the retrieved model names (see the sketch after this list).
  • Pass the Model Name – The chosen model name (e.g., llama3:8b-instruct-q4_0) must then be used as a variable in the payload for all subsequent calls to the /api/chat endpoint.
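
A minimal Streamlit sketch of the GUI selection, assuming only the /api/tags endpoint described above (the widget labels and the llama3 fallback are illustrative, not part of the current app):

import requests
import streamlit as st

def get_installed_models():
    """Fetch locally installed Ollama models via GET /api/tags."""
    try:
        response = requests.get("http://localhost:11434/api/tags", timeout=5)
        response.raise_for_status()
        return sorted(m["name"] for m in response.json().get("models", []))
    except requests.RequestException:
        return []

models = get_installed_models()
if models:
    model_name = st.selectbox("Ollama model", models)
else:
    model_name = st.text_input("No models found; enter one manually:", "llama3")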

Iterative Workflow and Error Handling (Addressing Iterativeness)

A non‑iterative application forces a restart for every chart, which is frustrating. Iterativeness isn’t just about looping; it’s about handling success/failure gracefully within the same session.

  • Main Execution Loop – Wrap the primary logic (prompt user → call LLM → generate image) in a while True loop that only breaks when the user explicitly chooses to quit.
  • Session State (GUI) – When using a GUI framework like Streamlit, employ st.session_state to preserve the generated Mermaid code and the image path across button clicks and re‑renders (see the sketch after this list).
  • Input Validation – Check if the user’s prompt is empty.
  • Connection Check – Verify that the Ollama server is running before trying to fetch models or generate code.
  • File‑Handling Safety – Since temporary files are created for mmdc, keep the cleanup logic debuggable (e.g., only delete the temp files when DEBUG_MODE is disabled, so failed renders can still be inspected).
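
For the Streamlit route, here is a minimal sketch of the session‑state and input‑validation points. It reuses model_name from the selection sketch above and the generate_mermaid_code / render_mermaid_to_png helpers from the console version below; the output file name is illustrative:

import streamlit as st
from pathlib import Path

# st.session_state survives Streamlit's top-to-bottom reruns, so the
# generated code and image path persist across button clicks.
if "mermaid_code" not in st.session_state:
    st.session_state.mermaid_code = None
    st.session_state.image_path = None

prompt = st.text_area("Describe your diagram")

if st.button("Generate"):
    if not prompt.strip():
        st.warning("Prompt cannot be empty.")  # input validation
    else:
        # Reuse the helpers from the console version below.
        code = generate_mermaid_code(prompt, model_name)
        out_file = Path("output") / "diagram.png"
        if code and render_mermaid_to_png(code, out_file):
            st.session_state.mermaid_code = code
            st.session_state.image_path = out_file

if st.session_state.image_path:
    st.image(str(st.session_state.image_path))
    st.code(st.session_state.mermaid_code, language="mermaid")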

Ideas for V3 (to be continued)

| Enhancement | Description | Value Proposition |
| --- | --- | --- |
| Code Review/Repair Mode | If mmdc fails to render (due to a syntax error), automatically send the Mermaid code and the mmdc error log back to the LLM (with a specific system prompt) and ask it to fix the syntax (sketched after this table). | Reduces user frustration and automatically fixes common LLM‑induced syntax errors. |
| Diagram History | Store the generated text prompt, the output code, and the corresponding image file path in a simple local database (like SQLite) or a structured file (like JSON/YAML). | Allows users to easily revisit and reuse past diagrams without regenerating them. |
| Output Format Options | Add options to output the diagram in formats other than PNG, such as SVG (better for scaling) or PDF. | Increases versatility for users needing high‑quality vector graphics. |
| Persistent Settings | Save the last used LLM model to a configuration file (e.g., config.json). | Saves the user time by automatically selecting their preferred model upon startup. |
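
As a taste of the Repair Mode idea, here is a hedged sketch of a single repair round‑trip. It reuses requests, OLLAMA_CHAT_URL, and clean_mermaid_code from the console version below; the repair_mermaid_code helper and its system prompt are hypothetical, not part of the current app:

# Hypothetical V3 "Repair Mode" sketch: when mmdc fails, feed the broken
# code plus the captured error log back to the model and ask for a fix.
REPAIR_SYSTEM_MSG = (
    "You are a Mermaid syntax repair tool. You receive broken Mermaid code "
    "and a renderer error log. Output ONLY the corrected Mermaid code."
)

def repair_mermaid_code(broken_code, error_log, model_name):
    """One repair round-trip: code + error log in, corrected code out."""
    payload = {
        "model": model_name,
        "messages": [
            {"role": "system", "content": REPAIR_SYSTEM_MSG},
            {"role": "user",
             "content": f"Code:\n{broken_code}\n\nError:\n{error_log}"},
        ],
        "stream": False,
        "options": {"temperature": 0.1},
    }
    response = requests.post(OLLAMA_CHAT_URL, json=payload, timeout=60)
    response.raise_for_status()
    return clean_mermaid_code(response.json()["message"]["content"])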

Code(s) and Implementation(s)

1 — The console version

Set up a Python virtual environment, install the Python dependency, then install the Mermaid CLI (mmdc) globally via npm:

pip install --upgrade pip
pip install requests

npm install -g @mermaid-js/mermaid-cli

The code 🧑‍💻

# app_V3.py
import subprocess
import requests
import re
import sys
import time
from pathlib import Path

DEBUG_MODE = True  # keep the temporary .mmd file around so failed renders can be inspected

OLLAMA_BASE_URL = "http://localhost:11434"
OLLAMA_CHAT_URL = f"{OLLAMA_BASE_URL}/api/chat"
OLLAMA_TAGS_URL = f"{OLLAMA_BASE_URL}/api/tags"

INPUT_DIR = Path("./input")
OUTPUT_DIR = Path("./output")

def check_mmdc_installed():
    """Checks if 'mmdc' is installed."""
    try:
        subprocess.run(['mmdc', '--version'], check=True, capture_output=True, timeout=5)
        return True
    except (FileNotFoundError, subprocess.TimeoutExpired, subprocess.CalledProcessError):
        print("Error: Mermaid CLI (mmdc) not found or misconfigured.")
        print("Try: npm install -g @mermaid-js/mermaid-cli")
        return False

# MODEL SELECTION
def get_installed_models():
    """Fetches locally installed Ollama models."""
    try:
        response = requests.get(OLLAMA_TAGS_URL, timeout=5)
        response.raise_for_status()
        return sorted([m['name'] for m in response.json().get('models', [])])
    except requests.RequestException:
        return []

def select_model_interactive():
    """Interactive menu to choose a model."""
    print("\n--- Ollama Model Selection ---")
    models = get_installed_models()

    if not models:
        return input("No models found. Enter model name manually (e.g., llama3): ").strip() or "llama3"

    for idx, model in enumerate(models, 1):
        print(f"{idx}. {model}")

    while True:
        choice = input(f"\nSelect a model (1-{len(models)}) or type custom name: ").strip()
        if choice.isdigit() and 1 <= int(choice) <= len(models):
            return models[int(choice) - 1]
        elif choice:
            return choice

def clean_mermaid_code(code_string):
    """Clean common LLM formatting errors from Mermaid code."""
    # Replace non-breaking spaces and strip zero-width characters that break mmdc.
    cleaned = code_string.replace('\xa0', ' ').replace('\u200b', '')

    # Remove leftover Markdown fences.
    cleaned = cleaned.replace("```mermaid", "").replace("```", "")

    # Collapse runs of horizontal whitespace while preserving newlines.
    cleaned = re.sub(r'[ \t\r\f\v]+', ' ', cleaned)

    # Drop blank lines and flush each remaining line left.
    rebuilt = [line.strip() for line in cleaned.splitlines() if line.strip()]

    final = '\n'.join(rebuilt)
    # Insert a newline when a node definition runs straight into the next token.
    final = re.sub(r'(\])([A-Za-z0-9])', r'\1\n\2', final)
    return final.strip()

def generate_mermaid_code(user_prompt, model_name):
    """Calls Ollama to generate the code."""
    system_msg = (
        "You are a Mermaid Diagram Generator. Output ONLY valid Mermaid code. "
        "Do not include explanations. Start with 'graph TD' or 'flowchart LR'. "
        "Use simple ASCII characters for node IDs."
    )

    payload = {
        "model": model_name,
        "messages": [{"role": "system", "content": system_msg}, {"role": "user", "content": user_prompt}],
        "stream": False,
        "options": {"temperature": 0.1}
    }

    try:
        print(f"Thinking ({model_name})...")
        response = requests.post(OLLAMA_CHAT_URL, json=payload, timeout=60)
        response.raise_for_status()
        content = response.json().get("message", {}).get("content", "").strip()

        # Prefer the fenced block if the model wrapped its answer in one.
        match = re.search(r"```mermaid\s*(.*?)```", content, re.DOTALL)
        if match:
            return clean_mermaid_code(match.group(1))
        else:
            return clean_mermaid_code(content)
    except Exception as e:
        print(f"Error generating Mermaid code: {e}")
        return None

def render_mermaid_to_png(mermaid_code, output_path):
    """Uses mmdc to render Mermaid code to PNG."""
    temp_mmd = INPUT_DIR / "temp.mmd"
    temp_mmd.write_text(mermaid_code, encoding="utf-8")

    try:
        subprocess.run(
            ["mmdc", "-i", str(temp_mmd), "-o", str(output_path)],
            check=True,
            capture_output=True,
            timeout=30,
        )
        return True
    except subprocess.CalledProcessError as e:
        print(f"mmdc error: {e.stderr.decode()}")
        return False
    finally:
        if not DEBUG_MODE:
            try:
                temp_mmd.unlink()
            except FileNotFoundError:
                pass

def main():
    if not check_mmdc_installed():
        sys.exit(1)

    INPUT_DIR.mkdir(exist_ok=True)
    OUTPUT_DIR.mkdir(exist_ok=True)

    model_name = select_model_interactive()

    while True:
        user_prompt = input("\nEnter a description for your diagram (or 'quit' to exit): ").strip()
        if user_prompt.lower() in {"quit", "exit"}:
            print("Goodbye!")
            break
        if not user_prompt:
            print("Prompt cannot be empty.")
            continue

        mermaid_code = generate_mermaid_code(user_prompt, model_name)
        if not mermaid_code:
            continue

        timestamp = int(time.time())
        output_file = OUTPUT_DIR / f"diagram_{timestamp}.png"

        if render_mermaid_to_png(mermaid_code, output_file):
            print(f"Diagram saved to: {output_file}")
        else:
            print("Failed to render diagram.")

if __name__ == "__main__":
    main()
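
With the dependencies installed and the Ollama server already listening on localhost:11434, run the script directly:

python app_V3.py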

Feel free to adapt the code for a GUI version (e.g., Streamlit) or extend it with the V3 ideas listed above.
