LLM Chains vs Single Calls in n8n: My Caption Generation Experiment

Published: February 7, 2026 at 11:00 AM EST
3 min read
Source: Dev.to


The Two Approaches

Approach A: 3‑Step Chain (Haiku)

Break the task into micro‑steps:

  1. Extract key points from the data.
  2. Draft structure using the Step 1 output.
  3. Polish and add metadata (hashtags, CTA, etc.).
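The micro-step flow above can be sketched as a generic chain runner, where each step's output feeds the next. This is an illustrative sketch: the step functions below are stand-ins for real LLM calls.

```javascript
// Minimal chain runner: each async step receives the previous step's output.
async function runChain(input, steps) {
  let result = input;
  for (const step of steps) {
    result = await step(result);
  }
  return result;
}

// Stand-in steps (a real version would call the model inside each one).
const extract = async (data) => `key points from ${data}`;
const draft = async (points) => `draft built on: ${points}`;
const polish = async (draftText) => `${draftText} #hashtags [CTA]`;

const caption = await runChain("image data", [extract, draft, polish]);
console.log(caption);
```

The point of this shape is that every step only ever sees the previous step's output, which is exactly why context fragments in Approach A.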

Approach B: Sonnet Single Call

Send the entire prompt to a more capable model in one go, letting it handle the whole flow.


Test Script (Outside n8n)

# Setup
npm init -y
npm i @anthropic-ai/sdk dotenv
export ANTHROPIC_API_KEY="your_key_here"

index.mjs

import "dotenv/config";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Sample data structure
const image_details = [
  { order: 1, attributeA: "value1", attributeB: "value2" },
  { order: 2, attributeA: "value3", attributeB: "value4" },
];

async function callClaude({ model, system, prompt }) {
  const res = await client.messages.create({
    model,
    max_tokens: 1000,
    system,
    messages: [{ role: "user", content: prompt }],
  });
  // The SDK returns an array of content blocks, not a single object,
  // so pull the text from the first block.
  return res.content?.[0]?.text ?? "";
}

// Approach A: Haiku 3‑step chain
async function haikuChain() {
  const model = "claude-haiku-4-5";
  const system = "You are a caption editor. Be concise.";

  // Step 1: Extract data
  const step1 = await callClaude({
    model,
    system,
    prompt: `Extract key descriptions from this data:\n${JSON.stringify({ image_details }, null, 2)}`,
  });

  // Step 2: Create structure
  const step2 = await callClaude({
    model,
    system,
    prompt: `Based on this, create hook + body + CTA:\n${step1}`,
  });

  // Step 3: Add tags
  const step3 = await callClaude({
    model,
    system,
    prompt: `Add hashtags to this caption:\n${step2}`,
  });

  return { step1, step2, final: step3 };
}

// Approach B: Sonnet single call
async function sonnetSingle() {
  const model = "claude-sonnet-4-5";
  const system = "You are a caption editor. Be concise.";

  return await callClaude({
    model,
    system,
    prompt: `Create a complete caption (hook + body + bullets + CTA + tags) from:\n${JSON.stringify({ image_details }, null, 2)}`,
  });
}

// Run both
const resultA = await haikuChain();
const resultB = await sonnetSingle();

console.log("=== Haiku Chain ===\n", resultA.final);
console.log("\n=== Sonnet Single ===\n", resultB);

Preparing Data in n8n’s Code Node

// Wrong: LLM can't work with IDs alone
const image_details = transformedImages.map(img => ({ id: img.id }));

// Right: Provide the full content the LLM needs
const image_details = transformedImages.map((img, index) => ({
  order: index + 1,
  ...img, // Spread ALL attributes the LLM needs
}));

return [{ json: { image_details } }];

Root cause: Only image IDs were passed, leaving the model without any descriptive context.
Fix: Spread the entire object (...img) so the LLM receives all relevant attributes.
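A quick standalone demo of the difference in the payload the model actually receives (the sample data below is made up for illustration):

```javascript
const transformedImages = [
  { id: "a1", attributeA: "sunset", attributeB: "beach" },
];

// Wrong: the model only sees opaque IDs, with no descriptive context.
const idsOnly = transformedImages.map((img) => ({ id: img.id }));

// Right: order plus every attribute, via spread.
const full = transformedImages.map((img, index) => ({
  order: index + 1,
  ...img,
}));

console.log(JSON.stringify(idsOnly)); // [{"id":"a1"}]
console.log(JSON.stringify(full));
// [{"order":1,"id":"a1","attributeA":"sunset","attributeB":"beach"}]
```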


Metric Comparison

| Metric | Haiku 3‑Chain | Sonnet Single |
| --- | --- | --- |
| Context awareness | Fragments between steps | Holistic understanding |
| Tone consistency | Stitched together, uneven | Unified from start to finish |
| Output quality | Informative but stiff | Natural, engaging, flows well |
| Cost | ~3× cheaper | Higher but worth it |
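The cost row is easiest to reason about with simple per-token arithmetic. Here is a rough estimator; the prices and token counts below are placeholders for illustration, not real Anthropic pricing.

```javascript
// Cost in dollars for one call, given per-million-token prices.
function callCost({ inputTokens, outputTokens, inputPricePerM, outputPricePerM }) {
  return (inputTokens / 1e6) * inputPricePerM + (outputTokens / 1e6) * outputPricePerM;
}

// Hypothetical prices, for illustration only.
const haikuPrices = { inputPricePerM: 1, outputPricePerM: 5 };
const sonnetPrices = { inputPricePerM: 3, outputPricePerM: 15 };

// Chain = 3 smaller calls; single = 1 larger call.
const chainCost = 3 * callCost({ inputTokens: 500, outputTokens: 300, ...haikuPrices });
const singleCost = callCost({ inputTokens: 1200, outputTokens: 600, ...sonnetPrices });

console.log({ chainCost, singleCost, ratio: singleCost / chainCost });
```

Note that the chain pays input tokens three times (and re-sends intermediate output as input), so the real multiplier depends heavily on how much text each step produces.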

Verdict: For captions (or any creative writing), continuity of context outweighs modest cost savings.


Pattern & Use‑Case Guidance

| Pattern | Use Case | Examples |
| --- | --- | --- |
| Lower‑model + Chain | Clear role separation | Data extraction, classification, formatting |
| Mid/upper‑model + Single | Context‑dependent creativity | Captions, articles, copywriting |
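The table's decision rule can be encoded as a small routing helper. The task categories here are my own illustrative taxonomy; the model IDs are the ones used in the test script above.

```javascript
// Tasks with clear role separation suit a cheaper model in a chain.
const STRUCTURED_TASKS = new Set(["extraction", "classification", "formatting"]);

function pickPattern(taskType) {
  if (STRUCTURED_TASKS.has(taskType)) {
    return { model: "claude-haiku-4-5", pattern: "chain" };
  }
  // Captions, articles, copywriting: keep the full context in one call.
  return { model: "claude-sonnet-4-5", pattern: "single" };
}

console.log(pickPattern("classification")); // chain on Haiku
console.log(pickPattern("caption"));        // single call on Sonnet
```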

Key takeaways:

  • Context > Cost: Fragmented chains break narrative flow for creative tasks.
  • Data quality matters: Supplying rich information is essential; chaining cannot compensate for missing context.
  • Haiku’s niche: Excellent for speed and highly structured tasks, but Sonnet excels when “feel” matters.

References

  • n8n Advanced AI Docs
  • Claude Sonnet 4.5 Announcement
  • Claude Haiku on AWS Bedrock