# LLM Chains vs Single Calls in n8n: My Caption Generation Experiment
Source: Dev.to
## The Two Approaches
### Approach A: 3‑Step Chain (Haiku)
Break the task into micro‑steps:
1. Extract key points from the data.
2. Draft the structure from the Step 1 output.
3. Polish and add metadata (hashtags, CTA, etc.).
### Approach B: Sonnet Single Call
Send the entire prompt to a more capable model in one go, letting it handle the whole flow.
## Test Script (Outside n8n)
```bash
# Setup
npm init -y
npm i @anthropic-ai/sdk dotenv
export ANTHROPIC_API_KEY="your_key_here"
```
### index.mjs
```js
import "dotenv/config";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Sample data structure
const image_details = [
  { order: 1, attributeA: "value1", attributeB: "value2" },
  { order: 2, attributeA: "value3", attributeB: "value4" },
];

async function callClaude({ model, system, prompt }) {
  const res = await client.messages.create({
    model,
    max_tokens: 1000,
    system,
    messages: [{ role: "user", content: prompt }],
  });
  // res.content is an array of content blocks, not a single object,
  // so pull the text out of the first text block.
  const textBlock = res.content?.find((block) => block.type === "text");
  return textBlock?.text ?? "";
}

// Approach A: Haiku 3-step chain
async function haikuChain() {
  const model = "claude-haiku-4-5";
  const system = "You are a caption editor. Be concise.";

  // Step 1: Extract data
  const step1 = await callClaude({
    model,
    system,
    prompt: `Extract key descriptions from this data:\n${JSON.stringify({ image_details }, null, 2)}`,
  });

  // Step 2: Create structure
  const step2 = await callClaude({
    model,
    system,
    prompt: `Based on this, create hook + body + CTA:\n${step1}`,
  });

  // Step 3: Add tags
  const step3 = await callClaude({
    model,
    system,
    prompt: `Add hashtags to this caption:\n${step2}`,
  });

  return { step1, step2, final: step3 };
}

// Approach B: Sonnet single call
async function sonnetSingle() {
  const model = "claude-sonnet-4-5";
  const system = "You are a caption editor. Be concise.";
  return await callClaude({
    model,
    system,
    prompt: `Create a complete caption (hook + body + bullets + CTA + tags) from:\n${JSON.stringify({ image_details }, null, 2)}`,
  });
}

// Run both (top-level await works in .mjs)
const resultA = await haikuChain();
const resultB = await sonnetSingle();
console.log("=== Haiku Chain ===\n", resultA.final);
console.log("\n=== Sonnet Single ===\n", resultB);
```
## Preparing Data in n8n’s Code Node
```js
// Wrong: the LLM can't work with IDs alone
const image_details = transformedImages.map(img => ({ id: img.id }));

// Right: provide the full content the LLM needs
const image_details = transformedImages.map((img, index) => ({
  order: index + 1,
  ...img, // spread ALL attributes the LLM needs
}));

return [{ json: { image_details } }];
```
**Root cause:** Only image IDs were passed, leaving the model without any descriptive context.
**Fix:** Spread the entire object (`...img`) so the LLM receives all relevant attributes.
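A cheap guard in the same Code node catches this class of bug before any tokens are spent. This is a sketch, not an n8n feature: the helper name and the attribute names are illustrative.

```js
// Illustrative guard: fail fast if items carry nothing but an id/order,
// so the workflow errors out before a paid LLM call is made.
function assertHasContext(items) {
  for (const item of items) {
    const descriptive = Object.keys(item).filter(
      (k) => k !== "id" && k !== "order"
    );
    if (descriptive.length === 0) {
      throw new Error(`Item ${item.id ?? "?"} has no descriptive attributes`);
    }
  }
  return items;
}

const good = [{ order: 1, id: "a", attributeA: "value1" }];
const bad = [{ order: 1, id: "b" }];

assertHasContext(good); // passes through unchanged
// assertHasContext(bad); // would throw before the LLM node runs
```

Failing loudly here is cheaper than debugging a vague caption three nodes downstream.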
## Metric Comparison
| Metric | Haiku 3‑Chain | Sonnet Single |
|---|---|---|
| Context awareness | Fragments between steps | Holistic understanding |
| Tone consistency | Stitched together, uneven | Unified from start to finish |
| Output quality | Informative but stiff | Natural, engaging, flows well |
| Cost | ~3× cheaper | Higher but worth it |
Verdict: For captions (or any creative writing), continuity of context outweighs modest cost savings.
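To put rough numbers behind that verdict: the per-million-token prices below are assumptions (approximately Haiku 4.5 at $1 in / $5 out and Sonnet 4.5 at $3 in / $15 out at the time of writing; check current pricing), and the token budgets are illustrative guesses for a caption-sized job, not measurements.

```js
// Assumed pricing in USD per million tokens (verify against current rates).
const PRICE = {
  haiku: { in: 1.0, out: 5.0 },
  sonnet: { in: 3.0, out: 15.0 },
};

// Rough token budgets for one caption job (illustrative, not measured).
const chainCalls = [
  { in: 800, out: 300 }, // step 1: extract
  { in: 400, out: 300 }, // step 2: structure
  { in: 400, out: 300 }, // step 3: tags
];
const singleCall = { in: 900, out: 500 };

const cost = (p, c) => (c.in * p.in + c.out * p.out) / 1e6;

const chainCost = chainCalls.reduce((sum, c) => sum + cost(PRICE.haiku, c), 0);
const singleCost = cost(PRICE.sonnet, singleCall);

console.log(`Haiku chain:   $${chainCost.toFixed(6)}`);
console.log(`Sonnet single: $${singleCost.toFixed(6)}`);
```

Note that with these assumed budgets the chain comes out cheaper per run, but by less than the 3× per-token price gap, because chaining re-sends each intermediate output as input to the next step.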
## Pattern & Use‑Case Guidance
| Pattern | Use Case | Examples |
|---|---|---|
| Lower‑model + Chain | Clear role separation | Data extraction, classification, formatting |
| Mid/upper‑model + Single | Context‑dependent creativity | Captions, articles, copywriting |
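The table above can be collapsed into a tiny routing helper. This is a sketch: the task taxonomy is invented for illustration, and the model IDs are the ones used in the test script rather than a guaranteed-stable API surface.

```js
// Illustrative router: map task character to a model tier.
const STRUCTURED_TASKS = new Set(["extract", "classify", "format"]);
const CREATIVE_TASKS = new Set(["caption", "article", "copy"]);

function pickModel(task) {
  // Cheap + fast model for chain-friendly, clearly separated roles.
  if (STRUCTURED_TASKS.has(task)) return "claude-haiku-4-5";
  // Context-heavy creative work gets one holistic call to the stronger model.
  if (CREATIVE_TASKS.has(task)) return "claude-sonnet-4-5";
  throw new Error(`Unknown task: ${task}`);
}

console.log(pickModel("extract")); // claude-haiku-4-5
console.log(pickModel("caption")); // claude-sonnet-4-5
```

In n8n this maps naturally onto a Switch node in front of two differently configured LLM nodes.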
Key takeaways:
- Context > Cost: Fragmented chains break narrative flow for creative tasks.
- Data quality matters: Supplying rich information is essential; chaining cannot compensate for missing context.
- Haiku’s niche: Excellent for speed and highly structured tasks, but Sonnet excels when “feel” matters.
## References
- n8n Advanced AI Docs
- Claude Sonnet 4.5 Announcement
- Claude Haiku on AWS Bedrock