How to Generate Images Using LLM Gateway and the Vercel AI SDK
Source: Dev.to

Image generation has become a core feature of modern AI applications. But integrating with multiple image providers — each with different APIs, pricing, and capabilities — can be a pain.
LLM Gateway simplifies this by giving you a single, OpenAI‑compatible API for image generation across providers like Google Gemini, Alibaba Qwen, ByteDance Seedream, and more. In this guide, we’ll walk through generating images using both the OpenAI SDK and the Vercel AI SDK.
Prerequisites
- Sign up for LLM Gateway and create a project.
- Copy your API key.
Export it in your environment:
```shell
export LLM_GATEWAY_API_KEY="llmgtwy_XXXXXXXXXXXXXXXX"
```
Option 1 – The Images API (/v1/images/generations)
The simplest approach – a drop‑in replacement for OpenAI’s image‑generation endpoint.
Using curl
```shell
curl -X POST "https://api.llmgateway.io/v1/images/generations" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-3-pro-image-preview",
    "prompt": "A cute cat wearing a tiny top hat",
    "n": 1,
    "size": "1024x1024"
  }'
```
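The response follows the OpenAI Images API shape: a `data` array whose entries carry either base64 content (`b64_json`) or a hosted `url`, depending on the provider. As a sketch (the interface below is our own minimal typing, not from the official SDK), decoding such a payload looks like:

```typescript
// Minimal typing of an OpenAI-style images response (fields assumed per the
// OpenAI-compatible endpoint; some providers return `url` instead of `b64_json`).
interface ImagesResponse {
  data: { b64_json?: string; url?: string }[];
}

// Decode every base64 entry into raw image bytes, skipping URL-only entries.
function decodeImages(res: ImagesResponse): Buffer[] {
  return res.data
    .filter((img) => img.b64_json)
    .map((img) => Buffer.from(img.b64_json!, "base64"));
}
```

URL-only entries can instead be fetched and saved separately.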
Using the OpenAI SDK (Node.js)
```typescript
import OpenAI from "openai";
import { writeFileSync } from "fs";

const client = new OpenAI({
  baseURL: "https://api.llmgateway.io/v1",
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const response = await client.images.generate({
  model: "gemini-3-pro-image-preview",
  prompt: "A futuristic city skyline at sunset with flying cars",
  n: 1,
  size: "1024x1024",
});

response.data.forEach((image, i) => {
  if (image.b64_json) {
    const buf = Buffer.from(image.b64_json, "base64");
    writeFileSync(`image-${i}.png`, buf);
  }
});
```
That’s it – point baseURL to LLM Gateway and use the standard OpenAI SDK. No new libraries needed.
Option 2 – The Vercel AI SDK (generateImage)
If you’re already using the Vercel AI SDK, LLM Gateway provides a native provider package: @llmgateway/ai-sdk-provider.
```shell
npm install @llmgateway/ai-sdk-provider ai
```
Simple Image Generation
```typescript
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
// Image generation is exported under an experimental name in the AI SDK.
import { experimental_generateImage as generateImage } from "ai";
import { writeFileSync } from "fs";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const result = await generateImage({
  model: llmgateway.image("gemini-3-pro-image-preview"),
  prompt:
    "A cozy cabin in a snowy mountain landscape at night with aurora borealis",
  size: "1024x1024",
  n: 1,
});

result.images.forEach((image, i) => {
  const buf = Buffer.from(image.base64, "base64");
  writeFileSync(`image-${i}.png`, buf);
});
```
Conversational Image Generation with streamText
The real power comes when you combine image generation with chat. Using the chat‑completion approach, you can build a conversational image‑generation flow – ask the model to create an image, then refine it through follow‑up messages.
Next.js API Route
```typescript
import {
  streamText,
  type UIMessage,
  convertToModelMessages,
} from "ai";
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";

interface ChatRequestBody {
  messages: UIMessage[];
}

export async function POST(req: Request) {
  const { messages }: ChatRequestBody = await req.json();

  const llmgateway = createLLMGateway({
    apiKey: process.env.LLM_GATEWAY_API_KEY,
    baseUrl: "https://api.llmgateway.io/v1",
  });

  try {
    const result = streamText({
      model: llmgateway.chat("gemini-3-pro-image-preview"),
      messages: convertToModelMessages(messages),
    });
    return result.toUIMessageStreamResponse();
  } catch {
    return new Response(
      JSON.stringify({ error: "LLM Gateway Chat request failed" }),
      { status: 500 },
    );
  }
}
```
Front‑end Hook (useChat)
```tsx
import { useChat } from "@ai-sdk/react";

export const ImageChat = () => {
  const { messages, status, sendMessage } = useChat();

  return (
    <div>
      {messages.map((m) => {
        if (m.role === "assistant") {
          const textContent = m.parts
            .filter((p) => p.type === "text")
            .map((p) => p.text)
            .join("");
          const imageParts = m.parts.filter(
            (p) => p.type === "file" && p.mediaType?.startsWith("image/")
          );
          return (
            <div key={m.id}>
              {textContent && <p>{textContent}</p>}
              {imageParts.map((part, idx) =>
                part.type === "file" ? (
                  <img key={idx} src={part.url} alt={`Generated image ${idx}`} />
                ) : null
              )}
            </div>
          );
        }
        return (
          <div key={m.id}>
            {m.parts.map((p, i) => (
              <span key={i}>{p.type === "text" ? p.text : null}</span>
            ))}
          </div>
        );
      })}

      {/* Minimal input; sendMessage submits a new user message */}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          const input = e.currentTarget.elements.namedItem(
            "prompt"
          ) as HTMLInputElement;
          sendMessage({ text: input.value });
          input.value = "";
        }}
      >
        <input name="prompt" disabled={status !== "ready"} />
      </form>
    </div>
  );
};
```
With these snippets you can generate single images, batch images, and even build a full‑featured conversational image‑generation UI using LLM Gateway together with the OpenAI SDK or the Vercel AI SDK. Happy coding!
Image Generation with LLM Gateway
Below is a quick guide on how to generate, edit, and switch between image providers using LLM Gateway.
1️⃣ Generate an Image
```tsx
import { Image } from "ai-elements";

export const ImageDemo = () => {
  // useChatCompletion: an app-level hook wrapping /v1/chat/completions
  // (its import was omitted in the original snippet).
  const { data, error, isLoading } = useChatCompletion({
    model: "gemini-3-pro-image-preview",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Create a futuristic cityscape at sunset",
          },
        ],
      },
    ],
    // optional: control size/quality
    image_config: {
      aspect_ratio: "16:9",
      image_size: "4K",
    },
  });

  if (isLoading) return <p>Loading…</p>;
  if (error) return <p>{error.message}</p>;

  return (
    <div>
      {data?.choices?.map((c, i) => (
        <div key={i}>
          {c.message?.content?.map((p, j) =>
            p.type === "image" ? (
              <Image key={j} {...p} alt="Generated image" />
            ) : p.type === "text" ? (
              <span key={j}>{p.text}</span>
            ) : null
          )}
        </div>
      ))}
    </div>
  );
};
```
Tip: The pre‑built `Image` component from ai-elements gives you a polished rendering experience for image parts out of the box.
2️⃣ Image Editing
LLM Gateway supports editing existing images via /v1/images/edits. Provide an image URL (HTTPS or base64 data URL) together with an edit prompt:
```shell
curl -X POST "https://api.llmgateway.io/v1/images/edits" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "images": [
      {
        "image_url": "https://example.com/source-image.png"
      }
    ],
    "prompt": "Add a watercolor effect to this image",
    "model": "gemini-3-pro-image-preview",
    "quality": "high",
    "size": "1024x1024"
  }'
```
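To edit a local file rather than a hosted image, you can inline it as a base64 data URL for the `image_url` field. A minimal sketch (the helper name and default media type are our own, not part of LLM Gateway):

```typescript
// Hypothetical helper: wrap raw image bytes as a base64 data URL,
// usable as "image_url" in the /v1/images/edits request body.
function toDataUrl(bytes: Buffer, mediaType = "image/png"): string {
  return `data:${mediaType};base64,${bytes.toString("base64")}`;
}
```

For example, `toDataUrl(readFileSync("source-image.png"))` produces a `data:image/png;base64,…` string you can drop straight into the request body above.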
3️⃣ Switching Providers
Changing the image provider is a single‑line edit—just swap the model string:
```typescript
// Google Gemini
model: "gemini-3-pro-image-preview"

// Alibaba Qwen (from $0.03/image)
model: "alibaba/qwen-image-plus"

// ByteDance Seedream (up to 4K output)
model: "bytedance/seedream-4-5"

// Z.AI CogView (great for text in images)
model: "zai/cogview-4"
```
Provider Comparison
| Provider | Model | Price (per image) | Best For |
|---|---|---|---|
| Google | gemini-3-pro-image-preview | Varies | General purpose, high quality |
| Alibaba | alibaba/qwen-image-max | $0.075 | Highest quality |
| Alibaba | alibaba/qwen-image-plus | $0.03 | Best value |
| ByteDance | bytedance/seedream-4-5 | $0.045 | Up to 4K, multi‑image fusion |
| Z.AI | zai/cogview-4 | $0.01 | Cheapest, bilingual text rendering |
4️⃣ Customizing Output
Use the image_config parameter (for chat completions) or the standard size/quality parameters (for the Images API) to control the result.
Google: aspect ratio + resolution

```json
{
  "image_config": {
    "aspect_ratio": "16:9",
    "image_size": "4K"
  }
}
```

Alibaba: pixel dimensions + seed for reproducibility

```json
{
  "image_config": {
    "image_size": "1024x1536",
    "seed": 42
  }
}
```
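Because the supported keys differ by provider, it can help to centralize `image_config` construction in one place. These builders are illustrative only; the parameter names mirror the examples above, and actual support varies by provider:

```typescript
// Illustrative builders for per-provider image_config payloads.
// Keys (aspect_ratio, image_size, seed) follow the examples above.
type ImageConfig = Record<string, string | number>;

// Google-style config: aspect ratio plus a named resolution.
function googleConfig(aspectRatio: string, size: string): ImageConfig {
  return { aspect_ratio: aspectRatio, image_size: size };
}

// Alibaba-style config: pixel dimensions, with an optional seed
// for reproducible generations.
function alibabaConfig(size: string, seed?: number): ImageConfig {
  const cfg: ImageConfig = { image_size: size };
  if (seed !== undefined) cfg.seed = seed;
  return cfg;
}
```

Either result can be passed as the `image_config` field of a chat-completion request body.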
Wrapping Up
With LLM Gateway, image generation becomes provider‑agnostic. Whether you use the OpenAI SDK, Vercel AI SDK, or raw HTTP, you get:
- One API for many providers
- Easy model swapping without code changes
- Fine‑grained control over size, quality, and aspect ratio
