Seedance 2.0 @Tags: How to Direct AI Videos with Multimodal References

Published: February 22, 2026 at 06:01 AM EST
2 min read
Source: Dev.to

Overview

Most AI video generators take a text prompt and produce whatever they feel like. Seedance 2.0 works differently — you upload images, videos, and audio files, then use @tags to tell the model exactly what each file should do.

Think of it like a film director’s shot list. Each uploaded file gets a role:

  • @Image1 as the first frame — pins the opening visual
  • @Video1 for camera‑movement reference — copies the cinematography
  • @Audio1 as background music — sets the soundtrack and rhythm

You can combine up to 12 files (9 images + 3 videos + 3 audio) in a single generation. The format is simple: @ + asset type + number.
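Since a prompt is just text with `@` + asset type + number clauses woven in, you can assemble it programmatically. A minimal sketch (the `build_prompt` helper and the "as role" phrasing are illustrative, not an official API):

```python
# Sketch: assembling a Seedance 2.0 prompt from role assignments.
# build_prompt is a hypothetical helper, not part of any SDK.
def build_prompt(scene: str, roles: dict[str, str]) -> str:
    """Append one '@Tag as role' clause per uploaded asset."""
    clauses = [f"@{tag} as {role}" for tag, role in roles.items()]
    return f"{scene}, " + ", ".join(clauses) + "."

prompt = build_prompt(
    "A cinematic sunset over the ocean",
    {"Image1": "first frame", "Audio1": "background music"},
)
print(prompt)
# A cinematic sunset over the ocean, @Image1 as first frame, @Audio1 as background music.
```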

Tag Syntax

| Tag Type | Range | Example            |
|----------|-------|--------------------|
| @Image   | 1–9   | @Image1, @Image2   |
| @Video   | 1–3   | @Video1            |
| @Audio   | 1–3   | @Audio1            |
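Because the limits above are fixed (9 images, 3 videos, 3 audio), it's easy to catch an out-of-range tag before sending a request. A small sketch using a regex; the `valid_tags` helper is illustrative:

```python
import re

# Documented slot limits: 9 images, 3 videos, 3 audio.
LIMITS = {"Image": 9, "Video": 3, "Audio": 3}
TAG_RE = re.compile(r"@(Image|Video|Audio)(\d+)")

def valid_tags(prompt: str) -> bool:
    """Return True if every @tag in the prompt is within its allowed range."""
    for kind, num in TAG_RE.findall(prompt):
        if not 1 <= int(num) <= LIMITS[kind]:
            return False
    return True

print(valid_tags("@Image1 as first frame, @Audio1 as background music"))  # True
print(valid_tags("@Video4 for camera movement"))  # False: only 3 video slots
```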

Example API Call

import requests

# Submit a generation request. The @tags in the prompt bind the uploaded
# assets in image_urls and audio_urls to their roles.
response = requests.post(
    "https://api.evolink.ai/v1/videos/generations",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "seedance-2-0-t2v",
        "prompt": (
            "A cinematic sunset over the ocean, @Image1 as first frame, "
            "@Audio1 as background music. Slow dolly forward with warm golden light."
        ),
        "image_urls": ["https://example.com/sunset.jpg"],   # referenced as @Image1
        "audio_urls": ["https://example.com/ambient.mp3"],  # referenced as @Audio1
        "duration": 10,
        "quality": "1080p"
    },
)
response.raise_for_status()  # surface HTTP errors instead of failing silently
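A common mistake is referencing a tag with no matching upload. A pre-flight check is straightforward to sketch; `tags_match_uploads` is a hypothetical helper, and `video_urls` is an assumed field name by analogy with the `image_urls`/`audio_urls` fields in the example above:

```python
import re

# Sketch: verify every @Image/@Video/@Audio tag in the prompt has a
# corresponding entry in the uploaded-URL lists of the request payload.
# "video_urls" is an assumed field name, by analogy with image_urls/audio_urls.
def tags_match_uploads(payload: dict) -> bool:
    counts = {
        "Image": len(payload.get("image_urls", [])),
        "Video": len(payload.get("video_urls", [])),
        "Audio": len(payload.get("audio_urls", [])),
    }
    for kind, num in re.findall(r"@(Image|Video|Audio)(\d+)", payload["prompt"]):
        if int(num) > counts[kind]:
            return False  # tag points past the last uploaded asset of that type
    return True
```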

Comparison with Other AI Video APIs

  • Sora 2 – Text + single image input only, no audio reference.
  • Kling 3.0 – Image‑to‑video but no @tag assignment system.
  • Veo 3.1 – Text‑only prompts; generates its own audio.

Seedance 2.0’s @tag system lets you direct the generation rather than merely describe it.

Further Reading

Read the full @Tags guide on seedance2api.app

EvoLink provides unified AI API access — one key for all major AI models including Seedance 2.0, Sora 2, Veo 3.1, and more.
