One-command ComfyUI on Cloud GPUs: A Practical, Repeatable Setup

Published: November 30, 2025 at 04:59 PM EST
3 min read
Source: Dev.to

What we’re building

A repeatable way to boot a cloud GPU (RunPod or Vast.ai), paste a single command, grab the exact ComfyUI version you want, auto‑install your favorite custom nodes, and download models from Hugging Face / Civitai into the correct folders. No more “did I put that LoRA in the right place?” or “why is this template six months behind?”.

We’ll use a free script generator to produce the one‑liner and show you how to tweak, debug, and extend it for your workflow.

💡 Pro tip: Time is literally money on cloud GPUs. Automating the boring parts pays for itself on the first run.

How the generator works

The generator at https://deploy.promptingpixels.com/ outputs a one‑line shell command that:

  • Installs or updates ComfyUI to a specific version
  • Downloads your selected models into the correct ComfyUI subfolders
  • Installs custom nodes you choose from the ComfyUI registry
  • Adapts paths to your provider (RunPod / Vast.ai) or your local OS
  • Supports tokens for gated downloads

🧭 Heads up: The generated command is provider‑aware. Pick the right target before copying.

Step‑by‑step guide

1. Choose your provider

  • Vast.ai: Use an image/template that includes a Jupyter terminal or shell access.
  • RunPod: Use either the ComfyUI template or a general‑purpose image with CUDA.

2. Open a terminal on the instance

# Example: connect to your instance and open a shell.
# Host, port, and key path below are placeholders — copy the real SSH
# command from your provider's dashboard (or use the web terminal).
ssh -p <port> -i ~/.ssh/id_ed25519 root@<instance-ip>

3. Generate the command

  1. Visit https://deploy.promptingpixels.com/
  2. Choose App: ComfyUI
  3. Pick the provider (Vast.ai or RunPod)
  4. Add Models – search Hugging Face or Civitai; the generator will route each file to the correct ComfyUI directory.
  5. Add Custom Nodes – search popular nodes (e.g., Impact Pack) and add them.
  6. (Optional) Pin the ComfyUI version for reproducible builds.

💡 Pro tip: Use presets to recreate environments from previous projects. Consistency saves debugging time.

4. Prepare tokens (if needed)

# Optional: tokens for gated downloads
export HF_TOKEN=hf_your_read_token_here
export CIVITAI_TOKEN=your_civitai_token_here
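A quick sanity check before running the one-liner saves a failed download mid-script. The helper below is a small sketch (the `check_token` function is ours, not part of the generator); the commented `whoami-v2` call is Hugging Face's real API endpoint for verifying a token:

```shell
# Warn early if a token variable is empty
check_token() {
  eval "val=\"\${$1:-}\""
  if [ -z "$val" ]; then
    echo "WARN: $1 is not set; gated downloads from that source will fail"
  else
    echo "OK: $1 is set"
  fi
}
check_token HF_TOKEN
check_token CIVITAI_TOKEN

# Optionally confirm the HF token actually authenticates:
# curl -s -H "Authorization: Bearer $HF_TOKEN" https://huggingface.co/api/whoami-v2
```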

5. Run the generated one‑liner

The command usually looks like a wget/curl pipe into bash. Example placeholder:

bash <(curl -sSL https://example.com/generated_script.sh)
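Piping straight into bash is convenient but opaque. A safer variant of the same pattern fetches the script first so you can read it before executing (the URL is the placeholder from above, not a real endpoint; `fetch_and_run` is our illustrative wrapper):

```shell
SCRIPT_URL="https://example.com/generated_script.sh"   # swap in your generated URL

fetch_and_run() {
  curl -sSL "$SCRIPT_URL" -o setup.sh
  ${PAGER:-less} setup.sh     # read what you are about to execute
  bash setup.sh
}
# fetch_and_run   # uncomment once SCRIPT_URL points at your real script
```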

6. Install extra Python dependencies (if your workflow needs them)

source /venv/bin/activate 2>/dev/null || true
pip install xformers==0.0.23 safetensors==0.4.3

7. Verify GPU visibility

nvidia-smi || echo "No GPU found (driver/container mismatch?)"
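`nvidia-smi` proves the driver sees the card; it does not prove your Python stack can use it. The guarded snippet below also runs cleanly on machines without PyTorch (most ComfyUI images ship it, but that's an assumption):

```shell
python3 - <<'PY'
try:
    import torch
    # True means the installed PyTorch build can reach a CUDA device
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed in this environment")
PY
```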

8. Launch ComfyUI

# Adjust the path to your setup; --listen binds all interfaces so the
# provider's web proxy can reach the UI (8188 is ComfyUI's default port)
python /workspace/ComfyUI/main.py --listen 0.0.0.0 --port 8188

Debugging tips

  • 403 from Hugging Face: You probably need a token for that model/repo.
    export HF_TOKEN=hf_xxx
  • Slow model downloads: Instance egress may be limited. Test with smaller models first.
  • Not enough disk: Large checkpoints can exceed ephemeral storage. Use a larger volume or a persistent disk.
  • Node missing in the menu: Restart ComfyUI after node install/update.
  • CUDA mismatch errors: Ensure your image, driver, and PyTorch stack align. Templates help; bare images can drift.
  • Case‑sensitive paths: ComfyUI model folders are strict: checkpoints, loras, vae, etc.
  • Port blocked: Verify the provider exposes the ComfyUI port (often 8188) and the service is running.
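On the case-sensitive paths point: it is worth knowing the exact layout ComfyUI expects under `models/`. The folder names below are ComfyUI's standard ones; `COMFYUI_ROOT` defaults to a demo path here, so point it at your real install:

```shell
COMFYUI_ROOT="${COMFYUI_ROOT:-/tmp/ComfyUI-demo}"

# All lowercase, exactly as spelled — a LoRA in models/Loras will not load
for d in checkpoints loras vae controlnet upscale_models embeddings clip; do
  mkdir -p "$COMFYUI_ROOT/models/$d"
done
ls "$COMFYUI_ROOT/models"
```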

🛠️ Debug pattern: Tail logs while launching the UI to spot import errors.

ps aux | grep -i comfy
# or check the provider's app logs panel if available

  • Use tmux/screen for long downloads to avoid session drops.
  • Cache model folders on a persistent volume to avoid re‑downloading every session.
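The tmux pattern in practice looks like this. The session name `dl` and the `sleep` command are placeholders for a real download job:

```shell
if command -v tmux >/dev/null 2>&1; then
  tmux new-session -d -s dl 'sleep 1'   # start the job detached
  tmux ls                               # confirm the session exists
  # detach with Ctrl-b d, reattach later with: tmux attach -t dl
  tmux kill-session -t dl               # cleanup for this demo only
else
  echo "tmux not installed; install it with your package manager (or use screen)"
fi
```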

Workflow summary

  1. Launch GPU instance (RunPod/Vast.ai) and open a terminal.
  2. In the generator, pick provider + ComfyUI version.
  3. Add models (HF/Civitai) and custom nodes.
  4. Copy the one‑liner, set tokens if needed, paste into terminal.
  5. Launch ComfyUI; restart once to load new nodes.

Helpful environment variables

export HF_TOKEN=hf_xxx
export CIVITAI_TOKEN=xxx
export COMFYUI_ROOT=/workspace/ComfyUI   # adjust if your layout differs

Verify after install:

git -C "$COMFYUI_ROOT" rev-parse --short HEAD
ls "$COMFYUI_ROOT/models/checkpoints" | head
ls "$COMFYUI_ROOT/custom_nodes" | head

Feedback

If you have feature ideas or run into edge cases, the tool is maintained and open to feedback: deploy@promptingpixels.com.

Happy building!
