One-command ComfyUI on Cloud GPUs: A Practical, Repeatable Setup
Source: Dev.to
What we’re building
A repeatable way to boot a cloud GPU (RunPod or Vast.ai), paste a single command, grab the exact ComfyUI version you want, auto‑install your favorite custom nodes, and download models from Hugging Face / Civitai into the correct folders. No more “did I put that LoRA in the right place?” or “why is this template six months behind?”.
We’ll use a free script generator to produce the one‑liner and show you how to tweak, debug, and extend it for your workflow.
💡 Pro tip: Time is literally money on cloud GPUs. Automating the boring parts pays for itself on the first run.
How the generator works
The generator at https://deploy.promptingpixels.com/ outputs a one‑line shell command that:
- Installs or updates ComfyUI to a specific version
- Downloads your selected models into the correct ComfyUI subfolders
- Installs custom nodes you choose from the ComfyUI registry
- Adapts paths to your provider (RunPod / Vast.ai) or your local OS
- Supports tokens for gated downloads
🧭 Heads up: The generated command is provider‑aware. Pick the right target before copying.
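To make the list above concrete, here is a rough sketch of the shape such a generated script takes. This is illustrative only, not the tool's actual output: the directory names follow ComfyUI's standard layout, and the pin/download steps are commented placeholders.

```shell
#!/usr/bin/env bash
# Sketch of a generated setup script -- illustrative, not the tool's real output.
set -euo pipefail

COMFYUI_ROOT="${COMFYUI_ROOT:-./ComfyUI}"   # e.g. /workspace/ComfyUI on RunPod

# 1. Ensure the case-sensitive model folders ComfyUI expects exist.
mkdir -p "$COMFYUI_ROOT/models/checkpoints" \
         "$COMFYUI_ROOT/models/loras" \
         "$COMFYUI_ROOT/models/vae" \
         "$COMFYUI_ROOT/custom_nodes"

# 2. Pin ComfyUI to a specific ref for reproducible builds:
#    git clone https://github.com/comfyanonymous/ComfyUI "$COMFYUI_ROOT"
#    git -C "$COMFYUI_ROOT" checkout <pinned-tag-or-commit>

# 3. Route each model file to its subfolder, passing tokens for gated downloads:
#    curl -L -H "Authorization: Bearer $HF_TOKEN" \
#         -o "$COMFYUI_ROOT/models/checkpoints/<file>.safetensors" "<model-url>"

echo "layout ready under $COMFYUI_ROOT"
```

The real generated command does all of this in one line; the sketch just shows why the generator asks for a provider, a version, and model/node selections up front.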
Step‑by‑step guide
1. Choose your provider
- Vast.ai: Use an image/template that includes a Jupyter terminal or shell access.
- RunPod: Use either the ComfyUI template or a general‑purpose image with CUDA.
2. Open a terminal on the instance
# Example: connect to your instance and open a shell
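How you reach a shell varies by provider. A hypothetical outline follows; the host, port, and key names are placeholders taken from your instance's connect panel, not real values:

```shell
# Vast.ai: open the Jupyter terminal from the web UI, or SSH in if enabled:
#   ssh -p <port> root@<ssh-host>            # values shown on the instance card
# RunPod: use the web terminal, or Connect -> SSH with your registered key:
#   ssh root@<pod-host> -p <port> -i ~/.ssh/<your-key>
# Either way, confirm you have an SSH client locally if you skip the web terminal:
command -v ssh >/dev/null && echo "ssh client found" || echo "ssh client missing"
```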
3. Generate the command
- Visit https://deploy.promptingpixels.com/
- Choose App: ComfyUI
- Pick the provider (Vast.ai or RunPod)
- Add Models – search Hugging Face or Civitai; the generator will route each file to the correct ComfyUI directory.
- Add Custom Nodes – search popular nodes (e.g., Impact Pack) and add them.
- (Optional) Pin the ComfyUI version for reproducible builds.
💡 Pro tip: Use presets to recreate environments from previous projects. Consistency saves debugging time.
4. Prepare tokens (if needed)
# Optional: tokens for gated downloads
export HF_TOKEN=hf_your_read_token_here
export CIVITAI_TOKEN=your_civitai_token_here
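Before pasting the one-liner, it is worth confirming the tokens are actually visible to the shell you will run it in. A small bash-specific check (the function name is mine, not part of the tool):

```shell
# Verify an exported token is non-empty (uses bash variable indirection).
check_token() {
  local name="$1"
  if [ -n "${!name:-}" ]; then
    echo "$name is set"
  else
    echo "WARN: $name is not set; gated downloads needing it will fail" >&2
    return 1
  fi
}
check_token HF_TOKEN || true
check_token CIVITAI_TOKEN || true
```

A missing token fails loudly here instead of as a cryptic 403 mid-download.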
5. Run the generated one‑liner
The command is typically a curl/wget pipe into bash. Example placeholder (if you prefer, save the script to a file and skim it before executing):
bash <(curl -sSL https://example.com/generated_script.sh)
6. Install extra Python dependencies (if your workflow needs them)
source /venv/bin/activate 2>/dev/null || true
pip install xformers==0.0.23 safetensors==0.4.3
7. Verify GPU visibility
nvidia-smi || echo "No GPU found (driver/container mismatch?)"
8. Launch ComfyUI
# Adjust the path and flags to your setup; --listen exposes the UI beyond localhost
python /workspace/ComfyUI/main.py --listen 0.0.0.0 --port 8188
Debugging tips
- 403 from Hugging Face: You probably need a token for that model/repo. Set export HF_TOKEN=hf_xxx and re-run.
- Slow model downloads: Instance egress may be limited. Test with smaller models first.
- Not enough disk: Large checkpoints can exceed ephemeral storage. Use a larger volume or a persistent disk.
- Node missing in the menu: Restart ComfyUI after node install/update.
- CUDA mismatch errors: Ensure your image, driver, and PyTorch stack align. Templates help; bare images can drift.
- Case‑sensitive paths: ComfyUI model folders are strict: checkpoints, loras, vae, etc.
- Port blocked: Verify the provider exposes the ComfyUI port (often 8188) and the service is running.
🛠️ Debug pattern: Tail the launch output (or provider logs) to spot import errors, and confirm the process is actually running:
ps aux | grep -i comfy
# or check the provider's app logs panel if available
- Use tmux/screen for long downloads to avoid session drops.
- Cache model folders on a persistent volume to avoid re-downloading every session.
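The case-sensitive layout mentioned above can be captured in a tiny helper so your own scripts never guess at paths. A sketch, assuming the standard ComfyUI folder tree; the function name is mine:

```shell
# Resolve the correct ComfyUI subfolder for a model type (names are case-sensitive).
model_dir() {
  local root="${COMFYUI_ROOT:-/workspace/ComfyUI}"
  case "$1" in
    checkpoint) echo "$root/models/checkpoints" ;;
    lora)       echo "$root/models/loras" ;;
    vae)        echo "$root/models/vae" ;;
    embedding)  echo "$root/models/embeddings" ;;
    controlnet) echo "$root/models/controlnet" ;;
    *)          echo "unknown model type: $1" >&2; return 1 ;;
  esac
}
model_dir lora    # -> /workspace/ComfyUI/models/loras (with the default root)
```

Routing downloads through a helper like this is essentially what the generator does for you; having a local copy helps when you add files by hand later.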
Workflow summary
- Launch GPU instance (RunPod/Vast.ai) and open a terminal.
- In the generator, pick provider + ComfyUI version.
- Add models (HF/Civitai) and custom nodes.
- Copy the one‑liner, set tokens if needed, paste into terminal.
- Launch ComfyUI; restart once to load new nodes.
Helpful environment variables
export HF_TOKEN=hf_xxx
export CIVITAI_TOKEN=xxx
export COMFYUI_ROOT=/workspace/ComfyUI # adjust if your layout differs
Verify after install:
git -C "$COMFYUI_ROOT" rev-parse --short HEAD
ls "$COMFYUI_ROOT/models/checkpoints" | head
ls "$COMFYUI_ROOT/custom_nodes" | head
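Those three checks can be bundled into one tolerant helper (the function name is mine) that reports what it finds instead of aborting on a missing folder:

```shell
# Post-install sanity check; tolerant of missing pieces so it always reports.
verify_comfy() {
  local root="${COMFYUI_ROOT:-/workspace/ComfyUI}"
  [ -d "$root" ] || { echo "missing: $root" >&2; return 1; }
  echo "version: $(git -C "$root" rev-parse --short HEAD 2>/dev/null || echo 'not a git checkout')"
  echo "checkpoints: $(ls "$root/models/checkpoints" 2>/dev/null | wc -l | tr -d ' ') file(s)"
  echo "nodes: $(ls "$root/custom_nodes" 2>/dev/null | wc -l | tr -d ' ') installed"
}
verify_comfy || true
```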
Feedback
If you have feature ideas or run into edge cases, the tool is maintained and open to feedback: deploy@promptingpixels.com.
Happy building!