Multiple Deployments, One Config File

Published: March 5, 2026 at 06:00 AM EST
5 min read
Source: Dev.to

Valentina

If you’re building with AI agents, you probably don’t have just one. Say you’re building a lead‑aggregation pipeline. You’ve got one agent that scrapes company websites, another that pulls leads from LinkedIn, and a third that mines Reddit and community forums. They all share the same data models and scoring logic, they all run on a schedule, and they all live in the same repo. But each one deploys independently, so each one needs its own crewship.toml and its own deploy commands, which adds up fast.

It works, but it’s clunky. You end up duplicating build settings, keeping exclude lists in sync, and jumping between directories every time you deploy.

We kept hearing this from teams building multi‑agent systems, and honestly ran into it ourselves. So we fixed it.

One file, multiple deployments

You can now define multiple deployments in a single crewship.toml. Instead of one [deployment] section, use named [deployments.<name>] sections:

[build]
exclude = ["tests", "notebooks"]

[deployments.web-scraper]
framework = "crewai"
entrypoint = "leads.web_scraper.crew:WebScraperCrew"
profile = "browser"
python = "3.11"

[deployments.linkedin-miner]
framework = "crewai"
entrypoint = "leads.linkedin.crew:LinkedInCrew"

[deployments.reddit-miner]
framework = "crewai"
entrypoint = "leads.reddit.crew:RedditCrew"

Each named section becomes its own deployment on Crewship, with the name used as the project name. The [build] config is shared across all of them, so you only declare your exclude list once.

That’s it. No wrapper scripts, no monorepo tooling, no separate directories. Three lead miners, one file.

Deploying and targeting

Every CLI command now takes a --name (or -n) flag to target a specific deployment:

crewship deploy --name web-scraper
crewship deploy --name linkedin-miner
crewship deploy --name reddit-miner

The same flag works for environment variables, invocations, and schedules. For a lead pipeline where every source runs on its own schedule, you’d do:

crewship env set --name linkedin-miner LINKEDIN_API_KEY=...
crewship env set --name reddit-miner REDDIT_CLIENT_ID=... REDDIT_CLIENT_SECRET=...

crewship schedule create --name web-scraper "Scrape targets" --cron "0 */6 * * *"
crewship schedule create --name linkedin-miner "LinkedIn sync" --cron "0 8 * * 1-5"
crewship schedule create --name reddit-miner "Reddit sweep" --cron "0 9 * * *"

If you skip --name and there’s only one deployment in the file, it’s picked automatically. If there are multiple, the CLI prompts you to choose. In CI where there’s no TTY, it errors and tells you to pass --name explicitly, so you don’t accidentally deploy the wrong thing.

Deployment IDs are tracked per deployment

After the first deploy, Crewship saves the deployment_id back into the config for each deployment:

[deployments.web-scraper]
framework = "crewai"
entrypoint = "leads.web_scraper.crew:WebScraperCrew"
deployment_id = "dep_abc123"   # auto‑populated after first deploy

[deployments.linkedin-miner]
framework = "crewai"
entrypoint = "leads.linkedin.crew:LinkedInCrew"
deployment_id = "dep_def456"   # auto‑populated after first deploy

This means subsequent deploys know exactly which deployment to update without you having to track IDs manually. Commit the file to version control and your whole team stays in sync.

Nothing breaks

If you already have a crewship.toml with a single [deployment] section, nothing changes—the old format works exactly as before. The new multi‑deployment format is opt‑in, and crewship init still generates the single‑deployment config by default.

The two formats are mutually exclusive. If you accidentally mix [deployment] and [deployments.*] in the same file, the CLI catches it and tells you what to do.
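For instance, a file like this would be rejected outright rather than one format silently winning (an illustrative sketch of the mixed case):

```toml
# Invalid: the legacy single-deployment section and the new named
# sections can't coexist in one crewship.toml.
[deployment]
framework = "crewai"
entrypoint = "leads.web_scraper.crew:WebScraperCrew"

[deployments.linkedin-miner]
framework = "crewai"
entrypoint = "leads.linkedin.crew:LinkedInCrew"
```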

When this matters

The lead‑aggregator setup is a good example, but the pattern applies anywhere you have agents that share code but deploy separately. A few natural use cases:

  • Monorepo without the mess – agents share scoring logic, data models, and utilities. With multi‑deployment they stay in one repo and one config file instead of being split across separate projects that drift out of sync.
  • Independent schedules – each source runs on its own cadence (e.g., web scraper every 6 hours, LinkedIn on weekday mornings, Reddit once a day). Use crewship schedule create --name to set them up independently.
  • Gradual rollout – deploy one miner at a time, verify it works, then deploy the next. Each deployment has its own version history and rollback path.
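A gradual rollout is just the same deploy command run one target at a time. A sketch using only the commands shown above (the verification step between deploys is whatever check fits your pipeline):

```shell
crewship deploy --name web-scraper
# ...verify scraped leads look right before moving on...

crewship deploy --name linkedin-miner
# ...verify...

crewship deploy --name reddit-miner
```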

Getting started

If you’re starting from scratch, crewship init sets up a single‑deployment config. When you’re ready to add more agents, edit the file to use the named format:

# Before
[deployment]
framework = "crewai"
entrypoint = "leads.web_scraper.crew:WebScraperCrew"

# After
[deployments.web-scraper]
framework = "crewai"
entrypoint = "leads.web_scraper.crew:WebScraperCrew"

[deployments.linkedin-miner]
framework = "crewai"
entrypoint = "leads.linkedin.crew:LinkedInCrew"

[deployments.reddit-miner]
framework = "crewai"
entrypoint = "leads.reddit.crew:RedditCrew"

Now you have one tidy crewship.toml that powers all of your agents. 🎉

Deploy them, set their env vars, invoke them. Everything else works the same.

Full details are in the configuration docs. If you run into anything or have feedback, reach out — we’d like to hear how you’re using it.
