Managing Python Monorepos with uv Workspaces and AWS Lambda
Source: Dev.to
UV Workspaces – A Quick Overview
UV workspaces are a super‑tool for developing interconnected Python packages, especially in mono‑repo setups.
If you have a pyproject.toml at the root of the repo and run uv init inside a sub‑folder, UV will:
- Create a single virtual environment that contains the dependencies of all projects. Nice for IDEs – you don’t need to keep switching venvs. (⚠️ Important caveat: see below.)
- Reference a local project by adding it under [tool.uv.sources]. UV will install it as an editable package, so you don’t need to rebuild/re‑install after each change.

[tool.uv.sources]
common_logging = { workspace = true }
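For context, a workspace is declared in the root pyproject.toml, which lists the sub‑folders that are members (the member globs below are illustrative):

```toml
# Root pyproject.toml – member paths are illustrative
[tool.uv.workspace]
members = ["services/*", "services/common/*"]
```

Any pyproject.toml matched by these globs becomes a workspace member and can be referenced via `{ workspace = true }` as shown above.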
Caveats
| Caveat | Description |
|---|---|
| Conflicting dependencies | If any workspace members require incompatible versions, uv sync will fail. For micro‑services it’s usually best to keep dependencies on compatible versions. |
| IDE false‑positives | Because there is only one venv, the IDE may suggest imports that aren’t actually declared in a given service’s dependencies. This can go unnoticed during local development. In such cases you might prefer a path dependency instead of a workspace entry: |
[tool.uv.sources]
common_logging = { path = "../common/logging", editable = true }
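To make the IDE false‑positive concrete: a module may be importable in the shared workspace venv because *another* member declares it, yet be absent from the service’s own production image. A small sketch of how to check importability in the current environment (the package names are just examples):

```python
import importlib.util

def can_import(name: str) -> bool:
    """Return True if `name` resolves to an importable module here."""
    return importlib.util.find_spec(name) is not None

# Stdlib modules are importable everywhere; an undeclared third-party
# package may resolve in the shared venv but not in production.
print(can_import("json"))                     # True in any environment
print(can_import("some_undeclared_package"))  # False when not installed
```

Running a check like this inside the built container is one way to catch undeclared imports before deployment.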
Using UV Workspaces in CI / Production
I keep using UV workspaces, but we have a CI check that catches the “conflict” problem before code reaches production.
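Such a check can be as simple as asserting that the whole workspace still resolves to a consistent lockfile. A minimal sketch in GitHub Actions syntax (job name and action versions are assumptions, not from the source):

```yaml
# Hypothetical CI job: fail fast if workspace dependencies conflict
# or uv.lock is out of date.
jobs:
  lock-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      # `uv lock --check` fails when the lockfile is stale or the
      # workspace's requirements cannot be resolved together.
      - run: uv lock --check
```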
The examples below assume AWS Lambda container images, which explains paths like /var/task and /var/lang/lib. The same approach works for non‑Lambda containers; you’d just adjust the base image and paths.
Two Key Requirements
- Selective installation – not every workspace dependency should be installed for every micro‑service.
- Layer caching – core (shared) dependencies should live in a separate Docker layer from service‑specific (local) dependencies. This matters for Lambda images because image size and layer caching directly affect cold‑start performance.
Docker Build – Step‑by‑Step
Below is a cleaned‑up, reproducible Dockerfile that:
- Copies only the necessary pyproject.toml and uv.lock files.
- Installs core dependencies (workspace‑wide) with --package and --no-install-local.
- Installs local dependencies based on the workspace graph.
- Copies the final service source code.
# -------------------------------------------------
# Build Stage – install dependencies with UV
# -------------------------------------------------
FROM public.ecr.aws/lambda/python:3.14-arm64 AS builder
# Install UV (binary from the official image)
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
ARG SERVICE_NAME
# Working directory for the build
WORKDIR /build
# -------------------------------------------------
# 1️⃣ Copy dependency definition files
# -------------------------------------------------
COPY pyproject.toml uv.lock /build/
COPY services/${SERVICE_NAME}/pyproject.toml /build/services/${SERVICE_NAME}/pyproject.toml
# -------------------------------------------------
# 2️⃣ Install **core** (shared) dependencies
# -------------------------------------------------
# --no-install-local → ignore other workspace members
# --no-dev → skip dev dependencies
# --package ${SERVICE_NAME} → install only the package we care about
RUN --mount=type=cache,target=/root/.cache/uv \
uv sync --frozen --no-install-local --no-dev --package ${SERVICE_NAME}
# -------------------------------------------------
# 3️⃣ Install **common** (local) dependencies
# -------------------------------------------------
# Export the exact requirements for the service, filter only the
# `services/common` entries, then install them with pip into a clean target.
RUN uv export --package ${SERVICE_NAME} \
    --no-editable --no-dev --frozen --format requirements-txt \
    | grep '^\./services/common' > common.txt
RUN --mount=type=bind,source=services/common,target=/build/services/common \
uv pip install --no-deps -r common.txt --target /build/common
# -------------------------------------------------
# 4️⃣ Copy the application code (service source)
# -------------------------------------------------
COPY services/${SERVICE_NAME}/src /build/app
Final Runtime Image
The runtime image contains only what the service needs: the core dependencies, the common (local) dependencies, and the service code. No UV binary, no build‑time caches, and no duplicate source copies.
# -------------------------------------------------
# Runtime Stage – minimal image
# -------------------------------------------------
FROM public.ecr.aws/lambda/python:3.14-arm64 AS runtime
ARG SERVICE_NAME
# Copy the core (shared) dependencies from the builder's venv into
# the runtime's Python library path
COPY --from=builder /build/.venv/lib /var/lang/lib
# Copy the common (local) dependencies and the service code
COPY --from=builder /build/common /var/task/
COPY --from=builder /build/app /var/task/
# The Lambda base image resolves the handler relative to /var/task;
# point CMD at the service's handler function
CMD [ "ingestion_worker.main.handler" ]
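With both stages in place, each service image is built by passing its name as a build argument. A hypothetical Makefile target from the monorepo root (service name and tag are illustrative):

```makefile
# Build one service image; SERVICE_NAME selects the workspace member
build-ingestion:
	docker build \
	  --build-arg SERVICE_NAME=ingestion_worker \
	  -t ingestion_worker:latest \
	  .
```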
Why This Two‑Stage Approach?
- Clean final image – only the runtime dependencies remain; build‑time artefacts (UV, caches, duplicate source trees) are left behind.
- Cache‑friendly layers – core dependencies change rarely, so Docker can reuse that layer across builds. Local/common dependencies are installed in a separate layer, and the service code is the last layer, ensuring fast rebuilds when only code changes.
- UV for resolution, pip for installation – UV still does the heavy lifting of dependency resolution, while pip install --target … creates a simple, self‑contained site‑packages directory that the Lambda runtime can use directly.
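The reason the --target layout needs no venv: the Lambda runtime simply puts /var/task on sys.path, and Python imports from any directory listed there. A small demonstration of that mechanism (module name and message are made up):

```python
import sys
import tempfile
from pathlib import Path

# Build a throwaway "target" directory containing one module,
# mimicking the flat layout that `pip install --target` produces.
target = Path(tempfile.mkdtemp())
(target / "greeting.py").write_text("MESSAGE = 'hello from target dir'\n")

# Putting the directory on sys.path is all a runtime needs --
# Lambda does exactly this with /var/task.
sys.path.insert(0, str(target))
import greeting

print(greeting.MESSAGE)
```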
TL;DR
- UV workspaces give you a single venv for the whole repo, which is great for IDE ergonomics.
- Be aware of conflicting dependencies and IDE false‑positives; use path dependencies if needed.
- For Docker/Lambda builds, separate core and local dependency installation into distinct layers, then copy only the needed files into a minimal runtime image.
- The pattern above keeps images small, cache‑efficient, and production‑ready while still leveraging UV’s powerful dependency resolution.