Migrating your FastAPI app step by step to serverless
Source: Dev.to
Overview

Serverless is good, serverless is cool, serverless is innovative. But how far can we push its uses? Let’s navigate some ups and downs around serverless technology.
When someone first hears about the serverless concept and thinks they understand it, what often comes to mind is:
- I can move all my applications to serverless and save cost?
- I can have managed services that are still very cheap?
- This means no more paying for idle servers?
- This removes the need for DevOps?
- I can scale automatically without any complex setup?
Hold the excitement (^_‑): those are assumptions, not answers. The right questions only appear once you understand the what, why, when, how, and who of serverless.

What is Serverless?
Execution Model
Your code runs:
- On demand
- In response to events (HTTP requests, messages, schedules, file uploads, etc.)
Executions are short‑lived (typically seconds to minutes). You don’t keep servers running; the platform spins up execution environments only when needed.
Cost Model
| You pay for | Great for | Not always cheaper for |
|---|---|---|
| Execution time | Spiky traffic | Constant high‑throughput workloads |
| Number of requests/events | Infrequent workloads | Long‑running processes |
| Resources consumed (memory, duration) | Event‑driven systems | Predictable, steady traffic |
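To make the cost model concrete, here is a back‑of‑the‑envelope comparison in Python. The prices and the server cost below are illustrative placeholders, not current AWS rates; check the official pricing pages before making a decision.

```python
# Back-of-the-envelope comparison: pay-per-use pricing vs. an always-on
# server. All prices below are illustrative placeholders, not real rates.

PRICE_PER_GB_SECOND = 0.0000166667   # assumed compute price per GB-second
PRICE_PER_MILLION_REQUESTS = 0.20    # assumed price per million requests
SERVER_MONTHLY_COST = 30.0           # assumed cost of a small always-on VM

def lambda_monthly_cost(requests_per_month, avg_duration_s, memory_gb):
    """Compute + request charges for a pay-per-use function."""
    gb_seconds = requests_per_month * avg_duration_s * memory_gb
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = requests_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# Spiky, infrequent workload: 100k requests/month, 200 ms each, 512 MB
spiky = lambda_monthly_cost(100_000, 0.2, 0.5)

# Constant high-throughput workload: 50M requests/month, 200 ms each, 512 MB
steady = lambda_monthly_cost(50_000_000, 0.2, 0.5)

print(f"spiky:  ${spiky:.2f}/month vs. server ${SERVER_MONTHLY_COST:.2f}")
print(f"steady: ${steady:.2f}/month vs. server ${SERVER_MONTHLY_COST:.2f}")
```

With these assumed numbers the spiky workload costs cents per month while the steady one costs more than the always‑on server — exactly the split the table above describes.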
Scaling Model
| What scaling is | What scaling is not |
|---|---|
| Automatic – the platform adds capacity as needed | You still need to understand concurrency limits |
| Horizontal – more instances can run in parallel | You still must design for back‑pressure and failures |
| Event‑driven – triggered by incoming events | Bad architecture can still cause expensive scaling problems |
Operations Reality
Serverless does not remove DevOps; it changes it.
| What you still need | Where the difference lies |
|---|---|
| Monitoring and logging | Less infrastructure ops |
| CI/CD pipelines | More platform‑focused pipelines |
| Security and IAM | More observability required |
| Cost controls | More architecture work |
| Incident response | More compatibility constraints to study |
The Right Questions Come After Understanding
Once the excitement fades, the real questions begin:
- What workloads fit serverless?
- Why should I use it instead of containers or VMs?
- When does it save cost and when does it not?
- How do I design, test, monitor, and debug at scale?
- Who owns reliability, security, and cost control?
Serverless is a powerful tool, not a universal solution.

Steps to Migrate
Step 0 – The Starting Point (FastAPI App)
```python
# app.py
from fastapi import FastAPI, status

app = FastAPI()

@app.get("/health", status_code=status.HTTP_200_OK)
def health():
    return {"status": "ok"}

@app.post("/items", status_code=status.HTTP_201_CREATED)
def create_item():
    return {"message": "item created"}

@app.put("/items/{item_id}", status_code=status.HTTP_200_OK)
def update_item(item_id: int):
    return {"message": f"item {item_id} updated"}
```

Run locally:

```shell
uvicorn app:app --reload
```
Step 1 – Understand the Serverless Mapping (Mental Model)
Before touching any code, map each piece of the local setup onto its serverless counterpart:
| Local (FastAPI + uvicorn) | Serverless (AWS) |
|---|---|
| uvicorn web server | API Gateway (HTTP API) |
| Long‑running ASGI process | Lambda function invoked per event |
| ASGI protocol | Mangum adapter translating events to ASGI |
| FastAPI routing | Single `ANY /{proxy+}` route; FastAPI routes internally |

Step 2 – Add Mangum (ASGI → Lambda Adapter)
Install dependencies:

```shell
pip install fastapi mangum
```

Update `app.py`:

```python
from fastapi import FastAPI, status
from mangum import Mangum

app = FastAPI()

@app.get("/health", status_code=status.HTTP_200_OK)
def health():
    return {"status": "ok"}

@app.post("/items", status_code=status.HTTP_201_CREATED)
def create_item():
    return {"message": "item created"}

@app.put("/items/{item_id}", status_code=status.HTTP_200_OK)
def update_item(item_id: int):
    return {"message": f"item {item_id} updated"}

# Lambda entry point
handler = Mangum(app)
```

`handler` is what AWS Lambda will invoke.
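You can sanity‑check the adapter without deploying by invoking `handler` with a hand‑built event. The dict below is a trimmed sketch of an API Gateway HTTP API (payload version 2.0) event — treat the exact field set as an assumption and compare it against a real captured event before relying on it:

```python
# A trimmed, hand-built API Gateway HTTP API (v2.0) event for local smoke
# tests. Field names follow the v2.0 payload format; verify against a real
# captured event before relying on this exact shape.
event = {
    "version": "2.0",
    "routeKey": "ANY /{proxy+}",
    "rawPath": "/health",
    "rawQueryString": "",
    "headers": {"host": "example.com"},
    "requestContext": {
        "http": {
            "method": "GET",
            "path": "/health",
            "protocol": "HTTP/1.1",
            "sourceIp": "127.0.0.1",
        },
        "stage": "$default",
    },
    "isBase64Encoded": False,
}

# With mangum installed, calling `handler(event, None)` should return a
# response dict containing keys such as "statusCode" and "body".
```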
Step 3 – Create requirements.txt
```
fastapi
mangum
```
Step 4 – Package the Application for Lambda
Create a build directory and install the dependencies into it:

```shell
mkdir package
pip install -r requirements.txt -t package
cp app.py package/
```

Create a zip archive:

```shell
cd package
zip -r app.zip .
```
You can now upload app.zip to AWS Lambda (or use your preferred IaC tool) and configure an API Gateway trigger to expose the FastAPI endpoints.
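One common pitfall: if any dependency ships compiled wheels and you build the package on macOS or Windows, those binaries won’t run on Lambda’s Linux runtime. pip can be told to fetch Linux wheels instead; the flags below exist in recent pip versions, but treat the exact platform tag as an assumption to verify against your Lambda architecture.

```shell
# Fetch Linux-compatible wheels regardless of the build machine's OS.
# The manylinux2014_x86_64 tag targets Lambda's x86_64 runtime.
pip install -r requirements.txt -t package \
    --platform manylinux2014_x86_64 \
    --only-binary=:all:
```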
Step 5 – Create the Lambda Function
- Navigate to AWS Lambda.
- Create a function with the following settings:
  - Runtime: Python 3.10+
  - Architecture: x86_64
  - Upload: `app.zip`
  - Handler: `app.handler`
  - Memory: 512 MB (a good default)
  - Timeout: 10–15 seconds
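If you prefer the CLI over the console, the same function can be created with `aws lambda create-function`. The function name is arbitrary, and the role ARN below is a placeholder — substitute a real Lambda execution role:

```shell
# The role ARN is a placeholder; replace it with your own execution role.
aws lambda create-function \
    --function-name fastapi-serverless-demo \
    --runtime python3.12 \
    --architectures x86_64 \
    --handler app.handler \
    --memory-size 512 \
    --timeout 15 \
    --zip-file fileb://app.zip \
    --role arn:aws:iam::123456789012:role/your-lambda-role
```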
Step 6 – Create API Gateway (HTTP API)
- Go to API Gateway → Create HTTP API.
- Integration:
  - Type: Lambda
  - Choose your Lambda function.
- Routes:
  - `ANY /{proxy+}`
- Deploy the API.
This single route lets FastAPI handle all routing internally.
Step 7 – Test Your Endpoint
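Once the API is deployed, API Gateway gives you an invoke URL. Substitute yours for the hypothetical URL below and hit each route with curl; the expected responses follow from the app’s route decorators:

```shell
# Replace the placeholder with your HTTP API's invoke URL.
BASE_URL="https://abc123.execute-api.us-east-1.amazonaws.com"

curl -i "$BASE_URL/health"           # expect 200 and {"status":"ok"}
curl -i -X POST "$BASE_URL/items"    # expect 201 and {"message":"item created"}
curl -i -X PUT "$BASE_URL/items/1"   # expect 200 and {"message":"item 1 updated"}
```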
It’s a wrap!
