How I turned Google Sheets into a Blazing-Fast REST API using AWS Serverless
Source: Dev.to
Introduction
We’ve all been there: you need a quick backend for a prototype, an internal tool, or a NoCode project. Setting up PostgreSQL or MongoDB feels like overkill, so you think, “I’ll just use Google Sheets as a database!”
It sounds great until you hit the reality of the Google Sheets API:
- It’s slow (often 1–2 seconds per request).
- Rate limits are strict (around 60 requests per user per minute).
- No native query language (filtering data is a pain).
I wanted the simplicity of Google Sheets but the performance and security of a real backend. So, I built SheetToJSON, a serverless wrapper that solves these exact issues.
The Architecture Stack
To keep costs low and performance high, I went 100% serverless:
- Routing: AWS HTTP API Gateway
- Compute: AWS Lambda (Node.js ARM64)
- Caching: Amazon S3
- Security & Auth: Amazon DynamoDB
- Monetization & Gateway: RapidAPI
Challenge 1: Beating the Rate Limits and Latency
The biggest bottleneck of the Google Sheets API is the response time. To fix this, I implemented an aggressive caching strategy using Amazon S3.
When a user makes a GET request (e.g., fetching a list of products), the Lambda function first checks an S3 bucket for a cached .json version of that sheet.
- Cache Hit: Data is returned in milliseconds.
- Cache Miss: Lambda fetches data from Google Sheets, saves the raw JSON to S3 (with a 5‑minute TTL), and returns the response.
To keep data fresh, I added write‑through caching: any POST, PATCH, or DELETE request updates Google Sheets and immediately rewrites the S3 cache. The result is lightning‑fast GET requests without sacrificing freshness.
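The read‑through and write‑through flow above can be sketched in plain Node.js. This is a minimal illustration, not the service's real code: a `Map` stands in for the S3 bucket, and `fetchFromSheets` is a hypothetical placeholder for the Google Sheets API call — in production the same logic would wrap S3 `GetObject`/`PutObject`.

```javascript
// 5-minute TTL, matching the caching strategy described above.
const CACHE_TTL_MS = 5 * 60 * 1000;

// Stand-in for the S3 bucket: key -> { body, storedAt }.
const cache = new Map();

// Hypothetical placeholder for the real Google Sheets API fetch.
async function fetchFromSheets(sheetKey) {
  return [{ id: 1, name: 'Widget', price: 60 }];
}

// Read-through: serve from cache when fresh, refill on a miss.
async function cachedGet(sheetKey, now = Date.now()) {
  const entry = cache.get(sheetKey);
  if (entry && now - entry.storedAt < CACHE_TTL_MS) {
    return { source: 'cache', data: entry.body }; // cache hit
  }
  const data = await fetchFromSheets(sheetKey);   // cache miss
  cache.set(sheetKey, { body: data, storedAt: now });
  return { source: 'sheets', data };
}

// Write-through: mutations update the origin first, then overwrite
// the cached copy so the next GET never serves stale data.
async function writeThrough(sheetKey, newData) {
  // ...update Google Sheets here first...
  cache.set(sheetKey, { body: newData, storedAt: Date.now() });
}
```

The key design choice is that the TTL only matters for reads that race a mutation from outside the API; every write through the API itself refreshes the cache immediately.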
Challenge 2: Smart Querying (No Database Engine)
Google Sheets doesn’t have a SQL engine. If a user wants to find all products where the price is greater than $50, they usually have to download the whole sheet and filter it on the client side.
I built a custom query engine directly in the Lambda function. When reading from the S3 cache, the API intercepts query parameters and processes them in‑memory:
```javascript
// Example: GET /v1/SPREADSHEET_ID/Products?price=gt:50&sort=price:desc
// The API engine automatically parses:
//   "gt:" (greater than)
//   "lt:" (less than)
//   "lk:" (like/contains)
// before returning the paginated payload.
```
This turns a dumb spreadsheet into a queryable NoSQL‑like database.
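A minimal sketch of that in‑memory engine, assuming the cached sheet has already been parsed into an array of row objects (the function name and equality fallback are illustrative, not the API's actual internals):

```javascript
// Apply gt:/lt:/lk: filters, an optional "field:dir" sort, and an
// optional limit to rows already loaded from the S3 cache.
function applyQuery(rows, params) {
  const { sort, limit, ...filters } = params;
  let out = rows.filter((row) =>
    Object.entries(filters).every(([field, expr]) => {
      const value = row[field];
      if (expr.startsWith('gt:')) return Number(value) > Number(expr.slice(3));
      if (expr.startsWith('lt:')) return Number(value) < Number(expr.slice(3));
      if (expr.startsWith('lk:')) {
        return String(value).toLowerCase().includes(expr.slice(3).toLowerCase());
      }
      return String(value) === expr; // plain equality fallback
    })
  );
  if (sort) {
    const [field, dir] = sort.split(':'); // e.g. "price:desc"
    out = [...out].sort((a, b) => (a[field] - b[field]) * (dir === 'desc' ? -1 : 1));
  }
  return limit ? out.slice(0, Number(limit)) : out;
}
```

Because the rows come from the S3 cache rather than Google Sheets, this filtering costs only Lambda CPU time and never touches the rate‑limited Sheets API.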
Challenge 3: Multi‑Tenant Security (Preventing IDOR)
If you are building a SaaS, security is your #1 priority. Because the API uses a single Google Service Account to read sheets, what prevents User A from guessing User B’s Spreadsheet ID and reading their data?
To prevent this Insecure Direct Object Reference (IDOR), I added a mandatory /register step backed by DynamoDB.
- A user registers their sheet; the API links their `X-RapidAPI-UserID` to the `spreadsheetId` in DynamoDB.
- Every CRUD request performs a fast (~15 ms) DynamoDB lookup.
- If the requesting user ID doesn't match the owner ID of the sheet, the API returns 403 Forbidden.
Zero data leakage.
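The ownership check behind the /register step boils down to a few lines. In this sketch a `Map` stands in for the DynamoDB table keyed by `spreadsheetId`; the function names and 404 behavior for unregistered sheets are illustrative assumptions, not the service's documented schema.

```javascript
// Stand-in for the DynamoDB table: spreadsheetId -> owning RapidAPI user ID.
const owners = new Map();

// The /register step: bind the caller's user ID to the sheet.
function register(userId, spreadsheetId) {
  owners.set(spreadsheetId, userId);
}

// Runs before every CRUD request (the real service does this via a
// ~15 ms DynamoDB lookup on the X-RapidAPI-UserID header).
function authorize(userId, spreadsheetId) {
  const owner = owners.get(spreadsheetId);
  if (!owner) return { status: 404 };           // sheet never registered
  if (owner !== userId) return { status: 403 }; // IDOR attempt blocked
  return { status: 200 };                       // verified owner
}
```

Note that guessing a valid `spreadsheetId` gains an attacker nothing: without a matching ownership record, every request short‑circuits before the Sheets API is ever called.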
The Result: Developer Experience First
I wanted the integration to be as seamless as possible, so I published zero‑dependency, official SDKs for both Node.js and PHP.
Node.js example (fully typed with TypeScript):

```typescript
import { SheetToJSON } from 'sheettojson-api';

const db = new SheetToJSON('YOUR_RAPIDAPI_KEY');

const response = await db.get('SPREADSHEET_ID', 'Products', {
  limit: 10,
  price: 'gt:50'
});

console.log(response.data);
```
Try it out!
Building this was an amazing journey into AWS Serverless architecture. If you are building a NoCode tool, a quick prototype, or need an instant backend, give it a spin.
I’m currently looking for feedback on the caching system and the query engine. Let me know what you think in the comments! 👇