Next.js Performance When You Have 200,000 Database Rows

Published: February 9, 2026 at 02:24 PM EST
6 min read
Source: Dev.to

Most Next.js tutorials show you how to build a blog with 10 posts. Real‑world apps have hundreds of thousands of records. Here’s what actually matters when your database isn’t tiny.

The Problem

I recently worked on a marketplace with over 200,000 product listings. The standard patterns from tutorials and demos fall apart quickly at that scale, so most of what follows is what we figured out along the way to keep things responsive.

Database Queries Matter More Than React

This sounds obvious but I see it ignored constantly: your database is the bottleneck, not React.

If your Postgres query takes 3 seconds, no amount of React optimization will help. Fix the query first.

Indexes Are Not Optional

Every column you filter or sort by needs an index. Period.

// schema.prisma – add index for search
model Product {
  id   Int    @id @default(autoincrement())
  name String

  @@index([name])
}

For text search we used Prisma’s filtering with PostgreSQL GIN trigram indexes underneath. The index lives in a migration, and Prisma handles the query layer.

  • Without the index: searching 200 k rows by name ≈ 4 seconds.
  • With the index: ≈ 45 ms.

Don’t use contains on unindexed columns unless you enjoy watching progress spinners.
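
To make the trigram speedup less magical: pg_trgm works by breaking strings into overlapping three-character chunks and comparing the sets, which is what the GIN index accelerates. A rough TypeScript sketch of the idea (simplified; pg_trgm's actual padding and word-splitting rules differ in the details):

```typescript
// Break a string into 3-character "trigrams", roughly as pg_trgm does.
// pg_trgm pads with spaces so prefixes/suffixes also produce trigrams.
function trigrams(s: string): Set<string> {
  const padded = `  ${s.toLowerCase()} `;
  const grams = new Set<string>();
  for (let i = 0; i <= padded.length - 3; i++) {
    grams.add(padded.slice(i, i + 3));
  }
  return grams;
}

// Set-overlap similarity, in the spirit of pg_trgm's similarity() function.
function similarity(a: string, b: string): number {
  const ta = trigrams(a);
  const tb = trigrams(b);
  let shared = 0;
  for (const g of ta) if (tb.has(g)) shared++;
  const union = ta.size + tb.size - shared;
  return union === 0 ? 0 : shared / union;
}
```

Because the index stores which rows contain which trigrams, a `contains` search only has to look at rows sharing trigrams with the query instead of scanning all 200k names.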

Pagination, Not Infinite Scroll (Usually)

Infinite scroll is trendy—but it’s also a trap. Every time the user scrolls you fetch more data, keep it in memory, and re‑render the list. After ~500 items the browser slows down and memory usage explodes.

Use cursor‑based pagination instead:

// Get 20 products after this cursor
const products = await db.product.findMany({
  take: 20,
  skip: 1, // skip the cursor row itself so it isn't returned twice
  cursor: { id: lastProductId },
  orderBy: { createdAt: 'desc' },
});

The user gets 20 items at a time, can paginate forward and backward, and the browser doesn't die from holding 10,000 DOM nodes.
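
If the `take`/`skip`/`cursor` combination looks opaque, here is the same logic as a plain in-memory function. This is a sketch for intuition only, not how Prisma executes it (the database walks an index instead of sorting in memory):

```typescript
type Product = { id: number; createdAt: number };

// Mimics Prisma's cursor semantics: position at the cursor row,
// skip it (skip: 1), then take the next page.
function pageAfter(rows: Product[], lastId: number | null, take = 20): Product[] {
  // orderBy: { createdAt: 'desc' }
  const sorted = [...rows].sort((a, b) => b.createdAt - a.createdAt);
  // An unknown or null cursor falls back to the first page.
  const start = lastId === null ? 0 : sorted.findIndex(r => r.id === lastId) + 1;
  return sorted.slice(start, start + take);
}
```

The client only ever holds one page; the cursor (the last `id` it saw) is all the state needed to fetch the next one.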

Server Components Are Your Friend

The pattern that worked best for us was:

  • Server components for page layout, headings, metadata, filter labels, and any static content that doesn’t change per request.
  • Client component for the product grid, which changes based on search terms, filters, pagination, and sorting.
// app/products/page.tsx – Server Component
import { ProductBrowser } from '@/components/ProductBrowser';

export default function ProductsPage() {
  return (
    <main>
      <h2>Browse Cards</h2>
      <p>Over 200,000 cards from Pokémon, MTG, Yu‑Gi‑Oh and more.</p>

      {/* Static content above renders server‑side immediately */}

      <ProductBrowser /> {/* Client Component handles all dynamic stuff */}
    </main>
  );
}
// components/ProductBrowser.tsx – Client Component
'use client';
import { useState } from 'react';
import { useProducts } from '@/hooks/useProducts';
import { SearchBar } from '@/components/SearchBar';
import { FilterSidebar } from '@/components/FilterSidebar';
import { ProductGrid } from '@/components/ProductGrid';
import { Pagination } from '@/components/Pagination';

export function ProductBrowser() {
  const [filters, setFilters] = useState(defaultFilters);
  const { data, isLoading } = useProducts(filters);

  return (
    <div>
      <SearchBar onSearch={(q) => setFilters(f => ({ ...f, query: q }))} />
      <FilterSidebar filters={filters} onChange={setFilters} />
      <ProductGrid products={data} loading={isLoading} />
      <Pagination onPageChange={(cursor) => setFilters(f => ({ ...f, cursor }))} />
    </div>
  );
}

Users see the page structure and static content immediately while the product listings load in. The split also means the server‑component output can be cached aggressively (same for every visitor), and only the client component does per‑request work.

Avoid N+1 Queries

Classic mistake:

// Bad: N+1 query
const products = await db.product.findMany();

for (const product of products) {
  product.seller = await db.user.findUnique({
    where: { id: product.sellerId },
  });
}

You just made 1 query for products, then N queries for sellers. With 100 products that’s 101 round‑trips.

Better: use include (or a join):

// Good: 1 query
const products = await db.product.findMany({
  include: { seller: true },
});

With Prisma, include fetches all the sellers in one batched round‑trip instead of one query per product. Depending on your Prisma version it's a SQL join or a second batched query under the hood, but either way it's dramatically faster than the loop.
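
The same idea without an ORM: load the sellers once, build an O(1) lookup map, and attach them in a single pass. (The types and field names here are illustrative, not from the project's schema.)

```typescript
type User = { id: number; name: string };
type Product = { id: number; sellerId: number };

// One batched seller lookup replaces N individual queries:
// build a Map once, then every product resolves its seller in O(1).
function attachSellers(products: Product[], sellers: User[]) {
  const byId = new Map(sellers.map(s => [s.id, s] as const));
  return products.map(p => ({ ...p, seller: byId.get(p.sellerId) ?? null }));
}
```

In database terms, `sellers` is the result of one `WHERE id IN (...)` query over the distinct seller IDs, which is exactly what the batched strategy issues.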

Caching Strategy

For data that doesn’t change often, cache it. The project uses Redis for:

  • Search results – cache for 5 minutes
  • Seller profiles – cache for 1 hour
  • Category listings – cache for 1 day

import Redis from 'ioredis';
const redis = new Redis();

export async function getCachedProducts(category: string) {
  const cacheKey = `products:${category}`;
  const cached = await redis.get(cacheKey);

  if (cached) {
    return JSON.parse(cached);
  }

  const products = await db.product.findMany({
    where: { category },
    take: 20,
  });

  await redis.setex(cacheKey, 300, JSON.stringify(products)); // 5 min TTL
  return products;
}

This cuts database load by 80 %+ for repeat visitors.
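
The getCachedProducts function is one instance of the cache-aside pattern, and it generalizes to a small helper. In this sketch the store is anything with a Redis-like get/setex shape, which also makes it easy to test without a running Redis:

```typescript
// Minimal Redis-like interface (ioredis exposes get/setex with this shape).
type Store = {
  get(key: string): Promise<string | null>;
  setex(key: string, ttlSeconds: number, value: string): Promise<void>;
};

// Cache-aside: return the cached value if present, otherwise run the
// loader, store the result with a TTL, and return it.
async function cached<T>(
  store: Store,
  key: string,
  ttlSeconds: number,
  load: () => Promise<T>,
): Promise<T> {
  const hit = await store.get(key);
  if (hit !== null) return JSON.parse(hit) as T;

  const value = await load();
  await store.setex(key, ttlSeconds, JSON.stringify(value));
  return value;
}
```

With this in place, getCachedProducts collapses to `cached(redis, `products:${category}`, 300, () => db.product.findMany(...))`, and the same helper covers seller profiles and category listings with different TTLs.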

Image Optimization

With 200 k+ product images, serving full‑resolution PNGs kills bandwidth even when individual files are small. The Next.js Image component handles this automatically:

import Image from 'next/image';

// Usage sketch – src/alt come from a hypothetical product object
<Image src={product.imageUrl} alt={product.name} width={300} height={420} />

Next.js will:

  • Serve appropriately sized WebP/AVIF variants.
  • Lazy‑load images that are off‑screen.
  • Cache and CDN‑optimize the assets.

TL;DR

  • Index everything you filter/sort.
  • Paginate, not infinite‑scroll.
  • Leverage Server Components for static markup.
  • Avoid N+1 queries with include/joins.
  • Cache heavy‑read data (Redis works great).
  • Let Next.js handle image optimization.

Apply these patterns and a 200 k‑row dataset feels as snappy as a 10‑post blog.

Just by using next/image everywhere (WebP/AVIF where supported, resizing to the display size, lazy-loading below the fold, caching the optimized versions), page weights dropped from 2 MB to 400 KB.

Streaming for Slow Queries

Sometimes a query is just slow (complex joins, aggregations, whatever). Rather than blocking the whole page, stream the slow part.

import { Suspense } from 'react';

export default function Page() {
  return (
    <main>
      <Header />

      {/* Fallback renders instantly while the slow part streams in */}
      <Suspense fallback={<ProductListSkeleton />}>
        <SlowProductList />
      </Suspense>

      <Footer />
    </main>
  );
}

async function SlowProductList() {
  const products = await someSlowQuery();
  return <ProductGrid products={products} />;
}

The header and footer render immediately. The product list streams in when ready, so users see something fast instead of staring at a blank page.

Measure Everything

Don’t guess. Measure.

We’re self‑hosted, so we use:

  • Prisma query logging to catch slow queries (should probably be doing this more consistently)
  • Redis monitoring to track cache hit rates (another thing we should set up properly)

For a self‑hosted Next.js app, you’d also want:

  • Prisma’s log: ['query'] option to surface anything slow
  • Redis INFO stats for hit/miss ratios
  • Server‑side performance monitoring (New Relic, Datadog, or simple Express middleware logging)
  • Lighthouse CI in your deployment pipeline
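
For the hit/miss ratio specifically, Redis reports keyspace_hits and keyspace_misses in its `INFO stats` output (those field names are standard; the parsing approach here is just one option):

```typescript
// Compute the cache hit rate from the text Redis returns for `INFO stats`.
// Returns null if the counters aren't present or no lookups have happened.
function hitRate(infoStats: string): number | null {
  const num = (field: string): number | null => {
    const m = infoStats.match(new RegExp(`^${field}:(\\d+)`, 'm'));
    return m ? Number(m[1]) : null;
  };
  const hits = num('keyspace_hits');
  const misses = num('keyspace_misses');
  if (hits === null || misses === null || hits + misses === 0) return null;
  return hits / (hits + misses);
}
```

With ioredis, `await redis.info('stats')` returns exactly this kind of text, so wiring it up is a one-liner.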

If a page is slow, check:

  1. Is the database query slow? (Prisma logs / pg_stat_statements)
  2. Are we missing a cache? (Redis hit rate)
  3. Are we shipping too much JavaScript? (Next.js bundle analyzer)

Usually it’s #1.
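
Prisma's event-emitting query log (`log: [{ emit: 'event', level: 'query' }]`) hands you each query with its duration in milliseconds. A small helper like this (the event type is simplified to the two fields used) surfaces the worst offenders instead of drowning you in every query:

```typescript
// Simplified shape of a Prisma query event: the real event has more
// fields, but query text and duration (ms) are what matter here.
type QueryEvent = { query: string; duration: number };

// Keep only queries slower than the threshold, worst first,
// so the log shows what to fix next.
function slowQueries(events: QueryEvent[], thresholdMs = 100): QueryEvent[] {
  return events
    .filter(e => e.duration >= thresholdMs)
    .sort((a, b) => b.duration - a.duration);
}
```

Hook it up with `prisma.$on('query', e => buffer.push(e))` and dump `slowQueries(buffer)` periodically, or just log any single event over the threshold as it happens.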

What Actually Moved the Needle

Here’s what made the biggest performance impact:

  • Database indexes – cut query times from seconds to milliseconds
  • Redis caching – 80 % fewer database hits
  • Server Components – less client JavaScript, faster initial render
  • Image optimization – page weight dropped 5×

The rest was marginal. Focus on those four and you’ll be fine.

Conclusion

Big datasets break the patterns you learn in tutorials. The fixes aren’t complicated, but you have to think about data flow differently.

  • Database first
  • Cache aggressively
  • Ship less JavaScript

That’s it.

Built something similar and need help scaling it? 👉 morley.media

Originally published on kira.morley.media
