Nginx Deep Dive: Architecture, Configuration, and Practical Examples
Source: Dev.to
Introduction
Nginx (“Engine‑X”) is a high‑performance HTTP and reverse‑proxy server widely used in various scenarios such as web services, load balancing, API gateways, reverse proxies, and static‑resource servers. Because of its high performance, low resource consumption, and flexible configuration, Nginx has become the preferred choice for many internet companies, enterprises, and developers.
This article will begin with a basic introduction to Nginx, delve into its working principles, and use practical examples to help readers better understand how to configure and optimize Nginx.
Introduction to Nginx Basics
History and Background of Nginx
Nginx was originally developed by Russian programmer Igor Sysoev and released in 2004. It was initially designed to address the C10K problem (handling 10 000 simultaneous connections), thus exhibiting excellent performance in high‑concurrency scenarios. Due to its outstanding performance and scalability, Nginx has become one of the world’s most popular web servers, especially excelling at serving static resources and acting as a reverse proxy.
Nginx’s Core Functions
- Reverse proxy – forwards client requests to backend servers.
- Load balancing – supports algorithms such as Round Robin, IP Hash, and Least Connections.
- Static file service – efficiently serves HTML, CSS, JavaScript, images, etc.
- HTTP caching – caches response content to improve access performance.
- SSL/TLS support – provides HTTPS services.
- Reverse proxy + load balancing – distributes traffic among multiple backends for high availability.
- WebSocket support – handles long‑lived connections.
How Nginx Works
Event‑Driven Model
Nginx employs an event‑driven architecture. Unlike traditional multi‑threaded or multi‑process models, Nginx runs a small number of worker processes that handle client connections asynchronously. When a new request arrives, Nginx places it in an event queue, which the worker processes schedule. This design allows a single process to handle many concurrent connections while avoiding the context‑switching and memory‑overhead costs of traditional multithreading.
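The worker and event behavior described above is tuned in the global and events blocks. A minimal sketch (the values here are common starting points, not recommendations):

```nginx
# Global block: one worker process per CPU core is a common default.
worker_processes auto;

events {
    # Maximum simultaneous connections each worker may hold open.
    worker_connections 1024;

    # Allow a worker to accept several new connections per wake-up.
    multi_accept on;
}
```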
Request Processing Flow
- Receiving requests – Nginx listens for client connections and adds them to the event queue.
- Parsing requests – It parses HTTP request headers (URL, method, hostname, etc.).
- Selecting the appropriate service – Based on the configuration, Nginx either forwards the request to a backend server or serves a static resource directly.
- Response generation – Nginx builds the response from the backend’s reply or from local files and sends it to the client.
- Logging – Requests are logged for analysis and debugging.
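The final logging step is controlled by the log_format and access_log directives. A sketch of a custom format (the format name main_ext is made up for this example; all variables are standard nginx variables):

```nginx
http {
    # Custom format; $request_time records total time spent on the request.
    log_format main_ext '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        'rt=$request_time';

    access_log /var/log/nginx/access.log main_ext;
}
```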
Nginx Configuration File Structure
The main configuration file is usually located at /etc/nginx/nginx.conf. Its basic hierarchical structure is:
- Global block – sets global options (worker processes, user, log paths, etc.).
- http block – configures HTTP‑related settings (caching, compression, load balancing, etc.).
- server block – defines virtual hosts and handles requests for different domains.
- location block – matches request URIs and specifies how each should be processed.
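Putting the levels together, a stripped-down nginx.conf might look like this (paths and names are placeholders):

```nginx
# Global block
user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log;

events {
    worker_connections 1024;
}

# http block
http {
    gzip on;

    # server block: one virtual host
    server {
        listen 80;
        server_name example.com;

        # location block: URI matching
        location / {
            root /var/www/html;
        }
    }
}
```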
Advanced Features of Nginx
Load Balancing
Nginx provides several load‑balancing algorithms to distribute client requests across multiple backends:
- Round Robin – default method; distributes requests evenly.
- Least Connections – sends traffic to the server with the fewest active connections.
- IP Hash – routes requests based on a hash of the client’s IP address.
Example configuration
http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
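To switch algorithms, add the corresponding directive inside the upstream block. The sketch below also shows the weight and backup parameters, which bias Round Robin toward stronger machines and designate a standby server:

```nginx
upstream backend {
    # Uncomment one of these to change the algorithm:
    # least_conn;   # fewest active connections
    # ip_hash;      # sticky routing by client IP

    server backend1.example.com weight=3;  # receives roughly 3x the traffic
    server backend2.example.com;
    server backend3.example.com backup;    # used only if the others are down
}
```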
Caching and Compression
Caching
Caching static files and upstream responses can dramatically improve performance and reduce backend load.
http {
    proxy_cache_path /tmp/cache keys_zone=my_cache:10m;

    server {
        listen 80;

        location / {
            proxy_cache my_cache;
            proxy_pass http://backend;
        }
    }
}
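In practice the cache is usually tuned further. The values below are illustrative starting points, not recommendations:

```nginx
http {
    # levels adds a directory hierarchy under the cache path; max_size caps
    # disk usage; inactive evicts entries not accessed within the window.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;

        location / {
            proxy_cache my_cache;
            # How long cached responses stay valid, by status code.
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid 404 1m;
            # Expose cache HIT/MISS to clients for debugging.
            add_header X-Cache-Status $upstream_cache_status;
            proxy_pass http://backend;
        }
    }
}
```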
Compression
Enabling gzip compression reduces the amount of data transmitted over the network.
http {
    gzip on;
    gzip_types text/plain application/javascript text/css;
    gzip_min_length 1000;
}
SSL/TLS Configuration
Nginx fully supports SSL/TLS, allowing you to serve secure HTTPS sites and protect user privacy.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # Recommended TLS settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://backend;
    }
}
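Two further directives are commonly added to reduce TLS handshake overhead by reusing sessions; the sizes here are typical examples:

```nginx
# Inside the server (or http) block:
ssl_session_cache shared:SSL:10m;  # shared cache, ~10 MB across workers
ssl_session_timeout 10m;           # how long a session may be resumed
```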
Nginx Practical Case Studies
1. Static File Server
server {
    listen 80;
    server_name www.example.com;
    root /var/www/html;
    index index.html index.htm;

    location /images/ {
        root /var/www/assets;
    }
}
In this configuration, Nginx serves the website homepage from /var/www/html. For requests under /images/, the root directive switches the document root to /var/www/assets; because root appends the full request URI to the path, a request for /images/logo.png is resolved to /var/www/assets/images/logo.png.
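With root, the matched prefix /images/ is kept in the filesystem path. To map /images/ directly onto /var/www/assets instead, use the alias directive:

```nginx
location /images/ {
    # alias replaces the matched prefix, so /images/logo.png
    # is served from /var/www/assets/logo.png.
    alias /var/www/assets/;
}
```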
2. Reverse Proxy and Load Balancing
http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;
        server_name www.example.com;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
3. URL Rewriting and Redirection
Redirect all HTTP requests to HTTPS:
server {
    listen 80;
    server_name www.example.com;
    return 301 https://$host$request_uri;
}
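For actual URL rewriting (as opposed to redirection), the rewrite directive transforms the URI before it is processed. The /old-blog/ path below is a made-up example:

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Internally map /old-blog/<slug> to /blog/<slug>;
    # 'last' restarts location matching with the new URI.
    rewrite ^/old-blog/(.*)$ /blog/$1 last;

    location /blog/ {
        root /var/www/html;
    }
}
```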
4. Performance Optimization and Monitoring
Optimization Tips
- Adjust the number of worker processes – set worker_processes based on the number of CPU cores.
- Use caching – cache static resources and enable reverse‑proxy caching to reduce backend load.
- Enable GZIP compression – compress responses to lower bandwidth usage and improve page load speed.
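The tips above translate into a handful of directives; these values are illustrative:

```nginx
worker_processes auto;       # one worker per CPU core

http {
    sendfile on;             # kernel-level file transfer for static files
    tcp_nopush on;           # send headers and file start in one packet
    keepalive_timeout 65;    # reuse client connections between requests
}
```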
Monitoring with stub_status
server {
    listen 80;
    server_name status.example.com;

    location /status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
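Requesting the endpoint from the allowed host (for example, curl http://127.0.0.1/status run on the server itself) returns a short plain-text report along these lines:

```text
Active connections: 2
server accepts handled requests
 16 16 31
Reading: 0 Writing: 1 Waiting: 1
```

The accepts/handled/requests counters are totals since startup, while Reading, Writing, and Waiting break down the currently active connections.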
Conclusion
Nginx, as a high‑performance web server, is an indispensable component of modern web services due to its event‑driven architecture, reverse‑proxy capabilities, load balancing, and caching features. This article provides a foundation for further discussion and deeper exploration of Nginx’s powerful functionalities.