
Load Balancing

Distributing incoming requests across multiple servers to optimize resource utilization, minimize latency, and prevent any single server from becoming a bottleneck

TL;DR

Load balancing distributes network traffic or computational workload across multiple servers using algorithms like round-robin, least-connections, or consistent hashing to prevent any single server from being overwhelmed. Essential for scalability, high availability, and optimized resource utilization in systems like AWS ELB, nginx, and HAProxy.

Visual Overview

Load Balancing Overview

Core Explanation

What is Load Balancing?

Load balancing is the process of distributing incoming requests across multiple backend servers to:

  1. Optimize resource utilization: No server is overloaded while others are idle
  2. Maximize throughput: Handle more requests by adding servers
  3. Minimize latency: Route to least-loaded or nearest server
  4. Ensure high availability: Route around failed servers

Load Balancing Algorithms

1. Round Robin

Requests are distributed sequentially: the first request goes to server 1, the second to server 2, and so on, wrapping around to the start. Simple and fair when servers are identical and requests are uniform.

2. Weighted Round Robin

Round robin where each server receives traffic in proportion to an assigned weight, so a server with weight 5 gets five times the requests of a server with weight 1. Useful for heterogeneous hardware.
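As an illustration, here is a minimal sketch (server names are hypothetical) of the "smooth" weighted round robin popularized by nginx: each pick gives every server credit equal to its weight, selects the highest-credit server, then charges the winner the total weight, so heavy servers are interleaved with light ones rather than served in bursts.

```python
def smooth_weighted_rr(weights: dict[str, int], n: int) -> list[str]:
    """Return n picks using nginx-style smooth weighted round robin."""
    current = {server: 0 for server in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        # Every server earns credit proportional to its weight...
        for server, w in weights.items():
            current[server] += w
        # ...the server with the most credit wins this round...
        chosen = max(current, key=current.get)
        # ...and pays back the total weight so others catch up.
        current[chosen] -= total
        picks.append(chosen)
    return picks

# With weights a=5, b=1, c=1 the heavy server is interleaved:
print(smooth_weighted_rr({'a': 5, 'b': 1, 'c': 1}, 7))
# ['a', 'a', 'b', 'a', 'c', 'a', 'a']
```

Note how server `a` still gets 5 of every 7 requests, but never more than two in a row around the lighter servers.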

3. Least Connections

Each new request goes to the server with the fewest active connections, which works well when request durations vary widely.

4. Weighted Least Connections

Least connections normalized by capacity: route to the server that minimizes active_connections / weight.

5. Least Response Time

Routes to the server with the lowest recent average response time (often combined with fewest connections), steering traffic toward servers that are currently responding fastest.
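One way to sketch least-response-time selection (a simplified illustration, with hypothetical server names) is to keep an exponentially weighted moving average (EWMA) of each server's observed latency and route to the lowest:

```python
class ResponseTimeBalancer:
    """Pick the backend with the lowest smoothed response time."""

    def __init__(self, servers: list[str], alpha: float = 0.3):
        self.alpha = alpha  # EWMA smoothing factor (higher = reacts faster)
        # Unmeasured servers start at 0.0, so they get probed first
        self.avg_ms = {s: 0.0 for s in servers}

    def record(self, server: str, latency_ms: float) -> None:
        # Blend the new sample into the running average
        old = self.avg_ms[server]
        self.avg_ms[server] = self.alpha * latency_ms + (1 - self.alpha) * old

    def pick(self) -> str:
        # Route to the currently fastest server
        return min(self.avg_ms, key=self.avg_ms.get)
```

The EWMA means one slow response nudges a server's score rather than disqualifying it, while a consistently slow server drifts out of rotation.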

6. IP Hash (Consistent Hashing)

Hashes the client IP to pick a server, so the same client keeps reaching the same backend. With consistent hashing, adding or removing a server remaps only a small fraction of clients instead of nearly all of them.
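Plain hash-mod routing remaps most clients whenever the server count changes. A consistent-hash ring avoids that; the sketch below (node names are hypothetical, and MD5 is used only as a cheap stable hash, not for security) places virtual nodes on a ring and walks clockwise to find each key's server:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes."""

    def __init__(self, nodes: list[str], vnodes: int = 100):
        self.vnodes = vnodes
        self.ring = {}            # point on ring -> physical node
        self.sorted_hashes = []   # sorted ring points
        for node in nodes:
            self.add_node(node)

    def _hash(self, key: str) -> int:
        # Stable across processes, unlike Python's salted built-in hash()
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        # Many virtual points per node smooths the key distribution
        for i in range(self.vnodes):
            h = self._hash(f"{node}#{i}")
            self.ring[h] = node
            bisect.insort(self.sorted_hashes, h)

    def remove_node(self, node: str) -> None:
        for i in range(self.vnodes):
            h = self._hash(f"{node}#{i}")
            del self.ring[h]
            self.sorted_hashes.remove(h)

    def get_node(self, key: str) -> str:
        # First ring point clockwise from the key's hash (wrapping around)
        idx = bisect.bisect(self.sorted_hashes, self._hash(key))
        idx %= len(self.sorted_hashes)
        return self.ring[self.sorted_hashes[idx]]
```

Removing a node only remaps the keys that were on that node; everyone else stays put, which is exactly the property hash-mod lacks.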

7. Least Bandwidth

Routes to the server currently serving the least traffic in Mbps, which suits bandwidth-heavy workloads such as video streaming or large file downloads.

Layer 4 vs Layer 7 Load Balancing

Layer 4 (Transport Layer)

Operates on TCP/UDP information only (source and destination IPs and ports), forwarding packets without inspecting payloads. Very fast and protocol-agnostic, but it cannot route by URL, header, or cookie.
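Since a Layer 4 balancer never sees HTTP, it typically hashes connection identifiers. The sketch below is loosely modeled on a 5-tuple flow hash (as used by balancers like AWS NLB); the field encoding is illustrative, not any particular implementation:

```python
import hashlib

def pick_backend(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                 protocol: str, backends: list[str]) -> str:
    """Map a connection 5-tuple to a backend deterministically."""
    flow = f"{protocol}|{src_ip}:{src_port}->{dst_ip}:{dst_port}"
    digest = hashlib.sha256(flow.encode()).hexdigest()
    # The same flow always hashes to the same backend, so every
    # packet of one TCP connection reaches the same server
    return backends[int(digest, 16) % len(backends)]
```

Per-flow stickiness is what lets an L4 balancer forward individual packets without tracking application sessions.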

Layer 7 (Application Layer)

Terminates the client connection and inspects HTTP content, enabling path- and header-based routing, SSL termination, compression, and caching, at the cost of extra processing per request.

Health Checks & Failover

The load balancer probes each backend, actively by polling a health endpoint and/or passively by watching live traffic for errors, removes failing servers from the pool, and restores them once checks pass again.

Session Persistence (Sticky Sessions)

When session state lives on a specific server, the load balancer pins each client to that server via a cookie or IP hash. This trades perfectly even distribution for session affinity.
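Cookie-based stickiness can be sketched as: on a client's first request the balancer picks a server and records it in a cookie; later requests carry the cookie and are pinned to that server. The cookie name below is hypothetical (real balancers use names like AWSALB or a configured value):

```python
import random

def route_sticky(cookies: dict, backends: list[str],
                 cookie_name: str = "lb_server") -> tuple[str, dict]:
    """Return (chosen backend, cookies the balancer should set)."""
    pinned = cookies.get(cookie_name)
    if pinned in backends:
        # Valid pin: keep the client on its existing server
        return pinned, {}
    # First visit, or the pinned server left the pool: pick fresh
    chosen = random.choice(backends)
    return chosen, {cookie_name: chosen}
```

Note the fallback: if the pinned server has been removed, the client is silently re-pinned, which is why server-side sessions still need replication or an external store to survive failover.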

Real Systems Using Load Balancing

| System        | Type      | Algorithms                               | Key Features                            | Use Case                      |
|---------------|-----------|------------------------------------------|-----------------------------------------|-------------------------------|
| AWS ELB (ALB) | Layer 7   | Round robin, least outstanding requests  | Content-based routing, SSL termination  | HTTP microservices            |
| AWS NLB       | Layer 4   | Flow hash                                | Ultra-low latency, static IP            | TCP services, high throughput |
| nginx         | Layer 7   | Round robin, least_conn, ip_hash         | Open source, highly configurable        | Web servers, API gateway      |
| HAProxy       | Layer 4/7 | Weighted RR, least_conn, consistent hash | High performance, advanced ACLs         | Enterprise load balancing     |
| Envoy         | Layer 7   | Weighted RR, least_request, ring_hash    | Service mesh, observability             | Kubernetes, microservices     |
| Cloudflare    | Layer 7   | Geo-routing, weighted pools              | DDoS protection, CDN                    | Global load balancing         |

Case Study: AWS Application Load Balancer

ALB accepts requests at listeners, evaluates host-, path-, and header-based routing rules, terminates TLS, and forwards to target groups of instances or containers distributed across multiple availability zones.

Case Study: nginx Load Balancer

# nginx.conf - Load Balancer Configuration

# Define upstream backend servers
upstream backend {
    # Load balancing algorithm
    least_conn;  # Use least connections

    # Backend servers with weights
    server backend1.example.com:8080 weight=5;
    server backend2.example.com:8080 weight=3;
    server backend3.example.com:8080 weight=2;

    # Server with max connections limit
    server backend4.example.com:8080 max_conns=100;

    # Backup server (used only if others fail)
    server backup.example.com:8080 backup;

    # Connection pooling: keep up to 32 idle connections per worker
    # (passive health checks come from max_fails/fail_timeout on the
    # server lines; the active health_check directive requires nginx Plus)
    keepalive 32;
}

# API servers upstream
upstream api_servers {
    # ip_hash pins each client IP to one server (session persistence);
    # for true consistent hashing use: hash $remote_addr consistent;
    ip_hash;

    server api1.example.com:3000;
    server api2.example.com:3000;
    server api3.example.com:3000;
}

server {
    listen 80;
    server_name example.com;

    # Health check endpoint
    location /health {
        access_log off;
        return 200 "healthy\n";
    }

    # Route /api/* to API servers
    location /api/ {
        proxy_pass http://api_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Retry logic
        proxy_next_upstream error timeout http_502 http_503;
        proxy_next_upstream_tries 3;
    }

    # Route all other traffic to backend
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Static files (no load balancing needed)
    location /static/ {
        root /var/www;
        expires 1d;
    }
}

# SSL/TLS configuration
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    # SSL termination (decrypt here, forward HTTP to backend)
    location / {
        proxy_pass http://backend;
    }
}

When to Use Load Balancing

✓ Perfect Use Cases

High Traffic Web Applications

Spread millions of requests across a server fleet so no single machine saturates during traffic peaks.

Microservices Architecture

Put a load balancer in front of each service so its instances can scale, deploy, and fail independently.

Global Applications (Geo-Load Balancing)

Route users to the nearest region via DNS or anycast, then balance within the region for low latency.

Database Read Replicas

Send writes to the primary and distribute read queries across replicas to scale read throughput.

✕ When NOT to Use (or Use Carefully)

Single Server Deployment

A load balancer in front of a single server adds cost, complexity, and an extra network hop without improving availability; introduce one only when you plan to scale out.

Stateful TCP Connections (Without Sticky Sessions)

Long-lived stateful connections (WebSockets, game sessions) break if mid-session requests land on a different server; use sticky sessions or move state to a shared store.

Very Low Latency Requirements (< 1ms)

Each hop through a load balancer adds latency (roughly 1ms at Layer 4, 5-10ms at Layer 7), so sub-millisecond budgets may call for client-side load balancing or direct server return.

Interview Application

Common Interview Question

Q: “Design a load balancing solution for a REST API with 10 backend servers. How would you ensure high availability and optimal performance?”

Strong Answer:

“I’d design a multi-layered load balancing solution:

Architecture:

  • DNS Load Balancing: Route to nearest datacenter (geo-routing)
  • Layer 7 Load Balancer: AWS ALB or nginx (content-based routing)
  • Layer 4 Load Balancer: Optional NLB for TCP services

Algorithm Selection:

  • API Endpoints: Least connections algorithm
    • Why: API requests have variable duration
    • Long-running queries won’t overload single server
  • Static Assets: Round robin
    • Why: Uniform, fast requests
  • User Sessions: IP hash or cookie-based sticky sessions
    • Why: Session affinity if storing state server-side

High Availability:

  1. Health Checks:
    • Active: GET /health every 10s
    • Passive: Monitor 5xx errors in real traffic
    • Threshold: 3 failures → mark unhealthy
  2. Automatic Failover:
    • Failed server removed from pool immediately
    • Traffic redistributed to healthy servers
    • Auto-retry on failure (circuit breaker pattern)
  3. Multi-AZ Deployment:
    • Load balancer across 3 availability zones
    • Servers distributed across zones
    • Tolerate entire AZ failure
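The threshold logic above (3 failures before marking a server unhealthy) can be sketched as a small state tracker. The failure threshold comes from the answer; the recovery threshold of 2 consecutive successes is an added assumption, since re-admitting a server on its first good check risks flapping:

```python
class HealthTracker:
    """Mark a server unhealthy after N consecutive failed checks."""

    def __init__(self, fail_threshold: int = 3, recover_threshold: int = 2):
        self.fail_threshold = fail_threshold
        self.recover_threshold = recover_threshold  # assumption, not from source
        self.failures = 0
        self.successes = 0
        self.healthy = True

    def record_check(self, ok: bool) -> bool:
        """Record one health-check result; return current health state."""
        if ok:
            self.failures = 0
            self.successes += 1
            # Require consecutive successes before re-admitting traffic
            if not self.healthy and self.successes >= self.recover_threshold:
                self.healthy = True
        else:
            self.successes = 0
            self.failures += 1
            if self.healthy and self.failures >= self.fail_threshold:
                self.healthy = False
        return self.healthy
```

Requiring consecutive failures keeps one transient timeout from ejecting a healthy server, while the recovery threshold keeps a half-recovered server from bouncing in and out of the pool.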

Performance Optimizations:

  1. SSL/TLS Termination:
    • Decrypt at load balancer
    • Offload CPU from backend servers
    • Use HTTP between LB and backends
  2. Connection Pooling:
    • Keep-alive connections to backends
    • Reduce TCP handshake overhead
  3. Caching:
    • Cache static responses at LB
    • Reduce backend load

Monitoring:

  • Metrics: Request rate, error rate, latency (p50, p99)
  • Alerts: Health check failures, high latency, 5xx errors
  • Dashboard: Real-time traffic distribution per server

Scaling:

  • Auto Scaling Group: Add servers when CPU > 70%
  • Load balancer auto-registers new instances
  • Graceful shutdown: Drain connections before removing server

Trade-offs:

  • Layer 7 LB adds 5-10ms latency vs Layer 4 (~1ms)
  • But enables advanced routing and SSL termination
  • For ultra-low latency, use Layer 4 or client-side LB”

Code Example

Simple Round Robin Load Balancer

import hashlib
import threading
import time
from dataclasses import dataclass
from typing import List, Optional

import requests

@dataclass
class BackendServer:
    """Represents a backend server"""
    host: str
    port: int
    weight: int = 1
    healthy: bool = True
    active_connections: int = 0

class LoadBalancer:
    """
    Simple load balancer implementing multiple algorithms
    """
    def __init__(self, servers: List[BackendServer]):
        self.servers = servers
        self.current_index = 0  # For round robin
        self.lock = threading.Lock()

        # Start health check thread
        self.health_check_thread = threading.Thread(
            target=self._health_check_loop,
            daemon=True
        )
        self.health_check_thread.start()

    def round_robin(self) -> BackendServer:
        """Simple round robin algorithm"""
        with self.lock:
            # Filter healthy servers
            healthy_servers = [s for s in self.servers if s.healthy]

            if not healthy_servers:
                raise Exception("No healthy servers available")

            # Get next server in round-robin fashion
            server = healthy_servers[self.current_index % len(healthy_servers)]
            self.current_index += 1

            return server

    def weighted_round_robin(self) -> BackendServer:
        """Weighted round robin based on server capacity"""
        with self.lock:
            healthy_servers = [s for s in self.servers if s.healthy]

            if not healthy_servers:
                raise Exception("No healthy servers available")

            # Build weighted list (repeat servers based on weight)
            weighted_list = []
            for server in healthy_servers:
                weighted_list.extend([server] * server.weight)

            # Round robin through weighted list
            server = weighted_list[self.current_index % len(weighted_list)]
            self.current_index += 1

            return server

    def least_connections(self) -> BackendServer:
        """Route to server with fewest active connections"""
        with self.lock:
            healthy_servers = [s for s in self.servers if s.healthy]

            if not healthy_servers:
                raise Exception("No healthy servers available")

            # Find server with minimum connections
            server = min(healthy_servers, key=lambda s: s.active_connections)

            return server

    def weighted_least_connections(self) -> BackendServer:
        """Weighted least connections (connections / weight)"""
        with self.lock:
            healthy_servers = [s for s in self.servers if s.healthy]

            if not healthy_servers:
                raise Exception("No healthy servers available")

            # Find server with minimum connections/weight ratio
            server = min(healthy_servers,
                        key=lambda s: s.active_connections / s.weight)

            return server

    def ip_hash(self, client_ip: str) -> BackendServer:
        """Hash the client IP to a server (simple modulo hashing;
        note this is not true consistent hashing, so most clients
        remap when the healthy-server count changes)"""
        with self.lock:
            healthy_servers = [s for s in self.servers if s.healthy]

            if not healthy_servers:
                raise Exception("No healthy servers available")

            # Use a stable hash: Python's built-in hash() is salted
            # per process and would break stickiness across restarts
            hash_value = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
            server_index = hash_value % len(healthy_servers)

            return healthy_servers[server_index]

    def forward_request(self, request_path: str, algorithm: str = 'round_robin',
                        client_ip: Optional[str] = None) -> dict:
        """
        Forward request to backend server using specified algorithm
        """
        # Select server based on algorithm
        if algorithm == 'round_robin':
            server = self.round_robin()
        elif algorithm == 'weighted_round_robin':
            server = self.weighted_round_robin()
        elif algorithm == 'least_connections':
            server = self.least_connections()
        elif algorithm == 'weighted_least_connections':
            server = self.weighted_least_connections()
        elif algorithm == 'ip_hash':
            if not client_ip:
                raise ValueError("client_ip required for ip_hash algorithm")
            server = self.ip_hash(client_ip)
        else:
            raise ValueError(f"Unknown algorithm: {algorithm}")

        print(f"Routing to {server.host}:{server.port} "
              f"(connections: {server.active_connections})")

        # Increment connection count
        with self.lock:
            server.active_connections += 1

        try:
            # Forward request to backend
            url = f"http://{server.host}:{server.port}{request_path}"
            response = requests.get(url, timeout=5)

            return {
                'status': response.status_code,
                'body': response.text,
                'server': f"{server.host}:{server.port}"
            }

        except requests.RequestException as e:
            print(f"Error forwarding to {server.host}:{server.port}: {e}")
            # Mark server as unhealthy on error
            with self.lock:
                server.healthy = False
            raise

        finally:
            # Decrement connection count
            with self.lock:
                server.active_connections -= 1

    def _health_check_loop(self):
        """Background thread to perform health checks"""
        while True:
            time.sleep(10)  # Check every 10 seconds

            for server in self.servers:
                healthy = self._check_health(server)

                with self.lock:
                    if healthy and not server.healthy:
                        print(f"✓ Server {server.host}:{server.port} is now HEALTHY")
                        server.healthy = True
                    elif not healthy and server.healthy:
                        print(f"✗ Server {server.host}:{server.port} is now UNHEALTHY")
                        server.healthy = False

    def _check_health(self, server: BackendServer) -> bool:
        """Check if server is healthy"""
        try:
            url = f"http://{server.host}:{server.port}/health"
            response = requests.get(url, timeout=5)
            return response.status_code == 200
        except requests.RequestException:
            return False

    def get_status(self) -> dict:
        """Get load balancer status"""
        with self.lock:
            return {
                'total_servers': len(self.servers),
                'healthy_servers': sum(1 for s in self.servers if s.healthy),
                'servers': [
                    {
                        'host': s.host,
                        'port': s.port,
                        'healthy': s.healthy,
                        'active_connections': s.active_connections,
                        'weight': s.weight
                    }
                    for s in self.servers
                ]
            }

# Usage Example
if __name__ == '__main__':
    # Create backend servers
    servers = [
        BackendServer('server1.example.com', 8080, weight=5),
        BackendServer('server2.example.com', 8080, weight=3),
        BackendServer('server3.example.com', 8080, weight=2),
    ]

    lb = LoadBalancer(servers)

    # Test different algorithms
    print("=== Round Robin ===")
    for i in range(5):
        try:
            result = lb.forward_request('/api/users', algorithm='round_robin')
            print(f"Request {i+1} → {result['server']}")
        except Exception as e:
            print(f"Request {i+1} failed: {e}")

    print("\n=== Least Connections ===")
    for i in range(5):
        try:
            result = lb.forward_request('/api/users', algorithm='least_connections')
            print(f"Request {i+1} → {result['server']}")
        except Exception as e:
            print(f"Request {i+1} failed: {e}")

    print("\n=== IP Hash (Sticky Sessions) ===")
    client_ips = ['1.2.3.4', '5.6.7.8', '1.2.3.4', '5.6.7.8']
    for i, ip in enumerate(client_ips):
        try:
            result = lb.forward_request('/api/users', algorithm='ip_hash',
                                       client_ip=ip)
            print(f"Client {ip} → {result['server']}")
        except Exception as e:
            print(f"Request from {ip} failed: {e}")

    # Get status
    print("\n=== Load Balancer Status ===")
    import json
    print(json.dumps(lb.get_status(), indent=2))

Layer 7 HTTP Load Balancer with Path Routing

from flask import Flask, request, Response
import requests

app = Flask(__name__)

# Define backend pools
BACKEND_POOLS = {
    'api': [
        'http://api1.example.com:3000',
        'http://api2.example.com:3000',
        'http://api3.example.com:3000',
    ],
    'web': [
        'http://web1.example.com:80',
        'http://web2.example.com:80',
    ],
    'admin': [
        'http://admin1.example.com:8080',
    ]
}

# Round robin counters
counters = {pool: 0 for pool in BACKEND_POOLS}

def select_backend(pool_name: str) -> str:
    """Select backend using round robin"""
    pool = BACKEND_POOLS[pool_name]
    counter = counters[pool_name]
    backend = pool[counter % len(pool)]
    counters[pool_name] += 1
    return backend

@app.route('/', defaults={'path': ''}, methods=['GET', 'POST', 'PUT', 'DELETE'])
@app.route('/<path:path>', methods=['GET', 'POST', 'PUT', 'DELETE'])
def load_balance(path):
    """Layer 7 load balancer with path-based routing"""

    # Path-based routing
    if path.startswith('api/'):
        backend = select_backend('api')
    elif path.startswith('admin/'):
        backend = select_backend('admin')
    else:
        backend = select_backend('web')

    # Forward request to backend
    url = f"{backend}/{path}"

    # Preserve headers
    headers = {key: value for key, value in request.headers if key != 'Host'}

    # Add X-Forwarded-For header
    headers['X-Forwarded-For'] = request.remote_addr
    headers['X-Real-IP'] = request.remote_addr

    try:
        # Forward request
        response = requests.request(
            method=request.method,
            url=url,
            headers=headers,
            data=request.get_data(),
            cookies=request.cookies,
            allow_redirects=False,
            timeout=30
        )

        # Strip hop-by-hop headers that must not be relayed verbatim
        excluded = {'content-encoding', 'content-length',
                    'transfer-encoding', 'connection'}
        relay_headers = [(k, v) for k, v in response.headers.items()
                         if k.lower() not in excluded]

        # Return response
        return Response(
            response.content,
            status=response.status_code,
            headers=relay_headers
        )

    except requests.RequestException as e:
        return Response(f"Bad Gateway: {e}", status=502)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)


Used In Systems:

  • AWS ELB/ALB/NLB: Cloud load balancing
  • nginx/HAProxy: Open-source load balancers
  • Kubernetes: Service load balancing with kube-proxy

Explained In Detail:

  • System Design Deep Dive - Load balancing in production systems

Quick Self-Check

  • Can explain load balancing in 60 seconds?
  • Know difference between Layer 4 and Layer 7 load balancing?
  • Understand 3+ load balancing algorithms and their trade-offs?
  • Can explain health checks and failover mechanisms?
  • Know when to use sticky sessions vs session replication?
  • Can design a load balancing solution for given requirements?
Interview Notes

  • 💼 Interview Relevance: ~80% of system design interviews
  • 🏭 Production Impact: powers systems such as AWS ELB, nginx, HAProxy
  • Performance: optimal response times
  • 📈 Scalability: horizontal scaling