
CQRS (Command Query Responsibility Segregation)

An architectural pattern that separates read and write operations into distinct models, optimizing each for its specific use case.

TL;DR

CQRS separates write operations (commands) from read operations (queries) into distinct models, each optimized for its specific purpose. Commands modify state and are validated against business rules. Queries read from denormalized, optimized read models. An event backbone (like Kafka) propagates changes from write model to read models asynchronously, enabling independent scaling and optimization of each side.

Visual Overview

(Diagram: Traditional vs CQRS architecture)

Core Explanation

What is CQRS?

CQRS (Command Query Responsibility Segregation) is based on a simple principle:

  • Commands: Change state (write operations)
  • Queries: Return data (read operations)
  • Never mix them: Commands don’t return data, queries don’t change state
(Diagram: Before and after CQRS)
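The split fits in a few lines of Java. This is a minimal, self-contained sketch (the `UserService` name and the in-memory map are illustrative, not from the examples below): the command returns nothing and may reject the change; the query returns data and has no side effects.

```java
import java.util.HashMap;
import java.util.Map;

public class UserService {
    private final Map<String, String> emailsByUserId = new HashMap<>();

    // Command: changes state, returns nothing, enforces rules
    public void changeEmail(String userId, String newEmail) {
        if (!newEmail.contains("@")) {
            throw new IllegalArgumentException("Invalid email: " + newEmail);
        }
        emailsByUserId.put(userId, newEmail);
    }

    // Query: returns data, changes nothing
    public String getEmail(String userId) {
        return emailsByUserId.get(userId);
    }
}
```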

Write Side (Command Model)

Responsibilities:

  • Validate business rules
  • Enforce invariants
  • Maintain consistency
  • Generate events
  • Persist state changes
// Command definition (immutable: final fields, getters only, no setters)
public class UpdateEmailCommand {
    private final String userId;
    private final String newEmail;
    private final String requesterId;

    public UpdateEmailCommand(String userId, String newEmail, String requesterId) {
        this.userId = userId;
        this.newEmail = newEmail;
        this.requesterId = requesterId;
    }

    public String getUserId() { return userId; }
    public String getNewEmail() { return newEmail; }
    public String getRequesterId() { return requesterId; }
}

// Command handler
@CommandHandler
public class UserCommandHandler {
    private final UserRepository writeRepo;
    private final EventStore eventStore;
    private final KafkaProducer<String, DomainEvent> eventBus;

    public void handle(UpdateEmailCommand command) {
        // Load aggregate
        User user = writeRepo.findById(command.getUserId())
            .orElseThrow(() -> new UserNotFoundException());

        // Business validation
        validateEmailChange(user, command.getNewEmail());

        // Business logic
        String oldEmail = user.getEmail();
        user.setEmail(command.getNewEmail());
        user.setVersion(user.getVersion() + 1);

        // Generate domain event
        EmailChangedEvent event = EmailChangedEvent.builder()
            .aggregateId(command.getUserId())
            .oldEmail(oldEmail)
            .newEmail(command.getNewEmail())
            .version(user.getVersion())
            .timestamp(Instant.now())
            .userId(command.getRequesterId())
            .build();

        // Persist (atomic transaction)
        writeRepo.save(user);
        eventStore.append(command.getUserId(), event);

        // Publish to event bus (in production, publish only after the
        // transaction commits, e.g. via a transactional outbox)
        eventBus.send(new ProducerRecord<>("user-events",
            command.getUserId(), event));
    }

    private void validateEmailChange(User user, String newEmail) {
        if (user.getEmail().equals(newEmail)) {
            throw new NoChangeException("Email unchanged");
        }

        if (!EmailValidator.isValid(newEmail)) {
            throw new InvalidEmailException(newEmail);
        }

        // emailAlreadyTaken(...) would check uniqueness against the write-side store
        if (emailAlreadyTaken(newEmail)) {
            throw new EmailTakenException(newEmail);
        }
    }
}

Read Side (Query Model)

Responsibilities:

  • Optimize for query patterns
  • Denormalize data
  • Pre-compute aggregations
  • Support multiple views
  • Handle eventual consistency
// Query model (denormalized)
public class UserView {
    private String userId;
    private String email;
    private String fullName;
    private UserStatus status;
    private int orderCount;           // Pre-computed
    private BigDecimal totalSpent;    // Pre-computed
    private Instant lastLoginAt;
    private Instant createdAt;
    private Instant updatedAt;

    // Optimized for reads - no business logic
}

// Query handler
@QueryHandler
public class UserQueryHandler {
    private final UserViewRepository queryRepo;
    private final UserStatsRepository statsRepo;   // backs getUserStats()
    private final ElasticsearchClient searchClient;
    private final RedisTemplate<String, UserView> cache;

    public UserView getUser(String userId) {
        // Try cache first
        UserView cached = cache.opsForValue().get(userId);
        if (cached != null) return cached;

        // Read from optimized read DB
        UserView user = queryRepo.findById(userId)
            .orElseThrow(() -> new UserNotFoundException());

        // Cache for future reads
        cache.opsForValue().set(userId, user,
            Duration.ofMinutes(5));

        return user;
    }

    public List<UserView> searchUsers(String query, int page, int size) {
        // Use specialized search index
        return searchClient.search(query, page, size);
    }

    public UserStatsView getUserStats(String userId) {
        // Query from pre-aggregated stats view
        return statsRepo.findStatsByUserId(userId);
    }
}

Event Propagation (Write to Read)

Event-Driven Synchronization:

// Projection builder (updates read models)
@Component
public class UserProjectionBuilder {
    private final UserViewRepository userViewRepo;
    private final ElasticsearchClient searchClient;
    private final RedisTemplate<String, UserView> cache;

    @KafkaListener(topics = "user-events",
                   groupId = "user-view-projection")
    public void handleEvent(DomainEvent event) {

        switch (event.getEventType()) {
            case "UserCreated" ->
                handleUserCreated((UserCreatedEvent) event);

            case "EmailChanged" ->
                handleEmailChanged((EmailChangedEvent) event);

            case "StatusUpgraded" ->
                handleStatusUpgraded((StatusUpgradedEvent) event);
        }
    }

    private void handleUserCreated(UserCreatedEvent event) {
        // Create new read model entry
        UserView view = UserView.builder()
            .userId(event.getAggregateId())
            .email(event.getEmail())
            .fullName(event.getName())
            .status(UserStatus.ACTIVE)
            .orderCount(0)
            .totalSpent(BigDecimal.ZERO)
            .createdAt(event.getTimestamp())
            .build();

        userViewRepo.save(view);

        // Also update search index
        searchClient.index(view);

        // Invalidate cache
        cache.delete(event.getAggregateId());
    }

    private void handleEmailChanged(EmailChangedEvent event) {
        // Update existing read model
        UserView view = userViewRepo.findById(event.getAggregateId())
            .orElseThrow();

        view.setEmail(event.getNewEmail());
        view.setUpdatedAt(event.getTimestamp());

        userViewRepo.save(view);
        searchClient.update(view);
        cache.delete(event.getAggregateId());
    }
}

Multiple Read Models

Specialized Views for Different Use Cases:

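As a minimal sketch of the fan-out (all names here are hypothetical), the same event stream can feed several independent projections, each shaped for one query pattern:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiProjectionDemo {
    // One event type, consumed by every projection below
    public record OrderPlaced(String userId, String orderId, double amount) {}

    // Read model 1: pre-computed per-user stats
    public static class UserStatsProjection {
        public final Map<String, Integer> orderCounts = new HashMap<>();
        public void apply(OrderPlaced e) {
            orderCounts.merge(e.userId(), 1, Integer::sum);
        }
    }

    // Read model 2: flat list optimized for browsing
    public static class OrderListProjection {
        public final List<String> orderIds = new ArrayList<>();
        public void apply(OrderPlaced e) {
            orderIds.add(e.orderId());
        }
    }
}
```

Each projection consumes the stream at its own pace, so one view can lag without slowing the others.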

Handling Eventual Consistency

Challenges and Solutions:

public class EventualConsistencyPatterns {
    // commandBus, ui, eventBus, queryRepo, writeRepo are injected collaborators

    // Pattern 1: Optimistic UI (assume success)
    public void updateEmail_OptimisticUI(UpdateEmailCommand cmd) {
        // Send command
        commandBus.send(cmd);

        // Immediately update UI (optimistic)
        ui.updateEmail(cmd.getNewEmail());

        // Later: Event arrives and confirms
        eventBus.subscribe("user-events", event -> {
            if (event instanceof EmailChangedEvent) {
                ui.confirmEmailChanged(); // Remove "pending" indicator
            }
        });
    }

    // Pattern 2: Versioning (detect stale reads)
    public UserView getUserWithVersion(String userId) {
        UserView user = queryRepo.findById(userId);

        // Include version in response
        // Client can detect if data is stale by comparing versions
        return user; // version included
    }

    // Pattern 3: Eventual consistency window
    public void waitForConsistency(String userId, long expectedVersion)
            throws InterruptedException {
        // Poll until read model catches up
        int maxAttempts = 10;
        int attempt = 0;

        while (attempt < maxAttempts) {
            UserView view = queryRepo.findById(userId);

            if (view.getVersion() >= expectedVersion) {
                return; // Consistent now
            }

            Thread.sleep(100); // Wait 100ms
            attempt++;
        }

        throw new EventualConsistencyTimeoutException();
    }

    // Pattern 4: Read from write model (fallback)
    public UserView getUserStronglyConsistent(String userId) {
        // If strong consistency required, read from write model
        User domainModel = writeRepo.findById(userId);

        // Convert to view (slower, but consistent)
        return UserView.fromDomainModel(domainModel);
    }
}
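To see Pattern 3 end to end, here is a self-contained sketch (the class name and the simulated projection are invented for illustration): the caller polls until the read model's version reaches the version the command produced, giving read-your-writes behavior on top of an eventually consistent view.

```java
public class ConsistencyWaitDemo {
    private volatile long readModelVersion = 0;

    // Stand-in for the async projection builder applying one event
    public void applyNextEvent() { readModelVersion++; }

    // Poll until the read model reaches the version the command produced
    public boolean waitForVersion(long expectedVersion, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (readModelVersion >= expectedVersion) {
                return true;
            }
            applyNextEvent();   // in reality the projection progresses on its own
            try {
                Thread.sleep(10);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return readModelVersion >= expectedVersion;
    }
}
```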

Tradeoffs

Advantages:

  • Optimized models for read and write
  • Independent scaling (scale reads separately)
  • Multiple read models from single write model
  • Improved performance (denormalized reads)
  • Better security (separate permissions)
  • Flexibility (different databases for different needs)

Disadvantages:

  • Eventual consistency complexity
  • More code and infrastructure
  • Data duplication across read models
  • Synchronization overhead
  • Debugging complexity (distributed events)
  • Steep learning curve

Real Systems Using This

Netflix

  • Implementation: CQRS with Cassandra (write) + Elasticsearch (read)
  • Scale: Billions of events per day
  • Use case: User profiles, viewing history, recommendations
  • Benefit: Independent scaling of reads (99% traffic) vs writes

Amazon

  • Implementation: Event-sourcing + CQRS for order processing
  • Write model: DynamoDB for order commands
  • Read models: Multiple views for different teams (finance, logistics, customer service)
  • Benefit: Different teams optimize their own read models

Uber

  • Implementation: CQRS for trip lifecycle
  • Write: Trip commands (request, accept, complete)
  • Read: Multiple projections (driver view, rider view, analytics)
  • Benefit: Real-time rider/driver apps with stale analytics (acceptable)

When to Use CQRS

Perfect Use Cases

High Read-to-Write Ratio

Complex Queries on Write-Optimized Data

Multiple Consumers of Same Data

Different Scalability Requirements

When NOT to Use

Simple CRUD Applications

Strong Consistency Requirements

Small Scale Systems

Interview Application

Common Interview Question

Q: “Design Netflix’s recommendation system that handles billions of viewing events while serving personalized recommendations in real-time.”

Strong Answer:

“I’d use CQRS to separate high-volume writes (viewing events) from low-latency reads (recommendations):

WRITE SIDE (Event Ingestion):

Events: ViewStarted, ViewCompleted, ViewSkipped, VideoRated

Command Handler:
- Validate event (user exists, video exists)
- Append to Kafka (topic: viewing-events)
- Update write model (user viewing history in DynamoDB)

Kafka Configuration:
- Partitions: 1000 (partition by userId)
- Retention: 90 days
- Throughput: 1M events/sec

READ SIDE (Multiple Projections):

Projection 1: Real-time Recommendations (Low Latency)

Storage: Redis (cached recommendations)
Update: Every 5 minutes via Kafka Streams
Query: under 10ms p99
Data: Top 50 recommendations per user (pre-computed)

Kafka Streams Processor:
viewing-events → aggregate by user → ML model → Redis

Projection 2: Historical Analytics (Complex Queries)

Storage: ClickHouse (columnar DB)
Update: Batch processing every hour
Query: Complex aggregations across billions of events
Data: Watch time by genre/region/time

Projection 3: User Profile View (Fast Lookups)

Storage: Cassandra (denormalized)
Update: Real-time via projection builder
Query: Recently watched, continue watching
Data: {userId, recentVideos[], continueWatching[]}

Event Flow:

User watches video
   → ViewStarted event → Kafka
   → Projection 1: Update Redis recommendations (5min lag OK)
   → Projection 2: Append to ClickHouse (1hr lag OK)
   → Projection 3: Update Cassandra (real-time)

Benefits:

  • Write throughput: 1M+ events/sec (Kafka can handle)
  • Read latency: under 10ms for recommendations (Redis)
  • Complex analytics: Supported (ClickHouse)
  • Independent scaling: Scale read replicas without affecting writes
  • Eventual consistency: Acceptable for recommendations

Tradeoff: Recommendations lag by 5 minutes, but that’s acceptable for this use case.”

Why this is good:

  • Complete architecture
  • Multiple specialized read models
  • Specific technologies and configurations
  • Quantifies performance
  • Explains consistency tradeoffs
  • Shows real-world thinking

Red Flags to Avoid

  • Not understanding eventual consistency
  • Using CQRS for simple CRUD
  • Not explaining why CQRS is needed
  • Confusing CQRS with event sourcing (they’re different!)
  • Not considering operational complexity

Quick Self-Check

Before moving on, can you:

  • Explain CQRS in 60 seconds?
  • Draw the separation between command and query sides?
  • Describe how events propagate from write to read?
  • Explain eventual consistency challenges?
  • Identify when to use vs not use CQRS?
  • Show how multiple read models work?

Used In Systems

  • Event-Driven Architectures - Broader pattern
  • Microservices - CQRS per service


Interview Notes

  • ⭐ Must-Know: appears in 75% of L6+ interviews
  • 🏭 Production impact: powers systems at Netflix, Amazon, Uber
  • 📈 Performance: 10-100x query improvement
  • Scalability: independent scaling of reads and writes