Exactly-Once Semantics

How distributed messaging systems guarantee each message is processed exactly once, eliminating duplicates while ensuring atomicity across multiple operations

TL;DR

Exactly-once semantics (EOS) in distributed messaging ensures each message is processed precisely one time, even in the presence of failures, retries, and network issues. It combines idempotent producers (deduplicating retries) with transactions (atomic multi-message operations) to eliminate duplicates while maintaining strong consistency guarantees.

Visual Overview

(Diagram: the three delivery guarantee levels compared: at-most-once, at-least-once, and exactly-once.)
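
As a rough sketch, the three levels map to Kafka producer settings along these lines (illustrative values; exact defaults vary by client version):

Properties atMostOnce = new Properties();
atMostOnce.put(ProducerConfig.ACKS_CONFIG, "0");                    // don't wait for ack
atMostOnce.put(ProducerConfig.RETRIES_CONFIG, 0);                   // never retry => loss possible

Properties atLeastOnce = new Properties();
atLeastOnce.put(ProducerConfig.ACKS_CONFIG, "all");                 // wait for ISR replication
atLeastOnce.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);  // retry => duplicates possible

Properties exactlyOnce = new Properties();
exactlyOnce.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);    // deduplicate retries
exactlyOnce.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-app-1"); // enable transactions
// ...and consumers read with isolation.level=read_committed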

Core Explanation

What is Exactly-Once Semantics?

Exactly-once semantics guarantees that:

  1. Every message sent is delivered to the consumer
  2. No message is delivered more than once
  3. Messages are processed atomically across multiple operations

This is achieved through two complementary mechanisms: idempotent producers, which deduplicate retried sends, and transactions, which make multi-topic writes and offset commits atomic.

How Idempotent Producers Work

Producer ID and Sequence Numbers: each producer receives a unique Producer ID (PID) from the broker, and every message carries a per-partition sequence number. The broker tracks the last sequence number seen for each (PID, partition) pair, so a retried message with an already-seen sequence number is acknowledged but not written to the log again.

Configuration:

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

// Enable idempotency
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

KafkaProducer<String, String> producer = new KafkaProducer<>(props);

// Settings enforced or defaulted by idempotence:
// - acks=all (wait for ISR replication)
// - retries=Integer.MAX_VALUE (retry indefinitely)
// - max.in.flight.requests.per.connection <= 5 (pipelining allowed; ordering preserved via sequence numbers)
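
Conceptually, the broker-side check behaves like the sketch below (a simplified model, not Kafka's actual broker code; the real broker also rejects sequence gaps with OutOfOrderSequenceException):

// Last sequence number accepted per (PID, partition)
Map<String, Integer> lastSeqSeen = new HashMap<>();

boolean shouldAppend(long pid, int partition, int seq) {
    String key = pid + ":" + partition;
    Integer last = lastSeqSeen.get(key);
    if (last != null && seq <= last) {
        return false; // duplicate retry: ack the producer, skip the append
    }
    lastSeqSeen.put(key, seq);
    return true;      // new message: append to the partition log
}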

How Transactions Work

Transaction Coordinator Architecture: each transactional producer is bound to a transaction coordinator, a module running on one of the brokers, which persists transaction state in the internal __transaction_state topic.

Two-Phase Commit Protocol: on commit, the coordinator first writes PREPARE_COMMIT to __transaction_state (phase 1), then writes COMMIT markers to every partition touched by the transaction (phase 2). Consumers configured with isolation.level=read_committed see a transaction's messages only after its COMMIT marker lands, so the whole transaction becomes visible atomically, or not at all.

Code Example:

import java.time.Duration;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceProcessor {
    private final KafkaProducer<String, String> producer;
    private final KafkaConsumer<String, String> consumer;

    public ExactlyOnceProcessor() {
        // Producer setup
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payment-processor-1");
        producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

        producer = new KafkaProducer<>(producerProps);
        producer.initTransactions(); // Register transactional.id, bump epoch, fence zombies

        // Consumer setup
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "payment-group");
        consumerProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

        consumer = new KafkaConsumer<>(consumerProps);
    }

    public void processExactlyOnce() {
        consumer.subscribe(Arrays.asList("input-topic"));

        while (true) {
            ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofMillis(1000));

            if (!records.isEmpty()) {
                // Start transaction
                producer.beginTransaction();

                try {
                    // Process records and produce outputs
                    for (ConsumerRecord<String, String> record : records) {
                        String result = processRecord(record);

                        producer.send(new ProducerRecord<>(
                            "output-topic",
                            record.key(),
                            result
                        ));
                    }

                    // Commit offsets as part of the transaction
                    Map<TopicPartition, OffsetAndMetadata> offsets =
                        getOffsetsToCommit(records);

                    producer.sendOffsetsToTransaction(
                        offsets,
                        consumer.groupMetadata()
                    );

                    // Atomic commit: outputs + offsets together
                    producer.commitTransaction();

                    // Exactly-once guarantee:
                    // - Input consumed exactly once (offset committed)
                    // - Output produced exactly once (transaction committed)
                    // - Both atomic (all or nothing)

                } catch (Exception e) {
                    // Abort transaction on any failure
                    producer.abortTransaction();

                    // Offsets NOT committed; aborted outputs never become
                    // visible to read_committed consumers. After a seek back
                    // (or restart), processing resumes from the last commit.
                }
            }
        }
    }

    // Commit the offset *after* the last record consumed per partition
    private Map<TopicPartition, OffsetAndMetadata> getOffsetsToCommit(
            ConsumerRecords<String, String> records) {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (TopicPartition partition : records.partitions()) {
            List<ConsumerRecord<String, String>> partitionRecords =
                records.records(partition);
            long lastOffset =
                partitionRecords.get(partitionRecords.size() - 1).offset();
            offsets.put(partition, new OffsetAndMetadata(lastOffset + 1));
        }
        return offsets;
    }

    private String processRecord(ConsumerRecord<String, String> record) {
        // Placeholder for domain-specific processing
        return record.value().toUpperCase();
    }
}
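
A design note on the code above: offsets are committed through the producer (sendOffsetsToTransaction) rather than the consumer. That is what lets the offset commit ride inside the same transaction as the output writes; a consumer-side commitSync() would be a separate, non-atomic operation and would reopen the window for duplicates or loss.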

Zombie Producer Fencing

The Zombie Problem: after a network partition, a long GC pause, or a slow restart, an old producer instance can come back and keep writing alongside its replacement that shares the same transactional.id. Without fencing, the two instances would interleave writes (split-brain) and break the exactly-once guarantee. Kafka prevents this with monotonically increasing producer epochs.

How Epochs Work:

// Automatic epoch management

// Producer 1 (original)
producer1.initTransactions(); // Gets epoch=5
producer1.beginTransaction();
producer1.send(record);
// ... network partition ...

// Producer 2 (new instance, same transactional.id)
producer2.initTransactions(); // Gets epoch=6 (incremented)
producer2.beginTransaction();
producer2.send(record);
producer2.commitTransaction(); // Succeeds [OK]

// Producer 1 (zombie, network recovers)
producer1.send(record);
// Broker rejects: epoch=5 < current epoch=6
// Throws: ProducerFencedException
// Producer 1 cannot write anymore [FENCED]
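
In application code, a fenced producer cannot be recovered: the right response is to close it and let the newer instance own the transactional.id. A minimal sketch of that handling, wrapped around the commit from the processing loop above:

try {
    producer.commitTransaction();
} catch (ProducerFencedException fenced) {
    // A newer instance with the same transactional.id holds a higher epoch;
    // this instance is a zombie. Do not abort; close and shut down cleanly.
    producer.close();
    throw fenced;
} catch (KafkaException recoverable) {
    // Recoverable error: roll the transaction back, then reprocess
    // from the last committed offsets.
    producer.abortTransaction();
}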

Performance Impact

Throughput and Latency Tradeoffs:

In round numbers (consistent with the tradeoffs below): transactional pipelines typically reach 40-60% of non-transactional throughput, with a 2-4x increase in end-to-end latency.

Optimization Strategies:

// Optimize transactional performance

Properties props = new Properties();

// Batch more aggressively
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);     // 64 KB
props.put(ProducerConfig.LINGER_MS_CONFIG, 50);         // 50ms wait

// Larger buffer for transactions
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 134217728); // 128 MB

// Compression (helps with large transactions)
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

// Result: Can achieve ~60% of non-transactional throughput
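
These settings help because transaction overhead is roughly fixed per commit: the coordinator round-trips and commit markers cost about the same whether a transaction carries ten messages or ten thousand, so larger batches amortize that cost across more messages.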

Tradeoffs

Advantages:

  • Eliminates duplicates completely
  • Atomic multi-message operations
  • Strong consistency guarantees
  • Automatic zombie producer fencing
  • Simplifies application logic (no deduplication needed)

Disadvantages:

  • 40-60% throughput reduction
  • 2-4x latency increase
  • Higher CPU and memory usage
  • More complex operational model
  • Limited to Kafka-to-Kafka operations

Real Systems Using This

Apache Kafka

  • Implementation: Idempotent producers + transactional API
  • Scale: LinkedIn processes trillions of messages with exactly-once
  • Performance: ~300K msg/sec with transactions (production workloads)
  • Use case: Financial transactions, order processing, exactly-once ETL

Apache Flink

  • Implementation: Two-phase commit with checkpoints
  • Integration: Uses Kafka transactions for exactly-once sinks
  • Scale: Processes 1+ trillion events per day at Alibaba
  • Use case: Real-time analytics with exactly-once guarantees

Google Cloud Dataflow

  • Implementation: Exactly-once processing with idempotent writes
  • Guarantee: End-to-end exactly-once in streaming pipelines
  • Scale: Petabyte-scale data processing
  • Use case: Financial reporting, regulatory compliance

When to Use Exactly-Once Semantics

Perfect Use Cases

Financial Transactions

A duplicate message here is a double charge; debits, credits, and ledger updates must commit atomically.

Order Processing

Each order must flow exactly once across inventory, billing, and fulfillment topics; duplicates create phantom orders.

Database Change Data Capture (CDC)

Every change event must be applied exactly once downstream, or replicas drift out of sync with the source database.

Audit and Compliance Logs

Regulatory audit trails must be complete and duplicate-free.

When NOT to Use

High-Volume Metrics/Logs

Occasional duplicates are harmless here, and at-least-once delivery is far cheaper at this volume.

Performance-Critical Real-Time Systems

The 2-4x latency overhead of transactions can blow tight latency budgets.

Idempotent Consumers

If downstream processing is already idempotent (for example, keyed upserts into a database), at-least-once delivery yields exactly-once results without the transactional overhead.

Interview Application

Common Interview Question 1

Q: “Explain how Kafka achieves exactly-once semantics and why it’s difficult in distributed systems.”

Strong Answer:

“Exactly-once is challenging because distributed systems face network failures, crashes, and retries that naturally cause duplicates. Kafka solves this with three mechanisms:

1. Idempotent Producers (Eliminates retry duplicates):

  • Producer gets unique Producer ID (PID) from broker
  • Each message tagged with sequence number per partition
  • Broker tracks (PID, partition) → last sequence seen
  • Duplicate retries: Same sequence number → Broker ignores, returns success
  • Example: Send seq=5 → timeout → retry seq=5 → Broker: ‘Already have seq=5, ignoring’

2. Transactions (Atomic multi-message operations):

  • Transaction coordinator manages distributed commit via two-phase protocol
  • Phase 1: PREPARE_COMMIT written to __transaction_state topic
  • Phase 2: COMMIT markers written to all involved partitions
  • Consumers with read_committed see only completed transactions
  • All messages in transaction visible atomically, or none at all

3. Zombie Fencing (Prevents split-brain):

  • Each producer instance gets incrementing epoch number
  • Old instance (zombie) has stale epoch → Broker rejects writes
  • Guarantees only one active producer per transactional ID

Why it’s hard:

  • Requires coordination across multiple brokers
  • Two-phase commit adds latency (2-4x increase)
  • Need to track sequence numbers per (producer, partition)
  • Zombie producer detection and fencing complexity

Tradeoff: 40-60% throughput reduction for guaranteed consistency. Worth it for financial transactions, not worth it for logs.”

Why this is good:

  • Explains all three mechanisms clearly
  • Provides concrete examples
  • Explains why it’s difficult
  • Quantifies performance impact
  • Shows decision-making on when to use

Common Interview Question 2

Q: “Design a payment processing system that guarantees no duplicate charges, even if services restart or experience network failures.”

Strong Answer:

“I’d use Kafka with exactly-once semantics for atomic payment processing:

Architecture:

API Gateway → Kafka (payment-requests) → Payment Processor
  ├─ Kafka (payment-completed)
  └─ Kafka (payment-failed)

Exactly-Once Configuration:

// Producer (API Gateway)
props.put("transactional.id", "payment-api-" + instanceId);
props.put("enable.idempotence", true);
producer.initTransactions();

// Consumer (Payment Processor)
props.put("isolation.level", "read_committed");
props.put("enable.auto.commit", false);

Processing Flow:

  1. API Gateway produces payment request with transaction
  2. Payment Processor consumes with read_committed
  3. Process payment (call payment gateway)
  4. Atomic transaction: Write result + commit offset
  5. On failure: Abort transaction, will retry from last commit

Idempotency Key Design:

  • Client generates UUID: idempotency_key
  • Store in Redis: SET payment:{key} PROCESSING NX EX 3600
  • If exists: Return cached result (client retry)
  • Kafka EOS ensures: Same payment processed exactly once
  • Redis ensures: Client retries don’t cause duplicates
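
Sketched with the Jedis client (illustrative only; cachedResult is a hypothetical helper, and the key naming follows the list above):

String key = "payment:" + idempotencyKey;
try (Jedis jedis = new Jedis("localhost", 6379)) {
    // Equivalent of: SET payment:{key} PROCESSING NX EX 3600
    String reply = jedis.set(key, "PROCESSING",
        SetParams.setParams().nx().ex(3600));

    if (reply == null) {
        // Key already present: this is a client retry.
        // Return the cached result instead of reprocessing.
        return cachedResult(key); // hypothetical helper
    }
    // First attempt: publish to Kafka inside a transaction (EOS path)
}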

Failure Scenarios:

  • Network timeout: Retry handled by Kafka producer, no duplicate
  • Processor crash: Kafka transactions ensure atomicity
  • Partial transaction: Aborted, consumer will reprocess
  • Zombie processor: Fenced by epoch, cannot write

Result: Zero duplicate charges, 99.99% durability guarantee, ~8ms p99 latency (acceptable for payments).”

Why this is good:

  • Complete system design
  • Shows exactly-once configuration
  • Explains both Kafka EOS and application-level idempotency
  • Covers multiple failure scenarios
  • Provides performance metrics

Red Flags to Avoid

  • Confusing at-least-once with exactly-once
  • Not understanding idempotency vs transactions
  • Thinking EOS is “free” (ignoring performance cost)
  • Not knowing zombie fencing mechanism
  • Believing EOS covers external systems (it doesn’t)

Quick Self-Check

Before moving on, can you:

  • Explain exactly-once semantics in 60 seconds?
  • Describe how idempotent producers eliminate duplicates?
  • Draw the two-phase commit protocol flow?
  • Explain zombie producer fencing with epochs?
  • Quantify the performance impact of EOS?
  • Identify when to use exactly-once vs at-least-once?

Used In Systems

  • Distributed Message Queues - Core reliability feature
  • Event-Driven Architectures - Exactly-once event processing

Next Recommended: Event Sourcing - Architecture pattern leveraging exactly-once guarantees

Interview Notes

  • Interview relevance: ⭐ Must-know; appears in ~85% of senior interviews
  • Production impact: 🏭 Powers financial systems and order processing
  • Performance: No duplicates
  • Scalability: 📈 Atomic transactions