Event-driven architecture (EDA) is the pattern behind every high-scale system you use daily. When you place an Amazon order, dozens of services — inventory, payment, shipping, notifications — react to a single OrderPlaced event without knowing the others exist. EDA delivers decoupling, horizontal scaling, and fault isolation. This guide explains the patterns and trade-offs.
⚡ TL;DR: Events notify (OrderPlaced). Commands request (ProcessPayment). Queries ask (GetOrderStatus). Use SQS for point-to-point queues, SNS for fan-out, Kafka for ordered event streams with replay. Outbox pattern prevents lost events. Consumer groups enable parallel processing.
Events vs Commands vs Queries
```javascript
// Event: something that happened (past tense, no expected response)
//   OrderPlaced, UserRegistered, PaymentFailed, InventoryUpdated
//   Producer fires and forgets — doesn't care who consumes

// Command: a request to do something (imperative, expects action)
//   ProcessPayment, SendEmail, UpdateInventory
//   Usually has a single consumer

// Query: a request for data (no side effects)
//   GetOrderStatus, ListUserOrders
//   Usually synchronous REST/GraphQL
```
```javascript
// EDA shines for events:
const event = {
  type: 'order.placed',
  id: uuid(),
  timestamp: new Date().toISOString(),
  version: '1.0',
  data: {
    orderId: '12345',
    userId: 'u-789',
    items: [{ productId: 'p-1', quantity: 2 }],
    total: 99.98
  }
};

// → Inventory service: reserve stock
// → Payment service: charge card
// → Email service: send confirmation
// → Analytics service: record sale
```
SQS vs SNS vs Kafka — when to use each
```javascript
// SQS (Simple Queue Service): point-to-point, at-least-once delivery
// - One consumer group processes each message
// - Visibility timeout: message hidden while a consumer processes it
// - DLQ: messages that fail N retries land in a dead-letter queue
// Use for: task queues, background jobs, decoupled services
```
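The redrive-to-DLQ behavior can be modeled with a toy in-memory queue (a hypothetical class, not the AWS SDK — real SQS uses time-based visibility timeouts rather than explicit nacks):

```javascript
// Toy point-to-point queue illustrating receive/retry/DLQ semantics.
class ToyQueue {
  constructor(maxReceives = 3) {
    this.messages = [];
    this.dlq = [];
    this.maxReceives = maxReceives;
  }
  send(body) {
    this.messages.push({ body, receiveCount: 0 });
  }
  // Receive hides the message from other consumers until deleted or failed
  receive() {
    const msg = this.messages.shift();
    if (!msg) return null;
    msg.receiveCount += 1;
    return msg;
  }
  // Consumer failed: message becomes visible again, or moves to the DLQ
  nack(msg) {
    if (msg.receiveCount >= this.maxReceives) this.dlq.push(msg);
    else this.messages.push(msg);
  }
}

const q = new ToyQueue(2);
q.send('charge-order-12345');
let m = q.receive();
q.nack(m); // attempt 1 fails → back on the queue
m = q.receive();
q.nack(m); // attempt 2 fails → redriven to DLQ
console.log(q.dlq.length);      // → 1
console.log(q.messages.length); // → 0
```

The DLQ is where you investigate poison messages instead of retrying them forever.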
```javascript
// SNS (Simple Notification Service): fan-out pub/sub
// - One message → many subscribers (SQS, Lambda, HTTP, email)
// - No replay, no ordering guarantee (standard topics)
// Use for: notification fan-out, cross-service events

// SNS + SQS pattern (reliable fan-out):
//   OrderPlaced → SNS Topic
//     → SQS Queue → Inventory Lambda
//     → SQS Queue → Payment Lambda
//     → SQS Queue → Email Lambda
// Each service gets its own queue with DLQ and retry logic
```
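The fan-out step amounts to the topic copying each published message into every subscriber's queue. A minimal in-memory sketch (hypothetical class, not the AWS SDK):

```javascript
// Toy SNS→SQS fan-out: the topic copies each published message
// into every subscribed queue, so services consume independently.
class ToyTopic {
  constructor() { this.queues = []; }
  subscribe(queue) { this.queues.push(queue); }
  publish(message) {
    // Each subscriber gets its OWN copy — one slow or failing
    // consumer cannot block or steal another service's message
    for (const q of this.queues) q.push({ ...message });
  }
}

const inventoryQ = [], paymentQ = [], emailQ = [];
const topic = new ToyTopic();
topic.subscribe(inventoryQ);
topic.subscribe(paymentQ);
topic.subscribe(emailQ);

topic.publish({ type: 'order.placed', orderId: '12345' });

console.log(inventoryQ.length, paymentQ.length, emailQ.length); // → 1 1 1
```

This per-queue isolation is why SNS+SQS is preferred over direct SNS→Lambda for anything that needs retries and a DLQ.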
```javascript
// Kafka: ordered, replayable event streams
// - Events persisted to disk (configurable retention)
// - Consumer groups: each group independently receives all events
// - Partitions: parallel consumption within a group;
//   ordering is guaranteed only within a partition
// Use for: event sourcing, audit logs, data pipelines, replay
```
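Because ordering holds only within a partition, producers keep all events for one entity in order by keying on its ID: the partitioner hashes the key to pick a partition. A simplified sketch (toy hash for illustration — real Kafka uses murmur2):

```javascript
// Toy keyed partitioner: same key → same partition → per-key ordering.
function toyHash(key) {
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}

function partitionFor(key, numPartitions) {
  return toyHash(key) % numPartitions;
}

// All events keyed 'order-12345' land on one partition, so
// 'order.placed' is always consumed before 'order.shipped'
const p1 = partitionFor('order-12345', 6);
const p2 = partitionFor('order-12345', 6);
console.log(p1 === p2); // → true
```

The trade-off: a hot key (one very busy entity) concentrates load on a single partition.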
The Outbox Pattern — no lost events
```javascript
// Problem: writing to the DB and publishing to the queue are two operations.
// If the queue publish fails after the DB write, the event is lost forever.

// WRONG:
async function placeOrder(order) {
  await db.saveOrder(order);                 // Saved
  await queue.publish('OrderPlaced', order); // If this fails: lost!
}

// RIGHT: Outbox pattern
async function placeOrder(order) {
  await db.transaction(async trx => {
    await trx.saveOrder(order);
    await trx.saveOutbox({
      type: 'OrderPlaced',
      payload: order,
      status: 'pending'
    });
  }); // Atomic: both succeed or both fail
}

// Separate process: poll the outbox and publish
async function processOutbox() {
  const pending = await db.getOutbox({ status: 'pending' });
  for (const event of pending) {
    await queue.publish(event.type, event.payload);
    // If this update fails, the event is published again on the next
    // poll — at-least-once delivery, so consumers must be idempotent
    await db.updateOutbox(event.id, { status: 'published' });
  }
}
```
Consumer groups and parallel processing
```javascript
// Consumer group: multiple consumers sharing a topic's partitions.
// Each message is processed by ONE consumer in the group.

// AWS SQS with multiple Lambda instances:
//   SQS → Lambda (scales automatically to match queue depth)
//   Each Lambda instance processes different messages in parallel

// Kafka consumer group:
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ brokers: ['kafka:9092'] });
const consumer = kafka.consumer({ groupId: 'order-processor' });

await consumer.connect(); // Connect before subscribing and running
await consumer.subscribe({ topic: 'orders', fromBeginning: false });
await consumer.run({
  partitionsConsumedConcurrently: 4, // Process 4 partitions in parallel
  eachMessage: async ({ topic, partition, message }) => {
    const event = JSON.parse(message.value.toString());
    await processOrderEvent(event);
  }
});
```
- ✅ Events are immutable facts — never update an event, publish a new one
- ✅ Outbox pattern for reliable event publishing with ACID guarantees
- ✅ SQS DLQ for every consumer — catch and investigate failed messages
- ✅ Idempotent consumers — handle duplicate delivery gracefully
- ✅ Consumer group ID per service — all instances share work
- ❌ Never lose events by publishing without persistence
- ❌ Never use events for synchronous operations that need immediate response
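Since both the outbox poller and SQS deliver at-least-once, the idempotency rule above can be sketched with a processed-ID store (hypothetical helper — in production this is a DB table with a unique constraint on the event id, not an in-memory Set):

```javascript
// Idempotent consumer: deduplicate on the event's unique id so a
// redelivered message is acknowledged without repeating side effects.
const processed = new Set();
let chargesMade = 0;

function handlePaymentEvent(event) {
  if (processed.has(event.id)) return 'duplicate-skipped';
  processed.add(event.id);
  chargesMade += 1; // the real side effect (charge the card) goes here
  return 'processed';
}

const event = { id: 'evt-1', type: 'order.placed', orderId: '12345' };
handlePaymentEvent(event); // first delivery: charges once
handlePaymentEvent(event); // redelivery: skipped
console.log(chargesMade); // → 1
```

This is why every event envelope carries a unique `id` — without it, consumers have nothing stable to deduplicate on.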
Event-driven architecture powers the AWS fraud detection architecture and the clickstream analytics pipeline. External reference: Martin Fowler on event-driven architecture.
Recommended Reading
→ Designing Data-Intensive Applications — The essential book every senior developer needs.
→ The Pragmatic Programmer — Timeless engineering wisdom for writing better code.