Your Node.js server is running fine at 10 requests/second. At 100 requests/second, memory climbs steadily. At 500, it OOMs and dies — no error message, just a dead process. The cause is almost always the same: unhandled stream backpressure. This is one of the most common production bugs in Node.js applications and one of the least understood.
⚡ TL;DR: Backpressure happens when a writable stream can't consume data as fast as a readable stream produces it. The unconsumed data buffers in memory without bound. Fix it with pipe(), pipeline(), or by respecting the boolean returned by write().
What Backpressure Actually Is
In Node.js streams, data flows from a Readable source to a Writable destination. The writable stream has an internal buffer with a size limit called highWaterMark (default: 16384 bytes, i.e. 16KB, for byte streams; 16 objects for object-mode streams). When the writable buffer fills up, write() returns false — the signal that the producer should pause. If your code ignores this signal and keeps writing, data accumulates in memory with no bound.
The Bug — Ignoring the write() Return Value
// ❌ Classic backpressure bug — write() return ignored
const fs = require('fs');
const readable = fs.createReadStream('huge-file.csv');
const writable = someSlowDestination.createWriteStream(); // any slow writable
readable.on('data', (chunk) => {
writable.write(chunk); // Returns false when buffer full — IGNORED!
// readable keeps firing 'data' events, chunks pile up in memory
});
// Result: memory grows until process crashes
// ✅ Fix 1: Respect the write() return value manually
readable.on('data', (chunk) => {
const ok = writable.write(chunk);
if (!ok) {
readable.pause(); // Stop reading when buffer full
writable.once('drain', () => readable.resume()); // Resume when drained
}
});
readable.on('end', () => writable.end());
Fix 2: pipe() — Automatic Backpressure Handling
// ✅ pipe() handles backpressure automatically
// It pauses/resumes the readable based on writable drain events
readable.pipe(writable);
// With transform streams (e.g., gzip compression)
const zlib = require('zlib');
fs.createReadStream('input.csv')
.pipe(zlib.createGzip()) // Transform: compress
.pipe(fs.createWriteStream('output.csv.gz')); // Write compressed
// Limitation of pipe(): errors don't propagate cleanly
// If transform throws, writable is not automatically closed
Fix 3: pipeline() — The Production-Ready Solution
const { pipeline } = require('stream/promises'); // Node.js 15+
const zlib = require('zlib');
async function processFile() {
try {
await pipeline(
fs.createReadStream('input.csv'),
zlib.createGzip(),
fs.createWriteStream('output.csv.gz')
);
console.log('Done — all streams closed cleanly');
} catch (err) {
// All streams automatically destroyed on error
console.error('Pipeline failed:', err);
}
}
// pipeline() advantages over pipe():
// ✅ Propagates errors to all streams automatically
// ✅ Destroys all streams on error (prevents leaks)
// ✅ Returns a Promise (works with async/await)
// ✅ Handles backpressure identically to pipe()
Detecting Backpressure in Production
// Monitor stream buffer sizes to detect backpressure
const { Writable } = require('stream');
class MonitoredWritable extends Writable {
_write(chunk, encoding, callback) {
const bufferLength = this.writableLength;
const hwm = this.writableHighWaterMark;
if (bufferLength > hwm * 0.8) {
console.warn(`Buffer at ${Math.round(bufferLength/hwm*100)}% capacity — backpressure building`);
}
// Simulate slow write
setTimeout(() => callback(), 10);
}
}
// Check stream buffer programmatically
setInterval(() => {
const memUsage = process.memoryUsage();
console.log({
heapUsed: Math.round(memUsage.heapUsed / 1024 / 1024) + 'MB',
external: Math.round(memUsage.external / 1024 / 1024) + 'MB',
});
}, 1000);
// Steadily growing 'external' memory usually means Buffer chunks piling up in stream buffers — a classic backpressure symptom
HTTP Streaming with Backpressure — Express Example
const express = require('express');
const app = express();
// ❌ BAD: No backpressure handling — client slow = server OOM
app.get('/download', (req, res) => {
const stream = fs.createReadStream('huge-dataset.json');
stream.on('data', chunk => res.write(chunk)); // Ignores backpressure!
stream.on('end', () => res.end());
});
// ✅ GOOD: pipeline() handles everything
app.get('/download', async (req, res) => {
res.setHeader('Content-Type', 'application/json');
res.setHeader('Transfer-Encoding', 'chunked');
try {
await pipeline(
fs.createReadStream('huge-dataset.json'),
res // res is a Writable stream — pipeline handles backpressure
);
} catch (err) {
    // Premature close means the client disconnected; normal, not an error
    if (err.code === 'ERR_STREAM_PREMATURE_CLOSE') return;
    // Real failure: headers may already be sent, so guard before setting a status
    if (!res.headersSent) res.status(500);
    res.end('Stream error');
}
});
highWaterMark Tuning for Performance
// Default highWaterMark is 16KB for most streams — but fs.createReadStream uses 64KB
// Increase for better throughput, decrease to limit memory usage
// Large file transfer — bigger buffer = fewer pause/resume cycles
const readable = fs.createReadStream('video.mp4', {
  highWaterMark: 1024 * 1024 // 1MB chunks instead of the 64KB fs default
});
// Object mode streams (e.g., database rows)
const dbStream = db.query('SELECT * FROM logs').stream({
highWaterMark: 100 // Buffer 100 objects (rows) instead of default 16
});
// Rule of thumb:
// - High throughput, large files: highWaterMark = 64KB–1MB
// - Memory-constrained: highWaterMark = 4KB–16KB
// - Object streams: highWaterMark = number of objects to buffer (default 16)
Backpressure Cheat Sheet
- ✅ Always use pipeline() for production stream processing
- ✅ Check the write() return value: false means pause your readable
- ✅ Listen for the 'drain' event before resuming after a false write
- ✅ Monitor process.memoryUsage().external for buffer leaks
- ✅ Tune highWaterMark based on your throughput requirements
- ❌ Avoid bare pipe() in production: it handles backpressure but doesn't propagate errors or destroy streams on failure
- ❌ Never ignore the boolean returned by write()
Stream backpressure is a specific manifestation of the broader Node.js event loop blocking problem. For deploying streaming APIs on AWS, the Lambda cold start guide covers streaming responses from Lambda function URLs — a newer pattern that sidesteps some backpressure complexity. External reference: Node.js official backpressure guide.
Recommended resources
- Node.js Design Patterns (3rd Edition) — The streams chapter is the most comprehensive treatment of backpressure, piping, and the Transform stream pattern in any book. Directly extends this post.
- You Don’t Know JS: Async & Performance — Understanding JavaScript’s async model at depth makes streams and backpressure intuitive rather than mysterious.
Disclosure: This post contains affiliate links. If you purchase through these links, CheatCoders earns a small commission at no extra cost to you. We only recommend tools and books we genuinely find valuable.