A CI/CD pipeline that takes 30 minutes is not a productivity tool — it is a productivity tax. Developers context-switch away, forget what they were doing, and batch changes to avoid the wait. These best practices get pipelines under 10 minutes without sacrificing safety: parallel jobs, smart caching, the right test pyramid, and deployment strategies that make rollback a one-click operation.
⚡ TL;DR: Parallelize independent jobs. Cache dependencies aggressively (node_modules, pip, Docker layers). Use test pyramid (many unit, few integration, fewer E2E). Deploy with blue-green or canary. Automate rollback on error rate spike. Never merge to main without green CI.
The fast pipeline template
```yaml
# .github/workflows/ci.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  # Run all checks in parallel — don't wait for each other!
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }  # Caches ~/.npm (not node_modules)
      - run: npm ci --prefer-offline
      - run: npm run lint
    # Typical: 45 seconds

  type-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci --prefer-offline
      - run: npx tsc --noEmit
    # Typical: 30 seconds

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci --prefer-offline
      - run: npm test -- --coverage
    # Typical: 2-3 minutes

  build:
    runs-on: ubuntu-latest
    needs: [lint, type-check, test]  # Only build if all checks pass
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      # Assumes a docker/login-action step (or registry credentials) before pushing
      - uses: docker/build-push-action@v5
        with:
          cache-from: type=gha        # GitHub Actions cache for Docker layers!
          cache-to: type=gha,mode=max
          push: true
          tags: my-image:${{ github.sha }}

  deploy:
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main'  # Only deploy from main
    steps:
      - run: |
          # Rolling deployment: ECS gradually replaces old tasks with new ones.
          # (A true blue-green cutover needs CodeDeploy or a load-balancer traffic shift.)
          aws ecs update-service --service my-api --force-new-deployment
```
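Note that `--force-new-deployment` triggers ECS's standard rolling deployment, not a true blue-green cutover. One way to get an actual blue-green switch is to run the new version behind a second load-balancer target group and flip the listener over in one step. A minimal sketch as a workflow step — the listener and target-group ARNs are placeholders, not from the pipeline above:

```yaml
# Hypothetical blue-green cutover step (ARNs are placeholders)
- run: |
    # Point the ALB listener at the "green" target group in one atomic change...
    aws elbv2 modify-listener \
      --listener-arn "$LISTENER_ARN" \
      --default-actions Type=forward,TargetGroupArn="$GREEN_TG_ARN"
    # ...and keep the "blue" target group running for instant rollback.
```

Because the old ("blue") tasks stay up, rollback is the same command with the blue target group ARN.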
Caching strategy — biggest speed win
```yaml
# Node.js dependency caching
- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: npm                 # Caches ~/.npm (not node_modules!)
- run: npm ci --prefer-offline # Uses the cache, fast
# Without cache: 45s to install. With cache: 5s.

# Python caching
- uses: actions/setup-python@v5
  with:
    python-version: '3.12'
    cache: pip

# Docker layer caching — biggest Docker build speedup
- uses: docker/build-push-action@v5
  with:
    cache-from: type=gha         # Read from GitHub Actions cache
    cache-to: type=gha,mode=max  # Write to GitHub Actions cache
# First build: 4 minutes. Subsequent (no code change): 20 seconds.

# Order Dockerfile layers for maximum cache hits:
#   COPY package.json .   ← changes weekly
#   RUN npm ci            ← cached unless package.json changed
#   COPY src/ .           ← changes per commit (put it last!)
```
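The layer-ordering comments above correspond to a Dockerfile roughly like this. A sketch, not a drop-in file: the base image, file layout, and flags are assumptions.

```dockerfile
FROM node:20-slim
WORKDIR /app

# Dependency layers change rarely → Docker reuses them from cache
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Source changes every commit → copy it last so the layers above stay cached
COPY src/ ./src/
CMD ["node", "src/index.js"]
```

With this ordering, a commit that only touches `src/` skips the `npm ci` layer entirely, which is where most of the build time goes.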
Test pyramid — right balance
```
# The test pyramid for fast CI:
#
#         /\           E2E tests: 5-10 tests
#        /  \          Run in CI on main only, not every PR
#       /----\         (Playwright, Cypress)
#      /      \
#     /--------\       Integration tests: 50-100 tests
#    /          \      Test API routes, DB queries, service contracts
#   /------------\     Run on every PR, target < 2 min total
#  /              \
# /----------------\   Unit tests: 500-2000 tests
#                      Fast, isolated, no I/O
#                      Run on every commit, target < 30s

# Split tests by speed in CI:
#   Fast job (every commit): unit tests only
#   Slower job (every PR):   unit + integration
#   Nightly job:             full E2E suite

# Parallelize test files:
- run: npx jest --maxWorkers=4 --testPathPattern=unit
```
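The speed split above maps onto separate workflow triggers: PR-triggered jobs run the fast tiers, and the full E2E suite runs on a schedule. A sketch of the nightly workflow — the cron time and test command are illustrative:

```yaml
# .github/workflows/nightly-e2e.yml (hypothetical)
name: Nightly E2E
on:
  schedule:
    - cron: '0 3 * * *'  # 03:00 UTC every night

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci --prefer-offline
      - run: npx playwright test  # full E2E suite, too slow for every PR
```

A failed nightly run then opens an issue or pings a channel instead of blocking anyone's merge.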
CI/CD best practices checklist
- ✅ Parallelize lint, type-check, test — never run them sequentially
- ✅ Cache dependencies and Docker layers — single biggest speed win
- ✅ Fail fast — run fastest checks first, skip slow ones on failure
- ✅ Cache test results — Jest `--cache` (on by default), pytest's built-in cache (`--lf` reruns last failures)
- ✅ Deploy only on green main — protected branch rules
- ✅ Automated rollback — monitor error rates, auto-rollback on spike
- ❌ Never run E2E tests on every commit — too slow
- ❌ Never deploy directly from feature branches
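The automated-rollback item can start as a simple post-deploy smoke check. A hedged sketch as a workflow step: the health URL, service name, and task-definition revision are placeholders, and a production setup would watch error-rate metrics rather than a single endpoint.

```yaml
- name: Smoke test and auto-rollback (placeholders throughout)
  run: |
    sleep 30  # give the new tasks time to register with the load balancer
    if ! curl -fsS https://api.example.com/health; then
      # Roll back by redeploying the previous task definition revision
      aws ecs update-service --service my-api \
        --task-definition my-api:PREVIOUS_REVISION
      exit 1  # fail the pipeline so the bad deploy is visible
    fi
```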
CI/CD pipelines deploy Docker containers — the Docker best practices guide ensures your containers are optimized for fast builds. For AWS deployments, the Lambda ARM64 guide covers cross-platform builds for ARM targets. External reference: GitHub Actions documentation.