Go’s concurrency primitives are the language’s biggest selling point and its biggest footgun. Goroutines are cheap (2KB stack) and Go can run millions simultaneously. Channels make communication explicit. But goroutine leaks, deadlocks, and data races are easy to introduce silently. These are the patterns that prevent production incidents.
⚡ TL;DR: Always give a goroutine a way to stop (context cancellation or a done channel). Use buffered channels for producer-consumer. Use sync.WaitGroup to wait for completion. Use errgroup for error propagation. Use worker pools to limit concurrency. The -race flag catches data races at development time.
Goroutines vs OS threads
// Goroutines are NOT OS threads
// OS thread: ~1MB stack, kernel-managed, expensive context switch
// Goroutine: ~2KB stack (grows dynamically), Go runtime-managed, cheap
// You can run 1 million goroutines:
for i := 0; i < 1_000_000; i++ {
    go func(i int) {
        time.Sleep(10 * time.Second)
    }(i)
}
// Memory: ~2GB for 1M goroutines (2KB each)
// vs 1TB for 1M OS threads (1MB each)
// GOMAXPROCS controls how many OS threads run goroutines in parallel
// Default: number of CPUs
// Production: leave at default — Go scheduler handles it
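The claim above is easy to check with a short runnable sketch. It spawns 100,000 goroutines instead of a million so the demo stays quick (the numbers scale linearly); spawnParked and its park-on-a-channel trick are illustrative, not a standard API:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// spawnParked starts n goroutines that block until release is closed,
// then reports how many goroutines are currently alive.
func spawnParked(n int) (count int, release chan struct{}, wg *sync.WaitGroup) {
	release = make(chan struct{})
	wg = &sync.WaitGroup{}
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			<-release // park cheaply until released
		}()
	}
	return runtime.NumGoroutine(), release, wg
}

func main() {
	count, release, wg := spawnParked(100_000)
	fmt.Println("goroutines alive:", count) // at least 100000
	close(release)                          // let every goroutine exit
	wg.Wait()
}
```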
Worker pool pattern — limit concurrency
func workerPool(jobs <-chan Job, results chan<- Result, numWorkers int) {
    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobs { // Receive until channel closed
                results <- processJob(job)
            }
        }()
    }
    go func() {
        wg.Wait()
        close(results) // Signal consumers we're done
    }()
}
// Usage:
jobs := make(chan Job, 100) // Buffered: don't block producer
results := make(chan Result, 100)
go workerPool(jobs, results, 10) // 10 concurrent workers max
// Send jobs:
for _, j := range allJobs {
    jobs <- j
}
close(jobs) // Signal workers: no more jobs
// Collect results:
for result := range results {
    process(result)
}
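Putting the pattern together, here is a self-contained sketch with int jobs standing in for the Job/Result types above; startPool mirrors workerPool but squares numbers so the output is checkable:

```go
package main

import (
	"fmt"
	"sync"
)

// startPool launches numWorkers goroutines that square each job, and
// closes results once every worker has drained the jobs channel.
func startPool(jobs <-chan int, results chan<- int, numWorkers int) {
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for job := range jobs { // receive until jobs is closed
				results <- job * job
			}
		}()
	}
	go func() {
		wg.Wait()
		close(results) // signal the consumer we're done
	}()
}

func main() {
	jobs := make(chan int, 16)
	results := make(chan int, 16)
	startPool(jobs, results, 4) // at most 4 concurrent workers

	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("sum of squares:", sum) // 1+4+9+16+25 = 55
}
```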
Context cancellation — the correct way
func longRunningTask(ctx context.Context) error {
    for {
        select {
        case <-ctx.Done():
            return ctx.Err() // context.Canceled or DeadlineExceeded
        default:
            // Do one unit of work
            if err := doWork(); err != nil {
                return err
            }
        }
    }
}
// With timeout:
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel() // ALWAYS defer cancel to release resources
err := longRunningTask(ctx)
// Propagate context through call chain:
func handleRequest(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context() // Request context — cancelled when client disconnects
    result, err := database.Query(ctx, "SELECT ...") // Pass ctx down
    if err != nil {
        if errors.Is(err, context.Canceled) {
            return // Client disconnected — stop work
        }
        http.Error(w, "query failed", http.StatusInternalServerError)
        return
    }
    fmt.Fprintln(w, result) // ... render the result
}
errgroup — parallel tasks with error propagation
import "golang.org/x/sync/errgroup"
func fetchAll(ctx context.Context, ids []string) ([]Data, error) {
    g, ctx := errgroup.WithContext(ctx)
    results := make([]Data, len(ids))
    for i, id := range ids {
        i, id := i, id // Capture loop variables!
        g.Go(func() error {
            data, err := fetch(ctx, id)
            if err != nil {
                return fmt.Errorf("fetch %s: %w", id, err)
            }
            results[i] = data // Safe: each goroutine writes different index
            return nil
        })
    }
    if err := g.Wait(); err != nil {
        return nil, err // Returns first error; cancels others via ctx
    }
    return results, nil
}
// errgroup cancels the group context when first goroutine errors
// All other goroutines should check ctx.Done() to stop early
Fan-out / fan-in pattern
// Fan-out: distribute work across multiple goroutines
func fanOut(input <-chan Item, numWorkers int) []<-chan Result {
    channels := make([]<-chan Result, numWorkers)
    for i := 0; i < numWorkers; i++ {
        channels[i] = worker(input) // each worker reads the shared input, returns its own output channel
    }
    return channels
}
// Fan-in: merge multiple channels into one
func fanIn(channels ...<-chan Result) <-chan Result {
    merged := make(chan Result)
    var wg sync.WaitGroup
    for _, ch := range channels {
        wg.Add(1)
        go func(c <-chan Result) {
            defer wg.Done()
            for result := range c {
                merged <- result
            }
        }(ch)
    }
    go func() {
        wg.Wait()
        close(merged)
    }()
    return merged
}
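Wiring the two halves together, a minimal runnable version with int channels and a square worker (stand-ins for the Item/Result types and the unspecified worker function above):

```go
package main

import (
	"fmt"
	"sync"
)

// square is one fan-out worker: it reads from the shared input channel
// and publishes results on its own output channel.
func square(input <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range input {
			out <- n * n
		}
	}()
	return out
}

// merge is the fan-in half: it multiplexes every worker channel onto one.
func merge(channels ...<-chan int) <-chan int {
	merged := make(chan int)
	var wg sync.WaitGroup
	for _, ch := range channels {
		wg.Add(1)
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				merged <- v
			}
		}(ch)
	}
	go func() {
		wg.Wait()
		close(merged)
	}()
	return merged
}

func main() {
	input := make(chan int)
	go func() {
		defer close(input)
		for i := 1; i <= 4; i++ {
			input <- i
		}
	}()

	sum := 0
	for v := range merge(square(input), square(input), square(input)) {
		sum += v // arrival order is nondeterministic; the sum is not
	}
	fmt.Println("sum:", sum) // 1+4+9+16 = 30
}
```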
Detecting goroutine leaks
// Common goroutine leak: goroutine blocked on channel send/receive with no reader
func leak() {
    ch := make(chan int) // Unbuffered
    go func() {
        ch <- computeResult() // Blocked forever if nobody reads ch!
    }()
    // Function returns, ch goes out of scope, goroutine leaks
}
// Fix: use context or buffered channel
func noLeak(ctx context.Context) {
    ch := make(chan int, 1) // Buffered: goroutine can always send
    go func() {
        select {
        case ch <- computeResult():
        case <-ctx.Done(): // Goroutine exits if context cancelled
        }
    }()
}
// Detect leaks in tests:
// goleak.VerifyNone(t) from go.uber.org/goleak
// Also: expvar.Func to expose goroutine count via /debug/vars
import "runtime"
fmt.Println(runtime.NumGoroutine()) // Track in production metrics
Go concurrency checklist
- ✅ Always give goroutines a way to stop — context cancellation or a done channel
- ✅ Use errgroup for parallel tasks that need error propagation
- ✅ Use worker pools to control the maximum number of concurrent goroutines
- ✅ Capture loop variables with i, id := i, id before the goroutine closure (unnecessary since Go 1.22, where loop variables are per-iteration)
- ✅ Always defer cancel() immediately after context.WithTimeout
- ✅ Run go test -race — it catches data races the compiler misses
- ❌ Never share mutable data between goroutines without synchronization
- ❌ Never start a goroutine without ensuring it can exit
- ❌ Never use time.Sleep to wait for goroutines — use a WaitGroup or channels
Go's concurrency model solves many of the same problems as the Node.js async hooks tracing approach, but at a lower level. For system-design context on choosing Go vs Node.js for concurrent workloads, the Python asyncio vs threading benchmark covers the same concurrency trade-offs from a different language's perspective. External reference: the Go blog's posts on concurrency patterns.
