Python asyncio vs Threading vs Multiprocessing: When to Use Each

Python concurrency confuses developers because there are three completely different models — and choosing the wrong one gives you zero speedup. The GIL prevents true thread parallelism for CPU-bound work. asyncio gives concurrency without threads. multiprocessing bypasses the GIL. This guide explains exactly when each wins.

TL;DR: asyncio for I/O-bound with many connections (web scraping, API calls, WebSockets). threading for I/O-bound that uses blocking libraries. multiprocessing for CPU-bound work (number crunching, image processing). The GIL blocks threading for CPU work.

The GIL — why threading has limits

# GIL (Global Interpreter Lock): only ONE thread runs Python bytecode at a time
# Threading DOES help for I/O-bound: GIL released during I/O waits
# Threading DOES NOT help for CPU-bound: GIL held during computation

# CPU-bound benchmark:
import threading
import time
from multiprocessing import Pool

def count(n):
    for _ in range(n):
        pass

# The __main__ guard is required: under the spawn start method,
# multiprocessing re-imports this module in each worker process.
if __name__ == "__main__":
    # Sequential: ~2.1s
    count(100_000_000)
    count(100_000_000)

    # Threaded: ~2.1s (same! GIL prevents parallelism)
    t1 = threading.Thread(target=count, args=(100_000_000,))
    t2 = threading.Thread(target=count, args=(100_000_000,))
    t1.start(); t2.start(); t1.join(); t2.join()

    # Multiprocessing: ~1.1s (true parallelism, bypasses GIL)
    with Pool(2) as p:
        p.map(count, [100_000_000, 100_000_000])
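The flip side of the benchmark above: during I/O waits the GIL is released, so threads really do overlap. A minimal stdlib-only sketch, using time.sleep as a stand-in for a network or disk wait:

```python
import threading
import time

def fake_io(seconds):
    # time.sleep releases the GIL, just like a real blocking I/O call
    time.sleep(seconds)

start = time.perf_counter()
threads = [threading.Thread(target=fake_io, args=(0.5,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
# Four 0.5s waits overlap: total wall time is ~0.5s, not ~2s
```

Swap `fake_io` for `count` from the benchmark above and the four threads take roughly 4x the single-thread time instead.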

asyncio — for I/O-bound with many connections

# Requires the third-party aiohttp package (pip install aiohttp)
import asyncio
import aiohttp

async def fetch_url(session, url):
    async with session.get(url) as response:
        return await response.json()

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_url(session, url) for url in urls]
        return await asyncio.gather(*tasks)  # all concurrent, one thread!

# results = asyncio.run(fetch_all(urls))

# 100 URLs:
# Sequential (requests): 20 seconds
# asyncio (aiohttp): 0.5 seconds (40x faster)
# Threads (requests): 1.5 seconds
# asyncio wins for high-concurrency I/O
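The gather pattern itself needs no third-party library; a stdlib-only sketch using asyncio.sleep as a stand-in for a network round trip shows the same concurrency:

```python
import asyncio
import time

async def fake_fetch(i):
    await asyncio.sleep(0.5)  # stand-in for a network round trip
    return i * 2

async def main():
    # Ten "requests" run concurrently on a single thread
    return await asyncio.gather(*(fake_fetch(i) for i in range(10)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
# Ten 0.5s waits overlap: ~0.5s total, and gather preserves input order
```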

concurrent.futures — unified API for both

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import requests  # third-party (pip install requests)

urls = ['https://api.example.com/' + str(i) for i in range(100)]

# I/O-bound: ThreadPoolExecutor
def fetch(url):
    return requests.get(url).json()

# CPU-bound: ProcessPoolExecutor (functions must be defined at module level)
def crunch(data):
    return sum(x**2 for x in data)

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=20) as ex:
        results = list(ex.map(fetch, urls))

    # big_datasets: any list of number sequences, e.g. chunks of a large array
    big_datasets = [range(1_000_000)] * 4
    with ProcessPoolExecutor() as ex:
        results = list(ex.map(crunch, big_datasets))
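ex.map returns results in input order and re-raises the first exception, losing the rest. When tasks can fail independently, the submit plus as_completed pattern (also in concurrent.futures) lets you handle each outcome as it finishes. A sketch with a toy function standing in for a flaky fetch:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def maybe_fail(n):
    # Toy stand-in for a task that fails on some inputs
    if n % 3 == 0:
        raise ValueError(f"bad input: {n}")
    return n * n

results, errors = {}, {}
with ThreadPoolExecutor(max_workers=4) as ex:
    futures = {ex.submit(maybe_fail, n): n for n in range(10)}
    for fut in as_completed(futures):
        n = futures[fut]
        try:
            results[n] = fut.result()  # re-raises the worker's exception here
        except ValueError as exc:
            errors[n] = str(exc)
```

One failing task no longer poisons the batch; you collect six results and four errors instead of one traceback.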

Decision guide

  • ✅ asyncio: many concurrent I/O connections, async libraries available (aiohttp, asyncpg)
  • ✅ threading: I/O-bound with sync libraries you cannot change
  • ✅ multiprocessing: CPU-bound (computation, image processing, ML inference)
  • ✅ concurrent.futures: simple unified API for both thread and process pools
  • ❌ Never use threading for CPU-bound — GIL prevents any speedup
  • ❌ Never use asyncio with blocking libraries — blocks the event loop
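If you are stuck with one blocking call inside otherwise-async code, the standard escape hatch is asyncio.to_thread (Python 3.9+), which runs the blocking function in a worker thread so the event loop stays responsive. A sketch using time.sleep as a stand-in for a blocking library call such as requests.get:

```python
import asyncio
import time

def blocking_work(n):
    time.sleep(0.2)  # stand-in for a blocking library call
    return n + 1

async def main():
    # Each blocking call runs in a thread pool; the event loop keeps running
    return await asyncio.gather(
        *(asyncio.to_thread(blocking_work, i) for i in range(5))
    )

results = asyncio.run(main())
```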

Python concurrency connects to Python performance optimization — understanding the GIL determines which technique to reach for first. External reference: Python asyncio documentation.
