2025, Nov 27 15:00
Understanding Django 5 PostgreSQL connection pooling: per-process psycopg3 pools vs shared PgBouncer
Compare Django 5 psycopg3 connection pooling with PgBouncer. Learn how Gunicorn workers, threads and async affect per-process vs shared PostgreSQL pools.
When people compare a FastAPI-style "one process with a shared pool" setup to a typical Django deployment behind Gunicorn, the confusion usually comes down to how many worker processes you run and what kind of concurrency each one provides. Django 5.1 added native PostgreSQL connection pooling via psycopg3, and there is also the long-standing option of running PgBouncer as a separate pooling proxy. Understanding where a pool lives and who can actually use it at the same time is the key to picking the right approach.
What you are really running
In production, Django apps are usually managed by a server like Gunicorn. That server can run your code in several ways. A process can be fully synchronous and handle only one request at a time. The same process might be synchronous but use multiple threads so several requests are handled in parallel within that single process. Or the process can be asynchronous, using an event loop to serve multiple requests concurrently while I/O is in flight. These modes matter far more for pooling than whether a worker process is "permanent" in the philosophical sense; the practical question is whether multiple in-flight requests within the same process can use multiple database connections simultaneously.
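To make the three modes concrete, here is how they typically look as Gunicorn command lines. The module paths (myproject.wsgi, myproject.asgi) are placeholders for your own project, and the worker counts are illustrative:

# One fully synchronous request at a time per process
gunicorn myproject.wsgi:application --workers 4

# Synchronous workers with threads: Gunicorn switches to its threaded
# worker, allowing up to workers * threads requests in flight
gunicorn myproject.wsgi:application --workers 4 --threads 8

# Asynchronous workers: each process runs an event loop
# (requires uvicorn to be installed)
gunicorn myproject.asgi:application --workers 4 -k uvicorn.workers.UvicornWorker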
How Django’s native pool works
Django’s built-in pooling, introduced in 5.1, uses psycopg3 and the psycopg_pool library. The pool exists inside a single process and is not shared across processes. Internally, a background worker thread manages the connections, which makes the behavior conceptually similar to how an asyncpg pool works in a single FastAPI process. The crucial distinction is that if you run multiple processes, each process gets its own independent pool.
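For reference, enabling the built-in pool is a DATABASES option. This is a minimal sketch: the database name, host, and pool sizes are illustrative, and the dict under "pool" is forwarded to psycopg_pool.ConnectionPool as keyword arguments:

# settings.py (Django 5.1+, psycopg 3 installed with pool support)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "HOST": "db.example.internal",
        "OPTIONS": {
            # "pool": True would use the psycopg_pool defaults;
            # a dict is passed through to psycopg_pool.ConnectionPool.
            "pool": {
                "min_size": 2,
                "max_size": 8,
                "timeout": 10,
            },
        },
    },
}

Each Gunicorn worker process that imports these settings builds its own such pool, which is exactly the per-process behavior described above.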
Where PgBouncer fits
PgBouncer is an external process. It builds and manages its own connection pool in its own address space, and every Django worker can talk to it. Because it runs outside your application, the pool is naturally shared across processes.
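For orientation, here is a minimal PgBouncer configuration sketch; the hostnames, ports, and sizes are illustrative, and a real deployment also needs authentication settings:

; pgbouncer.ini
[databases]
mydb = host=db.example.internal port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20

On the Django side you then point HOST and PORT at PgBouncer (6432 here) rather than at PostgreSQL directly. With pool_mode = transaction, you should also set "DISABLE_SERVER_SIDE_CURSORS": True in the database configuration, since server-side cursors do not survive transaction-level pooling.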
Code example: a pool that cannot help a single-threaded worker
Consider a simplified model. A single-threaded synchronous worker can process only one request at a time. Even if you create a pool with many connections, that worker can only check out one connection during that request. The extra connections sit idle.
import time
from queue import LifoQueue

class TinyPool:
    def __init__(self, size):
        self.bucket = LifoQueue()
        for i in range(size):
            self.bucket.put(f"conn-{i}")

    def take(self):
        return self.bucket.get()

    def give_back(self, conn):
        self.bucket.put(conn)

pool = TinyPool(size=5)

# This simulates a synchronous worker that handles exactly one request at a time.
# Even though `size=5`, only one connection is used for each request.
def handle_sync_request_once():
    conn = pool.take()
    try:
        # do some database work
        time.sleep(0.1)
    finally:
        pool.give_back(conn)

for _ in range(3):
    handle_sync_request_once()
In this setup the pool exists, but there is no opportunity to consume more than one connection concurrently in that worker.
Why this happens
The reason is straightforward. A process that is fully synchronous and single-threaded executes one request, then the next. There is no concurrency inside the process to justify checking out multiple connections from a pool. A pool starts to make sense only when multiple requests can run at once in the same process, which happens in the two cases described earlier: synchronous worker processes with threading, or asynchronous worker processes.
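The contrast is easy to see with the same TinyPool: once the worker runs several request threads, multiple connections are checked out at the same time. This sketch reuses the TinyPool class defined earlier:

import threading
import time

pool = TinyPool(size=5)  # the TinyPool class from the earlier example

def handle_threaded_request(request_id):
    conn = pool.take()
    try:
        # While this thread sleeps, other threads hold other connections.
        time.sleep(0.1)
        print(f"request {request_id} used {conn}")
    finally:
        pool.give_back(conn)

# Five in-flight requests in one process: five connections checked out at once.
threads = [threading.Thread(target=handle_threaded_request, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()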
Solution paths and what changes
If your workers are synchronous and single-threaded, using Django’s native connection pool offers no practical benefit over persistent connections: a single persistent connection per worker process is sufficient, because there is only one in-flight request at a time. If, however, you run multiple threads per process or use an async server, the built-in pool is appropriate. Each process will create and use its own pool, and the pool will serve multiple in-flight requests within that process. If you need a shared pool across multiple processes, PgBouncer provides that by design because it is a separate process.
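In Django terms, "persistent connections" means CONN_MAX_AGE. A minimal sketch for a single-threaded synchronous deployment, with illustrative names and values:

# settings.py: persistent connections instead of a pool
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "HOST": "db.example.internal",
        # None keeps the connection open indefinitely; a number is a
        # maximum lifetime in seconds.
        "CONN_MAX_AGE": None,
        # Check that the connection is still usable before each request.
        "CONN_HEALTH_CHECKS": True,
    },
}

Treat pooling and persistent connections as alternatives for a given database alias, and pick the one that matches the concurrency model of your workers.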
Adjusted example: reusing a single persistent connection for a sync worker
For a single-threaded synchronous worker, you can model the idea of a persistent connection with a single reusable handle, underscoring why a multi-connection pool is unnecessary in that mode.
import time

_singleton = {"conn": None}

def get_persistent_handle():
    if _singleton["conn"] is None:
        _singleton["conn"] = "conn-0"
    return _singleton["conn"]

def handle_sync_request_with_persistence():
    conn = get_persistent_handle()
    # do some database work using the same handle
    time.sleep(0.1)

for _ in range(3):
    handle_sync_request_with_persistence()
This simple sketch mirrors the idea behind persistent connections in a single-threaded process: one process, one ongoing connection, one request at a time.
Clarifying two common questions
First, the native psycopg3 pool does not create one pool in a worker and let other workers share it across processes: a pool is process-local, so if you launch multiple processes, each has its own pool. Second, PgBouncer is indeed a separate process that maintains its pool independently, and all of your Django workers can connect to it.
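If you want to convince yourself of the process-local point, here is a small sketch (again reusing the TinyPool class from the first example) in which each spawned process builds its own full pool, independently of its siblings:

import os
from multiprocessing import Process

def worker():
    # Each process constructs its own pool after it starts; nothing
    # is shared with the parent or with sibling processes.
    pool = TinyPool(size=5)
    conn = pool.take()
    print(f"pid={os.getpid()} checked out {conn} from its own pool of 5")
    pool.give_back(conn)

if __name__ == "__main__":
    procs = [Process(target=worker) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

Every process reports the same connection name, because each one holds a complete, independent copy of the pool; with a genuinely shared pool they would be competing for distinct connections.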
Why it matters
Choosing the right connection management strategy is about matching concurrency to the tool. If a worker never needs more than one database connection at a time, a pool is overhead without benefit. If a single process can run many requests simultaneously via threads or an event loop, a pool can keep connections ready and reduce wait time for those concurrent code paths. And if you need a pool shared across processes, you reach for a standalone pooler like PgBouncer.
Bottom line
Think in terms of process boundaries and in-process concurrency. For synchronous single-threaded workers, persistent connections are sufficient. For threaded or asynchronous workers, Django’s psycopg3-based pool is a good fit and will exist per process. If you need a pool that all processes can use together, PgBouncer provides that as an external pooler. The distinction between a "permanent" process and a replaceable worker is not the deciding factor; the important axis is whether there is concurrency inside the process that can actually make use of multiple connections at once.