2025, Oct 06 03:00

Reliable POSIX shared memory across Docker containers in GitLab CI with pytest and multiprocessing.shared_memory using a shared /dev/shm volume

Learn how to share POSIX shared memory across Docker containers in GitLab CI with pytest: mount a Docker volume at /dev/shm and avoid ipc_mode for reliable runs.

Sharing Python multiprocessing.shared_memory across containers feels straightforward on a developer laptop, but CI changes the rules. The moment pytest runs inside a container and orchestrates additional containers via the Docker SDK, familiar patterns like bind-mounting /dev/shm become brittle, and toggling ipc_mode yields no benefit. Below is a practical walkthrough of what actually governs POSIX shared memory in this setup and how to make it work reliably on a GitLab runner.

Problem statement

The goal is to let pytest create and write to a POSIX shared memory block that other Docker containers can read from. Locally, a bind-mount of the host’s /dev/shm into test containers appears to work. In GitLab CI, pytest itself runs inside a container, talking to a Docker daemon via the Docker SDK, which complicates shared memory visibility and container-to-container isolation.

Code sample that shows the initial approach

The local approach creates a POSIX shared memory block, bind-mounts /dev/shm, and starts a container that can open the same shared memory by name:

from multiprocessing.shared_memory import SharedMemory
import docker
api = docker.from_env()
mem_label = "example_shm"
# Create a 1 MB POSIX shared memory block; on Linux it is backed by /dev/shm/example_shm
segment = SharedMemory(create=True, name=mem_label, size=int(1e6))
# Bind-mount the host's /dev/shm so the worker sees the same backing directory
bind_map = docker.types.Mount(source="/dev/shm", target="/dev/shm", type="bind")
api.containers.run(
    image="alpine",
    name="sample_worker",
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    mounts=[bind_map],
    environment={
        "SHM_NAME": mem_label
    }
)

In CI, an attempt to pivot to ipc_mode by detecting the pytest container ID is tempting:

from multiprocessing.shared_memory import SharedMemory
import docker
api = docker.from_env()
mem_label = "example_shm"
container_id = fetch_self_id()  # Some logic to discover the current container ID
segment = SharedMemory(create=True, name=mem_label, size=int(1e6))
if container_id is None:
    # Fallback: behave like the local setup and bind-mount /dev/shm
    mount_list = [docker.types.Mount(source="/dev/shm", target="/dev/shm", type="bind")]
    ipc_cfg = None
else:
    # Join the pytest container's IPC namespace instead of mounting anything
    mount_list = []
    ipc_cfg = f"container:{container_id}"
api.containers.run(
    image="alpine",
    name="sample_worker",
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    mounts=mount_list,
    ipc_mode=ipc_cfg,
    environment={
        "SHM_NAME": mem_label
    }
)

And a variant that spawns an extra “anchor” container marked as shareable:

import docker
api = docker.from_env()
anchor_ref = api.containers.run(
    image="alpine",
    name="mem_anchor",
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    ipc_mode="shareable",
)
ipc_cfg = f"container:{anchor_ref.id}"

These ideas run into trouble on a GitLab runner: reliably discovering the current container ID from inside the pytest container is difficult, and, more importantly, ipc_mode is not the control plane you need for POSIX shared memory.

What’s really going on

IPC namespaces govern System V IPC objects (and POSIX message queues), not POSIX shared memory. The ipc_namespaces(7) documentation describes the namespace effects for SysV objects, and sysvipc(7) lists the System V IPC system calls. Python’s multiprocessing.shared_memory implements POSIX-style shared memory via shm_open rather than the SysV APIs. In other words, toggling ipc_mode will not make POSIX segments visible across containers.

Whether two processes can attach to the same POSIX shared memory block hinges on both seeing the same filesystem backing under /dev/shm so that the shm_open call resolves to the same object. The practical way to achieve this across containers is to share the same directory with a Docker volume mounted at /dev/shm in every participant, rather than relying on IPC namespaces or bind-mounting the host’s /dev/shm.
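To make the file backing concrete, here is a quick sketch (Linux only, using a throwaway name chosen purely for illustration) showing that creating a SharedMemory block immediately produces an entry under /dev/shm with that name, which is exactly what a shared volume mounted at /dev/shm exposes to every participant:

from multiprocessing.shared_memory import SharedMemory
import os
# Creating the block calls shm_open under the hood, so an entry appears under /dev/shm
probe = SharedMemory(create=True, name="probe_shm", size=4096)
print(os.path.exists("/dev/shm/probe_shm"))  # True on Linux
# Detach and remove the backing object again
probe.close()
probe.unlink()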

A minimal, reproducible check

The following setup demonstrates two containers sharing a POSIX shared memory region without any use of --ipc. A Docker named volume is mounted at /dev/shm in both containers, and the SharedMemory name matches on both sides.

Runner script:

#!/usr/bin/env bash
set -euo pipefail
docker build . -t memx-test
docker run -i -e PYTHONUNBUFFERED=1 -v shared_mem:/dev/shm memx-test python3 /code/server.py &
docker run -i -e PYTHONUNBUFFERED=1 -v shared_mem:/dev/shm memx-test python3 /code/client.py &
wait

Dockerfile:

FROM python:3.13
RUN mkdir /code
COPY server.py /code
COPY client.py /code

Server process:

from multiprocessing.shared_memory import SharedMemory
import time
region = SharedMemory(create=True, name='foo2', size=int(1e6))
print("Created SHM")
time.sleep(1)
region.buf[0] = 10
print("wrote value")
time.sleep(1)
region.unlink()
print("Removed shm")

Client process:

from multiprocessing.shared_memory import SharedMemory
import time
time.sleep(0.5)
# track=False (Python 3.13+) keeps the resource tracker from unlinking the segment when this process exits
view = SharedMemory(create=False, name='foo2', size=int(1e6), track=False)
print("buf0", view.buf[0])
time.sleep(1)
print("buf0", view.buf[0])

Running this shows the client’s first read printing 0 and, after the server writes, the second read printing 10, proving the memory is truly shared. This works without any --ipc flag, and also works with --ipc=private, confirming that ipc_mode is irrelevant for POSIX shared memory in this scenario.

The practical fix

Instead of bind-mounting the host’s /dev/shm, create a Docker volume and mount it as /dev/shm into both the pytest container and every container started during the test. As long as both sides use the same SharedMemory name and see the same /dev/shm via the shared volume, they attach to the same region.
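As a sketch of what this looks like from the pytest side, assuming a volume named shared_mem (matching the runner script above) and a pytest container that already has that volume mounted at its own /dev/shm, the first code sample only needs its bind mount swapped for a volume mount:

from multiprocessing.shared_memory import SharedMemory
import docker
api = docker.from_env()
mem_label = "example_shm"
# Create the named volume if it does not exist yet; Docker also creates it implicitly on first mount
try:
    api.volumes.get("shared_mem")
except docker.errors.NotFound:
    api.volumes.create(name="shared_mem")
# This block lands in /dev/shm, which inside the pytest container is the shared_mem volume
segment = SharedMemory(create=True, name=mem_label, size=int(1e6))
# Mount the same volume at /dev/shm in the worker so it can open the block by name
shm_volume = docker.types.Mount(source="shared_mem", target="/dev/shm", type="volume")
api.containers.run(
    image="alpine",
    name="sample_worker",
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    mounts=[shm_volume],
    environment={
        "SHM_NAME": mem_label
    }
)

Any container attached to the same volume can then open the block by the name passed in SHM_NAME, just as the client script above does.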

There is one CI-specific caveat. If pytest talks to a separate Docker-in-Docker daemon while pytest itself is not running in that same DinD environment, the DinD daemon cannot see volumes owned by the host Docker daemon. The GitLab documentation clarifies this split. To keep everything in a single Docker context, run pytest within the Docker-in-Docker environment as well, and mount that DinD daemon’s docker.sock into the pytest container so pytest can manage containers in the same daemon that holds the volume.
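As a minimal sketch of that wiring, assuming the DinD daemon’s socket has been mounted into the pytest container at the default path, the Docker SDK only needs to be pointed at that socket so every volume and container pytest touches lives in the same daemon:

import docker
# With the DinD daemon's /var/run/docker.sock mounted into the pytest container,
# this client talks to the daemon that owns the shared /dev/shm volume
api = docker.DockerClient(base_url="unix://var/run/docker.sock")
print(api.ping())  # True when the socket really points at the DinD daemon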

Why this matters

Relying on a host bind of /dev/shm is fragile. It risks leftover state from previous runs and cross-talk between concurrent jobs. A named Docker volume provides a clean, reproducible path that you can mount into exactly the containers that need to participate, and it aligns with how POSIX shared memory selects the underlying object via shm_open. It also avoids conflating POSIX and System V IPC models, which behave differently under namespaces.

Conclusion

For pytest-driven integration tests that need shared memory across containers, treat POSIX shared memory as a file-backed resource and make /dev/shm the same directory in all participants with a Docker volume. Do not rely on ipc_mode, since it targets System V IPC and won’t affect shm_open. In GitLab CI, ensure pytest executes against the same Docker daemon that runs the test containers. When those conditions are met, the Python SharedMemory name and a shared /dev/shm are all you need for reliable cross-container sharing.

The article is based on a question from StackOverflow by user2416984 and an answer by Nick ODell.