
I'm writing a pytest that should communicate with Docker containers through shared memory. Locally, that's pretty easy to achieve with something like this:

from multiprocessing.shared_memory import SharedMemory
import docker

client = docker.from_env()
# Create a shared memory block (size must be an int)
shm: SharedMemory = SharedMemory(create=True, name=shm_name, size=int(1e6))
# Create a docker mount for the shared memory
shm_mount = docker.types.Mount(source="/dev/shm", target="/dev/shm", type="bind")
# Run a container with access to the shared memory
client.containers.run(
    image="alpine",
    name="my_container",
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    mounts=[shm_mount],
    environment={
        "SHM_NAME": shm_name
    }
)

Now that test needs to run in a GitLab runner. To use the Docker SDK, I'm using a Docker-in-Docker configuration that looks like this:

unittest:
  image:
    name: docker:27.0.2
    pull_policy: if-not-present
  tags:
    - general-runner-dind
  services:
    - name: docker:27.0.2-dind
      alias: docker-service
      command: [dockerd-entrypoint.sh, --tls=0]
  variables:
    DOCKER_HOST: tcp://docker-service:2375
    DOCKER_TLS_CERTDIR: ""
    FF_NETWORK_PER_BUILD: "true"
  before_script:
    - apk update && apk upgrade && apk add --no-cache python3 python3-dev py3-pip curl
    - curl -LsSf https://astral.sh/uv/0.6.17/install.sh | sh
    - source $HOME/.local/bin/env
  script:
    - cd /path/to/project
    - uv sync --frozen
    - uv run --no-sync pytest

So pytest now runs inside a Docker container. In addition, control over the parameters used to spin up the container that pytest runs in is rather limited. Here is what I've tried to get a shared memory block that is writable within pytest and shareable with other Docker containers spun up with the Docker SDK:

  1. Detect the container ID of the pytest container and pass it to the ipc_mode option of client.containers.run:
cid = get_container_id()  # Some logic to get the container ID.
shm: SharedMemory = SharedMemory(create=True, name=shm_name, size=int(1e6))
if cid is None:
    # Running on host
    shm_mount = docker.types.Mount(source="/dev/shm", target="/dev/shm", type="bind")
    mounts = [shm_mount]
    ipc_mode = None
else:
    # Running inside docker
    mounts = []
    ipc_mode = f"container:{cid}"
client.containers.run(
    image="alpine",
    name="my_container",
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    mounts=mounts,
    ipc_mode=ipc_mode,
    environment={
        "SHM_NAME": shm_name
    }
)

This doesn't work because the Docker-in-Docker setup on GitLab seems to be rather specialized, and every means of getting the container ID fails: /.dockerenv doesn't exist, and neither /proc/self/cgroup (cgroups v1) nor /proc/self/mountinfo (cgroups v2) seems to contain the Docker container ID.

  2. Create an additional container during the pytest run that has a shareable shared memory block accessible by other containers:
ipc_anchor = client.containers.run(
    image="alpine",
    name=ipc_anchor_name,
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    ipc_mode="shareable",
)
ipc_mode = f"container:{ipc_anchor.id}"

Although this option looks promising, it's unclear to me how to reach that shared memory from pytest to write into it (with SharedMemory(create=False, name=...)).

Any suggestions on how to get either of the two approaches above working completely? Or a different approach that I might still be missing?

  • Run one privileged image; in that one image, run the Docker daemon in the background and run your tests. As for how to reach the shared memory from pytest to write into it: you forward and execute commands in the container, e.g. docker exec <container> sh -c 'cat stuff > /dev/shm/stuff'. The -v /dev/shm bind mount is a hack that is not going to work over Docker with a different container; you have to forward your commands to the remote Docker daemon. For that matter, you could do everything in that remote alpine container: rsync your code, install Python, and execute the tests from it. Commented Aug 18 at 11:58

1 Answer


ipc_mode doesn't matter

Things that I've tried to get a shared memory writeable within pytest and shareable with other docker containers spun up with the Docker SDK: 1) Detect the container ID of the pytest container and pass that to ipc_mode option of docker.container.run:

If you look at the ipc_namespaces(7) man page, it says that the namespace affects System V IPC objects, such as Sys V style shared memory. On the sysvipc(7) man page, none of the syscalls it mentions include shm_open. However, Python's multiprocessing.shared_memory uses POSIX style shared memory, i.e. shm_open, to implement shared memory.

In this module, shared memory refers to “POSIX style” shared memory blocks (though is not necessarily implemented explicitly as such) and does not refer to “distributed shared memory”.

(Source.)

So, even if you had a way to set the ipc_mode parameter, it wouldn't help you. It appears that whether two processes can share POSIX style shared memory is gated on whether the two processes can obtain a file descriptor for the same file, not on whether they share an IPC namespace. Sharing a file between two containers can be done using volumes.
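To make the file-based framing concrete: on Linux, shm_open creates an ordinary file under the tmpfs mounted at /dev/shm, so Python's SharedMemory blocks show up as files there. A minimal sketch (Linux only; the name demo_blk is arbitrary):

```python
from multiprocessing.shared_memory import SharedMemory
import os

# shm_open backs each block with a file in the tmpfs at /dev/shm
shm = SharedMemory(create=True, name="demo_blk", size=4096)
print(os.path.exists("/dev/shm/demo_blk"))  # True on Linux

shm.close()
shm.unlink()  # removes /dev/shm/demo_blk
```

This is why sharing the /dev/shm directory between containers is sufficient: any process that can open that file can map the same memory.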

I also tested this, in case the effect of IPC namespaces on POSIX shared memory was simply undocumented.

To reproduce my testing, you can do the following.

Add a script called test.sh:

#!/usr/bin/env bash
set -euo pipefail
docker build . -t shm-test
docker run -i -e PYTHONUNBUFFERED=1 -v shared_mem:/dev/shm shm-test python3 /code/server.py &
docker run -i -e PYTHONUNBUFFERED=1 -v shared_mem:/dev/shm shm-test python3 /code/client.py &
wait

Add a Dockerfile:

FROM python:3.13

RUN mkdir /code
COPY server.py /code
COPY client.py /code

Add a server.py:

from multiprocessing.shared_memory import SharedMemory
import time
sm = SharedMemory(create=True, name='foo2', size=int(1e6))
print("Created SHM")
time.sleep(1)
sm.buf[0] = 10
print("wrote value")
time.sleep(1)
sm.unlink()
print("Removed shm")

Add a client.py:

from multiprocessing.shared_memory import SharedMemory
import time
time.sleep(0.5)
sm = SharedMemory(create=False, name='foo2', size=int(1e6), track=False)
print("sm.buf[0]", sm.buf[0])
time.sleep(1)
print("sm.buf[0]", sm.buf[0])

This example can be run like so:

$ ./test.sh
Created SHM
sm.buf[0] 0
wrote value
sm.buf[0] 10
Removed shm

The fact that sm.buf[0] changes in the client proves that the two processes share memory. Since the test.sh file launches the containers without the --ipc flag, this proves the flag isn't required. (I also tested with --ipc=private. This also works.)

Using volumes instead

shm_mount = docker.types.Mount(source="/dev/shm", target="/dev/shm", type="bind")

This set-up, of mounting the host's shared memory into the client, seems awfully fragile. In particular, there is a possibility that previous tests will fail to clean up /dev/shm, or that concurrently running tests will interfere with one another.

What I would suggest instead is to create a volume, then mount the volume into both the pytest container and the tested container under the path /dev/shm. I've not used the Python Docker SDK before, but hopefully the docker run example above is clear enough about what needs to be done.

Although this option looks promising, it's unclear to me how to reach the shared memory from pytest to write into (with SharedMemory(create=False, shm_name=...)).

As I understand it, as long as /dev/shm is the same directory in both containers (which can be done with volumes) and name is the same in both calls to SharedMemory, they will get the same shared memory.

On Docker-in-Docker

One last remark:

If I understand the GitLab docs correctly, I don't think you are running pytest in Docker-in-Docker. You're just running it in Docker. That would mean that the approach I outline above wouldn't work, as the Docker-in-Docker daemon would not be able to see a volume that is held by the Docker daemon. To fix that aspect, I would suggest running pytest within Docker-in-Docker as well, as described in the documentation I linked. For pytest to be able to manipulate the Docker-in-Docker daemon from within its container, it would need the docker.sock of the Docker-in-Docker daemon mounted into its container.
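As a hedged sketch of that last step (the image name test-image is a placeholder, and the paths refer to the Docker-in-Docker daemon's filesystem, since that daemon resolves the mounts): the CI job could launch pytest as a container on the DinD daemon, with the daemon's socket and the shared volume attached.

```shell
# Run pytest as a container on the DinD daemon itself, so the volume it
# mounts at /dev/shm is visible to sibling containers started via the SDK.
# /var/run/docker.sock here is the DinD daemon's own socket.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v shared_mem:/dev/shm \
  test-image \
  pytest
```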


1 Comment

The most useful part is the last remark. I was already suspecting that, but couldn't find any source about it, so thanks for sharing the link. Running the test itself inside Docker indeed removes a lot of complexity from the interaction between test and the services, but brings other complexities with it (ideally the output in the console and the artifacts are uniform between the "normal" unit tests and such integration tests).
