I'm writing a pytest that should communicate with Docker containers through shared memory. Locally, that's pretty easy to achieve using something like this:
import docker
from multiprocessing.shared_memory import SharedMemory

client = docker.from_env()

# Create a shared memory block (size must be an int; 1e6 is a float)
shm: SharedMemory = SharedMemory(create=True, name=shm_name, size=1_000_000)
# Create a docker bind mount for the shared memory
shm_mount = docker.types.Mount(source="/dev/shm", target="/dev/shm", type="bind")
# Run a container with access to the shared memory
client.containers.run(
    image="alpine",
    name="my_container",
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    mounts=[shm_mount],
    environment={
        "SHM_NAME": shm_name,
    },
)
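For context, the process inside the container attaches to the same block by name. A minimal sketch of the consumer side, assuming the image has a Python interpreter available (stock alpine does not, so this is purely illustrative):

import os
from multiprocessing.shared_memory import SharedMemory

# SharedMemory(name=...) maps to /dev/shm/<name>, which the bind mount shares.
shm = SharedMemory(name=os.environ["SHM_NAME"], create=False)
try:
    data = bytes(shm.buf[:16])  # read whatever the pytest side wrote
finally:
    shm.close()  # detach only; the creating side is responsible for unlink()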
Now that test needs to run in a GitLab runner. To use the Docker SDK, I'm using a Docker-in-Docker configuration that looks like this:
unittest:
  image:
    name: docker:27.0.2
    pull_policy: if-not-present
  tags:
    - general-runner-dind
  services:
    - name: docker:27.0.2-dind
      alias: docker-service
      command: [dockerd-entrypoint.sh, --tls=0]
  variables:
    DOCKER_HOST: tcp://docker-service:2375
    DOCKER_TLS_CERTDIR: ""
    FF_NETWORK_PER_BUILD: "true"
  before_script:
    - apk update && apk upgrade && apk add --no-cache python3 python3-dev py3-pip curl
    - curl -LsSf https://astral.sh/uv/0.6.17/install.sh | sh
    - source $HOME/.local/bin/env
  script:
    - cd /path/to/project
    - uv sync --frozen
    - uv run --no-sync pytest
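Within the job, the Docker SDK picks these variables up automatically; a quick sanity check (a sketch, nothing GitLab-specific) is:

import docker

# from_env() honors DOCKER_HOST and DOCKER_TLS_CERTDIR, so this talks to the
# docker-service side-car instead of a local socket.
client = docker.from_env()
client.ping()  # raises docker.errors.APIError if the daemon is unreachable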
So pytest now runs inside a Docker container. In addition, control over the parameters used to spin up the container that pytest runs in is rather limited. Here is what I've tried to get a shared memory block that is writable from within pytest and shareable with other containers spun up via the Docker SDK:
- Detect the container ID of the pytest container and pass it to the `ipc_mode` option of `client.containers.run`:
cid = get_container_id()  # Some logic to get the container ID.
shm: SharedMemory = SharedMemory(create=True, name=shm_name, size=1_000_000)
if cid is None:
    # Running on the host
    shm_mount = docker.types.Mount(source="/dev/shm", target="/dev/shm", type="bind")
    mounts = [shm_mount]
    ipc_mode = None
else:
    # Running inside Docker: share the pytest container's IPC namespace
    mounts = []
    ipc_mode = f"container:{cid}"
client.containers.run(
    image="alpine",
    name="my_container",
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    mounts=mounts,
    ipc_mode=ipc_mode,
    environment={
        "SHM_NAME": shm_name,
    },
)
This doesn't work because the Docker-in-Docker setup on GitLab seems to be rather specialized, and none of the usual means of getting the container ID work (/.dockerenv doesn't exist, and neither /proc/self/cgroup (cgroups v1) nor /proc/self/mountinfo (cgroups v2) seems to contain the Docker container ID). A sketch of the detection logic I tried is shown after this list.
- Create an additional container during the pytest run that holds a shareable shared memory block accessible by other containers:
ipc_anchor = client.containers.run(
    image="alpine",
    name=ipc_anchor_name,
    detach=True,
    remove=True,
    command="tail -f /dev/null",
    ipc_mode="shareable",
)
ipc_mode = f"container:{ipc_anchor.id}"
Although this option looks promising, it's unclear to me how to reach the shared memory from pytest in order to write into it (with `SharedMemory(name=..., create=False)`).
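For reference, `get_container_id()` from the first approach was along these lines (the 64-hex-digit ID pattern is an assumption about how the runtime names containers, and it is exactly what the GitLab DinD setup does not satisfy):

import re
from pathlib import Path

def get_container_id() -> str | None:
    # Look for a 64-hex-digit container ID in the cgroup (v1) or mountinfo
    # (v2) files; return None when nothing matches, e.g. when on the host.
    for path in ("/proc/self/cgroup", "/proc/self/mountinfo"):
        try:
            text = Path(path).read_text()
        except OSError:
            continue
        match = re.search(r"[0-9a-f]{64}", text)
        if match:
            return match.group(0)
    return None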
Any suggestions on how to get either of the two approaches above fully working? Or a different approach that I might still be missing?
> Any suggestions

Run one privileged image, and in this one image run the Docker daemon in the background and run your tests.

> how to reach the shared memory from pytest to write into

You forward and execute commands in the container: `docker exec <container> bash -c 'cat stuff > /dev/shm/stuff'`. The `-v /dev/shm` mount is a hack that is not going to work over Docker on a different container. You have to forward your commands to the remote Docker daemon. For that matter, you could do everything in that remote alpine container: `rsync` your code over, install Python, and execute the tests from it.
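Translated to the Docker SDK, that suggestion could look roughly like the sketch below. `ipc_anchor_name` is the anchor container from the question; the file name `stuff` and the payload are placeholders, and `put_archive` is just one way to push bytes in without a shell:

import io
import tarfile

import docker

client = docker.from_env()
anchor = client.containers.get(ipc_anchor_name)

# Option 1: run a command inside the anchor container to create the block.
anchor.exec_run(["sh", "-c", "echo -n hello > /dev/shm/stuff"])

# Option 2: upload the bytes directly; put_archive() expects a tar stream.
payload = b"hello"
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="stuff")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
anchor.put_archive("/dev/shm", buf.getvalue())

Containers started with `ipc_mode=f"container:{ipc_anchor.id}"` share that /dev/shm, so a file created this way should be visible to them as a named shared memory block.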