I am building a Python package which heavily relies on the Docker daemon.

I am using Pixi as the package manager for my Python project, and the PyPI Docker SDK is in my dependencies. However, my package won't work without the Docker daemon, which is not installed locally (in the context of my project) but globally (I installed it with the Docker client).

I want to make the dependency on Docker explicit in my project dependencies, and I want to pin a specific Docker version (since I am parsing Docker logs and performing actions programmatically).

What I have tried:

  • Docker-in-Docker (works, but kinda slow)

  • Asking the user to input a Docker daemon URL (a workaround)

Desired outcome

Pixi manages a project-local Docker version as a system-level dependency: Pixi installs the exact Docker version required by my project, and that local Docker version is available inside the Pixi shell.


A language-specific package manager can't install a system-level dependency, full stop. This is especially true of Docker, which has low-level system dependencies, keeps a dedicated filesystem tree in /var, can't easily be run twice on the same host, and potentially grants root-level access to anyone who can use it. Tricks like bundling a binary inside your package and running it can work for smaller CLI tools, but not for Docker.

External dependencies like this are fairly common: it's normal to require a database, an authorization service, or some other network-accessible service to exist without it being installed by your program. I'd accept this for Docker too. Since you seem to be writing a tool that directly manages Docker, it seems reasonable enough to say "this tool will process the logs from containers already running in a Docker daemon you already have".

One thing that might help usability is to check very early in your program whether Docker is actually available, for example:

import docker

# from_env() honors DOCKER_HOST and friends; both this call and version()
# raise docker.errors.DockerException if the daemon is unreachable.
client = docker.from_env()
client.version()

You don't necessarily care about the Docker daemon version per se, but it's a lightweight call that will raise an exception if the daemon can't be reached.
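If you want an even cheaper pre-flight check before importing the SDK at all, you can probe the daemon's listening socket with the standard library. This is a minimal sketch assuming a Linux host with the daemon on its default /var/run/docker.sock path; the helper name is illustrative, not part of the SDK, and it only covers the default-socket case (it won't see a daemon configured via DOCKER_HOST):

```python
import os
import socket

def docker_socket_available(path: str = "/var/run/docker.sock") -> bool:
    """Return True if something is accepting connections on the Docker socket."""
    if not os.path.exists(path):
        return False
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except OSError:
        # The file exists but nothing is listening (e.g. the daemon is stopped).
        return False
    finally:
        s.close()
```

A failed probe here lets you print a friendly "is the Docker daemon running?" message instead of surfacing an SDK traceback.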

In my experience you will rarely need a non-default Docker daemon URL (and exposing the daemon socket over the network is incredibly dangerous), but if you do, docker.from_env() honors standard mechanisms like the DOCKER_HOST environment variable.

Also remember that, if you're not on a native-Linux host, then you need some sort of Linux VM to run Docker, maybe via the Docker Desktop app (another reason you can't just run dockerd). There are other container systems like Podman that might be in use instead (Docker Desktop isn't open-source and has been growing an increasing number of non-container-related features). In production environments I've worked in, it's much more common to use a cluster system like Kubernetes, but the Kubernetes API is completely different from Docker's.


It also sounds from the question like you might be trying to dynamically launch a collection of related containers as part of your program and want to launch a program-private Docker daemon. It's more common to do this by having some sort of launcher that can launch your program and its dependencies together; in Docker, Docker Compose; in Kubernetes, Helm. Your program can be a normal program that doesn't have any direct dependency on any container system, and it will be much easier to develop and unit-test.

If a private Docker daemon is really unavoidable, packaging your application in a virtual machine might be the best approach, particularly if it provides a Web interface. That will let it have its own isolated Docker daemon that runs at VM boot time.

My package is supposed to infer, from a Python project, a Dockerfile usable to run it.

For typical Python programs this is almost boilerplate. The Dockerfile will be very close to

FROM python:3.13 AS base

FROM base AS build
WORKDIR /app
COPY . .
RUN python3 -m venv /venv
RUN /venv/bin/pip install .

FROM base
COPY --from=build /venv /venv
ENV PATH=/venv/bin:$PATH
CMD ["???"]

where the final CMD will be some Python entry point script declared in the pyproject.toml. You may need other steps (setting up a non-root user; build-stage OS-level packages to build non-wheel dependencies; final-stage OS-level packages for things like database client libraries) but they will be small add-ons to this core framework.

Now: for this task you need to examine the project source tree, and you need to write a Dockerfile, but you do not directly need Docker daemon access. The Dockerfile is a normal text file and you can use ordinary Python file I/O to write it. Once you've generated the Dockerfile I'd normally expect it to be checked into the source tree, and a downstream user can docker build or docker-compose build the image without needing your custom tool.

This is not relevant to the question, though; anybody coming here already knows this.
