I have a Dockerfile that I'm using to build a runtime environment for some bash scripts. Currently it looks like this:
```dockerfile
FROM debian:buster-slim

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install curl -y

# Install MySQL tools
RUN apt-get install default-mysql-client -y

# Install AWS CLI
RUN apt-get install awscli -y

# Install Docker
RUN curl -fsSL https://get.docker.com -o get-docker.sh
RUN chmod +x ./get-docker.sh
RUN ./get-docker.sh
RUN rm get-docker.sh

# Copy jobs into container
WORKDIR /
COPY ./docker-entrypoint.sh /

ENTRYPOINT ["/docker-entrypoint.sh"]
```
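For context, I build the image with a plain docker build; the tag below is just a placeholder I'm using for illustration:

```bash
# Build the runtime image from the Dockerfile above (tag name is arbitrary)
docker build -t bash-jobs .
```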
The docker-entrypoint.sh script contains the following:
```bash
#!/bin/bash
docker --version
/jobs/$1/run.sh
```
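So, as far as the dispatching goes, the first argument just selects a script under /jobs; for example (my-job is only a placeholder name here):

```bash
# Running the entrypoint with a job name...
/docker-entrypoint.sh my-job
# ...effectively executes /jobs/my-job/run.sh
```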
Finally, the job that I'm running (the run.sh script that docker-entrypoint.sh ends up executing) looks like this:
```bash
#!/bin/bash
docker --version
```
The first execution of docker --version, inside docker-entrypoint.sh, behaves as expected and prints the Docker version. The second execution, from run.sh, fails with the following message:
```
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
```
For some reason, it seems like the inner script doesn't have access to the Docker daemon while the outer script does.
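To narrow down what actually differs between the two invocations, I'm thinking of dropping a small diagnostic like this into both scripts (just a sketch; I haven't verified that DOCKER_HOST is set anywhere in my setup):

```bash
#!/bin/bash
# Where does the CLI think the daemon is? Empty/unset means the default unix socket.
echo "DOCKER_HOST=${DOCKER_HOST:-<unset>}"
# Is the default socket present, and what are its permissions?
ls -l /var/run/docker.sock
# Which user is this script running as, in case it's a permissions problem?
id
```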
Interestingly, if I actually run ./docker-entrypoint.sh [job-name] locally, instead of within the Docker container, I get the expected output from BOTH executions.
Is anyone able to explain why this is occurring and help with a solution?