
I've got a GitHub Actions-style workflow (running on a self-hosted Gitea server) that runs a few docker compose commands to set up a test environment and run tests. The compose files work both on my local machine and with nektos act.
The runner is set up following Gitea's guide for running act_runner with docker compose, which mounts the Docker socket into the runner container.
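
For reference, the relevant part of that setup is the socket mount; the runner's compose file looks roughly like Gitea's example (instance URL and token are placeholders):

services:
  runner:
    image: gitea/act_runner:latest
    environment:
      CONFIG_FILE: /config.yaml
      GITEA_INSTANCE_URL: https://gitea.example.com
      GITEA_RUNNER_REGISTRATION_TOKEN: "<registration token>"
    volumes:
      - ./config.yaml:/config.yaml
      - ./data:/data
      # this is what makes docker/compose inside the job talk to the host daemon
      - /var/run/docker.sock:/var/run/docker.sock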

An example workflow:

jobs:
  test_job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup
        run: docker compose --profile setup up --wait
      - name: Test
        run: docker compose run --rm test
      - name: Cleanup
        if: always()
        run: docker compose --profile setup down

I've narrowed the problem down to the volumes not being mounted how I'd expect. My compose file has a database service with a volume:

services:
  db:
    image: postgres:17
    volumes:
      - ./test/db/schema.sql:/docker-entrypoint-initdb.d/11schema.sql

If I attach to the database service in the action, it prints an error:

test-db  | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/11schema.sql
test-db  | psql:/docker-entrypoint-initdb.d/11schema.sql: error: could not read from input file: Is a directory
test-db exited with code 1

Usually "Is a directory" is a result of the mount path being empty and docker creating folders in their place, how can I make sure the volume is mounted so the compose files work both locally and in the action?

1 Answer


What's happening

  1. Gitea Actions runs jobs under $GITHUB_WORKSPACE, which on the runner is /workspace/Org/repo
  2. When a step calls docker compose, relative bind mounts are resolved against that workspace, so ./test/db/schema.sql becomes /workspace/Org/repo/test/db/schema.sql
  3. The runner container talks to /var/run/docker.sock, so the containers compose starts actually run on the host machine's Docker daemon
  4. The host daemon resolves the bind source /workspace/Org/repo/test/db/schema.sql on the host, where it doesn't exist, so it auto-creates that path as a directory (you can confirm this from inside the job, see the snippet below)
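
You can confirm the last point from a step inside the job, since docker there talks to the host daemon through the mounted socket (test-db is the container name from the log above; adjust it to your compose project):

    # the file exists inside the job container's workspace...
    ls -l ./test/db/schema.sql

    # ...but the mount the host daemon created for the db container
    # points at an auto-created directory on the host:
    docker inspect test-db --format '{{ json .Mounts }}'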

A solution

  1. You need the bind-mount paths to match between the job container and the host. GitHub Actions lets you add volumes with jobs.<job_id>.container.volumes, and Gitea Actions supports the same attribute. Adjust your job to add container.volumes:

    jobs:
      test_job:
        runs-on: ubuntu-latest
        container:
          image: ubuntu:latest
          volumes:
            - /workspace/Org/repo/app:/workspace/Org/repo/app
    

    Note on the path: If you try to mount /workspace/Org/repo:/workspace/Org/repo the container will fail in the "Set up job" step with the error:

    failed to create container: 'Error response from daemon: Duplicate mount point: /workspace/Org/repo'

    This is because the runner already mounts the workspace directory itself, under an auto-generated volume name like "GITEA-ACTIONS-TASK-1234_WORKFLOW-Test_JOB-test_job". You can work around the duplicate by mounting a subdirectory instead.

  2. To let the runner mount that volume, add it to valid_volumes in the Gitea runner's config file (runner.yaml here) and restart the runner:

    container:
      valid_volumes:
        - '/workspace/**'
    
  3. Add the defaults.run.working-directory attribute to your job, so the run steps (and the compose commands in them) execute from the shared path:

    jobs:
      test_job:
        defaults:
          run:
            working-directory: /workspace/Org/repo/app
    
  4. If using actions/checkout, add a path so it clones the code into the mounted app subdirectory:

    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          path: "app"
    
  5. Make sure to empty the mounted directory after the test, since it persists on the host between runs:

    steps:
      - name: Cleanup
        if: always()
        run: |
          docker compose --profile setup down
          rm -rf /workspace/Org/repo/app/*
    
    

My final workflow file:

jobs:
  test_job:
    runs-on: ubuntu-latest
    container:
      image: ubuntu:latest
      volumes:
        - /workspace/Org/repo/app:/workspace/Org/repo/app
    defaults:
      run:
        working-directory: /workspace/Org/repo/app
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          path: "app"
      - name: Setup
        run: docker compose --profile setup up --wait
      - name: Test
        run: docker compose run --rm test
      - name: Cleanup
        if: always()
        run: |
          docker compose --profile setup down
          rm -rf /workspace/Org/repo/app/*

Some notes:

  • You might be able to use ${{ env.JOB_CONTAINER_NAME }} to get the container name for the volumes_from attribute, but I couldn't get that to run locally with act
  • You might be able to solve this with dind, but some blogs from 2015-2016 make me wary of that solution
  • I'm not sure what happens if the same job runs multiple times simultaneously
  • You might be able to use a named volume and a subpath, but I couldn't figure that out (a rough sketch of that variant follows below)
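
For the last bullet, here is a rough sketch of what the named-volume variant might look like, using Compose's long volume syntax with subpath. It needs a fairly recent Engine/Compose, testdata is a hypothetical volume name, and it is untested in this runner setup:

    services:
      db:
        image: postgres:17
        volumes:
          - type: volume
            source: testdata             # named volume instead of a bind mount
            target: /docker-entrypoint-initdb.d
            volume:
              subpath: db                # hypothetical subdirectory inside the volume

    volumes:
      testdata:

You would still need a step that copies the schema into the volume before the db starts, which is probably the fiddly part.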

