
I have deployed a GCP VM with cos-stable-121-18867-90-97 as the boot disk. On boot it runs a systemd service that executes a script, and the script runs a docker image like this:

#!/bin/bash

# Script to manage task runner container
# Actions: start, stop, cleanup

set -e -o pipefail

# Fetch an OAuth2 access token for the VM's default service account
# from the GCE metadata server (the token is cut out of the JSON response)
get_access_token () {
  local endpoint=http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
  curl -s -H 'Metadata-Flavor: Google' "$endpoint" | cut -d'"' -f 4
}

container_name="task-runner"

case "$1" in
  start)
    echo "Logging into registry..."

    docker login -u oauth2accesstoken -p "$(get_access_token)" europe-docker.pkg.dev

    echo "Starting task runner container..."
    # Run the container with the specified configuration
    docker run \
      --name "$container_name" \
      -v "/etc/secrets/dotenv_file:/secrets/.env:ro" \
      -v "/etc/app/cloudsql_client_ssl_identity_pkcs12:/cloudsql-client-certs/client_identity.p12:ro" \
      -v "/etc/app/cloudsql_client_ssl_ca:/cloudsql-client-certs/client_ca.cert:ro" \
      --env-file "/etc/app/container_env_file" \
      "europe-docker.pkg.dev/xx/xx/app:x.y.z" \
      /usr/local/bin/supercronic -json "/etc/production.crontab"
    ;;

  stop)
    echo "Stopping task runner container..."
    # Stop the container if it's running
    docker stop "$container_name" 2>/dev/null || true
    ;;

  cleanup)
    echo "Cleaning up task runner container..."
    # Remove the container
    docker rm -f "$container_name" 2>/dev/null || true
    ;;

  *)
    echo "Usage: $0 {start|stop|cleanup}"
    exit 1
    ;;
esac

exit 0
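
For context, the systemd unit that drives this script is nothing special; a minimal sketch of it looks like this (the unit name and the /var/lib/app/task-runner.sh path are placeholders, not the exact ones on the VM):

[Unit]
Description=Task runner container
After=docker.service
Requires=docker.service

[Service]
Type=simple
ExecStart=/var/lib/app/task-runner.sh start
ExecStop=/var/lib/app/task-runner.sh stop
ExecStopPost=/var/lib/app/task-runner.sh cleanup
Restart=on-failure

[Install]
WantedBy=multi-user.target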

Everything runs fine for around two weeks, until the VM disk fills up with docker logs from the running container. I've tried setting /etc/docker/daemon.json like this:

{
  "live-restore": true,
  "log-opts": {
    "tag": "{{.Name}}",
    "max-size": "100m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "mtu": 1460
}

but it still keeps happening. It is as if docker is not using this configuration at all.
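
One way to check whether the daemon ever applied those settings is to look at the log configuration docker actually attached to the container. Note that log-opts from daemon.json are only picked up when dockerd starts, and they only apply to containers created afterwards; an already-running container keeps whatever options it was created with. For example:

# Default logging driver the daemon is currently using
docker info --format '{{.LoggingDriver}}'

# Log options baked into the container at creation time; if max-size /
# max-file are missing here, daemon.json was not in effect when the
# container was created
docker inspect --format '{{json .HostConfig.LogConfig}}' task-runner

# How much space the container's JSON log file takes on disk
sudo du -h "$(docker inspect --format '{{.LogPath}}' task-runner)"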

Now I'm testing adding --log-opt max-size=10m and --log-opt max-file=5 to the docker run command in the script, but it will take some time to see whether that works.
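
Concretely, the start action's run command now looks like this (only the two --log-opt flags are new compared to the version above):

docker run \
  --name "$container_name" \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  -v "/etc/secrets/dotenv_file:/secrets/.env:ro" \
  -v "/etc/app/cloudsql_client_ssl_identity_pkcs12:/cloudsql-client-certs/client_identity.p12:ro" \
  -v "/etc/app/cloudsql_client_ssl_ca:/cloudsql-client-certs/client_ca.cert:ro" \
  --env-file "/etc/app/container_env_file" \
  "europe-docker.pkg.dev/xx/xx/app:x.y.z" \
  /usr/local/bin/supercronic -json "/etc/production.crontab"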

Any idea why the docker configuration was not loaded, or whether it was maybe wrong from the beginning? Is there a nice way to limit how much log data docker stores on disk with a COS boot disk?

//EDIT

Passing --log-opt to docker run seems to work, but why not daemon.json?

Comments:
  • Changes to e.g. /etc/docker aren't persisted across reboots on COS. Given you have a single container, making the change on the container itself will work, as you've discovered. Commented Jul 22 at 14:55
  • The problem is that I did not reboot the VM. I'm using cloud-init to create this file, and the VM had not been restarted for a few weeks. Maybe it was restarted by Google for some maintenance; I can't check any more since the VM was recreated while I was testing things. Anyway, --log-opt seems to do the trick, so it should be fine. Thanks for your answer. Commented Jul 22 at 19:00
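
Following up on the comments: if the daemon.json route is still wanted on COS, the file has to be recreated on every boot (e.g. via cloud-init) and dockerd restarted after it is written, and the container has to be recreated, since the log options are fixed per container at creation time. A rough sketch of the extra steps (task-runner.sh is the same placeholder path as above):

# Make dockerd pick up /etc/docker/daemon.json (it is read when the daemon starts)
sudo systemctl restart docker

# Log options are fixed when a container is created, so the existing
# container has to be recreated to get the new defaults
docker rm -f task-runner 2>/dev/null || true
/var/lib/app/task-runner.sh start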
