198

I have set up a Docker Django/PostgreSQL app closely following the Django Quick Start instructions on the Docker site.

The first time I run Django's manage.py migrate, using the command sudo docker-compose run web python manage.py migrate, it works as expected. The database is built inside the Docker PostgreSQL container just fine.

Changes made to the Django app itself are likewise reflected in the Docker Django container, the moment I save them. It's great!

But if I then change a model in Django and try to update the Postgres database to match, no changes are detected, so no migration happens, no matter how many times I run makemigrations or migrate again.

Basically, every time I change the Django model, I have to delete the Docker containers (using sudo docker-compose rm) and start afresh with a new migration.

I'm still trying to get my head around Docker, and there's an awful lot I don't understand about how it works, but this one is driving me nuts. Why doesn't migrate see my changes? What am I doing wrong?

  • Did you figure out why? The answer below works for me: you just have to log into your running docker container and run your commands. But why does it behave that way? @LouisBarranqueiro Commented Sep 10, 2017 at 10:30
  • "try to update the Postgres database to match the model" - please share how exactly you tried to do that. Commented Sep 12 at 6:47

10 Answers

189

You just have to log into your running docker container and run your commands.

  1. Build your stack: docker-compose -f path/to/docker-compose.yml build
  2. Launch your stack: docker-compose -f path/to/docker-compose.yml up
  3. Display the running containers: docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                         NAMES
3fcc49196a84        ex_nginx          "nginx -g 'daemon off"   3 days ago          Up 32 seconds       0.0.0.0:80->80/tcp, 443/tcp   ex_nginx_1
66175bfd6ae6        ex_webapp         "/docker-entrypoint.s"   3 days ago          Up 32 seconds       0.0.0.0:32768->8000/tcp       ex_webapp_1
# postgres docker container ...
  4. Get the CONTAINER ID of your django app and log into it:
docker exec -t -i 66175bfd6ae6 bash
  5. Now that you are logged in, go to the right folder: cd path/to/django_app

  6. Each time you edit your models, run inside the container: python manage.py makemigrations and python manage.py migrate

I also recommend using a docker-entrypoint script for your django docker container, so that it automatically runs:

  • collectstatic
  • migrate
  • runserver, or start it with gunicorn or uWSGI

Here is an example (docker-entrypoint.sh) :

#!/bin/bash

# Collect static files
echo "Collect static files"
python manage.py collectstatic --noinput

# Apply database migrations
echo "Apply database migrations"
python manage.py migrate

# Start server
echo "Starting server"
python manage.py runserver 0.0.0.0:8000
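
If you go the entrypoint route, the script also needs to be wired into your image or compose file. Here is a minimal compose-side sketch, assuming the script has been copied to /docker-entrypoint.sh in the image and marked executable (the service names and the postgres password are assumptions, not part of the original answer):

services:
  web:
    build: .
    entrypoint: /docker-entrypoint.sh    # runs collectstatic, migrate, then the dev server
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: postgres        # assumed; recent postgres images refuse to start without a password

With this in place, a plain docker-compose up applies any pending migrations before the server starts.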

15 Comments

"I also recommend you to use a docker-entrypoint for your django docker container file to run automatically" - such operations should never be run automatically, migrate especially.
why is that? we are in development environment.
It doesn't matter which environment you're in - deployment should always look the same. If migrations are automated they might be run concurrently, which is highly discouraged. E.g. on Heroku, migrations are never run as part of the deploy.
Concurrently? Here we are in a dev env. I run makemigrations; the next time I launch my stack, migrate will apply any migrations not yet applied, otherwise the django app will not work correctly... It's just a shortcut in a dev env to be sure you have the right database schema for the current app.
@LouisBarranqueiro, I meant multiple instances, single DB.
97

I use this method:

services:
  web:
    build: .
    image: uzman
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "3000:3000"
      - "8000:8000"
    volumes:
      - .:/code
    depends_on:
      - migration
      - db
  migration:
    image: uzman
    command: python manage.py migrate --noinput
    volumes:
      - .:/code
    depends_on:
      - db

With the dependency hierarchy we set up, the migration service runs after the database is up and before the main service. Now whenever you bring the stack up, Docker runs the migrations before starting the server; note that the migration service uses the same image as the web service, so all migrations are taken from your project, avoiding problems.

This way you avoid writing entrypoint scripts or anything else of the sort.

8 Comments

How does build: . work with image:? I get an error that migration can't pull the named image.
I resolved it by putting the build: on migration since it will run before web
Doesn't this keep the uzman image running and consuming RAM forever? Also, what is the uzman image?
depends_on only guarantees that the migration service starts before the web service, but it does not ensure that the migration service has completed running migrations before the web service starts. In cases where the database takes longer to start or migrations take more time, this approach could still face issues.
It's worth adding a health check to the db service so the migration runs when the service has actually finished starting up and to let the web service depend upon successful migration. I found this github.com/docker/compose/issues/9260#issue-1164962300 very useful.
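
Roughly what the last two comments suggest, sketched in compose under the assumption that you are on a Compose version that supports long-form depends_on conditions (the uzman image is taken from the answer; the postgres credentials are made up for illustration):

services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: postgres                    # placeholder credentials
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  migration:
    image: uzman
    command: python manage.py migrate --noinput
    volumes:
      - .:/code
    depends_on:
      db:
        condition: service_healthy                   # wait until postgres actually accepts connections
  web:
    image: uzman
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      migration:
        condition: service_completed_successfully    # only start once migrations have exited cleanly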
59

Have your stack running, then fire off a one-shot docker-compose run command. E.g.

#assume django in container named web
docker-compose run web python3 manage.py migrate

This works great for the built-in (default) SQLite database, but also for an external dockerized database that's listed as a dependency. Here's an example docker-compose.yaml file:

version: '3'

services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db

https://docs.docker.com/compose/reference/run/

1 Comment

Very nice, this is the cleanest method IMO
35

You can use a docker-entrypoint.sh, or a newer solution is to chain multiple commands in your docker-compose.yml:

version: '3.7'

services:
  web:
    build: ./
    command: >
      sh -c "python manage.py collectstatic --noinput &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - ./:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env
    depends_on:
      - postgres

  postgres:
    image: postgres:13.0-alpine
    ports:
      - 5432:5432
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres


32

You can use the docker exec command:

docker exec -it container_id python manage.py migrate

1 Comment

To get the container_id mentioned, run docker ps and look in the COMMAND column for the django server.
10

If you have something like this in your docker-compose.yml

version: "3.7"

services:

  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
    - 8000:8000
    volumes:
        - ./:/usr/src/app
    depends_on:
      - db

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker

Then you can simply run...

~$ docker-compose exec app python manage.py makemigrations
~$ docker-compose exec app python manage.py migrate

3 Comments

Pretty sure makemigrations should be used during development time. By the time you push your build forward only migrate should be necessary.
Evandro's advice is right, AFAIK. However, I'm struggling to find a documentation link that says this explicitly. Can anyone share a link?
"The migration files for each app live in a “migrations” directory inside of that app, and are designed to be committed to, and distributed as part of, its codebase. You should be making them once on your development machine and then running the same migrations on your colleagues’ machines, your staging machines, and eventually your production machines." docs.djangoproject.com/en/5.2/topics/migrations
9

A common thing people get wrong with Docker + Django is when the migration commands should be run. Some common guidelines we should follow are:

  • Don't run makemigrations and / or migrate in the Dockerfile. makemigrations is part of the development workflow; running it during image creation can leave you with an inconsistent history of migration files across images. migrate is part of deployment, and an image build is not necessarily tied to a deployment, so it should not happen there either.

  • Don't run makemigrations as part of the entrypoint / command of your container. Containers are meant to be ephemeral: unless your files are mounted on a volume, any changes you make in a container are lost when it exits. This can also leave you with an inconsistent migration history.

  • Avoid running migrate as part of the container that starts your server. When using Docker Compose or some other orchestrator, you might have multiple replicas of this container. Although Django does run migrations in a transaction, it's better not to take any chances.

Here is what we should be doing when containerizing a Django application:

  1. Run makemigrations manually on your codebase whenever you change the models. This needs to be done outside the container (unless you're using something like devcontainers, or are mounting your code as a volume).
  2. Have a container / docker compose service to run your Django application.
  3. Have another container meant to run migrate, and make sure the application containers depend on it and only run after it has finished. If using Kubernetes or similar, try to set this up as a Job that can be run manually when needed (see the sketch after this list).
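
For the Kubernetes case in point 3, a migration Job could look roughly like the sketch below. The image name and the secret holding the database settings are placeholders made up for illustration, not something prescribed by the answer:

apiVersion: batch/v1
kind: Job
metadata:
  name: django-migrate
spec:
  backoffLimit: 1                 # don't retry endlessly if the migration fails
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/myapp:latest   # placeholder: use the same image as your web Deployment
          command: ["python", "manage.py", "migrate", "--noinput"]
          envFrom:
            - secretRef:
                name: myapp-env                      # placeholder: DB credentials, DJANGO_SETTINGS_MODULE, etc.

You would apply this Job before rolling out the new application image, which mirrors the migrate-then-deploy ordering described above.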


6

Using docker exec, I was getting the following error:

AppRegistryNotReady("Models aren't loaded yet.")

So I used this command instead:

docker-compose -f local.yml run django python manage.py makemigrations


4

I know this is old, and maybe I am missing something here (if so, please enlighten me!), but why not just add the commands to your start.sh script, run by Docker to fire up your instance? It will take only a few extra seconds.

N.B. I set the DJANGO_SETTINGS_MODULE variable to make sure the correct database is used, as I use different databases for development and production (although I know this is not 'best practice').

This solved it for me:

#!/bin/bash
# Migrate the database first
echo "Migrating the database before starting the server"
export DJANGO_SETTINGS_MODULE="edatool.settings.production"
python manage.py makemigrations
python manage.py migrate
# Start Gunicorn processes
echo "Starting Gunicorn."
exec gunicorn edatool.wsgi:application \
    --bind 0.0.0.0:8000 \
    --workers 3


0

If you only want to use a Dockerfile, you can add an ENTRYPOINT instruction. Example of how to run an .sh script:

FROM python:3.9.4
RUN apt-get update
RUN apt-get install libpq-dev --assume-yes
RUN pip3 install psycopg2

COPY . /app
WORKDIR /app

RUN pip install -r requirements.txt
RUN pip3 install debugpy

# Note: the CMD below is appended to the ENTRYPOINT as arguments, so
# docker-entrypoint.sh needs to end with `exec "$@"` for the server to start
ENTRYPOINT ["/app/docker-entrypoint.sh"]

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

3 Comments

How does this solve the issue?
On AWS I did not find a way to run docker-compose.yml in an ECS task... so I opted to use just a Dockerfile and run the migrations from it (/app/docker-entrypoint.sh contains those commands).
If you use ENTRYPOINT and CMD at the same time, then the contents of CMD will be appended to ENTRYPOINT as arguments. I don't think this answer takes that into account. See: docs.docker.com/reference/dockerfile/…
