
I am creating a FastAPI server with simple CRUD functionality, with PostgreSQL as the database. Everything works well in my local environment. However, when I tried to run it in containers using docker-compose up, it failed with this error:

rest_api_1  |   File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
rest_api_1  |     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
rest_api_1  | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
rest_api_1  |   Is the server running on host "db" (172.29.0.2) and accepting
rest_api_1  |   TCP/IP connections on port 5432?
rest_api_1  | 
rest_api_1  | (Background on this error at: https://sqlalche.me/e/14/e3q8)
networks_lab2_rest_api_1 exited with code 1

The directory structure:

├── Dockerfile
├── README.md
├── __init__.py
├── app
│   ├── __init__.py
│   ├── __pycache__
│   ├── crud.py
│   ├── database.py
│   ├── main.py
│   ├── models.py
│   ├── object_store
│   └── schemas.py
├── docker-compose.yaml
├── requirements.txt
├── tests
│   ├── __init__.py
│   ├── __pycache__
│   ├── assets
│   ├── test_create.py
│   ├── test_delete.py
│   ├── test_file.py
│   ├── test_get.py
│   ├── test_heartbeat.py
│   └── test_put.py
└── venv
    ├── bin
    ├── include
    ├── lib
    └── pyvenv.cfg

My docker-compose.yaml

version: "3"

services:
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: ${DATABASE_TYPE}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      POSTGRES_DB: ${DATABASE_NAME}
    ports:
      - "5432:5432"

  rest_api:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0
    env_file:
      - ./.env
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db



volumes:
  postgres_data:

My Dockerfile for the FastAPI server (under ./app)

FROM python:3.8-slim-buster

RUN apt-get update \
    && apt-get -y install libpq-dev gcc \
    && pip install psycopg2

WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

COPY requirements.txt .
RUN pip install -r requirements.txt


# copy project
COPY . .

My database.py

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

from dotenv import load_dotenv
import os


# def create_connection_string():
#     load_dotenv()
#     db_type = os.getenv("DATABASE_TYPE")
#     username = os.getenv("DATABASE_USERNAME")
#     password = os.getenv("DATABASE_PASSWORD")
#     host = os.getenv("DATABASE_HOST")
#     port = os.getenv("DATABASE_PORT")
#     name = os.getenv("DATABASE_NAME")
# 
#     return "{0}://{1}:{2}@{3}/{4}".format(db_type, username, password, host, name)


SQLALCHEMY_DATABASE_URI = "postgresql://postgres:postgres@db:5432/postgres"

engine = create_engine(
    SQLALCHEMY_DATABASE_URI
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

Base = declarative_base()

My main.py

from typing import List, Optional
import os, base64, shutil
from functools import wraps

from fastapi import Depends, FastAPI, HTTPException, UploadFile, File, Request, Header
from fastapi.responses import FileResponse
from sqlalchemy.orm import Session

from . import crud, models, schemas
from .database import SessionLocal, engine

models.Base.metadata.create_all(bind=engine)

app = FastAPI()
SECRET_KEY = os.getenv("SECRET")


# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


def check_request_header(x_token: str = Header(...)):
    if x_token != SECRET_KEY:
        raise HTTPException(status_code=401, detail="Unauthorized")


# endpoints
@app.get("/heartbeat", dependencies=[Depends(check_request_header)], status_code=200)
def heartbeat():
    return "The connection is up"

A more complete log is:

Creating db_1 ... done
Creating rest_api_1 ... done
Attaching to db_1, rest_api_1
db_1        | The files belonging to this database system will be owned by user "postgres".
db_1        | This user must also own the server process.
db_1        | 
db_1        | The database cluster will be initialized with locale "en_US.utf8".
db_1        | The default database encoding has accordingly been set to "UTF8".
db_1        | The default text search configuration will be set to "english".
db_1        | 

...

db_1        | selecting dynamic shared memory implementation ... posix
db_1        | selecting default max_connections ... 100
db_1        | selecting default shared_buffers ... 128MB
db_1        | selecting default time zone ... UTC
db_1        | creating configuration files ... ok
db_1        | running bootstrap script ... ok
db_1        | performing post-bootstrap initialization ... sh: locale: not found
db_1        | 2021-09-29 18:13:35.027 UTC [31] WARNING:  no usable system locales were found
rest_api_1  | Traceback (most recent call last):
rest_api_1  |   File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3240, in _wrap_pool_connect
rest_api_1  |     return fn()

...

rest_api_1  |   File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 584, in connect
rest_api_1  |     return self.dbapi.connect(*cargs, **cparams)
rest_api_1  |   File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
rest_api_1  |     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
rest_api_1  | psycopg2.OperationalError: could not connect to server: Connection refused
rest_api_1  |   Is the server running on host "db" (172.29.0.2) and accepting
rest_api_1  |   TCP/IP connections on port 5432?
rest_api_1  | 
rest_api_1  | 
rest_api_1  | The above exception was the direct cause of the following exception:
rest_api_1  | 
rest_api_1  | Traceback (most recent call last):
rest_api_1  |   File "/usr/local/bin/uvicorn", line 8, in <module>
rest_api_1  |     sys.exit(main())


...


rest_api_1  |   File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 584, in connect
rest_api_1  |     return self.dbapi.connect(*cargs, **cparams)
rest_api_1  |   File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
rest_api_1  |     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
rest_api_1  | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
rest_api_1  |   Is the server running on host "db" (172.29.0.2) and accepting
rest_api_1  |   TCP/IP connections on port 5432?
rest_api_1  | 
rest_api_1  | (Background on this error at: https://sqlalche.me/e/14/e3q8)
rest_api_1 exited with code 1
db_1        | ok
db_1        | syncing data to disk ... ok
db_1        | 
db_1        | 
db_1        | Success. You can now start the database server using:
...

db_1        | 2021-09-29 18:13:36.325 UTC [1] LOG:  starting PostgreSQL 13.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
db_1        | 2021-09-29 18:13:36.325 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1        | 2021-09-29 18:13:36.325 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db_1        | 2021-09-29 18:13:36.328 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1        | 2021-09-29 18:13:36.332 UTC [48] LOG:  database system was shut down at 2021-09-29 18:13:36 UTC
db_1        | 2021-09-29 18:13:36.336 UTC [1] LOG:  database system is ready to accept connections

I have gone through a very extensive search and read docs/tutorials about running FastAPI server and Postgresql with docker-compose, such as

https://testdriven.io/blog/fastapi-docker-traefik/
https://github.com/AmishaChordia/FastAPI-PostgreSQL-Docker/blob/master/FastAPI/docker-compose.yml
https://www.jeffastor.com/blog/pairing-a-postgresql-db-with-your-dockerized-fastapi-app

Their approach is the same as mine, but it keeps failing with the same error: Connection refused ... Is the server running on host "db" (172.29.0.2) and accepting TCP/IP connections on port 5432?

Can anyone help me out here? Any help will be appreciated!

  • First question: please supply your environment when running docker-compose up, especially the DATABASE_TYPE, DATABASE_PASSWORD and DATABASE_NAME variables that you are using to set the postgres container variables. They should always match the contents of the SQLALCHEMY_DATABASE_URI variable in your database.py. Commented Sep 29, 2021 at 19:56

2 Answers


First, the SQLALCHEMY_DATABASE_URI in database.py should match the user, password and database name supplied in your docker-compose.yaml. Ensure that you are running docker-compose up with the correct environment. In your case, the environment for docker-compose up should be:

DATABASE_TYPE=postgres
DATABASE_PASSWORD=postgres
DATABASE_NAME=postgres

But I think that your problem is somewhere else. Even if you declare your API service with depends_on: - db, the Postgres server may not be ready yet. depends_on only ensures that the target container is not started before the referenced one; it guarantees nothing more. It takes some time for the Postgres server to initialize and come up inside the running container, and if your API tries to connect before that happens, it will fail.

The common and simplest solution is to write a bit of code that repeatedly checks whether the database is up and running before the actual connection happens. As you are not supplying the whole traceback (actually, you have replaced the most important part with ...), I can only guess on which line of your code the connection is triggered. I would recommend modifying your database.py to look like this (not tested, may require some adjustments):

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

from dotenv import load_dotenv
import os
import time

def wait_for_db(db_uri):
    """Blocks until a database connection can be established."""

    _local_engine = create_engine(db_uri)

    _LocalSessionLocal = sessionmaker(
        autocommit=False, autoflush=False, bind=_local_engine
    )

    up = False
    while not up:
        # Try to create a session to check if the DB is awake
        db_session = _LocalSessionLocal()
        try:
            # try some basic query
            db_session.execute("SELECT 1")
            db_session.commit()
            up = True
        except Exception as err:
            print(f"Connection error: {err}")
            time.sleep(2)
        finally:
            db_session.close()

    _local_engine.dispose()


SQLALCHEMY_DATABASE_URI = "postgresql://postgres:postgres@db:5432/postgres"

wait_for_db(SQLALCHEMY_DATABASE_URI)

engine = create_engine(
    SQLALCHEMY_DATABASE_URI
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

Base = declarative_base()

A more sophisticated solution is to use docker-compose healthchecks (compose file format v2 only). For v3, the docs recommend handling it manually, similar to the solution presented above.
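For reference, a v2-style healthcheck could look roughly like this (a sketch, not tested against the asker's setup; pg_isready ships with the postgres image, and service_healthy gating requires compose file format 2.x):

version: "2.4"

services:
  db:
    image: postgres:13-alpine
    healthcheck:
      # pg_isready exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  rest_api:
    build: .
    depends_on:
      db:
        condition: service_healthy

With this, compose delays starting rest_api until the db container's healthcheck passes, instead of merely until the container exists.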

To improve this solution, move wait_for_db into a standalone command-line script and run it as a prestart step in the image entrypoint. You will need a prestart step in the entrypoint anyway for running migrations (you do include migrations in your projects, right?)
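The retry loop can also be factored into a small generic helper for such a prestart script, so the waiting logic is testable without a live database (the name wait_for and its parameters are illustrative, not from the original post):

```python
import time


def wait_for(check, retries=30, delay=2.0):
    """Call `check()` until it succeeds or `retries` attempts are exhausted.

    `check` is any zero-argument callable that raises on failure, e.g. a
    function that opens a DB session and runs `SELECT 1`. Returns True on
    success, False if all attempts failed.
    """
    for attempt in range(1, retries + 1):
        try:
            check()
            return True
        except Exception as err:
            print(f"Attempt {attempt}/{retries} failed: {err}")
            time.sleep(delay)
    return False
```

In a prestart script you would call something like wait_for(lambda: engine.connect().close()) and exit non-zero on False, so the entrypoint can abort instead of starting uvicorn against a dead database.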



You could also handle retries using Docker's restart mechanism. With max_attempts and the right delay, you can make it so the DB will most likely be ready by the second try, while still preventing infinite restarts.

rest_api:
    ...
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s # default
        max_attempts: 5
    ...

Note that I'm not a Docker expert, but this seems to align better with the "containers as cattle, not pets" paradigm. Why add complexity to the application when the issue can be handled by existing functionality at a higher layer?
