
I'm trying to use the TensorFlow module in a Python application running in a Docker container (I'm actually using Keras, but the errors come from TensorFlow).

I have models (.json and .h5 files) that I would like to load in order to use them:

import logging
import os
from keras.models import model_from_json  # rebuilds a Keras model from its JSON description
from numpy import array
import json

def load_models():
    global loaded_h_model
    global loaded_u_model
    global loaded_r_model
    global loaded_c_model

    model_path = os.getenv("MODELPATH", "./models/")

    def load(name):
        # Rebuild the architecture from the JSON file, then restore the trained weights
        with open(os.path.join(model_path, 'model_' + name + '.json'), 'r') as json_file:
            model = model_from_json(json_file.read())
        model.load_weights(os.path.join(model_path, 'model_' + name + '.h5'))
        return model

    loaded_h_model = load('HD')
    loaded_u_model = load('UD')
    loaded_r_model = load('RD')
    loaded_c_model = load('CD')
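As an aside, if any of these files is missing from the image, the loading code surfaces it only as a bare IOError. A small, Keras-independent sketch (check_model_files is a hypothetical helper, assuming the model_XX.json / model_XX.h5 naming used above) that verifies the expected files exist before loading:

```python
import os

def check_model_files(model_path, names=("HD", "UD", "RD", "CD")):
    """Return a list of expected model files that are missing from model_path."""
    missing = []
    for name in names:
        for ext in (".json", ".h5"):
            path = os.path.join(model_path, "model_" + name + ext)
            if not os.path.isfile(path):
                missing.append(path)
    return missing
```

Calling this at startup and logging the result makes "file not copied into the Docker image" failures obvious instead of surfacing mid-load.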

Here is the Dockerfile I use:

FROM python:3.7

# copy source code files
COPY machinelearning.py ./

# copy models files
COPY models/* ./models/

# install dependencies
RUN pip3 install --upgrade pip \
    && pip3 install h5py \
    && pip3 install tensorflow \
    && pip3 install keras

# run script
CMD [ "python", "./machinelearning.py" ]

But when I run the Docker container, I get the following warnings/errors:

2020-01-29 09:40:24.542588: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
2020-01-29 09:40:24.542727: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
2020-01-29 09:40:24.542743: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Using TensorFlow backend.
2020-01-29 09:40:25.394254: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-01-29 09:40:25.394289: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303)
2020-01-29 09:40:25.394321: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (dd231f397f1f): /proc/driver/nvidia/version does not exist
2020-01-29 09:40:25.394539: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-29 09:40:25.419513: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1992000000 Hz
2020-01-29 09:40:25.420250: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55cab5bf9760 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-29 09:40:25.420299: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

I believe I need to install additional libraries, or a different version of TensorFlow/Keras, in my Dockerfile.

How can I solve this issue? Thanks

  • Hey, it's better to use an "ENTRYPOINT" instruction instead of "CMD". Commented Sep 22, 2023 at 16:37
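For reference, the change the comment suggests is a one-line swap in the Dockerfile; in exec form (a sketch, behaviorally equivalent here since nothing overrides the command at run time):

```dockerfile
# Exec-form ENTRYPOINT: the script becomes the container's main process
ENTRYPOINT [ "python", "./machinelearning.py" ]
```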

1 Answer


First of all, you need to COPY requirements.txt to a destination in the image. Your requirements.txt should list each dependency with a pinned version number.

FROM python:latest
COPY requirements.txt /usr/src/code/

After that, run

RUN pip3 install -r requirements.txt

instead of the following code in your Dockerfile:

RUN pip3 install --upgrade pip \
    && pip3 install h5py \
    && pip3 install tensorflow \
    && pip3 install keras 

I hope the problem will be resolved by pinning version numbers in requirements.txt, rather than just passing the --upgrade flag.
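For example, a pinned requirements.txt might look like this (the version numbers are illustrative assumptions; pin whatever versions your code was actually developed against, e.g. via pip3 freeze):

```
# requirements.txt -- versions are illustrative, not prescriptive
h5py==2.10.0
tensorflow==1.15.2
keras==2.2.4
```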

Also, don't run upgrades if they aren't needed.
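For what it's worth, the libnvinfer/libcuda lines in the question's log are only warnings about missing GPU libraries; in a CPU-only container they are expected and harmless. If they clutter the logs, TensorFlow's TF_CPP_MIN_LOG_LEVEL environment variable can filter them (a sketch; it must be set before tensorflow is imported):

```python
import os

# Must be set before `import tensorflow` to take effect.
# 0 = all messages, 1 = hide INFO, 2 = hide INFO and WARNING, 3 = hide all but FATAL
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
```

The same variable can also be set in the Dockerfile with ENV TF_CPP_MIN_LOG_LEVEL=2.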


1 Comment

Thanks, indeed I was using TensorFlow 2.x, and I downgraded to 1.15.2, the latest 1.x version. Now it works.
