
I am trying to connect a local Ollama model (Llama 2), which uses port 11434 on my local machine, to my Docker container running Ubuntu 22.04. I can confirm that the Ollama model works and is accessible through http://localhost:11434/. In my Docker container, I am also running a gmailctl service and was able to successfully connect to the Gmail API to read and send emails from a Google account. Now I want to wait for an incoming email and let the LLM answer it back to the sender. However, I am not able to publish port 11434 in order to connect the model with the container.
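For reference, once the container can actually reach Ollama, the email-answering step is just an HTTP call to Ollama's /api/generate endpoint. A minimal sketch (assuming a pulled model named "llama2" and that the container can reach Ollama at the URL used below, which is exactly what this question is about):

# Ask the local Ollama server to draft a reply; with "stream": false the
# response is a single JSON object whose "response" field holds the text.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Write a short, polite reply to this email: ...",
  "stream": false
}'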

I tried setting up the devcontainer.json file to forward the ports:

{
  "name": "therapyGary",
  "build": {
    "context": "..",
    "dockerfile": "../Dockerfile"
  },
  "forwardPorts": [80, 8000, 8080, 11434]
}

I tried exposing the ports in the Dockerfile:

EXPOSE 80
EXPOSE 8000
EXPOSE 8080
EXPOSE 11434

These seem to add the ports to the container, and Docker is aware of them, but when I check the port status for the running container, I get this message:

Error: No public port '11434' published for 5ae41009199a

I also tried setting up a docker-compose.yaml file:

services:
  my_service:
    image: 53794c7c792c  # Replace with your actual Docker image name
    ports:
      - "11434:11434"
      - "8000:8000"
      - "8080:8080"
      - "80:80"

But there seems to be a problem with it: any container started from it stops immediately.

I tried stopping the Ollama model before running the container, so as not to create a port conflict, but that did not help either. Any suggestions are very welcome.

Thanks!

-- edit -- adding Dockerfile code:

FROM ubuntu:22.04

ENV DEBIAN_FRONTEND=noninteractive
ENV GMAILCTL_VERSION=0.10.1

RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    xdotool \
    curl \
    software-properties-common \
    libreoffice \
    unzip \
    && apt-get clean

RUN pip3 install --upgrade pip
RUN pip3 install google-api-python-client google-auth-httplib2 google-auth-oauthlib pandas requests

RUN useradd -ms /bin/bash devuser

RUN mkdir -p /workspace && chown -R devuser:devuser /workspace

USER root

WORKDIR /workspace

COPY . .

RUN chown -R devuser:devuser /workspace

EXPOSE 80
EXPOSE 8000
EXPOSE 8080
EXPOSE 11434

CMD [ "bash" ]

  • So it sounds like you've got the LLM running on your host machine, right? What the EXPOSE command does is open the port in the container, so you're opening the port in the container, where the model isn't running. You'd need to change the network on the container to host, so it can see services running on your local network, and have it connect to the Ollama port, not expose it in the container. In order to tell you how to do that, we'd need to see what container you're trying to run. Commented Jun 30, 2024 at 18:58
  • Yes, that's correct: the LLM is running on my local Windows machine and I was trying to connect to it from my Docker container running Linux. Your comment did make me think, however, why not simply run the LLM within the container, which I did, and it solved the issue. However, I would still be interested in solving this problem "the hard way" just for the sake of learning a new trick (if that's not too much trouble!). I edited the original post to contain the Dockerfile. Thanks! Commented Jul 1, 2024 at 21:36
  • One curious question: since you ran your chatbot application inside the Docker container, did you also pull the Ollama models (like mistral:latest or nomic-embed-text:latest) inside the Docker container? If yes, then I would like to know how you achieved this. Commented Sep 10, 2024 at 12:17
  • Yes, pulling the Ollama model inside the Docker container was the key solution to my issue. I used the command ollama run llama2, where "llama2" is just an example of a model. It automatically downloads and runs the given model and lets you interact with it inside the container (roughly as in the sketch below). Commented Sep 11, 2024 at 13:44
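For anyone wondering what that looks like in practice, here is a rough sketch of running Ollama inside the container (assumptions: the official Ollama Linux install script is acceptable in your image, and "llama2" is just a placeholder model name):

# Install Ollama inside the container (official Linux install script)
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama server in the background, then pull and run a model
ollama serve &
ollama run llama2

Note that this downloads the model into the container's filesystem, so you may want to mount a volume for the model files to avoid re-downloading them on every rebuild.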

2 Answers


If you are on Windows 11, use http://host.docker.internal:11434 as the base URL in your connection credentials for your Ollama account.

(Screenshot: self-hosted n8n via Docker connected to Ollama on the local machine)
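A quick way to check this from inside the container (assuming curl is installed there; /api/tags simply lists the models the Ollama server knows about):

curl http://host.docker.internal:11434/api/tags

host.docker.internal resolves to the host machine from inside containers on Docker Desktop (Windows and macOS), so this works without any extra network configuration there.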




So remove the EXPOSE 11434 statement; what that does is let you connect to a service inside the Docker container using that port. The service on port 11434 is running on your host machine, not your Docker container.

To let the Docker container see port 11434 on your host machine, you need to use the host network driver, so the container can see anything on your local network. To do this, you can use the runArgs parameter:

{ "name": "therapyGary", 
"build": 
  { "context": "..", 
    "dockerfile": "../Dockerfile" 
  }, 
"forwardPorts": [80, 8000, 8080, 11434] 
}

would become

{ "name": "therapyGary", 
"build": 
  { "context": "..", 
    "dockerfile": "../Dockerfile" 
  }, 
"runArgs": ["--net=host"]
}

Then, from within your container, you should be able to contact the LLM on port 11434 by referencing localhost or 127.0.0.1, e.g. with netcat: nc localhost 11434. If you're using Docker Desktop, you need to enable host networking by going to the Features in development tab in Settings and selecting the Enable host networking option, per the documentation here: Docker Desktop

As a side note, you can use --net=host or --network=host, both work on my machine using Windows 11 and Docker Desktop.
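The plain docker run equivalent looks like this (the image tag my-image is just a placeholder for whatever you build from the Dockerfile above):

# Run the container on the host network (Linux, or Docker Desktop with host networking enabled)
docker run --rm -it --network=host my-image bash

# From inside the container, the host's Ollama server should answer:
curl http://localhost:11434/api/tags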

If you want to use a Docker Compose YAML file, you would use the network_mode parameter:

services: 
  my_service: 
    image: 53794c7c792c 
    # Replace with your actual Docker image name 
    network_mode: "host"

Because you're putting the container on the host network, there is no need to expose ports, since it's like plugging your container directly into your network. See the Note in the documentation.
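To sanity-check the Compose version of this, something like the following should work (my_service matches the service name above; it assumes curl is installed in the image):

# Start the service, then hit the host's Ollama server from inside it
docker compose up -d
docker compose exec my_service curl http://localhost:11434/api/tags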

References:

Dev Container Image Specific Properties

Docker Engine Host Network Driver Reference

2 Comments

  • Thank you so much halldk, everything now works as expected. Couldn't have made it without you!
  • Happy to help! Would you mind marking this as the accepted answer? @PiotrGrochowski
