
I want to have a Docker image of my Angular application that serves the app from inside a container, so a user can pull the image, run the application, and start developing conveniently.

My current Dockerfile is:

FROM node:20.11.0-alpine

COPY . /ng17/

WORKDIR /ng17

RUN npm install -g @angular/cli

RUN npm install

CMD ["ng", "serve", "--host", "0.0.0.0"]

Docker runs my app on port 4201 and that's fine. Still, the problem is that the code is not recompiled in real time the way it is when I run ng serve regularly, without Docker. I believed that running ng serve from inside the container would take care of that.

My docker flow is basic:

docker build -t ng17:0.1 .
docker run -d -p 4201:4200 ng17:0.1

I suspect the problem is somewhere around CMD ["ng", "serve", "--host", "0.0.0.0"] and docker run -d -p 4201:4200 ng17:0.1, but I can't figure it out alone. My humble Docker experience tells me the answer is connected with exec, bash, etc.

P.S. I'm learning Docker's key principles, especially to understand how handy it can be in my frontend development. Next I plan to move on to docker-compose, wire up a local backend, etc. Any strategies, pros and cons, setups, and general advice on this topic are very much appreciated.

  • How are you editing the source files? Do your edits also show up inside the container? Are you mounting a volume or copying the files? Commented Feb 11, 2024 at 21:54
  • @Jeppe, I'm editing from the WebStorm IDE. The files are not changing in the container, because I run docker build beforehand, so as I understand it the image is "static". I'm not mounting a volume or copying files at runtime (only via the Dockerfile's COPY). Commented Feb 11, 2024 at 21:57
  • Well, that's why it's not happening. Either mount the working directory so the CLI (inside the container) sees the changes, or ng build and then serve the results in a multi-stage build if you want a container to deploy (a sketch follows these comments). Commented Feb 11, 2024 at 22:03
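For illustration, a minimal multi-stage Dockerfile along the lines of that last comment could look like the sketch below. The dist/ng17/browser path is an assumption on my part; it has to match the outputPath in your angular.json.

# Stage 1: build the app into static files
FROM node:20.11.0-alpine AS build
WORKDIR /ng17
COPY package*.json ./
RUN npm install
COPY . .
RUN npx ng build

# Stage 2: serve only the compiled output with nginx
FROM nginx:alpine
# Adjust ng17 to your project's outputPath from angular.json (assumed here)
COPY --from=build /ng17/dist/ng17/browser /usr/share/nginx/html

That gives you a small, deployable image, but it bakes the code in at build time, so it doesn't solve the live-reload problem; the answer below does.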

1 Answer


the problem is that the code is not recompiled in real time the way it is when I run ng serve regularly, without Docker.

This is exactly what you should expect from your current workflow: you copy the files into the Docker image only once, during docker build, on the single line COPY . /ng17/.

I suggest building a simpler image:

FROM node:20.11.0-alpine

WORKDIR /work

RUN npm install -g @angular/cli

CMD npm install && ng serve --host 0.0.0.0

You only need the Angular CLI installed at build time; the rest is provided when the container starts, since you want real-time updates.
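One caveat from my side, not part of the original answer: on some setups (notably Docker Desktop on macOS or Windows) file-change events don't always reach a bind mount, so ng serve may not notice edits even with the volume mounted. The Angular CLI's --poll flag falls back to polling for changes, here every 2000 ms:

CMD npm install && ng serve --host 0.0.0.0 --poll 2000

Polling costs a little CPU, so only reach for it if live reload doesn't trigger with the plain command.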

To build it, use the same command you already have: docker build -t ng17:0.1 .

To start a container, use a slightly longer command:

docker run --init --rm -t --net host -v $PWD:/work:rw -u $(id -u):$(id -g) ng17:0.1
  • Instead of port mapping (-p 4201:4200), I think --net host is simpler for development, but it's a matter of preference; both will work (a port-mapping variant is shown after this list).
  • --rm just removes the container when it exits, since it's no longer needed: all our changes live in the mount created with -v, so we don't care about other changes inside the container, and it keeps Docker cleaner (if you run docker ps -a without cleaning up old containers, the list can get long and just eat your disk space).
  • As I tested, Angular's ng serve doesn't handle Ctrl+C well; the quick fix is --init (it can be used with any application that doesn't handle signals correctly).
  • -t attaches a tty to the running container, so you see progress while npm install fetches packages; without it the progress bar doesn't appear and you might think nothing is going on.
  • I removed -d because I like to have a console open somewhere and watch the logs; without it, when your Angular app fails to compile you have to run docker logs $CONTAINER_NAME to see what went wrong, which makes life harder. Also, when you're done working, just press Ctrl+C in that console and the container exits and everything is cleaned up.
  • Next, -v $PWD:/work:rw mounts your current working directory into the running container; this is what makes changes in your IDE instantly visible to the container, so ng serve can reload the app. I also added :rw so that changes made by the container are visible on the host, mostly because npm install installs packages that we want to persist between container restarts.
  • The last parameter is -u $(id -u):$(id -g). By default most Docker images run as root, but your local user (on the host machine) is different in 99% of cases, so you would end up with some files owned by your local user and the rest (mostly node_modules) owned by root, which makes them hard to delete when you need to. Forcing the same user in the container as on the host avoids that.
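For completeness, if you prefer the port mapping from the first bullet over --net host, the equivalent command would be the one below; ng serve still listens on 4200 inside the container, so the app is reachable at localhost:4201:

docker run --init --rm -t -p 4201:4200 -v $PWD:/work:rw -u $(id -u):$(id -g) ng17:0.1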

All of this docker run configuration can be moved into docker-compose, so you can bring everything up with a shorter command: docker-compose up

version: '3.9'

services:
  frontend:
    build: .
    init: true
    tty: true
    user: "${UID:-1000}:${GID:-1000}"
    network_mode: host
    volumes:
    - $PWD:/work:rw

Unfortunately, in this version it's harder to get the correct user id inside the container, so I default it to 1000, which is correct in many cases; you can change it to the correct value manually, or start docker-compose with

UID=$(id -u) GID=$(id -g) docker-compose up
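Alternatively, a suggestion of my own rather than part of the original answer: put the values into a .env file next to docker-compose.yml, which Compose reads automatically for variable substitution:

# .env — picked up automatically by docker-compose
# use the output of `id -u` and `id -g` on your host
UID=1000
GID=1000

With that file in place, a plain docker-compose up picks up your host's ids.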

Note: if you have a recent Docker version, you can use docker compose instead of the docker-compose command (a space instead of a dash).
