
I am running an Azure Pipelines container job, in which I spin up a second Docker container manually like this:

jobs:
  - job: RunIntegrationTests
    pool:
      vmImage: "ubuntu-18.04"
    container:
      image: mynamespace/frontend_image:latest
      endpoint: My Docker Hub Endpoint
    steps:
      - script: |
          docker run --rm --name backend_container -p 8000:8000 -d backend_image inv server

I have to start this container manually because the image lives in AWS ECR, and the password authentication that Azure provides for ECR only works with a token that expires, so it is not practical here. How can I make backend_container reachable from subsequent steps of my job? I have tried starting my job with:

options: --network mynetwork

and sharing that network with "backend_container", but while the "frontend" container is starting I get the error:

docker: Error response from daemon: Container cannot be connected to network endpoints: mynetwork

This might be because Azure is trying to attach the container to multiple networks.
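
For reference, this is roughly how that attempt looked; the placement of options: on the container resource is my reconstruction of the attempt described above, not a verbatim copy of the original pipeline:

jobs:
  - job: RunIntegrationTests
    pool:
      vmImage: "ubuntu-18.04"
    container:
      image: mynamespace/frontend_image:latest
      endpoint: My Docker Hub Endpoint
      options: --network mynetwork   # assumes mynetwork already exists on the agent host
    steps:
      - script: |
          docker run --rm --name backend_container --network mynetwork -p 8000:8000 -d backend_image inv server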

2 Comments
  • stackoverflow.com/questions/60301221/… Commented Oct 24, 2020 at 17:09
  • I don't see how that answer solves my problem. I get that I can't start a container on multiple networks, but that's not the issue I am having. Commented Oct 24, 2020 at 17:34

3 Answers

5

I was struggling with this same issue and couldn't use the solution of querying the docker networks because our build node has multiple agents installed that may run concurrently, leading to additional vsts_network_* entries.

I dug into the Azure Pipelines Agent source code and found a better solution. When the agent creates service or job containers, it creates a vsts_network_<guid> network with a randomly assigned GUID, but it also exposes this network name in an (undocumented) agent variable, $(Agent.ContainerNetwork), that can be accessed from your pipeline.

steps:
  - task: DownloadPipelineArtifact@2
    inputs:
      artifactName: my-image.img
      targetPath: images
    target: host    # Important, to run this on the host and not in the container

  # Use the Agent Variable to connect the container to the agent's docker network
  - bash: |
      docker load -i images/my-image.img
      docker run --rm --network $(Agent.ContainerNetwork) --name my-container -p 8042:8042 my-image
    target: host

1 Comment

Awesome! Such a "secret"/hidden env variable can also be found by printing all env variables in a step. This is quite an elegant solution.
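
A minimal sketch of such a discovery step, assuming you just want to dump the agent's environment on the host and look for the network name; the AGENT_CONTAINERNETWORK spelling follows the usual variable-to-environment mapping and is not confirmed by the comment above:

steps:
  - bash: |
      # Print every environment variable the agent exposes; when the job uses
      # containers, the network name should show up as AGENT_CONTAINERNETWORK
      env | sort
    target: host
    displayName: Dump agent environment variables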
4

To run a container job and attach a container from a custom image to the network it creates, you can use steps as shown in the example below:

steps:
  - task: DownloadPipelineArtifact@2
    inputs:
      artifactName: my-image.img
      targetPath: images
    target: host    # Important, to run this on the host and not in the container

  - bash: |
      docker load -i images/my-image.img
      docker run --rm --name my-container -p 8042:8042 my-image

      # This is not really robust, as we rely on naming conventions in Azure Pipelines,
      # but I assume they won't change to a completely random name anyway.
      network=$(docker network list --filter name=vsts_network -q)

      docker network connect $network my-container
      docker network inspect $network
    target: host

Note: it's important that these steps run on the host, and not in the container that is started for the container job. This is done by specifying target: host for the task.

In the example, the container started from the custom image can then be addressed as my-container from within the container job.
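
For example, a later step that runs inside the job container (i.e. without target: host) could reach it by that name. A minimal sketch, where the port comes from the docker run above and the /health path is just an assumed endpoint:

  - script: |
      # Runs inside the container job, which sits on the same network,
      # so the manually started container resolves by its container name
      curl --fail http://my-container:8042/health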

3 Comments

This was just what I was looking for. Once I had the network I was able to use it via docker run --network $network .....
This sounds like a better answer :-). Thanks for taking the time, I marked this as the correct one.
How would you run later script steps from within the container? Using this method do you then have to use the 'docker exec' command to run every step you need to complete a CI test?
1

I ended up not using the container: property at all and started all containers manually, so that I could specify the same network for them:

steps:
  - task: DockerInstaller@0
    displayName: Docker Installer
    inputs:
      dockerVersion: 19.03.8
      releaseType: stable
  - task: Docker@2
    displayName: Login to Docker hub
    inputs:
      command: login
      containerRegistry: My Docker Hub
  - script: |
      docker network create integration_tests_network
      docker run --rm --name backend --network integration_tests_network -p 8000:8000 -d backend-image inv server
      docker run --rm --name frontend -d --network integration_tests_network frontend-image tail -f /dev/null

And run subsequent commands in the frontend container with docker exec, as sketched below.
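
A minimal sketch of such a follow-up step, where inv test is only a placeholder for whatever command the integration tests actually need:

  - script: |
      # Execute the test command inside the already-running frontend container
      docker exec frontend inv test
    displayName: Run integration tests in the frontend container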

2 Comments

Thanks for sharing your solution here. Would you please accept your solution as the answer, so that other members who hit the same issue can find it easily? Have a nice day :)
I guess I was waiting to see if someone had a better answer, but in the meantime I'll mark my solution as the answer
