Let's say you have a repository in GitLab. To make it easier to distribute the application and develop it across a team, you package the app as a Docker container, which can be spun up on any laptop via Docker Desktop. You have also set up fully automated API tests that fire a set of API requests at the booted-up Docker container, e.g. at https://localhost:8080/. These tests are implemented in code, so they can be executed locally via the CLI, the language's tooling, or the IDE's test integration. The question now is: how do I replicate this in a GitLab CI pipeline?

From what I understand, I need two docker container environments within my CI pipeline:

  1. one mirroring my local IDE development environment, from which the test requests are fired.

  2. another one to actually run the Docker container that hosts the REST API at which the test requests are fired.

How is such a setup mirrored in GitLab CI? What I am currently thinking is:

  1. The first container represents an environment that is very unlikely to change, as the IDE development environment is predictable and isolated from the running application. So for this container 1, I would create a Dockerfile, add it to the same project's repository, and have it replicate my local development environment (the IDE environment from which I normally execute the automated tests). I would then build that image and upload it to the private container registry provided by GitLab. This image is then referenced in the CI pipeline file via the `image` keyword to provide the above-mentioned container 1.

  2. Container 2 is where I am confused. It seems I cannot run docker-compose or similar directly within a CI pipeline and would need something like `Docker in Docker` to boot up my localhost REST API container inside the job. But before starting down that road, I wanted to be sure there's no better way to set this up (see the sketch just below for what I have in mind). Since the build of container 2 is precisely what must be tested within the CI pipeline, it does not make sense to upload a pre-built image to my registry for it. I would also like to avoid replicating e.g. the build logic of my Dockerfile within my CI pipeline, as that would effectively eliminate the single source of truth for the image build.
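
Concretely, the shape I have in mind is something like the sketch below. All image names, versions, and the final test command are placeholders, I assume the test-env image would need the docker CLI installed, and I am not sure the Docker-in-Docker part is right:

```yaml
api-tests:
  # container 1: prebuilt test-environment image from the project registry
  image: registry.gitlab.com/mygroup/myproject/test-env:latest
  services:
    - docker:24.0.5-dind        # would provide a Docker daemon for container 2
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""      # plain TCP to keep the sketch simple
  script:
    # container 2: the REST API under test, built from the repo's own Dockerfile
    - docker build -t myapp:test .
    - docker run -d --name api -p 8080:8080 myapp:test
    # ports published inside dind are reachable via the `docker` service hostname
    - run-api-tests --base-url http://docker:8080   # placeholder for the real test command
```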


You should run this integration test against the actual image you're about to publish. The CI pipeline should look roughly like this:

  1. Run unit tests against the code you're testing (may or may not use a container)
  2. Build the image from the Dockerfile
  3. Run integration tests against that image
  4. Push the image to a registry
  5. (Optional) Trigger some downstream deployment pipeline

That is, you are not building a special "developer localhost image" to run these tests: you are testing the actual image you're getting ready to push out to a registry and potentially run in production.
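
A rough sketch of that flow as a `.gitlab-ci.yml`, assuming Docker-in-Docker is available and using a throwaway `candidate` tag to hand the image between stages (only the final tag is published after the tests pass; versions, tag scheme, and the `my-test-driver` image are illustrative):

```yaml
stages: [unit, build, integration, publish]

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""

unit-tests:
  stage: unit
  image: python:3.12            # plain language-runtime image, nothing custom
  script:
    - pip install -r requirements.txt
    - pytest tests/unit

build-image:
  stage: build
  image: docker:24.0.5
  services: [docker:24.0.5-dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:candidate-$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:candidate-$CI_COMMIT_SHORT_SHA"

integration-tests:
  stage: integration
  image: docker:24.0.5
  services: [docker:24.0.5-dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker run -d --name api "$CI_REGISTRY_IMAGE:candidate-$CI_COMMIT_SHORT_SHA"
    # the driver shares the API container's network, so it sees it on localhost
    - docker run --rm --network container:api my-test-driver pytest tests/integration

publish:
  stage: publish
  image: docker:24.0.5
  services: [docker:24.0.5-dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:candidate-$CI_COMMIT_SHORT_SHA"
    - docker tag "$CI_REGISTRY_IMAGE:candidate-$CI_COMMIT_SHORT_SHA" "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:latest"
```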

There is no technical requirement that everything runs in containers. If the test driver is just accessing the service's HTTP APIs – maybe it's a Python script using pytest and requests – you could happily run the test driver outside a container, but targeting an application packaged in a container. Similarly, you may be able to run the unit tests outside a container, or use an out-of-the-box language-runtime image, before you start doing anything with a locally-built image.
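
For instance, once the build stage has pushed a candidate tag, GitLab's `services:` keyword can start that image next to a stock Python job, so the test driver itself never needs a custom image. A sketch, assuming the tests read their target URL from an environment variable and the app listens on port 8080:

```yaml
api-tests:
  image: python:3.12                       # the test driver runs straight in the job
  services:
    - name: "$CI_REGISTRY_IMAGE:candidate-$CI_COMMIT_SHORT_SHA"
      alias: api                           # the service is reachable by this hostname
  script:
    - pip install pytest requests
    - API_BASE_URL=http://api:8080 pytest tests/integration
```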

On some level, Docker Compose is "just" a CLI tool that calls the Docker API. If you can interact with Docker containers, you should also be able to run Compose, possibly by invoking it as a shell command. You can also recreate some of the work Compose does by manually creating Docker networks and attaching containers to them. I've done this on Jenkins in the past by creating a network, starting the service under test, and then starting the client program as a container on the same network.
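
Expressed as a GitLab CI job, that manual-network approach is just a handful of `docker` commands (image names are placeholders, and it assumes Docker-in-Docker as above):

```yaml
manual-network-tests:
  image: docker:24.0.5
  services: [docker:24.0.5-dind]
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker network create testnet
    # service under test, attached to the shared network
    - docker run -d --name api --network testnet myapp:candidate
    # client/test program as a second container on the same network
    - docker run --rm --network testnet -e API_BASE_URL=http://api:8080 my-test-driver pytest tests/integration
```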

Uhm, any idea why I cannot add a comment to your answer, @David Maze? My follow-up question: I intended to do exactly as you described, but for these two of your steps:

Build the image from the Dockerfile

Run integration tests against that image

You would need docker-in-docker, right? I guess what you described is exactly what's specified here:

You can specify an additional image by using the services keyword. This additional image is used to create another container, which is available to the first container. The two containers have access to one another and can communicate when running the job.
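
If I read that correctly, the minimal shape would be something like this (versions are just examples):

```yaml
integration-tests:
  image: docker:24.0.5          # the first container, where the job script runs
  services:
    - docker:24.0.5-dind        # the additional container: a Docker daemon
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker info               # confirms the job can reach the daemon
```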

My only issue is: the generation of the image (the first of the two steps quoted above) is somewhat complex, and I don't want to replicate the entire build flow of my Dockerfile within my CI pipeline. So if the build is very long (around 500 lines in my Dockerfile), how would you translate that into a CI build job?
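
To be concrete, what I would hope for is that the CI job delegates entirely to the existing Dockerfile instead of restating its ~500 lines, roughly:

```yaml
build-image:
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  script:
    # the Dockerfile in the repo stays the single source of truth;
    # the job only invokes it, it does not duplicate its steps
    - docker build -t myapp:ci .
```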
