How to test a dockerized application in an Azure DevOps (Server) pipeline? - python

I have a simple python dockerized application whose structure is
/src
- server.py
- test_server.py
Dockerfile
requirements.txt
in which the docker base image is Linux-based, and server.py exposes a FastAPI endpoint.
For completeness, server.py looks like this:
from fastapi import FastAPI
from pydantic import BaseModel

class Item(BaseModel):
    number: int

app = FastAPI(title="Sum one", description="Get a number, add one to it", version="0.1.0")

@app.post("/compute")
async def compute(input: Item):
    return {'result': input.number + 1}
Tests are meant to be done with pytest (following https://fastapi.tiangolo.com/tutorial/testing/) with a test_server.py:
from fastapi.testclient import TestClient
from server import app
import json

client = TestClient(app)

def test_endpoint():
    """test endpoint"""
    response = client.post("/compute", json={"number": 1})
    values = json.loads(response.text)
    assert values["result"] == 2
Dockerfile looks like this:
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY . /app
RUN pip install -r requirements.txt
WORKDIR /app/src
EXPOSE 8000
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
At the moment, if I want to run the tests on my local machine within the container, one way to do this is
Build the Docker container
Run the container, get its name via docker ps
Run docker exec -it <mycontainer> bash and execute pytest to see the tests passing.
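In shell terms, those three steps look roughly like this (the image and container names, sum-one and sum-one-test, are hypothetical):
# build the image
docker build -t sum-one .
# run it detached so it keeps serving
docker run -d --name sum-one-test sum-one
# execute the tests inside the running container
docker exec -it sum-one-test pytest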
Now, I would like to run tests in Azure DevOps (Server) before pushing the image to my Docker registry and triggering a release pipeline. If this sounds an OK thing to do, what's the proper way to do it?
So far, I hoped that something along the lines of adding a "PyTest" step in the build pipeline would magically work.
I am currently using a Linux agent, and the step fails.
The failure is not surprising, as (I think) the container is not run after being built, and therefore pytest can't run within it either :(
Another way to solve this is to include pytest commands in the Dockerfile and deal with the tests in a release pipeline. However, I would like to decouple the testing from the container that is ultimately pushed to the registry and deployed.
Is there a standard way to run pytest within a Docker container in Azure DevOps, and get a graphical report?

Update your azure-pipelines.yml file as follows to run the tests in Azure Pipelines
Method-1 (using docker)
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Docker@2
  inputs:
    command: 'build'
    Dockerfile: '**/Dockerfile'
    arguments: '-t fast-api:$(Build.BuildId)'
- script: |
    docker run fast-api:$(Build.BuildId) python -m pytest
  displayName: 'Run PyTest'
(Screenshot of the successful pipeline omitted.)
Method-2 (without docker)
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    Python37:
      python.version: '3.7'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'
- script: |
    pip install pytest pytest-azurepipelines
    python -m pytest
  displayName: 'pytest'
BTW, I have one simple FastAPI project you can reference if you want.

Test your docker script using pytest-azurepipelines:
- script: |
    python -m pip install --upgrade pip
    pip install pytest pytest-azurepipelines
    pip install -r requirements.txt
    pip install -e .
  displayName: 'Install dependencies'
- script: |
    python -m pytest /src/test_server.py
  displayName: 'pytest'
Running pytest with the plugin pytest-azurepipelines will let you see your test results in the Azure Pipelines UI.
https://pypi.org/project/pytest-azurepipelines/

You can run your unit tests directly from within your Docker container using pytest-azurepipelines (which needs to be installed in the Docker image beforehand):
- script: |
    docker run --mount type=bind,source="$(pwd)",target=/results \
      --entrypoint /bin/bash my_docker_image \
      -c "cd results && pytest"
  displayName: 'tests'
  continueOnError: true
pytest will create an xml file containing the test results, which is made available to the Azure DevOps pipeline thanks to the --mount flag in the docker run command. pytest-azurepipelines will then publish the results directly to Azure DevOps.

Related

Keep postgres docker container in azure pipeline job running

I'm rather new to Azure and currently playing around with the pipelines. My goal is to run a postgres alpine docker container in the background, so I can perform tests through my python backend.
This is my pipeline config
trigger:
- main

pool:
  vmImage: ubuntu-latest

variables:
  POSTGRE_CONNECTION_STRING: postgresql+psycopg2://postgres:passw0rd@localhost/postgres

resources:
  containers:
  - container: postgres
    image: postgres:13.6-alpine
    trigger: true
    env:
      POSTGRES_PASSWORD: passw0rd
    ports:
    - 1433:1433
    options: --name postgres

stages:
- stage: QA
  jobs:
  - job: test
    services:
      postgres: postgres
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: $(PYTHON_VERSION)
    - task: Cache@2
      inputs:
        key: '"$(PYTHON_VERSION)" | "$(Agent.OS)" | requirements.txt'
        path: $(PYTHON_VENV)
        cacheHitVar: 'PYTHON_CACHE_RESTORED'
    - task: CmdLine@2
      displayName: Wait for db to start
      inputs:
        script: |
          sleep 5
    - script: |
        python -m venv .venv
      displayName: create virtual environment
      condition: eq(variables.PYTHON_CACHE_RESTORED, 'false')
    - script: |
        source .venv/bin/activate
        python -m pip install --upgrade pip
        pip install -r requirements.txt
      displayName: pip install
      condition: eq(variables.PYTHON_CACHE_RESTORED, 'false')
    - script: |
        source .venv/bin/activate
        python -m pytest --junitxml=test-results.xml --cov=app --cov-report=xml tests
      displayName: run pytest
    - task: PublishTestResults@2
      condition: succeededOrFailed()
      inputs:
        testResultsFormat: 'JUnit'
        testResultsFiles: 'test-results.xml'
        testRunTitle: 'Publish FastAPI test results'
    - task: PublishCodeCoverageResults@1
      inputs:
        codeCoverageTool: 'Cobertura'
        summaryFileLocation: 'coverage.xml'
But the pipeline always fails at the step "Initialize Containers", giving this error:
Error response from daemon: Container <containerID> is not running, as if it had just shut down because there was nothing to do. That seems plausible, but I don't know how to keep it running until my tests are done; the backend just runs pytest against the database. I also tried adding that resource as a container using the container property, but then the pipeline crashes at the same step, saying that the container ran for less than a second.
I'm thankful for any ideas!
I doubt your container is stopping because there is "nothing to do"; the postgres image is configured to act as a service. Your container is probably stopping because of an error.
I'm sure there is something to improve: you have to add the PGPORT env var to your container and set it to 1433, because that port is not the default port for the postgres Docker image, so opening that port on your container like you are doing with ports does not achieve much in this case.
Also, your trigger: true property would mean that you expect updates to the official DockerHub repository for postgres and want to run your pipeline whenever a new image is released. I don't think that makes much sense, so you should remove it, although this is a marginal problem from the perspective of your question.
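Putting both suggestions together, a minimal sketch of the corrected container resource would look like this (PGPORT is the standard postgres environment variable for the listening port, as recommended above; everything else is taken from your config):
resources:
  containers:
  - container: postgres
    image: postgres:13.6-alpine
    env:
      POSTGRES_PASSWORD: passw0rd
      PGPORT: 1433          # make the server actually listen on 1433
    ports:
    - 1433:1433
    options: --name postgres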

Running pytest command outside of docker container failes because container stopped

I use docker selenium grid and pytest to execute tests. What I do now is:
Spin up selenium grid via a Makefile
Spin up the docker container (with a volume pointing to my local PC for the tests). The container also runs the pytest command.
This all works well, except that I would rather split off the second action and be able to run the tests on an already running container. Preferred setup:
Spin up selenium grid + docker container with python+pytest
A command to run the tests (with the container as interpreter)
When I tried to do this, I faced the issue that the python+pytest container stops running when the commands are all done. There is no long-living process.
Dockerfile
FROM python:3.9.0-alpine
RUN apk add tk
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN ls ..
CMD pytest --junitxml ../r/latest.xml
My docker-compose file looks like:
docker-compose.yml
version: "3.0"
services:
  pytest:
    container_name: pytest
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./t:/t
      - ./r:/r
    working_dir: /t/
networks:
  default:
    name: my_local_network # same as selenium grid
It does not 'feel' good to have this pytest command in the container settings itself.
Container shutting down
That's because the CMD pytest --junitxml ../r/latest.xml line will execute once and, when it completes, the container will exit.
To run a cmd on an existing container
You can run commands on an existing docker container using this command:
docker exec <container_name> python -m pytest
Where <container_name> would be pytest in your case, since that is what the container is called in your docker-compose.yml file.
See here for more info: https://docs.docker.com/engine/reference/commandline/exec/
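Note that docker exec requires the container to still be running, so the image's command needs to be something long-lived instead of pytest. A common idiom (an assumption here, not part of the original setup) is to override the command in docker-compose.yml with a no-op that never exits:
services:
  pytest:
    container_name: pytest
    build:
      context: .
      dockerfile: Dockerfile
    # keep the container alive so tests can be run via `docker exec`
    command: tail -f /dev/null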
Using Make
If you want to extend this to a makefile command:
docker:
	docker-compose up -d

ci-tests: docker
	docker exec <container_name> python -m pytest
To both spin up AND run tests you can use:
make ci-tests
You could run selenium-grid in docker too if you wanted to make this solution completely portable: https://www.conductor.com/nightlight/running-selenium-grid-using-docker-compose/

Why pytest-sugar doesn't work in GitLab CI?

When tests are launched in GitLab CI, pytest-sugar doesn't show output like it does in a local run. What could the problem be?
My gitlab config:
image: project.com/path/dir

stages:
  - tests

variables:
  TESTS_ENVIRORMENT:
    value: "--stage my_stage"
    description: "Tests launch my_stage as default"

before_script:
  - python3 --version
  - pip3 install --upgrade pip
  - pip3 install -r requirements.txt

api:
  stage: tests
  script:
    - pytest $TESTS_ENVIRORMENT Tests/API/ -v
(Screenshots comparing the local output and the GitLab CI output are omitted.)
It seems that there's a problem with pytest-sugar inside containers. Add the --force-sugar option to the pytest call; it worked for me.
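Applied to the job above, that is a single extra flag:
api:
  stage: tests
  script:
    - pytest $TESTS_ENVIRORMENT Tests/API/ -v --force-sugar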
By default, Docker containers do not allocate a pseudo-terminal (tty), so pytest-sugar sees a non-interactive stdout and falls back to plain console output.
There is no clean solution for that case; it mostly comes down to workarounds and trying special Python libraries.
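You can see the same effect locally by toggling tty allocation on docker run (the image name my_image is hypothetical):
docker run my_image pytest       # no tty: plain, unstyled output
docker run -t my_image pytest    # tty allocated: pytest-sugar's styled output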

DockerFile for a Python script with Firefox-based Selenium web driver, Flask & BS4 dependencies

Super new to python, and never used docker before. I want to host my python script on Google Cloud Run but need to package it into a Docker container to submit to Google.
What exactly needs to go in this Dockerfile to upload to Google?
Current info:
Python: v3.9.1
Flask: v1.1.2
Selenium Web Driver: v3.141.0
Firefox Geckodriver: v0.28.0
Beautifulsoup4: v4.9.3
Pandas: v1.2.0
Let me know if further information about the script is required.
I have found the following snippets of code to use as a starting point from here. I just don't know how to adjust them to fit my specifications, nor do I know what 'gunicorn' is used for.
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python:3.7
# Install manually all the missing libraries
RUN apt-get update
RUN apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils
# Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
# Install Python dependencies.
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app
# requirements.txt
Flask==1.0.2
gunicorn==19.9.0
selenium==3.141.0
chromedriver-binary==77.0.3865.40.0
Gunicorn is an application server for running your Python application instance; it is a pure-Python HTTP server for WSGI applications. It allows you to run any Python application concurrently by running multiple Python processes within a single dyno.
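For intuition, the Dockerfile's CMD above amounts to running this locally (a sketch, assuming main.py defines a Flask app named app, with $PORT replaced by 8080):
# serve main:app with 1 worker process and 8 threads per worker
gunicorn --bind :8080 --workers 1 --threads 8 main:app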
Please have a look at the following tutorial, which explains gunicorn in detail.
Regarding Cloud Run, to deploy, please follow the next steps or the Cloud Run Official Documentation:
1) Create a folder
2) In that folder, create a file named main.py and write your Flask code
Example of simple Flask code
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    name = os.environ.get("NAME", "World")
    return "Hello {}!".format(name)

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
3) Now your app is finished and ready to be containerized and uploaded to Container Registry
3.1) So to containerize your app, you need a Dockerfile in the same directory as the source files (main.py)
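For the Firefox-based setup in the question, a minimal Dockerfile sketch might look like the following; firefox-esr from the Debian repositories and the pinned geckodriver v0.28.0 release URL are assumptions on my part, not something from the original snippet:
FROM python:3.9

# Install Firefox ESR from the Debian repositories the base image uses
RUN apt-get update && apt-get install -y firefox-esr wget

# Install the geckodriver version pinned in the question (v0.28.0)
RUN wget -q https://github.com/mozilla/geckodriver/releases/download/v0.28.0/geckodriver-v0.28.0-linux64.tar.gz \
    && tar -xzf geckodriver-v0.28.0-linux64.tar.gz -C /usr/local/bin \
    && rm geckodriver-v0.28.0-linux64.tar.gz

# Install Python dependencies
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

# Copy local code to the container image
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .

# Cloud Run provides $PORT; bind gunicorn to it
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app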
3.2) Now build your container image using Cloud Build by running the following command from the directory containing the Dockerfile:
gcloud builds submit --tag gcr.io/PROJECT-ID/FOLDER_NAME
where PROJECT-ID is your GCP project ID. You can get it by running gcloud config get-value project
4) Finally you can deploy to Cloud Run by executing the following command:
gcloud run deploy --image gcr.io/PROJECT-ID/FOLDER_NAME --platform managed
You can also have a look into the Google Cloud Run Official GitHub Repository for a Cloud Run Hello World Sample.

Using GitLab CI with the Python 'onbuild' image, the packages in requirements.txt don't seem to get installed

I'm trying to familiarize myself with the Gitlab CI environment with a test project, https://gitlab.com/khpeek/CI-test. The project has the following .gitlab-ci.yml:
image: python:2.7-onbuild

services:
  - rethinkdb:latest

test_job:
  script:
    - pytest
The problem is that the test_job job in the CI pipeline fails with the following error message:
Running with gitlab-ci-multi-runner 9.0.1 (a3da309)
on docker-auto-scale (e11ae361)
Using Docker executor with image python:2.7-onbuild ...
Starting service rethinkdb:latest ...
Pulling docker image rethinkdb:latest ...
Using docker image rethinkdb:latest ID=sha256:23ecfb08823bc5483c6a955b077a9bc82899a0df2f33899b64992345256f22dd for service rethinkdb...
Waiting for services to be up and running...
Using docker image sha256:aaecf574604a31dd49a9d4151b11739837e4469df1cf7b558787048ce4ba81aa ID=sha256:aaecf574604a31dd49a9d4151b11739837e4469df1cf7b558787048ce4ba81aa for predefined container...
Pulling docker image python:2.7-onbuild ...
Using docker image python:2.7-onbuild ID=sha256:5754a7fac135b9cae7e02e34cc7ba941f03a33fb00cf31f12fbb71b8d389ece2 for build container...
Running on runner-e11ae361-project-3083420-concurrent-0 via runner-e11ae361-machine-1491819341-82630004-digital-ocean-2gb...
Cloning repository...
Cloning into '/builds/khpeek/CI-test'...
Checking out d0937f33 as master...
Skipping Git submodules setup
$ pytest
/bin/bash: line 56: pytest: command not found
ERROR: Job failed: exit code 1
However, there is a requirements.txt in the repository with the single line pytest==3.0.7 in it. It seems to me from the Dockerfile of the python:2.7-onbuild image, however, that pip install -r requirements.txt should get run on build. So why is pytest not found?
If you look at the Dockerfile you linked to, you'll see pip install -r requirements.txt is part of an ONBUILD command. This is useful if you want to build a new image from that first one and install a bunch of requirements. The pip install -r requirements.txt command is therefore not executed within the container in your CI pipeline, and if it were, it would be executed at the very beginning, even before your GitLab repository was cloned.
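For reference, this is roughly what the onbuild image declares (paraphrased, not the exact upstream Dockerfile); the ONBUILD instructions fire only when another image is built FROM it, not when a container is started:
# Triggers that run only during a downstream `docker build`:
ONBUILD COPY requirements.txt /usr/src/app/
ONBUILD RUN pip install --no-cache-dir -r requirements.txt
ONBUILD COPY . /usr/src/app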
I would suggest you modify your .gitlab-ci.yml file this way
image: python:2.7-onbuild

services:
  - rethinkdb:latest

test_job:
  script:
    - pip install -r requirements.txt
    - pytest
The problem seems to be intermittent: although the first time it took 61 minutes to run the tests (which initially failed), now it takes about a minute.
For reference, the testing repository is at https://gitlab.com/khpeek/CI-test. (I had to add a before_script with some pip installs to make the job succeed).
