I have this Dockerfile, which contains the line RUN py.test -vv:
FROM bitnami/python:3.6-prod
#MORE DIRECTIVES
RUN py.test -vv
COPY . /files
WORKDIR /files
EXPOSE 8080
When I run docker-compose build, I am getting this error.
Step 16/21 : RUN py.test -vv
---> Running in 5b3f55f10025
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.2.1, py-1.8.0, pluggy-0.13.0 -- /opt/bitnami/python/bin/python
cachedir: .pytest_cache
rootdir: /
plugins: ordering-0.6, cov-2.8.1, docker-compose-3.1.2, celery-4.3.0
collecting ... collected 0 items / 1 errors
==================================== ERRORS ====================================
________________________ ERROR collecting test session _________________________
opt/bitnami/python/lib/python3.6/site-packages/_pytest/config/__init__.py:456: in _importconftest
return self._conftestpath2mod[key]
E KeyError: PosixPath('/opt/bitnami/python/lib/python3.6/site-packages/matplotlib/tests/conftest.py')
During handling of the above exception, another exception occurred:
opt/bitnami/python/lib/python3.6/site-packages/_pytest/config/__init__.py:462: in _importconftest
mod = conftestpath.pyimport()
opt/bitnami/python/lib/python3.6/site-packages/py/_path/local.py:701: in pyimport
__import__(modname)
opt/bitnami/python/lib/python3.6/site-packages/matplotlib/tests/__init__.py:16: in <module>
'The baseline image directory does not exist. '
E OSError: The baseline image directory does not exist. This is most likely because the test data is not installed. You may need to install matplotlib from source to get the test data.
During handling of the above exception, another exception occurred:
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:377: in visit
for x in Visitor(fil, rec, ignore, bf, sort).gen(self):
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:429: in gen
for p in self.gen(subdir):
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:429: in gen
for p in self.gen(subdir):
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:429: in gen
for p in self.gen(subdir):
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:429: in gen
for p in self.gen(subdir):
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:429: in gen
for p in self.gen(subdir):
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:429: in gen
for p in self.gen(subdir):
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:429: in gen
for p in self.gen(subdir):
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:418: in gen
dirs = self.optsort([p for p in entries
opt/bitnami/python/lib/python3.6/site-packages/py/_path/common.py:419: in <listcomp>
if p.check(dir=1) and (rec is None or rec(p))])
opt/bitnami/python/lib/python3.6/site-packages/_pytest/main.py:606: in _recurse
ihook = self.gethookproxy(dirpath)
opt/bitnami/python/lib/python3.6/site-packages/_pytest/main.py:424: in gethookproxy
my_conftestmodules = pm._getconftestmodules(fspath)
opt/bitnami/python/lib/python3.6/site-packages/_pytest/config/__init__.py:434: in _getconftestmodules
mod = self._importconftest(conftestpath)
opt/bitnami/python/lib/python3.6/site-packages/_pytest/config/__init__.py:470: in _importconftest
raise ConftestImportFailure(conftestpath, sys.exc_info())
E _pytest.config.ConftestImportFailure: (local('/opt/bitnami/python/lib/python3.6/site-packages/matplotlib/tests/conftest.py'), (<class 'OSError'>, OSError('The baseline image directory does not exist. This is most likely because the test data is not installed. You may need to install matplotlib from source to get the test data.',), <traceback object at 0x7f814caaef88>))
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
============================== 1 error in 11.83s ===============================
ERROR: Service 'testproject' failed to build: The command '/bin/sh -c py.test -vv' returned a non-zero code: 2
I have tried adding pip install matplotlib to the Dockerfile, but I am still getting the same error.
I previously had a NodeJS app that was also Dockerized and had some tests using mocha, and putting RUN mocha inside the Dockerfile worked fine. I'm not sure what the issue is here with Python.
I feel that the issue here is that pytest is not pre-installed in Python, so you have to add steps for installing pytest in the Docker container. Personally, I have been running this using a separate dockerfile for pytest, which is used for installing pytest and setting the ENTRYPOINT to pytest.
I have attached the docker-compose.yaml, dockerfile, and pytest.dockerfile for your reference. Alternatively, since you don't have any other services to add, you can put the pytest installation steps directly in the .yaml file/dockerfile itself and avoid the additional dockerfile. This set-up is running perfectly for me for Selenium-pytest test automation using docker containers. Please try this and let us know how it goes.
version: '3.7'
services:
  test:
    volumes:
      - .:/files
    build:
      context: .
      dockerfile: pytest.dockerfile
docker-compose.yaml
FROM python:3.7-alpine
MAINTAINER xyz
ADD . /files
WORKDIR /files
ENV PYTHONDONTWRITEBYTECODE=true
EXPOSE 4444
dockerfile
Exposing the port is again optional here.
FROM python:3.7-alpine
MAINTAINER xyz
RUN pip install pytest
ENTRYPOINT [ "pytest" ]
pytest.dockerfile
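With this layout, running the tests is something like the following (a sketch on my part: since the ENTRYPOINT is pytest, anything after the service name is passed through as pytest arguments, and you may also want working_dir: /files on the service because pytest.dockerfile sets no WORKDIR):
docker-compose build test
docker-compose run --rm test -vv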
When switching from Docker Desktop to Colima, I encountered problems setting up a Run configuration in PyCharm through the Docker Compose feature.
Example setup
I keep getting this error from PyCharm:
no such service:
container:9200f38c022e09065fbb972683cd8843c6faedf4b722ee573ea34303f604b843:ro
Process finished with exit code 1
Versions:
Colima v 0.3.2
docker-compose v 2.2.3
PyCharm 2021.3.2 (Professional Edition)
I don't have a solution yet, but I believe the culprit is that PyCharm uses a syntax for specifying containers that is unsupported by your (our) setup.
PyCharm integrates with docker-compose like this:
a docker container with helper scripts in a volume is created
$ docker ps -a
...
346cc60545f6 aac5779e964d "/bin/sh" Created pycharm_helpers_PY-213.6777.50
it creates its own overlay. My project docker-compose.yml is version: 2; PyCharm's override is:
$ cat /Users/.../Library/Caches/JetBrains/PyCharm2021.3/tmp/docker-compose.override.2.yml
version: "2"
services:
app:
command:
- "python"
- "-V"
entrypoint: ""
environment:
PYTHONUNBUFFERED: "1"
restart: "no"
volumes: []
volumes_from:
- "container:346cc60545f6e7955661fc6f8f578c6f3f871a7330b068cb35224efbee05aae7:ro"
it calls docker-compose with the overlay:
docker-compose \
-f /Users/.../projects/pythonProject1/docker-compose.yml \
-f /Users/.../Library/Caches/JetBrains/PyCharm2021.3/tmp/docker-compose.override.9.yml \
run --rm --no-deps app
Now, if I try the container: syntax with docker run, I get an error:
$ docker run --rm -it \
--volumes-from container:346cc60545f6e7955661fc6f8f578c6f3f871a7330b068cb35224efbee05aae7:ro \
python:3.9 bash
docker: Error response from daemon: invalid mode: 346cc60545f6e7955661fc6f8f578c6f3f871a7330b068cb35224efbee05aae7:ro.
See 'docker run --help'.
With the container: prefix removed, it works:
$ docker run --rm -it \
--volumes-from 346cc60545f6e7955661fc6f8f578c6f3f871a7330b068cb35224efbee05aae7:ro \
python:3.9 bash
root@0e5ba9104c62:/# mount | grep pycharm
/dev/disk/by-label/data-volume on /opt/.pycharm_helpers type ext4 (ro,relatime)
root@0e5ba9104c62:/# ls /opt/.pycharm_helpers/
Dockerfile docstring_formatter.py pockets pycharm_matplotlib_backend six.py
MathJax epydoc profiler pycodestyle.py sphinxcontrib
__pycache__ extra_syspath.py py2ipnb_converter.py pydev syspath.py
check_all_test_suite.py generator3 py2only python-skeletons third_party
conda_packaging_tool.py icon-robots.txt py3only remote_sync.py tools
coverage_runner packaging_tool.py pycharm rest_runners typeshed
coveragepy pip-20.3.4-py2.py3-none-any.whl pycharm_display setuptools-44.1.1-py2.py3-none-any.whl virtualenv.pyz
But with docker-compose.yml version: 3 I get a different error:
error during connect: Get "http://unix:2375/Users/.../.colima/docker.sock/v1.24/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.project%3Dpythonproject1%22%3Atrue%7D%7D&limit=0": dial tcp: lookup unix on 1.1.1.1:53: no such host
Process finished with exit code 1
PyCharm's docker-compose override for version 3.8 doesn't use the container: syntax anymore:
$ cat /Users/.../Library/Caches/JetBrains/PyCharm2021.3/tmp/docker-compose.override.9.yml
version: "3.8"
services:
app:
command:
- "python"
- "-V"
entrypoint: ""
environment:
PYTHONUNBUFFERED: "1"
restart: "no"
volumes:
- "pycharm_helpers_PY-213.6777.50:/opt/.pycharm_helpers"
volumes:
pycharm_helpers_PY-213.6777.50: {}
A wrapper script that replaces the overlay with one where the container: prefix is stripped would likely help for version: 2, but the problem with version: 3.8 would persist.
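For illustration, a minimal sketch of such a wrapper (entirely my own assumption about how it could look; it hard-codes the real docker-compose path):
#!/bin/bash
# hypothetical docker-compose wrapper: rewrite PyCharm's generated override files,
# stripping the unsupported "container:" prefix, then delegate to the real binary
args=()
for arg in "$@"; do
  if [[ -f "$arg" && "$arg" == *docker-compose.override.*.yml ]]; then
    fixed="${arg%.yml}.fixed.yml"
    sed 's/"container:\([0-9a-f]\{64\}\):ro"/"\1:ro"/' "$arg" > "$fixed"
    args+=("$fixed")
  else
    args+=("$arg")
  fi
done
exec /usr/local/bin/docker-compose "${args[@]}"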
I use docker selenium grid and pytest to execute tests. What I do now is:
Spin up selenium grid via a makefile
Spin up the docker container (with a volume pointing to my local PC for the tests). The container also runs the pytest command.
This all works well, except that I would rather split out the second action and be able to run the tests on an already running container. Preferred setup:
Spin up selenium grid + docker container with python+pytest
A command to run the tests (with the container as interpreter)
When I tried to do this, I ran into the issue that the python+pytest container stops running when the commands are all done. There is no long-living process.
Dockerfile
FROM python:3.9.0-alpine
RUN apk add tk
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN ls ..
CMD pytest --junitxml ../r/latest.xml
My docker-compose file looks like:
docker-compose.yml
version: "3.0"
services:
pytest:
container_name: pytest
build:
context: .
dockerfile: Dockerfile
volumes:
- ./t:/t
- ./r:/r
working_dir: /t/
networks:
default:
name: my_local_network #same as selenium grid
It does not 'feel' good to have this pytest command in the container settings itself.
Container shutting down
That's because the CMD pytest --junitxml ../r/latest.xml line executes once, and when it completes, the container exits.
To run a cmd on an existing container
You can run commands on an existing docker container using this command:
docker exec <container_name> python -m pytest
Where <container_name> would be pytest in your case, since that is what the container is called in your docker-compose.yml file.
See here for more info: https://docs.docker.com/engine/reference/commandline/exec/
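Note that docker exec only works while the container is up; a common way to keep it up (my suggestion, not part of your current setup) is to replace the one-shot pytest CMD with a long-lived no-op command in docker-compose.yml:
services:
  pytest:
    # ... build, volumes, working_dir as before ...
    command: tail -f /dev/null # no-op long-lived process keeps the container running
Then docker exec pytest python -m pytest runs the tests on demand.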
Using Make
If you want to extend this to a makefile command:
docker:
	docker-compose up -d

ci-tests: docker
	docker exec <container_name> python -m pytest
To both spin up AND run tests you can use:
make ci-tests
You could run selenium-grid in docker too if you wanted to make this solution completely portable: https://www.conductor.com/nightlight/running-selenium-grid-using-docker-compose/
I have a simple python dockerized application whose structure is
/src
  - server.py
  - test_server.py
Dockerfile
requirements.txt
in which the docker base image is Linux-based, and server.py exposes a FastAPI endpoint.
For completeness, server.py looks like this:
from fastapi import FastAPI
from pydantic import BaseModel

class Item(BaseModel):
    number: int

app = FastAPI(title="Sum one", description="Get a number, add one to it", version="0.1.0")

@app.post("/compute")
async def compute(input: Item):
    return {'result': input.number + 1}
Tests are meant to be done with pytest (following https://fastapi.tiangolo.com/tutorial/testing/) with a test_server.py:
from fastapi.testclient import TestClient
from server import app
import json

client = TestClient(app)

def test_endpoint():
    """test endpoint"""
    response = client.post("/compute", json={"number": 1})
    values = json.loads(response.text)
    assert values["result"] == 2
Dockerfile looks like this:
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY . /app
RUN pip install -r requirements.txt
WORKDIR /app/src
EXPOSE 8000
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
At the moment, if I want to run the tests on my local machine within the container, one way to do this is:
Build the Docker container
Run the container, get its name via docker ps
Run docker exec -it <mycontainer> bash and execute pytest to see the tests passing.
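For concreteness, those three steps might look like this (a sketch; the image and container names are made up):
docker build -t myapp .                      # 1. build the image
docker run -d --name myapp_container myapp   # 2. run it; docker ps shows the name
docker exec -it myapp_container pytest       # 3. execute the tests inside it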
Now, I would like to run tests in Azure DevOps (Server) before pushing the image to my Docker registry and triggering a release pipeline. If this sounds like an OK thing to do, what's the proper way to do it?
So far, I hoped that something along the lines of adding a "PyTest" step in the build pipeline would magically work:
I am currently using a Linux agent, and the step fails.
The failure is not surprising, as (I think) the container is not run after being built, and therefore pytest can't run within it either :(
Another way to solve this is to include pytest commands in the Dockerfile and deal with the tests in a release pipeline. However, I would like to decouple the testing from the container that is ultimately pushed to the registry and deployed.
Is there a standard way to run pytest within a Docker container in Azure DevOps, and get a graphical report?
Update your azure-pipelines.yml file as follows to run the tests in Azure Pipelines
Method-1 (using docker)
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Docker@2
  inputs:
    command: 'build'
    Dockerfile: '**/Dockerfile'
    arguments: '-t fast-api:$(Build.BuildId)'
- script: |
    docker run fast-api:$(Build.BuildId) python -m pytest
  displayName: 'Run PyTest'
Successful pipeline screenshot
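To also get a graphical report in the Azure DevOps UI with Method-1, you would additionally need to get the JUnit XML out of the container and publish it. A sketch of how that could look (my assumption: pytest is told to write /results/test-results.xml, and the results directory is bind-mounted):
- script: |
    mkdir -p $(pwd)/results
    docker run -v $(pwd)/results:/results fast-api:$(Build.BuildId) \
      python -m pytest --junitxml=/results/test-results.xml
  displayName: 'Run PyTest with JUnit output'
- task: PublishTestResults@2
  inputs:
    testResultsFiles: 'results/test-results.xml'
  condition: succeededOrFailed()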
Method-2 (without docker)
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    Python37:
      python.version: '3.7'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'
- script: |
    pip install pytest pytest-azurepipelines
    python -m pytest
  displayName: 'pytest'
BTW, I have a simple FastAPI project you can reference if you want.
Test your docker script using pytest-azurepipelines:
- script: |
    python -m pip install --upgrade pip
    pip install pytest pytest-azurepipelines
    pip install -r requirements.txt
    pip install -e .
  displayName: 'Install dependencies'
- script: |
    python -m pytest /src/test_server.py
  displayName: 'pytest'
Running pytest with the plugin pytest-azurepipelines will let you see your test results in the Azure Pipelines UI.
https://pypi.org/project/pytest-azurepipelines/
You can run your unit tests directly from within your Docker container using pytest-azurepipelines (which you need to have installed in the Docker image beforehand):
- script: |
    docker run --mount type=bind,source="$(pwd)",target=/results \
      --entrypoint /bin/bash my_docker_image \
      -c "cd results && pytest"
  displayName: 'tests'
  continueOnError: true
pytest will create an XML file containing the test results, which is made available to the Azure DevOps pipeline thanks to the --mount flag in the docker run command. pytest-azurepipelines then publishes the results directly to Azure DevOps.
I am trying to dockerize Airflow; my Dockerfile looks like this:
FROM python:3.5.2
RUN mkdir -p /src/airflow
RUN mkdir -p /src/airflow/logs
RUN mkdir -p /src/airflow/plugins
WORKDIR /src
COPY . .
RUN pip install psycopg2
RUN pip install -r requirements.txt
COPY airflow.cfg /src/airflow
ENV AIRFLOW_HOME /src/airflow
ENV PYTHONPATH "${PYTHONPATH}:/src"
RUN airflow initdb
EXPOSE 8080
ENTRYPOINT ./airflow-start.sh
while my docker-compose.yml looks like this
version: "3"
services:
airflow:
container_name: airflow
network_mode: host
build:
context: .
dockerfile: Dockerfile
ports:
- 8080:8080
The output of $ docker-compose build comes up as normal; every step executes, and then:
Step 12/14 : RUN airflow initdb
---> Running in 8b7ebe406978
[2020-04-21 10:34:21,419] {__init__.py:45} INFO - Using executor LocalExecutor
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 17, in <module>
from airflow.bin.cli import CLIFactory
File "/usr/local/lib/python3.5/site-packages/airflow/bin/cli.py", line 59, in <module>
from airflow.www.app import cached_app
File "/usr/local/lib/python3.5/site-packages/airflow/www/app.py", line 20, in <module>
from flask_cache import Cache
File "/usr/local/lib/python3.5/site-packages/flask_cache/__init__.py", line 24, in <module>
from werkzeug import import_string
ImportError: cannot import name 'import_string'
ERROR: Service 'airflow' failed to build: The command '/bin/sh -c airflow initdb' returned a non-zero code: 1
postgres is running on the host system.
I have tried multiple ways, but this keeps happening.
I even tried the puckel/docker-airflow image, and the same error occurred.
Can someone tell me what I am doing wrong?
Project Structure:
root
  - airflow_dags
  - Dockerfile
  - docker-compose.yml
  - airflow-start.sh
  - airflow.cfg
In case it's relevant: airflow-start.sh
In airflow.cfg:
dags_folder = /src/airflow_dags/
sql_alchemy_conn = postgresql://airflow:airflow@localhost:5432/airflow
If possible, get your code running without touching Docker: run it directly on your host. Of course, this means your host (your laptop, or wherever you are executing your commands; it could be a remote VPS Debian box) must have the same OS as your Dockerfile. I see that in this case FROM python:3.5.2 is actually using Debian 8.
Short of doing the above, launch a toy container which does nothing yet executes and lets you log in to it to manually run your commands to aid troubleshooting. Use the following as this toy container's Dockerfile:
FROM python:3.5.2
CMD ["/bin/bash"]
so now issue this
docker build --tag saadi_now . # creates image saadi_now
now launch that image
docker run -d saadi_now sleep infinity # launches container
docker ps # lets say its container_id is b91f8cba6ed1
now login to that running container
docker exec -ti b91f8cba6ed1 bash
cool, so you are now inside the docker container, so run the commands which were originally in the real Dockerfile ... this sometimes makes it easier to troubleshoot
one by one, add your actual commands from the real Dockerfile to this toy Dockerfile and redo the above until you discover the underlying issue
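For example, a first iteration of the toy Dockerfile might copy in just the initial steps of the real one (a sketch, purely for troubleshooting):
FROM python:3.5.2
# first real steps copied over; rebuild and re-run after each addition
RUN mkdir -p /src/airflow
COPY requirements.txt /src/
RUN pip install -r /src/requirements.txt
CMD ["/bin/bash"]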
Most likely this is related either to a bug in airflow with the werkzeug package, or to your requirements clobbering something.
I recommend checking the versions of airflow, flask, and werkzeug that are used in the environment. It may be that you need to pin the version of flask or werkzeug as discussed here.
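For context, the failing import (from werkzeug import import_string) was removed from the werkzeug top-level namespace in werkzeug 1.0, so one likely fix (a sketch; exact pins depend on your airflow release) is to pin an older version in requirements.txt before airflow is installed:
werkzeug<1.0  # flask_cache does `from werkzeug import import_string`, which 1.0 removed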
I'm writing some integration tests that involve a Python application running under uwsgi.
To test an aspect of this, I am running an uwsgi spooler, which requires that the master process is running.
If pytest has a failed test, it returns a non-zero exit code, which is great.
Without the master process, the entire uwsgi process also returns this exit code, and so our continuous integration server responds appropriately.
However, when the master process is running, it always exits with a zero exit code - regardless of failed tests.
I need it to pass on the first non-zero exit code of a subprocess if there is one.
Note: I'm not really interested in mocking this out - I need to test this working.
I've created a Dockerized Minimal, Complete, and Verifiable Example that illustrates my issue:
Dockerfile:
FROM python:3.6.4-slim-stretch
WORKDIR /srv
RUN apt-get update \
&& apt-get install -y build-essential \
&& pip install uwsgi pytest
COPY test_app.py /srv/
CMD ["/bin/bash"]
test_app.py:
import pytest

def test_this():
    assert 1==0
Given the above 2 files in a directory, the following shows the return code if I run this failing test under uwsgi without the master process:
$ docker build -t=test .
$ docker run test uwsgi --chdir /srv --pyrun /usr/local/bin/pytest
...
============================= test session starts ==============================
platform linux -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /srv, inifile:
collected 1 item
test_app.py F [100%]
=================================== FAILURES ===================================
__________________________________ test_this ___________________________________
def test_this():
> assert 1==0
E assert 1 == 0
test_app.py:4: AssertionError
=========================== 1 failed in 0.05 seconds ===========================
$ echo $?
1
Note: you can see that the return code from this process (last line) is non-zero as required
Now, changing nothing other than running uwsgi with the master process, we get the following output:
$ docker run test uwsgi --set master=true --chdir /srv --pyrun /usr/local/bin/pytest
...
============================= test session starts ==============================
platform linux -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /srv, inifile:
collected 1 item
test_app.py F [100%]
=================================== FAILURES ===================================
__________________________________ test_this ___________________________________
def test_this():
> assert 1==0
E assert 1 == 0
test_app.py:4: AssertionError
=========================== 1 failed in 0.05 seconds ===========================
worker 1 buried after 0 seconds
goodbye to uWSGI.
$ echo $?
0
Note: this time the return code from this process (last line) is zero - even though the test failed
How can I get uwsgi to forward the exit code from a failing process to the master?
This works, but feels a little hacky. I'll happily accept a better answer if one comes along.
I've made this work with the addition of two additional files (and a small update to the Dockerfile):
Dockerfile:
FROM python:3.6.4-slim-stretch
WORKDIR /srv
RUN apt-get update \
&& apt-get install -y build-essential \
&& pip install uwsgi pytest
COPY test_app.py test run_tests.py /srv/
CMD ["/bin/bash"]
test:
#!/bin/bash
uwsgi --set master=true --chdir /srv --pyrun /srv/run_tests.py
exit $(cat /tmp/test_results)
run_tests.py:
#!/usr/bin/python
import re
import subprocess
import sys

from pytest import main


def write_result(retcode):
    # write the pytest return code to a file so the wrapper script can read it
    path = r'/tmp/test_results'
    with open(path, 'w') as f:
        f.write(str(retcode))


def run():
    # mirror the stock pytest console script, but record the return code first
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    retcode = 1
    try:
        retcode = main()
    finally:
        write_result(retcode)
        sys.exit(retcode)


if __name__ == '__main__':
    run()
The way it works is that I've copied and tweaked the pytest entry-point script into run_tests.py, where it writes the return code of the tests out to a temporary file. The tests are run via a bash script, test, which runs uwsgi, which in turn runs the tests; the script then exits with the return code read back from that file.
Results now look like:
$ docker build -t=test .
$ docker run test /srv/test
...
============================= test session starts ==============================
platform linux -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /srv, inifile:
collected 1 item
test_app.py F [100%]
=================================== FAILURES ===================================
__________________________________ test_this ___________________________________
def test_this():
> assert 1==0
E assert 1 == 0
test_app.py:4: AssertionError
=========================== 1 failed in 0.05 seconds ===========================
worker 1 buried after 0 seconds
goodbye to uWSGI.
$ echo $?
1