I am trying to show a coverage badge for a Python project in a private GitLab CE installation (v11.8.6), using coverage.py. However, the badge always says unknown.
This is the relevant job in my .gitlab-ci.yml file:
coverage:
  stage: test
  before_script:
    - pip3.6 install coverage
    - mkdir -p public
  script:
    - coverage run --source=my_service setup.py test
    - coverage report | tee public/coverage.txt
  artifacts:
    paths:
      - public/coverage.txt
  coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
I expected the badge at the following URL to show the actual coverage, so this is what I have entered in the project settings under General/Badges:
http://<privategitlaburl>/%{project_path}/badges/%{default_branch}/coverage.svg?job=coverage
I read these instructions, which rely on GitLab Pages. However, I do not want to use Pages just for this purpose, and I am dealing with a Python project.
According to the example in the CI/CD settings, and in this post, the regex in the coverage entry should work, which I could confirm by trying it locally:
$ grep -P "TOTAL\s+\d+\s+\d+\s+(\d+%)" public/coverage.txt
TOTAL 289 53 82%
I also tried the same regex in the field Test coverage parsing in the project settings under CI/CD/Pipeline settings, but the badge shown on that same page keeps showing unknown.
The documentation is not quite clear to me, as it does not describe the whole procedure. It is clear how to use a badge once created, and there is a manual for publishing a coverage report to pages, but there seems to be no clear path from extracting the score to displaying the badge.
Should I use the coverage entry in my .gitlab-ci.yml file, or fill in the regex in the pipeline settings?
Either way, is GitLab CI supposed to update the coverage badge based on that, or do I need additional tools like coverage-badge to do so?
Where is the extracted coverage score supposed to be reported; how can I find out if my regex works?
I finally got the coverage badge displaying a percentage instead of unknown today for my Python project. Here's the relevant content from my .gitlab-ci.yml:
job:
  script:
    - 'python -m venv venv'
    - '.\venv\Scripts\activate'
    - 'python -m pip install -r requirements.txt'
    - 'coverage run --source=python_project -m unittest discover ./tests'
    - 'coverage report --omit=things_that_arent_mine/*'
    - 'coverage xml'
  artifacts:
    reports:
      cobertura: 'coverage.xml'
I'm also using the regex for gcovr listed in the repo's CI/CD Settings > General Pipelines > Test coverage parsing, which I found after reading this as well as the second-to-last comment on this:
^TOTAL.*\s+(\d+\%)$
In the repo General Settings > Badges, my badge link is:
http://gitlab-server/%{project_path}/-/jobs
and my badge image url is:
http://gitlab-server/%{project_path}/badges/%{default_branch}/coverage.svg
I don't quite know what the Cobertura report artifact is for (I think it specifically has to do with merge requests), but I have it in there because the tutorials took me down that road. I did confirm that removing the following from the .gitlab-ci.yml doesn't break the badge or coverage number on the jobs page:
    - 'coverage xml'
  artifacts:
    reports:
      cobertura: 'coverage.xml'
While removing or commenting:
- 'coverage report --omit=things_that_arent_mine/*'
does break the badge as well as the coverage number displayed on the CI/CD jobs page of the repo. I also tried some regex variations that I tested in Rubular, but the only one that didn't cause GitLab to barf was the gcovr one.
Hopefully this helps out; it was kind of difficult and arduous to piece together what was needed for this badge to work on a Python project.
EDIT:
Also just figured out how to add some sexy precision to the coverage percentage number. If you change the coverage report line of the .gitlab-ci.yml to:
- 'coverage report --omit=things_that_arent_mine/* --precision=2'
And the regex in CI/CD Settings > General Pipelines > Test coverage parsing to:
^TOTAL.+?(\d+.\d+\%)$
That should give you a very precise coverage number that effectively no one but you and I will care about. But by gosh we'll know for sure if coverage is 99.99% or 100%.
A solution that worked for me:
In the .gitlab-ci.yml file, in the job that runs the coverage report, I added the following lines:
script:
  # some lines omitted for brevity
  - coverage report --omit=venv/*
coverage: '/TOTAL.*\s+(\d+\%)/'
In particular, I couldn't get the badge to show anything but unknown until I added the coverage directive to my test job. Bear in mind, different tools may print different output and so you will likely have to change the regular expression, as I did.
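For illustration, here is a minimal sketch of a similar test job using pytest-cov instead (the package name my_package is a placeholder; the regex is the pytest-cov one quoted elsewhere on this page):

test:
  stage: test
  script:
    - pip install pytest pytest-cov
    # pytest-cov prints a "TOTAL ... NN%" line that the regex below picks up
    - pytest --cov=my_package
  coverage: '/^TOTAL.+?(\d+\%)$/'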
I spent three days on the problem above myself, so I thought I'd post my working config. My project is a PyScaffold project and uses tox; the pipeline triggers when you push a commit to a branch, and it pushes a pip package to the GitHub packages library.
Badge link is: http://gitlab.XXXX.com/XXmeXX/python-template/-/commits/develop
Badge image is : http://gitlab.XXXX.com/XXmeXX/python-template/badges/develop/coverage.svg
Regex is same as above.
PYPIRC is an environment variable set in GitLab that looks like a .pypirc file and points to my internal pip registry.
I used this tutorial.
Update: --cov-report xml is in my setup.cfg, and I'm pretty sure that because of that I don't need the coverage command, but I haven't tested that since I wrote this post. I'll check next time I'm in there.
My gitlab-ci.yml:
build-package:
  stage: deploy
  image: python:3.7
  script:
    - set
    - cat $PYPIRC > /tmp/.pypirc
    - pip3 install twine setuptools setuptools_scm wheel tox coverage
    # build the pip package
    - python3 setup.py bdist_wheel
    # $CI_COMMIT_TAG only works with the tagging pipeline, if you want to test a branch push directly, pull from fs
    - VERSION=$(python setup.py --version)
    # You can issue ls -al commands if you want, like to see variables or your published packages
    # - echo $VERSION
    # - ls -al ./dist | grep whl
    - tox
    - coverage xml -o coverage.xml
    # If you want to put artifacts up for storage, ...
    # - mkdir public
    - python3 -m twine upload --repository pythontemplate ./dist/my_python_template-${VERSION}-py2.py3-none-any.whl --config-file /tmp/.pypirc
  artifacts:
    reports:
      cobertura: 'coverage.xml'
    when: always
    # If you were copying artifacts up for later.
    # paths:
    #   - public
  only:
    - branches
The biggest thing I learned was to first get it showing in the "Coverage" column in the jobs list; then you know you're parsing everything correctly and the regexes are working. From there you work on the coverage XML and the badge links.
I also dug into this quite a bit. And just as you said: the command coverage report produces output similar to this:
[...]
tests/__init__.py 0 0 100%
tests/test_ml_squarer.py 4 0 100%
tests/test_squarer.py 4 0 100%
----------------------------------------------
TOTAL 17 2 88%
test_service run-test: commands[4] | coverage xml
and depending on the regular expression that is saved, it simply looks for the 88% next to TOTAL. I used the one recommended for pytest-cov (Python), i.e. ^TOTAL.+?(\d+\%)$
So running coverage xml looks rather optional to me at the moment. However, it is not working for me on GitLab Community Edition 12.10.11.
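If you want to sanity-check the regex against your own report locally (similar to the grep test in the question above), something like the following should work, assuming GNU grep with PCRE support; the matched line is taken from the sample report above:

$ coverage report | tee coverage.txt
$ grep -P '^TOTAL.+?(\d+\%)$' coverage.txt
TOTAL 17 2 88%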
Related
I am trying to test views and models for a Django REST API, written in PyCharm, and have installed pytest for this. I have written some tests, and when I wanted to start them I got the following error message:
ERROR: usage: _jb_pytest_runner.py [options] [file_or_dir] [file_or_dir] [...]
_jb_pytest_runner.py: error: unrecognized arguments: --cov=frontend --cov-report=html
I then checked whether I have installed pytest properly, and it seems I have. I have both Python 2.7.16 and Python 3.9.6 installed but am using Python 3. Could this be a compatibility problem, or is it something else?
I have tried starting the tests both through the terminal using py.test and in the IDE itself. I keep getting the same error message.
I have tried the approach below:
py.test: error: unrecognized arguments: --cov=ner_brands --cov-report=term-missing --cov-config
yet I seem to get the same error.
ERROR: usage: _jb_pytest_runner.py [options] [file_or_dir] [file_or_dir] [...]
_jb_pytest_runner.py: error: unrecognized arguments: --cov=frontend --cov-report=html
Does anyone know how I could solve this issue?
Thanks in advance.
First of all, yes, Python 3.9.6 is compatible with pytest 6.2.5; however, you appear to be missing a few dependencies. pytest is one of many different Python packages, and you appear to have installed it successfully, so you're halfway there.
There are a few different coverage plugins that work with pytest, and those need to be installed separately. Here are the two most common coverage plugins for Python and pytest:
https://pypi.org/project/coverage/
https://pypi.org/project/pytest-cov/
The first one, coverage, is installed with:
pip install coverage
The second one, pytest-cov, is installed with:
pip install pytest-cov
Based on your run command, you appear to want to use pytest-cov. After you've installed that, you can verify that pytest has those new options by calling pytest --help:
> pytest --help
...
coverage reporting with distributed testing support:
--cov=[SOURCE] Path or package name to measure during execution (multi-allowed). Use --cov= to
not do any source filtering and record everything.
--cov-reset Reset cov sources accumulated in options so far.
--cov-report=TYPE Type of report to generate: term, term-missing, annotate, html, xml (multi-
allowed). term, term-missing may be followed by ":skip-covered". annotate, html
and xml may be followed by ":DEST" where DEST specifies the output location.
Use --cov-report= to not generate any output.
--cov-config=PATH Config file for coverage. Default: .coveragerc
--no-cov-on-fail Do not report coverage if test run fails. Default: False
--no-cov Disable coverage report completely (useful for debuggers). Default: False
--cov-fail-under=MIN Fail if the total coverage is less than MIN.
...
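Once pytest-cov is installed, the command from your error message should be accepted (frontend here is the app/package name from your question), for example:

pytest --cov=frontend --cov-report=html

This runs the tests, measures coverage for frontend, and writes an HTML report (to htmlcov/ by default).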
Alternatively, you might be able to get the same results you're looking for using coverage:
coverage run -m pytest
coverage html
coverage report
And that will also give you a coverage report even if not using pytest-cov options.
We have a flake8 build stage in our CircleCI workflow, and more often than not this step fails due to a timeout:
Too long with no output (exceeded 10m0s): context deadline exceeded
At the same time, this same stage runs quite OK locally on our MacBooks:
% time make lint
poetry run black .
All done! ✨ 🍰 ✨
226 files left unchanged.
isort -y
Skipped 2 files
PYTHONPATH=/path/to/project poetry run flake8 --show-source
0
make lint 44.00s user 4.90s system 102% cpu 47.810 total
We tried to debug the issue by adding the -vv flag to flake8, thinking we would get the name of a plugin that takes too long, but we don't even have timestamps in the log:
flake8.processor ForkPoolWorker-31 1004 WARNING Plugin requested optional parameter "visitor" but this is not an available parameter.
flake8.processor ForkPoolWorker-8 1080 WARNING Plugin requested optional parameter "visitor" but this is not an available parameter.
flake8.bugbear ForkPoolWorker-26 1082 INFO Optional warning B950 not present in selected warnings: ['E', 'F', 'W', 'C90']. Not firing it at all.
Are there any known reasons why flake8 would freeze on CircleCI? How can one debug the issue?
When using a virtual environment such as venv, you should exclude its folder in the [flake8] config (that's what happened to me). Assuming you are creating a virtualenv with virtualenv .venv, it would look like this:
[flake8]
exclude = .venv
The same was true for my coverage, which was fixed by adding an omit to its config (solution found here):
# pyproject.toml file content
[tool.coverage.run]
omit = [
    "tests/*",
    ".venv/*",
]
For now, the solution we seem to have found was to limit the number of cores running flake8:
.flake8
[flake8]
...
jobs = 6
Not sure it is the correct solution, but there you go. I will accept a better solution if there is one.
I've also experienced a timeout on CircleCI only, but it was due to the specific way dependencies are installed in the pipeline, creating a .venv folder which was not excluded in the flake8 configuration.
The -v option helped me notice the huge number of files flake8 was analyzing.
In the pytest-cov documentation it says:
Note that this plugin controls some options and setting the option in
the config file will have no effect. These include specifying source
to be measured (source option) and all data file handling (data_file
and parallel options).
However, it doesn't say how to change these options. Is there a way to change them (parallel=True)?
I want to change this because after coverage was upgraded from < 5 to the latest (5.1) I got this:
Failed to generate report: Couldn't use data file '/path/to/jenkins/workspace/pr/or/branch/.coverage': no such table: line_bits
Note: using coverage < 5, I do not have this problem.
I have also tried adding a .coveragerc with the following, but I still get the same issue.
[run]
parallel = True
The way it is run in Jenkins:
pytest ./tests --mpl -n 4 \
--junitxml=pyTests.xml --log-cli-level=DEBUG -s \
--cov=. --cov-report --cov-report html:coverage-reports
This is due to pytest-cov using coverage combine, which combines all coverage results: when run in parallel, it mixes in results from other runs, which may or may not be complete, and in any case are irrelevant.
I think if you're having this issue, it may be because you're running multiple test runs in parallel, e.g. for multiple versions of Python.
In which case it's easily solved by specifying a unique COVERAGE_FILE for each run, like:
export COVERAGE_FILE=.coverage.3.7
for the Python 3.7 run, and so on.
See: https://github.com/nedbat/coveragepy/issues/883#issuecomment-650562896
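As a sketch (assuming the parallel runs are, say, a Python 3.7 and a Python 3.8 job sharing the same workspace), each run gets its own data file so the combine step no longer mixes them:

# Python 3.7 run
export COVERAGE_FILE=.coverage.3.7
python3.7 -m pytest ./tests --cov=. --cov-report html:coverage-reports-3.7

# Python 3.8 run (in the other parallel executor)
export COVERAGE_FILE=.coverage.3.8
python3.8 -m pytest ./tests --cov=. --cov-report html:coverage-reports-3.8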
background
I am setting up CI/CD for a Python project that has a C++ library dependency in the folder foo/bar/c_code_src. The build stage runs python setup.py install, which compiles the C++ library binaries and outputs them to foo/bar/bin/.
I then run a Python unittest that fails if foo/bar/bin doesn't exist.
The script below is my first attempt at using .gitlab-ci.yml:
stages:
  - build
  - test

build:
  stage: build
  script:
    - python setup.py install
  artifacts:
    paths:
      - foo/bar/c_code_src

test:
  stage: test
  dependencies:
    - build
  script:
    - python -m unittest foo/bar/test_bar.py
This works fine. However, because it takes a relatively long time to compile c_code_src, the resulting bin is fairly large, and the code in c_code_src doesn't change much, I want to cache the bin folder for future pipelines and only run the build stage if the code in c_code_src changes. After reading the documentation, it seems that I want to use cache instead of (or alongside) artifacts.
Here is my attempt at revising the .gitlab-ci.yml:
stages:
  - build
  - test

build:
  stage: build
  script:
    - python setup.py install
  cache:
    - key: bar_cache
      paths:
        - foo/bar/bin

test_bar:
  stage: test
  dependencies:
    - build
  cache:
    key: bar_cache
    paths:
      - foo/bar/bin
    policy: pull
  script:
    - python -m unittest foo/bar/test_bar.py
What I am unsure of is how to set the condition so that build only runs if c_code_src changes.
In short, I want to:
only run build if bin does not exist or there are changes to c_code_src
cache bin such that the test stage always has the up-to-date bin even if the build stage did not run
I'm not sure you'd be able to have a condition on whether bin exists, because typically the job condition looks at what changes are being made in a commit, without running through the whole repository code.
You can, however, fairly easily make the job check for changes. You can use either only:changes or rules:changes. So for the relevant job, add:
rules:
  - changes:
      - foo/bar/c_code_src/*
As for caching, what you've written looks fine. Cache isn't always guaranteed to work, but it will pull a new version if necessary, so having the most recent version shouldn't be a problem.
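Putting both pieces together, a sketch of the build job from the question with the changes condition added might look like this:

build:
  stage: build
  script:
    - python setup.py install
  rules:
    - changes:
        - foo/bar/c_code_src/*
  cache:
    - key: bar_cache
      paths:
        - foo/bar/bin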
How can I generate a test report using pytest? I searched for it, but whatever I got was about coverage reports.
I tried with this command:
py.test sanity_tests.py --cov=C:\Test\pytest --cov-report=xml
But as the parameters suggest, it generates a coverage report, not a test report.
Ripped from the comments: you can use the --junitxml argument.
$ py.test sample_tests.py --junitxml=C:\path\to\out_report.xml
You can use the pytest plugin pytest-html to generate HTML reports, which can be forwarded to different teams as well.
First install the plugin:
$ pip install pytest-html
Second, just run your tests with this command:
$ pytest --html=report.html
You can also make use of the hooks provided by the plugin in your code.
import pytest
from py.xml import html

def pytest_html_report_title(report):
    report.title = "My very own title!"
Reference: https://pypi.org/project/pytest-html/
I haven't tried it yet, but you can try referring to https://github.com/pytest-dev/pytest-html, a Python library that can generate HTML output.
py.test --html=Report.html
Here you can specify your Python file as well. When no file is specified, it picks up all files with names like 'test_*' in the directory where the command is run, executes them, and generates a report named Report.html.
You can also modify the name of the report accordingly.
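For example, using the test file from the question (sanity_tests.py) and a custom report name:

py.test sanity_tests.py --html=SanityReport.html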