This is the path to the project:
D:\QA\test-framework\python-client
This is a test framework implemented in Python. This is the Python file that contains the tests.
This is the path to the test case that I need to run:
D:\QA\test-framework\python-client\test_data\tests\curve.json
This is the beginning of the curve.json file (contents truncated):
{
    "Sklearn - Sklearn - Regression - Curve M2": [
        {
            "dataImport": {
                ...
            }
        }
    ]
}
This is the tox.ini file:
[tox]
envlist = py38

[testenv]
deps =
    pytest
    pytest-html
    pytest-sugar
    pytest-logger
    allure-pytest
    pytest-xdist
    pytest_steps
    datetime
    oauth2client
    gspread
    aiclub
commands =
    pytest -s -v -k _workflow --html=test_report.html --alluredir=allure-results/ -n auto --dist=loadfile
    allure serve allure-results
    pytest {posargs}
I need to run only this curve.json file using the tox command. How can I do that?
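One possible approach (a sketch, not verified against this framework): the tox.ini above already contains a pytest {posargs} command, and tox forwards everything after -- on the command line into {posargs}. If the workflow tests are collected per JSON data file, narrowing the run could look like this:

```ini
# Relevant part of tox.ini: {posargs} receives everything placed
# after "--" on the tox command line and hands it to pytest.
[testenv]
commands =
    pytest -s -v {posargs}
```

The invocation might then be tox -- -k curve (selects tests whose names match "curve") or tox -- test_data/tests/curve.json if the suite accepts a data-file path; both argument forms are standard pytest usage, but whether either selects exactly the curve.json tests depends on how this framework maps JSON files to test cases.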
This is my tox.ini file:
# Tox (https://tox.readthedocs.io/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
#
# See also https://tox.readthedocs.io/en/latest/config.html for more
# configuration options.
[tox]
# Choose your Python versions. They have to be available
# on the system the tests are run on.
# skipsdist=True
ignore_basepython_conflict=false

[testenv:{setup,lint,codestyle,docstyle,tests,doc-linux,doc-darwin,doc-win32}]
basepython=python3.9
envdir = {toxworkdir}/py39
setenv =
    PROJECT_NAME = project_name
passenv =
    WINDIR
install_command=
    pip install \
        --find-links=pkg \
        --trusted-host=pypi.python.org \
        --trusted-host=pypi.org \
        --trusted-host=files.pythonhosted.org \
        {opts} {packages}
platform =
    doc-linux: linux
    doc-darwin: darwin
    doc-win32: win32
deps =
    -r{toxinidir}/requirements-dev.txt
    -r{toxinidir}/requirements.txt
commands =
    setup: python -c "print('All SetUp')"
    # Mind the gap, use a backslash :)
    lint: pylint -f parseable -r n --disable duplicate-code \
    lint:     --extension-pkg-whitelist=PyQt5,numpy,torch,cv2,boto3 \
    lint:     --ignored-modules=PyQt5,numpy,torch,cv2,boto3 \
    lint:     --ignored-classes=PyQt5,numpy,torch,cv2,boto3 \
    lint:     project_name \
    lint:     {toxinidir}/script
    lint: pylint -f parseable -r n --disable duplicate-code \
    lint:     demo/demo_file.py
    codestyle: pycodestyle --max-line-length=100 \
    codestyle:     --exclude=project_name/third_party/* \
    codestyle:     project_name demo script
    docstyle: pydocstyle \
    docstyle:     --match-dir='^((?!(third_party|deprecated)).)*' \
    docstyle:     project_name demo script
    doc-linux: make -C {toxinidir}/doc html
    doc-darwin: make -C {toxinidir}/doc html
    doc-win32: {toxinidir}/doc/make.bat html
    tests: python -m pytest -v -s --cov-report xml --durations=10 \
    tests:     --cov=project_name --cov=script \
    tests:     {toxinidir}/test
    tests: coverage report -m --fail-under 100
On tox < 4.0 it was very convenient to run tox -e lint to fix linting issues, or tox -e codestyle to fix code style issues, etc. But now, with tox > 4.0, each time I run one of these commands I get a message like this (for instance):
codestyle: recreate env because env type changed from {'name': 'lint', 'type': 'VirtualEnvRunner'} to {'name': 'codestyle', 'type': 'VirtualEnvRunner'}
codestyle: remove tox env folder .tox/py39
And it takes forever to run these commands, since the environments are recreated each time.
I also use this structure for running tests on Jenkins, so I can map each of these commands to a Jenkins stage.
How can I reuse the environment? I have read that it is possible to do it using plugins, but I have no idea how this can be done, or how to install/use plugins.
I have tried this:
tox multiple tests, re-using tox environment
But it does not work in my case.
I expect the environment to be reused for each of the environments defined in the tox file.
As an addition to N1ngu's excellent answer...
You could re-structure your tox.ini as follows:
[tox]
...
[testenv]
<here goes all the common configuration>
[testenv:lint]
<here goes the lint specific configuration>
[testenv:codestyle]
...
And so on. This is a common setup.
While the environments still need to be created at least once, they won't get recreated on each invocation.
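A concrete sketch of that layout, reusing names from the tox.ini above (commands abbreviated; illustrative only, not a drop-in replacement):

```ini
[tox]
envlist = lint, codestyle, tests

[testenv]
# Common configuration shared by all environments below
basepython = python3.9
deps =
    -r{toxinidir}/requirements-dev.txt
    -r{toxinidir}/requirements.txt

[testenv:lint]
commands = pylint -f parseable -r n project_name

[testenv:codestyle]
commands = pycodestyle --max-line-length=100 project_name

[testenv:tests]
commands = python -m pytest -v -s {toxinidir}/test
```

Each named section gets its own .tox/&lt;name&gt; directory, so switching between tox -e lint and tox -e codestyle no longer triggers a recreation.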
This all said, you could also have a look at https://pre-commit.com/ to run your linters, which is very common in the Python community.
Then you would have a tox.ini like the following...
[tox]
...
[testenv]
<here goes all the common configuration>
[testenv:lint]
deps = pre-commit
commands = pre-commit run --all-files
There is now a definitive answer about re-use of environments in the FAQ:
https://tox.wiki/en/latest/upgrading.html#re-use-of-environments
I fear the generative names + factor-specific commands solution you linked relied on tox 3 not auto-recreating the environments by default, and that changed in tox 4. Now, environment recreation is something that can be forced (--recreate) but can't be opted out of.
Official answer on this https://github.com/tox-dev/tox/issues/425 boils down to
Officially we don't allow sharing tox environments at the moment [...] As of today, each tox environment has to have it's own virtualenv even if the Python version and dependencies are identical [...] We'll not plan to support this. However, tox 4 allows one to do this via a plugin, so we'd encourage people [...] to try it [...]. Once the project is stable and widely used we can revisit accepting it in core.
So that's it: write a plugin. I have no idea how to do that either, so my apologies if this turns out to be "not an answer".
Below is the failure message from running tox. I don't see this exact error reported on any forum.
Any guidance here would be of great help.
I'm invoking tox in a 3.8-slim-buster Docker container and installed the required dependencies: pip install tox flake8 black pylint
Error:
File ".tox/lint/lib/python3.6/site-packages/hacking/core.py", line 185
    except ImportError, exc:
                      ^
SyntaxError: invalid syntax

During handling of the above exception, another exception occurred:

File ".tox/lint/lib/python3.6/site-packages/flake8/plugins/manager.py", line 168, in load_plugin
    raise failed_to_load
flake8.exceptions.FailedToLoadPlugin: Flake8 failed to load plugin "H000" due to invalid syntax (core.py, line 185).
My tox.ini file:
[tox]
minversion = 1.8
envlist =
    unit
    lint
    format-check
skipsdist = true

[testenv]
usedevelop = true
basepython = python3
passenv = *
setenv =
    COVERAGE_FILE={toxworkdir}/.coverage
    PIP_EXTRA_INDEX_URL=https://maven.com/artifactory/api/pypi/simple/
extras =
    test

[testenv:unit]
commands =
    python -m pytest {posargs}

[testenv:lint]
commands =
    python -m flake8

[testenv:format]
commands =
    python -m black {toxinidir}

[testenv:format-check]
commands =
    python -m black --diff --check {toxinidir}

[testenv:build-dists-local]
usedevelop = false
skip_install = true
commands =
    python -m pep517.build \
        --source \
        --binary \
        --out-dir {toxinidir}/dist/ \
        {toxinidir}

[testenv:build-dists]
commands =
    rm -rfv {toxinidir}/dist/
    {[testenv:build-dists-local]commands}
whitelist_externals =
    rm

[testenv:publish-dists]
commands =
    bash -c '\
        twine upload {toxinidir}/dist/*.whl \
        -u $TWINE_USERNAME \
        -p $TWINE_PASSWORD \
        --repository-url $TWINE_REPOSITORY \
        '
whitelist_externals =
    bash

[flake8]
max-line-length = 100
format = pylint
exclude =
    .eggs/,
    .tox/,
    .venv*,
    build/,
    dist/,
    doc/,
#- [H106] Don't put vim configuration in source files.
#- [H203] Use assertIs(Not)None to check for None.
#- [H904] Delay string interpolations at logging calls.
enable-extensions = H106,H203,H904
ignore = E226,E302,E41

[pytest]
testpaths = test/
addopts = -v -rxXs --doctest-modules --cov metarelease --cov-report term-missing --showlocals
norecursedirs = dist doc build .tox .eggs

[coverage:run]
omit =
    metarelease/cmd/*
    metarelease/shell.py

[coverage:report]
fail_under = 100
This is neither a flake8 nor a tox bug, but a bug in hacking, which you will notice when you take a close look at the traceback.
The syntax
File ".tox/lint/lib/python3.6/site-packages/hacking/core.py", line 185
except ImportError, exc:
is only valid in Python 2, but you are using Python 3.
I had never heard of the hacking project before, but a quick search revealed https://pypi.org/project/hacking/
You should report a bug at their bug tracker.
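For context, the failing line uses the Python 2 except syntax; Python 3 binds the exception with "as" instead of a comma. A minimal standalone illustration (the module name is made up and guaranteed not to exist):

```python
# Python 2 (what hacking/core.py line 185 uses -- a SyntaxError on Python 3):
#     except ImportError, exc:
#
# Python 3 equivalent:
try:
    import a_module_that_does_not_exist  # hypothetical module name
except ImportError as exc:
    # exc.name holds the name of the module that failed to import
    print(f"import failed: {exc.name}")  # prints: import failed: a_module_that_does_not_exist
```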
In my Python project, I'm reading environment variables from a .env file. I am actually using pydantic to read/verify the env vars.
When using tox, the .env file is completely ignored. I am wondering how to make tox acknowledge the existence of the .env file.
Here's my tox.ini
[tox]
envlist = py39
[testenv]
deps = -r requirements-dev.txt
commands = pytest {posargs}
My .env file:
ENV_STATE="prod" # dev or prod
At first, I thought maybe pydantic loads the content of the .env file as environment variables, which is why I wrote this as my first answer:
original answer
tox does some isolation work, so your builds / tests are more reproducible.
This means that e.g. environment variables are filtered out, unless you whitelist them.
You probably need to set
passenv = YOUR_ENVIRONMENT_VARIABLE
Also see the tox documentation.
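For this question's setup that would look roughly like the following (the variable name is taken from the .env file above; treat this as a sketch):

```ini
[testenv]
deps = -r requirements-dev.txt
# Forward the host's ENV_STATE into the otherwise-isolated test environment;
# without passenv, tox strips it out.
passenv = ENV_STATE
commands = pytest {posargs}
```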
updated answer
This does not seem to be a tox issue at all.
I just created a simple project with pydantic and dotenv, and it works like a charm with tox.
tox.ini
[tox]
envlist = py39
skipsdist = True
[testenv]
deps = pydantic[dotenv]
commands = pytest {posargs}
.env
ENVIRONMENT="production"
main.py
from pydantic import BaseSettings

class Settings(BaseSettings):
    environment: str

    class Config:
        env_file = ".env"
        env_file_encoding = "utf-8"
test_main.py
from main import Settings

def test_settings():
    settings = Settings(_env_file=".env")
    assert settings.environment == "production"
I have a simple dockerized Python application whose structure is:
/src
- server.py
- test_server.py
Dockerfile
requirements.txt
in which the Docker base image is Linux-based, and server.py exposes a FastAPI endpoint.
For completeness, server.py looks like this:
from fastapi import FastAPI
from pydantic import BaseModel

class Item(BaseModel):
    number: int

app = FastAPI(title="Sum one", description="Get a number, add one to it", version="0.1.0")

@app.post("/compute")
async def compute(input: Item):
    return {'result': input.number + 1}
Tests are meant to be run with pytest (following https://fastapi.tiangolo.com/tutorial/testing/) with a test_server.py:
from fastapi.testclient import TestClient
from server import app
import json

client = TestClient(app)

def test_endpoint():
    """test endpoint"""
    response = client.post("/compute", json={"number": 1})
    values = json.loads(response.text)
    assert values["result"] == 2
Dockerfile looks like this:
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY . /app
RUN pip install -r requirements.txt
WORKDIR /app/src
EXPOSE 8000
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
At the moment, if I want to run the tests on my local machine within the container, one way to do this is:
Build the Docker container
Run the container, get its name via docker ps
Run docker exec -it <mycontainer> bash and execute pytest to see the tests passing.
Now, I would like to run the tests in Azure DevOps (Server) before pushing the image to my Docker registry and triggering a release pipeline. If this sounds like an OK thing to do, what's the proper way to do it?
So far, I hoped that something along the lines of adding a "PyTest" step in the build pipeline would magically work:
I am currently using a Linux agent, and the step fails with
The failure is not surprising, as (I think) the container is not run after being built, and therefore pytest can't run within it either :(
Another way to solve this would be to include pytest commands in the Dockerfile and deal with the tests in a release pipeline. However, I would like to decouple the testing from the container that is ultimately pushed to the registry and deployed.
Is there a standard way to run pytest within a Docker container in Azure DevOps, and get a graphical report?
Update your azure-pipelines.yml file as follows to run the tests in Azure Pipelines.
Method-1 (using docker)
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Docker@2
  inputs:
    command: 'build'
    Dockerfile: '**/Dockerfile'
    arguments: '-t fast-api:$(Build.BuildId)'
- script: |
    docker run fast-api:$(Build.BuildId) python -m pytest
  displayName: 'Run PyTest'
Successful pipeline screenshot
Method-2 (without docker)
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    Python37:
      python.version: '3.7'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'
- script: |
    pip install pytest pytest-azurepipelines
    python -m pytest
  displayName: 'pytest'
BTW, I have a simple FastAPI project you can reference if you want.
Test your Docker script using pytest-azurepipelines:

- script: |
    python -m pip install --upgrade pip
    pip install pytest pytest-azurepipelines
    pip install -r requirements.txt
    pip install -e .
  displayName: 'Install dependencies'
- script: |
    python -m pytest /src/test_server.py
  displayName: 'pytest'
Running pytest with the plugin pytest-azurepipelines will let you see your test results in the Azure Pipelines UI.
https://pypi.org/project/pytest-azurepipelines/
You can run your unit tests directly from within your Docker container using pytest-azurepipelines (which you need to install into the Docker image beforehand):
- script: |
    docker run --mount type=bind,source="$(pwd)",target=/results \
      --entrypoint /bin/bash my_docker_image \
      -c "cd results && pytest"
  displayName: 'tests'
  continueOnError: true
pytest will create an XML file containing the test results, which will be made available to the Azure DevOps pipeline thanks to the --mount flag in the docker run command. Then pytest-azurepipelines will publish the results directly to Azure DevOps.
In Node, you can define a package.json and then define a script block like the following:
"scripts": {
"start": "concurrently -k -r -s first \"yarn test:watch\" \"yarn open:src\" \"yarn lint:watch\"",
},
So in the root directory, I can just do yarn start to run concurrently -k -r -s first "yarn test:watch" "yarn open:src" "yarn lint:watch".
What is the equivalent of that in Python 3? Say I want a script called python test that runs python -m unittest discover -v.
Use make, it's great.
Create a Makefile and add some targets to run specific shell commands:
install:
	pip install -r requirements.txt

test:
	python -m unittest discover -v

# and so on, you get the idea
Run with (assuming the Makefile is in the current dir):
make test
NOTE: if you want to run multiple commands in the same environment from within a target, do this:
install:
	source ./venv/bin/activate; \
	pip install -r requirements.txt; \
	echo "do other stuff after in the same environment"
The key is the ; \ which joins the commands into a single line, so make executes them in one shell invocation. The space in ; \ is just for aesthetics.
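The difference is easy to see outside make as well: a backslash-continued recipe reaches the shell as one line, so state set by one command is visible to the next. A tiny standalone illustration:

```shell
# One shell invocation: the variable assigned before ";" is still visible
# after it, just like consecutive make recipe lines joined with "; \".
GREETING=hello; echo "$GREETING world"
```

By contrast, each plain recipe line in make gets its own shell, so a variable (or an activated venv) set on one line is gone by the next.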
Why don't you just use pipenv? It is Python's npm, and you can add a [scripts] section to your Pipfile, very similar to npm's.
See this other question to discover more: pipenv stack overflow question
Not the best solution really. This totally works if you are already familiar with npm, but as others have suggested, use Makefiles.
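For reference, the Pipfile [scripts] section mentioned above looks roughly like this (the script bodies are taken from this question; main.py is a made-up entry point):

```toml
[scripts]
test = "python -m unittest discover -v"
start = "python main.py"
```

Each entry is then run with pipenv run test or pipenv run start, analogous to npm run test.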
Well, this is a workaround, but apparently you can just use npm if you have it installed. I created a package.json file in the root directory of the Python app:
{
  "name": "fff-connectors",
  "version": "1.0.0",
  "description": "fff project to UC Davis",
  "directories": {
    "test": "tests"
  },
  "scripts": {
    "install": "pip install -r requirements.txt",
    "test": "python -m unittest discover -v"
  },
  "keywords": [],
  "author": "Leo Qiu",
  "license": "ISC"
}
Then I can just use npm install or yarn install to install all dependencies, and yarn test or npm test to run the test scripts.
You can also use preinstall and postinstall hooks. For example, you may need to remove files or create folder structures.
Another benefit is that this setup allows you to use any npm libraries, like concurrently, so you can run multiple files together, etc.
An answer specifically for tests: create a setup.py like this within your package/folder:
from setuptools import setup

setup(name='Your app',
      version='1.0',
      description='A nicely tested app',
      packages=[],
      test_suite="test")
Files are structured like this:
my-package/
| setup.py
| test/
| some_code/
| some_file.py
Then run python ./setup.py test to run the tests. You need to install setuptools as well (by default you can use the distutils.core setup function, but it doesn't offer as many options).
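For completeness, here is a minimal test module that python ./setup.py test (or python -m unittest discover -v) would pick up from the test/ directory; the file, class, and method names are invented for illustration:

```python
# test/test_smoke.py -- discovered because the filename starts with "test"
import unittest

class SmokeTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Run the case programmatically (roughly what "discover" does under the hood):
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest)
result = unittest.TextTestRunner().run(suite)
```

An empty __init__.py inside test/ may be needed for older discovery mechanisms to treat the directory as a package.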