I want pre-commit to run my tests before each commit.
The command python -m unittest discover works fine from the command line:
D:\project_dir>python -m unittest discover
...
...
...
----------------------------------------------------------------------
Ran 5 tests in 6.743s
OK
But when I try to commit, I get:
D:\project_dir>git commit -m "fix tests with hook"
run tests................................................................Failed
hookid: tests
usage: python.exe -m unittest discover [-h] [-v] [-q] [--locals] [-f] [-c]
                                       [-b] [-k TESTNAMEPATTERNS] [-s START]
                                       [-p PATTERN] [-t TOP]
python.exe -m unittest discover: error: unrecognized arguments: bigpipe_response/processors_manager.py
usage: python.exe -m unittest discover [-h] [-v] [-q] [--locals] [-f] [-c]
                                       [-b] [-k TESTNAMEPATTERNS] [-s START]
                                       [-p PATTERN] [-t TOP]
python.exe -m unittest discover: error: unrecognized arguments: tests/test_processors.py
Here is my .pre-commit-config.yaml file.
- repo: local
  hooks:
    - id: tests
      name: run tests
      entry: python -m unittest discover
      language: python
      types: [python]
      stages: [commit]
I also tried language: system, but I got the same result.
How can I solve this?
You can try the following YAML. Of course, you should change the pattern in the args option if you are using a different one.
- id: unittest
  name: unittest
  entry: python -m unittest discover
  language: python
  types: [python]
  args: ["-p '*test.py'"]  # probably this option is not needed at all
  pass_filenames: false
  stages: [commit]
You should set the pass_filenames parameter to false; otherwise the staged files are passed as arguments to the command, and as your error output shows, unittest discover treats them as "unrecognized arguments".
The accepted answer won't work if your application depends on other packages. In that case, the language: python setting makes pre-commit build its own virtual environment for the hook (under the ~/.cache/pre-commit folder), to which you can add dependencies with the additional_dependencies option. However, this usually means duplicating the contents of requirements.txt.
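For illustration only, such a hook entry might look like the sketch below; the two listed packages are placeholders, not dependencies from the original question.
- id: tests
  name: run tests
  entry: python -m unittest discover
  language: python
  pass_filenames: false
  additional_dependencies:
    - requests     # placeholder dependency
    - sqlalchemy   # placeholder dependency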
So, in order to use your application's own virtual environment, you have to set language: system and put your test hook under repo: local. Then the virtual environment that is active when the pre-commit hooks run is used.
A working configuration would look like this:
- repo: local
  hooks:
    - id: unittests
      name: run unit tests
      entry: python -m unittest
      language: system
      pass_filenames: false
      args: ["discover"]
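As a quick check, you can run the hook (id unittests from the config above) against the whole repository without making a commit:
pre-commit run unittests --all-files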
Related
When I push my code to GitLab it goes through CI, but the pytest session fails there while it passes on my local machine.
In my gitlab-ci.yaml I tried installing pytest both inside and outside the requirements.txt file; this is how it looks now:
.test:
  tags: ["CsLib"]
  before_script:
    - python3 -m pip install -r requirements.txt
    - python3 -m pip install pytest

pytest:
  stage: test
  extends: ".test"
  script:
    - nox -s test
After the installation it moves on to the nox session that runs pytest, which looks like this:
@session(python=["3.9"])
def test(s: Session) -> None:
    s.posargs.append("--no-install")
    s.run("python", "-m", "pytest", "tests", external=True)
I have enabled the reuse of existing virtualenvs and passed external=True so that already-installed packages are used. Yet it gives this error:
$ nox -s test
nox > Running session test-3.9
nox > Creating virtual environment (virtualenv) using python3.9 in .nox/test-3-9
nox > python -m pytest tests
/builds/RnD/python-libs/.nox/test-3-9/bin/python: No module named pytest
nox > Command python -m pytest tests failed with exit code 1
nox > Session test-3.9 failed.
The odd thing is that the lint session, which has almost the same structure as the test session but uses flake8, gives no error:
@session(python=["3.9"])
def lint(s: Session) -> None:
    s.posargs.append("--no-install")
    s.run("flake8", external=True)
flake8 is installed via the requirements file; I tried the same with pytest but it does not work.
If I run nox -s test on my local machine it executes without any problem, so I must be doing something wrong on the CI side that I cannot see.
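For what it's worth, the error output shows that pytest is missing from the virtualenv nox creates under .nox/ (the CI job installed it into the system interpreter instead). A minimal sketch of a session that installs pytest into its own environment, which is one common way to make such a session self-contained, might look like this (an illustration, not the original noxfile):
import nox


@nox.session(python=["3.9"])
def test(s: nox.Session) -> None:
    # Install the test dependency into the session's own virtualenv,
    # so the job does not depend on what the system interpreter has.
    s.install("pytest")
    s.run("python", "-m", "pytest", "tests")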
I'm having trouble implementing a sample program that runs pytest within .gitlab-ci.yml on Windows:
Using Shell executor...
Here is my .gitlab-ci.yml:
# .gitlab-ci.yml
test_sample:
  stage: test
  tags:
    - test_sam
  script:
    - echo "Testing"
    - pytest -s Target\tests
  when: manual
CI/CD terminal output:
pytest is not recognized as the name of a cmdlet, function, script file, or operable program.
Python and pytest are already installed on the Windows machine the test runs on, but the test still fails.
I have tried the solution suggested in the thread below, but it doesn't work:
gitlab-ci.yml: 'script: -pytest cannot find any tests to check'
Could you please suggest how to make this test pass on Windows?
If python is recognized, you could replace pytest, as this similar project does, with:
unittests:
  script: python -m unittest discover tests -v

core doctests:
  script: python -m doctest -v AmpScan/core.py

registration doctests:
  script: python -m doctest -v AmpScan/registration.py

align doctests:
  script: python -m doctest -v AmpScan/align.py
(The initial switch to pytest failed)
If you want to use pytest, you would need to use a Python Docker image in your .gitlab-ci.yml.
See "Setting Up GitLab CI for a Python Application" from Patrick Kennedy:
image: "python:3.7"

before_script:
  - python --version
  - pip install -r requirements.txt

stages:
  - Static Analysis
  - Test

...

unit_test:
  stage: Test
  script:
    - pwd
    - ls -l
    - export PYTHONPATH="$PYTHONPATH:."
    - python -c "import sys;print(sys.path)"
    - pytest
The command below worked without any issues:
py -m pytest -s Target\tests
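Applied to the .gitlab-ci.yml from the question, the script section would then presumably become (using the Windows py launcher instead of calling pytest directly):
  script:
    - echo "Testing"
    - py -m pytest -s Target\tests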
I'm running Python unittest using the discover mode:
% python -m unittest discover
The system prints a dot for each test, but I'd rather see a test name.
Is there an option that makes this happen?
The verbose flag (-v) is what you're looking for:
$ python -m unittest discover -v
test_a (tests.test_a.TestA) ... ok
test_b (tests.test_b.TestB) ... ok
...
For more options, check:
$ python -m unittest --help
usage: python -m unittest [-h] [-v] [-q] [--locals] [-f] [-c] [-b]
                          [tests [tests ...]]

positional arguments:
  tests            a list of any number of test modules, classes and test
                   methods.

optional arguments:
  -h, --help       show this help message and exit
  -v, --verbose    Verbose output
  -q, --quiet      Quiet output
  --locals         Show local variables in tracebacks
  -f, --failfast   Stop on first fail or error
  -c, --catch      Catch Ctrl-C and display results so far
  -b, --buffer     Buffer stdout and stderr during tests
...
I have found some code that I think will allow me to communicate with my Helios heat recovery unit. I am relatively new to Python (though not to coding in general) and I cannot work out how to use this code. It is obviously written for smarthome.py, but I'd like to use it from the command line.
I can also see that the way this file is constructed is probably not the best way to write an __init__.py, but I'd like to try to use it first.
So, how do I run this code? https://github.com/mtiews/smarthomepy-helios
Cheers
After git clone https://github.com/mtiews/smarthomepy-helios.git: either
invoke python with the __init__.py script as argument:
python smarthomepy-helios/__init__.py
or
make the __init__.py executable and run it:
chmod u+x smarthomepy-helios/__init__.py
smarthomepy-helios/__init__.py
Running it either way gives me
2016-02-20 18:07:51,791 - root - ERROR - Helios: Could not open /dev/ttyUSB0.
Exception: Not connected
But passing --help gives a nice synopsis:
$> python smarthomepy-helios/__init__.py --help
usage: __init__.py [-h] [-t PORT] [-r READ_VAR] [-w WRITE_VAR] [-v VALUE] [-d]

Helios ventilation system commandline interface.

optional arguments:
  -h, --help            show this help message and exit
  -t PORT, --tty PORT   Serial device to use
  -r READ_VAR, --read READ_VAR
                        Read variables from ventilation system
  -w WRITE_VAR, --write WRITE_VAR
                        Write variable to ventilation system
  -v VALUE, --value VALUE
                        Value to write (required with option -v)
  -d, --debug           Prints debug statements.

Without arguments all readable values using default tty will be retrieved.
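So if, for example, the unit is attached to a serial device other than the default, pointing the script at it with -t (plus -d for debug output) should look roughly like this; the device path is only an illustration:
python smarthomepy-helios/__init__.py -t /dev/ttyUSB1 -d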
I set up the fixture like this:
import pytest

def pytest_addoption(parser):
    parser.addoption('--env', action='store', default='qa',
                     help='Specify environment: "qa", "aws", "prod".')

@pytest.fixture(scope='module')
def testenv(request):
    return request.config.getoption('--env')
And this works when I call py.test against a filename, for example:
py.test -v --env prod functionaltests/test_health_apps.py
But it does not work when I invoke py.test with markers, as with the following variations:
py.test -m selenium --env prod
py.test -m 'selenium' --env prod
py.test --env prod -m selenium
py.test --env prod -m 'selenium'
These return:
Usage: py.test [options] [file_or_dir] [file_or_dir] [...]
py.test: error: no such option: --env
Are markers and command line options incompatible?
They are compatible. My guess is that your configuration file (conftest.py) is not in the directory from which you launch your tests. (I might be wrong here.)
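For reference, a layout in which the option is picked up no matter how the tests are selected keeps conftest.py at the directory you invoke py.test from; a sketch, assuming the functionaltests/ folder from the question:
project/
├── conftest.py              # pytest_addoption and the testenv fixture live here
└── functionaltests/
    └── test_health_apps.py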
My suggestion is to create a separate file for the configuration:
# configs.py
import pytest

def pytest_addoption(parser):
    parser.addoption('--env',
                     dest='testenv',
                     choices=["qa", "aws", "prod"],
                     default='qa',
                     help='Specify environment: "qa", "aws", "prod".')

@pytest.fixture(scope='session')
def testenv(request):
    return request.config.option.testenv
and create a runner.py that you will use in place of the py.test command:
# runner.py
import pytest
import sys
import configs

def main():
    plgns = [configs]
    pytest.main(sys.argv[1:], plugins=plgns)

if __name__ == "__main__":
    main()
Then you can start it as python runner.py --env prod -m selenium.
This works very well for me.