How to use flake8 as unittest case? - python

I want to make flake8 a unittest case for all my source files. The unit test has to fail when the code is not PEP 8 conformant.
I currently do this with pycodestyle like so:
pep = pycodestyle.Checker(filename)
return pep.check_all() == 0
But I don't know how to do the same with flake8 after import flake8.

As others have pointed out, this is not something you should be doing in unit tests. Unit tests should check the behavior and functionality of your code; linting and code-style enforcement are best left to pre-commit checks or CI. The flake8 documentation has instructions on version control integration, where you can see how to integrate it with pre-commit.
But if you REALLY want to do it your way for some reason, see the documentation for the legacy flake8 Python API. You can do something like:
from flake8.api import legacy as flake8

style_guide = flake8.get_style_guide(
    ignore=['E24', 'W5'],
    select=['E', 'W', 'F'],
    format='pylint',
)
result = style_guide.input_file("filename")
if result.total_errors:
    # do whatever you want here. Raise errors or whatever.
    pass
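For completeness, here is one way this could be wrapped in an actual test case. This is a minimal sketch only: the checked file name mymodule.py and the test names are made up, and it relies on the legacy API's check_files() and the Report.total_errors attribute shown above.

import unittest

from flake8.api import legacy as flake8


class TestCodeStyle(unittest.TestCase):
    def test_flake8_conformance(self):
        # same error/warning/pyflakes selection as in the snippet above
        style_guide = flake8.get_style_guide(select=['E', 'W', 'F'])
        report = style_guide.check_files(['mymodule.py'])  # hypothetical file to lint
        self.assertEqual(report.total_errors, 0, "flake8 reported style violations")


if __name__ == '__main__':
    unittest.main()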

Related

flake8 max-complexity per file

I have a legacy project that uses flake8 to check code quality and complexity, but the project has some very complicated (terrible) services which produce complexity warnings:
./service1.py:127:1: C901 'some_method' is too complex (50)
We are slowly transitioning into making them better, but we need to make Jenkins (which runs the tests and flake8) pass.
Is there a way to ignore a specific error code, or the complexity check, per file, or even per method?
If you have Flake8 3.7.0+, you can use the --per-file-ignores option to ignore the warning for a specific file:
flake8 --per-file-ignores='service1.py:C901'
This can also be specified in a config file:
[flake8]
per-file-ignores =
    service1.py: C901
You can use flake8-per-file-ignores:
pip install flake8-per-file-ignores
And then in your config file:
[flake8]
per-file-ignores =
    your/legacy/path/*.py: C901,E402
If you want a per-method/per-function solution, you can use the in-source # noqa: C901 syntax.
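For example (the function name here is hypothetical), the comment goes on the line of the offending definition:

def process_legacy_order(order):  # noqa: C901
    # the overly complex body stays untouched for now
    ...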
In your flake8 config add:
[flake8]
ignore = C901
max-complexity = <some_number>
Experiment with the value of max-complexity to find a number that fits your project.
Edit:
You can also ignore a single line of your code, or a whole file.
After you are done with the refactoring, don't forget to change these settings back.

python unittest with coverage report on (sub)processes

I'm using nose to run my "unittest" tests and have nose-cov to include coverage reports. These all work fine, but some of my tests require running code as a multiprocessing.Process. The nose-cov docs state that it can handle multiprocessing, but I'm not sure how to get that to work.
I'm just running tests by running nosetests and using the following .coveragerc:
[run]
branch = True
parallel = True

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover

    # Don't complain about missing debug-only code:
    def __repr__
    #if self\.debug

    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError

    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:
    def __main__\(\):

omit =
    mainserver/tests/*
EDIT:
I fixed the parallel switch in my ".coveragerc" file. I've also tried adding a sitecustomize.py like so in my site-packages directory:
import os
import coverage
os.environ['COVERAGE_PROCESS_START']='/sites/metrics_dev/.coveragerc'
coverage.process_startup()
I'm pretty sure it's still not working properly, though, because the "missing" report still shows lines that I know are running (they output to the console). I've also tried adding the environment variable in my test case file and also in the shell before running the test cases. I also tried explicitly calling the same things in the function that's called by multiprocessing.Process to start the new process.
First, the configuration setting you need is parallel, not parallel-mode. Second, you probably need to follow the directions in the Measuring Subprocesses section of the coverage.py docs.
Another thing to consider is whether you see more than one coverage data file after running your tests. If so, it may only be a matter of combining them afterwards with coverage combine.
tl;dr — to use coverage + nosetests + nose’s --processes option, set coverage’s --concurrency option to multiprocessing, preferably in either .coveragerc or setup.cfg rather than on the command-line (see also: command line usage and configuration files).
Long version…
I also fought this for a while, having followed the documentation on Configuring Python for sub-process coverage to the letter. Finally, upon re-examining the output of coverage run --help a bit more closely, I stumbled across the --concurrency=multiprocessing option, which I'd never used before and which seems to be the missing link. (In hindsight, this makes sense: nose's --processes option uses the multiprocessing library under the hood.)
Here is a minimal configuration that works as expected:
unit.py:
def is_even(x):
    if x % 2 == 0:
        return True
    else:
        return False
test.py:
import time
from unittest import TestCase

import unit


class TestIsEvenTrue(TestCase):
    def test_is_even_true(self):
        time.sleep(1)  # verify multiprocessing is being used
        self.assertTrue(unit.is_even(2))


# use a separate class to encourage nose to use a separate process for this
class TestIsEvenFalse(TestCase):
    def test_is_even_false(self):
        time.sleep(1)
        self.assertFalse(unit.is_even(1))
setup.cfg:
[nosetests]
processes = 2
verbosity = 2
[coverage:run]
branch = True
concurrency = multiprocessing
parallel = True
source = unit
sitecustomize.py (note: located in site-packages)
import os

try:
    import coverage
    os.environ['COVERAGE_PROCESS_START'] = 'setup.cfg'
    coverage.process_startup()
except ImportError:
    pass
$ coverage run $(command -v nosetests)
test_is_even_false (test.TestIsEvenFalse) ... ok
test_is_even_true (test.TestIsEvenTrue) ... ok
----------------------------------------------------------------------
Ran 2 tests in 1.085s
OK
$ coverage combine && coverage report
Name      Stmts   Miss Branch BrPart  Cover
-------------------------------------------
unit.py       4      0      2      0   100%

Disabling PEP-257 warnings in vim python-mode

I know that I can disable pylint warnings by leaving a comment # pylint: disable=XXXX.
How do I do the same thing for pep257 errors?
1 C0110 Exported classes should have docstrings. [pep257]
2 C0110 Exported definitions should have docstrings. [pep257]
I am writing unit tests and (I believe) I do not need to worry about docstrings for every single test method - everything is quite self-explanatory.
I am using the https://github.com/klen/python-mode.
Assuming you followed the recommended pathogen installation,
.vim/bundle/python-mode/pylint.ini
has a disable = line to which you may add C0110

how to know if my Python tests are running in coverage mode?

I am running Ned Batchelder's coverage module on continuous integration using Travis CI, but I want to run only the integration tests and skip the functional ones, because they take too long and coverage measurement is not affected by them.
I created a special configuration for this, but I want to know if there is an alternate way of knowing, inside a Python script, whether the code is being run by coverage or not.
nose can definitely help with it:
Cover: code coverage plugin
Attribute selector plugin
You can mark tests with the @attr("no-coverage") decorator and run your coverage tests with the -a '!no-coverage' option.
nose-exclude plugin
You can exclude specific test directories and test files from running using the --exclude-dir and --exclude-dir-file options.
Hope that helps.
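A minimal sketch of the attribute-selector approach, using the attrib plugin that ships with nose (the test name is made up):

from nose.plugins.attrib import attr


@attr('no-coverage')
def test_slow_functional_flow():
    # runs under a plain `nosetests` invocation,
    # but is skipped by `nosetests -a '!no-coverage'`
    ...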
Based on the wording of your question, I am assuming that you are not limiting which tests you run with coverage and would like the functional tests to notice that they are being run under coverage and do nothing. A hacky way might be to look at sys.argv in the functional tests and behave differently if you detect coverage usage. But I think a better approach would be to keep functional tests and unit tests in separate sibling directories, and tell coverage to run only the tests in the unit test directory. Potentially you could also use the --omit option to limit which tests are run.
Travis CI provides a couple of environment variables that can be used for this; in my case either of these will do:
CI=true
TRAVIS=true
Even though both answers provided before were really useful, I think this solution is easier to implement for what I need.
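A minimal sketch of how such a check might look inside a test module (the skip logic and names below are my own, not something Travis provides):

import os
import unittest

RUNNING_ON_TRAVIS = os.environ.get('TRAVIS') == 'true' or os.environ.get('CI') == 'true'


@unittest.skipIf(RUNNING_ON_TRAVIS, "functional tests are skipped on Travis CI")
class FunctionalTests(unittest.TestCase):
    def test_end_to_end_flow(self):
        ...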
I needed to determine if my tests were running under plain debug mode, with coverage, or just normally. After a good deal of experimentation I came up with this:
import sys

# Detect PyCharm debugging mode
get_trace = getattr(sys, 'gettrace', lambda: None)

if get_trace() is None:
    debug = False
    print('runnin normsies')
else:
    debug = True
    print('debuggin')

if 'coverage' in sys.modules.keys():
    print('covered')
Not sure how robust it is, but it works for me.
Here's an implementation of a check for whether a test is run in coverage mode. The nice thing about this is that you can use gettrace_result to check for other conditions as well, e.g. whether the test is being run by a debugger instead of coverage:
import sys


def is_run_with_coverage():
    """Check whether test is run with coverage."""
    gettrace = getattr(sys, "gettrace", None)
    if gettrace is None:
        return False
    else:
        gettrace_result = gettrace()

    try:
        from coverage.pytracer import PyTracer
        from coverage.tracer import CTracer
        if isinstance(gettrace_result, (CTracer, PyTracer)):
            return True
    except ImportError:
        pass

    return False
You can use pytest.mark.skipif to skip tests that shouldn't be run in coverage mode.
@pytest.mark.skipif(is_run_with_coverage())
def test_to_skip_in_coverage_mode():
    ...
From the coverage.py documentation:
Coverage.py sets an environment variable, COVERAGE_RUN, to indicate that your code is running under coverage measurement.
Important: this option is only available in version 6.1 and later of the coverage module.
If detectcoverage.py contains:
import os


def detect_coverage():
    return os.environ.get('COVERAGE_RUN', None) is not None


if detect_coverage():
    print("running in coverage mode")
else:
    print("not running in coverage mode")
then running that looks like:
$ coverage run detectcoverage.py
running in coverage mode
$ python detectcoverage.py
not running in coverage mode
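If you prefer, the same environment check can be used directly in a skip marker, which ties it back to the original question of skipping tests in coverage mode (a sketch; the test name is made up):

import os

import pytest


@pytest.mark.skipif(os.environ.get('COVERAGE_RUN') is not None,
                    reason="skipped under coverage measurement")
def test_expensive_functional_flow():
    ...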

Python CLI program unit testing

I am working on a Python command-line interface (CLI) program, and I find testing it tedious. For example, here is the help information of the program:
usage: pyconv [-h] [-f ENCODING] [-t ENCODING] [-o file_path] file_path

Convert text file from one encoding to another.

positional arguments:
  file_path

optional arguments:
  -h, --help            show this help message and exit
  -f ENCODING, --from ENCODING
                        Encoding of source file
  -t ENCODING, --to ENCODING
                        Encoding you want
  -o file_path, --output file_path
                        Output file path
When I make changes to the program and want to test something, I must open a terminal, type the command (with options and arguments), hit enter, and see if any error occurs while running. If an error does occur, I must go back to the editor, check the code from top to bottom, guess where the bug is, make small changes, add print statements, return to the terminal, run the command again...
And repeat, over and over.
So my question is: what is the best way to test a CLI program? Can it be as easy as unit testing a normal Python script?
I think it's perfectly fine to test functionally on a whole-program level. It's still possible to test one aspect/option per test. This way you can be sure that the program really works as a whole. Writing unit-tests usually means that you get to execute your tests quicker and that failures are usually easier to interpret/understand. But unit-tests are typically more tied to the program structure, requiring more refactoring effort when you internally change things.
Anyway, using py.test, here is a little example for testing a latin1 to utf8 conversion for pyconv::
# content of test_pyconv.py
import pytest

# we reuse a bit of pytest's own testing machinery, this should eventually come
# from a separately installable pytest-cli plugin.
pytest_plugins = ["pytester"]


@pytest.fixture
def run(testdir):
    def do_run(*args):
        args = ["pyconv"] + list(args)
        return testdir._run(*args)
    return do_run


def test_pyconv_latin1_to_utf8(tmpdir, run):
    input = tmpdir.join("example.txt")
    content = unicode("\xc3\xa4\xc3\xb6", "latin1")
    with input.open("wb") as f:
        f.write(content.encode("latin1"))
    output = tmpdir.join("example.txt.utf8")
    result = run("-flatin1", "-tutf8", input, "-o", output)
    assert result.ret == 0
    with output.open("rb") as f:
        newcontent = f.read()
    assert content.encode("utf8") == newcontent
After installing pytest ("pip install pytest") you can run it like this::
$ py.test test_pyconv.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev1
collected 1 items
test_pyconv.py .
========================= 1 passed in 0.40 seconds =========================
The example reuses some internal machinery of pytest's own testing by leveraging pytest's fixture mechanism, see http://pytest.org/latest/fixture.html. If you forget about the details for a moment, you can just work from the fact that "run" and "tmpdir" are provided for helping you to prepare and run tests. If you want to play, you can try to insert a failing assert-statement or simply "assert 0" and then look at the traceback or issue "py.test --pdb" to enter a python prompt.
Start from the user interface with functional tests and work down towards unit tests. It can feel difficult, especially when you use the argparse module or the click package, which take control of the application entry point.
The cli-test-helpers Python package has examples and helper functions (context managers) for a holistic approach on writing tests for your CLI. It's a simple idea, and one that works perfectly with TDD:
Start with functional tests (to ensure your user interface definition) and
Work towards unit tests (to ensure your implementation contracts)
Functional tests
NOTE: I assume you develop code that is deployed with a setup.py file or is run as a module (-m).
Is the entrypoint script installed? (tests the configuration in your setup.py)
Can this package be run as a Python module? (i.e. without having to be installed)
Is command XYZ available? etc. Cover your entire CLI usage here!
Those tests are simplistic: They run the shell command you would enter in the terminal, e.g.
import os

def test_entrypoint():
    exit_status = os.system('foobar --help')
    assert exit_status == 0
Note the trick to use a non-destructive operation (e.g. --help or --version) as we can't mock anything with this approach.
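The module-invocation check from the list above follows the same pattern (foobar is still the assumed package name):

def test_runs_as_module():
    exit_status = os.system('python -m foobar --help')
    assert exit_status == 0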
Towards unit tests
To test single aspects inside the application you will need to mimic things like command line arguments and maybe environment variables. You will also need to catch your script's exit, to keep the tests from failing on SystemExit exceptions.
Example with ArgvContext to mimic command line arguments:
@patch('foobar.command.baz')
def test_cli_command(mock_command):
    """Is the correct code called when invoked via the CLI?"""
    with ArgvContext('foobar', 'baz'), pytest.raises(SystemExit):
        foobar.cli.main()

    assert mock_command.called
Note that we mock the function that we want our CLI framework (click in this example) to call, and that we catch SystemExit that the framework naturally raises. The context managers are provided by cli-test-helpers and pytest.
Unit tests
The rest is business as usual. With the above two strategies we've overcome the control a CLI framework may have taken away from us. The rest is usual unit testing. TDD-style hopefully.
Disclosure: I am the author of the cli-test-helpers Python package.
So my question is, what is the best way to do testing with CLI program, can it be as easy as unit testing with normal python scripts?
The only difference is that when you run a Python module as a script, its __name__ attribute is set to '__main__'. So generally, if you intend to run your script from the command line, it should have the following form:
import sys

# function and class definitions, etc.
# ...


def foo(arg):
    pass


def main():
    """Entry point to the script"""
    # Do parsing of command line arguments and other stuff here. And then
    # make calls to whatever functions and classes that are defined in your
    # module. For example:
    foo(sys.argv[1])


if __name__ == '__main__':
    main()
Now there is no difference in how you use it, as a script or as a module. So inside your unit-testing code you can just import the foo function, call it, and make any assertions you want.
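A minimal sketch of what such a test might look like, assuming the module above is saved as mycli.py (the module and argument names are made up):

import sys
import unittest
from unittest import mock

import mycli  # hypothetical module containing foo() and main()


class TestMain(unittest.TestCase):
    def test_main_passes_first_argument_to_foo(self):
        with mock.patch.object(sys, 'argv', ['mycli', 'input.txt']), \
                mock.patch.object(mycli, 'foo') as fake_foo:
            mycli.main()
        fake_foo.assert_called_once_with('input.txt')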
Maybe too little too late, but you can always use
import os

result = os.system('<insert your command with options here>')
assert result == 0
That way, you can run your program as if it were invoked from the command line and check the exit code.
(Update after I studied pytest)
You can also use capsys.
(from running pytest --fixtures)
capsys
    Enable text capturing of writes to sys.stdout and sys.stderr.

    The captured output is made available via ``capsys.readouterr()`` method
    calls, which return a ``(out, err)`` namedtuple.

    ``out`` and ``err`` will be ``text`` objects.
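A minimal sketch of using it (greet() is a made-up function that prints to stdout):

def greet(name):
    print("Hello, %s!" % name)


def test_greet_prints_greeting(capsys):
    greet("world")
    out, err = capsys.readouterr()
    assert out == "Hello, world!\n"
    assert err == ""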
This isn't Python-specific, but what I do to test command-line scripts is to run them with various predetermined inputs and options and store the correct output in a file. Then, to test them when I make changes, I simply run the new script and pipe the output into diff correct_output -. If the files are the same, it outputs nothing. If they're different, it shows you where they differ. This will only work if you are on Linux or OS X; on Windows, you will have to get MSYS.
Example:
python mycliprogram --someoption "some input" | diff correct_output -
To make it even easier, you can add all these test runs to your 'make test' Makefile target, which I assume you already have. ;)
If you are running many of these at once, you could make it a little more obvious where each one ends by adding a fail tag:
python mycliprogram --someoption "some input" | diff correct_output - || tput setaf 1 && echo "FAILED"
The short answer is yes, you can use unit tests, and you should. If your code is well structured, it should be quite easy to test each component separately, and if you need to, you can always mock sys.argv to simulate running it with different arguments.
pytest-console-scripts is a Pytest plugin for testing python scripts installed via console_scripts entry point of setup.py.
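A minimal sketch with that plugin's script_runner fixture, assuming an installed console script named foobar (check the plugin's README for the exact, current API):

def test_foobar_help(script_runner):
    result = script_runner.run('foobar', '--help')
    assert result.success
    assert result.returncode == 0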
For Python 3.5+, you can use the simpler subprocess.run to call your CLI command from your test.
Using pytest:
import subprocess


def test_command__works_properly():
    try:
        result = subprocess.run(
            ['command', '--argument', 'value'],
            check=True, capture_output=True, text=True,
        )
    except subprocess.CalledProcessError as error:
        print(error.stdout)
        print(error.stderr)
        raise error
The output can be accessed via result.stdout, result.stderr, and result.returncode if needed.
The check parameter causes an exception to be raised if an error occurs. Note Python 3.7+ is required for the capture_output and text parameters, which simplify capturing and reading stdout/stderr.
Given that you are explicitly asking about testing for a command line application, I believe that you are aware of unit-testing tools in python and that you are actually looking for a tool to automate end-to-end tests of a command line tool. There are a couple of tools out there that are specifically designed for that. If you are looking for something that's pip-installable, I would recommend cram. It integrates well with the rest of the python environment (e.g. through a pytest extension) and it's quite easy to use:
Simply write the commands you want to run, prefixed with $, with the expected output on the lines that follow. For example, the following would be a valid cram test:
$ echo Hello
Hello
By having four spaces in front of expected output and two in front of the test, you can actually use these tests to also write documentation. More on that on the website.
You can use the standard unittest module:
# python -m unittest <test module>
or use nose as a testing framework. Just write classic unittest files in a separate directory and run:
# nosetests <test modules directory>
Writing unit tests is easy. Just follow the online manual for unit testing.
I would not test the program as a whole; this is not a good test strategy and may not actually pinpoint where the error is. The CLI interface is just a front end to an API. You test the API via your unit tests, and then when you make a change to a specific part, you have a test case to exercise that change.
So, restructure your application so that you test the API and not the application itself. But you can also have a functional test that actually runs the full application and checks that the output is correct.
In short, yes, testing the code is the same as testing any other code, but you must test the individual parts rather than only their combination as a whole to ensure that your changes do not break everything.
