Flake8 times out on circle-ci - python

We have a flake8 build stage in our circle-ci workflow, and more often than not this step fails due to timeout:
Too long with no output (exceeded 10m0s): context deadline exceeded
At the same time, this same stage runs fine locally on our MacBooks:
% time make lint
poetry run black .
All done! ✨ 🍰 ✨
226 files left unchanged.
isort -y
Skipped 2 files
PYTHONPATH=/path/to/project poetry run flake8 --show-source
0
make lint 44.00s user 4.90s system 102% cpu 47.810 total
We tried to debug the issue by adding the -vv flag to flake8, hoping to see the name of a plugin that takes too long, but the log does not even contain timestamps:
flake8.processor ForkPoolWorker-31 1004 WARNING Plugin requested optional parameter "visitor" but this is not an available parameter.
flake8.processor ForkPoolWorker-8 1080 WARNING Plugin requested optional parameter "visitor" but this is not an available parameter.
flake8.bugbear ForkPoolWorker-26 1082 INFO Optional warning B950 not present in selected warnings: ['E', 'F', 'W', 'C90']. Not firing it at all.
Are there any known reasons why flake8 would freeze on CircleCI? How can one debug the issue?

When using a virtual environment such as venv, you should exclude its folder in the [flake8] config (not doing so is what happened to me). Assuming you create the virtualenv with virtualenv .venv, it would look like this:
[flake8]
exclude = .venv
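If you also rely on flake8's built-in default excludes, extend-exclude (available since flake8 3.8) adds to them instead of replacing them; a variant sketch:
[flake8]
extend-exclude = .venv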
The same went for my coverage setup, which was fixed by adding an omit entry to that config (solution found here):
# pyproject.toml file content
[tool.coverage.run]
omit = [
    "tests/*",
    ".venv/*",
]

For now, the solution we seem to have found is to limit the number of cores running flake8:
# .flake8
[flake8]
...
jobs = 6
Not sure it is the correct solution, but there you go. I will accept a better solution if there is one.
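If the lint step is legitimately slow rather than hung, another knob worth knowing is CircleCI's per-step output timeout, which defaults to the 10m0s seen in the error above. A sketch against a standard .circleci/config.yml run step (the 20m value is illustrative):
# .circleci/config.yml (fragment)
- run:
    name: lint
    command: make lint
    no_output_timeout: 20m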

I've also experienced a timeout on circle-ci only, but it was due to the specific way dependencies are installed in the pipeline, creating a .venv folder which was not excluded in the flake8 configuration.
The -v option helped me notice the huge number of files flake8 was analyzing.
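A quick way to spot the same problem (a sketch; adjust paths to your layout) is to compare everything in the tree against the files you actually want linted:
find . -name '*.py' | wc -l        # everything, including any .venv
git ls-files '*.py' | wc -l        # only tracked files, i.e. what you meant to lint
A large gap between the two numbers usually points at an unexcluded virtualenv or build directory.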


Running multiple tox testenvs matching a name fragment

Rationale
With a complex dependency matrix, tox testenv names end up being a list like
py37-pytest5-framework1
py37-pytest5-framework2
py37-pytest6-framework1
py37-pytest6-framework2
py38-pytest5-framework1
py38-pytest5-framework2
py38-pytest6-framework1
py38-pytest6-framework2
...
py310-pytest6-framework2
While the inner tox.ini syntax allows configuring a lot of things with name fragments, e.g.
[testenv]
basepython =
    py37: python3.7
    py38: python3.8
    py39: python3.9
    py310: python3.10
deps =
    pytest5: pytest ~= 5.0
    pytest6: pytest ~= 6.0
    framework1: framework ~= 1.0
    framework2: framework ~= 2.0
setenv =
    framework2: FOO=bar
I find there is no way of telling the tox CLI to run all testenvs matching a name fragment, like tox -e py39 or tox -e framework2.
Issues
The main drawback is that CI testing jobs will usually end up segregated by python version, so you end up writing instructions like
tox -e $PY-pytest5-framework1,$PY-pytest5-framework2,$PY-pytest6-framework1,$PY-pytest6-framework2
but then the CI job definitions are coupled to the tox test matrix, because they must be aware of:
testenvs being added or removed
matrix exclusions, like pytest-5 not being compatible with python-3.10
And this is cumbersome to maintain.
Incomplete workaround
An easy workaround is simply running tox --skip-missing-interpreters, but the drawbacks are:
CI jobs can't be segregated by framework version instead of python version, for example to reuse some special framework cache
CI VMs could feature system python installations beyond the one targeted by each job, so you could end up with e.g. python-3.8 being run in all CI jobs.
Question
Am I missing some out-of-the-box mechanism to filter the testenvs to be run with a name fragment, letting me write CI jobs agnostic to the tox dependency matrix? I mean something like tox -e '*-framework2'.
Am I bound to filter and aggregate the output of tox --listenvs with shell tricks?
You could skip every testenv that does not match, by giving TOX_SKIP_ENV a negated regex (Python's re supports negative lookahead), for example:
$ env TOX_SKIP_ENV='^(?!.*-framework2$)' tox
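Failing that, the shell-trick route from the question is serviceable; a sketch (the grep pattern is illustrative):
$ tox -e "$(tox --listenvs | grep -- '-framework2$' | paste -sd, -)"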
tox 4, which will be released within the next couple of months, introduces labels. While this may not be an immediate help for your problem, maybe you see a way to simplify your tox.ini.
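A sketch of how labels could map onto this matrix (label name and env list illustrative, syntax per the tox 4 docs):
[tox]
labels =
    framework2 = py39-pytest6-framework2, py310-pytest6-framework2
$ tox run -m framework2
tox 4 is also slated to support selecting envs by factor with -f (e.g. tox run -f framework2), which would be the direct answer to this question.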

For pytest with pytest-cov: how to specify parallel=True for coverage version >= 5

In the pytest-cov documentation it says:
Note that this plugin controls some options and setting the option in
the config file will have no effect. These include specifying source
to be measured (source option) and all data file handling (data_file
and parallel options).
However, it doesn't say how to change these options. Is there a way to change them (specifically, to set parallel=True)?
I want to change this because after coverage was upgraded from < 5 to the latest (5.1), I got this:
Failed to generate report: Couldn't use data file '/path/to/jenkins/workspace/pr/or/branch/.coverage': no such table: line_bits
Note: coverage < 5 does not have this problem.
I have also tried adding a .coveragerc with the following, but I still get the same issue.
[run]
parallel = True
The way it is run in Jenkins:
pytest ./tests --mpl -n 4 \
--junitxml=pyTests.xml --log-cli-level=DEBUG -s \
--cov=. --cov-report --cov-report html:coverage-reports
This is due to pytest-cov using coverage combine, which merges all coverage results: run in parallel, it mixes in results from other runs, which may or may not have completed, and are in any case irrelevant.
I think if you're having the issue, it may be because you're running multiple test suites in parallel, like one per Python version.
In that case it's easily solved by specifying a unique COVERAGE_FILE for each run, like:
export COVERAGE_FILE=.coverage.3.7
for the Python 3.7 run, and so on.
See: https://github.com/nedbat/coveragepy/issues/883#issuecomment-650562896
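Put together, a sketch of the Jenkins side (variable name illustrative; assumes each parallel run finishes before the combine step):
export COVERAGE_FILE=.coverage.${PYTHON_VERSION}
pytest ./tests --cov=.
# once all runs are done, combine picks up the .coverage.* files in the cwd
coverage combine
coverage report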

Coverage badge in Gitlab CI with Python coverage always unknown

I am trying to show a coverage badge for a Python project in a private Gitlab CE installation (v11.8.6), using coverage.py for Python. However, the badge always says unknown.
This is the relevant job in my .gitlab-ci.yaml file:
coverage:
  stage: test
  before_script:
    - pip3.6 install coverage
    - mkdir -p public
  script:
    - coverage run --source=my_service setup.py test
    - coverage report | tee public/coverage.txt
  artifacts:
    paths:
      - public/coverage.txt
  coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
I expected the badge to show the actual coverage at this URL, so this is what I have entered in the project settings under General/Badges:
http://<privategitlaburl>/%{project_path}/badges/%{default_branch}/coverage.svg?job=coverage
I read these instructions for using Gitlab Pages. However, I do not want to use Pages just for this purpose, and I am dealing with a Python project.
According to the example in the CI/CD settings, and in this post, the regex in the coverage entry should work, which I could confirm by trying it locally:
$ grep -P "TOTAL\s+\d+\s+\d+\s+(\d+%)" public/coverage.txt
TOTAL     289     53    82%
I also tried the same regex in the field Test coverage parsing in the project settings under CI/CD/Pipeline settings, but the badge shown on that same page keeps showing unknown.
The documentation is not quite clear to me, as it does not describe the whole procedure. It is clear how to use a badge once created, and there is a manual for publishing a coverage report to pages, but there seems to be no clear path from extracting the score to displaying the badge.
Should I use the coverage entry in my .gitlab-ci.yaml file or fill in the regex in the pipeline settings?
Either way, is Gitlab CI supposed to update the coverage badge based on that, or do I need to use additional tools like coverage-badge to do so?
Where is the extracted coverage score supposed to be reported; how can I find out if my regex works?
I finally got the coverage badge displaying a percentage instead of unknown today for my python project. Here's the relevant content from my .gitlab-ci.yml:
job:
  script:
    - 'python -m venv venv'
    - '.\venv\Scripts\activate'
    - 'python -m pip install -r requirements.txt'
    - 'coverage run --source=python_project -m unittest discover ./tests'
    - 'coverage report --omit=things_that_arent_mine/*'
    - 'coverage xml'
  artifacts:
    reports:
      cobertura: 'coverage.xml'
I'm also using the regex for gcovr listed in the repo CI/CD Settings > General Pipelines > Test coverage parsing, which I found after reading this as well as the second-to-last comment on this:
^TOTAL.*\s+(\d+\%)$
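Before wiring any regex into GitLab, it is worth sanity-checking it locally against your own report, in the same spirit as the grep check in the question (file name illustrative):
$ coverage report > coverage.txt
$ grep -P 'TOTAL.*\s+\d+%$' coverage.txt
TOTAL     289     53    82%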
In the repo General Settings > Badges, my badge link is:
http://gitlab-server/%{project_path}/-/jobs
and my badge image url is:
http://gitlab-server/%{project_path}/badges/%{default_branch}/coverage.svg
I don't quite know what the Cobertura report artifact is for (I think it specifically has to do with merge requests) but I have it in there because the tutorials took me down that road. I did confirm that removing the following from the .gitlab-ci.yml doesn't break the badge or coverage number on the jobs page:
    - 'coverage xml'
  artifacts:
    reports:
      cobertura: 'coverage.xml'
While removing or commenting:
- 'coverage report --omit=things_that_arent_mine/*'
does break the badge as well as the coverage number displayed on the CI/CD jobs page of the repo. I also tried some regex variations that I tested in Rubular, but the only one that didn't cause gitlab to barf was the gcovr one.
Hopefully this helps out, it was kind of difficult and arduous to piece together what was needed for this badge to work on a python project.
EDIT:
Also just figured out how to add some sexy precision to the coverage percentage number. If you change the coverage report line of the .gitlab-ci.yml to:
- 'coverage report --omit=things_that_arent_mine/* --precision=2'
And the regex in CI/CD Settings > General Pipelines > Test coverage parsing to:
^TOTAL.+?(\d+.\d+\%)$
That should give you a very precise coverage number that effectively no one but you and I will care about. But by gosh we'll know for sure if coverage is 99.99% or 100%.
A solution that worked for me:
In the .gitlab-ci.yml file, in the job that runs the coverage report, I added the following lines:
script:
  # some lines omitted for brevity
  - coverage report --omit=venv/*
coverage: '/TOTAL.*\s+(\d+\%)/'
In particular, I couldn't get the badge to show anything but unknown until I added the coverage directive to my test job. Bear in mind, different tools may print different output and so you will likely have to change the regular expression, as I did.
Spent three days on the problem above myself, so I thought I'd post my working config. My project is a PyScaffold project, uses tox, the pipeline triggers when you push a commit to a branch, and it pushes a pip package to an internal package registry.
Badge link is: http://gitlab.XXXX.com/XXmeXX/python-template/-/commits/develop
Badge image is : http://gitlab.XXXX.com/XXmeXX/python-template/badges/develop/coverage.svg
Regex is same as above.
PYPIRC is an environment variable whose content looks like a .pypirc file and points to my internal pip registry.
I used this tutorial
Updated: --cov-report xml is in my setup.cfg, and I'm pretty sure that because of that I don't need the coverage command, but I haven't tested it since I wrote this post. I'll check next time I'm in there.
My gitlab-ci.yml:
build-package:
  stage: deploy
  image: python:3.7
  script:
    - set
    - cat $PYPIRC > /tmp/.pypirc
    - pip3 install twine setuptools setuptools_scm wheel tox coverage
    # build the pip package
    - python3 setup.py bdist_wheel
    # $CI_COMMIT_TAG only works with the tagging pipeline; if you want to test a branch push directly, pull from fs
    - VERSION=$(python setup.py --version)
    # You can issue ls -al commands if you want, like to see variables or your published packages
    # - echo $VERSION
    # - ls -al ./dist | grep whl
    - tox
    - coverage xml -o coverage.xml
    # If you want to put artifacts up for storage, ...
    # - mkdir public
    - python3 -m twine upload --repository pythontemplate ./dist/my_python_template-${VERSION}-py2.py3-none-any.whl --config-file /tmp/.pypirc
  artifacts:
    reports:
      cobertura: 'coverage.xml'
    when: always
    # If you were copying artifacts up for later.
    # paths:
    #   - public
  only:
    - branches
The biggest thing I learned was that first you get coverage showing in the "Coverage" column of the jobs list; then you know you're parsing everything correctly and the regexes are working. From there you work on the coverage xml and the badge links.
I also dug into this quite a bit. And just as you said, the command coverage report produces output similar to this:
[...]
tests/__init__.py            0      0   100%
tests/test_ml_squarer.py     4      0   100%
tests/test_squarer.py        4      0   100%
----------------------------------------------
TOTAL                       17      2    88%
test_service run-test: commands[4] | coverage xml
and depending on the regex that is saved, it simply looks for the 88% next to TOTAL. I used the one recommended for pytest-cov (Python), i.e. ^TOTAL.+?(\d+\%)$
So running coverage xml looks rather optional to me at the moment. However, it is not working for me on GitLab Community Edition 12.10.11.

VSCode pytest test discovery fails

Pytest test discovery is failing. The UI states:
Test discovery error, please check the configuration settings for the tests
The output window states:
Test Discovery failed:
Error: Traceback (most recent call last):
  File "C:\Users\mikep\.vscode\extensions\ms-python.python-2019.4.11987\pythonFiles\testing_tools\run_adapter.py", line 16, in <module>
    main(tool, cmd, subargs, toolargs)
  File "C:\Users\mikep\.vscode\extensions\ms-python.python-2019.4.11987\pythonFiles\testing_tools\adapter\__main__.py", line 90, in main
    parents, result = run(toolargs, **subargs)
  File "C:\Users\mikep\.vscode\extensions\ms-python.python-2019.4.11987\pythonFiles\testing_tools\adapter\pytest.py", line 43, in discover
    raise Exception('pytest discovery failed (exit code {})'.format(ec))
Exception: pytest discovery failed (exit code 3)
Here are my settings:
{
    "python.pythonPath": ".venv\\Scripts\\python.exe",
    "python.testing.pyTestArgs": [
        "tests"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.nosetestsEnabled": false,
    "python.testing.pyTestEnabled": true
}
I can run pytest from the command line successfully FWIW.
I spent ages trying to decipher this unhelpful error after creating a test that had import errors. Verify that your test suite can actually be executed before doing any deeper troubleshooting.
pytest --collect-only is your friend.
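For example, a broken import shows up immediately in the collection summary (output abbreviated; file and module names are hypothetical):
$ pytest --collect-only -q
ERROR tests/test_payments.py - ModuleNotFoundError: No module named 'stripe'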
This is not a complete answer, as I do not know why this is happening, and it may not relate to your problem, depending on how you have your tests structured.
I resolved this issue by putting an __init__.py file in my tests folder
e.g.:
β”œβ”€β”€β”€.vscode
β”‚       settings.json
β”‚
β”œβ”€β”€β”€app
β”‚       myapp.py
β”‚
└───tests
        test_myapp.py
        __init__.py
This was working a few days ago without the __init__.py, but the python extension was recently updated. I am not sure if this is the intended behavior or a side effect of how discovery is now being done.
https://github.com/Microsoft/vscode-python/blob/master/CHANGELOG.md
Use Python code for discovery of tests when using pytest. (#4795)
I just thought I would add my answer here, as this might well affect someone who uses a .env file for their project's environment settings, since it is such a common configuration for 12-factor apps.
My example assumes that you're using pipenv for your virtual environment management and that you have a .env file at the project's root directory.
My vscode workspace settings json file looks like below. The crucial line for me here was "python.envFile": "${workspaceFolder}/.env",
{
    "python.pythonPath": ".venv/bin/python",
    "python.linting.enabled": true,
    "python.linting.pylintEnabled": true,
    "python.linting.pycodestyleEnabled": false,
    "python.linting.flake8Enabled": false,
    "python.linting.pylintPath": ".venv/bin/pylint",
    "python.linting.pylintArgs": [
        "--load-plugins=pylint_django"
    ],
    "python.formatting.provider": "black",
    "python.formatting.blackArgs": [
        "--line-length",
        "100"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.nosetestsEnabled": false,
    "python.testing.pytestEnabled": true,
    "python.testing.pytestPath": ".venv/bin/pytest",
    "python.envFile": "${workspaceFolder}/.env",
    "python.testing.pytestArgs": [
        "--no-cov"
    ]
}
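For completeness, the .env file referenced by python.envFile is just KEY=value lines; a minimal sketch (names and values illustrative):
# .env at the project root
PYTHONPATH=./src
DJANGO_SETTINGS_MODULE=myproject.settings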
I hope this saves someone the time I spent figuring this out.
In my case, the problem with vscode being unable to discover tests was the coverage module being enabled in setup.cfg (alternatively this could be a pytest.ini), i.e.
addopts = --cov <path> -ra
which caused test discovery to fail due to low coverage. The solution was to remove that line from the config file.
Also, as suggested in the documentation:
You can also configure testing manually by setting one and only one of the following settings to true: python.testing.unittestEnabled, python.testing.pytestEnabled, and python.testing.nosetestsEnabled.
In settings.json you can also disable coverage using --no-cov flag:
"python.testing.pytestArgs": ["--no-cov"],
EDIT:
Slightly related: in more complex projects it might also be necessary to change the rootdir parameter (inside your settings.json) when running tests with pytest (in case of a ModuleNotFoundError):
"python.testing.pytestArgs": [
"--rootdir","${workspaceFolder}/<path-to-directory-with-tests>"
],
Seems like a bug in the latest version of the VS Code Python extension. I had the same issue; I downgraded the Python extension to 2019.3.6558 and then it worked again. So go to the VS Code extensions list, select the Python extension, and choose "Install another version..." from that extension's settings.
I hope this works for you too.
I resolved the issue by upgrading pytest to the latest version (4.4.1) with pip install --upgrade pytest. I was apparently running an old version, 3.4.2.
Before beginning test discovery, check that python.testing.cwd points correctly to your tests dir and that python.testing.pytestEnabled is set to true.
Once those requirements are set, run test discovery and check its output (see the OUTPUT window). You should see something like this:
python $HOME/.vscode/extensions/ms-python.python-$VERSION/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir $ROOT_DIR_OF_YOUR_PROJECT -s --cache-clear
Test Discovery failed:
Error: ============================= test session starts ==============================
<SETTINGS RELATED TO YOUR MACHINE, PYTHON VERSION, PYTEST VERSION,...>
collected N items / M errors
...
It's important to highlight the last line: collected N items / M errors. The lines that follow contain info about the tests pytest discovered, so your tests are discoverable, but there are errors preventing their execution. In most cases those errors come from an incorrect import.
Check that all your dependencies have been installed beforehand. If you work with pinned versions (in the requirements.txt file you have something like your_package == X.Y.Z), be sure each pin is the version you actually need.
If you are having trouble with pytest discovery, I was struggling with that part too.
Reading some open issues on vscode, I found a workaround using the test-adapter extension:
market_place_link
The extension works like a charm (and solved my discovery problem).
In my case, it was a vscode python extension problem.
I switched the test platform away from pytest and then back to it again, and the tests got discovered.
It seems that when testing is universally enabled for all python projects and it fails to discover tests at the beginning, it fails forever!
Test files need to be named test_*.py or *_test.py for pytest collection to work.
Run pytest --collect-only at the command line to make sure all of the tests are found. Then, magically, the flask icon in VSCode suddenly shows the test files and their tests.
As a noob, I was putting my unit tests inside the module files, since I was following the pattern of unittest, and then running pytest *.py. That doesn't work with pytest, and I didn't find a command-line argument to override the naming convention.
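(For what it's worth, the naming convention can be changed, just not via a dedicated command-line flag: pytest reads its collection patterns from the python_files ini option, and any ini option can be overridden ad hoc with -o. A sketch; matching every .py file is usually overkill:
# pytest.ini
[pytest]
python_files = *.py
or equivalently pytest -o python_files='*.py' on the command line.)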
I searched in the settings for "python" and found the testing options there. Switching to pytest automatically detected my tests.
This error is so frustrating...
In my case the error was fixed by modifying the python.pythonPath parameter in settings.json (found inside the .vscode folder under the project root directory) to the path obtained by running which python in a terminal (e.g. /usr/local/var/pyenv/shims/python).
I use pyenv with python3.9, and my error previously said:
Error: Process returned an error: /Users/"user-name"/.pyenv/shims/python: line 21: /usr/local/Cellar/pyenv/1.2.23/libexec/pyenv: No such file or directory
    at ChildProcess.<anonymous> (/Users/"user-name"/.vscode/extensions/littlefoxteam.vscode-python-test-adapter-0.6.8/out/src/processRunner.js:35:36)
    at Object.onceWrapper (events.js:422:26)
    at ChildProcess.emit (events.js:315:20)
    at maybeClose (internal/child_process.js:1021:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:286:5)
In my case, I had to make sure that I was using the right interpreter, the one where the libraries were installed (I used a virtual environment); the interpreter was pointing to the globally installed python. After changing the interpreter to the virtual environment's, it worked.
Looking at https://docs.pytest.org/en/latest/usage.html#possible-exit-codes it seems pytest itself is falling over due to some internal error.
I had this same issue and tracked it down to my pytest.ini file. Removing most of the addopts from that file fixed the issue.
I had this problem and struggled with it for hours. I think there is a particular resolution for each platform configuration. My platform is:
VSCode: 1.55.0 (Ubuntu)
Pytest: 6.2.3
MS Python extension (ms-python.python 2021.3.680753044)
Python Test Explorer for Visual Studio Code (littlefoxteam.vscode-python-test-adapter - 0.6.7)
The worst thing is that the tool itself has no standard output (at least none that I know of or could find easily on the internet).
In the end, the problem was the
--no-cov
parameter (copied from some page on the internet), which was not recognized by the VSCode testing explorer tool. The error was shown by the littlefoxteam.vscode-python-test-adapter extension, and it may help you find where things are broken.
In my case, the same problem appeared each time the flake8 linter reported errors. Even one error was enough to fail VS Code test discovery.
So the fix is to either disable the linter or fix the linter errors.
I use setup described here.
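If you go the disabling route, it is a one-line workspace setting (key name from the ms-python extension of that era):
"python.linting.flake8Enabled": false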
2021-12-22
Please find below the settings I used to get pytest working in VS Code after much frustration. I found many helpful pieces of advice here and elsewhere on the internet, but none were complete enough to spare me a bit of cursing. I hope the following helps someone out. This setup lets me run tests visually from the Test Explorer extension and also from the integrated terminal. I am using a src layout in my workspace and Conda for environment management. The terminal-related settings keep me from having to manually activate my Conda environment or set the python path. Possibly people who have been using VS Code for more than two days could add something to make this better and/or more complete.
##### Vscode info:
Version: 1.63.2 (Universal)
Commit: 899d46d82c4c95423fb7e10e68eba52050e30ba3
Date: 2021-12-15T09:37:28.172Z (1 wk ago)
Electron: 13.5.2
Chromium: 91.0.4472.164
Node.js: 14.16.0
V8: 9.1.269.39-electron.0
OS: Darwin x64 20.6.0
##### Testing Extension:
Python Test Explorer for Visual Studio Code
extension installed, v. 0.7.0
##### Pytest version
pytest 6.2.5
##### Directory Structure:
workspace
    .env
    ./src
        __init__.py
        code1.py
        code2.py
    ./tests
        __init__.py
        test_code1.py
        test_code2.py
##### .env file (in root, but see the "python.envFile" setting in settings.json)
PYTHONPATH=src
##### settings.json
{
    "workbench.colorTheme": "Visual Studio Dark",
    "editor.fontFamily": " monospace, Menlo, Monaco, Courier New",
    "python.testing.unittestEnabled": false,
    "python.testing.cwd": ".",
    "terminal.integrated.inheritEnv": true,
    "python.envFile": "${workspaceFolder}/.env",
    "python.defaultInterpreterPath": "~/anaconda3/envs/mycurrentenv/bin/python",
    "pythonTestExplorer.testFramework": "pytest",
    "python.testing.pytestEnabled": true,
    "python.testing.pytestArgs": [
        "tests"
    ],
    "python.terminal.activateEnvironment": true,
    "python.terminal.activateEnvInCurrentTerminal": true,
    "terminal.integrated.env.osx": {
        "PYTHONPATH": "${workspaceFolder}/src:${env:PYTHONPATH}"
    }
}
Here is a generic way to get Django tests to run with full vscode support:
Configure python tests
Choose unittest
Root Directory
test*.py
Then each test case will need to look like the following:
from django.test import TestCase

class views(TestCase):
    @classmethod
    def setUpClass(cls):
        import django
        django.setup()
        super().setUpClass()  # keep TestCase's own class-level setup intact

    def test_something(self):
        from user.model import something
        ...
Any functions you want to import have to be imported inside the test case (as shown). setUpClass runs before the test class is set up and will set up your django project. Once it's set up, you can import functions inside the test methods. If you try to import models/views at the top of your script, it will raise an exception, since django isn't set up yet. If you have any other pre-initialization that needs to run for your django project to work, run it inside setUpClass. Pytest might work the same way; I haven't tested it.
There are several causes for this.
The simple answer is, it is not perfect. Furthermore, Python is not native to VS Code; Python extensions may not be playing nice either.
I have observed some mitigating measures, though.
NB VS Code didn't "design" test discovery; that is the job of the testing framework. Make sure you have a basic grasp on how it works.
Troubleshooting
For every "battery" of tests, VS Code will gladly tell you what went wrong, right in the Test view panel. To see this, expand the bar with the name of the project, and hover your mouse over the line that is revealed. Look at the traceback.
A typical error there is "No module named src". This is more interesting than it sounds: if I import a file from another file, that import path may not be visible to the test discovery mechanism, which may be running from a different "cwd". The best you can do is try to figure out the topmost path of your project, and either add or remove path qualifiers until it works.
Main causes
From time to time, VS Code will lose info on the project test configuration. Use command window (Ctrl + Shift + P) to either:
(re)scan tests
(re)configure test spec for the project <-- important!
restart Python Language Server
Python is supposed to remove the need for the empty __init__.py; it seems VS Code loves those. They should be in each folder that leads to a test folder. Not every folder, but the top-down path.
Depending on what you select as the "root test folder" in the test configuration, it might have different meaning based on what VS Code thinks is the root of your project. That probably goes for any specific folder too.
Imports: VS Code doesn't like syntax errors. It is possible some syntax errors will not get highlighted in the code.
This (unfortunately) goes for all the imports in your file. But you shouldn't test invalid code anyway, right?
Minor buggy behaviors
Running some other visible test might help refresh the discovery process.
VS Code should automatically (by setting) refresh tests on each Save operation. But it doesn't hurt to refresh it manually.
TLDR
Look at the error items in the test panel
Re-configure project test discovery parameters from time to time
Make sure you don't have syntax errors (visible or not)
Create empty __init__.pys in each folder of your project that leads to the test folder
Clean up your import logic
P.S. The Test view has been extensively worked on and improved much over the course of 1 year. Expect changes in behavior.

Vim Flake8 ignoring project config file

vim-flake8 seems to be ignoring my project-specific config file. If I run flake8 from the command line in my project root, it works, but when I open vim and try to run flake8 against my files, it's not picking up that setting. I know this because it's using a default line-length of 79, instead of my project-specific 120.
I read this post: flake8 not picking up config file, but it doesn't seem to help. It mentions a bug fixed over a year ago in the comments.
In my project root, I have a .flake8 file with a [flake8] section.
How does vim-flake8 determine what the project root is and where to look for the config file? Does it just use the directory in which Vim is opened?
I ran into a similar issue today, and I got around it by adding the following to my ~/.vimrc (or actually, my ~/.config/nvim/init.vim) file:
let g:syntastic_python_flake8_config_file='.flake8'
This was based on syntastic's official documentation on language-specific configuration files.
I ran into the same problem today. Flake8 ran fine from the command line, but inside of vim every config file seemed to be ignored by syntastic. Running flake8 from inside vim itself (with :!flake8) picked up the config.
Based on the answer from Tomi I fixed it by adding
let g:syntastic_python_flake8_args='--config=setup.cfg'
to my vim config, which should work if vim is started from the project root. Still a bit hacky but at least the flake8 config stays in a single place.
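If vim is not always started from the project root, an upward search can make this less brittle; a sketch using vim's built-in findfile() (file name illustrative, and note it yields an empty --config= if nothing is found):
let g:syntastic_python_flake8_args = '--config=' . findfile('setup.cfg', '.;')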
Had the same problem too on my OSX, and partially solved it. Had the latest versions of syntastic (git cloned today) and flake8 3.0.4. Vim 7.4.
flake8 ran fine from the command line and picked up my global ~/.config/flake8. Vim did not output anything if I had the config file, but worked fine without the flake8 config file.
I partially solved the problem by having the flake8 config not in the file system but in my .vimrc:
let g:syntastic_python_flake8_args='--ignore=E203,E231'
but this is not the best solution as the config is not shared.
For the initiated developers: when I enable debugging with
let g:syntastic_debug = 1
I get this output:
syntastic: 4.516990: &shell = '/bin/bash', &shellcmdflag = '-c', &shellpipe = '2>&1| tee', &shellquote = '', &shellredir = '>%s 2>&1', &shellslash = 0, &shelltemp = 1, &shellxquote = '', &shellxescape = ''
syntastic: 4.517587: UpdateErrors (auto): default checkers
syntastic: 4.517927: CacheErrors: default checkers
syntastic: 4.518502: g:syntastic_aggregate_errors = 0
syntastic: 4.518666: getcwd() = '/Volumes/myproject/src'
syntastic: 4.525418: CacheErrors: Invoking checker: python/flake8
syntastic: 4.526113: SyntasticMake: called with options: {'errorformat': '%E%f:%l: could not compile,%-Z%p^,%A%f:%l:%c: %t%n %m,%A%f:%l: %t%n %m,%-G%.%#', 'makeprg': 'flake8 main.py', 'env': {'TERM': 'dumb'}}
syntastic: 4.727963: system: command run in 0.201426s
syntastic: 4.729751: getLocList: checker python/flake8 returned 1
syntastic: 4.730094: getLocList: checker python/flake8 run in 0.204568s
I couldn't get g:syntastic_python_flake8_* variations working on my MacOS.
The shortcut that worked for me was to add a symlink to the project base directory:
ln -s /path/to/common/.flake8 .flake8
With this link, syntastic is pointed to the .flake8 in the desired location.
