I have a legacy project using flake8 to check code quality and complexity, but the project has some very complicated (terrible) services which trigger complexity warnings:
./service1.py:127:1: C901 'some_method' is too complex (50)
We are slowly transitioning toward making them better, but we need to make Jenkins (which runs the tests and flake8) pass.
Is there a way to specify ignoring a code error or complexity per file, or even per method?
If you have Flake8 3.7.0+, you can use the --per-file-ignores option to ignore the warning for a specific file:
flake8 --per-file-ignores='service1.py:C901'
This can also be specified in a config file:
[flake8]
per-file-ignores =
service1.py: C901
You can use flake8-per-file-ignores:
pip install flake8-per-file-ignores
And then in your config file:
[flake8]
per-file-ignores =
your/legacy/path/*.py: C901,E402
If you want a per-method/function solution, you can use the in-source # noqa: C901 comment syntax.
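For example (a minimal sketch; the name some_method is hypothetical and stands in for your real service code), the trailing comment suppresses only the complexity warning on that one definition, while every other check stays active:

```python
# Hypothetical service method; "# noqa: C901" silences only the
# complexity warning for this function, not the rest of the file.
def some_method(items):  # noqa: C901
    total = 0
    for item in items:
        if item > 0:
            total += item
    return total

print(some_method([1, -2, 3]))  # prints 4
```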
In your flake config add:
[flake8]
ignore = C901
max-complexity = <some_number>
Experiment with the value of max-complexity to find a number that is appropriate for your project.
Edit:
You can also ignore a single line of your code (with a trailing # noqa) or an entire file (with # flake8: noqa).
After you are done with the refactoring don't forget to change these settings.
Related
I've used random.choice for tests. And Bandit is showing warnings.
x = random.choice(lists)
I know I could use a # nosec comment to suppress the warning, but it would be inconvenient to do that on every line:
x = random.choice(lists) # nosec
I want to allow random in files matching tests_*.py using a .bandit configuration file. I've found from other samples that you can do this for things like asserts:
.bandit
assert_used:
skips: ['test.py$', '^test_*.py']
So is there any way to do this for B311?
This is okay according to python -m bandit -r test:
import random

def test_fuzz():  # nosec
    for i in range(10):
        length = random.randint(0, 200)
If you don't want to label a line (which allows # nosec B311) or a whole function with # nosec (which also ignores B101), use --skip:
python -m bandit --skip B311 -r test
You may want to python -m pip install --upgrade bandit, since Bandit 1.7 supports pyproject.toml, though not by default, so run python -m bandit -r test --config pyproject.toml:
[tool.bandit]
skips = ["B101", "B311"]
pyproject.toml replaces setup.cfg in at least Visual Studio Code, so you might prefer python -m bandit -r test --ini setup.cfg:
[bandit]
skips = B101,B311
YAML's nesting allows configuration per test plugin, as you noted. Unfortunately B311 is not a plugin, but I've filed an enhancement request for that.
You can skip it by adding # nosec to the code.
Or you can skip B311 using the --skip argument on the command line.
If you're using Python 3.6 or above then, in general, use the "secrets" library rather than the "random" library.
From the documentation: "The secrets module is used for generating cryptographically strong random numbers suitable for managing data such as passwords, account authentication, security tokens, and related secrets."
While you might not need cryptographically strong random numbers for your tests, it likely won't hurt either, unless your random number generator is seeded.
Seeding the random number generator will ensure that the random number generator emits the same random numbers on each run. This ensures your tests are reproducible. This is usually desirable.
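As a quick sketch of that point, seeding fixes the pseudo-random sequence, so the same values come back on every run:

```python
import random

# With a fixed seed the generator emits the same sequence every run,
# which keeps test data reproducible.
random.seed(42)
first_run = [random.randint(0, 200) for _ in range(5)]

random.seed(42)
second_run = [random.randint(0, 200) for _ in range(5)]

assert first_run == second_run  # same seed, same sequence
```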
If for some reason you do want to use truly random numbers then use secrets and bandit will not have a problem with it, and it avoids any special bandit configuration.
Previously when using pylint I have used custom comment settings to ignore undefined vars when editing in VS Code, for example:
# Make pylint think that it knows about additional builtins
data = data # pylint:disable=invalid-name,used-before-assignment,undefined-variable
DEBUG = DEBUG # pylint:disable=invalid-name,used-before-assignment,undefined-variable
VERBOSE = VERBOSE # pylint:disable=invalid-name,used-before-assignment,undefined-variable
Note my application has its own Python-based cut-down scripting language, hence the additional builtins.
I've not been able to find an equivalent for pylance.
Anyone have any suggestions?
You can add the following settings in settings.json configuration file:
"python.analysis.diagnosticSeverityOverrides": {
"reportUndefinedVariable": "none"
}
Or you can search for python.analysis.diagnosticSeverityOverrides in the settings, click the Add Item button, and select "reportUndefinedVariable": "none":
Following the issue that @Jill Cheng linked in the comments, the Pylance devs suggest using a # type: ignore comment at the end of the line in question. You will lose other type-checking diagnostics on that line, though, so apply with caution.
As a side note, if you're using this to quiet the messages due to "import <module> could not be resolved" then you should look into correctly configuring your workspace rather than overriding the message. Here's an example answer to help you solve that issue.
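As a sketch of the mechanism (the variable name below is hypothetical, and for the question's injected-builtins case you would append the same comment to lines like DEBUG = DEBUG): the annotation here is deliberately wrong, which Pylance would normally flag, and the trailing comment suppresses diagnostics on that one line only:

```python
# The annotation below is deliberately inconsistent; "# type: ignore"
# silences static-analysis diagnostics on this single line.
value: int = "not an int"  # type: ignore

# Annotations are not enforced at runtime, so this still runs.
assert value == "not an int"
```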
Is there a way to ignore all of the errors in certain packages within my project?
Some of the code in my project is compiled Protocol Buffers code which doesn't pass a MyPy check. It all lives within a directory /myproj/generated/proto.
Here's what I have in my mypy config file:
[mypy-myproject.generated]
ignore_missing_imports = True
ignore_errors = True
What can I add to this to make it ignore all error messages generated from an analysis of anything that's inside of myproject.generated?
This is a duplicate of a question on GitHub.
You can use a glob.
[mypy-myproject.generated.*]
ignore_errors = True
But you have to ensure that you have an __init__.py in /generated.
You can also ignore an entire folder or file via the exclude option (which takes a regular expression).
Here is an example of an ini file:
[mypy]
exclude = generated
Well, of course it's a crutch; I don't pretend this is the most correct way.
I've written a python test file called scraping_test.py, with a single test class, using unittest, called TestScrapingUtils
"""Tests for the scraping app"""
import unittest
from bs4 import BeautifulSoup as bs4
from mosque_scraper.management.commands import scraping_utils
from mosque_scraper.selectors import MOSQUE_INFO_ROWS_SELECTOR
class TestScrapingUtils(unittest.TestCase):
"""Test scraping_utils.py """
def setUp(self):
"""Setup McSetupface."""
pass
def test_get_keys_from_row(self):
""" Test that we extract the correct keys from the supplied rows."""
test_page_name = "test_page.html"
with open(test_page_name) as test_page_file:
test_mosque = bs4(test_page_file, 'html.parser')
rows = test_mosque.select(MOSQUE_INFO_ROWS_SELECTOR)
field_dict = scraping_utils.get_fields_from_rows(rows)
self.assertDictEqual(field_dict, {})
My settings for unit tests are:
{
"python.unitTest.unittestEnabled": true,
"python.unitTest.unittestArgs": [
"-v",
"-s",
".",
"-p",
"*test.py"
]
}
It looks like it should work, but when I click to run the tests in VSCode it says that no tests were discovered:
No tests discovered, please check the configuration settings for the tests.
How do I make it work?
You have to run it once by using the shortcut Shift+Ctrl+P and typing "Python: Run All Unit Tests".
It won't show up in the editor until it has been successfully executed at least once, or until you use the discover unit tests command.
However, one thing that has caught me many times is that the Python file has to be a valid Python file. The IntelliSense in VS Code for Python is not as sophisticated as for JavaScript or TypeScript, and it won't highlight every syntax error. You can verify this by forcing it to run all unit tests and observing the Python Test Log window.
What caught me is that the __init__.py file must be created in every subdirectory, from the root folder specified with -s option (in the example, the current directory ".") to the subdirectory where the test module is located. Only then was I able to discover tests successfully.
In the question example, both project_dir/ and project_dir/scraping_app/ should contain __init__.py. This is assuming that settings.json is located in project_dir/.vscode and the tests are run from project_dir/ directory.
Edit: Alternatively, use "-s", "./scraping_app/" as the root test directory so you don't have to put __init__.py to project_dir/.
Instead of the file name 'scraping_test.py' it should be 'test_scraping.py':
the name should start with the 'test' prefix.
I had the same error with a slightly different configuration. (I am posting this here because this is the question that comes up when you search for this error.)
In addition to what was said above, it is also important not to use periods in test file names (e.g. use module_test.py instead of module.test.py).
You can add the DJANGO_SETTINGS_MODULE variable and django.setup() inside the __init__.py file of tests package.
import os
import django
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_app.settings')
django.setup()
In my case, the problem was that my test was importing a module which read an environment variable using os.environ['ENV_NAME']. If the variable does not exist, this raises a KeyError. But VS Code does not log anything (or at least I couldn't find it).
So, the reason was that my .env file was NOT in the workspace root. So I had to add "python.envFile": "${workspaceFolder}/path/to/.env" to the settings.json file.
After that, the test was discovered successfully.
Also had this issue.
For me the fix was to make sure there were no errors, and to comment out all code in files that rely on pytest, just for the initial load.
Another issue that causes the unit tests not to be discovered is using a conda environment that contains an explicit dependency on the conda package itself. This happens when environment.yml contains the line:
- conda
Removing this line and creating the environment from scratch makes the unit tests discoverable. I have created a bug report on GitHub for this: https://github.com/microsoft/vscode-python/issues/19643
(This is my second solution to this issue; I decided to create another answer since this is entirely different from the previous one.)
This is my first time using unittest in VS Code. I found that the file names cannot contain spaces or dots, and cannot start with numbers.
For the dots, I guess anything after a dot is treated as a module suffix by unittest.
For the spaces, I guess the extension does not quote the filename.
For me, discovering the unit tests did the trick:
Shift+Ctrl+P and execute "Python: Discover Unit Tests".
After running this I get the "Run Test|Debug Test" over each test function.
My problem is that I'm using Django-nose with coverage but it shows the mentioned statements as non-executed, lowering my coverage percentages in a bad way. For some reason I have not discovered yet, my Django project loads functions and classes before coverage gets into action, as it is explained here:
Does coverage.py measure the function and class definitions?
I have tried the second solution, which is exactly what I am asking for, but got no result. I have fixed it temporarily with # pragma: no cover to get accurate percentages, but that is tedious, dirty, and obviously not the right way to do this.
Here you have the relevant details and settings of my project, related to this issue:
Django settings:
INSTALLED_APPS += ('django_nose', )
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
NOSE_ARGS = [
'--with-coverage',
'--cover-min-percentage=80',
'--cover-package=home, main, study, administrate, examine',
'--cover-inclusive',
'--cover-erase',
'--cover-html-dir=' + BASE_DIR + "/tests/.coverage_report",
'--cover-html',
'--verbosity=3',
'--exe',
]
.coveragerc:
[run]
omit = *migrations*
*admin.py*
*urls.py*
*__init__.py*
[report]
exclude_lines =
pragma: no cover
import *
dev-requirements.txt:
Django==1.7.2
requests==2.6.0
coverage==3.7.1
django-debug-toolbar==1.3.2
django-nose==1.4.1
nose==1.3.7
sqlparse==0.1.16
I've been trying to fix this for hours and it's really frustrating! Thanks so much for any sort of advice.
As Carlos said, ditch the nose args and run coverage directly for accurate coverage:
coverage run --branch --source=my_app1,my_app2 ./manage.py test
coverage report
and this will still use your .coveragerc file
[run]
omit = *migrations*
*admin.py*
*urls.py*
*__init__.py*