Disabling PEP-257 warnings in vim python-mode - python

I know that I can disable pylint warnings by leaving a comment # pylint: disable=XXXX.
How do I do the same thing for pep257 errors?
1 C0110 Exported classes should have docstrings. [pep257]
2 C0110 Exported definitions should have docstrings. [pep257]
I am writing unit tests and (I believe) I do not need to worry about docstrings for every single test method - everything is quite self-explanatory.
I am using the https://github.com/klen/python-mode.

Assuming you followed the recommended pathogen installation,
.vim/bundle/python-mode/pylint.ini
has a disable = line to which you may add C0110
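For reference, a minimal sketch of what that entry might look like (the section header and any other codes already listed are assumptions; just append C0110 to whatever is on the disable line in your copy):

[MESSAGES CONTROL]
# append C0110 to the codes that are already listed here
disable = C0110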

How do I fix the import error that comes up in visual studio code? [duplicate]

When importing pygame pylint is going crazy:
E1101:Module 'pygame' has no 'init' member
E1101:Module 'pygame' has no 'QUIT' member
I have searched the net and I have found this:
"python.linting.pylintArgs": ["--ignored-modules=pygame"]
It solves the problem with pygame, but now pylint goes crazy in another way (see crazy_pylint.png).
Then I found "python.linting.pylintArgs": ["--ignored-files=pygame"], but that completely disables pylint for the whole directory I am working in.
So how do I tell pylint that everything is OK with pygame?
For E1101:
The problem is that most of Pygame is implemented in C directly. Now, this is all well and dandy in terms of performance, however, pylint (the linter used by VSCode) is unable to scan these C files.
Unfortunately, these same files define a bunch of useful things, namely QUIT and other constants, such as MOUSEBUTTONDOWN, K_SPACE, etc, as well as functions like init or quit.
To fix this, first things first, stop ignoring the pygame module by removing all your arguments in "python.linting.pylintArgs". Trust me, the linter can come in handy.
Now to fix the problems. For your constants (anything in caps), manually import them like so:
from pygame.constants import (
    MOUSEBUTTONDOWN, QUIT, MOUSEMOTION, KEYDOWN
)
You can now use these without prepending them with pygame.:
for event in pygame.event.get():
    if event.type == QUIT:
        pygame.quit()
    if event.type == KEYDOWN:
        # Code
Next, for your init and other functions errors, you can manually help the linter in resolving these, by way of 2 methods:
Either add this somewhere in your code: # pylint: disable=no-member. This will deactivate member validation for the entire file, preventing such errors from being shown.
Or you can encase the line with the error:
# pylint: disable=no-member
pygame.quit()
# pylint: enable=no-member
This is similar to what the first method does; however, it limits the effect to just the enclosed lines.
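If you prefer a single line, pylint also accepts the directive as a trailing comment on the offending statement itself, which scopes the suppression to exactly that line; a minimal sketch:

pygame.quit()  # pylint: disable=no-member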
Finally, for all your other warnings, the solution is to fix them. Pylint is there to show you places in which your code is either pointless, or nonconforming to the Python specs. A quick glance at your screenshot shows for example that your module doesn't have a docstring, that you have declared unused variables...
Pylint is here to aid you in writing concise, clear, and beautiful code. You can ignore these warnings or hide them (with # pylint: disable= and these codes) or spend a little time cleaning up everything.
In the long run, this is the best solution, as it'll make your code more readable and therefore maintainable, and just more pleasing to look at.
For a specific binary module you can whitelist it for pylint. For the pygame module it would be as follows:
{
    "python.linting.pylintArgs": [
        "--extension-pkg-whitelist=pygame"
    ]
}
OP, you can also keep the pygame pylint fix you found working in VS Code by including the VS Code default arguments yourself.
The linter is going nuts (crazy_pylint.png) because you were clobbering the default pylint arguments with your own custom python.linting.pylintArgs.
The pygame module ignore fix does work, and the linter can return to non-crazy mode by also including the clobbered default arguments in your own custom python.linting.pylintArgs.
From the docs:
These arguments are passed whenever the python.linting.pylintUseMinimalCheckers is set to true (the default).
If you specify a value in pylintArgs or use a Pylint configuration file (see the next section), then pylintUseMinimalCheckers is implicitly set to false.
The defaults VS Code passes, according to https://code.visualstudio.com/docs/python/linting, are:
--disable=all,
--enable=F,E,unreachable,duplicate-key,unnecessary-semicolon,global-variable-not-assigned,unused-variable,binary-op-exception,bad-format-string,anomalous-backslash-in-string,bad-open-mode
So, here is how to pass all those defaults as well as the --ignored-modules=pygame in user settings within vscode:
"python.linting.pylintArgs": [
"--disable=all",
"--enable=F,E,unreachable,duplicate-key,unnecessary-semicolon,global-variable-not-assigned,unused-variable,binary-op-exception,bad-format-string,anomalous-backslash-in-string,bad-open-mode",
"--ignored-modules=pygame"
]
Per #C._'s comment above, he's definitely speaking the truth; the linter will help!
I'm writing better code with it enabled for sure.
Also, I discovered that you can further fine-tune your pylinter with the enable line and the comma-delimited "readable pylint messages"
listed here: https://github.com/janjur/readable-pylint-messages/blob/master/README.md
So, to also catch trailing-newlines, you would simply append trailing-newlines to the enable= list.
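For example, a sketch of the same settings as above with trailing-newlines appended to the --enable list (everything else unchanged):

"python.linting.pylintArgs": [
    "--disable=all",
    "--enable=F,E,unreachable,duplicate-key,unnecessary-semicolon,global-variable-not-assigned,unused-variable,binary-op-exception,bad-format-string,anomalous-backslash-in-string,bad-open-mode,trailing-newlines",
    "--ignored-modules=pygame"
]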
I really hope this helps you OP :) It helped me!
Thanks for asking the question, and sharing --ignored-modules.

How to use flake8 as unittest case?

I want to make flake8 a unit test case for all my source files. The unit test has to fail when the code is not PEP 8 conformant.
So far I do this with pycodestyle, like this:
pep = pycodestyle.Checker(filename)
return pep.check_all() == 0
But I don't know how to do this with flake8 after import flake8.
As others have pointed out, this is not something you should be doing in unit tests. Unit tests should check the behavior and functionality of the code; linting and code style enforcement are best left to pre-commit checks or CI. The flake8 documentation has instructions on version control integration, where you can see how to integrate it with pre-commit.
But if you REALLY want to do it your way for some reason, you can look at the documentation for the legacy flake8 Python API and do something like:
from flake8.api import legacy as flake8

style_guide = flake8.get_style_guide(
    ignore=['E24', 'W5'],
    select=['E', 'W', 'F'],
    format='pylint',
)
result = style_guide.input_file("filename")
if result.total_errors:
    # do whatever you want here. Raise errors or whatever.
    pass
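If you really do want this as a unit test anyway, a minimal sketch of wrapping the legacy API in a unittest case might look like the following (the file list is a placeholder; adapt it to your project):

import unittest
from flake8.api import legacy as flake8

class TestCodeStyle(unittest.TestCase):
    def test_flake8_conformance(self):
        style_guide = flake8.get_style_guide(select=['E', 'W', 'F'])
        # 'my_module.py' is a hypothetical file name; list your own sources here
        report = style_guide.check_files(['my_module.py'])
        self.assertEqual(report.total_errors, 0,
                         "flake8 found style violations")

if __name__ == '__main__':
    unittest.main()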

Disable pylint execution on a section of code or function

I am using pylint 0.27 with python 2.7.3. Pylint has a known bug which hits when it analyses code having a .next() call. As given in http://www.logilab.org/122793 link, it fails with the given traceback.
I cannot change my python and pylint versions, but I would like to work around this problem by disabling pylint on the piece of code that has the .next() call, by adding a #pylint: MAGIC comment in my code.
I could find support for disabling pylint checking on a whole file using #pylint: skip-file, but I am interested in doing this at the function level, or better, at the line level.
Any other workarounds are also welcomed!
You can accomplish this by adding the pylint comment as the first line of the function body like so:
def wont_raise_pylint():
    # pylint: disable=W0212
    some_module._protected_member()
    some_module._protected_member()

def will_still_raise_pylint():
    some_module._protected_member()
When I refactor code, I wrap the portion I have yet to touch in a class ToDo and start that class with a directive to ignore all messages. At the end of the class the directive goes out of scope. MWE:
def unused_variable(foo=''):
    "generate W0613"

class ToDo:
    #pylint: disable = E, W, R, C
    #disables W0613 and E0213
    def unused_variable(foo=''):
        "generate W0613 and E0213"

def main(foo=''):
    "generate W0613"
To disable pylint entirely for a function without needing to enumerate each pylint violation code, pass all to the disable directive, e.g.:
def foo():
    # pylint: disable=all
    """You put the disable right under the function signature
    to disable pylint for the entire function scope.
    """
    pass
Unfortunately, you won't be able to locally avoid the error you encounter.
One could expect to locate the offending piece of code in the source and then locally disable the involved message(s), but this won't work because the check is still run: when you locally disable a message, the code that detects it still executes; only the message isn't emitted in the end.
However, it may work if you globally disable this message (depending on the way it's implemented). In your case it unfortunately seems that you would have to skip the whole 'logging' checker.
To sum up, to avoid your traceback you may either (see the sketch after this list):
locally use pylint: skip-file, in which case all of pylint's features will be active but the whole offending file will be skipped
globally disable the 'logging' checker (--disable=logging), in which case the whole code base (including the offending file) will be checked, but without the 'logging' checker messages
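A minimal sketch of the two options, assuming the offending module is called your_module.py (a placeholder name):

# Option 1: put this at the very top of your_module.py
# pylint: skip-file

# Option 2: disable the 'logging' checker for the whole run
$ pylint --disable=logging your_module.py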

Auto generate doctest output with Sphinx extension

I think I am missing something about the sphinx extension for doctest.
The typical example in the documentation is:
.. doctest::

   >>> print 1
   1
Isn't there a way to let sphinx generate the output (here: 1) automatically?
As far as I understood, it is possible to run:
$ make doctest
which tests the code snippets and compares the real output with the expected output. For example, if you have
.. doctest::

   >>> print 1
   3
doctest will warn you that it got 1 while it was expecting 3.
Instead, I would like sphinx to insert the real output into my docstring or my .rst file by itself. For example, if we have something like:
.. doctest::

   >>> print 1
   >>> print [2*x for x in range(3)]
I would like that when we run make doctest with an option, it changes the docstring to:
.. doctest::

   >>> print 1
   1
   >>> print [2*x for x in range(3)]
   [0, 2, 4]
I'm sure it's possible, and would be very convenient!
I have to strongly (but kindly) advise against what you're trying to do.
What you're asking is against the "test part" of the doctest module:
The doctest module searches for pieces of text that look like interactive Python sessions, and then executes those sessions to verify that they work exactly as shown.
These tests have a reasons to be if you write the input and the expected output and let Python check if the expected output match the actual output.
If you let Python produce the expected output, well.. it will no longer be expected (by the user/author), so the doctests will never fail, hence those tests will be useless.
Note: if a function contains no logic (if/else, while loops, appends, etc.), there's no need to test it. And tests must not reproduce the logic of the code under test, otherwise they're not really testing the function anymore.
I found this video about test-driven development very interesting; maybe it could be of interest to you if you want to know more about this topic.
Here is a suggestion on how you could achieve what I suspect you might be looking for:
Doug Hellmann has written an interesting article called Writing Technical Documentation with Sphinx, Paver, and Cog. It has a section describing how the Cog tool can be used to automatically run code examples and capture the output for inclusion in Sphinx-built documentation.
There is also a contributed Sphinx extension called autorun that can execute code in a special runblock directive and attach the output to the documentation.
This feature is available as part of pytest-accept, an extension of pytest: https://github.com/max-sixty/pytest-accept
Quote:
pytest-accept is a pytest plugin for automatically updating doctest outputs. It runs doctests, observes the generated outputs, and writes them to the doctests' documented outputs.
It's designed for a couple of use cases:
People who work with doctests and don't enjoy manually copying generated outputs from the pytest error log and pasting them into their doctests' documented outputs. pytest-accept does the copying & pasting for you.
People who generally find writing tests a bit annoying, and prefer to develop by "running the code and seeing whether it works". This library aims to make testing a joyful part of that development loop.
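For reference, a minimal sketch of how you might invoke it (my_module.py is a placeholder; check the pytest-accept README for the exact flags supported by your version):

$ pytest --doctest-modules --accept my_module.py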

_shutdown AttributeError (ignored) when linting code that uses M2Crypto

I'm running lint as follows:
$ python -m pylint.lint m2test.py
with this code:
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
The lint output ends with:
Exception AttributeError: '_shutdown' in <module 'threading' from '/usr/lib/python2.7/site-packages/M2Crypto-0.21.1-py2.7-linux-x86_64.egg/M2Crypto/threading.pyc'> ignored
This code works fine when run (the above is actually a minimal test case; but the full version does work). The exception is ignored, but Bitten considers this a failure, so stops on this step.
I've tried adding 'M2Crypto.threading.init()'/'M2Crypto.threading.cleanup()' around the definition of the function, but that didn't fix the problem.
How can I prevent this problem from occurring?
I'm using M2Crypto 0.21.1, pylint 0.24 and Python 2.7 (also tried 2.7.2) on Debian Lenny x86_64.
The exception that you are seeing is caused by a bug in the astng package (presumably “Abstract Syntax Tree, Next Generation”?) which is a toolkit on which pylint depends, written by the same people. I should note in passing that I always encourage people to use pyflakes instead of pylint when possible, because it is quick, simple, fast, and predictable, whereas pylint tries to do several kinds of deep magic that are not only slow but that can get it into exactly this kind of trouble. :)
Here are the two packages on PyPI:
http://pypi.python.org/pypi/pylint
http://pypi.python.org/pypi/astng
And note that this problem had to be, necessarily, a bug in pylint and not in your code, because pylint does not run your code in order to produce its report — imagine the havoc that could be wreaked if it did (since code being linted might delete files, etcetera)! Since your code does not get run, no amount of caution, like protecting your call with threading init() or cleanup() functions, could possibly have prevented this error — unless the code snippets happened, for other reasons, to alter the behavior we are about to investigate.
So, on to your actual exception.
I had never actually heard of _shutdown before! A quick search of the Python standard library showed its definition in threading.py but not a call of the function from anywhere; only by searching the Python C source code did I discover where in pythonrun.c, during interpreter shutdown, the function is actually called:
static void
wait_for_thread_shutdown(void)
{
    ...
    PyObject *threading = PyMapping_GetItemString(tstate->interp->modules,
                                                  "threading");
    if (threading == NULL) {
        /* threading not imported */
        PyErr_Clear();
        return;
    }
    result = PyObject_CallMethod(threading, "_shutdown", "");
    if (result == NULL) {
        PyErr_WriteUnraisable(threading);
    }
    ...
}
Apparently it is some sort of cleanup function that the threading Standard Library module requires, and they have special-cased the Python interpreter itself to make sure that it gets called.
As you can see from the code above, Python quietly and without complaint handles the case where the threading module never gets imported during a program's run. But if threading does get imported, and still exists at shutdown time, then the interpreter looks inside for a _shutdown function and goes so far as to print an error message — and then return a non-zero exit status, the cause of your problems — if it cannot call it.
So we have to discover why the threading module exists but has no _shutdown method at the moment when pylint is done examining your program and Python is exiting. Some instrumentation is called for. Can we print out what the module looks like as pylint exits? We can! The pylint/lint.py module, in its last few lines, runs its “main program” by instantiating a Run class it has defined:
if __name__ == '__main__':
    Run(sys.argv[1:])
So I opened lint.py in my editor — one of the magnificent things about having each little project installed in a Python Virtual Environment is that I can jump in and edit third-party code for quick experiments — and added the following print statement down at the bottom of the Run class's __init__() method:
sys.path.pop(0)
print "*****", sys.modules['threading'].__file__   # added by me!
if exit:
    sys.exit(self.linter.msg_status)
I re-ran the command:
python -m pylint.lint m2test.py
And out came the __file__ string of the threading module:
***** /home/brandon/venv/lib/python2.7/site-packages/M2Crypto/threading.pyc
Well, look at that.
This is the problem!
According to this path, there actually exists an M2Crypto/threading.py module that, under all normal circumstances, should just be called M2Crypto.threading, and therefore sit in the sys.modules dictionary under the name:
sys.modules['M2Crypto.threading']
But somehow that file is also getting loaded as the main Python threading module, shadowing the official threading module that sits in the Standard Library. Because of this, the Python exit logic is quite correctly complaining that the Standard Library _shutdown() function is missing.
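To make the shadowing concrete, here is a minimal sketch with a hypothetical package layout (a fresh interpreter in which threading has not been imported yet, and a made-up path /tmp/site-packages/mypkg that contains its own threading.py):

import sys
sys.path.insert(0, '/tmp/site-packages/mypkg')  # the package's own directory, not its parent
import threading   # now resolves to mypkg/threading.py, shadowing the Standard Library module
print threading.__file__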
How could this happen? Top-level modules can only appear in paths that are listed explicitly in sys.path, not in sub-directories beneath them. This leads to a new question: is there any point during the pylint run that the …/M2Crypto/ directory itself is getting put on sys.path as though it contained top-level modules? Let's see!
We need more instrumentation: we need to have Python tell us the moment that a directory with M2Crypto in the name appears in sys.path. It will really slow things down, but let's add a trace function to pylint's __init__.py — because that is the first module that gets imported when you run -m pylint.lint — that will write an output file telling us, for every line of code executed, whether sys.path has any bad values in it:
def install_tracer():
    import sys
    output = open('mytracer.out', 'w')
    def mytracer(frame, event, arg):
        broken = any(p.endswith('M2Crypto') for p in sys.path)
        output.write('{} {}:{} {}\n'.format(
            broken, frame.f_code.co_filename, frame.f_lineno, event))
        return mytracer
    sys.settrace(mytracer)

install_tracer()
del install_tracer
Note how careful I am here: I define only one name in the module's namespace, and then carefully delete it to clean up after myself before I let pylint continue loading! And all of the resources that the trace function itself needs — namely, the sys module and the output open file — are available in the install_tracer() closure so that, from the outside, pylint looks exactly the same as always. Just in case anyone tries to introspect it, like pylint might!
This generates a file mytracer.out of about 800k lines, that each look something like this:
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
The False says that sys.path looks clean, the filename and line number are the line of code being executed, and call indicates what stage of execution the interpreter is in.
So does sys.path ever get poisoned? Let's look at just the first True or False on each line, and see how many successive lines start with each value:
$ awk '{print$1}' mytracer.out | uniq -c
607997 False
3173 True
4558 False
33217 True
4304 False
41699 True
2953 False
110503 True
52575 False
Wow! That's a problem! For runs of several thousand lines at a time, our test case is True, which means that the interpreter is running with …/M2Crypto/ — or some variant of a pathname with M2Crypto in it — on the path, where it should not be; only the directory that contains …/M2Crypto should ever be on the path. Looking for the first False to True transition in the file, I see this:
False /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:132 line
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
...
False /home/brandon/venv/lib/python2.7/posixpath.py:124 line
False /home/brandon/venv/lib/python2.7/posixpath.py:124 return
True /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:133 line
And looking at lines 132 and 133 in the builder.py file reveals our culprit:
130     # build astng representation
131     try:
132         sys.path.insert(0, dirname(path)) # XXX (syt) iirk
133         node = self.string_build(data, modname, path)
134     finally:
135         sys.path.pop(0)
Note the comment, which is part of the original code, not an addition of my own! Obviously, XXX (syt) iirk is an exclamation in this programmer's strange native language for the phrase, “put this module's parent directory on sys.path so that pylint will break mysteriously every time someone forces pylint to introspect a package with a threading sub-module.” It is, obviously, a very compact native language. :)
If you adjust the tracing module to watch sys.modules for the actual import of threading — an exercise I will leave to the reader — you will see that it happens when SocketServer, which is imported by some other Standard Library module during the analysis, in turn tries to innocently import threading.
So let us review what is happening:
pylint is dangerous magic.
As part of its magic, if it sees you import foo, then it runs off trying to find foo.py on disk, to parse it, and to predict whether you are loading valid or invalid names from its namespace.
[See my comment, below.] Because you call .split() on the return value of RSA.as_pem(), pylint tries to introspect the as_pem() method, which in turn uses the M2Crypto.BIO module, which in turn makes calls that induce pylint to import threading.
As part of loading any module foo.py, pylint throws the directory containing foo.py on sys.path, even if that directory is inside a package, and therefore gives modules in that directory the privilege of shadowing Standard Library modules of the same name during its analysis.
When Python exits, it is upset that the M2Crypto.threading library is sitting where threading belongs, because it wants to run the _shutdown() method of threading.
You should report this as a bug to the pylint / astng folks at logilab.org. Tell them I sent you.
If you decide to keep using pylint after it has done this to you, then there seem to be two solutions in this case: either don't inspect code that calls M2Crypto, or import threading during the pylint import process — by sticking import threading into the pylint/__init__.py, for example — so that the module gets the chance to grab the sys.modules['threading'] slot before pylint gets all excited and tries to let M2Crypto/threading.py grab the slot instead.
In conclusion, I think the author of astng says it best: XXX (syt) iirk. Indeed.
Many thanks to Brandon Craig Rhodes for tracking this down and for such a detailed post.
I've removed the offending line from astng; the code is available from the hg repository until logilab-astng 0.23.0 is out. And I can confirm this fixes the OP's problem.
This looks more like a hack, but I think it works: copy the result of as_pem() and then split it.
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None)[:].split("\n")
I'm using Python 2.6.7, M2Crypto 0.21.1, pylint 0.23
I was unable to reproduce (pylint 0.24 and M2Crypto 0.21.1 on Ubuntu 11.04 64bit) but two suggestions:
Explicitly initialize threading:
import M2Crypto

def f():
    M2Crypto.threading.init()
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
    M2Crypto.threading.cleanup()
Or recompile without threading:
m2crypto = Extension(name = 'M2Crypto.__m2crypto',
                     sources = ['SWIG/_m2crypto.i'],
                     extra_compile_args = ['-DTHREADING'],
                     #extra_link_args = ['-Wl,-search_paths_first'], # Uncomment to build Universal Mac binaries
                     )