We've been long-time fans of pylint. Its static analysis has become a critical part of all our Python projects and has saved us tons of time chasing obscure bugs. But after upgrading from 1.3 to 1.4, almost every compiled C extension triggers E1101 (no-member) errors.
Projects that previously ran perfectly clean through pylint 1.3 now complain about almost every C extension member with E1101. We've been forced to disable E1101 errors, but that materially detracts from the usefulness of pylint.
For example, take this perfectly valid use of the lxml package:
r"""valid.py: demonstrate pylint 1.4 error"""
from lxml import etree
print etree.Element('mydoc')
Run this through pylint, and it reports:
$ pylint -rn valid.py
No config file found, using default configuration
************* Module valid
E: 3, 6: Module 'lxml.etree' has no 'Element' member (no-member)
But it is perfectly valid:
$ python valid.py
<Element mydoc at 7fddf67b1ba8>
Here's where it gets really weird. A very small handful of C extensions seem to work just fine through pylint, e.g.:
r"""valid2.py: this one works fine"""
import sqlite3
print sqlite3.version
$ pylint -rn valid2.py
No config file found, using default configuration
My question is, has anyone else witnessed this? And if so, would you be willing to share your workaround/solution?
We've experimented with creating plugins to suppress these warnings
(http://docs.pylint.org/plugins.html#enter-plugin), but we're having difficulty making heads or tails of the docs -- and the astroid base class is uber-complex and has defied our attempts to grok it.
For real bonus points (and our eternal gratitude), we'd love to understand what changed in pylint. We'd be happy to fix our code (or at least publish a best-practice document for C extension authors) so that it satisfies pylint.
Platform details
$ pylint --version
No config file found, using default configuration
pylint 1.4.0,
astroid 1.3.2, common 0.63.2
Python 2.7.5 (default, Jul 1 2013, 18:09:11)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-3)]
Shortly after posting my question, I found the answer. The change was in fact made deliberately as a security measure. Pylint imports modules in order to identify valid methods and attributes, and it was decided that importing C extensions that are not part of the Python stdlib is a security risk that could introduce malicious code.
This was done in the Astroid 1.3.1 release: https://mail.python.org/pipermail/code-quality/2014-November/000394.html
Only C extensions from trusted sources (the standard library) are
loaded into the examining Python process to build an AST from the live
module.
There are four solutions if you want to use pylint on projects that import non-stdlib C extensions.
1) Disable safety using the --unsafe-load-any-extension=y command line option. This feature is undocumented and classified as a hidden option (https://mail.python.org/pipermail/code-quality/2014-November/000439.html).
2) Disable safety using the pylint.rc setting unsafe-load-any-extension=yes. This is recommended over option 1 and is fully documented in the default pylint.rc file (created with --generate-rcfile).
3) Specifically list the package or module names you trust pylint to load in the pylint.rc file, using the extension-pkg-whitelist= option.
4) Create a plugin to manipulate the AST (I have no idea how to do this -- but it's regularly discussed on the pylint mailing list).
We opted for Option 3. We added the following line to our project pylint.rc file:
extension-pkg-whitelist=lxml
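For anyone starting from a generated config, these options live in the [MASTER] section of the file produced by --generate-rcfile. A minimal sketch (the whitelist line is ours; the commented line shows option 2 and is illustrative only):
[MASTER]
# Option 3: only these C extension packages may be loaded for introspection
extension-pkg-whitelist=lxml
# Option 2 (not what we chose): allow loading any C extension
# unsafe-load-any-extension=yes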
@user590028, thanks a lot for your answer! I just ran into this same problem with the libraries win32api, win32evtlog, win32file, win32gui, and win32process, and your solution worked.
I used another method I think is worth posting here, which is to call pylint and pass the whitelisted packages as a parameter:
pylint --extension-pkg-whitelist=win32api,win32evtlog,win32file,win32gui,win32process myfile.py
For those of you using VS Code, it's a bit tricky to find where to put the command, as I couldn't find my executable.
In VS Code:
click on File > Preferences > Settings.
Scroll down to "Python Configurations" in the left window
scroll down to "Python Linting: Pylint Args" in the right window
click on the "Edit in settings.json" link
edit the JSON to include (full settings.json entry shown below):
"--extension-pkg-whitelist="
I had to do all this because PyLint isn't executable from my Windows command line...
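For reference, the resulting entry in settings.json should look roughly like this (a sketch; lxml is just a placeholder for whichever package you need to whitelist):
"python.linting.pylintArgs": ["--extension-pkg-whitelist=lxml"]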
If you're using VS Code for Mac, this is what you need to do in order to edit the settings.json file:
Click on Code (i.e. the Visual Studio Code tab which is on the left of the 'File' tab) -> Preferences -> Settings
Scroll down to Extensions and click on Python in the list.
Click on any of the Edit in settings.json links. This opens up settings.json for editing.
Add the line "python.linting.pylintArgs": ["--extension-pkg-whitelist=lxml"].
Related
Hi, I've been working on this code for months without any pylint errors, and now they have suddenly appeared. How do I fix this?
You need pandas and torch installed in the same environment as the one in which you run pylint.
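For example, something along these lines (a sketch assuming a virtualenv at .venv and a file named myfile.py, both placeholders; the point is simply that pylint and the packages it must resolve live in the same environment):
source .venv/bin/activate
pip install pandas torch pylint
pylint myfile.py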
From the documentation at https://pylint.pycqa.org/en/latest/user_guide/messages/error/no-member.html:
If you are getting the dreaded no-member error, there is a possibility that either:
pylint found a bug in your code
You're launching pylint without the dependencies installed in its environment.
pylint would need to lint a C extension module and is refraining to do so.
Linting C extension modules is not supported out of the box, especially since pylint has no way to get an AST object out of the extension module.
But pylint actually has a mechanism which you might use in case you want to analyze C extensions. Pylint has a flag, called extension-pkg-allow-list (formerly extension-pkg-whitelist), through which you can tell it to import that module and to build an AST from that imported module:
pylint --extension-pkg-allow-list=your_c_extension
Be aware though that using this flag means that extensions are loaded into the active Python interpreter and may run arbitrary code, which you may not want. This is the reason why we disable by default loading C extensions. In case you do not want the hassle of passing C extensions module with this flag all the time, you can enable unsafe-load-any-extension in your configuration file, which will build AST objects from all the C extensions that pylint encounters:
pylint --unsafe-load-any-extension=y
Alternatively, since pylint emits a separate error for attributes that cannot be found in C extensions, c-extension-no-member, you can disable this error for your project.
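In an rc file that disable would look something like this (a sketch; the message name is the one quoted above):
[MESSAGES CONTROL]
disable=c-extension-no-member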
I seem to be using PEP 484 and PEP 526 type annotations more and more in my code.
I mostly use vim+syntastic, but I'm exploring my options a little; specifically, I'm looking into whether I'd like using PyCharm for some projects.
PyCharm has two mypy plugins. For now at least, I've gone with the one called "Mypy (Official)".
Is there a way of configuring PyCharm to use "mypy --disallow-untyped-calls --ignore-missing-imports file1.py file2.py" ?
My intention is to make PyCharm more strict in its type checking (--disallow-untyped-calls), while at the same time not generating errors about dependencies that don't include type annotations (--ignore-missing-imports).
I know I could write a shell wrapper to do this, but I'd rather go through the PyCharm GUI this time.
Thanks!
If you navigate to Configure plugin... for the MyPy (Official) plugin in PyCharm (right click on the "MyPy Terminal" pane), you can customize the command used to run mypy. On my machine (MacOS) with a clean install of the MyPy (Official) plugin, the default is:
dmypy run -- --follow-imports=error .
If you change this to
dmypy run -- --follow-imports=error --disallow-untyped-calls --ignore-missing-imports .
then the options you requested should be enabled.
Note that --follow-imports must be set to skip or error when using the plugin.
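If you prefer not to hard-code the flags in the plugin's command, the equivalent settings can also go in a mypy config file, which mypy/dmypy should pick up automatically; a minimal sketch, not from the original answer (mypy.ini at the project root):
[mypy]
disallow_untyped_calls = True
ignore_missing_imports = True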
It's a small annoyance, but pywin32 modules get reported as unresolved imports by the Python linter in VS Code.
I've got pywin32 installed, and I'm able to import the module(s) in a terminal session, and when the script runs.
My python.pythonPath is set correctly in my settings.json.
The linter just can't seem to find the modules.
If I use the following syntax the linter is happy, but the import obviously fails at run time.
import win32.lib.win32event
Any ideas on how I can "make" VSCode or the linter match the correct import?
Pylint has a configuration file, loaded by the command-line switch --rcfile. One of the entries in the configuration file is ignored-modules=. A comment line describes this option as a "list of module names for which member attributes should not be checked". There are several other, similar options.
These options are needed because pylint is a static analyzer: it doesn't load the module, it merely inspects its source code. In the case of a collection like win32, the modules are implemented in .dll files that pylint cannot inspect, so there is no way for pylint to figure out what names win32 exposes. The best you can do is tell pylint to suppress the error messages.
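For the pywin32 modules from this question, that would be a single line in the rc file (a sketch using the module names mentioned above):
ignored-modules=win32api,win32evtlog,win32file,win32gui,win32process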
I just installed YCM; everything looks good, but I found a small problem. The problem is as follows:
import os # os is built-in library
os. # ycm helps to complete members of the class.
import numpy # numpy is not built-in library, where its location is site-packages.
numpy. # nothing happened. ycm shows 'pattern not found' message.
I think this should be a simple problem, but I could not find the solution yet. I think there is some configuration file in which I can define a 'search path' for my project.
I would be grateful if I could find a way to solve this.
Best,
Je-Hoon Song
I had the same issue with module 'mpmath' and fixed it in the following manner:
First I retrieved the path where the module was located:
%python3
>>>import mpmath
>>>print(mpmath.__file__)
/usr/lib/python3.4/site-packages/mpmath/__init__.py
Here I found the path of all my "installed" python3 packages to be:
/usr/lib/python3.4/site-packages/
I then simply added to my PYTHONPATH environment variable this path:
%export PYTHONPATH=/usr/lib/python3.4/site-packages/
Then, when I opened vim sample.py, typed import mpmath, and followed it up with mpmath. (note the trailing dot), YCM showed me all the autocompletions for the mpmath module.
Hope this helps.
I use Anaconda Python as the Python interpreter for YCM to solve this.
First I modified my vimrc according to the full Python setting in vim.
Then I changed g:ycm_python_interpreter_path with
let g:ycm_python_interpreter_path = '/usr/local/anaconda3/bin/python3.8'
In this way I didn't change the system environment variables.
Additional Info 1:
I think the main problem is that my Python interpreter for YCM was my system Python (/usr/local/opt/python@3.9/bin/python3.9), which only has a limited set of locally built libraries.
So using the libraries that come with Anaconda (/usr/local/anaconda3/bin/python3.8) solves the problem.
Additional Info 2:
Reading :YcmDebugInfo, the main difference after editing g:ycm_python_interpreter_path is that:
-- Python completer debug information:
-- Python interpreter: /usr/local/opt/python@3.9/bin/python3.9
-- Python path: ['/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python39.zip', '/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9', '/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload', '/usr/local/lib/python3.9/site-packages']
-- Python version: 3.9.6
changed to
-- Python completer debug information:
-- Python interpreter: /usr/local/anaconda3/bin/python3.8
-- Python path: ['/usr/local/anaconda3/lib/python38.zip', '/usr/local/anaconda3/lib/python3.8', '/usr/local/anaconda3/lib/python3.8/lib-dynload', '/usr/local/anaconda3/lib/python3.8/site-packages', '/usr/local/anaconda3/lib/python3.8/site-packages/aeosa', '/usr/local/anaconda3/lib/python3.8/site-packages/locket-0.2.1-py3.8.egg']
-- Python version: 3.8.8
Additional Info 3: how to list locally installed Python modules
use https://stackoverflow.com/a/740018/11226687
e.g. in my case
$ /usr/local/opt/python@3.9/bin/python3.9
>>> help('modules')
# only returns a limited set of modules
$ /usr/local/anaconda3/bin/python3
>>> help('modules')
# lists all the modules included in Anaconda, including numpy/matplotlib/scipy etc.
numpy is kind of a difficult library because it builds its namespace dynamically on import, which makes it hard for static code analysis tools to know, while you're writing the code, what names will be available. Because the names in the numpy namespace are only really known at runtime, YCM probably doesn't have any useful suggestions for you.
One simple way to fix this is to activate your Python environment and then open vim. For example
(django_mdn) ➜ locallibrary git:(master) ✗ vim
and in vim run :echo $PATH.
Then you should see that your venv path comes first, like this:
/Users/gwanghyeongim/.virtualenvs/django_mdn/bin:/usr/local/opt/tcl-tk/bin:...
Then check whether your Python packages are auto-completed.
It worked.
If you want certain site-packages to be auto-completed permanently, you need to create a file called .ycm_extra_conf.py in your project root directory (or a global_extra_conf.py plus the corresponding vim configuration if you want to set it globally).
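A minimal sketch of such a .ycm_extra_conf.py, using the virtualenv path shown earlier (the Settings hook returning interpreter_path is the mechanism YCM documents for its Python completer; treat the exact path as a placeholder for your own environment):
def Settings(**kwargs):
    # Point YCM's Python completer at the virtualenv interpreter so that
    # packages installed there (e.g. via pip) show up in completions.
    return {
        'interpreter_path': '/Users/gwanghyeongim/.virtualenvs/django_mdn/bin/python'
    }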
P.S.
Running export PYTHONPATH=/usr/lib/python3.4/site-packages/ in the shell before opening vim didn't work for me. Besides, unless you set PYTHONPATH permanently (which can cause issues), you would have to export PYTHONPATH every time you want dependencies to be auto-completed.
I've got a PyDev project that uses protocol buffers. The protocol buffer files are located in a zip file generated by the protoc compiler. Everything works when I run the program, however PyDev reports "Undefined variable from import" for every enumeration constant. So for example:
import model_pb2
value = model_pb2.Expression(type = model_pb2.Expression.PARAMETER)
It reports the enum constant "PARAMETER" as being an undefined variable. There are several dozen similar errors in my program, and I'd like to fix them "properly" (i.e. not simply suppressing the warning.)
I found that using forced builtins can work, but only if all the proto files are in a separate folder located in an external library (see http://pydev.org/manual_101_project_conf2.html).
This should work:
Move (or unzip) the compiled proto files including model_pb2.py into a directory outside the pydev project.
Add an empty __init__.py file to the same directory as model_pb2.py to ensure it can be imported as a library.
In Eclipse, go to Window -> Preferences -> PyDev -> Interpreter
Add the directory with model_pb2.py to the Libraries.
Add model_pb2 to the forced builtins.
If you're not addicted to autocomplete, you may instead use Ctrl+1 to ignore these errors, as described in this answer. This was tested with Eclipse Kepler and pydev 2.8.
I encountered this issue with protobuf 2.6.1 and PyDev 4.5.5. I tried the suggestions above, but none of them helped in my case. What ended up getting rid of the 'undefined variable' errors when using protobuf enums was simple:
Access the enum on an instantiated protobuf object rather than on the protobuf module.
I'm not sure if this could be applied to the OP's use case, but in mine it was as easy as:
from myprotobuf_module import SomeProtobufMessage
some_protobuf_object = SomeProtobufMessage()
some_enum = some_protobuf_object.SOME_ENUM
Have you tried adding "model_pb2" to your forced builtins? See: http://pydev.org/manual_101_project_conf2.html for details.