I recently enabled GitHub Codacy scans on my repo. The Pylint and Prospector modules (if that is the right terminology) report a lot of warnings.
I have to believe there's a way to configure what they flag, perhaps via an rc file or a .yml placed somewhere, but I haven't figured out what the config files should be named, where they should be placed, and what the allowable syntax(es) are. I'd be happy to RTFM if I could figure out the FM to R.
How do I configure the linters invoked by Codacy code scanners on GitHub?
Codacy will pick up the default config files for each linter.
Prospector (http://prospector.landscape.io/en/master/profiles.html#profiles-configuration)
.landscape.yml, .landscape.yaml, landscape.yml, landscape.yaml,
.prospector.yml, .prospector.yaml, prospector.yml, prospector.yaml
Pylint (http://pylint.pycqa.org/en/latest/faq.html#how-do-i-find-the-option-name-for-pylintrc-corresponding-to-a-specific-command-line-option)
pylintrc, .pylintrc
You can check Codacy's docs for the details of more tools:
https://docs.codacy.com/repositories-configure/code-patterns/#i-have-my-own-tool-configuration-file
Also, if you set up your project in the Codacy app, you can configure those patterns in the settings instead of using config files.
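For example, a minimal .pylintrc at the repo root could tune what Codacy's pylint reports (the message names below are only illustrative; pick the ones you actually want to silence):
[MESSAGES CONTROL]
# Illustrative: silence two commonly noisy pylint messages.
disable = missing-docstring,
          invalid-name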
Background
I am wrangling some legacy code into shape.
I use PDM to manage dependencies, which places all dependent packages in a __pypackages__ folder directly under the repo root level. PDM also uses the relatively new pyproject.toml package config file.
I am trying to adopt pre-commit Git hooks so that I can have automated checks for formatting and style before trying to commit, merge, and/or create PRs.
I am asking pre-commit to use only a few Python tools for now: pylint and black.
Issue
Most of that toolset works great together. However, pylint cannot find any of the modules that are stored in the __pypackages__ folder. Most of what I have read suggests that I alter my $PYTHONPATH to find the modules.
This solution seems very outdated. But also, I am not sure how I can do this in a robust way across the team. I can alter the Git hooks, but the $PYTHONPATH may be different for each engineer, so this will only work for my machine.
I would like to be able to add something to the pyproject.toml file to have pylint find these packages. I am not sure what to write, though, so that it works generically across the whole team. Something like:
[tools.pylint]
pypackages = "./__pypackages__"
Any ideas how I can do this?
Details
I am not sure more details are needed, but here they are:
My actions:
> pre-commit run --all-files # The --all-files flag is just to allow me to test without a commit
Trim Trailing Whitespace.................................................Passed
Fix End of Files.........................................................Passed
Check Yaml...........................................(no files to check)Skipped
Check for added large files..............................................Passed
black....................................................................Passed
pylint...................................................................Failed
- hook id: pylint
- exit code: 30
************* Module testfile
testfile.py:18:0: E0401: Unable to import 'boto3' (import-error)
boto3 is in the __pypackages__ mentioned above. None of the modules can be imported, but I limited the output for clarity.
I can pdm run ... everything correctly, and VS Code sees the modules fine. But pylint cannot import them because it does not know about the __pypackages__ folder.
You can get around this by updating the PYTHONPATH environment variable used by the VS Code Python extension: create a file named .env in your workspace (project folder) and add the following entry:
PYTHONPATH=D:/commonScripts
Note: Relative paths are also supported.
Further info on .env files can be found here: https://code.visualstudio.com/docs/python/environments#_environment-variable-definitions-file
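If you would rather solve it in pyproject.toml, as the question asks, one approach that may work is pylint's init-hook option (a sketch, assuming pylint 2.5+, which reads [tool.pylint] sections from pyproject.toml, and PDM's __pypackages__/<python-version>/lib layout):
[tool.pylint.main]
# Assumption: PDM lays packages out under __pypackages__/<python-version>/lib;
# the glob keeps this working regardless of the Python version in use.
init-hook = "import sys, glob; sys.path.extend(glob.glob('__pypackages__/*/lib'))"
Because the path is relative to wherever pylint runs, this works per-checkout rather than per-machine, which is the portability the question asks for.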
I have a package that is imported from the parent path everywhere, so I have to set the PYTHONPATH environment variable when I want to serve the docs for that package.
I've searched the docs, Stack Overflow, and Google, but couldn't find a way to configure this in mkdocs.yml or to run a piece of Python code that appends the path to sys.path.
Edit:
handlers:
  python:
    setup_commands:
      - import sys; sys.path.append('..'); print(sys.path)
could be what I'm looking for, but during mkdocs build (or serve) the print is never executed.
The solution is indeed to use setup_commands. Prints don't seem to be shown there, and my actual problem was that a package used by my code wasn't installed. The error message in that case is the same as when the import can't be found.
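For reference, here is a minimal mkdocs.yml sketch of that setup_commands configuration (assuming the mkdocstrings plugin with its Python handler, where setup_commands runs before the docs are collected):
plugins:
  - mkdocstrings:
      handlers:
        python:
          setup_commands:
            - import sys
            - sys.path.append('..')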
So I have the recommended setup for smaller projects, where you have multiple module YAML files all in the main file, all sharing source. Like here: https://cloud.google.com/appengine/docs/python/modules/#devserver
I only have 2 modules: the default module, and my backend module for running tasks, pipeline, etc.
Default is on version 22, backend is on version 'uno' (the first and only version of this module).
I cannot get backend to update to version 'dos'. Whenever I test things I am getting 404's, like the source files don't exist on the backend module. The requests make it to the correct module, but error out.
I have tried to update using: appcfg.py update main_directory app.yaml backend.yaml
But it always looks like it is only doing a 'default module' update. I never see anything about the backend module. Even when I try the above command minus the app.yaml (which is acting as my default module YAML).
In the developer console I can only see the single version for my backend module. It has not added a 2nd version despite my attempts to add a 'dos' version and a 'v2' version; neither ever "worked".
Anyone else have problems updating a 'backend' module to a new version? Is it the 'all in one directory' setup giving me problems? Am I just not using the right appcfg incantation?
Update 1: My directory structure looks like this
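(The original listing is not reproduced here; from the description it is presumably something like:)
main_directory/
    module1.yaml
    module2.yaml
    <shared source files>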
where module1.yaml is app.yaml and module2.yaml is backend.yaml.
Drop the main_directory from the update command:
appcfg.py update app.yaml backend.yaml
Specifying a directory only works for single-module apps, for uploading modules only the respective modules' .yaml files should be specified:
You can also update a single module or a subset of the apps modules by specifying only the .yaml files for the desired module(s).
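So, for example, pushing only the new backend version should just be:
appcfg.py update backend.yaml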
My context is appengine_config.py, but this is really a general Python question.
Given that we've cloned a repo of an app that has an empty directory lib in it, and that we populate lib with packages by using the command pip install -r requirements.txt --target lib, then:
import os, site, sys

dirname = 'lib'
dirpath = os.path.join(os.path.dirname(__file__), dirname)
For importing purposes, we can add such a filesystem path to the beginning of the Python path in the following way (we use index 1 because the first position should remain '.', the current directory):
sys.path.insert(1, dirpath)
However, that won't work if any of the packages in that directory are namespace packages.
To support namespace packages we can instead use:
site.addsitedir(dirpath)
But that appends the new directory to the end of the path, which we don't want in case we need to override a platform-supplied package (such as WebOb) with a newer version.
The solution I have so far is this bit of code which I'd really like to simplify:
# Temporarily shrink sys.path to just its first entry so that
# addsitedir's appended entries land right after it, then restore the rest.
sys.path, remainder = sys.path[:1], sys.path[1:]
site.addsitedir(dirpath)
sys.path.extend(remainder)
Is there a cleaner or more Pythonic way of accomplishing this?
For this answer I assume you know how to use setuptools and setup.py.
Assuming you would like to use the standard setuptools workflow for development, I recommend using this code snippet in your appengine_config.py:
import os
import sys

if os.environ.get('CURRENT_VERSION_ID') == 'testbed-version':
    # If we are unittesting, fake the non-existence of appengine_config.
    # The error message of the import error is handled by gae and must
    # exactly match the proper string.
    raise ImportError('No module named appengine_config')

# Imports are done relative because Google app engine prohibits
# absolute imports.
lib_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'libs')

# Add every library to sys.path.
if os.path.isdir(lib_dir):
    for lib in os.listdir(lib_dir):
        if lib.endswith('.egg'):
            lib = os.path.join(lib_dir, lib)
            # Insert to override default libraries such as webob 1.1.1.
            sys.path.insert(0, lib)
And this piece of code in setup.cfg:
[develop]
install-dir = libs
always-copy = true
If you type python setup.py develop, the libraries are downloaded as eggs into the libs directory, and appengine_config.py inserts them into your path.
We use this at work to include webob==1.3.1 and internal packages which are all namespaced using our company namespace.
You may want to have a look at the answers in the Stack Overflow thread, "How do I manage third-party Python libraries with Google App Engine? (virtualenv? pip?)," but for your particular predicament with namespace packages, you're running up against a long-standing issue I filed against site.addsitedir's behavior of appending to sys.path instead of inserting after the first element. Please feel free to add to that discussion with a link to this use case.
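Until that behavior changes upstream, a small wrapper can relocate whatever addsitedir appends; this is just a sketch of the idea, not a vetted implementation:
import site
import sys

def addsitedir_at(dirpath, position=1):
    # Record the current path, let site.addsitedir() append the new
    # directory (plus anything its .pth files add), then move those new
    # entries to the desired position instead of the end.
    old_path = list(sys.path)
    site.addsitedir(dirpath)
    added = [p for p in sys.path if p not in old_path]
    sys.path = old_path
    for entry in reversed(added):
        sys.path.insert(position, entry)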
I do want to address something else that you said that I think is misleading:
My context is appengine_config.py, but this is really a general Python question.
The question actually arises from the limitations of Google App Engine, specifically the inability to install third-party packages, and hence the need for a workaround. In general Python development, if your code manually adjusts sys.path or calls site.addsitedir, you're Doing It Wrong.
The Python Packaging Authority (PyPA) describes the best practices to put third party libraries on your path, which I outline below:
Create a virtualenv
Mark out your dependencies in your setup.py and/or requirements files (see PyPA's "Concepts and Analyses")
Install your dependencies into the virtualenv with pip
Install your project, itself, into the virtualenv with pip and the -e/--editable flag.
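In command form, under the usual assumptions (a setup.py and a requirements.txt at the project root), those steps look something like:
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
pip install -e .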
Unfortunately, Google App Engine is incompatible with virtualenv and with pip. GAE chose to block this toolset in an attempt to sandbox the environment. Hence, one must use hacks to work around the limitations of GAE to use additional or newer third-party libraries.
If you dislike this limitation and want to use standard Python tooling for managing third-party package dependencies, other Platform as a Service providers out there eagerly await your business.
I'm running django_coverage over a project with the command test_coverage. It's working, but the output and the final calculation include code in /usr/local/lib/python2.6/dist-packages. I'm not interested in the coverage of those modules, only the test coverage for my project. I see in the django_coverage documentation on BitBucket that there is a COVERAGE_PATH_EXCLUDES setting, but that seems to apply only to subdirectories of the project and not absolute system paths. Also, I see that the default for COVERAGE_MODULE_EXCLUDES excludes any imports with "django" in them, but I'm still getting output for /usr/local/lib/python2.6/dist-packages/django.
Any thoughts on how to fix this?
Do you have 'django' listed in COVERAGE_PATH_EXCLUDES? I have a similar setup (django 1.1.2, python 2.6) and don't see output for any django packages in my test coverage results. Can you post what you are using for the excludes?
I'm not using django so I can't confirm this, but is it possible that you have modified django_coverage's own settings file rather than including the settings in your own settings file, as it says in step 3 (from the readme excerpt below):
Install as a Django app
Place the entire django_coverage app in your third-party apps directory.
Update your settings.INSTALLED_APPS to include django_coverage.
Include test coverage specific settings in your own settings file. See settings.py for more detail.
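As a sketch of step 3, this might look like the following in your project's own settings.py (the option names come from django_coverage's settings.py; the values here are illustrative, not a confirmed fix for the absolute-path issue):
# In your project's settings.py, not django_coverage's bundled copy.
# Option names per the django_coverage README; values are illustrative.
COVERAGE_MODULE_EXCLUDES = ['tests$', 'settings$', 'urls$',
                            '__init__', 'django']
COVERAGE_PATH_EXCLUDES = [r'.svn']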