Pylint: Disable specific warnings for specific folder - python

We have a Python project laid out like this:
project/
├── .pylintrc
├── module1.py
├── module2.py
└── tests/
    ├── test_module1.py
    └── test_module2.py
Our unit and functional tests live in the tests/ folder. For the tests, the pylint warnings missing-docstring, invalid-name and protected-access are not relevant. For the actual project code, on the other hand, these warnings are very useful.
My question is whether it is possible to add ignores for missing-docstring, invalid-name and protected-access in the .pylintrc file that apply only to modules inside the tests/ folder.
If possible, we would rather not add #-disable comments for these warnings to every test module inside the folder.

As far as I'm aware you can't disable specific warnings for entire directories or files.
However, you can disable all warnings for specific directories using the following on the command line:
--ignore=<file[,file]>
The file here can be a directory.
Personally, and I know you said you'd rather not, I'd add a disable to the top of each file.
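If you do go that route, a single module-level comment at the top of each test file covers all three messages (standard pylint syntax, listing the messages from the question):
# pylint: disable=missing-docstring, invalid-name, protected-access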

Yes: you can create a .pylintrc in the tests/ folder and another in the project folder.
Add tests to the ignore option of the project-level .pylintrc:
[MASTER]
ignore=tests
See: https://docs.pylint.org/en/1.6.0/run.html
Then run separately:
pylint project
pylint project/tests
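The tests/.pylintrc can then disable only the messages that are noise in tests. Using standard pylint configuration syntax, it would look something like this (adjust the list as needed):
[MESSAGES CONTROL]
disable=missing-docstring,
        invalid-name,
        protected-access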

Related

Can I avoid getting pylint errors from not having __init__.py without disabling rules?

This is my project
root
├── main.py
└── utils
└── tool.py
This is my main.py:
from utils.tool import some_func
It works but I'm getting Pylint(E0611:no-name-in-module). I'm pretty new to Python so I don't want to break any rules. But is there any way to get rid of this message without disabling the rule or breaking commonly accepted conventions?
I'm asking because I dislike the idea of spraying __init__.py files into all of my project folders.

Proper ways to set the path of my app in Python

I have a question about how to properly set the path in Python (Python 3.x).
I developed a small scraping app in Python with the following directory structure.
root
├── Dockerfile
├── README.md
├── tox.ini
├── src
│   └── myapp
│       ├── __init__.py
│       ├── do_something.py
│       └── do_something_else.py
└── tests
    ├── __init__.py
    ├── test_do_something.py
    └── test_do_something_else.py
When I want to run my code, I can go to the src directory and run it with
python do_something.py
But because do_something.py has an import statement pulling from do_something_else.py, it fails like this:
Traceback (most recent call last):
  File "src/myapp/do_something.py", line 1, in <module>
    from src.myapp.do_something_else import do_it
ModuleNotFoundError: No module named 'src'
So, I eventually decided to use the following command to specify the python path:
PYTHONPATH=../../ python do_something.py
to make sure that the path is seen.
But, what are the better ways to feed the path so that my app can run?
I want to know this because when I run pytest via tox, I would run the tox command from the root directory so that tox finds tox.ini. If I do that, I will most likely run into a similar problem because the Python path is not set properly.
Questions I want to ask specifically are:
Where should I run my main code when creating a project like this? From root, as in python src/myapp/do_something.py? Or should I go to the src/myapp directory and run python do_something.py?
Once the directory from which I should execute my program is determined, what is the correct way to import modules from other .py files? Is it OK to use from src.myapp.do_something_else import do_it (this means I must add the path from the src directory)? Or is there a different way to import?
What are the ways to have Python recognize the path? I am aware there are several ways to make the path accessible, as below:
a. write export PYTHONPATH=<path_of_my_choice>:$PYTHONPATH to make the path accessible temporarily, or put that line in my .bashrc to make it permanent (but that is hard to reproduce when I want to automate creating the Python environment via Ansible or other automation tools)
b. write import sys; sys.path.append(<root>) to have the root as an accessible path
c. use pytest-pythonpath package (but this is not really a generic answer)
Thank you so much for your inputs!
my environment
OS: MacOS and Amazon Linux 2
Python Version: 3.7
Dependency in Python: pytest, tox
I would suggest using setup.py to make this a Python package. Then you can install it in development mode with python setup.py develop. This way it will be available in your Python environment without needing to specify the PYTHONPATH.
For testing, you can simply install the package with python setup.py install.
Hope that helps.
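As a rough sketch of that suggestion (the package name and version are placeholders, and it assumes the src/ layout from the question), the setup.py could look like:
from setuptools import setup, find_packages

setup(
    name="myapp",                         # placeholder name
    version="0.1",                        # placeholder version
    package_dir={"": "src"},              # the code lives under src/
    packages=find_packages(where="src"),  # picks up src/myapp
)
Note that with package_dir={"": "src"} the installed package is importable as myapp, so the import becomes from myapp.do_something_else import do_it rather than from src.myapp.do_something_else import do_it.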
Two simple steps should make it happen. Python experts can comment if this is a good way to do it (especially going by the concluding caution raised towards the end of this post).
I would have done it like below.
First, I would put an __init__.py in root so that the hierarchy looks like below. This way Python will treat the folder as a package.
root
├── Dockerfile
├── README.md
├── tox.ini
├── __init__.py
├── src
│   └── myapp
│       ├── __init__.py
│       ├── do_something.py
│       └── do_something_else.py
└── tests
    ├── __init__.py
    ├── test_do_something.py
    └── test_do_something_else.py
Then, in do_something.py, I would add these lines at the top. In the second line, put the full path to the root directory.
import sys
sys.path += ['/home/SomeUserName/SomeFolderPath/root']
from src.myapp.do_something_else import do_it
Please note that the second line will essentially modify sys.path by adding the root folder path for the lifetime of the interpreter. If this is not what you can afford, then I am sorry.

py.test - Error collecting when 2 conftest.py in different directories

We are using py.test. We are trying to put different conftest.py files in different folders to split our fixtures:
tests/api/
├── conftest.py
├── folder1
│   └── conftest.py
├── folder2
│   └── conftest.py
But when we run the tests, this error occurs:
____ ERROR collecting api/folder1/conftest.py ____
import file mismatch:
imported module 'conftest' has this __file__ attribute:
/tests/api/folder2/conftest.py
which is not the same as the test file we want to collect:
/tests/api/folder1/conftest.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
Why is that? How can we fix it?
PS. Removing __pycache__ / .pyc files did not help.
PPS. __init__.py files already exist in each folder.
I had the same issue. To solve this, you need to create Python packages instead of plain directories. Then pytest will look at the conftest.py in your package instead of the root directory. Hope this helps you.
tests/api/
├── conftest.py
├── package1  # not folder
│   └── conftest.py
├── package2  # not folder
│   └── conftest.py
Your use case sounds like this example in the pytest documentation. Because of that I think it's possible to use conftest.pys at different levels to override fixtures.
The errors you're seeing may be related to incorrect imports. Is your test code importing from conftest files directly? Are your conftest files importing from your tests? Are any of your imports relative instead of absolute? If any of these are true, that may be your issue. I recommend only using absolute imports, and avoid imports between conftest.pys and test files.
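As a rough sketch of that pattern (the fixture name and URLs are made up for illustration), a fixture defined in the top-level conftest.py can be overridden by a same-named fixture in a lower conftest.py, without the files clashing:
# tests/api/conftest.py
import pytest

@pytest.fixture
def base_url():
    return "http://example.com/api"

# tests/api/folder1/conftest.py
import pytest

@pytest.fixture
def base_url():
    # overrides the parent fixture, but only for tests under folder1
    return "http://example.com/api/v2"
Tests under folder1 see the overriding fixture; all other tests see the top-level one.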
Rename one (or both) of the test files Pytest is complaining about. Pytest is telling you in the error message to do this (i.e. change the basename, meaning don't name all your test files conftest.py). For example, you can fix it by doing:
tests/api/
├── conftest.py
├── folder1
│   └── test_conf1.py
├── folder2
│   └── test_conf2.py
In your case, the module names conflict (you have three conftest.pys). This is a quirk of Pytest, AFAIK. Pytest could get around this by managing full package/module paths, but it doesn't do this (probably for good reason, but I do not maintain/contribute to pytest so I can't shed light on the issue). Pytest is a fantastic framework (it's even telling you exactly why it can't run your tests): I'm sure they have a good reason for not supporting this behavior.
You claim that you want to:
separate tests and fixtures by different functionalities.
So do that. Separating the test fixtures/functionalities has nothing to do with what you name the files.
I commonly run into this error when splitting up unit/integration/acceptance tests. I split them up so I can run my (fast) unit tests without having to run my (potentially slow) integration/acceptance tests. I might have some module, call it Abc. And I have something like:
tests/
├── unit
│   └── test_abc.py
├── integration
│   └── test_abc.py
But then pytest barfs with the identical error you've shown, and so I just rename integration/test_abc.py to integration/test_abc_integration.py and move on with my day. Like this:
tests/
├── unit
│   └── test_abc.py
├── integration
│   └── test_abc_integration.py
Is it annoying? A little. How long does the fix take? 5 whole seconds.
P.S. You might have to remove __pycache__ directories or your .pyc files for the first run after you get the error you've posted about (if you don't, you'll just get the same error again even if you rename).
P.P.S. You can stop the CPython interpreter (and most others) from writing out __pycache__ and .pyc files by calling python -B -m pytest .... The -B option makes the interpreter not save the bytecode to your filesystem. This results in some performance penalty whenever you run your test suite, but the penalty is usually very small (mileage may vary). I typically use this option because I don't like the clutter in my repositories and the performance loss is typically negligible.

Can a Python script in a (sub)module import from upstream in its directory hierarchy?

I realize there are a slew of posts on SO related to Python and imports, but it seems like a fair number of these posts are asking about import rules/procedures with respect to creating an actual Python package (vs just a project with multiple directories and python files). I am very new to Python and just need some more basic clarification on what is and is not possible with regard to access/importing within the context of multiple py files in a project directory.
Let's say you have the following project directory (to be clear, this is not a package that is somewhere on sys.path, but say, on your Desktop):
myProject/
├── __init__.py
├── scriptA.py
└── subfolder
    ├── __init__.py
    ├── scriptB.py
    └── subsubfolder
        ├── __init__.py
        ├── scriptC.py
        └── foo.py
Am I correct in understanding that, if scriptC.py is run directly via $ python scriptC.py from within the subsubfolder directory, the only way it can import and use methods or classes from scriptB.py is if I add the parent directory (the path to scriptB.py) to the Python path at runtime via sys.path?
It is possible, however, for scriptC.py to import foo.py, or for scriptB.py to import scriptC.py or foo.py, without dealing with sys.path, correct? Adjacent .py files and .py files in subdirectories are accessible just by using relative import paths; you just can't import Python scripts that live in parent or sibling directories (without using sys.path)?
What's Possible
Anything.
No, really. See the imp module and the imputil module -- take a look at how the zipimport module is written if you want some inspiration.
If you can get a string with your module's code in a variable, you can get a module into sys.modules using the above, and perhaps hack around with its contents using the ast module on the way.
A custom import hook that looks in parent directories? Well within the range of possibilities.
What's Best Practice
What you're proposing isn't actually good practice. The best-practice approach looks more like the following:
myProject/
├── setup.py
└── src/
    ├── moduleA.py
    └── submodule/
        ├── __init__.py
        ├── moduleB.py
        └── subsubmodule/
            ├── __init__.py
            └── moduleC.py
Here, the top of your project is always in myProject/src. If you use setup.py to configure moduleA:main, submodule.moduleB:main and submodule.subsubmodule.moduleC:main as entry points (perhaps named scriptA, scriptB and scriptC), then the functions named main in each of those modules would be invoked when the user ran the (automatically generated by setuptools) scripts so named.
With this layout (and appropriate setuptools use), your moduleC.py can absolutely import moduleA, or import submodule.moduleB.
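A sketch of what those entry points could look like in setup.py, assuming each module defines a main() function (treat this as illustrative rather than a drop-in file):
from setuptools import setup, find_packages

setup(
    name="myProject",                     # illustrative name
    version="0.1",
    package_dir={"": "src"},              # the code lives under src/
    py_modules=["moduleA"],               # the top-level module under src/
    packages=find_packages(where="src"),  # finds submodule and submodule.subsubmodule
    entry_points={
        "console_scripts": [
            "scriptA = moduleA:main",
            "scriptB = submodule.moduleB:main",
            "scriptC = submodule.subsubmodule.moduleC:main",
        ],
    },
)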
Another approach, which doesn't involve entry points, is to invoke the code in your moduleC.py (while keeping the module's intended hierarchy intact, and assuming you're in a virtualenv where python setup.py develop has been run) like so:
python -m submodule.subsubmodule.moduleC

Where should I put tests when packaging python modules?

I have a module that sits in a namespace. Should the tests, and the data the tests rely on, go in the namespace or at the top level where setup.py sits?
./company/__init__.py
./company/namespace/__init__.py
./company/namespace/useful.py
./company/namespace/test_useful.py
./company/namespace/test_data/useful_data.xml
./setup.py
or
./company/__init__.py
./company/namespace/__init__.py
./company/namespace/useful.py
./test_useful.py
./test_data/useful_data.xml
./setup.py
Does the question amount to whether tests should be installed or not?
The Sample Project stores the tests outside the module.
The directory structure looks like this:
├── data
│   └── data_file
├── MANIFEST.in
├── README.rst
├── sample
│   ├── __init__.py
│   └── package_data.dat
├── setup.cfg
├── setup.py
└── tests
    ├── __init__.py
    └── test_simple.py
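With the tests kept outside the package like this, they normally aren't installed; the setup.py simply excludes them during package discovery, roughly like so (a sketch, not the literal sample project file):
from setuptools import setup, find_packages

setup(
    name="sample",      # illustrative name and version
    version="1.0",
    packages=find_packages(exclude=["tests", "tests.*"]),  # don't install the tests
)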
Related: The Packaging Guide: https://packaging.python.org/en/latest/
Hint: Don't follow "The Hitchhiker's Guide to Packaging". It has not been updated since 2010!
(Do not confuse the two pages. "The Hitchhiker's Guide to Python" is a very solid book.)
You should put your test module inside the module it tests according to The Hitchhiker's Guide to Packaging.
Here is their example:
TowelStuff/
    bin/
    CHANGES.txt
    docs/
    LICENSE.txt
    MANIFEST.in
    README.txt
    setup.py
    towelstuff/
        __init__.py
        location.py
        utils.py
        test/
            __init__.py
            test_location.py
            test_utils.py
This way your module will be distributed with its tests, and users can use them to verify that it works with their setup.
See http://the-hitchhikers-guide-to-packaging.readthedocs.org/en/latest/creation.html.
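Since the test package ships with the code in this layout, users can point the test runner at the installed copy; assuming the tests are written with unittest, something like this would do:
python -m unittest towelstuff.test.test_location towelstuff.test.test_utils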
I personally create a single tests package as a subpackage of the main package for a few reasons:
If tests sits in parallel with the root package, there's an off chance that you, or a user, may misconfigure setup.py and accidentally expose a global package named tests, which will cause a great deal of confusion and headache until you realize what has happened. Putting it in the main package solves this, as it's now under a (hopefully) globally unique namespace.
I don't like scattering test modules in amongst the production code, because test runners then have to search through the production code. This is probably not a problem for most. But if you happen to be a hardware test engineer, you probably use the word 'test' a lot in your production code and don't want the unit test runner to pick that stuff up. It's much easier if all the tests are in one place, separate from the production code.
I can further subdivide my tests folder into the types of tests, such as unit, functional and integration. My functional tests tend to have dependencies on weird proprietary hardware or data, or are slow. So it's easy for me to continuously run just the fast unit test folder as I develop.
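In practice that just means pointing the test runner at a sub-folder; with pytest, for example (the path here is illustrative):
pytest company/namespace/tests/unit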
It can sometimes be convenient to have the tests live inside the same package hierarchy as what they are testing.
Overall though, I think it's important to think for yourself about what's best for your particular problem domain after taking everyone's advice into account. 'Best practices' are great starting points, not end points, for developing a process.
